assisted-writing Made for Pulsar!

This package provides in-editor text completion, powered either by local Large Language Models (LLMs) through llama.cpp or text-generation webui, or by the cloud-based Google AI Studio Gemini API.

Author: keyasuda | Version: 0.0.11 | License: MIT | GitHub

Assisted Writing: Power Your Pulsar with AI Text Generation

This package brings in-editor text completion to Pulsar, powered by a local Large Language Model (LLM) served via llama.cpp, text-generation webui, or Ollama, or by the cloud-based Google AI Studio Gemini API.

(Screenshot: text completion in the Pulsar editor)

Key Features:

Getting Started:

  1. Choose your LLM backend:
    • For llama.cpp / text-generation webui: Ensure that either llama-server or text-generation webui (launched with the --api flag) is running.
    • For Ollama: Ensure the Ollama server is running and you have a model installed (e.g., ollama pull llama3).
    • For Google AI Studio Gemini: Set up your Google AI Studio API key (see instructions below).
  2. Configure Settings: Specify your API endpoint URI and other parameters within the package settings.
  3. Compose Your Prompt: Type your desired text into the editor.
  4. Position Cursor: Place the cursor at the point where you want the LLM to generate text.
  5. Trigger Completion: Invoke text completion by pressing CTRL+ALT+ENTER or by running "Assisted Writing: run" from the Command Palette.
  6. Abort Completion (Optional): Press ESC to stop the generation process if needed.
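To illustrate steps 2–5, the sketch below shows how a completion request to a local llama-server backend might be assembled. The endpoint path (/completion) and the request fields (prompt, n_predict) follow llama.cpp's HTTP server API; the base URI, function name, and token limit are illustrative assumptions, not the package's actual implementation.

```javascript
// Hedged sketch: build the HTTP request a completion trigger might send to a
// local llama-server. The base URI would come from the package settings; the
// /completion path and JSON fields follow llama.cpp's HTTP server API.
function buildCompletionRequest(baseUri, prompt, maxTokens) {
  return {
    url: `${baseUri}/completion`,
    options: {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      // prompt: the editor text up to the cursor position
      // n_predict: upper bound on the number of tokens to generate
      // stream: false keeps this sketch simple (the package could stream instead)
      body: JSON.stringify({ prompt, n_predict: maxTokens, stream: false }),
    },
  };
}

// Example: the text before the cursor becomes the prompt.
const req = buildCompletionRequest('http://localhost:8080', 'Once upon a time,', 64);
console.log(req.url); // http://localhost:8080/completion
```

With streaming enabled instead, the editor could insert tokens as they arrive and honor the ESC abort (step 6) by cancelling the in-flight request.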

Settings: