Get Started

Satisfy the prerequisites, install the extension, and run Corvus in under five minutes.

Prerequisites

  • VS Code 1.85 or newer
  • Node.js and pnpm if you install from source
  • Ollama (recommended) for local chat — ollama.ai
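
You can confirm the prerequisites from a terminal (a quick sketch; only the tools you actually need must be present):

    code --version     # VS Code 1.85 or newer
    node --version     # source installs only
    pnpm --version     # source installs only
    ollama --version   # local chat only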

Install the extension

Choose one:

  • VS Code Marketplace — Search for “RavenKit” in the Extensions view and install (once the extension is published).
  • VSIX — Download the latest .vsix from GitHub Releases, then in VS Code: Extensions → ⋯ → Install from VSIX...
  • From source — Clone the repo, then from apps/ravenkit run pnpm install, pnpm compile, and pnpm run vsix:package. Install the generated ravenkit.vsix as above (full command sketch below).
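
A shell sketch of the from-source route. The repository URL here is an assumption; substitute the project's actual remote:

    # Hypothetical clone URL; replace with the real repository.
    git clone https://github.com/example/ravenkit.git
    cd ravenkit/apps/ravenkit
    pnpm install
    pnpm compile
    pnpm run vsix:package
    # Install the generated package without leaving the terminal:
    code --install-extension ravenkit.vsix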

Set up Ollama (recommended)

Corvus works best with a local model. To set one up:

  1. Install Ollama from ollama.ai.
  2. Pull a model: ollama pull qwen2.5-coder:7b or ollama pull llama3.2.
  3. Start the server: ollama serve (often already running; a sanity check follows this list).
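
Put together, with a sanity check against the local API (Ollama listens on port 11434 by default):

    ollama pull qwen2.5-coder:7b   # or: ollama pull llama3.2
    ollama serve                   # skip if the server is already running
    # Confirm the server is up and the model is available:
    curl http://localhost:11434/api/tags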

You can also use Anthropic (Claude) or OpenAI by setting the provider and API key in the extension's Settings.
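
If you prefer editing settings.json directly, the shape is roughly as follows. The keys here are hypothetical; check the extension's contributed settings for the exact names:

    // .vscode/settings.json (keys are assumptions; verify in the Settings UI)
    {
      "ravenkit.provider": "anthropic",   // assumed values: "ollama" | "anthropic" | "openai"
      "ravenkit.model": "<model-name>",   // model for the chosen provider
      "ravenkit.apiKey": "<your-api-key>" // prefer a secret store over plain settings
    }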

First run

  1. Open the Command Palette (Cmd+Shift+P / Ctrl+Shift+P).
  2. Run RavenKit: Open Chat Agent, or press Cmd+Shift+R / Ctrl+Shift+R (remappable; see the sketch after this list).
  3. Type a message; Corvus will use your workspace and tools to respond.
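
If the default shortcut clashes with another binding, you can remap it in keybindings.json. The command ID below is an assumption; copy the real one from the Keyboard Shortcuts editor:

    // keybindings.json (command ID is an assumption; verify before use)
    [
      {
        "key": "ctrl+alt+r",                 // pick any free chord
        "command": "ravenkit.openChatAgent"
      }
    ]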

Optional: run RavenKit: Select Project if you’re in a monorepo so the agent and System Lens use the right scope.

Next steps: View all commands · Configure provider and model