AI assistance built into your terminal. Ask questions, get help with commands, and use your terminal output as context — with any provider.
Connect to the AI provider you already use. Each provider uses its native API with streaming support.
Anthropic's Claude models via the Anthropic API. Supports the Opus, Sonnet, and Haiku model families.
OpenAI's GPT models via the OpenAI API. Supports GPT-4o, GPT-4, and GPT-3.5.
Google's Gemini models via the Gemini API. Supports Gemini Pro and Flash.
Mistral AI models via the Mistral API. Supports Mistral Large and Codestral.
xAI's Grok models via the xAI API.
Run models locally with Ollama. No API key needed — everything stays on your machine.
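"Native API" means each provider speaks its own wire protocol. Here is a minimal TypeScript sketch of two of those protocols, assuming only the providers' publicly documented endpoints; the helper names are illustrative, not the app's actual code:

```ts
// Ollama: local HTTP API, no key. Streams newline-delimited JSON objects.
async function streamOllama(prompt: string): Promise<void> {
  const res = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3", // any locally pulled model
      messages: [{ role: "user", content: prompt }],
      stream: true,
    }),
  });
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  let buf = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    buf += decoder.decode(value, { stream: true });
    let nl: number;
    while ((nl = buf.indexOf("\n")) >= 0) {
      const line = buf.slice(0, nl).trim();
      buf = buf.slice(nl + 1);
      // Each complete line is a JSON object carrying a partial message.
      if (line) process.stdout.write(JSON.parse(line).message?.content ?? "");
    }
  }
}

// OpenAI: hosted API, bearer key, streams server-sent events instead.
async function streamOpenAI(prompt: string, apiKey: string): Promise<Response> {
  return fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model: "gpt-4o",
      messages: [{ role: "user", content: prompt }],
      stream: true,
    }),
  });
}
```

Ollama streams newline-delimited JSON from localhost while OpenAI streams server-sent events; the other providers differ in the same way, which is why each gets its own native client rather than one shared adapter.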
Mind can see what's on your screen. Select terminal output and send it to the AI as context — error messages, logs, command output. Ask "what does this error mean?" and get an answer that references your actual terminal content.
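One way the selection could be packaged, as a sketch: `buildPrompt` and `Selection` are hypothetical names for illustration, not part of Mind's real API.

```ts
interface Selection {
  command?: string; // the command that produced the output, if known
  text: string;     // the selected lines from the scrollback
}

function buildPrompt(question: string, sel: Selection): string {
  const header = sel.command
    ? `Output of \`${sel.command}\`:`
    : "Terminal output:";
  // Fence the raw output so the model can tell context from question.
  return `${header}\n\`\`\`\n${sel.text}\n\`\`\`\n\n${question}`;
}

// "What does this error mean?" plus the selected trace becomes one message:
const msg = buildPrompt("What does this error mean?", {
  command: "npm run build",
  text: "TypeError: Cannot read properties of undefined (reading 'map')",
});
```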
Responses stream in token by token, just like in a chat interface. No waiting for the full response to generate before you can start reading. Cancel a response mid-stream if you've seen enough.
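Cancellation maps naturally onto the standard AbortController. A sketch assuming an OpenAI-style streaming endpoint; `render` and `streamWithCancel` are illustrative names:

```ts
const render = (s: string) => process.stdout.write(s);
const controller = new AbortController();

async function streamWithCancel(body: string, apiKey: string): Promise<void> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body,
    signal: controller.signal, // wired to the dialog's cancel action
  });
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  try {
    for (;;) {
      const { done, value } = await reader.read();
      if (done) break;
      render(decoder.decode(value, { stream: true })); // paint as tokens arrive
    }
  } catch (err) {
    if ((err as Error).name !== "AbortError") throw err; // cancel is not an error
  }
}

// Elsewhere, the cancel handler simply aborts the in-flight request:
// controller.abort();
```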
Switch between providers with a dropdown. Each provider's API key is stored separately. Start a conversation with Claude, switch to GPT-4 for a second opinion, try Ollama for a local answer — all in the same session.
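Behind the dropdown, something like a provider table is enough. The shape below is an assumption, not the app's real types; note that the Ollama entry needs no key at all:

```ts
interface Provider {
  name: string;
  endpoint: string;
  keyRef: string | null; // name the stored key is filed under; null for Ollama
}

const providers: Record<string, Provider> = {
  anthropic: { name: "Claude", endpoint: "https://api.anthropic.com/v1/messages", keyRef: "anthropic" },
  openai:    { name: "GPT",    endpoint: "https://api.openai.com/v1/chat/completions", keyRef: "openai" },
  ollama:    { name: "Ollama", endpoint: "http://localhost:11434/api/chat", keyRef: null },
};

// Switching providers mid-conversation just changes which entry the next
// request uses; the message history carries over unchanged.
function pick(id: keyof typeof providers): Provider {
  return providers[id];
}
```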
The AI dialog floats above your terminal so you can reference output while chatting. Resize it, move it, or dismiss it. Press Ctrl+Shift+A to toggle it open or closed.
Code blocks in AI responses are syntax-highlighted and easy to copy. Markdown formatting is rendered inline — headers, lists, bold, inline code — so responses are readable, not raw text.
When you launch Claude Code in a terminal pane, yaw detects it and offers to snap the AI dialog alongside it. Side-by-side AI coding with your terminal — no manual window arrangement.
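How the detection works isn't documented here; one plausible mechanism is inspecting the pane's foreground process. The sketch below assumes that approach and uses plain POSIX `ps`; only the fact that detection happens comes from this page.

```ts
import { execFile } from "node:child_process";

// Crude check: does the process command line for this pane mention "claude"?
function detectClaudeCode(panePid: number, onFound: () => void): void {
  execFile("ps", ["-o", "args=", "-p", String(panePid)], (err, stdout) => {
    if (!err && stdout.toLowerCase().includes("claude")) {
      onFound(); // e.g. offer to snap the AI dialog alongside the pane
    }
  });
}
```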
API keys are encrypted with AES-256-GCM and stored locally, just like connection credentials. Keys are never sent anywhere except to the provider's own API endpoint.
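For reference, AES-256-GCM at-rest encryption looks like this with Node's built-in crypto module. The cipher mode comes from this page; the blob layout and the 32-byte master key are assumptions for the sketch:

```ts
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// masterKey must be exactly 32 bytes (256 bits) for aes-256-gcm.
function encryptApiKey(plaintext: string, masterKey: Buffer): Buffer {
  const iv = randomBytes(12); // 96-bit nonce, standard for GCM
  const cipher = createCipheriv("aes-256-gcm", masterKey, iv);
  const ct = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  // Store nonce + auth tag + ciphertext together; GCM needs all three to decrypt.
  return Buffer.concat([iv, cipher.getAuthTag(), ct]);
}

function decryptApiKey(blob: Buffer, masterKey: Buffer): string {
  const iv = blob.subarray(0, 12);
  const tag = blob.subarray(12, 28);
  const ct = blob.subarray(28);
  const decipher = createDecipheriv("aes-256-gcm", masterKey, iv);
  decipher.setAuthTag(tag); // a tampered blob fails authentication here
  return Buffer.concat([decipher.update(ct), decipher.final()]).toString("utf8");
}
```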