Dictate. Capture. Compress. Inject.
Send context-rich prompts to any AI — without leaving your workflow.
Vapor sits between your brain and any LLM — capturing context, compressing prompts, and injecting them where you need them.
Hold Fn to speak — on-device transcription via Apple Speech. No cloud, no network latency, no privacy trade-offs.
40–60% token reduction. Strip filler, fuse concepts, preserve meaning. Choose Local LLM (free, on-device) or OpenRouter (cloud).
Send compressed prompts directly into ChatGPT, Claude, Gemini, Grok, Perplexity — no copy-paste needed.
Auto-detects screenshots on your Desktop. Add them to context with one keypress. Vapor sees what you see.
Scan live browser tabs for structured data — tables, JSON, XHR feeds, articles. Capture it all into context.
Captured pages, articles, and research — all in one sidebar. Search, filter, and insert context directly into your prompts.
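Screenshot pickup like the above could be approximated by matching macOS's default screenshot filenames — a hypothetical sketch only; the real app may watch for filesystem events rather than scanning by name:

```python
from pathlib import Path

# Hypothetical sketch: macOS names captures "Screenshot <date> at <time>.png"
# by default. Matching that prefix is an assumption about Vapor's approach,
# not its documented behavior.
def find_screenshots(desktop: Path) -> list[Path]:
    """Return Desktop screenshots, newest first."""
    shots = desktop.glob("Screenshot*.png")
    return sorted(shots, key=lambda p: p.stat().st_mtime, reverse=True)
```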
Hold the Fn key to speak, or type your prompt. Vapor captures your intent — voice or keyboard, your choice.
Hit ⌘↩. Vapor strips filler words and fuses concepts into dense, token-efficient form. 40–60% reduction, meaning preserved.
Hit ⌘⇧P. Vapor injects the compressed prompt directly into your AI chat tab. Auto-submit optional.
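The compression step can be illustrated with a toy filler-stripper — a minimal sketch only; Vapor's actual pipeline also fuses concepts via a local or cloud LLM:

```python
# Toy illustration of filler stripping -- not Vapor's actual algorithm,
# which uses an LLM to fuse concepts as well as drop filler words.
FILLERS = {"um", "uh", "like", "basically", "actually", "just", "really"}

def compress(prompt: str) -> str:
    """Drop common filler words while preserving everything else."""
    kept = [w for w in prompt.split()
            if w.lower().strip(".,!?") not in FILLERS]
    return " ".join(kept)

spoken = "Um, so basically I just really want a short summary of this article"
dense = compress(spoken)
# -> "so I want a short summary of this article"
```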
Along the way, context flows in automatically:
Free. Private. On-device.
Cloud. Powerful. Configurable.
The Chrome extension links Vapor directly to your AI chat interfaces. Three steps to set up.
Load the extension — included in the DMG. Open chrome://extensions, enable Developer mode, click "Load unpacked", and select the extension folder.
Copy the auth token — open Vapor Settings → Browser → Authentication → Copy.
Paste into the extension — click the Vapor icon → Settings → Paste → Save. Connected!
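Under the hood, the pairing presumably boils down to the extension sending that token alongside each message to a local Vapor endpoint. A hypothetical sketch — the port, path, and header name are all assumptions for illustration, not Vapor's documented protocol:

```python
import json
import urllib.request

# Hypothetical local endpoint -- port and path are illustrative assumptions.
VAPOR_URL = "http://127.0.0.1:8765/inject"

def build_inject_request(token: str, prompt: str) -> urllib.request.Request:
    """Package a compressed prompt with the pasted auth token."""
    body = json.dumps({"prompt": prompt}).encode("utf-8")
    return urllib.request.Request(
        VAPOR_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {token}",  # token copied from Vapor Settings
            "Content-Type": "application/json",
        },
        method="POST",
    )
```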
Supported AI sites:
Vapor is MIT-licensed. No telemetry, no tracking, no lock-in. Fork it, modify it, ship it.