Setting Up Gemini

Google's Gemini models are a strong option for tasks that need high accuracy, complex reasoning, or large context windows. Quincy supports Gemini as a first-class provider alongside local models and Anthropic.

Get an API Key

  1. Go to aistudio.google.com
  2. Sign in with your Google account
  3. Click Get API key and create a new key
  4. Copy the key — you'll need it in the next step
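Before pasting the key into Quincy, you can sanity-check it against Google's public models-list endpoint, which returns HTTP 200 for a valid key. This is a standalone sketch, not part of Quincy; the endpoint is the Generative Language API that Gemini keys authenticate against.

```python
import urllib.error
import urllib.request

# Public Generative Language API base used by Gemini API keys.
BASE = "https://generativelanguage.googleapis.com/v1beta"

def models_url(key: str) -> str:
    """Build the models-list URL for a given API key."""
    return f"{BASE}/models?key={key}"

def check_gemini_key(key: str) -> bool:
    """Return True if the key can list models, i.e. it is valid."""
    try:
        with urllib.request.urlopen(models_url(key), timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False  # a 400/403 response means the key was rejected
```

A quick `check_gemini_key("AIza...")` from a Python shell tells you whether the key works before you hand it to Quincy.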

Add Gemini to Quincy

If this is your first time running Quincy, the onboarding wizard will offer Gemini as a provider option. Select it and paste your API key when prompted.

If Quincy is already set up, you can add Gemini as an additional provider during any conversation — just ask:

> Add Gemini as a provider

Quincy will prompt you for your API key interactively.

Where Is the Key Stored?

Your API key is stored in the macOS Keychain, where access controls isolate it from other applications. It is never written to a config file, environment variable, or any other plaintext location on disk.

When an agent needs to call the Gemini API, it retrieves the key from the Keychain at runtime. The key is passed directly to the HTTP client — the LLM itself never sees it. See Security & Trust for the full picture.
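The runtime lookup described above can be pictured with macOS's `security` CLI, which reads a generic password from the Keychain and prints only its value. The service name below is an assumption for illustration; Quincy's actual Keychain item names may differ.

```python
import subprocess

def keychain_cmd(service: str) -> list[str]:
    # `security find-generic-password -s <service> -w` prints only the
    # password of the matching Keychain item (standard macOS CLI flags).
    return ["security", "find-generic-password", "-s", service, "-w"]

def gemini_key(service: str = "Quincy-Gemini") -> str:
    # "Quincy-Gemini" is a hypothetical service name, not Quincy's real one.
    out = subprocess.run(
        keychain_cmd(service), capture_output=True, text=True, check=True
    )
    return out.stdout.strip()
```

Because the key is fetched on demand and handed straight to the HTTP layer, it never appears in prompts or model context.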

When to Use Gemini

Gemini is a good fit for:

  • Complex reasoning — Multi-step planning, analysis, and decision-making
  • Long context — Gemini's large context windows handle lengthy documents and conversation histories well
  • Accuracy-critical tasks — When a task demands reliable reasoning that local models may not deliver consistently
  • Orchestration — The main orchestrator benefits from a capable cloud model since it decides how to break down and delegate tasks

For routine, focused tasks (file reading, simple queries, structured extraction), local models are often fast enough and keep everything private. See Choosing Models for guidance on blending local and cloud models.
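The blend described above can be sketched as a simple routing rule: send heavy reasoning or long-context work to Gemini, and keep routine tasks local. The function, names, and threshold here are illustrative only, not Quincy's actual routing logic.

```python
def pick_provider(needs_complex_reasoning: bool, context_tokens: int) -> str:
    # Hypothetical routing sketch; the 32k threshold is an arbitrary example.
    if needs_complex_reasoning or context_tokens > 32_000:
        return "gemini"  # cloud model for planning and long documents
    return "local"       # routine, focused tasks stay fast and private
```

For example, a short file-reading task routes to `"local"`, while a multi-step planning task routes to `"gemini"`.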