# Getting Started
Sign up, install the SDK, and send your first traces in under 5 minutes. This guide walks you through going from zero to seeing your first LLM traces in the Overmind dashboard.
## 1. Sign Up or Use Open-Source

### Managed Service

Sign up and onboard at console.overmindlab.ai.

### Self-hosted Open-Source Edition

Follow the README for our open-source edition. It is a fully local app that provides all the key features of Overmind.
Either way, the app automatically creates your project and API key and generates a copy-paste integration snippet — no setup steps needed.
## 2. Install the SDK

Python:

```bash
pip install overmind
```

JavaScript/TypeScript:

```bash
# Pick the provider(s) you use
npm install @overmind-lab/trace-sdk openai             # OpenAI
npm install @overmind-lab/trace-sdk @anthropic-ai/sdk  # Anthropic
npm install @overmind-lab/trace-sdk @google/genai      # Google Gemini
```

## 3. Initialize Overmind SDK

Call overmind.init() once at startup, before any LLM calls. Your existing LLM code stays exactly as-is.
Before:

```python
import os
from openai import OpenAI

os.environ["OPENAI_API_KEY"] = 'sk-proj-'

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Explain quantum computing"}],
)
```

After:

```python
import os
import overmind
from openai import OpenAI

os.environ["OVERMIND_API_KEY"] = 'ovr_'
os.environ["OPENAI_API_KEY"] = 'sk-proj-'

overmind.init(service_name="my-service", environment="production")

# existing code unchanged
client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Explain quantum computing"}],
)
```

That’s it — add overmind.init() and set your API key. No proxy, no import changes, no key sharing.
Every LLM call is automatically traced (input, output, model, latency, token count, cost) and sent to Overmind. Your OpenAI and other provider keys stay in your environment.
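The captured fields can be pictured as a simple record. This is a hedged sketch with hypothetical field names; the SDK's actual trace schema may differ:

```python
# Illustration of the data the guide says each trace captures.
# Field names here are assumptions, not the SDK's wire format.
trace = {
    "input": [{"role": "user", "content": "Explain quantum computing"}],
    "output": "Quantum computing uses qubits to ...",
    "model": "gpt-4o-mini",
    "latency_ms": 812,                            # request latency
    "tokens": {"prompt": 9, "completion": 120},   # token counts
    "cost_usd": 0.00008,                          # derived from model pricing
}
print(sorted(trace.keys()))
```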
### Other LLM providers
Overmind supports OpenAI, Anthropic, Google Gemini, and Agno. Pass the providers you use:
```python
import overmind

# Explicit providers
overmind.init(service_name="my-service", providers=["openai"])
overmind.init(service_name="my-service", providers=["anthropic"])
overmind.init(service_name="my-service", providers=["google"])
overmind.init(service_name="my-service", providers=["agno"])

# Or auto-detect all installed providers
overmind.init(service_name="my-service")
```

In JavaScript/TypeScript, initialize the Overmind client and call initTracing() before any LLM calls, passing the imported provider module:
OpenAI:

```ts
import { OpenAI } from "openai";
import { OvermindClient } from "@overmind-lab/trace-sdk";

const overmindClient = new OvermindClient({
  apiKey: process.env.OVERMIND_API_KEY!,
  appName: "my app",
});

overmindClient.initTracing({
  enableBatching: false,
  enabledProviders: { openai: OpenAI },
});

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const response = await openai.chat.completions.create({
  model: "gpt-5-mini",
  messages: [{ role: "user", content: "Explain quantum computing" }],
});

await overmindClient.shutdown();
```

Anthropic:
```ts
import * as Anthropic from "@anthropic-ai/sdk";
import { OvermindClient } from "@overmind-lab/trace-sdk";

const overmindClient = new OvermindClient({
  apiKey: process.env.OVERMIND_API_KEY!,
  appName: "my app",
});

overmindClient.initTracing({
  enableBatching: false,
  enabledProviders: { anthropic: Anthropic },
});

const client = new Anthropic.default({ apiKey: process.env.ANTHROPIC_API_KEY });

const message = await client.messages.create({
  model: "claude-sonnet-4-20250514",
  max_tokens: 1024,
  messages: [{ role: "user", content: "Explain quantum computing" }],
});

console.log(message.content[0].text);

await overmindClient.shutdown();
```

Google Gemini:
```ts
import * as GoogleGenAI from "@google/genai";
import { OvermindClient } from "@overmind-lab/trace-sdk";

const overmindClient = new OvermindClient({
  apiKey: process.env.OVERMIND_API_KEY!,
  appName: "my app",
});

overmindClient.initTracing({
  enableBatching: false,
  enabledProviders: { googleGenAI: GoogleGenAI },
});

const client = new GoogleGenAI.GoogleGenAI({
  apiKey: process.env.GEMINI_API_KEY,
});

const response = await client.models.generateContent({
  model: "gemini-2.0-flash",
  contents: "Explain quantum computing",
});

console.log(response.text);

await overmindClient.shutdown();
```

That’s it — quick setup and your existing LLM code works as-is. No proxy, no key sharing.
Traces are forwarded to https://api.overmindlab.ai and appear in your Overmind dashboard.
## 4. Send 30+ Traces

Run your application normally. Once Overmind has collected at least 30 traces, the optimization engine kicks in:
- Prompt templates are automatically extracted from your traces, creating Agents
- The LLM judge evaluates each trace on quality, cost, and latency
- Prompt and model experimentation begins
You can watch traces appear in real time at console.overmindlab.ai under the Traces tab.
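One quick way to cross the 30-trace threshold is to replay a small batch of varied prompts through your already-instrumented client. A minimal sketch — the topics list and make_messages helper here are hypothetical, not SDK APIs:

```python
# Hypothetical warm-up batch: 30 varied inputs so the optimization
# engine has enough traces to extract a prompt template.
topics = ["quantum computing", "photosynthesis", "TCP handshakes"] * 10

def make_messages(topic: str) -> list[dict]:
    # Same payload shape as the chat.completions example in step 3
    return [{"role": "user", "content": f"Explain {topic} in two sentences."}]

payloads = [make_messages(t) for t in topics]
print(len(payloads))  # 30
```

Sending each payload through your instrumented client (e.g. client.chat.completions.create) produces one trace per call.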
## 5. Check the Home page

Go to the Overmind home page. You’ll see:
- Agents: We extract all prompt templates from your traces and automatically create Agents. Each agent has its own prompt template, evaluation criteria, and model backtesting.
- Suggestions: For each agent we automatically create Suggestions — output of our optimization engine, which suggests a better prompt or model based on each Agent’s use case.
## What Happens Next

Once you have traces flowing, Overmind automatically:
- Extracts your Agents — identifies the structure of your prompts and separates fixed templates from variable inputs, creating Agents
- Evaluates your Agents — from each Agent’s traces we create evaluation criteria and use an LLM judge to assess the Agent’s performance. We also monitor latency and cost.
- Experiments with prompts — generates candidate prompt variations and tests them against your historical inputs on quality, cost, and latency
- Experiments with models — replays your traces through different models to find cost/quality tradeoffs
- Shows recommendations — surfaces the best alternatives in your dashboard
- Collects your feedback — at any time you can adjust evaluation criteria and agent descriptions, and give feedback on our scoring and suggestions
You provide feedback (accept, reject, tweak) and the system learns. See How Optimization Works for the full details.
## Environment Variables

| Variable | Required | Description |
|---|---|---|
| OVERMIND_API_KEY | Yes | Your Overmind API key (shown after signup) |
| OVERMIND_SERVICE_NAME | No | Service name (overrides the service_name param default) |
| OVERMIND_ENVIRONMENT | No | Environment name (overrides the environment param default) |
| OPENAI_API_KEY | If using OpenAI | Your OpenAI API key |
| ANTHROPIC_API_KEY | If using Anthropic | Your Anthropic API key |
| GEMINI_API_KEY | If using Google | Your Google/Gemini API key |
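The precedence the table describes can be sketched as plain environment lookups. resolve_config below is a hypothetical helper, not part of the SDK:

```python
import os

def resolve_config() -> dict:
    # Env vars win; the fallbacks mirror the init() params from step 3.
    return {
        "api_key": os.environ.get("OVERMIND_API_KEY"),  # required
        "service_name": os.environ.get("OVERMIND_SERVICE_NAME", "my-service"),
        "environment": os.environ.get("OVERMIND_ENVIRONMENT", "production"),
    }

os.environ["OVERMIND_SERVICE_NAME"] = "checkout-service"
print(resolve_config()["service_name"])  # checkout-service
```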
## Next Steps

- How Optimization Works — understand the evaluation and recommendation loop
- Python SDK Reference — full Python client API, tracing, and advanced usage
- JavaScript SDK Reference — full JS/TS client API, initTracing options, and resource attributes