SDK Reference - Python
Full reference for the Overmind Python SDK — initialize once and every LLM call is automatically traced. Includes provider setup, custom spans, user tagging, and exception capture.
Installation
```shell
pip install overmind
```
Install alongside your LLM provider:
```shell
pip install overmind openai        # OpenAI
pip install overmind anthropic     # Anthropic
pip install overmind google-genai  # Google Gemini
pip install overmind agno          # Agno
```
overmind.init()
Initialize the SDK once at application startup, before any LLM calls. After calling `init()`, your existing LLM client code is automatically instrumented — no import changes, no proxy.
```python
import overmind

overmind.init(
    overmind_api_key="ovr_...",
    service_name="my-service",
    environment="production",
    providers=["openai"],
)
```
Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| `overmind_api_key` | `str \| None` | `None` | Your Overmind API key. Falls back to the `OVERMIND_API_KEY` env var. |
| `service_name` | `str` | `"unknown-service"` | Name of your service, shown in the dashboard. Also reads the `OVERMIND_SERVICE_NAME` env var. |
| `environment` | `str` | `"development"` | Deployment environment (`"production"`, `"staging"`, etc.). Also reads the `OVERMIND_ENVIRONMENT` env var. |
| `providers` | `list[str]` | `[]` | Which LLM providers to instrument. Supported: `"openai"`, `"anthropic"`, `"google"`, `"agno"`. Pass an empty list to auto-detect all installed providers. |
| `overmind_base_url` | `str \| None` | `None` | Override the Overmind API endpoint. Falls back to the `OVERMIND_API_URL` env var, then `https://api.overmindlab.ai`. |
Environment Variables
| Variable | Description |
|---|---|
| `OVERMIND_API_KEY` | Your Overmind API key |
| `OVERMIND_SERVICE_NAME` | Service name (overridden by the `service_name` param) |
| `OVERMIND_ENVIRONMENT` | Environment name (overridden by the `environment` param) |
| `OVERMIND_API_URL` | Custom API endpoint URL |
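With all four variables set, `overmind.init()` can be called with no arguments. A minimal shell sketch — the key is a placeholder and the service/environment values are only examples:

```shell
export OVERMIND_API_KEY="ovr_..."          # placeholder — use your real key
export OVERMIND_SERVICE_NAME="my-service"
export OVERMIND_ENVIRONMENT="staging"
```

Explicit parameters passed to `init()` take precedence over these variables.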
Provider Examples
OpenAI
```python
import overmind
from openai import OpenAI

overmind.init(service_name="my-service", providers=["openai"])

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What is quantum computing?"}],
)
print(response.choices[0].message.content)
```
Anthropic
```python
import overmind
import anthropic

overmind.init(service_name="my-service", providers=["anthropic"])

client = anthropic.Anthropic()
message = client.messages.create(
    model="claude-haiku-4-5",
    max_tokens=1024,
    messages=[{"role": "user", "content": "What is quantum computing?"}],
)
print(message.content[0].text)
```
Google Gemini
```python
import overmind
from google import genai

overmind.init(service_name="my-service", providers=["google"])

client = genai.Client()
response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents="What is quantum computing?",
)
print(response.text)
```
Agno
```python
import overmind
from agno.agent import Agent
from agno.models.openai import OpenAIChat

overmind.init(service_name="my-service", providers=["agno"])

agent = Agent(model=OpenAIChat(id="gpt-4o-mini"), markdown=True)
agent.print_response("Write a haiku about the ocean.")
```
Auto-detect all installed providers
```python
import overmind

# An empty providers list auto-instruments every supported package that is installed
overmind.init(service_name="my-service")
```
overmind.get_tracer()
Get the OpenTelemetry `Tracer` instance for creating custom spans around arbitrary code blocks.
```python
tracer = overmind.get_tracer()

with tracer.start_as_current_span("process-document") as span:
    span.set_attribute("document.id", doc_id)
    result = process(doc)
```
Raises: `RuntimeError` if `overmind.init()` has not been called.
Returns: An `opentelemetry.trace.Tracer` instance.
Custom span example
```python
import overmind
from openai import OpenAI

overmind.init(service_name="pipeline")

client = OpenAI()
tracer = overmind.get_tracer()

def summarise_document(doc_id: str, text: str) -> str:
    with tracer.start_as_current_span("summarise") as span:
        span.set_attribute("doc.id", doc_id)
        span.set_attribute("doc.length", len(text))

        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": "Summarise the following document concisely."},
                {"role": "user", "content": text},
            ],
        )
        return response.choices[0].message.content
```
overmind.set_user()
Associate the current trace with a user. Call this once per request — typically in middleware — so all LLM calls made during that request are tagged with the user’s identity.
```python
overmind.set_user(user_id="user-123", email="alice@example.com", username="alice")
```
Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| `user_id` | `str` | Yes | Unique user identifier |
| `email` | `str \| None` | No | User’s email address |
| `username` | `str \| None` | No | User’s display name |
overmind.set_tag()
Add a custom key-value attribute to the current span. Use this to attach any metadata you want to appear alongside traces in the dashboard.
```python
overmind.set_tag("feature.flag", "new-checkout-flow")
overmind.set_tag("tenant.id", tenant_id)
overmind.set_tag("workflow", "order-processing")
```
Parameters
| Parameter | Type | Description |
|---|---|---|
| `key` | `str` | Attribute name |
| `value` | `str` | Attribute value |
overmind.capture_exception()
Record an exception on the current span and mark its status as ERROR. Use this in `except` blocks where you want the trace to reflect the failure.
```python
try:
    result = client.chat.completions.create(...)
except Exception as e:
    overmind.capture_exception(e)
    raise
```
Parameters
| Parameter | Type | Description |
|---|---|---|
| `exception` | `Exception` | The exception to record |
Full Example
```python
import os

import overmind
from openai import OpenAI

os.environ["OVERMIND_API_KEY"] = "ovr_your_key_here"

overmind.init(
    service_name="customer-support",
    environment="production",
    providers=["openai"],
)

client = OpenAI()

def handle_support_query(user_id: str, question: str) -> str:
    overmind.set_user(user_id=user_id)
    overmind.set_tag("workflow", "support")

    tracer = overmind.get_tracer()
    with tracer.start_as_current_span("handle-query"):
        try:
            response = client.chat.completions.create(
                model="gpt-4o-mini",
                messages=[
                    {"role": "system", "content": "You are a helpful customer support agent."},
                    {"role": "user", "content": question},
                ],
            )
            return response.choices[0].message.content
        except Exception as e:
            overmind.capture_exception(e)
            raise

answer = handle_support_query("user-123", "How do I reset my password?")
print(answer)
```
Every call produces a trace in the Overmind dashboard. After 30+ traces, the optimization engine analyses your prompts and suggests improvements.
- Call `init()` once: place it at the top of your entry point (`main.py`, `app.py`, etc.) before any LLM calls or framework setup.
- Use `service_name` meaningfully: if your app has different agents (e.g. support bot, summariser, code assistant), give each a distinct `service_name`. This helps Overmind extract cleaner templates and produce more targeted recommendations.
- Tag traces with context: use `set_user()` and `set_tag()` to add metadata that helps you filter and debug traces in the dashboard.
- Let it run: the more traces Overmind collects, the better its recommendations. Run your app normally and check the dashboard after a day or two for initial suggestions.