
Integrations

Overmind works with popular LLM frameworks and providers. The SDK automatically instruments supported providers and frameworks.

Provider       | Status    | providers value
OpenAI         | Supported | "openai"
Anthropic      | Supported | "anthropic"
Google Gemini  | Supported | "google"
Agno           | Supported | "agno"

Call overmind_sdk.init() once at startup with the providers you use. Your existing LLM client code stays unchanged — no import swaps, no proxy routing.

from overmind_sdk import init
init(service_name="my-service", providers=["openai", "anthropic"])

Pass an empty providers list (or omit it) to auto-detect all installed providers.

Provider       | Status    | enabledProviders key     | Package
OpenAI         | Supported | openai: OpenAI           | openai
Anthropic      | Supported | anthropic: Anthropic     | @anthropic-ai/sdk
Google Gemini  | Supported | googleGenAI: GoogleGenAI | @google/genai

Pass the imported provider module to initTracing({ enabledProviders: { ... } }). Your existing LLM code works unchanged.


Framework         | Status      | Notes
Agno              | Supported   | Traces agent runs, tool calls, and reasoning steps
OpenAI Agents SDK | Supported   | Traces agent tool calls and reasoning steps
LangChain         | Coming soon | Protocol-level support for chains and agents
CrewAI            | Coming soon | Multi-agent workflow tracing

The Python SDK uses OpenTelemetry auto-instrumentation. Call init() once at startup and every subsequent LLM call from your existing client is automatically traced.

  • OpenAI — captures all chat completion, response, and embedding calls
  • Anthropic — captures all message calls
  • Google Gemini — captures all generate_content calls
  • Agno — captures agent runs, tool calls, and model interactions
from overmind_sdk import init
init(service_name="my-service")  # instruments all installed providers

The JS SDK instruments LLM calls via initTracing(). Call it once before any LLM calls and all subsequent provider calls are traced automatically.

  • OpenAI — captures chat completions, completions, and image generation calls
  • Anthropic — captures all message calls
  • Google Gemini — captures all generateContent and streaming calls

If you’re not using one of the SDKs, you can send traces directly to the OTLP HTTP endpoint:

POST https://api.overmindlab.ai/api/v1/traces
Header: X-API-Token: ovr_your_token_here
Content-Type: application/x-protobuf

The endpoint accepts standard OpenTelemetry OTLP trace data. Any OpenTelemetry SDK (Go, Java, Node.js, etc.) can export to this endpoint.
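As a sketch of the request shape, the POST above can be built with Python's standard library. The helper name is illustrative, and it assumes you already have an OTLP-serialized trace payload (protobuf bytes); in practice you would let an OpenTelemetry exporter produce and send that payload for you.

```python
import urllib.request

OVERMIND_TRACES_URL = "https://api.overmindlab.ai/api/v1/traces"


def build_trace_request(otlp_payload: bytes, api_token: str) -> urllib.request.Request:
    """Build the OTLP/HTTP POST request for the Overmind trace endpoint.

    otlp_payload must be an OTLP-serialized ExportTraceServiceRequest
    protobuf, typically produced by an OpenTelemetry SDK exporter.
    """
    return urllib.request.Request(
        url=OVERMIND_TRACES_URL,
        data=otlp_payload,
        method="POST",
        headers={
            "X-API-Token": api_token,
            "Content-Type": "application/x-protobuf",
        },
    )


# To actually send:
# urllib.request.urlopen(build_trace_request(payload, "ovr_your_token_here"))
```

In a real service you would instead point your OpenTelemetry SDK's OTLP/HTTP span exporter at this endpoint and attach the X-API-Token header via the exporter's headers option, rather than posting by hand.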


We’re actively adding support for more providers and frameworks. If you have a specific integration request, reach out to support@overmindlab.ai.