
SDK Reference - JavaScript

Full reference for the Overmind JavaScript/TypeScript SDK. Instrument your LLM calls with a single initTracing() call; every call is automatically traced and sent to Overmind for optimization.

JavaScript/TypeScript SDK for Overmind — automatic LLM observability powered by OpenTelemetry.

Instrument your LLM calls with a single initTracing() call. Traces are exported to the Overmind platform with zero changes to your existing AI code. Supports OpenAI, Anthropic, and Google Gemini.

Install the SDK along with the provider(s) you use:

# OpenAI
npm install @overmind-lab/trace-sdk openai
# Anthropic
npm install @overmind-lab/trace-sdk @anthropic-ai/sdk
# Google Gemini
npm install @overmind-lab/trace-sdk @google/genai
OpenAI:

import { OpenAI } from "openai";
import { OvermindClient } from "@overmind-lab/trace-sdk";

const overmindClient = new OvermindClient({
  apiKey: process.env.OVERMIND_API_KEY!,
  appName: "my app",
});

overmindClient.initTracing({
  enableBatching: false,
  enabledProviders: { openai: OpenAI },
});

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const response = await openai.chat.completions.create({
  model: "gpt-5-mini",
  messages: [{ role: "user", content: "Explain quantum computing" }],
});

await overmindClient.shutdown();
Anthropic:

import * as Anthropic from "@anthropic-ai/sdk";
import { OvermindClient } from "@overmind-lab/trace-sdk";

const overmindClient = new OvermindClient({
  apiKey: process.env.OVERMIND_API_KEY!,
  appName: "my app",
});

overmindClient.initTracing({
  enableBatching: false,
  enabledProviders: { anthropic: Anthropic },
});

const client = new Anthropic.default({ apiKey: process.env.ANTHROPIC_API_KEY });

const message = await client.messages.create({
  model: "claude-sonnet-4-20250514",
  max_tokens: 1024,
  messages: [{ role: "user", content: "Explain quantum computing" }],
});

console.log(message.content[0].text);

await overmindClient.shutdown();
Google Gemini:

import * as GoogleGenAI from "@google/genai";
import { OvermindClient } from "@overmind-lab/trace-sdk";

const overmindClient = new OvermindClient({
  apiKey: process.env.OVERMIND_API_KEY!,
  appName: "my app",
});

overmindClient.initTracing({
  enableBatching: false,
  enabledProviders: { googleGenAI: GoogleGenAI },
});

const client = new GoogleGenAI.GoogleGenAI({
  apiKey: process.env.GEMINI_API_KEY,
});

const response = await client.models.generateContent({
  model: "gemini-2.0-flash",
  contents: "Explain quantum computing",
});

console.log(response.text);

await overmindClient.shutdown();

Traces are sent automatically to https://api.overmindlab.ai and will appear in your Overmind dashboard.

You can enable multiple providers in a single initTracing() call:

import { OpenAI } from "openai";
import * as Anthropic from "@anthropic-ai/sdk";
import * as GoogleGenAI from "@google/genai";
import { OvermindClient } from "@overmind-lab/trace-sdk";

const overmindClient = new OvermindClient({
  apiKey: process.env.OVERMIND_API_KEY!,
  appName: "my app",
});

overmindClient.initTracing({
  enableBatching: true,
  enabledProviders: {
    openai: OpenAI,
    anthropic: Anthropic,
    googleGenAI: GoogleGenAI,
  },
});
OvermindClient options:

| Option | Type | Required | Description |
| --- | --- | --- | --- |
| apiKey | string | Yes | Your Overmind API key. Falls back to the OVERMIND_API_KEY env var. |
| appName | string | No | Name of your service, shown in the dashboard. Defaults to "overmind-js". |
| baseUrl | string | No | Override the Overmind ingest endpoint. Defaults to the OVERMIND_TRACES_URL env var or https://api.overmindlab.ai. |
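For example, baseUrl lets you direct traces at a different ingest endpoint. The URL below is a placeholder, not a real Overmind endpoint:

```typescript
import { OvermindClient } from "@overmind-lab/trace-sdk";

// Sketch: route trace export through a custom ingest endpoint.
// "https://traces.example.com" is a placeholder value.
const overmindClient = new OvermindClient({
  apiKey: process.env.OVERMIND_API_KEY!,
  appName: "my app",
  baseUrl: "https://traces.example.com",
});
```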
initTracing() options:

| Option | Type | Required | Description |
| --- | --- | --- | --- |
| enabledProviders | object | Yes | Pass one or more imported provider modules to instrument. See supported keys below. |
| enableBatching | boolean | Yes | true to batch spans before export (recommended for production), false to export immediately. |
| instrumentations | Instrumentation[] | No | Additional OpenTelemetry instrumentations to register. |
| spanProcessors | SpanProcessor[] | No | Additional span processors (e.g. custom exporters). |
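As a sketch of the spanProcessors option, the standard OpenTelemetry ConsoleSpanExporter can mirror every span to stdout alongside the built-in Overmind exporter, which is handy for local debugging (assumes @opentelemetry/sdk-trace-base is installed):

```typescript
import { OpenAI } from "openai";
import { ConsoleSpanExporter, SimpleSpanProcessor } from "@opentelemetry/sdk-trace-base";
import { OvermindClient } from "@overmind-lab/trace-sdk";

const overmindClient = new OvermindClient({
  apiKey: process.env.OVERMIND_API_KEY!,
  appName: "my app",
});

overmindClient.initTracing({
  enableBatching: false,
  enabledProviders: { openai: OpenAI },
  // Extra processors run in addition to the Overmind exporter;
  // this one prints each finished span to the console.
  spanProcessors: [new SimpleSpanProcessor(new ConsoleSpanExporter())],
});
```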
Supported provider keys:

| Key | Import | Provider |
| --- | --- | --- |
| openai | import { OpenAI } from "openai" | OpenAI |
| anthropic | import * as Anthropic from "@anthropic-ai/sdk" | Anthropic |
| googleGenAI | import * as GoogleGenAI from "@google/genai" | Google Gemini |
| Variable | Description |
| --- | --- |
| OVERMIND_API_KEY | Your Overmind API key |
| OVERMIND_TRACES_URL | Override the traces ingest base URL |
| DEPLOYMENT_ENVIRONMENT | Tag traces with an environment (e.g. production, staging). Defaults to development. |
| OPENAI_API_KEY | Your OpenAI API key |
| ANTHROPIC_API_KEY | Your Anthropic API key |
| GEMINI_API_KEY | Your Google Gemini API key |
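These variables are typically set in a .env file or in your deployment environment. For example (all values are placeholders):

```shell
# .env — example only; substitute your real keys
OVERMIND_API_KEY=your-overmind-api-key
DEPLOYMENT_ENVIRONMENT=production
OPENAI_API_KEY=your-openai-api-key
```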

For each enabled provider, the SDK automatically captures:

  • Prompts and completions (messages, contents, tool calls)
  • Model name, temperature, top-p, max tokens
  • Token usage (prompt, completion, total)
  • Latency per request
  • Errors and exceptions

All data is attached to OpenTelemetry spans and exported to Overmind.
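As an illustration of what that captured data can look like on a span, the object below uses attribute names from the OpenTelemetry GenAI semantic conventions. The exact attribute schema the SDK emits is not documented here, so treat these names and values as an assumption:

```typescript
// Illustration only: attribute names follow the OpenTelemetry GenAI semantic
// conventions and are an assumption, not the SDK's documented schema.
const exampleSpanAttributes = {
  "gen_ai.request.model": "gpt-5-mini",       // model name
  "gen_ai.usage.input_tokens": 12,            // prompt tokens
  "gen_ai.usage.output_tokens": 348,          // completion tokens
};

// Total token usage is the sum of input and output tokens.
const totalTokens =
  exampleSpanAttributes["gen_ai.usage.input_tokens"] +
  exampleSpanAttributes["gen_ai.usage.output_tokens"];
```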

Call shutdown() to flush any buffered spans and cleanly stop the OpenTelemetry SDK before your process exits. This is especially important when enableBatching: true is used, as buffered spans may not have been exported yet.

await overmindClient.shutdown();

A typical pattern for scripts or serverless handlers:

try {
  // ... your LLM calls
} finally {
  await overmindClient.shutdown();
}

For long-running servers, hook it into your process exit signal:

process.on("SIGTERM", async () => {
  await overmindClient.shutdown();
  process.exit(0);
});

Enable batching in production to reduce network overhead:

overmindClient.initTracing({
  enableBatching: true,
  enabledProviders: { openai: OpenAI, anthropic: Anthropic, googleGenAI: GoogleGenAI },
});

Use enableBatching: false during local development to see traces immediately.
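One way to get both behaviors from the same code is to key enableBatching off the runtime environment. NODE_ENV is a common Node.js convention used here for illustration; the SDK does not read it itself:

```typescript
import { OpenAI } from "openai";
import { OvermindClient } from "@overmind-lab/trace-sdk";

// Batch in production to cut network overhead; export immediately
// everywhere else so traces show up in the dashboard right away.
const isProduction = process.env.NODE_ENV === "production";

const overmindClient = new OvermindClient({
  apiKey: process.env.OVERMIND_API_KEY!,
  appName: "my app",
});

overmindClient.initTracing({
  enableBatching: isProduction,
  enabledProviders: { openai: OpenAI },
});
```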

Every trace is tagged with the following attributes automatically:

| Attribute | Value |
| --- | --- |
| service.name | Value of appName |
| service.version | SDK version |
| deployment.environment | DEPLOYMENT_ENVIRONMENT env var or "development" |
| overmind.sdk.name | overmind-js |
| overmind.sdk.version | SDK version |