Latitude Telemetry instruments your AI application and sends traces to Latitude. Built entirely on OpenTelemetry, it works alongside your existing observability stack (Datadog, Sentry, Jaeger, etc.) without conflicts or vendor lock-in. Once connected, every LLM execution becomes a trace in Latitude that you can inspect in the Traces view, enrich with scores and annotations, and evaluate with Evaluations.

Agentic installation

The fastest way to add Latitude is to let your coding agent (Claude Code, Cursor, Windsurf, etc.) install it for you. The Latitude skill guides the agent through codebase discovery (existing OpenTelemetry, conflicting LLM-observability vendors, which LLM SDKs are in use, where LLM calls actually happen), picks the right install path, places initialization correctly, and verifies that traces land in your project.

Ask your coding agent

Paste this prompt into your agent:
Install the Latitude AI skill from github.com/latitude-dev/skills and use it to add tracing to this application with Latitude following best practices.
The agent will fetch the skill, read your codebase, ask only the questions it can’t answer from the code, and produce a working install.

Install the skill manually

If you prefer to install the skill ahead of time (no global setup needed — npx runs it directly):
npx skills add latitude-dev/skills --skill "latitude-telemetry"
Then in your agent:
Add tracing to this application with Latitude following best practices.
The skill covers TypeScript and Python, the providers and frameworks listed in Supported Integrations, and audits existing OpenTelemetry setups for compatibility.

Manual installation

One SDK bootstrap class sets up everything: auto-instrumentation, the Latitude exporter, and async context propagation:
npm install @latitude-data/telemetry
import { Latitude } from "@latitude-data/telemetry"
import OpenAI from "openai"

const latitude = new Latitude({
  apiKey: process.env.LATITUDE_API_KEY!,
  projectSlug: process.env.LATITUDE_PROJECT_SLUG!,
  instrumentations: { openai: OpenAI },
})

await latitude.ready

const openai = new OpenAI()
const response = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello" }],
})

await latitude.shutdown()
That’s it. Your LLM calls now appear as traces in Latitude.
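
In a long-running service you typically call shutdown() once on process exit rather than after every request, so any buffered spans are flushed before the process dies. A minimal sketch for a Node server (the signal wiring below is an assumption about your runtime, not part of the SDK):

const flushAndExit = async () => {
  await latitude.shutdown() // flush any pending spans to Latitude
  process.exit(0)
}

process.on("SIGTERM", flushAndExit)
process.on("SIGINT", flushAndExit)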

Adding Context with capture()

Auto-instrumentation traces LLM calls without any extra code. Use capture() when you want to attach business context such as user IDs, session IDs, tags, or metadata to group and filter traces in Latitude.
import { Latitude, capture } from "@latitude-data/telemetry"
import OpenAI from "openai"

const latitude = new Latitude({
  apiKey: process.env.LATITUDE_API_KEY!,
  projectSlug: process.env.LATITUDE_PROJECT_SLUG!,
  instrumentations: { openai: OpenAI },
})

await latitude.ready

const openai = new OpenAI()

await capture(
  "handle-user-request",
  async () => {
    const response = await openai.chat.completions.create({
      model: "gpt-4o",
      messages: [{ role: "user", content: userMessage }],
    })
    return response.choices[0].message.content
  },
  {
    userId: "user_123",
    sessionId: "session_abc",
    tags: ["production", "v2-agent"],
    metadata: { requestId: "req-xyz" },
  },
)

await latitude.shutdown()
capture() does not create spans. It only attaches context to spans created by auto-instrumentation. Wrap the request or agent entrypoint once; you don’t need to wrap every internal step.
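
For example, in an HTTP service you would wrap each request handler once. The Express route and runAgent() helper below are hypothetical stand-ins for your own entrypoint, not part of the SDK; this sketch assumes the Latitude bootstrap from Manual installation already ran at startup:

import express from "express"
import OpenAI from "openai"
import { capture } from "@latitude-data/telemetry"

const openai = new OpenAI()
const app = express()
app.use(express.json())

// stands in for your existing agent logic; the LLM call inside is auto-instrumented
const runAgent = async (message: string): Promise<string> => {
  const response = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: message }],
  })
  return response.choices[0].message.content ?? ""
}

app.post("/chat", async (req, res) => {
  // one capture() per request: every span created inside runAgent()
  // inherits this user and session context
  const reply = await capture("chat-request", () => runAgent(req.body.message), {
    userId: req.body.userId,
    sessionId: req.body.sessionId,
  })
  res.json({ reply })
})

app.listen(3000)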

Streaming

When streaming responses, consume the stream inside the capture() callback so the span duration covers the full operation and child spans nest correctly:
// inside a request handler: `openai`, `input`, and `res` are assumed in scope
await capture("stream-reply", async () => {
  const stream = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: input }],
    stream: true,
  })

  for await (const chunk of stream) {
    const content = chunk.choices[0]?.delta?.content
    if (content) res.write(content)
  }
  res.end()
})
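
If you also need the complete reply after the stream ends, accumulate the chunks inside the callback and return the joined text. This sketch assumes capture() resolves with the callback's return value, which is worth verifying against the SDK reference:

const fullReply = await capture("stream-reply", async () => {
  const stream = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: input }],
    stream: true,
  })

  const parts: string[] = []
  for await (const chunk of stream) {
    const content = chunk.choices[0]?.delta?.content
    if (content) {
      parts.push(content)
      res.write(content)
    }
  }
  res.end()

  return parts.join("") // the complete assistant reply
})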

How It Fits Into Your Stack

Latitude Telemetry is built on OpenTelemetry standards:
  1. Auto-instrumentation patches your LLM SDK (OpenAI, Anthropic, etc.) to emit spans for every call.
  2. LatitudeSpanProcessor filters for LLM-relevant spans (gen_ai.*, ai.*, openinference.* attributes) and exports them to Latitude via OTLP.
  3. capture() uses OpenTelemetry’s native context.with() to attach Latitude-specific attributes (user, session, tags) to spans within its scope.
If you already run Sentry or another OpenTelemetry-compatible SDK, initialize it first and construct Latitude second so Latitude can attach to the existing provider when possible. You can also add LatitudeSpanProcessor explicitly alongside existing processors. See the TypeScript SDK or Python SDK reference for advanced setup.
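
For example, with Sentry the ordering looks like this. A sketch of the initialization rule above, with Sentry's usual setup otherwise unchanged:

import * as Sentry from "@sentry/node"
import { Latitude } from "@latitude-data/telemetry"
import OpenAI from "openai"

// 1. The existing observability SDK initializes first and owns the provider.
Sentry.init({ dsn: process.env.SENTRY_DSN })

// 2. Latitude constructs second and attaches to the existing provider when possible.
const latitude = new Latitude({
  apiKey: process.env.LATITUDE_API_KEY!,
  projectSlug: process.env.LATITUDE_PROJECT_SLUG!,
  instrumentations: { openai: OpenAI },
})

await latitude.ready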

Supported Integrations

Providers

| Provider | Instrumentation | Package (TS) | Package (Python) |
| --- | --- | --- | --- |
| OpenAI | "openai" | openai | openai |
| Anthropic | "anthropic" | @anthropic-ai/sdk | anthropic |
| Amazon Bedrock | "bedrock" | @aws-sdk/client-bedrock-runtime | boto3 |
| Cohere | "cohere" | cohere-ai | cohere |
| Together AI | "togetherai" | together-ai | together |
| Vertex AI | "vertexai" | @google-cloud/vertexai | google-cloud-aiplatform |
| Google AI Platform | "aiplatform" | @google-cloud/aiplatform | google-cloud-aiplatform |
| Azure OpenAI | "openai" | openai | openai |
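
The quoted value in the Instrumentation column is the key you pass in the instrumentations map. A sketch instrumenting two providers at once (the anthropic entry is an assumption extrapolated from the table; both SDKs must be installed):

import { Latitude } from "@latitude-data/telemetry"
import OpenAI from "openai"
import Anthropic from "@anthropic-ai/sdk"

const latitude = new Latitude({
  apiKey: process.env.LATITUDE_API_KEY!,
  projectSlug: process.env.LATITUDE_PROJECT_SLUG!,
  // keys match the Instrumentation column above
  instrumentations: { openai: OpenAI, anthropic: Anthropic },
})

await latitude.ready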

Frameworks

| Framework | Instrumentation | Package (TS) | Package (Python) |
| --- | --- | --- | --- |
| Vercel AI SDK | - | ai | - |
| OpenAI Agents SDK | "openai-agents" | @openai/agents | openai-agents |
| LangChain | "langchain" | langchain | langchain-core |
| LlamaIndex | "llamaindex" | llamaindex | llama-index |
| Mastra | - | @mastra/core | - |

Next Steps