Documentation Index

Fetch the complete documentation index at: https://docs.latitude.so/llms.txt

Use this file to discover all available pages before exploring further.

Overview

This guide shows you how to integrate Latitude Telemetry into an application that uses the OpenAI Agents SDK (@openai/agents for TypeScript, openai-agents for Python). The Agents SDK uses the OpenAI Responses API, which is not covered by the standard openai instrumentation. Latitude hooks into the Agents SDK’s native tracing system instead, so you get spans for every agent run, generation, response, function call, handoff, and guardrail — automatically.
You’ll keep calling the Agents SDK exactly as you do today. Telemetry observes agent runs, tool calls, and handoffs as they happen.

Requirements

  • A Latitude account and API key
  • A Latitude project slug
  • A project that uses the OpenAI Agents SDK
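The initialization code in the steps below reads the API key and project slug from environment variables. One way to set them in your shell (the values are placeholders):

```shell
# Credentials read by initLatitude via process.env (placeholder values):
export LATITUDE_API_KEY="your-latitude-api-key"
export LATITUDE_PROJECT_SLUG="your-project-slug"
```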

Steps

1

Install

npm install @latitude-data/telemetry @openai/agents
2

Initialize and use

import { initLatitude, capture } from "@latitude-data/telemetry"
import { Agent, run, tool } from "@openai/agents"
import { z } from "zod"

const latitude = initLatitude({
  apiKey: process.env.LATITUDE_API_KEY!,
  projectSlug: process.env.LATITUDE_PROJECT_SLUG!,
  instrumentations: ["openai-agents"],
})

await latitude.ready

const getWeather = tool({
  name: "get_weather",
  description: "Returns the current weather for a city.",
  parameters: z.object({ city: z.string() }),
  execute: async ({ city }) => `The weather in ${city} is sunny.`,
})

const agent = new Agent({
  name: "Weather agent",
  instructions: "Answer weather questions using get_weather.",
  tools: [getWeather],
  model: "gpt-4o-mini",
})

await capture("weather-agent-run", () =>
  run(agent, "What's the weather in Barcelona?"),
)

await latitude.shutdown()

What you get

Each agent run shows up as a trace with nested spans:
  • Agent spans — agent name, configured tools, handoff targets, output type
  • Generation / Response spans — model, input/output messages, token usage, response id
  • Function spans — tool calls with input arguments and output
  • Handoff spans — from_agent and to_agent
  • Guardrail spans — guardrail name and whether it triggered
  • MCP spans — listed tools per server
Wrap a request or job with capture() to attach a userId, sessionId, tags, or metadata to every span produced inside.
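This guide does not show the exact shape of the context argument, so the sketch below uses a local stand-in for capture() with illustrative field names (userId, sessionId, tags as a third argument are assumptions, not the documented API); the real function additionally records spans and attaches these fields to them:

```typescript
// Stand-in for @latitude-data/telemetry's capture(), for illustration only.
// The real function wraps the callback in a span and attaches the context
// fields to every span produced inside; this stub just runs the callback.
type CaptureContext = {
  userId?: string
  sessionId?: string
  tags?: string[]
  metadata?: Record<string, unknown>
}

async function capture<T>(
  name: string,
  fn: () => Promise<T>,
  ctx?: CaptureContext, // hypothetical third argument
): Promise<T> {
  return fn()
}

const result = await capture(
  "weather-agent-run",
  async () => "The weather in Barcelona is sunny.",
  { userId: "user-123", sessionId: "session-abc", tags: ["weather"] },
)
```

With the real library, every span emitted during the callback would carry the user, session, and tag context shown above.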

How it works

Latitude registers a TracingProcessor with the Agents SDK (via addTraceProcessor in TypeScript, the OpenInference instrumentor in Python) and translates the SDK’s spans into OpenTelemetry spans on Latitude’s tracer. No monkey-patching of the OpenAI client is required, so this works regardless of whether your agents use the Responses API (default) or the Chat Completions API.
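Conceptually, the processor receives each Agents SDK span and re-emits it as an OpenTelemetry span. The following is a simplified sketch of that translation step (the types and field names are illustrative, not Latitude's actual implementation):

```typescript
// Simplified sketch: map an Agents-SDK-style span record onto an
// OpenTelemetry-style span, preserving the parent link from the SDK trace.
type AgentsSdkSpan = {
  spanId: string
  parentId: string | null
  spanData: { type: "agent" | "generation" | "function" | "handoff"; name: string }
}

type OtelSpanSketch = {
  name: string
  parentSpanId?: string
  attributes: Record<string, string>
}

function translateSpan(span: AgentsSdkSpan): OtelSpanSketch {
  return {
    // e.g. "function:get_weather" — span type plus the SDK-reported name
    name: `${span.spanData.type}:${span.spanData.name}`,
    parentSpanId: span.parentId ?? undefined,
    attributes: { "agents.span.type": span.spanData.type },
  }
}

const otelSpan = translateSpan({
  spanId: "span_2",
  parentId: "span_1",
  spanData: { type: "function", name: "get_weather" },
})
```

Because the processor sits on the SDK's tracing hooks rather than on the HTTP client, the same path handles Responses API and Chat Completions API traffic.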

Seeing your traces

Once connected, traces appear automatically in Latitude:
  1. Open your project in the Latitude dashboard
  2. Each agent run shows the full hierarchy of agent → generation/response → tool calls and handoffs
  3. Token usage and latency are aggregated at every level