Overview
This guide shows you how to integrate Latitude Telemetry into an existing application that uses the official Gemini SDK (google-genai).
After completing these steps:
- Every Gemini call (e.g. generate_content) can be captured as a log in Latitude.
- Logs are grouped under a prompt, identified by a path, inside a Latitude project.
- You can inspect inputs/outputs, measure latency, and debug your Gemini-powered features from the Latitude dashboard.
You’ll keep calling Gemini exactly as you do today — Telemetry simply observes
and enriches those calls.
Requirements
Before you start, make sure you have:
- A Latitude account and API key
- A Latitude project ID
- A Node.js or Python-based project that uses the Gemini SDK
That’s it — prompts do not need to be created ahead of time.
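Both clients in this guide read their credentials from environment variables. As a minimal sketch (the variable names LATITUDE_API_KEY and GEMINI_API_KEY match the examples below; adjust them if yours differ), you can fail fast at startup when one is missing:

```typescript
// Sketch: fail fast if a required credential is missing.
// LATITUDE_API_KEY and GEMINI_API_KEY are the names used in this guide.
for (const name of ['LATITUDE_API_KEY', 'GEMINI_API_KEY']) {
  if (!process.env[name]) {
    throw new Error(`Missing required environment variable: ${name}`)
  }
}
```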
Steps
Install requirements
Add the Latitude Telemetry package to your project:

Node.js:

```bash
npm add @latitude-data/telemetry
```

Python:

```bash
pip install latitude-telemetry
```
Wrap your Gemini-powered feature
Since Gemini doesn’t have automatic instrumentation in TypeScript, you need to create spans manually to track your Gemini calls:

```typescript
import { LatitudeTelemetry } from '@latitude-data/telemetry'
import { GoogleGenAI } from '@google/genai'

const telemetry = new LatitudeTelemetry(process.env.LATITUDE_API_KEY)

async function generateSupportReply(input: string) {
  return telemetry.capture(
    {
      projectId: 123, // The ID of your project in Latitude
      path: 'generate-support-reply', // Add a path to identify this prompt in Latitude
    },
    async () => {
      const model = 'gemini-2.0-flash'

      // 1) Start the completion span
      const span = telemetry.span.completion({
        model,
        input: [{ role: 'user', content: input }],
      })

      try {
        // 2) Call Gemini as usual
        const google = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY })
        const response = await google.models.generateContent({
          model,
          contents: input,
        })
        const text = response.text

        // 3) End the span (attach output + useful metadata)
        span.end({
          output: [{ role: 'assistant', content: text }],
        })

        return text
      } catch (error) {
        // Make sure to close the span even on errors
        span.fail(error)
        throw error
      }
    },
  )
}
```
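The wrapped function behaves like any other async function. For example, from an async context (the sample input is illustrative):

```typescript
// telemetry.capture() returns whatever the callback returns,
// so the wrapped feature can be called like any other function.
const reply = await generateSupportReply('My order arrived damaged. What are my options?')
console.log(reply)
```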
Python has automatic instrumentation for Gemini. You can use the capture method as a decorator (recommended) or as a context manager.

Using decorator (recommended):

```python
import os

import google.generativeai as genai
from latitude_telemetry import Telemetry, Instrumentors, TelemetryOptions

telemetry = Telemetry(
    os.environ["LATITUDE_API_KEY"],
    TelemetryOptions(instrumentors=[Instrumentors.GoogleGenAI]),
)

@telemetry.capture(
    project_id=123,  # The ID of your project in Latitude
    path="generate-support-reply",  # Add a path to identify this prompt in Latitude
)
def generate_support_reply(input: str) -> str:
    model = genai.GenerativeModel("gemini-1.5-flash")
    response = model.generate_content(input)
    return response.text
```
Using context manager:

```python
import os

import google.generativeai as genai
from latitude_telemetry import Telemetry, Instrumentors, TelemetryOptions

telemetry = Telemetry(
    os.environ["LATITUDE_API_KEY"],
    TelemetryOptions(instrumentors=[Instrumentors.GoogleGenAI]),
)

def generate_support_reply(input: str) -> str:
    with telemetry.capture(
        project_id=123,  # The ID of your project in Latitude
        path="generate-support-reply",  # Add a path to identify this prompt in Latitude
    ):
        model = genai.GenerativeModel("gemini-1.5-flash")
        response = model.generate_content(input)
        return response.text
```
The path:
- Identifies the prompt in Latitude
- Can be new or existing
- Should not contain spaces or special characters (use letters, numbers, `-`, `_`, `/`, and `.`)
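For example, nested paths can group related prompts under one feature area. The names below are illustrative, and `run` stands in for the async callback from the TypeScript example above:

```typescript
// Illustrative path values that follow the rules above.
const run = async () => 'ok' // placeholder for your real callback

await telemetry.capture({ projectId: 123, path: 'generate-support-reply' }, run)
await telemetry.capture({ projectId: 123, path: 'support/replies/generate' }, run)
await telemetry.capture({ projectId: 123, path: 'support/replies.v2' }, run)
```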
Seeing your logs in Latitude
Once your feature is wrapped, logs will appear automatically.
- Open the prompt in your Latitude dashboard (identified by its path)
- Go to the Traces section
- Each feature invocation produces one trace, showing:
  - Input and output messages
  - Model and token usage
  - Latency and errors
Each Gemini call appears as a child span under the captured prompt execution, giving you a full, end-to-end view of what happened.
That’s it
No changes to your Gemini calls, no special return values, and no extra plumbing — just wrap the feature you want to observe.