Overview
This guide shows you how to integrate Latitude Telemetry into an existing application that uses the official Azure OpenAI SDK.
After completing these steps:
- Every Azure OpenAI call (e.g. chat.completions.create) can be captured as a log in Latitude.
- Logs are grouped under a prompt, identified by a path, inside a Latitude project.
- You can inspect inputs/outputs, measure latency, and debug Azure OpenAI-powered features from the Latitude dashboard.
You’ll keep calling Azure OpenAI exactly as you do today — Telemetry simply
observes and enriches those calls.
Requirements
Before you start, make sure you have:
- A Latitude account and API key
- A Latitude project ID
- A Node.js or Python-based project that uses the Azure OpenAI SDK
That’s it — prompts do not need to be created ahead of time.
Steps
Install requirements
Add the Latitude Telemetry package to your project:

Node.js:
npm add @latitude-data/telemetry

Python:
pip install latitude-telemetry
Wrap your Azure OpenAI-powered feature
Initialize Latitude Telemetry and wrap the code that calls Azure OpenAI using telemetry.capture.

TypeScript:

import { LatitudeTelemetry } from '@latitude-data/telemetry'
import OpenAI, { AzureOpenAI } from 'openai'

const telemetry = new LatitudeTelemetry(
  process.env.LATITUDE_API_KEY,
  { instrumentations: { openai: OpenAI } },
)

async function generateSupportReply(input: string) {
  return telemetry.capture(
    {
      projectId: 123, // The ID of your project in Latitude
      path: 'generate-support-reply', // Add a path to identify this prompt in Latitude
    },
    async () => {
      const client = new AzureOpenAI({
        endpoint: process.env.AZURE_OPENAI_ENDPOINT,
        apiKey: process.env.AZURE_OPENAI_API_KEY,
        apiVersion: '2024-02-01',
      })

      const completion = await client.chat.completions.create({
        model: 'gpt-4o',
        messages: [{ role: 'user', content: input }],
      })

      return completion.choices[0].message.content
    },
  )
}
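With that in place, you call the wrapped function like any other async function. A minimal usage sketch (the sample input is hypothetical; it assumes LATITUDE_API_KEY, AZURE_OPENAI_ENDPOINT, and AZURE_OPENAI_API_KEY are set in your environment):

// Hypothetical caller; telemetry.capture is transparent to call sites.
const reply = await generateSupportReply('My order arrived damaged. What are my options?')
console.log(reply) // This call now shows up as a log under generate-support-reply in Latitude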
In Python, you can use the capture method as a decorator (recommended) or as a context manager.

Using a decorator (recommended):

import os
from openai import AzureOpenAI
from latitude_telemetry import Telemetry, Instrumentors, TelemetryOptions

telemetry = Telemetry(
    os.environ["LATITUDE_API_KEY"],
    TelemetryOptions(instrumentors=[Instrumentors.OpenAI]),
)

@telemetry.capture(
    project_id=123,  # The ID of your project in Latitude
    path="generate-support-reply",  # Add a path to identify this prompt in Latitude
)
def generate_support_reply(input: str) -> str:
    client = AzureOpenAI(
        azure_endpoint="https://your-resource.openai.azure.com/",
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        api_version="2024-02-01",
    )
    completion = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": input}],
    )
    return completion.choices[0].message.content
Using a context manager:

import os
from openai import AzureOpenAI
from latitude_telemetry import Telemetry, Instrumentors, TelemetryOptions

telemetry = Telemetry(
    os.environ["LATITUDE_API_KEY"],
    TelemetryOptions(instrumentors=[Instrumentors.OpenAI]),
)

def generate_support_reply(input: str) -> str:
    with telemetry.capture(
        project_id=123,  # The ID of your project in Latitude
        path="generate-support-reply",  # Add a path to identify this prompt in Latitude
    ):
        client = AzureOpenAI(
            azure_endpoint="https://your-resource.openai.azure.com/",
            api_key=os.environ["AZURE_OPENAI_API_KEY"],
            api_version="2024-02-01",
        )
        completion = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": input}],
        )
        return completion.choices[0].message.content
The path:
- Identifies the prompt in Latitude
- Can be new or existing
- Should not contain spaces or special characters; use only letters, numbers, and - _ / . (see the examples below)
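For illustration, a few hypothetical path values that follow these rules (the names are made up):

// Valid paths
const validPaths = [
  'generate-support-reply',     // letters and hyphens
  'support/replies/escalation', // slashes are allowed, e.g. for grouping related prompts
  'emails.welcome_v2',          // dots and underscores also work
]
// Invalid (contains spaces): 'generate support reply'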
Seeing your logs in Latitude
Once your feature is wrapped, logs will appear automatically.
- Open the prompt in your Latitude dashboard (identified by its path)
- Go to the Traces section
- Each execution will show:
  - Input and output messages
  - Model and token usage
  - Latency and errors
- One trace per feature invocation
Each Azure OpenAI call appears as a child span under the captured prompt execution, giving you a full, end-to-end view of what happened.
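For the example above, a single invocation produces a trace shaped roughly like this (illustrative only; exact span names depend on the SDK):

generate-support-reply          (captured prompt execution)
└── chat.completions.create     (Azure OpenAI call, child span)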
That’s it
No changes to your Azure OpenAI calls, no special return values, and no extra plumbing — just wrap the feature you want to observe.