Overview
This guide shows you how to integrate Latitude Telemetry into an existing application that uses the official Together AI SDK.
After completing these steps:
- Every Together AI call (e.g. `chat.completions.create`) can be captured as a log in Latitude.
- Logs are grouped under a prompt, identified by a `path`, inside a Latitude project.
- You can inspect inputs/outputs, measure latency, and debug Together AI-powered features from the Latitude dashboard.
You’ll keep calling Together AI exactly as you do today — Telemetry simply
observes and enriches those calls.
Requirements
Before you start, make sure you have:
- A Latitude account and API key
- A Latitude project ID
- A Node.js or Python-based project that uses the Together AI SDK
That’s it — prompts do not need to be created ahead of time.
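If you want your app to fail fast on missing configuration, a quick startup check can help. This is a minimal sketch, not part of the Latitude SDK; the environment variable name matches the examples below:

```typescript
// Fail fast if the Latitude credentials are missing (name matches the examples below).
const apiKey = process.env.LATITUDE_API_KEY
if (!apiKey) {
  throw new Error('LATITUDE_API_KEY must be set before initializing Telemetry')
}
```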
Steps
Install requirements
Add the Latitude Telemetry package to your project.

Node.js:

```bash
npm add @latitude-data/telemetry
```

Python:

```bash
pip install latitude-telemetry
```
Initialize Latitude Telemetry
Create a single Telemetry instance when your app starts. You must pass the Together AI SDK so Telemetry can instrument it.

Node.js:

```typescript
import { LatitudeTelemetry } from '@latitude-data/telemetry'
import { Together } from 'together-ai'

export const telemetry = new LatitudeTelemetry(
  process.env.LATITUDE_API_KEY,
  {
    instrumentations: {
      together: Together, // This enables automatic tracing for the Together AI SDK
    },
  }
)
```

Python:

```python
import os

from latitude_telemetry import Telemetry, Instrumentors, TelemetryOptions

telemetry = Telemetry(
    os.environ["LATITUDE_API_KEY"],
    TelemetryOptions(
        instrumentors=[Instrumentors.Together],  # This enables automatic tracing for the Together AI SDK
    ),
)
```
The Telemetry instance should only be created once. Any Together AI client
instantiated after this will be automatically traced.
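Because instrumentation happens when the Telemetry instance is constructed, make sure the module that creates it runs before any Together AI client is built. A minimal sketch of that ordering (the module layout is illustrative):

```typescript
// app.ts — import the telemetry module first, for its side effect of
// instrumenting the Together AI SDK (module names are illustrative).
import './telemetry'
import { Together } from 'together-ai'

// This client, and any client created later, is traced automatically.
const client = new Together({ apiKey: process.env.TOGETHER_API_KEY })
```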
Wrap your Together AI-powered feature
Wrap the code that calls Together AI using `telemetry.capture`.

Node.js:

```typescript
import { telemetry } from './telemetry'
import { Together } from 'together-ai'

export async function generateSupportReply(input: string) {
  return telemetry.capture(
    {
      projectId: 123, // The ID of your project in Latitude
      path: 'generate-support-reply', // Add a path to identify this prompt in Latitude
    },
    async () => {
      // Your regular LLM-powered feature code here
      const client = new Together({ ... })
      const response = await client.chat.completions.create({ ... })

      // You can return anything you want — the value is passed through unchanged
      return response
    }
  )
}
```
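Calling the wrapped function is unchanged; whatever the callback returns is what the caller receives. A quick sketch (the module path and input text are illustrative):

```typescript
import { generateSupportReply } from './support'

// The wrapper is transparent to callers: the callback's return value passes through.
const reply = await generateSupportReply('My order arrived damaged. What can I do?')
console.log(reply)
```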
In Python, you can use the `capture` method as a decorator (recommended) or as a context manager.

Using a decorator (recommended):

```python
from together import Together

from telemetry import telemetry

@telemetry.capture(
    project_id=123,  # The ID of your project in Latitude
    path="generate-support-reply",  # Add a path to identify this prompt in Latitude
)
def generate_support_reply(input: str) -> str:
    # Your regular LLM-powered feature code here
    client = Together()
    response = client.chat.completions.create(
        model="meta-llama/Llama-3-70b-chat-hf",
        messages=[{"role": "user", "content": input}],
    )

    # You can return anything you want — the value is passed through unchanged
    return response.choices[0].message.content
```

Using a context manager:

```python
from together import Together

from telemetry import telemetry

def generate_support_reply(input: str) -> str:
    with telemetry.capture(
        project_id=123,  # The ID of your project in Latitude
        path="generate-support-reply",  # Add a path to identify this prompt in Latitude
    ):
        # Your regular LLM-powered feature code here
        client = Together()
        response = client.chat.completions.create(
            model="meta-llama/Llama-3-70b-chat-hf",
            messages=[{"role": "user", "content": input}],
        )

        # You can return anything you want — the value is passed through unchanged
        return response.choices[0].message.content
```
The `path`:
- Identifies the prompt in Latitude
- Can be new or existing
- Should not contain spaces or special characters (use letters, numbers, `-`, `_`, `/`, and `.`)
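For example, all of the path values below satisfy those rules. This is an illustrative sketch with a stub callback, not output from the real SDK:

```typescript
import { telemetry } from './telemetry'

// Stub callback so each capture call is self-contained (illustrative only).
const run = async () => 'ok'

await telemetry.capture({ projectId: 123, path: 'generate-support-reply' }, run)
await telemetry.capture({ projectId: 123, path: 'support/replies/generate' }, run)
await telemetry.capture({ projectId: 123, path: 'emails.welcome_v2' }, run)
// Avoid spaces: 'generate support reply' is not a valid path
```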
Seeing your logs in Latitude
Once your feature is wrapped, logs will appear automatically.
- Open the prompt in your Latitude dashboard (identified by its `path`)
- Go to the Traces section
- Each execution will show:
  - Input and output messages
  - Model and token usage
  - Latency and errors
- One trace per feature invocation
Each Together AI call appears as a child span under the captured prompt execution, giving you a full, end-to-end view of what happened.
That’s it
No changes to your Together AI calls, no special return values, and no extra plumbing — just wrap the feature you want to observe.