This integration is only available in the Python SDK.
Overview
This guide shows you how to integrate Latitude Telemetry into an existing application that uses the official IBM watsonx.ai SDK.
After completing these steps:
- Every watsonx.ai call (e.g. `generate`, `generate_text`) can be captured as a log in Latitude.
- Logs are grouped under a prompt, identified by a `path`, inside a Latitude project.
- You can inspect inputs/outputs, measure latency, and debug watsonx.ai-powered features from the Latitude dashboard.
You’ll keep calling watsonx.ai exactly as you do today: Telemetry simply observes and enriches those calls.
Requirements
Before you start, make sure you have:
- A Latitude account and API key
- A Latitude project ID
- A Python-based project that uses the IBM watsonx.ai SDK (`ibm-watsonx-ai`)
That’s it — prompts do not need to be created ahead of time.
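The examples below read configuration from environment variables: `LATITUDE_API_KEY` for Latitude, plus `WATSONX_API_KEY` and `WATSONX_PROJECT_ID` for watsonx.ai. Set these before running the examples, or substitute your own configuration mechanism.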
Steps
Install requirements
Add the Latitude Telemetry package to your project:

```
pip install latitude-telemetry
```
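The examples also use the IBM watsonx.ai SDK itself; if it is not already installed, add it as well with `pip install ibm-watsonx-ai` (the package name listed under Requirements).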
Wrap your watsonx.ai-powered feature
Initialize Latitude Telemetry and wrap the code that calls watsonx.ai using `telemetry.capture`. You can use the `capture` method as a decorator (recommended) or as a context manager.

Using a decorator (recommended):
```python
import os

from ibm_watsonx_ai.foundation_models import Model
from ibm_watsonx_ai.metanames import GenTextParamsMetaNames as GenParams

from latitude_telemetry import Telemetry, Instrumentors, TelemetryOptions

telemetry = Telemetry(
    os.environ["LATITUDE_API_KEY"],
    TelemetryOptions(instrumentors=[Instrumentors.Watsonx]),
)

@telemetry.capture(
    project_id=123,  # The ID of your project in Latitude
    path="generate-support-reply",  # Add a path to identify this prompt in Latitude
)
def generate_support_reply(input: str) -> str:
    model = Model(
        model_id="ibm/granite-13b-chat-v2",
        credentials={
            "url": "https://us-south.ml.cloud.ibm.com",
            "apikey": os.environ["WATSONX_API_KEY"],
        },
        project_id=os.environ["WATSONX_PROJECT_ID"],
    )
    parameters = {
        GenParams.MAX_NEW_TOKENS: 100,
    }
    response = model.generate_text(prompt=input, params=parameters)
    return response
```
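With the decorator in place, you call the function exactly as before; each invocation produces one trace in Latitude. A hypothetical call (the input string is illustrative):

```python
# Calling the decorated function; Telemetry captures the watsonx.ai call automatically.
reply = generate_support_reply("My order arrived damaged. What are my options?")
print(reply)
```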
Using a context manager:

```python
import os

from ibm_watsonx_ai.foundation_models import Model
from ibm_watsonx_ai.metanames import GenTextParamsMetaNames as GenParams

from latitude_telemetry import Telemetry, Instrumentors, TelemetryOptions

telemetry = Telemetry(
    os.environ["LATITUDE_API_KEY"],
    TelemetryOptions(instrumentors=[Instrumentors.Watsonx]),
)

def generate_support_reply(input: str) -> str:
    with telemetry.capture(
        project_id=123,  # The ID of your project in Latitude
        path="generate-support-reply",  # Add a path to identify this prompt in Latitude
    ):
        model = Model(
            model_id="ibm/granite-13b-chat-v2",
            credentials={
                "url": "https://us-south.ml.cloud.ibm.com",
                "apikey": os.environ["WATSONX_API_KEY"],
            },
            project_id=os.environ["WATSONX_PROJECT_ID"],
        )
        parameters = {
            GenParams.MAX_NEW_TOKENS: 100,
        }
        response = model.generate_text(prompt=input, params=parameters)
        return response
```
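The context-manager form is handy when the watsonx.ai call is embedded in code you would rather not refactor into a decorated function. A minimal sketch, assuming `model` and `parameters` are already set up as in the examples above (the prompt string is illustrative):

```python
# Wrap any existing call site inline; telemetry and model setup mirror the examples above.
with telemetry.capture(project_id=123, path="generate-support-reply"):
    reply = model.generate_text(prompt="Where is my order?", params=parameters)
```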
The `path`:
- Identifies the prompt in Latitude
- Can be new or existing
- Should not contain spaces or special characters (use letters, numbers, `-`, `_`, `/`, and `.`)
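For example, paths such as `generate-support-reply` or `support/replies.v2` (an illustrative name) follow these rules, while `generate reply!` does not.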
Seeing your logs in Latitude
Once your feature is wrapped, logs will appear automatically.
- Open the prompt in your Latitude dashboard (identified by `path`)
- Go to the Traces section
- Each execution will show:
  - Input and output messages
  - Model and token usage
  - Latency and errors
- One trace per feature invocation
Each watsonx.ai call appears as a child span under the captured prompt execution, giving you a full, end-to-end view of what happened.
That’s it
No changes to your watsonx.ai calls, no special return values, and no extra plumbing — just wrap the feature you want to observe.