Latitude’s Python integration has the following main features:

  • Automatic tracing of LLM calls
  • Interact with Latitude’s prompt manager from code: create, update and delete prompts
  • Render Latitude prompts locally and run them against your LLM providers
  • Run prompts with Latitude’s high-performing gateway
  • Trigger LLM-as-judge and human-in-the-loop evaluations
  • Programmatically push external logs to Latitude for evaluation and monitoring

Installation

To install the Latitude SDK, use your preferred package manager:

pip install latitude-sdk
# or
poetry add latitude-sdk
# or
uv add latitude-sdk

Getting Started

First, import the Latitude class from the SDK and initialize it with your API key:

from latitude_sdk import Latitude, LatitudeOptions

sdk = Latitude('your-api-key-here', LatitudeOptions(
    project_id=12345, # Optional, otherwise you have to provide it in each method
    version_uuid='optional-version-uuid', # Optional, by default it targets the latest live version
))

Examples

Check out our cookbook for more examples of how to use Latitude’s SDK.

Telemetry

Latitude can automatically trace all your LLM calls from most major providers and frameworks using our OpenTelemetry integration. We recommend this approach, as it is the easiest way to start using Latitude’s full capabilities.

The integration works the same way for all supported providers and frameworks.
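
For example, here is a minimal sketch that traces OpenAI calls. It assumes the Instrumentors enum exported by latitude_telemetry; check the telemetry reference for the exact instrumentor names available:

from latitude_sdk import Latitude, LatitudeOptions
from latitude_telemetry import Instrumentors, TelemetryOptions
from openai import OpenAI

# Initializing the SDK with an instrumentor patches the OpenAI client, so every
# completion call below is traced and sent to Latitude automatically
sdk = Latitude('your-api-key-here', LatitudeOptions(
    telemetry=TelemetryOptions(
        instrumentors=[Instrumentors.OpenAI],
    ),
))

client = OpenAI()
client.chat.completions.create(
    model='gpt-4o-mini',
    messages=[{'role': 'user', 'content': 'Tell me a joke about doctors'}],
)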

Learn more about traces and how to monitor them with Latitude.

A note during development

Latitude’s OpenTelemetry integration batches requests automatically to improve performance. This is helpful in production workloads, but during development you may want to disable batching. You can do this by setting the disable_batch option to True:

from latitude_sdk import Latitude, LatitudeOptions
from latitude_telemetry import TelemetryOptions

sdk = Latitude('your-api-key-here', LatitudeOptions(
    telemetry=TelemetryOptions(
        disable_batch=True,
    ),
))

Prompt Management

Get or create a prompt

To get or create a prompt, use the get_or_create method:

from latitude_sdk import GetOrCreatePromptOptions

await sdk.prompts.get_or_create('path/to/your/prompt', GetOrCreatePromptOptions(
    project_id=12345, # Optional, if you did not provide it in the constructor
    version_uuid='optional-version-uuid', # Optional, by default it targets the latest live version
    prompt='Your prompt here', # Optional, this will be the contents of your prompt if it does not exist
))

Run a prompt with your LLM provider

The render method will render your prompt and return the configuration and messages to use with your LLM provider. This render step is completely local and does not use Latitude’s runtime services:

from latitude_sdk import GetPromptOptions, RenderPromptOptions
from promptl_ai import Adapter
from openai import OpenAI

prompt = await sdk.prompts.get('path/to/your/prompt', GetPromptOptions(
    project_id=12345, # Optional, if you did not provide it in the constructor
    version_uuid='optional-version-uuid', # Optional, by default it targets the latest live version
))

result = await sdk.prompts.render(prompt, RenderPromptOptions(
    parameters={
        # Any parameters your prompt expects
    },
    adapter=Adapter.OpenAI, # Optional, by default it's OpenAI
))

client = OpenAI()

response = client.chat.completions.create(
    **result.config,
    messages=[message.model_dump() for message in result.messages],
)

You can also execute chains by providing an on_step callback to the render_chain method, which will be called for each step of the chain to generate the corresponding response:

from typing import Any, Dict, List, Sequence, Union
from latitude_sdk import RenderChainOptions
from promptl_ai import Adapter, MessageLike

async def on_step(messages: List[MessageLike], config: Dict[str, Any]) -> Union[str, MessageLike, Sequence[MessageLike]]:
    ...
    return "Assistant message"

result = await sdk.prompts.render_chain(prompt, on_step, RenderChainOptions(
    parameters={
        # Any parameters your prompt expects
    },
    adapter=Adapter.OpenAI, # Optional, by default it's OpenAI
))

render and render_chain only work with the latest version of Latitude’s open-source prompt syntax, PromptL.

Run a prompt through Latitude Gateway

Latitude’s Gateway is a high-performance proxy that sits between your application and the LLM provider. It adds features such as automatic prompt caching based on prompt content and configuration.

In order to run a prompt through Latitude’s Gateway, use the run method:

from latitude_sdk import RunPromptOptions

await sdk.prompts.run('path/to/your/prompt', RunPromptOptions(
    project_id=12345, # Optional, if you did not provide it in the constructor
    version_uuid='optional-version-uuid', # Optional, by default it targets the latest live version
    stream=False, # Optional, by default it's false
    parameters={
        # Any parameters your prompt expects
    },
    tools={
        # Any tools your prompt expects
    },
    on_event=lambda event: print(event), # Handle events during execution
    on_finished=lambda result: print(result), # Handle the final result
    on_error=lambda error: print(error), # Handle any errors
))

Running a prompt with tools

When you run a prompt with tools, you can define and supply the corresponding tool handlers to the Latitude SDK. These handlers will be called automatically when the LLM invokes the tools. The tool results will be returned to the LLM, and the conversation will continue.

from typing import Any, Dict

from latitude_sdk import OnToolCallDetails

async def get_coordinates(arguments: Dict[str, Any], details: OnToolCallDetails) -> int:
    # Arguments from the LLM. You could use a Pydantic model to validate the arguments and get type hints
    location = arguments["location"]
    ...
    # The result can be anything JSON serializable
    return 12345

async def get_weather(arguments: Dict[str, Any], details: OnToolCallDetails) -> Dict[str, Any]:
    ...
    # The result can be anything JSON serializable
    return {"weather": "sunny"}

await sdk.prompts.run('path/to/your/prompt', RunPromptOptions(
    ...
    tools={
        "get_coordinates": get_coordinates,
        "get_weather": get_weather,
    },
    ...
))

Any exception raised in the tool handler will be caught and sent to the LLM as a tool result error.
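
For example, in this hypothetical handler the raised exception is not propagated to your code; it is reported back to the LLM as a tool result error and the conversation continues:

async def get_weather(arguments: Dict[str, Any], details: OnToolCallDetails) -> Dict[str, Any]:
    if "location" not in arguments:
        # This exception is caught by the SDK and sent to the LLM as a tool result error
        raise ValueError("Missing 'location' argument")
    return {"weather": "sunny"}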

Pausing tool execution

If you need to pause the execution of the tools, you can do so by returning details.pause_execution() in the tool handler. You can resume the conversation later by passing the tool results to the sdk.prompts.chat method, as sketched after the example below.

from latitude_sdk import OnToolCallDetails

# OnToolCallDetails:
# .id: Called tool id
# .name: Called tool name, `get_weather` in this case
# .pause_execution: Signal to optionally pause the execution
# .messages: List of conversation messages so far
# .conversation_uuid: Conversation identifier, to be used in the `sdk.prompts.chat` method
# .requested_tool_calls: All tool calls requested by the LLM, including this one

async def get_weather(arguments: Dict[str, Any], details: OnToolCallDetails) -> Any:
    # Let's imagine `get_weather` takes a long time to execute and you want to do it in the background.
    # With `pause_execution`, you can pause the conversation, get the weather data, and resume the conversation
    # later, by returning the tool results in the `sdk.prompts.chat` method.
    # You may have to store `requested_tool_calls` and `conversation_uuid` on your side to resume the conversation.
    # Note that you must return the results for all requested tools at once in the `sdk.prompts.chat` method.
    return details.pause_execution()

await sdk.prompts.run('path/to/your/prompt', RunPromptOptions(
    ...
    tools={
        "get_coordinates": get_coordinates,
        "get_weather": get_weather,
    },
    ...
))
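
To resume the conversation, call sdk.prompts.chat with the stored conversation_uuid and one result per requested tool call. The following is a minimal sketch; it assumes the ChatPromptOptions, ToolMessage and ToolResultContent helpers exported by the SDK, so check the SDK reference for the exact message classes and fields:

from latitude_sdk import ChatPromptOptions, ToolMessage, ToolResultContent

# `conversation_uuid` and `requested_tool_calls` were stored when the handler paused execution
messages = [
    ToolMessage(content=[
        ToolResultContent(
            id=tool_call.id,
            name=tool_call.name,
            result={"weather": "sunny"}, # Your computed result, anything JSON serializable
        )
        for tool_call in requested_tool_calls
    ]),
]

await sdk.prompts.chat(conversation_uuid, messages, ChatPromptOptions(
    on_finished=lambda result: print(result), # Optional callbacks, as in `run`
))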

Log Management

Pushing a log to Latitude

To create a log programmatically, use the create method:

from latitude_sdk import CreateLogOptions, UserMessage

messages = [
    UserMessage(content='Please tell me a joke about doctors'),
]

await sdk.logs.create('path/to/your/prompt', messages, CreateLogOptions(
    project_id=12345, # Optional, if you did not provide it in the constructor
    version_uuid='optional-version-uuid', # Optional, by default it targets the latest live version
    response='assistant response',
))

Logs follow the PromptL format. If you’re using a different method to run your prompts, you’ll need to format your logs accordingly.
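
For example, here is a hedged sketch that runs a prompt directly with the OpenAI client and pushes the resulting log to Latitude (the client and model name are illustrative):

from latitude_sdk import CreateLogOptions, UserMessage
from openai import OpenAI

client = OpenAI()

messages = [
    UserMessage(content='Please tell me a joke about doctors'),
]

completion = client.chat.completions.create(
    model='gpt-4o-mini',
    messages=[{'role': 'user', 'content': 'Please tell me a joke about doctors'}],
)

# Push both the conversation and the provider response to Latitude for evaluation and monitoring
await sdk.logs.create('path/to/your/prompt', messages, CreateLogOptions(
    response=completion.choices[0].message.content,
))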