Python
Integrate Latitude’s SDK into your Python project
Latitude’s Python integration has the following main features:
- Automatic tracing of LLM calls
- Interact with Latitude’s prompt manager from code: create, update and delete prompts
- Render Latitude prompts locally and run them against your LLM providers
- Run prompts with Latitude’s high-performance gateway
- Trigger LLM-as-judge and human-in-the-loop evaluations
- Programmatically push external logs to Latitude for evaluation and monitoring
Installation
To install the Latitude SDK, use your preferred package manager:
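For example, with pip (the package is published on PyPI as `latitude-sdk`):

```bash
pip install latitude-sdk
```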
Getting Started
First, import the Latitude class from the SDK and initialize it with your API key:
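A minimal sketch; the `LatitudeOptions` fields shown here (`project_id`, `version_uuid`) are assumptions based on the SDK’s options pattern:

```python
from latitude_sdk import Latitude, LatitudeOptions

# Initialize the SDK with your API key; project_id and version_uuid
# (both optional) scope subsequent calls to a project and version
sdk = Latitude("your-api-key-here", LatitudeOptions(
    project_id=12345,
    version_uuid="optional-version-uuid",
))
```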
Examples
Check out our cookbook for more examples of how to use Latitude’s SDK.
Telemetry
Latitude can automatically trace all your LLM calls from most major providers and frameworks using our OpenTelemetry integration. We recommend this approach as the easiest way to start using Latitude’s full capabilities.
Here’s how to integrate with all supported providers/frameworks (a minimal OpenAI setup is sketched after the list):
- Aleph Alpha
- Anthropic
- AWS Bedrock
- AWS Sagemaker
- Cohere
- DSPy
- Google AI Platform
- Groq
- Haystack
- Langchain
- LiteLLM
- LlamaIndex
- MistralAI
- Ollama
- OpenAI
- Replicate
- Together
- Transformers
- Vertex AI
- Watsonx
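For instance, a minimal OpenAI setup might look like the sketch below. The `latitude_telemetry` import path and the `TelemetryOptions`/`Instrumentors` names are assumptions, so check the integration page for your provider:

```python
from latitude_sdk import Latitude, LatitudeOptions
from latitude_telemetry import Instrumentors, TelemetryOptions

# Instrument the OpenAI client library: every completion it makes is
# captured as a trace and sent to Latitude automatically
sdk = Latitude("your-api-key-here", LatitudeOptions(
    telemetry=TelemetryOptions(
        instrumentors=[Instrumentors.OpenAI],
    ),
))
```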
Learn more about traces and how to monitor them with Latitude.
A note during development
Latitude’s OpenTelemetry integration batches requests automatically in order to improve performance. This is helpful in production workloads, but during development you may want to disable batching by setting the `disable_batch` option to `True`.
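A minimal sketch, assuming the option lives on `TelemetryOptions` as in the setup above:

```python
from latitude_sdk import Latitude, LatitudeOptions
from latitude_telemetry import TelemetryOptions

# Send each trace immediately instead of batching them
sdk = Latitude("your-api-key-here", LatitudeOptions(
    telemetry=TelemetryOptions(
        disable_batch=True,
    ),
))
```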
Prompt Management
Get or create a prompt
To get or create a prompt, use the `get_or_create` method.
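A sketch, assuming the SDK’s async API and a `GetOrCreatePromptOptions` options object carrying the initial content:

```python
import asyncio

from latitude_sdk import GetOrCreatePromptOptions, Latitude, LatitudeOptions

async def main():
    sdk = Latitude("your-api-key-here", LatitudeOptions(project_id=12345))

    # Fetch the prompt at this path, creating it with the given content
    # if it does not exist yet
    prompt = await sdk.prompts.get_or_create(
        "path/to/your/prompt",
        GetOrCreatePromptOptions(prompt="Write a haiku about {{ topic }}"),
    )
    print(prompt.content)

asyncio.run(main())
```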
Run a prompt with your LLM provider
The `render` method will render your prompt and return the configuration and messages to use with your LLM provider. This render step is completely local and does not use Latitude’s runtime services.
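A sketch using OpenAI as the provider; `RenderPromptOptions`, the `Adapter` enum, and the `sdk.prompts.get` call are assumptions based on the SDK’s conventions:

```python
from latitude_sdk import Adapter, Latitude, RenderPromptOptions
from openai import AsyncOpenAI

async def render_and_complete(sdk: Latitude, openai: AsyncOpenAI):
    prompt = await sdk.prompts.get("path/to/your/prompt")

    # Render locally: substitute parameters and adapt the messages to
    # the OpenAI format; no network call to Latitude is made here
    result = await sdk.prompts.render(prompt.content, RenderPromptOptions(
        parameters={"topic": "Python"},
        adapter=Adapter.OpenAI,
    ))

    # result.config carries the model settings (model, temperature, ...)
    return await openai.chat.completions.create(
        **result.config,
        messages=result.messages,
    )
```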
You can also execute chains by providing an `on_step` callback to the `render_chain` method, which will be called for each step of the chain to generate the corresponding response.
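A sketch; the `on_step` signature (the step’s messages and config) and `RenderChainOptions` are assumptions:

```python
from latitude_sdk import Adapter, Latitude, RenderChainOptions
from openai import AsyncOpenAI

openai = AsyncOpenAI()

async def on_step(messages, config):
    # Called once per chain step: run the step against the provider and
    # return its response so the next step can build on it
    response = await openai.chat.completions.create(
        **config,
        messages=messages,
    )
    return response.choices[0].message.content

async def run_chain(sdk: Latitude):
    prompt = await sdk.prompts.get("path/to/your/chain")
    return await sdk.prompts.render_chain(prompt, RenderChainOptions(
        parameters={"topic": "Python"},
        adapter=Adapter.OpenAI,
        on_step=on_step,
    ))
```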
`render` and `render_chain` only work with the latest version of Latitude’s open-source prompt syntax: PromptL.
Run a prompt through Latitude Gateway
Latitude’s Gateway is a high-performance proxy that routes your LLM calls between your application and the LLM provider. It includes additional features such as automatic prompt caching based on content and prompt configuration.
To run a prompt through Latitude’s Gateway, use the `run` method.
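A sketch, assuming a `RunPromptOptions` object for parameters and callbacks, and a result exposing the final response:

```python
import asyncio

from latitude_sdk import Latitude, LatitudeOptions, RunPromptOptions

async def main():
    sdk = Latitude("your-api-key-here", LatitudeOptions(project_id=12345))

    # The Gateway renders the prompt, calls your configured LLM
    # provider, and logs the result in Latitude
    result = await sdk.prompts.run("path/to/your/prompt", RunPromptOptions(
        parameters={"topic": "Python"},
        on_error=lambda error: print(error),
    ))
    if result:
        print(result.response)

asyncio.run(main())
```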
Running a prompt with tools
When you run a prompt with tools, you can define and supply the corresponding tool handlers to the Latitude SDK. These handlers will be called automatically when the LLM invokes the tools. The tool results will be returned to the LLM, and the conversation will continue.
Any exception raised in the tool handler will be caught and sent to the LLM as a tool result error.
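A sketch of a tool handler; the handler signature (`arguments` plus a `details` object) is an assumption consistent with the pausing section below:

```python
from latitude_sdk import Latitude, RunPromptOptions

async def get_weather(arguments, details):
    # Invoked automatically when the LLM calls the "get_weather" tool;
    # the return value is sent back to the LLM as the tool result, and
    # any exception raised here reaches the LLM as a tool result error
    return {"temperature": "2°C", "location": arguments["location"]}

async def run_with_tools(sdk: Latitude):
    # Map the tool names declared in the prompt to their handlers
    return await sdk.prompts.run("path/to/your/prompt", RunPromptOptions(
        tools={"get_weather": get_weather},
    ))
```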
Pausing tool execution
If you need to pause tool execution, return `details.pause_execution()` from the tool handler. You can resume the conversation later by providing the tool results to the `sdk.prompts.chat` method.
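A sketch of the pause/resume flow; the tool-result message shape and the `chat` signature are assumptions:

```python
from latitude_sdk import Latitude

async def get_weather(arguments, details):
    # Persist whatever you need to answer this call later (for example
    # a tool call id, assuming the details object exposes one), then
    # pause the conversation at this tool call
    return details.pause_execution()

async def resume(sdk: Latitude, conversation_uuid: str, tool_call_id: str):
    # Resume the paused conversation by supplying the tool results
    return await sdk.prompts.chat(conversation_uuid, [
        {
            "role": "tool",
            "content": [{
                "type": "tool-result",
                "toolCallId": tool_call_id,
                "toolName": "get_weather",
                "result": {"temperature": "2°C"},
            }],
        },
    ])
```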
Log Management
Pushing a log to Latitude
To create a log programmatically, use the `create` method.
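A sketch, assuming a `CreateLogOptions` object that carries the provider’s response:

```python
from latitude_sdk import CreateLogOptions, Latitude

async def push_log(sdk: Latitude):
    # Messages must follow the PromptL format (see the note below)
    messages = [
        {"role": "user", "content": "Write a haiku about Python"},
    ]
    # Push an externally produced generation to Latitude so it can be
    # evaluated and monitored alongside Gateway runs
    await sdk.logs.create(
        "path/to/your/prompt",
        messages,
        CreateLogOptions(response="Indentation flows, ..."),
    )
```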
Logs follow the PromptL format. If you’re using a different method to run your prompts, you’ll need to format your logs accordingly.