Python
Integrate Latitude into your Python applications using the Python SDK.
The Latitude Python SDK provides a convenient way to interact with the Latitude platform from your Python applications.
Installation
The Latitude SDK is compatible with Python 3.9 or higher.
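Assuming the package is published on PyPI as `latitude-sdk`, you can install it with pip:

```
pip install latitude-sdk
```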
Authentication and Initialization
Import the SDK and initialize it with your API key. You can generate API keys in your Latitude project settings under “API Access”.
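For example (a minimal sketch; the exact constructor and option names may differ slightly in your installed SDK version):

```python
from latitude_sdk import Latitude, LatitudeOptions

# Initialize the client with your API key and the project you want to work with.
latitude = Latitude("your-api-key-here", LatitudeOptions(project_id=12345))
```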
You can also provide additional options during initialization:
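For instance, you can point the client at a specific project and prompt version (the option names shown here are assumptions; see the method reference below):

```python
latitude = Latitude(
    "your-api-key-here",
    LatitudeOptions(
        project_id=12345,                   # default project used by prompt, log, and evaluation calls
        version_uuid="version-uuid-here",   # assumed option: pin a specific prompt version instead of the live one
    ),
)
```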
Keep your API key secure and avoid committing it directly into your codebase.
Examples
Check out our cookbook for more examples of how to use the Latitude SDK.
SDK Usage
The Latitude Python SDK is async by design, so you must call it from an async context: for example inside a FastAPI or async Django application, or from a coroutine driven by the built-in asyncio library.
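For example, a standalone script can drive the SDK with asyncio (this sketch assumes the initialization shown above; the prompts.get call is illustrative):

```python
import asyncio

async def main():
    # Every SDK method is a coroutine, so it must be awaited.
    prompt = await latitude.prompts.get("prompt-path")
    print(prompt)

asyncio.run(main())
```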
SDK Structure
The Latitude SDK is organized into several namespaces:
- prompts: Methods for managing and running prompts
- logs: Methods for creating and managing logs
- evaluations: Methods for triggering evaluations and creating results
Prompt Management
Get a Prompt
To retrieve a specific prompt by its path:
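A minimal sketch, run inside an async function (the prompts.get method name is assumed; see the method reference below):

```python
prompt = await latitude.prompts.get("folder/my-prompt")
print(prompt.content)  # assumed field holding the prompt's PromptL content
```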
Get All Prompts
To retrieve all prompts in your project:
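For example (prompts.get_all is an assumed method name):

```python
prompts = await latitude.prompts.get_all()
for prompt in prompts:
    print(prompt.path)
```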
Get or Create a Prompt
To get an existing prompt or create a new one if it doesn’t exist:
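For example (get_or_create is an assumed method name):

```python
prompt = await latitude.prompts.get_or_create("folder/my-prompt")
```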
You can also provide the content when creating a new prompt:
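A sketch, assuming an options object with a prompt field for the initial content:

```python
from latitude_sdk import GetOrCreatePromptOptions  # name assumed

prompt = await latitude.prompts.get_or_create(
    "folder/my-prompt",
    GetOrCreatePromptOptions(prompt="Answer briefly.\n<user>{{ question }}</user>"),
)
```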
Running Prompts
Non-Streaming Run
Execute a prompt and get the complete response once generation is finished:
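A minimal sketch, inside an async function (option and field names are assumptions to adapt to your SDK version):

```python
from latitude_sdk import RunPromptOptions  # name assumed

result = await latitude.prompts.run(
    "folder/my-prompt",
    RunPromptOptions(
        parameters={"topic": "AI"},         # template parameters, if the prompt uses any
        custom_identifier="docs-example",   # optional tag for filtering runs in the dashboard
    ),
)
print(result.uuid)           # identifies this run/conversation
print(result.response.text)  # assumed shape of the final model response
```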
Handling Streaming Responses
For real-time applications (like chatbots), use streaming to get response chunks as they are generated:
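A sketch using callback options (callback names like on_event and on_finished are assumptions):

```python
from latitude_sdk import RunPromptOptions  # name assumed

async def on_event(event):
    print("chunk:", event)        # called for every streamed event/chunk

async def on_finished(result):
    print("done:", result.uuid)   # called once with the final result

await latitude.prompts.run(
    "folder/my-prompt",
    RunPromptOptions(
        parameters={"topic": "AI"},
        stream=True,
        on_event=on_event,
        on_finished=on_finished,
    ),
)
```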
Using Tools with Prompts
You can provide tool handlers that the model can call during execution:
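A sketch of a tool handler wired into a run; the handler signature (arguments, details) and the tools option are assumptions based on the pause_execution note below:

```python
from latitude_sdk import RunPromptOptions  # name assumed

async def get_weather(arguments, details):
    # `arguments` carries the model-provided tool arguments.
    # Return a value to send the tool result back to the model, or
    # return details.pause_execution() to pause and resume later via prompts.chat.
    return {"temperature": 22, "unit": "celsius"}

result = await latitude.prompts.run(
    "folder/weather-prompt",
    RunPromptOptions(
        parameters={"city": "Barcelona"},
        tools={"get_weather": get_weather},  # map of tool name -> handler
    ),
)
```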
If you need to pause the execution of a tool, return details.pause_execution() from the tool handler. You can resume the conversation later by passing the tool results to the latitude.prompts.chat method.
Chat with a Prompt
Follow up on the conversation of a previously run prompt:
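A sketch, assuming the method takes the conversation uuid plus a list of new messages:

```python
result = await latitude.prompts.chat(
    "conversation-uuid",
    [{"role": "user", "content": "Can you expand on that last point?"}],
)
print(result.response.text)
```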
Messages follow the PromptL format. If you’re using a different method to run your prompts, you’ll need to format your messages accordingly.
Rendering Prompts
Prompt Rendering
Render a prompt locally without running it:
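A sketch (the render signature and the fields on the returned object are assumptions):

```python
rendered = await latitude.prompts.render(
    prompt="Answer in one sentence.\n<user>{{ question }}</user>",
    parameters={"question": "What is PromptL?"},
)
print(rendered.messages)  # provider-ready messages, produced without calling any model
```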
Chain Rendering
Render a chain of prompts locally:
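A sketch: rendering a chain requires supplying each step's model response yourself, here via an assumed on_step callback:

```python
async def on_step(messages, config):
    # Call your own model provider with `messages` and return its text response
    # so the next step of the chain can be rendered.
    return "model response for this step"

rendered = await latitude.prompts.render_chain(
    prompt=prompt,                  # a prompt previously fetched with prompts.get
    parameters={"topic": "AI"},
    on_step=on_step,
)
print(rendered.messages)
```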
Logging
Creating Logs
Push a log to Latitude manually for a prompt:
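A sketch, assuming logs.create takes the prompt path plus the conversation messages:

```python
await latitude.logs.create(
    "folder/my-prompt",
    [
        {"role": "user", "content": "What is Latitude?"},
        {"role": "assistant", "content": "An open-source prompt engineering platform."},
    ],
)
```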
Evaluations
Triggering Evaluations
Trigger an evaluation manually for a conversation:
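A sketch, assuming the trigger method is keyed by the conversation uuid and optionally takes the evaluations to run:

```python
from latitude_sdk import TriggerEvaluationOptions  # name assumed

await latitude.evaluations.trigger(
    "conversation-uuid",
    TriggerEvaluationOptions(evaluation_uuids=["evaluation-uuid"]),
)
```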
Creating Evaluation Results
Push a result to Latitude manually for an evaluation:
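A sketch, assuming the call identifies the conversation and the evaluation and carries the result value:

```python
from latitude_sdk import CreateEvaluationResultOptions  # name assumed

await latitude.evaluations.create_result(
    "conversation-uuid",
    "evaluation-uuid",
    CreateEvaluationResultOptions(result="5", reason="Accurate and well formatted"),
)
```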
Complete Method Reference
Initialization
Prompts Namespace
Logs Namespace
Evaluations Namespace
Error Handling
The SDK raises ApiError instances when API requests fail. You can catch and handle these errors:
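For example (the ApiError import path is assumed):

```python
from latitude_sdk import ApiError, RunPromptOptions

try:
    result = await latitude.prompts.run("folder/my-prompt", RunPromptOptions(parameters={}))
except ApiError as error:
    # Decide whether to retry, surface the error to the user, or log it.
    print(f"Latitude API request failed: {error}")
```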
Logging Features
- Automatic Logging: All runs through latitude.prompts.run() are automatically logged in Latitude, capturing inputs, outputs, performance metrics, and trace information.
- Custom Identifiers: Use the optional custom_identifier parameter to tag runs for easier filtering and analysis in the Latitude dashboard.
- Response Identification: Each response includes identifying information like uuid that can be used to reference the specific run later.