The Latitude Python SDK provides a convenient way to interact with the Latitude platform from your Python applications.
Installation
The Latitude SDK is compatible with Python 3.9 or higher.
pip install latitude-sdk
# or
poetry add latitude-sdk
# or
uv add latitude-sdk
Authentication and Initialization
Import the SDK and initialize it with your API key. You can generate API keys in your Latitude project settings under “API Access”.
import os
from latitude_sdk import Latitude
latitude = Latitude(os.getenv("LATITUDE_API_KEY"))
You can also provide additional options during initialization:
latitude = Latitude(os.getenv("LATITUDE_API_KEY"), LatitudeOptions(
    project_id=12345,  # Your Latitude project ID
    version_uuid="optional-version-uuid",  # Optional version UUID
))
Keep your API key secure and avoid committing it directly into your codebase.
Examples
Check out our Examples section for more examples of how to use the Latitude SDK.
SDK Usage
The Latitude Python SDK is an async library by design. This means you must use it within an async event loop, such as the one provided by FastAPI or async Django, or drive one yourself with the built-in asyncio library, as in the example below.
import asyncio

from latitude_sdk import Latitude

latitude = Latitude("your-api-key-here")

async def main():
    prompt = await latitude.prompts.get("prompt-path")
    print(prompt)

asyncio.run(main())
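Alternatively, here is a minimal sketch of calling the SDK from a FastAPI endpoint (the app, route, and handler names are illustrative and not part of the Latitude SDK):
import os

from fastapi import FastAPI
from latitude_sdk import Latitude

app = FastAPI()
latitude = Latitude(os.getenv("LATITUDE_API_KEY"))

@app.get("/prompts/{path:path}")
async def read_prompt(path: str):
    # FastAPI handlers already run inside an event loop, so the SDK can be awaited directly
    result = await latitude.prompts.get(path)
    return {"prompt": str(result)}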
SDK Structure
The Latitude SDK is organized into several namespaces:
- prompts: Methods for managing and running prompts
- logs: Methods for creating and managing logs
- evaluations: Methods for triggering evaluations and creating results
Prompt Management
Get a Prompt
To retrieve a specific prompt by its path:
prompt = await latitude.prompts.get('prompt-path')
Get All Prompts
To retrieve all prompts in your project:
prompts = await latitude.prompts.get_all()
Get or Create a Prompt
To get an existing prompt or create a new one if it doesn’t exist:
prompt = await latitude.prompts.get_or_create('prompt-path')
You can also provide the content when creating a new prompt:
prompt = await latitude.prompts.get_or_create('prompt-path', GetOrCreatePromptOptions(
    prompt='This is the content of my new prompt',
))
Running Prompts
Non-Streaming Run
Execute a prompt and get the complete response once generation is finished:
result = await latitude.prompts.run('prompt-path', RunPromptOptions(
    parameters={
        'productName': 'CloudSync Pro',
        'audience': 'Small Business Owners',
    },
    # Optional: Provide a custom identifier for this run
    custom_identifier='email-campaign-2023',
    # Optional: Provide callbacks for events
    on_finished=lambda result: print('Run completed:', result.uuid),
    on_error=lambda error: print('Run error:', error.message),
))
print('Conversation UUID:', result.uuid)
print('Conversation messages:', result.conversation)
If your prompt is an agent, an agent_response property will be defined in the result. The structure of the response depends on the agent's configuration, although by default it is { "response": "Your agent's response" }.
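For example, a minimal sketch of reading it after a non-streaming run, assuming an agent prompt at an illustrative path:
result = await latitude.prompts.run('my-agent-path', RunPromptOptions(
    parameters={'query': 'Summarize the latest support tickets'},
))

# `agent_response` is only present for agent prompts; with the default
# configuration it is a dictionary shaped like {"response": "..."}
if result and result.agent_response:
    print('Agent response:', result.agent_response['response'])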
Handling Streaming Responses
For real-time applications (like chatbots), use streaming to get response chunks as they are generated:
async def on_event(event: StreamEvent):
    if event.event == StreamEvents.Provider and event.type == 'text-delta':
        print(event.text_delta)
    elif event.event == StreamEvents.Latitude and event.type == ChainEvents.ChainCompleted:
        print('Conversation UUID:', event.uuid)
        print('Conversation messages:', event.messages)

await latitude.prompts.run('prompt-path', RunPromptOptions(
    parameters={
        'productName': 'CloudSync Pro',
        'audience': 'Small Business Owners',
    },
    # Enable streaming
    stream=True,
    # Provide callbacks for events
    on_event=on_event,
    on_finished=lambda result: print('Stream completed:', result.uuid),
    on_error=lambda error: print('Stream error:', error.message),
))
Running a Prompt with Tools
You can provide tool handlers that the model can call during execution:
async def get_weather(arguments: Dict[str, Any], details: OnToolCallDetails) -> Dict[str, Any]:
    # `arguments` contains the arguments passed by the model
    # `details` contains context like tool id, name, messages...
    # The result can be anything JSON serializable
    return {"weather": "sunny"}

await latitude.prompts.run('prompt-path', RunPromptOptions(
    parameters={
        'query': 'What is the weather in San Francisco?',
    },
    # Define the tools the model can use
    tools={
        'getWeather': get_weather,
    },
))
If you need to pause the execution of a tool, you can do so by returning details.pause_execution() in the tool handler. You can resume the conversation later by returning the tool results through the latitude.prompts.chat method.
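A minimal sketch of that flow, assuming you persist the pending call and conversation uuid yourself (save_pending_call, the ids, and the tool-result message shape below are illustrative placeholders, not SDK guarantees):
async def book_meeting(arguments, details: OnToolCallDetails):
    # Store whatever you need in order to fulfil the call later; `save_pending_call`
    # is a hypothetical helper in your own code, not part of the SDK
    save_pending_call(details)
    # Pause this run; the conversation stays open until you send the result back
    return details.pause_execution()

# Later, once the real result is available, resume the conversation by sending the
# tool results back through latitude.prompts.chat. The message below is only a
# sketch of a tool message; check the PromptL message format for the exact fields.
tool_messages = [
    {
        'role': 'tool',
        'content': [
            {
                'type': 'tool-result',
                'toolCallId': 'stored-tool-call-id',
                'toolName': 'book_meeting',
                'result': {'booked': True},
            },
        ],
    },
]
result = await latitude.prompts.chat('stored-conversation-uuid', tool_messages)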
Chat with a Prompt
Continue the conversation of a previously run prompt:
messages = [
    {
        'role': 'user',
        'content': 'Hello, how can you help me today?',
    },
]

result = await latitude.prompts.chat('conversation-uuid', messages, ChatPromptOptions(
    # Chat options are similar to the run method
    on_finished=lambda result: print('Chat completed:', result.uuid),
    on_error=lambda error: print('Chat error:', error.message),
))
print('Conversation UUID:', result.uuid)
print('Conversation messages:', result.conversation)
Messages follow the PromptL format. If you're using a different method to run your prompts, you'll need to format your messages accordingly.
Rendering Prompts
Prompt Rendering
Render a prompt locally without running it:
result = await latitude.prompts.render(
    'Your prompt content here with {{ parameters }}',
    RenderPromptOptions(
        parameters={
            'topic': 'Artificial Intelligence',
            'tone': 'Professional',
        },
        # Optional: Specify a provider adapter
        adapter=Adapter.OpenAI,
    ),
)
print('Rendered config:', result.config)
print('Rendered messages:', result.messages)
Chain Rendering
Render a chain of prompts locally:
async def on_step(messages: list[MessageLike], config: dict[str, Any]) -> str | MessageLike:
    # Process each step in the chain
    print('Processing step with messages:', messages)
    # Return a string or a message object
    return 'Step response'

result = await latitude.prompts.render_chain(
    Prompt(
        path='prompt-path',
        content='Your prompt content here with {{ parameters }}',
        provider='openai',
    ),
    on_step,
    RenderChainOptions(
        parameters={
            'topic': 'Machine Learning',
            'complexity': 'Advanced',
        },
        # Optional: Specify a provider adapter
        adapter=Adapter.OpenAI,
    ),
)
print('Rendered config:', result.config)
print('Rendered messages:', result.messages)
Logging
Creating Logs
Push a log to Latitude manually for a prompt:
messages = [
    {
        'role': 'user',
        'content': 'Hello, how can you help me today?',
    },
]

log = await latitude.logs.create('prompt-path', messages, CreateLogOptions(
    response='I can help you with anything!',
))
Evaluations
Annotate a log
Push an evaluation result (annotate) to Latitude:
result = await latitude.evaluations.annotate(
    'conversation-uuid',
    4,  # In this case, the score is 4 out of 5
    'evaluation-uuid',
    AnnotateEvaluationOptions(reason='I liked it!'),
)
Complete Method Reference
Initialization
# SDK initialization
class LatitudeOptions:
    promptl: Optional[PromptlOptions]
    internal: Optional[InternalOptions]
    project_id: Optional[int]
    version_uuid: Optional[str]
    tools: Optional[dict[str, OnToolCall]]

Latitude(
    api_key: str,
    options: Optional[LatitudeOptions]
)
Prompts Namespace
# Get a prompt
class GetPromptOptions:
    project_id: Optional[int]
    version_uuid: Optional[str]

class GetPromptResult: Prompt

latitude.prompts.get(
    path: str,
    options: Optional[GetPromptOptions]
) -> GetPromptResult

# Get all prompts
class GetAllPromptsOptions:
    project_id: Optional[int]
    version_uuid: Optional[str]

class GetAllPromptsResult: List[Prompt]

latitude.prompts.get_all(
    options: Optional[GetAllPromptsOptions]
) -> GetAllPromptsResult

# Get or create a prompt
class GetOrCreatePromptOptions:
    project_id: Optional[int]
    version_uuid: Optional[str]
    prompt: Optional[str]

class GetOrCreatePromptResult: Prompt

latitude.prompts.get_or_create(
    path: str,
    options: Optional[GetOrCreatePromptOptions]
) -> GetOrCreatePromptResult
# Run a prompt
class RunPromptOptions:
    project_id: Optional[int]
    version_uuid: Optional[str]
    on_event: Optional[OnEvent]
    on_finished: Optional[OnFinished]
    on_error: Optional[OnError]
    custom_identifier: Optional[str]
    parameters: Optional[dict[str, Any]]
    tools: Optional[dict[str, OnToolCall]]
    stream: Optional[bool]

class RunPromptResult:
    uuid: str
    conversation: list[Message]
    response: ChainResponse
    tool_requests: list[ToolCall]

latitude.prompts.run(
    path: str,
    options: Optional[RunPromptOptions]
) -> Optional[RunPromptResult]

# Chat with a prompt
class ChatPromptOptions:
    on_event: Optional[OnEvent]
    on_finished: Optional[OnFinished]
    on_error: Optional[OnError]
    tools: Optional[dict[str, OnToolCall]]
    stream: Optional[bool]

class ChatPromptResult:
    uuid: str
    conversation: list[Message]
    response: ChainResponse
    tool_requests: list[ToolCall]

latitude.prompts.chat(
    uuid: str,
    messages: Sequence[MessageLike],
    options: Optional[ChatPromptOptions]
) -> Optional[ChatPromptResult]
# Render a prompt
class RenderPromptOptions:
    parameters: Optional[dict[str, Any]]
    adapter: Optional[Adapter]

class RenderPromptResult:
    messages: list[MessageLike]
    config: dict[str, Any]

latitude.prompts.render(
    prompt: str,
    options: Optional[RenderPromptOptions]
) -> RenderPromptResult

# Render a chain
class RenderChainOptions:
    parameters: Optional[dict[str, Any]]
    adapter: Optional[Adapter]

class RenderChainResult:
    messages: list[MessageLike]
    config: dict[str, Any]

latitude.prompts.render_chain(
    prompt: Prompt,
    on_step: OnStep,
    options: Optional[RenderChainOptions]
) -> RenderChainResult
Logs Namespace
# Create a log
class CreateLogOptions:
    project_id: Optional[int]
    version_uuid: Optional[str]
    response: Optional[str]

class CreateLogResult: Log

latitude.logs.create(
    path: str,
    messages: Sequence[MessageLike],
    options: Optional[CreateLogOptions]
) -> CreateLogResult
Evaluations Namespace
class AnnotateEvaluationOptions(Model):
    reason: str

class AnnotateEvaluationResult(Model):
    uuid: str
    score: int
    normalized_score: int = Field(alias=str("normalizedScore"))
    metadata: dict[str, Any]
    has_passed: bool = Field(alias=str("hasPassed"))
    created_at: datetime = Field(alias=str("createdAt"))
    updated_at: datetime = Field(alias=str("updatedAt"))
    version_uuid: str = Field(alias=str("versionUuid"))
    error: Optional[Union[str, None]] = None

latitude.evaluations.annotate(
    uuid: str,
    score: int,
    evaluation_uuid: str,
    options: AnnotateEvaluationOptions
) -> AnnotateEvaluationResult
Error Handling
The SDK raises ApiError
instances when API requests fail. You can catch and handle these errors:
from latitude_sdk import ApiError

async def handle_errors():
    try:
        prompt = await latitude.prompts.get("non-existent-prompt")
    except ApiError as error:
        print(f"API Error: {error.message}")
        print(f"Error Code: {error.code}")
        print(f"Status: {error.status}")
    except Exception as error:
        print(f"Unexpected error: {error}")
Logging Features
- Automatic Logging: All runs through latitude.prompts.run() are automatically logged in Latitude, capturing inputs, outputs, performance metrics, and trace information.
- Custom Identifiers: Use the optional custom_identifier parameter to tag runs for easier filtering and analysis in the Latitude dashboard.
- Response Identification: Each response includes identifying information, such as the uuid, that can be used to reference the specific run later (see the sketch below).
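As a sketch of that last point, you might keep the uuid from a run and use it later to continue the same conversation (the prompt path and follow-up message are illustrative):
result = await latitude.prompts.run('prompt-path', RunPromptOptions(
    parameters={'productName': 'CloudSync Pro'},
))
conversation_uuid = result.uuid  # persist this alongside your own records

# ...later, pick the same conversation back up with prompts.chat
follow_up = await latitude.prompts.chat(conversation_uuid, [
    {'role': 'user', 'content': 'Can you make that answer shorter?'},
])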