Installation
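Assuming the package is published on npm as `@latitude-data/sdk` (the package name is an assumption), installation would look like:

```bash
npm install @latitude-data/sdk
```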
The Latitude SDK is compatible with Node.js 16 or higher.

Authentication and Initialization
Import the SDK and initialize it with your API key. You can generate API keys in your Latitude project settings under “API Access”. Both projectId and versionUuid options can be overridden on a per-method basis when needed.
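A minimal initialization sketch. The import path and exact constructor signature are assumptions; the projectId and versionUuid options come from the text above:

```typescript
import { Latitude } from '@latitude-data/sdk' // import path is an assumption

// Initialize once with your API key. projectId and versionUuid become
// project-wide defaults that individual method calls may override.
const latitude = new Latitude('your-api-key', {
  projectId: 123,
  versionUuid: 'optional-version-uuid',
})
```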
Examples
Check out our Examples section for more examples of how to use the Latitude SDK.

SDK Structure
The Latitude SDK is organized into several namespaces:

- prompts: Methods for managing and running prompts
- runs: Methods for managing active runs
- logs: Methods for pushing logs to Latitude
- evaluations: Methods for pushing evaluation results to Latitude
- projects: Methods for managing projects
- versions: Methods for managing project versions
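As a rough illustration of how those namespaces are reached from an initialized client (prompts.run appears later in these docs; the other method names are assumptions based on the section titles below):

```typescript
// `latitude` is the client created during initialization.
const prompt = await latitude.prompts.get('onboarding/welcome-email')   // assumed
const result = await latitude.prompts.run('onboarding/welcome-email', {
  parameters: { name: 'Ada' },
})
const versions = await latitude.versions.getAll()                       // assumed
```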
Prompt Management
Get a Prompt
Retrieve a specific prompt by its path.

Get All Prompts
Retrieve all prompts in your project.

Get or Create a Prompt
Get an existing prompt, or create a new one if it doesn’t exist.

Version Management
Get All Versions
Retrieve all versions from a project.

Running Prompts
Non-Streaming Run
Execute a prompt and get the complete response once generation is finished.

Handling Streaming Responses
For real-time applications (like chatbots), use streaming to get response chunks as they are generated.

Using Tools with Prompts
You can provide tool handlers that the model can call during execution.

Prompts that return structured outputs
By default the SDK assumes your prompt returns text. If you expect your prompt to return structured output, you can type it in the prompts.run method:
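A sketch of what that typing could look like. The generic parameter on prompts.run and the shape of the response are assumptions:

```typescript
// Hypothetical structured output of a sentiment-classification prompt
type Sentiment = {
  sentiment: 'positive' | 'negative' | 'neutral'
  confidence: number
}

// Typing the run call makes the structured result type-safe (assumed API shape):
const result = await latitude.prompts.run<Sentiment>('classify-sentiment', {
  parameters: { text: 'I love this product!' },
})
```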
Chat with a Prompt
Follow the conversation of a previously run prompt.

Messages follow the PromptL format. If you’re using a different method to run your prompts, you’ll need to format your messages accordingly.
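A sketch of continuing a conversation. The prompts.chat method name is an assumption, the message shape below only illustrates the PromptL format mentioned above, and the uuid comes from the original run’s response:

```typescript
// Continue the conversation identified by the original run's uuid
// (method name and message shape are assumptions):
const followUp = await latitude.prompts.chat(result.uuid, [
  {
    role: 'user',
    content: [{ type: 'text', text: 'Can you expand on that last point?' }],
  },
])
```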
Running a Prompt in the Background
For long-running prompts that you don’t need to wait for, such as large Agent systems, use background runs.

Run Management
Stop a Run
Stop an active conversation that is currently running.

Attach to a Run
Attach to an active conversation to receive its ongoing output.

Rendering Prompts
Prompt Rendering
Render a prompt locally without running it.

Chain Rendering
Render a chain of prompts locally.

Agent Rendering
Render an agent prompt locally (similar to renderChain but with a final agent result):
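A sketch of agent rendering. renderChain is named above, but the renderAgent name, its options, and the onStep signature here are assumptions; callProvider stands in for your own LLM provider call:

```typescript
// Hypothetical local agent rendering, modeled on the renderChain description.
const rendered = await latitude.prompts.renderAgent({
  prompt,                                    // a prompt previously fetched from Latitude
  parameters: { topic: 'quarterly report' },
  onStep: async ({ config, messages }) => {
    // Forward config.tools to your provider so the model can emit the
    // tool call that terminates the agent loop (see the note below).
    return callProvider(messages, { tools: config.tools }) // hypothetical helper
  },
})
```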
Make sure to provide the config.tools parameter to the LLM provider in your onStep handler, otherwise the AI won’t be able to stop the Agent loop!

Logging
Creating Logs
Push a log to Latitude manually for a prompt.

Evaluations
Annotate a Log
Push an evaluation result (annotate) to Latitude.

Complete Method Reference
Initialization
Prompts Namespace
Runs Namespace
Projects Namespace
Logs Namespace
Evaluations Namespace
Versions Namespace
Error Handling
The SDK throws LatitudeApiError instances when API requests fail. You can catch and handle these errors:
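A sketch of that pattern. The import path for LatitudeApiError is an assumption, and only the standard message property is used since the error’s other fields aren’t documented here:

```typescript
import { LatitudeApiError } from '@latitude-data/sdk' // import path is an assumption

try {
  const prompt = await latitude.prompts.get('non-existent-prompt')
} catch (error) {
  if (error instanceof LatitudeApiError) {
    // API-level failure: bad key, unknown prompt path, rate limit, etc.
    console.error('Latitude API error:', error.message)
  } else {
    throw error // not an API error, rethrow it
  }
}
```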
Logging Features
- Automatic Logging: All runs through latitude.prompts.run() are automatically logged in Latitude, capturing inputs, outputs, performance metrics, and trace information.
- Custom Identifiers: Use the optional customIdentifier parameter to tag runs for easier filtering and analysis in the Latitude dashboard.
- Response Identification: Each response includes identifying information like uuid that can be used to reference the specific run later.
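For example, tagging a run for later filtering might look like this. customIdentifier and uuid are named above; their exact placement in the options and response objects is an assumption:

```typescript
const result = await latitude.prompts.run('support/triage', {
  parameters: { message: 'My order never arrived' },
  customIdentifier: 'user-42', // tag for filtering in the dashboard
})

console.log(result.uuid) // references this specific run later
```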