API Access
Learn how to access and use Latitude’s API to run your prompts.
If you’re looking for a specific language or framework, we recommend checking the SDK docs section.
Latitude HTTP API Documentation
This guide explains how to use the Latitude HTTP API to interact with the Prompt Manager and run AI-powered conversations.
Authentication
All API requests require authentication. Include your API key in the Authorization
header of your HTTP requests:
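As a minimal sketch with Python’s standard library, the key is attached to each request via the `Authorization` header (the `Bearer` scheme is assumed here; confirm against your Latitude dashboard):

```python
import urllib.request

API_KEY = "your-api-key"  # replace with your real Latitude API key
BASE_URL = "https://gateway.latitude.so/api/v2"

# Attach the API key to the request via the Authorization header.
# The Bearer scheme is assumed; verify it against your dashboard.
request = urllib.request.Request(
    f"{BASE_URL}/projects/1/versions/live/documents/my-prompt",
    headers={"Authorization": f"Bearer {API_KEY}"},
)
print(request.get_header("Authorization"))  # -> Bearer your-api-key
```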
Base URL
The base URL for API requests depends on your environment:
https://gateway.latitude.so/api/v2
Rate Limiting
The API enforces rate limits based on your API key to ensure fair usage and prevent abuse.
Limits:
- Rate Limit Points: 1000 requests
- Rate Limit Duration: 60 seconds
When the rate limit is exceeded, the response includes a `Retry-After` header telling you how long to wait; the other rate-limit headers are sent with every response so you can monitor and adjust your request rate:

- `Retry-After`: The number of seconds to wait before making a new request (sent when the limit is exceeded).
- `X-RateLimit-Limit`: The maximum number of requests allowed in the current period.
- `X-RateLimit-Remaining`: The number of requests remaining in the current period.
- `X-RateLimit-Reset`: The timestamp when the rate limit will reset.
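A small backoff helper, as a sketch: it prefers `Retry-After` and falls back to `X-RateLimit-Reset` (the reset value is assumed here to be a Unix timestamp in milliseconds; verify against real responses):

```python
import time

def seconds_to_wait(headers: dict) -> float:
    """Given response headers after a 429, decide how long to back off.

    Prefers Retry-After; falls back to X-RateLimit-Reset (assumed here
    to be a Unix timestamp in milliseconds -- verify against responses).
    """
    if "Retry-After" in headers:
        return float(headers["Retry-After"])
    if "X-RateLimit-Reset" in headers:
        return max(0.0, int(headers["X-RateLimit-Reset"]) / 1000 - time.time())
    return 1.0  # conservative default when no rate-limit headers are present

print(seconds_to_wait({"Retry-After": "30"}))  # -> 30.0
```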
Endpoints
1. Get a Document
Retrieve a specific prompt by its path.
Endpoint: GET /projects/{projectId}/versions/{versionUuid}/documents/{path}
Path Parameters:
- `projectId`: Your project ID (required)
- `versionUuid`: Version UUID (required; the SDKs let you omit it and default to `live`)
- `path`: Path to the document (required)
Response:
The response contains the existing document details along with its configuration.
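As a sketch, the request can be built with the standard library (the project ID and document path below are placeholders):

```python
import urllib.request

BASE_URL = "https://gateway.latitude.so/api/v2"

def build_get_document_request(api_key, project_id, version_uuid, path):
    """Build the GET request for a prompt at `path`; pass "live" as the
    versionUuid to target the live version."""
    url = f"{BASE_URL}/projects/{project_id}/versions/{version_uuid}/documents/{path}"
    return urllib.request.Request(url, headers={"Authorization": f"Bearer {api_key}"})

req = build_get_document_request("your-api-key", 1, "live", "onboarding/welcome")
print(req.full_url)
# To execute for real: urllib.request.urlopen(req).read()
```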
2. Get or Create a Document
Endpoint: POST /projects/{projectId}/versions/{versionUuid}/documents/get-or-create
Path Parameters:
- `projectId`: Your project ID (required)
- `versionUuid`: Version UUID (required; the SDKs let you omit it and default to `live`)
Request Body:
- `path`: Path to the document (required)
- `prompt`: Prompt to use for the document (optional; defaults to empty)
Response:
The response contains the created (or existing) document details along with its configuration.
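A sketch of the POST request using the documented `path` and `prompt` body fields:

```python
import json
import urllib.request

BASE_URL = "https://gateway.latitude.so/api/v2"

def build_get_or_create_request(api_key, project_id, version_uuid, path, prompt=""):
    """Build the POST .../documents/get-or-create request with the
    documented body fields (`path` required, `prompt` optional)."""
    url = f"{BASE_URL}/projects/{project_id}/versions/{version_uuid}/documents/get-or-create"
    body = json.dumps({"path": path, "prompt": prompt}).encode()
    return urllib.request.Request(
        url,
        data=body,
        headers={"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"},
        method="POST",
    )

req = build_get_or_create_request("your-api-key", 1, "live", "support/greeting")
print(req.get_method())  # -> POST
```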
3. Run a Document
Run a specific document (prompt) with optional parameters.
Endpoint: POST /projects/{projectId}/versions/{versionUuid}/documents/run
Path Parameters:
- `projectId`: Your project ID (required)
- `versionUuid`: Version UUID (required; the SDKs let you omit it and default to `live`)
Request Body:
- `stream`: Optional boolean (defaults to `false`). When set to `true`, the response is a stream of Server-Sent Events (SSE); when `false`, a single JSON response containing the last event is returned.
Response:
- If `stream` is `true`: the response is a stream of Server-Sent Events (SSE), where each event contains JSON data.
- If `stream` is `false`: a single JSON response is returned containing the final event (typically the chain-complete event).
Message follows OpenAI’s format.
ToolCall has the following format:
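Putting the endpoint together as a hedged sketch of a non-streaming run: only `stream` is documented above, so the `path` and `parameters` body fields are assumptions inferred from the endpoint description ("run a specific document with optional parameters"):

```python
import json
import urllib.request

BASE_URL = "https://gateway.latitude.so/api/v2"

def build_run_request(api_key, project_id, version_uuid, path, parameters=None, stream=False):
    """Build the POST .../documents/run request.

    `path` and `parameters` are assumed field names (the endpoint runs a
    prompt at a path with optional parameters); `stream` is documented.
    """
    url = f"{BASE_URL}/projects/{project_id}/versions/{version_uuid}/documents/run"
    body = json.dumps({"path": path, "parameters": parameters or {}, "stream": stream}).encode()
    return urllib.request.Request(
        url,
        data=body,
        headers={"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"},
        method="POST",
    )

req = build_run_request("your-api-key", 1, "live", "support/greeting", {"name": "Ada"})
print(json.loads(req.data)["stream"])  # -> False
```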
4. Chat
The POST /projects/{projectId}/versions/{versionUuid}/documents/run endpoint described above can also be used to continue a conversation. Every event it returns contains a uuid field that identifies the conversation with the AI; use this uuid to continue the conversation.
Endpoint: POST /conversations/{conversationUuid}/chat
Path Parameters:
- `conversationUuid`: UUID of the conversation
Request Body:
- Messages to continue the conversation with, following OpenAI’s format or Vercel’s AI SDK format.
- `stream`: Optional boolean (defaults to `false`). When set to `true`, the response is a stream of Server-Sent Events (SSE); when `false`, a single JSON response containing the last event is returned.
Response: The response is a stream of Server-Sent Events (SSE) or a single JSON response containing the final event, similar to the “Run a Document” endpoint.
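A sketch of continuing a conversation with the `uuid` from a previous run; the `messages` field name is an assumption (the messages themselves follow OpenAI’s format, as documented above):

```python
import json
import urllib.request

BASE_URL = "https://gateway.latitude.so/api/v2"

def build_chat_request(api_key, conversation_uuid, messages, stream=False):
    """Build the POST /conversations/{uuid}/chat request.

    The `messages` field name is an assumption; each message follows
    OpenAI's message format (role/content).
    """
    url = f"{BASE_URL}/conversations/{conversation_uuid}/chat"
    body = json.dumps({"messages": messages, "stream": stream}).encode()
    return urllib.request.Request(
        url,
        data=body,
        headers={"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"},
        method="POST",
    )

messages = [{"role": "user", "content": "Can you expand on that?"}]
req = build_chat_request("your-api-key", "123e4567-e89b-12d3-a456-426614174000", messages)
print(req.full_url)
```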
Handling Server-Sent Events (SSE)
The API uses SSE for real-time updates. Here’s how to handle SSE responses:
- Set up an EventSource or use a library that supports SSE.
- Listen for events and parse the JSON data in each event.
- Handle different event types:
- `latitude-event`: Contains information about the chain progress and results.
- `provider-event`: Contains real-time updates from the AI provider.
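The steps above can be sketched with a small stdlib-only SSE parser; the sample payload below is illustrative, not a real capture, so the event field names inside the JSON are assumptions:

```python
import json

def parse_sse(raw: str):
    """Parse a raw SSE payload into (event_name, data) pairs.

    Events are separated by blank lines; each has `event:` and `data:`
    lines, with the data carrying JSON.
    """
    events = []
    for block in raw.strip().split("\n\n"):
        name, data_lines = None, []
        for line in block.splitlines():
            if line.startswith("event:"):
                name = line[len("event:"):].strip()
            elif line.startswith("data:"):
                data_lines.append(line[len("data:"):].strip())
        if data_lines:
            events.append((name, json.loads("\n".join(data_lines))))
    return events

# Illustrative payload; real Latitude event fields may differ.
raw = (
    "event: latitude-event\n"
    'data: {"type": "chain-step", "uuid": "abc"}\n'
    "\n"
    "event: provider-event\n"
    'data: {"type": "text-delta", "textDelta": "Hi"}\n'
)
for name, data in parse_sse(raw):
    print(name, data["type"])
```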
Error Handling
The API uses standard HTTP status codes. In case of an error, the response body contains an error message.
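As a sketch, error responses can be turned into readable messages; the exact error body shape (a JSON object with a `message` field) is an assumption here:

```python
import json

def describe_error(status_code: int, body: bytes) -> str:
    """Turn an error response into a readable message.

    The body shape (a JSON object with a `message` field) is an
    assumption; non-JSON bodies are passed through as-is.
    """
    try:
        details = json.loads(body)
        message = details.get("message", "unknown error")
    except (ValueError, AttributeError):
        message = body.decode(errors="replace") or "unknown error"
    return f"HTTP {status_code}: {message}"

print(describe_error(404, b'{"message": "Document not found"}'))
# -> HTTP 404: Document not found
```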
5. Evaluate a Conversation
Evaluate a conversation using configured evaluations.
Endpoint: POST /conversations/{conversationUuid}/evaluate
Path Parameters:
- `conversationUuid`: UUID of the conversation to evaluate
Request Body:
Response:
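A hedged sketch of the request; the `evaluationUuids` body field (restricting which configured evaluations run) is an assumption, not documented above:

```python
import json
import urllib.request

BASE_URL = "https://gateway.latitude.so/api/v2"

def build_evaluate_request(api_key, conversation_uuid, evaluation_uuids=None):
    """Build the POST /conversations/{uuid}/evaluate request.

    The `evaluationUuids` field (limiting which configured evaluations
    run) is an assumption about the body shape.
    """
    url = f"{BASE_URL}/conversations/{conversation_uuid}/evaluate"
    body = json.dumps({"evaluationUuids": evaluation_uuids or []}).encode()
    return urllib.request.Request(
        url,
        data=body,
        headers={"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"},
        method="POST",
    )

req = build_evaluate_request("your-api-key", "123e4567-e89b-12d3-a456-426614174000")
print(req.full_url)
```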
6. Create Log Entry
Create a log entry for a document.
Endpoint: POST /projects/{projectId}/versions/{versionUuid}/documents/logs
Path Parameters:
- `projectId`: Your project ID (required)
- `versionUuid`: Version UUID (required; the SDKs let you omit it and default to `live`)
Request Body:
- Messages to log, following OpenAI’s format or Vercel’s AI SDK format.
Response:
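A sketch of the request; the `path` and `messages` body field names are assumptions (the messages themselves follow OpenAI’s format, as documented above):

```python
import json
import urllib.request

BASE_URL = "https://gateway.latitude.so/api/v2"

def build_create_log_request(api_key, project_id, version_uuid, path, messages):
    """Build the POST .../documents/logs request.

    The `path` and `messages` field names are assumptions; each message
    follows OpenAI's message format.
    """
    url = f"{BASE_URL}/projects/{project_id}/versions/{version_uuid}/documents/logs"
    body = json.dumps({"path": path, "messages": messages}).encode()
    return urllib.request.Request(
        url,
        data=body,
        headers={"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"},
        method="POST",
    )

messages = [
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Hi! How can I help?"},
]
req = build_create_log_request("your-api-key", 1, "live", "support/greeting", messages)
print(req.full_url)
```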
7. Create Evaluation Result
Create an evaluation result for a conversation using a configured evaluation.
Endpoint: POST /conversations/{conversationUuid}/evaluations/{evaluationUuid}/results
Path Parameters:
- `conversationUuid`: UUID of the conversation to evaluate
- `evaluationUuid`: UUID of the evaluation to use
Request Body:
Response:
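A hedged sketch of the request; the `result` and `reason` body field names are assumptions, since the body schema is not documented above:

```python
import json
import urllib.request

BASE_URL = "https://gateway.latitude.so/api/v2"

def build_evaluation_result_request(api_key, conversation_uuid, evaluation_uuid, result, reason=None):
    """Build the POST .../evaluations/{evaluationUuid}/results request.

    The `result` and `reason` field names are assumptions about the
    request body shape.
    """
    url = f"{BASE_URL}/conversations/{conversation_uuid}/evaluations/{evaluation_uuid}/results"
    body = json.dumps({"result": result, "reason": reason}).encode()
    return urllib.request.Request(
        url,
        data=body,
        headers={"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"},
        method="POST",
    )

req = build_evaluation_result_request(
    "your-api-key", "conv-uuid", "eval-uuid", result=4, reason="Helpful and accurate"
)
print(req.full_url)
```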