For detailed endpoint specifications, request/response schemas, and the ability to try out API calls directly, refer to the Interactive API Documentation. If you're looking for a specific language or framework, check the SDK docs section.

Latitude HTTP API Documentation

This guide explains how to use the Latitude HTTP API to interact with the Prompt Manager and run AI-powered conversations.

Authentication

All API requests require authentication. Include your API key in the Authorization header of your HTTP requests:
Authorization: Bearer YOUR_API_KEY

Base URL

The base URL for API requests is: https://gateway.latitude.so/api/v3
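For reference, a request combining the base URL with the Authorization header might look like the following; it targets the Get a Conversation endpoint documented below, and the conversation UUID is a placeholder:
curl "https://gateway.latitude.so/api/v3/conversations/YOUR_CONVERSATION_UUID" \
  -H "Authorization: Bearer YOUR_API_KEY"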

Rate Limiting

The API enforces rate limits per API key to ensure fair usage and prevent abuse. The limits depend on your subscription plan:
  • Hobby Plan: 10 requests per second
  • Team Plan: 166 requests per second (10000 requests per minute)
  • Enterprise Plan: 500 requests per second (30000 requests per minute)
Contact sales to request a custom rate limit on the Enterprise plan. When the rate limit is exceeded, the response includes the following headers to help you manage your request rate:
  • Retry-After: Indicates the number of seconds to wait before making a new request.
  • X-RateLimit-Limit: The maximum number of requests allowed in the current period.
  • X-RateLimit-Remaining: The number of requests remaining in the current period.
  • X-RateLimit-Reset: The timestamp when the rate limit will reset.
Example Headers:
Retry-After: 60
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 999
X-RateLimit-Reset: 1729399082482
These headers are sent with every response to help you monitor and adjust your request rate accordingly.
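One quick way to check these values is to print the response headers and filter for the rate-limit fields. This sketch assumes a Unix shell with curl and grep available, and uses a placeholder conversation UUID:
# Print only the rate-limit related response headers
curl -si "https://gateway.latitude.so/api/v3/conversations/YOUR_CONVERSATION_UUID" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  | grep -iE '^(retry-after|x-ratelimit-)'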

Endpoints

1. Get a Prompt

Retrieve a specific prompt by its path. Use this endpoint to fetch the content and configuration of an existing prompt in your project. Endpoint: GET /projects/{projectId}/versions/{versionUuid}/documents/{path} Path Parameters:
  • projectId: Your project ID (required)
  • versionUuid: Version UUID (required; optional in the SDKs, where it defaults to ‘live’)
  • path: Path to the document (required)
Response: The response contains the prompt details along with its configuration. Response Body:
{
  "id": "document-id",
  "documentUuid": "document-uuid",
  "path": "path/to/document",
  "content": "Document content",
  "resolvedContent": "Document content without comments",
  "contentHash": "content-hash",
  "commitId": "commit-id",
  "deletedAt": "deleted-at",
  "createdAt": "created-at",
  "updatedAt": "updated-at",
  "mergedAt": "merged-at",
  "projectId": "project-id",
  "config": {
    "provider": "Provider name",
    "model": "Model name"
  }
}
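Example (for reference; the project ID, version UUID, and document path are placeholders):
curl -X GET "https://gateway.latitude.so/api/v3/projects/123/versions/live/documents/path/to/document" \
  -H "Authorization: Bearer YOUR_API_KEY"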

2. Get or Create a Prompt

Retrieve an existing prompt or create it if it doesn’t exist. This endpoint provides an idempotent way to ensure a prompt exists at a specific path without checking first. Endpoint: POST /projects/{projectId}/versions/{versionUuid}/documents/get-or-create Path Parameters:
  • projectId: Your project ID (required)
  • versionUuid: Version UUID (required; optional in the SDKs, where it defaults to ‘live’)
Request Body:
{
  "path": "path/to/document",
  "prompt": "Your prompt here"
}
  • path: Path to the prompt (required)
  • prompt: Prompt content to use (optional, defaults to empty)
Response: The response contains the created (or existing) prompt details along with its configuration. Response Body:
{
  "id": "document-id",
  "documentUuid": "document-uuid",
  "path": "path/to/document",
  "content": "Document content",
  "resolvedContent": "Document content without comments",
  "contentHash": "content-hash",
  "commitId": "commit-id",
  "deletedAt": "deleted-at",
  "createdAt": "created-at",
  "updatedAt": "updated-at",
  "mergedAt": "merged-at",
  "projectId": "project-id",
  "config": {
    "provider": "Provider name",
    "model": "Model name"
  }
}
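Example (for reference; the project ID, version UUID, path, and prompt content are placeholders):
curl -X POST "https://gateway.latitude.so/api/v3/projects/123/versions/live/documents/get-or-create" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "path": "path/to/document",
    "prompt": "Your prompt here"
  }'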

3. Create or Update a Prompt

Create a new prompt or update an existing one in a single operation. This endpoint provides more control than get-or-create, including the ability to update live commits with the force flag. Endpoint: POST /projects/{projectId}/versions/{versionUuid}/documents/create-or-update Path Parameters:
  • projectId: Your project ID (required)
  • versionUuid: Version UUID (required; optional in the SDKs, where it defaults to ‘live’)
Request Body:
{
  "path": "path/to/document",
  "prompt": "Your prompt here",
  "force": false
}
  • path: Path to the prompt (required)
  • prompt: Content of the prompt (required)
  • force: Allow modifications to live/merged commits (optional, defaults to false)
Behavior:
  • If the prompt does not exist at the specified path, it will be created
  • If the prompt already exists at the path, it will be updated with the new content
  • By default, modifications are only allowed on draft commits (not live/merged)
  • When force: true, allows creating or updating prompts in live commits (use with caution)
Using force: true allows modifying production prompts directly. This should only be used for emergency hotfixes or controlled production updates. For normal development workflows, use draft commits.
Response: The response contains the created or updated prompt details along with its configuration. Response Body:
{
  "id": "document-id",
  "documentUuid": "document-uuid",
  "path": "path/to/document",
  "content": "Document content",
  "resolvedContent": "Document content without comments",
  "contentHash": "content-hash",
  "commitId": "commit-id",
  "deletedAt": "deleted-at",
  "createdAt": "created-at",
  "updatedAt": "updated-at",
  "mergedAt": "merged-at",
  "projectId": "project-id",
  "config": {
    "provider": "Provider name",
    "model": "Model name"
  }
}
Error Handling: If you try to modify a live commit without the force flag, the API returns a 400 status code:
{
  "name": "BadRequestError",
  "message": "Cannot modify a merged commit. Use force=true to allow modifications to the live commit.",
  "errorCode": "BadRequestError",
  "details": {}
}
Use Cases:
  • Single API call for upsert operations: No need to check if a prompt exists before creating/updating
  • Programmatic prompt updates: Update prompts from your CI/CD pipeline or automation scripts
  • Emergency hotfixes: Use force: true to quickly fix production prompts when needed
  • Batch operations: Efficiently create or update multiple prompts in a loop
Example: Update with Force Flag
curl -X POST "https://gateway.latitude.so/api/v3/projects/123/versions/live/documents/create-or-update" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "path": "production/emergency-fix",
    "prompt": "---\nprovider: openai\nmodel: gpt-4\n---\n\nFixed prompt content",
    "force": true
  }'

4. Create a Version (Commit)

Create a new draft version (commit) for a project. Versions allow you to manage changes to your prompts before publishing them to production. Endpoint: POST /projects/{projectId}/versions Path Parameters:
  • projectId: Your project ID (required)
Request Body:
{
  "name": "Version name or title"
}
  • name: Name/title for the new version (required)
Response: The response contains the created version (commit) details. Response Body:
{
  "id": 123,
  "uuid": "version-uuid",
  "projectId": 456,
  "message": "Version name or title",
  "authorName": "Author name",
  "authorEmail": "author@example.com",
  "authorId": 789,
  "createdAt": "2024-01-01T00:00:00.000Z",
  "updatedAt": "2024-01-01T00:00:00.000Z",
  "status": "draft",
  "parentCommitUuid": "parent-version-uuid"
}
Use Cases:
  • Create draft versions: Start working on prompt changes in isolation
  • Version control: Track different iterations of your prompts
  • CI/CD integration: Programmatically create versions from your deployment pipeline
Example:
curl -X POST "https://gateway.latitude.so/api/v3/projects/123/versions" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Feature: Add Spanish language support"
  }'

5. Publish a Version (Commit)

Publish a draft version (commit) to make it the live/production version. This merges the draft changes and assigns it a version number. Endpoint: POST /projects/{projectId}/versions/{versionUuid}/publish Path Parameters:
  • projectId: Your project ID (required)
  • versionUuid: UUID of the draft version to publish (required)
Request Body:
{
  "title": "Optional updated title",
  "description": "Optional description or release notes"
}
  • title: Optional title for the published version (if not provided, uses existing title)
  • description: Optional description or release notes for the published version
Response: The response contains the published version (commit) details with a version number and merged timestamp. Response Body:
{
  "id": 123,
  "uuid": "version-uuid",
  "projectId": 456,
  "message": "Published version title",
  "authorName": "Author name",
  "authorEmail": "author@example.com",
  "authorId": 789,
  "createdAt": "2024-01-01T00:00:00.000Z",
  "updatedAt": "2024-01-01T00:00:00.000Z",
  "status": "merged",
  "parentCommitUuid": "parent-version-uuid"
}
Publishing a version makes it the live/production version. All documents in the published version become the active versions accessible via the API. Make sure to test your changes thoroughly before publishing.
Use Cases:
  • Deploy to production: Publish tested prompt changes to make them live
  • Release management: Track which version is currently in production
  • Automated deployments: Publish versions from CI/CD pipelines after successful tests
Example:
curl -X POST "https://gateway.latitude.so/api/v3/projects/123/versions/abc-123-def-456/publish" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "title": "Feature: Add Spanish language support",
    "description": "Added support for Spanish language queries with improved accuracy"
  }'
Error Handling: If you try to publish a version that is already published or doesn’t exist, the API returns an appropriate error:
{
  "name": "BadRequestError",
  "message": "Cannot publish: version is not a draft",
  "errorCode": "BadRequestError",
  "details": {}
}

6. Run a Prompt

Execute a prompt with optional parameters. This endpoint processes your prompt template, sends it to the configured AI provider, and returns the generated response. Supports both streaming and non-streaming modes, as well as background processing for long-running operations. Endpoint: POST /projects/{projectId}/versions/{versionUuid}/documents/run Path Parameters:
  • projectId: Your project ID (required)
  • versionUuid: Version UUID (required; optional in the SDKs, where it defaults to ‘live’)
Request Body:
{
  "path": "path/to/document",
  "parameters": {
    "key1": "value1",
    "key2": "value2"
  },
  "stream": false,
  "background": false,
  "userMessage": "Optional user message",
  "customIdentifier": "optional-custom-id",
  "tools": ["tool1", "tool2"]
}
  • path: Path to the prompt to run (required)
  • parameters: Parameters to pass to the prompt template (optional)
  • stream: Optional boolean parameter (defaults to false). When set to true, the response will be a stream of Server-Sent Events (SSE). If false, a single JSON response containing the last event is returned.
  • background: Optional boolean parameter (defaults to false). When set to true, the request is enqueued for background processing and returns immediately with a conversation UUID.
  • userMessage: Optional string to start the conversation with a user message.
  • customIdentifier: Optional string for custom identification of the run.
  • tools: Optional array of tool names to enable for this run.
Response:
  • If background is true: Returns immediately with a conversation UUID for background processing:
{
  "uuid": "conversation-uuid"
}
  • If stream is true: The response is a stream of Server-Sent Events (SSE). Check out the Streaming Events guide for more information about the specific events you can expect.
  • If stream is false: A single JSON response is returned with the final event (typically the chain-complete event) in the following structure:
{
  "uuid": string,
  "conversation": Message[],
  "response": {
    "streamType": "text" | "object",
    "usage": {
      "promptTokens": number,
      "completionTokens": number,
      "totalTokens": number
    },
    "text": string,
    "object": object | undefined,
    "toolCalls": ToolCall[],
    "cost": number
  }
}
Message follows the PromptL format.
ToolCall has the following format:
type ToolCall = {
  id: string
  name: string
  arguments: Record<string, unknown>
}
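Example (for reference; the project ID, document path, and parameter values are placeholders):
curl -X POST "https://gateway.latitude.so/api/v3/projects/123/versions/live/documents/run" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "path": "path/to/document",
    "parameters": {
      "key1": "value1"
    },
    "stream": false
  }'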

7. Chat

Continue a multi-turn conversation by sending additional messages to an existing conversation thread. This endpoint allows you to maintain context across multiple exchanges with the AI model by building upon messages from a previous run. The conversation history is automatically managed, and each new message is appended to the existing message chain. Endpoint: POST /conversations/{conversationUuid}/chat Path Parameters:
  • conversationUuid: UUID of the conversation
Request Body:
  • Messages follow the PromptL format. If you’re using a different method to run your prompts, you’ll need to format your messages accordingly.
{
  "messages": [
    {
      "role": "user" | "system" | "assistant",
      "content": [
        {
          "type": "text",
          "text": "message content"
        }
      ]
    }
  ],
  "stream": true
}
  • stream: Optional boolean parameter (defaults to false). When set to true, the response will be a stream of Server-Sent Events (SSE). If false, a single JSON response containing the last event is returned. Check out the Streaming Events guide for more information about the specific events you can expect.
Response: The response is a stream of Server-Sent Events (SSE) or a single JSON response containing the final event, similar to the “Run a Document” endpoint. Check out the Streaming Events guide for more information about the specific events you can expect.
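Example (for reference; the conversation UUID and message text are placeholders):
curl -X POST "https://gateway.latitude.so/api/v3/conversations/YOUR_CONVERSATION_UUID/chat" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "Can you expand on that?"
          }
        ]
      }
    ],
    "stream": false
  }'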

8. Get a Conversation

Retrieve the conversation history by its UUID. Use this endpoint to fetch the complete message history from a completed conversation, including all user messages, assistant responses, and tool calls. Endpoint: GET /conversations/{conversationUuid} Path Parameters:
  • conversationUuid: UUID of the conversation (required)
Response: The response contains the conversation UUID and the complete conversation history as an array of messages. Response Body:
{
  "uuid": "conversation-uuid",
  "conversation": [
    {
      "role": "user" | "system" | "assistant",
      "content": [
        {
          "type": "text",
          "text": "message content"
        }
      ]
    }
  ]
}
Message follows the PromptL format.
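Example (the conversation UUID is a placeholder):
curl -X GET "https://gateway.latitude.so/api/v3/conversations/YOUR_CONVERSATION_UUID" \
  -H "Authorization: Bearer YOUR_API_KEY"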
Error Handling: If the conversation is not found, the API returns a 404 status code with an error message:
{
  "name": "NotFoundError",
  "message": "Conversation not found",
  "errorCode": "NotFoundError",
  "details": {}
}

9. Stop a Conversation

Stop an active run that is currently processing. This is useful when you need to cancel a long-running prompt execution, such as when the output is no longer needed or when you want to prevent further token consumption. Endpoint: POST /conversations/{conversationUuid}/stop Path Parameters:
  • conversationUuid: UUID of the conversation
Request Body: No request body is required for this endpoint. Response: This endpoint returns a 200 status code when the conversation is successfully stopped.
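Example (the conversation UUID is a placeholder):
curl -X POST "https://gateway.latitude.so/api/v3/conversations/YOUR_CONVERSATION_UUID/stop" \
  -H "Authorization: Bearer YOUR_API_KEY"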

10. Attach to a Conversation

Attach to an active run to receive its output events. This endpoint is particularly useful when you’ve started a prompt execution with background: true and want to stream the results. You can attach at any point during the run’s execution to receive the remaining events. Endpoint: POST /conversations/{conversationUuid}/attach Path Parameters:
  • conversationUuid: UUID of the conversation
Request Body:
{
  "stream": false
}
  • stream: Optional boolean parameter (defaults to false). When set to true, the response will be a stream of Server-Sent Events (SSE). If false, a single JSON response containing the final event is returned.
Response:
  • If stream is true: The response is a stream of Server-Sent Events (SSE). Check out the Streaming Events guide for more information about the specific events you can expect.
  • If stream is false: A single JSON response is returned with the final event in the following structure:
{
  "uuid": "conversation-uuid",
  "conversation": [
    {
      "role": "user" | "system" | "assistant",
      "content": [
        {
          "type": "text",
          "content": "message content"
        }
      ]
    }
  ],
  "response": {
    "streamType": "text" | "object",
    "usage": {
      "promptTokens": 10,
      "completionTokens": 15,
      "totalTokens": 25
    },
    "text": "response text",
    "object": {},
    "toolCalls": []
    "cost": number
  }
}
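Example (the conversation UUID is a placeholder):
curl -X POST "https://gateway.latitude.so/api/v3/conversations/YOUR_CONVERSATION_UUID/attach" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "stream": false
  }'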
Error Handling: The API uses standard HTTP status codes. In case of an error, the response body will contain an error message:
{
  "error": {
    "message": "Error description"
  }
}

11. Annotate a Log

Add a manual evaluation score to a conversation log. Use this endpoint to provide human feedback or manual assessments of prompt outputs, which can be used for quality tracking and model improvement. Endpoint: POST /conversations/{conversationUuid}/evaluations/{evaluationUuid}/annotate Path Parameters:
  • conversationUuid: UUID of the conversation to annotate
  • evaluationUuid: UUID of the evaluation to use
Request Body:
{
  "score": 2,
  "versionUuid": "version-uuid", // optional
  "metadata": {
    "reason": "The output is not relevant to the prompt"
  }
}
Response:
{
  "uuid": "annotation-uuid",
  "score": 2,
  "normalizedScore": 0.5,
  "metadata": {
    "reason": "The output is not relevant to the prompt"
  },
  "hasPassed": false,
  "error": "optional-error-message",
  "versionUuid": "version-uuid"
}
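Example (for reference; the conversation UUID, evaluation UUID, score, and metadata are placeholders):
curl -X POST "https://gateway.latitude.so/api/v3/conversations/YOUR_CONVERSATION_UUID/evaluations/YOUR_EVALUATION_UUID/annotate" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "score": 2,
    "metadata": {
      "reason": "The output is not relevant to the prompt"
    }
  }'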

12. Create Log Entry

Create a log entry for a prompt without executing it. This endpoint allows you to record prompt executions that happened outside of Latitude (e.g., direct LLM API calls) for tracking, analytics, and evaluation purposes. Endpoint: POST /projects/{projectId}/versions/{versionUuid}/documents/logs Path Parameters:
  • projectId: Your project ID (required)
  • versionUuid: Version UUID (required; optional in the SDKs, where it defaults to ‘live’)
Request Body:
  • Messages follow the PromptL format. If you’re using a different method to run your prompts, you’ll need to format your messages accordingly.
{
  "path": "path/to/document",
  "messages": [
    {
      "role": "user" | "system" | "assistant",
      "content": [
        {
          "type": "text",
          "text": string
        }
      ]
    }
  ],
  "response": string
}
Response:
{
  "id": "document-id",
  "uuid": "log-uuid",
  "documentUuid": "document-uuid",
  "commitId": "commit-id",
  "resolvedContent": "Document content without comments",
  "contentHash": "content-hash",
  "parameters": {},
  "customIdentifier": "custom-identifier",
  "duration": "duration",
  "source": "source",
  "createdAt": "created-at",
  "updatedAt": "updated-at"
}
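Example (for reference; the project ID, document path, message content, and response text are placeholders):
curl -X POST "https://gateway.latitude.so/api/v3/projects/123/versions/live/documents/logs" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "path": "path/to/document",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "What is the weather like today?"
          }
        ]
      }
    ],
    "response": "It looks sunny today."
  }'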