The Latitude TypeScript SDK provides a convenient way to interact with the Latitude platform from your Node.js or browser applications.

Installation

The Latitude SDK is compatible with Node.js 16 or higher.

npm install @latitude-data/sdk
# or
yarn add @latitude-data/sdk
# or
pnpm add @latitude-data/sdk

Authentication and Initialization

Import the SDK and initialize it with your API key. You can generate API keys in your Latitude project settings under “API Access”.

import { Latitude } from '@latitude-data/sdk'

const latitude = new Latitude(process.env.LATITUDE_API_KEY)

You can also provide additional options during initialization:

const latitude = new Latitude(process.env.LATITUDE_API_KEY, {
  projectId: 123, // Your Latitude project ID
  versionUuid: 'version-uuid', // Optional version UUID
})

Keep your API key secure and avoid committing it directly into your codebase.
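As a minimal sketch, a small helper (hypothetical, not part of the SDK) can read the key from the environment and fail fast if it is missing:

```typescript
// Hypothetical helper (not part of the SDK): read a required value from the
// environment instead of hard-coding secrets in source control.
function requireEnv(name: string): string {
  const value = process.env[name]
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`)
  }
  return value
}

// Usage: const latitude = new Latitude(requireEnv('LATITUDE_API_KEY'))
```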

Examples

Check out our cookbook for more examples of how to use the Latitude SDK.

SDK Structure

The Latitude SDK is organized into several namespaces:

  • prompts: Methods for managing and running prompts
  • logs: Methods for creating and managing logs
  • evaluations: Methods for triggering evaluations and creating results

Prompt Management

Get a Prompt

To retrieve a specific prompt by its path:

const prompt = await latitude.prompts.get('prompt-path')

Get All Prompts

To retrieve all prompts in your project:

const prompts = await latitude.prompts.getAll()

Get or Create a Prompt

To get an existing prompt or create a new one if it doesn’t exist:

const prompt = await latitude.prompts.getOrCreate('prompt-path')

You can also provide the content when creating a new prompt:

const prompt = await latitude.prompts.getOrCreate('prompt-path', {
  prompt: 'This is the content of my new prompt',
})

Running Prompts

Non-Streaming Run

Execute a prompt and get the complete response once generation is finished:

const result = await latitude.prompts.run('prompt-path', {
  parameters: {
    productName: 'CloudSync Pro',
    audience: 'Small Business Owners',
  },
  // Optional: Provide a custom identifier for this run
  customIdentifier: 'email-campaign-2023',
  // Optional: Provide callbacks for events
  onFinished: (result) => {
    console.log('Run completed:', result.uuid)
  },
  onError: (error) => {
    console.error('Run error:', error.message)
  },
})

console.log('Conversation UUID:', result.uuid)
console.log('Conversation messages:', result.conversation)

If your prompt is an agent, an agentResponse property will be defined in the result. The structure of the response will depend on the agent’s configuration, although by default it will be: { "response": "Your agent's response" }.
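For instance, assuming the default agent configuration described above, you can read the agent's answer like this (the result object is mocked here for illustration; a real one comes from latitude.prompts.run and also carries the conversation messages):

```typescript
// Mocked run result for illustration only.
const result = {
  uuid: 'conversation-uuid',
  agentResponse: { response: "Your agent's response" } as
    | { response: string }
    | undefined,
}

// agentResponse is only defined when the prompt is an agent.
if (result.agentResponse !== undefined) {
  console.log('Agent answered:', result.agentResponse.response)
}
```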

Handling Streaming Responses

For real-time applications (like chatbots), use streaming to get response chunks as they are generated:

import { StreamEventTypes } from '@latitude-data/sdk'

await latitude.prompts.run('prompt-path', {
  parameters: {
    productName: 'CloudSync Pro',
    audience: 'Small Business Owners',
  },
  // Enable streaming
  stream: true,
  // Provide callbacks for events
  onEvent: ({ event, data }) => {
    if (event === StreamEventTypes.Provider && data.type === 'text-delta') {
      console.log(data.textDelta)
    } else if (
      event === StreamEventTypes.Latitude &&
      data.type === 'chain-completed'
    ) {
      console.log('Conversation UUID:', data.uuid)
      console.log('Conversation messages:', data.messages)
    }
  },
  onFinished: (result) => {
    console.log('Stream completed:', result.uuid)
  },
  onError: (error) => {
    console.error('Stream error:', error.message)
  },
})

Using Tools with Prompts

You can provide tool handlers that the model can call during execution:

await latitude.prompts.run('prompt-path', {
  parameters: {
    query: 'What is the weather in San Francisco?',
  },
  // Define the tools the model can use
  tools: {
    getWeather: async (args, details) => {
      // `args` contains the arguments passed by the model
      // `details` contains context like tool id, name, messages...
      // The result can be anything JSON serializable
      console.log('Getting weather for:', args.location)
      return { temperature: '72°F', conditions: 'Sunny' }
    },
  },
})

If you need to pause the execution of a tool, return details.pauseExecution() from the tool handler. You can resume the conversation later by passing the tool results to the latitude.prompts.chat method.
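A minimal sketch of a deferring tool handler (the ToolDetails shape here is illustrative; only pauseExecution is taken from the contract described above):

```typescript
// Illustrative shape; the real `details` object carries more context
// (tool id, name, messages...) as noted above.
type ToolDetails = { pauseExecution: () => unknown }

const getWeather = async (args: { location: string }, details: ToolDetails) => {
  // Defer the result: fetch the weather out-of-band, then resume the
  // conversation later by sending the tool result via latitude.prompts.chat.
  return details.pauseExecution()
}
```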

Chat with a Prompt

Continue the conversation of a previously run prompt:

const messages = [
  {
    role: 'user',
    content: 'Hello, how can you help me today?',
  },
]

const result = await latitude.prompts.chat('conversation-uuid', messages, {
  // Chat options are similar to the run method
  onFinished: (result) => {
    console.log('Chat completed:', result.uuid)
  },
  onError: (error) => {
    console.error('Chat error:', error.message)
  },
})

console.log('Conversation UUID:', result.uuid)
console.log('Conversation messages:', result.conversation)

Messages follow the PromptL format. If you’re using a different method to run your prompts, you’ll need to format your messages accordingly.
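As a rough sketch (the content-part shape below is an assumption for illustration; consult the PromptL documentation for the authoritative schema), messages pair a role with either plain text or an array of content parts:

```typescript
// Rough sketch of PromptL-style messages; field names beyond `role` and
// `content` are assumptions, not the full specification.
type SketchMessage = {
  role: 'system' | 'user' | 'assistant'
  content: string | { type: 'text'; text: string }[]
}

const messages: SketchMessage[] = [
  { role: 'system', content: 'You are a helpful assistant.' },
  { role: 'user', content: [{ type: 'text', text: 'Hello, how can you help me today?' }] },
]
```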

Rendering Prompts

Prompt Rendering

Render a prompt locally without running it:

import { Adapters } from '@latitude-data/sdk'

const result = await latitude.prompts.render({
  prompt: {
    content: 'Your prompt content here with {{ parameters }}',
  },
  parameters: {
    topic: 'Artificial Intelligence',
    tone: 'Professional',
  },
  // Optional: Specify a provider adapter
  adapter: Adapters.OpenAI,
})

console.log('Rendered config:', result.config)
console.log('Rendered messages:', result.messages)

Chain Rendering

Render a chain of prompts locally:

const result = await latitude.prompts.renderChain({
  prompt: {
    path: 'prompt-path',
    content: 'Your prompt content here with {{ parameters }}',
    provider: 'openai',
  },
  parameters: {
    topic: 'Machine Learning',
    complexity: 'Advanced',
  },
  // Required: Process each step in the chain
  onStep: async ({ config, messages }) => {
    // Process each step in the chain
    console.log('Processing step with messages:', messages)
    // Return a string or a message object
    return 'Step response'
  },
  // Optional: Specify a provider adapter
  adapter: Adapters.OpenAI,
  // Optional: Log responses to Latitude
  logResponses: true,
  // Optional: Define tools for the chain
  tools: {
    getExample: async (args, details) => {
      return { example: 'This is an example response' }
    },
  },
})

console.log('Rendered config:', result.config)
console.log('Rendered messages:', result.messages)

Agent Rendering

Render an agent prompt locally (similar to renderChain but with a final agent result):

const result = await latitude.prompts.renderAgent({
  prompt: {
    path: 'prompt-path',
    content: 'Agent prompt content with {{ parameters }}',
    provider: 'openai',
  },
  parameters: {
    task: 'Research quantum computing',
    depth: 'Detailed',
  },
  // Required: Process each agent step
  onStep: async ({ config, messages }) => {
    // Process each step in the agent execution
    console.log('Processing agent step:', messages)
    // Return a string or a message object
    return 'Agent step response'
  },
  // Optional: Log responses to Latitude
  logResponses: true,
  // Optional: Define tools for the agent
  tools: {
    search: async (args, details) => {
      console.log('Agent using search tool with args:', args)
      return { results: ['Result 1', 'Result 2'] }
    },
  },
})

console.log('Rendered config:', result.config)
console.log('Rendered messages:', result.messages)
// Agent final response is available in result.result
console.log('Agent final response:', result.result)

Make sure to pass config.tools to the LLM provider in your onStep handler; otherwise the model won't be able to call the control tools that stop the agent loop.
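A sketch of what that looks like inside onStep, where callProvider is a hypothetical stand-in for your real LLM client call (e.g. the OpenAI SDK):

```typescript
type StepArgs = { config: { model?: string; tools?: unknown }; messages: unknown[] }

// Hypothetical stand-in for a real provider call
// (e.g. openai.chat.completions.create).
async function callProvider(req: { model?: string; messages: unknown[]; tools?: unknown }) {
  return { content: 'Step response', receivedTools: req.tools !== undefined }
}

async function onStep({ config, messages }: StepArgs) {
  // Forward config.tools so the model can invoke the agent-control tools
  // that end the agent loop.
  return await callProvider({ model: config.model, messages, tools: config.tools })
}
```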

Logging

Creating Logs

Push a log to Latitude manually for a prompt:

const messages = [
  {
    role: 'user',
    content: 'Hello, how can you help me today?',
  },
]

const log = await latitude.logs.create('prompt-path', messages, {
  response: 'I can help you with anything!',
})

Evaluations

Triggering Evaluations

Trigger an evaluation manually for a conversation:

const result = await latitude.evaluations.trigger('conversation-uuid', {
  // Optional: trigger all or specific evaluations
  evaluationUuids: ['eval-uuid-1', 'eval-uuid-2'],
})

Creating Evaluation Results

Push a result to Latitude manually for an evaluation:

const result = await latitude.evaluations.createResult(
  'conversation-uuid',
  'evaluation-uuid',
  {
    // The result can be a string, boolean, or number
    result: true,
    reason: 'I liked it!',
  },
)

Complete Method Reference

Initialization

// SDK initialization
new Latitude(
  apiKey: string,
  options?: {
    projectId?: number,
    versionUuid?: string,
    __internal?: {
      gateway?: GatewayApiConfig,
      source?: LogSources,
      retryMs?: number
    }
  }
)

Prompts Namespace

// Get a prompt
latitude.prompts.get(
  path: string,
  options?: {
    projectId?: number,
    versionUuid?: string
  }
): Promise<Prompt>

// Get all prompts
latitude.prompts.getAll(
  options?: {
    projectId?: number,
    versionUuid?: string
  }
): Promise<Prompt[]>

// Get or create a prompt
latitude.prompts.getOrCreate(
  path: string,
  options?: {
    projectId?: number,
    versionUuid?: string,
    prompt?: string
  }
): Promise<Prompt>

// Run a prompt
latitude.prompts.run<Tools extends ToolSpec = {}>(
  path: string,
  options: {
    projectId?: number,
    versionUuid?: string,
    customIdentifier?: string,
    parameters?: Record<string, unknown>,
    stream?: boolean,
    tools?: ToolCalledFn<Tools>,
    signal?: AbortSignal,
    onEvent?: ({ event, data }: { event: StreamEventTypes, data: ChainEventDto }) => void,
    onFinished?: (data: StreamChainResponse) => void,
    onError?: (error: LatitudeApiError) => void
  }
): Promise<(StreamChainResponse & { uuid: string }) | undefined>

// Chat with a prompt
latitude.prompts.chat<Tools extends ToolSpec = {}>(
  uuid: string,
  messages: Message[],
  options?: {
    stream?: boolean,
    tools?: ToolCalledFn<Tools>,
    signal?: AbortSignal,
    onEvent?: ({ event, data }: { event: StreamEventTypes, data: ChainEventDto }) => void,
    onFinished?: (data: StreamChainResponse) => void,
    onError?: (error: LatitudeApiError) => void
  }
): Promise<StreamChainResponse | undefined>

// Render a prompt
latitude.prompts.render<M extends AdapterMessageType = PromptlMessage>(
  options: {
    prompt: { content: string },
    parameters: Record<string, unknown>,
    adapter?: ProviderAdapter<M>
  }
): Promise<{ config: Config, messages: M[] }>

// Render a chain
latitude.prompts.renderChain<M extends AdapterMessageType = PromptlMessage>(
  options: {
    prompt: Prompt,
    parameters: Record<string, unknown>,
    adapter?: ProviderAdapter<M>,
    onStep: (args: { config: Config, messages: M[] }) => Promise<string | Omit<M, 'role'>>,
    tools?: RenderToolCalledFn<ToolSpec>,
    logResponses?: boolean
  }
): Promise<{ config: Config, messages: M[] }>

// Render an agent
latitude.prompts.renderAgent<M extends AdapterMessageType = PromptlMessage>(
  options: RenderChainOptions<M>
): Promise<{ config: Config, messages: M[], result: unknown }>

Logs Namespace

// Create a log
latitude.logs.create(
  path: string,
  messages: Message[],
  options?: {
    response?: string,
    projectId?: number,
    versionUuid?: string
  }
): Promise<DocumentLog>

Evaluations Namespace

// Trigger an evaluation
latitude.evaluations.trigger(
  uuid: string,
  options?: {
    evaluationUuids?: string[]
  }
): Promise<{ uuid: string }>

// Create an evaluation result
latitude.evaluations.createResult(
  uuid: string,
  evaluationUuid: string,
  options: {
    result: string | boolean | number,
    reason: string
  }
): Promise<{ uuid: string }>

Error Handling

The SDK throws LatitudeApiError instances when API requests fail. You can catch and handle these errors:

import { LatitudeApiError } from '@latitude-data/sdk'

async function handleErrors() {
  try {
    const prompt = await latitude.prompts.get('non-existent-prompt')
  } catch (error) {
    if (error instanceof LatitudeApiError) {
      console.error('API Error:', error.message)
      console.error('Error Code:', error.errorCode)
      console.error('Status:', error.status)
    } else {
      console.error('Unexpected error:', error)
    }
  }
}

Logging Features

  • Automatic Logging: All runs through latitude.prompts.run() are automatically logged in Latitude, capturing inputs, outputs, performance metrics, and trace information.
  • Custom Identifiers: Use the optional customIdentifier parameter to tag runs for easier filtering and analysis in the Latitude dashboard.
  • Response Identification: Each response includes identifying information like uuid that can be used to reference the specific run later.

Further Information