Latitude’s TypeScript integration has the following main features:

  • Automatic tracing of LLM calls
  • Interact with Latitude’s prompt manager from code: create, update and delete prompts
  • Render Latitude prompts locally and run them against your LLM providers
  • Run prompts through Latitude’s high-performing gateway
  • Trigger LLM-as-judge and human-in-the-loop evaluations
  • Programmatically push external logs to Latitude for evaluation and monitoring

Installation

To install the Latitude SDK, use your preferred package manager:

npm install @latitude-data/sdk
# or
yarn add @latitude-data/sdk
# or
pnpm add @latitude-data/sdk

Getting Started

First, import the Latitude class from the SDK and initialize it with your API key:

import { Latitude } from '@latitude-data/sdk'

const sdk = new Latitude('your-api-key-here', {
  projectId: 12345, // Optional, otherwise you have to provide it in each method
  versionUuid: 'optional-version-uuid', // Optional, by default it targets the latest live version
})

Examples

Check out our cookbook for more examples of how to use Latitude’s SDK.

Telemetry

Latitude can automatically trace all your LLM calls from most major providers and frameworks using our OpenTelemetry integration. We recommend this approach as the easiest way to start using Latitude’s full capabilities.

Here’s how to integrate with the supported providers and frameworks:
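
For example, here is a minimal sketch that instruments the openai client. It assumes the telemetry configuration accepts a modules map of the provider libraries to trace, as described in Latitude’s telemetry docs; check those docs for the exact option names for your provider or framework:

import { Latitude } from '@latitude-data/sdk'
import { OpenAI } from 'openai'

// Passing the provider module lets Latitude patch it and trace every call.
// The `modules` option name is an assumption; verify it against the telemetry docs.
const sdk = new Latitude('your-api-key-here', {
  telemetry: {
    modules: {
      openAI: OpenAI, // Assumed key for the openai package
    },
  },
})

const openai = new OpenAI({ apiKey: 'your-openai-api-key' })

// This call is now traced automatically and appears in Latitude
const completion = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello!' }],
})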

Learn more about traces and how to monitor them with Latitude.

A note during development

Latitude’s OpenTelemetry integration automatically batches requests to improve performance. This is helpful for production workloads, but during development you may want to disable batching. You can do so by setting the disableBatch option to true:

new Latitude('your-api-key-here', {
  telemetry: {
    disableBatch: true,
  },
})

Prompt Management

Get or create a prompt

To get or create a prompt, use the getOrCreate method:

await sdk.prompts.getOrCreate('path/to/your/prompt', {
  projectId: 12345, // Optional, if you did not provide it in the constructor
  versionUuid: 'optional-version-uuid', // Optional, by default it targets the latest live version
  prompt: 'Your prompt here', // Optional, this will be the contents of your prompt if it does not exist
})

Run a prompt with your LLM provider

The render method will render your prompt and return the configuration and messages to use with your LLM provider. This render step is completely local and does not use Latitude’s runtime services:

import { Latitude } from '@latitude-data/sdk'
import { OpenAI } from 'openai'

const sdk = new Latitude('your-api-key-here', {
  projectId: 12345, // Optional: You can specify a default project ID here
  versionUuid: 'optional-version-uuid', // Optional: You can specify a default version UUID here
})

const prompt = await sdk.prompts.get('path/to/your/prompt', {
  projectId: 12345, // Optional if you provided it in the constructor
  versionUuid: 'optional-version-uuid', // Optional, by default it targets the latest live version
})

const openai = new OpenAI({ apiKey: 'your-openai-api-key' })

const { config, messages } = await sdk.prompts.render({
  prompt,
  parameters: {
    // Any parameters your prompt expects
  },
})

const response = await openai.chat.completions.create({
  ...config,
  messages,
})

You can also execute chains by providing an onStep callback to the renderChain method, which will be called for each step of the chain to generate the corresponding response:

await sdk.prompts.renderChain({
  prompt,
  parameters: {
    // Any parameters your prompt expects
  },
  onStep: async ({ config, messages }) => {
    // Your call to your LLM provider goes here
    const response = await openai.chat.completions.create({
      ...config,
      messages,
    })

    return response
  },
})

render and renderChain only work with the latest version of Latitude’s open-source prompt syntax, PromptL.

Run a prompt through Latitude Gateway

Latitude’s Gateway is a high-performance proxy that routes your LLM calls between your application and the LLM provider. It adds features like automatic prompt caching based on the prompt’s content and configuration.

To run a prompt through Latitude’s Gateway, use the run method:

await sdk.prompts.run('path/to/your/prompt', {
  projectId: 12345, // Optional if you provided it in the constructor
  versionUuid: 'optional-version-uuid', // Optional, by default it targets the latest live version
  stream: false, // Optional, by default it's false
  parameters: {
    // Any parameters your prompt expects
  },
  tools: {
    // Any tools your prompt expects
  },
  onEvent: ({ event, data }) => {
    // Handle events during execution
  },
  onFinished: (result) => {
    // Handle the final result
  },
  onError: (error) => {
    // Handle any errors
  },
})
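
When stream is false, the promise returned by run also resolves with the final result, so the callbacks are optional. A minimal sketch, assuming the result exposes the conversation uuid and the provider response (check the run reference for the exact shape):

const result = await sdk.prompts.run('path/to/your/prompt', {
  projectId: 12345,
  parameters: {
    // Any parameters your prompt expects
  },
})

// Assumed result shape: `uuid` identifies the conversation and
// `response.text` holds the final assistant message
console.log(result.uuid)
console.log(result.response.text)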

Running a prompt with tools

When you run a prompt with tools, you can define and supply the corresponding tool handlers to the Latitude SDK. These handlers will be called automatically when the LLM invokes the tools. The tool results will be returned to the LLM, and the conversation will continue.

import { Latitude } from '@latitude-data/sdk'

// If you use TypeScript, you can define the tools you want to use
// These types will be available in the tools parameter of the run and chat methods
type Tools = {
  get_coordinates: { location: string }
  get_weather: { latitude: string; longitude: string }
}

const response = await sdk.prompts.run<Tools>('path/to/your/prompt', {
  projectId: 12345, // Optional if you provided it in the constructor
  parameters: {
    my_location: 'Barcelona and Miami',
    other_location: 'Boston',
  },
  versionUuid: 'optional-version-uuid', // Optional, by default it targets the latest live version
  stream: true,
  tools: {
    get_coordinates: async ({ location }) => {
      const data = await fetch(`https://api.example.com/coordinates?location=${location}`)
      const { latitude, longitude } = await data.json()
      return { latitude, longitude }
    },
    get_weather: async ({ latitude, longitude }) => {
      const data = await fetch(`https://api.example.com/weather?latitude=${latitude}&longitude=${longitude}`)
      const { temperature } = await data.json()
      return { temperature }
    },
  },
})

Pausing tool execution

If you need to pause the execution of a tool, return the result of the pauseExecution function provided in the tool handler’s details argument. You can resume the conversation later by returning the tool results in the sdk.prompts.chat method.

// ...example of the tools argument in the `sdk.prompts.run` method
tools: {
  get_weather: async (
    { latitude, longitude },
    // The second argument carries the execution details:
    {
      toolId, // Called tool id
      toolName, // Called tool name, `get_weather` in this case
      pauseExecution, // Signal to optionally pause the execution
      messages, // List of conversation messages so far
      conversationUuid, // Conversation identifier, to be used in the `sdk.prompts.chat` method
      requestedToolCalls, // All tool calls requested by the LLM, including this one
    },
  ) => {
    // Imagine `get_weather` takes a long time to execute and you want to run it
    // in the background. With `pauseExecution`, you can pause the conversation,
    // get the weather data, and resume the conversation later by returning the
    // tool results in the `sdk.prompts.chat` method.
    // You may need to store `requestedToolCalls` and `conversationUuid` on your
    // side to resume the conversation. Note that you must return the results of
    // all requested tools at once in the `sdk.prompts.chat` method.
    return pauseExecution()
  },
},
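
Later, once the background work has finished, resume the conversation with the chat method. Here is a minimal sketch, assuming you stored conversationUuid and the requested tool calls when pausing; the exact shape of the tool-result message follows the PromptL format, so check the chat reference before relying on the field names used here:

// Hypothetical values stored when the tool handler paused execution
const conversationUuid = 'stored-conversation-uuid'
const toolCall = storedRequestedToolCalls[0]

// Return the results of all requested tools at once to resume the conversation
await sdk.prompts.chat(conversationUuid, [
  {
    role: 'tool',
    content: [
      {
        type: 'tool-result',
        toolCallId: toolCall.id, // Assumed field names, taken from the stored `requestedToolCalls` entries
        toolName: toolCall.name,
        result: { temperature: 24 },
      },
    ],
  },
])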

Log Management

Pushing a log to Latitude

To create a log programmatically, use the create method:

const messages = [
  {
    role: 'user',
    content: 'Please tell me a joke about doctors',
  },
]

await sdk.logs.create('path/to/your/prompt', messages, {
  projectId: 12345, // Optional, if you did not provide it in the constructor
  versionUuid: 'optional-version-uuid', // Optional, by default it targets the latest live version
  response: 'assistant response',
})

Logs follow the PromptL format. If you’re using a different method to run your prompts, you’ll need to format your logs accordingly.
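
For example, if your application calls OpenAI directly, its chat message format maps closely onto PromptL roles for plain text messages, so a straightforward conversion is usually enough. A minimal sketch, assuming text-only messages and the openai client from the earlier examples:

const openaiMessages = [
  { role: 'user', content: 'Please tell me a joke about doctors' },
]

const completion = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: openaiMessages,
})

// Text-only user messages are already valid PromptL messages; pass the
// assistant's reply separately as the response
await sdk.logs.create('path/to/your/prompt', openaiMessages, {
  projectId: 12345,
  response: completion.choices[0].message.content ?? '',
})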