Latitude’s TypeScript integration has the following main features:

  • Automatic tracing of LLM calls
  • Interact with Latitude’s prompt manager from code: create, update and delete prompts
  • Render Latitude prompts locally and run them against your LLM providers
  • Run prompts with Latitude’s high-performance gateway
  • Trigger LLM-as-judge and human-in-the-loop evaluations
  • Programmatically push external logs to Latitude for evaluation and monitoring

Installation

To install the Latitude SDK, use your preferred package manager:

npm install @latitude-data/sdk
# or
yarn add @latitude-data/sdk
# or
pnpm add @latitude-data/sdk

Getting Started

First, import the Latitude class from the SDK and initialize it with your API key:

import { Latitude } from '@latitude-data/sdk'

const sdk = new Latitude('your-api-key-here', {
  projectId: 12345, // Optional, otherwise you have to provide it in each method
  versionUuid: 'optional-version-uuid', // Optional, by default it targets the latest live version
})

Examples

Check out our cookbook for more examples of how to use Latitude’s SDK.

Telemetry

Latitude can automatically trace all your LLM calls from most major providers and frameworks using our OpenTelemetry integration. We recommend this approach as the easiest way to get started with Latitude’s full capabilities.

Here’s how to integrate with all supported providers and frameworks:
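
The following sketch instruments the OpenAI client; the telemetry.modules option and the openAI key are assumptions here, and other supported providers/frameworks would be wired up the same way with their own module:

import { Latitude } from '@latitude-data/sdk'
import OpenAI from 'openai'

// A minimal sketch: passing the provider module (assumed to be accepted via
// telemetry.modules) lets Latitude patch it and trace every call it makes.
new Latitude('your-api-key-here', {
  telemetry: {
    modules: {
      openAI: OpenAI,
    },
  },
})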

Learn more about traces and how to monitor them with Latitude.

A note during development

Latitude’s OpenTelemetry integration automatically batches requests to improve performance. This is helpful in production workloads, but during development you may want to disable batching so that traces are sent immediately. To do so, set the disableBatch option to true:

new Latitude('your-api-key-here', {
  telemetry: {
    disableBatch: true,
  },
})

Prompt Management

Get or create a prompt

To get or create a prompt, use the getOrCreate method:

await sdk.prompts.getOrCreate('path/to/your/prompt', {
  projectId: 12345, // Optional, if you did not provide it in the constructor
  versionUuid: 'optional-version-uuid', // Optional, by default it targets the latest live version
  prompt: 'Your prompt here', // Optional, this will be the contents of your prompt if it does not exist
})
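
If you need the prompt afterwards, you can capture the return value; this short sketch assumes getOrCreate resolves to the prompt, just like get does:

const prompt = await sdk.prompts.getOrCreate('path/to/your/prompt', {
  prompt: 'Your prompt here',
})

console.log(prompt) // The prompt that was found or just created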

Run a prompt with your LLM provider

The render method will render your prompt and return the configuration and messages to use with your LLM provider. This render step is completely local and does not use Latitude’s runtime services.

Here’s an example of running a Latitude prompt locally using the OpenAI package:

import { Latitude } from '@latitude-data/sdk'
import { OpenAI } from 'openai'

const sdk = new Latitude('your-api-key-here', {
  projectId: 12345, // Optional: You can specify a default project ID here
  versionUuid: 'optional-version-uuid', // Optional: You can specify a default version UUID here
})

const prompt = await sdk.prompts.get('path/to/your/prompt', {
  projectId: 12345, // Optional if you provided it in the constructor
  versionUuid: 'optional-version-uuid', // Optional, by default it targets the latest live version
})

const openai = new OpenAI({ apiKey: 'your-openai-api-key' })

const { config, messages } = await sdk.prompts.render({
  prompt,
  parameters: {
    // Any parameters your prompt expects
  },
})

const response = await openai.chat.completions.create({
  ...config,
  messages,
})
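
From here, response is a standard OpenAI chat completion, so you can read the generated text as usual:

console.log(response.choices[0].message.content)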

You can also execute chains by providing an onStep callback to the renderChain method; it runs once per step of the chain to generate that step’s response. Here’s an example:

await sdk.prompts.renderChain({
  prompt,
  parameters: {
    // Any parameters your prompt expects
  },
  onStep: async ({ config, messages }) => {
    // Your call to your LLM provider goes here
    const response = await openai.chat.completions.create({
      ...config,
      messages,
    })

    return response
  },
})
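
The value you return from onStep is used as that step’s assistant response and carried into the messages of the following step, which is how the chain progresses.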

render and renderChain only work with the latest iteration of Latitude’s open-source prompt syntax: PromptL.

Run a prompt through Latitude Gateway

Latitude’s Gateway is a high-performance proxy that routes your LLM calls between your application and the LLM provider. It adds features such as automatic prompt caching based on prompt content and configuration.

To run a prompt through Latitude’s Gateway, use the run method:

await sdk.prompts.run('path/to/your/prompt', {
  projectId: 12345, // Optional if you provided it in the constructor
  versionUuid: 'optional-version-uuid', // Optional, by default it targets the latest live version
  stream: false, // Optional, by default it's false
  parameters: {
    // Any parameters your prompt expects
  },
  onEvent: ({ event, data }) => {
    // Handle events during execution
  },
  onFinished: (result) => {
    // Handle the final result
  },
  onError: (error) => {
    // Handle any errors
  },
})
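
run also resolves with the final result once execution finishes, so the callbacks are optional if you only need the outcome. A sketch, assuming the awaited value matches what onFinished receives:

const result = await sdk.prompts.run('path/to/your/prompt', {
  parameters: { topic: 'doctors' }, // Hypothetical parameter for illustration
})

console.log(result) // The same final result passed to onFinished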

Log Management

Pushing a log to Latitude

To create a log programmatically, use the create method:

const messages = [
  {
    role: 'user',
    content: 'Please tell me a joke about doctors',
  },
]

await sdk.logs.create('path/to/your/prompt', messages, {
  projectId: 12345, // Optional, if you did not provide it in the constructor
  versionUuid: 'optional-version-uuid', // Optional, by default it targets the latest live version
  response: 'assistant response',
})

Messages follow OpenAI’s format.