Latitude SDK Documentation

The Latitude SDK provides an easy way to interact with the Latitude API, allowing you to run documents and chat with AI models.

You can reach out to us with any questions or requests through our Slack channel.

Installation

To install the Latitude SDK, use your preferred package manager:

npm install @latitude-data/sdk
# or
yarn add @latitude-data/sdk
# or
pnpm add @latitude-data/sdk

Getting Started

First, import the Latitude class from the SDK:

import { Latitude } from '@latitude-data/sdk'

Then, create an instance of the Latitude class with your API key:

const latitude = new Latitude('your-api-key-here', {
  projectId: 12345, // Optional: You can specify a default project ID here
  versionUuid: 'optional-version-uuid', // Optional: You can specify a default version UUID here
})

Running a Document

To run a document, use the run method:

latitude.run('path/to/your/document', {
  projectId: 12345, // Optional if you provided it in the constructor
  versionUuid: 'optional-version-uuid', // Optional, defaults to the latest live version
  stream: false, // Optional, defaults to false
  parameters: {
    // Any parameters your document expects
  },
  onEvent: ({ event, data }) => {
    // Handle events during execution
  },
  onFinished: (result) => {
    // Handle the final result
    console.log('Conversation:', result.conversation)
    console.log('Response:', result.response)
  },
  onError: (error) => {
    // Handle any errors
    console.error('Error:', error)
  },
})

Chatting with an AI Model

Each event returned by the run method described above contains a uuid field. You can use this uuid to continue the conversation with the document, carrying over the context from the document run. Here's how to do it.

To continue a chat conversation, use the chat method:

const messages = [
  { role: 'user', content: 'Hello, how are you?' },
  // ... other messages
]

// conversation-uuid is the uuid from the document run
latitude.chat('conversation-uuid', messages, {
  stream: true, // This streams the response from the Latitude API
  onEvent: ({ event, data }) => {
    // Handle events during the chat
  },
  onFinished: (result) => {
    // Handle the final result
    console.log('Conversation:', result.conversation)
    console.log('Response:', result.response)
  },
  onError: (error) => {
    // Handle any errors
    console.error('Error:', error)
  },
})

Handling Streams

Both the run and chat methods can stream events. When streaming is enabled, you can handle these events in real time using the onEvent callback:

onEvent: ({ event, data }) => {
  switch (event) {
    case 'latitude-event':
      // Handle Latitude-specific events
      break
    // Handle other event types as needed
  }
}
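
For example, here is a minimal sketch that wires this callback into a streamed run (the document path and the topic parameter are placeholders):

latitude.run('path/to/your/document', {
  stream: true,
  parameters: { topic: 'doctors' },
  onEvent: ({ event, data }) => {
    if (event === 'latitude-event') {
      // Handle Latitude-specific events as they arrive
      console.log('Latitude event:', data)
    }
  },
  onFinished: (result) => {
    console.log('Response:', result.response)
  },
  onError: (error) => {
    console.error('Error:', error)
  },
})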

Error Handling

Errors are handled through the onError callback. It’s recommended to always provide this callback to catch and handle any errors that may occur during execution:

onError: (error) => {
  console.error('An error occurred:', error.message)
  // Perform any necessary error handling or logging
}

IMPORTANT: If you don't provide an onError callback, await sdk.run will throw an error.
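
For example, a minimal sketch of that pattern, assuming sdk is a Latitude instance and the document path and parameters are placeholders:

try {
  const result = await sdk.run('path/to/your/document', {
    parameters: { topic: 'doctors' },
    stream: false,
  })
  console.log('Response:', result.response)
} catch (error) {
  // Without an onError callback, failures surface here instead
  console.error('Run failed:', error)
}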

Pushing a log to Latitude

You can push a log to Latitude in order to evaluate it, using the log method:

import { Latitude } from '@latitude-data/sdk'
import OpenAI from 'openai'

const sdk = new Latitude(process.env.LATITUDE_API_KEY, {
  projectId: 1,
})
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
})

const messages = [
  {
    role: 'user' as MessageRole.user,
    content: [
      {
        type: 'text' as ContentType.text,
        text: 'Please tell me a joke about doctors',
      },
    ],
  },
]
const chatCompletion = await openai.chat.completions.create({
  messages,
  model: 'gpt-4o-mini',
})

// Push the log to the live version of our prompt called 'joker'
sdk.log('joker', messages, {
  response: chatCompletion.choices[0].message.content,
})

Logs follow OpenAI’s format. If you’re using a different method to run your prompts, you’ll need to format your logs accordingly.

If you include the assistant response in the optional response parameter, make sure not to also add it to the messages array, so it isn't logged twice.
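
For example, if you run your prompts with something other than the OpenAI client, a minimal sketch of assembling the log yourself in OpenAI's message format (the prompt name, message text, and response text are placeholders):

const messages = [
  {
    role: 'user',
    content: [{ type: 'text', text: 'Please tell me a joke about doctors' }],
  },
]

// The assistant reply goes only in the response parameter, not in messages
sdk.log('joker', messages, {
  response: 'Why did the doctor carry a red pen? In case they needed to draw blood.',
})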

Evaluating Conversations

To evaluate a conversation using configured evaluators, use the eval method:

import { Latitude } from '@latitude-data/sdk'

// Initialize SDK
const sdk = new Latitude(process.env.LATITUDE_API_KEY, {
  projectId: 1,
})

// 1. Run the document
const { uuid } = await sdk.run('joker', {
  parameters: {
    topic: 'police',
  },
  stream: false,
})

// 2. Chat with the model
await sdk.chat(uuid, [
  {
    role: 'user',
    content: [
      {
        type: 'text',
        text: 'Tell me another joke about doctors',
      },
    ],
  },
])

// 3. Evaluate the full conversation
await sdk.eval(uuid, {
  // Optional: specify evaluators by their UUIDs.
  // Defaults to running all evaluators connected to the prompt.
  evaluationUuids: ['evaluator-uuid-1', 'evaluator-uuid-2'],
})

This allows you to evaluate a conversation at any point in time. It is especially helpful when building agents that have multiple interactions with users and you intend to evaluate the agent's performance after the interaction has fully completed, or at particular points along the way.