Rust SDK
Learn how to use the SDKs to interact with the Latitude API.
Latitude Community SDK Documentation
The Latitude Rust SDK provides an easy way to interact with the Latitude API, allowing you to run documents and chat with AI models.
This SDK is community-maintained and not officially supported by Latitude. If you have any questions or requests, please reach out to the community on their GitHub repository.
To install the Latitude SDK, add the crate from Crates.io:
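For example, assuming the crate is published as latitude-sdk (check Crates.io for the actual crate name and latest version), you could add it to your Cargo.toml along with an async runtime such as Tokio:

```toml
[dependencies]
# Crate name and versions are assumptions — verify against the Crates.io entry
latitude-sdk = "0.1"
tokio = { version = "1", features = ["full"] }
```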
Getting Started
First, import the Client struct from the SDK:
Then, create an instance of the client with your API key:
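A minimal setup might look like the following. The crate path, the Client struct's location, and the constructor signature are assumptions based on common Rust SDK conventions; check the community repository for the exact API:

```rust
use latitude_sdk::Client; // hypothetical crate path — verify against the community repo

#[tokio::main]
async fn main() {
    // Read the API key from the environment; Client::new is an assumed constructor
    let api_key = std::env::var("LATITUDE_API_KEY").expect("LATITUDE_API_KEY not set");
    let client = Client::new(api_key);
}
```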
Running a Document
To run a document, use the run method:
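A sketch of what this might look like — the run method's name, parameters, and return shape are assumptions here, so consult the community repository for the exact signature:

```rust
// Hypothetical API — verify method and argument names against the SDK's docs
let response = client
    .run("path/to/document", serde_json::json!({ "topic": "AI" }))
    .await?;
println!("Conversation uuid: {}", response.uuid);
```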
Chatting with an AI Model
Each event returned by the document run method described above contains a uuid field. You can use this field to continue the conversation with the document, keeping the context from the document run. Here's how to do it.
To continue a chat conversation, use the chat method:
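A sketch under the same caveats — the chat method's signature and the Message helper are assumptions:

```rust
// Hypothetical API — pass the uuid from a previous run plus the new messages
let stream = client
    .chat(&uuid, vec![Message::user("Tell me more about that.")])
    .await?;
```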
The chat method currently only supports streaming responses.
Stream Handling
The run and chat methods in this library both support streaming responses. When using these methods, the response can be handled as a stream of events in real time. Here's how you can work with streaming in Rust:
- Stream Responses: By calling .stream() on your chat or run request, the library will return events as they are received. The stream can be iterated over asynchronously, allowing you to handle each event as it arrives.
Example:
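A sketch of the iteration pattern, assuming the SDK's streams implement the standard futures Stream trait and that .stream() exists as described above:

```rust
use futures_util::StreamExt; // assumed stream combinators

// Hypothetical API — iterate over events as they arrive
let mut stream = client
    .run("path/to/document", serde_json::json!({}))
    .stream()
    .await?;

while let Some(event) = stream.next().await {
    match event {
        Ok(ev) => println!("received event: {:?}", ev),
        Err(e) => eprintln!("stream error: {}", e),
    }
}
```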
Error Handling
Errors can occur during both the initialization of a chat or run request and while handling streamed events. It's recommended to handle errors using Rust's standard Result and match patterns.
- Error Callback: If an error occurs, the library returns an Err variant containing the error details. It is good practice to handle errors immediately after calling run or chat.
Example:
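The Result-matching pattern itself is standard Rust; in this self-contained illustration, run_request is a stand-in for an SDK call that may fail, not the real API:

```rust
// Stand-in for an SDK call that may fail; the real run/chat calls
// similarly return a Result that should be matched right away
fn run_request(document: &str) -> Result<String, String> {
    if document.is_empty() {
        Err("document path must not be empty".to_string())
    } else {
        Ok(format!("ran {document}"))
    }
}

fn main() {
    // Handle both variants immediately after the call
    match run_request("my-document") {
        Ok(response) => println!("success: {response}"),
        Err(err) => eprintln!("request failed: {err}"),
    }
}
```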
In both cases, error messages can be logged or handled as needed. This ensures stability and gives insight into issues when handling streaming data.
Pushing a log to Latitude
You can push a log to Latitude in order to evaluate it, using the log method:
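A sketch under the same caveats — the log method's name and argument order are assumptions:

```rust
// Hypothetical API — push an OpenAI-format message list as a log
let messages = vec![serde_json::json!({
    "role": "user",
    "content": "Hello, assistant!"
})];

// The optional response argument carries the assistant's reply separately
client
    .log("path/to/document", messages, Some("Hi! How can I help?"))
    .await?;
```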
Logs follow OpenAI’s format. If you’re using a different method to run your prompts, you’ll need to format your logs accordingly.
If you include the assistant response in the optional response parameter, make sure not to include it in the log messages so it isn't included twice.
Evaluating Conversations
To evaluate a conversation using configured evaluators, use the eval method:
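A sketch under the same caveats — the eval method's name and its uuid argument are assumptions:

```rust
// Hypothetical API — trigger the configured evaluators for a conversation
client.eval(&uuid).await?;
```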
This allows you to evaluate a conversation at any point in time. It is especially helpful when building agents that have multiple interactions with users, where you intend to evaluate the agent's performance after the interaction is fully completed, or at particular points in time.