Streaming Events
Discover how to efficiently handle and process streaming events using Latitude’s API.
When executing a prompt in streaming mode, Latitude will return a stream of Server-Sent Events (SSE) that contain real-time updates from the AI provider. This guide explains how to handle and process streaming events using Latitude’s API.
Overview
There are two main types of events that you will receive when streaming:
- `latitude-event`: Contains information about the chain progress and results.
- `provider-event`: Contains real-time updates from your AI provider.
Latitude Events
Latitude Events originate from Latitude’s AI engine and provide detailed updates on the processing chain, from initiation to completion. Every request is processed as a chain of steps, even if it consists of a single step.
General structure
All Latitude events follow this structure:
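As a rough illustration of such an envelope (the field names below are assumptions for the sketch, not Latitude's documented schema — refer to the event type reference for the actual shape):

```typescript
// Hypothetical sketch of a Latitude event envelope.
// Field names are illustrative assumptions, not the documented schema.
interface LatitudeEvent {
  type: string; // e.g. "chain-started", "step-completed"
  data: Record<string, unknown>; // event-specific payload
}

const example: LatitudeEvent = {
  type: "chain-started",
  data: {},
};
```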
Event Flow
Every chain execution follows the same flow:

- Chain Start: Every stream begins with a `chain-started` event.
- Processing Steps: Multiple steps can be executed within a chain. Each step starts with a `step-started` event and ends with a `step-completed` event. Within a step, you may receive additional events:
  - Provider Interaction: The LLM processing includes `provider-started` and `provider-completed` events.
  - Tool Execution: If Latitude built-in tools are involved, `tools-started` and `tool-completed` events occur, indicating the execution status of the requested tool. Check out Latitude Tools for more information about built-in tools.
- Chain Completion: The chain concludes with a `chain-completed` event.

However, the chain execution can be interrupted by either of the following events:

- `chain-error`: Indicates an error occurred during the processing chain.
- `tools-requested`: Indicates the AI response requested additional tools to be executed by the client; they are required to continue processing the chain.
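The lifecycle above can be sketched as a handler that dispatches on the event name. Only the event names come from this guide; the handler and its return values are illustrative assumptions:

```typescript
// Illustrative dispatcher over the chain lifecycle events described above.
type LatitudeEventName =
  | "chain-started"
  | "step-started"
  | "provider-started"
  | "provider-completed"
  | "step-completed"
  | "tools-started"
  | "tool-completed"
  | "tools-requested"
  | "chain-error"
  | "chain-completed";

function handleLatitudeEvent(name: LatitudeEventName): string {
  switch (name) {
    case "chain-started":
      return "chain began";
    case "chain-completed":
      return "chain finished";
    case "chain-error":
      return "chain failed";
    case "tools-requested":
      // The client must execute the requested tools and resume the chain.
      return "client tools required";
    default:
      // Intermediate step/provider/tool progress events.
      return `progress: ${name}`;
  }
}
```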
Event Types
Here’s a complete list of all Latitude Event types and their attributes:
Provider Events
Provider Events are generated by your AI provider and contain real-time updates as your request is processed. This is especially useful for rendering your LLM's responses in real time as they are being generated.

These events will always take place between a `provider-started` and a `provider-completed` event, with updates at each stage of the processing.
Here’s an example of a Provider Event with the response text delta:
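As a hedged illustration, a text-delta event might look like the following; the field names follow the Vercel AI SDK's stream-part format and should be verified against the SDK documentation:

```json
{
  "event": "provider-event",
  "data": {
    "type": "text-delta",
    "textDelta": "Hello"
  }
}
```

Concatenating the `textDelta` values in order reconstructs the response text as it streams.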
For more information about these events, visit Vercel AI SDK’s Documentation
Handling SSE Events
The API uses SSE for real-time updates. Here’s how to handle SSE responses:
- Set up an EventSource or use a library that supports SSE.
- Listen for events and parse the JSON data in each event.
- Handle different event types.
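The steps above can be sketched as a minimal SSE parser plus a type dispatch; `parseSSE` is an illustrative helper, and a production client would typically use the Latitude SDK or a dedicated SSE library instead:

```typescript
// Minimal SSE parser: splits a raw SSE payload into (event, data) pairs
// and JSON-decodes each data field. Illustrative only — a real client
// should also handle partial chunks, comments, and multi-line data fields.
interface SSEMessage {
  event: string;
  data: unknown;
}

function parseSSE(raw: string): SSEMessage[] {
  const messages: SSEMessage[] = [];
  // SSE messages are separated by a blank line.
  for (const block of raw.split("\n\n")) {
    let event = "message"; // SSE default event name
    let data = "";
    for (const line of block.split("\n")) {
      if (line.startsWith("event:")) event = line.slice(6).trim();
      else if (line.startsWith("data:")) data += line.slice(5).trim();
    }
    if (data) messages.push({ event, data: JSON.parse(data) });
  }
  return messages;
}

// Handle the two event types described above.
const raw =
  'event: latitude-event\ndata: {"type":"chain-started"}\n\n' +
  'event: provider-event\ndata: {"type":"text-delta","textDelta":"Hi"}\n\n';

for (const msg of parseSSE(raw)) {
  if (msg.event === "latitude-event") {
    // Chain progress update (e.g. chain-started, step-completed).
  } else if (msg.event === "provider-event") {
    // Real-time provider update (e.g. a text delta to render).
  }
}
```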