Discover how to efficiently handle and process streaming events using Latitude’s API.
When executing a prompt in streaming mode, Latitude will return a stream of Server-Sent Events (SSE) that contain real-time updates from the AI provider. This guide explains how to handle and process streaming events using Latitude’s API.
There are two main types of events that you will receive when streaming:

latitude-event: Contains information about the chain progress and results.
provider-event: Contains real-time updates from your AI provider.

Latitude Events originate from Latitude’s AI engine and provide detailed updates on the processing chain, from initiation to completion. Every request is processed as a chain of steps, even if it consists of a single step.
All Latitude events follow this structure:
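As a hedged sketch, a latitude-event payload can be treated as plain JSON with a type discriminator plus event-specific attributes; the field names below are illustrative assumptions, not the official schema:

```python
import json

# Illustrative shape of a Latitude event payload: a "type" field naming
# the event, plus event-specific attributes. The exact field names here
# are assumptions for illustration, not the official schema.
raw = '{"type": "step-started", "uuid": "123e4567-e89b-12d3-a456-426614174000"}'
event = json.loads(raw)
print(event["type"])  # step-started
```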
Every chain execution follows the same flow:

1. The stream begins with a chain-started event.
2. Each step begins with a step-started event and ends with a step-completed event. Within a step, you may receive additional events:
   - Each provider request is wrapped in provider-started and provider-completed events.
   - When built-in tools are executed, tools-started and tool-completed events occur, indicating the execution status of the requested tool. Check out Latitude Tools for more information about built-in tools.
3. The stream ends with a chain-completed event.
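The flow above can be sketched as a sequence of event names. The event names come from this guide; the happy-path list itself (single step, one provider call, no tools) is illustrative:

```python
# Happy-path event order for a single-step chain with one provider call
# and no tools, per the flow described above. Illustrative sketch only.
HAPPY_PATH = [
    "chain-started",
    "step-started",
    "provider-started",
    "provider-completed",
    "step-completed",
    "chain-completed",
]

# Events that end the SSE stream, per the docs.
TERMINAL_EVENTS = {"chain-completed", "chain-error", "tools-requested"}

def is_terminal(event_type: str) -> bool:
    """Return True when the event ends the SSE stream."""
    return event_type in TERMINAL_EVENTS

print(is_terminal(HAPPY_PATH[-1]))  # True
```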
However, the chain execution can be interrupted by any of the following events:
chain-error: Indicates an error occurred during the processing of the chain.
tools-requested: Indicates the AI response requested additional tools to be executed by the client, and they are required to continue processing the chain.

Here’s a complete list of all Latitude Event types and their attributes:
ChainStarted: The chain has started processing.

StepStarted: A new step in the chain has started processing.

ProviderStarted: Your LLM Provider is being requested to generate a new response.

ProviderCompleted: Your LLM Provider has completed the response generation.

ToolsStarted: Latitude has started running built-in tools requested by the LLM response. Check out Latitude Tools for more information about built-in tools.

ToolCompleted: A built-in tool has completed its execution.

ToolsRequested: The AI provider has requested additional tools to be executed by the client, and they are required to continue processing the chain. This event will terminate the SSE stream. To continue it, use the Chat API Endpoint to send the requested tool results and resume the chain processing.

ChainError: An error has occurred during the processing of the chain. This event will terminate the SSE stream.

ChainCompleted: The chain processing has completed successfully. This event will terminate the SSE stream.
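Putting the list above to work, here is a hedged sketch of dispatching on latitude-event types. The event names match this guide; the handler's return values are illustrative, not an API contract:

```python
# Minimal dispatch over latitude-event types. Event names come from the
# docs above; what each branch returns/does is purely illustrative.
def handle_latitude_event(event: dict) -> str:
    etype = event.get("type")
    if etype == "provider-completed":
        # e.g. record the provider response for this step
        return "provider finished"
    if etype == "tools-requested":
        # Stream has ended; send tool results via the Chat API endpoint
        # to resume the chain.
        return "tools needed"
    if etype == "chain-error":
        return "failed"
    if etype == "chain-completed":
        return "done"
    return "in progress"

print(handle_latitude_event({"type": "chain-completed"}))  # done
```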
Provider Events are generated by your AI provider and contain real-time updates on the response as it is produced. This is especially useful for rendering your LLM’s responses in real time as they are being generated.

These events will always take place between a provider-started and a provider-completed event.
Here’s an example of a Provider Event with the response text delta:
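The example below follows the Vercel AI SDK’s stream-part shape (a "text-delta" part carrying a "textDelta" field); treat the exact field names as an assumption to verify against your SDK version:

```python
import json

# Example provider-event payload carrying a text delta, modeled on the
# Vercel AI SDK's stream-part shape. Field names are assumptions to
# check against your SDK version.
raw = '{"type": "text-delta", "textDelta": "Hello"}'
part = json.loads(raw)

# Accumulate deltas to render the response incrementally.
buffer = ""
if part["type"] == "text-delta":
    buffer += part["textDelta"]
print(buffer)  # Hello
```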
For more information about these events, see the Vercel AI SDK’s documentation.
The API uses SSE for real-time updates. Here’s how to handle SSE responses:
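As a minimal sketch using only the Python standard library, an SSE stream can be read line by line, pairing each "event:" name with its "data:" payload and dispatching on the blank line that terminates each event. The endpoint URL and auth header in the usage comment are placeholders, not real values:

```python
import json

def iter_sse_events(lines):
    """Yield (event_name, payload) pairs from an iterable of SSE lines.

    Follows the SSE framing rules: "event:" names the event, "data:"
    lines carry the payload, and a blank line dispatches the event.
    """
    event_name, data_lines = "message", []
    for line in lines:
        line = line.rstrip("\n")
        if line.startswith("event:"):
            event_name = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data_lines.append(line[len("data:"):].strip())
        elif line == "" and data_lines:  # blank line ends one event
            yield event_name, json.loads("\n".join(data_lines))
            event_name, data_lines = "message", []

# Usage against a live stream (placeholder URL and token):
#
# import urllib.request
# req = urllib.request.Request(
#     "https://example.com/stream",  # placeholder endpoint
#     headers={"Authorization": "Bearer <API_KEY>",
#              "Accept": "text/event-stream"},
# )
# with urllib.request.urlopen(req) as resp:
#     for name, payload in iter_sse_events(l.decode() for l in resp):
#         print(name, payload.get("type"))
```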