Chains and Steps
Chains and Steps are used to create multi-step prompts that can interact with the AI model in stages.
Overview
LLMs have been observed to perform complex operations better when those operations are broken down into smaller steps.
Chains allow you to create multi-step prompts that can interact with the AI model in stages. You can pass the result of one step to the next one. This enables more complex workflows and dynamic conversations.
Syntax
Use the `<step>` tag in your prompt to add a step. The engine will pause at that step, wait for a generated response, and add it to the conversation as an assistant message before continuing with the prompt.
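As a sketch, a two-step prompt might look like this (the frontmatter provider and model names, and the `{{ sentence }}` parameter, are illustrative assumptions):

```
---
provider: OpenAI
model: gpt-4o
---

<step>
  List the grammatical errors in the following sentence: {{ sentence }}
</step>

<step>
  Now rewrite the sentence, fixing every error you found.
</step>
```

The engine generates an assistant response for the first step and appends it to the conversation before the second step runs.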
Configuration
All steps will use the configuration defined at the beginning of the prompt by default. However, you can override the configuration for each step by adding attributes to the `<step>` tag:
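For example, a step can use a different model or temperature than the global configuration (the specific attribute and model names shown here are illustrative assumptions):

```
---
model: gpt-4o
temperature: 0.7
---

<step>
  Brainstorm three taglines for the product.
</step>

<step model="gpt-4o-mini" temperature="0">
  Pick the best tagline and output it verbatim.
</step>
```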
Store step responses
You can store the text of the response in a variable by adding an `as` attribute followed by the variable name. This allows you to reuse the response later in your prompt, or use it in conditionals and other logic.
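A sketch of the `as` attribute, assuming a `{{ text }}` parameter:

```
<step as="summary">
  Summarize the following text in one sentence: {{ text }}
</step>

<step>
  Translate this summary into French: {{ summary }}
</step>
```

The second step can reference `{{ summary }}` because the first step's response text was stored under that name.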
Store the whole message
The `as` attribute stores only the text of the generated response. However, the response often contains additional relevant information that can be useful in the prompt. To store the entire message object, use the `raw` attribute instead. It stores the whole message object in a variable, which can then be accessed later.
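A minimal sketch of the `raw` attribute:

```
<step raw="generatedMessage">
  Describe the weather in one sentence.
</step>
```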
The `generatedMessage` variable will contain attributes like `role` and `content`, as well as any additional data provided by your LLM provider. The `content` attribute is always an array of objects, each with a `type` such as `text`, `image`, or `tool-call`.
If you want to debug the contents of the message, you can interpolate it into the prompt and run the chain to see what it contains.
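For instance, assuming the message was stored as `generatedMessage` with the `raw` attribute, interpolating it in a later part of the prompt lets you inspect its contents:

```
Debug: {{ generatedMessage }}
```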
Isolating steps
All steps will automatically receive all the messages from previous steps as context. However, some steps may not need that context; including it would add unnecessary cost and could even confuse the model.
If you want to isolate a step from the general context, you can add the `isolated` attribute to the `<step>` tag.
tag. Isolated steps will not receive any context from previous steps, and future steps will not receive the context from isolated steps either.
Implementation
In order to run chains, PromptL will evaluate the prompt step by step, waiting for the response to each step before continuing to the next one. To do this, you must use the `Chain` class to create an instance with the prompt, parameters, and the rest of the configuration, and call its `.step()` method to generate the structure for each step.
The first time `step` is called, it must be called without any response, as there has not been any input yet. After that, you can call `step` with the response of the previous step. For each step, the method will return an object with both `messages` and `config`, as usual, but also with a `completed` boolean, which will be `true` when the chain is finished and no more responses are required to continue.
Let’s see an example of how to use the `Chain` class with the `openai` provider:
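A sketch of the driving loop, assuming the `Chain` class is imported from the `promptl-ai` package and that the per-step `config` carries the model name; the exact import path, option names, and response shapes are assumptions and may differ from the published API:

```typescript
// Sketch: drive a PromptL chain with the OpenAI SDK.
// Import names and Chain options are assumptions based on the description
// above; check the promptl-ai documentation for the exact API.
import { Chain } from 'promptl-ai'
import OpenAI from 'openai'

const client = new OpenAI()

async function runChain(prompt: string, parameters: Record<string, unknown>) {
  const chain = new Chain({ prompt, parameters })

  let lastResponse // undefined on the first call, as required
  while (true) {
    // step() returns the messages and config for the current step,
    // plus a `completed` flag once no more responses are needed.
    const { messages, config, completed } = await chain.step(lastResponse)
    if (completed) return messages // the full, final conversation

    // Generate a response for the current step and feed it back in.
    const completion = await client.chat.completions.create({
      model: config.model as string,
      messages,
    })
    lastResponse = completion.choices[0].message
  }
}
```

Each iteration forwards the step's messages to the provider and passes the assistant's reply back into `step`, until the chain reports completion.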