Overview

LLMs have been observed to perform complex tasks better when those tasks are broken down into smaller steps.

Chains allow you to create multi-step prompts that interact with the AI model in stages, passing the result of one step to the next. This enables more complex workflows and dynamic conversations.

Syntax

Use the <step> tag in your prompt to add a step. The engine will pause at that step, wait for a generated response, and add it to the conversation as an assistant message before continuing with the prompt.

<step>
  Analyze the following sentence and identify the subject, verb, and object:

  <user>
    "The curious cat chased the playful mouse."
  </user>
</step>

<step>
  Now, using this information, create a new sentence by
  replacing each part of speech with a different word, but keep the same structure.
</step>

<step>
  Finally, translate the new sentence you created into French and return just the sentence.
</step>
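When this chain runs, the engine compiles the first step, sends it to the model, and appends the generated response as an assistant message before compiling the second step. For illustration, the conversation sent for the second step might look roughly like this (the assistant text is a hypothetical model response; untagged prompt text becomes system messages by default):

[
  { role: 'system', content: 'Analyze the following sentence and identify the subject, verb, and object:' },
  { role: 'user', content: '"The curious cat chased the playful mouse."' },
  { role: 'assistant', content: 'Subject: "cat", verb: "chased", object: "mouse".' },
  { role: 'system', content: 'Now, using this information, create a new sentence by replacing each part of speech with a different word, but keep the same structure.' },
]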

Configuration

All steps will use the configuration defined at the beginning of the prompt by default. However, you can override the configuration for each step by adding attributes to the <step> tag:

---
model: gpt-4o
---

<step model="gpt-4o-mini" temperature={{0.1}}>
  /* This step will use a smaller model and lower temperature */

  Analyze the following sentence and identify the subject, verb, and object:

  <user>
    "The curious cat chased the playful mouse."
  </user>
</step>
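A step without attributes simply falls back to the prompt-level defaults. For example, a follow-up step in the same prompt would run with gpt-4o from the front matter:

<step>
  /* This step inherits gpt-4o and the default temperature */
  Now, using this information, create a new sentence with the same structure.
</step>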

Store step responses

You can store the text of the response in a variable by adding an as attribute, followed by the variable name. This allows you to reuse the response later in your prompt, for example in conditionals or other logic.

<step as="result">
  Is this statement correct?
  {{ statement }}

  Respond only with "correct" or "incorrect".
</step>

{{ if result == "correct" }}
  Great, now respond with an explanation about the statement.
{{ else }}
  Now, provide an explanation of why the statement is incorrect, and give the correct answer.
{{ endif }}

Store the whole message

The as attribute stores the text of the generated response. However, the response often contains additional relevant information that can be useful in the prompt. To store the entire message object, use the raw attribute. This attribute stores the whole message object in a variable, which can then be accessed later.

<step raw="generatedMessage">
  ...
</step>

The generatedMessage variable will contain attributes like role and content, as well as any additional data provided by your LLM provider. The content attribute is always an array of objects, each with a type such as text, image, or tool-call.

If you want to debug the contents of the message, you can interpolate it into the prompt and run the chain to see what it contains.
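For reference, a stored message might look roughly like this (fields beyond role and content depend on your provider, and the values here are hypothetical):

{
  role: 'assistant',
  content: [
    { type: 'text', text: 'The subject is "the curious cat"...' }
  ]
}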

Isolating steps

All steps automatically receive all the messages from previous steps as context. In some cases, a step may not need context from previous steps, and including it would add unnecessary cost and could even confuse the model.

If you want to isolate a step from the general context, you can add the isolated attribute to the <step> tag. Isolated steps will not receive any context from previous steps, and future steps will not receive the context from isolated steps either.

<step isolated as="summary1">
  Generate a summary of the following text:
  {{ text1 }} /* Long text */
</step>

<step isolated as="summary2">
  Generate a summary of the following text:
  {{ text2 }} /* Long text */
</step>

<step>
  Compare these two summaries and provide a conclusion.
  {{ summary1 }}
  {{ summary2 }}
</step>

Implementation

To run chains, PromptL evaluates the prompt in steps, waiting for the response to each step before continuing to the next one. To do this, use the Chain class to define an instance with the prompt, parameters, and the rest of the configuration, and run the .step() method to generate the structure for each step.

The first time .step() is called, it must be called without any response, as there has not been any model output yet. After that, you can call .step() with the response of the previous step. For each step, the method will return an object with both messages and config, as usual, but also with a completed boolean, which will be true when the chain is finished and no more responses are required to continue.

Let’s see an example of how to use the Chain class with the OpenAI SDK:

import { Chain } from '@latitude-data/promptl';
import OpenAI from 'openai';

// Create the OpenAI client
const client = new OpenAI();

// Create a function to generate a response based on the step messages and config
async function generateResponse({ config, messages }) {
  const response = await client.chat.completions.create({
    ...config,
    messages,
  })

  // return the response message
  return response.choices[0]!.message;
}

// Create a new chain
const chain = new Chain({
  prompt: '...', // Your PromptL prompt as a string
  parameters: {...} // Your prompt parameters
})

// Compile the first step
let result = await chain.step()
let last_response

// Iterate over the chain until it is completed
while (!result.completed) {
  // Generate the response
  last_response = await generateResponse(result)

  // Compile the next step
  result = await chain.step(last_response)
}

console.log(last_response)
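When the loop exits, last_response holds the message generated for the final step of the chain (in the earlier example, the French translation).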