Overview

The configuration section of a PromptL prompt is an optional yet powerful way to define how your LLM will behave. It allows you to set key parameters such as the model, temperature, and other options specific to your LLM provider. The section sits between two lines of three dashes (---) and is written in YAML:

---
model: gpt-4o
temperature: 0.6
top_p: 0.9
---

PromptL doesn’t impose restrictions on what you include in the config section, so you can add any key-value pairs supported by your LLM provider.
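
For example, if your provider supports OpenAI-style structured output, you can forward its response_format option alongside the usual parameters. This is only a hedged sketch: confirm that your provider and model actually accept whatever keys you pass through.

---
model: gpt-4o
temperature: 0.2
response_format:
  type: json_object
---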

If you’re using the Latitude platform, see the Latitude Prompt Configuration guide for more detail on what you can include in the configuration section.


Structure of the Config Section

Key Characteristics:

  1. YAML Format: The config section uses YAML, making it intuitive to write and easy to read.
  2. Flexibility: Add as many or as few key-value pairs as needed.
  3. Placement: Always appears at the top of the PromptL file.

Here’s another example with additional parameters:

---
model: gpt-3.5-turbo
temperature: 0.7
max_tokens: 500
stop: ["\n"]
presence_penalty: 0.2
frequency_penalty: 0.5
---

Common Configuration Options:

While the specific keys you can use depend on your LLM provider, here are some commonly used options:

  • model: Specifies the model to use (e.g., gpt-4, gpt-3.5-turbo).
  • temperature: Controls randomness in responses (higher = more creative, lower = more deterministic).
  • top_p: Sets the nucleus sampling threshold, restricting token choices to the smallest set whose cumulative probability reaches p.
  • max_tokens: Limits the number of tokens in the response.
  • stop: Defines one or more sequences at which the model stops generating tokens.
  • presence_penalty: Penalizes tokens that have already appeared; positive values nudge the model toward new topics.
  • frequency_penalty: Penalizes tokens in proportion to how often they have appeared, discouraging verbatim repetition.
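
To keep the effect of each option in view, here is an illustrative config with every key annotated. The section is plain YAML, so # comments should simply be ignored when it is parsed; the values themselves are arbitrary examples, not recommendations:

---
model: gpt-4o              # which model handles the prompt
temperature: 0.7           # 0 = mostly deterministic, higher = more varied
top_p: 0.9                 # nucleus sampling cutoff
max_tokens: 500            # cap on the length of the response
stop: ["\n"]               # stop generating at the first newline
presence_penalty: 0.2      # positive values nudge the model toward new topics
frequency_penalty: 0.5     # positive values discourage repetition
---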

Refer to your LLM provider’s documentation for a complete list of supported configuration options.


Best Practices for Configurations

To make the most of your config section, consider these tips:

  1. Be Specific: Define parameters explicitly to avoid unexpected behavior from your LLM.
    • Example: Specify temperature and top_p to control response variability.
  2. Experiment and Iterate: LLM performance can vary based on your configuration. Adjust parameters like temperature or frequency_penalty to fine-tune results.
  3. Reuse Configurations: If you frequently use the same settings, consider creating reusable templates to streamline your workflow (see the sketch after this list).
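
For instance, you might keep a baseline like the one below in a shared snippet and copy it into each prompt, overriding individual keys where a prompt needs something different. The model name and values are placeholders rather than recommendations:

---
model: gpt-4o       # default model for most prompts
temperature: 0.2    # low randomness for predictable answers
top_p: 0.9
max_tokens: 800
---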

Debugging Configurations

If your prompt isn’t behaving as expected:

  • Check Your Parameters: Ensure all keys and values are supported by your LLM provider.
  • Validate YAML Syntax: Incorrect YAML formatting (for example, a missing space after a colon) can cause parsing errors.
  • Test with Minimal Configs: Start with a simple configuration, as sketched below, and add settings back one at a time to identify the problematic one.
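
As a concrete starting point, strip the config down to the model alone (the model name here is just a placeholder) and confirm the prompt runs:

---
model: gpt-4o
---

Then reintroduce parameters one at a time, rerunning after each addition; the first setting that breaks the prompt or degrades the output is the one to investigate.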