The Prompt Playground is your interactive sandbox for testing, debugging, and refining prompts before deploying them. It lets you run prompts with different inputs, see the model’s responses in real time, and test how your prompt interacts with tools.

Running Single Inputs

  1. Preview: The main panel shows a preview of the messages that will be sent to the model, based on your prompt template and current parameter values.
  2. Parameters: If your prompt uses input parameters (like {{ topic }}), they appear in the “Parameters” section. Fill in their values here (see the example template after this list).
  3. Run: Click “Run prompt”. Latitude sends the request to the configured provider and model.
  4. Chat Mode: The response appears, and the Playground enters Chat mode. You can continue the conversation turn by turn.
  5. Reset: Click “Reset Chat” to clear the conversation and run the prompt again from the beginning, potentially with new parameter values.
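
For reference, this is roughly what a minimal prompt template with a single {{ topic }} parameter could look like. The provider and model values below are placeholders; use whichever ones you have configured:

    ---
    provider: OpenAI
    model: gpt-4o
    ---
    Write a short, friendly introduction to {{ topic }} for a complete beginner.

Once you enter a value for topic in the Parameters section, the Preview panel shows the rendered messages exactly as they will be sent to the model.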

Parameter Input Methods

You can populate parameters in several ways:
  • Manual: Type values directly into the parameter fields.
  • Dataset: Load inputs from a Dataset, where each row becomes a separate test case. This is great for batch testing (see the sample dataset after this list).
  • History: Reuse parameter values from previous runs.
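
For example, a dataset for the prompt sketched above could be a simple CSV whose column headers match the parameter names. The rows below are invented for illustration:

    topic
    black holes
    sourdough starters
    the Rust borrow checker

Each row is loaded as one test case, so you can run the same prompt across many inputs in a single pass.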

Parameter Types

Parameters can accept different input types, configured either in the prompt’s settings or directly in the Playground:
  • Text: Standard text input (default).
    Advanced users: this field also accepts lists. Specify a list in the following format: [a1, a2, etc.]
  • Image: Upload an image file. It is passed to the model as image content (requires a model with vision support, such as GPT-4V or Claude 3). Use the <content-image> tag in your prompt (see the sketch after this list).
  • File: Upload any file type. It is passed to the model as content (requires model support). Use the <content-file> tag in your prompt.
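
As a rough sketch, an image parameter named image (a name chosen here for illustration) might be referenced in a prompt like this:

    ---
    provider: OpenAI
    model: gpt-4o
    ---
    Describe what is happening in this picture:
    <content-image>{{ image }}</content-image>

The file you upload in the Parameters section is substituted into the tag and sent to the model as image content.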

Testing Tool Responses

If your prompt uses Tools, the Playground allows you to simulate their responses:
  1. Run the prompt: Initiate the prompt run as usual.
  2. Tool Call Request: If the model decides to call a tool, the Playground pauses and displays the requested tool call and its arguments.
  3. Mock Response: Enter the JSON response you want the tool to pretend to return (see the example after these steps).
  4. Continue: Click “Send tool response”. Latitude sends the mocked tool response back to the model, which then continues its generation process based on that simulated information.
This allows you to test the logic of your prompt’s interaction with tools without needing to execute the actual tool functions.
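
For example, if the model requested a hypothetical get_weather tool with the argument {"city": "Madrid"}, you might paste a mock response like the one below. The tool name and fields are invented; use whatever shape your real tool would return:

    {
      "temperature": 21,
      "unit": "celsius",
      "conditions": "sunny"
    }

The model then continues the conversation as if the tool had actually returned that payload.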

Viewing Logs in the Playground

Every run in the Playground generates a log entry. You can quickly access the detailed log for the current run:
  1. Click the “Logs” icon or link within the Playground interface (location may vary slightly).
  2. This opens the detailed log view, showing inputs, outputs, metadata, timings, and any evaluation results associated with that specific run.
This provides immediate feedback and traceability for debugging.

Next Steps