Playground
Learn how to test and refine your prompts interactively in the Playground.
The Prompt Playground is your interactive sandbox for testing, debugging, and refining prompts before deploying them. It lets you run prompts with different inputs, see the model’s responses in real time, and even test how your prompt interacts with tools.
Running Single Inputs
- Preview: The main panel shows a preview of the messages that will be sent to the model, based on your prompt template and current parameter values.
- Parameters: If your prompt uses input parameters (like `{{ topic }}`), they appear in the “Parameters” section. Fill in values here (see the example prompt after this list).
- Run: Click “Run prompt”. Latitude sends the request to the configured provider and model.
- Chat Mode: The response appears, and the Playground enters Chat mode. You can continue the conversation turn by turn.
- Reset: Click “Reset Chat” to clear the conversation and run the prompt again from the beginning, potentially with new parameter values.
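For reference, a minimal prompt like the sketch below would surface a single “topic” field in the Parameters section. The provider name and model shown here are placeholders; use whatever you have configured in your project.

```
---
provider: openai
model: gpt-4o
---

Write a short, friendly introduction to {{ topic }} for a complete beginner.
```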
Parameter Input Methods
You can populate parameters in several ways:
- Manual: Type values directly into the fields.
- Dataset: Load inputs from a Dataset. Each row becomes a separate test case. This is great for batch testing.
- History: Reuse parameter values from previous runs.
Parameter Types
Parameters can accept different input types, configured either in the prompt’s settings or directly in the Playground:
- Text: Standard text input (default).
- Image: Upload an image file. Passed to the model as content (requires model support, such as GPT-4V or Claude 3). Use the `<content-image>` tag in your prompt (see the sketch after this list).
- File: Upload any file type. Passed as content (requires model support). Use the `<content-file>` tag.
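As a sketch of how an image parameter might be referenced in a prompt, assuming an illustrative parameter named `photo` wrapped in a user message:

```
<user>
  What product is shown in this picture, and what condition is it in?
  <content-image>{{ photo }}</content-image>
</user>
```

When you run this prompt, the Playground offers an image upload control for the `photo` parameter instead of a plain text field.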
Testing Tool Responses
If your prompt uses Tools, the Playground allows you to simulate their responses:
- Run the prompt: Initiate the prompt run as usual.
- Tool Call Request: If the model decides to call a tool, the Playground will pause and display the requested tool call and its arguments.
- Mock Response: Enter the JSON response you want the tool to pretend to return (see the sample below).
- Continue: Click “Send tool response”. Latitude sends the mocked tool response back to the model, which then continues its generation process based on that simulated information.
This allows you to test the logic of your prompt’s interaction with tools without needing to execute the actual tool functions.
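For instance, if the model requested a hypothetical `get_weather` tool with an argument like `{ "location": "Barcelona" }`, you might paste a mock response such as the following (the tool name and fields are purely illustrative):

```json
{
  "location": "Barcelona",
  "temperature_celsius": 24,
  "conditions": "Sunny with light wind"
}
```

The model then continues the conversation as if the tool had actually returned this data, letting you verify how your prompt handles the result.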
Viewing Logs in the Playground
Every run in the Playground generates a log entry. You can quickly access the detailed log for the current run:
- Click the “Logs” icon or link within the Playground interface (location may vary slightly).
- This opens the detailed log view, showing inputs, outputs, metadata, timings, and any evaluation results associated with that specific run.
This provides immediate feedback and traceability for debugging.
Next Steps
- Learn about Prompt Configuration
- Manage changes using Version Control
- Explore how to use Tools in your prompts