This guide will walk you through creating and testing your first prompt in Latitude without writing any code.
1. Add a Provider
- Navigate to Settings > “Providers”
- Click “Create Provider”
- Select your provider (e.g., OpenAI, Anthropic, Google)
- Enter your API key and any required configuration
- Click “Save”
2. Create a Project
- Go to the dashboard, click “New Project”
- Enter a name for your project
- Click “Create Project”
3. Create Your First Prompt
- Notice the three icons in the sidebar for creating a new folder, prompt, and file, respectively
- Click the second icon; a new prompt input will appear in the sidebar
- Enter a name for your prompt and press Enter
You will be redirected to the prompt editor. Write the following prompt in the editor:
```
---
provider: Latitude
model: gpt-4o-mini
---

Write a compelling product description for {{product_name}} with the following features:

{{features}}

The description should be appropriate for {{target_audience}} and highlight the main benefits.

Tone: {{tone}}
Length: {{word_count}} words
```
The prompt is automatically saved as you write it.
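The double-brace placeholders are filled in with your parameter values at run time. Conceptually, the substitution works like this (a minimal sketch for illustration, not Latitude's actual template engine):

```python
import re

def render(template: str, params: dict) -> str:
    """Replace each {{name}} placeholder with its value from params."""
    def substitute(match):
        name = match.group(1)
        if name not in params:
            raise KeyError(f"missing parameter: {name}")
        return str(params[name])
    return re.sub(r"\{\{\s*(\w+)\s*\}\}", substitute, template)

prompt = "Write a description for {{product_name}} in {{word_count}} words."
print(render(prompt, {"product_name": "Smart Home Assistant", "word_count": 150}))
# Write a description for Smart Home Assistant in 150 words.
```

Because every placeholder becomes a named input, the same prompt can be tested against many different parameter sets in the Playground.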
4. Test in the Playground
- In the prompt editor, notice the parameter inputs on the right side
- Fill in the parameter values:
  - product_name: “Smart Home Assistant”
  - features: “Voice control, Smart home integration, AI-powered recommendations”
  - target_audience: “Tech-savvy homeowners”
  - tone: “Professional but friendly”
  - word_count: “150”
- Click “Run” in the bottom right corner to see the generated output
- Try different inputs to test how your prompt performs in various scenarios
Bonus: notice that an evaluation is automatically generated for every prompt run
5. Observe Results and Logs
- Navigate to the “Logs” section
- You’ll see a record of all interactions with your prompt
- Click on any log entry to view details including:
- Input parameters
- Generated output
- Model used
- Response time
- Evaluation results (if available)
- Use filters to find specific logs based on time, status, or content
- Select logs and save them to a dataset
6. Create an Evaluation
- In the prompt editor, go to the “Evaluations” tab
- Click “Add Evaluation”
- Select “LLM as Judge” as the evaluation type
- Choose a title and description, and select a result type: number, boolean, or text
- Click “Create evaluation”
- In the evaluation page, click “Run Experiment”
- Select the dataset you previously saved from logs, map the prompt parameters to the dataset columns, and click “Run Experiment”
- Watch experiment results stream in real-time
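When an experiment runs, dataset columns are matched to prompt parameters by name. Conceptually, a dataset saved from logs is just a table with one column per parameter and one row per logged run. This sketch builds such a table with Python's standard csv module (illustrative only, not Latitude's actual export format):

```python
import csv
import io

# One row per logged run; column names match the prompt's parameters.
rows = [
    {
        "product_name": "Smart Home Assistant",
        "features": "Voice control, Smart home integration, AI-powered recommendations",
        "target_audience": "Tech-savvy homeowners",
        "tone": "Professional but friendly",
        "word_count": "150",
    },
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(rows[0]))
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```

If a column name matches a parameter name exactly, the mapping step is trivial; otherwise you pick the corresponding column for each parameter by hand.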
7. Publish and Deploy the Prompt
- In the prompt editor, click the “Publish” button in the top sidebar
- Add a version note describing the prompt (e.g., “Initial version of product description generator”)
- Click “Publish Version”
- Your prompt is now available as an API endpoint through the AI Gateway
- Click the “Deploy this prompt” button in the top header
- Copy your preferred integration method (SDK or HTTP API)
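Once published, the prompt can be run over HTTP. The sketch below shows the general shape of such a request; the exact endpoint URL, payload fields, and authentication scheme are assumptions here, so copy the real snippet from the “Deploy this prompt” dialog:

```python
import json
import urllib.request

# Assumed placeholders: replace with the endpoint, project ID, and API key
# shown in your own "Deploy this prompt" dialog.
GATEWAY_URL = "https://gateway.latitude.so/api/v3/projects/123/versions/live/documents/run"
API_KEY = "your-latitude-api-key"

def build_run_request(prompt_path: str, parameters: dict) -> urllib.request.Request:
    """Build a POST request that runs a published prompt with the given parameters."""
    payload = json.dumps({"path": prompt_path, "parameters": parameters}).encode()
    return urllib.request.Request(
        GATEWAY_URL,
        data=payload,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_run_request("product-description", {
    "product_name": "Smart Home Assistant",
    "features": "Voice control, Smart home integration, AI-powered recommendations",
    "target_audience": "Tech-savvy homeowners",
    "tone": "Professional but friendly",
    "word_count": "150",
})
# To actually send it (requires a valid API key):
# response = urllib.request.urlopen(req)
```

The official SDKs wrap this same gateway call, so the parameter names you pass from code are the ones you defined as placeholders in the prompt.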
Next Steps
Now that you’ve created and evaluated your first prompt at scale, you can: