This guide will walk you through creating and testing your first prompt in Latitude.

1. Add a Provider

  1. Navigate to Settings > “Providers”
  2. Click “Create Provider”
  3. Select your provider (e.g., OpenAI, Anthropic, Google, etc.)
  4. Enter your API key and any required configuration
  5. Click “Save”

2. Create a Project

  1. Go to the dashboard and click “New Project”
  2. Enter a name for your project
  3. Click “Create Project”

3. Create Your First Prompt

  1. Notice the three icons in the sidebar for creating a new folder, prompt, and agent, respectively
  2. Click the second icon; a new prompt name input will appear in the sidebar
  3. Enter a name for your prompt and press the “Enter” key
You will be redirected to the prompt editor. Write the following prompt in the editor:
---
provider: OpenAI
model: gpt-4.1-mini
---

This is a response from an NPS survey:

Score out of 10: {{ score }}
Message: {{ message }}

Analyze the sentiment based on both the score and message. Prioritize identifying the primary concern in the feedback, focusing on the core issue mentioned by the user. Categorize the sentiment into one of the following categories:

- Product Features and Functionality
- User Interface (UI) and User Experience (UX)
- Performance and Reliability
- Customer Support and Service
- Onboarding and Learning Curve
- Pricing and Value Perception
- Integrations and Compatibility
- Scalability and Customization
- Feature Requests and Product Roadmap
- Competitor Comparison
- General Feedback (Neutral/Non-specific)

Return only one of the categories.
The prompt is automatically saved as you write it.
Notice how every prompt run has a human-in-the-loop evaluation automatically generated for it.
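
The {{ score }} and {{ message }} placeholders are parameters: when the prompt runs, the values you provide are substituted into the text. As a rough illustration of that substitution (this is only a sketch of the idea, not Latitude’s actual template engine):

// Illustrative only: replace {{ name }} placeholders with supplied values.
function renderPrompt(template: string, params: Record<string, string>): string {
  return template.replace(/\{\{\s*(\w+)\s*\}\}/g, (_, name) => params[name] ?? '');
}

const rendered = renderPrompt(
  'Score out of 10: {{ score }}\nMessage: {{ message }}',
  { score: '5', message: 'Product is working but occasionally laggy.' },
);
// rendered: 'Score out of 10: 5\nMessage: Product is working but occasionally laggy.'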

4. Test in the Playground

  1. In the prompt editor, notice the parameter inputs on the right side
  2. Fill in the parameter values:
    • score: “5”
    • message: “Product is working but occasionally laggy.”
  3. Click “Run” in the bottom right corner to see the generated output
  4. Try different inputs to test how your prompt performs in various scenarios

5. Observe Results and Logs

  1. Navigate to the “Logs” section
  2. You’ll see a record of all interactions with your prompt
  3. Click on any log entry to view details (an illustrative example follows this list), including:
    • Input parameters
    • Generated output
    • Model used
    • Response time
    • Evaluation results (if available)
  4. Use filters to find specific logs based on time, status, or content
  5. Select logs and save them to a dataset
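As a purely hypothetical illustration of what a single log entry holds, the object below mirrors the details listed above; the field names and values are assumptions, not Latitude’s actual log schema:

// Hypothetical shape of a single log entry (illustrative field names and values).
const exampleLogEntry = {
  parameters: { score: '5', message: 'Product is working but occasionally laggy.' },
  output: 'Performance and Reliability',
  model: 'gpt-4.1-mini',
  responseTimeMs: 840, // illustrative value
  evaluationResults: [], // populated once evaluations have run for this log
};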

6. Create an Evaluation

  1. In the prompt editor, go to the “Evaluations” tab
  2. Click “Add Evaluation”
  3. Select “LLM as Judge” as the evaluation type
  4. Choose a title, description, criteria, and pass-fail conditions, and select a result type: number, boolean, or text
  5. Click “Create evaluation”
  6. In the evaluation page, click “Run Experiment”
  7. Select the dataset you recently created from the logs (an example layout follows this list), map the prompt parameters to the dataset columns, and click “Run Experiment”
  8. Watch the experiment results in real time
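
For that parameter mapping, the dataset needs one column per prompt parameter. Purely as an illustration (the rows and column names below are hypothetical), a dataset saved from logs maps onto the prompt parameters like this:

// Hypothetical rows of a dataset saved from logs: one entry per log,
// one column per prompt parameter ({{ score }} and {{ message }}).
const datasetRows = [
  { score: '5', message: 'Product is working but occasionally laggy.' },
  { score: '9', message: 'Love the new dashboard and the export options.' },
];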

7. Publish and Deploy the Prompt

  1. In the prompt editor, click the “Publish” button in the top sidebar
  2. Add a version note describing the prompt (e.g., “Initial version of product description generator”)
  3. Click “Publish Version”
  4. Your prompt is now available as an API endpoint through the AI Gateway
  5. Click the “Deploy this prompt” button in the top header
  6. Copy your preferred integration method (SDK or HTTP API); a rough SDK sketch follows below
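
As a minimal sketch of what the SDK integration can look like, here is a hedged TypeScript example. Treat the package name, client options, method name, project ID, and prompt path as assumptions, and copy the exact snippet (with your real project ID and API key) from the deploy dialog:

// A minimal sketch, assuming the Latitude TypeScript SDK; names and IDs below
// are placeholders, so prefer the snippet shown in the deploy dialog.
// (Top-level await assumes an ES module context.)
import { Latitude } from '@latitude-data/sdk'

const latitude = new Latitude('your-api-key', { projectId: 123 })

// Run the published prompt by its path, passing the same parameters used in the Playground.
const result = await latitude.prompts.run('your-prompt-name', {
  parameters: {
    score: 5,
    message: 'Product is working but occasionally laggy.',
  },
})

console.log(result)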

Next Steps

Now that you’ve created and evaluated your first prompt at scale, you can: