This guide will walk you through creating and testing your first prompt in Latitude without writing any code.

1. Add a Provider

  1. Navigate to Settings > “Providers”
  2. Click “Create Provider”
  3. Select your provider (e.g., OpenAI, Anthropic, Google, etc.)
  4. Enter your API key and any required configuration
  5. Click “Save”

2. Create a Project

  1. Go to the dashboard and click “New Project”
  2. Enter a name for your project
  3. Click “Create Project”

3. Create Your First Prompt

  1. Notice the three icons in the sidebar for creating a new folder, prompt, and file, respectively
  2. Click the second icon; a new prompt name input will appear in the sidebar
  3. Enter a name for your prompt and press the “Enter” key

You will be redirected to the prompt editor. Write the following prompt in the editor:

---
provider: Latitude
model: gpt-4o-mini
---

Write a compelling product description for {{product_name}} with the following features:
{{features}}

The description should be appropriate for {{target_audience}} and highlight the main benefits.
Tone: {{tone}}
Length: {{word_count}} words

The prompt is automatically saved as you write it. The block between the --- markers is the prompt’s configuration (which provider and model to run), and each {{ }} placeholder defines a parameter you’ll fill in at run time.

4. Test in the Playground

  1. In the prompt editor, notice the parameter inputs on the right side (one input for each {{ }} placeholder in your prompt)
  2. Fill in the parameter values (you’ll see the same values again in the payload sketch below):
    • product_name: “Smart Home Assistant”
    • features: “Voice control, Smart home integration, AI-powered recommendations”
    • target_audience: “Tech-savvy homeowners”
    • tone: “Professional but friendly”
    • word_count: “150”
  3. Click “Run” in the bottom right corner to see the generated output
  4. Try different inputs to test how your prompt performs in various scenarios

Bonus: notice that an evaluation is automatically generated for every prompt run.
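
The parameter values you filled in above are just a key-value map. When you later call the prompt programmatically (see step 7), the same values travel as a parameters object. Here is a minimal TypeScript sketch, assuming the keys match your {{ }} placeholder names exactly:

// Parameter values mirroring the Playground inputs above; the keys must
// match the placeholder names defined in the prompt.
const parameters = {
  product_name: "Smart Home Assistant",
  features: "Voice control, Smart home integration, AI-powered recommendations",
  target_audience: "Tech-savvy homeowners",
  tone: "Professional but friendly",
  word_count: "150",
};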

5. Observe Results and Logs

  1. Navigate to the “Logs” section
  2. You’ll see a record of all interactions with your prompt
  3. Click on any log entry to view details including:
    • Input parameters
    • Generated output
    • Model used
    • Response time
    • Evaluation results (if available)
  4. Use filters to find specific logs based on time, status, or content
  5. Select one or more logs and save them to a dataset (you’ll use this dataset in the next step)

6. Create an Evaluation

  1. In the prompt editor, go to the “Evaluations” tab
  2. Click “Add Evaluation”
  3. Select “LLM as Judge” as the evaluation type
  4. Choose a title and a description, and select a result type: number, boolean, or text (a sample judge instruction appears after this list)
  5. Click “Create evaluation”
  6. In the evaluation page, click “Run Experiment”
  7. Select the dataset you created from logs in step 5, map the prompt parameters to the dataset columns, and click “Run Experiment”
  8. Watch evaluation results stream in real time
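
If your evaluation lets you write the judge’s instructions yourself, keep them concrete and tied to the prompt’s parameters. A hypothetical instruction for a number result type (the exact fields you’re asked to fill in may differ):

Rate how well the generated product description matches the requested tone and stays close to the requested word count. Return a score from 1 to 5, where 5 means both constraints are fully satisfied.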

7. Publish and Deploy the Prompt

  1. In the prompt editor, click the “Publish” button at the top of the sidebar
  2. Add a version note describing the prompt (e.g., “Initial version of product description generator”)
  3. Click “Publish Version”
  4. Your prompt is now available as an API endpoint through the AI Gateway
  5. Click the “Deploy this prompt” button in the top header
  6. Copy your preferred integration method (SDK or HTTP API); a sketch of an SDK call follows below
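
For illustration, here is what that SDK call might look like in TypeScript. The API key, project id, and prompt path are placeholders, and method names can vary between SDK versions, so treat the snippet shown in the “Deploy this prompt” dialog as the authoritative version:

import { Latitude } from "@latitude-data/sdk";

// Placeholder credentials: use the API key and project id from your workspace.
const latitude = new Latitude("YOUR_API_KEY", { projectId: 123 });

// "product-description" stands in for your prompt's path in the sidebar.
const result = await latitude.prompts.run("product-description", {
  parameters: {
    product_name: "Smart Home Assistant",
    features: "Voice control, Smart home integration, AI-powered recommendations",
    target_audience: "Tech-savvy homeowners",
    tone: "Professional but friendly",
    word_count: "150",
  },
});

console.log(result);

If you prefer the HTTP API, the same parameters object travels as the JSON request body; copy the exact URL and headers from the dialog.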

Next Steps

Now that you’ve created and evaluated your first prompt at scale, you can: