1. Add a Provider
- Navigate to Settings > “Providers”
- Click “Create Provider”
- Select your provider (e.g., OpenAI, Anthropic, Google)
- Enter your API key and any required configuration
- Click “Save”
 
2. Create a Project
- Go to the dashboard and click “New Project”
- Enter a name for your project
- Click “Create Project”
 
3. Create Your First Prompt
- Notice the three icons in the sidebar for creating a new folder, a prompt, and an agent, respectively
- Click the second icon; a new prompt input will appear in the sidebar
- Enter a name for your prompt and press Enter
 

Notice how every prompt run has a human-in-the-loop evaluation automatically generated for it.
4. Test in the Playground
- In the prompt editor, notice the parameter inputs on the right side
- Fill in the parameter values:
  - `score`: “5”
  - `message`: “Product is working but occasionally laggy.”
- Click “Run” in the bottom right corner to see the generated output
- Try different inputs to test how your prompt performs in various scenarios
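Under the hood, the playground substitutes your parameter values into the prompt template before sending it to the model. A minimal sketch of that substitution, assuming a `{{param}}`-style template (the template text and helper function here are illustrative, not the platform’s actual implementation):

```python
import re

def render_prompt(template: str, params: dict) -> str:
    """Replace {{name}} placeholders with the given parameter values."""
    return re.sub(r"\{\{(\w+)\}\}", lambda m: str(params[m.group(1)]), template)

# Hypothetical prompt using the two parameters from the step above.
template = "Customer rating: {{score}}/5. Feedback: {{message}}"
filled = render_prompt(template, {
    "score": "5",
    "message": "Product is working but occasionally laggy.",
})
print(filled)
# Customer rating: 5/5. Feedback: Product is working but occasionally laggy.
```

Each playground run is one such rendered prompt sent to the provider you configured in step 1.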
 

5. Observe Results and Logs
- Navigate to the “Logs” section
- You’ll see a record of all interactions with your prompt
- Click on any log entry to view details including:
  - Input parameters
  - Generated output
  - Model used
  - Response time
  - Evaluation results (if available)
- Use filters to find specific logs based on time, status, or content
- Select logs and save them to a dataset
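Conceptually, each log entry is a record with the fields listed above, and the filters narrow that set down before you save it to a dataset. A hedged sketch of that filtering (the field names and records are illustrative, not the platform’s actual log schema):

```python
from datetime import datetime

# Illustrative log records; real entries come from the Logs UI.
logs = [
    {"status": "success", "created_at": datetime(2024, 5, 1, 10, 0), "output": "…"},
    {"status": "error",   "created_at": datetime(2024, 5, 1, 11, 0), "output": ""},
    {"status": "success", "created_at": datetime(2024, 5, 2, 9, 30), "output": "…"},
]

def filter_logs(logs, status=None, since=None):
    """Keep logs matching a status and/or created at or after a timestamp."""
    result = logs
    if status is not None:
        result = [entry for entry in result if entry["status"] == status]
    if since is not None:
        result = [entry for entry in result if entry["created_at"] >= since]
    return result

recent_ok = filter_logs(logs, status="success", since=datetime(2024, 5, 2))
print(len(recent_ok))  # 1
```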
 

6. Create an Evaluation
- In the prompt editor, go to the “Evaluations” tab
- Click “Add Evaluation”
- Select “LLM as Judge” as the evaluation type
- Choose a title, description, criteria, and pass/fail conditions, and select a result type: number, boolean, or text
- Click “Create evaluation”
- In the evaluation page, click “Run Experiment”
- Select the dataset you created earlier from logs, map prompt parameters to the dataset columns, and click “Run Experiment”
- Watch experiment results in real time
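An “LLM as Judge” evaluation sends each output to a model along with your criteria, then applies your pass/fail condition to the result. A minimal local sketch with a stubbed judge (the stub stands in for the real model call; all names and the scoring heuristic are illustrative):

```python
def judge_stub(criteria: str, output: str) -> int:
    """Stand-in for the judge model: score 1-5 with a trivial heuristic.
    A real setup would call an LLM with the criteria and the output."""
    return 5 if "laggy" not in output.lower() else 3

def evaluate(output: str, criteria: str, pass_threshold: int = 4) -> dict:
    """Return a numeric result plus the pass/fail condition applied to it."""
    score = judge_stub(criteria, output)
    return {"result": score, "passed": score >= pass_threshold}

row = evaluate("The product works flawlessly.",
               "Is the description accurate and positive?")
print(row)  # {'result': 5, 'passed': True}
```

Running an experiment applies this kind of evaluation to every row of the selected dataset, which is why the parameter-to-column mapping matters.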
 
7. Publish and Deploy the Prompt
- In the prompt editor, click the “Publish” button in the top bar
- Add a version note describing the prompt (e.g., “Initial version of product description generator”)
- Click “Publish Version”
- Your prompt is now available as an API endpoint through the AI Gateway
- Click the “Deploy this prompt” button in the top header
- Copy your preferred integration method (SDK or HTTP API)
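Once published, the prompt is callable over HTTP through the AI Gateway; the exact endpoint, headers, and body come from the “Deploy this prompt” dialog. A hedged sketch that only builds such a request without sending it (the URL, payload shape, and key below are placeholders, not the platform’s real API):

```python
import json
import urllib.request

# Placeholder values; copy the real ones from the deploy dialog.
GATEWAY_URL = "https://gateway.example.com/api/v1/prompts/run"
API_KEY = "your-api-key"

payload = {
    "prompt": "product-description-generator",  # hypothetical prompt path
    "parameters": {
        "score": "5",
        "message": "Product is working but occasionally laggy.",
    },
}

# Build the POST request; calling urllib.request.urlopen(req) would send it.
req = urllib.request.Request(
    GATEWAY_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Authorization": f"Bearer {API_KEY}",
             "Content-Type": "application/json"},
    method="POST",
)
print(req.get_method(), req.full_url)
```

The SDK option wraps this same request; the deploy dialog shows the exact snippet for your prompt.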
 
Next Steps
Now that you’ve created and evaluated your first prompt at scale, you can:
- Share your endpoint with developers for integration
- Create more complex prompts using PromptL syntax
- Set up ongoing evaluations to monitor quality
- Invite team members to collaborate