Learn how to configure your PromptL prompts
---
Configuration is defined at the beginning of a prompt, enclosed between triple dashes (`---`) and written in YAML format. Common options include:
- `model`: Specifies the model to use (e.g., `gpt-4`, `gpt-3.5-turbo`).
- `temperature`: Controls randomness in responses (higher = more creative, lower = more deterministic).
- `top_p`: Adjusts the nucleus sampling probability for token generation.
- `max_tokens`: Limits the number of tokens in the response.
- `stop`: Defines one or more sequences where the assistant will stop generating tokens.
- `presence_penalty`: Encourages or discourages mentioning new topics.
- `frequency_penalty`: Penalizes repeated phrases for more diverse responses.
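
For illustration, here is a sketch of a prompt that combines several of these options in its front matter. The parameter values and the prompt body are arbitrary examples, not recommended defaults; which parameters are supported, and their valid ranges, depends on the underlying provider and model.

```
---
model: gpt-4
temperature: 0.7          # moderately creative responses
top_p: 0.9                # nucleus sampling cutoff
max_tokens: 500           # cap the response length
stop:                     # stop generating at any of these sequences
  - "\n\n"
presence_penalty: 0.3     # nudge the model toward new topics
frequency_penalty: 0.5    # discourage repeated phrases
---

Your prompt content goes here.
```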