openai-responses
The `openai-responses` provider supports OpenAI's `/responses` endpoint, which uses the newer Responses API instead of the traditional Chat Completions API.
Read more about the differences between the Chat Completions API and the Responses API in OpenAI’s comparison guide.
If you're a new user, OpenAI recommends using the `openai-responses` provider instead of the `openai` provider.
`o1-mini` is not supported with the `openai-responses` provider.
Example:
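A minimal client definition might look like the following (the client name is illustrative):

```baml
client<llm> MyResponsesClient {
  provider openai-responses
  options {
    model "gpt-4o"
    api_key env.OPENAI_API_KEY
  }
}
```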
BAML-specific request options
These unique parameters (aka `options`) modify the API request sent to the provider.
Will be used to build the `Authorization` header, like so: `Authorization: Bearer $api_key`. Default: `env.OPENAI_API_KEY`
The base URL for the API.
Default: https://api.openai.com/v1
Additional headers to send with the request.
Example:
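A sketch of passing custom headers (the header name and value here are placeholders):

```baml
client<llm> ClientWithHeaders {
  provider openai-responses
  options {
    model "gpt-4o"
    api_key env.OPENAI_API_KEY
    headers {
      "X-My-Header" "my-value"
    }
  }
}
```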
Override the response format type. When using the `openai-responses` provider, this defaults to `"openai-responses"`. You can also use the standard `openai` provider with `client_response_type: "openai-responses"` to format the response as an `openai-responses` response.
Example:
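For instance, a sketch of the standard `openai` provider parsing Responses-style payloads (the client name is illustrative):

```baml
client<llm> GPT4oViaChat {
  provider openai
  options {
    model "gpt-4o"
    api_key env.OPENAI_API_KEY
    client_response_type "openai-responses"
  }
}
```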
The role to use if the role is not in the allowed_roles. Default: usually `"user"`, but some models like OpenAI's `gpt-4o` will use `"system"`. If `"user"` is not in `allowed_roles`, the first role in the list is picked instead; otherwise `"user"` is used.
Which roles should we forward to the API? Default: usually `["system", "user", "assistant"]`, but some models like OpenAI's `o1-mini` will use `["user", "assistant"]`. When building prompts, any role not in this list will be set to the `default_role`.
Which role metadata should we forward to the API? Default: `[]`. For example, you can set this to `["foo", "bar"]` to forward the cache policy to the API. If you do not set `allowed_role_metadata`, we will not forward any role metadata to the API even if it is set in the prompt.
Then in your prompt you can use something like:
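One possible shape, assuming `foo` and `bar` are metadata keys you listed in `allowed_role_metadata` (the keys and values here are placeholders, not real OpenAI parameters):

```baml
{{ _.role("user", foo={"type": "ephemeral"}, bar="1") }}
Hello, world.
```

Metadata attached to a role is only forwarded when its key appears in `allowed_role_metadata`.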
You can use the playground to see the raw curl request and inspect exactly what is being sent to the API.
Provider request parameters
These are parameters specific to the OpenAI Responses API that are passed through to the provider.
Controls the amount of reasoning effort the model should use.
Example:
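A hedged sketch, assuming the pass-through option mirrors the OpenAI Responses API's `reasoning.effort` field (check the provider docs for the exact option shape):

```baml
client<llm> ReasoningClient {
  provider openai-responses
  options {
    model "o3-mini"
    api_key env.OPENAI_API_KEY
    reasoning {
      effort "high"
    }
  }
}
```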
Most models support the Responses API; some of the most popular models are:
`o1-mini` is not supported with the `openai-responses` provider.
See OpenAI’s Responses API documentation for the latest available models.
Tools that the model can use during reasoning. Supports function calling and web search.
Example with web search:
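A sketch of enabling web search, assuming the tool type name matches the OpenAI Responses API's `web_search_preview` tool (verify the current tool name in OpenAI's docs):

```baml
client<llm> SearchClient {
  provider openai-responses
  options {
    model "gpt-4o"
    api_key env.OPENAI_API_KEY
    tools [
      {
        type "web_search_preview"
      }
    ]
  }
}
```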
Additional Use Cases
Image Input Support
The `openai-responses` provider supports image inputs for vision-capable models:
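For example, a function taking BAML's built-in `image` type (the client name is illustrative and assumed to be defined elsewhere):

```baml
function DescribeImage(img: image) -> string {
  client MyResponsesClient
  prompt #"
    {{ _.role("user") }}
    Describe this image in one sentence:
    {{ img }}
  "#
}
```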
Advanced Reasoning
Using reasoning models with high effort for complex problem solving:
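A sketch of a high-effort reasoning setup, assuming the pass-through `reasoning` option and an `o3-mini` reasoning model (names here are illustrative):

```baml
client<llm> DeepReasoningClient {
  provider openai-responses
  options {
    model "o3-mini"
    api_key env.OPENAI_API_KEY
    reasoning {
      effort "high"
    }
  }
}

function SolveProblem(problem: string) -> string {
  client DeepReasoningClient
  prompt #"
    {{ _.role("user") }}
    {{ problem }}
  "#
}
```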
Modular API Support
The `openai-responses` provider works with the Modular API for custom integrations:
For all other options, see the official OpenAI Responses API documentation.