openai-responses

The openai-responses provider supports OpenAI’s /responses endpoint, which uses the newer Responses API instead of the traditional Chat Completions API. Read more about the differences between the Chat Completions API and the Responses API in OpenAI’s comparison guide.

If you’re a new user, OpenAI recommends using the openai-responses provider instead of the openai provider.

o1-mini is not supported with the openai-responses provider.

Example:

BAML
client<llm> MyResponsesClient {
  provider "openai-responses"
  options {
    api_key env.MY_OPENAI_KEY
    model "gpt-4.1"
    reasoning {
      effort "medium"
    }
  }
}

BAML-specific request options

These unique parameters (aka options) modify the API request sent to the provider.

api_key
string

Will be used to build the Authorization header, like so: Authorization: Bearer $api_key

Default: env.OPENAI_API_KEY

base_url
string

The base URL for the API.

Default: https://api.openai.com/v1
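
For example, you might point the client at an OpenAI-compatible proxy or gateway. A minimal sketch, assuming a placeholder gateway URL (not a real endpoint):

BAML
client<llm> ProxiedResponsesClient {
  provider openai-responses
  options {
    base_url "https://my-gateway.example.com/v1"
    api_key env.MY_OPENAI_KEY
    model "gpt-4.1"
  }
}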

headers
object

Additional headers to send with the request.

Example:

BAML
client<llm> MyResponsesClient {
  provider openai-responses
  options {
    api_key env.MY_OPENAI_KEY
    model "gpt-4.1"
    headers {
      "X-My-Header" "my-value"
    }
  }
}

client_response_type
string

Override the response format type. When using the openai-responses provider, this defaults to "openai-responses".

You can also use the standard openai provider with client_response_type: "openai-responses" to have the response parsed as an openai-responses response.

Example:

BAML
client<llm> StandardOpenAIWithResponses {
  provider openai
  options {
    api_key env.MY_OPENAI_KEY
    model "gpt-4.1"
    client_response_type "openai-responses"
  }
}

default_role
string

The role to use if a message’s role is not in allowed_roles. Default: "user" usually, but some models, like OpenAI’s gpt-4o, will use "system".

If "user" is not in allowed_roles, the first role in allowed_roles is picked instead.
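
A minimal sketch overriding the default (the client name and the choice of "system" are illustrative):

BAML
client<llm> SystemDefaultClient {
  provider openai-responses
  options {
    model "gpt-4.1"
    default_role "system"
  }
}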

allowed_roles
string[]

Which roles should we forward to the API? Default: ["system", "user", "assistant"] usually, but some models, like OpenAI’s o1-mini, will use ["user", "assistant"].

When building prompts, any role not in this list will be set to the default_role.
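
A minimal sketch restricting the forwarded roles (the client name and model are illustrative):

BAML
client<llm> UserAssistantOnlyClient {
  provider openai-responses
  options {
    model "gpt-4.1"
    allowed_roles ["user", "assistant"]
  }
}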

allowed_role_metadata
string[]

Which role metadata should we forward to the API? Default: []

For example, you can set this to ["foo", "bar"] to forward the foo and bar metadata keys (e.g. a cache policy) to the API.

If you do not set allowed_role_metadata, we will not forward any role metadata to the API even if it is set in the prompt.

Then in your prompt you can use something like:

BAML
client<llm> Foo {
  provider openai
  options {
    allowed_role_metadata: ["foo", "bar"]
  }
}

client<llm> FooWithout {
  provider openai
  options {
  }
}

template_string Foo() #"
  {{ _.role('user', foo={"type": "ephemeral"}, bar="1", cat=True) }}
  This will have foo and bar metadata, but not cat. But only for Foo, not FooWithout.
  {{ _.role('user') }}
  This will have none of the role metadata for Foo or FooWithout.
"#

You can use the playground to inspect the raw curl request and see exactly what is being sent to the API.

Provider request parameters

These are parameters specific to the OpenAI Responses API that are passed through to the provider.

reasoning.effort
string

Controls the amount of reasoning effort the model should use.

Value    Description
low      Minimal reasoning effort
medium   Balanced reasoning effort
high     Maximum reasoning effort

Example:

BAML
client<llm> HighReasoningClient {
  provider openai-responses
  options {
    model "o4-mini"
    reasoning {
      effort "high"
    }
  }
}

model
string

Most models support the Responses API. Some of the most popular are:

Model      Description
gpt-4.1    Enhanced reasoning model
gpt-4      Standard GPT-4 model
o4-mini    Advanced reasoning model

o1-mini is not supported with the openai-responses provider.

See OpenAI’s Responses API documentation for the latest available models.

tools
array

Tools that the model can use during reasoning. Supports function calling and web search.

Example with web search:

BAML
client<llm> WebSearchClient {
  provider openai-responses
  options {
    model "gpt-4.1"
    tools [
      {
        type "web_search_preview"
      }
    ]
  }
}

Additional Use Cases

Image Input Support

The openai-responses provider supports image inputs for vision-capable models:

BAML
client<llm> OpenAIResponsesVision {
  provider openai-responses
  options {
    model "gpt-4.1"
  }
}

function AnalyzeImage(image: image | string) -> string {
  client OpenAIResponsesVision
  prompt #"
    {{ _.role("user") }}
    What is in this image?
    {{ image }}
  "#
}

Advanced Reasoning

Using reasoning models with high effort for complex problem solving:

BAML
client<llm> AdvancedReasoningClient {
  provider openai-responses
  options {
    model "o4-mini"
    reasoning {
      effort "high"
    }
  }
}

function SolveComplexProblem(problem: string) -> string {
  client AdvancedReasoningClient
  prompt #"
    {{ _.role("user") }}
    Solve this step by step: {{ problem }}
  "#
}

Modular API Support

The openai-responses provider works with the Modular API for custom integrations:

Python
from openai import AsyncOpenAI
from openai.types.responses import Response
import typing

client = AsyncOpenAI()

# Build the HTTP request for MyFunction without sending it.
req = await b.request.MyFunction("input")

# Send the request body through the OpenAI SDK's Responses API.
res = typing.cast(Response, await client.responses.create(**req.body.json()))

# Parse the raw output text back into MyFunction's return type.
parsed = b.parse.MyFunction(res.output_text)

For all other options, see the official OpenAI Responses API documentation.