openai-generic

The openai-generic provider supports all APIs that use OpenAI’s request and response formats, such as Groq, HuggingFace, Ollama, OpenRouter, and Together AI.

Example:

BAML
client<llm> MyClient {
  provider "openai-generic"
  options {
    base_url "https://api.provider.com"
    model "<provider-specified-format>"
  }
}
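
For instance, here is a sketch of pointing this provider at a locally running Ollama server; the base_url below assumes Ollama's default OpenAI-compatible endpoint, and the model name is a placeholder for whatever model you have pulled locally.

BAML
client<llm> MyOllamaClient {
  provider "openai-generic"
  options {
    // Assumes Ollama's default OpenAI-compatible endpoint.
    base_url "http://localhost:11434/v1"
    // Placeholder model name; use any model you have pulled.
    model "llama3"
  }
}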

Non-forwarded options

base_url
string

The base URL for the API.

Default: https://api.openai.com/v1

default_role
string

The default role for any prompts that don’t specify a role.

We don’t do any validation of this field, so you can pass any string you wish.

Default: system
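
For example, a minimal sketch (with a placeholder base_url and model) of a client that sends unmarked prompt text as user messages, which can help with providers that reject the system role:

BAML
client<llm> MyClient {
  provider "openai-generic"
  options {
    base_url "https://api.provider.com"
    model "<provider-specified-format>"
    // Prompt text without an explicit role is sent as "user"
    // instead of the default "system".
    default_role "user"
  }
}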

api_key
string

Will be used to build the Authorization header, like so: Authorization: Bearer $api_key

If api_key is not set, or is set to an empty string, the Authorization header will not be sent.

Default: <none>
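
For example, a sketch of reading the key from an environment variable (the variable name here is a placeholder for whatever your provider uses):

BAML
client<llm> MyClient {
  provider "openai-generic"
  options {
    base_url "https://api.provider.com"
    model "<provider-specified-format>"
    // Sent as: Authorization: Bearer <value of PROVIDER_API_KEY>
    api_key env.PROVIDER_API_KEY
  }
}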

headers
object

Additional headers to send with the request.

Example:

BAML
client<llm> MyClient {
  provider "openai-generic"
  options {
    base_url "https://api.provider.com"
    model "<provider-specified-format>"
    headers {
      "X-My-Header" "my-value"
    }
  }
}

allowed_role_metadata
string[]

Which role metadata should we forward to the API? Default: []

For example, you can set this to ["foo", "bar"] to forward the cache policy to the API.

If you do not set allowed_role_metadata, we will not forward any role metadata to the API even if it is set in the prompt.

Then in your prompt you can use something like:

BAML
client<llm> Foo {
  provider openai
  options {
    allowed_role_metadata: ["foo", "bar"]
  }
}

client<llm> FooWithout {
  provider openai
  options {
  }
}

template_string Foo() #"
  {{ _.role('user', foo={"type": "ephemeral"}, bar="1", cat=True) }}
  This will have foo and bar, but not cat metadata. But only for Foo, not FooWithout.
  {{ _.role('user') }}
  This will have none of the role metadata for Foo or FooWithout.
"#

You can use the playground to inspect the raw curl request and see exactly what is being sent to the API.

supports_streaming
boolean

Whether the internal LLM client should use the streaming API. Default: true

For example:

BAML
client<llm> MyClientWithoutStreaming {
  provider anthropic
  options {
    model claude-3-haiku-20240307
    api_key env.ANTHROPIC_API_KEY
    max_tokens 1000
    supports_streaming false
  }
}

function MyFunction() -> string {
  client MyClientWithoutStreaming
  prompt #"Write a short story"#
}

Python
# This will be streamed from your Python code's perspective,
# but under the hood it will call the non-streaming HTTP API
# and then return a streamable response with a single event.
b.stream.MyFunction()

# This will work exactly the same as before.
b.MyFunction()

Forwarded options

messages
DO NOT USE

BAML will auto-construct this field for you from the prompt.

stream
DO NOT USE

BAML will auto-construct this field for you based on how you call the client in your code.

model
string

The model to use.

For OpenAI, this might be "gpt-4o-mini"; for Ollama, this might be "llama2". The exact format depends on your API provider; check their documentation, since we forward this value to them as-is.
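
For instance, a sketch of an OpenRouter client; the model slug below follows OpenRouter's vendor/model convention and is illustrative rather than a recommendation:

BAML
client<llm> MyOpenRouterClient {
  provider "openai-generic"
  options {
    // OpenRouter exposes an OpenAI-compatible API at this URL.
    base_url "https://openrouter.ai/api/v1"
    // OpenRouter model slugs use a "vendor/model" format.
    model "openai/gpt-4o-mini"
    api_key env.OPENROUTER_API_KEY
  }
}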

For all other options, see the official OpenAI API documentation.