ollama
Ollama supports the OpenAI client, allowing you to use the openai-generic provider with an overridden base_url.
Note that to call Ollama, you must use its OpenAI-compatible /v1 endpoint. See Ollama's OpenAI compatibility documentation.
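For example, a client along these lines (the client name and model are illustrative placeholders):

```baml
client<llm> MyOllamaClient {
  provider "openai-generic"
  options {
    base_url "http://localhost:11434/v1"
    model "llama3"
  }
}
```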
If requests are blocked by CORS (the OLLAMA_ORIGINS environment variable controls which origins Ollama accepts), start the server with permissive origins:
OLLAMA_ORIGINS='*' ollama serve
Learn more here.

BAML-specific request options
These unique parameters (aka options) modify the API request sent to the provider. You can use them to modify the headers and base_url, for example.
base_url
The base URL for the API. Default: http://localhost:11434/v1
Note the /v1 at the end of the URL. See Ollama's OpenAI compatibility documentation.

headers
Additional headers to send with the request.
Example:
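A sketch of a client with a custom header (the header name and value are hypothetical):

```baml
client<llm> MyClient {
  provider ollama
  options {
    model "llama3"
    headers {
      "X-My-Header" "my-value"
    }
  }
}
```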
default_role
The role to use if a role is not in allowed_roles. Default: "user" usually, but some models like OpenAI's gpt-4o will use "system".
If "user" is not in allowed_roles, the first role in allowed_roles is picked; otherwise "user" is used.
allowed_roles
Which roles should we forward to the API? Default: ["system", "user", "assistant"] usually, but some models like OpenAI's o1-mini will use ["user", "assistant"].
When building prompts, any role not in this list will be set to the default_role.
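As a sketch, a client restricting the forwarded roles might look like this (the client name, model, and role values are illustrative):

```baml
client<llm> MyClient {
  provider ollama
  options {
    model "llama3"
    allowed_roles ["user", "assistant"]
    default_role "user"
  }
}
```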
allowed_role_metadata
Which role metadata should we forward to the API? Default: []
For example, you can set this to ["foo", "bar"] to forward the cache policy to the API.
If you do not set allowed_role_metadata, we will not forward any role metadata to the API even if it is set in the prompt.
Then in your prompt you can use something like:
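For example (the foo, bar, and baz metadata keys here are hypothetical, and assume a client configured with allowed_role_metadata ["foo", "bar"], so baz would be dropped):

```baml
{{ _.role('user', foo={"type": "ephemeral"}, bar="1", baz="this will not be forwarded") }}
```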
You can use the playground to inspect the raw curl request and see exactly what is being sent to the API.
supports_streaming
Whether the internal LLM client should use the streaming API. Default: true
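For example, a sketch of a client with streaming disabled (client name and model are illustrative):

```baml
client<llm> MyClient {
  provider ollama
  options {
    model "llama3"
    supports_streaming false
  }
}
```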
Provider request parameters
These are other parameters that are passed through to the provider without modification by BAML. For example, if the request has a temperature field, you can define it in the client here so every call has that value set.
Consult the specific provider's documentation for more information.
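For example, a sketch of a client that pins temperature (model name and value are illustrative):

```baml
client<llm> MyClient {
  provider ollama
  options {
    model "llama3"
    temperature 0.1
  }
}
```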
messages
BAML will auto construct this field for you from the prompt.
stream
BAML will auto construct this field for you based on how you call the client in your code.
model
The model to use.
For the most up-to-date list of models supported by Ollama, see their Model Library.
Example: "mixtral:8x22b"