anthropic
The anthropic provider supports all APIs that use the same interface as the /v1/messages endpoint.
Example:
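A minimal sketch (the client name and model are illustrative):

```baml
client<llm> MyClient {
  provider anthropic
  options {
    model "claude-3-5-sonnet-latest"
    api_key env.ANTHROPIC_API_KEY
  }
}
```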
BAML-specific request options
These unique parameters (aka options) modify the API request sent to the provider. You can use this to modify the headers and base_url, for example.
api_key: Will be passed as a bearer token (Authorization: Bearer $api_key). Default: env.ANTHROPIC_API_KEY
base_url: The base URL for the API. Default: https://api.anthropic.com
headers: Additional headers to send with the request.
Unless specified with a different value, we inject the following headers:
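At the time of writing, the injected header is the Anthropic API version header (the exact value may change between BAML versions):

```
anthropic-version: 2023-06-01
```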
Example:
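A sketch of overriding headers (the header name and value are illustrative):

```baml
client<llm> MyClient {
  provider anthropic
  options {
    api_key env.ANTHROPIC_API_KEY
    headers {
      "X-My-Header" "my-value"
    }
  }
}
```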
default_role: The role to use if a message's role is not in allowed_roles. Default: "user" usually, but some models, like OpenAI's gpt-4o, will use "system". Specifically, the default is the first role in allowed_roles if "user" is not in that list, otherwise "user".
allowed_roles: Which roles should we forward to the API? Default: ["system", "user", "assistant"] usually, but some models, like OpenAI's o1-mini, will use ["user", "assistant"]. When building prompts, any role not in this list will be set to the default_role.
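A sketch of overriding both options (the values are illustrative):

```baml
client<llm> MyClient {
  provider anthropic
  options {
    model "claude-3-5-sonnet-latest"
    allowed_roles ["user", "assistant"]
    default_role "user"
  }
}
```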
allowed_role_metadata: Which role metadata should we forward to the API? Default: []
For example, you can set this to ["cache_control"] to forward the cache policy to the API. If you do not set allowed_role_metadata, we will not forward any role metadata to the API even if it is set in the prompt.
Then in your prompt you can use something like:
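A sketch (the client, function, and metadata values are illustrative):

```baml
client<llm> CachingClient {
  provider anthropic
  options {
    allowed_role_metadata ["cache_control"]
  }
}

function Summarize(text: string) -> string {
  client CachingClient
  prompt #"
    {{ _.role("user", cache_control={"type": "ephemeral"}) }}
    Summarize the following text: {{ text }}
  "#
}
```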
You can use the playground to inspect the raw curl request and see exactly what is being sent to the API.
supports_streaming: Whether the internal LLM client should use the streaming API. Default: true
Then in your code you can use something like:
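A sketch, assuming a generated baml_client and a BAML function named MyFunction (both illustrative):

```python
from baml_client import b

# This looks like a stream from the caller's perspective, but if
# supports_streaming is false, BAML calls the non-streaming HTTP API
# under the hood and returns the response as a single event.
stream = b.stream.MyFunction("some input")
for partial in stream:
    print(partial)
final = stream.get_final_response()
```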
finish_reason_allow_list: Which finish reasons are allowed? Default: null
Will raise a BamlClientFinishReasonError if the finish reason is not in the allow list. See Exceptions for more details.
Note: only one of finish_reason_allow_list or finish_reason_deny_list can be set.
For example, you can set this to ["stop"] to allow only the stop finish reason; all other finish reasons (e.g. length) will be treated as failures that PREVENT fallbacks and retries (similar to parsing errors).
Then in your code you can use something like:
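A sketch, assuming a generated baml_client and a function named MyFunction (the import path for the error type may differ by BAML version):

```python
from baml_client import b
from baml_py.errors import BamlClientFinishReasonError  # import path may vary by version

try:
    result = b.MyFunction("some input")
except BamlClientFinishReasonError as e:
    # The finish reason was not in the allow list (e.g. "length" instead of "stop").
    print(f"Unexpected finish reason: {e}")
```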
finish_reason_deny_list: Which finish reasons are denied? Default: null
Will raise a BamlClientFinishReasonError if the finish reason is in the deny list. See Exceptions for more details.
Note: only one of finish_reason_allow_list or finish_reason_deny_list can be set.
For example, you can set this to ["length"] to stop the function from continuing if the finish reason is length (i.e. the LLM output was cut off because it was too long).
Then in your code you can use something like:
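A sketch along the same lines (the names and import path are illustrative):

```python
from baml_client import b
from baml_py.errors import BamlClientFinishReasonError  # import path may vary by version

try:
    result = b.MyFunction("some input")
except BamlClientFinishReasonError as e:
    # The finish reason was in the deny list (e.g. "length": the output was truncated).
    print(f"Denied finish reason: {e}")
```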
Provider request parameters
These are other parameters that are passed through to the provider without modification by BAML. For example, if the request has a temperature field, you can define it in the client here so every call has it set.
Consult the specific provider's documentation for more information.
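For example (a minimal sketch; the model name and temperature are illustrative):

```baml
client<llm> MyClient {
  provider anthropic
  options {
    model "claude-3-5-sonnet-latest"
    // Passed through to Anthropic unchanged on every call.
    temperature 0.5
  }
}
```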
system: BAML will auto construct this field for you from the prompt, if necessary. Only the first system message will be used; all subsequent ones will be cast to the assistant role.
messages: BAML will auto construct this field for you from the prompt.
stream: BAML will auto construct this field for you based on how you call the client in your code.
model: The model to use. See the Anthropic docs for the latest list of all models. You can pass any model name you wish; we will not check whether it exists.
max_tokens: The maximum number of tokens to generate. Default: 4096
For all other options, see the official Anthropic API documentation.