Client Registry

If you need to modify the model or parameters of an LLM client at runtime, you can pass a ClientRegistry to any BAML function.

import os
from baml_py import ClientRegistry
from baml_client.async_client import b  # the generated BAML client

async def run():
    cr = ClientRegistry()
    # Creates a new client
    cr.add_llm_client(name='MyAmazingClient', provider='openai', options={
        "model": "gpt-4o",
        "temperature": 0.7,
        "api_key": os.environ.get('OPENAI_API_KEY')
    })
    # Sets MyAmazingClient as the primary client
    cr.set_primary('MyAmazingClient')

    # ExtractResume will now use MyAmazingClient as the calling client
    res = await b.ExtractResume("...", { "client_registry": cr })

ClientRegistry Interface

Note: ClientRegistry is imported from baml_py in Python and from @boundaryml/baml in TypeScript, not from baml_client.

As we mature ClientRegistry, we will add a more type-safe and ergonomic interface directly in baml_client. See GitHub issue #766.

Methods use snake_case in Python and camelCase in TypeScript.

add_llm_client / addLlmClient

Adds an LLM client to the registry.

name
string (required)

The name of the client.

Using the exact same name as a client defined in a .baml file overwrites that client whenever the ClientRegistry is used.
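
For example, suppose your .baml files define a client named CustomGPT4o (a hypothetical name). Registering a client under the same name replaces it for any call that receives this registry. A minimal sketch, reusing the imports from the example above and run inside an async function:

cr = ClientRegistry()
# Same name as the (hypothetical) client in your .baml files, so this definition takes precedence
cr.add_llm_client(name='CustomGPT4o', provider='openai', options={
    "model": "gpt-4o-mini",  # e.g. swap to a cheaper model at runtime
    "api_key": os.environ.get('OPENAI_API_KEY')
})
# No set_primary needed: functions whose .baml definition references CustomGPT4o pick up the override
res = await b.ExtractResume("...", { "client_registry": cr })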

provider
string (required)

This configures which provider to use. The provider is responsible for making the actual API calls to the LLM service, and it determines the shape of the HTTP request the BAML runtime sends.

| Provider Name | Docs | Notes |
| --- | --- | --- |
| anthropic | Anthropic | Supports the /v1/messages endpoint |
| aws-bedrock | AWS Bedrock | Supports the Converse and ConverseStream endpoints |
| google-ai | Google AI | Supports Google AI’s generateContent and streamGenerateContent endpoints |
| vertex-ai | Vertex AI | Supports Vertex’s generateContent and streamGenerateContent endpoints |
| openai | OpenAI | Supports the /chat/completions endpoint |
| azure-openai | Azure OpenAI | Supports Azure’s /chat/completions endpoint |
| openai-generic | OpenAI (generic) | Any other provider that supports OpenAI’s /chat/completions endpoint |

A non-exhaustive list of providers you can use with openai-generic (see the sketch after this list):

- Azure AI Foundry
- Groq
- Hugging Face
- Keywords AI
- LiteLLM
- LM Studio
- Ollama
- OpenRouter
- TogetherAI
- Unify AI
- vLLM
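
For instance, you can point openai-generic at a local Ollama server. This is a sketch; it assumes Ollama’s OpenAI-compatible endpoint at its default address and a model you have already pulled locally:

cr = ClientRegistry()
cr.add_llm_client(name='LocalLlama', provider='openai-generic', options={
    "base_url": "http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
    "model": "llama3.1"  # any model available on your local Ollama install
})
cr.set_primary('LocalLlama')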

We also have some special providers that allow composing clients together:

| Provider Name | Docs | Notes |
| --- | --- | --- |
| fallback | Fallback | Used to chain models conditional on failures |
| round-robin | Round Robin | Used to load balance |
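
Composite clients can be registered the same way; their options refer to other registered clients by name. Here is a sketch of a fallback chain, assuming the fallback provider accepts a strategy list of client names, as it does in .baml files:

cr.add_llm_client(name='Primary', provider='openai', options={
    "model": "gpt-4o",
    "api_key": os.environ.get('OPENAI_API_KEY')
})
cr.add_llm_client(name='Backup', provider='anthropic', options={
    "model": "claude-3-5-sonnet-20240620",
    "max_tokens": 1024,
    "api_key": os.environ.get('ANTHROPIC_API_KEY')
})
# Tries Primary first; falls back to Backup on failure
cr.add_llm_client(name='Resilient', provider='fallback', options={
    "strategy": ["Primary", "Backup"]
})
cr.set_primary('Resilient')
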
options
dict[str, Any] (required)

These vary per provider; see the provider-specific documentation for more information. Generally, they are passed through unmodified to the POST request made to the LLM.
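
Because options are forwarded largely as-is, you can include any request parameter the provider’s API accepts. A sketch with the openai provider (the parameter names here are standard OpenAI chat-completion fields, not BAML-specific):

cr.add_llm_client(name='TunedClient', provider='openai', options={
    "model": "gpt-4o",
    "api_key": os.environ.get('OPENAI_API_KEY'),
    # Forwarded in the POST body to /chat/completions:
    "max_tokens": 512,
    "top_p": 0.9
})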

retry_policy
string

The name of a retry policy that is already defined in a .baml file. See Retry Policies.
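
For example, assuming your .baml files define a retry policy named (hypothetically) MyRetryPolicy:

cr.add_llm_client(
    name='ResilientClient',
    provider='openai',
    options={
        "model": "gpt-4o",
        "api_key": os.environ.get('OPENAI_API_KEY')
    },
    retry_policy='MyRetryPolicy'  # must already be defined in a .baml file
)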

set_primary / setPrimary

Sets the client for the function to use (i.e., it replaces the client property of the function).

name
string (required)

The name of the client to use.

This can be a new client added via add_llm_client or an existing client already defined in a .baml file.
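
That means you can redirect a function to an existing .baml client without registering anything new. A sketch, assuming (hypothetically) a client named FastClient in your .baml files, run inside an async function as in the first example:

cr = ClientRegistry()
cr.set_primary('FastClient')  # defined in a .baml file, not added here
res = await b.ExtractResume("...", { "client_registry": cr })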