Timeout Configuration

Configure timeouts on any BAML client to prevent requests from hanging indefinitely.

Overview

Timeouts can be configured on leaf clients (OpenAI, Anthropic, etc.) and on composite clients such as fallback strategies; the Timeout Composition section below describes how the two combine.

Timeout Options

All timeout values are specified in milliseconds as positive integers.

connect_timeout_ms (int)

Maximum time to establish a network connection to the provider.

Default: No timeout (infinite)

```baml
client<llm> MyClient {
  provider openai
  options {
    model "gpt-4"
    api_key env.OPENAI_API_KEY
    http {
      connect_timeout_ms 5000 // 5 seconds
    }
  }
}
```

time_to_first_token_timeout_ms (int)

Maximum time to receive the first token after sending the request.

Default: No timeout (infinite)

Particularly useful for detecting when a provider accepts the request but takes too long to start generating.

```baml
client<llm> MyClient {
  provider openai
  options {
    model "gpt-4"
    api_key env.OPENAI_API_KEY
    http {
      time_to_first_token_timeout_ms 10000 // 10 seconds
    }
  }
}
```

idle_timeout_ms (int)

Maximum time between receiving consecutive data chunks.

Default: No timeout (infinite)

Important for detecting stalled streaming connections.

```baml
client<llm> MyClient {
  provider openai
  options {
    model "gpt-4"
    api_key env.OPENAI_API_KEY
    http {
      idle_timeout_ms 15000 // 15 seconds
    }
  }
}
```

request_timeout_ms (int)

Maximum total time for the entire request-response cycle.

Default: No timeout (infinite)

For streaming responses, this applies to the entire stream duration (first token to last token).

```baml
client<llm> MyClient {
  provider openai
  options {
    model "gpt-4"
    api_key env.OPENAI_API_KEY
    http {
      request_timeout_ms 60000 // 60 seconds
    }
  }
}
```

Timeout Composition

When composite clients reference subclients with their own timeouts, the minimum (most restrictive) timeout wins.

Example

```baml
client<llm> FastClient {
  provider openai
  options {
    model "gpt-3.5-turbo"
    api_key env.OPENAI_API_KEY
    http {
      connect_timeout_ms 3000
      request_timeout_ms 20000
    }
  }
}

client<llm> SlowClient {
  provider openai
  options {
    model "gpt-4"
    api_key env.OPENAI_API_KEY
    http {
      request_timeout_ms 60000
    }
  }
}

client<llm> MyFallback {
  provider fallback
  options {
    strategy [FastClient, SlowClient]
    http {
      connect_timeout_ms 5000 // Parent timeout
      idle_timeout_ms 15000   // Parent timeout
    }
  }
}
```

Effective timeouts:

When calling FastClient:

  • connect_timeout_ms: min(5000, 3000) = 3000ms (FastClient is stricter)
  • request_timeout_ms: min(∞, 20000) = 20000ms (only FastClient defines it)
  • idle_timeout_ms: min(15000, ∞) = 15000ms (only parent defines it)

When calling SlowClient:

  • connect_timeout_ms: min(5000, ∞) = 5000ms (only parent defines it)
  • request_timeout_ms: min(∞, 60000) = 60000ms (only SlowClient defines it)
  • idle_timeout_ms: min(15000, ∞) = 15000ms (only parent defines it)
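The composition rule reduces to taking the minimum over whichever values are defined. A minimal sketch of that rule (an illustrative model of the behavior described above, not BAML's actual implementation), using `None` for "no timeout":

```python
def effective_timeout(parent_ms, child_ms):
    """Return the stricter (smaller) of two optional timeouts.

    None means "no timeout (infinite)"; the minimum of the defined
    values wins, mirroring the composition rule described above.
    """
    values = [v for v in (parent_ms, child_ms) if v is not None]
    return min(values) if values else None

# FastClient under MyFallback, per the example above:
assert effective_timeout(5000, 3000) == 3000    # connect: FastClient is stricter
assert effective_timeout(None, 20000) == 20000  # request: only FastClient defines it
assert effective_timeout(15000, None) == 15000  # idle: only the parent defines it
```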

Timeout Evaluation

All timeouts are evaluated concurrently. A request fails when any timeout is exceeded:

  1. Connection phase: connect_timeout_ms applies
  2. After connection:
    • time_to_first_token_timeout_ms starts when request is sent
    • request_timeout_ms starts when request is sent
    • idle_timeout_ms starts after each chunk is received

Interaction with Retry Policies

When a client has both timeouts and a retry policy:

  • Each retry attempt gets the full timeout duration
  • A timeout triggers the retry mechanism (if configured)
  • Total elapsed time = (number of attempts) × (timeout per attempt) + (retry delays)

Example:

```baml
retry_policy Exponential {
  max_retries 3
  strategy {
    type exponential_backoff
  }
}

client<llm> MyClient {
  provider openai
  retry_policy Exponential
  options {
    model "gpt-4"
    api_key env.OPENAI_API_KEY
    http {
      request_timeout_ms 30000 // Each attempt gets 30 seconds
    }
  }
}
```

Maximum possible time: 4 attempts (1 initial try + 3 retries) × 30 s ≈ 120 s of attempt time, plus the exponential backoff delays between attempts.
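The worst case for this example can be computed directly. The backoff numbers below are assumptions for illustration (a 200 ms base delay doubling per retry); BAML's actual exponential-backoff defaults may differ:

```python
# Hypothetical backoff parameters, for illustration only.
attempts = 4                     # 1 initial try + max_retries 3
timeout_per_attempt_ms = 30_000  # request_timeout_ms: each attempt gets 30 s
base_delay_ms = 200              # assumed backoff base delay

# One backoff delay between each pair of attempts: [200, 400, 800]
delays = [base_delay_ms * 2**i for i in range(attempts - 1)]
worst_case_ms = attempts * timeout_per_attempt_ms + sum(delays)
print(worst_case_ms)  # 121400, i.e. roughly two minutes
```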

Runtime Overrides

Override timeout values at runtime using the client registry:

```typescript
import { b } from './baml_client'

const result = await b.MyFunction(input, {
  clientRegistry: b.ClientRegistry.override({
    "MyClient": {
      options: {
        http: {
          request_timeout_ms: 10000,
          idle_timeout_ms: 5000
        }
      }
    }
  })
})
```

Runtime overrides follow the same composition rules: the minimum timeout wins when composing runtime values with config file values.

Error Handling

Timeout errors are represented by BamlTimeoutError, a subclass of BamlClientError:

BamlError
└── BamlClientError
    └── BamlTimeoutError

Timeout errors include structured fields:

  • client: The client name that timed out
  • timeout_type: The specific timeout that was exceeded
  • configured_value_ms: The configured timeout value in milliseconds
  • elapsed_ms: The actual elapsed time in milliseconds
  • message: A human-readable error message

```python
from baml_py.errors import BamlTimeoutError

try:
    result = await b.MyFunction(input)
except BamlTimeoutError as e:
    print(f"Timeout: {e.timeout_type}")
    print(f"Configured: {e.configured_value_ms}ms")
    print(f"Elapsed: {e.elapsed_ms}ms")
```

Validation Rules

BAML validates timeout configurations at compile time:

  1. Positive values: All timeout values must be positive integers
  2. Logical constraints: request_timeout_ms must be ≥ time_to_first_token_timeout_ms (if both are specified)

Invalid configurations will cause BAML to raise validation errors with helpful messages.
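The two rules above amount to a simple structural check. A minimal sketch of what the validator enforces (an illustrative re-implementation of the rules as documented, not BAML's actual compiler code):

```python
def validate_timeouts(cfg: dict) -> list[str]:
    """Return a list of error messages for an http timeout config dict."""
    errors = []
    # Rule 1: all timeout values must be positive integers.
    for key, value in cfg.items():
        if not isinstance(value, int) or isinstance(value, bool) or value <= 0:
            errors.append(f"{key} must be a positive integer, got {value!r}")
    # Rule 2: the total request budget must cover the first-token budget.
    req = cfg.get("request_timeout_ms")
    ttft = cfg.get("time_to_first_token_timeout_ms")
    if isinstance(req, int) and isinstance(ttft, int) and req < ttft:
        errors.append(
            "request_timeout_ms must be >= time_to_first_token_timeout_ms"
        )
    return errors

assert validate_timeouts({"connect_timeout_ms": 5000}) == []
assert validate_timeouts({"idle_timeout_ms": -1}) != []
```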

See Also