ctx.output_format
{{ ctx.output_format }} is used within a prompt template (or in any template_string) to print the function’s output schema into the prompt. It describes to the LLM how to generate a structure BAML can parse (usually JSON).
Here’s an example of a function with {{ ctx.output_format }}, and how it gets rendered by BAML before sending it to the LLM.
BAML Prompt
Rendered prompt
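As a sketch of what those two views contain (the Resume class, function name, and client below are illustrative, and the rendered text may vary with the return type):

```baml
class Resume {
  name string
  skills string[]
}

function ExtractResume(resume_text: string) -> Resume {
  client "openai/gpt-4o" // illustrative client
  prompt #"
    Extract this resume:
    ---
    {{ resume_text }}
    ---
    {{ ctx.output_format }}
  "#
}
```

Before the request is sent, {{ ctx.output_format }} is expanded into something like:

```
Extract this resume:
---
...
---
Answer in JSON using this schema:
{
  name: string,
  skills: string[],
}
```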
Controlling the output_format
ctx.output_format can also be called as a function with parameters to customize how the schema is printed, like this:
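For instance (the specific parameter values here are illustrative):

```baml
prompt #"
  ...
  {{ ctx.output_format(prefix="Please answer using this schema:\n", always_hoist_enums=true) }}
"#
```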
Here are the parameters:
prefix (string | null)
The prefix instruction to use before printing out the schema. BAML’s default prefix varies based on the function’s return type.
always_hoist_enums (bool)
Whether to inline the enum definitions in the schema, or print (hoist) them above it. Default: false
Inlined
hoisted
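As a rough sketch of the difference, for an illustrative enum Category with variants ONE and TWO, an inlined rendering puts the variants directly on the field:

```
{
  category: "ONE" or "TWO",
}
```

while a hoisted rendering prints the enum above the schema and references it by name, roughly:

```
Category
----
- ONE
- TWO

Answer in JSON using this schema:
{
  category: Category,
}
```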
or_splitter (string)
Default: or

If a type is a union like string | int, or an optional like string?, this controls how the alternatives are rendered. By default BAML renders it as property: string or null, as we have observed some LLMs have trouble identifying what property: string | null means (and are better with plain English). You can always set it to | or something else for a specific model you use.
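For example, switching back to the pipe character might look like this sketch:

```baml
{{ ctx.output_format(or_splitter=" | ") }}
```

With that setting, an optional string field would render as property: string | null instead of property: string or null.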
hoisted_class_prefix (string | null)
Default: <none>

The prefix used for hoisted classes in the prompt. Recursive classes are hoisted so that any class field can reference them by name. This parameter controls both the prefix placed before hoisted class definitions and the word used in the rendered message to refer to the output type, which defaults to "schema". See the examples below.
Recursive BAML Prompt Example
Default hoisted_class_prefix (none)
Custom Prefix: hoisted_class_prefix="interface"
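As a sketch (the Node class and function below are illustrative, and the exact rendered text may differ slightly):

```baml
class Node {
  data int
  children Node[]
}

function BuildTree(input: string) -> Node {
  client "openai/gpt-4o" // illustrative client
  prompt #"
    Build a tree from: {{ input }}
    {{ ctx.output_format(hoisted_class_prefix="interface") }}
  "#
}
```

With the default (no prefix), the hoisted class renders roughly as:

```
Node {
  data: int,
  children: Node[],
}

Answer in JSON using this schema: Node
```

With hoisted_class_prefix="interface", it renders roughly as:

```
interface Node {
  data: int,
  children: Node[],
}

Answer in JSON using this interface: Node
```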
Why BAML doesn’t use JSON schema format in prompts
BAML uses a compact “type definition” (or “jsonish”) format instead of the long-winded JSON Schema format. The tl;dr is that JSON schemas:
- are roughly 4x less token-efficient than “type definitions”
- are hard for humans (and hence models) to read
- perform worse than type definitions (especially on deeply nested objects or with smaller models)