ctx.output_format

{{ ctx.output_format }} is used within a prompt template (or any template_string) to print the function's output schema into the prompt. It tells the LLM how to generate a structure that BAML can parse (usually JSON).

Here's an example of a function that uses {{ ctx.output_format }}, and how BAML renders it before sending the prompt to the LLM.

BAML Prompt

class Resume {
  name string
  education Education[]
}

class Education {
  school string
  graduation_year string
}

function ExtractResume(resume_text: string) -> Resume {
  prompt #"
    Extract this resume:
    ---
    {{ resume_text }}
    ---

    {{ ctx.output_format }}
  "#
}

Rendered prompt

Extract this resume:
---
Aaron V.
Bachelors CS, 2015
UT Austin
---
Answer in JSON using this schema:
{
  name: string,
  education: [
    {
      school: string,
      graduation_year: string
    }
  ]
}

Controlling the output_format

ctx.output_format can also be called as a function with parameters to customize how the schema is printed, like this:

{{ ctx.output_format(prefix="If you use this schema correctly, I'll tip $400:\n", always_hoist_enums=true) }}

Here are the parameters:

prefix
string

The prefix instruction to use before printing out the schema.

If you use this schema correctly, I'll tip $400:
{
  ...
}

BAML’s default prefix varies based on the function’s return type.

Function return type | Default Prefix
Primitive (String)   | <none>
Primitive (Int)      | Answer as an
Primitive (Other)    | Answer as a
Enum                 | Answer with any of the categories:\n
Class                | Answer in JSON using this schema:\n
List                 | Answer with a JSON Array using this schema:\n
Union                | Answer in JSON using any of these schemas:\n
Optional             | Answer in JSON using this schema:\n
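
For example, a function returning an enum picks up the enum prefix by default. A minimal sketch (the enum name and values here are hypothetical):

enum Sentiment {
  POSITIVE
  NEGATIVE
}

function ClassifySentiment(text: string) -> Sentiment {
  prompt #"
    Classify the sentiment of: {{ text }}

    {{ ctx.output_format }}
  "#
}

With no prefix argument, the rendered prompt would end with "Answer with any of the categories:" followed by the enum values.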
always_hoist_enums
boolean

Whether to inline the enum definitions in the schema, or print them above. Default: false

Inlined

Answer in this json schema:
{
  categories: "ONE" | "TWO" | "THREE"
}

Hoisted

MyCategory
---
ONE
TWO
THREE

Answer in this json schema:
{
  categories: MyCategory
}

BAML will always hoist the enum if you add a description to any of its values.
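
For instance, adding a @description attribute to a value (the enum below is a hypothetical sketch) causes the enum to be hoisted even when always_hoist_enums is false:

enum MyCategory {
  ONE @description("The first category")
  TWO
  THREE
}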
or_splitter
string

Default: or

If a type is a union like string | int or an optional like string?, this indicates how it’s rendered.

BAML renders it as property: string or null, since we have observed that some LLMs have trouble interpreting what property: string | null means (and do better with plain English).

You can always set it to | or something else for a specific model you use.
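
As a sketch, overriding the splitter for a model that handles the pipe character well:

{{ ctx.output_format(or_splitter=" | ") }}

A field like next Node? would then render as something like next: Node | null instead of next: Node or null.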

hoisted_class_prefix
string

Prefix of hoisted classes in the prompt. Default: <none>

Recursive classes are hoisted in the prompt so that any class field can reference them by name. This parameter controls the prefix used for hoisted classes, as well as the word used in the render message to refer to the output type (which defaults to "schema"):

Answer in JSON using this schema:

See examples below.

Recursive BAML Prompt Example

class Node {
  data int
  next Node?
}

class LinkedList {
  head Node?
  len int
}

function BuildLinkedList(input: int[]) -> LinkedList {
  prompt #"
    Build a linked list from the input array of integers.

    INPUT: {{ input }}

    {{ ctx.output_format }}
  "#
}

Default hoisted_class_prefix (none)

Node {
  data: int,
  next: Node or null
}
Answer in JSON using this schema:
{
  head: Node or null,
  len: int
}

Custom Prefix: hoisted_class_prefix="interface"

interface Node {
  data: int,
  next: Node or null
}
Answer in JSON using this interface:
{
  head: Node or null,
  len: int
}
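
The "interface" rendering above would come from a call like this in the prompt (replacing the plain {{ ctx.output_format }} in the function above):

{{ ctx.output_format(hoisted_class_prefix="interface") }}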

Why BAML doesn’t use JSON schema format in prompts

BAML uses a compact "type definition" (or "jsonish") format instead of the long-winded JSON Schema format. The tl;dr is that JSON schemas:

  1. are roughly 4x less token-efficient than type definitions,
  2. are hard for humans (and hence models) to read, and
  3. perform worse than type definitions (especially on deeply nested objects or with smaller models).

Read our full article on json schema vs type definitions