Writing AI functions
You are currently viewing the old BAML documentation. This syntax is still supported, but will eventually be deprecated in newer versions of BAML.
Pre-requisites
Follow the installation instructions and run baml init in a new project.
The starting project structure will look like this:
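A sketch of the layout, inferred from the main.baml and clients.baml files used later in this tutorial (the baml_src directory name is an assumption and the exact structure may vary by BAML version):

```text
.
└── baml_src/
    ├── main.baml
    └── clients.baml
```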
Overview
Before you call an LLM, ask yourself what kind of input or output you're expecting. If you want the LLM to generate text, then you probably want a string, but if you're trying to get it to collect user details, you may want it to return a complex type like UserDetails.
Thinking this way can help you decompose large complex prompts into smaller, more measurable functions, and will also help you build more complex workflows and agents.
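For instance, a UserDetails type could be declared as a BAML class. The sketch below is purely illustrative; the class and its fields are assumptions, not part of this tutorial's project:

```baml
// Hypothetical class for illustration; the fields are assumptions.
class UserDetails {
  name string
  email string
}
```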
We'll start with a simple function to extract verbs from a sentence, and then build on it to learn how BAML can model more complex and powerful functions.
Implementing an AI function
1. Define AI functions and models in BAML files
First, we will define a function with the following signature in BAML:

```
ExtractVerbs(input: string) -> string[]
```
Here's the BAML equivalent, which you can add to your main.baml:
```baml
function ExtractVerbs {
  input string
  /// list of verbs
  output string[]
}
```
Every BAML function has a strictly typed input and output. The input and output can be either a primitive type (string, number, boolean) or a complex type (think unions, lists, or even custom Pydantic models).
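As an illustration of a complex output type, a function returning the hypothetical UserDetails class sketched in the Overview might be declared like this (again, a sketch rather than part of this tutorial's project):

```baml
// Illustrative only; assumes the UserDetails class sketched earlier.
function ExtractUserDetails {
  input string
  output UserDetails
}
```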
2. Implement the function using a prompt
To implement the function we need two things:
- An LLM client that defines which LLM to call and with which params.
- The actual prompt.
Define the LLM client
To implement a client, we can simply define one like this in a BAML file. Learn more about clients and non-OpenAI chat providers.
If you used baml init, you should already have a clients.baml file with the client below.
```baml
client<llm> GPT4 {
  provider baml-openai-chat
  options {
    model gpt-4
    api_key env.OPENAI_API_KEY
  }
}
```
Use any parameters available to that model, such as temperature, by adding them to the options block. You can also use environment variables to store secrets like API keys.
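For example, a client with a sampling temperature might look like the sketch below. The client name is made up for illustration, and options beyond model and api_key are passed through to the model provider, so check your provider's documentation for what it supports:

```baml
client<llm> GPT4Creative {
  provider baml-openai-chat
  options {
    model gpt-4
    api_key env.OPENAI_API_KEY
    // temperature is a common OpenAI sampling parameter,
    // forwarded to the provider along with the other options.
    temperature 0.7
  }
}
```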
Define a prompt
Next, we can create the prompt by implementing the function using an LLM. BAML provides helper utilities to inject the input variables into the prompt and to get the LLM to return the right output type. You always get a full view of the whole prompt string, with no hidden magic.
```baml
impl<llm, ExtractVerbs> version1 {
  client GPT4
  prompt #"
    Extract the verbs from this INPUT:

    INPUT:
    ---
    {#input}
    ---
    {// this is a comment inside a prompt! //}

    Return a {#print_type(output)}.

    Response:
  "#
}
```
In VSCode, you can click “Open Playground” at the top of the impl or prompt to see the full prompt.
Here you'll notice how the language automatically dedents strings, injects variables into the prompt, and supports comments that are stripped from the actual prompt. See our syntax guide for more information on basic string and comment syntax.
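For illustration, with an input of "The quick brown fox jumps over the lazy dog", the rendered prompt would look roughly like this (assuming {#print_type(output)} renders the output type literally as string[]; the exact rendering may differ):

```text
Extract the verbs from this INPUT:

INPUT:
---
The quick brown fox jumps over the lazy dog
---

Return a string[].

Response:
```

Notice that the {// ... //} comment is gone: comments are stripped before the prompt is sent to the model.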
We will explain how print_type works in more detail in later tutorials.
3. Use the function in your application
Our VSCode extension automatically generates a baml_client in your language of choice - either Python or TypeScript - to access and call your functions.
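A minimal Python sketch of calling the generated function is below. The import path and call shape are assumptions about the generated baml_client and may differ across BAML versions:

```python
# Hypothetical usage sketch; the exact import path and call signature
# depend on the version of the generated baml_client.
import asyncio

from baml_client import baml  # assumed entry point of the generated client


async def main() -> None:
    # Generated functions are async and mirror the BAML signature:
    # ExtractVerbs(input: string) -> string[]
    verbs = await baml.ExtractVerbs("The quick brown fox jumps over the lazy dog")
    print(verbs)  # e.g. ["jumps"]


asyncio.run(main())
```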
Show me the code
Here it is! Clone the repo to get syntax highlighting.
Further reading
- Continue on to the Testing + Extraction tutorials!
- See other types of function signatures possible in BAML.
- Learn more about prompt variables.