Overview
What is BAML?
BAML is a configuration file format for writing cleaner, more reliable LLM functions.
An LLM function is a prompt template with defined input variables and a specific output type, such as a class, enum, union, or optional string.
With BAML you can write and test a complex LLM function in 1/10 of the time it takes to set up a Python LLM testing environment.
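As a sketch of the idea, a BAML function pairs a typed signature with a prompt template. The class, function, and client names below are illustrative, and the exact syntax may differ from the current grammar — see the docs:

```baml
// Illustrative sketch — Resume, ExtractResume, and GPT4 are made-up names.
class Resume {
  name string
  skills string[]
}

function ExtractResume(resume_text: string) -> Resume {
  client GPT4  // assumes a client named GPT4 is defined elsewhere
  prompt #"
    Extract the candidate's details from this resume:
    {{ resume_text }}

    {{ ctx.output_format }}
  "#
}
```

The output type (`Resume` here) is what drives the type validation: the response is parsed and checked against the declared schema rather than returned as raw text.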
Try it out in the playground — PromptFiddle.com
Share your creations and ask questions in our Discord.
Features
- Python and TypeScript support: plug-and-play BAML with other languages
- Type validation: more resilient to common LLM mistakes than Pydantic or Zod
- Wide model support: Ollama, OpenAI, Anthropic. Tested on small models like Llama 2
- Streaming: Stream structured partial outputs.
- Realtime Prompt Previews: always see the full prompt, even when it contains loops and conditionals
- Testing support: Test functions in the playground with 1 click.
- Resilience and fallback features: add retries and redundancy to your LLM calls
- Observability Platform: Use Boundary Studio to visualize your functions and replay production requests with 1 click.
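The resilience features above can be expressed directly in BAML configuration. The following is a rough sketch only — the policy and client names are made up, and the option keys may not match the current syntax, so consult the docs before copying:

```baml
// Illustrative sketch — SimpleRetry, ResilientClient, GPT4, and Claude are made-up names.
retry_policy SimpleRetry {
  max_retries 3
}

client<llm> ResilientClient {
  provider fallback
  options {
    // Try the first client; fall back to the next one on failure.
    strategy [GPT4, Claude]
  }
}
```

A function can then reference `ResilientClient` instead of a single model, so retries and fallbacks happen without any changes to application code.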
Companies using BAML
- Zenfetch - ChatGPT for your bookmarks
- Vetrec - AI-powered Clinical Notes for Veterinarians
- MagnaPlay - Production-quality machine translation for games
- Aer Compliance - AI-powered compliance tasks
- Haven - Automate tenant communications with AI
- Muckrock - FOIA request tracking and filing
- and more! Let us know if you want to be showcased or want to work with us 1:1 to solve your use case.
Starter projects
First steps
We recommend checking out the examples on PromptFiddle.com. Once you’re ready to start, install the toolchain and read the guides.