🏠 Welcome

BAML is a domain-specific language to generate structured outputs from LLMs — with the best developer experience.

With BAML you can build reliable agents, RAG chatbots, PDF data-extraction pipelines, and more.

A small sample of features:

  1. An amazingly fast developer experience for prompting in the BAML VSCode playground
  2. Fully type-safe outputs, even when streaming structured data (that means autocomplete!)
  3. Flexibility — it works with any LLM, any language, and any schema.
  4. State-of-the-art structured outputs that outperform even OpenAI's own tooling on their models — and it works with open-source models too.
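
To make the ideas above concrete, here is a minimal sketch of a BAML function for structured extraction (the `Resume` class, `ExtractResume` function, and the `openai/gpt-4o` client string are illustrative, not part of this page):

```baml
// Define the output schema as a BAML class.
class Resume {
  name string
  email string
  skills string[]
}

// An LLM function: typed input, typed output, prompt as a template.
function ExtractResume(resume_text: string) -> Resume {
  client "openai/gpt-4o"
  prompt #"
    Extract the following information from this resume:

    {{ resume_text }}

    {{ ctx.output_format }}
  "#
}
```

`{{ ctx.output_format }}` injects formatting instructions derived from the return type, which is how the output stays type-safe; the generated client exposes `ExtractResume` as an ordinary typed function in your language of choice.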


Motivation

Prompts are more than just f-strings; they’re actual functions with logic that can quickly become complex to organize, maintain, and test.

Currently, developers craft LLM prompts as if they’re writing raw HTML and CSS in text files, lacking:

  • Type safety
  • Hot-reloading or previews
  • Linting

The situation worsens when dealing with structured outputs. Since most prompts rely on Python and Pydantic, developers must execute their code and set up an entire Python environment just to test a minor prompt adjustment, or they have to stand up a whole Python microservice just to call an LLM.

BAML allows you to view and run prompts directly within your editor, similar to how Markdown Preview works — no additional setup necessary — and it interoperates with all your favorite languages and frameworks.

Just as TSX/JSX provided the ideal abstraction for web development, BAML offers the perfect abstraction for prompt engineering. Watch our demo video to see it in action.

Comparisons

Here’s our in-depth comparison with a couple of popular frameworks: