Overview

Writing unit tests is one of the most important ways to ensure your AI functions work as expected. This matters most when your AI functions run in production, or when you’re working with a team.

You have two options for adding and running tests:

  • Using the Playground
  • Using the BAML CLI

Using the Playground

The playground provides a type-safe interface for creating and running tests. Under the hood, the playground runs baml test for you and writes the test files to the __tests__ folder (see below).

All test runs are saved into Boundary Studio, our analytics and observability platform, and can be accessed from within VSCode. Check out the installation steps to set it up!

Using the BAML CLI

To understand how to run tests using the CLI, you’ll need to understand how BAML projects are structured.

.
├── baml_src/ # Where you write BAML files
│   ├── __tests__/ # Where you write tests
│   │   ├── YourAIFunction/ # A folder for each AI function
│   │   │   └── test_name_cricket.json
│   │   └── YourAIFunction2/ # Another folder for another AI function
│   │       └── test_name_jellyfish.json
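
Each test file holds the input for a single test case. As a minimal sketch (the file name comes from the tree above; storing the function’s input under an input key is an assumption, so check a file generated by the playground for the exact schema), baml_src/__tests__/YourAIFunction/test_name_cricket.json might contain:

{
  "input": "Is a cricket an insect or a sport?"
}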

Run the baml test command from the root of your project. On its own it lists all tests found in the __tests__ folder; add the run subcommand to execute them.

# This will LIST all tests found in the __tests__ folder
$ baml test
# This will RUN all tests found in the __tests__ folder
$ baml test run

You can also list or run tests for specific AI functions by passing one or more -i flags.

# This will list all tests found in the __tests__/YourAIFunction folder
$ baml test -i "YourAIFunction:"

# This will run all tests found in the __tests__/YourAIFunction and __tests__/YourAIFunction2 folders
$ baml test run -i "YourAIFunction:" -i "YourAIFunction2:"

For more filters on the baml test command, run baml test --help.

Evaluating test results

At the moment, baml test and the Playground UI (which uses baml test under the hood) only support evaluating results by manual inspection.

If you want to add asserts or expectations on the output, you can declare your tests programmatically using our pytest plugin. See this tutorial.
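
As a rough illustration of what such a test could look like (the baml_client import path, the async YourAIFunction signature, and the sample input are all assumptions for this sketch; the tutorial documents the plugin’s actual API):

import pytest

# Assumed import path for the generated BAML client; yours may differ.
from baml_client import baml


# Requires the pytest-asyncio plugin, since the generated
# functions are assumed to be async in this sketch.
@pytest.mark.asyncio
async def test_your_ai_function():
    # Call the AI function with a sample input (hypothetical signature).
    result = await baml.YourAIFunction("Is a cricket an insect or a sport?")
    # Unlike baml test, pytest lets you assert on the output.
    assert result is not None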