Video
Video values passed to BAML functions can be created in the client libraries. This document explains how to create and use them at compile time and runtime to handle video data. For more details, refer to video types.
When you create a Video using from_url (Python) or fromUrl (TypeScript), the URL is passed directly to the model without any intermediate fetching. If the model cannot access external media, it will fail on such inputs. In these cases, convert the video to Base64 before passing it to the model.
Only Google Gemini and Vertex AI currently support video input directly. Other providers (Anthropic Claude, OpenAI GPT-4o, AWS Bedrock) will return an error; for those, extract frames as images or provide a transcript instead. See the Model Compatibility section below for details.
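For example, here is a minimal TypeScript sketch of both paths. It assumes Video is exported by the @boundaryml/baml client package and that the Base64 constructor is named fromBase64; neither name is confirmed above.

```typescript
import { Video } from "@boundaryml/baml";

// Pass the URL straight through; BAML does no intermediate fetching,
// so the model itself must be able to reach the URL.
const byUrl = Video.fromUrl("https://example.com/clip.mp4");

// Fallback for models that cannot access external media: fetch the bytes
// yourself and pass them inline as Base64 (fromBase64 is an assumed name).
async function videoAsBase64(url: string): Promise<Video> {
  const res = await fetch(url); // Node 18+ global fetch
  const bytes = Buffer.from(await res.arrayBuffer());
  return Video.fromBase64("video/mp4", bytes.toString("base64"));
}
```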
Usage Examples
Static Methods
Creates a Video object from a URL. Optionally specify the media type; otherwise, it is inferred from the URL.
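A possible call shape in TypeScript; the placement of the optional media-type argument is an assumption.

```typescript
import { Video } from "@boundaryml/baml";

// Media type inferred from the .mp4 extension:
const inferred = Video.fromUrl("https://example.com/demo.mp4");

// Or pass it explicitly (second-argument position is an assumption):
const explicit = Video.fromUrl("https://example.com/demo", "video/mp4");
```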
Creates a Video object using Base64 encoded data along with the given MIME type.
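One way this might look in TypeScript, assuming the constructor is named fromBase64 and takes the MIME type followed by the encoded data.

```typescript
import { readFileSync } from "fs";
import { Video } from "@boundaryml/baml";

// Read a local file and hand BAML the Base64 payload plus its MIME type.
const data = readFileSync("clip.mp4").toString("base64");
const clip = Video.fromBase64("video/mp4", data); // fromBase64 is an assumed name
```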
Instance Methods
Check if the video is stored as a URL.
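In TypeScript this check is presumably exposed as isUrl(); a short sketch:

```typescript
import { Video } from "@boundaryml/baml";

const clip = Video.fromUrl("https://example.com/demo.mp4");
if (clip.isUrl()) {
  // Safe to read the URL back; otherwise the video holds Base64 data.
  console.log("stored as a URL");
}
```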
Get the URL of the video if it’s stored as a URL. Throws an Error if the video is not stored as a URL.
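Assuming the accessor is named asUrl(), usage might look like:

```typescript
import { Video } from "@boundaryml/baml";

const clip = Video.fromUrl("https://example.com/demo.mp4");
try {
  console.log(clip.asUrl()); // "https://example.com/demo.mp4"
} catch (e) {
  console.error("video is not stored as a URL", e);
}
```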
Get the base64 data and media type if the video is stored as base64. Returns [base64Data, mediaType]. Throws an Error if the video is not stored as base64.
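Assuming the accessor is named asBase64() and returns the pair described above:

```typescript
import { readFileSync } from "fs";
import { Video } from "@boundaryml/baml";

const clip = Video.fromBase64("video/mp4", readFileSync("clip.mp4").toString("base64"));
const [base64Data, mediaType] = clip.asBase64(); // throws if stored as a URL
console.log(mediaType); // "video/mp4"
```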
Model Compatibility
Different AI models have varying levels of support for video input methods (as of July 2025):
Direct video input is currently supported only by Google Gemini and Vertex AI. For other providers, you must extract frames as images or use transcripts. Always specify the correct MIME type (e.g., video/mp4) when required.