Video

Video values passed to BAML functions are created in the client libraries. This document explains how to construct and use Video objects at compile time and at runtime. For more details, refer to video types.

When you create a Video using from_url (Python) or fromUrl (TypeScript), the URL is passed directly to the model without any intermediate fetching. If the model cannot access external media, it will fail on such inputs. In these cases, convert the video to Base64 before passing it to the model.
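As a hedged sketch of that conversion, the Python standard library's base64 module can encode raw video bytes into the string form that from_base64 expects (the bytes below are a small stand-in for real file contents):

```python
import base64

# Stand-in for raw video bytes; real code would read them from disk,
# e.g. open(path, "rb").read()
raw = b"\x00\x00\x00\x18ftypisom"  # typical MP4 file header bytes
video_b64 = base64.b64encode(raw).decode("ascii")

# video_b64 is now suitable for Video.from_base64("video/mp4", video_b64)
print(video_b64)  # -> AAAAGGZ0eXBpc29t
```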

Google Gemini, Vertex AI, and AWS Bedrock currently support video input directly. Other providers (Anthropic Claude, OpenAI GPT-4o) will error, or require you to extract frames as images or provide transcripts instead. See the model compatibility table below for details.

Usage Examples

```python
from baml_py import Video
from baml_client import b

async def test_video_input():
    # Create a Video object from a URL
    video = Video.from_url("https://www.youtube.com/watch?v=dQw4w9WgXcQ")
    res = await b.TestVideoInput(video=video)

    # Create a Video object from Base64 data
    video_b64 = "AAAAGGZ0eXBpc29t..."
    video = Video.from_base64("video/mp4", video_b64)
    res = await b.TestVideoInput(video=video)
```

Static Methods

fromUrl
(url: string, mediaType?: string) => Video

Creates a Video object from a URL. Optionally specify the media type; otherwise it is inferred from the URL.
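As an illustration of that inference (the library's actual logic may differ), a media type can be guessed from the URL's file extension; the helper name here is hypothetical:

```python
import mimetypes

def guess_video_media_type(url: str):
    """Guess a media type from a URL's extension (illustrative only)."""
    media_type, _ = mimetypes.guess_type(url)
    return media_type

print(guess_video_media_type("https://example.com/clip.mp4"))  # video/mp4
print(guess_video_media_type("https://example.com/clip.mov"))  # video/quicktime
```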

fromBase64
(mediaType: string, base64: string) => Video

Creates a Video object using Base64 encoded data along with the given MIME type.

fromFile
(file: File) => Promise<Video>

Creates a Video object from a File object. Only available in browser environments (@boundaryml/baml/browser).

Instance Methods

isUrl
() => boolean

Check if the video is stored as a URL.

asUrl
() => string

Get the URL of the video if it’s stored as a URL. Throws an Error if the video is not stored as a URL.

asBase64
() => [string, string]

Get the base64 data and media type if the video is stored as base64. Returns [base64Data, mediaType]. Throws an Error if the video is not stored as base64.
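To make the accessor contract concrete, here is a self-contained Python stand-in (not the real BAML Video class) showing how the URL/Base64 accessors above are expected to behave:

```python
class VideoLike:
    """Illustrative stand-in for the Video accessor contract."""

    def __init__(self, url=None, base64_data=None, media_type=None):
        self._url = url
        self._base64 = base64_data
        self._media_type = media_type

    def is_url(self) -> bool:
        # True when the video is stored as a URL
        return self._url is not None

    def as_url(self) -> str:
        if self._url is None:
            raise ValueError("video is not stored as a URL")
        return self._url

    def as_base64(self):
        # Returns (base64Data, mediaType), mirroring [string, string]
        if self._base64 is None:
            raise ValueError("video is not stored as base64")
        return (self._base64, self._media_type)

v = VideoLike(base64_data="AAAAGGZ0eXBpc29t", media_type="video/mp4")
assert not v.is_url()
assert v.as_base64() == ("AAAAGGZ0eXBpc29t", "video/mp4")
```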

Model Compatibility

Different AI models vary in their support for video input (as of July 2025):

| Provider / API | Video Input Support |
| --- | --- |
| Anthropic | No native video support. Only accepts PDFs, images, and common document types. |
| AWS Bedrock | Fully multimodal. Accepts video as Base64 bytes in the request or as an S3 URI. The JSON must include format (e.g. mp4) and source. |
| Google Gemini | Three options: upload with ai.files.upload and use the file_uri, inline Base64 (<20 MB), or a YouTube URL (preview). Requires mime_type. |
| OpenAI | Video input is not yet in the public API; only text and images. Extract frames and send them as images for now. |
| Google Vertex AI | Accepts video via a Cloud Storage gs:// URI (up to 2 GB), a public HTTP/HTTPS URL (≤15 MB), a YouTube URL, or inline Base64. Requires mimeType. |
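As a hedged sketch of the Gemini inline-Base64 option above, a generateContent request body can carry the video as an inline_data part (field casing follows the REST API's snake_case form and may differ between client SDKs; the prompt and bytes are illustrative):

```python
import base64

# Illustrative raw bytes; a real request would embed the whole file,
# which must stay under the ~20 MB inline limit.
video_b64 = base64.b64encode(b"\x00\x00\x00\x18ftypisom").decode("ascii")

body = {
    "contents": [{
        "parts": [
            {"text": "Describe this video."},
            {"inline_data": {"mime_type": "video/mp4", "data": video_b64}},
        ]
    }]
}
```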

Direct video input is currently supported only by Google Gemini, Vertex AI, and AWS Bedrock. For other providers, extract frames as images or use transcripts. Always specify the correct MIME type (e.g., video/mp4) when required.