LM Studio exposes an OpenAI-compatible local server, allowing you to use the `openai-generic` provider with an overridden `base_url`.

See https://lmstudio.ai/docs/local-server#make-an-inferencing-request-using-openais-chat-completions-format for more information.

```baml
client<llm> MyClient {
  provider "openai-generic"
  options {
    base_url "http://localhost:1234/v1"
    model "TheBloke/phi-2-GGUF"
  }
}
```
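
Once defined, the client can be referenced from any BAML function. Here is a minimal sketch of what that might look like; the function name and prompt are illustrative, not part of the LM Studio setup:

```baml
// Hypothetical function using the LM Studio-backed client defined above.
function ExtractName(text: string) -> string {
  client MyClient
  prompt #"
    Extract the person's name from the following text:
    {{ text }}
  "#
}
```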