Support for Azure OpenAI with AI Assistant #3117

@matulef

Description

Currently, the AI assistant works with any OpenAI API-compatible endpoint. However, when OpenAI models are deployed on Azure, a slightly different client-initialization strategy is required. See:

https://github.com/openai/openai-python?tab=readme-ov-file#microsoft-azure-openai

The user needs to provide an API version number, and then an "AzureOpenAI" client is created instead of the regular OpenAI client (the resulting URL looks like https://[XXXX].openai.azure.com/openai/deployments/gpt-4o/chat/completions?api-version=2024-10-01-preview). Once the client is initialized, the API calls are the same, so I believe the rest should just work.

Personally, at my company we use an Azure deployment instead of OpenAI's own API for privacy/compliance reasons, so it would be nice to support this.

Suggested solution

We can use the marimo.toml file to specify which flavor of the API to use, e.g. via an api_type key:

```toml
[ai.open_ai]
api_type = "azure" # 'azure' or 'openai'; default is 'openai' if unspecified
api_key = "*********"
api_version = "2024-10-01-preview"
model = "gpt-4o" # for Azure this is the deployment_name, which is often the model name but can differ
base_url = "https://example-endpoint.openai.azure.com"
```

Then, when the client is initialized (looks like the llm.py file here), we can check the api_type in the config and use either the AzureOpenAI class or the regular OpenAI class accordingly.

Alternative

No response

Additional context

No response

Labels: enhancement (New feature or request)