A provider module that integrates Azure OpenAI Service with the Amplifier AI agent platform.
This module enables Amplifier to use Azure OpenAI Service deployments for language model reasoning via the Responses API. Requests are routed through your Azure endpoint with Azure-specific authentication and deployment configuration.
- Azure OpenAI Service Integration: Connect to your Azure-hosted OpenAI deployments
- Responses API Compatibility: Routes requests through Azure's Responses API endpoint
- Deployment Name Mapping: Map model names to Azure deployment names
- Multiple Authentication Methods: Support for API keys, Azure AD tokens, and Managed Identity
- Tool Calling Support: Full support for function calling/tools
- Managed Identity Support: Seamless authentication in Azure environments
The provider recognizes Responses API `function_call` / `tool_call` payloads, decodes any JSON-encoded arguments, and forwards standard ToolCall objects to Amplifier. No additional configuration is needed; tools declared in your configuration or profiles run as soon as the model requests them.
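The decoding step can be sketched roughly as follows. This is a simplified illustration: `decode_tool_call` and the dict-based ToolCall shape are assumptions for the example, not the module's actual internals.

```python
import json

def decode_tool_call(item: dict) -> dict:
    """Decode a Responses API function_call item into a normalized tool call.

    The Responses API delivers tool arguments as a JSON-encoded string;
    this decodes them into a plain dict before handing off to Amplifier.
    """
    arguments = item.get("arguments", "{}")
    if isinstance(arguments, str):  # Arguments may arrive JSON-encoded
        arguments = json.loads(arguments)
    return {"id": item["call_id"], "name": item["name"], "arguments": arguments}

call = {
    "type": "function_call",
    "call_id": "call_1",
    "name": "get_weather",
    "arguments": '{"location": "Seattle"}',
}
print(decode_tool_call(call))
# {'id': 'call_1', 'name': 'get_weather', 'arguments': {'location': 'Seattle'}}
```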
- Python 3.11+
- UV - Fast Python package manager
```bash
# macOS/Linux/WSL
curl -LsSf https://astral.sh/uv/install.sh | sh

# Windows
powershell -c "irm https://astral.sh/uv/install.ps1 | iex"
```

Install the module with uv:

```bash
uv pip install -e amplifier-module-provider-azure-openai
```

For Managed Identity authentication support, install with the `azure` extra:

```bash
uv pip install -e "amplifier-module-provider-azure-openai[azure]"
```

Or add it to your Amplifier configuration for automatic installation.
The simplest way to configure the provider is with environment variables.
Use DefaultAzureCredential with Azure CLI:
```bash
# Log in to Azure
az login

# Set the endpoint
export AZURE_OPENAI_ENDPOINT="https://myresource.openai.azure.com"
export AZURE_USE_DEFAULT_CREDENTIAL="true"

# Optional: configure API version and defaults
export AZURE_OPENAI_API_VERSION="2024-10-01-preview"
export AZURE_OPENAI_DEFAULT_MODEL="gpt-5"

# Run amplifier - no config file needed!
amplifier run
```

For API key authentication:

```bash
# Set up authentication
export AZURE_OPENAI_ENDPOINT="https://myresource.openai.azure.com"
export AZURE_OPENAI_API_KEY="your-api-key-here"

# Optional: configure API version and defaults
export AZURE_OPENAI_API_VERSION="2024-10-01-preview"
export AZURE_OPENAI_DEFAULT_MODEL="gpt-5"

# Run amplifier
amplifier run
```

For managed identity in Azure environments (VMs, App Service, etc.):

```bash
export AZURE_OPENAI_ENDPOINT="https://myresource.openai.azure.com"
export AZURE_USE_DEFAULT_CREDENTIAL="true"  # Uses managed identity when available
```

Basic setup with API key authentication:
```toml
[[providers]]
name = "azure-openai"

[providers.config]
azure_endpoint = "https://myresource.openai.azure.com"
api_key = "your-api-key-here"
default_model = "gpt-5"
```

Full configuration with deployment mapping:
```toml
[[providers]]
name = "azure-openai"

[providers.config]
# Required: Your Azure OpenAI resource endpoint
azure_endpoint = "https://myresource.openai.azure.com"

# Authentication (use one of these, in order of priority)
api_key = "your-api-key-here"              # Option 1: API key
# azure_ad_token = "your-azure-ad-token"   # Option 2: Azure AD token
# use_managed_identity = true              # Option 3: Managed identity
# use_default_credential = true            # Option 4: DefaultAzureCredential

# For user-assigned managed identity (optional)
# managed_identity_client_id = "client-id-here"

# Optional: API version (defaults to 2024-02-15-preview)
api_version = "2024-10-01-preview"

# Optional: Default deployment when no mapping matches
default_deployment = "my-default-deployment"

# Optional: Default model for requests
default_model = "gpt-5"

# Optional: Generation parameters
max_tokens = 4096
temperature = 0.7

# Optional: Map model names to Azure deployment names
# (keep this table last: in TOML, keys after a table header belong to that table)
[providers.config.deployment_mapping]
"gpt-5" = "my-gpt5-deployment"
"gpt-5-mini" = "my-mini-deployment"
```

Azure OpenAI uses deployment names instead of model names. This module provides flexible mapping:
1. Explicit mapping: check `deployment_mapping` for the requested model
2. Default deployment: use `default_deployment` if configured
3. Pass-through: use the model name as-is (assumes it is a deployment name)
```toml
[providers.config]
default_deployment = "fallback-deployment"

[providers.config.deployment_mapping]
"gpt-5" = "production-gpt5"
"gpt-5-mini" = "fast-mini"
```

With this configuration:

- Request for "gpt-5" → uses "production-gpt5"
- Request for "gpt-5-mini" → uses "fast-mini"
- Request for "claude-opus-4-1" → uses "fallback-deployment" (not in mapping)
- Request for "my-custom-deploy" → uses "my-custom-deploy" (if no default is set)
The provider supports multiple authentication methods with the following priority:
1. API key (highest priority)
2. Azure AD token
3. Managed identity / Azure credentials
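The priority check amounts to something like the following sketch. The config keys mirror the configuration options documented here, but `select_auth_method` is illustrative, not the module's actual internals.

```python
def select_auth_method(config: dict) -> str:
    """Pick an authentication method by the priority described above."""
    if config.get("api_key"):
        return "api_key"
    if config.get("azure_ad_token"):
        return "azure_ad_token"
    if config.get("use_managed_identity") or config.get("use_default_credential"):
        return "azure_credential"
    raise ValueError("No authentication method configured")

# API key wins even when managed identity is also enabled
print(select_auth_method({"api_key": "sk-...", "use_managed_identity": True}))
# api_key
```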
This is the most common method. Set it via environment variables:

```bash
export AZURE_OPENAI_API_KEY="your-api-key"
export AZURE_OPENAI_ENDPOINT="https://myresource.openai.azure.com"
```

Or in the configuration file:

```toml
[providers.config]
azure_endpoint = "https://myresource.openai.azure.com"
api_key = "your-api-key"
```

For enterprise scenarios with Azure Active Directory:
```toml
[providers.config]
azure_endpoint = "https://myresource.openai.azure.com"
azure_ad_token = "your-azure-ad-token"
```

Or via environment:

```bash
export AZURE_OPENAI_AD_TOKEN="your-ad-token"
```

Note: This requires the `azure-identity` package to be installed.
For Azure resources with system-assigned managed identity:
```toml
[providers.config]
azure_endpoint = "https://myresource.openai.azure.com"
use_managed_identity = true
```

For Azure resources with user-assigned managed identity:

```toml
[providers.config]
azure_endpoint = "https://myresource.openai.azure.com"
use_managed_identity = true
managed_identity_client_id = "your-managed-identity-client-id"
```

DefaultAzureCredential is the recommended authentication method, as it works in both local development and Azure deployments. It uses Azure's credential chain (Azure CLI, environment variables, managed identity, and more):

```toml
[providers.config]
azure_endpoint = "https://myresource.openai.azure.com"
use_default_credential = true
```

Or via environment variable:

```bash
export AZURE_USE_DEFAULT_CREDENTIAL=true
```

DefaultAzureCredential tries multiple authentication methods in order:
1. Environment variables (`AZURE_CLIENT_ID`, `AZURE_CLIENT_SECRET`, `AZURE_TENANT_ID`)
2. Managed identity (if running in Azure)
3. Azure CLI (if logged in locally with `az login`)
4. Azure PowerShell (if logged in)
5. Interactive browser authentication
Note for WSL2 users: If you're developing in WSL2, use `DefaultAzureCredential` instead of `ManagedIdentityCredential` directly, as WSL2 doesn't have access to the Azure IMDS endpoint. After running `az login`, `DefaultAzureCredential` will automatically use your CLI credentials.
The module supports these environment variables as fallbacks:
- `AZURE_OPENAI_ENDPOINT` or `AZURE_OPENAI_BASE_URL` - Azure OpenAI resource endpoint
- `AZURE_OPENAI_API_KEY` or `AZURE_OPENAI_KEY` - API key for authentication
- `AZURE_OPENAI_AD_TOKEN` - Azure AD token for authentication
- `AZURE_OPENAI_API_VERSION` - API version (defaults to `2024-02-15-preview`)
- `AZURE_USE_MANAGED_IDENTITY` - Enable managed identity authentication (`true`, `1`, or `yes`)
- `AZURE_USE_DEFAULT_CREDENTIAL` - Enable DefaultAzureCredential authentication (`true`, `1`, or `yes`)
- `AZURE_MANAGED_IDENTITY_CLIENT_ID` - Client ID for user-assigned managed identity
- `AZURE_OPENAI_DEFAULT_DEPLOYMENT` - Default deployment name when no mapping matches
- `AZURE_OPENAI_DEFAULT_MODEL` - Default model for requests (defaults to `gpt-5`)
- `AZURE_OPENAI_MAX_OUTPUT_TOKENS` - Maximum output tokens (defaults to `4096`)
- `AZURE_OPENAI_TEMPERATURE` - Temperature for generation (defaults to `0.7`)
Note: Configuration file values take precedence over environment variables.
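The precedence rule can be illustrated with a small helper. `get_setting` is a hypothetical function for this example, not part of the module's API.

```python
import os

def get_setting(config: dict, key: str, env_var: str, default=None):
    """Config file values take precedence over environment variables."""
    if key in config:
        return config[key]
    return os.environ.get(env_var, default)

os.environ["AZURE_OPENAI_API_VERSION"] = "2024-10-01-preview"

# No config value: the environment variable is used
print(get_setting({}, "api_version", "AZURE_OPENAI_API_VERSION", "2024-02-15-preview"))
# 2024-10-01-preview

# Config value present: it wins over the environment
print(get_setting({"api_version": "2025-03-01-preview"},
                  "api_version", "AZURE_OPENAI_API_VERSION"))
# 2025-03-01-preview
```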
Once configured, the Azure OpenAI provider works seamlessly with Amplifier:
```python
# In your Amplifier session
response = await session.send_message(
    "Hello, how are you?",
    provider="azure-openai",
    model="gpt-5",  # Will be mapped to your Azure deployment
)
```

The module defaults to API version `2024-02-15-preview`. You can override this:
```toml
[providers.config]
api_version = "2024-10-01-preview"  # Use a newer version
```

API versions `2024-08-01-preview` and later introduce parameter changes:

- Use `max_output_tokens` (Azure) / `max_completion_tokens` (OpenAI) instead of `max_tokens`
- The provider automatically handles this translation for you
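The translation is roughly equivalent to the sketch below. `translate_params` is not the module's actual function, and the version check relies on ISO-style date strings comparing lexicographically.

```python
def translate_params(params: dict, api_version: str) -> dict:
    """Rename max_tokens for newer Azure API versions.

    ISO-style date prefixes ("2024-10-...") compare correctly as strings,
    so a plain string comparison suffices for the version cutoff.
    """
    translated = dict(params)
    if api_version >= "2024-08-01" and "max_tokens" in translated:
        translated["max_output_tokens"] = translated.pop("max_tokens")
    return translated

print(translate_params({"max_tokens": 4096}, "2024-10-01-preview"))
# {'max_output_tokens': 4096}
print(translate_params({"max_tokens": 4096}, "2024-02-15-preview"))
# {'max_tokens': 4096}
```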
Model-Specific Restrictions: Some models (e.g., GPT-5 and later) have specific parameter requirements:
- May only support default temperature values
- Check Azure OpenAI documentation for your specific model's capabilities
Example config for newer models:
```toml
[providers.config]
api_version = "2025-03-01-preview"
default_model = "gpt-5"
temperature = 1.0  # Use the model's default temperature
```

Check Azure OpenAI documentation for available API versions and model capabilities.
The provider fully supports Responses API tool/function calling:
```python
tools = [
    {
        "name": "get_weather",
        "description": "Get the weather in a location",
        "input_schema": {
            "type": "object",
            "properties": {
                "location": {"type": "string"}
            },
        },
    }
]

response = await session.send_message(
    "What's the weather in Seattle?",
    provider="azure-openai",
    tools=tools,
)
```
1. Authentication Errors
   - Verify your API key or Azure AD token is correct
   - Check that the endpoint URL includes `https://` and ends with `.openai.azure.com`
2. Deployment Not Found
   - Ensure the deployment name exists in your Azure OpenAI resource
   - Check deployment mapping configuration
   - Verify the deployment is in a "Succeeded" state
3. API Version Errors
   - Some features may require specific API versions
   - Try using the default version or check Azure documentation
4. Rate Limiting
   - Azure OpenAI has deployment-specific rate limits
   - Consider implementing retry logic or using multiple deployments
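A minimal retry-with-backoff sketch for the rate-limiting case. This is generic Python; `RateLimitError` here is a stand-in for your client library's exception (e.g. `openai.RateLimitError`).

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for your client library's rate-limit exception."""

def with_retries(call, max_attempts: int = 5, base_delay: float = 1.0):
    """Retry a callable on rate-limit errors with exponential backoff and jitter."""
    for attempt in range(max_attempts):
        try:
            return call()
        except RateLimitError:
            if attempt == max_attempts - 1:
                raise  # Out of attempts: surface the error
            # Exponential backoff plus jitter to avoid synchronized retries
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.25))
```

Usage: wrap the request, e.g. `with_retries(lambda: client.responses.create(...))`.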
Enable debug logging to see deployment resolution:
```python
import logging

logging.getLogger("amplifier_module_provider_azure_openai").setLevel(logging.DEBUG)
```

Key differences from the standard OpenAI API:

- Endpoints: Uses Azure resource endpoints instead of api.openai.com
- Authentication: Supports Azure-specific auth methods
- Deployments: References deployment names instead of model names directly
- Rate Limits: Azure-specific quotas per deployment
- Regional Availability: Limited to Azure regions with OpenAI service
MIT License - See LICENSE file for details.
For issues or questions:
- Check the Amplifier documentation
- Review Azure OpenAI documentation
- File issues in the Amplifier repository