This adapter provides an OpenAI-compatible API layer for Azure AI Foundry Agents, allowing you to use existing OpenAI client libraries and tools with Azure AI Foundry Agents seamlessly. It acts as a bridge between the OpenAI API format and the Azure AI Foundry Agent APIs, and offers a more elegant way to consume an AI Foundry Agent than going through an MCP integration.
Azure AI Foundry Agent Core Capabilities Supported:
- 📚 Knowledge: Access to custom knowledge bases and documents
- ⚡ Actions: Execute custom functions and integrations
- 🔗 Connected Agents: Multi-agent orchestration and collaboration
- 🔄 OpenAI-compatible API: Full compatibility with OpenAI's `/v1/chat/completions` endpoint
- 🌊 Streaming Support: Real-time streaming responses with Server-Sent Events (SSE)
- 📋 Model Listing: `/v1/models` endpoint for listing available models
- 🔍 Comprehensive Logging: Detailed logging with configurable levels and file rotation
- 📊 Request Auditing: Automatic audit trail for all requests and responses
- 🛡️ Error Handling: Robust error handling with fallback responses
- 📖 Auto Documentation: FastAPI-powered interactive API documentation
- 🔧 Health Monitoring: Health check endpoint for service monitoring
- 🌐 CORS Support: Cross-origin resource sharing enabled
- ⚙️ Environment Configuration: Flexible configuration via environment variables
```
foundry-agent-openai-compat-adapter/
├── main.py               # Main application file
├── README.md             # This file
├── .env.example          # Environment variables template
├── logs/                 # Auto-generated log files
├── audits/               # Auto-generated audit files
└── tests/                # Test scripts
    ├── test_client.py    # Python test client
    ├── test_streaming.py # Streaming test
    ├── test_curl.bat     # Windows curl tests
    └── test_curl.sh      # Linux/macOS curl tests
```
- Python 3.7+
- Azure AI Foundry Agent (with valid credentials)
- Azure Subscription Application credentials (tenant_id, client_id, client_secret)
1. Clone the repository:

   ```bash
   git clone https://github.com/satomic/foundry-agent-openai-compat-adapter.git
   cd foundry-agent-openai-compat-adapter
   ```

2. Install required dependencies:

   ```bash
   pip install -r requirements.txt
   ```

3. Configure environment variables:

   ```bash
   cp .env.example .env
   ```

   Edit the `.env` file with your Azure Subscription Application credentials and settings:

   ```ini
   # Azure Authentication Information (from Azure Subscription Application)
   AZURE_TENANT_ID=your_tenant_id_here
   AZURE_CLIENT_ID=your_client_id_here
   AZURE_CLIENT_SECRET=your_client_secret_here

   # Azure AI Project Information
   AZURE_ENDPOINT=your_azure_ai_endpoint_here
   AZURE_AGENT_ID=your_agent_id_here

   # Server Configuration (Optional)
   SERVER_HOST=0.0.0.0
   SERVER_PORT=8000
   LOG_LEVEL=info
   ```
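Before starting the server, it can be useful to sanity-check that every required variable is actually set. A minimal sketch (the variable names come from `.env.example` above; the helper script itself is not part of the adapter):

```python
import os

# Variables the adapter requires, per .env.example
REQUIRED_VARS = [
    "AZURE_TENANT_ID",
    "AZURE_CLIENT_ID",
    "AZURE_CLIENT_SECRET",
    "AZURE_ENDPOINT",
    "AZURE_AGENT_ID",
]

def missing_vars(env=os.environ):
    """Return the names of required variables that are unset or empty."""
    return [name for name in REQUIRED_VARS if not env.get(name)]

if __name__ == "__main__":
    missing = missing_vars()
    if missing:
        print("Missing required variables:", ", ".join(missing))
    else:
        print("All required variables are set.")
```

Run it in the same shell (or with the same `.env` loaded) that will launch `main.py`.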
```bash
python main.py
```

The server will start on http://localhost:8000 (or the configured host/port).
Visit http://localhost:8000/docs for interactive Swagger/OpenAPI documentation.
Python test script:

```bash
python tests/test_client.py
```

Streaming test:

```bash
python tests/test_streaming.py
```

curl tests:

```bash
# Windows
tests/test_curl.bat

# Linux/macOS
bash tests/test_curl.sh
```

```python
import openai

# Configure the client to use the local adapter
client = openai.OpenAI(
    api_key="not-needed",  # Any string works
    base_url="http://localhost:8000/v1"
)

# Non-streaming chat completion
response = client.chat.completions.create(
    model="foundry-agent-model",
    messages=[
        {"role": "user", "content": "Hello! Can you help me with Python?"}
    ],
    temperature=0.7,
    max_tokens=150
)
print(response.choices[0].message.content)

# Streaming chat completion
stream = client.chat.completions.create(
    model="foundry-agent-model",
    messages=[
        {"role": "user", "content": "Tell me a short story"}
    ],
    temperature=0.7,
    stream=True
)
for chunk in stream:
    if chunk.choices[0].delta.content is not None:
        print(chunk.choices[0].delta.content, end="")
```

```bash
# Non-streaming request
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "foundry-agent-model",
    "messages": [{"role": "user", "content": "Hello!"}],
    "temperature": 0.7,
    "max_tokens": 150
  }'

# Streaming request
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Accept: text/event-stream" \
  -d '{
    "model": "foundry-agent-model",
    "messages": [{"role": "user", "content": "Tell me a story"}],
    "temperature": 0.7,
    "stream": true
  }'
```

| Endpoint | Method | Description |
|---|---|---|
| `/v1/chat/completions` | POST | OpenAI-compatible chat completions (supports streaming) |
| `/v1/models` | GET | List available models |
| `/health` | GET | Health check endpoint |
| `/docs` | GET | Interactive API documentation |
- ✅ `model`: Model identifier (use "foundry-agent-model")
- ✅ `messages`: Array of conversation messages
- ✅ `temperature`: Sampling temperature (0.0 to 2.0)
- ✅ `max_tokens`: Maximum tokens in the completion
- ✅ `stream`: Enable streaming responses
- ❌ `functions`, `tools`: Not currently supported
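When `stream` is enabled, the completions endpoint emits Server-Sent Events: each event is a `data: {...}` line carrying an OpenAI-style chunk, and the stream ends with a `data: [DONE]` sentinel. A minimal parser sketch for such a stream (it assumes the standard OpenAI chunk shape and is not code from the adapter itself):

```python
import json

def extract_stream_text(sse_lines):
    """Collect the delta content from OpenAI-style SSE lines."""
    parts = []
    for line in sse_lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines and comments
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break  # end-of-stream sentinel
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"]
        if delta.get("content"):
            parts.append(delta["content"])
    return "".join(parts)
```

In practice the OpenAI client library does this parsing for you; the sketch is only useful when consuming the SSE stream directly (e.g. from `curl` output or a raw HTTP client).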
Logs are automatically saved to the logs/ directory with daily rotation:
- Format: `YYYY-MM-DD.log`
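One way to reproduce this naming scheme in your own tooling is to compute the file name from the current date at startup (a sketch only, not the adapter's internal implementation):

```python
import logging
import os
from datetime import date

def make_daily_logger(log_dir="logs", level=logging.INFO):
    """Create a logger writing to <log_dir>/YYYY-MM-DD.log, matching the naming above."""
    os.makedirs(log_dir, exist_ok=True)
    path = os.path.join(log_dir, f"{date.today():%Y-%m-%d}.log")
    handler = logging.FileHandler(path, encoding="utf-8")
    handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
    logger = logging.getLogger("daily-example")
    logger.setLevel(level)
    logger.addHandler(handler)
    return logger, path
```

A long-running process would need to re-evaluate the date at midnight (e.g. with `logging.handlers.TimedRotatingFileHandler`); the sketch above only picks the file once at startup.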
Set via the `LOG_LEVEL` environment variable:

- `DEBUG`: Detailed debugging information
- `INFO`: General information (default)
- `WARNING`: Warning messages
- `ERROR`: Error messages
- `CRITICAL`: Critical errors
All requests and responses are automatically saved to the audits/ directory:
- File format: `audit_YYYYMMDD_HHMMSS_mmm_XXXXXXXX.json`
- Includes complete request/response data
- Metadata about server environment
- Separate audit trails for streaming vs non-streaming requests
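Because the file names embed a sortable timestamp, the most recent audit record can be located with a plain glob (a sketch; the exact JSON fields inside each file depend on the adapter's audit format):

```python
import glob
import json
import os

def latest_audit(audit_dir="audits"):
    """Return (path, parsed JSON) for the newest audit file, or None if there are none."""
    # audit_YYYYMMDD_HHMMSS_... names sort chronologically as plain strings
    files = sorted(glob.glob(os.path.join(audit_dir, "audit_*.json")))
    if not files:
        return None
    path = files[-1]
    with open(path, encoding="utf-8") as f:
        return path, json.load(f)
```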
- Server won't start: Check environment variables in the `.env` file
- Authentication errors: Verify Azure credentials and permissions
- Agent not responding: Check `AZURE_AGENT_ID` and agent status in Azure
- Timeout errors: Check network connectivity to Azure endpoints
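When debugging timeouts, it helps to first confirm that the adapter itself is reachable before suspecting Azure. A quick probe of the `/health` endpoint using only the standard library (the base URL is the default from the configuration above):

```python
import urllib.request
import urllib.error

def adapter_reachable(base_url="http://localhost:8000", timeout=5):
    """Return True if the adapter's /health endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(f"{base_url}/health", timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False
```

If this returns True but chat requests still time out, the problem is most likely between the adapter and the Azure endpoints rather than between your client and the adapter.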
This project is licensed under the MIT License. See LICENSE file for details.