Merged
17 commits
5e1a25f
Add support for azure openai provider
kunalk16 Mar 12, 2026
0d720f0
Merge branch 'main' of https://github.com/kunalk16/picoclaw into feat…
kunalk16 Mar 12, 2026
577796f
Merge branch 'main' of https://github.com/sipeed/picoclaw into feat-a…
kunalk16 Mar 12, 2026
4101111
Add checks for deployment model name
kunalk16 Mar 12, 2026
ad74f27
Apply suggestion from @Copilot
kunalk16 Mar 12, 2026
4542d25
Addressing @Copilot suggestion to remove the init() function which se…
kunalk16 Mar 12, 2026
b5a915f
Merge branch 'main' of https://github.com/sipeed/picoclaw into feat-a…
kunalk16 Mar 12, 2026
ea09139
Merge branch 'main' of https://github.com/sipeed/picoclaw into feat-a…
kunalk16 Mar 13, 2026
320533b
Fix readme
kunalk16 Mar 13, 2026
ec16f7c
Merge branch 'main' of https://github.com/sipeed/picoclaw into feat-a…
kunalk16 Mar 13, 2026
6cdd2de
Merge branch 'main' of https://github.com/sipeed/picoclaw into feat-a…
kunalk16 Mar 13, 2026
1264e99
Merge branch 'main' of https://github.com/sipeed/picoclaw into feat-a…
kunalk16 Mar 13, 2026
7ba20cc
Merge branch 'main' of https://github.com/sipeed/picoclaw into feat-a…
kunalk16 Mar 13, 2026
2d5972e
Merge branch 'main' of https://github.com/sipeed/picoclaw into feat-a…
kunalk16 Mar 13, 2026
e0defab
Merge branch 'main' of https://github.com/sipeed/picoclaw into feat-a…
kunalk16 Mar 14, 2026
03ed172
Fix linting checks
kunalk16 Mar 14, 2026
4131cb2
Merge branch 'main' of https://github.com/sipeed/picoclaw into feat-a…
kunalk16 Mar 14, 2026
1 change: 1 addition & 0 deletions README.fr.md
@@ -987,6 +987,7 @@ Cette conception permet également le **support multi-agent** avec une sélectio
| **BytePlus** | `byteplus/` | `https://ark.ap-southeast.bytepluses.com/api/v3` | OpenAI | [Obtenir Clé](https://www.byteplus.com/) |
| **LongCat** | `longcat/` | `https://api.longcat.chat/openai` | OpenAI | [Obtenir une clé](https://longcat.chat/platform) |
| **ModelScope (魔搭)**| `modelscope/` | `https://api-inference.modelscope.cn/v1` | OpenAI | [Obtenir un Token](https://modelscope.cn/my/tokens) |
| **Azure OpenAI** | `azure/` | `https://{resource}.openai.azure.com` | Azure | [Obtenir Clé](https://portal.azure.com) |
| **Antigravity** | `antigravity/` | Google Cloud | Custom | OAuth uniquement |
| **GitHub Copilot** | `github-copilot/` | `localhost:4321` | gRPC | - |

1 change: 1 addition & 0 deletions README.ja.md
@@ -928,6 +928,7 @@ HEARTBEAT_OK 応答 ユーザーが直接結果を受け取る
| **BytePlus** | `byteplus/` | `https://ark.ap-southeast.bytepluses.com/api/v3` | OpenAI | [キーを取得](https://www.byteplus.com) |
| **LongCat** | `longcat/` | `https://api.longcat.chat/openai` | OpenAI | [キーを取得](https://longcat.chat/platform) |
| **ModelScope (魔搭)**| `modelscope/` | `https://api-inference.modelscope.cn/v1` | OpenAI | [トークンを取得](https://modelscope.cn/my/tokens) |
| **Azure OpenAI** | `azure/` | `https://{resource}.openai.azure.com` | Azure | [キーを取得](https://portal.azure.com) |
| **Antigravity** | `antigravity/` | Google Cloud | カスタム | OAuthのみ |
| **GitHub Copilot** | `github-copilot/` | `localhost:4321` | gRPC | - |

2 changes: 2 additions & 0 deletions README.md
@@ -1005,6 +1005,7 @@ The subagent has access to tools (message, web_search, etc.) and can communicate
| `groq` | LLM + **Voice transcription** (Whisper) | [console.groq.com](https://console.groq.com) |
| `cerebras` | LLM (Cerebras direct) | [cerebras.ai](https://cerebras.ai) |
| `vivgrid` | LLM (Vivgrid direct) | [vivgrid.com](https://vivgrid.com) |
| `azure` | LLM (Azure OpenAI) | [portal.azure.com](https://portal.azure.com) |

### Model Configuration (model_list)

@@ -1041,6 +1042,7 @@ This design also enables **multi-agent support** with flexible provider selectio
| **Vivgrid** | `vivgrid/` | `https://api.vivgrid.com/v1` | OpenAI | [Get Key](https://vivgrid.com) |
| **LongCat** | `longcat/` | `https://api.longcat.chat/openai` | OpenAI | [Get Key](https://longcat.chat/platform) |
| **ModelScope (魔搭)**| `modelscope/` | `https://api-inference.modelscope.cn/v1` | OpenAI | [Get Token](https://modelscope.cn/my/tokens) |
| **Azure OpenAI** | `azure/` | `https://{resource}.openai.azure.com` | Azure | [Get Key](https://portal.azure.com) |
| **Antigravity** | `antigravity/` | Google Cloud | Custom | OAuth only |
| **GitHub Copilot** | `github-copilot/` | `localhost:4321` | gRPC | - |

1 change: 1 addition & 0 deletions README.pt-br.md
@@ -983,6 +983,7 @@ Este design também possibilita o **suporte multi-agent** com seleção flexíve
| **BytePlus** | `byteplus/` | `https://ark.ap-southeast.bytepluses.com/api/v3` | OpenAI | [Obter Chave](https://www.byteplus.com) |
| **LongCat** | `longcat/` | `https://api.longcat.chat/openai` | OpenAI | [Obter Chave](https://longcat.chat/platform) |
| **ModelScope (魔搭)**| `modelscope/` | `https://api-inference.modelscope.cn/v1` | OpenAI | [Obter Token](https://modelscope.cn/my/tokens) |
| **Azure OpenAI** | `azure/` | `https://{resource}.openai.azure.com` | Azure | [Obter Chave](https://portal.azure.com) |
| **Antigravity** | `antigravity/` | Google Cloud | Custom | Apenas OAuth |
| **GitHub Copilot** | `github-copilot/` | `localhost:4321` | gRPC | - |

1 change: 1 addition & 0 deletions README.vi.md
@@ -952,6 +952,7 @@ Thiết kế này cũng cho phép **hỗ trợ đa tác nhân** với lựa ch
| **BytePlus** | `byteplus/` | `https://ark.ap-southeast.bytepluses.com/api/v3` | OpenAI | [Lấy Khóa](https://www.byteplus.com) |
| **LongCat** | `longcat/` | `https://api.longcat.chat/openai` | OpenAI | [Lấy Key](https://longcat.chat/platform) |
| **ModelScope (魔搭)**| `modelscope/` | `https://api-inference.modelscope.cn/v1` | OpenAI | [Lấy Token](https://modelscope.cn/my/tokens) |
| **Azure OpenAI** | `azure/` | `https://{resource}.openai.azure.com` | Azure | [Lấy Khóa](https://portal.azure.com) |
| **Antigravity** | `antigravity/` | Google Cloud | Tùy chỉnh | Chỉ OAuth |
| **GitHub Copilot** | `github-copilot/` | `localhost:4321` | gRPC | - |

1 change: 1 addition & 0 deletions README.zh.md
@@ -524,6 +524,7 @@ Agent 读取 HEARTBEAT.md
| **BytePlus** | `byteplus/` | `https://ark.ap-southeast.bytepluses.com/api/v3` | OpenAI | [获取密钥](https://www.byteplus.com) |
| **LongCat** | `longcat/` | `https://api.longcat.chat/openai` | OpenAI | [获取密钥](https://longcat.chat/platform) |
| **ModelScope (魔搭)**| `modelscope/` | `https://api-inference.modelscope.cn/v1` | OpenAI | [获取 Token](https://modelscope.cn/my/tokens) |
| **Azure OpenAI** | `azure/` | `https://{resource}.openai.azure.com` | Azure | [获取密钥](https://portal.azure.com) |
| **Antigravity** | `antigravity/` | Google Cloud | 自定义 | 仅 OAuth |
| **GitHub Copilot** | `github-copilot/` | `localhost:4321` | gRPC | - |

6 changes: 6 additions & 0 deletions config/config.example.json
@@ -53,6 +53,12 @@
      "api_key": "your-modelscope-access-token",
      "api_base": "https://api-inference.modelscope.cn/v1"
    },
    {
      "model_name": "azure-gpt5",
      "model": "azure/my-gpt5-deployment",
      "api_key": "your-azure-api-key",
      "api_base": "https://your-resource.openai.azure.com"
    },
    {
      "model_name": "loadbalanced-gpt-5.4",
      "model": "openai/gpt-5.4",
9 changes: 9 additions & 0 deletions pkg/config/defaults.go
@@ -384,6 +384,15 @@ func DefaultConfig() *Config {
			APIBase: "http://localhost:8000/v1",
			APIKey:  "",
		},

		// Azure OpenAI - https://portal.azure.com
		// model_name is a user-friendly alias; the model field's path after "azure/" is your deployment name
		{
			ModelName: "azure-gpt5",
			Model:     "azure/my-gpt5-deployment",
			APIBase:   "https://your-resource.openai.azure.com",
			APIKey:    "",
		},
	},
	Gateway: GatewayConfig{
		Host: "127.0.0.1",
150 changes: 150 additions & 0 deletions pkg/providers/azure/provider.go
@@ -0,0 +1,150 @@
package azure

import (
	"bytes"
	"context"
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
	"strings"
	"time"

	"github.com/sipeed/picoclaw/pkg/providers/common"
	"github.com/sipeed/picoclaw/pkg/providers/protocoltypes"
)

type (
	LLMResponse    = protocoltypes.LLMResponse
	Message        = protocoltypes.Message
	ToolDefinition = protocoltypes.ToolDefinition
)

const (
	// azureAPIVersion is the Azure OpenAI API version used for all requests.
	azureAPIVersion       = "2024-10-21"
	defaultRequestTimeout = common.DefaultRequestTimeout
)

// Provider implements the LLM provider interface for Azure OpenAI endpoints.
// It handles Azure-specific authentication (api-key header), URL construction
// (deployment-based), and request body formatting (max_completion_tokens, no model field).
type Provider struct {
	apiKey     string
	apiBase    string
	httpClient *http.Client
}

// Option configures the Azure Provider.
type Option func(*Provider)

// WithRequestTimeout sets the HTTP request timeout.
func WithRequestTimeout(timeout time.Duration) Option {
	return func(p *Provider) {
		if timeout > 0 {
			p.httpClient.Timeout = timeout
		}
	}
}

// NewProvider creates a new Azure OpenAI provider.
func NewProvider(apiKey, apiBase, proxy string, opts ...Option) *Provider {
	p := &Provider{
		apiKey:     apiKey,
		apiBase:    strings.TrimRight(apiBase, "/"),
		httpClient: common.NewHTTPClient(proxy),
	}

	for _, opt := range opts {
		if opt != nil {
			opt(p)
		}
	}

	return p
}

// NewProviderWithTimeout creates a new Azure OpenAI provider with a custom request timeout in seconds.
func NewProviderWithTimeout(apiKey, apiBase, proxy string, requestTimeoutSeconds int) *Provider {
	return NewProvider(
		apiKey, apiBase, proxy,
		WithRequestTimeout(time.Duration(requestTimeoutSeconds)*time.Second),
	)
}

// Chat sends a chat completion request to the Azure OpenAI endpoint.
// The model parameter is used as the Azure deployment name in the URL.
func (p *Provider) Chat(
	ctx context.Context,
	messages []Message,
	tools []ToolDefinition,
	model string,
	options map[string]any,
) (*LLMResponse, error) {
	if p.apiBase == "" {
		return nil, fmt.Errorf("Azure API base not configured")
	}

	// model is the deployment name for Azure OpenAI
	deployment := model

	// Build Azure-specific URL safely using url.JoinPath and query encoding
	// to prevent path traversal or query injection via deployment names.
	base, err := url.JoinPath(p.apiBase, "openai/deployments", deployment, "chat/completions")
	if err != nil {
		return nil, fmt.Errorf("failed to build Azure request URL: %w", err)
	}
	requestURL := base + "?api-version=" + azureAPIVersion

	// Build request body — no "model" field (Azure infers from deployment URL)
	requestBody := map[string]any{
		"messages": common.SerializeMessages(messages),
	}

	if len(tools) > 0 {
		requestBody["tools"] = tools
		requestBody["tool_choice"] = "auto"
	}

	// Azure OpenAI always uses max_completion_tokens
	if maxTokens, ok := common.AsInt(options["max_tokens"]); ok {
		requestBody["max_completion_tokens"] = maxTokens
	}

	if temperature, ok := common.AsFloat(options["temperature"]); ok {
		requestBody["temperature"] = temperature
	}

	jsonData, err := json.Marshal(requestBody)
	if err != nil {
		return nil, fmt.Errorf("failed to marshal request: %w", err)
	}

	req, err := http.NewRequestWithContext(ctx, "POST", requestURL, bytes.NewReader(jsonData))
	if err != nil {
		return nil, fmt.Errorf("failed to create request: %w", err)
	}

	// Azure uses api-key header instead of Authorization: Bearer
	req.Header.Set("Content-Type", "application/json")
	if p.apiKey != "" {
		req.Header.Set("api-key", p.apiKey)
	}

	resp, err := p.httpClient.Do(req)
	if err != nil {
		return nil, fmt.Errorf("failed to send request: %w", err)
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		return nil, common.HandleErrorResponse(resp, p.apiBase)
	}

	return common.ReadAndParseResponse(resp, p.apiBase)
}

// GetDefaultModel returns an empty string as Azure deployments are user-configured.
func (p *Provider) GetDefaultModel() string {
	return ""
}