Merged
15 changes: 15 additions & 0 deletions README.fr.md
@@ -221,6 +221,7 @@ picoclaw onboard
"model_name": "gpt4",
"model": "openai/gpt-5.2",
"api_key": "sk-your-openai-key",
"request_timeout": 300,
"api_base": "https://api.openai.com/v1"
}
],
@@ -252,6 +253,9 @@ picoclaw onboard
}
```

> **Nouveau** : Le format de configuration `model_list` permet d'ajouter des fournisseurs sans modifier le code. Voir [Configuration de Modèle](#configuration-de-modèle-model_list) pour plus de détails.
> `request_timeout` est optionnel et s'exprime en secondes. S'il est omis ou défini à `<= 0`, PicoClaw utilise le délai d'expiration par défaut (120s).

**3. Obtenir des Clés API**

* **Fournisseur LLM** : [OpenRouter](https://openrouter.ai/keys) · [Zhipu](https://open.bigmodel.cn/usercenter/proj-mgmt/apikeys) · [Anthropic](https://console.anthropic.com) · [OpenAI](https://platform.openai.com) · [Gemini](https://aistudio.google.com/api-keys)
@@ -979,6 +983,17 @@ Cette conception permet également le **support multi-agent** avec une sélectio
```
> Exécutez `picoclaw auth login --provider anthropic` pour configurer les identifiants OAuth.

**Proxy/API personnalisée**
```json
{
"model_name": "my-custom-model",
"model": "openai/custom-model",
"api_base": "https://my-proxy.com/v1",
"api_key": "sk-...",
"request_timeout": 300
}
```

#### Équilibrage de Charge

Configurez plusieurs points de terminaison pour le même nom de modèle—PicoClaw utilisera automatiquement le round-robin entre eux :
15 changes: 15 additions & 0 deletions README.ja.md
@@ -183,6 +183,7 @@ picoclaw onboard
"model_name": "gpt4",
"model": "openai/gpt-5.2",
"api_key": "sk-your-openai-key",
"request_timeout": 300,
"api_base": "https://api.openai.com/v1"
}
],
@@ -221,6 +222,9 @@ picoclaw onboard
}
```

> **新機能**: `model_list` 形式により、プロバイダーをコード変更なしで追加できます。詳細は [モデル設定](#モデル設定-model_list) を参照してください。
> `request_timeout` は任意の秒単位設定です。省略または `<= 0` の場合、PicoClaw はデフォルトのタイムアウト(120秒)を使用します。

**3. API キーの取得**

- **LLM プロバイダー**: [OpenRouter](https://openrouter.ai/keys) · [Zhipu](https://open.bigmodel.cn/usercenter/proj-mgmt/apikeys) · [Anthropic](https://console.anthropic.com) · [OpenAI](https://platform.openai.com) · [Gemini](https://aistudio.google.com/api-keys)
@@ -918,6 +922,17 @@ HEARTBEAT_OK 応答 ユーザーが直接結果を受け取る
```
> OAuth認証を設定するには、`picoclaw auth login --provider anthropic` を実行してください。

**カスタムプロキシ/API**
```json
{
"model_name": "my-custom-model",
"model": "openai/custom-model",
"api_base": "https://my-proxy.com/v1",
"api_key": "sk-...",
"request_timeout": 300
}
```

#### ロードバランシング

同じモデル名で複数のエンドポイントを設定すると、PicoClaw が自動的にラウンドロビンで分散します:
7 changes: 5 additions & 2 deletions README.md
**Collaborator** commented:

> Can I bother you with the addition of these lines into other READMEs? AI generated translation is fine.

@@ -232,7 +232,8 @@ picoclaw onboard
{
"model_name": "gpt4",
"model": "openai/gpt-5.2",
"api_key": "your-api-key"
"api_key": "your-api-key",
"request_timeout": 300
},
{
"model_name": "claude-sonnet-4.6",
@@ -262,6 +263,7 @@ picoclaw onboard
```

> **New**: The `model_list` configuration format allows zero-code provider addition. See [Model Configuration](#model-configuration-model_list) for details.
> `request_timeout` is optional and uses seconds. If omitted or set to `<= 0`, PicoClaw uses the default timeout (120s).

**3. Get API Keys**

@@ -915,7 +917,8 @@ This design also enables **multi-agent support** with flexible provider selectio
"model_name": "my-custom-model",
"model": "openai/custom-model",
"api_base": "https://my-proxy.com/v1",
"api_key": "sk-..."
"api_key": "sk-...",
"request_timeout": 300
}
```

15 changes: 15 additions & 0 deletions README.pt-br.md
@@ -222,6 +222,7 @@ picoclaw onboard
"model_name": "gpt4",
"model": "openai/gpt-5.2",
"api_key": "sk-your-openai-key",
"request_timeout": 300,
"api_base": "https://api.openai.com/v1"
}
],
@@ -246,6 +247,9 @@ picoclaw onboard
}
```

> **Novo**: O formato de configuração `model_list` permite adicionar provedores sem alterar código. Veja [Configuração de Modelo](#configuração-de-modelo-model_list) para detalhes.
> `request_timeout` é opcional e usa segundos. Se omitido ou definido como `<= 0`, o PicoClaw usa o timeout padrão (120s).

**3. Obter API Keys**

* **Provedor de LLM**: [OpenRouter](https://openrouter.ai/keys) · [Zhipu](https://open.bigmodel.cn/usercenter/proj-mgmt/apikeys) · [Anthropic](https://console.anthropic.com) · [OpenAI](https://platform.openai.com) · [Gemini](https://aistudio.google.com/api-keys)
@@ -973,6 +977,17 @@ Este design também possibilita o **suporte multi-agent** com seleção flexíve
```
> Execute `picoclaw auth login --provider anthropic` para configurar credenciais OAuth.

**Proxy/API personalizada**
```json
{
"model_name": "my-custom-model",
"model": "openai/custom-model",
"api_base": "https://my-proxy.com/v1",
"api_key": "sk-...",
"request_timeout": 300
}
```

#### Balanceamento de Carga

Configure vários endpoints para o mesmo nome de modelo—PicoClaw fará round-robin automaticamente entre eles:
15 changes: 15 additions & 0 deletions README.vi.md
@@ -202,6 +202,7 @@ picoclaw onboard
"model_name": "gpt4",
"model": "openai/gpt-5.2",
"api_key": "sk-your-openai-key",
"request_timeout": 300,
"api_base": "https://api.openai.com/v1"
}
],
@@ -220,6 +221,9 @@ picoclaw onboard
}
```

> **Mới**: Định dạng cấu hình `model_list` cho phép thêm nhà cung cấp mà không cần thay đổi mã nguồn. Xem [Cấu hình Mô hình](#cấu-hình-mô-hình-model_list) để biết chi tiết.
> `request_timeout` là tùy chọn và dùng đơn vị giây. Nếu bỏ qua hoặc đặt `<= 0`, PicoClaw sẽ dùng timeout mặc định (120s).

**3. Lấy API Key**

* **Nhà cung cấp LLM**: [OpenRouter](https://openrouter.ai/keys) · [Zhipu](https://open.bigmodel.cn/usercenter/proj-mgmt/apikeys) · [Anthropic](https://console.anthropic.com) · [OpenAI](https://platform.openai.com) · [Gemini](https://aistudio.google.com/api-keys)
@@ -944,6 +948,17 @@ Thiết kế này cũng cho phép **hỗ trợ đa tác nhân** với lựa ch
```
> Chạy `picoclaw auth login --provider anthropic` để thiết lập thông tin xác thực OAuth.

**Proxy/API tùy chỉnh**
```json
{
"model_name": "my-custom-model",
"model": "openai/custom-model",
"api_base": "https://my-proxy.com/v1",
"api_key": "sk-...",
"request_timeout": 300
}
```

#### Cân bằng Tải

Định cấu hình nhiều endpoint cho cùng một tên mô hình—PicoClaw sẽ tự động phân phối round-robin giữa chúng:
7 changes: 5 additions & 2 deletions README.zh.md
@@ -234,7 +234,8 @@ picoclaw onboard
{
"model_name": "gpt4",
"model": "openai/gpt-5.2",
"api_key": "your-api-key"
"api_key": "your-api-key",
"request_timeout": 300
},
{
"model_name": "claude-sonnet-4.6",
@@ -263,6 +264,7 @@ picoclaw onboard
```

> **新功能**: `model_list` 配置格式支持零代码添加 provider。详见[模型配置](#模型配置-model_list)章节。
> `request_timeout` 为可选项,单位为秒。若省略或设置为 `<= 0`,PicoClaw 使用默认超时(120 秒)。

**3. 获取 API Key**

@@ -550,7 +552,8 @@ Agent 读取 HEARTBEAT.md
"model_name": "my-custom-model",
"model": "openai/custom-model",
"api_base": "https://my-proxy.com/v1",
"api_key": "sk-..."
"api_key": "sk-...",
"request_timeout": 300
}
```

1 change: 1 addition & 0 deletions docs/migration/model-list-migration.md
@@ -117,6 +117,7 @@ The `model` field uses a protocol prefix format: `[protocol/]model-identifier`
| `connect_mode` | No | Connection mode for CLI providers: `stdio`, `grpc` |
| `rpm` | No | Requests per minute limit |
| `max_tokens_field` | No | Field name for max tokens |
| `request_timeout` | No | HTTP request timeout in seconds; omitted or `<= 0` falls back to the default (120s) |

*`api_key` is required for HTTP-based protocols unless `api_base` points to a local server.
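
For instance, a `model_list` entry combining the optional fields above might look like the following (values are illustrative, mirroring the examples elsewhere in this PR):

```json
{
  "model_name": "gpt4",
  "model": "openai/gpt-5.2",
  "api_key": "sk-your-openai-key",
  "rpm": 60,
  "request_timeout": 300
}
```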

12 changes: 7 additions & 5 deletions pkg/config/config.go
@@ -371,11 +371,12 @@ func (p ProvidersConfig) MarshalJSON() ([]byte, error) {
}

type ProviderConfig struct {
APIKey string `json:"api_key" env:"PICOCLAW_PROVIDERS_{{.Name}}_API_KEY"`
APIBase string `json:"api_base" env:"PICOCLAW_PROVIDERS_{{.Name}}_API_BASE"`
Proxy string `json:"proxy,omitempty" env:"PICOCLAW_PROVIDERS_{{.Name}}_PROXY"`
AuthMethod string `json:"auth_method,omitempty" env:"PICOCLAW_PROVIDERS_{{.Name}}_AUTH_METHOD"`
ConnectMode string `json:"connect_mode,omitempty" env:"PICOCLAW_PROVIDERS_{{.Name}}_CONNECT_MODE"` // only for Github Copilot, `stdio` or `grpc`
APIKey string `json:"api_key" env:"PICOCLAW_PROVIDERS_{{.Name}}_API_KEY"`
APIBase string `json:"api_base" env:"PICOCLAW_PROVIDERS_{{.Name}}_API_BASE"`
Proxy string `json:"proxy,omitempty" env:"PICOCLAW_PROVIDERS_{{.Name}}_PROXY"`
RequestTimeout int `json:"request_timeout,omitempty" env:"PICOCLAW_PROVIDERS_{{.Name}}_REQUEST_TIMEOUT"`
AuthMethod string `json:"auth_method,omitempty" env:"PICOCLAW_PROVIDERS_{{.Name}}_AUTH_METHOD"`
ConnectMode string `json:"connect_mode,omitempty" env:"PICOCLAW_PROVIDERS_{{.Name}}_CONNECT_MODE"` // only for Github Copilot, `stdio` or `grpc`
}

type OpenAIProviderConfig struct {
@@ -406,6 +407,7 @@ type ModelConfig struct {
// Optional optimizations
RPM int `json:"rpm,omitempty"` // Requests per minute limit
MaxTokensField string `json:"max_tokens_field,omitempty"` // Field name for max tokens (e.g., "max_completion_tokens")
RequestTimeout int `json:"request_timeout,omitempty"`
}

// Validate checks if the ModelConfig has all required fields.