Merged
1 change: 1 addition & 0 deletions .env.example
@@ -5,6 +5,7 @@
 # ANTHROPIC_API_KEY=sk-ant-xxx
 # OPENAI_API_KEY=sk-xxx
 # GEMINI_API_KEY=xxx
+# MODELSCOPE_API_KEY=xxx
 # CLAUDE_CODE_OAUTH=xxx
 # ── Chat Channel ──────────────────────────
 # TELEGRAM_BOT_TOKEN=123456:ABC...
2 changes: 2 additions & 0 deletions README.fr.md
@@ -985,6 +985,7 @@ Cette conception permet également le **support multi-agent** avec une sélectio
 | **ShengsuanYun** | `shengsuanyun/` | `https://router.shengsuanyun.com/api/v1` | OpenAI | - |
 | **BytePlus** | `byteplus/` | `https://ark.ap-southeast.bytepluses.com/api/v3` | OpenAI | [Obtenir Clé](https://www.byteplus.com/) |
 | **LongCat** | `longcat/` | `https://api.longcat.chat/openai` | OpenAI | [Obtenir une clé](https://longcat.chat/platform) |
+| **ModelScope (魔搭)**| `modelscope/` | `https://api-inference.modelscope.cn/v1` | OpenAI | [Obtenir un Token](https://modelscope.cn/my/tokens) |
 | **Antigravity** | `antigravity/` | Google Cloud | Custom | OAuth uniquement |
 | **GitHub Copilot** | `github-copilot/` | `localhost:4321` | gRPC | - |

@@ -1223,6 +1224,7 @@ Cela se produit lorsqu'une autre instance du bot est en cours d'exécution. Assu
 | **Zhipu** | 200K tokens/mois | Convient aux utilisateurs chinois |
 | **Brave Search** | 2000 requêtes/mois | Fonctionnalité de recherche web |
 | **Groq** | Offre gratuite dispo | Inférence ultra-rapide (Llama, Mixtral) |
+| **ModelScope** | 2000 requêtes/jour | Inférence gratuite (Qwen, GLM, DeepSeek, etc.) |

 ---
2 changes: 2 additions & 0 deletions README.ja.md
@@ -926,6 +926,7 @@ HEARTBEAT_OK 応答 ユーザーが直接結果を受け取る
 | **ShengsuanYun** | `shengsuanyun/` | `https://router.shengsuanyun.com/api/v1` | OpenAI | - |
 | **BytePlus** | `byteplus/` | `https://ark.ap-southeast.bytepluses.com/api/v3` | OpenAI | [キーを取得](https://www.byteplus.com) |
 | **LongCat** | `longcat/` | `https://api.longcat.chat/openai` | OpenAI | [キーを取得](https://longcat.chat/platform) |
+| **ModelScope (魔搭)**| `modelscope/` | `https://api-inference.modelscope.cn/v1` | OpenAI | [トークンを取得](https://modelscope.cn/my/tokens) |
 | **Antigravity** | `antigravity/` | Google Cloud | カスタム | OAuthのみ |
 | **GitHub Copilot** | `github-copilot/` | `localhost:4321` | gRPC | - |

@@ -1146,6 +1147,7 @@ Web 検索を有効にするには:
 | **Tavily** | 月 1000 クエリ | AI エージェント検索最適化 |
 | **Groq** | 無料枠あり | 高速推論(Llama, Mixtral) |
 | **Cerebras** | 無料枠あり | 高速推論(Llama, Qwen など) |
+| **ModelScope** | 1 日 2000 リクエスト | 無料推論(Qwen, GLM, DeepSeek など) |

 ---
2 changes: 2 additions & 0 deletions README.md
@@ -1039,6 +1039,7 @@ This design also enables **multi-agent support** with flexible provider selectio
 | **BytePlus** | `byteplus/` | `https://ark.ap-southeast.bytepluses.com/api/v3` | OpenAI | [Get Key](https://www.byteplus.com) |
 | **Vivgrid** | `vivgrid/` | `https://api.vivgrid.com/v1` | OpenAI | [Get Key](https://vivgrid.com) |
 | **LongCat** | `longcat/` | `https://api.longcat.chat/openai` | OpenAI | [Get Key](https://longcat.chat/platform) |
+| **ModelScope (魔搭)**| `modelscope/` | `https://api-inference.modelscope.cn/v1` | OpenAI | [Get Token](https://modelscope.cn/my/tokens) |
 | **Antigravity** | `antigravity/` | Google Cloud | Custom | OAuth only |
 | **GitHub Copilot** | `github-copilot/` | `localhost:4321` | gRPC | - |

@@ -1526,6 +1527,7 @@ This happens when another instance of the bot is running. Make sure only one `pi
 | **Groq** | Free tier available | Fast inference (Llama, Mixtral) |
 | **Cerebras** | Free tier available | Fast inference (Llama, Qwen, etc.) |
 | **LongCat** | Up to 5M tokens/day | Fast inference (free tier) |
+| **ModelScope** | 2000 requests/day | Free inference (Qwen, GLM, DeepSeek, etc.) |

 ---
2 changes: 2 additions & 0 deletions README.pt-br.md
@@ -981,6 +981,7 @@ Este design também possibilita o **suporte multi-agent** com seleção flexíve
 | **ShengsuanYun** | `shengsuanyun/` | `https://router.shengsuanyun.com/api/v1` | OpenAI | - |
 | **BytePlus** | `byteplus/` | `https://ark.ap-southeast.bytepluses.com/api/v3` | OpenAI | [Obter Chave](https://www.byteplus.com) |
 | **LongCat** | `longcat/` | `https://api.longcat.chat/openai` | OpenAI | [Obter Chave](https://longcat.chat/platform) |
+| **ModelScope (魔搭)**| `modelscope/` | `https://api-inference.modelscope.cn/v1` | OpenAI | [Obter Token](https://modelscope.cn/my/tokens) |
 | **Antigravity** | `antigravity/` | Google Cloud | Custom | Apenas OAuth |
 | **GitHub Copilot** | `github-copilot/` | `localhost:4321` | gRPC | - |

@@ -1220,6 +1221,7 @@ Isso acontece quando outra instância do bot está em execução. Certifique-se
 | **Brave Search** | 2000 consultas/mês | Funcionalidade de busca web |
 | **Groq** | Plano gratuito disponível | Inferência ultra-rápida (Llama, Mixtral) |
 | **Cerebras** | Plano gratuito disponível | Inferência ultra-rápida (Llama 3.3 70B) |
+| **ModelScope** | 2000 requisições/dia | Inferência gratuita (Qwen, GLM, DeepSeek, etc.) |

 ---
2 changes: 2 additions & 0 deletions README.vi.md
@@ -950,6 +950,7 @@ Thiết kế này cũng cho phép **hỗ trợ đa tác nhân** với lựa ch
 | **ShengsuanYun** | `shengsuanyun/` | `https://router.shengsuanyun.com/api/v1` | OpenAI | - |
 | **BytePlus** | `byteplus/` | `https://ark.ap-southeast.bytepluses.com/api/v3` | OpenAI | [Lấy Khóa](https://www.byteplus.com) |
 | **LongCat** | `longcat/` | `https://api.longcat.chat/openai` | OpenAI | [Lấy Key](https://longcat.chat/platform) |
+| **ModelScope (魔搭)**| `modelscope/` | `https://api-inference.modelscope.cn/v1` | OpenAI | [Lấy Token](https://modelscope.cn/my/tokens) |
 | **Antigravity** | `antigravity/` | Google Cloud | Tùy chỉnh | Chỉ OAuth |
 | **GitHub Copilot** | `github-copilot/` | `localhost:4321` | gRPC | - |

@@ -1188,6 +1189,7 @@ Một số nhà cung cấp (như Zhipu) có bộ lọc nội dung nghiêm ngặt
 | **Zhipu** | 200K tokens/tháng | Phù hợp cho người dùng Trung Quốc |
 | **Brave Search** | 2000 truy vấn/tháng | Chức năng tìm kiếm web |
 | **Groq** | Có gói miễn phí | Suy luận siêu nhanh (Llama, Mixtral) |
+| **ModelScope** | 2000 yêu cầu/ngày | Suy luận miễn phí (Qwen, GLM, DeepSeek, v.v.) |

 ---
2 changes: 2 additions & 0 deletions README.zh.md
@@ -522,6 +522,7 @@ Agent 读取 HEARTBEAT.md
 | **神算云** | `shengsuanyun/` | `https://router.shengsuanyun.com/api/v1` | OpenAI | - |
 | **BytePlus** | `byteplus/` | `https://ark.ap-southeast.bytepluses.com/api/v3` | OpenAI | [获取密钥](https://www.byteplus.com) |
 | **LongCat** | `longcat/` | `https://api.longcat.chat/openai` | OpenAI | [获取密钥](https://longcat.chat/platform) |
+| **ModelScope (魔搭)**| `modelscope/` | `https://api-inference.modelscope.cn/v1` | OpenAI | [获取 Token](https://modelscope.cn/my/tokens) |
 | **Antigravity** | `antigravity/` | Google Cloud | 自定义 | 仅 OAuth |
 | **GitHub Copilot** | `github-copilot/` | `localhost:4321` | gRPC | - |

@@ -901,6 +902,7 @@ Discord: [https://discord.gg/V4sAZ9XWpN](https://discord.gg/V4sAZ9XWpN)
 | **Tavily** | 1000 次查询/月 | AI Agent 搜索优化 |
 | **Groq** | 提供免费层级 | 极速推理 (Llama, Mixtral) |
 | **LongCat** | 最多 5M tokens/天 | 推理速度快 (免费额度) |
+| **ModelScope (魔搭)** | 2000 次请求/天 | 免费推理 (Qwen, GLM, DeepSeek 等) |

 ---
10 changes: 10 additions & 0 deletions config/config.example.json
@@ -40,6 +40,12 @@
       "model": "longcat/LongCat-Flash-Thinking",
       "api_key": "your-longcat-api-key"
     },
+    {
+      "model_name": "modelscope-qwen",
+      "model": "modelscope/Qwen/Qwen3-235B-A22B-Instruct-2507",
+      "api_key": "your-modelscope-access-token",
+      "api_base": "https://api-inference.modelscope.cn/v1"
+    },
     {
       "model_name": "loadbalanced-gpt-5.4",
       "model": "openai/gpt-5.4",

@@ -283,6 +289,10 @@
     "longcat": {
       "api_key": "",
       "api_base": "https://api.longcat.chat/openai"
+    },
+    "modelscope": {
+      "api_key": "",
+      "api_base": "https://api-inference.modelscope.cn/v1"
     }
   },
   "tools": {
4 changes: 3 additions & 1 deletion pkg/config/config.go
@@ -528,6 +528,7 @@ type ProvidersConfig struct {
 	Avian      ProviderConfig `json:"avian"`
 	Minimax    ProviderConfig `json:"minimax"`
 	LongCat    ProviderConfig `json:"longcat"`
+	ModelScope ProviderConfig `json:"modelscope"`
 }

 // IsEmpty checks if all provider configs are empty (no API keys or API bases set)
@@ -555,7 +556,8 @@ func (p ProvidersConfig) IsEmpty() bool {
 		p.Mistral.APIKey == "" && p.Mistral.APIBase == "" &&
 		p.Avian.APIKey == "" && p.Avian.APIBase == "" &&
 		p.Minimax.APIKey == "" && p.Minimax.APIBase == "" &&
-		p.LongCat.APIKey == "" && p.LongCat.APIBase == ""
+		p.LongCat.APIKey == "" && p.LongCat.APIBase == "" &&
+		p.ModelScope.APIKey == "" && p.ModelScope.APIBase == ""
 }

 // MarshalJSON implements custom JSON marshaling for ProvidersConfig
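The `IsEmpty` change matters because the check is a manual conjunction over every provider field: a newly added provider that is left out of the chain would make a config that sets only that provider look empty. A self-contained sketch of the pattern with trimmed stand-in types (two fields instead of twenty-three; names are illustrative):

```go
package main

import "fmt"

// ProviderConfig is a trimmed stand-in for one provider's settings.
type ProviderConfig struct {
	APIKey  string
	APIBase string
}

// ProvidersConfig is a trimmed stand-in for the full provider map.
type ProvidersConfig struct {
	LongCat    ProviderConfig
	ModelScope ProviderConfig
}

// IsEmpty must cover every field, exactly as the diff extends it:
// omit one provider and its lone key would be silently ignored here.
func (p ProvidersConfig) IsEmpty() bool {
	return p.LongCat.APIKey == "" && p.LongCat.APIBase == "" &&
		p.ModelScope.APIKey == "" && p.ModelScope.APIBase == ""
}

func main() {
	fmt.Println(ProvidersConfig{}.IsEmpty())
	fmt.Println(ProvidersConfig{ModelScope: ProviderConfig{APIKey: "token"}}.IsEmpty())
}
```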
8 changes: 8 additions & 0 deletions pkg/config/defaults.go
@@ -369,6 +369,14 @@ func DefaultConfig() *Config {
 			APIKey:  "",
 		},

+		// ModelScope (魔搭社区) - https://modelscope.cn/my/tokens
+		{
+			ModelName: "modelscope-qwen",
+			Model:     "modelscope/Qwen/Qwen3-235B-A22B-Instruct-2507",
+			APIBase:   "https://api-inference.modelscope.cn/v1",
+			APIKey:    "",
+		},
+
 		// VLLM (local) - http://localhost:8000
 		{
 			ModelName: "local-model",
17 changes: 17 additions & 0 deletions pkg/config/migration.go
@@ -424,6 +424,23 @@ func ConvertProvidersToModelList(cfg *Config) []ModelConfig {
 			}, true
 		},
 	},
+	{
+		providerNames: []string{"modelscope"},
+		protocol:      "modelscope",
+		buildConfig: func(p ProvidersConfig) (ModelConfig, bool) {
+			if p.ModelScope.APIKey == "" && p.ModelScope.APIBase == "" {
+				return ModelConfig{}, false
+			}
+			return ModelConfig{
+				ModelName:      "modelscope",
+				Model:          "modelscope/Qwen/Qwen3-235B-A22B-Instruct-2507",
+				APIKey:         p.ModelScope.APIKey,
+				APIBase:        p.ModelScope.APIBase,
+				Proxy:          p.ModelScope.Proxy,
+				RequestTimeout: p.ModelScope.RequestTimeout,
+			}, true
+		},
+	},
 }

 // Process each provider migration
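The migration entry follows the same guard as every other provider: when the legacy `providers.modelscope` block carries neither an API key nor an API base, no model-list entry is emitted. A self-contained sketch of that guard, using trimmed stand-in types (struct and function names are illustrative, not the repo's actual identifiers):

```go
package main

import "fmt"

// ProviderConfig is a trimmed stand-in for a legacy provider block.
type ProviderConfig struct {
	APIKey  string
	APIBase string
}

// ModelConfig is a trimmed stand-in for a migrated model-list entry.
type ModelConfig struct {
	ModelName string
	Model     string
	APIKey    string
	APIBase   string
}

// migrateModelScope emits an entry only when the legacy block actually
// carries credentials or an endpoint, mirroring the diff's buildConfig.
func migrateModelScope(p ProviderConfig) (ModelConfig, bool) {
	if p.APIKey == "" && p.APIBase == "" {
		return ModelConfig{}, false
	}
	return ModelConfig{
		ModelName: "modelscope",
		Model:     "modelscope/Qwen/Qwen3-235B-A22B-Instruct-2507",
		APIKey:    p.APIKey,
		APIBase:   p.APIBase,
	}, true
}

func main() {
	_, ok := migrateModelScope(ProviderConfig{})
	fmt.Println("empty block migrated:", ok)

	m, ok := migrateModelScope(ProviderConfig{APIKey: "token"})
	fmt.Println("configured block migrated:", ok, m.Model)
}
```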
7 changes: 4 additions & 3 deletions pkg/config/migration_test.go
@@ -163,14 +163,15 @@ func TestConvertProvidersToModelList_AllProviders(t *testing.T) {
 			Mistral:    ProviderConfig{APIKey: "key18"},
 			Avian:      ProviderConfig{APIKey: "key19"},
 			LongCat:    ProviderConfig{APIKey: "key-longcat"},
+			ModelScope: ProviderConfig{APIKey: "key-modelscope"},
 		},
 	}

 	result := ConvertProvidersToModelList(cfg)

-	// All 22 providers should be converted
-	if len(result) != 22 {
-		t.Errorf("len(result) = %d, want 22", len(result))
+	// All 23 providers should be converted
+	if len(result) != 23 {
+		t.Errorf("len(result) = %d, want 23", len(result))
 	}
 }
4 changes: 3 additions & 1 deletion pkg/providers/factory_provider.go
@@ -95,7 +95,7 @@ func CreateProviderFromConfig(cfg *config.ModelConfig) (LLMProvider, string, err
 	case "litellm", "openrouter", "groq", "zhipu", "gemini", "nvidia",
 		"ollama", "moonshot", "shengsuanyun", "deepseek", "cerebras",
 		"vivgrid", "volcengine", "vllm", "qwen", "mistral", "avian",
-		"minimax", "longcat":
+		"minimax", "longcat", "modelscope":
 		// All other OpenAI-compatible HTTP providers
 		if cfg.APIKey == "" && cfg.APIBase == "" {
 			return nil, "", fmt.Errorf("api_key or api_base is required for HTTP-based protocol %q", protocol)
@@ -217,6 +217,8 @@ func getDefaultAPIBase(protocol string) string {
 		return "https://api.minimaxi.com/v1"
 	case "longcat":
 		return "https://api.longcat.chat/openai"
+	case "modelscope":
+		return "https://api-inference.modelscope.cn/v1"
 	default:
 		return ""
 	}
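For readers skimming the factory change: the protocol is the first path segment of the `model` string, and everything after the first `/` is the model ID sent upstream, which for ModelScope itself contains a slash (`Qwen/Qwen3-235B-A22B-Instruct-2507`, as the test below the diff asserts). A minimal standalone sketch of that split plus the default-base fallback; the function names here are illustrative, not the repo's actual helpers:

```go
package main

import (
	"fmt"
	"strings"
)

// splitModel separates the provider prefix from the upstream model ID.
// Only the first "/" is significant; the remainder may contain more slashes.
func splitModel(model string) (protocol, modelID string) {
	parts := strings.SplitN(model, "/", 2)
	if len(parts) < 2 {
		return "", model
	}
	return parts[0], parts[1]
}

// defaultAPIBase mirrors the fallback in the diff: a known protocol maps
// to its hosted endpoint when the user leaves api_base empty.
func defaultAPIBase(protocol string) string {
	switch protocol {
	case "longcat":
		return "https://api.longcat.chat/openai"
	case "modelscope":
		return "https://api-inference.modelscope.cn/v1"
	default:
		return ""
	}
}

func main() {
	proto, id := splitModel("modelscope/Qwen/Qwen3-235B-A22B-Instruct-2507")
	fmt.Println(proto) // modelscope
	fmt.Println(id)    // Qwen/Qwen3-235B-A22B-Instruct-2507
	fmt.Println(defaultAPIBase(proto))
}
```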
30 changes: 30 additions & 0 deletions pkg/providers/factory_provider_test.go
@@ -114,6 +114,7 @@ func TestCreateProviderFromConfig_DefaultAPIBase(t *testing.T) {
 		{"deepseek", "deepseek"},
 		{"ollama", "ollama"},
 		{"longcat", "longcat"},
+		{"modelscope", "modelscope"},
 	}

 	for _, tt := range tests {
@@ -186,6 +187,35 @@
 	}
 }

+func TestCreateProviderFromConfig_ModelScope(t *testing.T) {
+	cfg := &config.ModelConfig{
+		ModelName: "test-modelscope",
+		Model:     "modelscope/Qwen/Qwen3-235B-A22B-Instruct-2507",
+		APIKey:    "test-key",
+		APIBase:   "https://api-inference.modelscope.cn/v1",
+	}
+
+	provider, modelID, err := CreateProviderFromConfig(cfg)
+	if err != nil {
+		t.Fatalf("CreateProviderFromConfig() error = %v", err)
+	}
+	if provider == nil {
+		t.Fatal("CreateProviderFromConfig() returned nil provider")
+	}
+	if modelID != "Qwen/Qwen3-235B-A22B-Instruct-2507" {
+		t.Errorf("modelID = %q, want %q", modelID, "Qwen/Qwen3-235B-A22B-Instruct-2507")
+	}
+	if _, ok := provider.(*HTTPProvider); !ok {
+		t.Fatalf("expected *HTTPProvider, got %T", provider)
+	}
+}
+
+func TestGetDefaultAPIBase_ModelScope(t *testing.T) {
+	if got := getDefaultAPIBase("modelscope"); got != "https://api-inference.modelscope.cn/v1" {
+		t.Fatalf("getDefaultAPIBase(%q) = %q, want %q", "modelscope", got, "https://api-inference.modelscope.cn/v1")
+	}
+}
+
 func TestCreateProviderFromConfig_Anthropic(t *testing.T) {
 	cfg := &config.ModelConfig{
 		ModelName: "test-anthropic",