feat(provider): add ModelScope as OpenAI-compatible provider #1486
Conversation
Hi, @alexhoshina

Conflicts resolved:
- helpers.go: merged import sections (io, log, net/http + sync)
- config.go: merged AgentDefaults with Schedule, SafetyLevel, BirthYear

Upstream features merged:
- Config hot reload (PR sipeed#1187)
- Anthropic Messages protocol (PR sipeed#1284)
- Enhanced Skill Installer v2 (PR sipeed#1252)
- Model command CLI (PR sipeed#1250)
- ModelScope provider (PR sipeed#1486)
- LINE webhook DoS protection (PR sipeed#1413)
@dataCenter430 Nice work adding ModelScope as a provider! The 2,000 free requests per day is a solid free-tier option, and reusing the existing HTTPProvider for the OpenAI-compatible endpoint keeps things clean. Good attention to detail with all the README translations too. We have a PicoClaw Dev Group on Discord where contributors can connect and collaborate. If you're interested, send an email to
Hi @alexhoshina, thanks for your review. 🙏
Of course! Thank you so much for your enthusiasm |
…1486)
* feat(provider): add ModelScope as OpenAI-compatible provider
* test(provider): add ModelScope provider and migration tests
* docs: add ModelScope to README provider tables and free tier sections
* chore: add ModelScope to example config and env template
This merge brings in upstream changes including:
- zerolog logger refactoring (sipeed#1239)
- Anthropic Messages API support (sipeed#1284)
- Global WebSocket for Pico chat (sipeed#1507)
- ModelScope and LongCat providers (sipeed#1317, sipeed#1486)
- Web gateway hot reload and polling (sipeed#1684)
- Credential encryption with AES-GCM (sipeed#1521)
- Cross-platform systray UI (sipeed#1649)
- Security fixes for LINE webhooks, identity allowlist
- And many more improvements

Conflict resolved:
- pkg/agent/instance.go: merged buildAllowReadPatterns/mediaTempDirPattern functions from upstream while preserving A2A registry Close() handling

Custom features preserved:
- A2A channel (Agent-to-Agent protocol)
- Krabot channel
- Enhanced Docker multi-channel support
📝 Description
Add ModelScope (魔搭社区) as a new OpenAI-compatible LLM provider.
ModelScope is Alibaba's open-source AI model community and inference platform. It provides an OpenAI-compatible API at `https://api-inference.modelscope.cn/v1` with 2,000 free API requests per day, making it an attractive free-tier option. Available models include Qwen3, GLM-4, DeepSeek-V3, and many other open-source models.

Changes:
- Register `modelscope` as an OpenAI-compatible protocol in the provider factory with default API base `https://api-inference.modelscope.cn/v1`
- Add a `ModelScope` field to the `ProvidersConfig` struct and its `IsEmpty()` check
- Add a default model (`Qwen/Qwen3-235B-A22B-Instruct-2507`) in `DefaultConfig()`
- Support both configuration styles (`model_list` + the deprecated `providers` section)
- Add `MODELSCOPE_API_KEY` to `.env.example`

🗣️ Type of Change
🤖 AI Code Generation
🔗 Related Issue
Closes #1438
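The `ProvidersConfig` change from the list above follows a common Go pattern: adding a field also means extending the emptiness check. The sketch below is illustrative only; the names `ProvidersConfig`, `ModelScope`, and `IsEmpty` come from this PR's description, but the fields shown are a simplification of the repository's real config struct.

```go
package main

import "fmt"

// ProviderConfig is a simplified stand-in for a per-provider config entry.
type ProviderConfig struct {
	APIKey  string
	APIBase string
}

// ProvidersConfig mirrors the deprecated per-provider config section.
type ProvidersConfig struct {
	DeepSeek   ProviderConfig
	ModelScope ProviderConfig // new field added by this PR
}

// IsEmpty reports whether no provider has been configured. The new
// ModelScope field must participate in the check, which is the part of
// such a change that is easy to forget.
func (p ProvidersConfig) IsEmpty() bool {
	return p.DeepSeek == (ProviderConfig{}) &&
		p.ModelScope == (ProviderConfig{})
}

func main() {
	cfg := ProvidersConfig{ModelScope: ProviderConfig{APIKey: "ms-xxx"}}
	fmt.Println(cfg.IsEmpty()) // false
}
```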
📚 Technical Context (Skip for Docs)
ModelScope exposes a standard OpenAI-compatible `/v1/chat/completions` endpoint, so no new provider implementation is needed; it reuses the existing `HTTPProvider` (same as deepseek, moonshot, longcat, etc.). The model identifier format uses multiple slashes (e.g., `modelscope/Qwen/Qwen3-235B-A22B-Instruct-2507`), which is correctly handled by `ExtractProtocol()` since `strings.Cut` splits on the first `/` only (same behavior as the nvidia and avian models).

🧪 Test Environment
Model tested: `Qwen/Qwen3-235B-A22B-Instruct-2507`

📸 Evidence (Optional)
Click to view Logs/Screenshots
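Since the endpoint is plain OpenAI-compatible HTTP, a request can be built with nothing but the standard library. The sketch below is a hedged illustration, not project code: the URL and model name come from this PR, `newChatRequest` is a hypothetical helper, and the token is a placeholder.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// newChatRequest builds an OpenAI-style chat-completions request against
// the ModelScope endpoint. It only constructs the request; no network
// call is made here.
func newChatRequest(token, model, prompt string) (*http.Request, error) {
	body, err := json.Marshal(map[string]any{
		"model": model,
		"messages": []map[string]string{
			{"role": "user", "content": prompt},
		},
	})
	if err != nil {
		return nil, err
	}
	req, err := http.NewRequest(http.MethodPost,
		"https://api-inference.modelscope.cn/v1/chat/completions",
		bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Authorization", "Bearer "+token)
	req.Header.Set("Content-Type", "application/json")
	return req, nil
}

func main() {
	req, err := newChatRequest("your-modelscope-access-token",
		"Qwen/Qwen3-235B-A22B-Instruct-2507", "Hello")
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Method, req.URL.String())
}
```

Sending the request with `http.DefaultClient.Do(req)` would then count against the 2,000-requests-per-day free quota.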
Configuration example:
```json
{
  "model_list": [
    {
      "model_name": "modelscope-qwen",
      "model": "modelscope/Qwen/Qwen3-235B-A22B-Instruct-2507",
      "api_key": "your-modelscope-access-token",
      "api_base": "https://api-inference.modelscope.cn/v1"
    }
  ]
}
```

Users can obtain their AccessToken at https://modelscope.cn/my/tokens
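The multi-slash model identifier in the config above relies on the `strings.Cut` behavior described in the Technical Context: it splits on the first `/` only, so the remaining slashes stay in the model part. A minimal sketch of that behavior; `splitProtocol` is an illustrative name, not the project's actual `ExtractProtocol()`:

```go
package main

import (
	"fmt"
	"strings"
)

// splitProtocol separates the provider prefix from the model identifier.
// strings.Cut splits on the FIRST separator only, so identifiers with
// multiple slashes keep their remaining slashes intact.
func splitProtocol(model string) (protocol, rest string) {
	protocol, rest, found := strings.Cut(model, "/")
	if !found {
		return "", model // no protocol prefix present
	}
	return protocol, rest
}

func main() {
	p, m := splitProtocol("modelscope/Qwen/Qwen3-235B-A22B-Instruct-2507")
	fmt.Println(p) // modelscope
	fmt.Println(m) // Qwen/Qwen3-235B-A22B-Instruct-2507
}
```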
☑️ Checklist