feat: add MiniMax as a built-in LLM provider #1376

pan-x-c merged 2 commits into agentscope-ai:main
Conversation
Add MiniMax (https://platform.minimax.io) as a built-in provider using the OpenAI-compatible API. MiniMax offers high-performance models with a 204,800-token context window.

Changes:
- Add `MINIMAX_MODELS` list with `MiniMax-M2.5` and `MiniMax-M2.5-highspeed`
- Register `PROVIDER_MINIMAX` as a built-in `OpenAIProvider`
- Set a default of `temperature=1.0` via `generate_kwargs` (MiniMax requires a temperature > 0)
- Add unit tests for MiniMax provider registration and configuration
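Roughly, the registration might look like the sketch below. This is an illustrative reconstruction rather than the actual diff: the `OpenAIProvider` constructor arguments (`name`, `models`) and the import path are assumed from the PR description, not taken from the repository.

```python
# Illustrative sketch of the change described above; the real
# OpenAIProvider signature in src/copaw/providers/ may differ.
from copaw.providers import OpenAIProvider  # assumed import path

# Models exposed by the built-in provider.
MINIMAX_MODELS = ["MiniMax-M2.5", "MiniMax-M2.5-highspeed"]

PROVIDER_MINIMAX = OpenAIProvider(
    name="minimax",                         # assumed provider key
    base_url="https://api.minimax.io/v1",   # OpenAI-compatible endpoint
    freeze_url=True,                        # built-in endpoint is not user-overridable
    models=MINIMAX_MODELS,
    # MiniMax rejects temperature == 0, so default to 1.0.
    generate_kwargs={"temperature": 1.0},
)
```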
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request expands the platform's LLM capabilities by introducing MiniMax as a new built-in provider. The integration gives users access to MiniMax's high-performance models, including their 204,800-token context window, through the existing OpenAI-compatible API structure, adding flexibility and more powerful options for processing long documents and complex queries.
Code Review
This pull request introduces MiniMax as a new built-in LLM provider by leveraging the existing OpenAIProvider. The implementation is clean and includes necessary configurations, such as setting a default temperature to comply with the MiniMax API. The accompanying unit tests are a great start. I've provided a couple of suggestions to enhance the new tests, making them more robust and comprehensive. Overall, the changes are well-executed.
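For instance, registration tests along these lines would cover the basics. This is a minimal sketch assuming a `ProviderManager` that exposes a `get_provider()` lookup and providers that carry `base_url`, `models`, and `generate_kwargs` attributes; the names are illustrative, not the repository's actual test code.

```python
# Sketch of unit tests for the new provider; assumes ProviderManager
# exposes get_provider() and that providers carry base_url, models,
# and generate_kwargs attributes.
from copaw.providers.provider_manager import ProviderManager  # assumed path


def test_minimax_provider_registered():
    provider = ProviderManager().get_provider("minimax")
    assert provider is not None
    assert provider.base_url == "https://api.minimax.io/v1"


def test_minimax_default_generate_kwargs():
    provider = ProviderManager().get_provider("minimax")
    # MiniMax requires temperature > 0, so the built-in default is 1.0.
    assert provider.generate_kwargs["temperature"] == 1.0


def test_minimax_models_listed():
    provider = ProviderManager().get_provider("minimax")
    assert "MiniMax-M2.5" in provider.models
    assert "MiniMax-M2.5-highspeed" in provider.models
```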
Please fix the pre-commit error.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Thanks for the review! I have fixed the pre-commit error by removing the unused …
Welcome to CoPaw! 🎉

Thank you @octo-patch for your first contribution! Your PR has been merged. 🚀 We'd love to give you a shout-out in our release notes! If you're comfortable sharing, please reply to this comment with your social media handles.

Thanks again for helping make CoPaw better!
Summary
Add MiniMax as a built-in LLM provider. MiniMax offers high-performance models with a 204,800-token context window via an OpenAI-compatible API, making integration seamless with the existing `OpenAIProvider` class.

Changes
src/copaw/providers/provider_manager.py:
- Add `MINIMAX_MODELS` list with `MiniMax-M2.5` (default) and `MiniMax-M2.5-highspeed`
- Register `PROVIDER_MINIMAX` as a built-in `OpenAIProvider` with `base_url="https://api.minimax.io/v1"` and `freeze_url=True`
- Set `generate_kwargs={"temperature": 1.0}` to comply with MiniMax's temperature constraint (must be > 0)
- Register `PROVIDER_MINIMAX` in `_init_builtins()`

tests/unit/providers/test_minimax_provider.py (new): unit tests covering MiniMax provider registration and configuration
Model Details

- `MiniMax-M2.5` (default)
- `MiniMax-M2.5-highspeed`
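Since the endpoint is OpenAI-compatible, either model can be smoke-tested outside CoPaw with the stock `openai` Python client; the `MINIMAX_API_KEY` environment variable name below is a placeholder, not something this PR defines.

```python
# Direct call against MiniMax's OpenAI-compatible endpoint using the
# official openai client; requires a MiniMax API key.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.minimax.io/v1",
    api_key=os.environ["MINIMAX_API_KEY"],  # placeholder env var name
)

resp = client.chat.completions.create(
    model="MiniMax-M2.5",
    messages=[{"role": "user", "content": "Summarize this PR in one line."}],
    temperature=1.0,  # MiniMax requires temperature > 0
)
print(resp.choices[0].message.content)
```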
Why MiniMax?

- MiniMax models were previously reachable only via `ALIYUN_CODINGPLAN_MODELS`; this PR adds direct API access
Test plan

- Unit tests verify MiniMax provider registration and configuration through `ProviderManager`