
feat: add MiniMax as a built-in LLM provider#1376

Merged
pan-x-c merged 2 commits into agentscope-ai:main from octo-patch:feat/add-minimax-provider
Mar 13, 2026

Conversation

@octo-patch
Contributor

Summary

Add MiniMax as a built-in LLM provider. MiniMax offers high-performance models with a 204,800-token context window via an OpenAI-compatible API, so it integrates seamlessly with the existing OpenAIProvider class.

Changes

  • src/copaw/providers/provider_manager.py:

    • Add MINIMAX_MODELS list with MiniMax-M2.5 (default) and MiniMax-M2.5-highspeed
    • Create PROVIDER_MINIMAX as a built-in OpenAIProvider with base_url="https://api.minimax.io/v1" and freeze_url=True
    • Set generate_kwargs={"temperature": 1.0} to comply with MiniMax's temperature constraint (must be > 0)
    • Register PROVIDER_MINIMAX in _init_builtins()
  • tests/unit/providers/test_minimax_provider.py (new):

    • Tests for provider type, configuration defaults, model list, manager registration, connection check, and model activation
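The registration described above might look roughly like the following sketch. Only the names `MINIMAX_MODELS`, `PROVIDER_MINIMAX`, `OpenAIProvider`, `_init_builtins`, `base_url`, `freeze_url`, and `generate_kwargs` come from this PR; the `ModelInfo` shape and the `ProviderManager` internals are stand-ins, since the real `copaw` source is not shown here.

```python
from dataclasses import dataclass, field

# Stand-in for the project's model-info type (assumed shape).
@dataclass
class ModelInfo:
    name: str
    context_window: int

# Stand-in for the project's OpenAI-compatible provider class.
@dataclass
class OpenAIProvider:
    name: str
    base_url: str
    models: list
    freeze_url: bool = False
    generate_kwargs: dict = field(default_factory=dict)

# Names and values below are taken from the PR description.
MINIMAX_MODELS = [
    ModelInfo("MiniMax-M2.5", context_window=204_800),
    ModelInfo("MiniMax-M2.5-highspeed", context_window=204_800),
]

PROVIDER_MINIMAX = OpenAIProvider(
    name="minimax",
    base_url="https://api.minimax.io/v1",
    models=MINIMAX_MODELS,
    freeze_url=True,                       # base URL is fixed for the built-in
    generate_kwargs={"temperature": 1.0},  # MiniMax requires temperature > 0
)

class ProviderManager:
    def __init__(self):
        self._providers = {}
        self._init_builtins()

    def _init_builtins(self):
        # The PR registers PROVIDER_MINIMAX alongside the other built-ins.
        self.register(PROVIDER_MINIMAX)

    def register(self, provider):
        self._providers[provider.name] = provider

manager = ProviderManager()
print(manager._providers["minimax"].generate_kwargs)  # {'temperature': 1.0}
```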

Model Details

| Model | Context Window | Input Price | Output Price |
| --- | --- | --- | --- |
| MiniMax-M2.5 | 204,800 tokens | $0.3/M tokens | $1.2/M tokens |
| MiniMax-M2.5-highspeed | 204,800 tokens | $0.6/M tokens | $2.4/M tokens |

Why MiniMax?

  • OpenAI-compatible API — no new SDK dependencies needed
  • 204K context window — ideal for processing large documents
  • Competitive pricing vs GPT-4-class models
  • Already referenced in ALIYUN_CODINGPLAN_MODELS; this PR adds direct API access
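Because the API is OpenAI-compatible, a request to it is just a standard chat-completions payload sent to the `base_url` from this PR. The sketch below builds such a payload; `build_chat_request` is a hypothetical helper (not from the PR), and the field names follow the widely known OpenAI chat-completions format that MiniMax advertises compatibility with.

```python
import json
import urllib.request  # used only in the commented-out real call below

# Hypothetical helper: assembles an OpenAI-style chat-completions payload.
def build_chat_request(model: str, prompt: str, temperature: float = 1.0) -> dict:
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,  # MiniMax requires a value > 0
    }

payload = build_chat_request("MiniMax-M2.5", "Summarize this PR in one line.")
print(json.dumps(payload, indent=2))

# Actual call (not executed here; requires a real API key):
# req = urllib.request.Request(
#     "https://api.minimax.io/v1/chat/completions",
#     data=json.dumps(payload).encode(),
#     headers={"Authorization": "Bearer <MINIMAX_API_KEY>",
#              "Content-Type": "application/json"},
# )
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```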

Test plan

  • All 7 new MiniMax provider unit tests pass
  • All 60 existing provider tests pass (no regressions)
  • Provider correctly registered and discoverable via ProviderManager
  • Model activation works for both MiniMax models
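The checks in the test plan might translate into pytest-style tests along these lines. The provider here is a minimal inline stand-in, not the real `copaw` class; only the configuration values being asserted come from the PR description.

```python
# Minimal stand-in mirroring the configuration this PR describes.
class FakeMiniMaxProvider:
    name = "minimax"
    base_url = "https://api.minimax.io/v1"
    freeze_url = True
    generate_kwargs = {"temperature": 1.0}
    models = ["MiniMax-M2.5", "MiniMax-M2.5-highspeed"]

def test_configuration_defaults():
    p = FakeMiniMaxProvider()
    assert p.base_url == "https://api.minimax.io/v1"
    assert p.freeze_url is True
    # MiniMax rejects temperature == 0, so the default must be positive.
    assert p.generate_kwargs["temperature"] > 0

def test_model_list():
    p = FakeMiniMaxProvider()
    assert "MiniMax-M2.5" in p.models
    assert "MiniMax-M2.5-highspeed" in p.models

test_configuration_defaults()
test_model_list()
print("ok")
```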

Add MiniMax (https://platform.minimax.io) as a built-in provider using
the OpenAI-compatible API. MiniMax offers high-performance models with
a 204,800 token context window.

Changes:
- Add MINIMAX_MODELS list with MiniMax-M2.5 and MiniMax-M2.5-highspeed
- Register PROVIDER_MINIMAX as a built-in OpenAIProvider
- Set temperature=1.0 default via generate_kwargs (MiniMax requires >0)
- Add unit tests for MiniMax provider registration and configuration
@github-actions github-actions bot added the first-time-contributor PR created by a first time contributor label Mar 12, 2026
@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly expands the platform's LLM capabilities by introducing MiniMax as a new built-in provider. This integration allows users to access MiniMax's high-performance models, notably those with a large 204,800 token context window, through an existing OpenAI-compatible API structure. The change enhances the system's flexibility and offers more powerful options for processing extensive documents and complex queries.

Highlights

  • New LLM Provider Integration: Integrated MiniMax as a new built-in Large Language Model (LLM) provider, leveraging its OpenAI-compatible API for seamless integration.
  • MiniMax Model Definitions: Defined MINIMAX_MODELS including MiniMax-M2.5 and MiniMax-M2.5-highspeed, both supporting a 204,800 token context window.
  • Provider Configuration: Configured PROVIDER_MINIMAX as an OpenAIProvider instance with its specific base_url and a mandatory temperature of 1.0 for generation parameters.
  • Provider Registration: Registered the new PROVIDER_MINIMAX within the ProviderManager's built-in providers.
  • Unit Test Coverage: Added a comprehensive suite of unit tests for the MiniMax provider, covering its type, configuration, model list, manager registration, connection checks, and model activation.


Changelog
  • src/copaw/providers/provider_manager.py
    • Added a new list, MINIMAX_MODELS, to define the available MiniMax models.
    • Created PROVIDER_MINIMAX as an instance of OpenAIProvider with MiniMax-specific configuration, including its API base URL and generation parameters.
    • Registered PROVIDER_MINIMAX in the _init_builtins method to make it discoverable by the ProviderManager.
  • tests/unit/providers/test_minimax_provider.py
    • Added a new test file to validate the MiniMax provider's integration.
    • Included tests to verify the provider's type, default configuration, model list accuracy, and correct registration within the ProviderManager.
    • Implemented tests for successful connection checks and model activation for MiniMax models.
Activity
  • The author, octo-patch, created this pull request to add MiniMax as a built-in LLM provider.
  • The pull request includes a detailed summary of changes, model details, and a test plan.
  • All new MiniMax provider unit tests are passing, along with existing provider tests, indicating no regressions.

Contributor

@gemini-code-assist bot left a comment


Code Review

This pull request introduces MiniMax as a new built-in LLM provider by leveraging the existing OpenAIProvider. The implementation is clean and includes the necessary configuration, such as setting a default temperature to comply with the MiniMax API. The accompanying unit tests are a great start. I've provided a couple of suggestions to make the new tests more robust and comprehensive. Overall, the changes are well-executed.

@pan-x-c
Collaborator

pan-x-c commented Mar 13, 2026

Please fix the pre-commit error

@octo-patch
Contributor Author

Thanks for the review! I have fixed the pre-commit error by removing the unused ModelInfo import from the test file. The pylint check should pass now.

@octo-patch octo-patch temporarily deployed to maintainer-approved March 13, 2026 06:50 — with GitHub Actions Inactive
@pan-x-c pan-x-c merged commit 5421f76 into agentscope-ai:main Mar 13, 2026
19 checks passed
@github-actions

Welcome to CoPaw! 🎉

Thank you @octo-patch for your first contribution! Your PR has been merged. 🚀

We'd love to give you a shout-out in our release notes! If you're comfortable sharing, please reply to this comment with your social media handles using the format below:

discord: your_discord_handle
x: your_x_handle
xiaohongshu: your_xiaohongshu_id

Note: Only share what you're comfortable with — all fields are optional.

Thanks again for helping make CoPaw better!


Labels

first-time-contributor PR created by a first time contributor
