feat: add MiniMax as a built-in LLM provider#940
Add MiniMax to the provider registry with OpenAI-compatible protocol.

Available models:
- MiniMax-M2.5 (default): 204,800 token context window
- MiniMax-M2.5-highspeed: same performance, faster inference

Configuration:

```env
LLM_BACKEND=minimax
MINIMAX_API_KEY=<your-key>
```

Supports both global (api.minimax.io) and China mainland (api.minimaxi.com) endpoints via the MINIMAX_BASE_URL env var.
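For reference, a sketch of what the new registry entry might look like. The field names and values shown in this PR's diff and review (`base_url_env`, `model_env`, `default_model`, `description`, `api_key_required`, `api_key_env`, `secret_name`, `key_url`, `can_list_models`, the `open_ai_completions` protocol) are grounded in the change itself; the top-level `"minimax"` key, the `"protocol"` and `"base_url"` field names, and the overall nesting are assumptions about the file's layout, not a verbatim excerpt:

```json
{
  "minimax": {
    "protocol": "open_ai_completions",
    "base_url": "https://api.minimax.io/v1",
    "base_url_env": "MINIMAX_BASE_URL",
    "api_key_required": true,
    "api_key_env": "MINIMAX_API_KEY",
    "secret_name": "llm_minimax_api_key",
    "model_env": "MINIMAX_MODEL",
    "default_model": "MiniMax-M2.5",
    "can_list_models": false,
    "key_url": "https://platform.minimax.io",
    "description": "MiniMax API (MiniMax-M2.5 and MiniMax-M2.5-highspeed models)"
  }
}
```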
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request introduces MiniMax as a new integrated Large Language Model (LLM) provider, expanding the system's compatibility with AI services and giving users more flexibility in choosing an LLM backend. The changes include the necessary configuration updates and documentation to support the new provider.
Code Review
This pull request adds support for MiniMax as a new LLM provider. The changes are well-structured, including updates to the example environment file, provider registry, and documentation. I've provided a couple of minor suggestions to improve the documentation's clarity and enhance the provider's description to better inform users of its capabilities.
```env
LLM_BACKEND=minimax
MINIMAX_API_KEY=...
```
To make it clearer for users how to select a specific model, it would be helpful to include `MINIMAX_MODEL` in this example configuration block. This would align the documentation with the variables presented in `.env.example` and provide a more complete setup example.
Suggested change:

```env
LLM_BACKEND=minimax
MINIMAX_API_KEY=...
MINIMAX_MODEL=MiniMax-M2.5
```
```json
"base_url_env": "MINIMAX_BASE_URL",
"model_env": "MINIMAX_MODEL",
"default_model": "MiniMax-M2.5",
"description": "MiniMax API (MiniMax-M2.5 and MiniMax-M2.5-highspeed models)",
```
The large context window (204,800 tokens) is a significant feature of the MiniMax models. Highlighting this in the provider description would be beneficial for users browsing through the available LLM providers, as it's a key differentiator.
Suggested change:

```json
"description": "MiniMax API (204k context, MiniMax-M2.5 models)",
```
zmanian left a comment
LGTM. Clean data-only addition that follows the established pattern for OpenAI-compatible providers.
Checked:
- Provider pattern: correctly uses the `open_ai_completions` protocol with provider-specific env vars (`MINIMAX_API_KEY`, `MINIMAX_MODEL`, `MINIMAX_BASE_URL`). Structure matches Mistral, Yandex, and other OpenAI-compatible entries.
- API key handling: `api_key_required: true`, `api_key_env: "MINIMAX_API_KEY"`, and `secret_name: "llm_minimax_api_key"` all follow convention. No hardcoded secrets.
- Error handling: no Rust code changes -- purely declarative JSON + docs. The existing `RigAdapter` and registry machinery handle all runtime concerns.
- Config/factory wiring: no wiring needed -- `providers.json` is loaded dynamically by `RegistryCatalog`. The `base_url_env` field is correctly used (matches the pattern of the OpenAI, Anthropic, Ollama, and Cloudflare providers).
- Model names: `MiniMax-M2.5` as default model, `MiniMax-M2.5-highspeed` documented as the alternative. Reasonable.
Minor nits (non-blocking):
- `key_url` points to https://platform.minimax.io (root). Other providers link directly to their API keys page when possible (e.g., https://platform.openai.com/api-keys). If MiniMax has a direct API keys URL, consider updating.
- `can_list_models: false` is fine if MiniMax doesn't support the `/v1/models` endpoint, but if it does (many OpenAI-compatible APIs do), setting it to `true` would improve the onboarding wizard experience.
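The `base_url_env` override discussed above (global vs. China mainland endpoints) is straightforward to reason about with a small sketch. The actual resolution happens inside the Rust `RegistryCatalog`, so the Python below is only an illustration under stated assumptions: the env var name and the two endpoints come from this PR, while the `resolve_base_url` helper and the dict shape are hypothetical.

```python
import os

# Default taken from this PR: global endpoint, overridable via MINIMAX_BASE_URL
DEFAULT_BASE_URL = "https://api.minimax.io/v1"

def resolve_base_url(provider_entry: dict) -> str:
    """Return the provider's base URL, honoring the base_url_env override.

    `provider_entry` mirrors a providers.json entry; only the two keys
    read here are grounded in the PR diff -- everything else is assumed.
    """
    env_var = provider_entry.get("base_url_env")
    if env_var and os.environ.get(env_var):
        return os.environ[env_var]
    return provider_entry.get("base_url", DEFAULT_BASE_URL)

entry = {"base_url_env": "MINIMAX_BASE_URL", "base_url": DEFAULT_BASE_URL}

# Without the override, the global endpoint is used
print(resolve_base_url(entry))  # https://api.minimax.io/v1

# China mainland users can point at api.minimaxi.com instead
os.environ["MINIMAX_BASE_URL"] = "https://api.minimaxi.com/v1"
print(resolve_base_url(entry))  # https://api.minimaxi.com/v1
```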
MiniMax was added as a built-in provider in nearai#940, but the README files still only listed OpenRouter, Together AI, Fireworks AI, and Ollama. Update the "Alternative LLM Providers" section across all three language variants (EN, ZH-CN, RU) to reflect the full set of built-in providers — including MiniMax — and add a quick-start env example so users can get started without reading the full provider guide.
Add MiniMax to the provider registry with OpenAI-compatible protocol.

Available models:
- MiniMax-M2.5 (default): 204,800 token context window
- MiniMax-M2.5-highspeed: same performance, faster inference

Configuration:

```env
LLM_BACKEND=minimax
MINIMAX_API_KEY=<your-key>
```

Supports both global (api.minimax.io) and China mainland (api.minimaxi.com) endpoints via the MINIMAX_BASE_URL env var.

Co-authored-by: PR Bot <pr-bot@minimaxi.com>
Summary
- Add MiniMax to the provider registry (`providers.json`) with OpenAI-compatible protocol
- Update `docs/LLM_PROVIDERS.md` with MiniMax setup instructions
- Add MiniMax configuration template to `.env.example`

Available Models
- `MiniMax-M2.5` (default)
- `MiniMax-M2.5-highspeed`

Configuration

```env
LLM_BACKEND=minimax
MINIMAX_API_KEY=<your-key>
```

Changes
- `providers.json`: Added MiniMax provider definition (OpenAI-compatible, `api.minimax.io/v1`)
- `docs/LLM_PROVIDERS.md`: Added MiniMax to the provider overview table and a dedicated setup section
- `.env.example`: Added MiniMax configuration template

Test plan
- `providers.json` is valid JSON
- Set `LLM_BACKEND=minimax` and `MINIMAX_API_KEY` to confirm the provider loads correctly
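The first item of the test plan ("providers.json is valid JSON") can be automated with a small shape check. A minimal sketch, assuming the registry is a JSON object keyed by provider id (the real file's layout may differ) and checking only the fields that appear in this PR's diff; the `validate_minimax_entry` helper is hypothetical, not part of the repo's test harness:

```python
import json

# Fields and values taken from this PR's diff
REQUIRED_FIELDS = {
    "base_url_env": "MINIMAX_BASE_URL",
    "model_env": "MINIMAX_MODEL",
    "default_model": "MiniMax-M2.5",
}

def validate_minimax_entry(raw_json: str) -> list[str]:
    """Parse providers.json text and return a list of problems (empty = OK)."""
    try:
        catalog = json.loads(raw_json)
    except json.JSONDecodeError as e:
        return [f"invalid JSON: {e}"]
    entry = catalog.get("minimax")
    if entry is None:
        return ["no 'minimax' provider entry"]
    problems = []
    for field, expected in REQUIRED_FIELDS.items():
        if entry.get(field) != expected:
            problems.append(f"{field}: expected {expected!r}, got {entry.get(field)!r}")
    return problems

# Sample input mirroring the diff shown above
sample = json.dumps({
    "minimax": {
        "base_url_env": "MINIMAX_BASE_URL",
        "model_env": "MINIMAX_MODEL",
        "default_model": "MiniMax-M2.5",
        "description": "MiniMax API (MiniMax-M2.5 and MiniMax-M2.5-highspeed models)",
    }
})
print(validate_minimax_entry(sample))  # []
```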