
feat: add MiniMax as a built-in LLM provider#940

Merged
zmanian merged 1 commit into nearai:staging from octo-patch:feat/add-minimax-provider
Mar 12, 2026

Conversation

@octo-patch
Contributor

Summary

  • Add MiniMax to the built-in provider registry (providers.json) with OpenAI-compatible protocol
  • Update docs/LLM_PROVIDERS.md with MiniMax setup instructions
  • Add MiniMax configuration to .env.example

Available Models

| Model | Description |
| --- | --- |
| MiniMax-M2.5 (default) | Peak performance, 204,800-token context |
| MiniMax-M2.5-highspeed | Same performance, faster inference |

Configuration

```env
LLM_BACKEND=minimax
MINIMAX_API_KEY=<your-key>
# Optional: use China mainland endpoint
# MINIMAX_BASE_URL=https://api.minimaxi.com/v1
```

Changes

  • providers.json: Added MiniMax provider definition (OpenAI-compatible, api.minimax.io/v1)
  • docs/LLM_PROVIDERS.md: Added MiniMax to provider overview table and dedicated setup section
  • .env.example: Added MiniMax configuration template
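For reviewers skimming without opening the diff, the new providers.json entry presumably looks something like the sketch below. It is assembled from the fields quoted in the review comments on this PR; the `name` key and the exact key ordering are illustrative, not the file's verbatim contents:

```json
{
  "name": "minimax",
  "protocol": "open_ai_completions",
  "base_url": "https://api.minimax.io/v1",
  "base_url_env": "MINIMAX_BASE_URL",
  "api_key_required": true,
  "api_key_env": "MINIMAX_API_KEY",
  "secret_name": "llm_minimax_api_key",
  "model_env": "MINIMAX_MODEL",
  "default_model": "MiniMax-M2.5",
  "description": "MiniMax API (MiniMax-M2.5 and MiniMax-M2.5-highspeed models)",
  "key_url": "https://platform.minimax.io",
  "can_list_models": false
}
```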

Test plan

  • Verify providers.json is valid JSON
  • Set LLM_BACKEND=minimax and MINIMAX_API_KEY to confirm provider loads correctly
  • Send a chat completion request to verify end-to-end functionality
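The first two checks above can be sketched in a few lines of Python. The entry string here is a hypothetical reconstruction of the MiniMax definition (field names taken from the review comments on this PR), not the file's verbatim contents:

```python
import json
import os

# Hypothetical MiniMax provider entry; field names follow the review
# comments on this PR, values are illustrative.
raw = """
{
  "protocol": "open_ai_completions",
  "api_key_required": true,
  "api_key_env": "MINIMAX_API_KEY",
  "base_url_env": "MINIMAX_BASE_URL",
  "model_env": "MINIMAX_MODEL",
  "default_model": "MiniMax-M2.5"
}
"""

# Step 1: valid JSON -- json.loads raises a ValueError on malformed input.
entry = json.loads(raw)

# Step 2: mirror the "provider loads correctly" check by setting the env
# vars the entry points at before the backend would be selected.
os.environ["LLM_BACKEND"] = "minimax"
os.environ[entry["api_key_env"]] = "<your-key>"

assert entry["protocol"] == "open_ai_completions"
assert os.environ["LLM_BACKEND"] == "minimax"
print("entry parses; default model:", entry["default_model"])
```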

Add MiniMax to the provider registry with OpenAI-compatible protocol.

Available models:
- MiniMax-M2.5 (default) - 204,800 token context window
- MiniMax-M2.5-highspeed - same performance, faster inference

Configuration:
  LLM_BACKEND=minimax
  MINIMAX_API_KEY=<your-key>

Supports both global (api.minimax.io) and China mainland
(api.minimaxi.com) endpoints via MINIMAX_BASE_URL env var.
@github-actions bot added labels on Mar 11, 2026: scope: docs (Documentation), size: S (10-49 changed lines), risk: low (Changes to docs, tests, or low-risk modules), contributor: new (First-time contributor)
@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces MiniMax as a new integrated Large Language Model (LLM) provider. This enhancement expands the system's compatibility with various AI services, offering users more flexibility and choice in leveraging different LLM backends. The changes include necessary configuration updates and documentation to support the new provider.

Highlights

  • New LLM Provider: Added MiniMax to the built-in provider registry (providers.json) with OpenAI-compatible protocol.
  • Documentation Update: Updated docs/LLM_PROVIDERS.md with MiniMax setup instructions.
  • Configuration Example: Added MiniMax configuration to .env.example.
Changelog
  • .env.example
    • Added MiniMax configuration template.
  • docs/LLM_PROVIDERS.md
    • Added MiniMax to provider overview table and dedicated setup section.
  • providers.json
    • Added MiniMax provider definition (OpenAI-compatible, api.minimax.io/v1).
Activity
  • No human activity has been recorded on this pull request yet.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request adds support for MiniMax as a new LLM provider. The changes are well-structured, including updates to the example environment file, provider registry, and documentation. I've provided a couple of minor suggestions to improve the documentation's clarity and enhance the provider's description to better inform users of its capabilities.

Comment on lines +82 to +85
```env
LLM_BACKEND=minimax
MINIMAX_API_KEY=...
```
Contributor


Severity: medium

To make it clearer for users how to select a specific model, it would be helpful to include MINIMAX_MODEL in this example configuration block. This would align the documentation with the variables presented in .env.example and provide a more complete setup example.

Suggested change:
```env
LLM_BACKEND=minimax
MINIMAX_API_KEY=...
MINIMAX_MODEL=MiniMax-M2.5
```

"base_url_env": "MINIMAX_BASE_URL",
"model_env": "MINIMAX_MODEL",
"default_model": "MiniMax-M2.5",
"description": "MiniMax API (MiniMax-M2.5 and MiniMax-M2.5-highspeed models)",
Contributor


Severity: medium

The large context window (204,800 tokens) is a significant feature of the MiniMax models. Highlighting this in the provider description would be beneficial for users browsing through the available LLM providers, as it's a key differentiator.

Suggested change:
```json
"description": "MiniMax API (204k context, MiniMax-M2.5 models)",
```

@zmanian zmanian mentioned this pull request Mar 12, 2026
Collaborator

@zmanian zmanian left a comment


LGTM. Clean data-only addition that follows the established pattern for OpenAI-compatible providers.

Checked:

  1. Provider pattern: Correctly uses open_ai_completions protocol with provider-specific env vars (MINIMAX_API_KEY, MINIMAX_MODEL, MINIMAX_BASE_URL). Structure matches Mistral, Yandex, and other OpenAI-compatible entries.
  2. API key handling: api_key_required: true, api_key_env: "MINIMAX_API_KEY", and secret_name: "llm_minimax_api_key" all follow convention. No hardcoded secrets.
  3. Error handling: No Rust code changes -- purely declarative JSON + docs. The existing RigAdapter and registry machinery handle all runtime concerns.
  4. Config/factory wiring: No wiring needed -- providers.json is loaded dynamically by RegistryCatalog. The base_url_env field is correctly used (matches the pattern of OpenAI, Anthropic, Ollama, Cloudflare providers).
  5. Model names: MiniMax-M2.5 as default model, MiniMax-M2.5-highspeed documented as alternative. Reasonable.

Minor nits (non-blocking):

  • key_url points to https://platform.minimax.io (root). Other providers link directly to their API keys page when possible (e.g., https://platform.openai.com/api-keys). If MiniMax has a direct API keys URL, consider updating.
  • can_list_models: false is fine if MiniMax doesn't support the /v1/models endpoint, but if they do (many OpenAI-compatible APIs do), setting it to true would improve the onboarding wizard experience.

@zmanian zmanian merged commit 863702a into nearai:staging Mar 12, 2026
2 checks passed
@ironclaw-ci ironclaw-ci bot mentioned this pull request Mar 12, 2026
octo-patch pushed a commit to octo-patch/ironclaw that referenced this pull request Mar 15, 2026
MiniMax was added as a built-in provider in nearai#940, but the README
files still only listed OpenRouter, Together AI, Fireworks AI, and
Ollama.  Update the "Alternative LLM Providers" section across all
three language variants (EN, ZH-CN, RU) to reflect the full set of
built-in providers — including MiniMax — and add a quick-start env
example so users can get started without reading the full provider
guide.
bkutasi pushed a commit to bkutasi/ironclaw that referenced this pull request Mar 28, 2026
Add MiniMax to the provider registry with OpenAI-compatible protocol.

Available models:
- MiniMax-M2.5 (default) - 204,800 token context window
- MiniMax-M2.5-highspeed - same performance, faster inference

Configuration:
  LLM_BACKEND=minimax
  MINIMAX_API_KEY=<your-key>

Supports both global (api.minimax.io) and China mainland
(api.minimaxi.com) endpoints via MINIMAX_BASE_URL env var.

Co-authored-by: PR Bot <pr-bot@minimaxi.com>

Labels

  • contributor: new (First-time contributor)
  • risk: low (Changes to docs, tests, or low-risk modules)
  • scope: docs (Documentation)
  • size: S (10-49 changed lines)
