
feat: add MiniMax provider support#15

Open
octo-patch wants to merge 1 commit into NousResearch:main from octo-patch:feature/add-minimax-provider

Conversation

@octo-patch

Summary

  • Add MiniMax chat model provider via the OpenAI-compatible endpoint (https://api.minimax.io/v1)
  • Add MINIMAX_API_KEY environment variable support and a make_lm() helper on EvolutionConfig that auto-routes MiniMax model strings to the correct base URL and API key
  • Add --use-minimax CLI flag to evolve_skill.py for easy one-flag switching
  • Update SyntheticDatasetBuilder and LLMJudge to use config.make_lm() so all LM calls benefit from MiniMax routing
  • Document MiniMax usage in README with model table and usage examples
  • Add 16 unit tests covering constants, env var loading, model routing, temperature enforcement, and pass-through behavior for non-MiniMax models
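The routing behavior described above can be sketched roughly as follows. This is a minimal illustration, not the PR's actual code: the function name `resolve_lm_settings`, the constant names, and the `openai/` model prefixing are assumptions; only the base URL, env var, model IDs, temperature requirement, and pass-through behavior come from the PR description.

```python
import os

# Assumed constant names; the PR's actual identifiers may differ.
MINIMAX_BASE_URL = "https://api.minimax.io/v1"
MINIMAX_MODELS = {"MiniMax-M2.7", "MiniMax-M2.7-highspeed"}

def resolve_lm_settings(model: str) -> dict:
    """Route MiniMax model strings to the MiniMax endpoint; pass others through."""
    bare = model.split("/", 1)[-1]  # accept bare IDs and "minimax/"-prefixed forms
    if model.startswith("minimax/") or bare in MINIMAX_MODELS:
        return {
            "model": f"openai/{bare}",  # served via the OpenAI-compatible API (assumed prefix)
            "base_url": MINIMAX_BASE_URL,
            "api_key": os.environ.get("MINIMAX_API_KEY", ""),
            "temperature": 1.0,  # MiniMax requires temperature 1.0
        }
    # Non-MiniMax models are passed through untouched.
    return {"model": model}
```

Pass-through for non-MiniMax models keeps existing configurations working without any changes.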

Supported models

Model ID                 Description
MiniMax-M2.7             Peak performance (default)
MiniMax-M2.7-highspeed   Same performance, lower latency

Usage

export MINIMAX_API_KEY=your_key_here

# Shorthand
python -m evolution.skills.evolve_skill --skill github-code-review --use-minimax

# Explicit
python -m evolution.skills.evolve_skill \
    --skill github-code-review \
    --optimizer-model minimax/MiniMax-M2.7 \
    --eval-model minimax/MiniMax-M2.7-highspeed
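The `--use-minimax` shorthand shown above could be wired roughly like this. A sketch only: the helper name `parse_cli` and the non-MiniMax defaults are assumptions; the flag name and the two MiniMax model defaults come from the PR.

```python
import argparse

def parse_cli(argv):
    """Parse evolve_skill-style CLI args, with --use-minimax as a one-flag shorthand."""
    parser = argparse.ArgumentParser()
    parser.add_argument("--skill")
    parser.add_argument("--optimizer-model", default="openai/gpt-4o")  # assumed default
    parser.add_argument("--eval-model", default="openai/gpt-4o")       # assumed default
    parser.add_argument("--use-minimax", action="store_true",
                        help="route both models to the MiniMax defaults")
    args = parser.parse_args(argv)
    if args.use_minimax:
        # One flag overrides both model selections with the MiniMax defaults.
        args.optimizer_model = "minimax/MiniMax-M2.7"
        args.eval_model = "minimax/MiniMax-M2.7-highspeed"
    return args
```

With this wiring, `--use-minimax` is strictly a convenience; the explicit `--optimizer-model`/`--eval-model` form remains available for mixed setups.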


Test plan

  • pytest tests/core/test_minimax_config.py — 16 new tests, all passing
  • pytest tests/ (excluding pre-existing broken test_constraints.py) — 139 passed, 0 failed

- Add MiniMax chat model provider via OpenAI-compatible endpoint
- Add MINIMAX_API_KEY and MINIMAX_BASE_URL config fields to EvolutionConfig
- Add make_lm() helper to EvolutionConfig that routes MiniMax models to
  https://api.minimax.io/v1 with correct temperature (1.0, required by MiniMax)
- Support bare model IDs and prefixed forms (minimax/, openai/)
- Add --use-minimax CLI flag to evolve_skill.py for easy MiniMax selection
- Update dataset_builder.py and fitness.py to use config.make_lm()
- Add 16 unit tests covering MiniMax config and LM routing
- Document MiniMax usage in README

Supported models: MiniMax-M2.7, MiniMax-M2.7-highspeed

@jlicerio jlicerio left a comment


LGTM! Approved by Hermes Agent.

