feat: add MiniMax LLM provider support #197
Open
octo-patch wants to merge 1 commit into vxcontrol:master from
Conversation
Add MiniMax as a new LLM provider following the existing provider pattern. MiniMax offers an OpenAI-compatible API at https://api.minimax.io/v1 with the models MiniMax-M2.5 (204K context) and MiniMax-M2.5-highspeed.

Changes:

- Add `backend/pkg/providers/minimax/` package with the provider implementation, `config.yml` (model configs per option type), and `models.yml` (model definitions)
- Add `ProviderMiniMax` type constant and `DefaultProviderNameMiniMax`
- Add `MINIMAX_API_KEY`, `MINIMAX_SERVER_URL`, `MINIMAX_PROVIDER` config fields
- Register MiniMax provider in factory (default config, instantiation, `NewProvider` switch)
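As a minimal sketch of the constant additions: only the identifiers `ProviderMiniMax` and `DefaultProviderNameMiniMax` come from the PR, while the `ProviderType` type and both string values below are assumptions.

```go
// Sketch of the provider.go additions. The ProviderType type and the
// string values are hypothetical; only the two identifier names are
// taken from the PR description.
package provider

type ProviderType string

const (
	// ProviderMiniMax identifies the MiniMax backend in provider configs.
	ProviderMiniMax ProviderType = "minimax"

	// DefaultProviderNameMiniMax is the default display name for the provider.
	DefaultProviderNameMiniMax = "MiniMax"
)
```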
Summary
Add MiniMax as a new LLM provider, following the existing provider architecture pattern (similar to DeepSeek, Kimi, GLM, Qwen).
MiniMax offers an OpenAI-compatible API at https://api.minimax.io/v1 with two high-performance models:
- `MiniMax-M2.5`: 204K context window, suitable for general dialogue, code generation, and complex reasoning
- `MiniMax-M2.5-highspeed`: optimized variant with faster inference and the same 204K context
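To make the "model definitions" concrete, the `models.yml` entries for these two models might look roughly like the sketch below; the field names are guesses at the repo's schema, and pricing fields are omitted rather than invented:

```yaml
# Hypothetical models.yml shape, inferred from the PR's description
# ("model definitions with descriptions and pricing"). Field names are
# assumptions; pricing entries are left out rather than made up.
models:
  - name: MiniMax-M2.5
    description: 204K context; general dialogue, code generation, complex reasoning
  - name: MiniMax-M2.5-highspeed
    description: faster-inference variant with the same 204K context
```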
Changes

New files
- `backend/pkg/providers/minimax/minimax.go`: provider implementation using the langchaingo OpenAI client
- `backend/pkg/providers/minimax/config.yml`: model configurations for all option types (simple, primary_agent, assistant, generator, coder, etc.)
- `backend/pkg/providers/minimax/models.yml`: model definitions with descriptions and pricing
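Since `minimax.go` is described as a provider implementation built on the langchaingo OpenAI client, its core plausibly resembles the sketch below. The function names and signatures are illustrative, not the repo's actual provider interface; only the base URL and the use of langchaingo's OpenAI client come from the PR:

```go
package minimax

import (
	"context"

	"github.com/tmc/langchaingo/llms"
	"github.com/tmc/langchaingo/llms/openai"
)

// defaultServerURL is MiniMax's OpenAI-compatible endpoint from the PR description.
const defaultServerURL = "https://api.minimax.io/v1"

// New constructs a langchaingo OpenAI client pointed at the MiniMax API.
// The real provider presumably wraps this client behind the repo's own
// provider interface; this sketch shows only the client construction.
func New(apiKey, serverURL, model string) (*openai.LLM, error) {
	if serverURL == "" {
		serverURL = defaultServerURL
	}
	return openai.New(
		openai.WithToken(apiKey),
		openai.WithBaseURL(serverURL),
		openai.WithModel(model),
	)
}

// Complete demonstrates a single-prompt completion against the client.
func Complete(ctx context.Context, llm *openai.LLM, prompt string) (string, error) {
	return llms.GenerateFromSinglePrompt(ctx, llm, prompt)
}
```

Reusing the OpenAI client is what makes the OpenAI-compatible endpoint sufficient; presumably the existing DeepSeek, Kimi, GLM, and Qwen providers follow the same pattern, as the summary above suggests.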
Modified files

- `backend/pkg/providers/provider/provider.go`: add `ProviderMiniMax` type constant and `DefaultProviderNameMiniMax`
- `backend/pkg/config/config.go`: add `MINIMAX_API_KEY`, `MINIMAX_SERVER_URL`, and `MINIMAX_PROVIDER` environment variable config fields
- `backend/pkg/providers/providers.go`: register MiniMax in the provider factory (import, default config, instantiation, `NewProvider` switch case)
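The `config.go` change might look like the sketch below; the struct and field names and the env-tag convention are assumptions, and only the three environment variable names come from the PR. The `providers.go` registration would then add a matching `case provider.ProviderMiniMax:` branch to the `NewProvider` switch:

```go
package config

// MiniMax-related fields added to the environment-driven configuration.
// Field names and the `env` tag convention are hypothetical; only the
// MINIMAX_* variable names are taken from the PR description.
type Config struct {
	MiniMaxAPIKey    string `env:"MINIMAX_API_KEY"`
	MiniMaxServerURL string `env:"MINIMAX_SERVER_URL"`
	MiniMaxProvider  string `env:"MINIMAX_PROVIDER"`
}
```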
Configuration

Set these environment variables to enable MiniMax:
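For illustration, a shell setup might look like this; the variable names and the default base URL come from the PR, while the values themselves (in particular what `MINIMAX_PROVIDER` accepts) are guesses:

```sh
# Illustrative values only. The variable names and the base URL come from
# the PR description; the MINIMAX_PROVIDER value is a guess at a model
# selector, and the API key is a placeholder.
export MINIMAX_API_KEY="<your MiniMax API key>"
export MINIMAX_SERVER_URL="https://api.minimax.io/v1"
export MINIMAX_PROVIDER="MiniMax-M2.5"
```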
Test Plan