MiniMax: MiniMax M2.1
Model ID: minimax/minimax-m2.1

MiniMax-M2.1 is a lightweight, state-of-the-art large language model optimized for coding, agentic workflows, and modern application development. With only 10 billion activated parameters, it delivers a major jump in real-world capability while maintaining low latency, strong scalability, and cost efficiency.
205K context window
131K max output tokens
Released: 2025-12-23
Supported Protocols: openai, anthropic
Available Providers: MiniMax, Aliyun
Capabilities: Function Calling, Reasoning, Prompt Caching, Web Search
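Given the 205K-token context window and 131K-token output cap listed above, a client-side pre-flight check can catch oversized requests before they reach the API. A minimal sketch, assuming a rough 4-characters-per-token heuristic (exact counts require the model's tokenizer):

```python
CONTEXT_WINDOW = 205_000  # total tokens, per the spec above
MAX_OUTPUT = 131_000      # max output tokens, per the spec above

def fits_context(prompt_chars: int, max_output_tokens: int,
                 chars_per_token: float = 4.0) -> bool:
    """Rough pre-flight check that prompt + completion fit the window.

    Uses a ~4 chars/token heuristic; exact counts require the model's
    tokenizer, so treat this as a conservative estimate only.
    """
    est_prompt_tokens = prompt_chars / chars_per_token
    return (max_output_tokens <= MAX_OUTPUT
            and est_prompt_tokens + max_output_tokens <= CONTEXT_WINDOW)

# A 400K-character document (~100K tokens) plus an 8K-token reply fits.
print(fits_context(prompt_chars=400_000, max_output_tokens=8_000))  # → True
```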
Pricing
| Type | Price (USD per 1M tokens) |
|---|---|
| Input Tokens | $0.30 |
| Output Tokens | $1.20 |
| Cache Read | $0.03 |
| Cache Write | $0.375 |
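As a worked example of the rates above, the cost of a single request can be estimated from its token counts (the counts below are illustrative, not real usage data):

```python
# Per-million-token rates from the pricing table above (USD).
RATES = {
    "input": 0.30,
    "output": 1.20,
    "cache_read": 0.03,
    "cache_write": 0.375,
}

def request_cost(input_tokens=0, output_tokens=0,
                 cache_read=0, cache_write=0):
    """Estimate the USD cost of one request from its token counts."""
    usage = {
        "input": input_tokens,
        "output": output_tokens,
        "cache_read": cache_read,
        "cache_write": cache_write,
    }
    return sum(RATES[kind] * count / 1_000_000
               for kind, count in usage.items())

# Illustrative request: 10K prompt tokens, 2K completion tokens.
print(f"${request_cost(input_tokens=10_000, output_tokens=2_000):.4f}")  # → $0.0054
```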
Code Examples
```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.ofox.ai/v1",
    api_key="YOUR_OFOX_API_KEY",
)

response = client.chat.completions.create(
    model="minimax/minimax-m2.1",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```
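Since the model advertises Function Calling, the same OpenAI-compatible request can also carry a `tools` array. The tool name and schema below are hypothetical examples; the payload shape is the standard OpenAI tools format:

```python
# A hypothetical weather tool in the standard OpenAI tools format.
# Pass tools=TOOLS to client.chat.completions.create(...) against an
# OpenAI-compatible endpoint such as https://api.ofox.ai/v1.
TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool name
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                },
                "required": ["city"],
            },
        },
    }
]

request_kwargs = {
    "model": "minimax/minimax-m2.1",
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": TOOLS,
}
```

When a tool is relevant, the model replies with `response.choices[0].message.tool_calls` instead of plain content; your code then runs the named function and sends its result back in a `tool`-role message.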
Frequently Asked Questions
How much does MiniMax M2.1 cost on Ofox.ai?
MiniMax: MiniMax M2.1 on Ofox.ai costs $0.30 per million input tokens and $1.20 per million output tokens. Pay-as-you-go, no monthly fees.
What context window does MiniMax M2.1 support?
MiniMax: MiniMax M2.1 supports a context window of 205K tokens with a maximum output of 131K tokens, allowing you to process large documents and maintain long conversations.
How do I access MiniMax M2.1 through Ofox.ai?
Simply set your base URL to https://api.ofox.ai/v1 and use your Ofox API key. The API is OpenAI-compatible: just change the base URL and API key in your existing code.
What capabilities does MiniMax M2.1 support?
MiniMax: MiniMax M2.1 supports the following capabilities: Function Calling, Reasoning, Prompt Caching, Web Search. Access all features through the Ofox.ai unified API.