MiniMax

MiniMax: MiniMax M2

Chat
minimax/minimax-m2

MiniMax-M2 is a compact, high-efficiency large language model optimized for end-to-end coding and agentic workflows. With 10 billion activated parameters (230 billion total), it delivers near-frontier intelligence across general reasoning, tool use, and multi-step task execution while maintaining low latency and deployment efficiency.

205K context window
131K max output tokens
Released: 2025-10-23
Supported Protocols: OpenAI, Anthropic
Available Providers: MiniMax
Capabilities: Function Calling, Reasoning, Prompt Caching, Web Search

Pricing

Type           Price
Input Tokens   $0.30 / M tokens
Output Tokens  $1.20 / M tokens
Cache Read     $0.03 / M tokens
Cache Write    $0.375 / M tokens
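As a rough illustration of how the rates in the table above combine, the sketch below estimates the cost of a single request. The token counts are hypothetical, and it assumes cached input tokens are billed at the cache-read rate instead of the full input rate; check your provider's billing docs for the exact semantics.

```python
# Rates from the pricing table above, in USD per token.
INPUT_RATE = 0.30 / 1_000_000
OUTPUT_RATE = 1.20 / 1_000_000
CACHE_READ_RATE = 0.03 / 1_000_000
CACHE_WRITE_RATE = 0.375 / 1_000_000

def request_cost(input_tokens, output_tokens, cache_read=0, cache_write=0):
    """Estimate the USD cost of one request at the rates above."""
    # Assumption: cache-read tokens replace full-price input tokens,
    # so subtract them from the billable input count.
    billable_input = input_tokens - cache_read
    return (billable_input * INPUT_RATE
            + output_tokens * OUTPUT_RATE
            + cache_read * CACHE_READ_RATE
            + cache_write * CACHE_WRITE_RATE)

# Hypothetical request: 10,000 input tokens (8,000 of them cache hits),
# 2,000 output tokens, and 8,000 tokens written to the cache.
print(f"${request_cost(10_000, 2_000, cache_read=8_000, cache_write=8_000):.4f}")
```

With a high cache-hit rate, most of the input cost drops to the $0.03/M cache-read rate, which is why prompt caching matters for long, repeated system prompts.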

Code Examples

from openai import OpenAI

# Point the OpenAI SDK at the Ofox.ai endpoint.
client = OpenAI(
    base_url="https://api.ofox.ai/v1",
    api_key="YOUR_OFOX_API_KEY",
)

response = client.chat.completions.create(
    model="minimax/minimax-m2",
    messages=[
        {"role": "user", "content": "Hello!"},
    ],
)
print(response.choices[0].message.content)

Frequently Asked Questions

How much does MiniMax M2 cost on Ofox.ai?

MiniMax M2 on Ofox.ai costs $0.30 per million input tokens and $1.20 per million output tokens. Pricing is pay-as-you-go, with no monthly fees.

Discord

Join our Discord server
