Qwen3.6 Max Preview
bailian/qwen3.6-max-preview
Preview version of the Max model, the largest and most capable in the Qwen3.6 series; currently only its text capabilities are open for trial. Compared with the previously released Qwen3-Max and Qwen3.6-Plus, this model further improves vibe coding, runs coding-agent workflows more efficiently, significantly strengthens front-end development, and upgrades long-tail knowledge coverage.
256K context window
64K max output tokens
Released: 2026-04-20
Supported Protocols: OpenAI, Anthropic
Available Providers: Aliyun
Capabilities: Function Calling, Reasoning, Prompt Caching
Pricing
| Type | Price |
|---|---|
| Input Tokens | $2/M |
| Output Tokens | $12/M |
| Cache Read | $0.2/M |
| Cache Write | $2/M |
| Web Search | $0.01/R |
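As a quick sanity check on the table above, here is the cost arithmetic for a single request at these list prices (illustrative only; actual billing is done by the provider and may include caching and web-search charges):

```python
# Illustrative cost arithmetic for Qwen3.6 Max Preview at the list prices above.
# (Rough estimate only; actual billing is done by the provider.)
INPUT_PER_M = 2.00    # $ per million input tokens
OUTPUT_PER_M = 12.00  # $ per million output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a single request, ignoring caching and web search."""
    return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1_000_000

# e.g. a 50K-token prompt producing a 4K-token completion:
print(f"${request_cost(50_000, 4_000):.3f}")  # $0.148
```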
Code Examples
```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.ofox.ai/v1",
    api_key="YOUR_OFOX_API_KEY",
)

response = client.chat.completions.create(
    model="bailian/qwen3.6-max-preview",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```
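Since Function Calling is listed among the capabilities, the same endpoint should accept OpenAI-style tool definitions. A sketch under that assumption; the `get_weather` tool and its schema are hypothetical examples, not part of any Ofox API:

```python
import os

# Hypothetical tool schema -- the name and parameters are illustrative only.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

request = dict(
    model="bailian/qwen3.6-max-preview",
    messages=[{"role": "user", "content": "What's the weather in Hangzhou?"}],
    tools=tools,
)

# Only send the request when a key is configured; the payload above is the point here.
if os.environ.get("OFOX_API_KEY"):
    from openai import OpenAI  # pip install openai
    client = OpenAI(base_url="https://api.ofox.ai/v1",
                    api_key=os.environ["OFOX_API_KEY"])
    response = client.chat.completions.create(**request)
    print(response.choices[0].message.tool_calls)
```

When the model decides to call the tool, the response carries `tool_calls` instead of plain content; your code executes the function and sends the result back in a `role: "tool"` message.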
Frequently Asked Questions
How much does Qwen3.6 Max Preview cost?
Qwen3.6 Max Preview on Ofox.ai costs $2 per million input tokens and $12 per million output tokens. Pay-as-you-go, no monthly fees.
What context window does Qwen3.6 Max Preview support?
Qwen3.6 Max Preview supports a 256K-token context window with a maximum output of 64K tokens, letting you process large documents and maintain long conversations.
How do I access Qwen3.6 Max Preview?
Simply set your base URL to https://api.ofox.ai/v1 and use your Ofox API key. The API is OpenAI-compatible; just change the base URL and API key in your existing code.
What capabilities does Qwen3.6 Max Preview support?
Qwen3.6 Max Preview supports the following capabilities: Function Calling, Reasoning, and Prompt Caching. Access all features through the Ofox.ai unified API.
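Cache reads are priced at a tenth of the input rate ($0.2/M vs $2/M), so reusing a large prompt gets cheap quickly. A back-of-envelope sketch, assuming cached tokens bill at the cache-read rate, uncached tokens at the input rate, and the first call's cache write at the cache-write rate (which here equals the input rate):

```python
# Savings from prompt caching at Qwen3.6 Max Preview list prices.
# Assumption: cached tokens bill at the cache-read rate, the rest at the
# input (or, on the first call, cache-write) rate, both $2/M here.
INPUT, CACHE_READ = 2.00, 0.20  # $ per million tokens

def prompt_cost(prompt_tokens: int, cached_tokens: int) -> float:
    fresh = prompt_tokens - cached_tokens
    return (fresh * INPUT + cached_tokens * CACHE_READ) / 1_000_000

# A 100K-token system prompt reused across 50 calls, fully cached after the first:
no_cache = prompt_cost(100_000, 0) * 50                               # $10.00
with_cache = prompt_cost(100_000, 0) + prompt_cost(100_000, 100_000) * 49
print(f"${no_cache:.2f} vs ${with_cache:.2f}")
```

Under these assumptions the cached run costs $1.18 instead of $10.00 for the prompt side alone; output tokens are billed the same either way.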