Qwen3 Coder Flash
bailian/qwen3-coder-flash is a code-generation model based on Qwen3. It inherits the coding-agent capabilities of Qwen3-Coder-Plus, supports multi-turn tool interaction, and is optimized for repository-level code understanding with improved tool-calling stability.
1M context window
64K max output tokens
Released: 2025-08-05
Supported Protocols: openai, anthropic
Available Providers: Aliyun
Capabilities: Function Calling, Reasoning, Prompt Caching
Pricing
| Type | Price |
|---|---|
| Input Tokens | $0.5/M |
| Output Tokens | $2.5/M |
| Cache Read | $0.06/M |
| Cache Write | $0.27/M |
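The rates above translate to per-request cost as follows. A minimal sketch: the rates are copied from the pricing table, and the assumption that cache-read tokens are billed at the cheaper rate instead of the normal input rate is illustrative, not confirmed by this page.

```python
# Rates from the pricing table above, in USD per million tokens.
INPUT_RATE = 0.50       # input tokens
OUTPUT_RATE = 2.50      # output tokens
CACHE_READ_RATE = 0.06  # tokens served from the prompt cache

def estimate_cost(input_tokens: int, output_tokens: int,
                  cache_read_tokens: int = 0) -> float:
    """Estimate the pay-as-you-go USD cost of one request.

    Assumes cache-read tokens replace normal input billing for the
    cached portion of the prompt (an assumption, not documented here).
    """
    uncached = input_tokens - cache_read_tokens
    return (uncached * INPUT_RATE
            + cache_read_tokens * CACHE_READ_RATE
            + output_tokens * OUTPUT_RATE) / 1_000_000

# A 100K-token prompt with a 4K-token completion:
print(f"${estimate_cost(100_000, 4_000):.4f}")  # → $0.0600
```

With 80K of those prompt tokens served from the cache, the same request would cost $0.0248 under this assumption, which is why prompt caching matters for repeated large-context requests.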
Code Examples
```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.ofox.ai/v1",
    api_key="YOUR_OFOX_API_KEY",
)

response = client.chat.completions.create(
    model="bailian/qwen3-coder-flash",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```
Frequently Asked Questions
How much does Qwen3 Coder Flash cost on Ofox.ai?
Qwen3 Coder Flash on Ofox.ai costs $0.50 per million input tokens and $2.50 per million output tokens. Pay-as-you-go, no monthly fees.

What context window does Qwen3 Coder Flash support?
Qwen3 Coder Flash supports a context window of 1M tokens with a maximum output of 64K tokens, allowing you to process large documents and maintain long conversations.

How do I use Qwen3 Coder Flash with my existing code?
Simply set your base URL to https://api.ofox.ai/v1 and use your Ofox API key. The API is OpenAI-compatible — just change the base URL and API key in your existing code.

What capabilities does Qwen3 Coder Flash support?
Qwen3 Coder Flash supports the following capabilities: Function Calling, Reasoning, Prompt Caching. Access all features through the Ofox.ai unified API.
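Since the model supports Function Calling through the OpenAI-compatible API, a tool-use request can be sketched as below. The `get_file_summary` tool and its local handler are hypothetical examples for illustration; only the base URL and model id come from this page.

```python
import json
import os

# Hypothetical tool definition in the OpenAI tools schema.
TOOLS = [{
    "type": "function",
    "function": {
        "name": "get_file_summary",
        "description": "Summarize a source file in the repository.",
        "parameters": {
            "type": "object",
            "properties": {"path": {"type": "string"}},
            "required": ["path"],
        },
    },
}]

def dispatch(name: str, arguments: str) -> str:
    """Run the local function a model tool call refers to."""
    args = json.loads(arguments)
    if name == "get_file_summary":
        return f"summary of {args['path']}"  # stand-in for real logic
    raise ValueError(f"unknown tool: {name}")

# Only hit the API when a key is configured in the environment.
if os.environ.get("OFOX_API_KEY"):
    from openai import OpenAI

    client = OpenAI(
        base_url="https://api.ofox.ai/v1",
        api_key=os.environ["OFOX_API_KEY"],
    )
    response = client.chat.completions.create(
        model="bailian/qwen3-coder-flash",
        messages=[{"role": "user", "content": "Summarize src/main.py"}],
        tools=TOOLS,
    )
    # When the model chooses to call the tool, execute it locally and
    # (in a multi-turn agent loop) feed the result back as a "tool" message.
    for call in response.choices[0].message.tool_calls or []:
        print(dispatch(call.function.name, call.function.arguments))
```

In a full agent loop you would append the tool result to `messages` with `role: "tool"` and the matching `tool_call_id`, then call the API again so the model can continue with the result.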