Qwen

Qwen: Qwen3.5 Flash

Chat
bailian/qwen3.5-flash

Qwen3.5 Flash is the flash-tier model in the natively multimodal Qwen3.5 vision-language series. It uses a hybrid architecture that combines linear attention with a sparse Mixture-of-Experts design for higher inference efficiency. Both text-only and multimodal quality improve substantially over the Qwen3 series, and the model responds quickly, balancing inference speed with output quality.

1M context window
64K max output tokens
Released: 2026-02-23
Supported Protocols: OpenAI, Anthropic
Available Providers: BaiLian (Aliyun)
Capabilities: Vision, Function Calling, Reasoning, Prompt Caching, Video Input

Pricing

Input Tokens: $0.10 per million
Output Tokens: $0.40 per million
Cache Read: $0.01 per million
Cache Write: $0.125 per million
Web Search: $0.01 per request
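To make the rates above concrete, here is a minimal cost calculator using the listed pay-as-you-go prices. The function name and the assumption that cached input tokens are billed at the cache-read rate instead of the input rate are illustrative, not part of the official billing documentation.

```python
# Illustrative cost calculator for bailian/qwen3.5-flash using the
# listed rates. Assumes cached input tokens are billed at the
# cache-read rate instead of the normal input rate.
INPUT_PER_M = 0.10       # $ per 1M input tokens
OUTPUT_PER_M = 0.40      # $ per 1M output tokens
CACHE_READ_PER_M = 0.01  # $ per 1M cached input tokens

def request_cost(input_tokens, output_tokens, cached_tokens=0):
    """Return the dollar cost of a single request.

    `cached_tokens` is the portion of `input_tokens` served from the
    prompt cache."""
    uncached = input_tokens - cached_tokens
    return (uncached * INPUT_PER_M
            + cached_tokens * CACHE_READ_PER_M
            + output_tokens * OUTPUT_PER_M) / 1_000_000

# e.g. a 100K-token prompt, 80K of it cached, with a 2K-token reply:
print(f"${request_cost(100_000, 2_000, cached_tokens=80_000):.6f}")
# → $0.003600
```

Cache reads are 10x cheaper than fresh input, so reusing a long shared prefix across requests dominates the savings in the example above.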

Code Examples

from openai import OpenAI

# Point the OpenAI SDK at the Ofox API endpoint.
client = OpenAI(
    base_url="https://api.ofox.ai/v1",
    api_key="YOUR_OFOX_API_KEY",
)

# Standard chat completion request against Qwen3.5 Flash.
response = client.chat.completions.create(
    model="bailian/qwen3.5-flash",
    messages=[
        {"role": "user", "content": "Hello!"}
    ],
)
print(response.choices[0].message.content)
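Since Vision is among the model's capabilities, a sketch of a multimodal message payload in the OpenAI chat format follows. This assumes the endpoint accepts the standard `image_url` content part; the image URL is a placeholder. Pass it as the `messages` argument to `client.chat.completions.create` as in the example above.

```python
# Multimodal message payload in the OpenAI chat format. Assumes the
# endpoint accepts the standard image_url content part; the URL below
# is a placeholder.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image."},
            {
                "type": "image_url",
                "image_url": {"url": "https://example.com/photo.jpg"},
            },
        ],
    }
]

# Pass this as `messages` to
# client.chat.completions.create(model="bailian/qwen3.5-flash", ...)
print(messages[0]["content"][0]["text"])
```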

Frequently Asked Questions

Qwen3.5 Flash on Ofox.ai costs $0.10 per million input tokens and $0.40 per million output tokens. Pricing is pay-as-you-go, with no monthly fees.

Discord

Join our Discord server