DeepSeek

DeepSeek V4 Flash

Chat
deepseek/deepseek-v4-flash

DeepSeek V4 Flash is an efficiency-optimized Mixture-of-Experts model from DeepSeek with 284B total parameters and 13B activated parameters, supporting a 1M-token context window. It is designed for fast inference and high-throughput workloads, while maintaining strong reasoning and coding performance.

1M context window
384K max output tokens
Released: 2026-04-24
Supported Protocols: OpenAI, Anthropic
Available Providers: DeepSeek
Capabilities: Function Calling, Prompt Caching

Pricing

Type            Price
Input Tokens    $0.14/M
Output Tokens   $0.28/M
Cache Read      $0.028/M
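The Cache Read rate applies to input tokens served from a previously cached prompt prefix, at one-tenth the standard input price. A minimal arithmetic sketch of a blended request cost, using hypothetical token counts:

```python
# Rates from the pricing table above, in USD per million tokens.
INPUT, OUTPUT, CACHE_READ = 0.14, 0.28, 0.028

# Hypothetical request: 100K-token cached prefix, 20K fresh input tokens,
# 2K output tokens.
cached, fresh_in, out = 100_000, 20_000, 2_000

cost = (cached * CACHE_READ + fresh_in * INPUT + out * OUTPUT) / 1_000_000
print(f"${cost:.5f}")  # caching cuts the prefix cost from $0.014 to $0.0028
```

Without caching, the same 100K-token prefix would bill at the full input rate, so long repeated system prompts benefit the most.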

Code Examples

from openai import OpenAI

client = OpenAI(
    base_url="https://api.ofox.ai/v1",
    api_key="YOUR_OFOX_API_KEY",
)

response = client.chat.completions.create(
    model="deepseek/deepseek-v4-flash",
    messages=[
        {"role": "user", "content": "Hello!"}
    ],
)
print(response.choices[0].message.content)
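Because the endpoint is OpenAI-compatible, any HTTP client can exercise the listed Function Calling capability by posting a tool schema alongside the messages. A sketch with only the standard library, assuming the `/chat/completions` path implied by the base URL above; the `get_weather` tool is a hypothetical example, not a built-in:

```python
import json
import urllib.request

# Request payload for the OpenAI-compatible chat completions endpoint.
payload = {
    "model": "deepseek/deepseek-v4-flash",
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical example tool
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
}

req = urllib.request.Request(
    "https://api.ofox.ai/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": "Bearer YOUR_OFOX_API_KEY",
        "Content-Type": "application/json",
    },
)
# urllib.request.urlopen(req) sends the request (requires a valid key);
# tool calls come back under choices[0].message.tool_calls.
```

If the model decides to call the tool, the response carries the function name and JSON-encoded arguments instead of plain text content.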

Frequently Asked Questions

How much does DeepSeek V4 Flash cost on Ofox.ai?

DeepSeek V4 Flash on Ofox.ai costs $0.14 per million input tokens and $0.28 per million output tokens, with cached input billed at $0.028 per million. Pricing is pay-as-you-go, with no monthly fees.