DeepSeek V4 Pro
DeepSeek V4 Pro (`deepseek/deepseek-v4-pro`) is a large-scale Mixture-of-Experts model from DeepSeek with 1.6T total parameters and 49B activated parameters, supporting a 1M-token context window. It is designed for advanced reasoning, coding, and long-horizon agent workflows, with strong performance across knowledge, math, and software engineering benchmarks.
1M context window
384K max output tokens
Released: 2026-04-24
Supported Protocols: openai, anthropic
Available Providers: DeepSeek
Capabilities: Function Calling, Prompt Caching
Pricing
| Type | Price (USD per 1M tokens) |
|---|---|
| Input Tokens | $1.74 |
| Output Tokens | $3.48 |
| Cache Read | $0.145 |
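As a rough illustration of the pay-as-you-go pricing above, the cost of a single request can be estimated from its token counts. A minimal sketch (prices hardcoded from the table; the assumption that cached input tokens are billed at the cache-read rate instead of the input rate follows common prompt-caching billing, and should be verified against Ofox.ai's billing docs):

```python
# Prices from the table above, in USD per 1M tokens
INPUT_PRICE = 1.74
OUTPUT_PRICE = 3.48
CACHE_READ_PRICE = 0.145

def estimate_cost(input_tokens: int, output_tokens: int, cached_tokens: int = 0) -> float:
    """Estimate request cost in USD.

    cached_tokens is the portion of input_tokens assumed to be billed
    at the cheaper cache-read rate.
    """
    uncached = input_tokens - cached_tokens
    return (uncached * INPUT_PRICE
            + cached_tokens * CACHE_READ_PRICE
            + output_tokens * OUTPUT_PRICE) / 1_000_000

# e.g. a 100K-token prompt (80K of it cached) producing a 2K-token answer
print(f"${estimate_cost(100_000, 2_000, cached_tokens=80_000):.4f}")  # → $0.0534
```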
Code Examples
```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.ofox.ai/v1",
    api_key="YOUR_OFOX_API_KEY",
)

response = client.chat.completions.create(
    model="deepseek/deepseek-v4-pro",
    messages=[{"role": "user", "content": "Hello!"}],
)

print(response.choices[0].message.content)
```
Frequently Asked Questions
DeepSeek V4 Pro on Ofox.ai costs $1.74 per million input tokens and $3.48 per million output tokens. Pay-as-you-go, no monthly fees.
DeepSeek V4 Pro supports a 1M-token context window with a maximum output of 384K tokens, allowing you to process large documents and maintain long conversations.
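When budgeting prompts against these limits, a simple check is useful. A minimal sketch, assuming (this is an assumption, not stated above) that input and generated output must fit within the context window together:

```python
CONTEXT_WINDOW = 1_000_000  # 1M-token context window
MAX_OUTPUT = 384_000        # 384K max output tokens

def max_input_tokens(reserved_output: int = MAX_OUTPUT) -> int:
    """Largest input that still leaves room for reserved_output tokens,
    assuming input + output share the context window."""
    if reserved_output > MAX_OUTPUT:
        raise ValueError("cannot reserve more than the model's max output")
    return CONTEXT_WINDOW - reserved_output

# Reserving the full 384K output budget leaves 616K tokens for input
print(max_input_tokens())  # → 616000
```

Reserving a smaller output budget (e.g. 4K tokens for a short answer) frees almost the entire window for input.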
Simply set your base URL to https://api.ofox.ai/v1 and use your Ofox API key. The API is OpenAI-compatible — just change the base URL and API key in your existing code.
DeepSeek V4 Pro supports the following capabilities: Function Calling, Prompt Caching. Access all features through the Ofox.ai unified API.
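Since Function Calling is listed among the capabilities, tool definitions can be passed through the OpenAI-compatible `tools` parameter. A minimal sketch of the schema and a local dispatcher; the `get_weather` tool and its handler are hypothetical examples, not part of the Ofox.ai API:

```python
import json

# Hypothetical tool definition in the OpenAI function-calling schema
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

def handle_tool_call(tool_call: dict) -> str:
    """Dispatch one tool call (as returned by the model) to a local handler."""
    name = tool_call["function"]["name"]
    args = json.loads(tool_call["function"]["arguments"])
    if name == "get_weather":
        return f"Sunny in {args['city']}"  # stub: a real handler would call a weather API
    raise ValueError(f"unknown tool: {name}")

# The tools list is passed alongside messages, e.g.:
# response = client.chat.completions.create(
#     model="deepseek/deepseek-v4-pro",
#     messages=messages,
#     tools=tools,
# )
# Any tool calls in response.choices[0].message.tool_calls are then
# executed locally and their results sent back in a follow-up message.
```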