Z.ai: GLM-4.7 FlashX

Chat
z-ai/glm-4.7-flashx

As a 30B-class SOTA model, GLM-4.7-Flash offers a new option that balances performance and efficiency. It is further optimized for agentic coding use cases, strengthening coding ability, long-horizon task planning, and tool collaboration, and it leads open-source models of its size on several current public benchmark leaderboards.

200K context window
128K max output tokens
Released: 2026-01-19
Supported Protocols: OpenAI, Anthropic
Available Providers: Zhipu
Capabilities: Function Calling, Reasoning, Prompt Caching, Web Search
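Function calling uses the standard OpenAI tools schema. A minimal sketch of a tool definition and request payload, using the model id from this page (the weather tool itself is a hypothetical example):

```python
# A hypothetical tool definition in the OpenAI function-calling format.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                },
                "required": ["city"],
            },
        },
    }
]

# The full request body: pass these fields to
# client.chat.completions.create(**request).
request = {
    "model": "z-ai/glm-4.7-flashx",
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": tools,
    "tool_choice": "auto",
}
```

When the model decides to call the tool, the response's `message.tool_calls` carries the function name and JSON arguments; execute the tool and send the result back in a follow-up `tool` role message.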

Pricing

Type             Price
Input Tokens     $0.072 per million tokens
Output Tokens    $0.43 per million tokens
Cache Read       $0.015 per million tokens
Web Search       $0.005 per request
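To estimate a request's cost, multiply each token count by its per-million rate from the table above; cached input is billed at the cheaper cache-read rate. A small sketch (rates from the table, token counts hypothetical):

```python
# USD-per-token rates derived from the pricing table above.
RATES = {
    "input": 0.072 / 1_000_000,
    "output": 0.43 / 1_000_000,
    "cache_read": 0.015 / 1_000_000,
}

def request_cost(input_tokens: int, output_tokens: int, cached_tokens: int = 0) -> float:
    """Estimate the USD cost of one request. Tokens served from the
    prompt cache are billed at the cache-read rate instead of the
    input rate."""
    billable_input = input_tokens - cached_tokens
    return (
        billable_input * RATES["input"]
        + cached_tokens * RATES["cache_read"]
        + output_tokens * RATES["output"]
    )

# Example: 10K input tokens (4K of them cached) and 2K output tokens.
cost = request_cost(10_000, 2_000, cached_tokens=4_000)  # ≈ $0.001352
```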

Code Examples

from openai import OpenAI

# Point the OpenAI SDK at the Ofox endpoint.
client = OpenAI(
    base_url="https://api.ofox.ai/v1",
    api_key="YOUR_OFOX_API_KEY",
)

# Send a single-turn chat completion request.
response = client.chat.completions.create(
    model="z-ai/glm-4.7-flashx",
    messages=[
        {"role": "user", "content": "Hello!"}
    ],
)
print(response.choices[0].message.content)
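Since the endpoint is OpenAI-compatible, the same call can be made without the SDK. A sketch of the raw HTTP request using only the standard library, assuming the conventional /v1/chat/completions path (the request is built but not sent):

```python
import json
import urllib.request

# Build (but do not send) the raw chat-completions request.
payload = {
    "model": "z-ai/glm-4.7-flashx",
    "messages": [{"role": "user", "content": "Hello!"}],
}
req = urllib.request.Request(
    "https://api.ofox.ai/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": "Bearer YOUR_OFOX_API_KEY",
        "Content-Type": "application/json",
    },
    method="POST",
)
# To actually send it (requires a valid API key):
#   with urllib.request.urlopen(req) as resp:
#       body = json.load(resp)
#       print(body["choices"][0]["message"]["content"])
```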

Frequently Asked Questions

How much does Z.ai: GLM-4.7 FlashX cost on Ofox.ai?

Z.ai: GLM-4.7 FlashX on Ofox.ai costs $0.072 per million input tokens and $0.43 per million output tokens. Pay-as-you-go, no monthly fees.

Discord

Join our Discord server

Discord โ†’