Zhipu

Z.ai: GLM-4.7-Flash (Free)

z-ai/glm-4.7-flash:free

A state-of-the-art model in the 30B parameter class, GLM-4.7-Flash offers a new option that balances performance and efficiency. It is further optimized for agentic coding use cases, with stronger coding capabilities, long-horizon task planning, and tool collaboration, and achieves leading results among open-source models of its size on several current public benchmark leaderboards.

200K context window
128K max output tokens
Released: 2026-01-19
Supported Protocols: OpenAI, Anthropic
Available Providers: Zhipu
Capabilities: Function Calling, Prompt Caching, Web Search

Pricing

Type            Price
Input Tokens    $0 / M tokens
Output Tokens   $0 / M tokens
Web Search      $0.005 / request
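Under this pricing, token usage on the free tier costs nothing and only Web Search requests are billed. A minimal sketch of estimating a bill from the table above (the usage figures are illustrative, not from the page):

```python
# Per-unit prices from the pricing table above.
INPUT_PER_M = 0.0            # $ per million input tokens (free tier)
OUTPUT_PER_M = 0.0           # $ per million output tokens (free tier)
WEB_SEARCH_PER_REQ = 0.005   # $ per Web Search request

def estimated_cost(input_tokens: int, output_tokens: int, web_searches: int) -> float:
    """Estimate the pay-as-you-go cost for a given amount of usage."""
    return (input_tokens / 1e6 * INPUT_PER_M
            + output_tokens / 1e6 * OUTPUT_PER_M
            + web_searches * WEB_SEARCH_PER_REQ)

# Example: 5M input tokens, 1M output tokens, 200 web searches.
print(estimated_cost(5_000_000, 1_000_000, 200))  # prints 1.0
```

Because input and output tokens are priced at $0, the bill here reduces to the Web Search line alone.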

Code Examples

from openai import OpenAI

# Point the OpenAI-compatible client at the Ofox.ai endpoint.
client = OpenAI(
    base_url="https://api.ofox.ai/v1",
    api_key="YOUR_OFOX_API_KEY",  # replace with your own key
)

# Send a simple chat completion request to GLM-4.7-Flash (free tier).
response = client.chat.completions.create(
    model="z-ai/glm-4.7-flash:free",
    messages=[
        {"role": "user", "content": "Hello!"}
    ],
)

print(response.choices[0].message.content)
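Since the model lists Function Calling among its capabilities, a request can also attach tools using the OpenAI-style `tools` schema. The sketch below builds such a payload; `get_weather` is a hypothetical tool invented for illustration, and the assembled dict would be passed as keyword arguments to `client.chat.completions.create` as in the example above:

```python
import json

# Hypothetical tool definition, following the OpenAI-compatible
# "tools" schema that the supported-protocols list implies.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}

def build_request(user_message: str) -> dict:
    """Assemble a chat-completion payload with the tool attached."""
    return {
        "model": "z-ai/glm-4.7-flash:free",
        "messages": [{"role": "user", "content": user_message}],
        "tools": [get_weather_tool],
        "tool_choice": "auto",  # let the model decide when to call the tool
    }

payload = build_request("What's the weather in Beijing?")
print(json.dumps(payload, indent=2))
```

With `tool_choice="auto"`, the model may respond with a `tool_calls` entry naming `get_weather` and JSON arguments, which the caller executes and feeds back in a follow-up message.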

Frequently Asked Questions

How much does Z.ai: GLM-4.7-Flash (Free) cost on Ofox.ai?

Z.ai: GLM-4.7-Flash (Free) costs $0 per million input tokens and $0 per million output tokens on Ofox.ai. Billing is pay-as-you-go with no monthly fees; only Web Search requests are billed, at $0.005 per request.

Discord

Join our Discord server
