Z.ai: GLM-4.7-Flash (Free)
z-ai/glm-4.7-flash:free

As a 30B-class SOTA model, GLM-4.7-Flash offers a new option that balances performance and efficiency. It is further optimized for agentic coding use cases, strengthening coding capabilities, long-horizon task planning, and tool collaboration, and it has achieved leading performance among open-source models of its size on several current public benchmark leaderboards.
200K context window
128K max output tokens
Released: 2026-01-19
Supported Protocols: openai, anthropic
Available Providers: Zhipu
Capabilities: Function Calling, Prompt Caching, Web Search
Pricing
| Type | Price |
|---|---|
| Input Tokens | $0 / M tokens |
| Output Tokens | $0 / M tokens |
| Web Search | $0.005 / request |
Code Examples
```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.ofox.ai/v1",
    api_key="YOUR_OFOX_API_KEY",
)

response = client.chat.completions.create(
    model="z-ai/glm-4.7-flash:free",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```
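Because the endpoint follows the OpenAI chat-completions wire format, you can also call it without the SDK. A minimal sketch using only the standard library, assuming the conventional /v1/chat/completions path implied by the OpenAI-compatible protocol above; the OFOX_API_KEY environment variable name is illustrative, and the request is only sent when that key is actually set:

```python
import json
import os
import urllib.request

# Same request as the SDK example, built as a plain JSON payload.
payload = {
    "model": "z-ai/glm-4.7-flash:free",
    "messages": [{"role": "user", "content": "Hello!"}],
}

api_key = os.environ.get("OFOX_API_KEY")  # illustrative variable name

req = urllib.request.Request(
    "https://api.ofox.ai/v1/chat/completions",  # assumed OpenAI-compatible path
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",
    },
)

if api_key:  # only hit the network when a key is configured
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
        print(body["choices"][0]["message"]["content"])
```

The same payload works from any HTTP client; only the base URL and the Authorization header are specific to Ofox.ai.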
Frequently Asked Questions
Z.ai: GLM-4.7-Flash (Free) on Ofox.ai costs $0 per million input tokens and $0 per million output tokens. Pay-as-you-go, no monthly fees.
Z.ai: GLM-4.7-Flash (Free) supports a context window of 200K tokens with max output of 128K tokens, allowing you to process large documents and maintain long conversations.
Simply set your base URL to https://api.ofox.ai/v1 and use your Ofox API key. The API is OpenAI-compatible: just change the base URL and API key in your existing code.
Z.ai: GLM-4.7-Flash (Free) supports the following capabilities: Function Calling, Prompt Caching, Web Search. Access all features through the Ofox.ai unified API.