GLM-5V-Turbo
z-ai/glm-5v-turbo

GLM-5V-Turbo is Z.AI's first multimodal coding foundation model, built for vision-based coding tasks. It natively processes multimodal inputs such as images, video, and text, and it excels at long-horizon planning, complex coding, and action execution.
200K context window
128K max output tokens
Released: 2026-04-01
Supported Protocols: openai, anthropic
Available Providers: Zhipu
Capabilities: Vision, Function Calling, Reasoning, Prompt Caching, Web Search, Video Input, PDF Input
Pricing
| Type | Price |
|---|---|
| Input Tokens | $1.2/M |
| Output Tokens | $4/M |
| Cache Read | $0.24/M |
| Web Search | $0.01/R |
Code Examples
```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.ofox.ai/v1",
    api_key="YOUR_OFOX_API_KEY",
)

response = client.chat.completions.create(
    model="z-ai/glm-5v-turbo",
    messages=[{"role": "user", "content": "Hello!"}],
)

print(response.choices[0].message.content)
```
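Since the model accepts vision input through the same OpenAI-compatible endpoint, an image is typically attached as an `image_url` content part alongside the text prompt. The sketch below only builds that message payload (it does not call the API), assuming the standard OpenAI chat format for inline base64 images; the byte string stands in for real image data.

```python
# Sketch: building a multimodal user message for GLM-5V-Turbo using the
# OpenAI-compatible chat format. The image is embedded as a base64
# data: URL inside an "image_url" content part. This constructs the
# payload only; pass it as `messages=[message]` to the client above.
import base64


def build_vision_message(prompt: str, image_bytes: bytes, mime: str = "image/png") -> dict:
    """Build a single user message containing text plus an inline image."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url", "image_url": {"url": f"data:{mime};base64,{b64}"}},
        ],
    }


# Placeholder bytes; in practice, read a real image file.
message = build_vision_message("What is in this image?", b"\x89PNG...")
print(message["content"][1]["image_url"]["url"][:30])
```

The same payload shape also covers PDF and video input on providers that accept them as content parts, though exact part types can vary by provider.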
Frequently Asked Questions
**How much does GLM-5V-Turbo cost on Ofox.ai?**
GLM-5V-Turbo on Ofox.ai costs $1.20 per million input tokens and $4.00 per million output tokens. Pay-as-you-go, no monthly fees.

**What context window does GLM-5V-Turbo support?**
GLM-5V-Turbo supports a context window of 200K tokens with a max output of 128K tokens, allowing you to process large documents and maintain long conversations.

**How do I access GLM-5V-Turbo through Ofox.ai?**
Simply set your base URL to https://api.ofox.ai/v1 and use your Ofox API key. The API is OpenAI-compatible — just change the base URL and API key in your existing code.

**What capabilities does GLM-5V-Turbo support?**
GLM-5V-Turbo supports the following capabilities: Vision, Function Calling, Reasoning, Prompt Caching, Web Search, Video Input, PDF Input. Access all features through the Ofox.ai unified API.