GPT-4o Mini
Chat · openai/gpt-4o-mini

OpenAI's advanced small model, 60% cheaper than GPT-3.5 Turbo while scoring 82% on MMLU. Optimized for cost-effective tasks requiring strong language understanding and generation capabilities.
128K context window
16K max output tokens
Released: 2024-07-18
Supported Protocols: openai
Available Providers: Azure
Capabilities: Vision, Function Calling, Prompt Caching
Pricing
| Type | Price |
|---|---|
| Input Tokens | $0.15/M |
| Output Tokens | $0.60/M |
| Cache Read | $0.075/M |
| Web Search | $0.01/R |
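To make the per-million-token rates above concrete, here is a small illustrative cost estimator (the helper name and the cached-token handling are assumptions for the sketch; prices mirror the table):

```python
# Per-million-token prices from the table above (USD).
PRICE_PER_MILLION = {
    "input": 0.15,        # $/M input tokens
    "output": 0.60,       # $/M output tokens
    "cache_read": 0.075,  # $/M cached input tokens (prompt caching)
}

def estimate_cost(input_tokens: int, output_tokens: int, cached_tokens: int = 0) -> float:
    """Estimated USD cost for one request; cached tokens bill at the lower rate."""
    billable_input = input_tokens - cached_tokens
    return (
        billable_input * PRICE_PER_MILLION["input"]
        + cached_tokens * PRICE_PER_MILLION["cache_read"]
        + output_tokens * PRICE_PER_MILLION["output"]
    ) / 1_000_000

# Example: 10K input tokens (2K of them cached) and 1K output tokens.
print(f"${estimate_cost(10_000, 1_000, cached_tokens=2_000):.6f}")  # → $0.001950
```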
Code Examples
```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.ofox.ai/v1",
    api_key="YOUR_OFOX_API_KEY",
)

response = client.chat.completions.create(
    model="openai/gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```
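Since the model lists Vision among its capabilities, a sketch of sending an image through the same endpoint follows. It uses the standard OpenAI chat-completions image content format; the image URL is a placeholder, and reading the key from an `OFOX_API_KEY` environment variable is an assumption for this example.

```python
import os

# Vision input: image content parts alongside text, in the standard
# OpenAI-compatible chat-completions message format.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "What is in this image?"},
            # Placeholder URL — replace with a real, publicly reachable image.
            {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
        ],
    }
]

if os.environ.get("OFOX_API_KEY"):  # only call out when a key is configured
    from openai import OpenAI

    client = OpenAI(base_url="https://api.ofox.ai/v1", api_key=os.environ["OFOX_API_KEY"])
    response = client.chat.completions.create(model="openai/gpt-4o-mini", messages=messages)
    print(response.choices[0].message.content)
```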
Frequently Asked Questions
How much does GPT-4o Mini cost on Ofox.ai?

GPT-4o Mini on Ofox.ai costs $0.15 per million input tokens and $0.60 per million output tokens. Pay-as-you-go, no monthly fees.

What context window does GPT-4o Mini support?

GPT-4o Mini supports a context window of 128K tokens with a max output of 16K tokens, allowing you to process large documents and maintain long conversations.

How do I access GPT-4o Mini through Ofox.ai?

Simply set your base URL to https://api.ofox.ai/v1 and use your Ofox API key. The API is OpenAI-compatible: just change the base URL and API key in your existing code.

What capabilities does GPT-4o Mini support?

GPT-4o Mini supports the following capabilities: Vision, Function Calling, Prompt Caching. Access all features through the Ofox.ai unified API.
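Function Calling, one of the capabilities listed above, can be sketched with the standard OpenAI tools schema. The `get_weather` tool is hypothetical, and reading the key from an `OFOX_API_KEY` environment variable is an assumption for this example.

```python
import json
import os

# Tool definition in the standard OpenAI tools schema.
# get_weather is a hypothetical function used only for illustration.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

if os.environ.get("OFOX_API_KEY"):  # skip the network call when no key is set
    from openai import OpenAI

    client = OpenAI(base_url="https://api.ofox.ai/v1", api_key=os.environ["OFOX_API_KEY"])
    response = client.chat.completions.create(
        model="openai/gpt-4o-mini",
        messages=[{"role": "user", "content": "What's the weather in Paris?"}],
        tools=tools,
    )
    # If the model decides to call the tool, inspect its name and arguments.
    call = response.choices[0].message.tool_calls[0]
    print(call.function.name, json.loads(call.function.arguments))
```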