
Google: Gemini 2.5 Flash Lite

Chat · google/gemini-2.5-flash-lite

Gemini 2.5 Flash-Lite is a lightweight reasoning model in the Gemini 2.5 family, optimized for ultra-low latency and cost efficiency. It offers improved throughput, faster token generation, and better performance across common benchmarks than earlier Flash models. By default, thinking (i.e. multi-pass reasoning) is disabled to prioritize speed, but developers can enable it via the reasoning API parameter to selectively trade cost for intelligence.
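Because thinking is off by default for this model, enabling it is an explicit opt-in. A minimal sketch of the request body, assuming the standard Gemini `generationConfig.thinkingConfig.thinkingBudget` field applies unchanged through Ofox.ai:

```python
import json

# Sketch: request body that opts in to thinking by granting a token budget.
# A budget of 0 keeps thinking disabled (the Flash-Lite default); a positive
# value caps how many tokens the model may spend on reasoning.
payload = {
    "contents": [{"parts": [{"text": "Why is the sky blue?"}]}],
    "generationConfig": {
        "thinkingConfig": {"thinkingBudget": 1024},
    },
}
print(json.dumps(payload, indent=2))
```

A larger budget buys more reasoning at the cost of latency and output tokens, so tune it per workload rather than setting it globally.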

1M context window
66K max output tokens
Released: 2025-07-22
Supported protocols: OpenAI (openai), Gemini (gemini)
Available providers: Google, Cloud Vertex
Capabilities: Vision, Function Calling, Prompt Caching, PDF input
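Since the OpenAI protocol is supported, the model can also be called with a standard chat-completions request. A sketch using only the standard library; the base URL path below is an assumption, so check the Ofox.ai docs for the actual OpenAI-compatible endpoint:

```python
import json
import urllib.request

# Assumed endpoint for the OpenAI-compatible protocol (verify against docs).
OFOX_OPENAI_BASE = "https://api.ofox.ai/openai"

def build_chat_request(prompt: str, api_key: str) -> urllib.request.Request:
    """Build an OpenAI-style chat-completions request for this model."""
    body = {
        "model": "google/gemini-2.5-flash-lite",
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{OFOX_OPENAI_BASE}/v1/chat/completions",
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("Hello!", "YOUR_OFOX_API_KEY")
print(req.full_url)
```

The same request shape works with the official OpenAI SDK by pointing its `base_url` at the endpoint above.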

Pricing

Type            Price
Input tokens    $0.1/M
Output tokens   $0.4/M
Audio input     $0.3/M
Cache read      $0.025/M
Cache write     $1/M
Cached audio    $0.3/M
Web search      $0.035/R
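The per-million rates above make per-request cost easy to estimate. A small helper for text-only requests (audio, caching, and web-search charges are omitted for brevity):

```python
# USD prices per million tokens, taken from the pricing table above.
PRICE_PER_M = {
    "input": 0.10,
    "output": 0.40,
}

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of a single text request."""
    return (input_tokens * PRICE_PER_M["input"]
            + output_tokens * PRICE_PER_M["output"]) / 1_000_000

# Example: a request with 100k input tokens and 10k output tokens.
cost = estimate_cost(100_000, 10_000)
print(f"${cost:.4f}")
```

At these rates, output tokens cost 4x input tokens, so capping response length is the quickest lever for cost control.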

Code examples

from google import genai

# Point the google-genai client at the Ofox.ai Gemini-compatible endpoint.
client = genai.Client(
    api_key="YOUR_OFOX_API_KEY",
    http_options={"api_version": "v1beta", "base_url": "https://api.ofox.ai/gemini"},
)
response = client.models.generate_content(
    model="google/gemini-2.5-flash-lite",
    contents="Hello!",
)
print(response.text)

Frequently asked questions

How much does Google: Gemini 2.5 Flash Lite cost on Ofox.ai?
It costs $0.1 per million input tokens and $0.4 per million output tokens. Pay-as-you-go, with no monthly fees.
