Gemini

Google: Gemini 2.5 Flash Lite

Chat
google/gemini-2.5-flash-lite

Gemini 2.5 Flash-Lite is a lightweight reasoning model in the Gemini 2.5 family, optimized for ultra-low latency and cost efficiency. It offers improved throughput, faster token generation, and better performance across common benchmarks compared to earlier Flash models. By default, thinking (i.e. multi-pass reasoning) is disabled to prioritize speed, but developers can enable it via the Reasoning API parameter to selectively trade off cost for intelligence.
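As a sketch of the trade-off described above, the snippet below builds a Gemini-protocol request body with thinking off (the default) or on. The field names (`generationConfig`, `thinkingConfig`, `thinkingBudget`) follow Google's public Gemini REST API; whether Ofox.ai forwards them unchanged is an assumption.

```python
# Minimal sketch: a generateContent request body that optionally enables
# thinking. Field names follow Google's Gemini REST API; Ofox.ai pass-through
# of these fields is an assumption.
def build_request(prompt: str, thinking_budget: int = 0) -> dict:
    """Build a generateContent body; thinking_budget=0 keeps thinking off."""
    body = {
        "contents": [{"role": "user", "parts": [{"text": prompt}]}],
    }
    if thinking_budget > 0:
        # Opt in to multi-pass reasoning, trading latency/cost for quality.
        body["generationConfig"] = {
            "thinkingConfig": {"thinkingBudget": thinking_budget}
        }
    return body

fast = build_request("Hello!")                          # default: no thinking
smart = build_request("Explain why.", thinking_budget=1024)
```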

1M context window
66K max output tokens
Released: 2025-07-22
Supported protocols: OpenAI, Gemini
Available providers: Google Cloud Vertex
Capabilities: Vision, Function Calling, Prompt Caching, PDF Input

Pricing

Type            Price
Input tokens    $0.1/M
Output tokens   $0.4/M
Audio input     $0.3/M
Cache read      $0.025/M
Cache write     $1/M
Cached audio    $0.3/M
Web search      $0.035/R
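The per-token rates in the table above can be turned into a rough per-request cost estimate. The helper below uses only the listed token prices (USD per million tokens); the function name and token-count dict are illustrative, not part of any Ofox.ai API.

```python
# Rough cost estimator for Gemini 2.5 Flash-Lite at the Ofox.ai rates listed
# above. Keys and prices mirror the pricing table (USD per million tokens);
# the helper itself is a hypothetical convenience, not an official API.
PRICES_PER_MILLION = {
    "input": 0.1,
    "output": 0.4,
    "audio_input": 0.3,
    "cache_read": 0.025,
    "cache_write": 1.0,
    "cached_audio": 0.3,
}

def estimate_cost(tokens: dict) -> float:
    """Map token counts per category to an estimated USD cost."""
    return sum(
        PRICES_PER_MILLION[kind] * count / 1_000_000
        for kind, count in tokens.items()
    )

# Example: a request with 50K input tokens and 10K output tokens.
cost = estimate_cost({"input": 50_000, "output": 10_000})
```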

Code examples

from google import genai

# Point the google-genai client at the Ofox.ai Gemini-compatible endpoint.
client = genai.Client(
    api_key="YOUR_OFOX_API_KEY",
    http_options={"api_version": "v1beta", "base_url": "https://api.ofox.ai/gemini"},
)

# Send a simple text prompt to Gemini 2.5 Flash-Lite.
response = client.models.generate_content(
    model="google/gemini-2.5-flash-lite",
    contents="Hello!",
)
print(response.text)

Frequently asked questions

How much does Google: Gemini 2.5 Flash Lite cost on Ofox.ai?

Google: Gemini 2.5 Flash Lite on Ofox.ai costs $0.1 per million input tokens and $0.4 per million output tokens. Pay-as-you-go, with no monthly fees.

Discord

Join our Discord server