Live pricing — last refreshed Apr 21, 2026

Gemini 2.5 Flash Lite vs GPT-4o

Head-to-head API pricing and cost comparison between Google's Gemini 2.5 Flash Lite and OpenAI's GPT-4o. Prices auto-refresh daily from OpenRouter.

Verdict

Gemini 2.5 Flash Lite is 96% cheaper for input tokens ($0.10 vs $2.50 per 1M) and 96% cheaper for output tokens ($0.40 vs $10.00 per 1M).

Side-by-side comparison

| Spec | Gemini 2.5 Flash Lite | GPT-4o |
| --- | --- | --- |
| Input price (per 1M) | $0.10 | $2.50 |
| Cached input (per 1M) | $0.01 | N/A |
| Output price (per 1M) | $0.40 | $10.00 |
| Batch input (per 1M) | N/A | $1.25 |
| Batch output (per 1M) | N/A | $5.00 |
| Reasoning price (per 1M) | $0.40 | N/A |
| Context window | 1,049K | 128K |
| Vision support | Yes | Yes |
| Caching support | Yes | No |
| Batch API | No | Yes |
| Reasoning capability | Yes | No |

Monthly cost at volume

Estimated monthly API spend (30-day month) at common production traffic levels, with input/output tokens per request shown.

| Volume | Tokens per request | Gemini 2.5 Flash Lite | GPT-4o | Savings |
| --- | --- | --- | --- | --- |
| 1K req/day | 500 in / 200 out | $3.90 | $97.50 | $93.60 |
| 10K req/day | 1,500 in / 500 out | $105.00 | $2,625 | $2,520 |
| 100K req/day | 3,000 in / 800 out | $1,860 | $46,500 | $44,640 |
| 1M req/day | 8,000 in / 2,000 out | $48,000 | $1,200,000 | $1,152,000 |

Gemini 2.5 Flash Lite wins at every volume level.
Open in interactive calculator →

Adjust input/output token counts, request volume, batch & cached pricing.
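The table figures follow from a simple formula: monthly cost = requests/day × tokens/request × 30 days × price per 1M tokens. A minimal sketch of the same calculation, assuming a 30-day month and the per-1M prices listed above:

```python
def monthly_cost(req_per_day, in_tokens, out_tokens,
                 in_price_per_1m, out_price_per_1m, days=30):
    """Estimated monthly API spend in USD for a fixed request shape."""
    in_millions = req_per_day * in_tokens * days / 1_000_000
    out_millions = req_per_day * out_tokens * days / 1_000_000
    return in_millions * in_price_per_1m + out_millions * out_price_per_1m

# 1K req/day at 500 in / 200 out tokens:
gemini = monthly_cost(1_000, 500, 200, 0.10, 0.40)   # ≈ $3.90
gpt4o = monthly_cost(1_000, 500, 200, 2.50, 10.00)   # ≈ $97.50
```

Plugging in your own request shape rather than these round numbers will shift the ratio, since the two models discount input and output tokens differently.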


Frequently asked questions

Which is cheaper, Gemini 2.5 Flash Lite or GPT-4o?

For input tokens, Gemini 2.5 Flash Lite is roughly 96% cheaper at $0.10/1M vs $2.50/1M. For output tokens, Gemini 2.5 Flash Lite also wins at $0.40/1M vs $10.00/1M. Real-world cost depends on your input/output ratio, so use the calculator to model your actual workload.

What’s the context window difference?

Gemini 2.5 Flash Lite has a context window of 1,049K tokens, while GPT-4o offers 128K tokens. Larger context windows are valuable for long documents, RAG pipelines, and multi-turn conversations, but they come with higher input-token bills if you fill them on every request.

Should I use Gemini 2.5 Flash Lite or GPT-4o?

Choose Gemini 2.5 Flash Lite if you're already on the Google stack, want reasoning capabilities, or prefer its lower input price. Choose GPT-4o for OpenAI's ecosystem, native vision input, or its batch API discounts. Run a small benchmark on your own prompts before committing; price is only one axis.

How are these prices kept current?

Prices are pulled directly from OpenRouter’s public models API once every 24 hours via a Convex cron job, then normalized to per-1M-token figures. Last refresh: Apr 21, 2026.
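To illustrate the normalization step: OpenRouter's public models endpoint reports prices as USD-per-token strings, so converting to the per-1M figures shown on this page is a single multiplication. A hedged sketch, using a hard-coded sample payload in place of a live request (the `pricing` field names mirror OpenRouter's `/api/v1/models` response shape, but should be verified against the live API):

```python
def per_million(price_per_token: str) -> float:
    """Convert a per-token USD price string to USD per 1M tokens."""
    return float(price_per_token) * 1_000_000

# Sample of the shape returned by GET https://openrouter.ai/api/v1/models
model = {
    "id": "google/gemini-2.5-flash-lite",
    "pricing": {"prompt": "0.0000001", "completion": "0.0000004"},
}

input_per_1m = per_million(model["pricing"]["prompt"])       # ≈ 0.10
output_per_1m = per_million(model["pricing"]["completion"])  # ≈ 0.40
```

Keeping the raw prices as strings until this step avoids accumulating float error on very small per-token values.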