Live pricing — last refreshed Apr 21, 2026

Mistral Large vs Google: Gemini 2.5 Pro

Head-to-head API pricing and cost comparison between Mistral’s Mistral Large and Google’s Gemini 2.5 Pro. Prices auto-refresh daily from OpenRouter.

Verdict

Google: Gemini 2.5 Pro is roughly 38% cheaper for input tokens ($1.25 vs $2.00 per 1M); Mistral Large wins on output tokens ($6.00 vs $10.00 per 1M).

Side-by-side comparison

| Spec | Mistral Large | Google: Gemini 2.5 Pro |
| --- | --- | --- |
| Input price (per 1M) | $2.00 | $1.25 |
| Cached input (per 1M) | $0.20 | $0.13 |
| Output price (per 1M) | $6.00 | $10.00 |
| Batch input (per 1M) | n/a | n/a |
| Batch output (per 1M) | n/a | n/a |
| Reasoning price (per 1M) | n/a | $10.00 |
| Context window | 128K | 1,048,576 (~1M) |
| Vision support | No | Yes |
| Caching support | Yes | Yes |
| Batch API | No | No |
| Reasoning capability | No | Yes |
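Both providers discount cached input steeply (Mistral $0.20 vs $2.00; Gemini $0.13 vs $1.25 per 1M). A minimal sketch of the effective blended input price at a given cache hit rate — the function name and the 50% hit rate are illustrative, not from either provider's docs:

```python
def effective_input_price(base: float, cached: float, hit_rate: float) -> float:
    """Blended per-1M-token input price for a cache hit rate in [0, 1]."""
    return hit_rate * cached + (1 - hit_rate) * base

# At an illustrative 50% cache hit rate:
mistral = effective_input_price(2.00, 0.20, 0.5)  # $1.10 per 1M
gemini = effective_input_price(1.25, 0.13, 0.5)   # $0.69 per 1M
```

Even a modest hit rate narrows Mistral Large's input-price gap considerably, so prompts with large shared prefixes (system prompts, few-shot examples) change the picture.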

Monthly cost at volume

Estimated monthly API spend (30-day month) at common production traffic levels; input/output tokens per request shown.

| Volume | Tokens per request | Mistral Large | Google: Gemini 2.5 Pro | Savings |
| --- | --- | --- | --- | --- |
| 1K req/day | 500 in / 200 out | $66.00 | $78.75 | $12.75 (Mistral Large wins) |
| 10K req/day | 1,500 in / 500 out | $1,800 | $2,063 | $262.50 (Mistral Large wins) |
| 100K req/day | 3,000 in / 800 out | $32,400 | $35,250 | $2,850 (Mistral Large wins) |
| 1M req/day | 8,000 in / 2,000 out | $840,000 | $900,000 | $60,000 (Mistral Large wins) |
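The figures above follow directly from the per-1M prices, assuming a 30-day month (which reproduces the table exactly). A sketch of the arithmetic — the function name is illustrative:

```python
def monthly_cost(req_per_day: int, in_tokens: int, out_tokens: int,
                 in_price: float, out_price: float, days: int = 30) -> float:
    """Monthly spend in USD: daily token volume scaled to a month, priced per 1M."""
    in_millions = req_per_day * in_tokens * days / 1_000_000
    out_millions = req_per_day * out_tokens * days / 1_000_000
    return in_millions * in_price + out_millions * out_price

# 10K req/day at 1,500 in / 500 out tokens:
mistral = monthly_cost(10_000, 1500, 500, 2.00, 6.00)   # $1,800.00
gemini = monthly_cost(10_000, 1500, 500, 1.25, 10.00)   # $2,062.50
```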
Open in interactive calculator →

Adjust input/output token counts, request volume, batch & cached pricing.

Frequently asked questions

Which is cheaper, Mistral Large or Google: Gemini 2.5 Pro?

For input tokens, Google: Gemini 2.5 Pro is roughly 38% cheaper at $1.25/1M vs $2.00/1M. For output tokens, Mistral Large wins at $6.00/1M vs $10.00/1M. Real-world cost depends on your input/output ratio — use the calculator to model your actual workload.
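The break-even point falls out of the table's price deltas: Gemini 2.5 Pro saves $0.75 per 1M on input, while Mistral Large saves $4.00 per 1M on output, so costs cross at roughly 5.3 input tokens per output token. A small sketch (function name illustrative):

```python
# Per-1M price deltas from the comparison table
input_delta = 2.00 - 1.25   # Gemini 2.5 Pro is $0.75/1M cheaper on input
output_delta = 10.00 - 6.00  # Mistral Large is $4.00/1M cheaper on output

# Above this input:output ratio Gemini wins; below it Mistral wins
break_even_ratio = output_delta / input_delta  # ~5.33 input tokens per output token

def cheaper_model(in_tokens: int, out_tokens: int) -> str:
    """Which model costs less per request at these token counts."""
    mistral = in_tokens * 2.00 + out_tokens * 6.00
    gemini = in_tokens * 1.25 + out_tokens * 10.00
    return "Mistral Large" if mistral < gemini else "Gemini 2.5 Pro"

cheaper_model(500, 200)    # Mistral Large (ratio 2.5, below break-even)
cheaper_model(8000, 1000)  # Gemini 2.5 Pro (ratio 8, above break-even)
```

Chat workloads with short prompts and long completions sit well below the break-even ratio, which is why Mistral Large wins every row of the volume table above.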

What’s the context window difference?

Mistral Large has a context window of 128K tokens. Google: Gemini 2.5 Pro offers 1,048,576 tokens (~1M). Larger context windows are valuable for long documents, RAG pipelines, and multi-turn conversations — but they come with higher input-token bills if you fill them every request.

Should I use Mistral Large or Google: Gemini 2.5 Pro?

Choose Mistral Large if you’re already on the Mistral stack, want broad ecosystem support, or prefer its lower output price. Choose Google: Gemini 2.5 Pro for Google’s ecosystem, native vision input, or its cheaper input tokens. Run a small benchmark on your own prompts before committing — price is only one axis.

How are these prices kept current?

Prices are pulled directly from OpenRouter’s public models API once every 24 hours via a Convex cron job, then normalized to per-1M-token figures. Last refresh: Apr 21, 2026.
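As an assumption about what that normalization step looks like: OpenRouter's public models endpoint reports prices as per-token USD strings, so converting to the per-1M figures shown here is a single scale by 1,000,000. A hedged sketch — the field format and function name are assumptions, not a description of the site's actual Convex job:

```python
def per_million(per_token: str) -> float:
    """Scale a per-token USD price string (e.g. "0.000002") to dollars per 1M tokens."""
    return float(per_token) * 1_000_000

per_million("0.000002")  # $2.00 per 1M tokens
```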