OpenAI: gpt-oss-120b vs Google: Gemini 2.5 Pro
Head-to-head API pricing and cost comparison between OpenAI's gpt-oss-120b and Google's Gemini 2.5 Pro. Prices auto-refresh daily from OpenRouter.
OpenAI: gpt-oss-120b is roughly 97% cheaper for input tokens and also wins on output tokens.
Side-by-side comparison
| Spec | OpenAI: gpt-oss-120b | Google: Gemini 2.5 Pro |
|---|---|---|
| Input price (per 1M) | $0.04 | $1.25 |
| Cached input (per 1M) | — | $0.13 |
| Output price (per 1M) | $0.19 | $10.00 |
| Batch input (per 1M) | $0.02 | — |
| Batch output (per 1M) | $0.10 | — |
| Reasoning price (per 1M) | — | $10.00 |
| Context window | 131K | 1049K |
| Vision support | No | Yes |
| Caching support | No | Yes |
| Batch API | Yes | No |
| Reasoning capability | No | Yes |
Monthly cost at volume
Estimated monthly API spend at common production traffic levels (input/output tokens per request shown).
| Volume | OpenAI: gpt-oss-120b | Google: Gemini 2.5 Pro | Savings |
|---|---|---|---|
| 1K req/day (500 in / 200 out) | $1.72 | $78.75 | $77.03 (OpenAI: gpt-oss-120b wins) |
| 10K req/day (1,500 in / 500 out) | $46.05 | $2,063 | $2,016 (OpenAI: gpt-oss-120b wins) |
| 100K req/day (3,000 in / 800 out) | $807.00 | $35,250 | $34,443 (OpenAI: gpt-oss-120b wins) |
| 1M req/day (8,000 in / 2,000 out) | $20,760 | $900,000 | $879,240 (OpenAI: gpt-oss-120b wins) |
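The table's math is straightforward: multiply daily requests by 30 to get a monthly total, convert input and output tokens to millions, and apply each per-1M price. A minimal sketch (the `monthlyCost` helper and the 30-day month are assumptions, not the page's actual code) reproduces the Gemini 2.5 Pro column exactly; the gpt-oss-120b column lands within a few percent, consistent with the listed prices being rounded:

```typescript
// Estimated monthly API spend, assuming a 30-day month and per-1M-token prices.
type Pricing = { inputPerM: number; outputPerM: number };

function monthlyCost(
  reqPerDay: number,
  inTokensPerReq: number,
  outTokensPerReq: number,
  p: Pricing,
): number {
  const requests = reqPerDay * 30;
  const inputMTokens = (requests * inTokensPerReq) / 1_000_000;
  const outputMTokens = (requests * outTokensPerReq) / 1_000_000;
  return inputMTokens * p.inputPerM + outputMTokens * p.outputPerM;
}

// Gemini 2.5 Pro at 1K req/day, 500 in / 200 out tokens:
const gemini = { inputPerM: 1.25, outputPerM: 10.0 };
console.log(monthlyCost(1000, 500, 200, gemini)); // 78.75, matching the table
```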
Use the calculator to adjust input/output token counts, request volume, and batch & cached pricing.
Frequently asked questions
Which is cheaper, OpenAI: gpt-oss-120b or Google: Gemini 2.5 Pro?
For input tokens, OpenAI: gpt-oss-120b is roughly 97% cheaper at $0.04/1M vs $1.25/1M. For output tokens, OpenAI: gpt-oss-120b also wins at $0.19/1M vs $10.00/1M. Real-world cost depends on your input/output ratio — use the calculator to model your actual workload.
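One way to see the ratio's effect is a blended $/1M figure that weights each price by its share of total tokens (an assumed illustration, not the page's formula):

```typescript
// Blended price per 1M tokens, given the input share of total tokens.
function blendedPerM(inputPerM: number, outputPerM: number, inputShare: number): number {
  return inputPerM * inputShare + outputPerM * (1 - inputShare);
}

// A 500 in / 200 out workload is 5/7 input tokens:
console.log(blendedPerM(1.25, 10.0, 5 / 7)); // Gemini 2.5 Pro ≈ $3.75/1M blended
console.log(blendedPerM(0.04, 0.19, 5 / 7)); // gpt-oss-120b ≈ $0.08/1M blended
```

The more output-heavy the workload, the more Gemini's $10.00/1M output price dominates the blend.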
What’s the context window difference?
OpenAI: gpt-oss-120b has a context window of 131K tokens. Google: Gemini 2.5 Pro offers 1,049K tokens, roughly eight times more. Larger context windows are valuable for long documents, RAG pipelines, and multi-turn conversations — but they come with higher input-token bills if you fill them every request.
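The "higher input-token bills" point is easy to quantify: a hypothetical one-liner (not from the page) prices a single request that fills the whole window at the per-1M input rate:

```typescript
// Input cost of one request that fills the context window (price per 1M tokens).
const fillCost = (ctxTokens: number, inputPerM: number): number =>
  (ctxTokens / 1_000_000) * inputPerM;

console.log(fillCost(1_049_000, 1.25)); // Gemini 2.5 Pro: ≈ $1.31 of input per request
console.log(fillCost(131_000, 0.04));   // gpt-oss-120b: well under a cent per request
```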
Should I use OpenAI: gpt-oss-120b or Google: Gemini 2.5 Pro?
Choose OpenAI: gpt-oss-120b if you’re already on the OpenAI stack, want broad ecosystem support, or prefer its lower input price. Choose Google: Gemini 2.5 Pro for Google’s ecosystem, native vision input, or its differentiated capabilities. Run a small benchmark on your own prompts before committing — price is only one axis.
How are these prices kept current?
Prices are pulled directly from OpenRouter’s public models API once every 24 hours via a Convex cron job, then normalized to per-1M-token figures. Last refresh: Apr 21, 2026.
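A sketch of that refresh step, assuming OpenRouter's public models endpoint returns per-token price strings under `pricing.prompt` and `pricing.completion` (the type and helper names here are illustrative, not the site's actual Convex code):

```typescript
// Fetch OpenRouter's model list and normalize per-token prices to $/1M tokens.
type OpenRouterModel = {
  id: string;
  pricing: { prompt: string; completion: string }; // dollars per single token, as strings
};

// Normalize a per-token price string to dollars per 1M tokens.
function perMillion(perToken: string): number {
  return parseFloat(perToken) * 1_000_000;
}

async function fetchPrices(): Promise<
  Record<string, { inputPerM: number; outputPerM: number }>
> {
  const res = await fetch("https://openrouter.ai/api/v1/models");
  const { data } = (await res.json()) as { data: OpenRouterModel[] };
  return Object.fromEntries(
    data.map((m) => [
      m.id,
      {
        inputPerM: perMillion(m.pricing.prompt),
        outputPerM: perMillion(m.pricing.completion),
      },
    ]),
  );
}

// e.g. a prompt price of "0.00000125" per token normalizes to ≈ $1.25 per 1M:
console.log(perMillion("0.00000125"));
```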