Live pricing — last refreshed Apr 21, 2026

DeepSeek V3.2 vs OpenAI GPT-4o-mini

A head-to-head API pricing and cost comparison between DeepSeek's DeepSeek V3.2 and OpenAI's GPT-4o-mini. Prices auto-refresh daily from OpenRouter.

Verdict

GPT-4o-mini is 40% cheaper for input tokens ($0.15 vs $0.25 per 1M); DeepSeek V3.2 wins on output tokens ($0.38 vs $0.60 per 1M).

Side-by-side comparison

| Spec | DeepSeek V3.2 | GPT-4o-mini |
| --- | --- | --- |
| Input price (per 1M) | $0.25 | $0.15 |
| Cached input (per 1M) | $0.03 | $0.07 |
| Output price (per 1M) | $0.38 | $0.60 |
| Batch input (per 1M) | N/A | $0.07 |
| Batch output (per 1M) | N/A | $0.30 |
| Reasoning price (per 1M) | N/A | N/A |
| Context window | 131K | 128K |
| Vision support | No | Yes |
| Caching support | Yes | Yes |
| Batch API | No | Yes |
| Reasoning capability | No | No |

Monthly cost at volume

Estimated monthly API spend at common production traffic levels (input/output tokens per request shown).

| Volume | Tokens per request | DeepSeek V3.2 | GPT-4o-mini | Savings |
| --- | --- | --- | --- | --- |
| 1K req/day | 500 in / 200 out | $6.05 | $5.85 | $0.20 (GPT-4o-mini wins) |
| 10K req/day | 1,500 in / 500 out | $170.10 | $157.50 | $12.60 (GPT-4o-mini wins) |
| 100K req/day | 3,000 in / 800 out | $3,175 | $2,790 | $385.20 (GPT-4o-mini wins) |
| 1M req/day | 8,000 in / 2,000 out | $83,160 | $72,000 | $11,160 (GPT-4o-mini wins) |
Open in interactive calculator →

Adjust input/output token counts, request volume, batch & cached pricing.
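The monthly figures above come from simple arithmetic on the listed per-1M rates. A minimal sketch, assuming a 30-day month and no caching or batch discounts (the live calculator may use unrounded provider rates, so totals can differ slightly):

```python
def monthly_cost(req_per_day, in_tokens, out_tokens,
                 in_price_per_1m, out_price_per_1m, days=30):
    """Estimated monthly spend in dollars at per-1M-token prices."""
    in_millions = req_per_day * in_tokens * days / 1_000_000
    out_millions = req_per_day * out_tokens * days / 1_000_000
    return in_millions * in_price_per_1m + out_millions * out_price_per_1m

# GPT-4o-mini at 10K req/day, 1,500 in / 500 out tokens per request
print(f"${monthly_cost(10_000, 1500, 500, 0.15, 0.60):,.2f}")  # $157.50
```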

Frequently asked questions

Which is cheaper, DeepSeek V3.2 or GPT-4o-mini?

For input tokens, GPT-4o-mini is roughly 40% cheaper at $0.15/1M vs $0.25/1M. For output tokens, DeepSeek V3.2 wins at $0.38/1M vs $0.60/1M. Real-world cost depends on your input/output ratio; use the calculator to model your actual workload.
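Ignoring caching and batch discounts, the crossover falls out of the listed rates directly: DeepSeek V3.2 is cheaper per request whenever the input:output token ratio is below (0.60 - 0.38) / (0.25 - 0.15) = 2.2. A quick sketch (prices as listed above; the workload shapes are illustrative):

```python
def per_request_cost(in_tok, out_tok, in_price_per_1m, out_price_per_1m):
    """Dollar cost of one request at per-1M-token prices (no caching/batch)."""
    return (in_tok * in_price_per_1m + out_tok * out_price_per_1m) / 1_000_000

def deepseek_cheaper(in_tok, out_tok):
    # DeepSeek V3.2: $0.25 in / $0.38 out; GPT-4o-mini: $0.15 in / $0.60 out
    return (per_request_cost(in_tok, out_tok, 0.25, 0.38)
            < per_request_cost(in_tok, out_tok, 0.15, 0.60))

print(deepseek_cheaper(500, 500))   # True  -- 1:1 ratio, output-heavy enough
print(deepseek_cheaper(3000, 500))  # False -- 6:1 ratio, input dominates
```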

What’s the context window difference?

DeepSeek V3.2 has a context window of 131K tokens; GPT-4o-mini offers 128K. Larger context windows are valuable for long documents, RAG pipelines, and multi-turn conversations, but they come with higher input-token bills if you fill them on every request.

Should I use DeepSeek V3.2 or GPT-4o-mini?

Choose DeepSeek V3.2 if you're already on the DeepSeek stack, want broad ecosystem support, or prefer its lower output price. Choose GPT-4o-mini for OpenAI's ecosystem, native vision input, or its cheaper input tokens. Run a small benchmark on your own prompts before committing; price is only one axis.

How are these prices kept current?

Prices are pulled directly from OpenRouter’s public models API once every 24 hours via a Convex cron job, then normalized to per-1M-token figures. Last refresh: Apr 21, 2026.
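OpenRouter's public models endpoint (`GET https://openrouter.ai/api/v1/models`) reports prices as per-token strings, so the refresh step is mostly a unit conversion. A minimal sketch of that normalization (the production version runs inside a Convex cron; the exact field handling here is an assumption):

```python
import json
from urllib.request import urlopen

def normalize(model):
    """Convert OpenRouter's per-token string prices to per-1M-token floats."""
    pricing = model["pricing"]
    return {
        "id": model["id"],
        "input_per_1m": float(pricing["prompt"]) * 1_000_000,
        "output_per_1m": float(pricing["completion"]) * 1_000_000,
    }

def fetch_prices():
    """One refresh pass: pull every listed model and normalize its prices."""
    with urlopen("https://openrouter.ai/api/v1/models") as resp:
        return [normalize(m) for m in json.load(resp)["data"]]

# A $0.15/1M input price arrives from the API as "0.00000015" per token:
sample = {"id": "openai/gpt-4o-mini",
          "pricing": {"prompt": "0.00000015", "completion": "0.0000006"}}
print(round(normalize(sample)["input_per_1m"], 2))  # 0.15
```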