Live pricing — last refreshed Apr 21, 2026

Anthropic: Claude Opus 4.7 vs OpenAI: GPT-5.2-Codex

Head-to-head API pricing and cost comparison between Anthropic's Claude Opus 4.7 and OpenAI's GPT-5.2-Codex. Prices auto-refresh daily from OpenRouter.

Verdict

OpenAI: GPT-5.2-Codex is 65% cheaper for input tokens ($1.75 vs $5.00 per 1M) and 44% cheaper for output tokens ($14.00 vs $25.00 per 1M).

Side-by-side comparison

| Spec | Anthropic: Claude Opus 4.7 | OpenAI: GPT-5.2-Codex |
| --- | --- | --- |
| Input price (per 1M) | $5.00 | $1.75 |
| Cached input (per 1M) | $0.50 | $0.17 |
| Output price (per 1M) | $25.00 | $14.00 |
| Batch input (per 1M) | $2.50 | $0.88 |
| Batch output (per 1M) | $12.50 | $7.00 |
| Reasoning price (per 1M) | — | — |
| Context window | 1000K | 400K |
| Vision support | Yes | Yes |
| Caching support | Yes | Yes |
| Batch API | Yes | Yes |
| Reasoning capability | No | No |

Monthly cost at volume

Estimated monthly API spend (assuming a 30-day month) at common production traffic levels; input/output tokens per request shown.

| Volume | Tokens/request | Anthropic: Claude Opus 4.7 | OpenAI: GPT-5.2-Codex | Savings |
| --- | --- | --- | --- | --- |
| 1K req/day | 500 in / 200 out | $225.00 | $110.25 | $114.75 (GPT-5.2-Codex wins) |
| 10K req/day | 1,500 in / 500 out | $6,000 | $2,888 | $3,113 (GPT-5.2-Codex wins) |
| 100K req/day | 3,000 in / 800 out | $105,000 | $49,350 | $55,650 (GPT-5.2-Codex wins) |
| 1M req/day | 8,000 in / 2,000 out | $2,700,000 | $1,260,000 | $1,440,000 (GPT-5.2-Codex wins) |
Open in interactive calculator →

Adjust input/output token counts, request volume, batch & cached pricing.
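The figures in the table follow from straightforward arithmetic. A minimal sketch, assuming a 30-day month and the standard (non-batch, non-cached) per-1M prices from the comparison table:

```python
# Prices in USD per 1M tokens, from the side-by-side comparison above.
PRICES = {
    "claude-opus-4.7": {"input": 5.00, "output": 25.00},
    "gpt-5.2-codex": {"input": 1.75, "output": 14.00},
}

def monthly_cost(model: str, req_per_day: int, in_tokens: int, out_tokens: int) -> float:
    """Estimated monthly API spend in USD, assuming a 30-day month."""
    p = PRICES[model]
    requests = req_per_day * 30
    input_millions = requests * in_tokens / 1_000_000    # input tokens, in millions
    output_millions = requests * out_tokens / 1_000_000  # output tokens, in millions
    return input_millions * p["input"] + output_millions * p["output"]

# Reproduces the 1K req/day row of the table:
print(monthly_cost("claude-opus-4.7", 1_000, 500, 200))  # 225.0
print(monthly_cost("gpt-5.2-codex", 1_000, 500, 200))    # 110.25
```

Swapping in the cached or batch rates for part of the traffic is a one-line change to the price table, which is what the interactive calculator does under the hood.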


Frequently asked questions

Which is cheaper, Anthropic: Claude Opus 4.7 or OpenAI: GPT-5.2-Codex?

For input tokens, OpenAI: GPT-5.2-Codex is roughly 65% cheaper at $1.75/1M vs $5.00/1M. For output tokens, it also wins at $14.00/1M vs $25.00/1M (44% cheaper). Real-world cost depends on your input/output ratio; use the calculator to model your actual workload.

What’s the context window difference?

Anthropic: Claude Opus 4.7 has a context window of 1000K tokens. OpenAI: GPT-5.2-Codex offers 400K tokens. Larger context windows are valuable for long documents, RAG pipelines, and multi-turn conversations — but they come with higher input-token bills if you fill them every request.
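As a rough illustration of that trade-off, a sketch (using the input prices from the table above) of the input cost of a single request that fills each model's entire window:

```python
# Back-of-the-envelope input cost (USD) for one request that fills the
# whole context window, at the input prices quoted above.
def full_window_cost(price_per_million: float, window_tokens: int) -> float:
    return price_per_million * window_tokens / 1_000_000

print(full_window_cost(5.00, 1_000_000))  # Claude Opus 4.7, 1000K window: 5.0
print(full_window_cost(1.75, 400_000))    # GPT-5.2-Codex, 400K window: 0.7
```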

Should I use Anthropic: Claude Opus 4.7 or OpenAI: GPT-5.2-Codex?

Choose Anthropic: Claude Opus 4.7 if you’re already on the Anthropic stack, want broad ecosystem support, or prefer its feature set. Choose OpenAI: GPT-5.2-Codex for OpenAI’s ecosystem, native vision input, or its cheaper input tokens. Run a small benchmark on your own prompts before committing — price is only one axis.

How are these prices kept current?

Prices are pulled directly from OpenRouter’s public models API once every 24 hours via a Convex cron job, then normalized to per-1M-token figures. Last refresh: Apr 21, 2026.
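A sketch of that normalization step, assuming OpenRouter's convention of reporting per-token USD prices as decimal strings (the function name here is illustrative, not from the actual cron job):

```python
from decimal import Decimal

def normalize_per_million(per_token_price: str) -> Decimal:
    """Convert a per-token USD price string to a per-1M-token figure.

    Decimal avoids floating-point drift on very small per-token prices.
    """
    return Decimal(per_token_price) * 1_000_000

# e.g. a $5.00/1M input price arrives as "0.000005" per token
print(normalize_per_million("0.000005"))    # 5.000000
print(normalize_per_million("0.00000175"))  # 1.75000000
```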