Live pricing — last refreshed Apr 21, 2026

Cohere: Command R+ (08-2024) vs Qwen: Qwen3 VL 30B A3B Thinking

Head-to-head API pricing and cost comparison between Cohere's Command R+ (08-2024) and Qwen's Qwen3 VL 30B A3B Thinking. Prices auto-refresh daily from OpenRouter.

Verdict

Qwen3 VL 30B A3B Thinking is roughly 95% cheaper for input tokens ($0.13 vs $2.50 per 1M) and about 84% cheaper for output tokens ($1.56 vs $10.00 per 1M).

Side-by-side comparison

Spec                     | Command R+ (08-2024) | Qwen3 VL 30B A3B Thinking
-------------------------|----------------------|--------------------------
Input price (per 1M)     | $2.50                | $0.13
Cached input (per 1M)    | n/a                  | n/a
Output price (per 1M)    | $10.00               | $1.56
Batch input (per 1M)     | n/a                  | n/a
Batch output (per 1M)    | n/a                  | n/a
Reasoning price (per 1M) | n/a                  | n/a
Context window           | 128K                 | 131K
Vision support           | No                   | Yes
Caching support          | No                   | No
Batch API                | No                   | No
Reasoning capability     | No                   | Yes

Monthly cost at volume

Estimated monthly API spend at common production traffic levels (input/output tokens per request shown).

Volume       | Tokens per request | Command R+ (08-2024) | Qwen3 VL 30B A3B Thinking | Savings
-------------|--------------------|----------------------|---------------------------|-----------
1K req/day   | 500 in / 200 out   | $97.50               | $11.31                    | $86.19
10K req/day  | 1500 in / 500 out  | $2,625               | $292.50                   | $2,333
100K req/day | 3000 in / 800 out  | $46,500              | $4,914                    | $41,586
1M req/day   | 8000 in / 2000 out | $1,200,000           | $124,800                  | $1,075,200

Qwen3 VL 30B A3B Thinking is the cheaper option at every tier.
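The figures above follow from straightforward arithmetic on the listed per-1M prices, assuming a 30-day month. A minimal sketch that reproduces the table (the model keys are illustrative labels, not API identifiers):

```python
# Per-1M-token prices from the comparison table (snapshot, not live).
PRICES = {
    "command-r-plus-08-2024": {"input": 2.50, "output": 10.00},
    "qwen3-vl-30b-a3b-thinking": {"input": 0.13, "output": 1.56},
}

def monthly_cost(model: str, req_per_day: int, in_tok: int,
                 out_tok: int, days: int = 30) -> float:
    """Estimated monthly spend in USD, assuming a 30-day month."""
    p = PRICES[model]
    in_millions = req_per_day * in_tok * days / 1_000_000    # input tokens, in millions
    out_millions = req_per_day * out_tok * days / 1_000_000  # output tokens, in millions
    return in_millions * p["input"] + out_millions * p["output"]

# First row of the table: 1K req/day at 500 in / 200 out tokens per request
print(round(monthly_cost("command-r-plus-08-2024", 1_000, 500, 200), 2))    # 97.5
print(round(monthly_cost("qwen3-vl-30b-a3b-thinking", 1_000, 500, 200), 2))  # 11.31
```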
Open in interactive calculator →

Adjust input/output token counts, request volume, batch & cached pricing.


Frequently asked questions

Which is cheaper, Cohere: Command R+ (08-2024) or Qwen: Qwen3 VL 30B A3B Thinking?

For input tokens, Qwen3 VL 30B A3B Thinking is roughly 95% cheaper at $0.13/1M vs $2.50/1M. For output tokens it is about 84% cheaper at $1.56/1M vs $10.00/1M. Real-world cost depends on your input/output ratio — use the calculator to model your actual workload.
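The ratio point can be made concrete with a blended per-1M-token price, weighting the two per-token rates by the share of tokens that are input (an illustrative helper, not part of the page's calculator):

```python
def blended_price(input_price: float, output_price: float,
                  input_share: float) -> float:
    """Blended $/1M tokens when input_share of all tokens are input."""
    return input_share * input_price + (1 - input_share) * output_price

# Example: a 70% input / 30% output token mix
cohere = blended_price(2.50, 10.00, 0.7)  # 4.75
qwen = blended_price(0.13, 1.56, 0.7)     # ~0.559
```

Because Qwen3 VL 30B A3B Thinking is cheaper on both axes here, it stays cheaper at any mix; the ratio only changes the size of the gap.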

What’s the context window difference?

Cohere: Command R+ (08-2024) has a context window of 128K tokens. Qwen: Qwen3 VL 30B A3B Thinking offers 131K tokens. Larger context windows are valuable for long documents, RAG pipelines, and multi-turn conversations — but they come with higher input-token bills if you fill them every request.

Should I use Cohere: Command R+ (08-2024) or Qwen: Qwen3 VL 30B A3B Thinking?

Choose Cohere: Command R+ (08-2024) if you’re already on the Cohere stack, want broad ecosystem support, or prefer its feature set. Choose Qwen: Qwen3 VL 30B A3B Thinking for Qwen’s ecosystem, native vision input, or its cheaper input tokens. Run a small benchmark on your own prompts before committing — price is only one axis.

How are these prices kept current?

Prices are pulled directly from OpenRouter’s public models API once every 24 hours via a Convex cron job, then normalized to per-1M-token figures. Last refresh: Apr 21, 2026.
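A minimal sketch of the normalization step, assuming OpenRouter's public models endpoint reports prices as USD-per-token decimal strings (the fetch and cron wiring are omitted; the field shape is an assumption, not confirmed by this page):

```python
from decimal import Decimal

def per_million(per_token_price: str) -> Decimal:
    """Convert a per-token USD price string to a per-1M-token figure.

    Decimal avoids float rounding artifacts on tiny per-token prices.
    """
    return Decimal(per_token_price) * 1_000_000

# e.g. a per-token price of "0.0000025" normalizes to $2.50/1M
print(per_million("0.0000025"))  # 2.5000000
```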