Live pricing — last refreshed Apr 21, 2026

Cohere: Command R+ (08-2024) vs Meta: Llama 3.2 11B Vision Instruct

Head-to-head API pricing and cost comparison between Cohere's Command R+ (08-2024) and Meta's Llama 3.2 11B Vision Instruct. Prices auto-refresh daily from OpenRouter.

Verdict

Meta: Llama 3.2 11B Vision Instruct is roughly 90% cheaper for input tokens ($0.24 vs $2.50 per 1M) and also wins on output tokens ($0.24 vs $10.00 per 1M).

Side-by-side comparison

| Spec | Cohere: Command R+ (08-2024) | Meta: Llama 3.2 11B Vision Instruct |
|---|---|---|
| Input price (per 1M) | $2.50 | $0.24 |
| Cached input (per 1M) | n/a | n/a |
| Output price (per 1M) | $10.00 | $0.24 |
| Batch input (per 1M) | n/a | n/a |
| Batch output (per 1M) | n/a | n/a |
| Reasoning price (per 1M) | n/a | n/a |
| Context window | 128K | 131K |
| Vision support | No | Yes |
| Caching support | No | No |
| Batch API | No | No |
| Reasoning capability | No | No |

Monthly cost at volume

Estimated monthly API spend at common production traffic levels, assuming 30 days per month (input/output tokens per request shown).

| Volume | Tokens per request | Cohere: Command R+ (08-2024) | Meta: Llama 3.2 11B Vision Instruct | Savings |
|---|---|---|---|---|
| 1K req/day | 500 in / 200 out | $97.50 | $5.14 | $92.36 |
| 10K req/day | 1,500 in / 500 out | $2,625 | $147.00 | $2,478 |
| 100K req/day | 3,000 in / 800 out | $46,500 | $2,793 | $43,707 |
| 1M req/day | 8,000 in / 2,000 out | $1,200,000 | $73,500 | $1,126,500 |

Meta: Llama 3.2 11B Vision Instruct wins at every volume tier.
Open in interactive calculator →

Adjust input/output token counts, request volume, batch & cached pricing.
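The monthly figures above follow directly from the per-1M-token prices. A minimal sketch of the arithmetic (the `monthly_cost` helper is ours, not part of any API; small differences from the table can arise from rounding of the displayed prices):

```python
def monthly_cost(price_in, price_out, req_per_day, tok_in, tok_out, days=30):
    """Monthly API spend in dollars, given per-1M-token prices.

    price_in / price_out: $ per 1M input / output tokens
    tok_in / tok_out: tokens per request
    """
    requests = req_per_day * days
    input_cost = requests * tok_in / 1_000_000 * price_in
    output_cost = requests * tok_out / 1_000_000 * price_out
    return input_cost + output_cost

# Command R+ (08-2024) at 1K req/day, 500 in / 200 out tokens:
print(monthly_cost(2.50, 10.00, 1_000, 500, 200))  # 97.5, matching the table
```

Swap in $0.24/$0.24 for Llama 3.2 11B Vision Instruct, or your own traffic numbers, to model other rows.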

Frequently asked questions

Which is cheaper, Cohere: Command R+ (08-2024) or Meta: Llama 3.2 11B Vision Instruct?

For input tokens, Meta: Llama 3.2 11B Vision Instruct is roughly 90% cheaper at $0.24/1M vs $2.50/1M. For output tokens, it wins at $0.24/1M vs $10.00/1M. Real-world cost depends on your input/output ratio; use the calculator to model your actual workload.
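To see how the input/output ratio matters, here is a short sketch (the `cost_ratio` helper is ours) computing how many times more expensive Command R+ (08-2024) is than Llama 3.2 11B Vision Instruct at the listed prices, as a function of the token mix:

```python
def cost_ratio(tok_in, tok_out,
               a_in=2.50, a_out=10.00,   # Command R+ (08-2024), $/1M tokens
               b_in=0.24, b_out=0.24):   # Llama 3.2 11B Vision Instruct
    """How many times more a request costs on model A than on model B."""
    cost_a = tok_in * a_in + tok_out * a_out
    cost_b = tok_in * b_in + tok_out * b_out
    return cost_a / cost_b

print(round(cost_ratio(1000, 0), 1))  # pure input: ~10.4x
print(round(cost_ratio(0, 1000), 1))  # pure output: ~41.7x
```

Because the output-price gap ($10.00 vs $0.24) is much wider than the input-price gap ($2.50 vs $0.24), output-heavy workloads see the largest savings.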

What’s the context window difference?

Cohere: Command R+ (08-2024) has a context window of 128K tokens. Meta: Llama 3.2 11B Vision Instruct offers 131K tokens. Larger context windows are valuable for long documents, RAG pipelines, and multi-turn conversations — but they come with higher input-token bills if you fill them every request.

Should I use Cohere: Command R+ (08-2024) or Meta: Llama 3.2 11B Vision Instruct?

Choose Cohere: Command R+ (08-2024) if you’re already on the Cohere stack, want broad ecosystem support, or prefer its feature set. Choose Meta: Llama 3.2 11B Vision Instruct for Meta’s ecosystem, native vision input, or its cheaper input tokens. Run a small benchmark on your own prompts before committing — price is only one axis.

How are these prices kept current?

Prices are pulled directly from OpenRouter’s public models API once every 24 hours via a Convex cron job, then normalized to per-1M-token figures. Last refresh: Apr 21, 2026.