Meta · Updated 2 hours ago

Meta: Llama 3.3 70B Instruct API Pricing

Live token costs for Meta: Llama 3.3 70B Instruct. Use the figures below for budgeting, then tune your exact request mix in the interactive calculator. Prices refresh every 24 hours from OpenRouter.

Input: $0.120 / 1M tokens
Output: $0.380 / 1M tokens
Cached input: Not supported
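For programmatic budgeting, the per-request cost follows directly from the two rates above. The sketch below is a minimal illustration; `cost_usd` is a hypothetical helper, not part of any official API.

```python
# Rates from the pricing table above, in USD per 1M tokens.
INPUT_PER_M = 0.12
OUTPUT_PER_M = 0.38

def cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Estimated cost of one request at Llama 3.3 70B Instruct rates."""
    return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1_000_000

# A typical 1,500-token in / 500-token out request:
print(round(cost_usd(1500, 500), 5))  # → 0.00037
```

Because cached input is not supported, every input token is billed at the full $0.12 rate; there is no discounted tier to model.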

Capabilities

131K context · 131K max output

Meta: Llama 3.3 70B Instruct cost at scale

Estimated monthly cost across common production volumes. Assumes 30-day months and the request shapes shown.

Tier | Requests / day | In / out tokens | $ / month
Hobby | 1,000 | 500 / 200 | $4.08
Startup | 10,000 | 1,500 / 500 | $111.00
Growth | 100,000 | 3,000 / 800 | $1,992.00
Enterprise | 1,000,000 | 8,000 / 2,000 | $51,600.00
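The tier figures above can be reproduced with a short script. This is a sketch assuming the stated 30-day month and the $0.12 / $0.38 per-1M rates; `monthly_cost` is an illustrative helper, not an official API.

```python
# Rates in USD per 1M tokens, from the pricing section above.
IN_RATE, OUT_RATE = 0.12, 0.38

def monthly_cost(req_per_day: int, in_tok: int, out_tok: int, days: int = 30) -> float:
    """Monthly USD cost for a fixed daily request volume and request shape."""
    requests = req_per_day * days
    return (requests * in_tok * IN_RATE + requests * out_tok * OUT_RATE) / 1_000_000

tiers = [("Hobby", 1_000, 500, 200),
         ("Startup", 10_000, 1_500, 500),
         ("Growth", 100_000, 3_000, 800),
         ("Enterprise", 1_000_000, 8_000, 2_000)]

for name, rpd, in_tok, out_tok in tiers:
    print(f"{name}: ${monthly_cost(rpd, in_tok, out_tok):,.2f}")
```

Running it yields $4.08, $111.00, $1,992.00, and $51,600.00 for the four tiers, matching the table.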
Open Meta: Llama 3.3 70B Instruct in interactive calculator →


Frequently asked questions

How much does Meta: Llama 3.3 70B Instruct cost?

Meta: Llama 3.3 70B Instruct costs $0.12 per 1M input tokens and $0.38 per 1M output tokens. A typical 1,500-token in / 500-token out request costs $0.00037.

Does Meta: Llama 3.3 70B Instruct support cached input?

No. Meta: Llama 3.3 70B Instruct does not currently expose cached-input pricing through Meta. Every input token is billed at the full rate.

What is the Meta: Llama 3.3 70B Instruct context window?

Meta: Llama 3.3 70B Instruct supports a context window of 131,072 tokens (131K). Max output per response is 131,072 tokens.

What is Meta: Llama 3.3 70B Instruct good for?

Meta: Llama 3.3 70B Instruct is a good fit for open-source baselines, fine-tuning research, and edge inference. For other use cases, run your specific input/output mix through the interactive calculator to compare against alternative models.