
Llama 3.1 70B Instruct

Meta · Open weights (restricted license) · Text
Llama 3.1 family · Released 2y ago
Avg score      48.5 / 100
Context        128k
Output limit   4k
Input price    $0.88 /M
Output price   $0.88 /M

Pricing verified 1y ago · Median of major hosted endpoints
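At the listed rates, a request's dollar cost is straightforward per-million-token arithmetic; a minimal sketch (the function name and default rates are illustrative, taken from the prices above):

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_price_per_m: float = 0.88,
                 output_price_per_m: float = 0.88) -> float:
    """Dollar cost of one request at per-million-token rates."""
    return (input_tokens * input_price_per_m
            + output_tokens * output_price_per_m) / 1_000_000

# e.g. a 10k-token prompt with a 1k-token reply:
# (10_000 + 1_000) * 0.88 / 1e6 = $0.00968
cost = request_cost(10_000, 1_000)
```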

Benchmarks

preference

Crowdsourced pairwise human preference rankings of LLM responses. Higher Elo means more frequently preferred by users.
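The Elo mechanics behind such rankings can be sketched as a standard pairwise update (this is the textbook Elo rule, not necessarily the leaderboard's exact fitting procedure; names and the K-factor are illustrative):

```python
def elo_win_prob(r_a: float, r_b: float) -> float:
    """Expected probability that model A's response is preferred over B's."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def elo_update(r_a: float, r_b: float, score_a: float, k: float = 32) -> float:
    """New rating for A after one comparison.

    score_a is 1.0 if A was preferred, 0.0 if B was, 0.5 for a tie.
    """
    return r_a + k * (score_a - elo_win_prob(r_a, r_b))
```

A 100-point Elo gap corresponds to roughly a 64% chance of being preferred in any single pairwise vote.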

knowledge

MMLU Pro (High risk)
%

Harder version of MMLU; expands multiple-choice options from four to ten and drops trivial questions, reducing guessing-friendly answers.

reasoning

GPQA Diamond (Some risk)
%

Graduate-level Google-proof Q&A in physics, chemistry, and biology. Diamond subset is the hardest tier with PhD-validated answers.

math

MATH-500 (Saturated)
%

500 high-school competition math problems requiring multi-step solutions. Scored on final-answer correctness.

AIME 2024 (High risk)
%

American Invitational Mathematics Examination 2024 problems. Integer answers from 0 to 999; very hard for non-reasoning models.

coding

HumanEval (Saturated)
% pass@1

164 hand-written Python programming problems scored by passing unit tests. Saturated for frontier models.
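The pass@1 score above is an instance of the standard unbiased pass@k estimator introduced alongside HumanEval (the function name is illustrative):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k for one problem: n samples drawn, c of them correct.

    Returns the probability that at least one of k randomly chosen samples
    (out of the n drawn) passes the unit tests.
    """
    if n - c < k:
        return 1.0  # too few failures left to fill k picks with all failures
    return 1.0 - comb(n - c, k) / comb(n, k)

# With k = 1 this reduces to c / n, the fraction of samples that pass;
# the benchmark score averages this over all 164 problems.
```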

instruction following

IFEval (Some risk)
%

Verifiable instruction-following benchmark; 25 categories of strict formatting / structural directives.

long context

Long-context retrieval and reasoning suite. We report the 128k token effective-context score.

performance

Throughput (tok/s)

Median sustained output speed in tokens per second on the model's first-party API for medium-length prompts. Higher is faster.

Time to first token (ms)

Median time from request to first output chunk in milliseconds on the model's first-party API for medium-length prompts. Lower is snappier; reasoning models are penalised here because they think before talking.
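Both metrics fall out of the timestamps of a streamed response; a rough sketch, assuming absolute arrival times are recorded per output chunk (names and the measurement setup are illustrative, not this leaderboard's harness):

```python
def stream_metrics(request_time: float, chunk_times: list[float],
                   total_tokens: int) -> tuple[float, float]:
    """Derive (time-to-first-token in s, sustained tok/s) from a stream.

    chunk_times are absolute times at which each output chunk arrived.
    Sustained speed is measured from first to last chunk, so a single-chunk
    response has no measurable generation window.
    """
    ttft = chunk_times[0] - request_time
    gen_time = chunk_times[-1] - chunk_times[0]
    toks_per_s = total_tokens / gen_time if gen_time > 0 else float("inf")
    return ttft, toks_per_s

# e.g. first chunk at 0.5 s, last at 4.5 s, 100 tokens total:
ttft, tps = stream_metrics(0.0, [0.5, 2.5, 4.5], 100)  # 0.5 s, 25 tok/s
```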

Providers

Provider         Endpoint           Input $/M   Output $/M   Context   Quant
DeepInfra        deepinfra/turbo    $0.40       $0.40        131k      fp8
DeepInfra        deepinfra/base     $0.40       $0.40        131k      bf16
Amazon Bedrock   amazon-bedrock     $0.72       $0.72        131k      unknown
WandB            wandb/bf16         $0.80       $0.80        128k      bf16

Sourced from OpenRouter. Sorted by lowest output price.
