
Qwen2.5 72B Instruct

Open source
Alibaba (Qwen)
Open license
text
Qwen 2.5 · Released 2y ago
Avg score
65.8
/ 100
Context
131k
Output limit
8k
Input price
$0.90 /M
Output price
$0.90 /M

Pricing verified 1y ago · source · Median of hosted endpoints
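At per-million-token rates like these, the cost of a single request is just each token count scaled by its price. A minimal sketch (the token counts are hypothetical; the $0.90/M figures are the median hosted rates quoted above):

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_price_per_m: float, output_price_per_m: float) -> float:
    """Dollar cost of one request at per-million-token rates."""
    return (input_tokens / 1e6) * input_price_per_m \
         + (output_tokens / 1e6) * output_price_per_m

# A 10k-token prompt with a 2k-token completion at $0.90/M each way:
cost = request_cost(10_000, 2_000, 0.90, 0.90)
print(f"${cost:.4f}")  # $0.0108
```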

Benchmarks

preference

Crowdsourced pairwise human preference rankings of LLM responses. Higher Elo means more frequently preferred by users.
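The Elo mechanics behind such pairwise rankings can be sketched as follows (the K-factor and starting ratings are illustrative, not the leaderboard's actual parameters):

```python
def expected_score(r_a: float, r_b: float) -> float:
    """Modeled probability that model A's response is preferred over model B's."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def elo_update(r_a: float, r_b: float, a_won: bool, k: float = 32.0):
    """Return updated (r_a, r_b) after one pairwise human vote."""
    e_a = expected_score(r_a, r_b)
    s_a = 1.0 if a_won else 0.0
    # Winner gains, loser loses; the total rating pool is conserved.
    return r_a + k * (s_a - e_a), r_b + k * ((1.0 - s_a) - (1.0 - e_a))

# An upset win against a higher-rated model moves both ratings the most:
print(elo_update(1200.0, 1300.0, a_won=True))
```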

knowledge

MMLU Pro · High risk
%

Harder version of MMLU with ten answer options per question across 14 subject areas; reduces guessing-friendly answers.

reasoning

GPQA Diamond · Some risk
%

Graduate-level Google-proof Q&A in physics, chemistry, and biology. Diamond subset is the hardest tier with PhD-validated answers.

math

MATH-500 · Saturated
%

500 high-school competition math problems requiring multi-step solutions. Scored on final-answer correctness.

AIME 2024 · High risk
%

American Invitational Mathematics Examination 2024 problems. Integer answers from 0 to 999; very hard for non-reasoning models.

coding

HumanEval · Saturated
% pass@1

164 hand-written Python programming problems scored by passing unit tests. Saturated for frontier models.
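pass@1 is the fraction of problems where a sampled solution passes all tests on the first try. The standard unbiased pass@k estimator used for HumanEval-style scoring can be sketched as (the n/c/k values below are illustrative):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate: n samples per problem, c of them correct."""
    if n - c < k:
        return 1.0  # too few failures to fill k draws without a pass
    # 1 minus the probability that all k drawn samples are failures.
    return 1.0 - comb(n - c, k) / comb(n, k)

# With 20 samples and 5 passing, a single draw passes 25% of the time:
print(pass_at_k(n=20, c=5, k=1))  # 0.25
```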

instruction following

IFEval · Some risk
%

Verifiable instruction-following benchmark; 25 categories of strict formatting / structural directives.
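"Verifiable" means each directive can be checked by a program rather than judged by a model. A toy checker for two such directives (the function names and directives are made up for illustration, not IFEval's actual categories):

```python
def check_word_count(response: str, max_words: int) -> bool:
    """Directive: answer in at most max_words words."""
    return len(response.split()) <= max_words

def check_contains_keyword(response: str, keyword: str) -> bool:
    """Directive: the answer must mention a required keyword."""
    return keyword.lower() in response.lower()

response = "Paris is the capital of France."
print(check_word_count(response, 10))             # True
print(check_contains_keyword(response, "paris"))  # True
```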

long context

Long-context retrieval and reasoning suite. We report the 128k token effective-context score.

performance

tok/s

Median sustained output speed in tokens per second on the model's first-party API for medium-length prompts. Higher is faster.

ms

Median time from request to first output chunk in milliseconds on the model's first-party API for medium-length prompts. Lower is snappier; reasoning models are penalised here because they think before talking.
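Under these definitions, a streaming client can measure both numbers itself: time-to-first-token is the gap before the first chunk, and throughput is tokens over the remaining stream time. A sketch against a generic iterator of per-chunk token counts (the `stream` argument stands in for a real streaming API client):

```python
import time

def measure_stream(stream):
    """Return (ttft_ms, tokens_per_second) for one streamed response."""
    start = time.monotonic()
    first = None
    tokens = 0
    for chunk_tokens in stream:  # each chunk reports how many tokens it carried
        if first is None:
            first = time.monotonic()  # first chunk arrival marks TTFT
        tokens += chunk_tokens
    end = time.monotonic()
    if first is None:
        raise ValueError("stream produced no chunks")
    ttft_ms = (first - start) * 1000.0
    tps = tokens / (end - first) if end > first else float("inf")
    return ttft_ms, tps
```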

Providers

Provider                    Input $/M   Output $/M   Context   Quant
DeepInfra (deepinfra/fp8)   $0.36       $0.40        33k       fp8
Novita (novita/bf16)        $0.38       $0.40        32k       bf16
Sourced from OpenRouter. Sorted by lowest output price.
