Gemini 2.5 Pro
Pricing last verified 1 year ago (source). Above 200k tokens the rate rises to $2.50 input / $15.00 output per million tokens.
Benchmarks
**preference:** Crowdsourced pairwise human-preference rankings of LLM responses; a higher Elo means the model's responses are preferred more often.
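As a sketch of how pairwise-preference Elo behaves (illustrative ratings and K-factor, not the leaderboard's actual parameters):

```python
def elo_expected(r_a: float, r_b: float) -> float:
    """Probability that model A is preferred over model B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def elo_update(r_a: float, r_b: float, a_won: bool, k: float = 32.0):
    """Return updated ratings after one pairwise comparison."""
    e_a = elo_expected(r_a, r_b)
    s_a = 1.0 if a_won else 0.0
    return r_a + k * (s_a - e_a), r_b + k * ((1.0 - s_a) - (1.0 - e_a))

# A 100-point Elo gap corresponds to roughly a 64% preference rate.
print(round(elo_expected(1300, 1200), 2))  # → 0.64
```

So small Elo gaps between frontier models translate into preference rates only slightly above 50%.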
**math:** Problems from the 2024 American Invitational Mathematics Examination (AIME). Answers are three-digit integers; very hard for non-reasoning models.
**coding:**
- 164 hand-written Python programming problems scored by passing unit tests; saturated for frontier models.
- A continuously refreshed benchmark drawing problems from LeetCode, AtCoder, and Codeforces, which reduces benchmark contamination.
**agentic:** Real GitHub issues solved end-to-end. The Verified subset is a 500-task, human-validated slice of SWE-bench.
**vision**
**long context:** Long-context retrieval and reasoning suite; we report the effective-context score at 128k tokens.
**performance:**
- Throughput: median sustained output speed in tokens per second on the model's first-party API for medium-length prompts. Higher is faster.
- Time to first token: median time from request to first output chunk in milliseconds on the same API. Lower is snappier; reasoning models are penalised here because they think before talking.
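Both figures are medians over repeated requests; a minimal sketch of deriving them from per-request timing samples (the sample values below are made up for illustration):

```python
from statistics import median

# Hypothetical per-request samples: (output_tokens, seconds_to_finish, ttft_ms)
samples = [
    (512, 4.8, 620),
    (480, 5.1, 480),
    (530, 4.5, 550),
]

# Median sustained output speed across requests, in tokens per second.
tokens_per_sec = median(t / s for t, s, _ in samples)

# Median time to first output chunk, in milliseconds.
ttft_ms = median(ms for _, _, ms in samples)
```

Using the median rather than the mean keeps a single slow outlier request from skewing the reported numbers.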
Providers
| Provider | Input $/M | Output $/M | Context | Quant |
|---|---|---|---|---|
| Google (`google-vertex/global`) | $1.25 | $10.00 | 1.0M | unknown |
| Google (`google-vertex/eu`) | $1.25 | $10.00 | 1.0M | unknown |
| Google AI Studio (`google-ai-studio`) | $1.25 | $10.00 | 1.0M | unknown |
| Google (`google-vertex/us`) | $1.25 | $10.00 | 1.0M | unknown |
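The tiered rates above can be turned into a per-request cost estimate. A minimal sketch, assuming the long-context rate applies to the whole request once the prompt exceeds 200k tokens (actual billing semantics may differ by provider):

```python
def request_cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Estimate the cost of one request under tiered per-million-token pricing.

    Assumption: the higher rate covers the entire request whenever the
    prompt exceeds 200k tokens; check the provider's billing rules.
    """
    if input_tokens > 200_000:
        in_rate, out_rate = 2.50, 15.00   # long-context tier (above 200k tokens)
    else:
        in_rate, out_rate = 1.25, 10.00   # base tier
    return input_tokens / 1e6 * in_rate + output_tokens / 1e6 * out_rate

# 100k-token prompt with a 2k-token reply lands in the base tier:
print(round(request_cost_usd(100_000, 2_000), 4))  # → 0.145
```

Note that output tokens dominate cost at typical prompt/response ratios, since the output rate is eight times the input rate.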