Gemini 1.5 Pro

Google

Gemini 1.5 Pro is a versatile, mid-size multimodal AI model designed for advanced reasoning across diverse tasks. Its standout feature is a context window large enough to handle enormous inputs in a single pass: it can analyze two hours of video, 19 hours of audio, code repositories of roughly 60,000 lines, or documents up to 2,000 pages.
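
As a concrete, hedged illustration of that long-context claim, here is a minimal sketch using Google's google-generativeai Python SDK to upload a large file and query it in one request. The API key and file name are placeholders, not part of this page; the upload-then-prompt flow follows the SDK's File API pattern.

    # Minimal sketch: one long-context, multimodal request to Gemini 1.5 Pro.
    import google.generativeai as genai

    genai.configure(api_key="YOUR_API_KEY")  # placeholder; assumes a valid Gemini API key

    # The File API handles large inputs (long PDFs, audio, video); the returned
    # handle is passed to the model alongside a text prompt.
    document = genai.upload_file("annual_report.pdf")  # hypothetical large PDF

    model = genai.GenerativeModel("gemini-1.5-pro")
    response = model.generate_content(
        [document, "Summarize the key findings in this document."]
    )
    print(response.text)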

Model Specifications

Technical details and capabilities of Gemini 1.5 Pro

Core Specifications

Input / Output tokens: 2.1M / 8.2K
Knowledge cutoff date: October 31, 2023
Release date: April 30, 2024

Capabilities & License

Multimodal Support: Supported
Web Hydrated: No
License: Proprietary

Resources

Research Paper: https://arxiv.org/pdf/2403.05530
API Reference: https://ai.google.dev/gemini-api/docs/models/gemini#gemini-1.5-pro
Playground: https://ai.google.dev/studio
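
Before moving to the benchmarks, here is a second minimal sketch (again assuming the google-generativeai Python SDK and a placeholder API key) showing where the token limits from the specifications above surface in practice: output length is capped via generation_config, and count_tokens lets you check a prompt against the input window before sending it.

    # Minimal sketch: configuring Gemini 1.5 Pro against its token limits.
    import google.generativeai as genai

    genai.configure(api_key="YOUR_API_KEY")  # placeholder

    model = genai.GenerativeModel(
        "gemini-1.5-pro",
        generation_config=genai.GenerationConfig(
            max_output_tokens=8192,  # the model's documented output ceiling
            temperature=0.2,
        ),
    )

    prompt = "Explain the transformer architecture in one paragraph."
    print(model.count_tokens(prompt))           # check the prompt against the ~2.1M input window
    print(model.generate_content(prompt).text)  # response is capped at 8,192 output tokens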

Performance Insights

The benchmark results below summarize how Gemini 1.5 Pro handles a range of AI tasks.

Benchmark         Score
XSTest            98.8
Hellaswag         93.3
GSM8K             90.8
BigBench-Hard     89.2
MGSM              87.5
MATH              86.5
MMLU              85.9
Natural2Code      85.4
HumanEval         84.1
MRCR              82.6
Video-MME         78.6
MMLU-Pro          75.8
WMT23             75.1
DROP              74.9
MathVista         68.1
MMMU              65.9
FunctionalMATH    64.6
PhysicsFinals     63.9
GPQA              59.1
Vibe-Eval         53.9
HiddenMath        52.0
AMC_2022_23       46.4
FLEURS            6.7

Note: FLEURS is a speech-recognition benchmark typically reported as word error rate, so a lower number is better.

Detailed Benchmarks

Dive deeper into Gemini 1.5 Pro's performance across specific task categories. The table below lists each benchmark alongside the average score of the other models we track for comparison.

Category        Benchmark        Avg. of compared models
Coding          HumanEval        80.0%
Reasoning       DROP             75.9%
Reasoning       Hellaswag        89.9%
Knowledge       MMLU             83.9%
Knowledge       MATH             82.6%
Knowledge       GPQA             57.8%
Uncategorized   MMLU-Pro         72.9%
Uncategorized   PhysicsFinals    60.7%
Uncategorized   AMC_2022_23      40.6%
Uncategorized   MGSM             84.6%
Uncategorized   Natural2Code     83.4%
Uncategorized   HiddenMath       50.2%
Uncategorized   MRCR             63.8%
Uncategorized   WMT23            73.4%
Uncategorized   MMMU             61.1%
Uncategorized   MathVista        62.3%
Uncategorized   FLEURS           34.2%
Uncategorized   Video-MME        73.6%
Uncategorized   Vibe-Eval        53.9%
Uncategorized   XSTest           96.1%
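
To read the two tables together, the short plain-Python sketch below (no external libraries; the numbers are hard-coded from this page for a subset of benchmarks) joins each score with its cross-model average and ranks the results by how far Gemini 1.5 Pro sits above or below the field.

    # Sketch: Gemini 1.5 Pro score vs. average of compared models, per benchmark.
    scores = {
        # benchmark: (model score, average of compared models) -- from this page
        "HumanEval": (84.1, 80.0),
        "DROP":      (74.9, 75.9),
        "Hellaswag": (93.3, 89.9),
        "MMLU":      (85.9, 83.9),
        "MATH":      (86.5, 82.6),
        "GPQA":      (59.1, 57.8),
        "MRCR":      (82.6, 63.8),
        "XSTest":    (98.8, 96.1),
    }

    # Sort by margin over the average, largest lead first.
    for name, (score, avg) in sorted(
        scores.items(), key=lambda kv: kv[1][0] - kv[1][1], reverse=True
    ):
        print(f"{name:10s} {score:5.1f} vs avg {avg:5.1f} ({score - avg:+.1f})")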

Provider Pricing: Coming Soon

We're working on gathering comprehensive pricing data for Gemini 1.5 Pro from all major providers, so you can compare costs across platforms and find the best pricing for your use case.


Share your feedback

Hi, I'm Charlie Palars, the founder of Deepranking.ai. I'm always looking for ways to improve the site and make it more useful for you. You can write to me through this form or reach me directly on X at @palarsio.

Your feedback helps us improve our service

Stay Ahead with AI Updates

Get insights on Gemini 2.5 Pro, Claude 3.7 Sonnet, and more top AI models