Gemini 1.0 Pro

Google

Gemini 1.0 Pro is a versatile natural language model from Google that handles multi-turn conversations about text and code as well as code generation. It accepts text input and produces text output, making it well-suited to a broad range of natural language tasks. The model supports customizable safety settings and function calling; JSON mode, JSON schema constraints, and system instructions are not supported. The most recent stable release, gemini-1.0-pro-001, was updated in February 2024.
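For illustration, here is a minimal sketch of a multi-turn chat with customizable safety settings, assuming the google-generativeai Python SDK and a Google AI Studio API key (the key and prompts are placeholders):

```python
# Minimal sketch: multi-turn chat with gemini-1.0-pro using the
# google-generativeai Python SDK. API key and prompts are placeholders.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

model = genai.GenerativeModel(
    "gemini-1.0-pro",
    # Safety settings are customizable per model instance or per request.
    safety_settings=[
        {"category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_MEDIUM_AND_ABOVE"},
        {"category": "HARM_CATEGORY_HATE_SPEECH", "threshold": "BLOCK_MEDIUM_AND_ABOVE"},
    ],
)

# start_chat() keeps earlier turns in context, so follow-ups can refer back.
chat = model.start_chat()
print(chat.send_message("Write a Python function that reverses a string.").text)
print(chat.send_message("Now add type hints and a docstring.").text)
```

Because the model supports neither system instructions nor JSON mode, any formatting or role requirements have to be expressed in the prompt text itself.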

Model Specifications

Technical details and capabilities of Gemini 1.0 Pro

Core Specifications

Input / Output tokens: 32.8K / 8.2K
Knowledge cutoff date: January 31, 2024
Release date: February 14, 2024
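Long prompts are worth checking against the 32.8K-token input window before submission. A sketch using the SDK's count_tokens call, under the same assumptions as above (the limit constant simply mirrors the 32.8K figure listed here):

```python
# Sketch: pre-flight check of a long prompt against the input window
# listed above, using the google-generativeai SDK's count_tokens call.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.0-pro")

MAX_INPUT_TOKENS = 32_800  # the 32.8K input limit listed above
prompt = "Summarize the following report:\n" + "..."  # placeholder text

if model.count_tokens(prompt).total_tokens > MAX_INPUT_TOKENS:
    raise ValueError("Prompt exceeds the input window; trim or chunk it first.")

response = model.generate_content(prompt)
print(response.text)
```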

Capabilities & License

Multimodal Support: Not Supported
Web Hydrated: No
License: Proprietary
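The function calling capability noted in the overview can be exercised through the SDK's tool interface. A hedged sketch, again assuming the google-generativeai Python SDK; get_exchange_rate is a hypothetical stub defined only for illustration:

```python
# Sketch: function calling with gemini-1.0-pro via the google-generativeai
# SDK. get_exchange_rate is a hypothetical stub, not a real rates API.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

def get_exchange_rate(currency_from: str, currency_to: str) -> float:
    """Look up the exchange rate between two currencies (stubbed)."""
    return 0.92  # a real tool would query a live rates service

model = genai.GenerativeModel("gemini-1.0-pro", tools=[get_exchange_rate])

# With automatic function calling, the SDK runs the tool when the model
# requests it and feeds the result back before the final answer.
chat = model.start_chat(enable_automatic_function_calling=True)
print(chat.send_message("How many euros is 100 US dollars?").text)
```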

Resources

Research Paper: https://arxiv.org/pdf/2312.11805
API Reference: https://ai.google.dev/gemini-api/docs/models/gemini#gemini-1.0-pro
Playground: https://gemini.google/advanced/

Performance Insights

Check out how Gemini 1.0 Pro handles various AI tasks through comprehensive benchmark results.

Benchmark     Score
BigBench      75
MMLU          71.8
WMT23         71.7
EgoSchema     55.7
MMMU          47.9
MathVista     46.6
MATH          32.6
GPQA          27.9
Fleurs        6.4

Model Comparison

See how Gemini 1.0 Pro stacks up against other leading models across key performance metrics.

Model                     MMLU   MATH   GPQA   MMMU   MathVista
Gemini 1.0 Pro            71.8   32.6   27.9   47.9   46.6
Llama 3.2 11B Instruct    73     51.9   32.8   50.7   51.5
Grok-1.5                  81.3   50.6   35.9   53.6   52.8
GPT-4o mini               82     70.2   40.2   59.4   56.7
Llama 3.2 90B Instruct    86     68     46.7   60.3   57.3
Grok-2 mini               86.2   73     51     63.2   68.1

Detailed Benchmarks

Dive deeper into Gemini 1.0 Pro's performance across specific task categories, with each benchmark shown against the average score of the other models tracked.

Knowledge

MATH: 32.6 (average across models: 40.7%)

Non categorized

WMT23: 71.7 (average across models: 73.4%)
MMMU: 47.9 (average across models: 47.9%)
MathVista: 46.6 (average across models: 47.6%)
EgoSchema: 55.7 (average across models: 66.5%)

Providers Pricing (Coming Soon)

We're working on gathering comprehensive pricing data from all major providers for Gemini 1.0 Pro. Compare costs across platforms to find the best pricing for your use case.

