Gemma 2 27B

Google

Gemma 2 27B IT is the instruction-tuned variant of Google's open-weight Gemma 2 27B model. Built from the same research and technology as the Gemini models, it is optimized for dialogue-oriented tasks through a combination of supervised fine-tuning, knowledge distillation from larger models, and reinforcement learning from human feedback (RLHF). As a result, the model performs strongly across text generation tasks, including question answering, summarization, and complex reasoning.
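The instruction-tuned checkpoint is published on Hugging Face (see the API Reference link below). The sketch below shows one common way to load it with the transformers library; the dtype, device placement, and generation settings are illustrative assumptions, and access requires accepting the Gemma license on Hugging Face.

```python
# Minimal sketch: running google/gemma-2-27b-it with the transformers library.
# Assumes transformers >= 4.42, accelerate installed, a GPU with enough memory,
# and that the Gemma license has been accepted on Hugging Face.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="google/gemma-2-27b-it",
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

# The IT model expects a chat-style prompt; the pipeline applies the model's
# chat template to these messages automatically.
messages = [
    {"role": "user", "content": "Summarize what RLHF is in two sentences."},
]

outputs = generator(messages, max_new_tokens=256)
print(outputs[0]["generated_text"][-1]["content"])
```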

Model Specifications

Technical details and capabilities of Gemma 2 27B

Core Specifications

27.2B Parameters

Model size and complexity

13T Training Tokens

Amount of data used in training

8,192 / 8,192

Input / Output tokens

June 26, 2024

Release date
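
Because both input and output share the 8,192-token context window listed above, it can help to check prompt length before generation. The helper below is a hypothetical sketch using the model's tokenizer; the generation budget and the function itself are assumptions, not part of the specification.

```python
# Hypothetical helper: check a prompt against the 8,192-token context window.
# Assumes the google/gemma-2-27b-it tokenizer can be downloaded (gated model).
from transformers import AutoTokenizer

CONTEXT_WINDOW = 8192  # context length from the specifications above

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b-it")

def fits_in_context(prompt: str, max_new_tokens: int = 512) -> bool:
    """Return True if the prompt plus the generation budget fits in the window."""
    n_prompt_tokens = len(tokenizer(prompt).input_ids)
    return n_prompt_tokens + max_new_tokens <= CONTEXT_WINDOW

print(fits_in_context("Explain the Gemma 2 training recipe in one paragraph."))
```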

Capabilities & License

Multimodal Support: Not supported
Web Hydrated: No
License: Gemma

Resources

Research Paper: https://storage.googleapis.com/deepmind-media/gemma/gemma-2-report.pdf
API Reference: https://huggingface.co/google/gemma-2-27b-it
Playground: https://huggingface.co/chat/models/google/gemma-2-27b-it
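
Besides the browser Playground above, the model can be queried programmatically through the Hugging Face Inference API. The snippet below is a sketch only: availability of this model on the serverless API and the HF_TOKEN environment variable are assumptions.

```python
# Sketch: querying the hosted model via the Hugging Face Inference API.
# Assumes an HF_TOKEN with access to the gated model and that the model is
# served on the endpoint; otherwise point InferenceClient at your own endpoint.
import os
from huggingface_hub import InferenceClient

client = InferenceClient(model="google/gemma-2-27b-it", token=os.environ.get("HF_TOKEN"))

response = client.chat_completion(
    messages=[{"role": "user", "content": "Give three use cases for a 27B instruction-tuned model."}],
    max_tokens=200,
)
print(response.choices[0].message.content)
```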

Performance Insights

Check out how Gemma 2 27B handles various AI tasks through comprehensive benchmark results.

ARC-e: 88.6
HellaSwag: 86.4
BoolQ: 84.8
WinoGrande: 83.7
TriviaQA: 83.7
PIQA: 83.2
MMLU: 75.2
BIG-Bench: 74.9
GSM8K: 74.0
ARC-c: 71.4
MBPP: 62.6
AGIEval: 55.1
SocialIQA: 53.7
HumanEval: 51.8
MATH: 42.3
Natural Questions: 34.5

Detailed Benchmarks

Dive deeper into Gemma 2 27B's performance across specific task categories. Expand each section to see detailed metrics and comparisons.

Coding

HumanEval: 51.8 (average across compared models: 61.1%)
MBPP: 62.6 (average across compared models: 70.5%)

Reasoning

HellaSwag: 86.4 (average across compared models: 86.7%)

Knowledge

MMLU: 75.2 (average across compared models: 75.9%)
MATH: 42.3 (average across compared models: 47.1%)

Uncategorized

PIQA: 83.2 (average across compared models: 83.6%)
SocialIQA: 53.7 (average across compared models: 53.6%)
BoolQ: 84.8 (average across compared models: 82.9%)
WinoGrande: 83.7 (average across compared models: 77.4%)
ARC-e: 88.6 (average across compared models: 88.3%)
ARC-c: 71.4 (average across compared models: 69.9%)
TriviaQA: 83.7 (average across compared models: 78.0%)
Natural Questions: 34.5 (average across compared models: 31.9%)
AGIEval: 55.1 (average across compared models: 56.3%)
BIG-Bench: 74.9 (average across compared models: 71.5%)

Providers Pricing (Coming Soon)

We're working on gathering comprehensive pricing data from all major providers for Gemma 2 27B. Compare costs across platforms to find the best pricing for your use case.

OpenAI
Anthropic
Google
Mistral AI
Cohere
