
Gemma 2 27B
Gemma 2 27B IT is the instruction-tuned variant of Google's open-weight Gemma 2 27B language model. Built from the same research and technology as the Gemini models, it is optimized for dialogue through supervised fine-tuning, knowledge distillation from larger models, and reinforcement learning from human feedback (RLHF). As a result, it performs strongly on a range of text generation tasks, including question answering, summarization, and complex reasoning.
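As a concrete illustration of the dialogue use case, the sketch below loads the instruction-tuned checkpoint with the Hugging Face transformers library and queries it through its chat template. The model ID google/gemma-2-27b-it, the bfloat16 precision, and the generation settings are assumptions for illustration, not details taken from this page.

```python
# Minimal sketch: chat with Gemma 2 27B IT via Hugging Face transformers.
# Assumes the google/gemma-2-27b-it checkpoint and enough GPU memory
# (or accelerate-style offloading) to hold a 27B-parameter model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2-27b-it"  # assumed Hugging Face model ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumed precision; fp16 or 8-bit also possible
    device_map="auto",
)

# The instruction-tuned variant expects chat-style turns; the tokenizer's
# chat template inserts the turn markers the model was trained on.
messages = [
    {"role": "user", "content": "Summarize reinforcement learning from human feedback in three sentences."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Because the instruction-tuned model expects explicit turn markers, applying the tokenizer's chat template is generally preferable to formatting the prompt by hand.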
Model Specifications
Technical details and capabilities of Gemma 2 27B
Core Specifications
27.2B Parameters
Model size and complexity
13T (13,000B) Training Tokens
Amount of data used in training
8,192 / 8,192
Input / Output tokens
June 26, 2024
Release date
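Given the 8,192-token input and output limits listed above, long prompts need to be checked or truncated before generation. The helper below is a minimal sketch of such a check; the tokenizer ID and the reserved output budget are assumptions, not values from this page.

```python
# Minimal sketch: check a prompt against Gemma 2's 8,192-token context window
# before sending it to the model. The tokenizer ID is an assumption.
from transformers import AutoTokenizer

CONTEXT_WINDOW = 8192       # input limit from the specification above
RESERVED_FOR_OUTPUT = 512   # assumed budget for generated tokens

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b-it")

def fits_in_context(prompt: str) -> bool:
    """Return True if the prompt leaves room for the reserved output budget."""
    n_tokens = len(tokenizer.encode(prompt))
    return n_tokens + RESERVED_FOR_OUTPUT <= CONTEXT_WINDOW

print(fits_in_context("Explain the Gemma 2 training pipeline in one paragraph."))
```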
Capabilities & License
Performance Insights
See how Gemma 2 27B handles various AI tasks in the benchmark results below.
Detailed Benchmarks
Dive deeper into Gemma 2 27B's performance across specific task categories. Each category below lists the benchmarks it covers; a sketch of how such benchmarks can be reproduced locally follows the list.
Math: GSM8K, MATH
Coding: HumanEval, MBPP
Reasoning: HellaSwag
Knowledge: MMLU
Uncategorized: PIQA, BoolQ, WinoGrande, TriviaQA
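For readers who want to reproduce numbers on tasks like these locally, the sketch below runs a subset of them with EleutherAI's lm-evaluation-harness. The package name lm-eval, the evaluator.simple_evaluate call, and the chosen tasks and settings are assumptions based on that harness, not details specified on this page.

```python
# Minimal sketch: evaluate Gemma 2 27B IT on a few of the benchmarks listed
# above using EleutherAI's lm-evaluation-harness (pip install lm-eval).
# Model ID, dtype, task names, and batch size are assumptions.
from lm_eval import evaluator

results = evaluator.simple_evaluate(
    model="hf",                                   # Hugging Face backend
    model_args="pretrained=google/gemma-2-27b-it,dtype=bfloat16",
    tasks=["gsm8k", "mmlu", "hellaswag"],         # subset of the categories above
    batch_size=8,
)

# results["results"] maps each task to its metrics dict (metric names vary by task).
for task, metrics in results["results"].items():
    print(task, metrics)
```

Published scores also depend on prompt format and few-shot settings, so locally reproduced numbers may differ from those reported by a given leaderboard.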
Providers Pricing Coming Soon
We're working on gathering comprehensive pricing data from all major providers for Gemma 2 27B. Compare costs across platforms to find the best pricing for your use case.