
OLMo 2 32B
Allen Institute for AI
OLMo 2 32B is the largest and most capable model in the OLMo 2 family, developed by the Allen Institute for AI (AI2) as part of its research-centric commitment to open AI. Scaling up the same training recipe used for the 7B and 13B versions released in November 2024, OLMo 2 32B has 32 billion parameters and is trained on 6 trillion tokens. It is the first fully open model, with all training data, code, model weights, and documentation publicly released, to outperform GPT-3.5 Turbo and GPT-4o mini across a broad range of multi-skill academic benchmarks.

Post-training follows the Tülu 3.1 recipe, combining supervised fine-tuning, direct preference optimization, and reinforcement learning with verifiable rewards (RLVR), yielding an instruction-tuned model that pushes the limits of open-source alignment techniques. Despite its strong performance, OLMo 2 32B is efficient to train: it achieves results comparable to Qwen 2.5 32B while using only about one-third of the training compute.

The model is trained with OLMo-core, a newly overhauled training framework built for scalability, modularity, and efficiency on modern AI hardware, including support for 4D+ parallelism and asynchronous checkpointing. Every model in the OLMo 2 series, including the 7B, 13B, and now 32B, can be fine-tuned on a single H100 GPU node, making the family accessible to academic and independent researchers. With its strong capabilities, fully transparent design, and optimized training infrastructure, OLMo 2 32B sets a new standard for open-weight models and reinforces AI2's role in democratizing advanced AI.
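Because the weights are openly released, the model can be loaded with standard tooling. Below is a minimal inference sketch using Hugging Face Transformers; the hub id allenai/OLMo-2-0325-32B-Instruct and the hardware notes in the comments are assumptions, so check AI2's model cards for the exact identifiers.

```python
# Minimal inference sketch for OLMo 2 32B Instruct via Hugging Face Transformers.
# The hub id below is assumed; verify it against AI2's official model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/OLMo-2-0325-32B-Instruct"  # assumed id for the instruction-tuned variant

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # bf16 halves memory vs. fp32; a 32B model still needs ~64 GB
    device_map="auto",           # shard layers across available GPUs automatically
)

messages = [{"role": "user", "content": "What does 'fully open' mean for a language model?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```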
Model Specifications
Technical details and capabilities of OLMo 2 32B
Core Specifications
32.0B Parameters
Model size and complexity
6T Training Tokens
Amount of data used in training
4,096 / 4,096
Input / Output tokens
March 13, 2025
Release date
Performance Insights
Check out how OLMo 2 32B handles various AI tasks through comprehensive benchmark results.
Model Comparison
See how OLMo 2 32B stacks up against other leading models across key performance metrics.
Detailed Benchmarks
Dive deeper into OLMo 2 32B's performance across specific task categories. Expand each section to see detailed metrics and comparisons; a sketch of how such scores can be reproduced follows the list.
Math: MATH
Reasoning: DROP, BBH
Knowledge: MMLU, TriviaQA
Other: IFEval
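One common way to reproduce scores on benchmarks like these is EleutherAI's lm-evaluation-harness; the sketch below assumes the lm-eval package and the hub id allenai/OLMo-2-0325-32B, and its task names and metrics may differ slightly from those reported on this page.

```python
# Benchmark sketch using EleutherAI's lm-evaluation-harness (pip install lm-eval).
# Hub id and task selection are assumptions, not this site's exact evaluation setup.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",  # evaluate a Hugging Face causal LM
    model_args="pretrained=allenai/OLMo-2-0325-32B,dtype=bfloat16",
    tasks=["mmlu", "bbh", "ifeval", "drop", "triviaqa"],
    batch_size=8,
)

# Per-task metrics (accuracy, exact match, etc.) keyed by task name.
print(results["results"])
```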
Providers Pricing Coming Soon
We're working on gathering comprehensive pricing data from all major providers for OLMo 2 32B. Compare costs across platforms to find the best pricing for your use case.