
Qwen2.5 32B Instruct
Qwen
Qwen2.5-32B-Instruct is a 32-billion-parameter language model from the Qwen2.5 series, fine-tuned for instruction following. It excels at generating long-form text of more than 8,000 tokens, understanding structured data such as tables, and producing structured outputs, particularly JSON. It also supports more than 29 languages.
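As a concrete illustration of the JSON-output use case, here is a minimal sketch using the Hugging Face transformers chat-template API. The checkpoint name Qwen/Qwen2.5-32B-Instruct, the prompts, and the generation settings are assumptions for the example, not details taken from this page.

```python
# Minimal sketch: asking Qwen2.5-32B-Instruct for JSON output via transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-32B-Instruct"  # assumed Hugging Face checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [
    {"role": "system", "content": "You are a helpful assistant. Respond only with valid JSON."},
    {"role": "user", "content": "Convert this table to JSON: name=Alice, age=30; name=Bob, age=25"},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([prompt], return_tensors="pt").to(model.device)

# The page lists a maximum output of roughly 8K tokens; 512 is plenty for this demo.
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```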
Model Specifications
Technical details and capabilities of Qwen2.5 32B Instruct
Core Specifications
32.5B Parameters
Model size and complexity
18T Training Tokens
Amount of data used in training
131.1K / 8.2K
Maximum input / output tokens (see the token-budget sketch below)
September 18, 2024
Release date
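To stay within these limits, it helps to count prompt tokens before generating. The sketch below is illustrative only: it assumes the Hugging Face checkpoint Qwen/Qwen2.5-32B-Instruct and treats the 131.1K context as a shared budget for prompt plus generated tokens, both of which are assumptions rather than details taken from this page.

```python
# Hedged sketch: checking a prompt against the limits listed above.
from transformers import AutoTokenizer

MAX_CONTEXT_TOKENS = 131_072  # ~131.1K input limit (assumed exact value)
MAX_OUTPUT_TOKENS = 8_192     # ~8.2K output limit (assumed exact value)

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-32B-Instruct")

def fits_in_context(prompt: str, reserved_for_output: int = MAX_OUTPUT_TOKENS) -> bool:
    """Return True if the prompt plus the reserved generation budget fits the context window."""
    prompt_tokens = len(tokenizer.encode(prompt))
    return prompt_tokens + reserved_for_output <= MAX_CONTEXT_TOKENS

print(fits_in_context("Summarize the attached quarterly report as JSON."))
```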
Performance Insights
Check out how Qwen2.5 32B Instruct handles various AI tasks through comprehensive benchmark results.
Model Comparison
See how Qwen2.5 32B Instruct stacks up against other leading models across key performance metrics.
Detailed Benchmarks
Dive deeper into Qwen2.5 32B Instruct's performance across specific task categories. Expand each section to see detailed metrics and comparisons.
Math
GSM8K
MATH
Coding
HumanEval
HumanEval+
MBPP
Reasoning
HellaSwag
Knowledge
MMLU
GPQA
Hallucination
TruthfulQA
Uncategorized
MMLU-Pro
MMLU-Redux
BBH
ARC-C
Winogrande
TheoremQA
MultiPL-E
Providers Pricing Coming Soon
We're working on gathering comprehensive pricing data from all major providers of Qwen2.5 32B Instruct. Once it's available, you'll be able to compare costs across platforms and find the best pricing for your use case.