
Qwen2.5-Coder 32B Instruct

Qwen

Qwen2.5-Coder is a powerful coding model, expertly trained on 5.5 trillion tokens across 92 programming languages. With an extensive 128K context window, it's designed for superior code generation, completion, and repair. Beyond excelling at diverse programming tasks, Qwen2.5-Coder also demonstrates impressive mathematical and general knowledge.
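Code completion is one of the model's headline tasks. As a hedged illustration, the sketch below builds a fill-in-the-middle (FIM) style prompt using the special tokens documented in the Qwen2.5-Coder code repository linked under Resources; the exact token format should be verified against that repository before use.

```python
def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Arrange the code before and after the cursor into a FIM prompt string."""
    return f"<|fim_prefix|>{prefix}<|fim_suffix|>{suffix}<|fim_middle|>"

# Example: ask the model to fill in the body of a function,
# given the code surrounding the cursor position.
prompt = build_fim_prompt(
    prefix="def quicksort(items):\n    ",
    suffix="\n    return result\n",
)
print(prompt)
```

The completion returned by the model is then inserted between the prefix and suffix in the editor.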

Model Specifications

Technical details and capabilities of Qwen2.5-Coder 32B Instruct

Core Specifications

Parameters: 32B
Training tokens: 5.5T
Context window: 128K input / 128K output tokens
Knowledge cutoff: February 29, 2024
Release date: September 18, 2024

Capabilities & License

Multimodal support: No
Web hydrated: No
License: Apache-2.0

Resources

Research paper: https://arxiv.org/abs/2409.12186
API reference: https://www.alibabacloud.com/help/en/model-studio/developer-reference/use-qwen-by-calling-api
Code repository: https://github.com/QwenLM/Qwen2.5-Coder
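The API reference above describes an OpenAI-compatible chat-completion interface. As a minimal sketch, the snippet below assembles a request body for a single-turn coding prompt; the endpoint URL and model identifier are assumptions and should be confirmed against the linked documentation.

```python
import json

# Assumed endpoint and model id; verify both against the API reference.
API_URL = "https://dashscope-intl.aliyuncs.com/compatible-mode/v1/chat/completions"
MODEL_ID = "qwen2.5-coder-32b-instruct"

def build_request(prompt: str, max_tokens: int = 1024) -> dict:
    """Assemble the JSON body for a single-turn coding request."""
    return {
        "model": MODEL_ID,
        "messages": [
            {"role": "system", "content": "You are a helpful coding assistant."},
            {"role": "user", "content": prompt},
        ],
        "max_tokens": max_tokens,
    }

body = build_request("Write a Python function that checks if a string is a palindrome.")
print(json.dumps(body, indent=2))
```

The body is then POSTed to the endpoint with an `Authorization: Bearer <api key>` header, as with any OpenAI-compatible service.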

Performance Insights

Check out how Qwen2.5-Coder 32B Instruct handles various AI tasks through comprehensive benchmark results.

Benchmark           Score
HumanEval           92.7
GSM8K               91.1
MBPP                90.2
HellaSwag           83.0
WinoGrande          80.8
MMLU-Redux          77.5
MMLU                75.1
ARC-Challenge       70.5
MATH                57.2
TruthfulQA          54.2
MMLU-Pro            50.4
BigCodeBench-Full   49.6
TheoremQA           43.1
LiveCodeBench       31.4
BigCodeBench-Hard   27.0

Model Comparison

See how Qwen2.5-Coder 32B Instruct stacks up against other leading models across key performance metrics.

Benchmark       Qwen2.5-Coder 32B Instruct   Qwen2.5-Coder 7B Instruct   Qwen2.5 7B Instruct   Qwen2.5 72B Instruct   Gemma 3 27B
HumanEval       92.7                         88.4                        84.8                  86.6                   87.8
MBPP            90.2                         83.5                        79.2                  88.2                   74.4
LiveCodeBench   31.4                         18.2                        28.7                  55.5                   39.0
MATH            57.2                         46.6                        75.5                  83.1                   89.0
GSM8K           91.1                         83.9                        91.6                  95.8                   95.9

Detailed Benchmarks

Dive deeper into Qwen2.5-Coder 32B Instruct's performance across specific task categories. Expand each section to see detailed metrics and comparisons.

Non categorized

ARC-Challenge: 70.5 (benchmark average across compared models: 65.7%)
WinoGrande: 80.8 (benchmark average across compared models: 77.4%)
TheoremQA: 43.1 (benchmark average across compared models: 41.1%)

Providers Pricing Coming Soon

We're working on gathering comprehensive pricing data from all major providers for Qwen2.5-Coder 32B Instruct. Compare costs across platforms to find the best pricing for your use case.

