
Mistral Small 3.1 24B
Mistral AI
Mistral Small 3.1 24B Instruct, an upgraded variant of Mistral Small 3 (2501), combines 24 billion parameters, native image-text understanding, and a 128k-token context window. It delivers state-of-the-art performance for its size across reasoning (MMLU 80.62%, GPQA Diamond 45.96%), coding (HumanEval 88.41%, MBPP 74.71%), math (MATH 69.3%), and multilingual benchmarks (71.18% average across 24 languages), and it leads vision tasks such as MathVista (68.91%), DocVQA (94.08%), and AI2D (93.72%), outperforming similarly sized models like Gemma 3 27B and GPT-4o Mini in most categories. Released under an Apache 2.0 license and runnable on a single RTX 4090 or a 32 GB MacBook, it is well positioned for fast, private, and open deployment in applications ranging from conversational agents and function calling to document analysis and edge-device multimodal inference.
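To make the local-deployment claim concrete, here is a minimal inference sketch using vLLM's offline chat API. The Hugging Face repo id, the reduced context length, and the assumption that a quantized or reduced-precision build fits a single 24 GB GPU are all illustrative; verify them against the official model card before use.

```python
# Minimal local-inference sketch with vLLM (offline mode).
# Assumptions: the repo id and memory settings below are illustrative,
# not confirmed by this page; consult the official model card.
from vllm import LLM
from vllm.sampling_params import SamplingParams

llm = LLM(
    model="mistralai/Mistral-Small-3.1-24B-Instruct-2503",  # assumed repo id
    tokenizer_mode="mistral",   # Mistral models ship their own tokenizer format
    max_model_len=32768,        # trim the 128k context to fit limited GPU memory
)

messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "Extract the invoice number and total from: ..."},
]

# llm.chat applies the model's chat template and returns one RequestOutput.
outputs = llm.chat(messages, SamplingParams(temperature=0.15, max_tokens=256))
print(outputs[0].outputs[0].text)
```

The text-only call above is the simplest case; for document analysis or other vision tasks, the user message would carry image content parts instead of a plain string, and function calling is typically exposed by serving the same weights behind vLLM's OpenAI-compatible server rather than the offline API.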
Model Specifications
Technical details and capabilities of Mistral Small 3.1 24B
Performance Insights
Check out how Mistral Small 3.1 24B handles various AI tasks through comprehensive benchmark results.
Model Comparison
See how Mistral Small 3.1 24B stacks up against other leading models across key performance metrics.
Detailed Benchmarks
Dive deeper into Mistral Small 3.1 24B's performance across specific task categories. Expand each section to see detailed metrics and comparisons.
Coding: HumanEval
Knowledge: MMLU, MATH
Uncategorized: SimpleQA, MMMU-Pro, MathVista, MMMU, ChartQA, DocVQA, AI2D, TriviaQA
Provider Pricing: Coming Soon
We're working on gathering comprehensive pricing data from all major providers for Mistral Small 3.1 24B. Compare costs across platforms to find the best pricing for your use case.