
Mistral NeMo Instruct

Mistral AI

Mistral NeMo Instruct is a multilingual model with 12 billion parameters and a 128,000-token context window. Built for worldwide use, it delivers strong performance across a diverse range of languages.

Model Specifications

Technical details and capabilities of Mistral NeMo Instruct

Core Specifications

Parameters: 12B
Input / output tokens: 128K / 128K
Release date: July 17, 2024

Capabilities & License

Multimodal support: Not supported
Web hydrated: No
License: Apache 2.0

Resources

API reference: https://docs.mistral.ai/getting-started/models/models_overview/
Code repository: https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407
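As a hedged sketch of how the model might be queried programmatically, the snippet below assembles a chat-completions request payload. The endpoint path and the model identifier `open-mistral-nemo` are assumptions here; consult the API reference linked above for the current values.

```python
import json

# Hypothetical sketch: build a chat-completions request for Mistral's API.
# The endpoint path and model identifier are assumptions -- verify them
# against the API reference before use.
API_URL = "https://api.mistral.ai/v1/chat/completions"

def build_request(prompt: str, max_tokens: int = 256) -> dict:
    """Assemble the JSON payload for a single-turn chat request."""
    return {
        "model": "open-mistral-nemo",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

payload = build_request("Summarize the Apache 2.0 license in one sentence.")
print(json.dumps(payload, indent=2))
```

The payload would then be POSTed to the endpoint with an API key in the `Authorization` header.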

Performance Insights

Check out how Mistral NeMo Instruct handles various AI tasks through comprehensive benchmark results.

HellaSwag: 83.5 (93%)
Winogrande: 76.8 (85%)
TriviaQA: 73.8 (82%)
CommonSenseQA: 70.4 (78%)
MMLU: 68.0 (76%)
OpenBookQA: 60.6 (67%)
TruthfulQA: 50.3 (56%)
NaturalQuestions: 31.2 (35%)

(Percentages are relative to the chart's 90-point axis maximum.)
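The parenthesized percentages attached to each benchmark score can be reproduced by normalizing the raw score against the chart's 90-point axis maximum, as this short check shows:

```python
# Reproduce the parenthesized percentages: each benchmark score is
# normalized against the chart's 90-point axis maximum and rounded.
CHART_MAX = 90

scores = {
    "HellaSwag": 83.5,
    "Winogrande": 76.8,
    "TriviaQA": 73.8,
    "CommonSenseQA": 70.4,
    "MMLU": 68.0,
    "OpenBookQA": 60.6,
    "TruthfulQA": 50.3,
    "NaturalQuestions": 31.2,
}

def normalized(score: float, chart_max: int = CHART_MAX) -> int:
    """Score as a whole-number percentage of the chart maximum."""
    return round(score / chart_max * 100)

for name, score in scores.items():
    print(f"{name}: {score} ({normalized(score)}%)")
```

Every value matches the percentage printed in the chart, e.g. 83.5 / 90 ≈ 93%.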

Model Comparison

See how Mistral NeMo Instruct stacks up against other leading models across key performance metrics.

HellaSwag:
Mistral NeMo Instruct: 83.5 (93%)
Qwen2.5-Coder 7B Instruct: 76.8 (85%)
Qwen2.5-Coder 32B Instruct: 83.0 (92%)
Qwen2 72B Instruct: 87.6 (97%)
Qwen2.5 32B Instruct: 85.2 (95%)
Phi-3.5-mini-instruct: 69.4 (77%)

TruthfulQA:
Mistral NeMo Instruct: 50.3 (56%)
Qwen2.5-Coder 7B Instruct: 50.6 (56%)
Qwen2.5-Coder 32B Instruct: 54.2 (60%)
Qwen2 72B Instruct: 54.8 (61%)
Qwen2.5 32B Instruct: 57.8 (64%)
Phi-3.5-mini-instruct: 64.0 (71%)

MMLU:
Mistral NeMo Instruct: 68.0 (76%)
Qwen2.5-Coder 7B Instruct: 67.6 (75%)
Qwen2.5-Coder 32B Instruct: 75.1 (83%)
Qwen2 72B Instruct: 82.3 (91%)
Qwen2.5 32B Instruct: 83.3 (93%)
Phi-3.5-mini-instruct: 69.0 (77%)

(Percentages are relative to the chart's 90-point axis maximum.)
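To make the comparison easier to scan, this sketch tabulates the figures from the comparison chart and reports the leader on each benchmark:

```python
# Comparison figures transcribed from the chart above.
results = {
    "HellaSwag": {
        "Mistral NeMo Instruct": 83.5,
        "Qwen2.5-Coder 7B Instruct": 76.8,
        "Qwen2.5-Coder 32B Instruct": 83.0,
        "Qwen2 72B Instruct": 87.6,
        "Qwen2.5 32B Instruct": 85.2,
        "Phi-3.5-mini-instruct": 69.4,
    },
    "TruthfulQA": {
        "Mistral NeMo Instruct": 50.3,
        "Qwen2.5-Coder 7B Instruct": 50.6,
        "Qwen2.5-Coder 32B Instruct": 54.2,
        "Qwen2 72B Instruct": 54.8,
        "Qwen2.5 32B Instruct": 57.8,
        "Phi-3.5-mini-instruct": 64.0,
    },
    "MMLU": {
        "Mistral NeMo Instruct": 68.0,
        "Qwen2.5-Coder 7B Instruct": 67.6,
        "Qwen2.5-Coder 32B Instruct": 75.1,
        "Qwen2 72B Instruct": 82.3,
        "Qwen2.5 32B Instruct": 83.3,
        "Phi-3.5-mini-instruct": 69.0,
    },
}

def leader(benchmark: str) -> str:
    """Return the model with the highest score on a benchmark."""
    return max(results[benchmark], key=results[benchmark].get)

for bench in results:
    print(f"{bench}: {leader(bench)} ({results[bench][leader(bench)]})")
```

On these numbers, Qwen2 72B Instruct leads on HellaSwag, Phi-3.5-mini-instruct on TruthfulQA, and Qwen2.5 32B Instruct on MMLU.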

Detailed Benchmarks

Dive deeper into Mistral NeMo Instruct's performance across specific task categories. Expand each section to see detailed metrics and comparisons.

Non-categorized

Winogrande: 76.8 (cross-model average: 82.2%)
OpenBookQA: 60.6 (cross-model average: 76.5%)
TriviaQA: 73.8 (cross-model average: 78.0%)
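Assuming the reported cross-model averages are on the same 0-100 scale as the raw scores, a quick calculation shows how far Mistral NeMo Instruct sits from the average on each of these benchmarks:

```python
# Mistral NeMo Instruct's scores vs. the cross-model averages reported above.
# Assumes averages are on the same 0-100 scale as the raw scores.
model_scores = {"Winogrande": 76.8, "OpenBookQA": 60.6, "TriviaQA": 73.8}
averages = {"Winogrande": 82.2, "OpenBookQA": 76.5, "TriviaQA": 78.0}

def delta(benchmark: str) -> float:
    """Points above (+) or below (-) the cross-model average."""
    return round(model_scores[benchmark] - averages[benchmark], 1)

for bench in model_scores:
    print(f"{bench}: {delta(bench):+.1f} vs. average")
```

The model trails the average on all three, with the largest gap on OpenBookQA.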

Providers Pricing Coming Soon

We're working on gathering comprehensive pricing data from all major providers for Mistral NeMo Instruct. Compare costs across platforms to find the best pricing for your use case.

