
Mistral Large 2

Mistral AI

This 123B-parameter model delivers strong performance in code generation, mathematics, and logical reasoning. It offers multilingual support spanning many languages, a 128k-token context window for long, complex tasks, and advanced function calling. The model is particularly good at following instructions and producing concise, focused outputs.
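For context on how the model is typically accessed, the sketch below sends a basic chat request to Mistral's hosted chat completions API. The endpoint path, payload fields, and the `mistral-large-latest` model alias are assumptions drawn from Mistral's public documentation (https://docs.mistral.ai/) and should be checked against the current API reference.

```python
import os

import requests

# Minimal sketch of a single-turn request to Mistral's hosted API.
# Endpoint path and field names are assumptions; verify against the docs.
API_URL = "https://api.mistral.ai/v1/chat/completions"


def ask(prompt: str) -> str:
    """Send a single-turn chat request and return the model's reply text."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
        json={
            # "mistral-large-latest" is assumed to resolve to Mistral Large 2;
            # check the model list in the API docs for the exact identifier.
            "model": "mistral-large-latest",
            "messages": [{"role": "user", "content": prompt}],
            "max_tokens": 256,
        },
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(ask("Summarize Mistral Large 2's strengths in two sentences."))
```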

Model Specifications

Technical details and capabilities of Mistral Large 2

Core Specifications

Parameters: 123B
Input / Output tokens: 128K / 128K
Release date: July 23, 2024

Capabilities & License

Multimodal Support: Not supported
Web Hydrated: No
License: Mistral Research License

Resources

API Reference: https://docs.mistral.ai/
Playground: https://chat.mistral.ai/
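Because function calling is one of the model's headline capabilities, here is a hedged sketch of how a tool definition might be attached to a chat request against the same endpoint. The `tools`, `tool_choice`, and `tool_calls` field names and the `get_weather` tool are illustrative assumptions; confirm the exact schema in the API reference linked above.

```python
import json
import os

import requests

API_URL = "https://api.mistral.ai/v1/chat/completions"

# A hypothetical tool definition, used purely for illustration.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
    json={
        "model": "mistral-large-latest",  # assumed identifier, see note above
        "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
        "tools": tools,
        "tool_choice": "auto",
    },
    timeout=60,
)
response.raise_for_status()
message = response.json()["choices"][0]["message"]

# When the model opts to call the tool, its arguments arrive as a JSON string.
for call in message.get("tool_calls") or []:
    print(call["function"]["name"], json.loads(call["function"]["arguments"]))
```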

Performance Insights

See how Mistral Large 2 performs across a range of AI tasks in the benchmark results below.

GSM8K: 93
HumanEval: 92
MT-Bench: 86.3
MMLU: 84
MMLU French: 82.8
Arena Hard: 71.5
CRAG: 67
LongBench: 34.4
HELMET LongQA: 26.7
FinanceBench (FullDoc): 24.6

Detailed Benchmarks

A closer look at Mistral Large 2's performance in specific task categories, with each benchmark score shown alongside the average across compared models.

Math

GSM8K: 93 (average across models: 91.5%)

Coding

HumanEval: 92 (average across models: 85.1%)

Knowledge

MMLU: 84 (average across models: 83.2%)

Uncategorized

CRAG: 67 (average across models: 59.8%)
HELMET LongQA: 26.7 (average across models: 41.4%)
LongBench: 34.4 (average across models: 25.9%)

Provider Pricing Coming Soon

We're working on gathering comprehensive pricing data from all major providers for Mistral Large 2. Compare costs across platforms to find the best pricing for your use case.

