Jamba 1.5 Mini

AI21 Labs

Jamba 1.5 Mini is an instruction-tuned foundation model in AI21's Jamba 1.5 family. It combines the strengths of state-space model (SSM) and Transformer architectures, giving it strong long-context understanding, fast inference, and high-quality output.
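
To make the hybrid design concrete, below is a toy sketch of a layer stack that interleaves full attention blocks with a simplified linear recurrence standing in for the Mamba-style SSM layer. Everything here (the SimpleSSM recurrence, the HybridBlock wrapper, the one-attention-per-four-layers interleave, the layer sizes) is an illustrative assumption for exposition, not AI21's implementation.

```python
# Toy sketch of a hybrid SSM/Transformer stack in the spirit of Jamba.
# NOT AI21's implementation: SimpleSSM, HybridBlock, and the 1-in-4
# attention interleave are illustrative assumptions only.
import torch
import torch.nn as nn

class SimpleSSM(nn.Module):
    """Minimal gated linear recurrence standing in for a Mamba-style SSM layer."""
    def __init__(self, dim: int):
        super().__init__()
        self.in_proj = nn.Linear(dim, dim)
        self.gate = nn.Linear(dim, dim)
        self.decay = nn.Parameter(torch.zeros(dim))  # per-channel state decay
        self.out_proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, seq, dim)
        u = self.in_proj(x)
        a = torch.sigmoid(self.decay)          # decay in (0, 1)
        state = torch.zeros_like(u[:, 0])      # fixed-size state, no KV cache
        outs = []
        for t in range(u.size(1)):             # linear-time scan over the sequence
            state = a * state + (1 - a) * u[:, t]
            outs.append(state)
        h = torch.stack(outs, dim=1)
        return self.out_proj(h * torch.sigmoid(self.gate(x)))

class HybridBlock(nn.Module):
    """Attention or SSM mixer followed by an MLP, with pre-norm residuals."""
    def __init__(self, dim: int, use_attention: bool):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.use_attention = use_attention
        if use_attention:
            self.mixer = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        else:
            self.mixer = SimpleSSM(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.norm1(x)
        if self.use_attention:
            h, _ = self.mixer(h, h, h, need_weights=False)
        else:
            h = self.mixer(h)
        x = x + h
        return x + self.mlp(self.norm2(x))

# Interleave: one attention block for every few SSM blocks.
dim, layers = 512, 8
stack = nn.Sequential(*[HybridBlock(dim, use_attention=(i % 4 == 0)) for i in range(layers)])
tokens = torch.randn(2, 64, dim)
print(stack(tokens).shape)  # torch.Size([2, 64, 512])
```

The design intuition: only the attention layers carry a KV cache, so interleaving many constant-state SSM layers between them keeps memory growth in context length small, which is one plausible reading of why the hybrid design supports fast processing at very long contexts.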

Model Specifications

Technical details and capabilities of Jamba 1.5 Mini

Core Specifications

Parameters: 52.0B
Input / output tokens: 256.1K / 256.1K
Knowledge cutoff: March 4, 2024
Release date: August 21, 2024

Capabilities & License

Multimodal support: Not supported
Web hydrated: No
License: Jamba Open Model License

Resources

Research paper: https://arxiv.org/abs/2408.12570
API reference: https://docs.ai21.com/reference/jamba-15-api-ref
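
For orientation, here is a minimal sketch of calling Jamba 1.5 Mini over HTTP. The endpoint path, the "jamba-1.5-mini" model id, and the payload shape follow my reading of the API reference linked above; treat them as assumptions and consult the docs for the authoritative schema.

```python
# Hedged sketch: endpoint, model id, and payload shape are assumptions
# based on the AI21 API reference linked above; verify before relying on it.
import os
import requests

resp = requests.post(
    "https://api.ai21.com/studio/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['AI21_API_KEY']}"},
    json={
        "model": "jamba-1.5-mini",
        "messages": [
            {"role": "user", "content": "Summarize the Jamba architecture in two sentences."}
        ],
        "max_tokens": 200,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```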

Performance Insights

Check out how Jamba 1.5 Mini handles various AI tasks through comprehensive benchmark results.

Benchmark        Score
ARC Challenge    85.7
GSM-8K           75.8
MMLU             69.7
TruthfulQA       54.1
Arena Hard       46.1
MMLU-Pro         42.5
Wild Bench       42.4
GPQA             32.3

Model Comparison

See how Jamba 1.5 Mini stacks up against other leading models across key performance metrics.

Benchmark      Jamba 1.5 Mini  Phi-3.5-mini-instruct  Phi-3.5-MoE-instruct  Llama 3.1 8B Instruct  Jamba 1.5 Large
Arena Hard     46.1            37.0                   37.9                  28.2                   65.4
MMLU           69.7            69.0                   78.9                  69.4                   81.2
MMLU-Pro       42.5            47.4                   54.3                  48.3                   53.5
GPQA           32.3            30.4                   36.8                  30.4                   36.9
ARC Challenge  85.7            84.6                   91.0                  83.4                   93.0

Detailed Benchmarks

Dive deeper into Jamba 1.5 Mini's performance across specific task categories.

Wild Bench: 42.4 (average across compared models: 45.5%)
GSM-8K: 75.8 (average across compared models: 81.4%)

Providers Pricing Coming Soon

We're working on gathering comprehensive pricing data from all major providers for Jamba 1.5 Mini. Compare costs across platforms to find the best pricing for your use case.

