Jamba 1.5 Large

AI21 Labs

Jamba 1.5 Large is an instruction-tuned model that combines the strengths of SSM (Mamba) and Transformer architectures. This hybrid design lets it process long input sequences faster and with less memory than a pure Transformer while maintaining high-quality outputs.
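To make the long-context claim concrete, here is a minimal, illustrative Python sketch of the hybrid idea: cheap linear-time SSM layers make up most of the stack, with quadratic attention interleaved only occasionally (Jamba reportedly uses roughly one attention layer per eight layers). This is a toy with random values, not AI21's implementation; it omits Jamba's mixture-of-experts layers, normalization, and all learned parameters.

    import numpy as np

    def ssm_scan(x, a=0.9):
        # Toy linear state-space recurrence h_t = a*h_{t-1} + x_t:
        # O(n) time with a fixed-size state, so memory does not grow
        # with sequence length the way a Transformer KV cache does.
        h = np.zeros_like(x[0])
        out = []
        for x_t in x:
            h = a * h + x_t
            out.append(h)
        return np.stack(out)

    def attention(x):
        # Vanilla self-attention: O(n^2) in sequence length.
        scores = x @ x.T / np.sqrt(x.shape[-1])
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        return weights @ x

    def hybrid_stack(x, n_layers=8):
        # Interleave: mostly SSM layers, attention once every 8 layers.
        for i in range(n_layers):
            mix = attention(x) if (i + 1) % 8 == 0 else ssm_scan(x)
            x = x + mix  # residual connection
        return x

    tokens = np.random.randn(16, 8)    # (sequence length, hidden size)
    print(hybrid_stack(tokens).shape)  # -> (16, 8)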

Model Specifications

Technical details and capabilities of Jamba 1.5 Large

Core Specifications

398B Parameters (mixture-of-experts, 94B active)

Model size and complexity

256K / 256K

Input / Output tokens

March 4, 2024

Knowledge cutoff date

August 21, 2024

Release date

Capabilities & License

Multimodal Support
Not Supported
Web Hydrated
No
License
Jamba Open Model License

Resources

API Reference
https://docs.ai21.com/reference/jamba-15-api-ref
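
For quick experimentation, the model can be called over HTTP. The snippet below is a minimal sketch based on the API reference linked above; the endpoint path, request fields, and model identifier ("jamba-1.5-large") should be verified against the current docs, and the AI21_API_KEY environment variable is assumed to hold your API key.

    import os
    import requests

    # Minimal sketch of a chat call to Jamba 1.5 Large via AI21's API.
    # Verify endpoint, fields, and model name against the API reference above.
    response = requests.post(
        "https://api.ai21.com/studio/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['AI21_API_KEY']}"},
        json={
            "model": "jamba-1.5-large",
            "messages": [
                {"role": "user",
                 "content": "Summarize the Jamba 1.5 architecture in two sentences."}
            ],
            "max_tokens": 200,
        },
        timeout=60,
    )
    response.raise_for_status()
    print(response.json()["choices"][0]["message"]["content"])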

Performance Insights

Check out how Jamba 1.5 Large handles various AI tasks through comprehensive benchmark results.

Benchmark        Score (%)
ARC Challenge    93.0
GSM-8K           87.0
MMLU             81.2
Arena Hard       65.4
TruthfulQA       58.3
MMLU-Pro         53.5
Wild Bench       48.5
GPQA             36.9

Model Comparison

See how Jamba 1.5 Large stacks up against other leading models across key performance metrics.

Benchmark        Jamba 1.5 Large   Jamba 1.5 Mini   Phi-3.5-MoE-instruct   Phi-3.5-mini-instruct   Llama 3.1 8B Instruct
Arena Hard       65.4              46.1             37.9                   37.0                    28.2
MMLU             81.2              69.7             78.9                   69.0                    69.4
MMLU-Pro         53.5              42.5             54.3                   47.4                    48.3
GPQA             36.9              32.3             36.8                   30.4                    30.4
ARC Challenge    93.0              85.7             91.0                   84.6                    83.4

Detailed Benchmarks

Dive deeper into Jamba 1.5 Large's performance across specific task categories. Expand each section to see detailed metrics and comparisons.

Knowledge

MMLU (average across compared models: 80.7%)

GPQA (average across compared models: 41.8%)

Uncategorized

Wild Bench (average across compared models: 45.5%)

GSM-8K (average across compared models: 81.4%)

Providers Pricing (Coming Soon)

We're working on gathering comprehensive pricing data from all major providers for Jamba 1.5 Large. Compare costs across platforms to find the best pricing for your use case.

