
Jamba 1.6 Large


AI21 Labs

Jamba 1.6 Large is a hybrid SSM-Transformer foundation model that redefines what open models can offer enterprises. With a 256K-token context window, 94B active parameters (398B total), and ExpertsInt8 quantization, it delivers industry-leading performance on long-context and grounded reasoning tasks without compromising speed or deployability. On LongBench it scores 38.8, outperforming Mistral Large 2 (34.4) and Llama 3.3 70B (21.7). On CRAG, which evaluates citation-grounded QA, it leads with 78.2, and its 76.5 Arena-Hard score signals top-tier instruction following.

This performance holds up in real-world, latency-constrained settings, with up to 2.5× faster inference than comparable models, making it practical for use cases like document synthesis, enterprise chatbots, and RAG pipelines at scale. AI21 positions Jamba not just as a benchmark leader but as a deployment-ready, private-by-default alternative to GPT-4-tier closed models. Customers like Fnac and Educa EdTech report significant gains in output precision, latency, and retrieval accuracy; some even switched from Jamba 1.5 Large to Jamba 1.6 Mini without quality loss.

Jamba's full-stack support for structured outputs, tool use, and fine-tuning (via qLoRA + FSDP) makes it unusually accessible for enterprise customization. Unlike many open models, Jamba is architected for private VPC or on-prem deployment, allowing regulated industries to maintain full control of their data while leveraging state-of-the-art model capabilities. In a fast-maturing open ecosystem, Jamba 1.6 is setting a new bar for what open actually enables.
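To make the grounded-QA / RAG usage described above concrete, the sketch below packs retrieved documents and a question into a single chat-completions request. The model identifier ("jamba-large-1.6"), the OpenAI-style message schema, and the citation instruction are assumptions for illustration; consult AI21's API reference for the authoritative request shape.

```python
# Illustrative sketch of a RAG-style request to Jamba 1.6 Large.
# The model name and message schema below are assumptions, not verbatim
# from this page; check AI21's API reference before using them.
import json


def build_grounded_request(question: str, documents: list[str]) -> dict:
    """Pack retrieved documents and a user question into one chat payload,
    relying on the model's long context window to hold the sources."""
    context = "\n\n".join(
        f"[doc {i + 1}]\n{doc}" for i, doc in enumerate(documents)
    )
    return {
        "model": "jamba-large-1.6",  # assumed identifier
        "messages": [
            {
                "role": "system",
                "content": "Answer using only the documents below, "
                           "citing them by number.\n\n" + context,
            },
            {"role": "user", "content": question},
        ],
        "max_tokens": 512,
    }


payload = build_grounded_request(
    "What was Q4 revenue?",
    ["Q4 revenue was $12M, up 8% year over year."],
)
print(json.dumps(payload, indent=2))
```

Because all sources travel inside the prompt, this pattern trades retrieval precision for simplicity; with a 256K window, many pipelines can skip aggressive chunking entirely.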

Model Specifications

Technical details and capabilities of Jamba 1.6 Large

Core Specifications

Parameters: 398B (model size and complexity)

Input / output tokens: 256K / 256K

Knowledge cutoff date: March 4, 2024

Release date: March 12, 2025
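Since the 256K-token window is the headline specification, a quick back-of-the-envelope check can tell you whether a document set will fit before you send a request. The sketch below assumes the common heuristic of roughly 4 characters per token; real token counts require the model's tokenizer.

```python
# Back-of-the-envelope context-budget check for a 256K-token window.
# The 4-chars-per-token ratio is a rough heuristic, not an exact tokenizer.
CONTEXT_WINDOW = 256_000
CHARS_PER_TOKEN = 4


def fits_in_context(documents: list[str], reserve_for_output: int = 4_000) -> bool:
    """Estimate prompt tokens and leave headroom for the model's answer."""
    estimated_tokens = sum(len(d) for d in documents) // CHARS_PER_TOKEN
    return estimated_tokens + reserve_for_output <= CONTEXT_WINDOW


print(fits_in_context(["x" * 100_000]))    # ~25K tokens: fits
print(fits_in_context(["x" * 2_000_000]))  # ~500K tokens: does not fit
```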

Capabilities & License

Multimodal support: Not supported

Web hydrated: No

License: Jamba Open Model License

Resources

API Reference: https://docs.ai21.com/reference/jamba-1-6-api-ref

Playground: https://studio.ai21.com/v2/chat

Performance Insights

Check out how Jamba 1.6 Large handles various AI tasks through comprehensive benchmark results.

Benchmark         Score
CRAG              78.2
Arena Hard        76.5
HELMET RAG        72.0
FinanceBench      64.5
HELMET LongQA     56.7
LongBench         38.8

Model Comparison

See how Jamba 1.6 Large stacks up against other leading models across key performance metrics.

Model                     Arena Hard   CRAG   FinanceBench   HELMET LongQA   LongBench
Jamba 1.6 Large           76.5         78.2   64.5           56.7            38.8
Jamba 1.6 Mini            51.2         76.2   45.4           46.9            32.0
Llama 3.3 70B Instruct    65.8         61.7   20.0           52.8            21.7
Ministral 8B Instruct     70.9         52.0   19.2           33.0            17.5
Llama 3.1 8B Instruct     28.2         60.0   28.4           29.2            17.7
Command R+                33.1         49.4   13.3           44.8            18.9

Detailed Benchmarks

Dive deeper into Jamba 1.6 Large's performance across specific task categories. Expand each section to see detailed metrics and comparisons.

Average scores across the compared models, per benchmark:

Arena Hard: 66.9%
CRAG: 61.8%
FinanceBench: 31.8%
HELMET LongQA: 41.4%
LongBench: 25.9%

Providers Pricing Coming Soon

We're working on gathering comprehensive pricing data from all major providers for Jamba 1.6 Large. Compare costs across platforms to find the best pricing for your use case.

