Nova Lite

Amazon

Nova Lite is a high-performance multimodal model designed for speed and affordability. It rapidly processes images, video, documents, and text.

Model Specifications

Technical details and capabilities of Nova Lite

Core Specifications

Input / Output tokens: 300K / 2K
Release date: November 19, 2024
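The 300K-token input limit above can be budget-checked before a call. A minimal sketch follows; note that real token counts depend on the model's tokenizer, and the ~4 characters/token ratio used here is only a rough heuristic, not Nova Lite's actual tokenization.

```python
# Rough pre-flight check against Nova Lite's listed context limits.
# The chars-per-token ratio is a heuristic assumption, not the real tokenizer.

INPUT_LIMIT = 300_000   # input tokens (from the spec above)
OUTPUT_LIMIT = 2_000    # output tokens (from the spec above)

def fits_context(text: str, chars_per_token: float = 4.0) -> bool:
    """Estimate whether a prompt fits the input window."""
    estimated_tokens = len(text) / chars_per_token
    return estimated_tokens <= INPUT_LIMIT
```

For production use, count tokens with the provider's own tooling rather than a character heuristic.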

Capabilities & License

Multimodal Support: Supported
Web Hydrated: Yes
License: Proprietary

Resources

Research Paper: https://www.amazon.science/publications/the-amazon-nova-family-of-models-technical-report-and-model-card
API Reference: https://aws.amazon.com/bedrock/amazon-nova-lite
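Nova Lite is served through Amazon Bedrock, so it can be called with the Bedrock Runtime Converse API. The sketch below builds a request and sends it via boto3; the model ID `amazon.nova-lite-v1:0` and the region are assumptions — check the Bedrock console for the identifiers available in your account.

```python
# Minimal sketch of calling Nova Lite via the Amazon Bedrock Converse API.
# The model ID and region are assumptions; verify them in your AWS account.

def build_converse_request(prompt: str, max_tokens: int = 2000) -> dict:
    """Build keyword arguments for bedrock_runtime.converse().

    max_tokens is capped at 2,000 to match the model's listed output limit.
    """
    return {
        "modelId": "amazon.nova-lite-v1:0",  # assumed identifier
        "messages": [
            {"role": "user", "content": [{"text": prompt}]},
        ],
        "inferenceConfig": {"maxTokens": min(max_tokens, 2000), "temperature": 0.3},
    }

def invoke_nova_lite(prompt: str) -> str:
    """Send the request via boto3 (requires AWS credentials with Bedrock access)."""
    import boto3  # deferred import: only needed when actually invoking

    client = boto3.client("bedrock-runtime", region_name="us-east-1")
    response = client.converse(**build_converse_request(prompt))
    return response["output"]["message"]["content"][0]["text"]
```

Multimodal inputs (images, video, documents) are passed as additional entries in the `content` list; see the API reference above for the exact payload shapes.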

Performance Insights

The benchmark results below show how Nova Lite handles a broad range of AI tasks.

Benchmark                        Score
GSM8k                            94.5
ARC-C                            92.4
Doc VQA                          92.4
IFEval                           89.7
Translation en→Set1 COMET22      88.8
Translation Set1→en COMET22      88.8
Chart QA                         86.8
HumanEval                        85.4
BBH                              82.4
MMLU                             80.5
GroundUI-1K                      80.2
Text VQA                         80.2
DROP                             80.2
VATEX                            77.8
VisualWebBench                   77.7
FinQA                            73.6
MATH                             73.3
Ego Schema                       71.4
BFCL                             66.6
MM-Mind2Web                      60.7
MMMU                             56.2
CRAG                             43.8
Translation Set1→en spBleu       43.1
GPQA                             42.0
Translation en→Set1 spBleu       41.5
LVBench                          40.4
SQuALITY                         19.2

Model Comparison

See how Nova Lite stacks up against other leading models across key performance metrics.

Benchmark    Nova Lite   GPT-4o mini   Nova Micro   Nova Pro   GPT-4 Turbo   Llama 3.1 405B Instruct
MMLU         80.5        82.0          77.6         85.9       86.5          87.3
HumanEval    85.4        87.2          81.1         89.0       87.1          89.0
DROP         80.2        79.7          79.3         85.4       86.0          84.8
GPQA         42.0        40.2          40.0         46.9       48.0          50.7
MATH         73.3        70.2          69.3         76.6       72.6          73.8
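The comparison above can also be explored programmatically. This is a small sketch that transcribes the scores from the table and ranks the models by their unweighted average across the five benchmarks; averaging raw scores across benchmarks with different scales is a simplification, not an official ranking method.

```python
# Scores transcribed from the comparison table above.
SCORES = {
    "Nova Lite":               {"MMLU": 80.5, "HumanEval": 85.4, "DROP": 80.2, "GPQA": 42.0, "MATH": 73.3},
    "GPT-4o mini":             {"MMLU": 82.0, "HumanEval": 87.2, "DROP": 79.7, "GPQA": 40.2, "MATH": 70.2},
    "Nova Micro":              {"MMLU": 77.6, "HumanEval": 81.1, "DROP": 79.3, "GPQA": 40.0, "MATH": 69.3},
    "Nova Pro":                {"MMLU": 85.9, "HumanEval": 89.0, "DROP": 85.4, "GPQA": 46.9, "MATH": 76.6},
    "GPT-4 Turbo":             {"MMLU": 86.5, "HumanEval": 87.1, "DROP": 86.0, "GPQA": 48.0, "MATH": 72.6},
    "Llama 3.1 405B Instruct": {"MMLU": 87.3, "HumanEval": 89.0, "DROP": 84.8, "GPQA": 50.7, "MATH": 73.8},
}

def rank_by_average(scores: dict) -> list:
    """Return (model, mean score) pairs, best first."""
    means = {model: sum(s.values()) / len(s) for model, s in scores.items()}
    return sorted(means.items(), key=lambda kv: kv[1], reverse=True)
```

Under this naive average, Nova Lite sits in the middle of the pack: behind the larger Nova Pro, GPT-4 Turbo, and Llama 3.1 405B, but ahead of GPT-4o mini and Nova Micro.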

Detailed Benchmarks

Dive deeper into Nova Lite's performance across specific task categories. Expand each section to see detailed metrics and comparisons.

Each benchmark below lists Nova Lite's score alongside the average across all models compared on that benchmark.

Benchmark                        Nova Lite   Avg (all models)

Math
  GSM8k                          94.5        81.3

Reasoning
  DROP                           80.2        80.1

Non categorized
  ARC-C                          92.4        82.3
  SQuALITY                       19.2        21.2
  LVBench                        40.4        41.0
  FinQA                          73.6        72.0
  CRAG                           43.8        53.4
  VisualWebBench                 77.7        78.7
  MM-Mind2Web                    60.7        62.2
  GroundUI-1K                    80.2        80.8
  BFCL                           66.6        70.6
  MMMU                           56.2        54.2
  Chart QA                       86.8        88.0
  Doc VQA                        92.4        93.0
  Text VQA                       80.2        78.4
  VATEX                          77.8        77.8
  Ego Schema                     71.4        71.8
  Translation en→Set1 spBleu     41.5        41.7
  Translation en→Set1 COMET22    88.8        88.8
  Translation Set1→en spBleu     43.1        43.4
  Translation Set1→en COMET22    88.8        88.8
  IFEval                         89.7        87.2
  BBH                            82.4        81.8

Providers Pricing Coming Soon

We're gathering comprehensive pricing data for Nova Lite so you can compare costs across platforms and find the best option for your use case.
