
Llama 3.3 70B Instruct
Meta Llama
Llama 3.3 is a state-of-the-art multilingual large language model tuned for instruction-following and dialogue. The 70-billion-parameter generative model outperforms many openly available and proprietary chat models on standard industry benchmarks, and its 128,000-token context window makes it well suited to both commercial applications and research across multiple languages.
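For reference, here is a minimal sketch of how the instruct model is commonly queried through the Hugging Face transformers pipeline. The repository id, the bfloat16 dtype, and the hardware assumptions are illustrative and not part of this page; the gated repo requires access approval from Meta, and the 70B weights need multiple GPUs or quantization in practice.

```python
# Minimal sketch: chatting with Llama 3.3 70B Instruct via the transformers pipeline.
# Assumes access to the gated repo "meta-llama/Llama-3.3-70B-Instruct" and enough
# GPU memory (multiple GPUs or quantization are typically required for 70B weights).
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.3-70B-Instruct",
    torch_dtype=torch.bfloat16,   # halves memory vs. fp32
    device_map="auto",            # shard layers across available GPUs
)

messages = [
    {"role": "system", "content": "You are a concise multilingual assistant."},
    {"role": "user", "content": "Summarize the benefits of a 128K-token context window."},
]

outputs = generator(messages, max_new_tokens=256)
# The pipeline returns the full chat transcript; the last message is the model's reply.
print(outputs[0]["generated_text"][-1]["content"])
```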
Model Specifications
Technical details and capabilities of Llama 3.3 70B Instruct
Core Specifications
70B Parameters
Model size and complexity
15T Training Tokens
Amount of data used in training
128K / 128K
Input / Output tokens
November 30, 2023
Knowledge cutoff date
December 5, 2024
Release date
Capabilities & License
Performance Insights
Check out how Llama 3.3 70B Instruct handles various AI tasks through comprehensive benchmark results.
Model Comparison
See how Llama 3.3 70B Instruct stacks up against other leading models across key performance metrics.
Detailed Benchmarks
Dive deeper into Llama 3.3 70B Instruct's performance across specific task categories. Expand each section to see detailed metrics and comparisons; a sketch of how such scores are typically reproduced appears after the benchmark list below.
Coding
HumanEval
Knowledge
MMLU
GPQA
MATH
Uncategorized
MMLU-Pro
IFEval
MBPP EvalPlus
MGSM
Arena Hard
CRAG
FinanceBench
HELMET LongQA
LongBench
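Scores like the ones listed above are usually produced with an open evaluation harness. The sketch below uses EleutherAI's lm-evaluation-harness to run the MMLU knowledge benchmark; the task name, the 5-shot setting, the batch size, and the result-key layout are assumptions that may differ from the exact configuration behind the numbers on this page.

```python
# Minimal sketch: reproducing a knowledge benchmark (MMLU, 5-shot) with
# EleutherAI's lm-evaluation-harness. Settings here are illustrative only.
from lm_eval import simple_evaluate

results = simple_evaluate(
    model="hf",  # Hugging Face transformers backend
    model_args="pretrained=meta-llama/Llama-3.3-70B-Instruct,dtype=bfloat16",
    tasks=["mmlu"],
    num_fewshot=5,
    batch_size=8,
)

# Aggregate metrics for the MMLU task group; key layout varies across harness versions.
print(results["results"]["mmlu"])
```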
Providers Pricing Coming Soon
We're working on gathering comprehensive pricing data from all major providers for Llama 3.3 70B Instruct. Compare costs across platforms to find the best pricing for your use case.
Share your feedback
Hi, I'm Charlie Palars, the founder of Deepranking.ai. I'm always looking for ways to improve the site and make it more useful for you. You can write to me through this form or directly on X at @palarsio.
Your feedback helps us improve our service