Price Per Token

Meta-llama vs Mistral AI

Llama 3.3 70B Instruct vs Devstral 2 2512

A detailed comparison of pricing, benchmarks, and capabilities


Key Takeaways

Llama 3.3 70B Instruct wins:

  • Cheaper input tokens
  • Cheaper output tokens
  • Higher generation speed (tokens/second)

Devstral 2 2512 wins:

  • Larger context window
  • Higher intelligence benchmark
  • Better at coding
  • Better at math
  • Lower time to first token

Price Advantage: Llama 3.3 70B Instruct
Benchmark Advantage: Devstral 2 2512
Context Window: Devstral 2 2512
Speed: Llama 3.3 70B Instruct

Pricing Comparison


Metric                        Llama 3.3 70B Instruct   Devstral 2 2512   Winner
Input (per 1M tokens)         $0.10                    $0.40             Llama 3.3 70B Instruct
Output (per 1M tokens)        $0.32                    $0.90             Llama 3.3 70B Instruct
Cache Read (per 1M tokens)    $0.13                    $0.45             Llama 3.3 70B Instruct
Using a 3:1 input/output token ratio, Llama 3.3 70B Instruct averages about $0.16 per 1M tokens versus roughly $0.53 for Devstral 2 2512, making it about 70% cheaper overall.
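The blended figure can be reproduced with a short sketch. The per-1M prices and the 3:1 input:output ratio come from the table above; the helper name is our own:

```python
# Blended per-1M-token price for a given input:output traffic mix.
def blended_price(input_price, output_price, input_ratio=3, output_ratio=1):
    """Weighted average price per 1M tokens."""
    total = input_ratio + output_ratio
    return (input_price * input_ratio + output_price * output_ratio) / total

llama = blended_price(0.10, 0.32)     # input $0.10/M, output $0.32/M
devstral = blended_price(0.40, 0.90)  # input $0.40/M, output $0.90/M

print(f"Llama 3.3 70B Instruct: ${llama:.3f}/M")    # $0.155/M
print(f"Devstral 2 2512:        ${devstral:.3f}/M") # $0.525/M
print(f"Savings: {1 - llama / devstral:.0%}")       # 70%
```

Your actual ratio depends on workload: long-context retrieval skews toward input tokens, while chat and code generation produce relatively more output.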

Llama 3.3 70B Instruct Providers

No provider data available

Devstral 2 2512 Providers

No provider data available

Benchmark Comparison

Benchmarks compared: 8
Llama 3.3 70B Instruct wins: 0
Devstral 2 2512 wins: 6
(Aider and AIME have no Devstral 2 2512 score, so no winner is counted for them.)

Benchmark Scores

Benchmark           Description                       Llama 3.3 70B Instruct   Devstral 2 2512   Winner
Intelligence Index  Overall intelligence score        14.5                     22.0              Devstral 2 2512
Coding Index        Code generation & understanding   10.7                     23.7              Devstral 2 2512
Math Index          Mathematical reasoning            7.7                      36.7              Devstral 2 2512
MMLU Pro            Academic knowledge                71.3                     76.2              Devstral 2 2512
GPQA                Graduate-level science            49.8                     59.4              Devstral 2 2512
LiveCodeBench       Competitive programming           28.8                     44.8              Devstral 2 2512
Aider               Real-world code editing           59.4                     --                --
AIME                Competition math                  30.0                     --                --
Devstral 2 2512 leads on every benchmark where both models have scores, with the widest margins in coding and math.

Cost vs Quality

[Interactive scatter chart plotting cost against quality for these models alongside other tracked models; not reproduced here.]

Context & Performance

Context Window

Llama 3.3 70B Instruct: 131,072 tokens
Devstral 2 2512: 262,144 tokens

Devstral 2 2512's context window is twice as large (262,144 vs 131,072 tokens).
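To see what the difference means in practice, here is a rough sketch that checks whether a prompt fits in each window. The ~4-characters-per-token ratio is a common rule of thumb, not an exact tokenizer count, and the model keys are our own labels; use each model's real tokenizer for production budgeting:

```python
# Context windows from the comparison above.
CONTEXT_WINDOWS = {
    "llama-3.3-70b-instruct": 131_072,
    "devstral-2-2512": 262_144,
}

def estimate_tokens(text: str) -> int:
    """Crude token estimate: roughly 4 characters per token in English."""
    return max(1, len(text) // 4)

def fits(text: str, model: str, output_budget: int = 4_096) -> bool:
    """True if the prompt plus a reserved output budget fits in the window."""
    return estimate_tokens(text) + output_budget <= CONTEXT_WINDOWS[model]

doc = "x" * 600_000  # ~150k estimated tokens
print(fits(doc, "llama-3.3-70b-instruct"))  # False: exceeds 131,072
print(fits(doc, "devstral-2-2512"))         # True: within 262,144
```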

Speed Performance

Metric                 Llama 3.3 70B Instruct   Devstral 2 2512   Winner
Tokens/second          99.5 tok/s               81.2 tok/s        Llama 3.3 70B Instruct
Time to First Token    0.54s                    0.39s             Devstral 2 2512

Llama 3.3 70B Instruct generates tokens about 23% faster, while Devstral 2 2512 begins responding sooner thanks to its lower time to first token.
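A simple latency model, time to first token plus generation time at the measured throughput, shows how the two numbers trade off. The figures are the averages above; real latency varies by provider and load:

```python
def response_time(ttft_s: float, tokens_per_s: float, output_tokens: int) -> float:
    """Estimated end-to-end time: first-token wait plus generation time."""
    return ttft_s + output_tokens / tokens_per_s

for name, ttft, tps in [
    ("Llama 3.3 70B Instruct", 0.54, 99.5),
    ("Devstral 2 2512", 0.39, 81.2),
]:
    print(f"{name}: {response_time(ttft, tps, 500):.2f}s for 500 tokens")
```

Under these averages, Devstral 2 2512 finishes first only for very short replies (up to roughly 66 output tokens); beyond that, Llama 3.3 70B Instruct's higher throughput outweighs its slower first token.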

Capabilities

Feature Comparison

Feature                Llama 3.3 70B Instruct   Devstral 2 2512
Vision (Image Input)   No                       No
Tool/Function Calls    --                       --
Reasoning Mode         --                       --
Audio Input            No                       No
Audio Output           No                       No
PDF Input              --                       --
Prompt Caching         Yes                      Yes
Web Search             --                       --
(--: not listed. Both models are text-only per the modalities below, and both list cache-read pricing.)

License & Release

Property   Llama 3.3 70B Instruct   Devstral 2 2512
License    Open Source              Open Source
Author     Meta-llama               Mistral AI
Released   Dec 2024                 Dec 2025

Llama 3.3 70B Instruct Modalities

Input: text
Output: text

Devstral 2 2512 Modalities

Input: text
Output: text


Frequently Asked Questions

Which model is cheaper?
Llama 3.3 70B Instruct has cheaper input pricing ($0.10/M tokens) and cheaper output pricing ($0.32/M tokens).

Which model is better at coding?
Devstral 2 2512 scores higher on the Coding Index at 23.7, compared to Llama 3.3 70B Instruct's 10.7.

Which model has the larger context window?
Devstral 2 2512, with 262,144 tokens versus Llama 3.3 70B Instruct's 131,072.

Do these models support vision?
No. Neither Llama 3.3 70B Instruct nor Devstral 2 2512 supports image input; both are text-only.