Price Per Token

DeepSeek V3.2 vs Llama 3.3 70B Instruct

A detailed comparison of pricing, benchmarks, and capabilities


Key Takeaways

DeepSeek V3.2 wins:

  • Larger context window
  • Higher intelligence benchmark
  • Better at coding
  • Better at math
  • Has reasoning mode

Llama 3.3 70B Instruct wins:

  • Cheaper input tokens
  • Cheaper output tokens
  • Faster response time

  • Price Advantage: Llama 3.3 70B Instruct
  • Benchmark Advantage: DeepSeek V3.2
  • Context Window: DeepSeek V3.2
  • Speed: Llama 3.3 70B Instruct

Pricing Comparison

Metric | DeepSeek V3.2 | Llama 3.3 70B Instruct | Winner
Input (per 1M tokens) | $0.26 | $0.10 | Llama 3.3 70B Instruct
Output (per 1M tokens) | $0.38 | $0.32 | Llama 3.3 70B Instruct
Cache Read (per 1M tokens) | $0.03 | $0.13 | DeepSeek V3.2
Using a 3:1 input/output ratio, Llama 3.3 70B Instruct is 47% cheaper overall.
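The blended-cost claim above can be reproduced with a short calculation; the 3:1 input:output token ratio is the page's stated assumption, and the prices are the ones in the table:

```python
# Sketch: blended $/1M tokens at an assumed 3:1 input:output ratio.

def blended_price(input_per_m: float, output_per_m: float, ratio: float = 3.0) -> float:
    """Average price per 1M tokens, weighting input tokens ratio:1 over output."""
    return (ratio * input_per_m + output_per_m) / (ratio + 1)

deepseek = blended_price(0.26, 0.38)   # DeepSeek V3.2
llama = blended_price(0.10, 0.32)      # Llama 3.3 70B Instruct
savings = (deepseek - llama) / deepseek

print(f"DeepSeek V3.2 blended:  ${deepseek:.3f}/1M")
print(f"Llama 3.3 70B blended:  ${llama:.3f}/1M")
print(f"Llama is {savings:.0%} cheaper")  # ~47%
```

A different input:output ratio shifts the result, which is why blended comparisons should always state the assumed ratio.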


Benchmark Comparison

  • Benchmarks compared: 8
  • DeepSeek V3.2 wins: 6
  • Llama 3.3 70B Instruct wins: 0

Benchmark Scores

Benchmark | DeepSeek V3.2 | Llama 3.3 70B Instruct | Winner
Intelligence Index (overall intelligence score) | 32.1 | 14.5 | DeepSeek V3.2
Coding Index (code generation & understanding) | 34.6 | 10.7 | DeepSeek V3.2
Math Index (mathematical reasoning) | 59.0 | 7.7 | DeepSeek V3.2
MMLU Pro (academic knowledge) | 83.7 | 71.3 | DeepSeek V3.2
GPQA (graduate-level science) | 75.1 | 49.8 | DeepSeek V3.2
LiveCodeBench (competitive programming) | 59.3 | 28.8 | DeepSeek V3.2
Aider (real-world code editing) | – | 59.4 | –
AIME (competition math) | – | 30.0 | –
DeepSeek V3.2 significantly outperforms in coding benchmarks.
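The win tally reported above can be recomputed from the score table; a sketch, treating rows with a missing score (marked None) as no contest:

```python
# Sketch: recompute the benchmark win counts from this page's table.
# (DeepSeek V3.2 score, Llama 3.3 70B Instruct score); None = not reported.
scores = {
    "Intelligence Index": (32.1, 14.5),
    "Coding Index": (34.6, 10.7),
    "Math Index": (59.0, 7.7),
    "MMLU Pro": (83.7, 71.3),
    "GPQA": (75.1, 49.8),
    "LiveCodeBench": (59.3, 28.8),
    "Aider": (None, 59.4),
    "AIME": (None, 30.0),
}

wins = {"DeepSeek V3.2": 0, "Llama 3.3 70B Instruct": 0}
for bench, (ds, ll) in scores.items():
    if ds is None or ll is None:
        continue  # no head-to-head without both scores
    winner = "DeepSeek V3.2" if ds > ll else "Llama 3.3 70B Instruct"
    wins[winner] += 1

print(len(scores), "benchmarks compared")  # 8
print(wins)  # DeepSeek V3.2: 6, Llama 3.3 70B Instruct: 0
```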

Cost vs Quality

[Interactive scatter chart: cost vs. benchmark quality, with DeepSeek V3.2 highlighted against other tracked models]

Context & Performance

Context Window

DeepSeek V3.2: 163,840 tokens
Llama 3.3 70B Instruct: 131,072 tokens

DeepSeek V3.2 has a 25% larger context window (163,840 / 131,072 = 1.25).
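As a rough illustration of what the gap means in practice, this sketch checks whether a prompt fits each model's window; the 4-characters-per-token heuristic is an assumption for illustration, not either model's real tokenizer:

```python
# Sketch: will a prompt fit in each model's context window?
CONTEXT_WINDOWS = {
    "DeepSeek V3.2": 163_840,
    "Llama 3.3 70B Instruct": 131_072,
}

def approx_tokens(text: str) -> int:
    """Crude ~4 chars/token estimate; use the model's tokenizer for real counts."""
    return max(1, len(text) // 4)

def fits(model: str, prompt: str, reserve_output: int = 4_096) -> bool:
    """True if the prompt plus reserved output tokens fit the window."""
    return approx_tokens(prompt) + reserve_output <= CONTEXT_WINDOWS[model]

long_doc = "word " * 200_000  # ~1M chars -> ~250k estimated tokens
print({m: fits(m, long_doc) for m in CONTEXT_WINDOWS})  # both False
```

The 32,768-token difference only matters for prompts that land between the two windows; anything past ~164k tokens overflows both.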

Speed Performance

Metric | DeepSeek V3.2 | Llama 3.3 70B Instruct | Winner
Tokens/second | 24.4 tok/s | 99.5 tok/s | Llama 3.3 70B Instruct
Time to First Token | 1.43s | 0.54s | Llama 3.3 70B Instruct

Llama 3.3 70B Instruct generates tokens roughly 4x as fast (a 308% throughput advantage) and starts responding sooner.
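The two speed metrics combine into an end-to-end latency estimate: total time ≈ TTFT + output_tokens / throughput. A sketch using the figures above:

```python
# Sketch: estimate full-response latency from TTFT and throughput.
SPEED = {  # (time to first token in seconds, tokens per second)
    "DeepSeek V3.2": (1.43, 24.4),
    "Llama 3.3 70B Instruct": (0.54, 99.5),
}

def response_time(model: str, output_tokens: int) -> float:
    """Seconds until the full response arrives: TTFT + generation time."""
    ttft, tps = SPEED[model]
    return ttft + output_tokens / tps

for model in SPEED:
    print(f"{model}: {response_time(model, 500):.1f}s for a 500-token reply")
```

For a 500-token reply this works out to roughly 22s for DeepSeek V3.2 versus about 5.6s for Llama 3.3 70B Instruct, so the throughput gap dominates the TTFT gap on longer outputs.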

Capabilities

Feature Comparison

Feature | DeepSeek V3.2 | Llama 3.3 70B Instruct
Vision (Image Input) | No | No
Tool/Function Calls | – | –
Reasoning Mode | Yes | No
Audio Input | No | No
Audio Output | No | No
PDF Input | – | –
Prompt Caching | Yes | Yes
Web Search | – | –

(– = not specified on this page)

License & Release

Property | DeepSeek V3.2 | Llama 3.3 70B Instruct
License | Open Source | Open Source
Author | DeepSeek | Meta
Released | Dec 2025 | Dec 2024

DeepSeek V3.2 Modalities

Input: text
Output: text

Llama 3.3 70B Instruct Modalities

Input: text
Output: text


Frequently Asked Questions

Which model is cheaper?
Llama 3.3 70B Instruct is cheaper on both input ($0.10/M vs $0.26/M tokens) and output ($0.32/M vs $0.38/M tokens).

Which model is better at coding?
DeepSeek V3.2 scores higher on the Coding Index at 34.6, versus 10.7 for Llama 3.3 70B Instruct.

Which model has the larger context window?
DeepSeek V3.2, with 163,840 tokens to Llama 3.3 70B Instruct's 131,072.

Do these models support vision?
No. Neither model accepts image input; both are text-only.