Price Per Token

DeepSeek V3.2 vs Llama 3.1 405B Instruct

A detailed comparison of pricing, benchmarks, and capabilities


Key Takeaways

DeepSeek V3.2 wins:

  • Cheaper input tokens
  • Cheaper output tokens
  • Larger context window
  • Higher intelligence benchmark
  • Better at coding
  • Better at math
  • Has reasoning mode

Llama 3.1 405B Instruct wins:

  • Faster response time

At a glance:

  • Price Advantage: DeepSeek V3.2
  • Benchmark Advantage: DeepSeek V3.2
  • Context Window: DeepSeek V3.2
  • Speed: Llama 3.1 405B Instruct

Pricing Comparison

| Metric                     | DeepSeek V3.2 | Llama 3.1 405B Instruct | Winner        |
|----------------------------|---------------|-------------------------|---------------|
| Input (per 1M tokens)      | $0.26         | $0.90                   | DeepSeek V3.2 |
| Output (per 1M tokens)     | $0.38         | $0.90                   | DeepSeek V3.2 |
| Cache Read (per 1M tokens) | $0.03         | $0.45                   | DeepSeek V3.2 |

Using a 3:1 input/output token ratio, DeepSeek V3.2 is about 68% cheaper overall.
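The blended-price figure above can be reproduced with a small sketch. The prices come from the table, and the 3:1 input:output weighting is the same assumption the comparison uses; adjust the ratio to match your own workload.

```python
# Blended per-million-token price under an assumed input:output ratio.
# Prices ($ per 1M tokens) are taken from the comparison table above.

PRICES = {
    "DeepSeek V3.2": {"input": 0.26, "output": 0.38},
    "Llama 3.1 405B Instruct": {"input": 0.90, "output": 0.90},
}

def blended_price(prices: dict, input_ratio: float = 3.0) -> float:
    """Blended $ per 1M tokens, weighting input tokens input_ratio:1 vs. output."""
    total_weight = input_ratio + 1.0
    return (prices["input"] * input_ratio + prices["output"]) / total_weight

deepseek = blended_price(PRICES["DeepSeek V3.2"])          # $0.29 per 1M tokens
llama = blended_price(PRICES["Llama 3.1 405B Instruct"])   # $0.90 per 1M tokens
savings = 1.0 - deepseek / llama                           # ~0.68
print(f"DeepSeek V3.2 is {savings:.0%} cheaper at a 3:1 ratio")
```

A chat-heavy workload is closer to 3:1 (long prompts, short replies), while long-form generation pushes the ratio down and the savings slightly up, since DeepSeek's output price is also lower.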


Benchmark Comparison

  • Benchmarks compared: 8
  • DeepSeek V3.2 wins: 6
  • Llama 3.1 405B Instruct wins: 0

Benchmark Scores

| Benchmark                                      | DeepSeek V3.2 | Llama 3.1 405B Instruct | Winner        |
|------------------------------------------------|---------------|-------------------------|---------------|
| Intelligence Index (overall intelligence)      | 32.1          | 17.4                    | DeepSeek V3.2 |
| Coding Index (code generation & understanding) | 34.6          | 14.5                    | DeepSeek V3.2 |
| Math Index (mathematical reasoning)            | 59.0          | 3.0                     | DeepSeek V3.2 |
| MMLU Pro (academic knowledge)                  | 83.7          | 73.2                    | DeepSeek V3.2 |
| GPQA (graduate-level science)                  | 75.1          | 51.5                    | DeepSeek V3.2 |
| LiveCodeBench (competitive programming)        | 59.3          | 30.5                    | DeepSeek V3.2 |
| Aider (real-world code editing)                | –             | 66.2                    | –             |
| AIME (competition math)                        | –             | 21.3                    | –             |

DeepSeek V3.2 significantly outperforms Llama 3.1 405B Instruct on coding benchmarks.

Cost vs Quality

[Interactive scatter chart of cost vs. quality, highlighting DeepSeek V3.2 among other tracked models, omitted.]

Context & Performance

Context Window

| Model                   | Context Window |
|-------------------------|----------------|
| DeepSeek V3.2           | 163,840 tokens |
| Llama 3.1 405B Instruct | 131,000 tokens |

DeepSeek V3.2 has a roughly 25% larger context window.
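To see what the difference means in practice, here is a rough sketch of a prompt-fit check. The ~4 characters-per-token heuristic is an assumption for illustration, not a real tokenizer; use the provider's tokenizer for exact counts.

```python
# Rough check: does a prompt (plus an output budget) fit each model's
# context window? Window sizes come from the table above; the 4 chars/token
# heuristic is an approximation, not an actual tokenizer.

CONTEXT_WINDOWS = {
    "DeepSeek V3.2": 163_840,
    "Llama 3.1 405B Instruct": 131_000,
}

def estimated_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Crude token estimate from character count."""
    return int(len(text) / chars_per_token) + 1

def fits(text: str, model: str, reserve_for_output: int = 4_096) -> bool:
    """True if the prompt plus a reserved output budget fits the window."""
    return estimated_tokens(text) + reserve_for_output <= CONTEXT_WINDOWS[model]

long_prompt = "word " * 110_000  # ~550k characters, ~137k estimated tokens
for model in CONTEXT_WINDOWS:
    print(model, "fits:", fits(long_prompt, model))
```

With this estimate, a ~137k-token prompt fits DeepSeek V3.2's window but exceeds Llama 3.1 405B Instruct's once an output budget is reserved.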

Speed Performance

| Metric              | DeepSeek V3.2 | Llama 3.1 405B Instruct | Winner                  |
|---------------------|---------------|-------------------------|-------------------------|
| Tokens/second       | 24.4 tok/s    | 33.7 tok/s              | Llama 3.1 405B Instruct |
| Time to First Token | 1.43 s        | 0.71 s                  | Llama 3.1 405B Instruct |

Llama 3.1 405B Instruct generates tokens about 38% faster and reaches its first token in half the time.
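The two speed metrics combine into a simple end-to-end latency estimate: total time ≈ time-to-first-token + output tokens ÷ throughput. A minimal sketch using the figures from the table above:

```python
# Back-of-envelope response-time estimate:
#   total ≈ time_to_first_token + output_tokens / tokens_per_second
# Values are the averages from the speed table above.

SPEED = {  # model -> (time to first token in s, tokens per second)
    "DeepSeek V3.2": (1.43, 24.4),
    "Llama 3.1 405B Instruct": (0.71, 33.7),
}

def response_time(model: str, output_tokens: int) -> float:
    """Estimated seconds to stream a full response of output_tokens."""
    ttft, tps = SPEED[model]
    return ttft + output_tokens / tps

for model in SPEED:
    print(f"{model}: {response_time(model, 500):.1f}s for a 500-token reply")
```

For short replies the time-to-first-token gap dominates; for long replies the throughput gap does, and both favor Llama 3.1 405B Instruct here.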

Capabilities

Feature Comparison

| Feature              | DeepSeek V3.2 | Llama 3.1 405B Instruct |
|----------------------|---------------|-------------------------|
| Vision (Image Input) | No            | No                      |
| Tool/Function Calls  | –             | –                       |
| Reasoning Mode       | Yes           | No                      |
| Audio Input          | No            | No                      |
| Audio Output         | No            | No                      |
| PDF Input            | –             | –                       |
| Prompt Caching       | Yes           | Yes                     |
| Web Search           | –             | –                       |

License & Release

| Property | DeepSeek V3.2 | Llama 3.1 405B Instruct |
|----------|---------------|-------------------------|
| License  | Open Source   | Open Source             |
| Author   | DeepSeek      | Meta                    |
| Released | Dec 2025      | Jul 2024                |

DeepSeek V3.2 Modalities

  • Input: text
  • Output: text

Llama 3.1 405B Instruct Modalities

  • Input: text
  • Output: text


Frequently Asked Questions

Which model is cheaper?
DeepSeek V3.2 has cheaper input pricing ($0.26/M tokens vs. $0.90/M) and cheaper output pricing ($0.38/M tokens vs. $0.90/M).

Which model is better at coding?
DeepSeek V3.2 scores higher on coding benchmarks, with a Coding Index of 34.6 compared to Llama 3.1 405B Instruct's 14.5.

Which model has the larger context window?
DeepSeek V3.2 has a 163,840-token context window, while Llama 3.1 405B Instruct has a 131,000-token context window.

Do these models support vision?
No. Neither DeepSeek V3.2 nor Llama 3.1 405B Instruct supports image input.