
DeepSeek V3.2 Speciale vs Llama 3.1 405B Instruct

A detailed comparison of pricing, benchmarks, and capabilities


Key Takeaways

DeepSeek V3.2 Speciale wins:

  • Cheaper input tokens
  • Larger context window
  • Higher intelligence benchmark
  • Better at coding
  • Better at math
  • Has reasoning mode

Llama 3.1 405B Instruct wins:

  • Cheaper output tokens
  • Faster response time

At a glance:

  • Price advantage: DeepSeek V3.2 Speciale
  • Benchmark advantage: DeepSeek V3.2 Speciale
  • Context window: DeepSeek V3.2 Speciale
  • Speed: Llama 3.1 405B Instruct

Pricing Comparison

Metric                        DeepSeek V3.2 Speciale   Llama 3.1 405B Instruct   Winner
Input (per 1M tokens)         $0.40                    $0.90                     DeepSeek V3.2 Speciale
Output (per 1M tokens)        $1.20                    $0.90                     Llama 3.1 405B Instruct
Cache Read (per 1M tokens)    $0.20                    $0.45                     DeepSeek V3.2 Speciale

Using a 3:1 input/output ratio, DeepSeek V3.2 Speciale is about 33% cheaper overall.
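The 3:1 blended figure can be reproduced with a short calculation: weight the input price three times as heavily as the output price, then compare the two averages. This is a minimal sketch using the per-1M-token rates from the table above; the `blended_price` helper name is illustrative, not part of any API.

```python
# Blended price per 1M tokens at an assumed 3:1 input:output token ratio.

def blended_price(input_price, output_price, ratio=3):
    """Weighted average price per 1M tokens, given input:output = ratio:1."""
    return (ratio * input_price + output_price) / (ratio + 1)

deepseek = blended_price(0.40, 1.20)   # (3*0.40 + 1.20) / 4 = $0.60 per 1M tokens
llama = blended_price(0.90, 0.90)      # (3*0.90 + 0.90) / 4 = $0.90 per 1M tokens

savings = 1 - deepseek / llama
print(f"DeepSeek blended: ${deepseek:.2f}/M")
print(f"Llama blended:    ${llama:.2f}/M")
print(f"DeepSeek is {savings:.0%} cheaper")  # 33% cheaper
```

A heavier output share (e.g. long generations) narrows the gap, since DeepSeek's output rate is the higher of the two.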

Benchmark Comparison

Across the 8 benchmarks compared, DeepSeek V3.2 Speciale wins 6 and Llama 3.1 405B Instruct wins 0; the remaining two lack a score for one of the models.

Benchmark Scores

Benchmark                                        DeepSeek V3.2 Speciale   Llama 3.1 405B Instruct   Winner
Intelligence Index (overall intelligence)        29.4                     17.4                      DeepSeek V3.2 Speciale
Coding Index (code generation & understanding)   37.9                     14.5                      DeepSeek V3.2 Speciale
Math Index (mathematical reasoning)              96.7                     3.0                       DeepSeek V3.2 Speciale
MMLU Pro (academic knowledge)                    86.3                     73.2                      DeepSeek V3.2 Speciale
GPQA (graduate-level science)                    87.1                     51.5                      DeepSeek V3.2 Speciale
LiveCodeBench (competitive programming)          89.6                     30.5                      DeepSeek V3.2 Speciale
Aider (real-world code editing)                  -                        66.2                      -
AIME (competition math)                          -                        21.3                      -

DeepSeek V3.2 Speciale significantly outperforms on every benchmark where both models have scores, most notably coding and math.

Cost vs Quality

[Scatter chart: cost vs. quality, with DeepSeek V3.2 Speciale highlighted against other tracked models.]

Context & Performance

Context Window

DeepSeek V3.2 Speciale: 163,840 tokens
Llama 3.1 405B Instruct: 131,000 tokens

DeepSeek V3.2 Speciale's context window is about 25% larger.
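In practice the context-window difference matters when deciding whether a long prompt will fit. The sketch below checks this with the window sizes from the comparison above; the ~4-characters-per-token figure is a rough rule of thumb for English text, not an exact tokenizer count, and the function names are illustrative.

```python
# Rough check of whether a prompt fits a model's context window.
# Assumption: ~4 characters per token (a common heuristic, not a tokenizer).

CONTEXT_WINDOWS = {
    "deepseek-v3.2-speciale": 163_840,   # tokens, from the comparison above
    "llama-3.1-405b-instruct": 131_000,
}

def estimate_tokens(text: str) -> int:
    """Crude token estimate: ~4 characters per token for English text."""
    return max(1, len(text) // 4)

def fits(model: str, prompt: str, reserved_for_output: int = 4_096) -> bool:
    """True if the estimated prompt leaves room for the model's reply."""
    return estimate_tokens(prompt) + reserved_for_output <= CONTEXT_WINDOWS[model]

prompt = "word " * 102_000  # ~510,000 characters, roughly 127,500 tokens
print(fits("deepseek-v3.2-speciale", prompt))   # True
print(fits("llama-3.1-405b-instruct", prompt))  # False
```

For real workloads, measure with the model's own tokenizer instead of the character heuristic; tokenizers differ between the two models.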

Speed Performance

Metric                  DeepSeek V3.2 Speciale   Llama 3.1 405B Instruct   Winner
Tokens/second           no data                  33.7 tok/s                Llama 3.1 405B Instruct
Time to First Token     no data                  0.71s                     Llama 3.1 405B Instruct

No speed measurements are available for DeepSeek V3.2 Speciale, so Llama 3.1 405B Instruct wins on speed by default.

Capabilities

Feature Comparison

Feature                 DeepSeek V3.2 Speciale   Llama 3.1 405B Instruct
Vision (Image Input)    No                       No
Tool/Function Calls     —                        —
Reasoning Mode          Yes                      No
Audio Input             No                       No
Audio Output            No                       No
PDF Input               —                        —
Prompt Caching          Yes                      Yes
Web Search              —                        —

(— = not listed in the source data)

License & Release

Property    DeepSeek V3.2 Speciale   Llama 3.1 405B Instruct
License     Open Source              Open Source
Author      DeepSeek                 Meta
Released    Dec 2025                 Jul 2024

DeepSeek V3.2 Speciale Modalities

  • Input: text
  • Output: text

Llama 3.1 405B Instruct Modalities

  • Input: text
  • Output: text

Frequently Asked Questions

Which model is cheaper?
DeepSeek V3.2 Speciale has cheaper input pricing at $0.40 per 1M tokens; Llama 3.1 405B Instruct has cheaper output pricing at $0.90 per 1M tokens.

Which model is better at coding?
DeepSeek V3.2 Speciale scores higher on the Coding Index at 37.9, compared to Llama 3.1 405B Instruct's 14.5.

Which model has the larger context window?
DeepSeek V3.2 Speciale has a 163,840-token context window, while Llama 3.1 405B Instruct has a 131,000-token context window.

Do these models support vision?
No. Neither DeepSeek V3.2 Speciale nor Llama 3.1 405B Instruct supports image input.