
DeepSeek V3.1 Terminus vs Llama 3.3 70B Instruct

A detailed comparison of pricing, benchmarks, and capabilities


Key Takeaways

DeepSeek V3.1 Terminus wins:

  • Larger context window
  • Higher intelligence benchmark
  • Better at coding
  • Better at math
  • Has reasoning mode

Llama 3.3 70B Instruct wins:

  • Cheaper input tokens
  • Cheaper output tokens
  • Faster response time
  • Price Advantage: Llama 3.3 70B Instruct
  • Benchmark Advantage: DeepSeek V3.1 Terminus
  • Context Window: DeepSeek V3.1 Terminus
  • Speed: Llama 3.3 70B Instruct

Pricing Comparison

Metric                   DeepSeek V3.1 Terminus   Llama 3.3 70B Instruct   Winner
Input (per 1M tokens)    $0.21                    $0.10                    Llama 3.3 70B Instruct
Output (per 1M tokens)   $0.79                    $0.32                    Llama 3.3 70B Instruct
Cache Read (per 1M)      $0.12                    $0.13                    DeepSeek V3.1 Terminus
Using a 3:1 input/output ratio, Llama 3.3 70B Instruct is 56% cheaper overall.
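The blended figure above can be reproduced with a short calculation. This is an illustrative sketch; the `blended_price` helper is not part of any pricing API, and the prices are the per-1M-token figures from the table.

```python
# Blended price per 1M tokens, weighting input and output at a 3:1 ratio.
# Prices (USD per 1M tokens) are taken from the comparison table above.
def blended_price(input_price: float, output_price: float,
                  input_ratio: int = 3, output_ratio: int = 1) -> float:
    total = input_ratio + output_ratio
    return (input_ratio * input_price + output_ratio * output_price) / total

deepseek = blended_price(0.21, 0.79)   # 0.355 USD per 1M tokens
llama = blended_price(0.10, 0.32)      # 0.155 USD per 1M tokens
savings = 1 - llama / deepseek
print(f"Llama 3.3 70B Instruct is {savings:.0%} cheaper")  # → 56% cheaper
```

Cache-read pricing is excluded from the blend; with heavy prompt caching the gap narrows, since DeepSeek's cache-read rate ($0.12/M) is slightly below Llama's ($0.13/M).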

DeepSeek V3.1 Terminus Providers

No provider data available

Llama 3.3 70B Instruct Providers

No provider data available

Benchmark Comparison

Benchmarks Compared: 8
DeepSeek V3.1 Terminus Wins: 6
Llama 3.3 70B Instruct Wins: 0

Benchmark Scores

Benchmark                                     DeepSeek V3.1 Terminus   Llama 3.3 70B Instruct   Winner
Intelligence Index (overall intelligence)     28.5                     14.5                     DeepSeek V3.1 Terminus
Coding Index (code generation)                31.9                     10.7                     DeepSeek V3.1 Terminus
Math Index (mathematical reasoning)           53.7                     7.7                      DeepSeek V3.1 Terminus
MMLU Pro (academic knowledge)                 83.6                     71.3                     DeepSeek V3.1 Terminus
GPQA (graduate-level science)                 75.1                     49.8                     DeepSeek V3.1 Terminus
LiveCodeBench (competitive programming)       52.9                     28.8                     DeepSeek V3.1 Terminus
Aider (real-world code editing)               —                        59.4                     —
AIME (competition math)                       —                        30.0                     —
DeepSeek V3.1 Terminus significantly outperforms in coding benchmarks.
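The 6–0 win count in the summary above follows mechanically from the score table. A minimal sketch, assuming higher is better on every benchmark and that `None` marks a benchmark where one model has no published score (so no winner is declared):

```python
# Score pairs (DeepSeek V3.1 Terminus, Llama 3.3 70B Instruct) from the table above.
# None = not evaluated; such rows are excluded from the win tally.
scores = {
    "Intelligence Index": (28.5, 14.5),
    "Coding Index":       (31.9, 10.7),
    "Math Index":         (53.7, 7.7),
    "MMLU Pro":           (83.6, 71.3),
    "GPQA":               (75.1, 49.8),
    "LiveCodeBench":      (52.9, 28.8),
    "Aider":              (None, 59.4),
    "AIME":               (None, 30.0),
}
comparable = [(d, l) for d, l in scores.values() if d is not None and l is not None]
deepseek_wins = sum(1 for d, l in comparable if d > l)
llama_wins = sum(1 for d, l in comparable if l > d)
print(len(scores), deepseek_wins, llama_wins)  # → 8 6 0
```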

Cost vs Quality

(Interactive scatter chart omitted: it plots price against benchmark quality, highlighting DeepSeek V3.1 Terminus among other tracked models.)

Context & Performance

Context Window

DeepSeek V3.1 Terminus: 163,840 tokens
Llama 3.3 70B Instruct: 131,072 tokens
DeepSeek V3.1 Terminus has a 25% larger context window.
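The relative difference is easy to verify, since both window sizes are exact multiples of 1,024 tokens:

```python
# Context-window sizes in tokens, from the comparison above.
deepseek_ctx = 163_840   # 160 * 1024
llama_ctx = 131_072      # 128 * 1024
increase = deepseek_ctx / llama_ctx - 1
print(f"{increase:.0%} larger")  # → 25% larger
```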

Speed Performance

Metric                 DeepSeek V3.1 Terminus   Llama 3.3 70B Instruct   Winner
Tokens/second          —                        99.5 tok/s               Llama 3.3 70B Instruct
Time to First Token    —                        0.54 s                   Llama 3.3 70B Instruct
No throughput measurements are available for DeepSeek V3.1 Terminus; Llama 3.3 70B Instruct measures 99.5 tokens/second with a 0.54 s time to first token.

Capabilities

Feature Comparison

Feature                  DeepSeek V3.1 Terminus   Llama 3.3 70B Instruct
Vision (Image Input)     No                       No
Tool/Function Calls      —                        —
Reasoning Mode           Yes                      No
Audio Input              —                        —
Audio Output             —                        —
PDF Input                —                        —
Prompt Caching           Yes                      Yes
Web Search               —                        —
(— indicates a support indicator not shown on the source page.)

License & Release

Property     DeepSeek V3.1 Terminus   Llama 3.3 70B Instruct
License      Open Source              Open Source
Author       DeepSeek                 Meta
Released     Sep 2025                 Dec 2024

DeepSeek V3.1 Terminus Modalities

Input: text
Output: text

Llama 3.3 70B Instruct Modalities

Input: text
Output: text


Frequently Asked Questions

Which model is cheaper?
Llama 3.3 70B Instruct has cheaper input pricing at $0.10/M tokens and cheaper output pricing at $0.32/M tokens.

Which model is better at coding?
DeepSeek V3.1 Terminus scores higher on coding benchmarks: 31.9 on the Coding Index versus 10.7 for Llama 3.3 70B Instruct.

Which model has the larger context window?
DeepSeek V3.1 Terminus has a 163,840-token context window, while Llama 3.3 70B Instruct has a 131,072-token context window.

Do these models support vision?
No. Neither DeepSeek V3.1 Terminus nor Llama 3.3 70B Instruct supports image input.