Price Per Token

Deepseek vs Meta-llama

R1 Distill Llama 70B vs Llama 3.2 3B Instruct

A detailed comparison of pricing, benchmarks, and capabilities


Key Takeaways

R1 Distill Llama 70B wins:

  • Faster response time
  • Higher intelligence benchmark
  • Better at coding
  • Better at math
  • Has reasoning mode

Llama 3.2 3B Instruct wins:

  • Cheaper input tokens
  • Cheaper output tokens
  • Price Advantage: Llama 3.2 3B Instruct
  • Benchmark Advantage: R1 Distill Llama 70B
  • Context Window: Tie (both 131,072 tokens)
  • Speed: R1 Distill Llama 70B

Pricing Comparison

| Metric                 | R1 Distill Llama 70B | Llama 3.2 3B Instruct | Winner                |
|------------------------|----------------------|-----------------------|-----------------------|
| Input (per 1M tokens)  | $0.03                | $0.02                 | Llama 3.2 3B Instruct |
| Output (per 1M tokens) | $0.11                | $0.02                 | Llama 3.2 3B Instruct |
| Cache Read (per 1M)    | $15000.00            | N/A                   | R1 Distill Llama 70B  |
Using a 3:1 input/output ratio, Llama 3.2 3B Instruct is 60% cheaper overall.
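The blended figure can be reproduced with a few lines of arithmetic (prices taken from the table above; the 3:1 input/output ratio is the comparison's stated assumption):

```python
def blended_price(input_per_m: float, output_per_m: float, ratio: int = 3) -> float:
    """Blended $/1M tokens, assuming `ratio` input tokens per output token."""
    return (ratio * input_per_m + output_per_m) / (ratio + 1)

r1 = blended_price(0.03, 0.11)     # $0.05 per 1M blended tokens
llama = blended_price(0.02, 0.02)  # $0.02 per 1M blended tokens
savings = 1 - llama / r1           # 0.60 -> Llama 3.2 3B Instruct is 60% cheaper
```

A different workload mix changes the result: output-heavy workloads favor Llama 3.2 3B Instruct even more, since its output price is roughly a fifth of R1 Distill Llama 70B's.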

R1 Distill Llama 70B Providers

Chutes $0.03 (Cheapest)
SambaNova $0.70
DeepInfra $0.70
Vercel $0.75
Groq $0.75

Llama 3.2 3B Instruct Providers

DeepInfra $0.02 (Cheapest)
Novita $0.03
Cloudflare $0.05
Together $0.06
Hyperbolic $0.10
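Picking a provider from the lists above reduces to a dictionary lookup (a sketch using the input prices listed; availability, rate limits, and output pricing are not modeled):

```python
# Per-provider input prices ($/1M tokens), copied from the lists above.
r1_providers = {"Chutes": 0.03, "SambaNova": 0.70, "DeepInfra": 0.70,
                "Vercel": 0.75, "Groq": 0.75}
llama_providers = {"DeepInfra": 0.02, "Novita": 0.03, "Cloudflare": 0.05,
                   "Together": 0.06, "Hyperbolic": 0.10}

cheapest_r1 = min(r1_providers, key=r1_providers.get)           # "Chutes"
cheapest_llama = min(llama_providers, key=llama_providers.get)  # "DeepInfra"
```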

Benchmark Comparison

  • Benchmarks compared: 7
  • R1 Distill Llama 70B wins: 6
  • Llama 3.2 3B Instruct wins: 0

Benchmark Scores

| Benchmark          | Description                     | R1 Distill Llama 70B | Llama 3.2 3B Instruct | Winner               |
|--------------------|---------------------------------|----------------------|-----------------------|----------------------|
| Intelligence Index | Overall intelligence score      | 16.0                 | 9.7                   | R1 Distill Llama 70B |
| Coding Index       | Code generation & understanding | 11.4                 | --                    | --                   |
| Math Index         | Mathematical reasoning          | 53.7                 | 3.3                   | R1 Distill Llama 70B |
| MMLU Pro           | Academic knowledge              | 79.5                 | 34.7                  | R1 Distill Llama 70B |
| GPQA               | Graduate-level science          | 40.2                 | 25.5                  | R1 Distill Llama 70B |
| LiveCodeBench      | Competitive programming         | 26.6                 | 8.3                   | R1 Distill Llama 70B |
| AIME               | Competition math                | 67.0                 | 6.7                   | R1 Distill Llama 70B |
R1 Distill Llama 70B significantly outperforms on every benchmark with a reported score for both models, including coding (LiveCodeBench: 26.6 vs 8.3).

Cost vs Quality

(Interactive scatter chart of cost vs. quality across tracked models, with R1 Distill Llama 70B highlighted.)

Context & Performance

Context Window

  • R1 Distill Llama 70B: 131,072-token context window; max output 131,072 tokens
  • Llama 3.2 3B Instruct: 131,072-token context window; max output 16,384 tokens
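Since both context windows are 131,072 tokens, the practical difference is output headroom: the prompt and the output budget share the same window. A quick feasibility check (a sketch; real token counts would come from each model's tokenizer):

```python
def fits(prompt_tokens: int, max_output: int, context_window: int = 131_072) -> bool:
    """True if the prompt plus the full output budget fits in the context window."""
    return prompt_tokens + max_output <= context_window

# A 114k-token prompt leaves room for Llama 3.2 3B Instruct's 16,384-token max output...
fits(114_000, 16_384)   # True
# ...but not for R1 Distill Llama 70B's full 131,072-token output budget.
fits(114_000, 131_072)  # False
```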

Speed Performance

| Metric              | R1 Distill Llama 70B | Llama 3.2 3B Instruct | Winner                |
|---------------------|----------------------|-----------------------|-----------------------|
| Tokens/second       | 55.3 tok/s           | 44.1 tok/s            | R1 Distill Llama 70B  |
| Time to First Token | 0.87s                | 0.43s                 | Llama 3.2 3B Instruct |

R1 Distill Llama 70B generates tokens about 25% faster, though Llama 3.2 3B Instruct starts responding sooner.
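End-to-end latency depends on both numbers: time to first token plus generation time. A rough model (using the measurements above; real latency varies by provider and load) shows the lower TTFT winning short replies and the higher throughput winning long ones:

```python
def response_seconds(ttft_s: float, tokens_per_s: float, n_output_tokens: int) -> float:
    """Approximate wall-clock time: time to first token + generation time."""
    return ttft_s + n_output_tokens / tokens_per_s

# Short reply (50 tokens): Llama 3.2 3B Instruct's lower TTFT wins.
response_seconds(0.87, 55.3, 50)   # ~1.77s  (R1 Distill Llama 70B)
response_seconds(0.43, 44.1, 50)   # ~1.56s  (Llama 3.2 3B Instruct)

# Long reply (500 tokens): R1 Distill Llama 70B's higher throughput wins.
response_seconds(0.87, 55.3, 500)  # ~9.91s
response_seconds(0.43, 44.1, 500)  # ~11.77s
```

Under these numbers the crossover sits near 100 output tokens, so reasoning-length responses favor R1 Distill Llama 70B.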

Capabilities

Feature Comparison

FeatureR1 Distill Llama 70BLlama 3.2 3B Instruct
Vision (Image Input)
Tool/Function Calls
Reasoning Mode
Audio Input
Audio Output
PDF Input
Prompt Caching
Web Search

License & Release

| Property | R1 Distill Llama 70B | Llama 3.2 3B Instruct |
|----------|----------------------|-----------------------|
| License  | Open Source          | Open Source           |
| Author   | Deepseek             | Meta-llama            |
| Released | Jan 2025             | Sep 2024              |

R1 Distill Llama 70B Modalities

  • Input: text
  • Output: text

Llama 3.2 3B Instruct Modalities

  • Input: text
  • Output: text


Frequently Asked Questions

Which model is cheaper?
Llama 3.2 3B Instruct is cheaper for both input ($0.02/M tokens vs $0.03/M) and output ($0.02/M tokens vs $0.11/M).

Which model is better at coding?
R1 Distill Llama 70B scores 11.4 on the Coding Index, while Llama 3.2 3B Instruct has no reported score; on LiveCodeBench, R1 Distill Llama 70B leads 26.6 to 8.3.

Which model has a larger context window?
Both have a 131,072-token context window, but R1 Distill Llama 70B supports a larger maximum output (131,072 vs 16,384 tokens).

Do these models support vision?
No. Neither model accepts image input; both are text-only.