Overview

| Provider | Models | Cheapest Input | Max Context | Top Intelligence |
|---|---|---|---|---|
| Anthropic | 18 | $0.25 | 1.0M | 51.8 |
| DeepSeek | 22 | $0.01 | 164K | 32.1 |
Flagship Model Comparison
Claude Opus 4.7 vs DeepSeek V3.2
| Metric | Claude Opus 4.7 | DeepSeek V3.2 |
|---|---|---|
| Input $/1M tokens | $5.00 | $0.25 |
| Output $/1M tokens | $25.00 | $0.38 |
| Context Window | 1.0M | 164K |
| Intelligence | 51.8 | 32.1 |
| Coding | 53.1 | 34.6 |
| Math | N/A | 59.0 |
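To make the pricing gap concrete, here is a minimal sketch of per-request cost at the rates in the table above (the token counts in the example are hypothetical):

```python
# Per-1M-token rates (USD) taken from the flagship comparison table
RATES = {
    "Claude Opus 4.7": {"input": 5.00, "output": 25.00},
    "DeepSeek V3.2": {"input": 0.25, "output": 0.38},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for one request at the listed per-1M-token rates."""
    r = RATES[model]
    return (input_tokens * r["input"] + output_tokens * r["output"]) / 1_000_000

# Hypothetical request: 10K-token prompt, 2K-token reply
opus = request_cost("Claude Opus 4.7", 10_000, 2_000)  # 0.05 + 0.05 = $0.10
v32 = request_cost("DeepSeek V3.2", 10_000, 2_000)     # 0.0025 + 0.00076
print(f"Opus 4.7: ${opus:.4f}, V3.2: ${v32:.5f}, ratio: {opus / v32:.0f}x")
```

At these rates the same request costs roughly 30x more on Claude Opus 4.7 than on DeepSeek V3.2, though the intelligence and coding rows above show what that premium buys.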
All Models Pricing
Anthropic Models
| Model | Input $/1M tokens | Output $/1M tokens | Context |
|---|---|---|---|
| Claude Opus 4.7 | $5.00 | $25.00 | 1.0M |
| Claude Opus 4.6 | $5.00 | $25.00 | 1.0M |
| Claude Opus 4.5 | $5.00 | $25.00 | 200K |
| Claude Sonnet 4.6 | $3.00 | $15.00 | 1.0M |
| Claude Sonnet 4.5 | $3.00 | $15.00 | 1.0M |
| Claude Opus 4.1 | $15.00 | $75.00 | 200K |
| Claude Sonnet 4 | $3.00 | $15.00 | 200K |
| Claude Opus 4 | $15.00 | $75.00 | 200K |
| Claude Haiku 4.5 | $1.00 | $5.00 | 200K |
| Claude 3.7 Sonnet | $3.00 | $15.00 | 200K |
| Claude 3.5 Haiku | $0.80 | $4.00 | 200K |
| Claude 3 Opus | $15.00 | $75.00 | 200K |
| Claude 3.5 Sonnet | $3.00 | $15.00 | 200K |
| Claude 3 Haiku | $0.25 | $1.25 | 200K |
| Claude 3 Sonnet | $3.00 | $15.00 | N/A |
| Claude 2 | $8.00 | $24.00 | 100K |
| Claude Instant | $0.80 | $2.40 | N/A |
| Claude 3.5 Haiku (2024-10-22) | $0.80 | $4.00 | 200K |
DeepSeek Models
| Model | Input $/1M tokens | Output $/1M tokens | Context |
|---|---|---|---|
| DeepSeek V3.2 | $0.25 | $0.38 | 164K |
| DeepSeek V3.2 Speciale | $0.29 | $0.43 | 164K |
| DeepSeek V3.1 Terminus | $0.21 | $0.79 | 164K |
| DeepSeek V3.2 Exp | $0.27 | $0.41 | 164K |
| DeepSeek V3.1 | $0.15 | $0.75 | 33K |
| R1 0528 | $0.50 | $2.15 | 164K |
| R1 | $0.55 | $2.00 | 64K |
| R1 Distill Qwen 32B | $0.29 | $0.29 | 33K |
| DeepSeek V3 0324 | $0.20 | $0.77 | 164K |
| DeepSeek R1 0528 Qwen3 8B | $0.20 | $0.20 | 128K |
| R1 Distill Llama 70B | $0.70 | $0.80 | 131K |
| R1 Distill Qwen 14B | $0.15 | $0.15 | 33K |
| R1 Distill Llama 8B | $0.04 | $0.04 | 33K |
| R1 Distill Qwen 1.5B | $0.18 | $0.18 | 131K |
| DeepSeek V3 | $0.01 | $0.03 | 164K |
| DeepSeek Coder 1.3B Base | $0.10 | $0.10 | 16K |
| DeepSeek Coder 7B Base | $0.20 | $0.20 | 4K |
| R1 Distill Qwen 7B | $0.20 | $0.20 | 33K |
| DeepSeek Coder 7B Base v1.5 | $0.20 | $0.20 | 4K |
| DeepSeek Coder 7B Instruct v1.5 | $0.20 | $0.20 | 4K |
| DeepSeek Prover V2 | $0.50 | $2.18 | 164K |
| DeepSeek Coder 33B Instruct | $0.80 | $0.80 | 16K |
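Picking the cheapest model from a pricing table like the ones above depends on your input/output mix, since input and output rates differ. A small sketch, using a hypothetical subset of the DeepSeek rows and an assumed 20% output-token share:

```python
# Hypothetical subset of the pricing table: (model, input $/1M, output $/1M)
PRICING = [
    ("DeepSeek V3.2", 0.25, 0.38),
    ("DeepSeek V3.1", 0.15, 0.75),
    ("DeepSeek V3", 0.01, 0.03),
    ("R1", 0.55, 2.00),
]

def blended_rate(inp: float, out: float, output_share: float = 0.2) -> float:
    """Blended $/1M rate, weighting by an assumed output-token share of traffic."""
    return inp * (1 - output_share) + out * output_share

# Cheapest model for this workload mix
cheapest = min(PRICING, key=lambda row: blended_rate(row[1], row[2]))
print(cheapest[0])  # DeepSeek V3 at these rates
```

The `output_share` weight is the knob to tune: reasoning-heavy workloads generate far more output tokens than a retrieval or classification workload, which shifts the ranking toward models with cheap output rates.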
Benchmark Comparison
Best Scores by Provider
| Benchmark | Anthropic | DeepSeek |
|---|---|---|
| Intelligence | 51.8 (Claude Opus 4.7) | 32.1 (DeepSeek V3.2) |
| Coding | 53.1 (Claude Opus 4.7) | 37.9 (DeepSeek V3.2 Speciale) |
| Math | 62.7 (Claude Opus 4.5) | 96.7 (DeepSeek V3.2 Speciale) |
| MMLU Pro | 88.9 (Claude Opus 4.5) | 86.3 (DeepSeek V3.2 Speciale) |
| GPQA | 88.5 (Claude Opus 4.7) | 87.1 (DeepSeek V3.2 Speciale) |
| LiveCodeBench | 73.8 (Claude Opus 4.5) | 89.6 (DeepSeek V3.2 Speciale) |
| Aider | 84.2 (Claude 3.5 Sonnet) | 74.2 (DeepSeek V3.2 Exp) |
| AIME | 56.3 (Claude Opus 4) | 89.3 (R1 0528) |
Capabilities
| Capability | Anthropic | DeepSeek |
|---|---|---|
| Vision | ✓ (15 models) | — |
| Tool Calls | ✓ (15 models) | ✓ (16 models) |
| Reasoning | ✓ (9 models) | ✓ (15 models) |
| Audio Input | — | — |
| Audio Output | — | — |
| PDF Input | ✓ (7 models) | — |
| Web Search | ✓ (11 models) | — |
| Prompt Caching | ✓ (13 models) | ✓ (4 models) |
| Open Source Models | — | ✓ (16 models) |