Overview

| | Cohere | DeepSeek |
|---|---|---|
| Models | 6 | 22 |
| Cheapest Input | $0.04 | $0.01 |
| Max Context | 256K | 164K |
| Top Intelligence | 33.6 | 32.1 |
Flagship Model Comparison
Command R+ (08-2024) vs DeepSeek V3.2
| Metric | Command R+ (08-2024) | DeepSeek V3.2 |
|---|---|---|
| Input $/1M tokens | $2.50 | $0.26 |
| Output $/1M tokens | $10.00 | $0.38 |
| Context Window | 128K | 164K |
| Intelligence | 33.6 | 32.1 |
| Coding | N/A | 34.6 |
| Math | N/A | 59.0 |
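To see what the per-million-token prices above mean for a single request, the cost is simply `(tokens / 1M) × rate`, summed over input and output. The sketch below applies the two flagships' rates to a hypothetical workload of 10K input and 2K output tokens (the workload size is an illustrative assumption, not from the tables):

```python
# Estimate per-request cost from $/1M-token rates.
# The 10K-input / 2K-output workload is a hypothetical example.

def request_cost(input_tokens: int, output_tokens: int,
                 input_per_m: float, output_per_m: float) -> float:
    """Cost in dollars for one request at the given $/1M-token rates."""
    return (input_tokens / 1_000_000) * input_per_m \
         + (output_tokens / 1_000_000) * output_per_m

# Flagship rates from the comparison table above.
cmd_r_plus = request_cost(10_000, 2_000, 2.50, 10.00)  # Command R+ (08-2024)
v32 = request_cost(10_000, 2_000, 0.26, 0.38)          # DeepSeek V3.2

print(f"Command R+ (08-2024): ${cmd_r_plus:.4f}")  # $0.0450
print(f"DeepSeek V3.2:        ${v32:.4f}")         # $0.0034
```

At these rates, DeepSeek V3.2 works out to roughly 13x cheaper per request than Command R+ (08-2024) for this token mix.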
All Models Pricing
Cohere Models
| Model | Input/1M | Output/1M | Context |
|---|---|---|---|
| Command R+ (08-2024) | $2.50 | $10.00 | 128K |
| Command A | $2.50 | $10.00 | 256K |
| Command R7B (12-2024) | $0.04 | $0.15 | 128K |
| Command R | $0.15 | $0.60 | 128K |
| Command R (08-2024) | $0.15 | $0.60 | 128K |
| Command R+ | $2.50 | $10.00 | 128K |
DeepSeek Models
| Model | Input/1M | Output/1M | Context |
|---|---|---|---|
| DeepSeek V3.2 | $0.26 | $0.38 | 164K |
| DeepSeek V3.2 Speciale | $0.40 | $1.20 | 164K |
| DeepSeek V3.1 Terminus | $0.21 | $0.79 | 164K |
| DeepSeek V3.2 Exp | $0.27 | $0.41 | 164K |
| DeepSeek V3.1 | $0.15 | $0.75 | 33K |
| R1 0528 | $0.45 | $2.15 | 164K |
| R1 | $0.55 | $2.19 | 64K |
| R1 Distill Qwen 32B | $0.29 | $0.29 | 33K |
| DeepSeek V3 0324 | $0.20 | $0.77 | 164K |
| DeepSeek R1 0528 Qwen3 8B | $0.20 | $0.20 | 128K |
| R1 Distill Llama 70B | $0.70 | $0.80 | 131K |
| R1 Distill Qwen 14B | $0.15 | $0.15 | 33K |
| R1 Distill Llama 8B | $0.04 | $0.04 | 33K |
| DeepSeek V3 | $0.01 | $0.03 | 164K |
| DeepSeek Coder 1.3B Base | $0.10 | $0.10 | 16K |
| DeepSeek Coder 7B Base | $0.20 | $0.20 | 4K |
| R1 Distill Qwen 7B | $0.20 | $0.20 | 33K |
| DeepSeek Coder 7B Base v1.5 | $0.20 | $0.20 | 4K |
| DeepSeek Coder 7B Instruct v1.5 | $0.20 | $0.20 | 4K |
| R1 Distill Qwen 1.5B | $0.20 | $0.20 | 131K |
| DeepSeek Prover V2 | $0.50 | $2.18 | 164K |
| DeepSeek Coder 33B Instruct | $0.80 | $0.80 | 16K |
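Because input price and context window trade off across these tables, a small filter-then-minimize pass can pick the cheapest model that still fits a given prompt size. The sketch below uses a subset of the rows above (contexts approximated in tokens, e.g. 164K as 164,000); the selection helper itself is an illustrative assumption, not part of either provider's API:

```python
# Pick the cheapest model (by input $/1M) whose context window meets a
# minimum size. Rows are a subset of the pricing tables above; context
# sizes like "164K" are approximated as 164_000 tokens.

MODELS = [
    # (name, input $/1M, output $/1M, context tokens)
    ("DeepSeek V3.2",         0.26, 0.38, 164_000),
    ("DeepSeek V3.1",         0.15, 0.75,  33_000),
    ("DeepSeek V3",           0.01, 0.03, 164_000),
    ("Command R7B (12-2024)", 0.04, 0.15, 128_000),
    ("Command R",             0.15, 0.60, 128_000),
]

def cheapest_with_context(min_context: int):
    """Return the lowest-input-price model with context >= min_context, or None."""
    candidates = [m for m in MODELS if m[3] >= min_context]
    return min(candidates, key=lambda m: m[1]) if candidates else None

print(cheapest_with_context(100_000)[0])  # DeepSeek V3
```

For a 100K-token prompt this selects DeepSeek V3 ($0.01/1M input), since the cheaper-looking small-context rows are filtered out first.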
Benchmark Comparison
Best Scores by Provider
| Benchmark | Cohere | DeepSeek |
|---|---|---|
| Intelligence | 33.6 Command R+ (08-2024) | 32.1 DeepSeek V3.2 |
| Coding | 9.9 Command A | 37.9 DeepSeek V3.2 Speciale |
| Math | 13.0 Command A | 96.7 DeepSeek V3.2 Speciale |
| MMLU Pro | 71.2 Command A | 86.3 DeepSeek V3.2 Speciale |
| GPQA | 52.7 Command A | 87.1 DeepSeek V3.2 Speciale |
| LiveCodeBench | 28.7 Command A | 89.6 DeepSeek V3.2 Speciale |
| Aider | 38.3 Command R (08-2024) | 74.2 DeepSeek V3.2 Exp |
| AIME | 9.7 Command A | 89.3 R1 0528 |
| BBH | 42.8 Command R+ (08-2024) | N/A |
Capabilities
| Capability | Cohere | DeepSeek |
|---|---|---|
| Vision | — | — |
| Tool Calls | ✓ (6 models) | ✓ (16 models) |
| Reasoning | ✓ (2 models) | ✓ (15 models) |
| Audio Input | — | — |
| Audio Output | — | — |
| PDF Input | — | — |
| Web Search | — | — |
| Prompt Caching | — | ✓ (4 models) |
| Open Source Models | ✓ (4 models) | ✓ (16 models) |
Model-Level Comparisons
Compare specific models head-to-head:
Command R+ (08-2024) vs DeepSeek V3.2
Command R+ (08-2024) vs DeepSeek V3.2 Speciale
Command R+ (08-2024) vs DeepSeek V3.1 Terminus
Command A vs DeepSeek V3.2
Command A vs DeepSeek V3.2 Speciale
Command A vs DeepSeek V3.1 Terminus
Command R7B (12-2024) vs DeepSeek V3.2
Command R7B (12-2024) vs DeepSeek V3.2 Speciale
Command R7B (12-2024) vs DeepSeek V3.1 Terminus