Overview
| Provider | Models | Cheapest Input | Max Context | Top Intelligence |
|---|---|---|---|---|
| Anthropic | 15 | $0.25 | 1.0M | 46.5 |
| Cohere | 6 | $0.04 | 256K | 33.6 |
Flagship Model Comparison
Claude Opus 4.6 vs Command R+ (08-2024)
| Metric | Claude Opus 4.6 | Command R+ (08-2024) |
|---|---|---|
| Input $/1M tokens | $5.00 | $2.50 |
| Output $/1M tokens | $25.00 | $10.00 |
| Context Window | 1.0M | 128K |
| Intelligence | 46.5 | 33.6 |
| Coding | 47.6 | N/A |
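The $/1M-token rates above translate directly into per-request costs. A minimal sketch using the flagship rates from the table (the 10K-input / 1K-output request size is illustrative):

```python
def request_cost(input_tokens, output_tokens, in_rate, out_rate):
    """Cost in dollars for one request; rates are $ per 1M tokens."""
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# Example request: 10K input tokens, 1K output tokens.
opus = request_cost(10_000, 1_000, 5.00, 25.00)    # Claude Opus 4.6 rates
r_plus = request_cost(10_000, 1_000, 2.50, 10.00)  # Command R+ (08-2024) rates

print(f"Opus 4.6: ${opus:.4f}")    # → Opus 4.6: $0.0750
print(f"Command R+: ${r_plus:.4f}")  # → Command R+: $0.0350
```

At these rates, Command R+ is roughly half the cost per request, though the intelligence and coding rows above show a sizeable capability gap.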
All Models Pricing
Anthropic Models
| Model | Input $/1M | Output $/1M | Context |
|---|---|---|---|
| Claude Opus 4.6 | $5.00 | $25.00 | 1.0M |
| Claude Opus 4.5 | $5.00 | $25.00 | 200K |
| Claude Sonnet 4.6 | $3.00 | $15.00 | 1.0M |
| Claude Sonnet 4.5 | $3.00 | $15.00 | 1.0M |
| Claude Sonnet 4 | $3.00 | $15.00 | 200K |
| Claude Haiku 4.5 | $1.00 | $5.00 | 200K |
| Claude 3.7 Sonnet | $3.00 | $15.00 | 200K |
| Claude Opus 4.1 | $15.00 | $75.00 | 200K |
| Claude Opus 4 | $15.00 | $75.00 | 200K |
| Claude 3.5 Haiku | $0.80 | $4.00 | 200K |
| Claude 3.5 Sonnet | $3.00 | $15.00 | 200K |
| Claude 3 Opus | $15.00 | $75.00 | 200K |
| Claude 3 Haiku | $0.25 | $1.25 | 200K |
| Claude 2 | $8.00 | $24.00 | 100K |
| Claude 3.5 Haiku (2024-10-22) | $0.80 | $4.00 | 200K |
Cohere Models
| Model | Input $/1M | Output $/1M | Context |
|---|---|---|---|
| Command R+ (08-2024) | $2.50 | $10.00 | 128K |
| Command A | $2.50 | $10.00 | 256K |
| Command R7B (12-2024) | $0.04 | $0.15 | 128K |
| Command R | $0.15 | $0.60 | 128K |
| Command R (08-2024) | $0.15 | $0.60 | 128K |
| Command R+ | $2.50 | $10.00 | 128K |
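For budget-driven model selection, the two pricing tables can be treated as a lookup. A small sketch comparing monthly cost across a few of the cheaper entries (the models shown and the 50M/10M token workload are illustrative; rates are copied from the tables above):

```python
# (input $/1M tokens, output $/1M tokens), from the pricing tables above.
PRICES = {
    "Claude Haiku 4.5": (1.00, 5.00),
    "Claude 3 Haiku": (0.25, 1.25),
    "Command R7B (12-2024)": (0.04, 0.15),
    "Command R": (0.15, 0.60),
}

def monthly_cost(model, in_tokens, out_tokens):
    """Dollar cost for a monthly token volume at a model's listed rates."""
    in_rate, out_rate = PRICES[model]
    return (in_tokens * in_rate + out_tokens * out_rate) / 1_000_000

# Cheapest option for 50M input + 10M output tokens per month:
cheapest = min(PRICES, key=lambda m: monthly_cost(m, 50e6, 10e6))
print(cheapest)  # → Command R7B (12-2024)
```

Command R7B (12-2024) carries the lowest rates in either table, which is why it also appears as Cohere's "Cheapest Input" in the overview.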
Benchmark Comparison
Best Scores by Provider
| Benchmark | Anthropic | Cohere |
|---|---|---|
| Intelligence | 46.5 (Claude Opus 4.6) | 33.6 (Command R+ (08-2024)) |
| Coding | 47.6 (Claude Opus 4.6) | 9.9 (Command A) |
| Math | 62.7 (Claude Opus 4.5) | 13.0 (Command A) |
| MMLU Pro | 88.9 (Claude Opus 4.5) | 71.2 (Command A) |
| GPQA | 84.0 (Claude Opus 4.6) | 52.7 (Command A) |
| LiveCodeBench | 73.8 (Claude Opus 4.5) | 28.7 (Command A) |
| Aider | 84.2 (Claude 3.5 Sonnet) | 38.3 (Command R (08-2024)) |
| AIME | 56.3 (Claude Opus 4) | 9.7 (Command A) |
| BBH | N/A | 42.8 (Command R+ (08-2024)) |
Capabilities
| Capability | Anthropic | Cohere |
|---|---|---|
| Vision | ✓ (14 models) | — |
| Tool Calls | ✓ (14 models) | ✓ (6 models) |
| Reasoning | ✓ (9 models) | ✓ (2 models) |
| Audio Input | — | — |
| Audio Output | — | — |
| PDF Input | ✓ (7 models) | — |
| Web Search | ✓ (10 models) | — |
| Prompt Caching | ✓ (12 models) | — |
| Open Source Models | — | ✓ (4 models) |
Model-Level Comparisons
Compare specific models head-to-head:
Claude Opus 4.6 vs Command R+ (08-2024)
Claude Opus 4.6 vs Command A
Claude Opus 4.6 vs Command R7B (12-2024)
Claude Opus 4.5 vs Command R+ (08-2024)
Claude Opus 4.5 vs Command A
Claude Opus 4.5 vs Command R7B (12-2024)
Claude Sonnet 4.6 vs Command R+ (08-2024)
Claude Sonnet 4.6 vs Command A
Claude Sonnet 4.6 vs Command R7B (12-2024)