Overview

| Provider | Models | Cheapest Input | Max Context | Top Intelligence |
|---|---|---|---|---|
| Cohere | 6 | $0.04 | 256K | 33.6 |
| OpenAI | 50 | $0.03 | 1.1M | 57.0 |
Flagship Model Comparison
Command R+ (08-2024) vs GPT-5.4
| Metric | Command R+ (08-2024) | GPT-5.4 |
|---|---|---|
| Input $/1M tokens | $2.50 | $2.50 |
| Output $/1M tokens | $10.00 | $15.00 |
| Context Window | 128K | 1.1M |
| Intelligence | 33.6 | 57.0 |
| Coding | N/A | 57.3 |
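Because both providers bill per million tokens, the cost of a request is a linear combination of its input and output token counts. A minimal sketch using the flagship prices from the table above (the 100K-in / 2K-out workload is a hypothetical example, not a benchmark):

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_per_m: float, output_per_m: float) -> float:
    """Estimate the USD cost of one request from per-1M-token prices."""
    return (input_tokens / 1e6) * input_per_m + (output_tokens / 1e6) * output_per_m

# Prices from the flagship table above; token counts are a hypothetical workload.
command_r_plus = request_cost(100_000, 2_000, 2.50, 10.00)  # $0.25 in + $0.02 out = $0.27
gpt_5_4 = request_cost(100_000, 2_000, 2.50, 15.00)         # $0.25 in + $0.03 out = $0.28
```

For prompt-heavy workloads like this one, the identical input price makes the two models nearly equivalent; the output-price gap only dominates when generations are long.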
All Models Pricing
Cohere Models
| Model | Input/1M | Output/1M | Context |
|---|---|---|---|
| Command R+ (08-2024) | $2.50 | $10.00 | 128K |
| Command A | $2.50 | $10.00 | 256K |
| Command R7B (12-2024) | $0.04 | $0.15 | 128K |
| Command R | $0.15 | $0.60 | 128K |
| Command R (08-2024) | $0.15 | $0.60 | 128K |
| Command R+ | $2.50 | $10.00 | 128K |
OpenAI Models
| Model | Input/1M | Output/1M | Context |
|---|---|---|---|
| GPT-5.4 | $2.50 | $15.00 | 1.1M |
| GPT-5.3 Codex | $1.75 | $14.00 | 400K |
| GPT-5.2 Pro | $10.50 | $84.00 | 400K |
| GPT-5.2-Codex | $1.75 | $14.00 | 400K |
| GPT-5 Codex | $1.25 | $10.00 | 400K |
| GPT-5.1-Codex | $1.25 | $10.00 | 400K |
| GPT-5 Mini | $0.25 | $2.00 | 400K |
| o3 Pro | $20.00 | $80.00 | 200K |
| GPT-5.1-Codex-Mini | $0.25 | $2.00 | 400K |
| o3 | $2.00 | $8.00 | 200K |
| GPT-5.2 | $0.88 | $7.00 | 400K |
| o4 Mini | $1.10 | $4.40 | 200K |
| o1 | $15.00 | $60.00 | 200K |
| GPT-5.1 | $1.25 | $10.00 | 400K |
| GPT-4.1 | $2.00 | $8.00 | 1.0M |
| o3 Mini | $0.55 | $2.20 | 200K |
| o1-pro | $150.00 | $600.00 | 200K |
| o3 Mini High | $1.10 | $4.40 | 200K |
| GPT-OSS-20b | $0.03 | $0.10 | 131K |
| GPT-OSS-120b | $0.04 | $0.10 | 131K |
| GPT-5 | $1.25 | $10.00 | 400K |
| GPT-4.1 Mini | $0.40 | $1.60 | 1.0M |
| o1 Mini | $0.55 | $2.20 | 128K |
| ChatGPT-4o | $5.00 | $15.00 | 128K |
| GPT-5 Nano | $0.05 | $0.40 | 400K |
| GPT-4 Turbo | $5.00 | $15.00 | 128K |
| GPT-4.1 Nano | $0.10 | $0.40 | 1.0M |
| GPT-4 | $30.00 | $60.00 | 8K |
| GPT-4o-mini | $0.15 | $0.60 | 128K |
| GPT-3.5 Turbo | $0.50 | $1.50 | 16K |
| text-ada-001 | $0.20 | $0.20 | 2K |
| Babbage | $0.50 | $0.50 | 2K |
| Codex Mini | $0.75 | $3.00 | 200K |
| curie | $1.00 | $1.00 | 2K |
| o4 Mini High | $1.10 | $4.40 | 200K |
| GPT-5.1-Codex-Max | $1.25 | $10.00 | 400K |
| GPT-5.1 Chat | $1.25 | $10.00 | 128K |
| GPT-5 Chat | $1.25 | $10.00 | 128K |
| GPT-3.5 Turbo Instruct | $1.50 | $2.00 | 4K |
| GPT-5.2 Chat | $1.75 | $14.00 | 128K |
| GPT-5.3 Chat | $1.75 | $14.00 | 128K |
| o4 Mini Deep Research | $2.00 | $8.00 | 200K |
| GPT-4o | $2.50 | $10.00 | 128K |
| GPT-3.5 Turbo 16k | $3.00 | $4.00 | 16K |
| o3 Deep Research | $10.00 | $40.00 | 200K |
| GPT-5 Pro | $15.00 | $120.00 | 400K |
| text-davinci-002 | $20.00 | $20.00 | 4K |
| text-davinci-003 | $20.00 | $20.00 | 4K |
| GPT-5.4 Pro | $30.00 | $180.00 | 1.1M |
| Azure OpenAI | $75.00 | $150.00 | 128K |
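A common way to use a pricing table like the one above is to filter by context window first, then rank by a blended price. A sketch over a small subset of the OpenAI rows (the 3:1 input:output weighting is an assumption for illustration, not a provider metric):

```python
# (model, input $/1M, output $/1M, context window in tokens) from the table above.
MODELS = [
    ("GPT-5.4", 2.50, 15.00, 1_100_000),
    ("GPT-5 Mini", 0.25, 2.00, 400_000),
    ("GPT-4.1 Nano", 0.10, 0.40, 1_000_000),
    ("GPT-OSS-120b", 0.04, 0.10, 131_000),
]

def cheapest_for_context(models, needed_context, input_weight=0.75):
    """Return the lowest blended-price model whose window fits the prompt.
    input_weight encodes an assumed 3:1 input:output token mix."""
    fits = [m for m in models if m[3] >= needed_context]
    return min(fits, key=lambda m: input_weight * m[1] + (1 - input_weight) * m[2])

cheapest_for_context(MODELS, 500_000)  # among this subset: GPT-4.1 Nano
```

With a 500K-token prompt, only the two ~1M-context rows qualify, and GPT-4.1 Nano's blended price ($0.175/1M) beats GPT-5.4's ($5.625/1M) by a wide margin.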
Benchmark Comparison
Best Scores by Provider
| Benchmark | Cohere best (model) | OpenAI best (model) |
|---|---|---|
| Intelligence | 33.6 (Command R+ (08-2024)) | 57.0 (GPT-5.4) |
| Coding | 9.9 (Command A) | 57.3 (GPT-5.4) |
| Math | 13.0 (Command A) | 99.0 (GPT-5.2 Pro) |
| MMLU Pro | 71.2 (Command A) | 87.4 (GPT-5.2 Pro) |
| GPQA | 52.7 (Command A) | 92.0 (GPT-5.4) |
| LiveCodeBench | 28.7 (Command A) | 88.9 (GPT-5.2 Pro) |
| Aider | 38.3 (Command R (08-2024)) | 88.0 (GPT-5) |
| AIME | 9.7 (Command A) | 94.0 (o4 Mini) |
| BBH | 42.8 (Command R+ (08-2024)) | N/A |
Capabilities
| Capability | Cohere | OpenAI |
|---|---|---|
| Vision | — | ✓ (38 models) |
| Tool Calls | ✓ (6 models) | ✓ (45 models) |
| Reasoning | ✓ (2 models) | ✓ (30 models) |
| Audio Input | — | — |
| Audio Output | — | — |
| PDF Input | — | ✓ (28 models) |
| Web Search | — | ✓ (25 models) |
| Prompt Caching | — | ✓ (28 models) |
| Open Source Models | ✓ (4 models) | ✓ (2 models) |