Price Per Token

Anthropic vs OpenAI

Claude Opus 4 vs o1

A detailed comparison of pricing, benchmarks, and capabilities


Key Takeaways

Claude Opus 4 wins:

  • Better at math

o1 wins:

  • Cheaper output tokens
  • Faster response time
  • Higher intelligence benchmark
  • Better at coding
Price Advantage: o1
Benchmark Advantage: o1
Context Window: o1
Speed: o1

Pricing Comparison

Metric                  | Claude Opus 4 | o1      | Winner
Input (per 1M tokens)   | $15.00        | $15.00  | Tie
Output (per 1M tokens)  | $75.00        | $60.00  | o1
Cache Read (per 1M)     | $1.50         | $7.50   | Claude Opus 4
Cache Write (per 1M)    | $18.75        | N/A     | Claude Opus 4
Using a 3:1 input/output ratio, o1 is 13% cheaper overall.
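To see where the 13% figure comes from, here is a minimal sketch (assuming the 3:1 ratio means three input tokens for every output token) that computes a blended per-million-token price from the table above.

```python
# Blended price per 1M tokens, assuming "3:1" means three input tokens
# for every output token. Prices are from the table above (USD per 1M tokens).
PRICES = {
    "Claude Opus 4": {"input": 15.00, "output": 75.00},
    "o1": {"input": 15.00, "output": 60.00},
}

def blended_price(input_price: float, output_price: float,
                  input_parts: float = 3.0, output_parts: float = 1.0) -> float:
    """Weighted average price per 1M tokens for the given input/output mix."""
    total = input_parts + output_parts
    return (input_price * input_parts + output_price * output_parts) / total

for model, p in PRICES.items():
    print(f"{model}: ${blended_price(p['input'], p['output']):.2f} per 1M blended tokens")
# Claude Opus 4: $30.00, o1: $26.25 -> o1 is about 12.5% (~13%) cheaper on this mix.
```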

Claude Opus 4 Providers

  • Amazon Bedrock: $15.00
  • Google: $15.00
  • Anthropic: $15.00

o1 Providers

  • OpenAI: $15.00
  • Vercel: $15.00
  • Azure: $15.00

All listed providers charge the same $15.00 per 1M input tokens, so there is no cheaper option to pick on price alone.
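For workloads that reuse a long prompt prefix, the cache rates above matter as much as the headline input price. A rough sketch using Claude Opus 4's cache-write and cache-read rates from the table; the workload shape (a 50k-token prefix reused across 100 requests) is purely hypothetical, and real caching behavior (TTLs, minimum cacheable length) varies by provider.

```python
# Effective input cost with prompt caching, using Claude Opus 4's rates from the
# pricing table (USD per 1M tokens). The workload shape is hypothetical.
INPUT, CACHE_WRITE, CACHE_READ = 15.00, 18.75, 1.50

prefix_tokens = 50_000   # shared system prompt / context reused on every call
fresh_tokens = 2_000     # new input per request
requests = 100

no_cache = (prefix_tokens + fresh_tokens) * requests / 1e6 * INPUT
with_cache = (
    prefix_tokens / 1e6 * CACHE_WRITE                     # write the prefix once
    + prefix_tokens * (requests - 1) / 1e6 * CACHE_READ   # read it on later calls
    + fresh_tokens * requests / 1e6 * INPUT               # uncached new input
)

print(f"no cache:   ${no_cache:.2f}")    # $78.00
print(f"with cache: ${with_cache:.2f}")  # about $11.36
```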

Benchmark Comparison

8 benchmarks compared: Claude Opus 4 wins 1, o1 wins 5 (the remaining two have a score for only one model).

Benchmark Scores

Benchmark          | Description                      | Claude Opus 4 | o1   | Winner
Intelligence Index | Overall intelligence score       | 22.2          | 30.7 | o1
Coding Index       | Code generation & understanding  | -             | 20.5 | -
Math Index         | Mathematical reasoning           | 36.3          | -    | -
MMLU Pro           | Academic knowledge               | 86.0          | 84.1 | Claude Opus 4
GPQA               | Graduate-level science           | 70.1          | 74.7 | o1
LiveCodeBench      | Competitive programming          | 54.2          | 67.9 | o1
Aider              | Real-world code editing          | 72.0          | 84.2 | o1
AIME               | Competition math                 | 56.3          | 72.3 | o1
o1 significantly outperforms Claude Opus 4 on coding benchmarks (LiveCodeBench 67.9 vs 54.2, Aider 84.2 vs 72.0).
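The 1-vs-5 tally above counts only the rows where both models report a score; a small sketch of that counting over the table data:

```python
# Head-to-head win tally from the benchmark table (None = no reported score).
scores = {  # benchmark: (Claude Opus 4, o1)
    "Intelligence Index": (22.2, 30.7), "Coding Index": (None, 20.5),
    "Math Index": (36.3, None),         "MMLU Pro": (86.0, 84.1),
    "GPQA": (70.1, 74.7),               "LiveCodeBench": (54.2, 67.9),
    "Aider": (72.0, 84.2),              "AIME": (56.3, 72.3),
}

scored = [(c, o) for c, o in scores.values() if c is not None and o is not None]
claude_wins = sum(c > o for c, o in scored)
o1_wins = sum(o > c for c, o in scored)

print(f"{len(scores)} benchmarks compared: Claude Opus 4 wins {claude_wins}, o1 wins {o1_wins}")
# -> 8 benchmarks compared: Claude Opus 4 wins 1, o1 wins 5
```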

Cost vs Quality

[Chart: cost vs. quality, plotting Claude Opus 4 against other tracked models]

Context & Performance

Context Window

Model         | Context Window | Max Output
Claude Opus 4 | 200,000 tokens | 32,000 tokens
o1            | 200,000 tokens | 100,000 tokens
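Both models share the same 200,000-token window, so in practice the difference is how much of it you can reserve for the reply. A minimal sketch (figures from the table above; the helper name is just illustrative):

```python
# How large a prompt still fits once room is reserved for the response.
# Context-window and max-output figures are from the comparison above.
LIMITS = {  # model: (context window, max output tokens)
    "Claude Opus 4": (200_000, 32_000),
    "o1": (200_000, 100_000),
}

def max_prompt_tokens(model: str, reserved_output: int) -> int:
    """Largest prompt that leaves `reserved_output` tokens for the reply
    (capped at the model's maximum output length)."""
    context, max_out = LIMITS[model]
    return context - min(reserved_output, max_out)

for model in LIMITS:
    print(model, max_prompt_tokens(model, reserved_output=20_000))
# Both leave 180,000 tokens for the prompt here; the models only diverge when
# you need replies longer than Claude Opus 4's 32,000-token output cap.
```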

Speed Performance

Metric              | Claude Opus 4 | o1          | Winner
Tokens/second       | 38.9 tok/s    | 161.0 tok/s | o1
Time to First Token | 1.31 s        | 19.78 s     | Claude Opus 4
o1 generates tokens roughly 314% faster (161.0 vs 38.9 tok/s), while Claude Opus 4 begins responding much sooner (1.31 s vs 19.78 s to first token).
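Whether that actually feels faster depends on response length: o1 streams tokens about four times faster, but Claude Opus 4 starts answering roughly 18 seconds sooner. A back-of-the-envelope estimate using the figures above:

```python
# Estimated wall-clock time for a response of n tokens:
# time-to-first-token + n / throughput (figures from the speed table above).
SPEED = {  # model: (time to first token in seconds, tokens per second)
    "Claude Opus 4": (1.31, 38.9),
    "o1": (19.78, 161.0),
}

def total_seconds(model: str, output_tokens: int) -> float:
    ttft, tps = SPEED[model]
    return ttft + output_tokens / tps

for n in (200, 1_000, 5_000):
    estimate = {m: round(total_seconds(m, n), 1) for m in SPEED}
    print(f"{n} output tokens: {estimate}")
# Claude Opus 4 finishes short replies first; on these numbers o1 only pulls
# ahead once a response runs past roughly 950 tokens.
```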

Capabilities

Feature Comparison

Features compared for both models: Vision (Image Input), Tool/Function Calls, Reasoning Mode, Audio Input, Audio Output, PDF Input, Prompt Caching, and Web Search.

License & Release

Property | Claude Opus 4 | o1
License  | Proprietary   | Proprietary
Author   | Anthropic     | OpenAI
Released | May 2025      | Dec 2024

Claude Opus 4 Modalities

Input: image, text, file
Output: text

o1 Modalities

Input: text, image, file
Output: text


Frequently Asked Questions

Both models charge $15.00 per 1M input tokens; o1 has cheaper output pricing at $60.00 per 1M tokens versus $75.00 for Claude Opus 4.
o1 leads on coding with a Coding Index score of 20.5 (Claude Opus 4 has no reported score) and higher scores on LiveCodeBench (67.9 vs 54.2) and Aider (84.2 vs 72.0).
Both models offer a 200,000 token context window; o1 allows longer outputs (up to 100,000 tokens vs 32,000 for Claude Opus 4).
Both Claude Opus 4 and o1 support vision (image input).