Anthropic vs OpenAI

Claude Opus 4.5 vs GPT-4o (2024-05-13)

A detailed comparison of pricing, benchmarks, and capabilities


Key Takeaways

Claude Opus 4.5 wins:

  • Larger context window
  • Higher intelligence benchmark
  • Better at coding
  • Better at math
  • Has reasoning mode

GPT-4o (2024-05-13) wins:

  • Cheaper output tokens
  • Faster response time

Price Advantage: GPT-4o (2024-05-13)
Benchmark Advantage: Claude Opus 4.5
Context Window: Claude Opus 4.5
Speed: GPT-4o (2024-05-13)

Pricing Comparison


Metric | Claude Opus 4.5 | GPT-4o (2024-05-13) | Winner
Input (per 1M tokens) | $5.00 | $5.00 | Tie
Output (per 1M tokens) | $25.00 | $15.00 | GPT-4o (2024-05-13)
Cache Read (per 1M) | $0.50 | N/A | Claude Opus 4.5
Cache Write (per 1M) | $6.25 | N/A | Claude Opus 4.5
Using a 3:1 input/output ratio, GPT-4o (2024-05-13) is 25% cheaper overall.
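
The blended figure can be reproduced with a few lines of arithmetic. The sketch below is illustrative only: it uses the prices from the table above and the same assumed 3:1 input-to-output ratio, and the function name is ours, not from any provider SDK.

```python
# Blended price per 1M tokens at an assumed 3:1 input:output ratio,
# using the prices from the table above (USD per 1M tokens).
def blended_price(input_price: float, output_price: float,
                  input_share: float = 3, output_share: float = 1) -> float:
    total = input_share + output_share
    return (input_price * input_share + output_price * output_share) / total

claude = blended_price(5.00, 25.00)  # (3*5 + 1*25) / 4 = $10.00 per 1M blended tokens
gpt4o = blended_price(5.00, 15.00)   # (3*5 + 1*15) / 4 = $7.50 per 1M blended tokens

savings = 1 - gpt4o / claude         # 0.25 -> GPT-4o (2024-05-13) is 25% cheaper overall
print(f"Claude Opus 4.5: ${claude:.2f}  GPT-4o: ${gpt4o:.2f}  savings: {savings:.0%}")
```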

Claude Opus 4.5 Providers

  • Amazon Bedrock: $5.00 (Cheapest)
  • Google: $5.00 (Cheapest)
  • Anthropic: $5.00 (Cheapest)

GPT-4o (2024-05-13) Providers

  • OpenAI: $5.00 (Cheapest)
  • Azure: $5.00 (Cheapest)

Benchmark Comparison

  • 8 benchmarks compared
  • Claude Opus 4.5 wins: 5
  • GPT-4o (2024-05-13) wins: 0

Benchmark Scores

Benchmark | Claude Opus 4.5 | GPT-4o (2024-05-13) | Winner
Intelligence Index (overall intelligence score) | 43.0 | 16.0 | Claude Opus 4.5
Coding Index (code generation & understanding) | 42.9 | 24.2 | Claude Opus 4.5
Math Index (mathematical reasoning) | 62.7 | N/A | N/A
MMLU Pro (academic knowledge) | 88.9 | 74.0 | Claude Opus 4.5
GPQA (graduate-level science) | 81.0 | 52.6 | Claude Opus 4.5
LiveCodeBench (competitive programming) | 73.8 | 33.4 | Claude Opus 4.5
Aider (real-world code editing) | N/A | 72.9 | N/A
AIME (competition math) | N/A | 11.0 | N/A
Claude Opus 4.5 significantly outperforms GPT-4o (2024-05-13) on coding benchmarks (Coding Index 42.9 vs 24.2; LiveCodeBench 73.8 vs 33.4).

Cost vs Quality

[Interactive chart plotting cost against quality, highlighting Claude Opus 4.5 among other tracked models]

Context & Performance

Context Window

Claude Opus 4.5: 200,000 tokens (max output: 64,000 tokens)
GPT-4o (2024-05-13): 128,000 tokens (max output: 4,096 tokens)
Claude Opus 4.5 has a 56% larger context window.
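
As a rough illustration of what these figures mean in practice, the sketch below checks whether a request fits each model's window. The token counts are made-up examples; real counts would come from each provider's tokenizer, which this sketch does not call.

```python
# Rough check of whether a request fits a model's context window,
# using the context-window and max-output figures quoted above.
MODELS = {
    "Claude Opus 4.5":     {"context": 200_000, "max_output": 64_000},
    "GPT-4o (2024-05-13)": {"context": 128_000, "max_output": 4_096},
}

def fits(model: str, prompt_tokens: int, desired_output_tokens: int) -> bool:
    spec = MODELS[model]
    # The reply is capped by the model's max output length.
    output = min(desired_output_tokens, spec["max_output"])
    return prompt_tokens + output <= spec["context"]

# Example: a 150,000-token prompt with a 2,000-token answer.
for name in MODELS:
    print(name, fits(name, 150_000, 2_000))
# Claude Opus 4.5 True, GPT-4o (2024-05-13) False
```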

Speed Performance

Metric | Claude Opus 4.5 | GPT-4o (2024-05-13) | Winner
Tokens/second | 73.1 tok/s | 85.1 tok/s | GPT-4o (2024-05-13)
Time to First Token | 1.50 s | 0.50 s | GPT-4o (2024-05-13)
GPT-4o (2024-05-13) generates tokens about 16% faster and reaches its first token three times sooner.
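
A back-of-the-envelope way to combine the two speed metrics is total time ≈ time to first token + output tokens / throughput. The sketch below applies that to the figures above for a hypothetical 1,000-token reply; it is an estimate, not a measurement.

```python
# Estimated end-to-end response time from the measurements above:
# total time ~= time to first token + output_tokens / tokens_per_second.
def est_latency(ttft_s: float, tok_per_s: float, output_tokens: int) -> float:
    return ttft_s + output_tokens / tok_per_s

claude = est_latency(1.50, 73.1, 1_000)  # ~15.2 s for a 1,000-token reply
gpt4o = est_latency(0.50, 85.1, 1_000)   # ~12.3 s for a 1,000-token reply
print(f"Claude Opus 4.5: {claude:.1f}s, GPT-4o (2024-05-13): {gpt4o:.1f}s")
```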

Capabilities

Feature Comparison

Features compared for Claude Opus 4.5 and GPT-4o (2024-05-13):

  • Vision (Image Input)
  • Tool/Function Calls
  • Reasoning Mode
  • Audio Input
  • Audio Output
  • PDF Input
  • Prompt Caching
  • Web Search

License & Release

Property | Claude Opus 4.5 | GPT-4o (2024-05-13)
License | Proprietary | Proprietary
Author | Anthropic | OpenAI
Released | Nov 2025 | May 2024

Claude Opus 4.5 Modalities

Input: file, image, text
Output: text

GPT-4o (2024-05-13) Modalities

Input: text, image, file
Output: text


Frequently Asked Questions

Which model is cheaper?
Input pricing is tied at $5.00 per 1M tokens, while GPT-4o (2024-05-13) has cheaper output pricing at $15.00 per 1M tokens versus $25.00 for Claude Opus 4.5.

Which model is better at coding?
Claude Opus 4.5 scores higher on coding benchmarks, with a Coding Index of 42.9 compared to GPT-4o (2024-05-13)'s 24.2.

Which model has the larger context window?
Claude Opus 4.5 has a 200,000-token context window, while GPT-4o (2024-05-13) has a 128,000-token context window.

Do both models support vision?
Yes, both Claude Opus 4.5 and GPT-4o (2024-05-13) accept image input.