Price Per Token

Anthropic vs OpenAI

Claude Opus 4.5 vs GPT-4 Turbo (older v1106)

A detailed comparison of pricing, benchmarks, and capabilities

Key Takeaways

Claude Opus 4.5 wins:

  • Cheaper input tokens
  • Cheaper output tokens
  • Larger context window
  • Faster response time
  • Higher intelligence benchmark
  • Better at coding
  • Better at math
  • Supports vision
  • Has reasoning mode

GPT-4 Turbo (older v1106) wins:

  • No clear advantages in compared metrics

Price Advantage: Claude Opus 4.5
Benchmark Advantage: Claude Opus 4.5
Context Window: Claude Opus 4.5
Speed: Claude Opus 4.5

Pricing Comparison

Metric                 | Claude Opus 4.5 | GPT-4 Turbo (older v1106) | Winner
Input (per 1M tokens)  | $5.00           | $10.00                    | Claude Opus 4.5
Output (per 1M tokens) | $25.00          | $30.00                    | Claude Opus 4.5
Cache Read (per 1M)    | $0.50           | N/A                       | Claude Opus 4.5
Cache Write (per 1M)   | $6.25           | N/A                       | Claude Opus 4.5

Using a 3:1 input/output ratio, Claude Opus 4.5 is 33% cheaper overall.
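
To make the blended-cost claim concrete, here is a minimal sketch of the arithmetic, assuming the per-1M-token prices from the table above and the 3:1 input/output ratio stated here; the dictionary of model names is just for illustration.

```python
# Blended price per 1M tokens at a 3:1 input/output ratio.
PRICES = {
    "Claude Opus 4.5":           {"input": 5.00,  "output": 25.00},
    "GPT-4 Turbo (older v1106)": {"input": 10.00, "output": 30.00},
}

def blended_price(input_price: float, output_price: float, ratio: float = 3.0) -> float:
    """Weighted average price per 1M tokens, with `ratio` input tokens per output token."""
    return (ratio * input_price + output_price) / (ratio + 1)

for model, p in PRICES.items():
    print(f"{model}: ${blended_price(p['input'], p['output']):.2f} per 1M blended tokens")

# Claude Opus 4.5: (3*5 + 25)/4  = $10.00
# GPT-4 Turbo:     (3*10 + 30)/4 = $15.00  -> Claude Opus 4.5 is ~33% cheaper
```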

Claude Opus 4.5 Providers

Amazon Bedrock: $5.00 per 1M input tokens
Google: $5.00 per 1M input tokens
Anthropic: $5.00 per 1M input tokens
(All three providers tie for the cheapest price.)

GPT-4 Turbo (older v1106) Providers

OpenAI: $10.00 per 1M input tokens
Azure: $10.00 per 1M input tokens
(Both providers tie for the cheapest price.)

Benchmark Comparison

7 benchmarks compared; 0 head-to-head wins for either model, because no benchmark in the set has published scores for both: GPT-4 Turbo (older v1106) lacks scores on six of the seven, and Claude Opus 4.5 lacks a score on the seventh (Aider).

Benchmark Scores

Benchmark          | Description                     | Claude Opus 4.5 | GPT-4 Turbo (older v1106) | Winner
Intelligence Index | Overall intelligence score      | 43.0            | -                         | -
Coding Index       | Code generation & understanding | 42.9            | -                         | -
Math Index         | Mathematical reasoning          | 62.7            | -                         | -
MMLU Pro           | Academic knowledge              | 88.9            | -                         | -
GPQA               | Graduate-level science          | 81.0            | -                         | -
LiveCodeBench      | Competitive programming         | 73.8            | -                         | -
Aider              | Real-world code editing         | -               | 65.4                      | -
Claude Opus 4.5 posts strong coding scores (Coding Index 42.9, LiveCodeBench 73.8), but GPT-4 Turbo (older v1106) has no published scores on these benchmarks, so no direct coding comparison is possible here.

Cost vs Quality

[Interactive scatter chart plotting cost against quality for Claude Opus 4.5 and other tracked models; not reproduced here.]

Context & Performance

Context Window

Claude Opus 4.5: 200,000 tokens (max output: 64,000 tokens)
GPT-4 Turbo (older v1106): 128,000 tokens (max output: 4,096 tokens)

Claude Opus 4.5's context window is about 56% larger (200,000 vs 128,000 tokens), and its maximum output is 64,000 tokens versus 4,096.
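
As a rough illustration of what those limits mean in practice, here is a sketch that checks whether a prompt plus a requested completion fits each model's window. The 4-characters-per-token estimate is a crude assumption rather than a real tokenizer; the limits are the ones quoted above.

```python
# Rough fit check against the context limits listed above.
LIMITS = {
    "Claude Opus 4.5":           {"context": 200_000, "max_output": 64_000},
    "GPT-4 Turbo (older v1106)": {"context": 128_000, "max_output": 4_096},
}

def estimate_tokens(text: str) -> int:
    # Crude heuristic: ~4 characters per token. Real tokenizers vary.
    return max(1, len(text) // 4)

def fits(model: str, prompt: str, requested_output_tokens: int) -> bool:
    lim = LIMITS[model]
    return (estimate_tokens(prompt) + requested_output_tokens <= lim["context"]
            and requested_output_tokens <= lim["max_output"])

long_doc = "word " * 150_000  # ~187,500 estimated tokens
print(fits("Claude Opus 4.5", long_doc, 4_000))            # True
print(fits("GPT-4 Turbo (older v1106)", long_doc, 4_000))  # False: exceeds the 128K window
```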

Speed Performance

Metric              | Claude Opus 4.5 | GPT-4 Turbo (older v1106) | Winner
Tokens/second       | 73.1 tok/s      | N/A                       | -
Time to First Token | 1.50s           | N/A                       | -
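
A common back-of-the-envelope latency estimate is time-to-first-token plus output length divided by throughput. The sketch below applies it to the Claude Opus 4.5 figures above; GPT-4 Turbo (older v1106) is omitted because its throughput and TTFT are not listed, and the 1,000-token answer length is just an example.

```python
# Approximate wall-clock time for a completion:
#   total_seconds ≈ time_to_first_token + output_tokens / tokens_per_second
def completion_time(ttft_s: float, tokens_per_s: float, output_tokens: int) -> float:
    return ttft_s + output_tokens / tokens_per_s

claude_time = completion_time(ttft_s=1.50, tokens_per_s=73.1, output_tokens=1_000)
print(f"Claude Opus 4.5: ~{claude_time:.1f}s for a 1,000-token answer")  # ~15.2s
```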

Capabilities

Feature Comparison

Feature              | Claude Opus 4.5 | GPT-4 Turbo (older v1106)
Vision (Image Input) | Yes             | No
Tool/Function Calls  | Yes             | Yes
Reasoning Mode       | Yes             | No
Audio Input          | No              | No
Audio Output         | No              | No
PDF Input            | Yes             | No
Prompt Caching       | Yes             | No
Web Search           | Yes             | No
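
Since vision is one of the clearest capability gaps in the table above, here is a minimal sketch of an image request using the Anthropic Python SDK's Messages API. The model identifier "claude-opus-4-5" and the file name are assumptions; check Anthropic's current model list for the exact id.

```python
# Minimal vision-request sketch (assumed model id; requires ANTHROPIC_API_KEY).
import base64
import anthropic

client = anthropic.Anthropic()

with open("chart.png", "rb") as f:  # hypothetical local image
    image_b64 = base64.standard_b64encode(f.read()).decode("utf-8")

response = client.messages.create(
    model="claude-opus-4-5",  # assumed identifier for Claude Opus 4.5
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": [
            {"type": "image",
             "source": {"type": "base64", "media_type": "image/png", "data": image_b64}},
            {"type": "text", "text": "Summarize this chart in two sentences."},
        ],
    }],
)
print(response.content[0].text)
```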

License & Release

Property | Claude Opus 4.5 | GPT-4 Turbo (older v1106)
License  | Proprietary     | Proprietary
Author   | Anthropic       | OpenAI
Released | Nov 2025        | Nov 2023

Claude Opus 4.5 Modalities

Input: file, image, text
Output: text

GPT-4 Turbo (older v1106) Modalities

Input: text
Output: text

Frequently Asked Questions

Which model is cheaper? Claude Opus 4.5 has cheaper input pricing ($5.00/M tokens vs $10.00/M) and cheaper output pricing ($25.00/M tokens vs $30.00/M).

Which model is better at coding? Claude Opus 4.5 scores 42.9 on the Coding Index; GPT-4 Turbo (older v1106) has no published score on that benchmark.

Which model has the larger context window? Claude Opus 4.5 has a 200,000-token context window, while GPT-4 Turbo (older v1106) has 128,000 tokens.

Which model supports vision? Claude Opus 4.5 supports vision (image input); GPT-4 Turbo (older v1106) does not.