Price Per Token
Anthropic vs OpenAI

Claude 3.5 Sonnet vs GPT-4o-mini

A detailed comparison of pricing, benchmarks, and capabilities



Key Takeaways

Claude 3.5 Sonnet wins:

  • Larger context window
  • Higher intelligence benchmark
  • Better at coding

GPT-4o-mini wins:

  • Cheaper input tokens
  • Cheaper output tokens
  • Faster response time
  • Better at math

At a glance:

  • Price Advantage: GPT-4o-mini
  • Benchmark Advantage: Claude 3.5 Sonnet
  • Context Window: Claude 3.5 Sonnet
  • Speed: GPT-4o-mini

Pricing Comparison

Metric                   Claude 3.5 Sonnet   GPT-4o-mini   Winner
Input (per 1M tokens)    $6.00               $0.15         GPT-4o-mini
Output (per 1M tokens)   $30.00              $0.60         GPT-4o-mini
Cache Read (per 1M)      $0.60               $0.07         GPT-4o-mini
Cache Write (per 1M)     $7.50               N/A           Claude 3.5 Sonnet
Using a 3:1 input/output ratio, GPT-4o-mini is 98% cheaper overall.
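The blended figure can be reproduced with a few lines of Python. Prices and the 3:1 input/output weighting come from the table above; the function name is illustrative, not from any API:

```python
def blended_price(input_price, output_price, ratio=3):
    """Weighted average price per 1M tokens, assuming `ratio` input tokens
    for every output token (here 3:1)."""
    return (ratio * input_price + output_price) / (ratio + 1)

# Prices per 1M tokens from the comparison table.
claude = blended_price(6.00, 30.00)   # (3 * 6.00 + 30.00) / 4 = $12.00
mini = blended_price(0.15, 0.60)      # (3 * 0.15 + 0.60) / 4 = $0.2625
savings = 1 - mini / claude           # 0.978125, i.e. ~98% cheaper

print(f"Claude 3.5 Sonnet blended: ${claude:.2f}/1M tokens")
print(f"GPT-4o-mini blended:       ${mini:.4f}/1M tokens")
print(f"GPT-4o-mini is {savings:.0%} cheaper")
```

A different input/output ratio shifts the blended prices but not the conclusion, since GPT-4o-mini is cheaper on both sides.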

Claude 3.5 Sonnet Providers

Vercel $3.00 (Cheapest)
Amazon Bedrock $6.00

GPT-4o-mini Providers

OpenAI $0.15 (Cheapest)
Vercel $0.15 (Cheapest)
Azure $0.15 (Cheapest)

Benchmark Comparison

Of the 8 benchmarks compared, Claude 3.5 Sonnet wins 6 and GPT-4o-mini wins 0; the remaining two (Coding Index and Math Index) have a published score for only one model.

Benchmark Scores

Benchmark                                    Claude 3.5 Sonnet   GPT-4o-mini   Winner
Intelligence Index (overall intelligence)    15.9                12.6          Claude 3.5 Sonnet
Coding Index (code generation)               30.2                N/A           N/A
Math Index (mathematical reasoning)          N/A                 14.7          N/A
MMLU Pro (academic knowledge)                77.2                64.8          Claude 3.5 Sonnet
GPQA (graduate-level science)                59.9                42.6          Claude 3.5 Sonnet
LiveCodeBench (competitive programming)      38.1                23.4          Claude 3.5 Sonnet
Aider (real-world code editing)              84.2                55.6          Claude 3.5 Sonnet
AIME (competition math)                      15.7                11.7          Claude 3.5 Sonnet
Claude 3.5 Sonnet significantly outperforms in coding benchmarks.

Cost vs Quality

[Interactive chart not reproduced: cost (x-axis) vs. quality (y-axis), with Claude 3.5 Sonnet highlighted among other tracked models.]

Context & Performance

Context Window

Claude 3.5 Sonnet: 200,000 tokens (max output: 8,192 tokens)
GPT-4o-mini: 128,000 tokens (max output: 16,384 tokens)

Claude 3.5 Sonnet's context window is about 56% larger; GPT-4o-mini, however, allows twice the maximum output length.
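The relative sizes follow directly from the two token counts above; a quick check:

```python
# Context-window sizes from the section above.
claude_ctx = 200_000   # Claude 3.5 Sonnet
mini_ctx = 128_000     # GPT-4o-mini

# Relative advantage: (200,000 - 128,000) / 128,000 = 0.5625
larger_by = (claude_ctx - mini_ctx) / mini_ctx
print(f"Claude 3.5 Sonnet's context window is {larger_by:.0%} larger")
```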

Speed Performance

Metric                 Claude 3.5 Sonnet   GPT-4o-mini   Winner
Tokens/second          N/A                 53.3 tok/s    GPT-4o-mini
Time to First Token    N/A                 0.56s         GPT-4o-mini

Speed measurements for Claude 3.5 Sonnet were not available at the time of comparison, so a percentage difference cannot be computed.

Capabilities

Feature Comparison

Feature                 Claude 3.5 Sonnet   GPT-4o-mini
Vision (Image Input)    Yes                 Yes
Tool/Function Calls     Yes                 Yes
Reasoning Mode          No                  No
Audio Input             No                  No
Audio Output            No                  No
PDF Input               Yes                 Yes
Prompt Caching          Yes                 Yes
Web Search              No                  No

License & Release

Property    Claude 3.5 Sonnet   GPT-4o-mini
License     Proprietary         Proprietary
Author      Anthropic           OpenAI
Released    Oct 2024            Jul 2024

Claude 3.5 Sonnet Modalities

Input: text, image, file
Output: text

GPT-4o-mini Modalities

Input: text, image, file
Output: text


Frequently Asked Questions

Which model is cheaper?
GPT-4o-mini is cheaper on both input ($0.15/1M tokens vs $6.00) and output ($0.60/1M tokens vs $30.00).

Which model is better at coding?
Claude 3.5 Sonnet leads on coding benchmarks, with a Coding Index of 30.2 and an Aider score of 84.2 vs 55.6; GPT-4o-mini has no published Coding Index score.

Which model has a larger context window?
Claude 3.5 Sonnet has a 200,000-token context window, while GPT-4o-mini has a 128,000-token context window.

Do both models support vision?
Yes, both Claude 3.5 Sonnet and GPT-4o-mini accept image input.