Price Per Token
Deepseek vs OpenAI

R1 vs GPT-4o-mini

A detailed comparison of pricing, benchmarks, and capabilities


Key Takeaways

R1 wins:

  • Larger context window
  • Higher intelligence benchmark
  • Better at coding
  • Better at math
  • Has reasoning mode

GPT-4o-mini wins:

  • Cheaper input tokens
  • Cheaper output tokens
  • Faster response time
  • Supports vision

  • Price Advantage: GPT-4o-mini
  • Benchmark Advantage: R1
  • Context Window: R1
  • Speed: GPT-4o-mini

Pricing Comparison

Price Comparison

Metric                 | R1    | GPT-4o-mini | Winner
Input (per 1M tokens)  | $0.70 | $0.15       | GPT-4o-mini
Output (per 1M tokens) | $2.40 | $0.60       | GPT-4o-mini
Cache Read (per 1M)    | N/A   | $0.075      | GPT-4o-mini
Using a 3:1 input/output ratio, GPT-4o-mini is 77% cheaper overall.
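
To make the blended-cost figure concrete, here is a minimal Python sketch that reproduces it from the prices in the table above, assuming the page's stated 3:1 input/output token ratio:

```python
# Blended cost for 3M input + 1M output tokens, using the table's prices.
PRICES = {
    "R1":          {"input": 0.70, "output": 2.40},   # USD per 1M tokens
    "GPT-4o-mini": {"input": 0.15, "output": 0.60},
}

def blended_cost(model: str, input_m: float = 3.0, output_m: float = 1.0) -> float:
    """Cost in USD for input_m million input tokens and output_m million output tokens."""
    p = PRICES[model]
    return input_m * p["input"] + output_m * p["output"]

r1 = blended_cost("R1")             # 3*0.70 + 1*2.40 = 4.50
mini = blended_cost("GPT-4o-mini")  # 3*0.15 + 1*0.60 = 1.05
print(f"GPT-4o-mini is {(1 - mini / r1):.0%} cheaper")  # -> 77%
```

At that ratio the blended totals work out to $4.50 for R1 versus $1.05 for GPT-4o-mini per 4M tokens processed.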

R1 Providers

Chutes $0.30 (Cheapest)
Vercel $0.55
Novita $0.70
DeepInfra $1.00
Azure $1.49

GPT-4o-mini Providers

OpenAI $0.15 (Cheapest)
Vercel $0.15 (Cheapest)
OpenRouter $0.15 (Cheapest)
Azure $0.15 (Cheapest)
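
Because the same model is resold at different rates, a simple way to choose a provider is to rank the listed prices. The sketch below just sorts the R1 figures shown above, assuming they are input prices per 1M tokens (the page does not label the unit explicitly):

```python
# R1 price per 1M tokens by provider, as listed above (USD, assumed input pricing).
R1_PROVIDERS = {
    "Chutes": 0.30,
    "Vercel": 0.55,
    "Novita": 0.70,
    "DeepInfra": 1.00,
    "Azure": 1.49,
}

cheapest = min(R1_PROVIDERS, key=R1_PROVIDERS.get)
print(f"Cheapest R1 provider: {cheapest} at ${R1_PROVIDERS[cheapest]:.2f}/1M tokens")

# Full ranking, cheapest first.
for name, price in sorted(R1_PROVIDERS.items(), key=lambda kv: kv[1]):
    print(f"{name:<10} ${price:.2f}")
```

The same approach applies to GPT-4o-mini, though all four listed providers currently tie at $0.15.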

Benchmark Comparison

  • Benchmarks compared: 8 (the Coding Index has no GPT-4o-mini score, so it is not counted)
  • R1 wins: 7
  • GPT-4o-mini wins: 0

Benchmark Scores

Benchmark          | Description                     | R1   | GPT-4o-mini | Winner
Intelligence Index | Overall intelligence score      | 18.7 | 12.6        | R1
Coding Index       | Code generation & understanding | 15.7 | --          | --
Math Index         | Mathematical reasoning          | 68.0 | 14.7        | R1
MMLU Pro           | Academic knowledge              | 84.4 | 64.8        | R1
GPQA               | Graduate-level science          | 70.8 | 42.6        | R1
LiveCodeBench      | Competitive programming         | 61.7 | 23.4        | R1
Aider              | Real-world code editing         | 64.0 | 55.6        | R1
AIME               | Competition math                | 68.3 | 11.7        | R1
R1 significantly outperforms GPT-4o-mini on the coding benchmarks (LiveCodeBench and Aider).
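
The win tally in the summary above can be reproduced directly from the score table. A small sketch, using the scores as listed and treating the missing GPT-4o-mini coding score as no contest:

```python
# Benchmark scores from the table above; None means no reported score.
SCORES = {
    "Intelligence Index": (18.7, 12.6),
    "Coding Index":       (15.7, None),
    "Math Index":         (68.0, 14.7),
    "MMLU Pro":           (84.4, 64.8),
    "GPQA":               (70.8, 42.6),
    "LiveCodeBench":      (61.7, 23.4),
    "Aider":              (64.0, 55.6),
    "AIME":               (68.3, 11.7),
}

wins = {"R1": 0, "GPT-4o-mini": 0}
for name, (r1, mini) in SCORES.items():
    if r1 is None or mini is None:
        continue  # benchmarks without both scores are not counted
    wins["R1" if r1 > mini else "GPT-4o-mini"] += 1

print(wins)  # -> {'R1': 7, 'GPT-4o-mini': 0}
```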

Cost vs Quality

[Interactive chart: cost vs. quality scatter plot, highlighting R1 against the other tracked models.]

Context & Performance

Context Window

  • R1: 163,840-token context window (max output: 163,840 tokens)
  • GPT-4o-mini: 128,000-token context window (max output: 16,384 tokens)
R1 has a 28% larger context window (163,840 vs. 128,000 tokens).
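
For a practical read on these limits, the sketch below checks whether a hypothetical workload fits each model and recomputes the relative window size from the figures above; the 50,000-token prompt and 20,000-token output budget are invented examples:

```python
# Context limits from the comparison above (tokens).
LIMITS = {
    "R1":          {"context": 163_840, "max_output": 163_840},
    "GPT-4o-mini": {"context": 128_000, "max_output": 16_384},
}

prompt_tokens = 50_000   # hypothetical document size
desired_output = 20_000  # hypothetical generation budget

for model, lim in LIMITS.items():
    fits = (prompt_tokens + desired_output <= lim["context"]
            and desired_output <= lim["max_output"])
    verdict = "fits" if fits else "does not fit"
    print(f"{model}: {verdict} (context {lim['context']:,}, max output {lim['max_output']:,})")

# Relative window size: 163,840 / 128,000 = 1.28 -> R1's window is 28% larger.
print(f"R1 window is {LIMITS['R1']['context'] / LIMITS['GPT-4o-mini']['context'] - 1:.0%} larger")
```

Note that in this example GPT-4o-mini is constrained by its 16,384-token output cap rather than by its context window.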

Speed Performance

Metric              | R1  | GPT-4o-mini | Winner
Tokens/second       | N/A | 51.1 tok/s  | GPT-4o-mini
Time to First Token | N/A | 0.51s       | GPT-4o-mini
No throughput figures are reported for R1, so a percentage speed comparison is not meaningful; GPT-4o-mini averages 51.1 tokens/second with a 0.51 s time to first token.

Capabilities

Feature Comparison

Feature              | R1  | GPT-4o-mini
Vision (Image Input) | No  | Yes
Tool/Function Calls  | --  | Yes
Reasoning Mode       | Yes | No
Audio Input          | No  | No
Audio Output         | No  | No
PDF Input            | No  | Yes
Prompt Caching       | --  | Yes
Web Search           | --  | --
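
To illustrate the two headline capability differences in practice, here is a hedged sketch: an image prompt sent to GPT-4o-mini through OpenAI's Chat Completions API, and a text-only call to R1 through DeepSeek's OpenAI-compatible endpoint. The DeepSeek base URL, the deepseek-reasoner model name, and the reasoning_content field are assumptions about that provider's API, not something documented on this page:

```python
# Assumes: pip install openai, with OPENAI_API_KEY / DEEPSEEK_API_KEY set in the environment.
import os
from openai import OpenAI

# GPT-4o-mini: accepts image input alongside text.
openai_client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
vision = openai_client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is shown in this chart?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/chart.png"}},
        ],
    }],
)
print(vision.choices[0].message.content)

# R1: text-only, but runs in reasoning mode. Endpoint, model, and field names are assumed.
deepseek_client = OpenAI(
    base_url="https://api.deepseek.com",
    api_key=os.environ["DEEPSEEK_API_KEY"],
)
reasoned = deepseek_client.chat.completions.create(
    model="deepseek-reasoner",
    messages=[{"role": "user", "content": "What is 68.3% of 296, rounded to the nearest integer?"}],
)
print(getattr(reasoned.choices[0].message, "reasoning_content", None))  # chain of thought, if exposed
print(reasoned.choices[0].message.content)
```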

License & Release

Property | R1          | GPT-4o-mini
License  | Open Source | Proprietary
Author   | Deepseek    | OpenAI
Released | Jan 2025    | Jul 2024

R1 Modalities

  • Input: text
  • Output: text

GPT-4o-mini Modalities

  • Input: text, image, file
  • Output: text


Frequently Asked Questions

Which model is cheaper?
GPT-4o-mini has cheaper input pricing at $0.15/M tokens and cheaper output pricing at $0.60/M tokens.

Which model is better at coding?
R1 scores higher on coding benchmarks, with a Coding Index of 15.7; GPT-4o-mini has no reported score.

Which model has the larger context window?
R1 has a 163,840-token context window, while GPT-4o-mini has a 128,000-token context window.

Which model supports vision?
GPT-4o-mini supports vision (image input); R1 does not.