Price Per Token
Mistral AI vs OpenAI

Mistral Large 2407 vs GPT-5.2-Codex

A detailed comparison of pricing, benchmarks, and capabilities


Key Takeaways

Mistral Large 2407 wins:

  • Cheaper output tokens
  • Better at math

GPT-5.2-Codex wins:

  • Cheaper input tokens
  • Larger context window
  • Faster response time
  • Higher intelligence benchmark
  • Better at coding
  • Supports vision
  • Has reasoning mode
  • Price Advantage: Mistral Large 2407
  • Benchmark Advantage: GPT-5.2-Codex
  • Context Window: GPT-5.2-Codex
  • Speed: GPT-5.2-Codex

Feature Comparison

Feature                 Mistral Large 2407    GPT-5.2-Codex
Vision (Image Input)    No                    Yes
Tool/Function Calls
Reasoning Mode          No                    Yes
Audio Input
Audio Output
PDF Input
Prompt Caching
Web Search

License & Release

Property    Mistral Large 2407    GPT-5.2-Codex
License     Open Source           Proprietary
Author      Mistral AI            OpenAI
Released    Nov 2024              Jan 2026

Mistral Large 2407 Modalities

Input
text
Output
text

GPT-5.2-Codex Modalities

Input
text, image
Output
text


Frequently Asked Questions

Which model has cheaper pricing?
GPT-5.2-Codex has cheaper input pricing at $1.75/M tokens, while Mistral Large 2407 has cheaper output pricing at $6.00/M tokens.

Which model is better at coding?
GPT-5.2-Codex scores 43.0 on coding benchmarks; Mistral Large 2407 has no reported coding benchmark score.
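The per-million-token prices above translate directly into per-request costs. A minimal sketch, using only the two prices stated in this comparison (the function name and example workload are illustrative, and the zeroed-out prices stand in for figures not given here):

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_price_per_m: float, output_price_per_m: float) -> float:
    """Dollar cost of one request, given per-million-token prices."""
    return (input_tokens * input_price_per_m +
            output_tokens * output_price_per_m) / 1_000_000

# Example workload: 4,000 input tokens and 1,000 output tokens.
# Only the input price for GPT-5.2-Codex ($1.75/M) and the output
# price for Mistral Large 2407 ($6.00/M) appear in this comparison,
# so each side is costed separately.
gpt_input_cost = request_cost(4_000, 0, 1.75, 0.0)
mistral_output_cost = request_cost(0, 1_000, 0.0, 6.00)
print(f"GPT-5.2-Codex input cost:       ${gpt_input_cost:.4f}")
print(f"Mistral Large 2407 output cost: ${mistral_output_cost:.4f}")
```

Plugging in the full price sheet for either model gives a direct head-to-head cost for any expected input/output mix.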
Which model has the larger context window?
Mistral Large 2407 has a 131,072-token context window, while GPT-5.2-Codex has a 400,000-token context window.

Which model supports vision?
GPT-5.2-Codex supports vision (image input); Mistral Large 2407 does not.
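The context-window figures above can be used for a rough fit check before sending a long prompt. A minimal sketch, where the context sizes come from this comparison but the 4-characters-per-token ratio is an assumed heuristic, not either model's actual tokenizer:

```python
# Context window sizes, in tokens, as listed in this comparison.
CONTEXT_WINDOW = {
    "Mistral Large 2407": 131_072,
    "GPT-5.2-Codex": 400_000,
}

def fits_in_context(text: str, model: str, chars_per_token: float = 4.0) -> bool:
    """Rough check that a prompt fits a model's context window.

    chars_per_token is a crude English-text heuristic; real token
    counts require the model's tokenizer.
    """
    estimated_tokens = len(text) / chars_per_token
    return estimated_tokens <= CONTEXT_WINDOW[model]
```

For example, a prompt of roughly 600,000 characters (about 150,000 tokens under this heuristic) would fit GPT-5.2-Codex's window but not Mistral Large 2407's.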