Price Per Token
Mistral AI vs OpenAI

Mistral Small 3.1 24B vs GPT-5.2-Codex

A detailed comparison of pricing, benchmarks, and capabilities


Key Takeaways

Mistral Small 3.1 24B wins:

  • Cheaper input tokens
  • Cheaper output tokens
  • Faster response time
  • Better at math

GPT-5.2-Codex wins:

  • Larger context window
  • Higher intelligence benchmark
  • Better at coding
  • Has reasoning mode
  • Price Advantage: Mistral Small 3.1 24B
  • Benchmark Advantage: GPT-5.2-Codex
  • Context Window: GPT-5.2-Codex
  • Speed: Mistral Small 3.1 24B

Capabilities

Feature Comparison

Feature      Mistral Small 3.1 24B      GPT-5.2-Codex
Vision (Image Input)
Tool/Function Calls
Reasoning Mode
Audio Input
Audio Output
PDF Input
Prompt Caching
Web Search

License & Release

Property     Mistral Small 3.1 24B      GPT-5.2-Codex
License      Open Source                Proprietary
Author       Mistral AI                 OpenAI
Released     Mar 2025                   Jan 2026

Mistral Small 3.1 24B Modalities

Input
text, image
Output
text

GPT-5.2-Codex Modalities

Input
text, image
Output
text

Frequently Asked Questions

Which model is cheaper?
Mistral Small 3.1 24B is cheaper on both sides: $0.03 per million input tokens and $0.11 per million output tokens.
Which model is better at coding?
GPT-5.2-Codex scores higher on coding benchmarks: 43.0 versus 13.9 for Mistral Small 3.1 24B.
Which model has the larger context window?
GPT-5.2-Codex, with a 400,000-token context window versus 128,000 tokens for Mistral Small 3.1 24B.
Do these models support vision?
Yes, both Mistral Small 3.1 24B and GPT-5.2-Codex accept image input.
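As a quick illustration of how per-million-token pricing translates into a per-request cost, the sketch below applies the Mistral Small 3.1 24B rates quoted above ($0.03/M input, $0.11/M output). The function name and the example token counts are illustrative, not part of any provider's API:

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_price_per_m: float, output_price_per_m: float) -> float:
    """Cost in USD for one request, given prices per million tokens."""
    return (input_tokens / 1_000_000) * input_price_per_m \
         + (output_tokens / 1_000_000) * output_price_per_m

# Hypothetical request: 50k input tokens, 2k output tokens
# at Mistral Small 3.1 24B rates ($0.03/M in, $0.11/M out).
cost = request_cost(50_000, 2_000, 0.03, 0.11)
print(f"${cost:.6f}")  # → $0.001720
```

The same function works for any model on this page once its per-million rates are known; only the two price arguments change.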