Price Per Token
Mistral AI vs OpenAI

Mistral Small 3.2 24B vs GPT-OSS-120b

A detailed comparison of pricing, benchmarks, and capabilities


Key Takeaways

Mistral Small 3.2 24B wins:

  • Cheaper output tokens
  • Supports vision
  • Supports tool calls

GPT-OSS-120b wins:

  • Cheaper input tokens
  • Faster response time
  • Higher intelligence benchmark
  • Better at coding
  • Better at math
Price Advantage: Mistral Small 3.2 24B
Benchmark Advantage: GPT-OSS-120b
Context Window: Tie (both 131,072 tokens)
Speed: GPT-OSS-120b

Pricing Comparison

| Metric | Mistral Small 3.2 24B | GPT-OSS-120b | Winner |
|---|---|---|---|
| Input (per 1M tokens) | $0.06 | $0.04 | GPT-OSS-120b |
| Output (per 1M tokens) | $0.18 | $0.19 | Mistral Small 3.2 24B |
| Cache Read (per 1M tokens) | $30000.00 | N/A | Mistral Small 3.2 24B |
Using a 3:1 input/output ratio, GPT-OSS-120b is about 14% cheaper overall.
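The blended-price figure above can be reproduced from the per-token prices. A minimal sketch, using the $/1M-token prices from the table and the stated 3:1 input/output ratio:

```python
# Blended price per 1M tokens at a given input:output token ratio.
def blended_price(input_price: float, output_price: float, ratio: float = 3) -> float:
    """Weighted average $/1M tokens when `ratio` input tokens accompany each output token."""
    return (ratio * input_price + output_price) / (ratio + 1)

mistral = blended_price(0.06, 0.18)   # Mistral Small 3.2 24B -> $0.0900 per 1M
gpt_oss = blended_price(0.04, 0.19)   # GPT-OSS-120b          -> $0.0775 per 1M

savings = (mistral - gpt_oss) / mistral
print(f"GPT-OSS-120b is {savings:.0%} cheaper at a 3:1 ratio")  # -> 14% cheaper
```

Note that the winner flips with the workload shape: at ratios below roughly 1:1 (output-heavy jobs), Mistral Small 3.2 24B's cheaper output tokens start to dominate.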

Mistral Small 3.2 24B Providers

Chutes $0.06 (Cheapest)
DeepInfra $0.07
Parasail $0.09
Mistral $0.10

GPT-OSS-120b Providers

Chutes $0.04 (Cheapest)
SiliconFlow $0.05
Novita $0.05
Clarifai $0.09
Google $0.09
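At these per-provider input prices, the spread matters at volume. A sketch of what a fixed input-token budget costs at each GPT-OSS-120b provider listed above (only input-token prices appear on this page, so output costs are omitted):

```python
# Input prices ($/1M tokens) for GPT-OSS-120b, from the provider list above.
providers = {
    "Chutes": 0.04,
    "SiliconFlow": 0.05,
    "Novita": 0.05,
    "Clarifai": 0.09,
    "Google": 0.09,
}

def job_cost(price_per_m: float, tokens: int) -> float:
    """Total cost in dollars for `tokens` input tokens at `price_per_m` $/1M."""
    return price_per_m * tokens / 1_000_000

tokens = 50_000_000  # a hypothetical 50M-input-token workload
for name, price in sorted(providers.items(), key=lambda kv: kv[1]):
    print(f"{name}: ${job_cost(price, tokens):.2f}")
```

At 50M input tokens the cheapest provider (Chutes, $2.00) is less than half the cost of the most expensive listed ($4.50).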

Benchmark Comparison

8 benchmarks compared · 0 Mistral Small 3.2 24B wins · 6 GPT-OSS-120b wins

Benchmark Scores

| Benchmark | Mistral Small 3.2 24B | GPT-OSS-120b | Winner |
|---|---|---|---|
| Intelligence Index (overall intelligence score) | 15.0 | 33.3 | GPT-OSS-120b |
| Coding Index (code generation & understanding) | 13.3 | 28.6 | GPT-OSS-120b |
| Math Index (mathematical reasoning) | 27.0 | 93.4 | GPT-OSS-120b |
| MMLU Pro (academic knowledge) | 68.1 | 80.8 | GPT-OSS-120b |
| GPQA (graduate-level science) | 50.5 | 78.2 | GPT-OSS-120b |
| LiveCodeBench (competitive programming) | 27.5 | 87.8 | GPT-OSS-120b |
| Aider (real-world code editing) | — | 41.8 | — |
| AIME (competition math) | 32.3 | — | — |
GPT-OSS-120b significantly outperforms in coding benchmarks.

Cost vs Quality

[Scatter chart: cost vs. quality, highlighting Mistral Small 3.2 24B against other tracked models]

Context & Performance

Context Window

Mistral Small 3.2 24B: 131,072 tokens (max output: 131,072 tokens)
GPT-OSS-120b: 131,072 tokens

Speed Performance

| Metric | Mistral Small 3.2 24B | GPT-OSS-120b | Winner |
|---|---|---|---|
| Tokens/second | 113.4 tok/s | 311.5 tok/s | GPT-OSS-120b |
| Time to First Token | 0.29s | 0.47s | Mistral Small 3.2 24B |

GPT-OSS-120b generates tokens about 175% faster on average, though Mistral Small 3.2 24B delivers its first token sooner.
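Throughput and time to first token pull in opposite directions here, so end-to-end latency depends on response length. A rough model using the table's averages (real numbers vary by provider and load; this is an illustration, not a guarantee):

```python
# Rough end-to-end latency: time to first token plus generation time.
def latency(ttft_s: float, tok_per_s: float, output_tokens: int) -> float:
    """Seconds until the full response of `output_tokens` tokens is delivered."""
    return ttft_s + output_tokens / tok_per_s

for n in (20, 500):
    m = latency(0.29, 113.4, n)   # Mistral Small 3.2 24B
    g = latency(0.47, 311.5, n)   # GPT-OSS-120b
    print(f"{n} output tokens: Mistral {m:.2f}s, GPT-OSS {g:.2f}s")

# With these figures, Mistral's lower time to first token means it finishes
# first for very short responses (roughly under ~32 output tokens), while
# GPT-OSS-120b's higher throughput dominates for longer ones.
```

For a typical 500-token response this works out to about 4.7s vs 2.1s in GPT-OSS-120b's favor.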

Capabilities

Feature Comparison

| Feature | Mistral Small 3.2 24B | GPT-OSS-120b |
|---|---|---|
| Vision (Image Input) | Yes | No |
| Tool/Function Calls | Yes | — |
| Reasoning Mode | No | Yes |
| Audio Input | No | No |
| Audio Output | No | No |
| PDF Input | — | — |
| Prompt Caching | Yes | — |
| Web Search | — | — |

License & Release

| Property | Mistral Small 3.2 24B | GPT-OSS-120b |
|---|---|---|
| License | Open Source | Open Source |
| Author | Mistral AI | OpenAI |
| Released | Jun 2025 | Aug 2025 |

Mistral Small 3.2 24B Modalities

Input: image, text
Output: text

GPT-OSS-120b Modalities

Input: text
Output: text


Frequently Asked Questions

Which model is cheaper?
GPT-OSS-120b has cheaper input pricing at $0.04/M tokens, while Mistral Small 3.2 24B has cheaper output pricing at $0.18/M tokens.

Which model is better at coding?
GPT-OSS-120b scores higher on coding benchmarks, with a Coding Index of 28.6 versus Mistral Small 3.2 24B's 13.3.

Which model has the larger context window?
Both models have a 131,072-token context window.

Which model supports vision?
Mistral Small 3.2 24B supports vision; GPT-OSS-120b does not.