Price Per Token
Mistral AI vs Nvidia

Mistral Small 3.2 24B vs Nemotron-3 Super 120B A12B

A detailed comparison of pricing, benchmarks, and capabilities



Key Takeaways

Mistral Small 3.2 24B wins:

  • Cheaper input tokens
  • Cheaper output tokens
  • Larger context window
  • Better at math
  • Supports vision
  • Supports tool calls

Nemotron-3 Super 120B A12B wins:

  • Faster response time
  • Higher intelligence benchmark
  • Better at coding

Price Advantage: Mistral Small 3.2 24B
Benchmark Advantage: Nemotron-3 Super 120B A12B
Context Window: Mistral Small 3.2 24B
Speed: Nemotron-3 Super 120B A12B

Pricing Comparison

Benchmark Comparison

Context & Performance

Capabilities

Feature Comparison

Feature | Mistral Small 3.2 24B | Nemotron-3 Super 120B A12B
Vision (Image Input)
Tool/Function Calls
Reasoning Mode
Audio Input
Audio Output
PDF Input
Prompt Caching
Web Search

License & Release

Property | Mistral Small 3.2 24B | Nemotron-3 Super 120B A12B
License | Open Source | Proprietary
Author | Mistral AI | Nvidia
Released | Jun 2025 | Unknown

Mistral Small 3.2 24B Modalities

Input
image, text
Output
text

Nemotron-3 Super 120B A12B Modalities

Input
Output


Frequently Asked Questions

Which model is cheaper?
Mistral Small 3.2 24B has cheaper input pricing at $0.07/M tokens and cheaper output pricing at $0.20/M tokens.

Which model is better at coding?
Nemotron-3 Super 120B A12B scores higher on coding benchmarks, with a score of 31.2 compared to Mistral Small 3.2 24B's 13.3.

Which model has the larger context window?
Mistral Small 3.2 24B has a 131,072-token context window, while Nemotron-3 Super 120B A12B's context window is unknown.

Does either model support vision?
Mistral Small 3.2 24B supports vision; Nemotron-3 Super 120B A12B does not.
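The per-million-token prices above translate directly into per-request costs. A minimal sketch of that arithmetic, using Mistral Small 3.2 24B's listed rates (the token counts in the example are hypothetical):

```python
# Cost estimate from per-million-token prices.
# Rates are Mistral Small 3.2 24B's listed pricing; token counts are hypothetical.
INPUT_PRICE_PER_M = 0.07   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 0.20  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for one request."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# e.g. a 2,000-token prompt with a 500-token reply
cost = estimate_cost(2_000, 500)
print(f"${cost:.6f}")  # $0.000240
```

At these rates, even a million such requests would cost roughly $240, which is why input/output pricing is usually the first comparison point between models.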