Price Per Token
Mistral AI vs Qwen

Mistral Small 3.1 24B vs Qwen2.5 7B Instruct

A detailed comparison of pricing, benchmarks, and capabilities

Key Takeaways

Mistral Small 3.1 24B wins:

  • Cheaper input tokens
  • Larger context window
  • Faster response time
  • Better at coding
  • Better at math
  • Supports vision

Qwen2.5 7B Instruct wins:

  • Cheaper output tokens
  • Higher intelligence benchmark

  • Price Advantage: Mistral Small 3.1 24B
  • Benchmark Advantage: Mistral Small 3.1 24B
  • Context Window: Mistral Small 3.1 24B
  • Speed: Mistral Small 3.1 24B

Pricing Comparison

Price Comparison

Metric | Mistral Small 3.1 24B | Qwen2.5 7B Instruct | Winner
Input (per 1M tokens) | $0.03 | $0.04 | Mistral Small 3.1 24B
Output (per 1M tokens) | $0.11 | $0.10 | Qwen2.5 7B Instruct
Cache Read (per 1M tokens) | $15000.00 | N/A | Mistral Small 3.1 24B
Using a 3:1 input/output ratio, Mistral Small 3.1 24B is 9% cheaper overall.
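
As a quick check on the blended figure, the arithmetic can be reproduced in a few lines of Python (a minimal sketch using the prices in the table above; the 3:1 input-to-output ratio is the assumption stated in the text):

```python
# Blended cost per 1M tokens at a 3:1 input/output ratio.
# Prices taken from the table above (USD per 1M tokens).
PRICES = {
    "Mistral Small 3.1 24B": {"input": 0.03, "output": 0.11},
    "Qwen2.5 7B Instruct": {"input": 0.04, "output": 0.10},
}

def blended_cost(input_price: float, output_price: float, ratio: float = 3.0) -> float:
    """Weighted average price assuming `ratio` input tokens per output token."""
    return (ratio * input_price + output_price) / (ratio + 1)

for model, p in PRICES.items():
    print(f"{model}: ${blended_cost(p['input'], p['output']):.4f} per 1M tokens (blended)")

# Mistral: (3*0.03 + 0.11)/4 = 0.0500; Qwen: (3*0.04 + 0.10)/4 = 0.0550
# 0.0500 is roughly 9% cheaper than 0.0550, matching the figure above.
```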

Mistral Small 3.1 24B Providers

Chutes $0.03 (Cheapest)
Cloudflare $0.35

Qwen2.5 7B Instruct Providers

Phala $0.04 (Cheapest)
AtlasCloud $0.04 (Cheapest)
Together $0.30

Benchmark Comparison

Benchmarks compared: 8 · Mistral Small 3.1 24B wins: 2 · Qwen2.5 7B Instruct wins: 1

Benchmark Scores

Benchmark | Mistral Small 3.1 24B | Qwen2.5 7B Instruct | Winner
Intelligence Index (overall intelligence score) | 14.0 | 35.2 | Qwen2.5 7B Instruct
Coding Index (code generation & understanding) | 13.9 | - | -
Math Index (mathematical reasoning) | 3.7 | - | -
MMLU Pro (academic knowledge) | 65.9 | 36.5 | Mistral Small 3.1 24B
GPQA (graduate-level science) | 45.4 | 5.5 | Mistral Small 3.1 24B
LiveCodeBench (competitive programming) | 21.2 | - | -
AIME (competition math) | 9.3 | - | -
BBH (Big-Bench Hard) | - | 34.9 | -
Mistral Small 3.1 24B leads on the coding benchmarks reported here (Coding Index 13.9, LiveCodeBench 21.2); no coding scores are listed for Qwen2.5 7B Instruct.
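
For readers who want to verify the win tally above, a short sketch over the scores in the table (assuming higher is better and skipping benchmarks where one model has no reported score) reproduces the 2-to-1 count:

```python
# Head-to-head tally from the benchmark table above.
# Only benchmarks where both models report a score are counted.
scores = {
    "Intelligence Index": (14.0, 35.2),
    "Coding Index":       (13.9, None),
    "Math Index":         (3.7, None),
    "MMLU Pro":           (65.9, 36.5),
    "GPQA":               (45.4, 5.5),
    "LiveCodeBench":      (21.2, None),
    "AIME":               (9.3, None),
    "BBH":                (None, 34.9),
}

mistral_wins = sum(1 for m, q in scores.values() if m is not None and q is not None and m > q)
qwen_wins = sum(1 for m, q in scores.values() if m is not None and q is not None and q > m)
print(f"Mistral Small 3.1 24B wins: {mistral_wins}, Qwen2.5 7B Instruct wins: {qwen_wins}")
# -> Mistral Small 3.1 24B wins: 2, Qwen2.5 7B Instruct wins: 1
```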

Cost vs Quality

(Interactive scatter chart: cost plotted against quality, with Mistral Small 3.1 24B highlighted among other tracked models.)

Context & Performance

Context Window

Mistral Small 3.1 24B: 131,072 tokens (max output: 131,072 tokens)
Qwen2.5 7B Instruct: 32,768 tokens
Mistral Small 3.1 24B's context window is 4x as large as Qwen2.5 7B Instruct's (the Qwen window is 75% smaller).
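
The difference is easiest to see numerically. The sketch below compares the two windows and estimates whether a long prompt fits; the 4-characters-per-token heuristic is an assumption, not an exact tokenizer:

```python
# Context-window comparison and a rough "does my prompt fit" check.
# Token limits are the published figures from the section above.
CONTEXT = {
    "Mistral Small 3.1 24B": 131_072,
    "Qwen2.5 7B Instruct": 32_768,
}

ratio = CONTEXT["Mistral Small 3.1 24B"] / CONTEXT["Qwen2.5 7B Instruct"]
print(f"Mistral's window is {ratio:.0f}x larger")  # -> 4x

def fits(prompt_chars: int, model: str, chars_per_token: float = 4.0) -> bool:
    """Rough estimate of whether a prompt fits in the model's context window."""
    return prompt_chars / chars_per_token <= CONTEXT[model]

print(fits(200_000, "Qwen2.5 7B Instruct"))    # ~50k tokens -> False
print(fits(200_000, "Mistral Small 3.1 24B"))  # ~50k tokens -> True
```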

Speed Performance

Metric | Mistral Small 3.1 24B | Qwen2.5 7B Instruct | Winner
Tokens/second | 104.2 tok/s | N/A | -
Time to First Token | 0.29s | N/A | -
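
No throughput figures are reported for Qwen2.5 7B Instruct, but Mistral Small 3.1 24B's numbers translate into rough response-time estimates. This is a sketch only; real latency varies by provider and load:

```python
# Rough end-to-end latency estimate for Mistral Small 3.1 24B,
# using the throughput and time-to-first-token figures above.
TTFT_S = 0.29        # time to first token, seconds
THROUGHPUT = 104.2   # tokens per second

def estimated_latency(output_tokens: int) -> float:
    """Approximate wall-clock time to stream a response of the given length."""
    return TTFT_S + output_tokens / THROUGHPUT

for n in (100, 500, 2000):
    print(f"{n:>5} output tokens: ~{estimated_latency(n):.1f}s")
# ->   100 output tokens: ~1.2s
#      500 output tokens: ~5.1s
#     2000 output tokens: ~19.5s
```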

Capabilities

Feature Comparison

Features compared: Vision (Image Input), Tool/Function Calls, Reasoning Mode, Audio Input, Audio Output, PDF Input, Prompt Caching, Web Search.

Mistral Small 3.1 24B supports vision (image input) and lists prompt-caching pricing; Qwen2.5 7B Instruct accepts text only and has no cache pricing listed.
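
Since vision is the headline capability difference, here is a minimal sketch of sending an image to Mistral Small 3.1 24B through an OpenAI-compatible chat endpoint. The base URL, model slug, and environment variable are placeholders; check your provider's documentation for the real values:

```python
# Minimal sketch: image + text prompt to Mistral Small 3.1 24B via an
# OpenAI-compatible endpoint. Base URL and model slug are placeholders.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://example-provider.com/v1",  # hypothetical provider endpoint
    api_key=os.environ["PROVIDER_API_KEY"],      # hypothetical env var
)

response = client.chat.completions.create(
    model="mistral-small-3.1-24b-instruct",      # slug varies by provider
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is shown in this chart?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/chart.png"}},
        ],
    }],
)
print(response.choices[0].message.content)
```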

License & Release

Property | Mistral Small 3.1 24B | Qwen2.5 7B Instruct
License | Open Source | Open Source
Author | Mistral AI | Qwen
Released | Mar 2025 | Oct 2024

Mistral Small 3.1 24B Modalities

Input: text, image
Output: text

Qwen2.5 7B Instruct Modalities

Input: text
Output: text

Frequently Asked Questions

Which model is cheaper? Mistral Small 3.1 24B has cheaper input pricing at $0.03/M tokens; Qwen2.5 7B Instruct has cheaper output pricing at $0.10/M tokens.
Which model is better at coding? Mistral Small 3.1 24B scores 13.9 on the Coding Index; Qwen2.5 7B Instruct has no reported coding score.
Which model has the larger context window? Mistral Small 3.1 24B has a 131,072-token context window, while Qwen2.5 7B Instruct has a 32,768-token context window.
Which model supports vision? Mistral Small 3.1 24B supports vision (image input); Qwen2.5 7B Instruct does not.