Price Per Token

Mistral 7B Instruct vs Mistral Medium 3.1

A detailed comparison of pricing, benchmarks, and capabilities


Key Takeaways

Mistral 7B Instruct wins:

  • Cheaper input tokens
  • Cheaper output tokens
  • Faster response time

Mistral Medium 3.1 wins:

  • Larger context window
  • Higher intelligence benchmark
  • Better at coding
  • Better at math
  • Supports vision
  • Supports tool calls
At a glance:

  • Price advantage: Mistral 7B Instruct
  • Benchmark advantage: Mistral Medium 3.1
  • Context window: Mistral Medium 3.1
  • Speed: Mistral 7B Instruct

Pricing Comparison

Metric                   Mistral 7B Instruct   Mistral Medium 3.1   Winner
Input (per 1M tokens)    $0.20                 $0.40                Mistral 7B Instruct
Output (per 1M tokens)   $0.20                 $2.00                Mistral 7B Instruct
Using a 3:1 input/output ratio, Mistral 7B Instruct is 75% cheaper overall.
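The 75% figure follows from a weighted average of input and output prices. A minimal sketch of that arithmetic (the function name is illustrative, not from the site):

```python
# Blended price per 1M tokens at a given input/output token ratio.
def blended_price(input_price, output_price, input_ratio=3, output_ratio=1):
    """Weighted average of per-1M-token prices for a given traffic mix."""
    total = input_ratio + output_ratio
    return (input_price * input_ratio + output_price * output_ratio) / total

mistral_7b = blended_price(0.20, 0.20)   # $0.20 per 1M tokens
medium_31 = blended_price(0.40, 2.00)    # $0.80 per 1M tokens

savings = 1 - mistral_7b / medium_31     # 0.75 -> 75% cheaper
print(f"{savings:.0%}")                  # 75%
```

Changing the ratio shifts the result: output-heavy workloads favor Mistral 7B Instruct even more, since its output price is 10x lower.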

Mistral 7B Instruct Providers

  • Novita: $0.06 (cheapest)
  • Mistral: $0.14
  • Together: $0.20

Mistral Medium 3.1 Providers

  • Mistral: $0.40 (cheapest)

Benchmark Comparison

  • Benchmarks compared: 7
  • Mistral 7B Instruct wins: 0
  • Mistral Medium 3.1 wins: 4 (the remaining three benchmarks lack a score for one of the models)

Benchmark Scores

Benchmark                                        Mistral 7B Instruct   Mistral Medium 3.1   Winner
Intelligence Index (overall intelligence)        7.4                   21.1                 Mistral Medium 3.1
Coding Index (code generation & understanding)   -                     18.3                 -
Math Index (mathematical reasoning)              -                     38.3                 -
MMLU Pro (academic knowledge)                    24.5                  68.3                 Mistral Medium 3.1
GPQA (graduate-level science)                    17.7                  58.8                 Mistral Medium 3.1
LiveCodeBench (competitive programming)          4.6                   40.6                 Mistral Medium 3.1
AIME (competition math)                          0.0                   -                    -
Mistral Medium 3.1 leads on every benchmark where both models have scores, with the widest margin in coding (40.6 vs 4.6 on LiveCodeBench).

Cost vs Quality

[Interactive scatter chart: cost vs. quality, highlighting Mistral 7B Instruct among other tracked models]

Context & Performance

Context Window

  • Mistral 7B Instruct: 32,768 tokens (max output: 4,096 tokens)
  • Mistral Medium 3.1: 131,072 tokens

Mistral Medium 3.1's context window is 4x larger (131,072 vs 32,768 tokens).
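A quick sketch of the comparison using the token counts above:

```python
# Context window sizes quoted above (tokens).
ctx_7b = 32_768       # Mistral 7B Instruct
ctx_medium = 131_072  # Mistral Medium 3.1

ratio = ctx_medium / ctx_7b  # 4.0 -> Medium 3.1 holds 4x the tokens
increase = ratio - 1         # 3.0 -> a 300% increase over 7B Instruct
print(f"{ratio:.0f}x the capacity ({increase:.0%} larger)")
```

In practice, the larger window means Mistral Medium 3.1 can take roughly 100k more tokens of documents or conversation history per request.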

Speed Performance

Metric                  Mistral 7B Instruct   Mistral Medium 3.1   Winner
Tokens/second           128.7 tok/s           97.3 tok/s           Mistral 7B Instruct
Time to First Token     0.29 s                0.36 s               Mistral 7B Instruct

Mistral 7B Instruct generates tokens about 32% faster (128.7 vs 97.3 tok/s) and reaches its first token sooner (0.29 s vs 0.36 s).
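These two metrics combine into an end-to-end latency estimate. A sketch using the figures above and a hypothetical 500-token completion (the completion length is an assumption, not from the site):

```python
# End-to-end response time: time to first token + generation time.
def response_time(ttft_s, tokens_per_s, output_tokens=500):
    """Estimated seconds to stream a completion of output_tokens."""
    return ttft_s + output_tokens / tokens_per_s

t_7b = response_time(0.29, 128.7)      # ~4.2 s
t_medium = response_time(0.36, 97.3)   # ~5.5 s
print(f"7B: {t_7b:.1f}s, Medium 3.1: {t_medium:.1f}s")
```

For short completions the time-to-first-token difference dominates; for long ones, the throughput gap does.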

Capabilities

Feature Comparison

Mistral Medium 3.1 supports vision (image input) and tool/function calls; Mistral 7B Instruct supports neither. The comparison also covers reasoning mode, audio input, audio output, PDF input, prompt caching, and web search.

License & Release

Property    Mistral 7B Instruct   Mistral Medium 3.1
License     Open Source           Proprietary
Author      Mistral AI            Mistral AI
Released    May 2024              Aug 2025

Mistral 7B Instruct Modalities

  • Input: text
  • Output: text

Mistral Medium 3.1 Modalities

  • Input: text, image
  • Output: text


Frequently Asked Questions

Which model is cheaper?
Mistral 7B Instruct is cheaper on both input ($0.20/M vs $0.40/M tokens) and output ($0.20/M vs $2.00/M tokens).

Which model is better at coding?
Mistral Medium 3.1 scores 18.3 on the Coding Index; Mistral 7B Instruct has no published score.

Which model has the larger context window?
Mistral Medium 3.1, at 131,072 tokens versus Mistral 7B Instruct's 32,768 tokens.

Which model supports vision?
Mistral Medium 3.1 accepts image input; Mistral 7B Instruct is text-only.