Price Per Token

Mistral Medium 3.1 vs Mixtral 8x7B Instruct

A detailed comparison of pricing, benchmarks, and capabilities

Key Takeaways

Mistral Medium 3.1 wins:

  • Cheaper input tokens
  • Larger context window
  • Faster response time
  • Higher intelligence benchmark
  • Better at coding
  • Better at math
  • Supports vision
  • Supports tool calls

Mixtral 8x7B Instruct wins:

  • Cheaper output tokens

Price Advantage: Mistral Medium 3.1
Benchmark Advantage: Mistral Medium 3.1
Context Window: Mistral Medium 3.1
Speed: Mistral Medium 3.1

Pricing Comparison

Metric                   Mistral Medium 3.1   Mixtral 8x7B Instruct   Winner
Input (per 1M tokens)    $0.40                $0.54                   Mistral Medium 3.1
Output (per 1M tokens)   $2.00                $0.54                   Mixtral 8x7B Instruct
Using a 3:1 input/output ratio, Mixtral 8x7B Instruct is 33% cheaper overall.
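
As a worked example, the blended figure can be reproduced in a few lines of Python (prices as listed above; the 3:1 input/output weighting is the page's stated assumption):

    # Blended cost per 1M tokens, weighting input 3:1 over output.
    prices = {
        "Mistral Medium 3.1":    {"input": 0.40, "output": 2.00},
        "Mixtral 8x7B Instruct": {"input": 0.54, "output": 0.54},
    }

    def blended(p, input_weight=3, output_weight=1):
        total = input_weight + output_weight
        return (p["input"] * input_weight + p["output"] * output_weight) / total

    medium = blended(prices["Mistral Medium 3.1"])      # $0.80 per 1M tokens
    mixtral = blended(prices["Mixtral 8x7B Instruct"])  # $0.54 per 1M tokens
    print(f"Mixtral 8x7B Instruct is {1 - mixtral / medium:.1%} cheaper")  # 32.5%, ~33%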

Mistral Medium 3.1 Providers

Mistral $0.40 (Cheapest)

Mixtral 8x7B Instruct Providers

Mistral $0.14 (Cheapest)
Fireworks $0.50
DeepInfra $0.54
Together $0.60
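
Because the same model is priced differently across providers, a short helper can pick the cheapest option; the prices below are the Mixtral 8x7B Instruct provider figures listed above:

    # Per-1M-token prices for Mixtral 8x7B Instruct, by provider (from the list above).
    providers = {"Mistral": 0.14, "Fireworks": 0.50, "DeepInfra": 0.54, "Together": 0.60}
    cheapest = min(providers, key=providers.get)
    print(cheapest, providers[cheapest])  # Mistral 0.14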

Benchmark Comparison

Benchmarks Compared: 7
Mistral Medium 3.1 Wins: 4
Mixtral 8x7B Instruct Wins: 0

Benchmark Scores

Benchmark                                        Mistral Medium 3.1   Mixtral 8x7B Instruct   Winner
Intelligence Index (overall intelligence)        21.1                 7.7                     Mistral Medium 3.1
Coding Index (code generation & understanding)   18.3                 -                       -
Math Index (mathematical reasoning)              38.3                 -                       -
MMLU Pro (academic knowledge)                    68.3                 38.7                    Mistral Medium 3.1
GPQA (graduate-level science)                    58.8                 29.2                    Mistral Medium 3.1
LiveCodeBench (competitive programming)          40.6                 6.6                     Mistral Medium 3.1
AIME (competition math)                          -                    0.0                     -
Mistral Medium 3.1 significantly outperforms in coding benchmarks.
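
The summary counts above (4 wins vs 0) follow from tallying only the benchmarks where both models have a reported score; a minimal sketch, with None marking missing scores:

    # Scores from the table above: (Mistral Medium 3.1, Mixtral 8x7B Instruct).
    scores = {
        "Intelligence Index": (21.1, 7.7),
        "Coding Index":       (18.3, None),
        "Math Index":         (38.3, None),
        "MMLU Pro":           (68.3, 38.7),
        "GPQA":               (58.8, 29.2),
        "LiveCodeBench":      (40.6, 6.6),
        "AIME":               (None, 0.0),
    }
    both = [(m, x) for m, x in scores.values() if m is not None and x is not None]
    medium_wins = sum(m > x for m, x in both)
    mixtral_wins = sum(x > m for m, x in both)
    print(medium_wins, mixtral_wins)  # 4 0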

Cost vs Quality

[Scatter chart: cost vs. quality for Mistral Medium 3.1 against other tracked models]

Context & Performance

Context Window

Mistral Medium 3.1: 131,072 tokens
Mixtral 8x7B Instruct: 32,768 tokens
Max output: 16,384 tokens
Mistral Medium 3.1's context window is 4x as large (131,072 vs. 32,768 tokens).
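
In practice, the gap determines how much text fits in a single request. A rough fit check, using the common ~4 characters per token heuristic (the heuristic is an assumption, not a figure from this page):

    # Rough check of whether a document fits each model's context window.
    CONTEXT = {"Mistral Medium 3.1": 131_072, "Mixtral 8x7B Instruct": 32_768}

    def fits(text: str, window: int, chars_per_token: float = 4.0) -> bool:
        return len(text) / chars_per_token <= window

    doc = "x" * 400_000  # roughly 100k tokens
    for model, window in CONTEXT.items():
        print(model, fits(doc, window))
    # Mistral Medium 3.1 True     (~100k <= 131,072)
    # Mixtral 8x7B Instruct False (~100k > 32,768)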

Speed Performance

Metric                 Mistral Medium 3.1   Mixtral 8x7B Instruct
Tokens/second          97.3 tok/s           no data
Time to First Token    0.36 s               no data
No speed measurements are available for Mixtral 8x7B Instruct, so a percentage comparison is not meaningful.
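
The two speed metrics combine into a simple end-to-end estimate, latency ≈ TTFT + output_tokens / throughput; a sketch using Mistral Medium 3.1's measured figures:

    # Estimated wall-clock time for a reply: time to first token + generation time.
    def estimated_latency(output_tokens: int, tok_per_s: float, ttft_s: float) -> float:
        return ttft_s + output_tokens / tok_per_s

    # 500-token reply at 97.3 tok/s with a 0.36 s time to first token:
    print(f"{estimated_latency(500, 97.3, 0.36):.1f} s")  # ~5.5 s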

Capabilities

Feature Comparison

Feature                Mistral Medium 3.1   Mixtral 8x7B Instruct
Vision (Image Input)   Yes                  No
Tool/Function Calls    Yes                  No
Reasoning Mode         -                    -
Audio Input            No                   No
Audio Output           No                   No
PDF Input              -                    -
Prompt Caching         -                    -
Web Search             -                    -
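
For reference, a minimal sketch of exercising the tool/function-call capability through Mistral's chat completions endpoint; the model identifier and the get_weather tool are illustrative assumptions, not details from this page:

    import os
    import requests

    # Hypothetical tool-call request; model id and tool schema are assumptions.
    resp = requests.post(
        "https://api.mistral.ai/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
        json={
            "model": "mistral-medium-latest",  # assumed alias for Mistral Medium 3.1
            "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
            "tools": [{
                "type": "function",
                "function": {
                    "name": "get_weather",
                    "description": "Get current weather for a city",
                    "parameters": {
                        "type": "object",
                        "properties": {"city": {"type": "string"}},
                        "required": ["city"],
                    },
                },
            }],
        },
        timeout=30,
    )
    print(resp.json()["choices"][0]["message"].get("tool_calls"))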

License & Release

Property   Mistral Medium 3.1   Mixtral 8x7B Instruct
License    Proprietary          Open Source
Author     Mistral AI           Mistral AI
Released   Aug 2025             Dec 2023

Mistral Medium 3.1 Modalities

Input: text, image
Output: text

Mixtral 8x7B Instruct Modalities

Input: text
Output: text

Frequently Asked Questions

Which model has cheaper pricing?
Mistral Medium 3.1 has cheaper input pricing at $0.40/M tokens. Mixtral 8x7B Instruct has cheaper output pricing at $0.54/M tokens.

Which model is better at coding?
Mistral Medium 3.1 scores 18.3 on the Coding Index; Mixtral 8x7B Instruct has no reported coding score.

Which model has the larger context window?
Mistral Medium 3.1 has a 131,072-token context window, while Mixtral 8x7B Instruct has a 32,768-token context window.

Which model supports vision?
Mistral Medium 3.1 supports vision (image input). Mixtral 8x7B Instruct does not.