Price Per Token
Mistral AI vs Sao10k

Mistral Small 3.1 24B vs Llama 3 8B Lunaris

A detailed comparison of pricing, benchmarks, and capabilities

Key Takeaways

Mistral Small 3.1 24B wins:

  • Cheaper input tokens
  • Larger context window
  • Faster response time
  • Better at coding
  • Better at math
  • Supports vision
  • Supports tool calls

Llama 3 8B Lunaris wins:

  • Cheaper output tokens
  • Higher intelligence benchmark

Price Advantage: Mistral Small 3.1 24B
Benchmark Advantage: Mistral Small 3.1 24B
Context Window: Mistral Small 3.1 24B
Speed: Mistral Small 3.1 24B

Pricing Comparison

Price Comparison

Metric | Mistral Small 3.1 24B | Llama 3 8B Lunaris | Winner
Input (per 1M tokens) | $0.03 | $0.04 | Mistral Small 3.1 24B
Output (per 1M tokens) | $0.11 | $0.05 | Llama 3 8B Lunaris
Cache Read (per 1M tokens) | $15000.00 | N/A | Mistral Small 3.1 24B
Using a 3:1 input/output ratio, Llama 3 8B Lunaris is 15% cheaper overall.
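
The 15% figure follows directly from the table prices and the page's stated 3:1 input/output weighting. A quick back-of-the-envelope check (the helper function below is purely illustrative):

```python
# Blended price per 1M tokens under the page's 3:1 input:output token assumption.
def blended_price(input_per_m: float, output_per_m: float, ratio: float = 3.0) -> float:
    """Weighted average $/1M tokens given `ratio` input tokens per output token."""
    return (ratio * input_per_m + output_per_m) / (ratio + 1)

mistral = blended_price(0.03, 0.11)   # (3*0.03 + 0.11) / 4 = $0.0500 per 1M
lunaris = blended_price(0.04, 0.05)   # (3*0.04 + 0.05) / 4 = $0.0425 per 1M

print(f"Mistral Small 3.1 24B blended: ${mistral:.4f}/1M")
print(f"Llama 3 8B Lunaris blended:    ${lunaris:.4f}/1M")
print(f"Lunaris is {1 - lunaris / mistral:.0%} cheaper overall")  # -> 15%
```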

Mistral Small 3.1 24B Providers (price per 1M input tokens)

  • Chutes: $0.03 (cheapest)
  • Cloudflare: $0.35

Llama 3 8B Lunaris Providers (price per 1M input tokens)

  • DeepInfra: $0.04 (cheapest)
  • Novita: $0.05

Benchmark Comparison

Benchmarks compared: 8
Mistral Small 3.1 24B wins: 2
Llama 3 8B Lunaris wins: 1

Benchmark Scores

Benchmark | Description | Mistral Small 3.1 24B | Llama 3 8B Lunaris | Winner
Intelligence Index | Overall intelligence score | 14.0 | 25.6 | Llama 3 8B Lunaris
Coding Index | Code generation & understanding | 13.9 | -- | --
Math Index | Mathematical reasoning | 3.7 | -- | --
MMLU Pro | Academic knowledge | 65.9 | 31.0 | Mistral Small 3.1 24B
GPQA | Graduate-level science | 45.4 | 6.8 | Mistral Small 3.1 24B
LiveCodeBench | Competitive programming | 21.2 | -- | --
AIME | Competition math | 9.3 | -- | --
BBH | Big-Bench Hard | -- | 32.1 | --
Mistral Small 3.1 24B is the only model of the two with coding results (Coding Index 13.9, LiveCodeBench 21.2); Llama 3 8B Lunaris has no reported coding scores.

Cost vs Quality

[Chart: cost vs. quality scatter plot of Mistral Small 3.1 24B against other tracked models.]

Context & Performance

Context Window

Mistral Small 3.1 24B: 131,072 tokens (max output: 131,072 tokens)
Llama 3 8B Lunaris: 8,192 tokens

Mistral Small 3.1 24B's context window is 16x larger; equivalently, Llama 3 8B Lunaris's window is about 94% smaller.
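
The two figures above are different views of the same ratio; a two-line check:

```python
# Context window sizes from the comparison above.
mistral_ctx, lunaris_ctx = 131_072, 8_192

print(mistral_ctx / lunaris_ctx)      # 16.0   -> Mistral's window is 16x larger
print(1 - lunaris_ctx / mistral_ctx)  # 0.9375 -> Lunaris's window is ~94% smaller
```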

Speed Performance

Metric | Mistral Small 3.1 24B | Llama 3 8B Lunaris | Winner
Tokens/second | 104.2 tok/s | N/A | Mistral Small 3.1 24B
Time to First Token | 0.29 s | N/A | Mistral Small 3.1 24B
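
As a rough, illustrative model (our own simplification, not something measured on this page), single-stream response time is approximately time-to-first-token plus output length divided by throughput. Plugging in the Mistral Small 3.1 24B figures above:

```python
# Rough single-request latency estimate from the reported speed metrics:
# 0.29 s time to first token, 104.2 output tokens/second (Mistral Small 3.1 24B).
def estimated_latency_s(output_tokens: int, ttft_s: float = 0.29, tok_per_s: float = 104.2) -> float:
    """Approximate wall-clock seconds to stream `output_tokens` tokens."""
    return ttft_s + output_tokens / tok_per_s

for n in (128, 512, 2048):
    print(f"{n:>4} output tokens -> ~{estimated_latency_s(n):.1f} s")
# 128 -> ~1.5 s, 512 -> ~5.2 s, 2048 -> ~19.9 s
```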

Capabilities

Feature Comparison

Feature | Mistral Small 3.1 24B | Llama 3 8B Lunaris
Vision (Image Input) | Yes | No
Tool/Function Calls | Yes | No
Reasoning Mode | No | No
Audio Input | No | No
Audio Output | No | No
PDF Input | -- | --
Prompt Caching | Yes | --
Web Search | -- | --
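
Since Mistral Small 3.1 24B accepts image input while Llama 3 8B Lunaris is text-only, a multimodal request only works against the former. Below is a minimal sketch using the OpenAI-compatible chat completions format; the endpoint URL, environment-variable name, model slug, and image URL are illustrative assumptions rather than values from this page, so substitute your provider's actual details.

```python
# Minimal sketch of a text + image request to Mistral Small 3.1 24B via an
# OpenAI-compatible chat completions API. The base_url, env var, model slug,
# and image URL are assumptions for illustration only.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",            # assumed provider endpoint
    api_key=os.environ["OPENROUTER_API_KEY"],            # assumed env var name
)

response = client.chat.completions.create(
    model="mistralai/mistral-small-3.1-24b-instruct",    # assumed model slug
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image in one sentence."},
            {"type": "image_url", "image_url": {"url": "https://example.com/photo.png"}},
        ],
    }],
)
print(response.choices[0].message.content)
```

The same request against Llama 3 8B Lunaris would have to drop the image part, since that model accepts text input only.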

License & Release

Property | Mistral Small 3.1 24B | Llama 3 8B Lunaris
License | Open Source | Open Source
Author | Mistral AI | Sao10k
Released | Mar 2025 | Aug 2024

Mistral Small 3.1 24B Modalities

Input: text, image
Output: text

Llama 3 8B Lunaris Modalities

Input: text
Output: text

Frequently Asked Questions

Which model is cheaper?
Mistral Small 3.1 24B has cheaper input pricing at $0.03/M tokens, while Llama 3 8B Lunaris has cheaper output pricing at $0.05/M tokens.

Which model is better at coding?
Mistral Small 3.1 24B scores 13.9 on the Coding Index; Llama 3 8B Lunaris has no reported coding score.

Which model has the larger context window?
Mistral Small 3.1 24B has a 131,072-token context window, while Llama 3 8B Lunaris has an 8,192-token context window.

Which model supports vision?
Mistral Small 3.1 24B supports vision (image input); Llama 3 8B Lunaris does not.