Price Per Token

Llama 3.3 70B Instruct (Meta-llama) vs MiMo-V2-Flash (Xiaomi)

A detailed comparison of pricing, benchmarks, and capabilities


Key Takeaways

Llama 3.3 70B Instruct wins:

  • Supports tool calls

MiMo-V2-Flash wins:

  • Cheaper input tokens
  • Cheaper output tokens
  • Larger context window
  • Faster generation speed (tokens/sec)
  • Higher intelligence benchmark
  • Better at coding
  • Better at math

Price Advantage: MiMo-V2-Flash
Benchmark Advantage: MiMo-V2-Flash
Context Window: MiMo-V2-Flash
Speed: MiMo-V2-Flash

Pricing Comparison

Metric                      Llama 3.3 70B Instruct   MiMo-V2-Flash   Winner
Input (per 1M tokens)       $0.10                    $0.09           MiMo-V2-Flash
Output (per 1M tokens)      $0.32                    $0.29           MiMo-V2-Flash
Cache Read (per 1M tokens)  N/A                      $45000.00       MiMo-V2-Flash
Using a 3:1 input/output ratio, MiMo-V2-Flash is 10% cheaper overall.
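The blended figure above is simple arithmetic; a minimal sketch, using the per-1M prices quoted in the table and assuming 3 input tokens for every output token:

```python
# Sketch: blended cost per 1M tokens at a 3:1 input/output ratio,
# using the per-1M prices quoted in the pricing table above.

def blended_price(input_per_m: float, output_per_m: float,
                  input_ratio: int = 3, output_ratio: int = 1) -> float:
    """Weighted average price per 1M tokens for the given usage mix."""
    total = input_ratio + output_ratio
    return (input_per_m * input_ratio + output_per_m * output_ratio) / total

llama = blended_price(0.10, 0.32)   # $0.155 per 1M blended tokens
mimo = blended_price(0.09, 0.29)    # $0.140 per 1M blended tokens
savings = 1 - mimo / llama
print(f"Llama: ${llama:.3f}/M, MiMo: ${mimo:.3f}/M, savings: {savings:.0%}")
# → Llama: $0.155/M, MiMo: $0.140/M, savings: 10%
```

Adjusting the ratio toward output-heavy workloads narrows the gap slightly, since the output prices are closer in relative terms.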

Llama 3.3 70B Instruct Providers

DeepInfra $0.10 (Cheapest)
Novita $0.14
Parasail $0.22
Nebius $0.25
Crusoe $0.25

MiMo-V2-Flash Providers

Chutes $0.09 (Cheapest)
AtlasCloud $0.10
Xiaomi $0.10
Novita $0.10

Benchmark Comparison

Benchmarks compared: 8
Llama 3.3 70B Instruct wins: 0
MiMo-V2-Flash wins: 6

Benchmark Scores

Benchmark           Description                       Llama 3.3 70B Instruct   MiMo-V2-Flash   Winner
Intelligence Index  Overall intelligence score        14.2                     30.6            MiMo-V2-Flash
Coding Index        Code generation & understanding   10.7                     25.8            MiMo-V2-Flash
Math Index          Mathematical reasoning            7.7                      67.7            MiMo-V2-Flash
MMLU Pro            Academic knowledge                71.3                     74.4            MiMo-V2-Flash
GPQA                Graduate-level science            49.8                     65.6            MiMo-V2-Flash
LiveCodeBench       Competitive programming           28.8                     40.2            MiMo-V2-Flash
Aider               Real-world code editing           59.4                     --              --
AIME                Competition math                  30.0                     --              --
MiMo-V2-Flash significantly outperforms in coding benchmarks.


Context & Performance

Context Window

Llama 3.3 70B Instruct: 131,072 tokens (max output: 16,384 tokens)
MiMo-V2-Flash: 262,144 tokens

MiMo-V2-Flash's context window is twice as large.
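In practice the window sizes matter when deciding whether a prompt plus a reserved output budget will fit. A minimal sketch, using a crude ~4-characters-per-token heuristic (a real deployment would count tokens with the model's own tokenizer; the model names here are illustrative keys, not API identifiers):

```python
# Sketch: check whether a prompt fits a model's context window.
# Token counts use a rough ~4-characters-per-token heuristic; a real
# deployment would use the model's actual tokenizer.
CONTEXT_WINDOWS = {
    "llama-3.3-70b-instruct": 131_072,
    "mimo-v2-flash": 262_144,
}

def estimate_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token."""
    return max(1, len(text) // 4)

def fits_context(model: str, prompt: str, max_output_tokens: int = 0) -> bool:
    """True if the prompt plus a reserved output budget fits the window."""
    window = CONTEXT_WINDOWS[model]
    return estimate_tokens(prompt) + max_output_tokens <= window

prompt = "word " * 100_000  # ~125k estimated tokens
print(fits_context("llama-3.3-70b-instruct", prompt, max_output_tokens=16_384))  # → False
print(fits_context("mimo-v2-flash", prompt, max_output_tokens=16_384))           # → True
```

The same prompt that overflows Llama 3.3's window once an output budget is reserved still fits comfortably in MiMo-V2-Flash's larger window.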

Speed Performance

Metric               Llama 3.3 70B Instruct   MiMo-V2-Flash   Winner
Tokens/second        104.4 tok/s              142.6 tok/s     MiMo-V2-Flash
Time to First Token  0.49s                    1.25s           Llama 3.3 70B Instruct

MiMo-V2-Flash generates tokens about 37% faster, though Llama 3.3 70B Instruct delivers its first token sooner.

Capabilities

Feature Comparison

Feature               Llama 3.3 70B Instruct   MiMo-V2-Flash
Vision (Image Input)  No                       No
Tool/Function Calls   Yes                      No
Reasoning Mode        --                       --
Audio Input           No                       No
Audio Output          No                       No
PDF Input             No                       No
Prompt Caching        No                       Yes
Web Search            --                       --

License & Release

Property   Llama 3.3 70B Instruct   MiMo-V2-Flash
License    Open Source              Open Source
Author     Meta-llama               Xiaomi
Released   Dec 2024                 Dec 2025

Llama 3.3 70B Instruct Modalities

Input: text
Output: text

MiMo-V2-Flash Modalities

Input: text
Output: text


Frequently Asked Questions

Which model is cheaper?
MiMo-V2-Flash has cheaper input pricing at $0.09/M tokens and cheaper output pricing at $0.29/M tokens.

Which model is better at coding?
MiMo-V2-Flash scores higher on coding benchmarks with a score of 25.8, compared to Llama 3.3 70B Instruct's score of 10.7.

Which model has a larger context window?
Llama 3.3 70B Instruct has a 131,072 token context window, while MiMo-V2-Flash has a 262,144 token context window.

Do these models support vision?
No. Neither Llama 3.3 70B Instruct nor MiMo-V2-Flash supports image input.