Price Per Token
Meta-llama vs OpenAI

Llama 4 Maverick vs GPT-5.2-Codex

A detailed comparison of pricing, benchmarks, and capabilities


Key Takeaways

Llama 4 Maverick wins:

  • Cheaper input tokens
  • Cheaper output tokens
  • Larger context window
  • Faster response time
  • Better at math

GPT-5.2-Codex wins:

  • Higher intelligence benchmark
  • Better at coding
  • Has reasoning mode
  • Price Advantage: Llama 4 Maverick
  • Benchmark Advantage: GPT-5.2-Codex
  • Context Window: Llama 4 Maverick
  • Speed: Llama 4 Maverick


Feature Comparison

Feature | Llama 4 Maverick | GPT-5.2-Codex
Vision (Image Input) | Yes | Yes
Tool/Function Calls
Reasoning Mode | No | Yes
Audio Input | No | No
Audio Output | No | No
PDF Input
Prompt Caching
Web Search

License & Release

Property | Llama 4 Maverick | GPT-5.2-Codex
License | Open Source | Proprietary
Author | Meta-llama | OpenAI
Released | Apr 2025 | Jan 2026

Llama 4 Maverick Modalities

Input
text, image
Output
text

GPT-5.2-Codex Modalities

Input
text, image
Output
text

Frequently Asked Questions

Which model is cheaper?
Llama 4 Maverick is cheaper on both sides: $0.15/M input tokens and $0.60/M output tokens.

Which model is better at coding?
GPT-5.2-Codex scores higher on coding benchmarks: 43.0 versus Llama 4 Maverick's 15.6.

Which model has the larger context window?
Llama 4 Maverick has a 1,048,576-token context window, while GPT-5.2-Codex has a 400,000-token context window.

Do both models support vision?
Yes. Both Llama 4 Maverick and GPT-5.2-Codex accept image input.
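As a quick sanity check on the pricing figures above, per-request cost at Llama 4 Maverick's listed rates ($0.15/M input, $0.60/M output) can be estimated with a few lines of arithmetic. This is a minimal sketch; the function name and the example token counts are illustrative, not part of any provider SDK.

```python
# USD per 1M tokens, taken from this comparison's listed Llama 4 Maverick rates.
LLAMA4_MAVERICK_RATES = {"input": 0.15, "output": 0.60}

def request_cost(input_tokens: int, output_tokens: int,
                 rates: dict = LLAMA4_MAVERICK_RATES) -> float:
    """Return the estimated USD cost of a single request."""
    return (input_tokens * rates["input"]
            + output_tokens * rates["output"]) / 1_000_000

# Example: a 10,000-token prompt that produces a 1,000-token completion.
cost = request_cost(10_000, 1_000)
print(f"${cost:.6f}")  # $0.002100
```

At these rates, even a fairly large prompt costs a fraction of a cent, which is why the input/output split matters more than either number alone for chat-style workloads.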