Price Per Token
Nvidia vs Z-ai

Llama 3.3 Nemotron Super 49B V1.5 vs GLM-5V Turbo

A detailed comparison of pricing, benchmarks, and capabilities


Key Takeaways

Llama 3.3 Nemotron Super 49B V1.5 wins:

  • Cheaper input tokens
  • Cheaper output tokens
  • Faster response time
  • Better at math
  • Has reasoning mode

GLM-5V Turbo wins:

  • Larger context window
  • Higher intelligence benchmark
  • Better at coding
  • Supports vision

Price Advantage: Llama 3.3 Nemotron Super 49B V1.5
Benchmark Advantage: GLM-5V Turbo
Context Window: GLM-5V Turbo
Speed: Llama 3.3 Nemotron Super 49B V1.5

Feature Comparison

Features compared for Llama 3.3 Nemotron Super 49B V1.5 and GLM-5V Turbo:

  • Vision (Image Input): GLM-5V Turbo only
  • Tool/Function Calls
  • Reasoning Mode: Llama 3.3 Nemotron Super 49B V1.5 only
  • Audio Input
  • Audio Output
  • PDF Input
  • Prompt Caching
  • Web Search

License & Release

Property   Llama 3.3 Nemotron Super 49B V1.5   GLM-5V Turbo
License    Proprietary                         Proprietary
Author     Nvidia                              Z-ai
Released   Oct 2025                            Apr 2026

Llama 3.3 Nemotron Super 49B V1.5 Modalities

Input
text
Output
text

GLM-5V Turbo Modalities

Input
image, text, video
Output
text

Frequently Asked Questions

Which model is cheaper?
Llama 3.3 Nemotron Super 49B V1.5 is cheaper on both input ($0.10/M tokens) and output ($0.40/M tokens).
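To see what those per-million-token rates mean for a single request, here is a minimal sketch of the cost arithmetic. The function name and defaults are illustrative, not an official API; the default rates are Llama 3.3 Nemotron Super 49B V1.5's listed prices ($0.10/M input, $0.40/M output).

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_price_per_m: float = 0.10,
                 output_price_per_m: float = 0.40) -> float:
    """Estimate the USD cost of one request at per-million-token rates.

    Defaults use Llama 3.3 Nemotron Super 49B V1.5's listed pricing.
    """
    return (input_tokens / 1_000_000) * input_price_per_m \
         + (output_tokens / 1_000_000) * output_price_per_m

# Example: a request with 2,000 input tokens and 500 output tokens.
cost = request_cost(2_000, 500)
print(f"${cost:.6f}")  # 2,000 * $0.10/M + 500 * $0.40/M = $0.000400
```

Pass GLM-5V Turbo's rates as `input_price_per_m` and `output_price_per_m` to compare the same workload across the two models.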

Which model is better at coding?
GLM-5V Turbo scores higher on coding benchmarks, 36.2 versus 10.5 for Llama 3.3 Nemotron Super 49B V1.5.

Which model has the larger context window?
GLM-5V Turbo, with a 202,752-token context window versus 131,072 tokens for Llama 3.3 Nemotron Super 49B V1.5.
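A practical way to use the context-window figures is to check whether a prompt plus its reserved output budget fits a given model. This is a sketch using the window sizes quoted above; the helper name is my own, not part of either provider's API.

```python
# Context windows quoted in this comparison, in tokens.
CONTEXT_WINDOWS = {
    "Llama 3.3 Nemotron Super 49B V1.5": 131_072,
    "GLM-5V Turbo": 202_752,
}

def fits_in_context(model: str, prompt_tokens: int, max_output_tokens: int) -> bool:
    """Return True if the prompt plus the reserved output budget
    fits inside the model's context window."""
    return prompt_tokens + max_output_tokens <= CONTEXT_WINDOWS[model]

# A 150k-token prompt overflows the Nemotron model but fits GLM-5V Turbo.
print(fits_in_context("Llama 3.3 Nemotron Super 49B V1.5", 150_000, 4_096))  # False
print(fits_in_context("GLM-5V Turbo", 150_000, 4_096))                       # True
```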

Which model supports vision?
Only GLM-5V Turbo supports vision; Llama 3.3 Nemotron Super 49B V1.5 does not.