Price Per Token
OpenAI vs Qwen

GPT-OSS-120b vs Qwen2.5 Coder 32B Instruct

A detailed comparison of pricing, benchmarks, and capabilities


Key Takeaways

GPT-OSS-120b wins:

  • Larger context window
  • Faster response time
  • Higher intelligence benchmark
  • Better at coding
  • Better at math

Qwen2.5 Coder 32B Instruct wins:

  • Cheaper input tokens
  • Cheaper output tokens
  • Supports tool calls

At a glance:

  • Price Advantage: Qwen2.5 Coder 32B Instruct
  • Benchmark Advantage: GPT-OSS-120b
  • Context Window: GPT-OSS-120b
  • Speed: GPT-OSS-120b

Pricing Comparison

Metric | GPT-OSS-120b | Qwen2.5 Coder 32B Instruct | Winner
Input (per 1M tokens) | $0.04 | $0.03 | Qwen2.5 Coder 32B Instruct
Output (per 1M tokens) | $0.19 | $0.11 | Qwen2.5 Coder 32B Instruct
Cache Read (per 1M) | N/A | $15000.00 | Qwen2.5 Coder 32B Instruct
Using a 3:1 input/output ratio, Qwen2.5 Coder 32B Instruct is 35% cheaper overall.
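To make the blended figure concrete, here is a minimal sketch of that 3:1 weighting in Python, using the per-million-token prices from the table above; the helper function is purely illustrative.

```python
# Blended price per 1M tokens, assuming a 3:1 input/output token ratio
# (prices taken from the comparison table above).

def blended_price(input_price: float, output_price: float,
                  input_weight: float = 3, output_weight: float = 1) -> float:
    """Weighted-average price per 1M tokens for the given input/output mix."""
    return (input_price * input_weight + output_price * output_weight) / (input_weight + output_weight)

gpt_oss = blended_price(0.04, 0.19)   # (3 * 0.04 + 0.19) / 4 = 0.0775
qwen    = blended_price(0.03, 0.11)   # (3 * 0.03 + 0.11) / 4 = 0.0500

savings = 1 - qwen / gpt_oss          # ≈ 0.355, i.e. ~35% cheaper overall
print(f"GPT-OSS-120b blended:        ${gpt_oss:.4f} per 1M tokens")
print(f"Qwen2.5 Coder 32B blended:   ${qwen:.4f} per 1M tokens")
print(f"Qwen2.5 Coder 32B is {savings:.0%} cheaper overall")
```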

GPT-OSS-120b Providers

Chutes $0.04 (Cheapest)
SiliconFlow $0.05
Novita $0.05
Clarifai $0.09
Google $0.09

Qwen2.5 Coder 32B Instruct Providers

Chutes $0.03 (Cheapest)
Hyperbolic $0.20
Cloudflare $0.66

Benchmark Comparison

Benchmarks compared: 8 · GPT-OSS-120b wins: 4 · Qwen2.5 Coder 32B Instruct wins: 1

Benchmark Scores

Benchmark | GPT-OSS-120b | Qwen2.5 Coder 32B Instruct | Winner
Intelligence Index (overall intelligence score) | 33.3 | 12.9 | GPT-OSS-120b
Coding Index (code generation & understanding) | 28.6 | – | –
Math Index (mathematical reasoning) | 93.4 | – | –
MMLU Pro (academic knowledge) | 80.8 | 63.5 | GPT-OSS-120b
GPQA (graduate-level science) | 78.2 | 41.7 | GPT-OSS-120b
LiveCodeBench (competitive programming) | 87.8 | 29.5 | GPT-OSS-120b
Aider (real-world code editing) | 41.8 | 72.9 | Qwen2.5 Coder 32B Instruct
AIME (competition math) | – | 12.0 | –
GPT-OSS-120b outperforms on most coding benchmarks, most notably LiveCodeBench (87.8 vs 29.5), though Qwen2.5 Coder 32B Instruct scores higher on Aider code editing (72.9 vs 41.8).

Cost vs Quality

Chart: price plotted against benchmark quality, highlighting GPT-OSS-120b among the other tracked models.

Context & Performance

Context Window

GPT-OSS-120b: 131,072 tokens
Qwen2.5 Coder 32B Instruct: 32,768 tokens
Max output: 32,768 tokens
GPT-OSS-120b's context window is four times larger (131,072 vs 32,768 tokens).
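As a rough illustration of what the difference means in practice, here is a sketch that checks whether a long prompt fits each window; the four-characters-per-token heuristic and the reserved output budget are assumptions, not exact tokenizer counts.

```python
# Rough check of whether a long prompt fits each model's context window.
# Assumes ~4 characters per token (a common rule of thumb, not an exact tokenizer).
CONTEXT_WINDOWS = {
    "GPT-OSS-120b": 131_072,
    "Qwen2.5 Coder 32B Instruct": 32_768,
}

def fits(prompt: str, window_tokens: int, reserved_for_output: int = 4_096) -> bool:
    """True if the estimated prompt tokens plus reserved output tokens fit the window."""
    estimated_prompt_tokens = len(prompt) // 4
    return estimated_prompt_tokens + reserved_for_output <= window_tokens

long_prompt = "x" * 150_000  # ~150k characters, roughly 37.5k tokens
for model, window in CONTEXT_WINDOWS.items():
    print(f"{model}: fits = {fits(long_prompt, window)}")
# GPT-OSS-120b: fits = True
# Qwen2.5 Coder 32B Instruct: fits = False
```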

Speed Performance

Metric | GPT-OSS-120b | Qwen2.5 Coder 32B Instruct | Winner
Tokens/second | 311.5 tok/s | 40.4 tok/s | GPT-OSS-120b
Time to First Token | 0.47s | 0.48s | GPT-OSS-120b
GPT-OSS-120b generates output roughly 671% faster (about 7.7× the throughput), while time to first token is nearly identical.
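To see what the throughput gap means in wall-clock terms, here is a small sketch using the measured figures from the table; the 1,000-token response length is an arbitrary example, not a measurement from the source.

```python
# Estimated wall-clock time for a response, using the throughput and
# time-to-first-token figures from the speed table above.
SPEED = {
    "GPT-OSS-120b":               {"tok_per_s": 311.5, "ttft_s": 0.47},
    "Qwen2.5 Coder 32B Instruct": {"tok_per_s": 40.4,  "ttft_s": 0.48},
}

def response_time(model: str, output_tokens: int = 1_000) -> float:
    """Time to first token plus generation time for the requested output length."""
    s = SPEED[model]
    return s["ttft_s"] + output_tokens / s["tok_per_s"]

for model in SPEED:
    print(f"{model}: ~{response_time(model):.1f}s for 1,000 output tokens")
# GPT-OSS-120b: ~3.7s
# Qwen2.5 Coder 32B Instruct: ~25.2s
```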

Capabilities

Feature Comparison

Features compared: Vision (Image Input), Tool/Function Calls, Reasoning Mode, Audio Input, Audio Output, PDF Input, Prompt Caching, Web Search. Per this comparison, Qwen2.5 Coder 32B Instruct supports tool/function calls while GPT-OSS-120b is not listed as doing so, and neither model accepts image input; both are text-only in their listed modalities. A tool-call sketch follows below.
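Since tool/function calling is the one capability listed in Qwen2.5 Coder 32B Instruct's favour, here is a minimal tool-call sketch using the OpenAI Python SDK against an OpenAI-compatible endpoint; the base URL, API key, and model identifier are placeholders and will differ by provider.

```python
# Minimal tool-calling sketch against an OpenAI-compatible endpoint.
# The base_url, api_key, and model name below are illustrative placeholders;
# check your provider's documentation for the exact values.
from openai import OpenAI

client = OpenAI(base_url="https://example-provider.com/v1", api_key="YOUR_KEY")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="Qwen/Qwen2.5-Coder-32B-Instruct",  # provider-specific identifier
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)

# If the model decides to call the tool, the structured call appears here.
print(response.choices[0].message.tool_calls)
```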

License & Release

Property | GPT-OSS-120b | Qwen2.5 Coder 32B Instruct
License | Open Source | Open Source
Author | OpenAI | Qwen
Released | Aug 2025 | Nov 2024

GPT-OSS-120b Modalities

Input: text
Output: text

Qwen2.5 Coder 32B Instruct Modalities

Input: text
Output: text


Frequently Asked Questions

Which model is cheaper?
Qwen2.5 Coder 32B Instruct has cheaper input pricing at $0.03/M tokens and cheaper output pricing at $0.11/M tokens.

Which model is better at coding?
GPT-OSS-120b scores 28.6 on the Coding Index; no Coding Index score is reported for Qwen2.5 Coder 32B Instruct.

Which model has the larger context window?
GPT-OSS-120b has a 131,072-token context window, four times the 32,768-token window of Qwen2.5 Coder 32B Instruct.

Do these models support vision?
No. Neither GPT-OSS-120b nor Qwen2.5 Coder 32B Instruct supports image input.