Anthropic vs Cohere

Claude 3.7 Sonnet vs Command R+ (08-2024)

A detailed comparison of pricing, benchmarks, and capabilities


Key Takeaways

Claude 3.7 Sonnet wins:

  • Larger context window
  • Faster response time
  • Better at coding
  • Better at math
  • Supports vision
  • Has reasoning mode

Command R+ (08-2024) wins:

  • Cheaper input tokens
  • Cheaper output tokens
  • Higher intelligence benchmark

At a Glance

  • Price Advantage: Command R+ (08-2024)
  • Benchmark Advantage: Claude 3.7 Sonnet
  • Context Window: Claude 3.7 Sonnet
  • Speed: Claude 3.7 Sonnet

Pricing Comparison

Price Comparison

Metric | Claude 3.7 Sonnet | Command R+ (08-2024) | Winner
Input (per 1M tokens) | $3.00 | $2.50 | Command R+ (08-2024)
Output (per 1M tokens) | $15.00 | $10.00 | Command R+ (08-2024)
Cache Read (per 1M tokens) | $0.30 | N/A | Claude 3.7 Sonnet
Cache Write (per 1M tokens) | $3.75 | N/A | Claude 3.7 Sonnet
Using a 3:1 input/output ratio, Command R+ (08-2024) is 27% cheaper overall.
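
For reference, here is how that blended figure is derived; a minimal Python sketch of the arithmetic, where the 3:1 input:output ratio is the assumption stated above:

```python
# Blended price per 1M tokens at an assumed input:output ratio.
def blended_price(input_per_m: float, output_per_m: float, ratio: float = 3.0) -> float:
    """Weighted average cost per 1M tokens for a given input:output ratio."""
    return (ratio * input_per_m + output_per_m) / (ratio + 1)

claude = blended_price(3.00, 15.00)           # $6.000 per 1M blended tokens
command_r_plus = blended_price(2.50, 10.00)   # $4.375 per 1M blended tokens

savings = 1 - command_r_plus / claude
print(f"Claude 3.7 Sonnet: ${claude:.3f}/M, Command R+: ${command_r_plus:.3f}/M")
print(f"Command R+ is {savings:.0%} cheaper")  # ~27%
```

Note that this ignores prompt caching: for workloads with long repeated prefixes, Claude's $0.30/M cache-read rate can pull its effective input cost well below the $3.00/M list price.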

Claude 3.7 Sonnet Providers

  • Vercel: $3.00
  • Amazon Bedrock: $3.00
  • Google: $3.00
  • Anthropic: $3.00

All four providers charge the same $3.00 per 1M input tokens.

Command R+ (08-2024) Providers

  • Cohere: $2.50

Benchmark Comparison

  • Benchmarks compared: 9
  • Claude 3.7 Sonnet wins: 3
  • Command R+ (08-2024) wins: 1

Benchmark Scores

Benchmark | Description | Claude 3.7 Sonnet | Command R+ (08-2024) | Winner
Intelligence Index | Overall intelligence score | 30.8 | 33.6 | Command R+ (08-2024)
Coding Index | Code generation & understanding | 26.7 | — | —
Math Index | Mathematical reasoning | 21.0 | — | —
MMLU Pro | Academic knowledge | 80.3 | 38.0 | Claude 3.7 Sonnet
GPQA | Graduate-level science | 65.6 | 13.4 | Claude 3.7 Sonnet
LiveCodeBench | Competitive programming | 39.4 | — | —
Aider | Real-world code editing | 64.9 | 38.3 | Claude 3.7 Sonnet
AIME | Competition math | 22.3 | — | —
BBH | Big-Bench Hard | — | 42.8 | —
Claude 3.7 Sonnet significantly outperforms Command R+ (08-2024) on coding benchmarks (e.g., Aider: 64.9 vs 38.3); Command R+ reports no Coding Index or LiveCodeBench score.
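
To make the win tally above reproducible, here is a small Python sketch; the scores are the ones in the table, and benchmarks missing a score for either model are excluded from the count:

```python
# Tally head-to-head wins, counting only benchmarks where both models report a score.
scores = {
    # benchmark: (Claude 3.7 Sonnet, Command R+ 08-2024); None = no reported score
    "Intelligence Index": (30.8, 33.6),
    "Coding Index":       (26.7, None),
    "Math Index":         (21.0, None),
    "MMLU Pro":           (80.3, 38.0),
    "GPQA":               (65.6, 13.4),
    "LiveCodeBench":      (39.4, None),
    "Aider":              (64.9, 38.3),
    "AIME":               (22.3, None),
    "BBH":                (None, 42.8),
}

head_to_head = [(a, b) for a, b in scores.values() if a is not None and b is not None]
claude_wins = sum(1 for a, b in head_to_head if a > b)
command_wins = sum(1 for a, b in head_to_head if b > a)
print(f"Claude 3.7 Sonnet wins: {claude_wins}")      # 3
print(f"Command R+ (08-2024) wins: {command_wins}")  # 1
```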

Cost vs Quality

[Interactive chart: cost vs. quality scatter plot highlighting Claude 3.7 Sonnet against other tracked models]

Context & Performance

Context Window

Claude 3.7 Sonnet: 200,000 tokens (max output: 64,000 tokens)
Command R+ (08-2024): 128,000 tokens (max output: 4,000 tokens)

Claude 3.7 Sonnet's context window is roughly 56% larger, and its maximum output is 16x larger.
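
As a practical illustration of what those limits mean, here is a minimal sketch; it assumes, as is typical for these APIs, that input and output tokens share the same context window, with the output budget being whatever you reserve for the reply:

```python
# How many input tokens remain once room is reserved for the response.
def max_input_tokens(context_window: int, reserved_output: int) -> int:
    """Input budget = total context window minus tokens reserved for the reply."""
    return max(context_window - reserved_output, 0)

# Claude 3.7 Sonnet: 200K window, up to 64K output tokens.
print(max_input_tokens(200_000, 64_000))   # 136000
# Command R+ (08-2024): 128K window, up to 4K output tokens.
print(max_input_tokens(128_000, 4_000))    # 124000
```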

Speed Performance

Throughput (tokens/second) and time-to-first-token figures were not reported for this comparison; the summary above lists Claude 3.7 Sonnet as the faster model.

Capabilities

Feature Comparison

Feature | Claude 3.7 Sonnet | Command R+ (08-2024)
Vision (Image Input) | Yes | No
Tool/Function Calls | Yes | Yes
Reasoning Mode | Yes | No
Audio Input | No | No
Audio Output | No | No
PDF Input | Yes | No
Prompt Caching | Yes | No
Web Search | Not specified | Not specified

License & Release

Property | Claude 3.7 Sonnet | Command R+ (08-2024)
License | Proprietary | Open weights
Author | Anthropic | Cohere
Released | Feb 2025 | Aug 2024

Claude 3.7 Sonnet Modalities

Input: text, image, file
Output: text

Command R+ (08-2024) Modalities

Input: text
Output: text
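
To show what the modality difference means in practice, here is a minimal sketch of sending an image to Claude 3.7 Sonnet with the anthropic Python SDK; the file path is hypothetical, and you should confirm the exact model ID with your provider. Command R+ (08-2024) accepts text only, so there is no equivalent call.

```python
import base64
import anthropic  # pip install anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Load a local image and base64-encode it for the Messages API.
with open("chart.png", "rb") as f:  # hypothetical file path
    image_b64 = base64.standard_b64encode(f.read()).decode("utf-8")

response = client.messages.create(
    model="claude-3-7-sonnet-20250219",  # Claude 3.7 Sonnet model ID
    max_tokens=512,
    messages=[{
        "role": "user",
        "content": [
            {"type": "image",
             "source": {"type": "base64", "media_type": "image/png", "data": image_b64}},
            {"type": "text", "text": "Summarize what this chart shows."},
        ],
    }],
)
print(response.content[0].text)
```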


Frequently Asked Questions

Which model is cheaper?
Command R+ (08-2024) is cheaper on both ends: $2.50/M input tokens and $10.00/M output tokens, versus $3.00/M and $15.00/M for Claude 3.7 Sonnet.

Which model is better at coding?
Claude 3.7 Sonnet scores higher on coding benchmarks, with a Coding Index of 26.7 and an Aider score of 64.9 versus 38.3; Command R+ (08-2024) has no reported Coding Index score.

Which model has the larger context window?
Claude 3.7 Sonnet has a 200,000 token context window, while Command R+ (08-2024) has a 128,000 token context window.

Does either model support vision?
Claude 3.7 Sonnet supports vision (image input); Command R+ (08-2024) does not.