Access real-time LLM pricing, benchmark and speed data directly in Claude Code, Cursor, Windsurf, and other MCP-enabled AI assistants.

Our MCP server gives your coding assistant access to all of our data, including pricing, benchmarks, latency, and endpoint availability. Make informed decisions without leaving your workflow.
Query pricing, compare models, and access benchmark data with these tools:
| Tool | Description |
|---|---|
| `get_all_models` | Get pricing for all models, with filtering by author, context length, TTFT, speed, or capabilities |
| `get_model` | Get detailed info for a specific model, including all pricing tiers, benchmarks, and latency |
| `compare_models` | Side-by-side comparison of multiple models by slug |
| `get_benchmarks` | Get models ranked by a specific benchmark (coding, math, intelligence, etc.) |
| `get_providers` | List all providers with model counts and price ranges |
| `search_models` | Search models by name, author, or description |
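If you're curious what these tools look like from a client's perspective, here is a minimal sketch, assuming the official MCP TypeScript SDK (`@modelcontextprotocol/sdk`) and its streamable HTTP transport: it connects to the server, lists the available tools, and calls `get_all_models`. The filter argument (`author`) is an illustrative assumption; each tool's advertised input schema is authoritative.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

async function main() {
  // Connect to the Price Per Token MCP server over streamable HTTP.
  const transport = new StreamableHTTPClientTransport(
    new URL("https://api.pricepertoken.com/mcp/mcp")
  );
  const client = new Client({ name: "pricing-explorer", version: "1.0.0" });
  await client.connect(transport);

  // Discover the tools the server exposes, along with their input schemas.
  const { tools } = await client.listTools();
  console.log(tools.map((tool) => tool.name));

  // Call get_all_models with a filter. The argument name "author" is an
  // assumption for illustration; check the tool's input schema for the real one.
  const result = await client.callTool({
    name: "get_all_models",
    arguments: { author: "anthropic" },
  });
  console.log(JSON.stringify(result, null, 2));

  await client.close();
}

main().catch(console.error);
```

In normal use you never write this code yourself: your coding assistant discovers the tools and makes these calls for you.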
Choose your AI assistant below for setup instructions. No API key required.
Run this command in your terminal:
```bash
claude mcp add --transport http pricepertoken https://api.pricepertoken.com/mcp/mcp
```

Add this to your ~/.cursor/mcp.json:
```json
{
  "mcpServers": {
    "pricepertoken": {
      "url": "https://api.pricepertoken.com/mcp/mcp"
    }
  }
}
```

Add this to your claude_desktop_config.json:
```json
{
  "mcpServers": {
    "pricepertoken": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "https://api.pricepertoken.com/mcp/mcp"
      ]
    }
  }
}
```

Add this to your ~/.codeium/windsurf/mcp_config.json:
```json
{
  "mcpServers": {
    "pricepertoken": {
      "serverUrl": "https://api.pricepertoken.com/mcp/mcp"
    }
  }
}
```

Once connected, try asking your AI assistant:
"What is the cheapest model with 100k+ context?"
"Compare GPT-4o and Claude 3.5 Sonnet pricing"
"Show me the top 10 models for coding benchmarks"
"Find the fastest models with TTFT under 2 seconds"
"What Anthropic models support vision?"
The Price Per Token MCP server is free to use and doesn't require an API key or any other authentication.
Our pricing database is updated regularly as providers announce changes. You always get the latest data.
We track pricing for 100+ models from OpenAI, Anthropic, Google, Meta, Mistral, and many more providers. View all on our LLM pricing page.
To compare models by benchmark, use the `get_benchmarks` tool to rank them by coding, math, or intelligence scores. See our LLM rankings for more.
There's no strict rate limit, but please use responsibly. If you need high-volume access, contact us.
Set up the MCP server in under a minute and start making data-driven model decisions.