
Price Per Token

Mistral AI Models Pricing Calculator

Estimate and compare costs for Mistral AI's newest models, including Codestral-2508, Devstral-Medium, Devstral-Small, Mistral-Medium-3.1, Mistral-Small-3.2-24b-Instruct, and more. Specify your input token count, output token count, and number of API calls to get a cost estimate.

29 models available

How to Use the LLM Pricing Calculator

Step-by-Step Guide

1. Choose Your Measurement

Select tokens for precision, words for content planning, or characters for short-form content.

2. Enter Your Numbers

Input your expected prompt length, desired response size, and number of API calls:

  • Prompt length: How much text you'll send to the AI (your question or instructions)
  • Response size: How much text you expect the AI to generate back
  • API calls: How many times you'll make this request (for total project cost)

3. Compare Results & Analyze Breakdown

Review your pricing analysis:

  • Model comparison: Compare costs of all tracked models in the table below
  • Cost breakdown: See separate input vs output costs and total per-call expenses
  • Optimization: Use the data to optimize your usage and choose the most cost-effective model

Understanding Input Types

Tokens

The most precise measurement. Tokens are the basic units AI models process: roughly 0.75 words or 4 characters each.

Words

Standard text measurement. Perfect for writers estimating content costs. Converted to ~1.3 tokens per word.

Characters

Ideal for social media or short-form content. Converted to ~0.25 tokens per character.
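
If you want to estimate these conversions yourself, here is a minimal Python sketch applying the approximate ratios above; actual token counts always depend on the specific model's tokenizer, so treat the results as estimates.

```python
# Approximate unit conversions used for estimation (actual counts vary by tokenizer).
TOKENS_PER_WORD = 1.3    # ~1.3 tokens per English word
TOKENS_PER_CHAR = 0.25   # ~4 characters per token

def words_to_tokens(words: int) -> int:
    """Estimate token count from a word count."""
    return round(words * TOKENS_PER_WORD)

def chars_to_tokens(chars: int) -> int:
    """Estimate token count from a character count."""
    return round(chars * TOKENS_PER_CHAR)

print(words_to_tokens(750))   # ~975 tokens for 750 words
print(chars_to_tokens(280))   # ~70 tokens for a 280-character post
```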

Calculator Components

Input

Text you send to the AI model API (your prompt)

Output

Generated response from the model

API Calls

Number of requests you'll make, used to calculate the total project cost
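
Under the hood, the cost estimate is simple arithmetic. Below is a minimal Python sketch of the same calculation; the prices in the example are placeholders, not actual Mistral rates, so use the per-model prices from the comparison table.

```python
def estimate_cost(input_tokens: int, output_tokens: int, api_calls: int,
                  input_price_per_m: float, output_price_per_m: float) -> dict:
    """Estimate API spend. Prices are in USD per 1M tokens."""
    input_cost = input_tokens / 1_000_000 * input_price_per_m
    output_cost = output_tokens / 1_000_000 * output_price_per_m
    per_call = input_cost + output_cost
    return {
        "input_cost_per_call": input_cost,
        "output_cost_per_call": output_cost,
        "cost_per_call": per_call,
        "total_cost": per_call * api_calls,
    }

# Placeholder prices for illustration only -- check the table for real rates.
print(estimate_cost(input_tokens=1_000, output_tokens=500, api_calls=10_000,
                    input_price_per_m=0.40, output_price_per_m=2.00))
# cost_per_call = $0.0004 + $0.0010 = $0.0014; total_cost = $14.00 for 10,000 calls
```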

💡 Pro Tips

  • Pricing data is updated daily from OpenRouter to ensure accuracy.
  • Multiply the per-call cost by your expected usage volume for total project estimates (see the quick example below).
  • Use the comparison table to find the most cost-effective model for your specific use case.
  • Consider both input and output costs, since some models price them at very different ratios.
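
For instance, at an illustrative per-call cost of $0.0014, a project making 50,000 calls would come to roughly $70.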

Frequently Asked Questions

Everything you need to know about AI pricing and tokens

What is a token?

A token is the basic unit that AI models use to process text. Think of tokens as "chunks" of text that the AI reads and writes. 1,000 tokens is approximately 750 words of English text, and one token averages roughly 4 characters. Tokens can be whole words, parts of words, or punctuation, and different languages may use different numbers of tokens. For example, "Hello world" is 2 tokens, "AI" is 1 token, and "ChatGPT" is 2 tokens (Chat + GPT). A typical email contains 200-400 tokens, while a blog post contains 1,000-3,000 tokens.

What is an execution or API call?

An execution, or API call, is each time you send a request to an AI model and get a response back. Each call is charged separately: the text you send is counted as input tokens, and the text you get back is counted as output tokens.

What is a prompt?

A prompt is the input text you send to an AI model: your question, instruction, or request. You can tailor your prompt to get a certain style of response. For example, you could say "You are a friendly customer service agent, respond to the user's request" followed by the question from the user.

How do I choose the right AI model?

Choosing the right AI model depends on your specific needs, budget, and quality requirements. For simple tasks like summaries or basic questions, use cheaper models like GPT-4o-mini or Claude Haiku. For complex reasoning, use premium models like GPT-4o or Claude Sonnet. For specialized tasks, look for models trained for your specific use case. Test a few models on your actual workload: premium models often need fewer retries, and sometimes a cheaper model with a better prompt works just as well. As a rough guide, budget models like GPT-4o-mini and Claude Haiku cost $0.15-0.30 per 1M tokens, balanced models like GPT-4o and Claude Sonnet cost $3-15 per 1M tokens, and premium models like GPT-4 Turbo and Claude Opus cost $15-75 per 1M tokens.
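
To put those tiers in perspective, here is a rough, purely illustrative Python comparison of what a 10M-token workload would cost at the price ranges quoted above (treating input and output as a single blended rate):

```python
# Illustrative tier comparison for a 10M-token workload, using the ranges quoted above.
# Treats pricing as a blended per-token rate; real costs depend on each model's
# separate input and output prices.
tiers = {
    "Budget (e.g. GPT-4o-mini, Claude Haiku)": (0.15, 0.30),    # USD per 1M tokens
    "Balanced (e.g. GPT-4o, Claude Sonnet)": (3.00, 15.00),
    "Premium (e.g. GPT-4 Turbo, Claude Opus)": (15.00, 75.00),
}

workload_millions_of_tokens = 10
for name, (low, high) in tiers.items():
    print(f"{name}: ${low * workload_millions_of_tokens:.2f} "
          f"to ${high * workload_millions_of_tokens:.2f}")
# Budget: $1.50 to $3.00 | Balanced: $30.00 to $150.00 | Premium: $150.00 to $750.00
```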