Thudm API Pricing (Updated 2025)
This page tracks Thudm API pricing for all three models. Prices are listed per 1,000 tokens, with worked examples so you can estimate spend quickly.
Current Prices (per 1,000 tokens)
| Model | Input ($/1K tokens) | Output ($/1K tokens) | Context Length (tokens) |
|---|---|---|---|
| glm-4-32b | $0.000240 | $0.000240 | 32,000 |
| glm-4.1v-9b-thinking | $0.000035 | $0.000138 | 65,536 |
| glm-z1-32b | $0.000020 | $0.000080 | 32,768 |
* Some models use tiered pricing based on prompt length. Displayed prices are for prompts ≤ 200k tokens.
What does "cost per token" mean?
Tokens are the basic units that AI models use to process text. As a rule of thumb, 1,000 tokens ≈ 750 words. The cost per token determines how much you pay for each unit of text the model processes.
Formula: Total Cost = (tokens / 1,000) × price per 1K tokens
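The formula above can be sketched as a small helper. The model names and prices come from the table on this page; the request sizes below are made-up values for illustration.

```python
def token_cost(tokens: int, price_per_1k: float) -> float:
    """Cost in dollars for `tokens` tokens at `price_per_1k` dollars per 1,000 tokens."""
    return tokens / 1000 * price_per_1k

# Hypothetical request: 50,000 input + 10,000 output tokens on glm-4-32b
# (input and output are both priced at $0.000240 per 1K tokens).
total = token_cost(50_000, 0.000240) + token_cost(10_000, 0.000240)
print(f"${total:.4f}")  # → $0.0144
```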
Examples
Example usage: a 500-word prompt + 300-word response ≈ 1,067 tokens total.
- 1,000 tokens ≈ 750 words of text
- Short email: ~200-400 tokens
- Blog post: ~1,000-3,000 tokens
- Research paper: ~10,000+ tokens
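The word-to-token estimates above can be combined with the pricing table into a quick estimator. This is a rough sketch using the ≈750 words per 1,000 tokens rule of thumb; actual token counts depend on the tokenizer, so treat the result as an approximation.

```python
WORDS_PER_1K_TOKENS = 750  # rule of thumb from this page; real counts vary

def words_to_tokens(words: int) -> int:
    """Approximate token count for a given word count."""
    return round(words * 1000 / WORDS_PER_1K_TOKENS)

def estimate_cost(prompt_words: int, response_words: int,
                  input_price_1k: float, output_price_1k: float) -> float:
    """Estimated dollar cost: input and output tokens priced separately."""
    in_tokens = words_to_tokens(prompt_words)
    out_tokens = words_to_tokens(response_words)
    return (in_tokens / 1000 * input_price_1k
            + out_tokens / 1000 * output_price_1k)

# The 500-word prompt + 300-word response example, priced on glm-z1-32b
# ($0.000020 input / $0.000080 output per 1K tokens, from the table above).
tokens = words_to_tokens(500) + words_to_tokens(300)
print(tokens)  # → 1067
print(f"${estimate_cost(500, 300, 0.000020, 0.000080):.8f}")
```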
Compare with Other Providers
See how Thudm compares with other AI providers including OpenAI, Anthropic, Google, and more.
View full pricing comparison →