LLM Context Window Calculator
An LLM context window calculator shows how much of a model's context limit a given amount of text consumes and estimates the input cost. The context window is the maximum amount of text, measured in tokens, that a model can process at once.
1. The context window is measured in tokens (1 token ≈ 0.75 words)
2. Input tokens include both the prompt and any prior conversation
3. Exceeding the context limit causes earlier content to be dropped ("forgotten")
4. Cost = (context tokens ÷ 1000) × input price per 1K tokens
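The usage and cost rules above can be sketched as a small function. The `$0.0025` per 1K input tokens used here is a GPT-4o-like price taken from the table below; treat it as an illustrative assumption, since pricing changes over time.

```python
def context_stats(tokens_used: int, context_limit: int, price_per_1k: float):
    """Return (percent of context window used, estimated input cost in USD)."""
    percent = 100 * tokens_used / context_limit
    cost = (tokens_used / 1000) * price_per_1k  # cost formula from step 4 above
    return percent, cost

# Example: 32K tokens in a 128K-window model at $0.0025 per 1K input tokens
pct, usd = context_stats(32_000, 128_000, 0.0025)
print(f"{pct:.0f}% of context used, ~${usd:.2f} input cost")
```

Running this reproduces the worked example below: 25% of the context used at roughly $0.08 input cost.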
- 32K tokens used in a 128K-window model = 25% of the context used, ~$0.08 input cost (GPT-4o)
- Full 200K context (Claude) = ~150,000 words, ~600 A4 pages
- A 1,000-token conversation = ~750 words, negligible cost at most price points
| Tokens | Words (approx) | Pages (A4) | Input cost (GPT-4o) |
|---|---|---|---|
| 1K | 750 | 3 | $0.003 |
| 10K | 7,500 | 30 | $0.025 |
| 32K | 24,000 | 96 | $0.08 |
| 128K | 96,000 | 384 | $0.32 |
| 200K | 150,000 | 600 | $0.50 |
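The words and pages columns follow from two conversion ratios implied by the table: ~0.75 words per token and ~250 words per A4 page. A minimal sketch, assuming those ratios:

```python
def tokens_to_pages(tokens: int, words_per_token: float = 0.75,
                    words_per_page: int = 250):
    """Convert a token count to approximate words and A4 pages."""
    words = tokens * words_per_token
    return words, words / words_per_page

# Example: a full 200K context (Claude-sized window)
words, pages = tokens_to_pages(200_000)
print(f"{words:,.0f} words, {pages:,.0f} A4 pages")
```

This reproduces the last table row: 150,000 words and 600 pages for 200K tokens.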