LLM Comparison
Grok 4 Fast vs Claude Opus 4.6
Side-by-side specs, pricing & capabilities · Updated May 2026
| | Grok 4 Fast | Claude Opus 4.6 |
|---|---|---|
| Organization | xAI | Anthropic |
| OpenTools Score | 63 | 4.2 |
| Family | Grok | Claude |
| Status | Current | Current |
| Release Date | Sep 2025 | Feb 2026 |
| Context Window | 2.0M tokens | 1.0M tokens |
| Input Price | $0.20/M tokens | $5.00/M tokens |
| Output Price | $0.50/M tokens | $25.00/M tokens |
| Pricing Notes | Cache read: $0.05/M tokens | Cache read: $0.50/M tokens |
| Capabilities | text, vision, code | text, vision, code, tool-use |
| Max Output | 30K tokens | 128K tokens |
| API Identifier | x-ai/grok-4-fast | anthropic/claude-opus-4.6 |
| Benchmarks | | |
| MMLU | — | 89.8 (anthropic) |
| GPQA Diamond | — | 91.3 (anthropic) |
| SWE-bench Verified | — | 80.8 (anthropic) |
| SWE-bench Pro | — | 53.4 (anthropic) |
| Terminal-Bench 2.0 | — | 65.4 (anthropic) |
| MATH 500 | — | 80.7 (anthropic) |
| LiveCodeBench | — | 55.9 (anthropic) |
| Berkeley Function Calling | — | 91.9 (anthropic) |
| HLE | — | 48 (anthropic) |
| MCP-Atlas | — | 75.8 (anthropic) |
| BrowseComp | — | 83.7 (anthropic) |
| OSWorld-Verified | — | 72.7 (anthropic) |
| DocVQA | — | 91.3 (anthropic) |
| GPQA-AA Elo | — | 1619 (artificial-analysis) |
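The pricing rows above can be combined into a simple per-request cost estimate. A minimal sketch, assuming the listed rates (cache reads billed at $0.05/M for Grok 4 Fast and $0.50/M for Claude Opus 4.6) and that cached input tokens are billed at the cache-read rate instead of the full input rate:

```python
def request_cost(input_tokens, output_tokens, cached_tokens,
                 in_rate, out_rate, cache_rate):
    """Estimate one request's cost in USD; rates are $ per million tokens."""
    fresh = input_tokens - cached_tokens  # tokens billed at the full input rate
    return (fresh * in_rate
            + cached_tokens * cache_rate
            + output_tokens * out_rate) / 1_000_000

# Rates from the comparison table above.
GROK = dict(in_rate=0.20, out_rate=0.50, cache_rate=0.05)
OPUS = dict(in_rate=5.00, out_rate=25.00, cache_rate=0.50)

# Hypothetical request: 100K-token prompt with 80K tokens cached, 2K output.
print(request_cost(100_000, 2_000, 80_000, **GROK))  # ~$0.009
print(request_cost(100_000, 2_000, 80_000, **OPUS))  # ~$0.19
```

With heavy cache reuse the effective input rate drops well below the headline price for both models, which matters for agentic workloads that resend long contexts.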
Cost Calculator
The example monthly costs below assume a usage of 1M input tokens and 0.5M output tokens per month.
| Model | Input | Output | Total / mo | vs Best |
|---|---|---|---|---|
| Grok 4 Fast (cheapest) | $0.20 | $0.25 | $0.45 | — |
| Claude Opus 4.6 | $5.00 | $12.50 | $17.50 | +3789% |
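The table's totals follow directly from the per-token rates. A sketch reproducing them, assuming the monthly load of 1M input and 0.5M output tokens implied by the dollar figures:

```python
def monthly_cost(in_millions, out_millions, in_rate, out_rate):
    """Monthly cost in USD, given token volumes in millions and $/M rates."""
    return in_millions * in_rate + out_millions * out_rate

grok = monthly_cost(1.0, 0.5, 0.20, 0.50)   # 0.45
opus = monthly_cost(1.0, 0.5, 5.00, 25.00)  # 17.5
premium = (opus / grok - 1) * 100           # the "vs Best" column
print(f"${grok:.2f} vs ${opus:.2f} (+{premium:.0f}%)")
```

At this usage level, Claude Opus 4.6 costs roughly 39x as much as Grok 4 Fast, matching the +3789% shown above.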
xAI
Grok 4 Fast
Grok 4 Fast is a multimodal LLM from xAI. It supports a context window of up to 2,000,000 tokens and is available from $0.20/M input tokens.
Anthropic
Claude Opus 4.6
Claude Opus 4.6 is a multimodal LLM from Anthropic with adaptive reasoning capabilities. It achieves 91.3% on GPQA Diamond and 80.8% on SWE-bench Verified, supports a context window of up to 1,000,000 tokens, and is available from $5.00/M input tokens.
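The API identifiers in the table above use provider/model slug form, as found on OpenRouter-style gateways. A hedged sketch of building an OpenAI-compatible chat-completion request body with those slugs; the request schema and the endpoint you POST it to are assumptions, not taken from this page:

```python
import json

# Slugs copied verbatim from the "API Identifier" row above.
MODELS = {
    "grok": "x-ai/grok-4-fast",
    "opus": "anthropic/claude-opus-4.6",
}

def chat_payload(model_key, prompt, max_tokens=1024):
    """Build an OpenAI-style chat-completion request body (assumed schema)."""
    return {
        "model": MODELS[model_key],
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

body = json.dumps(chat_payload("grok", "Summarize this diff."))
# POST `body` to your gateway's /chat/completions endpoint with your API key.
print(body)
```

Swapping between the two models is then a one-key change, which is useful when benchmarking the cost/quality trade-off the tables above describe.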