LLM Comparison
Claude Opus 4.6 vs Claude Opus 4.7
Side-by-side specs, pricing & capabilities · Updated May 2026
[Price vs Intelligence chart]
| | Claude Opus 4.6 | Claude Opus 4.7 |
|---|---|---|
| Organization | Anthropic | Anthropic |
| OpenTools Score | 63 · 4.2 | 71 · 4.7 |
| Family | Claude | Claude |
| Status | Current | Current |
| Release Date | Feb 2026 | Apr 2026 |
| Context Window | 1.0M tokens | 1.0M tokens |
| Input Price | $5.00/M tokens | $5.00/M tokens |
| Output Price | $25.00/M tokens | $25.00/M tokens |
| Pricing Notes | Cache read: $0.50/M tokens | Cache read: $0.50/M tokens |
| Capabilities | text, vision, code, tool-use | text, vision, code, tool-use |
| Max Output | 128K tokens | 128K tokens |
| API Identifier | anthropic/claude-opus-4.6 | anthropic/claude-opus-4.7 |
| Benchmarks | | |
| MMLU | 89.8 (Anthropic) | 84.7 (Anthropic) |
| GPQA Diamond | 91.3 (Anthropic) | 94.2 (Anthropic) |
| SWE-bench Verified | 80.8 (Anthropic) | 87.6 (Anthropic) |
| SWE-bench Pro | 53.4 (Anthropic) | 64.3 (Anthropic) |
| Terminal-Bench 2.0 | 65.4 (Anthropic) | — |
| MATH 500 | 80.7 (Anthropic) | — |
| LiveCodeBench | 55.9 (Anthropic) | — |
| Berkeley Function Calling | 91.9 (Anthropic) | 77.3 (Anthropic) |
| HLE | 48 (Anthropic) | 54.7 (Artificial Analysis) |
| MCP-Atlas | 75.8 (Anthropic) | 77.3 (Anthropic) |
| BrowseComp | 83.7 (Anthropic) | 79.3 (Anthropic) |
| OSWorld-Verified | 72.7 (Anthropic) | 78 (Anthropic) |
| DocVQA | 91.3 (Anthropic) | 93.1 (Anthropic) |
| GPQA-AA Elo | 1619 (Artificial Analysis) | — |
| MMLU-Pro | — | 78.1 (Anthropic) |
| MMMLU | — | 92 (Anthropic) |
| SWE-bench Multilingual+Multimodal | — | 80.5 (Anthropic) |
| Terminal-Bench | — | 69.4 (Anthropic) |
| CharXiv-R | — | 91 (Anthropic) |
| CyberGym | — | 73.1 (Anthropic) |
| GDPVal-AA Elo | — | 1753 (Artificial Analysis) |
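The API identifiers above use a router-style `provider/model` format. As an illustration only, a request body for an OpenAI-compatible chat endpoint could be built as below; the `model`/`messages`/`max_tokens` schema is an assumption based on the common chat-completions format, not something documented on this page.

```python
import json

def build_chat_request(model_id: str, prompt: str, max_tokens: int = 1024) -> str:
    """Build a JSON request body for an OpenAI-compatible chat endpoint.

    The field names here are assumptions (standard chat-completions style);
    only the model identifier comes from the comparison table.
    """
    payload = {
        "model": model_id,          # e.g. "anthropic/claude-opus-4.7" from the table above
        "max_tokens": max_tokens,   # both models cap output at 128K tokens
        "messages": [{"role": "user", "content": prompt}],
    }
    return json.dumps(payload)

body = build_chat_request("anthropic/claude-opus-4.7", "Summarize this diff.")
```

The returned string would then be sent as the POST body to whichever gateway serves these identifiers.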
Cost Calculator
Example monthly costs, assuming 1M input tokens and 0.5M output tokens per month:
| Model | Input | Output | Total / mo | vs Best |
|---|---|---|---|---|
| Claude Opus 4.6 (cheapest) | $5.00 | $12.50 | $17.50 | — |
| Claude Opus 4.7 (cheapest) | $5.00 | $12.50 | $17.50 | — |
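The totals above are simple arithmetic on the per-token prices. A minimal sketch, using the table's prices ($5.00/M input, $25.00/M output, $0.50/M cache reads, identical for both models); the assumption that cached tokens are billed at the cache-read rate in place of the full input rate is the usual convention, not stated on this page:

```python
def monthly_cost(input_mtok: float, output_mtok: float, cached_mtok: float = 0.0,
                 in_price: float = 5.00, out_price: float = 25.00,
                 cache_price: float = 0.50) -> float:
    """Monthly cost in USD; token volumes in millions, prices in $/M tokens.

    Cached input tokens are assumed to be billed at the cache-read rate
    instead of the full input rate.
    """
    uncached = input_mtok - cached_mtok
    return uncached * in_price + cached_mtok * cache_price + output_mtok * out_price

# The table's example: 1M input, 0.5M output, no cache hits
print(monthly_cost(1.0, 0.5))  # → 17.5
```

With half the input served from cache, the same workload drops to $15.25/month, which is why the cache-read price in the Pricing Notes row matters for repeated-prompt workloads.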
Anthropic
Claude Opus 4.6
Claude Opus 4.6 is a multimodal LLM from Anthropic with adaptive reasoning capabilities. It achieves 91.3% on GPQA Diamond and 80.8% on SWE-bench Verified, and supports a context window of up to 1,000,000 tokens. Available from $5.00/M input tokens.
Anthropic
Claude Opus 4.7
Claude Opus 4.7 is Anthropic's most capable generally available model, with significant improvements in advanced software engineering, agentic tool use, and vision resolution. It achieves 87.6% on SWE-bench Verified and 94.2% on GPQA Diamond, and supports a context window of up to 1,000,000 tokens with 3.3x higher-resolution vision than Opus 4.6.