LLM Comparison
Claude Opus 4.7 vs Claude Opus 4.6
Side-by-side specs, pricing & capabilities · Updated May 2026
| Spec | Claude Opus 4.7 | Claude Opus 4.6 |
|---|---|---|
| Organization | Anthropic | Anthropic |
| OpenTools Score | 71 (4.7) | 63 (4.2) |
| Family | Claude | Claude |
| Status | Current | Current |
| Release Date | Apr 2026 | Feb 2026 |
| Context Window | 1.0M tokens | 1.0M tokens |
| Input Price | $5.00/M tokens | $5.00/M tokens |
| Output Price | $25.00/M tokens | $25.00/M tokens |
| Pricing Notes | Cache read: $0.50/M tokens | Cache read: $0.50/M tokens |
| Capabilities | text, vision, code, tool use | text, vision, code, tool use |
| Max Output | 128K tokens | 128K tokens |
| API Identifier | anthropic/claude-opus-4.7 | anthropic/claude-opus-4.6 |
| Benchmarks (source in parentheses) | | |
| MMLU | 84.7 (Anthropic) | 89.8 (Anthropic) |
| MMLU-Pro | 78.1 (Anthropic) | — |
| MMMLU | 92 (Anthropic) | — |
| GPQA Diamond | 94.2 (Anthropic) | 91.3 (Anthropic) |
| HLE | 54.7 (Artificial Analysis) | 48 (Anthropic) |
| SWE-bench Verified | 87.6 (Anthropic) | 80.8 (Anthropic) |
| SWE-bench Pro | 64.3 (Anthropic) | 53.4 (Anthropic) |
| SWE-bench Multilingual+Multimodal | 80.5 (Anthropic) | — |
| Terminal-Bench | 69.4 (Anthropic) | — |
| MCP-Atlas | 77.3 (Anthropic) | 75.8 (Anthropic) |
| Berkeley Function Calling | 77.3 (Anthropic) | 91.9 (Anthropic) |
| OSWorld-Verified | 78 (Anthropic) | 72.7 (Anthropic) |
| BrowseComp | 79.3 (Anthropic) | 83.7 (Anthropic) |
| CharXiv-R | 91 (Anthropic) | — |
| DocVQA | 93.1 (Anthropic) | 91.3 (Anthropic) |
| CyberGym | 73.1 (Anthropic) | — |
| GDPVal-AA Elo | 1753 (Artificial Analysis) | — |
| Terminal-Bench 2.0 | — | 65.4 (Anthropic) |
| MATH 500 | — | 80.7 (Anthropic) |
| LiveCodeBench | — | 55.9 (Anthropic) |
| GPQA-AA Elo | — | 1619 (Artificial Analysis) |
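The provider-prefixed API identifiers above are the slug style used by OpenAI-compatible routers, so both models can be queried through the same client code. Below is a minimal sketch of running one prompt against both models; the base URL, API-key variable, and `ask` helper are illustrative placeholders, not details taken from this page.

```python
import os

import requests

# Assumed OpenAI-compatible endpoint; the URL and key variable are
# placeholders, not taken from the comparison above.
BASE_URL = "https://api.example.com/v1/chat/completions"
API_KEY = os.environ["LLM_API_KEY"]


def ask(model: str, prompt: str) -> str:
    """Send one prompt to `model` and return the reply text."""
    resp = requests.post(
        BASE_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": model,  # e.g. "anthropic/claude-opus-4.7"
            "messages": [{"role": "user", "content": prompt}],
            "max_tokens": 1024,
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]


# Run the same prompt through both models for a quick side-by-side.
for model in ("anthropic/claude-opus-4.7", "anthropic/claude-opus-4.6"):
    print(f"{model}: {ask(model, 'Explain SWE-bench in one sentence.')}")
```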
Cost Calculator
The costs below assume an example usage of 1M input tokens and 500K output tokens per month; scale the figures to your own volumes.
| Model | Input | Output | Total / mo | vs Best |
|---|---|---|---|---|
| Claude Opus 4.7 (cheapest) | $5.00 | $12.50 | $17.50 | — |
| Claude Opus 4.6 (cheapest) | $5.00 | $12.50 | $17.50 | — |
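Because both models share identical rates, any cost difference comes from your token mix and cache hit rate rather than model choice. Here is a minimal sketch of the arithmetic behind the table, using the per-million rates listed above; the `monthly_cost` helper and the 80%-cached scenario are illustrative assumptions, not part of the page's calculator.

```python
# Per-million-token rates from the spec table (identical for both models).
INPUT_RATE = 5.00        # $/M input tokens
OUTPUT_RATE = 25.00      # $/M output tokens
CACHE_READ_RATE = 0.50   # $/M cached input tokens


def monthly_cost(input_tokens: int, output_tokens: int,
                 cached_tokens: int = 0) -> float:
    """Estimate monthly spend in dollars for a given token mix.

    `cached_tokens` is the share of input tokens served from the prompt
    cache, billed at the cheaper cache-read rate.
    """
    fresh = input_tokens - cached_tokens
    return (fresh * INPUT_RATE
            + cached_tokens * CACHE_READ_RATE
            + output_tokens * OUTPUT_RATE) / 1_000_000


# Reproduce the example row: 1M input + 500K output tokens per month.
print(f"${monthly_cost(1_000_000, 500_000):.2f}")            # $17.50

# Same mix with 80% of input tokens served from cache.
print(f"${monthly_cost(1_000_000, 500_000, 800_000):.2f}")   # $13.90
```

As the second scenario shows, the cache-read rate dominates input savings: with heavy prompt reuse, effective input cost drops by up to 90% while output cost is unchanged.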
Claude Opus 4.7
Claude Opus 4.7 is Anthropic's most capable generally available model, with significant improvements in advanced software engineering, agentic tool use, and vision resolution. It achieves 87.6% on SWE-bench Verified and 94.2% on GPQA Diamond, supports a context window of up to 1,000,000 tokens, and offers 3.3x higher-resolution vision than Opus 4.6.
Claude Opus 4.6
Claude Opus 4.6 is a multimodal LLM from Anthropic with adaptive reasoning capabilities. It achieves 91.3% on GPQA Diamond and 80.8% on SWE-bench Verified, supports a context window of up to 1,000,000 tokens, and is available from $5.00/M input tokens.