LLM Comparison
GPT-5.4 vs Claude Opus 4.7
Side-by-side specs, pricing & capabilities · Updated May 2026
| Spec | GPT-5.4 | Claude Opus 4.7 |
|---|---|---|
| Organization | OpenAI | Anthropic |
| OpenTools Score | 74 | 71 |
| Family | GPT | Claude |
| Status | Current | Current |
| Release Date | Mar 2026 | Apr 2026 |
| Context Window | 1.1M tokens | 1.0M tokens |
| Input Price | $2.50/M tokens | $5.00/M tokens |
| Output Price | $15.00/M tokens | $25.00/M tokens |
| Pricing Notes | Cache read: $0.25/M tokens | Cache read: $0.50/M tokens |
| Capabilities | text, vision, code | text, vision, code, tool-use |
| Max Output | 128K tokens | 128K tokens |
| API Identifier | openai/gpt-5.4 | anthropic/claude-opus-4.7 |
Benchmarks
Scores are as reported by the source shown in parentheses.
| Benchmark | GPT-5.4 | Claude Opus 4.7 |
|---|---|---|
| GPQA Diamond | 93 (OpenAI) | 94.2 (Anthropic) |
| SWE-bench Pro | 57.7 (OpenAI) | 64.3 (Anthropic) |
| OSWorld-Verified | 75 (OpenAI) | 78 (Anthropic) |
| Terminal-Bench 2.0 | 75.1 (OpenAI) | — |
| ARC-AGI-2 | 73.3 (OpenAI) | — |
| BrowseComp | 82.7 (OpenAI) | 79.3 (Anthropic) |
| MMMU-Pro | 81.2 (OpenAI) | — |
| MATH-500 | 94.6 (OpenAI) | — |
| MMLU | — | 84.7 (Anthropic) |
| MMLU-Pro | — | 78.1 (Anthropic) |
| MMMLU | — | 92 (Anthropic) |
| HLE | — | 54.7 (Artificial Analysis) |
| SWE-bench Verified | — | 87.6 (Anthropic) |
| SWE-bench Multilingual+Multimodal | — | 80.5 (Anthropic) |
| Terminal-Bench | — | 69.4 (Anthropic) |
| MCP-Atlas | — | 77.3 (Anthropic) |
| Berkeley Function Calling | — | 77.3 (Anthropic) |
| CharXiv-R | — | 91 (Anthropic) |
| DocVQA | — | 93.1 (Anthropic) |
| CyberGym | — | 73.1 (Anthropic) |
| GDPVal-AA Elo | — | 1753 (Artificial Analysis) |
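The API identifiers in the spec table follow the `provider/model` naming used by OpenAI-compatible gateways such as OpenRouter. Below is a minimal sketch of querying either model through such a gateway; the base URL and the `OPENROUTER_API_KEY` environment variable are assumptions for illustration, not part of this comparison.

```python
# Minimal sketch: calling both models through an OpenAI-compatible
# gateway (e.g. OpenRouter). Endpoint and credential name are assumed.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",   # assumed gateway endpoint
    api_key=os.environ["OPENROUTER_API_KEY"],  # assumed credential name
)

for model_id in ("openai/gpt-5.4", "anthropic/claude-opus-4.7"):
    # model_id is the API Identifier from the spec table above
    response = client.chat.completions.create(
        model=model_id,
        messages=[{"role": "user", "content": "Summarize CRDTs in one sentence."}],
        max_tokens=128,
    )
    print(model_id, "->", response.choices[0].message.content)
```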
Cost Calculator
Example monthly costs assuming 1,000,000 input tokens and 500,000 output tokens per month (cost = tokens × per-million price).
| Model | Input | Output | Total / mo | vs Best |
|---|---|---|---|---|
| GPT-5.4 (cheapest) | $2.50 | $7.50 | $10.00 | — |
| Claude Opus 4.7 | $5.00 | $12.50 | $17.50 | +75% |
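For reference, here is the arithmetic behind the table: each line item is tokens divided by one million, multiplied by the per-million price, summed over input and output. A minimal sketch reproducing the figures above, with cache-read pricing from the spec table as an optional parameter (the function and variable names are illustrative):

```python
# Reproduces the cost table. Prices are $/M tokens; usage is tokens/month.
PRICES = {
    "GPT-5.4":         {"input": 2.50, "output": 15.00, "cache_read": 0.25},
    "Claude Opus 4.7": {"input": 5.00, "output": 25.00, "cache_read": 0.50},
}

def monthly_cost(model, input_tokens, output_tokens, cached_tokens=0):
    p = PRICES[model]
    fresh = input_tokens - cached_tokens       # tokens billed at the full input rate
    return (fresh * p["input"]
            + cached_tokens * p["cache_read"]  # cache reads billed at the discount rate
            + output_tokens * p["output"]) / 1_000_000

# Usage matching the table: 1M input + 0.5M output tokens per month.
for model in PRICES:
    print(f"{model}: ${monthly_cost(model, 1_000_000, 500_000):.2f}/mo")
# GPT-5.4: $10.00/mo; Claude Opus 4.7: $17.50/mo (+75% vs best)
```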
OpenAI
GPT-5.4
GPT-5.4 is a multimodal LLM from OpenAI. It supports up to a 1,050,000-token context window and is available from $2.50/M input tokens.
Anthropic
Claude Opus 4.7
Claude Opus 4.7 is Anthropic's most capable generally available model, with significant improvements in advanced software engineering, agentic tool use, and vision resolution. It achieves 87.6% on SWE-bench Verified and 94.2% on GPQA Diamond, and supports up to a 1,000,000-token context window with 3.3x higher-resolution vision than Opus 4.6.