LLM Comparison

Claude Opus 4.7 vs GPT-5.5

Side-by-side specs, pricing & capabilities · Updated April 2026

Price vs Intelligence

[Scatter chart: average price per M tokens vs. intelligence score. Claude Opus 4.7: 70.8 at $15.00/M; GPT-5.5: 89.5 at $17.50/M.]
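The chart's "avg price" figures are consistent with a simple 1:1 average of the input and output prices listed below. A minimal sketch, assuming that weighting (the site's actual blend ratio is not stated):

```python
def blended_price(input_per_m: float, output_per_m: float) -> float:
    """Average price per million tokens, assuming a 1:1 input/output blend."""
    return (input_per_m + output_per_m) / 2

print(blended_price(5.00, 25.00))  # Claude Opus 4.7 -> 15.0
print(blended_price(5.00, 30.00))  # GPT-5.5 -> 17.5
```

Both results match the chart's plotted values, which suggests the 1:1 blend is the weighting in use.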

| Spec | Claude Opus 4.7 | GPT-5.5 |
|---|---|---|
| Organization | Anthropic | OpenAI |
| OpenTools Score | 71 · 4.7 | 90 · 5.1 |
| Family | Claude | GPT |
| Status | Current | Current |
| Release Date | Apr 2026 | Apr 2026 |
| Context Window | 1.0M tokens | 1.1M tokens |
| Input Price | $5.00/M tokens | $5.00/M tokens |
| Output Price | $25.00/M tokens | $30.00/M tokens |
| Pricing Notes | Cache read: $0.50/M tokens | Cached input: $0.50/M tokens. Long context (>272K tokens): 2x input, 1.5x output. Batch API: 50% discount. Priority: 2.5x standard. |
| Capabilities | text, vision, code, tool-use | text, vision, code, tool-use, extended-thinking, computer-use, web-search |
| Training Cutoff | December 2025 | — |
| Max Output | 128K tokens | 128K tokens |
| API Identifier | anthropic/claude-opus-4.7 | openai/gpt-5.5 |
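The GPT-5.5 pricing notes combine several multipliers. The following sketch shows one way a per-request cost could be computed under those rules; how the tiers interact (e.g. whether cached reads are exempt from the long-context multiplier, and whether the batch and priority multipliers stack multiplicatively) is an assumption here, not documented behavior:

```python
def gpt55_request_cost(
    input_tokens: int,
    output_tokens: int,
    *,
    cached_input_tokens: int = 0,
    batch: bool = False,
    priority: bool = False,
) -> float:
    """Estimated USD cost for one GPT-5.5 request, per the pricing notes above."""
    IN_RATE, OUT_RATE, CACHED_RATE = 5.00, 30.00, 0.50  # $ per M tokens

    # Long-context surcharge: 2x input, 1.5x output beyond 272K input tokens.
    long_ctx = input_tokens > 272_000
    in_rate = IN_RATE * (2.0 if long_ctx else 1.0)
    out_rate = OUT_RATE * (1.5 if long_ctx else 1.0)

    cost = (
        (input_tokens - cached_input_tokens) / 1e6 * in_rate
        + cached_input_tokens / 1e6 * CACHED_RATE  # assumed exempt from surcharge
        + output_tokens / 1e6 * out_rate
    )
    if batch:
        cost *= 0.5   # Batch API: 50% discount
    if priority:
        cost *= 2.5   # Priority tier: 2.5x standard
    return cost

# 100K input + 50K output, standard tier:
print(gpt55_request_cost(100_000, 50_000))  # -> 2.0
```

For a request past the 272K threshold (e.g. 300K input tokens), the input rate doubles to $10/M, so 300K input tokens alone cost $3.00 rather than $1.50.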
Benchmarks

Scores are vendor-reported unless marked (AA) = Artificial Analysis.

| Benchmark | Claude Opus 4.7 | GPT-5.5 |
|---|---|---|
| MMLU | 84.7 | 92.4 |
| MMLU-Pro | 78.1 | — |
| MMMLU | 92 | — |
| GPQA Diamond | 94.2 | 93.6 |
| HLE | 54.7 (AA) | — |
| SWE-bench Verified | 87.6 | — |
| SWE-bench Pro | 64.3 | 58.6 |
| SWE-bench Multilingual+Multimodal | 80.5 | — |
| Terminal-Bench | 69.4 | — |
| MCP-Atlas | 77.3 | — |
| Berkeley Function Calling | 77.3 | — |
| OSWorld-Verified | 78 | 78.7 |
| BrowseComp | 79.3 | 84.4 |
| CharXiv-R | 91 | — |
| DocVQA | 93.1 | — |
| CyberGym | 73.1 | 81.8 |
| GDPVal-AA Elo | 1753 (AA) | — |
| ARC-AGI-2 | — | 85 |
| Terminal-Bench 2.0 | — | 82.7 |
| MMMU-Pro | — | 81.2 |
| FrontierMath Tier 4 | — | 35.4 |
| HLE (with tools) | — | 52.2 |
| GDPval | — | 84.9 |
| Toolathlon | — | 55.6 |
| MRCR v2 512K–1M | — | 74 |

Cost Calculator

Example monthly costs, for a usage of 1M input and 0.5M output tokens per month.

| Model | Input | Output | Total / mo | vs Best |
|---|---|---|---|---|
| Claude Opus 4.7 (cheapest) | $5.00 | $12.50 | $17.50 | — |
| GPT-5.5 | $5.00 | $15.00 | $20.00 | +14% |
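The calculator's totals follow directly from the per-token prices; the figures shown correspond to 1M input and 0.5M output tokens per month (inferred from the dollar amounts, not stated on the page). A sketch of the same arithmetic:

```python
# Per-M-token prices (input, output) from the comparison table above.
MODELS = {
    "Claude Opus 4.7": (5.00, 25.00),
    "GPT-5.5": (5.00, 30.00),
}

def monthly_cost(in_rate: float, out_rate: float,
                 input_m: float, output_m: float) -> float:
    """Monthly USD cost given token volumes in millions."""
    return in_rate * input_m + out_rate * output_m

# Usage that reproduces the table: 1M input, 0.5M output tokens/month.
costs = {name: monthly_cost(i, o, 1.0, 0.5) for name, (i, o) in MODELS.items()}
best = min(costs.values())

for name, cost in costs.items():
    premium = (cost / best - 1) * 100  # percent above the cheapest model
    print(f"{name}: ${cost:.2f}/mo (+{premium:.0f}% vs best)")
```

This reproduces the $17.50 vs $20.00 totals and the +14% premium (20.00 / 17.50 ≈ 1.143).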
