LLM Comparison

GPT-5.5 vs Claude Opus 4.7

Side-by-side specs, pricing & capabilities · Updated April 2026

Price vs Intelligence

[Scatter chart: intelligence vs. average price per M tokens. GPT-5.5: 89.5 at $17.50; Claude Opus 4.7: 70.8 at $15.00.]

| Spec | GPT-5.5 | Claude Opus 4.7 |
|---|---|---|
| Organization | OpenAI | Anthropic |
| OpenTools Score | 90 · 5.1 | 71 · 4.7 |
| Family | GPT | Claude |
| Status | Current | Current |
| Release Date | Apr 2026 | Apr 2026 |
| Context Window | 1.1M tokens | 1.0M tokens |
| Input Price | $5.00/M tokens | $5.00/M tokens |
| Output Price | $30.00/M tokens | $25.00/M tokens |
| Pricing Notes | Cached input: $0.50/M tokens. Long context (>272K tokens): 2x input, 1.5x output. Batch API: 50% discount. Priority: 2.5x standard. | Cache read: $0.50/M tokens |
| Capabilities | text, vision, code, tool-use, extended-thinking, computer-use, web-search | text, vision, code, tool-use |
| Training Cutoff | December 2025 | n/a |
| Max Output | 128K tokens | 128K tokens |
| API Identifier | openai/gpt-5.5 | anthropic/claude-opus-4.7 |
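The pricing notes above stack several modifiers on the base rates. A minimal Python sketch of how a per-request input cost for GPT-5.5 might be computed under those rules; the function name is illustrative, and it assumes (not confirmed by the table) that the 2x long-context rate applies to the whole request once total input exceeds 272K tokens, that cached tokens stay at the flat cached rate, and that the batch discount stacks multiplicatively on top:

```python
# Sketch: GPT-5.5 input cost per request under the pricing notes above.
# Assumptions are noted in the lead-in; this is not an official formula.

BASE_INPUT = 5.00              # $ per M fresh input tokens
CACHED_INPUT = 0.50            # $ per M cached input tokens
LONG_CONTEXT_LIMIT = 272_000   # tokens; above this, input is billed at 2x
LONG_CONTEXT_MULT = 2.0
BATCH_DISCOUNT = 0.5           # batch API: 50% off

def input_cost(fresh_tokens: int, cached_tokens: int = 0, batch: bool = False) -> float:
    """Dollar cost of one request's input tokens (sketch, see assumptions)."""
    rate = BASE_INPUT
    if fresh_tokens + cached_tokens > LONG_CONTEXT_LIMIT:
        rate *= LONG_CONTEXT_MULT          # long-context surcharge on fresh input
    cost = fresh_tokens / 1e6 * rate + cached_tokens / 1e6 * CACHED_INPUT
    return cost * (BATCH_DISCOUNT if batch else 1.0)

# 100K fresh tokens at the standard rate:
print(round(input_cost(100_000), 2))              # 0.5
# 400K fresh tokens trips the long-context 2x multiplier:
print(round(input_cost(400_000), 2))              # 4.0
# Same request through the batch API:
print(round(input_cost(400_000, batch=True), 2))  # 2.0
```

The same pattern extends to output tokens with the 1.5x long-context output multiplier noted in the table.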
Benchmarks

Parenthesized tags give the reporting source (the model vendor or Artificial Analysis).

| Benchmark | GPT-5.5 | Claude Opus 4.7 |
|---|---|---|
| MMLU | 92.4 (openai) | 84.7 (anthropic) |
| GPQA Diamond | 93.6 (openai) | 94.2 (anthropic) |
| ARC-AGI-2 | 85 (openai) | n/a |
| Terminal-Bench 2.0 | 82.7 (openai) | n/a |
| SWE-bench Pro | 58.6 (openai) | 64.3 (anthropic) |
| OSWorld-Verified | 78.7 (openai) | 78 (anthropic) |
| BrowseComp | 84.4 (openai) | 79.3 (anthropic) |
| MMMU-Pro | 81.2 (openai) | n/a |
| FrontierMath Tier 4 | 35.4 (openai) | n/a |
| HLE (with tools) | 52.2 (openai) | n/a |
| GDPval | 84.9 (openai) | n/a |
| Toolathlon | 55.6 (openai) | n/a |
| CyberGym | 81.8 (openai) | 73.1 (anthropic) |
| MRCR v2 512K-1M | 74 (openai) | n/a |
| MMLU-Pro | n/a | 78.1 (anthropic) |
| MMMLU | n/a | 92 (anthropic) |
| HLE | n/a | 54.7 (artificial-analysis) |
| SWE-bench Verified | n/a | 87.6 (anthropic) |
| SWE-bench Multilingual+Multimodal | n/a | 80.5 (anthropic) |
| Terminal-Bench | n/a | 69.4 (anthropic) |
| MCP-Atlas | n/a | 77.3 (anthropic) |
| Berkeley Function Calling | n/a | 77.3 (anthropic) |
| CharXiv-R | n/a | 91 (anthropic) |
| DocVQA | n/a | 93.1 (anthropic) |
| GDPVal-AA Elo | n/a | 1753 (artificial-analysis) |

Cost Calculator

The totals below assume a monthly usage of 1M input tokens and 0.5M output tokens, the usage implied by the listed per-token prices.

| Model | Input | Output | Total / mo | vs Best |
|---|---|---|---|---|
| Claude Opus 4.7 (cheapest) | $5.00 | $12.50 | $17.50 | |
| GPT-5.5 | $5.00 | $15.00 | $20.00 | +14% |
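The calculator's totals follow directly from the listed per-token prices. A short Python sketch reproducing the table above at the assumed usage of 1M input and 0.5M output tokens per month (function and dict names are illustrative):

```python
# Reproduce the cost-calculator totals from the listed per-M-token prices.

PRICES = {                        # model: ($ per M input, $ per M output)
    "GPT-5.5": (5.00, 30.00),
    "Claude Opus 4.7": (5.00, 25.00),
}

def monthly_cost(model: str, input_m: float, output_m: float) -> float:
    """Dollar cost for input_m million input and output_m million output tokens."""
    in_rate, out_rate = PRICES[model]
    return input_m * in_rate + output_m * out_rate

gpt = monthly_cost("GPT-5.5", 1.0, 0.5)           # 20.0
opus = monthly_cost("Claude Opus 4.7", 1.0, 0.5)  # 17.5
print(f"GPT-5.5 vs cheapest: {gpt / opus - 1:+.0%}")  # prints "GPT-5.5 vs cheapest: +14%"
```

The +14% in the table is this ratio ($20.00 / $17.50) rounded to the nearest percent.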
