Sonar Reasoning Pro vs Claude Opus 4.7
Side-by-side specs, pricing & capabilities · Updated May 2026
| | Sonar Reasoning Pro | Claude Opus 4.7 |
|---|---|---|
| Organization | Perplexity AI | Anthropic |
| OpenTools Score | 44 | 71 |
| Family | Sonar | Claude |
| Status | Current | Current |
| Release Date | Mar 2025 | Apr 2026 |
| Context Window | 128K tokens | 1.0M tokens |
| Input Price | $2.00/M tokens | $5.00/M tokens |
| Output Price | $8.00/M tokens | $25.00/M tokens |
| Pricing Notes | — | Cache read: $0.50/M tokens |
| Capabilities | text, vision, code, extended-thinking | text, vision, code, tool-use |
| Max Output | — | 128K tokens |
| API Identifier | perplexity/sonar-reasoning-pro | anthropic/claude-opus-4.7 |
| Benchmarks | | |
| GPQA Diamond | 62.3 (Perplexity) | 94.2 (Anthropic) |
| MATH-500 | 92.1 (Perplexity) | — |
| AIME | 77 (Perplexity) | — |
| MMLU-Pro | 79 (Perplexity) | 78.1 (Anthropic) |
| MMLU | — | 84.7 (Anthropic) |
| MMMLU | — | 92 (Anthropic) |
| HLE | — | 54.7 (Artificial Analysis) |
| SWE-bench Verified | — | 87.6 (Anthropic) |
| SWE-bench Pro | — | 64.3 (Anthropic) |
| SWE-bench Multilingual+Multimodal | — | 80.5 (Anthropic) |
| Terminal-Bench | — | 69.4 (Anthropic) |
| MCP-Atlas | — | 77.3 (Anthropic) |
| Berkeley Function Calling | — | 77.3 (Anthropic) |
| OSWorld-Verified | — | 78 (Anthropic) |
| BrowseComp | — | 79.3 (Anthropic) |
| CharXiv-R | — | 91 (Anthropic) |
| DocVQA | — | 93.1 (Anthropic) |
| CyberGym | — | 73.1 (Anthropic) |
| GDPVal-AA Elo | — | 1753 (Artificial Analysis) |
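The API identifiers above follow the `org/model` convention used by OpenAI-compatible gateways such as OpenRouter. As a minimal sketch, assuming a gateway at that style of endpoint accepts these exact identifiers (both the base URL and the models' availability there are assumptions, not something this page confirms), a request to either model could look like:

```python
# Minimal sketch: querying both models through an OpenAI-compatible
# gateway. The base URL is an assumed OpenRouter-style endpoint; the
# model identifiers are the ones listed in the spec table above.
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",  # assumed gateway endpoint
    api_key="YOUR_API_KEY",
)

for model_id in ("perplexity/sonar-reasoning-pro", "anthropic/claude-opus-4.7"):
    response = client.chat.completions.create(
        model=model_id,
        messages=[{"role": "user", "content": "Summarize the CAP theorem in two sentences."}],
        max_tokens=256,
    )
    print(model_id, "->", response.choices[0].message.content)
```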
Cost Calculator
The costs below assume a monthly usage of 1M input tokens and 0.5M output tokens.
| Model | Input | Output | Total / mo | vs Best |
|---|---|---|---|---|
| Sonar Reasoning Pro (cheapest) | $2.00 | $4.00 | $6.00 | — |
| Claude Opus 4.7 | $5.00 | $12.50 | $17.50 | +192% |
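The monthly figures follow directly from the per-token rates in the spec table. A minimal sketch reproducing them, where the 1M-input / 0.5M-output usage profile is the assumption stated above:

```python
# Sketch reproducing the cost table. Rates are the $/M-token prices from
# the spec table; the usage profile (1M input, 0.5M output per month) is
# the assumed example, not a recommendation.
RATES = {
    "Sonar Reasoning Pro": {"input": 2.00, "output": 8.00},
    "Claude Opus 4.7": {"input": 5.00, "output": 25.00},
}

def monthly_cost(model: str, input_m: float, output_m: float) -> float:
    """Monthly cost in USD given token volumes in millions."""
    r = RATES[model]
    return input_m * r["input"] + output_m * r["output"]

costs = {m: monthly_cost(m, input_m=1.0, output_m=0.5) for m in RATES}
best = min(costs.values())
for model, cost in costs.items():
    delta = (cost / best - 1) * 100
    print(f"{model}: ${cost:.2f}/mo" + (f" (+{delta:.0f}% vs best)" if delta else ""))
# -> Sonar Reasoning Pro: $6.00/mo
# -> Claude Opus 4.7: $17.50/mo (+192% vs best)
```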
Perplexity AI
Sonar Reasoning Pro
Sonar Reasoning Pro is a multimodal LLM from Perplexity AI. It supports a context window of up to 128,000 tokens, achieves 83.0% on MMLU, and is available from $2.00/M input tokens.
Anthropic
Claude Opus 4.7
Claude Opus 4.7 is Anthropic's most capable generally available model, with significant improvements in advanced software engineering, agentic tool use, and vision resolution. It achieves 87.6% on SWE-bench Verified and 94.2% on GPQA Diamond, and supports a context window of up to 1,000,000 tokens with 3.3x higher-resolution vision than Opus 4.6.