LLM Comparison
Claude Opus 4.6 vs Sonar Reasoning Pro
Side-by-side specs, pricing & capabilities · Updated May 2026
Price vs Intelligence
| | Claude Opus 4.6 | Sonar Reasoning Pro |
|---|---|---|
| Organization | Anthropic | Perplexity AI |
| OpenTools Score | 63 · 4.2 | 44 · 8.8 |
| Family | Claude | Sonar |
| Status | Current | Current |
| Release Date | Feb 2026 | Mar 2025 |
| Context Window | 1.0M tokens | 128K tokens |
| Input Price | $5.00/M tokens | $2.00/M tokens |
| Output Price | $25.00/M tokens | $8.00/M tokens |
| Pricing Notes | Cache read: $0.50/M tokens | — |
| Capabilities | text, vision, code, tool-use | text, vision, code, extended-thinking |
| Max Output | 128K tokens | — |
| API Identifier | anthropic/claude-opus-4.6 | perplexity/sonar-reasoning-pro |
| Benchmarks | | |
| MMLU | 89.8 (anthropic) | — |
| GPQA Diamond | 91.3 (anthropic) | 62.3 (perplexity) |
| SWE-bench Verified | 80.8 (anthropic) | — |
| SWE-bench Pro | 53.4 (anthropic) | — |
| Terminal-Bench 2.0 | 65.4 (anthropic) | — |
| MATH-500 | 80.7 (anthropic) | 92.1 (perplexity) |
| LiveCodeBench | 55.9 (anthropic) | — |
| Berkeley Function Calling | 91.9 (anthropic) | — |
| HLE | 48 (anthropic) | — |
| MCP-Atlas | 75.8 (anthropic) | — |
| BrowseComp | 83.7 (anthropic) | — |
| OSWorld-Verified | 72.7 (anthropic) | — |
| DocVQA | 91.3 (anthropic) | — |
| GPQA-AA Elo | 1619 (artificial-analysis) | — |
| AIME | — | 77 (perplexity) |
| MMLU Pro | — | 79 (perplexity) |
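The API identifiers in the table are the model strings you would pass to an OpenAI-compatible chat endpoint. A minimal sketch of building such a request — the endpoint URL in the comment and the helper name are assumptions for illustration, not something stated on this page:

```python
import json

# Model strings taken from the comparison table above.
OPUS = "anthropic/claude-opus-4.6"
SONAR = "perplexity/sonar-reasoning-pro"

def build_chat_request(model: str, prompt: str, max_tokens: int = 1024) -> dict:
    """Build an OpenAI-style chat-completion payload for the given model."""
    return {
        "model": model,
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_chat_request(OPUS, "Summarize GPQA Diamond in one sentence.")
print(json.dumps(payload, indent=2))

# To actually send it you would POST the payload to your provider's
# chat-completions endpoint with an API key, e.g. (assumed URL):
# requests.post("https://openrouter.ai/api/v1/chat/completions",
#               headers={"Authorization": f"Bearer {API_KEY}"}, json=payload)
```

Swapping `OPUS` for `SONAR` is the only change needed to target the other model, which is the point of router-style identifiers.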
Cost Calculator
The table below compares estimated monthly costs for a given token usage.
| Model | Input | Output | Total / mo | vs Best |
|---|---|---|---|---|
| Sonar Reasoning Pro (cheapest) | $2.00 | $4.00 | $6.00 | — |
| Claude Opus 4.6 | $5.00 | $12.50 | $17.50 | +192% |
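The totals above are just per-million-token prices multiplied by usage; working backwards, the figures correspond to roughly 1.0M input and 0.5M output tokens per month (inferred from the table, not stated on the page). A minimal sketch of the same arithmetic:

```python
def monthly_cost(input_mtok: float, output_mtok: float,
                 in_price: float, out_price: float) -> float:
    """Monthly cost in dollars; usage in millions of tokens, prices in $/M tokens."""
    return input_mtok * in_price + output_mtok * out_price

# Usage that reproduces the table: 1.0M input tokens, 0.5M output tokens.
sonar = monthly_cost(1.0, 0.5, 2.00, 8.00)    # 2.00 + 4.00 = $6.00
opus = monthly_cost(1.0, 0.5, 5.00, 25.00)    # 5.00 + 12.50 = $17.50

# Premium of the pricier model over the cheapest, as in the "vs Best" column.
premium_pct = round((opus / sonar - 1) * 100)
print(f"Sonar: ${sonar:.2f}  Opus: ${opus:.2f}  (+{premium_pct}%)")
# → Sonar: $6.00  Opus: $17.50  (+192%)
```

Note that output tokens dominate the gap here: Opus 4.6's $25/M output rate is more than triple Sonar's $8/M.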
Anthropic
Claude Opus 4.6
Claude Opus 4.6 is a multimodal LLM from Anthropic with adaptive reasoning capabilities. It achieves 91.3% on GPQA Diamond and 80.8% on SWE-bench Verified, and supports a context window of up to 1,000,000 tokens. Pricing starts at $5.00/M input tokens.
Perplexity AI
Sonar Reasoning Pro
Sonar Reasoning Pro is a multimodal LLM from Perplexity AI. It supports a context window of up to 128,000 tokens and scores 83.0% on MMLU. Pricing starts at $2.00/M input tokens.