LLM Comparison
Claude Opus 4.6 vs Qwen3.5-27B
Side-by-side specs, pricing & capabilities · Updated May 2026
[Chart: Price vs Intelligence]
| | Claude Opus 4.6 | Qwen3.5-27B |
|---|---|---|
| Organization | Anthropic | Alibaba |
| OpenTools Score | 63 4.2 | 49 55.8 |
| Family | Claude | Qwen |
| Status | Current | Current |
| Release Date | Feb 2026 | Feb 2026 |
| Context Window | 1.0M tokens | 262K tokens |
| Input Price | $5.00/M tokens | $0.20/M tokens |
| Output Price | $25.00/M tokens | $1.56/M tokens |
| Pricing Notes | Cache read: $0.50/M tokens | — |
| Capabilities | text, vision, code, tool use | text, vision, video, code |
| Max Output | 128K tokens | 66K tokens |
| API Identifier | anthropic/claude-opus-4.6 | qwen/qwen3.5-27b |
| **Benchmarks** | | |
| MMLU | 89.8 (Anthropic) | — |
| GPQA Diamond | 91.3 (Anthropic) | 85.5 (Alibaba) |
| SWE-bench Verified | 80.8 (Anthropic) | 72.4 (Alibaba) |
| SWE-bench Pro | 53.4 (Anthropic) | — |
| Terminal-Bench 2.0 | 65.4 (Anthropic) | — |
| MATH 500 | 80.7 (Anthropic) | — |
| LiveCodeBench | 55.9 (Anthropic) | — |
| Berkeley Function Calling | 91.9 (Anthropic) | — |
| HLE | 48 (Anthropic) | — |
| MCP-Atlas | 75.8 (Anthropic) | — |
| BrowseComp | 83.7 (Anthropic) | — |
| OSWorld-Verified | 72.7 (Anthropic) | — |
| DocVQA | 91.3 (Anthropic) | — |
| GPQA-AA Elo | 1619 (Artificial Analysis) | — |
| MMLU-Pro | — | 86.1 (Alibaba) |
| MathVision | — | 86 (Alibaba) |
| TAU2-Bench | — | 79 (Alibaba) |
| IFEval | — | 91.5 (Alibaba) |
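The API identifiers in the table follow the `provider/model` convention used by gateway-style APIs. As a minimal sketch (the request shape and helper are our assumption, not something documented on this page), a request builder can also enforce each model's Max Output cap from the table:

```python
# Max Output limits from the spec table (assumed exact; pages often round).
MAX_OUTPUT = {
    "anthropic/claude-opus-4.6": 128_000,
    "qwen/qwen3.5-27b": 66_000,
}

def build_request(model_id: str, prompt: str, max_tokens: int) -> dict:
    """Assemble a chat-style payload, clamping max_tokens to the model's cap."""
    cap = MAX_OUTPUT[model_id]
    return {
        "model": model_id,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": min(max_tokens, cap),
    }
```

For example, requesting 200K output tokens from Qwen3.5-27B would be clamped to its 66K cap, while the same request to Claude Opus 4.6 would be clamped to 128K.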
Cost Calculator
Enter your expected monthly token usage to compare costs. The figures below assume 1M input and 0.5M output tokens per month.
| Model | Input | Output | Total / mo | vs Best |
|---|---|---|---|---|
| Qwen3.5-27B (Cheapest) | $0.20 | $0.78 | $0.98 | — |
| Claude Opus 4.6 | $5.00 | $12.50 | $17.50 | +1686% |
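The cost table can be reproduced with a few lines of arithmetic. This is a sketch using the per-million-token prices from the spec table; the 1M-input / 0.5M-output monthly budget is inferred from the table's figures, and the helper name is ours:

```python
# USD per million tokens, from the spec table above.
PRICES = {
    "qwen/qwen3.5-27b": {"input": 0.20, "output": 1.56},
    "anthropic/claude-opus-4.6": {"input": 5.00, "output": 25.00},
}

def monthly_cost(model_id: str, input_tokens: int, output_tokens: int) -> float:
    """Total monthly cost in USD for a given token budget."""
    p = PRICES[model_id]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Assumed budget: 1M input + 0.5M output tokens per month.
qwen = monthly_cost("qwen/qwen3.5-27b", 1_000_000, 500_000)           # ≈ $0.98
opus = monthly_cost("anthropic/claude-opus-4.6", 1_000_000, 500_000)  # ≈ $17.50
premium = (opus - qwen) / qwen * 100  # percentage premium over the cheapest model
```

Note that the premium scales with the output share of the budget, since the output-price gap ($25.00 vs $1.56) is wider than the input-price gap.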
Anthropic
Claude Opus 4.6
Claude Opus 4.6 is a multimodal LLM from Anthropic with adaptive reasoning capabilities. Achieves 91.3% on GPQA Diamond and 80.8% on SWE-bench Verified. Supports up to 1,000,000 token context window. Available from $5.00/M input tokens.
Alibaba
Qwen3.5-27B
Qwen3.5-27B is a multimodal LLM from Alibaba. Supports up to 262,144 token context window. Achieves 87.5% on MMLU. Available from $0.20/M input tokens.