LLM Comparison
GPT-5.5 vs Ministral 3 14B 2512
Side-by-side specs, pricing & capabilities · Updated April 2026
| Spec | GPT-5.5 | Ministral 3 14B 2512 |
|---|---|---|
| Organization | OpenAI | Mistral AI |
| OpenTools Score | 90 | 5.1 |
| Family | GPT | Ministral |
| Status | Current | Current |
| Release Date | Apr 2026 | Dec 2025 |
| Context Window | 1.1M tokens | 262K tokens |
| Input Price | $5.00/M tokens | $0.20/M tokens |
| Output Price | $30.00/M tokens | $0.20/M tokens |
| Pricing Notes | Cached input: $0.50/M tokens. Long context (>272K tokens): 2x input, 1.5x output. Batch API: 50% discount. Priority: 2.5x standard. | Cache read: $0.02/M tokens |
| Capabilities | text, vision, code, tool-use, extended-thinking, computer-use, web-search | text, vision, code, tool-use |
| Training Cutoff | December 2025 | — |
| Max Output | 128K tokens | — |
| API Identifier | openai/gpt-5.5 | mistralai/ministral-14b-2512 |
| Benchmarks | | |
| MMLU | 92.4 (openai) | — |
| GPQA Diamond | 93.6 (openai) | — |
| ARC-AGI-2 | 85 (openai) | — |
| Terminal-Bench 2.0 | 82.7 (openai) | — |
| SWE-bench Pro | 58.6 (openai) | — |
| OSWorld-Verified | 78.7 (openai) | — |
| BrowseComp | 84.4 (openai) | — |
| MMMU-Pro | 81.2 (openai) | — |
| FrontierMath Tier 4 | 35.4 (openai) | — |
| HLE (with tools) | 52.2 (openai) | — |
| GDPval | 84.9 (openai) | — |
| Toolathlon | 55.6 (openai) | — |
| CyberGym | 81.8 (openai) | — |
| MRCR v2 512K-1M | 74 (openai) | — |
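The pricing notes above combine several modifiers: a cached-input rate, a long-context multiplier past 272K tokens, and batch/priority tiers. A minimal sketch of how GPT-5.5 input cost could be computed from those listed rates (the function name is illustrative, and applying the long-context multiplier to the whole request, cached tokens included, is an assumption not stated on this page):

```python
def gpt55_input_cost(tokens, cached_tokens=0, context_len=0):
    """Estimate GPT-5.5 input cost in dollars from the listed rates:
    $5.00/M standard, $0.50/M cached, 2x input past 272K context."""
    base_rate, cached_rate = 5.00, 0.50          # $/M tokens
    multiplier = 2.0 if context_len > 272_000 else 1.0
    fresh_tokens = tokens - cached_tokens
    cost = (fresh_tokens * base_rate + cached_tokens * cached_rate) / 1_000_000
    return cost * multiplier

# 200K input tokens, half served from cache, short context:
# (100K * $5 + 100K * $0.50) / 1M = $0.55
print(gpt55_input_cost(200_000, cached_tokens=100_000))
```

Under this reading, cache hits cut input cost by 90%, which matters far more than the batch discount for prompt-heavy agentic workloads.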
Cost Calculator
Enter your expected monthly token usage to compare costs.
| Model | Input | Output | Total / mo | vs Best |
|---|---|---|---|---|
| Ministral 3 14B 2512 (cheapest) | $0.20 | $0.10 | $0.30 | — |
| GPT-5.5 | $5.00 | $15.00 | $20.00 | +6567% |
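The table's figures follow directly from the per-million-token rates. A short sketch that reproduces them, assuming an example volume of 1M input and 0.5M output tokens per month (the volume is an assumption; the rates are from the spec table above):

```python
def monthly_cost(input_m, output_m, in_rate, out_rate):
    """Monthly cost in dollars, given token volumes in millions
    and rates in $/M tokens."""
    return input_m * in_rate + output_m * out_rate

# Assumed usage: 1M input + 0.5M output tokens per month.
costs = {
    "GPT-5.5": monthly_cost(1.0, 0.5, 5.00, 30.00),
    "Ministral 3 14B 2512": monthly_cost(1.0, 0.5, 0.20, 0.20),
}
cheapest = min(costs.values())
for model, cost in costs.items():
    vs_best = (cost / cheapest - 1) * 100
    print(f"{model}: ${cost:.2f}/mo (+{vs_best:.0f}% vs cheapest)")
```

With those volumes, GPT-5.5 comes to $20.00/mo against Ministral's $0.30/mo, i.e. the table's +6567%; output tokens dominate GPT-5.5's bill because its output rate is 6x its input rate.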
OpenAI
GPT-5.5
GPT-5.5 is OpenAI's smartest and most intuitive model, built for agentic work such as coding, research, and data analysis. It matches GPT-5.4's per-token latency while delivering higher intelligence with significantly fewer tokens. It supports a 1,050,000-token context window and five reasoning-effort levels (none through xhigh).
Mistral AI
Ministral 3 14B 2512
Ministral 3 14B 2512 is a multimodal LLM from Mistral AI. It supports a context window of up to 262,144 tokens and is available from $0.20/M input tokens.