LLM Comparison
DeepSeek V3.2 vs MiMo-V2-Flash
Side-by-side specs, pricing & capabilities · Updated April 2026
| | DeepSeek V3.2 | MiMo-V2-Flash |
|---|---|---|
| Organization | DeepSeek | Xiaomi |
| OpenTools Score | 64 | 189 |
| Family | DeepSeek | MiMo |
| Status | Current | Current |
| Release Date | Dec 2025 | Dec 2025 |
| Context Window | 164K tokens | 262K tokens |
| Input Price | $0.26/M tokens | $0.09/M tokens |
| Output Price | $0.42/M tokens | $0.29/M tokens |
| Pricing Notes | Cache read: $0.1350/M tokens | Cache read: $0.0450/M tokens |
| Capabilities | text, code | text, code |
| Max Output | 164K tokens | 66K tokens |
| API Identifier | deepseek/deepseek-v3.2 | xiaomi/mimo-v2-flash |
| Benchmarks | | |
| MMLU | 87.1 | — |
| HumanEval | 89.2 | — |
| MATH | 90.2 | — |
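The Pricing Notes row lists discounted cache-read rates for both models. A minimal sketch of how those rates blend into an effective input price, assuming a hypothetical 60% cache-hit rate (the hit rate is an illustration, not a published figure):

```python
def effective_input_price(base, cache_read, hit_rate):
    """Blended $/M input tokens when a fraction of tokens is served from cache."""
    return cache_read * hit_rate + base * (1 - hit_rate)

# Rates from the table above; 0.6 hit rate is an assumed example.
deepseek = effective_input_price(0.26, 0.135, 0.6)  # 0.081 + 0.104 = 0.185
mimo = effective_input_price(0.09, 0.045, 0.6)      # 0.027 + 0.036 = 0.063
```

With heavy prompt reuse, both models' effective input price drops well below the headline rate, though MiMo-V2-Flash stays cheaper at every hit rate.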
Cost Calculator
Enter your expected monthly token usage to compare costs. The figures below assume 1M input and 0.5M output tokens per month.
| Model | Input | Output | Total / mo | vs Best |
|---|---|---|---|---|
| MiMo-V2-Flash (Cheapest) | $0.09 | $0.15 | $0.24 | — |
| DeepSeek V3.2 | $0.26 | $0.21 | $0.47 | +100% |
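The calculator's arithmetic can be sketched directly from the per-token prices, assuming the 1M input / 0.5M output monthly usage implied by the table (model names and prices are from this page; the helper function is illustrative):

```python
# $/M tokens, from the spec table above.
PRICES = {
    "DeepSeek V3.2": {"input": 0.26, "output": 0.42},
    "MiMo-V2-Flash": {"input": 0.09, "output": 0.29},
}

def monthly_cost(model, input_mtok, output_mtok):
    """Monthly cost in dollars for a given usage in millions of tokens."""
    p = PRICES[model]
    return p["input"] * input_mtok + p["output"] * output_mtok

deepseek = monthly_cost("DeepSeek V3.2", 1.0, 0.5)  # 0.26 + 0.21  = 0.47
mimo = monthly_cost("MiMo-V2-Flash", 1.0, 0.5)      # 0.09 + 0.145 = 0.235
```

Note that MiMo's unrounded total is $0.235 (displayed as $0.24), which is why DeepSeek's $0.47 shows as exactly +100% rather than +96%.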
DeepSeek
DeepSeek V3.2
DeepSeek V3.2 is a large language model from DeepSeek with a 163,840-token context window. It achieves 87.1% on MMLU and is available from $0.26/M input tokens.
Xiaomi
MiMo-V2-Flash
MiMo-V2-Flash is a large language model from Xiaomi with a 262,144-token context window. It is available from $0.09/M input tokens.