LLM Comparison
MiMo-V2-Flash vs Command R (08-2024)
Side-by-side specs, pricing & capabilities · Updated April 2026
| | MiMo-V2-Flash | Command R (08-2024) |
|---|---|---|
| Organization | Xiaomi | Cohere |
| OpenTools Score | 39 | 104 |
| Family | MiMo | Command |
| Status | Current | Current |
| Release Date | Dec 2025 | Aug 2024 |
| Context Window | 262K tokens | 128K tokens |
| Input Price | $0.09/M tokens | $0.15/M tokens |
| Output Price | $0.29/M tokens | $0.60/M tokens |
| Pricing Notes | Cache read: $0.0450/M tokens | — |
| Capabilities | text, code | text, code |
| Max Output | 66K tokens | 4K tokens |
| API Identifier | xiaomi/mimo-v2-flash | cohere/command-r-08-2024 |
| Benchmarks | | |
| MMLU | — | 68.4 (OpenRouter) |
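The pricing rows above can be combined into a simple per-request cost estimate. The sketch below uses the table's per-million-token rates; the cache-read rate ($0.0450/M) applies only to MiMo-V2-Flash, and the split between cached and fresh input tokens is a hypothetical example, not a figure from the page.

```python
# Per-request cost sketch using the rates from the spec table above.
# The 80K cached / 20K fresh split is a hypothetical example.

def request_cost(input_tokens, output_tokens, cached_tokens=0,
                 input_rate=0.09, output_rate=0.29, cache_rate=0.045):
    """Return USD cost for one request; rates are $/million tokens."""
    fresh = input_tokens - cached_tokens
    return (fresh * input_rate
            + cached_tokens * cache_rate
            + output_tokens * output_rate) / 1_000_000

# Example: 100K-token prompt, 80K of it read from cache, 2K-token output.
cost = request_cost(100_000, 2_000, cached_tokens=80_000)
```

Because cached input is billed at half the normal input rate for MiMo-V2-Flash, prompt-heavy workloads that reuse long prefixes see the largest savings.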
Cost Calculator
Enter your expected monthly token usage to compare costs.
| Model | Input | Output | Total / mo | vs Best |
|---|---|---|---|---|
| MiMo-V2-Flash (Cheapest) | $0.09 | $0.15 | $0.24 | — |
| Command R (08-2024) | $0.15 | $0.30 | $0.45 | +91% |
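The calculator's arithmetic is straightforward to reproduce. The token volumes below (1M input, 0.5M output per month) are an assumption that happens to reproduce the table's totals and the +91% figure; the page lets you enter your own usage.

```python
# Monthly cost comparison; the 1M input / 0.5M output monthly volumes are
# an assumption chosen to match the table above, not figures from the page.

def monthly_cost(input_m, output_m, input_rate, output_rate):
    """Token volumes in millions of tokens; rates in $/million tokens."""
    return input_m * input_rate + output_m * output_rate

mimo = monthly_cost(1.0, 0.5, 0.09, 0.29)       # $0.235/mo, shown as $0.24
command_r = monthly_cost(1.0, 0.5, 0.15, 0.60)  # $0.45/mo
premium = (command_r / mimo - 1) * 100          # vs Best: about +91%
```

Note the "vs Best" column is computed from the unrounded totals, which is why $0.45 versus a displayed $0.24 shows as +91% rather than +88%.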
Xiaomi
MiMo-V2-Flash
MiMo-V2-Flash is a large language model from Xiaomi. It supports a context window of up to 262,144 tokens and is available from $0.09/M input tokens.
Cohere
Command R (08-2024)
Command R (08-2024) is a large language model from Cohere. It supports a context window of up to 128,000 tokens, achieves 68.4% on MMLU, and is available from $0.15/M input tokens.