Mistral AI
French AI lab building efficient open-weight LLMs and multimodal models.
Updated April 2026
Overview
- Website: mistral.ai
- Founded: 2023
- Headquarters: Paris, France
- Segment: Frontier Foundation Model Labs
- Posture: Open-Weight Frontier
Product overview
Mistral AI develops high-performance large language models, including Mistral Large 3 (a 675B-parameter mixture-of-experts model), Mistral Medium, the Ministral edge models, Codestral for code, and the multimodal Pixtral line; many are released as open weights under the Apache 2.0 license. Enterprises such as ASML, Stellantis, CMA CGM, TotalEnergies, and HSBC, along with European governments, use the models via API or in self-hosted deployments for chatbots, coding assistants, agents, and workflows that require data sovereignty. Mistral differentiates itself from labs like OpenAI and Anthropic through its open-weight focus, efficiency from sparse mixture-of-experts architectures, on-premises deployment options, and European, GDPR-compliant data sovereignty.
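The API access path described above follows the familiar chat-completions pattern. A minimal sketch in Python of building such a request; the endpoint URL and the `mistral-large-latest` model alias are assumptions based on Mistral's public documentation and may change:

```python
import json

# Assumed endpoint from Mistral's public docs; verify before use.
API_URL = "https://api.mistral.ai/v1/chat/completions"

def build_chat_request(api_key: str, prompt: str,
                       model: str = "mistral-large-latest"):
    """Return (headers, payload) for a chat completion call."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return headers, payload

headers, payload = build_chat_request("YOUR_API_KEY", "Summarize GDPR in one line.")
print(json.dumps(payload, indent=2))
# Sending is one call with any HTTP client, e.g.:
# requests.post(API_URL, headers=headers, json=payload, timeout=30)
```

The same request shape works against a self-hosted open-weight deployment by pointing the URL at the internal endpoint, which is what makes the API and sovereign on-prem paths interchangeable for application code.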
Revenue model
API usage-based pricing per million tokens (e.g., Mistral Medium 3 at $0.40 input / $2.00 output; Mistral Large at $0.50/$1.50); Le Chat Pro subscriptions at roughly $15-30/month; enterprise licensing and subscriptions for on-premises and private deployments; and custom model training and services.
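The per-token economics above reduce to simple arithmetic. A sketch using the Medium 3 rates quoted here ($0.40 in / $2.00 out per million tokens); the request sizes are illustrative:

```python
def api_cost(input_tokens: int, output_tokens: int,
             in_price_per_m: float, out_price_per_m: float) -> float:
    """Cost in USD for one request at per-million-token rates."""
    return (input_tokens / 1_000_000) * in_price_per_m \
         + (output_tokens / 1_000_000) * out_price_per_m

# One chatbot turn: 3,000 prompt tokens in, 500 tokens out at Medium 3 rates.
cost = api_cost(3_000, 500, 0.40, 2.00)
print(f"${cost:.4f}")  # $0.0022
```

Output tokens dominate the bill at these rates (5x the input price), which is why summarization-style workloads with long prompts and short answers are disproportionately cheap.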
Moat
Mistral AI's key competitive moat is its pioneering focus on open-weight, highly efficient large language models that deliver frontier-level performance at significantly lower cost (e.g., Mistral Medium 3 at 8x lower pricing than competitors like Claude Sonnet 4, enabled by sparse mixture-of-experts architectures). Open weights and permissive licenses let enterprises self-host, fine-tune proprietary variants, and retain data sovereignty without vendor lock-in; the resulting investment in customization and operational integration, reinforced by large context windows (up to 128k tokens) and strong multilingual capabilities, makes deployments sticky once adopted.
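The efficiency claim rests on sparse mixture-of-experts routing: each token activates only a few of many expert networks, so compute scales with active rather than total parameters. A minimal top-k routing sketch in NumPy, with illustrative shapes and gating (not Mistral's actual implementation):

```python
import numpy as np

def moe_layer(x, gate_w, experts, k=2):
    """Route one token through the top-k of len(experts) expert MLPs.

    x: (d,) token activation; gate_w: (d, n_experts) router weights;
    experts: list of (d, d) matrices standing in for full expert MLPs.
    """
    logits = x @ gate_w                   # router score per expert
    top_k = np.argsort(logits)[-k:]       # indices of the k highest-scoring experts
    weights = np.exp(logits[top_k])
    weights /= weights.sum()              # softmax over the selected experts only
    # Only k experts run, so per-token compute is ~ k/n_experts of a dense layer
    # with the same total parameter count.
    out = sum(w * (x @ experts[i]) for w, i in zip(weights, top_k))
    return out, top_k

rng = np.random.default_rng(0)
d, n_experts = 8, 16
x = rng.normal(size=d)
gate_w = rng.normal(size=(d, n_experts))
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]
out, used = moe_layer(x, gate_w, experts, k=2)
print(out.shape, used)  # (8,) plus the 2 expert indices actually used
```

With 2 of 16 experts active, this toy layer does roughly an eighth of the dense compute while keeping all 16 experts' capacity available, which is the mechanism behind serving a 675B-parameter model at commodity-tier prices.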