Meta AI
Develops open-weight Llama large language models and powers AI assistants across Meta's apps.
Updated April 2026
Overview
- Website: ai.meta.com
- Founded: 2013
- Headquarters: Menlo Park, CA
- Segment: Frontier Foundation Model Labs
- Posture: Open-Weight Frontier
Product overview
Meta AI builds the Llama family of open-weight multimodal models, including Llama 4 Scout and Maverick, which feature a mixture-of-experts architecture, context windows of up to 10M tokens, and pretraining across 200 languages. These models power the free Meta AI assistant in WhatsApp, Messenger, Instagram, Facebook, and meta.ai, reaching billions of consumers, and are downloaded by developers and enterprises for custom applications via llama.com and cloud platforms such as AWS Bedrock. Unlike closed-source labs such as OpenAI and Google, Meta releases full model weights openly, enabling community fine-tuning and broad accessibility at lower effective cost.
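The mixture-of-experts design mentioned above routes each token through only a small subset of specialist sub-networks ("experts"), which is how such models keep inference cost well below their total parameter count. A minimal top-k routing sketch follows; this is an illustrative toy, not Llama's actual implementation, and all function and variable names here are invented for the example.

```python
import numpy as np

def moe_forward(x, gate_w, experts, k=2):
    """Route a token vector x to its top-k experts by gate score.

    x:       (d,) token representation
    gate_w:  (d, n_experts) gating weights
    experts: list of callables, each mapping (d,) -> (d,)

    Only k experts run per token, so per-token compute scales
    with k rather than with the total number of experts.
    """
    logits = x @ gate_w                       # (n_experts,) gate scores
    top = np.argsort(logits)[-k:]             # indices of the k best experts
    weights = np.exp(logits[top])
    weights /= weights.sum()                  # softmax over selected experts only
    return sum(w * experts[i](x) for w, i in zip(weights, top))

# Toy usage: 4 experts, each a small linear map
rng = np.random.default_rng(0)
d, n_experts = 8, 4
gate_w = rng.normal(size=(d, n_experts))
experts = [(lambda W: (lambda v: v @ W))(rng.normal(size=(d, d)))
           for _ in range(n_experts)]
y = moe_forward(rng.normal(size=d), gate_w, experts, k=2)
print(y.shape)  # (8,)
```

The design trade-off this illustrates: total capacity grows with the number of experts, while the cost of each forward pass is fixed by k, which is why MoE models like Llama 4 can be large in parameters yet comparatively cheap to serve.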
Revenue model
- No direct revenue from models or the consumer assistant (free access)
- Indirect monetization through higher user engagement and ad performance on Meta platforms (AI-driven ad tools handle $60B+ in annualized spend)
- Revenue sharing with cloud providers hosting Llama, such as AWS
- Enterprise licensing for large-scale commercial use
Moat
Meta AI's core competitive moat is compute: ownership of some of the world's largest GPU clusters enables faster iteration on frontier models like Llama 4 (including the upcoming 2-trillion-parameter "Behemoth"), while its open-weights strategy has built the industry's largest developer ecosystem and commoditizes rivals' closed models. This is reinforced by proprietary data from billions of daily users across Meta's Family of Apps (Facebook, Instagram, WhatsApp), which fuels AI-driven ad tools like Advantage+ (a $60B run rate) and personalized experiences, creating high switching costs and scale advantages in the "industrialization of AI."