Anthropic
AI safety and research company building reliable, interpretable, and steerable AI systems such as Claude.
Updated April 2026
Overview
- Website
- anthropic.com
- Founded
- 2021
- Headquarters
- San Francisco, CA
- Segment
- Frontier Foundation Model Labs
- Posture
- Closed-Source Frontier
Product overview
Anthropic develops the Claude family of frontier large language models, including Opus 4.6, Sonnet 4.6, and Haiku 4.5, built around Constitutional AI to keep model behavior helpful, honest, and harmless. Customers including Lyft, Amazon (Alexa), the European Parliament, NASA, and 8 of the Fortune 10 use Claude for customer support, coding, analysis, and agents. Anthropic distinguishes itself from other labs through a safety-first approach: reinforcement learning from AI feedback (RLAIF) rather than RLHF alone, publicly released constitutions, and refusal of military use cases that lack safeguards.
Revenue model
API usage-based pricing per million tokens (e.g., Claude Sonnet 4.6: $3 input / $15 output); consumer subscriptions (Pro at $20/mo, Max from $100/mo); team plans ($20-125 per user/mo); and custom enterprise licensing.
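The usage-based pricing above reduces to a simple per-token formula. A minimal sketch of the arithmetic, assuming the Sonnet rates quoted above ($3/$15 per million tokens); the function name and request volumes are illustrative, not from any official SDK:

```python
def api_cost(input_tokens: int, output_tokens: int,
             input_rate: float = 3.0, output_rate: float = 15.0) -> float:
    """USD cost of one request, given rates in dollars per million tokens.

    Default rates are the Sonnet figures quoted in the text; pass other
    rates for other models or tiers.
    """
    return (input_tokens / 1_000_000) * input_rate \
         + (output_tokens / 1_000_000) * output_rate

# Illustrative workload: 10,000 requests/month averaging
# 2,000 input and 500 output tokens each.
monthly = 10_000 * api_cost(2_000, 500)
print(f"${monthly:,.2f}")  # 10,000 * ($0.0060 + $0.0075) = $135.00
```

At these assumed volumes, output tokens dominate the bill despite being a quarter of the traffic, which is why the 5x input/output rate gap matters when comparing usage-based pricing to the flat-rate subscription tiers.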
Moat
- Proprietary Data
- Scale Advantages
- Network Effects
- Regulatory Moat
Anthropic controls proprietary biological and domain-specific datasets that are difficult to replicate. Combined with an oligopolistic market position (roughly 40% enterprise LLM market share) and deep capital resources ($3.5B+ in funding), these assets create barriers to competition as general-purpose AI models commoditize.