Arcee AI
US-based AI lab building open-weight foundation models like the Trinity MoE family.
Updated April 2026
Overview
- Website: arcee.ai
- Founded: 2023
- Headquarters: Miami, Florida, USA
- Segment: Specialized & Emerging
Product overview
Arcee AI develops the Trinity family of open-weight Mixture-of-Experts (MoE) language models, spanning 6B to 400B parameters and optimized for agentic tasks, tool use, structured outputs, and long contexts. The models deploy on edge hardware, on-prem, or in the cloud. Developers use them through OpenClaw and Hermes Agent, while enterprises adopt them as U.S.-built alternatives to proprietary or Chinese models for data sovereignty and cost efficiency. Arcee's distinctions are efficiency per parameter, Apache 2.0 licensing without lock-in, and a $20M training budget versus the billions spent by larger labs.
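Because the weights are released under Apache 2.0, the models can be run locally. Below is a minimal self-hosting sketch using Hugging Face transformers; the repository id is a hypothetical placeholder, not a confirmed Arcee release name.

```python
# Illustrative self-hosting sketch with Hugging Face transformers.
# "arcee-ai/trinity-mini" is a hypothetical repo id, not a confirmed release.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "arcee-ai/trinity-mini"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

prompt = "List three use cases for tool-calling agents:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```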
Revenue model
Arcee monetizes in three ways: a hosted OpenAI-compatible API with pay-per-use pricing per million tokens (e.g., Trinity-Mini at $0.045 input / $0.15 output; Trinity-Large Preview at $0.25 input / $1.00 output); open weights for self-hosting; and enterprise contracts with dedicated support and custom deployments.
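Since the hosted API is OpenAI-compatible, the standard openai client should work against it. The sketch below assumes that compatibility; the base URL and model identifier are placeholders rather than documented values.

```python
# Illustrative call to an OpenAI-compatible endpoint using the official
# openai Python client. base_url and model are placeholders, not
# confirmed Arcee values -- take the real ones from Arcee's docs.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example.com/v1",  # hypothetical endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="trinity-mini",  # hypothetical model identifier
    messages=[{"role": "user", "content": "Summarize this clause in one sentence."}],
)
print(response.choices[0].message.content)
```

At the listed Trinity-Mini rates, a workload of 2M input and 0.5M output tokens would cost 2 × $0.045 + 0.5 × $0.15 = $0.165.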
Moat
Arcee AI's key competitive moat is its expertise in efficiently pre-training large, high-performance open-weight foundation models in the U.S., exemplified by the 400B-parameter Trinity family. A $20M total training budget, a fraction of Big Tech costs, is achievable through optimized architectures such as sparse MoE, which deliver strong performance per parameter and portability across edge, on-prem, and cloud without lock-in. The moat is reinforced by domain-specific adaptations (e.g., models trained on U.S. patents with 50% retrieval gains), end-to-end SLM platforms hosted in customer VPCs for data sovereignty, and a pivot from post-training services to owning the full model stack, positioning Arcee to capture developer and enterprise preference over Chinese and Big Tech alternatives.
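The performance-per-parameter argument comes down to sparse routing: only a few experts run for each token, so per-token compute scales with active parameters rather than total parameters. The toy calculation below illustrates this; every number in it is hypothetical and does not describe the actual Trinity architecture.

```python
# Back-of-the-envelope view of sparse MoE efficiency. All figures are
# hypothetical, chosen only to sum to a 400B-parameter total.
def active_fraction(num_experts: int, experts_per_token: int,
                    expert_params: float, shared_params: float) -> float:
    """Fraction of total parameters activated for each token."""
    total = shared_params + num_experts * expert_params
    active = shared_params + experts_per_token * expert_params
    return active / total

# 64 experts, 4 routed per token, 5B params per expert, 80B shared:
frac = active_fraction(num_experts=64, experts_per_token=4,
                       expert_params=5e9, shared_params=80e9)
print(f"{frac:.1%} of parameters active per token")  # prints "25.0% ..."
```

Under these assumed numbers, a 400B-parameter model computes with only ~100B active parameters per token, which is the core of the cost-efficiency claim.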
Headwinds
Small language models may fall short of larger foundation models on complex enterprise use cases.