The AI Stack

DeepSeek

Chinese AI company developing open-weight large language models like DeepSeek-V3 and R1.

Updated April 2026

Overview

Founded
2023
Headquarters
Hangzhou, China
Segment
Frontier Foundation Model Labs
Posture
Open-Weight Frontier

Product overview

DeepSeek builds advanced LLMs such as DeepSeek-V3 (general-purpose mixture-of-experts), DeepSeek-R1 (reasoning-focused), and the DeepSeek-Coder and DeepSeek-Math families, released open-weight under the MIT License for broad accessibility. These models power a free web and mobile chatbot that rivals GPT-4o and o1 on benchmarks and is used by millions of developers, researchers, and consumers worldwide. DeepSeek stands apart from proprietary frontier labs like OpenAI through extreme cost-efficiency (V3 reportedly trained for roughly $6M versus $100M+ for GPT-4), a mixture-of-experts (MoE) architecture that keeps inference costs low despite chip sanctions, and open-weight releases that enable self-hosting.
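The MoE design mentioned above activates only a small subset of expert networks per token (DeepSeek-V3 activates roughly 37B of its 671B parameters per token), which is why inference stays cheap relative to total parameter count. A toy sketch of top-k expert routing, with made-up dimensions and random weights purely for illustration:

```python
import numpy as np

def moe_forward(x, gate_w, experts, k=2):
    """Route one token through only its top-k experts (toy sketch).

    x       : (d,) token hidden state
    gate_w  : (d, n_experts) router weight matrix
    experts : list of (d, d) expert weight matrices
    """
    logits = x @ gate_w                      # router score per expert
    top = np.argsort(logits)[-k:]            # indices of the k best experts
    weights = np.exp(logits[top])
    weights /= weights.sum()                 # softmax over selected experts only
    # Only k expert matmuls run instead of n_experts -> cheap inference
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

rng = np.random.default_rng(0)
d, n = 8, 16
out = moe_forward(rng.standard_normal(d),
                  rng.standard_normal((d, n)),
                  [rng.standard_normal((d, d)) for _ in range(n)],
                  k=2)
print(out.shape)  # (8,) -- full hidden size, but only 2 of 16 experts ran
```

Production MoE layers add load-balancing losses and batched expert dispatch, but the cost argument is the same: compute per token scales with the k active experts, not the full parameter count.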

Revenue model

Usage-based API pricing per token: deepseek-chat at $0.27/M input and $1.10/M output; deepseek-reasoner at $0.55/M input and $2.19/M output, with discounts for cached inputs. The web and app chatbot is free. Operations are funded by parent hedge fund High-Flyer.
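Given the per-token rates above, the cost of a call is simple arithmetic. A minimal estimator using the prices listed in this section (the token counts in the example are hypothetical, and cache discounts are ignored):

```python
# USD per million tokens, as listed in this section
PRICES = {
    "deepseek-chat":     {"input": 0.27, "output": 1.10},
    "deepseek-reasoner": {"input": 0.55, "output": 2.19},
}

def estimate_cost(model, input_tokens, output_tokens):
    """Estimate the USD cost of one API call at the listed rates."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1e6

# Hypothetical call: 100k tokens in, 10k tokens out
cost = estimate_cost("deepseek-chat", 100_000, 10_000)
print(f"${cost:.4f}")  # $0.0380
```

At these rates, even long-context workloads cost cents rather than dollars per call, which is the practical force behind the cost-efficiency claims above.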

Moat

DeepSeek's primary competitive moat is superior model efficiency combined with open-weight distribution, which lowers the capital barrier to entry and accelerates adoption across the AI ecosystem. By reaching frontier-level performance at a fraction of the usual compute cost through innovations such as mixture-of-experts routing, multi-head latent attention, and distillation of its reasoning models into smaller ones, DeepSeek has democratized access to high-capability AI, letting smaller players compete and building network effects around an open ecosystem rather than proprietary lock-in.