The directory
Companies
Every company building a layer of the AI stack — searchable, filterable, and cross-referenced against investors and roles.
440 tracked
| Company | Layer | Primary pattern | Moat | Description | Stage |
|---|---|---|---|---|---|
| Anagram | L6 Applications & Products | — | No company-specific moat information is publicly documented for Anagram. | Human-driven security platform embedding security into everyday employee behavior | Speculative |
| Anthropic | L4 Models & Training | Closed-Source Frontier | Anthropic controls proprietary biological and domain-specific datasets that cannot be easily replicated, combined with an oligopolistic market position (40% enterprise LLM market share) and significant capital resources ($3.5B+ funding) that create barriers to competition as general AI models commoditize. | AI safety and research company building reliable, interpretable, steerable AI systems like Claude. | Dominant |
| Anyscale | L5 Orchestration & Frameworks | — | Anyscale's primary moat is proprietary technology in the Ray framework and the Anyscale Platform, which deliver 23x higher throughput, 75% cost reductions, and scaling to 1,000 nodes in 60 seconds, alongside cost advantages such as $1 per million tokens for LLMs. It also benefits from scale advantages via multi-cloud and multi-region support, a brand bolstered by top investors and partnerships with NVIDIA and Meta, and potential switching costs from deep workflow integration. | Anyscale provides a production platform for scaling Ray, the open-source AI compute framework. | Growth |
| Anysphere (Cursor) | L6 Applications & Products | Code Copilots & IDEs | Anysphere's moat rests on Cursor, its AI-native fork of Visual Studio Code: proprietary technology such as a 'Tab' prediction model reported to outperform rivals by 20-30%, exceptional talent density from top firms and MIT grads, rapid innovation speed, and early-mover advantage in the AI coding market. Scale reinforces it: millions of active users, over half of the Fortune 500 as customers, revenue growth from $100M ARR in late 2024 to $500M by mid-2025, and high ARR per employee. | AI-powered code editor forked from VS Code. | Dominant |
| Apple Silicon (AAPL) | L1 Silicon & Compute | — | Apple Silicon's key competitive moat is its proprietary custom ARM-based system-on-chip (SoC) design, delivering unmatched performance per watt through unified architecture, integrated CPU/GPU/neural engines, and tight hardware-software optimization that competitors like Intel and AMD struggle to match. | Apple's in-house ARM-based SoCs powering all its devices including Macs, iPhones, and iPads. | Dominant |
| Applied Intuition | L6 Applications & Products | — | Applied Intuition's moat rests on dominant OEM penetration, proprietary data scale, and high switching costs. It works with 18 of the top 20 global automakers, creating a self-reinforcing platform advantage, and has accumulated 100+ petabytes of drive data and 1M+ driving scenarios from its customer base, potentially one of the world's most comprehensive autonomy datasets. Strategic alliances with NVIDIA and OpenAI, plus acquisitions like EpiSci, extend its IP into defense and off-road autonomy, while deep production integrations make migrating to competitors expensive and disruptive. Its Physical AI platform, combining on-board Vehicle OS, off-board simulation, and cloud data engines, accelerates validation up to 4x versus legacy workflows, and U.S. Department of Defense contracts add stable, high-margin revenue. Continuous investment requirements in machine learning and hardware partnerships reinforce these barriers. | Provider of simulation software for developing and validating autonomous vehicles and AI-driven machines. | Growth |
| Arcade | L6 Applications & Products | Enterprise Platforms & Workflow | Available moat analysis describes an unrelated sales-gamification vendor also named Arcade; no company-specific moat is documented for this AI agent tooling platform. | Interactive API and tool-use playground for building and testing AI agents | Growth |
| Arcee AI | L4 Models & Training | — | Arcee AI's key competitive moat is its proprietary expertise in efficiently pre-training large, high-performance open-weight foundation models like the 400B-parameter Trinity family in the U.S., achieved at a fraction of Big Tech costs ($20M total) through optimized architectures such as sparse MoE, enabling superior performance-per-parameter and portability across edge, on-prem, and cloud without lock-in.[1][3][4][6] This is bolstered by its domain-specific adaptations (e.g., US patent-trained models with 50% retrieval gains), end-to-end SLM platforms hosted in customer VPCs for data sovereignty, and a pivot from post-training services to owning the full stack, positioning it to capture developer and enterprise preference over Chinese or Big Tech alternatives.[2][5] | US-based AI lab building open-weight foundation models like the Trinity MoE family. | Speculative |
| Argyle | L6 Applications & Products | HR & Identity | Argyle's competitive moat is built on years of accumulated integration expertise and the widest, most battle-tested access network to employment platforms in the US, enabling it to process over 40 million employment records with strong commercial adoption that newer competitors cannot quickly replicate.[2] The company's direct-source, real-time data infrastructure and established partnerships with major mortgage and lending ecosystems create significant switching costs and scale advantages that are difficult for new entrants to overcome.[3] | Real-time income and employment data API connecting to payroll systems via AI | Growth |
| Arista Networks (ANET) | L1 Silicon & Compute | — | Arista Networks' key competitive moat stems from its proprietary EOS software and CloudVision ecosystem, which create high switching costs through technical lock-in, complemented by leadership in high-speed AI Ethernet networking and scale advantages with hyperscale customers. | Arista Networks provides high-performance Ethernet switches for AI data center networking. | Dominant |
| Arize AI | L5 Orchestration & Frameworks | — | Arize AI's key competitive moat is its first-mover advantage in the AI observability market, launched in 2020 with a mature platform offering comprehensive pre- and post-launch evaluation across ML, LLMs, agents, and generative AI, enabling rapid detection of issues like drift and anomalies without custom SQL.[2][1][6] This is reinforced by high switching costs from deep integrations with production stacks (e.g., Vertex AI, AWS), scalable infrastructure for thousands of customers, and proprietary tools like AI-driven clustering, dynamic cohort analysis, and automated evaluations that free teams for model improvement rather than building from scratch.[1][3][6] While competitors like Confident AI and Galileo exist, Arize's broader model support, enterprise traction (e.g., TripAdvisor, GetYourGuide), and open-source Phoenix drive sticky adoption, though no strong network effects, patents, or unique data moats are evident.[2][4][6] | ML observability platform for monitoring model drift, performance, and fairness | Growth |
| Ataccama | L3 Data & Storage | — | Ataccama's key competitive moat is its unified data trust platform, Ataccama ONE, which integrates data quality, lineage, observability, governance, and master data management into a single AI-enabled solution, enabling trusted data for AI and analytics initiatives. | AI-powered data trust platform for quality, governance, and MDM. | Growth |
| Augment Code | L6 Applications & Products | Autonomous Coding Agents | Augment Code's key competitive moat is its proprietary Context Engine, which maintains a live, deep understanding of an entire codebase—including code, dependencies, architecture, and history—enabling superior AI agent performance in code generation, reviews, and multi-agent workflows that outperform humans and competitors on benchmarks for precision, recall, functional correctness, and context awareness.[3][1][2] This is reinforced by high switching costs from tenant-isolated architecture protecting IP, seamless integration across IDEs (VS Code, JetBrains), CLI, and GitHub, and proven scalability for enterprise teams like Intercom managing large Ruby/JavaScript monorepos.[1][2][3] | AI coding platform with context engine for large enterprise codebases. | Speculative |
| AutoGen (MSFT) | L5 Orchestration & Frameworks | Multi-Agent Frameworks | Microsoft AutoGen's key competitive moat is its deep integration with the Microsoft ecosystem (Azure OpenAI, Dynamics 365, Microsoft 365), creating high switching costs for enterprises already invested in Microsoft infrastructure and enabling seamless scalability for Fortune 500 multi-agent AI systems.[1][4][5] This is complemented by its proprietary event-driven architecture for enterprise-grade, asynchronous multi-agent collaboration, which handles complex workflows like supply chain optimization that simpler frameworks cannot match, reinforced by backing from Microsoft's resources and expertise.[1][3][4] | Open-source Microsoft framework for building conversational multi-agent AI systems. | Speculative |
| Avicena | L1 Silicon & Compute | — | Avicena's competitive moat is its first-mover advantage in microLED-based optical interconnects combined with proprietary integration of mature, high-volume GaN microLED display technology into chip-scale communications, enabling sub-pJ/bit energy efficiency that competitors using laser-based or silicon photonics solutions cannot match[1][5]. This is reinforced by strategic partnerships with TSMC and ams OSRAM, 47 filed patents focused on optical devices and photonics[1], and the high switching costs for hyperscalers once LightBundle is integrated into their AI infrastructure[4][5]. | Avicena develops microLED-based ultra-low power optical interconnects for AI data centers and HPC. | Speculative |
| Ayar Labs | L1 Silicon & Compute | — | Ayar Labs' key competitive moat is its proprietary TeraPHY optical I/O chiplet technology and co-packaged optics (CPO) solutions, which deliver 8 Tbps bandwidth, 3-5x power efficiency gains over copper interconnects, and low-latency (10ns) data movement for AI scale-up, proven through manufacturing partnerships with TSMC, Alchip, and integrations with NVIDIA, AMD, and Intel.[1][2][5][6][7] This first-mover advantage in silicon photonics, backed by patents, expert founders from Intel/IBM/MIT, and $500M funding at $3.75B valuation from strategic investors like NVIDIA, creates high barriers via ecosystem lock-in, switching costs, and superior performance addressing AI's "power wall" and memory bottlenecks that rivals like Lightmatter and Celestial AI have yet to match at scale.[2][3][4][5][7] | Optical I/O chiplets for high-speed AI data transfer. | Growth |
| Base44 | L6 Applications & Products | Vibe Coding | Base44's key competitive moat is its pioneering AI-driven app builder technology that generates fully functional web apps—including backend infrastructure, databases, authentication, payments, and scalability—from natural language prompts in minutes, enabling rapid validation and scaling to $1M ARR without code or migrations, as demonstrated by customer cases like Gift My Book[1][2][6]. This is reinforced by powerful network effects from its 400,000+ user base and organic growth, creating a flywheel of shared learnings and integrations that competitors like Lovable or Bolt struggle to match despite funding[4][5][7]. | No-code AI app builder for creating internal tools and workflows from prompts | Speculative |
| Baseten | L4 Models & Training | — | Baseten's key competitive moat is its proprietary hardware-software co-design for AI inference, delivering 225% better cost-performance on NVIDIA Blackwell GPUs via Google Cloud A4 VMs, enabling 5x more requests at lower latency and costs than rivals.[4][5] This is reinforced by multi-cloud scalability, ultra-fast cold starts (e.g., 15 seconds for Stable Diffusion), scale-to-zero autoscaling to eliminate idle GPU waste, and open-source tools like Truss, creating high switching costs for customers optimizing production AI deployments.[1][3] | AI inference platform for deploying and scaling open-source and custom models at production scale. | Speculative |
| Basis | L6 Applications & Products | Finance | No company-specific moat information is publicly documented for Basis. | AI agent platform that automates end-to-end accounting work | Speculative |
| Bigspin AI | L5 Orchestration & Frameworks | — | No company-specific moat information is publicly documented for Bigspin AI. | AI agent conversation monitoring platform that detects issues and closes feedback loops. | Speculative |
| Blackstone Inc. (BX) | L0 Physical Infrastructure | — | Blackstone's wide moat stems from its position as the largest alternative asset manager, with over $1.2 trillion in AUM delivering economies of scale. Diversified offerings across private equity, real estate, credit, and infrastructure, strong brand equity, and deep client relationships drive massive capital inflows, while industry-leading dry powder for opportunistic investments and proprietary information access strengthen as the firm grows. | World's largest alternative asset manager with $1T+ in AUM | Dominant |
| Bloom Energy (BE) | L0 Physical Infrastructure | — | Bloom Energy's key competitive moat is its proprietary solid oxide fuel cell (SOFC) technology, which delivers over 60% electrical efficiency, fuel flexibility across natural gas, biogas, and hydrogen, and rapid on-site deployment (as fast as 55 days for AI data centers), bypassing grid delays while enabling direct DC power to server racks for superior space and energy efficiency.[1][2][3][4][5] This is reinforced by high manufacturing scale (1-2 GW capacity), multiple patents, 100% long-term service contracts for recurring high-margin revenue, and a strategic focus on stationary power for mission-critical applications like data centers, creating significant switching costs and barriers to entry.[1][2][3][5] | Distributed fuel cell power systems for data center and industrial use | Growth |
| Bolt.new | L6 Applications & Products | Vibe Coding | Bolt.new's key competitive moat is its proprietary WebContainers technology, which enables instant, browser-based code execution and AI app building with no setup required, delivering unmatched speed, low costs, and scalability that powers a generous freemium model and best-in-class performance[2][6]. This tech edge, built over years from StackBlitz, creates high switching costs for users hooked on its seamless experience and is amplified by rapid viral growth to 5 million users, strong network effects from user referrals, and proprietary AI optimizations like custom models for in-product success[2][4][5][6]. | StackBlitz-powered browser IDE that builds and deploys full-stack apps from prompts | Growth |
| BotsCrew | L6 Applications & Products | AI Support Agents | BotsCrew's competitive moat stems from its proprietary expertise in custom conversational AI and AI agents, evidenced by Clutch's top rankings for chatbot development (2017–2025), generative AI, and consulting (2025–2026), along with a proven track record of 200+ solutions for Fortune 500 clients like Adidas and Mars. This is reinforced by scale advantages from a 50–100 specialist team across Lviv and San Francisco, seamless integrations, and a data flywheel from ongoing optimization and client success in diverse industries. | Custom AI consulting & development for chatbots and AI agents. | Speculative |