The AI Stack

d-Matrix

d-Matrix develops efficient AI inference chips using a digital in-memory compute architecture.

Updated April 2026

Overview

Segment
GPU & AI Accelerators

Product overview

d-Matrix produces Corsair PCIe accelerator cards based on Jayhawk chiplets, JetStream I/O accelerators, and SquadRack rack-scale solutions optimized for low-latency AI inference in datacenters. These products target hyperscalers, enterprises, and sovereign AI customers such as Microsoft, supporting models of up to 100B parameters with claimed 10x faster performance and 3x better efficiency than GPUs. The company's distinction is a chiplet-based digital in-memory compute (DIMC) design, extended with 3DIMC stacked DRAM, that integrates compute directly into memory for ultra-high bandwidth (150 TB/s) and saves energy through MX (block floating point) number formats. d-Matrix partners with OEMs such as Supermicro and Arista for straightforward integration.
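A rough back-of-envelope shows why the bandwidth figure matters for inference: single-batch LLM decoding streams every weight once per generated token, so tokens/sec is capped by aggregate bandwidth divided by model size. The sketch below uses the 150 TB/s and 100B-parameter figures cited above; the 1-byte-per-parameter assumption (an 8-bit MX-style format) and the simplifications noted in the comments are illustrative, not d-Matrix specifications.

```python
# Back-of-envelope: why memory bandwidth bounds single-batch LLM decoding.
# Each generated token requires streaming every weight through the compute
# units once, so tokens/sec <= aggregate_bandwidth / model_size_in_bytes.

params = 100e9          # 100B-parameter model (upper end cited above)
bytes_per_param = 1.0   # assumed 8-bit block floating point (MX) storage
bandwidth = 150e12      # 150 TB/s aggregate bandwidth (figure cited above)

model_bytes = params * bytes_per_param
tokens_per_sec = bandwidth / model_bytes
print(f"Bandwidth-bound ceiling: ~{tokens_per_sec:,.0f} tokens/s (batch 1)")
# ~1,500 tokens/s in this toy model; real throughput is lower once
# KV-cache reads, interconnect hops, and utilization losses are included.
```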

Revenue model

Sells AI inference accelerators (e.g., Corsair PCIe cards) and related hardware and software to datacenter operators; the company projects $10M in revenue for 2026, scaling to $70M+ annually.

Moat

d-Matrix's key competitive moat is its proprietary Digital In-Memory Compute (DIMC) technology, which the company describes as the world's first digital implementation to tightly integrate compute and memory, eliminating the data-movement bottleneck in AI inference. It pairs DIMC with 3DIMC stacked DRAM, chiplet-based all-to-all interconnects, and Block Floating Point numerics to deliver low latency, energy efficiency, and scalability across datacenter-class models. These hardware-software co-designed breakthroughs, protected by first-mover patents and built by a veteran team that has shipped over 100M chips, create high barriers to entry amid surging generative AI demand.
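The Block Floating Point idea is straightforward to illustrate: a block of values shares one power-of-two exponent while each element keeps only a narrow integer mantissa, shrinking both storage and multiplier width. Below is a minimal NumPy sketch; the block size of 32 and 8-bit mantissas are assumed parameters for illustration, not d-Matrix's exact MX configuration.

```python
import numpy as np

def quantize_block_fp(x, block_size=32, mantissa_bits=8):
    """Toy block floating point quantize/dequantize round trip: each block
    shares a single power-of-two exponent, and every element stores only a
    small signed integer mantissa against that shared scale."""
    x = np.asarray(x, dtype=np.float64)
    pad = (-len(x)) % block_size
    blocks = np.pad(x, (0, pad)).reshape(-1, block_size)

    # One shared exponent per block, chosen so the largest magnitude fits
    # in the signed mantissa range.
    max_mag = np.abs(blocks).max(axis=1, keepdims=True)
    max_mag[max_mag == 0] = 1.0
    exponents = np.floor(np.log2(max_mag)) + 1
    scale = 2.0 ** (exponents - (mantissa_bits - 1))

    # Each element keeps only a rounded integer mantissa.
    mantissas = np.clip(np.round(blocks / scale),
                        -(2 ** (mantissa_bits - 1)),
                        2 ** (mantissa_bits - 1) - 1)
    return (mantissas * scale).reshape(-1)[:len(x)]

weights = np.random.randn(1024)
deq = quantize_block_fp(weights)
print("max abs reconstruction error:", np.abs(weights - deq).max())
```

The appeal for inference hardware is that within a block, multiply-accumulate runs on narrow integer mantissas, with the shared exponent applied once per block rather than per element.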