The AI Stack

Oracle Cloud

Enterprise cloud offering bare metal NVIDIA/AMD GPUs for AI training at massive scale.

Updated April 2026

Overview

Founded
2016
Headquarters
Austin, TX
Segment
Hyperscale Cloud Platforms

Product overview

Oracle Cloud Infrastructure (OCI) provides GPU-accelerated compute on bare metal and VM instances with NVIDIA Blackwell, Hopper, and Ampere GPUs plus AMD MI300X accelerators, targeting AI training, inference, HPC, and graphics. OCI Supercluster scales to 131,072 GPUs over RDMA networking, and the platform integrates AI services such as OCI Data Science and Generative AI. Its key differentiators are zero virtualization overhead on bare metal shapes and high local storage (61.4 TB per H100 node). Enterprises such as FedEx and Airbnb run production workloads on OCI.

Revenue model

GPU compute is billed on demand, e.g. ~$10 per GPU-hour for BM.GPU.H100.8 (8x H100, ~$80/hr per node) and ~$3.50 per GPU-hour for L40S. Preemptible instances are discounted 50% off on-demand, and capacity reservations run at 85% of on-demand rates. Block storage is billed per GB/month, and the Universal Credit Model provides negotiated enterprise rates.
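To make the pricing tiers concrete, here is a minimal sketch that compares monthly node cost under the three options quoted above. The rates are illustrative figures taken from the text (actual OCI pricing varies by region and contract), and the 730-hour month is a common cloud billing approximation, not an Oracle-specific convention.

```python
# Illustrative cost comparison using the rates quoted in the text.
# Rates are assumptions from this page, not authoritative OCI pricing.

ON_DEMAND_PER_GPU_HR = 10.00   # BM.GPU.H100.8, per GPU-hour (~$10 per the text)
GPUS_PER_NODE = 8
HOURS_PER_MONTH = 730          # common approximation for a billing month

def monthly_cost(rate_per_gpu_hr, gpus=GPUS_PER_NODE, hours=HOURS_PER_MONTH):
    """Monthly cost of one node at a given per-GPU-hour rate."""
    return rate_per_gpu_hr * gpus * hours

on_demand   = monthly_cost(ON_DEMAND_PER_GPU_HR)          # full rate
preemptible = monthly_cost(ON_DEMAND_PER_GPU_HR * 0.50)   # 50% off on-demand
reserved    = monthly_cost(ON_DEMAND_PER_GPU_HR * 0.85)   # 85% of on-demand

print(f"on-demand:   ${on_demand:,.0f}/mo")    # $58,400/mo
print(f"preemptible: ${preemptible:,.0f}/mo")  # $29,200/mo
print(f"reserved:    ${reserved:,.0f}/mo")     # $49,640/mo
```

The spread illustrates why fault-tolerant training jobs often target preemptible capacity: at the quoted discounts, an 8x H100 node drops from roughly $58K to $29K per month.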

Moat

Oracle Cloud's primary competitive moat is its integrated database-to-cloud ecosystem: it leverages its dominant position in enterprise databases to drive cloud adoption and create high switching costs. The company pairs this with specialized infrastructure advantages, including strong performance for enterprise workloads (Oracle claims up to 50x better storage latency than competitors), native support for Oracle Database clustering, and GPU-optimized AI compute capacity, that make migrating on-premises Oracle systems to OCI more convenient and cost-effective than moving to rival platforms.