GPU Cloud Provider · Livingston, New Jersey, USA

CoreWeave

CoreWeave is a hyperscaler focused on AI and high-performance computing. It leverages advanced NVIDIA technologies, such as GB200 Grace Blackwell chips and BlueField DPUs, to deliver a cloud platform built for large-scale AI workloads, with an emphasis on efficiency, scalability, and performance.

GPUs
7
Founded
Prior to 2025
Countries
2
Data Centers
5
Uptime SLA
99.9%
Team Size
1000+

Company Profile

Company Type: Scale-up
Provider Type: Cloud Provider
Founded: Prior to 2025
Headquarters: Livingston, New Jersey, USA
Legal Entity: CoreWeave, Inc.
Funding: Private
Total Raised: $8.7B
Team Size: 1000+
Investors: Magnetar Capital, Nvidia, Fidelity Management & Research, Coatue, Altimeter Capital

Infrastructure

GPU Fleet: NVIDIA H100 SXM5 80GB, NVIDIA H100 PCIe 80GB, NVIDIA A100 SXM4 80GB, NVIDIA A100 PCIe 80GB, NVIDIA A40, NVIDIA RTX A6000, NVIDIA RTX A5000, NVIDIA RTX A4000, NVIDIA L40S, NVIDIA H200
Network Fabric: InfiniBand, Ethernet, custom fabric
Connectivity: Up to 400Gbps per instance
Storage: High-performance NVMe storage, AI Object Storage with BlueField acceleration
Data Center Tier: Tier 3 and Tier 4 data centers across multiple US and EU regions
Bare Metal: Yes, bare metal GPU instances available alongside Kubernetes-based deployments
Availability: General Availability
Target Customers: Enterprise, Startup, Research, Government

Compute & Deployment

On-Demand: Yes
Spot / Interruptible: No
Reserved Instances: Yes (1-month, 3-month, annual commitments)
Bare Metal: Yes
VM-Based: No
Container-Based: Yes (Docker)
Kubernetes: Yes (managed K8s via CoreWeave Cloud Kubernetes Service)
Serverless GPU: No
Spin-Up Time: Under 2 minutes
Terraform: Yes (official provider)
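The container- and Kubernetes-based deployment model above boils down to pods that request GPUs from the scheduler. A minimal sketch of such a pod manifest, built as a plain Python dict (what you would serialize to YAML and apply with kubectl); the image name and pod name are illustrative placeholders, while `nvidia.com/gpu` is the standard Kubernetes device-plugin resource key, not a CoreWeave-specific API:

```python
# Sketch of a Kubernetes pod manifest requesting GPUs, expressed as a
# plain Python dict. The image and pod names are hypothetical placeholders;
# "nvidia.com/gpu" is the standard Kubernetes device-plugin resource key.

def gpu_pod_manifest(name: str, image: str, gpus: int) -> dict:
    """Build a minimal pod spec asking the scheduler for `gpus` GPUs."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "containers": [
                {
                    "name": name,
                    "image": image,
                    # GPU counts go in both requests and limits; Kubernetes
                    # does not allow overcommitting extended resources.
                    "resources": {
                        "requests": {"nvidia.com/gpu": str(gpus)},
                        "limits": {"nvidia.com/gpu": str(gpus)},
                    },
                }
            ],
            "restartPolicy": "Never",
        },
    }

manifest = gpu_pod_manifest("train-job", "pytorch/pytorch:latest", 8)
print(manifest["spec"]["containers"][0]["resources"]["limits"])
```

Serializing this dict to YAML yields a spec you could hand to any conformant Kubernetes cluster with the NVIDIA device plugin installed.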

GPU Hardware

Latest Gen: H100 SXM, H100 PCIe, H200 SXM, L40S, L4, GH200
Legacy Support: A100 SXM, A100 PCIe, A40, A10, V100, RTX A6000, RTX A5000, RTX A4000
Multi-GPU Nodes: Yes (up to 8x per node)
Max GPUs/Node: 8
Pool Size: 250,000+ GPUs
NVLink: Yes (NVLink 4.0 on H100/H200 SXM nodes; NVLink 3.0 on A100 SXM nodes)
InfiniBand: Yes (NDR 400Gbps on H100/H200 clusters; HDR 200Gbps on A100 clusters)
PCIe vs SXM: Both PCIe and SXM
HGX Platform: Yes (HGX H100 8-GPU)
Liquid Cooling: Select SKUs (direct liquid cooling on high-density SXM nodes)

Pricing Model

Per Hour: Yes (primary billing unit)
Per Minute: No
Subscription: Yes (monthly/annual reserved contracts)
Reserved Discount: Up to 20-30% off with 1-year or 3-year reserved contracts
Spot Discount: No spot pricing (reserved and on-demand only)
Public Pricing: Partial (some GPUs listed on website; full pricing requires an account or sales contact)
Hidden Fees: Networking/load balancer charges may apply; storage billed separately
Egress Charges: Tiered pricing; free within the CoreWeave network, charges apply for external egress
Setup Fees: None disclosed for standard accounts; enterprise onboarding may vary
Pay-as-you-go: Yes
Credit System: No
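The reserved-discount math above is simple to sketch. The hourly rate below is a hypothetical placeholder, not CoreWeave's price list; the 20-30% discount range is taken from the profile:

```python
# Illustrative cost comparison: on-demand vs. reserved hourly billing.
# The $/GPU-hour figure is a hypothetical placeholder, NOT actual
# CoreWeave pricing; the 25% discount is the midpoint of the 20-30%
# reserved-contract range listed above.

HOURS_PER_MONTH = 730  # average hours in a month (8760 / 12)

def monthly_cost(hourly_rate: float, gpus: int, discount: float = 0.0) -> float:
    """Cost of running `gpus` GPUs for one month at a reserved discount."""
    return hourly_rate * gpus * HOURS_PER_MONTH * (1.0 - discount)

rate = 3.00  # hypothetical $/GPU-hour
on_demand = monthly_cost(rate, gpus=8)                 # 8-GPU node, no discount
reserved = monthly_cost(rate, gpus=8, discount=0.25)   # 1-year reserved

print(f"on-demand: ${on_demand:,.0f}/mo  reserved: ${reserved:,.0f}/mo")
# 3.00 * 8 * 730 = 17,520; with 25% off: 13,140
```

At these assumed numbers, a one-year reservation on an 8-GPU node saves roughly $4,380 per month versus on-demand.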

Performance & Scaling

Multi-Node Training: Yes (up to 1000+ nodes with NCCL and MPI)
Max Cluster Size: 4096+ GPUs (large-scale HPC clusters available)
Elastic Scaling: Yes (add/remove nodes dynamically via Kubernetes-native scaling)
Auto Scaling: Yes (policy-based auto-scaling via Kubernetes HPA and custom operators)
InfiniBand: Yes (NDR 400Gbps InfiniBand between nodes on H100/A100 clusters)
NVSwitch: Yes (on SXM nodes, including H100 SXM and A100 SXM)
SLA: 99.9%
Perf Isolation: Yes (dedicated bare metal, no hypervisor overhead)
Noisy Neighbor Protection: Yes (bare metal, no sharing with other tenants)
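In the NCCL/MPI-style multi-node training mentioned above, each worker process receives a global rank, and with fixed 8-GPU nodes the node index and local GPU index fall out of simple integer arithmetic. A framework-agnostic sketch of that convention (it mirrors how torchrun/mpirun launchers typically map ranks, but uses no framework code):

```python
# Map global worker ranks to (node, local GPU) positions for a cluster
# of fixed 8-GPU nodes, as listed in the hardware profile above. This is
# the usual rank layout assumed by NCCL/MPI-style launchers; plain Python,
# no training framework required.

GPUS_PER_NODE = 8

def locate(global_rank: int) -> tuple[int, int]:
    """Return (node_index, local_gpu_index) for a global rank."""
    return divmod(global_rank, GPUS_PER_NODE)

# A 32-GPU job spans 4 nodes; global rank 19 lands on node 2, local GPU 3.
world_size = 32
nodes = world_size // GPUS_PER_NODE
print(nodes, locate(19))
```

The same arithmetic in reverse (`node * 8 + local_gpu`) recovers the global rank, which is what collective libraries use to address peers across the InfiniBand fabric.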

Developer Experience

Onboarding: Self-service account creation with API-driven deployment; enterprise onboarding available with dedicated support
Frameworks: PyTorch, TensorFlow, JAX
SDK Languages: Python, Go
CLI Tooling: Kubernetes-native (kubectl); CoreWeave Cloud UI and API for instance management
Jupyter: Supported via Kubernetes pod deployments and SSH access
Templates: LLM Training, Stable Diffusion, PyTorch Training, TensorFlow, Triton Inference Server
Model Marketplace: None native; integrates with Hugging Face and custom model serving
Documentation: Comprehensive docs with tutorials, API reference, and Kubernetes integration guides
API Features: CLI, SDK, REST API, Terraform provider

Security & Compliance

Security: SOC 2 Type II, regular penetration testing, ISO/IEC 27001:2013 certified
Compliance: GDPR compliant, Service Organization Controls (SOC) reports, CoreWeave Customer Asset Penetration Testing Policy, Vulnerability Disclosure Policy, EU Data Act Addendum

Highlights

Strategic investment from NVIDIA
Multi-billion dollar funding with top-tier institutional investors
Powers AI infrastructure for major AI labs and enterprises
SOC 2 Type II compliant
One of the largest independent GPU cloud providers globally

Data Center Locations

Coverage

Countries: United States, United Kingdom
Cities: Las Vegas NV, Chicago IL, Weehawken NJ, Livingston NJ, London
Multi-Region Failover: Yes (manual)
Latency Tiers: Ultra-low (<1ms intra-DC), standard cloud latency between regions
Regions: North America, Europe

Compliance Regions

EU Data Residency: Yes (London)
US Gov Cloud: No
India Region: No

Key Strengths

Purpose-built for large-scale AI/ML GPU workloads
High-density NVLink and InfiniBand interconnects for multi-node training
NVIDIA strategic partner and early access to latest GPU hardware
Kubernetes-native infrastructure with flexible orchestration
Significant scale with hyperscaler-level GPU capacity without hyperscaler pricing

Known Limitations

Primarily US-focused with limited EU presence compared to hyperscalers
No spot/preemptible instances for cost optimization
Minimum commitment requirements for best pricing tiers
Less consumer-friendly than some competitors; skewed toward enterprise/research
No Windows GPU support; Linux-only environment

Additional Information

Support Options

24/7 email and phone support
Dedicated technical account managers

Community

Slack community for developers; active presence on GitHub and technical forums

Green Energy

Committed to renewable energy sourcing; works with green energy providers for data center operations

Core Proposition

GPU-accelerated cloud purpose-built for AI/ML workloads, offering Kubernetes-native infrastructure with the largest selection of NVIDIA GPUs at scale

Notable Customers

Microsoft
Meta
IBM
Mistral AI
Stability AI
Character.ai

Payment Methods

Credit Card, Wire Transfer, Enterprise invoicing
Last updated March 2026. Information subject to change.