
GPU Cloud Provider

CUDO Compute

CUDO Compute specializes in dedicated bare-metal machine clusters for high-performance computing, with a focus on GPU-intensive workloads such as AI training and data analysis. It offers customizable clusters with up to 8 GPUs per machine, scalable configurations, and features such as InfiniBand networking and high-speed storage solutions.

GPUs: 2
Founded: Unknown
Countries: 5
Data Centers: 5
Team Size: 51-200

GPU Marketplace

Company Profile

Company Type: Scale-up
Provider Type: Cloud Provider
Legal Entity: Cudo Ventures Limited
Funding: Series A/B/C/D
Team Size: 51-200

Infrastructure

GPU Fleet: AMD Instinct MI250, AMD Instinct MI300X, NVIDIA A100, NVIDIA H100, NVIDIA RTX A6000, NVIDIA A40
Network Fabric: InfiniBand
Storage: High-speed storage, NFS volumes, local NVMe SSDs, parallel filesystems such as Lustre and GPFS
Data Center Tier: Tier 3
Bare Metal: Yes, bare metal GPU servers available
Enterprise · Startup · Research · AI/ML Teams

Compute & Deployment

On-Demand: Yes
Spot / Interruptible: No
Reserved Instances: Yes (committed usage discounts available)
Bare Metal: Yes (bare metal GPU servers available)
VM-Based: Yes
Container-Based: Yes (Docker)
Kubernetes: Yes (managed K8s available)
Serverless GPU: No
Spin-Up Time: 2-5 minutes
Terraform: Yes (official provider)

GPU Hardware

Latest Gen: AMD MI300X
Legacy Support: AMD MI250, NVIDIA A100, NVIDIA V100, NVIDIA RTX A5000, NVIDIA RTX A4000
Multi-GPU Nodes: Yes (up to 8x per node)
Max GPUs/Node: 8
NVLink: No
PCIe vs SXM: PCIe only
HGX Platform: No
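To put the 8-GPU-per-node limit in context, a small sketch of aggregate node memory for the fleet listed above. The per-GPU VRAM figures are nominal vendor specs, not taken from this page (note the A100 also ships in a 40 GB variant):

```python
# Nominal per-GPU memory for the listed fleet (vendor specs, not from
# this page; assumption: 80 GB A100/H100 variants, 48 GB A6000/A40).
VRAM_GB = {
    "AMD Instinct MI300X": 192,
    "AMD Instinct MI250": 128,
    "NVIDIA H100": 80,
    "NVIDIA A100": 80,
    "NVIDIA RTX A6000": 48,
    "NVIDIA A40": 48,
}

def node_vram_gb(gpu: str, gpus_per_node: int = 8) -> int:
    """Aggregate VRAM of a full node (the page lists up to 8 GPUs/node)."""
    return VRAM_GB[gpu] * gpus_per_node

print(node_vram_gb("AMD Instinct MI300X"))  # 1536
print(node_vram_gb("NVIDIA H100"))          # 640
```

A full MI300X node therefore offers roughly 2.4x the aggregate VRAM of a full 80 GB H100 node, which is the main draw of the AMD fleet for large-model inference.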

Pricing Model

Per Hour: Yes (primary billing unit)
Public Pricing: Partial (some GPUs listed)
Hidden Fees: None disclosed
Setup Fees: None disclosed
Pay-as-you-go: Yes
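With per-hour billing, pay-as-you-go, and committed-usage discounts, job cost is a simple product; the sketch below models it. The hourly rate and discount fraction are hypothetical placeholders, not CUDO's published prices:

```python
def estimate_cost(hourly_rate: float, hours: float, gpus: int = 1,
                  committed_discount: float = 0.0) -> float:
    """Estimate a job's cost under per-hour, pay-as-you-go billing.

    committed_discount models a reserved-capacity discount as a fraction
    (e.g. 0.20 for 20% off); actual rates and discounts are hypothetical.
    """
    return hourly_rate * hours * gpus * (1.0 - committed_discount)

# Hypothetical run: 8 GPUs at $2.50/GPU-hour for 36 hours,
# with a 20% committed-usage discount.
print(round(estimate_cost(2.50, 36, gpus=8, committed_discount=0.20), 2))
```

Since public pricing is only partial, confirm the actual per-GPU-hour rate in the console before relying on an estimate like this.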

Performance & Scaling

Multi-Node Training: Yes (multi-node distributed training supported via standard interconnects)
Elastic Scaling: Manual only
Auto Scaling: No
NVSwitch: No (AMD MI250 uses Infinity Fabric, not NVSwitch)
Perf Isolation: Partial (dedicated GPU instances, but isolation level not fully disclosed)
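For multi-node data-parallel training on nodes of up to 8 GPUs, the scaling arithmetic (worker count and effective batch size) looks like this; the node count and per-GPU batch in the example are hypothetical:

```python
GPUS_PER_NODE = 8  # maximum GPUs per node from the spec above

def world_size(nodes: int, gpus_per_node: int = GPUS_PER_NODE) -> int:
    """Total number of data-parallel workers across the cluster."""
    return nodes * gpus_per_node

def global_batch(per_gpu_batch: int, nodes: int,
                 gpus_per_node: int = GPUS_PER_NODE) -> int:
    """Effective batch size per optimizer step under pure data parallelism."""
    return per_gpu_batch * world_size(nodes, gpus_per_node)

# Hypothetical run: 4 nodes, per-GPU micro-batch of 16.
print(world_size(4))        # 32 workers
print(global_batch(16, 4))  # 512 samples per step
```

Because scaling is manual-only with no autoscaler, these numbers are fixed when you provision the cluster rather than adjusted at runtime.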

Developer Experience

Onboarding: Self-service web console with relatively quick deployment; account verification may add delay
Frameworks: PyTorch, TensorFlow
SDK Languages: Python
CLI Tooling: Basic CLI for instance and resource management
Jupyter: Via SSH port forwarding or custom setup
Templates: PyTorch, TensorFlow, ROCm
Documentation: Moderate documentation with API reference and getting-started guides; AMD-specific guides available
API Features: Unknown

Security & Compliance

UK-based regulated entity
Sustainability and green energy commitments
Part of the established CUDO group
AMD partner for MI250/MI300X deployments

Data Center Locations

Coverage

Countries: United States, United Kingdom, Norway, Iceland, Germany
Cities: Ashburn (VA), London, Oslo, Reykjavik, Frankfurt
Multi-Region Failover: Yes (manual)
Europe · North America · Asia-Pacific

Compliance Regions

EU Data Residency: Yes (Frankfurt, Norway)
US Gov Cloud: No
India Region: No

Key Strengths

Competitive AMD MI250/MI300X pricing as alternative to NVIDIA
Sustainability-first distributed cloud model
Access to AMD ROCm ecosystem at scale
Aggregated global data center capacity for broader availability
Cost-effective alternative to hyperscaler GPU pricing

Known Limitations

Smaller scale compared to major hyperscalers
AMD ROCm ecosystem less mature than CUDA for some ML frameworks
Limited prebuilt template library
Spot/preemptible instances not prominently featured
Community and support resources less extensive than larger competitors

Additional Information

Support Options

["Unknown"]

Community

Limited public community presence; support via ticketing and documentation

Green Energy

Strong sustainability focus; aims to use renewable and low-carbon energy sources across distributed data centers

Core Proposition

Distributed GPU cloud aggregating underutilized compute from data centers worldwide to offer competitive pricing on GPU instances including AMD MI250 and NVIDIA hardware.

Payment Methods

Credit Card · Wire Transfer
Last updated March 2026. Information subject to change.