GPU Cloud Provider

SaladCloud

SaladCloud (by Salad Technologies) was founded to monetize idle consumer GPU compute from a distributed network of PC contributors, offering drastically lower GPU cloud prices than traditional hyperscalers. The platform enables AI/ML workloads to run on a globally distributed pool of consumer-grade GPUs contributed by gamers and enthusiasts.

GPUs: 1
Countries: 7
Data Centers: 1
Uptime SLA: Best-effort
Team Size: 11-50

GPU Marketplace

Company Profile

Company Type: Marketplace/Aggregator
Provider Type: Marketplace
Headquarters: United States
Legal Entity: Salad Technologies, Inc.
Funding: Series A/B/C/D
Team Size: 11-50

Infrastructure

GPU Fleet: NVIDIA RTX 4090, NVIDIA RTX 3090, NVIDIA RTX 3080, NVIDIA RTX 3070, NVIDIA A100 80GB, NVIDIA RTX 4080, NVIDIA RTX 3060
Data Center Tier: Distributed consumer hardware; no traditional data center tier certification
Bare Metal: No (container-only deployment on distributed consumer nodes)
Startup, AI/ML Developers, Inference Workloads, Media Processing, Hobbyist

Compute & Deployment

On-Demand: Yes (container-based workloads deployed on distributed nodes)
Spot / Interruptible: Yes (inherently interruptible by design; nodes are consumer machines that can go offline)
Reserved Instances: No
Bare Metal: No
VM-Based: No
Container-Based: Yes (Docker)
Kubernetes: No
Serverless GPU: Yes (SaladCloud Container Engine supports serverless-style GPU inference endpoints)
Spin-Up Time: 2-5 minutes (varies with distributed node availability and workload scheduling)
Terraform: No
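The container-based deployment model above amounts to submitting a declarative container-group spec (image, GPU class, replica count) and letting the platform schedule it onto available nodes. A minimal sketch of such a spec builder in Python; all field names here are illustrative assumptions, not SaladCloud's actual API schema:

```python
import json


def build_container_group(name, image, gpu_class, replicas=1):
    """Build a hypothetical container-group spec for a container-based GPU
    platform. Field names are illustrative only, not SaladCloud's real API."""
    return {
        "name": name,
        "container": {
            "image": image,
            "resources": {
                "gpu_classes": [gpu_class],  # e.g. an RTX 4090 node class
                "cpu": 4,
                "memory_mb": 8192,
            },
        },
        "replicas": replicas,
        # Nodes are interruptible consumer machines, so the platform is
        # expected to reschedule replicas when a node drops out.
        "restart_policy": "always",
    }


spec = build_container_group("sd-inference", "myrepo/stable-diffusion:latest", "rtx4090")
print(json.dumps(spec, indent=2))
```

A spec like this would typically be POSTed to the provider's REST API or pasted into the web portal; consult the official API reference for the real payload shape.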

GPU Hardware

Latest Gen: RTX 4090, RTX 4080, RTX 4070 Ti, RTX 4070
Legacy Support: RTX 3090, RTX 3080, RTX 3070, A100 PCIe, A4000, A5000, A6000
Multi-GPU Nodes: Limited (consumer nodes typically have 1-2 GPUs per machine)
Max GPUs/Node: 4
Pool Size: 100,000+ GPUs
NVLink: No
InfiniBand: No
PCIe vs SXM: PCIe only
HGX Platform: No

Pricing Model

Per Hour: Yes (primary billing unit)
Per Minute: No
Subscription: No
Reserved Discount: No
Spot Discount: Up to 80% off compared to major cloud providers (nodes are distributed and interruptible by design)
Public Pricing: Yes
Hidden Fees: None disclosed
Pay-as-you-go: Yes
Credit System: Yes (prepaid credits)
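Hourly pay-as-you-go billing with no reserved commitment makes cost estimation a simple multiplication. A toy comparison using assumed example rates (not published prices from SaladCloud or any hyperscaler), applying the best-case "up to 80% off" figure:

```python
def job_cost(hours, hourly_rate):
    """Pay-as-you-go: cost is hours times the hourly rate, with no minimums."""
    return hours * hourly_rate


# Illustrative rates only; not actual published prices.
hyperscaler_rate = 1.50                     # $/hr for a comparable GPU on a major cloud
salad_rate = hyperscaler_rate * (1 - 0.80)  # best case of "up to 80% off"

baseline = job_cost(100, hyperscaler_rate)  # 100-hour batch job at the assumed rate
discounted = job_cost(100, salad_rate)      # same job at the best-case discount
print(f"baseline ${baseline:.2f} vs discounted ${discounted:.2f}")
```

Because billing is per hour with prepaid credits, the practical budgeting question is simply expected runtime times rate; there is no reserved-capacity term to amortize.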

Performance & Scaling

Multi-Node Training: Limited (manual setup, no native orchestration)
Elastic Scaling: Manual only
Auto Scaling: No
InfiniBand: No (Ethernet only)
NVSwitch: No
SLA: Best-effort
Perf Isolation: No (shared, consumer-grade GPUs)
Noisy Neighbor Protection: No (multi-tenant shared nodes)

Developer Experience

Onboarding: Deploy containerized workloads via web portal or API within minutes; no enterprise onboarding required
SDK Languages: Python
CLI Tooling: Basic CLI and REST API for job submission and container management
Jupyter: Not natively supported; possible via a containerized Jupyter image
Templates: Stable Diffusion, LLM Inference, Image Processing
Documentation: Moderate; API reference, quickstart guides, and container deployment tutorials

Data Center Locations

Coverage

Countries: United States, Canada, Germany, United Kingdom, France, Netherlands, Australia
Cities: Not disclosed
Regions: North America, Europe, Asia-Pacific, Global (distributed consumer hardware)

Compliance Regions

EU Data Residency: Yes (EU nodes available via the distributed network; specific cities not disclosed)
US Gov Cloud: No

Key Strengths

Extremely low GPU pricing leveraging idle consumer hardware
Massive distributed global GPU pool not limited by data center capacity
No minimum commitment or reserved instance requirements
Container-native platform with simple deployment model
Unique crowdsourced compute model enabling access to GPUs like RTX 4090 at fractional cost

Known Limitations

Consumer-grade hardware with variable reliability and no uptime guarantees
Nodes can go offline mid-job, requiring workload-level fault tolerance
Not suitable for stateful or long-running jobs without checkpointing
No bare metal access or persistent storage guarantees
Limited enterprise compliance (no SOC 2, HIPAA, or similar certifications known)
Network I/O and latency unpredictable across distributed nodes
No GPU-to-GPU interconnect (NVLink/InfiniBand) for multi-GPU training
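Because nodes can disappear mid-job, workloads on a platform like this should persist progress and resume on a replacement node. A minimal checkpoint-and-resume loop, assuming local JSON checkpoints for illustration (in practice the checkpoint would go to durable external storage, since the node's local disk is lost with the node):

```python
import json
import os
import tempfile


def run_with_checkpoints(total_steps, ckpt_path, step_fn):
    """Resume-from-checkpoint loop: persist progress after every step so a
    replacement node can continue if the current one goes offline mid-job."""
    start = 0
    if os.path.exists(ckpt_path):
        with open(ckpt_path) as f:
            start = json.load(f)["next_step"]
    for step in range(start, total_steps):
        step_fn(step)                    # one unit of work, e.g. a batch
        with open(ckpt_path, "w") as f:  # record progress durably
            json.dump({"next_step": step + 1}, f)
    return total_steps - start           # steps actually executed this run


ckpt = os.path.join(tempfile.mkdtemp(), "progress.json")
done_first = run_with_checkpoints(5, ckpt, lambda s: None)  # fresh run does all 5 steps
done_again = run_with_checkpoints(5, ckpt, lambda s: None)  # simulated restart: nothing left
```

Checkpoint frequency is a trade-off: more frequent writes waste less work on interruption but add I/O overhead, which matters more here given unpredictable network performance across nodes.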

Additional Information

Community

Discord community for developers and contributors; GitHub presence

Green Energy

Indirect sustainability benefit by utilizing existing idle hardware rather than building new data centers; no formal green energy commitment

Core Proposition

Distributed cloud platform leveraging idle consumer and prosumer GPUs worldwide to deliver significantly lower-cost GPU compute, primarily for inference and batch workloads.

Payment Methods

Credit Card

Last updated March 2026. Information subject to change.