GPU Cloud Provider · Global
Lambda Labs
Lambda Labs offers cutting-edge, flexible GPU clusters through its 1-Click Clusters service, featuring NVIDIA H100 Tensor Core GPUs interconnected via NVIDIA Quantum-2 InfiniBand technology. The clusters are tailored for ML engineers and researchers requiring high computational power on an on-demand basis without long-term commitments.
GPUs: 1
Founded: 2012
Countries: 1
Data Centers: 3
Uptime SLA: 99.9%
Team Size: 201-1000
GPU Marketplace

NVIDIA A100 80GB SXM · On-Demand
Company Profile
Company Type: Scale-up
Provider Type: Cloud Provider
Headquarters: Global
Legal Entity: Lambda, Inc.
Funding: Series C
Total Raised: $602M
Team Size: 201-1000
Investors: NVIDIA, Gradient Ventures, Razer, US Innovative Technology Fund, Thomas Tull
Infrastructure
GPU Fleet: NVIDIA H100 SXM5 80GB, NVIDIA H100 PCIe 80GB, NVIDIA A100 SXM4 80GB, NVIDIA A100 PCIe 40GB, NVIDIA A10 24GB, NVIDIA V100 16GB, NVIDIA RTX 3090 24GB
Network Fabric: NVIDIA Quantum-2 InfiniBand, Ethernet up to 200 Gbps
Connectivity: 400 Gbps (InfiniBand), 200 Gbps (Ethernet)
Storage: NVMe, shared file system
Data Center Tier: Tier 3 equivalent colocation facilities
Bare Metal: Yes, bare metal GPU instances available for dedicated cluster deployments
Availability: GA
Customer Segments: Enterprise, Startup, Research, Government
Compute & Deployment
On-Demand: Yes
Spot / Interruptible: No
Reserved Instances: Yes (1-year and 3-year terms available for select GPU types)
Bare Metal: Yes (1-Click Clusters and dedicated instances available)
VM-Based: Yes (Linux-based GPU instances)
Container-Based: Yes (Docker)
Kubernetes: Yes (managed Kubernetes clusters available via Lambda Cloud)
Serverless GPU: No
Spin-Up Time: 1-5 minutes
Terraform: Yes (community provider)
GPU Hardware
Latest Gen: H100 SXM, H100 PCIe, H200 SXM, A100 SXM, A100 PCIe
Legacy Support: A10, RTX 6000 Ada, RTX A6000, V100
Multi-GPU Nodes: Yes (up to 8x per node)
Max GPUs/Node: 8
NVLink: Yes (NVLink on SXM nodes)
InfiniBand: Yes (HDR 200 Gbps on H100 SXM clusters)
PCIe vs SXM: Both PCIe and SXM
HGX Platform: Yes (HGX H100 8-GPU)
Pricing Model
Per Hour: Yes (primary billing unit)
Per Minute: No
Subscription: Yes (reserved instances with 1-year and 3-year terms)
Reserved Discount: Up to ~45% off on-demand with 1-year reserved instances
Spot Discount: No spot pricing
Public Pricing: Yes
Hidden Fees: None disclosed
Pay-as-you-go: Yes
Credit System: No
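To make the reserved-discount figure above concrete, here is a minimal Python sketch of the break-even arithmetic. The hourly rate is a hypothetical placeholder, not Lambda's actual price; check the public pricing page for real numbers.

```python
# Hypothetical rate for illustration only; see Lambda's pricing page
# for actual per-GPU-hour prices.
ON_DEMAND_RATE = 2.49        # $/GPU-hour, assumed on-demand price
RESERVED_DISCOUNT = 0.45     # "up to ~45% off" with a 1-year reservation

HOURS_PER_YEAR = 24 * 365

def yearly_cost(rate_per_hour: float, gpus: int, utilization: float = 1.0) -> float:
    """Annual cost for a fixed number of GPUs at a given utilization."""
    return rate_per_hour * gpus * HOURS_PER_YEAR * utilization

on_demand = yearly_cost(ON_DEMAND_RATE, gpus=8)
reserved = yearly_cost(ON_DEMAND_RATE * (1 - RESERVED_DISCOUNT), gpus=8)

# Reserved capacity is billed whether used or not, so reserving only
# pays off when expected utilization exceeds the discounted-rate ratio.
break_even = 1 - RESERVED_DISCOUNT   # here, reserve if you run >55% of the time

print(f"on-demand: ${on_demand:,.0f}/yr, reserved: ${reserved:,.0f}/yr")
```

With these assumed numbers, an 8-GPU node running full-time costs noticeably less on a 1-year reservation than on-demand, but an on-demand node used less than ~55% of the time is cheaper.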
Performance & Scaling
Multi-Node Training: Yes (multi-node distributed training via NCCL and MPI, supported on 1-Click Clusters)
Max Cluster Size: Up to 512+ GPUs (H100 SXM5 clusters available on-demand; larger reserved clusters available)
Elastic Scaling: No (fixed-size cluster reservations; no dynamic node add/remove)
Auto Scaling: No (manual provisioning only; no policy-based auto-scaling)
InfiniBand: Yes (HDR InfiniBand 200 Gbps between nodes on H100 SXM5 clusters)
NVSwitch: Yes (NVSwitch fabric for intra-node GPU communication on H100 SXM5 nodes)
SLA: 99.9%
Perf Isolation: Yes (dedicated bare metal; no hypervisor or VM layer)
Noisy Neighbor Protection: Yes (bare metal; no sharing with other tenants)
Developer Experience
Onboarding: Deploy in under 5 minutes via web UI or API; SSH access included with all instances
Frameworks: PyTorch, TensorFlow, CUDA, cuDNN
SDK Languages: Python
CLI Tooling: Lambda Cloud CLI available for instance management, SSH key management, and file operations
Jupyter: Native JupyterHub integration on all instances; accessible via web browser
Templates: PyTorch, TensorFlow, LLM fine-tuning, Stable Diffusion, CUDA development
Documentation: Comprehensive docs with tutorials, API reference, and GPU-specific guides at docs.lambdalabs.com
API Features: CLI, SDK, REST API
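The REST API entry above can be illustrated with a short Python sketch. The base URL and endpoint path below follow Lambda's public Cloud API; the instance type, region, and SSH key names are placeholder assumptions, so verify everything against docs.lambdalabs.com before use.

```python
import json
import urllib.request

# Base URL per Lambda's public Cloud API docs; verify before relying on it.
API_BASE = "https://cloud.lambdalabs.com/api/v1"

def build_launch_payload(instance_type: str, region: str, ssh_key: str,
                         quantity: int = 1) -> dict:
    """Assemble the JSON body for the instance-launch endpoint."""
    return {
        "instance_type_name": instance_type,
        "region_name": region,
        "ssh_key_names": [ssh_key],
        "quantity": quantity,
    }

def launch(api_key: str, payload: dict) -> dict:
    """POST the launch request; requires a valid Lambda Cloud API key."""
    req = urllib.request.Request(
        f"{API_BASE}/instance-operations/launch",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Placeholder instance type, region, and key name for illustration.
payload = build_launch_payload("gpu_1x_a100_sxm4", "us-west-1", "my-key")
```

Calling `launch(api_key, payload)` with a real API key would submit the request; the same endpoint family also covers listing and terminating instances.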
Security & Compliance
Security: Regular penetration testing
Compliance: SOC 2, ISO 27001
Highlights: NVIDIA strategic investor · used by leading AI research labs and startups · SOC 2 Type II certified · $602M total funding raised · founded in 2012 with a decade-plus track record in GPU hardware and cloud
Data Center Locations
Coverage
Countries: United States
Cities: Austin, TX; San Jose, CA; Washington, DC
Multi-Region Failover: No
Compliance Regions
EU Data Residency: No EU presence
US Gov Cloud: No
India Region: No
Key Strengths
Competitive pricing, among the lowest in the market for H100 and A100 instances
Research and developer-first platform with clean UX
On-premises GPU server sales alongside cloud (Echelon product line)
NVIDIA-backed with strong GPU partnership
High-performance NVLink interconnects on SXM GPU clusters
Known Limitations
Limited geographic regions compared to hyperscalers
No spot/preemptible instances for cost savings
On-demand H100 availability can be constrained
No Windows GPU instances
Limited managed ML services compared to AWS/GCP/Azure
No native object storage offering; relies on external storage solutions
Additional Information
Support Options
24/7 dedicated support
Community
Active Discord community, GitHub presence, and developer blog; Slack community for enterprise customers
Core Proposition
GPU cloud built specifically for AI/ML workloads, offering high-performance NVIDIA GPU clusters at competitive prices with a developer-first experience and no complex pricing tiers.
Notable Customers
Allen Institute for AI
Stability AI
Hive
Nuro
Payment Methods
Credit Card, Wire Transfer
Last updated March 2026. Information subject to change.