GPU Cloud Provider · London, UK

FluidStack

FluidStack is an AI-focused cloud provider offering GPU compute capacity primarily to AI companies. It operates a dual business model: a marketplace that aggregates third-party GPU capacity, and a private cloud built on GPU infrastructure it owns and operates directly. The company has grown rapidly, backed by strategic partnerships and large-scale financing.

GPUs: 2
Founded: 2019
Countries: 6
Data Centers: 8
Team Size: 11-50

GPU Marketplace

Company Profile

Company Type: Marketplace/Aggregator
Provider Type: Marketplace
Founded: 2019
Headquarters: London, UK
Legal Entity: FluidStack Ltd
Funding: Series A/B/C/D
Team Size: 11-50

Infrastructure

GPU Fleet: NVIDIA H100 SXM, NVIDIA H100 PCIe, NVIDIA A100 80GB SXM, NVIDIA A100 40GB, NVIDIA A6000, NVIDIA RTX 4090, NVIDIA RTX 3090, NVIDIA V100
Network Fabric: InfiniBand
Connectivity: Information not provided
Storage: Information not provided
Data Center Tier: Carrier-neutral colocation via partner data centers globally
Bare Metal: Yes, bare metal GPU servers available
Availability: GA
Target Customers: Enterprise · Startup · Research · AI/ML Teams

Compute & Deployment

On-Demand: Yes
Spot / Interruptible: Yes (preemptible instances available at reduced rates)
Reserved Instances: Yes (monthly and longer-term commitments available)
Bare Metal: Yes (bare metal GPU servers available)
VM-Based: Yes
Container-Based: Yes (Docker)
Kubernetes: No
Serverless GPU: No
Spin-Up Time: 2-5 minutes
Terraform: No
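The on-demand, spot, and Docker-based options above can be combined when requesting an instance programmatically. The sketch below assembles an illustrative request payload; the field names and values are assumptions for illustration, not FluidStack's actual API schema.

```python
import json

def build_instance_request(gpu_type: str, gpu_count: int,
                           spot: bool = False,
                           image: str = "nvcr.io/nvidia/pytorch:24.05-py3") -> dict:
    """Assemble a JSON-serializable request for a GPU instance.

    spot=True requests a preemptible instance at a reduced rate;
    the default requests standard on-demand capacity.
    Field names here are hypothetical, not a real API contract.
    """
    return {
        "gpu_type": gpu_type,          # e.g. "H100_SXM"
        "gpu_count": gpu_count,        # up to 8 per node, per the specs below
        "billing": "spot" if spot else "on_demand",
        "runtime": "docker",           # container-based deployment (Docker)
        "image": image,
    }

request = build_instance_request("H100_SXM", 8, spot=True)
print(json.dumps(request, indent=2))
```

Swapping `spot=True` for the default on-demand mode is the only change needed to trade interruptibility for a lower rate.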

GPU Hardware

Latest Gen: H100 SXM, H100 PCIe, H200 SXM
Legacy Support: A100 SXM, A100 PCIe, V100, A10
Multi-GPU Nodes: Yes (up to 8x per node)
Max GPUs/Node: 8
NVLink: Yes (NVLink on SXM nodes)
InfiniBand: Yes (HDR 200Gbps on H100 clusters)
PCIe vs SXM: Both PCIe and SXM
HGX Platform: Yes (HGX H100 8-GPU)

Pricing Model

Per Hour: Yes (primary billing unit)
Per Minute: No
Subscription: Yes (monthly plans available for dedicated clusters)
Reserved Discount: Yes (discounts for longer-term commitments; specific percentages not publicly disclosed)
Spot Discount: Yes (spot/interruptible instances offered at significant discounts; exact percentage not publicly disclosed)
Public Pricing: Partial (some GPUs listed; enterprise pricing requires contact)
Hidden Fees: None disclosed
Setup Fees: None disclosed
Pay-as-you-go: Yes

Performance & Scaling

Multi-Node Training: Yes (multi-node clusters supported with NCCL)
Elastic Scaling: Manual only
Auto Scaling: No
InfiniBand: Yes (InfiniBand available on H100 SXM clusters)
NVSwitch: Yes (on SXM nodes)
Perf Isolation: Yes (dedicated bare metal)
Noisy Neighbor Mitigation: Yes (bare metal, no sharing)
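In a multi-node NCCL job, each GPU process is addressed by a global rank derived from its node and local GPU index. A minimal sketch of that mapping, assuming the 8 GPUs per node listed above (the helper names are illustrative, not a specific launcher's API):

```python
GPUS_PER_NODE = 8  # Max GPUs/Node per the hardware specs above

def global_rank(node_rank: int, local_rank: int,
                gpus_per_node: int = GPUS_PER_NODE) -> int:
    """Map (node index, local GPU index) to the global rank used by
    NCCL process groups in multi-node training."""
    return node_rank * gpus_per_node + local_rank

def world_size(num_nodes: int, gpus_per_node: int = GPUS_PER_NODE) -> int:
    """Total number of ranks participating in the job."""
    return num_nodes * gpus_per_node

# A 4-node H100 SXM cluster has 32 ranks;
# GPU 3 on node 2 is global rank 2*8 + 3 = 19.
print(world_size(4), global_rank(2, 3))
# -> 32 19
```

This is the same rank convention launchers such as `torchrun` expose via the `RANK`, `LOCAL_RANK`, and `WORLD_SIZE` environment variables.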

Developer Experience

Onboarding: Self-serve sign-up with API or web console; instances deployable within minutes
Frameworks: Information not provided
SDK Languages: Python
CLI Tooling: Basic CLI and REST API for instance management
Jupyter: Via SSH port forwarding
Documentation: Basic to moderate, with API reference and getting-started guides
API Features: Web console, API for programmatic access
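Since Jupyter access works via SSH port forwarding rather than a managed endpoint, the tunnel command is worth spelling out. The sketch below builds it in Python; the host address and username are hypothetical placeholders for your instance's details.

```python
import shlex

def jupyter_tunnel_cmd(host: str, user: str = "ubuntu",
                       local_port: int = 8888, remote_port: int = 8888) -> str:
    """Build the SSH command that forwards a remote Jupyter server
    (listening on remote_port) to http://localhost:<local_port>.
    -N opens the tunnel without starting a remote shell."""
    args = ["ssh", "-N", "-L",
            f"{local_port}:localhost:{remote_port}", f"{user}@{host}"]
    return shlex.join(args)

# Hypothetical instance address; substitute your node's IP or hostname.
print(jupyter_tunnel_cmd("203.0.113.10"))
# -> ssh -N -L 8888:localhost:8888 ubuntu@203.0.113.10
```

Run the printed command in a terminal, then open `http://localhost:8888` in a browser to reach the notebook server on the instance.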

Security & Compliance

Security: Complies with HIPAA, GDPR, ISO 27001, and SOC 2 Type 2
Compliance: HIPAA, GDPR, ISO 27001, SOC 2 Type 2
Partnered with major global data centers · Used by AI startups and research teams · NVIDIA GPU supply agreements

Data Center Locations

Coverage

Countries: United States, France, Germany, Netherlands, United Kingdom, Canada
Cities: Dallas TX, Los Angeles CA, Ashburn VA, Paris, Frankfurt, Amsterdam, London, Montreal
Regions: North America · Europe · Asia-Pacific

Compliance Regions

EU Data Residency: Yes (Paris, Frankfurt, Amsterdam)
US Gov Cloud: No
India Region: No

Key Strengths

Aggregated global GPU supply enabling large cluster availability
Competitive pricing versus hyperscalers
Access to high-demand GPUs like H100 at scale
Flexible on-demand and reserved options
Simple API-driven provisioning

Known Limitations

Uptime SLA not formally published
GPU availability can vary by region and model
Limited managed services compared to hyperscalers
Smaller ecosystem of integrations and tooling
Support quality dependent on underlying partner data centers

Additional Information

Support Options

Technical support included, 15-minute response times, 24/7 monitoring

Community

Limited public community presence; primarily direct enterprise engagement

Core Proposition

Aggregates underutilized GPU capacity from data centers worldwide into a single marketplace, offering competitive pricing for AI/ML workloads at scale.

Payment Methods

Credit Card · Wire Transfer
Last updated March 2026. Information subject to change.