
TensorDock

TensorDock is a global cloud provider specializing in GPU provisioning for diverse computing needs such as AI, ML, gaming, and image processing. Offering a marketplace model, TensorDock allows customers to choose from a variety of GPU types across over 100 locations worldwide, ensuring competitive pricing and flexibility. The platform is known for its on-demand, pay-as-you-go billing model with no minimum commitments, providing high accessibility and scalability to a wide range of customers.

GPUs: 2
Founded: Not specified
Countries: 8
Data Centers: 13
Team Size: 11-50

GPU Marketplace

Company Profile

Company Type: Marketplace/Aggregator
Provider Type: Marketplace
Founded: Not specified
Headquarters: Not specified
Legal Entity: TensorDock International Inc.
Funding: Bootstrapped
Team Size: 11-50

Infrastructure

GPU Fleet: NVIDIA H100 SXM, NVIDIA H100 PCIe, NVIDIA A100 80GB, NVIDIA A100 40GB, NVIDIA A10, NVIDIA A6000, NVIDIA RTX 4090, NVIDIA RTX 3090, NVIDIA RTX A5000, NVIDIA L40S, NVIDIA V100
Network Fabric: 1 Gbps minimum included, with higher-speed options available
Connectivity: 1 Gbps minimum; specific speeds not detailed
Storage: Block NVMe SSD storage
Data Center Tier: Varies by partner location; carrier-neutral colocation partners
Bare Metal: No
Availability: Deployments available across 100+ locations globally, with no quotas
Target Users: Startup, Developer, Research, Hobbyist, Enterprise

Compute & Deployment

On-Demand: Yes
Spot / Interruptible: No
Reserved Instances: Yes (monthly commitments available at discounted rates)
Bare Metal: No
VM-Based: Yes
Container-Based: Yes (Docker)
Kubernetes: No
Serverless GPU: No
Spin-Up Time: 2-5 minutes
Terraform: No
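Because instances spin up in 2-5 minutes rather than instantly, automation against a provider like this typically polls until the instance is ready. A minimal sketch of that pattern, assuming an injected status callback; the status strings here are hypothetical, not TensorDock's actual API values:

```python
import time


def wait_until_running(get_status, timeout_s=600, poll_s=15):
    """Poll a status callback until the instance reports 'running'.

    get_status: zero-argument callable returning a status string.
    The status names ('running', 'failed') are illustrative, not
    TensorDock's documented states.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = get_status()
        if status == "running":
            return True
        if status == "failed":
            raise RuntimeError("instance failed to provision")
        time.sleep(poll_s)
    raise TimeoutError("instance not running within timeout")
```

Injecting `get_status` keeps the sketch independent of any particular HTTP client and makes it trivial to test against a stubbed status sequence.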

GPU Hardware

Latest Gen: H100 SXM, H100 PCIe, L40S, A100 SXM
Legacy Support: A100 PCIe, A30, A40, RTX A6000, RTX 3090, RTX 4090, V100
Multi-GPU Nodes: Yes (up to 8x per node)
Max GPUs/Node: 8
NVLink: Yes (SXM nodes)
InfiniBand: Yes (select H100 SXM clusters)
PCIe vs SXM: Both PCIe and SXM
HGX Platform: Yes (HGX H100 8-GPU)

Pricing Model

Per Hour: Yes (primary billing unit)
Per Minute: Yes (per-minute billing supported)
Subscription: No
Spot Discount: No spot pricing
Public Pricing: Yes
Hidden Fees: None disclosed
Pay-as-you-go: Yes
Credit System: Yes (prepaid credits required)
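With per-minute, pay-as-you-go billing drawn from prepaid credits, session cost is simply the hourly list rate prorated by the minute. A small sketch of that arithmetic; the rates used in the example are illustrative, not TensorDock's published prices:

```python
def session_cost(hourly_rate_usd: float, minutes: int) -> float:
    """Cost of a session under per-minute billing at an hourly list rate."""
    return round(hourly_rate_usd * minutes / 60, 4)


def credits_remaining(balance_usd: float, hourly_rate_usd: float,
                      minutes: int) -> float:
    """Prepaid-credit balance left after a billed session."""
    return round(balance_usd - session_cost(hourly_rate_usd, minutes), 4)


# A 90-minute session at an assumed $2.50/hr rate costs $3.75,
# leaving $6.25 of an initial $10.00 credit balance.
print(session_cost(2.50, 90))            # 3.75
print(credits_remaining(10.0, 2.50, 90)) # 6.25
```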

Performance & Scaling

Multi-Node Training: Yes (supported via NCCL; manual cluster setup required)
Elastic Scaling: Manual only
Auto Scaling: No
InfiniBand: Limited (select H100 SXM clusters; Ethernet elsewhere)
NVSwitch: No
Perf Isolation: Partial (dedicated GPU instances on shared host infrastructure)
Noisy Neighbor: Partial (dedicated GPU allocation, but shared host nodes)
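"Manual cluster setup" for NCCL-based multi-node training means each node must be bootstrapped with consistent rendezvous settings before `torch.distributed` can initialize. A sketch of the standard environment variables a launcher derives per node, assuming 8-GPU nodes; the host address is a placeholder:

```python
def node_env(master_addr: str, node_rank: int, nnodes: int,
             gpus_per_node: int = 8, master_port: int = 29500) -> dict:
    """Standard torch.distributed rendezvous variables for one node of a
    manually assembled NCCL cluster. This mirrors what a launcher such as
    torchrun computes; it is a sketch, not a TensorDock-specific tool."""
    return {
        "MASTER_ADDR": master_addr,              # rank-0 node's address
        "MASTER_PORT": str(master_port),
        "WORLD_SIZE": str(nnodes * gpus_per_node),
        # Global rank of the first local process on this node:
        "RANK": str(node_rank * gpus_per_node),
        "LOCAL_WORLD_SIZE": str(gpus_per_node),
    }


# Second node (rank 1) of a two-node, 16-GPU cluster:
print(node_env("10.0.0.1", node_rank=1, nnodes=2))
```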

Developer Experience

Onboarding: Deploy in under 5 minutes via the web dashboard; credit-card signup with immediate access
Frameworks: Common frameworks likely supported, but specifics not documented
SDK Languages: Python
CLI Tooling: Basic CLI for instance management; primarily web-console driven
Jupyter: Via SSH port forwarding, or self-configured on the instance
Templates: PyTorch, TensorFlow, CUDA base image
Documentation: Basic-to-moderate, with an API reference and getting-started guides
API Features: Well-documented REST API with comprehensive server-management capabilities
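Since provisioning is driven by a REST API, a deploy request reduces to a JSON body naming the GPU model, count, and location. A sketch of building such a body; the field names and image identifier are assumptions for illustration, not TensorDock's actual API schema:

```python
import json


def deploy_request(gpu_model: str, gpu_count: int, location: str,
                   image: str = "ubuntu-22.04-cuda") -> str:
    """Build a JSON body for a hypothetical instance-deploy call.

    Field names ('gpu_model', 'gpu_count', 'location', 'image') are
    illustrative only; consult the provider's API reference for the
    real schema.
    """
    if not 1 <= gpu_count <= 8:  # marketplace nodes top out at 8x per node
        raise ValueError("gpu_count must be between 1 and 8")
    return json.dumps({
        "gpu_model": gpu_model,
        "gpu_count": gpu_count,
        "location": location,
        "image": image,
    })


print(deploy_request("H100 SXM", 8, "Dallas TX"))
```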

Security & Compliance

Security: Isolation of host SSH access, certified data center environments, regular security monitoring
Compliance: Data centers certified to high standards; additional security measures for host isolation and data protection
Used by AI researchers and startups seeking cost-effective GPU access
Competitive pricing transparency via public pricing page
Active Discord community

Data Center Locations

Coverage

Countries: United States, Germany, Netherlands, United Kingdom, Singapore, Japan, Canada, Australia
Cities: Dallas TX, Chicago IL, Los Angeles CA, New York NY, Atlanta GA, Seattle WA, Frankfurt, Amsterdam, London, Singapore, Tokyo, Toronto, Sydney
Multi-Region Failover: Yes (manual)
Latency Tiers: Standard cloud latency
Regions: North America, Europe, Asia-Pacific, South America

Compliance Regions

EU Data Residency: Yes (Frankfurt, Amsterdam)
US Gov Cloud: No
India Region: No

Key Strengths

Among the lowest GPU pricing on the market
Aggregated capacity from global data center partners increases GPU variety and availability
No long-term commitments required
Quick self-serve signup with no enterprise onboarding friction
Wide GPU model selection from budget RTX to high-end H100

Known Limitations

No published uptime SLA
Availability is not guaranteed; dependent on partner data center stock
No bare metal access
Limited enterprise features (no VPC, no private networking options)
Documentation and support quality varies
No built-in model marketplace or MLOps tooling
Reliability can vary across partner locations

Additional Information

Support Options

["Direct messaging for queries","Enterprise support provided by dedicated professionals"]

Community

Active Discord server

Core Proposition

Aggregated marketplace of distributed GPU nodes offering some of the lowest per-hour prices for cloud GPU compute, sourced from a global network of data center partners.

Payment Methods

Credit Card, Crypto
Last updated March 2026. Information subject to change.