
GPU Cloud Provider · San Francisco

The San Francisco Compute Company

The San Francisco Compute Company is a GPU cloud provider specializing in instances suited to training large AI models. It offers flexible, short-term access to GPU clusters without long-term contracts, which appeals to startups and businesses focused on large-scale model training.

Founded: Not specified
Countries: 1
Data Centers: 1
Team Size: 1-10

Company Profile

Company Type: Startup
Provider Type: Bare Metal Provider
Founded: Not specified
Headquarters: San Francisco
Funding: Seed
Team Size: 1-10

Infrastructure

GPU Fleet: NVIDIA H100 SXM, NVIDIA H100 NVL
Network Fabric: InfiniBand (support in VMs scheduled for Q2 2026)
Connectivity: 400 Gbps RDMA compute fabric, 100 Gbps primary in-band network, 1 Gbps out-of-band IPMI management network
Storage: 1.5 TB+ NVMe, petabyte-scale storage options
Data Center Tier: Carrier-neutral colocation
Bare Metal: Yes
Availability: Operational, with limited clusters available for immediate deployment
Target Audience: Startup, Research, Enterprise
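To put the listed line rates in perspective, a quick back-of-envelope calculation shows the ideal time to move data over each network. The payload size is a hypothetical example, and the figures ignore protocol overhead.

```python
# Back-of-envelope transfer time over the listed networks.
# Line rates come from the spec above; the payload size is illustrative.
GBPS = 1e9  # bits per second per "Gbps"

def transfer_seconds(payload_bytes: float, line_rate_gbps: float) -> float:
    """Ideal (zero-overhead) time to move payload_bytes at the given line rate."""
    return payload_bytes * 8 / (line_rate_gbps * GBPS)

checkpoint = 1e12  # a hypothetical 1 TB checkpoint
print(transfer_seconds(checkpoint, 400))  # 400 Gbps RDMA fabric: 20.0 s
print(transfer_seconds(checkpoint, 100))  # 100 Gbps in-band network: 80.0 s
```

Real transfers will be slower once storage throughput and protocol overhead are accounted for; the point is only that the 400 Gbps fabric is the bottleneck-free path for inter-node traffic.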

Compute & Deployment

On-Demand: Yes
Spot / Interruptible: Yes (interruptible instances at a significant discount)
Reserved Instances: Yes (committed capacity contracts available)
Bare Metal: Yes
VM-Based: No
Container-Based: Yes (Docker)
Kubernetes: No
Serverless GPU: No
Spin-Up Time: Minutes

GPU Hardware

Latest Gen: H100 SXM, H200 SXM
Legacy Support: A100 SXM
Multi-GPU Nodes: Yes (up to 8x per node)
Max GPUs/Node: 8
NVLink: Yes (NVLink on SXM nodes)
InfiniBand: Yes (InfiniBand interconnect)
PCIe vs SXM: SXM only
HGX Platform: Yes (HGX H100 8-GPU)
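An 8x H100 SXM node carries 8 × 80 GB = 640 GB of HBM, which gives a quick way to judge when a training job crosses from single-node to multi-node territory. The 16 bytes/param figure below is a common mixed-precision Adam estimate (bf16 weights and gradients, fp32 master weights and two optimizer moments), an assumption rather than a guarantee.

```python
# Rough check of whether a model's training state fits in one HGX H100 node.
# 80 GB HBM per H100 SXM is the published spec; 16 bytes/param is an
# assumed mixed-precision Adam footprint, not a measured number.
GB = 1024**3
NODE_HBM_GB = 8 * 80  # 8x H100 SXM per node = 640 GB

def training_state_gb(params: float, bytes_per_param: float = 16) -> float:
    """Estimated training-state size in GB for a model with `params` parameters."""
    return params * bytes_per_param / GB

print(training_state_gb(7e9))   # ~104 GB -> fits on one node
print(training_state_gb(70e9))  # ~1043 GB -> needs multi-node sharding
```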

Pricing Model

Per Hour: Yes (primary billing unit)
Per Minute: No
Subscription: Yes (forward contracts / long-term commitments available)
Reserved Discount: Yes (via forward contracts for longer commitments; exact percentage not publicly disclosed)
Spot Discount: No spot pricing
Public Pricing: Partial (some GPUs listed)
Hidden Fees: None disclosed
Setup Fees: None disclosed
Pay-as-you-go: Yes
Credit System: No
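With per-hour billing as the primary unit and no disclosed setup fees, estimating a job's cost reduces to GPUs × hours × rate. The $/GPU-hour figure below is a placeholder, not a quoted price.

```python
# Sketch of per-hour billing math for a GPU reservation.
# The $/GPU-hour rate is a placeholder, not a quoted price.
def estimate_cost(gpus: int, hours: float, usd_per_gpu_hour: float) -> float:
    """Per-hour billing: GPUs x hours x rate; no minimum or setup fee listed."""
    return gpus * hours * usd_per_gpu_hour

# One 8x node for a day at a hypothetical $2.50/GPU-hour:
print(estimate_cost(8, 24, 2.50))  # 480.0
```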

Performance & Scaling

Multi-Node Training: Yes (multi-node clusters supported, NCCL-based distributed training)
Elastic Scaling: Manual only
Auto Scaling: No
InfiniBand: Yes (InfiniBand interconnect between nodes)
NVSwitch: Yes (on H100 SXM nodes)
Perf Isolation: Yes (dedicated bare metal)
Noisy Neighbor Protection: Yes (bare metal, no sharing)
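NCCL-based multi-node training typically means each worker process is launched with the standard torch.distributed environment-variable convention. The sketch below shows what one worker in a hypothetical 2-node × 8-GPU job would see; the head-node address is a placeholder.

```python
# Minimal sketch of the environment a NCCL/torch.distributed launcher sets up
# for a 2-node x 8-GPU job. The head-node address is a placeholder.
NODES, GPUS_PER_NODE = 2, 8

def rank_env(node: int, local_rank: int) -> dict:
    """Env vars one worker would see under the standard launcher convention."""
    return {
        "MASTER_ADDR": "10.0.0.1",               # placeholder head-node address
        "MASTER_PORT": "29500",                  # torch.distributed default port
        "WORLD_SIZE": str(NODES * GPUS_PER_NODE),  # 16 workers total
        "RANK": str(node * GPUS_PER_NODE + local_rank),  # global rank
        "LOCAL_RANK": str(local_rank),           # GPU index within the node
    }

print(rank_env(1, 3)["RANK"])  # global rank 11 of 16
```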

Developer Experience

Onboarding: Self-serve sign-up with relatively fast provisioning via web UI and CLI
CLI Tooling: Basic CLI for cluster management and SSH access
Jupyter: Via SSH port forwarding
Documentation: Basic getting-started guide with API reference
API Features: CLI; SDK expected (implied integration capabilities)
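"Jupyter via SSH port forwarding" typically means mapping a local port to the notebook server running on the node, then browsing to localhost. The snippet builds that standard `ssh -L` command; the user and host names are placeholders, not provider-specific values.

```python
# How Jupyter over SSH port forwarding typically works: forward a local port
# to the notebook server on the remote node. User and host are placeholders.
import shlex

def jupyter_tunnel(user: str, host: str, port: int = 8888) -> str:
    """Build an ssh -L command mapping localhost:port to the remote notebook."""
    return shlex.join(
        ["ssh", "-N", "-L", f"{port}:localhost:{port}", f"{user}@{host}"]
    )

print(jupyter_tunnel("ubuntu", "node-0.example"))
# ssh -N -L 8888:localhost:8888 ubuntu@node-0.example
```

After starting `jupyter lab --no-browser` on the node and running the tunnel, the notebook is reachable at http://localhost:8888 on the local machine.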

Security & Compliance

Security
San Francisco-based team with AI infrastructure focus
Bare metal H100 access at launch
Transparent public pricing

Data Center Locations

Coverage

Countries: United States
Cities: San Francisco, CA
Regions: North America

Compliance Regions

EU Data Residency: No EU presence
US Gov Cloud: No
India Region: No

Key Strengths

Highly competitive H100 pricing vs. hyperscalers
Bare metal cluster access with full hardware control
Startup-friendly with minimal bureaucracy
Direct H100 SXM availability without long waitlists
Simple, transparent pricing model

Known Limitations

Very small team with limited support resources
No Windows support
Limited geographic presence (US only)
Minimal ecosystem tooling compared to AWS/GCP/Azure
Limited public documentation and community resources
No managed ML platform features

Additional Information

Support Options

24/7 support via Slack, phone, and email; dedicated Slack channels for large clusters

Core Proposition

Low-cost, high-performance bare metal GPU clusters in San Francisco optimized for AI training workloads with spot and on-demand pricing.

Payment Methods

Credit Card, Wire Transfer
Last updated March 2026. Information subject to change.