GPU Cloud Provider · Luxembourg
Gcore
Gcore is a European cloud provider specializing in AI infrastructure, most notably its Generative AI Cluster built on NVIDIA A100 and H100 GPUs. It offers high-performance, scalable cloud services aimed at accelerating AI/ML model development across sectors including healthcare, finance, and gaming.
GPUs
1
Founded
2014
Countries
12
Data Centers
12
Uptime SLA
99.9%
Team Size
1000+
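The 99.9% uptime SLA above translates into a concrete downtime budget; a quick calculation (pure arithmetic, not tied to any provider API):

```python
# Downtime allowed under an uptime SLA, for common billing windows.
def downtime_budget_minutes(sla: float, hours: float) -> float:
    """Minutes of allowed downtime for a window of `hours` at `sla` uptime."""
    return (1.0 - sla) * hours * 60.0

per_month = downtime_budget_minutes(0.999, 30 * 24)   # 30-day month
per_year = downtime_budget_minutes(0.999, 365 * 24)

print(f"99.9% SLA allows ~{per_month:.1f} min/month, ~{per_year / 60:.1f} h/year")
# ~43.2 min/month, ~8.8 h/year
```

At three nines, roughly 43 minutes of monthly downtime is within the SLA, which matters when sizing failover for training jobs.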
GPU Marketplace

NVIDIA GB200 NVL4 · On-Demand
Company Profile
Company Type: Scale-up
Provider Type: Cloud Provider
Founded: 2014
Headquarters: Luxembourg
Legal Entity: Gcore Luxembourg S.A.
Funding: Private
Team Size: 1000+
Infrastructure
GPU Fleet: NVIDIA H100 SXM5 80GB, NVIDIA H100 NVL 94GB, NVIDIA A100 80GB, NVIDIA L40S 48GB, NVIDIA RTX 4090 24GB
Network Fabric: InfiniBand
Connectivity: Over 110 Tbps network capacity
Storage: S3/NFS
Data Center Tier: Tier 3 certified, carrier-neutral colocation
Bare Metal: Yes
Availability: Not specified
Enterprise · Startup · Media & Entertainment · Gaming · AI/ML Teams · Government
Compute & Deployment
On-Demand: Yes
Spot / Interruptible: No
Reserved Instances: Yes (monthly and longer-term commitments available)
Bare Metal: Yes
VM-Based: Yes
Container-Based: Yes (Docker)
Kubernetes: Yes (managed Kubernetes available)
Serverless GPU: No
Spin-Up Time: 2-5 minutes
Terraform: Yes (official provider)
GPU Hardware
Latest Gen: H100 SXM, H100 PCIe, L40S
Legacy Support: A100 80GB, A100 40GB
Multi-GPU Nodes: Yes (up to 8x per node)
Max GPUs/Node: 8
NVLink: Yes (NVLink on SXM nodes)
InfiniBand: Yes (HDR 200Gbps on H100 SXM clusters)
PCIe vs SXM: Both PCIe and SXM
HGX Platform: Yes (HGX H100 8-GPU)
Pricing Model
Per Hour: Yes (primary billing unit)
Per Minute: No
Subscription: Yes (monthly plans available)
Reserved Discount: Yes (discounts for longer-term commitments; exact percentage not publicly disclosed)
Spot Discount: No spot pricing
Public Pricing: Yes
Hidden Fees: None disclosed
Pay-as-you-go: Yes
Credit System: Yes (prepaid credits system)
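With per-hour on-demand billing and a reserved discount whose exact rate is not disclosed, the break-even point between the two models is simple arithmetic. The rates below are illustrative placeholders, not Gcore's actual prices:

```python
# Break-even utilization between on-demand and reserved GPU pricing.
# Rates are illustrative placeholders, NOT actual Gcore prices.
ON_DEMAND_PER_HOUR = 3.00      # hypothetical $/GPU-hour on demand
RESERVED_PER_MONTH = 1500.00   # hypothetical $/GPU-month reserved
HOURS_PER_MONTH = 730

def monthly_cost_on_demand(hours_used: float) -> float:
    return hours_used * ON_DEMAND_PER_HOUR

# Reserved is cheaper once monthly usage exceeds this many hours:
break_even_hours = RESERVED_PER_MONTH / ON_DEMAND_PER_HOUR

print(f"Reserved wins above {break_even_hours:.0f} h/month "
      f"({break_even_hours / HOURS_PER_MONTH:.0%} utilization)")
```

Plugging in actual quoted rates gives the utilization threshold above which a monthly commitment pays off.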
Performance & Scaling
Multi-Node Training: Yes (multi-node distributed training supported with NCCL)
Elastic Scaling: Manual only
Auto Scaling: No
InfiniBand: Yes (InfiniBand networking available on HPC cluster configurations)
NVSwitch: Yes (on SXM nodes with H100/A100)
SLA: 99.9%
Perf Isolation: Yes (dedicated bare metal GPU instances available)
Noisy Neighbor Protection: Yes (dedicated bare metal instances are not shared)
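Multi-node training over NCCL addresses GPUs by a global rank derived from the node index and the local GPU index; a minimal, launcher-agnostic sketch of that mapping (no Gcore-specific API involved):

```python
# Global rank layout for multi-node training: world_size = nodes * gpus_per_node,
# global_rank = node_rank * gpus_per_node + local_rank. Launchers such as
# torchrun export these values as WORLD_SIZE / RANK / LOCAL_RANK env vars.
def global_rank(node_rank: int, local_rank: int, gpus_per_node: int = 8) -> int:
    return node_rank * gpus_per_node + local_rank

def world_size(nodes: int, gpus_per_node: int = 8) -> int:
    return nodes * gpus_per_node

# Example: 4 nodes of 8x H100 -> 32 ranks; GPU 3 on node 2 is global rank 19.
assert world_size(4) == 32
assert global_rank(2, 3) == 19
```

On the 8-GPU HGX nodes listed above, `gpus_per_node=8` matches the Max GPUs/Node figure.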
Developer Experience
Onboarding: Deploy in minutes via web console or API; account verification may be required for large reservations
Frameworks: Not specified
SDK Languages: Python
CLI Tooling: Basic CLI and REST API for instance management; web console is the primary interface
Jupyter: Via SSH port forwarding or custom setup
Templates: PyTorch, TensorFlow, CUDA
Documentation: Comprehensive docs with API reference, tutorials, and quickstart guides
API Features: CLI, SDK, REST API, Terraform provider
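Instance management is exposed via a REST API and SDK; the sketch below only builds a request payload and does not call any endpoint. The field names ("flavor", "region", "image") are illustrative assumptions, not Gcore's documented schema:

```python
import json

# Hypothetical request body for provisioning a GPU instance via a REST API.
# Field names ("flavor", "region", "image") are illustrative assumptions,
# not Gcore's documented schema -- consult the official API reference.
def build_instance_request(flavor: str, region: str, image: str) -> dict:
    return {
        "flavor": flavor,    # e.g. an 8x H100 SXM instance type
        "region": region,    # e.g. "Luxembourg"
        "image": image,      # e.g. a CUDA-enabled Ubuntu image
    }

payload = build_instance_request("gpu-h100-8x", "Luxembourg", "ubuntu-22.04-cuda")
print(json.dumps(payload, indent=2))
```

The same payload shape would typically be driven from the CLI or the Terraform provider rather than raw HTTP.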
Security & Compliance
Security: Compliant with PCI DSS, ISO/IEC 27001, and GDPR
Compliance: GDPR, PCI DSS, ISO/IEC 27001
ISO 27001 certified
PCI DSS compliant
Established since 2014 with proven infrastructure scale
Operates 30+ PoPs globally
Serves enterprise and telecom customers across multiple continents
Data Center Locations
Coverage
Countries: Luxembourg, Germany, Netherlands, France, United States, Singapore, Japan, Brazil, Poland, United Arab Emirates, Hong Kong, Turkey
Cities: Luxembourg, Frankfurt, Amsterdam, Paris, Manassas VA, Singapore, Tokyo, São Paulo, Warsaw, Dubai, Hong Kong, Istanbul
Multi-Region Failover: Yes (manual configuration)
Latency Tiers: Ultra-low (<1 ms intra-DC); standard cloud latency across regions
Europe · North America · Asia-Pacific · Middle East · Africa
Compliance Regions
EU Data Residency: Yes (Luxembourg, Frankfurt, Amsterdam, Paris, Warsaw)
US Gov Cloud: No
India Region: No
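For workloads that must stay within the EU, region selection can be enforced programmatically. A small guard built from the residency regions listed above (region names as given in this profile, not an official API list):

```python
# EU data-residency regions as listed in this profile (not an official API list).
EU_RESIDENCY_REGIONS = {"Luxembourg", "Frankfurt", "Amsterdam", "Paris", "Warsaw"}

def assert_eu_residency(region: str) -> str:
    """Raise if `region` is not an EU data-residency location."""
    if region not in EU_RESIDENCY_REGIONS:
        raise ValueError(f"{region!r} does not guarantee EU data residency")
    return region

assert_eu_residency("Frankfurt")       # passes
# assert_eu_residency("Singapore")     # would raise ValueError
```

Wiring such a check into deployment scripts prevents a GDPR-sensitive workload from being scheduled into a non-EU region by accident.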
Key Strengths
Global edge network combined with GPU cloud in 30+ locations
Strong European presence with GDPR-compliant infrastructure
Integrated CDN, DDoS protection, and GPU compute on one platform
Competitive H100 pricing versus hyperscalers
Low-latency edge GPU for inference workloads
Known Limitations
Smaller GPU fleet than hyperscalers or dedicated AI cloud providers
Limited AI-specific tooling and MLOps integrations compared to specialized platforms
Spot/preemptible GPU instances not offered
Less mature AI developer ecosystem and community
Documentation depth for advanced ML workflows can be limited
Additional Information
Support Options
24/7 technical support
Community
Discord server and community forum; smaller community compared to major AI cloud platforms
Green Energy
Committed to energy efficiency with renewable energy targets in select regions; details not fully disclosed
Core Proposition
Global edge and cloud infrastructure provider offering GPU compute alongside CDN, DDoS protection, and edge services from owned PoPs across 6 continents.
Payment Methods
Credit Card · Wire Transfer · PayPal
Last updated March 2026. Information subject to change.