Public Cloud Compute: GPU

Cloud servers specially designed for processing massively parallel tasks

GPU instances integrate NVIDIA graphics processors to meet the requirements of massively parallel processing. Since they are integrated into the OVHcloud solution, you get the advantages of on-demand resources and hourly billing. These cloud servers are adapted to the needs of machine learning and deep learning.

Powered by NVIDIA

These GPUs, among the most powerful on the market, are designed for datacentre operations. They speed up calculations in the fields of artificial intelligence (AI), 3D rendering and graphics processing.

NVIDIA NGC

To provide the best user experience, OVHcloud and NVIDIA have partnered to offer a best-in-class GPU-accelerated platform for deep learning, high-performance computing and artificial intelligence (AI). It is the simplest way to deploy and maintain GPU-accelerated containers, via a full catalogue of images. Find out more.

From one to four cards with guaranteed performance

NVIDIA cards are delivered directly to the instance via PCI Passthrough, without a virtualisation layer, so that all of their power is dedicated to your use. Up to four cards can be connected to combine their performance. As a result, the hardware delivers all of its computing power to your application.
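As a rough illustration of combining cards, peak aggregate throughput can be sketched in Python. The FP32 figures below come from the specification table on this page; real-world scaling depends on the workload, so treat this as an upper bound rather than a benchmark:

```python
# Peak FP32 throughput per card (teraFLOPS), from the spec table on this page.
FP32_TFLOPS = {
    "H100": 51.0,
    "A100": 19.5,
    "L40S": 91.6,
    "L4": 30.3,
    "V100S": 16.4,
    "V100": 14.1,
}

def aggregate_fp32(card: str, count: int) -> float:
    """Peak FP32 teraFLOPS for `count` cards of the given model (1 to 4)."""
    if not 1 <= count <= 4:
        raise ValueError("instances support one to four cards")
    return FP32_TFLOPS[card] * count

print(aggregate_fp32("H100", 4))  # 204.0
```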


ISO/IEC 27001, 27701 and healthcare data hosting compliance

NVIDIA H100, A100, L40S, L4, V100S and V100 specifications

Each card is listed with its performance (FP64, FP32 and FP16 throughput), management interface and memory.

H100 80 GB
  • FP64: 26 teraFLOPS
  • FP32: 51 teraFLOPS
  • FP16: 205 teraFLOPS
  • Interface: PCIe 5.0
  • Capacity: 80 GB HBM2e
  • Bandwidth: 2000 GB/s

A100 80 GB
  • FP64: 9.7 teraFLOPS
  • FP32: 19.5 teraFLOPS
  • FP16: 78.0 teraFLOPS
  • Interface: PCIe 4.0
  • Capacity: 80 GB HBM2e
  • Bandwidth: 2039 GB/s

L40S 48 GB
  • FP64: 1.43 teraFLOPS
  • FP32: 91.6 teraFLOPS
  • FP16: 91.6 teraFLOPS
  • Interface: PCIe 4.0
  • Capacity: 48 GB GDDR6
  • Bandwidth: 864 GB/s

L4 24 GB
  • FP64: 0.5 teraFLOPS
  • FP32: 30.3 teraFLOPS
  • FP16: 30.3 teraFLOPS
  • Interface: PCIe 4.0
  • Capacity: 24 GB GDDR6
  • Bandwidth: 300 GB/s

V100S 32 GB
  • FP64: 8.2 teraFLOPS
  • FP32: 16.4 teraFLOPS
  • FP16: 32.7 teraFLOPS
  • Interface: PCIe 3.0
  • Capacity: 32 GB HBM2
  • Bandwidth: 1133 GB/s

V100 16 GB
  • FP64: 7.1 teraFLOPS
  • FP32: 14.1 teraFLOPS
  • FP16: 28.7 teraFLOPS
  • Interface: PCIe 3.0
  • Capacity: 16 GB HBM2
  • Bandwidth: 897 GB/s

 

Use cases

Image recognition

Extracting data from images to classify them, identify an element or build richer documents is necessary in many industries. With frameworks like Caffe2 combined with Tesla V100S GPUs, use cases such as medical imaging, social network analysis, and public protection and security become easily accessible.

Situation analysis

Real-time analysis is required in some cases, where an appropriate response to varied and unpredictable situations is expected. This kind of technology is used in self-driving cars and internet traffic analysis, for example. This is where deep learning comes in: it builds neural networks that learn independently through a training stage.

Human interaction

In the past, people learned to communicate with machines. We are now in an era where machines are learning to communicate with people. Whether through speech recognition or emotion recognition in sound and video, tools such as TensorFlow push the boundaries of these interactions, opening up a multitude of new uses.

Need to train your artificial intelligence with GPUs?

With our AI Training solution, you can train your AI models efficiently and easily, and optimise your GPU computing resources.

Focus on your business instead of the infrastructure that supports it. Launch your training tasks via a command line, and pay for the resources used by the minute.

Get started with OVHcloud AI Training

Usage

1

Get started

Launch your instance by choosing a T2 model (the flavour with V100S GPUs, used in the example below) and the NGC image for your project.

2

Configure

$ docker pull nvcr.io/nvidia/tensorflow:<tag>
$ docker run --gpus all -it --rm nvcr.io/nvidia/tensorflow:<tag>

3

Use

Your AI framework is ready for processing.

Ready to get started?

Create an account and launch your services in minutes

Pricing Public Cloud

GPU billing

GPU instances are billed like all of our other instances, on a pay-as-you-go basis at the end of each month. The price depends on the size of the instance you have booted, and the duration of its use.
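The pay-as-you-go calculation is simply the hourly rate multiplied by the hours used. As a minimal sketch (the rate below is a hypothetical placeholder, not an actual OVHcloud price; check the Public Cloud price list for current rates):

```python
# Pay-as-you-go billing sketch: cost = hourly rate x hours used.
def monthly_cost(hourly_rate_eur: float, hours_used: float) -> float:
    """Cost accrued over the month for one instance, rounded to cents."""
    return round(hourly_rate_eur * hours_used, 2)

# A hypothetical 2.00 EUR/hour instance running 10 hours a day for 20 days:
print(monthly_cost(2.00, 10 * 20))  # 400.0
```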

FAQ

What SLA does OVHcloud guarantee for a GPU instance?

The SLA guarantees 99.999% monthly availability on GPU instances. For further information, please refer to the Terms & conditions.
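A 99.999% monthly availability target leaves only a very small downtime allowance. As a back-of-the-envelope check (assuming a 30-day month; the contractual definition in the Terms & conditions prevails):

```python
# Maximum downtime permitted by a monthly availability target.
def max_downtime_seconds(availability_pct: float, days_in_month: int = 30) -> float:
    """Seconds of downtime allowed per month at the given availability."""
    month_seconds = days_in_month * 24 * 3600
    return month_seconds * (1 - availability_pct / 100)

# 99.999% over a 30-day month allows roughly 26 seconds of downtime:
print(round(max_downtime_seconds(99.999), 1))  # 25.9
```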

Which hypervisor is used for instance virtualisation?

Just like other instances, GPU instances are virtualised by the KVM hypervisor in the Linux kernel.

What is PCI Passthrough?

Cards with GPUs are served via the physical server's PCI bus. PCI Passthrough is a hypervisor feature that allows you to dedicate hardware to a virtual machine by giving direct access to the PCI bus, without going through virtualisation.

Can I resize a GPU instance?

Yes, GPU instances can be upgraded to a higher model after a reboot. However, they cannot be downgraded to a lower model.

Do GPU instances have anti-DDoS protection?

Yes, our anti-DDoS protection is included with all OVHcloud solutions at no extra cost.

Can I switch to hourly billing from an instance that is currently billed monthly?

If you have monthly billing set up, you cannot switch to hourly billing over the course of the month. Before you launch an instance, please take care to select the billing method that is best suited to your project.