Dedicated Cluster

Overview

The vDCs provided to our Customers are based on shared resources.

To meet specific Customer constraints (for example, the need to use the Customer's own Microsoft SPLA), Cloud Avenue offers to dedicate a group of physical servers to the Customer within the shared platform.

Cloud Avenue provides the tools the Customer needs to verify that its VMs could not have run outside its dedicated cluster, so that the Customer can demonstrate compliance with its software vendors' licensing rules.

The basics

The Customer orders several physical servers that together constitute a dedicated Provider vDC, on which one or more vDCs are built, grouping together all the resources of those physical servers.

The recommended resource allocation mode is the Reservation Pool. This way of allocating resources has several advantages:

  • Limits are set at the vDC level and, in our case, are aligned with the capacity of the physical cluster; this allows VMs to access more resources as long as they are available in the vDC (“burst mode”).
  • The percentage of guaranteed resources and the vCPU frequency can be set when creating the vDC, which allows the Customer to manage the consolidation ratio directly.

The parameters that can be set when creating the vDC are:

  • CPU reservation percentage
  • RAM reservation percentage
  • vCPU frequency (GHz)

In addition to these parameters, the overbooking rate results from the number of VMs that the Customer creates in the vDC.
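As an illustration, the sketch below shows how these parameters and the number of VMs created combine into guaranteed resources and an overbooking rate. All figures (reservation percentages, vCPU frequency, number of vCPUs created) are hypothetical examples, not Cloud Avenue defaults; only the blade size is taken from the example configuration further down.

# Illustrative sketch only: the figures are hypothetical, not Cloud Avenue defaults.
# It shows how the Reservation Pool settings and the number of VMs created
# translate into guaranteed resources and an overbooking rate.

physical_cores = 3 * 56       # e.g. 3 blades of 56 cores each
core_frequency_ghz = 2.2      # physical core frequency
ram_gb = 3 * 576              # e.g. 3 blades of 576 GB each

# Parameters chosen when creating the vDC
cpu_reservation_pct = 50      # % of CPU guaranteed to the vDC
ram_reservation_pct = 75      # % of RAM guaranteed to the vDC
vcpu_frequency_ghz = 1.1      # frequency assigned to each vCPU

# Guaranteed share of the pool, derived from the reservation percentages
guaranteed_cpu_ghz = physical_cores * core_frequency_ghz * cpu_reservation_pct / 100
guaranteed_ram_gb = ram_gb * ram_reservation_pct / 100

# The overbooking rate is not set directly: it results from the VMs created in the vDC.
vcpus_created = 252           # total vCPUs of all VMs created in the vDC
overbooking_rate = vcpus_created / physical_cores

print(f"Guaranteed CPU : {guaranteed_cpu_ghz:.0f} GHz")
print(f"Guaranteed RAM : {guaranteed_ram_gb:.0f} GB")
print(f"Overbooking    : {overbooking_rate:.2f} vCPU per physical core")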

More information on the “Reservation Pool” resource allocation model is available on the VMware site.

High availability

To preserve application availability when a node is lost, the cluster must be sized to tolerate the loss of a blade. A defective blade is replaced within 48 hours; during this time frame, the VMs must be able to run on the cluster with one blade fewer, without significant performance impact on the hosted applications.

Capacity management

Capacity management is at the Customer’s initiative. Cloud Avenue provides the VMware metrics needed to monitor the overall performance of the cluster as well as the performance of the VMs. The decision to change the cluster size is the Customer’s responsibility.

The time required for setting up a new blade is generally a few minutes (operation carried out online by the Customer in his Cloud Customer Space).

The other resources (storage, network, etc.) are provisioned from the Cloud Customer Space, as for any standard vDC, and are invoiced according to the tariff sheet in force.

Example of configuration

Blade servers with 56 cores and 576 GB of RAM per blade, with a target overbooking rate of 1.5:

Number of blades (56 cores / 576 GB RAM each)     3       4       5
Total number of available vCPUs                   168     224     280
Available vCPUs if a blade is missing             112     168     224
Overbooking rate (vCPU / physical core)           1.5     1.5     1.5

Resource pool capacity:
  vCPU (nominal)                                  252     337     420
  RAM in GB (nominal)                             1728    2304    2880
  vCPU (one blade missing)                        168     224     280
  RAM in GB (one blade missing)                   1152    1728    2304
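As a rough sketch of the arithmetic behind this table (the published figures above take precedence; minor rounding differences, such as 337 versus 336 nominal vCPUs for 4 blades, are possible):

# Rough sketch of the capacity arithmetic for blades of 56 cores / 576 GB RAM
# with a target overbooking rate of 1.5 vCPU per physical core.
# The published table is authoritative; small rounding differences may appear.

CORES_PER_BLADE = 56
RAM_PER_BLADE_GB = 576
TARGET_OVERBOOKING = 1.5

for blades in (3, 4, 5):
    total_cores = blades * CORES_PER_BLADE            # total available vCPUs at 1:1
    cores_n_minus_1 = (blades - 1) * CORES_PER_BLADE  # available vCPUs if a blade is missing
    vcpu_nominal = round(total_cores * TARGET_OVERBOOKING)
    ram_nominal = blades * RAM_PER_BLADE_GB
    ram_n_minus_1 = (blades - 1) * RAM_PER_BLADE_GB
    print(f"{blades} blades: {total_cores} cores ({cores_n_minus_1} with one blade missing), "
          f"{vcpu_nominal} vCPU / {ram_nominal} GB RAM nominal, "
          f"{ram_n_minus_1} GB RAM with one blade missing")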

Subscription

The minimum cluster size is two servers. Invoicing is carried out monthly on the basis of the number of physical servers reserved, and according to their characteristics (see the tariff sheet).

Cloud Avenue provides physical servers with the following specifications:

Specifications                        Type 1                     Type 2                     Type 3
CPU                                   Intel Xeon Gold            Intel Xeon Gold            Intel Xeon Gold
CPU frequency                         2.2 GHz                    3 GHz                      2.2 GHz
Number of CPUs                        2                          2                          2
Number of cores per CPU               28                         24                         28
Number of physical cores per blade    56                         48                         56
RAM                                   256 GB                     256 GB                     256 GB
Usage                                 dedicated VMware cluster   dedicated VMware cluster   dedicated NSX-T cluster

Note: the Type 3 server is no longer offered.

A blade added during the month is billed pro rata temporis, based on the number of days it is active in the month.
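For illustration only, a minimal sketch of the pro rata temporis rule; the monthly price is a made-up figure, and counting the activation day itself as active is an assumption, so the tariff sheet and contract prevail.

# Illustrative sketch of the pro rata temporis rule for a blade added mid-month.
# The monthly price is a made-up figure; refer to the tariff sheet in force.
import calendar
from datetime import date

def blade_charge(monthly_price: float, activation: date) -> float:
    """Charge for the month in which the blade is activated, pro rata to active days."""
    days_in_month = calendar.monthrange(activation.year, activation.month)[1]
    active_days = days_in_month - activation.day + 1   # assumption: activation day counts as active
    return monthly_price * active_days / days_in_month

# Example: a blade activated on 21 March is billed 11/31 of its monthly price.
print(round(blade_charge(1000.0, date(2024, 3, 21)), 2))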

Subscribing to a dedicated cluster and adding a blade are done from the Cloud Customer Space.

Storage

Only dedicated storage is available for a dedicated cluster. A dedicated cluster is a “Provider vDC” (PvDC) dedicated to a Customer, and the storage (datastore) assigned to a PvDC cannot be shared with another PvDC.

Availability classes

On the Val de Reuil Datacenter campus, the NGP platform is deployed in two rooms, each completely independent of the other (energy, cooling, networks), which makes it possible to offer several availability classes for the VMs of a vDC:

One Room
  Number of vDCs: 1
  Service availability: Val-de-Reuil and Chartres
  Configuration: 1 vDC with a single storage policy
  VM location: not predictable
  Mechanism: the VMs are randomly distributed over the 2 rooms according to the capacity rules
  Customer can choose the VM location: no
  Impact of the loss of a room: all or part of the VMs are stopped
  SLA (vDC availability): 99.95%

Dual Room (1 vDC)
  Number of vDCs: 1
  Service availability: Val-de-Reuil
  Configuration: 1 vDC with two storage policies (one per room), for example SILVER_R1 and SILVER_R2
  VM location: Room 1 and Room 2
  Mechanism: the VMs are located in a room through the storage policy chosen when creating them
  Customer can choose the VM location: yes
  Impact of the loss of a room: the VMs located in the faulty room are stopped
  SLA (vDC availability): 99.95%

Dual Room (2 vDCs)
  Number of vDCs: 2
  Service availability: Val-de-Reuil and Chartres
  Configuration: 2 vDCs, each with a storage policy per room, for example 1 vDC with GOLD_R1 and 1 vDC with GOLD_R2
  VM location: Room 1 and Room 2
  Mechanism: the VMs are located in a room through the storage policy chosen when creating them
  Customer can choose the VM location: yes
  Impact of the loss of a room: the VMs located in the faulty room are stopped
  SLA (vDC availability): 99.95%

HA Dual Room
  Number of vDCs: 1
  Service availability: Val-de-Reuil
  Configuration: 1 vDC with a single Dual Room type storage policy (storage replicated between the 2 rooms)
  VM location: Room 1 and Room 2
  Mechanism: the VMs are randomly distributed over the 2 rooms according to the capacity rules, on extended clusters (compute + storage)
  Customer can choose the VM location: yes, with anti-affinity rules
  Impact of the loss of a room: shutdown and restart of the VMs of the faulty room
  SLA (vDC availability): 99.99%

Dual Site
  Number of vDCs: 2
  Service availability: Val-de-Reuil and Chartres
  Configuration: 1 vDC with a storage policy (one per site)
  VM location: DC1 and DC2
  Mechanism: the VMs on the second site are in a dormant state with a DRaaS service policy
  Customer can choose the VM location: yes
  Impact of the loss of a room: shutdown of the VMs located in the faulty room
  SLA (vDC availability): 99.95%

The detailed description of Availability Classes is here: Availability classes
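To put the SLA figures in perspective, the sketch below converts an availability percentage into the corresponding maximum unavailability. The 30-day measurement window is an assumption for illustration only; the contractual measurement period is defined by the SLA itself.

# Convert an availability SLA into the corresponding maximum downtime.
# Assumes a 30-day measurement period for illustration; the contractual
# measurement window is defined by the SLA itself.

def max_downtime_minutes(sla_percent: float, period_days: int = 30) -> float:
    period_minutes = period_days * 24 * 60
    return period_minutes * (1 - sla_percent / 100)

for sla in (99.95, 99.99):
    print(f"{sla}% availability -> at most {max_downtime_minutes(sla):.1f} min of downtime per 30 days")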

Dedicated cluster topologies

Mono-Room

The principle of the mono-room dedicated cluster is that the Customer has a dedicated pool of resources on both the external and internal IaaS zones and can also define its own vCPU service classes. Note that a dedicated cluster, regardless of the resources required, must be composed of a minimum of 3 nodes.

Dedicated Cluster - Mono room.jpg

Dual Site

The principle of the dual-site dedicated cluster is to be able to replicate the virtual machines of an org-VDC from Site A to Site B using VDCA. It is important to remember that a service interruption still occurs in this scenario. Below is an example diagram of an environment deployed manually for an end customer.

Design Cloud Avenue - Dedicated Cluster - Dual Site.jpg

HA Dual Room

The HA Dual Room (HADR) topology consists of a single VMware cluster spread evenly across two rooms. The network topology relies on several extended VLANs. In the event of a room failure, the VMs are restarted in the other room by the VMware HA mechanism, with load balancing handled by the VMware DRS service. For example, in a dedicated cluster of 6 nodes, three nodes are in room 1 and the other three in room 2.

In this topology, the RPO is theoretically zero. HA Dual Room relies on a storage metrocluster; this level of service does not exist for the Dual Site topology.

A minimum of 4 servers is required for the HA Dual Room topology.
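Since the VMs of a failed room are restarted in the surviving room, each room must on its own be able to carry the whole workload. A minimal sizing sketch, assuming blades of 56 cores and a target overbooking rate of 1.5 (both taken from the example configuration above, not HA Dual Room requirements):

# HA Dual Room sizing sketch: after the loss of a room, the surviving room's
# blades must be able to restart every VM. Figures below are illustrative.

CORES_PER_BLADE = 56

def fits_after_room_loss(blades_per_room: int, total_vcpus: int, max_overbooking: float = 1.5) -> bool:
    """True if one room alone can host all vCPUs within the target overbooking rate."""
    surviving_cores = blades_per_room * CORES_PER_BLADE
    return total_vcpus <= surviving_cores * max_overbooking

# 6-node cluster split 3/3 between the two rooms:
print(fits_after_room_loss(blades_per_room=3, total_vcpus=250))  # True: 250 <= 3 * 56 * 1.5 = 252
print(fits_after_room_loss(blades_per_room=3, total_vcpus=300))  # False: exceeds one room's capacity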

Design Cloud Avenue - Dedicated Cluster - HA Dual Room 2.jpg