Monday, 8 May 2017

Testing a new vSphere 6.5 feature - DRS CPU overcommitment


I am currently working on a project where one of the customer's requirements is to use a strict pCPU to vCPU ratio. Luckily, VMware introduced a new feature called CPU over-commitment ratio in vSphere 6.5 which helps to meet this requirement. I spent an evening playing with this new feature and would like to share my experience.
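For reference, the ratio is configured as a DRS option on the cluster. Below is a minimal pyVmomi sketch of setting it programmatically; the advanced option key MaxVcpusPerClusterPct is my understanding of what sits behind the UI setting, and the vCenter address, credentials and cluster name are placeholders from my lab, not anything official.

```python
# Minimal pyVmomi sketch: set the DRS CPU over-commitment ratio on a cluster.
# Assumes the advanced option key is MaxVcpusPerClusterPct (a percentage);
# hostname, credentials and cluster name are lab placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab only - skip cert checks
si = SmartConnect(host='vcenter.lab.local',
                  user='administrator@vsphere.local',
                  pwd='VMware1!', sslContext=ctx)
content = si.RetrieveContent()

# Find the cluster by name.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == 'Cluster-1')

# Push the advanced option through a cluster reconfigure task.
spec = vim.cluster.ConfigSpecEx()
spec.drsConfig = vim.cluster.DrsConfigInfo()
spec.drsConfig.option = [
    vim.OptionValue(key='MaxVcpusPerClusterPct', value='100')]
cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)

Disconnect(si)
```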

The VMware documentation is quite laconic when it discusses the new DRS features. So, after reading the documentation I still had a few questions on how CPU over-commitment works:


  1. Does it count vCPUs against Physical or Logical Processors?
  2. What is DRS behaviour when the ratio is violated?
  3. Is over-commitment ratio applied per host or per cluster?
  4. Will HA respect this ratio when restarting VMs after a host failure?
  5. What happens when placing a host into maintenance mode would violate the ratio?


So, let's try to answer all these questions using my lab.

1. Does it count vCPUs against Physical or Logical Processors?

Usually I run most of my tests in nested labs using nested ESXi servers, but to answer this question I had to use one of my physical clusters, which supports hyper-threading and thus provides both physical and logical processors.

The cluster consists of 2 x SuperMicro servers, each running a Xeon D-1528 CPU with 6 physical cores. So, in total I have 12 physical / 24 logical processors in the cluster.






Currently I am running 4 VMs with 11 vCPUs assigned in total. DRS is enabled and CPU overcommitment is configured to 100%. I am planning to power on another VM with 2 vCPUs.
If DRS calculates the over-commitment ratio against physical CPUs, it should give me some kind of warning, since 13 vCPUs on 12 physical cores would exceed the 100% limit.
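Before powering it on, here is the arithmetic for both interpretations, using the numbers above:

```python
# Numbers from the cluster above: does 11 + 2 vCPUs fit under a 100% ratio?
physical_cpus = 12      # 2 hosts x 6 cores
logical_cpus = 24       # with hyper-threading
vcpus_after_power_on = 11 + 2

for label, cpus in (("physical", physical_cpus), ("logical", logical_cpus)):
    ratio = vcpus_after_power_on / cpus * 100
    print(f"{label}: {ratio:.0f}% -> {'violated' if ratio > 100 else 'ok'}")
# physical: 108% -> violated
# logical: 54% -> ok
```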

Here is the result of my attempt to power on another VM.


As you can see, it actually answers the second question too.

We can now tell that DRS definitely counts only physical CPUs. Interestingly, DRS here behaves like HA Admission Control, prohibiting the VM power-on operation because it would violate the CPU over-commitment ratio.


3. Is over-commitment ratio applied per host or per cluster?

To answer this question I used my nested lab. Here are the quick specs of the test cluster:
  • 3 x ESXi servers
  • 2 x CPU per server
  • 3 x virtual machines configured with 2 vCPUs each
  • CPU over-commitment is set to 100%
So, I am running 6 vCPUs in total on 6 pCPUs in the DRS cluster. An attempt to power on one more VM in this cluster will definitely fail, as it would violate the cluster-level ratio.

Now, I vMotioned VM-2 to ESXi-1, which brought the pCPU to vCPU over-commitment ratio on that host to 200%. As you can see, this vMotion didn't fail and no warnings were generated.
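A quick sketch of the numbers makes the distinction visible (placement as described above, assuming one VM per host before the vMotion):

```python
# vCPU placement after vMotioning VM-2 to ESXi-1 (2 pCPUs per nested host).
pcpus_per_host = 2
vcpus_per_host = {"ESXi-1": 4, "ESXi-2": 0, "ESXi-3": 2}

for host, vcpus in vcpus_per_host.items():
    print(f"{host}: {vcpus / pcpus_per_host * 100:.0f}% host-level ratio")
# ESXi-1: 200%, ESXi-2: 0%, ESXi-3: 100%

cluster_ratio = sum(vcpus_per_host.values()) / (pcpus_per_host * 3) * 100
print(f"cluster: {cluster_ratio:.0f}%")  # 100% - the limit is not violated
```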




DRS generates recommendations every 5 minutes by default, and soon this cluster was balanced again, but that's part of the DRS functionality that existed in versions of vSphere prior to 6.5.

So, we can tell that the over-commitment ratio is applied per cluster, not per host.


4. Will HA respect this ratio when restarting VMs after the host failure?

This was the most intriguing question for me. Given the similarity between the CPU over-commitment and HA Admission Control features, I was wondering whether the over-commitment ratio would need to be adjusted to take a host failure into consideration.

I used the same lab setup you saw above in question 3. I verified that each host was running one dummy VM.




Then I restarted the vesxi65-3 host, and 2 minutes later VM-3 was successfully restarted on the vesxi65-1 server even though this brought the cluster's CPU over-commitment ratio to 150%.
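The 150% figure is straightforward arithmetic:

```python
# Cluster-level ratio before and after losing one of the three hosts.
total_vcpus = 6               # 3 VMs x 2 vCPUs, all kept running by HA
pcpus = {"before": 3 * 2, "after": 2 * 2}

for state, cores in pcpus.items():
    print(f"{state} failure: {total_vcpus / cores * 100:.0f}%")
# before failure: 100%
# after failure: 150%
```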



This proves that an HA restart has higher priority than the CPU over-commitment ratio. This totally makes sense to me, as VM availability is more important than a potential performance impact.


5. What happens when placing a host into maintenance mode would violate the ratio?

I reverted my lab back to the default settings and tried to place a host into maintenance mode, which would result in a 4 pCPU to 6 vCPU ratio (150%) and would violate the configured CPU over-commitment ratio.
The task didn't fail, so at first I assumed there would be no problem.



5 minutes later the task was still running, so I checked the DRS faults and immediately found the following.




Clearly, DRS always respects its own over-commitment rule when trying to generate the vMotion recommendations needed to evacuate the host.
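In other words, the feasibility check effectively boils down to something like this (my own illustration, not DRS internals):

```python
# My own illustration (not DRS internals): evacuating a host is only
# feasible if the remaining pCPUs still satisfy the configured ratio.
def evacuation_ok(total_vcpus, pcpus_per_host, hosts, limit_pct=100):
    remaining_pcpus = pcpus_per_host * (hosts - 1)
    return total_vcpus / remaining_pcpus * 100 <= limit_pct

print(evacuation_ok(total_vcpus=6, pcpus_per_host=2, hosts=3))
# False: 6 vCPUs on the remaining 4 pCPUs is 150%, so no recommendations
```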



So, the main takeaways for today are:


  • Only physical CPUs are used in the calculations - hyper-threading (logical processors) is not counted
  • CPU over-commitment works very similarly to Admission Control, preventing VMs from powering on if doing so would violate the configured ratio
  • During an HA failover the CPU overcommitment setting is ignored - this makes sense, as recovering VMs is more critical than respecting the overcommitment ratio
  • The over-commitment ratio is applied at the cluster level
  • DRS will prevent placing a host into maintenance mode if doing so would break its rules.

Thursday, 4 May 2017

Creating replica seeds for vSphere Replication

I have known for a while that vSphere Replication allows you to use replica seeds to significantly reduce the time of the initial sync.

This approach is recommended when there is not sufficient bandwidth between sites to complete the replication in time. In this case it is recommended to create copies of the VM disks and transfer them to the destination site using external media, e.g. an external USB hard drive. Once the files are copied to the target datastore, vSphere Replication can be instructed to use them as replica seeds. The source and target disks are then scanned, and only the modified blocks of data are transferred.
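A back-of-the-envelope calculation with purely hypothetical numbers shows why seeding matters:

```python
# Hypothetical numbers: a 500 GB VM over a 20 Mbps inter-site link,
# with 5% of blocks changed since the seed copy was taken.
vm_size_gb = 500
link_mbps = 20
changed_pct = 5

full_sync_hours = vm_size_gb * 8 * 1024 / link_mbps / 3600
seeded_sync_hours = full_sync_hours * changed_pct / 100

print(f"full initial sync: {full_sync_hours:.0f} h")    # ~57 h
print(f"with replica seed: {seeded_sync_hours:.0f} h")  # ~3 h
```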

There is an issue with this approach. According to the VMware documentation, the virtual machine has to be powered off before creating disk copies of the original VM. In most environments this kind of action requires a Request for Change, and it can take quite a while before this request is approved.

As a workaround you can clone a powered-on VM, but the disks of the cloned VM will have new UUIDs. When vSphere Replication is instructed to use replica seeds, it compares the source and destination disks using two criteria - VMDK name and UUID. If either of them doesn't match, you won't be able to configure vSphere Replication for this VM.
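You can see the mismatch yourself by reading the ddb.uuid entry in the disks' descriptor files. A small sketch, assuming the descriptors have been copied somewhere accessible (file names are placeholders):

```python
# Print the ddb.uuid entry from a VMDK descriptor (plain-text) file.
import re

def vmdk_uuid(descriptor_path):
    with open(descriptor_path) as f:
        match = re.search(r'ddb\.uuid\s*=\s*"([^"]+)"', f.read())
    return match.group(1) if match else None

print(vmdk_uuid("VM-1.vmdk"))        # original disk
print(vmdk_uuid("VM-1-clone.vmdk"))  # clone - different UUID, seed rejected
```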

Therefore, I thought it was a nice opportunity to simplify the process of creating replica seeds for vSphere Replication with no outage for the virtual machines.

So, the whole process is quite simple:

1. Clone the running VM. The cloned VM needs to have the same name to keep the disk names identical. Since both VMs will have the same name, they will need to be placed into different folders.

2. Run a script that updates the cloned VM's disks with the original UUIDs, as sketched below.
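A minimal sketch of what such a script can do, assuming the descriptor files are reachable (e.g. over SSH to the host); the paths are placeholders for the two VM folders, and the clone should be powered off while its descriptor is patched:

```python
# Sketch of step 2: copy ddb.uuid from the original VM's descriptor into
# the clone's descriptor so vSphere Replication accepts it as a seed.
import re

def read_uuid(path):
    with open(path) as f:
        return re.search(r'ddb\.uuid\s*=\s*"([^"]+)"', f.read()).group(1)

def write_uuid(path, uuid):
    with open(path) as f:
        text = f.read()
    text = re.sub(r'(ddb\.uuid\s*=\s*")[^"]+(")',
                  lambda m: m.group(1) + uuid + m.group(2), text)
    with open(path, "w") as f:
        f.write(text)

write_uuid("/vmfs/volumes/ds1/VM-1_clone/VM-1.vmdk",
           read_uuid("/vmfs/volumes/ds1/VM-1/VM-1.vmdk"))
```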