Thursday, 28 July 2016

Isolating vSphere Replication Traffic

One of the great new features of vSphere Replication 6 is traffic isolation, which significantly enhances security and facilitates QoS using the Network I/O Control feature.

Even though TCP/IP stacks can't be used to move vSphere Replication traffic to a separate network, it is not too difficult to achieve the same result using static routes.

In this post I will show the different types of vSphere Replication traffic flows and explain how to achieve full isolation of the replication traffic from the management network.
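To give a flavour of the static route approach, here is a minimal PowerCLI sketch (the host name, subnets and gateway are hypothetical) that sends traffic destined for a remote replication subnet through the gateway of a dedicated replication VLAN instead of the default management route:

# Assumes a dedicated replication VMkernel port already exists on the host
$vmhost = Get-VMHost -Name lab-esx-01.lab.local

# Reach the remote replication subnet 192.168.20.0/24 via the local
# replication gateway 192.168.10.1 instead of the management gateway
New-VMHostRoute -VMHost $vmhost -Destination 192.168.20.0 -Gateway 192.168.10.1 -PrefixLength 24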

Thursday, 14 July 2016

Automating configuration of a scratch location with PowerCLI

Quite often modern ESXi servers come with no local storage, and ESXi is installed on an SD card.

As per VMware KB1033696, the SD card can't be used to store the scratch partition. The main purpose of the scratch partition is to store log files and to provide space for vm-support output.

So, the normal practice is to use shared storage (VMFS/NFS) as the scratch location. The problem is that configuring the scratch location is not automated in the current vSphere releases, so you have to manually create a folder for each ESXi host and configure each host to use that folder.
This can be quite a time-consuming and boring task when you have to do it for hundreds of servers.
To make things worse, Host Profiles don't let you configure the scratch location either.

I had some time last week and thought it was a good chance to have fun with PowerCLI and automate the scratch configuration for ESXi hosts.

So here is an overview of what the script does:

  1. Connects to vCenter
  2. Collects the list of ESXi hosts in the cluster. Very often storage is not shared across multiple compute clusters, so I decided to use a cluster, not a datacenter, as the configuration target.
  3. Checks if there is a designated scratch folder for each host in the cluster and creates it if it doesn't exist
  4. Checks whether each ESXi host is configured with a scratch location and whether it points to the right datastore and folder.
  5. If a host is not configured yet, or points to the wrong directory, applies the correct setting (see the sketch below).
  6. Provides a list of the ESXi servers to be rebooted for the configuration change to take effect
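For steps 4 and 5, the logic boils down to reading and updating the ScratchConfig.ConfiguredScratchLocation advanced setting. A minimal sketch of that part (not the full script; the cluster, datastore and folder names are just the example values used below):

foreach ($vmhost in Get-Cluster -Name HA | Get-VMHost) {
    # Each host gets its own folder under the shared scratch folder
    $desired = "/vmfs/volumes/ISO/Scratch/$($vmhost.Name)"

    # Read the currently configured scratch location (step 4)
    $setting = Get-AdvancedSetting -Entity $vmhost -Name "ScratchConfig.ConfiguredScratchLocation"

    # Apply the correct value only if it differs (step 5)
    if ($setting.Value -ne $desired) {
        Set-AdvancedSetting -AdvancedSetting $setting -Value $desired -Confirm:$false
        Write-Host "$($vmhost.Name) needs a reboot for the change to take effect"
    }
}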

There are a couple of things you have to do before running the script:
  • Identify the datastore to be used to store scratch folders
  • In that datastore create a folder where the script will create a scratch folder for each host

The syntax is as follows:

.\scratch.ps1 -vCenter vCenter_Name -cluster Cluster_Name -datastore Datastore_Name -folder Folder_Name
For example:

.\scratch.ps1 -vCenter lab-vc-01.lab.local -cluster HA -datastore ISO -folder Scratch
* I had to add the folder as an input parameter because I couldn't make the script land in the correct folder with the New-PSDrive cmdlet

You can go even further by taking advantage of the Windows Task Scheduler to run this script on a daily basis and ensure all servers are consistently configured.
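For instance, a scheduled task along these lines would re-apply the settings every morning (the Register-ScheduledTask cmdlets require Windows 8 / Server 2012 or later; the path and schedule are just examples):

$action  = New-ScheduledTaskAction -Execute "powershell.exe" -Argument "-File C:\Scripts\scratch.ps1 -vCenter lab-vc-01.lab.local -cluster HA -datastore ISO -folder Scratch"
$trigger = New-ScheduledTaskTrigger -Daily -At 6am
Register-ScheduledTask -TaskName "Configure-ScratchLocation" -Action $action -Trigger $trigger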

Let me know how it worked for you.

Friday, 8 July 2016

vSphere Distributed Switch and Nexus 1000v comparison


Choosing between the VMware and Cisco virtual switch products is not an easy task, as it involves not only a side-by-side feature comparison but also numerous aspects of separation of duties, operational overhead, and current skill set and expertise. And not all of them can be compared directly.

Apart from all that, the answer to the question "Who is going to manage the virtual networks?" can simply be a political decision.

In this article I try to provide the essential information to help you make the right decision for your infrastructure.

Saturday, 11 June 2016

Bulk IP Address change with PowerCLI

Recently I was given an interesting task: renumbering the IP addresses of more than a hundred VMs.
Along with the IP address change, the VMs had to be moved to a new portgroup.

Doing it manually can be pretty tiresome and boring. PowerCLI is a perfect fit for this task as it leverages VMware Tools. Therefore, losing network connectivity to the VM due to the IP address change, or after the VM is moved to another portgroup, won't impact the functionality of the script.

Prior to running the script you will need to prepare a CSV file with the list of VMs to be updated and the following information for each VM: ServerName, Username, Password, NewPortgroup, OrigIP, NewIP, NewMask, NewGateway

Here is how my Inventory.csv file looks.
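An illustrative row following that column layout (all values here are made up) would be:

ServerName,Username,Password,NewPortgroup,OrigIP,NewIP,NewMask,NewGateway
web-01,administrator,P@ssw0rd,VLAN20-PG,10.0.10.21,10.0.20.21,255.255.255.0,10.0.20.1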



When running the script you will need to provide the path to the CSV file and the name of your vCenter:

.\ChangeIP.ps1 -Inventory c:\Scripts\inventory.csv -VC lab-vc-01.lab.local


And here is the script text. Hopefully there are enough comments to help you read the script so that you can adjust it to your needs.
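As a rough illustration of the core loop - not the full script - the per-VM logic could look like the sketch below, assuming Windows guests and a guest NIC named Ethernet0 (adjust to your environment):

param([string]$Inventory, [string]$VC)

# Connect to vCenter and walk through the inventory file
Connect-VIServer -Server $VC
foreach ($row in Import-Csv $Inventory) {
    $vm = Get-VM -Name $row.ServerName

    # Push the new IP settings into the guest via VMware Tools -
    # this works even with no network connectivity to the VM
    $cmd = "netsh interface ip set address name=`"Ethernet0`" static $($row.NewIP) $($row.NewMask) $($row.NewGateway) 1"
    Invoke-VMScript -VM $vm -GuestUser $row.Username -GuestPassword $row.Password -ScriptText $cmd -ScriptType Bat

    # Move the VM's NIC to the new portgroup
    Get-NetworkAdapter -VM $vm | Set-NetworkAdapter -NetworkName $row.NewPortgroup -Confirm:$false
}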



Saturday, 28 May 2016

NUMA and Cluster-On-Die

The NUMA implementation has gone through several phases of development in vSphere. At first it was only responsible for the initial placement of VMs; then its functionality was extended with dynamic balancing. In vSphere 5 VMware introduced support for Wide-VMs by exposing the NUMA architecture to virtual machines.

Newer CPUs have introduced an additional feature - Cluster-on-Die - which splits a physical CPU socket into multiple NUMA domains.
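For instance, from PowerCLI you can quickly compare the number of NUMA nodes a host reports with its socket count (the host name here is hypothetical):

$vmhost = Get-VMHost -Name lab-esx-01.lab.local

# With Cluster-on-Die enabled, a 2-socket host typically reports 4 NUMA nodes
$vmhost.ExtensionData.Hardware.NumaInfo.NumNodes
$vmhost.ExtensionData.Hardware.CpuInfo.NumCpuPackages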

The full article can be read here

Thursday, 19 May 2016

C# Client is dead, long live the Web Client


Yep, the C# client will no longer be available in the next releases of vSphere.

VMware has been giving pretty clear signals over the last few major vSphere releases that the C# client would be gone soon, but nobody knew exactly when it was going to happen. However, when SRM and VUM were moved to the Web Client in vSphere 6, it was obvious the C# client's days were numbered.

It probably could have been done a couple of years ago, but at first the Web Client wasn't good enough due to performance issues. It also lacked integration with other VMware solutions, and there was no replacement for the C# client for standalone ESXi hosts.

Today VMware has moved its plugins (SRM, VUM) to the Web Client, and other VMware partners are sailing in the same direction. It has also introduced the new embedded HTML5 Host Client for ESXi hosts, which has feature parity with the C# client for standalone hosts.

Here are some of the Web Client's benefits compared to the C# client:

  • Scalability – WC handles more objects and more concurrent admin access
  • Bookmarking URLs - WC allows you to quickly return to specific objects or views and share them with others (such as in a support ticket)
  • Recent Objects - WC lets you navigate quickly between things you’re working on
  • Work-In-Progress - WC lets you save your work and come back to it later, even from another computer!
  • Enhanced Linked Mode – WC can call up all your inventory in one view
  • Remembers user customizations to the UI – WC keeps your column selections and widths on grids, and your portlets on summary pages
  • Latest feature support – WC is the only interface to support all new features 

As a short-term goal, I think VMware will focus on fixing the Client Integration Plugin, which causes most of the issues with the Web Client - problems with OVF import and datastore browsing. It also doesn't work on Mac.

The long-term goal would be to have a single ultimate client for both vSphere and ESXi hosts. That's actually what VMware is doing right now by trying to replace the Flash Web Client with HTML5. You can already preview the HTML5 Web Client for vSphere - it exists as a Fling.

It has to be noted that the C# client will be kept on all current platforms.

You can read the official announcement here and that's where you can leave your feedback. 

Friday, 29 April 2016

VMware Virtual SAN Network Design Guide v2.0 has just been released

VMware has just released a document that covers the network aspects of Virtual SAN design.

Actually, the guide has been re-released, as there was a v1.0 before (hence the new one goes under v2.0), but as far as I am aware it was removed from the VMware web site due to some inaccuracies. So for a while people lacked validated design information on one of the key aspects of a VSAN setup. I remember there were quite a few discussions on how to provide network redundancy and load balancing for VSAN traffic, and nobody could get a formal answer.

The guide is very comprehensive and even provides multicast configuration examples for Cisco and Brocade switches.


You can check the guide here