Tuesday, 26 July 2011

vCenter Inventory Snapshot - Alternative way of resolving vCenter's SQL issues

Somewhere on the upgrade path from VI 3.5 to vSphere 4.0 and finally to vSphere 4.1 we screwed up the Performance stats in vCenter. I could still see real-time data, but I couldn't select any historical data. Considering my intention to disable Large Pages on all cluster hosts and prove huge RAM savings with TPS, I needed that historical data. Naturally, the first website I visited was VMware's, and I immediately found a KB article that described all the symptoms I experienced. With my near-zero SQL experience I followed the entire procedure, and it even seemed that I had fixed the issue; at least I could select the historical data options.

However, after checking it for a couple of days I found that the historical data was broken and discontinuous. Moreover, when I tried to use Veeam Monitor for VMware I couldn't get any CPU or Memory usage data at the vCenter level. That is when I remembered that I had recently spotted the Inventory Snapshot product from VMware Labs on someone's blog.
Generally speaking, Inventory Snapshot is a bunch of PowerCLI commands that retrieve all essential data from vCenter, such as Datacenters, Hosts, VMs, HA & DRS cluster settings, Resource Pools, Permissions, Roles, etc. You can check the whole inventory list yourself. Once it has collected all the data from vCenter, it generates PowerCLI code that can reproduce your entire environment, either on a new vCenter or on an existing one if you managed to corrupt your SQL database. As you have already figured out, it doesn't save performance data, but in my case it was the ideal tool. With all vCenter SQL databases backed up I could play around as much as I wanted.

Here is a brief overview of the steps you need to go through to solve my kind of problem. However, you definitely want to read the Inventory Snapshot documentation, because I am not inclined to reproduce the official documentation :)

1. Stop the vCenter service and back up its SQL database.
2. Bring the vCenter service back online.
3. Run Inventory Snapshot and provide the administrator's credentials.
4. Once all inventory is retrieved and saved to a .ps file, you need to provide the root password for the ESXi hosts. This is the weakest part of this product, since all passwords are stored in plain text, although we need to remember that this is a lab product.
5. Stop vCenter again, open a command prompt, navigate to C:\Program Files\VMware\Infrastructure\VirtualCenter\ and run vpxd.exe -b. It will fully reinitialize the vCenter database, as if it were newly installed. At this step I got a warning about licenses, but it seemed unimportant.
6. Start vCenter.
7. Run PowerCLI, log in to vCenter and apply the .ps file you saved earlier (see the sketch below). With 10 hosts, 4 Resource Pools and 190 VMs it took Inventory Snapshot about 20 minutes to import all objects back.
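For illustration, here is a minimal PowerCLI sketch of step 7; the vCenter name and the script file name are just examples, not what Inventory Snapshot will actually produce for you.

  # Apply the Inventory Snapshot output to the freshly reinitialized vCenter (names are examples)
  Connect-VIServer -Server vcenter01.example.local -Credential (Get-Credential)
  .\InventorySnapshot.ps1        # the .ps file generated by Inventory Snapshot
  Disconnect-VIServer -Server vcenter01.example.local -Confirm:$false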

So, now I have all types of performance graphs again, and it was way simpler than going through the VMware KB to fix something in the SQL database.

In the comments on the VMware Labs site I saw people complaining about missing permissions after they imported the inventory, but in my case all permissions came back correctly.

Very important to know!!! - The problem I had was that Inventory Snapshot couldn't recreate part of the dvSwitch configuration. Luckily, it was only used by a couple of my test hosts. The error says "The host proxy switch associated to dvSwitch no longer exists in vCenter". It is too late today to start investigating; tomorrow I will try to inform the developers, in case it is not a known bug yet. Anyway, it is still kind of a beta version, and you need to take all precautionary measures before you start playing with it.

PS. Veeam Monitor still shows zero CPU and Memory usage. According to the Veeam Community forum I need to ask their support for a fix. Why can't they just post it on the website if it is a well-known issue?


If you find this post useful please share it with any of the buttons below. 

Friday, 22 July 2011

VLAN Tagging and use cases of VLAN ID 4095

It was quite surprising for me to learn how useful VLAN 4095 can be, but let me start from the basics.

There are 3 main approaches to VLAN tagging - External Switch Tagging (EST), Virtual Switch Tagging (VST), and Virtual Guest Tagging (VGT).

External Switch Tagging

With this approach the ESX host doesn't see any VLAN tags. All of them are stripped off by the external physical switch, and the traffic is then sent to the corresponding physical port. Therefore, you will need one vmnic per VLAN from the ESXi perspective.
I honestly don't know in which situations such a configuration might be useful.

Virtual Switch Tagging

This is the most popular way of connecting vSphere to the physical network. All traffic coming down to the vSwitch is tagged with a VLAN ID. It is now the vSwitch's responsibility to strip off the VLAN tag and send the packet to the virtual machine in the corresponding port group. Thereby you can run all VLANs over one vmnic, though I guess you would like to have another vmnic for redundancy. The same logic applies to packets travelling from a virtual machine to the physical infrastructure: the packet is delivered to the vSwitch, and before it is sent to the physical switch it is tagged with the VLAN ID according to the port group membership of the originating virtual machine.
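Just to make VST concrete, here is a minimal PowerCLI sketch; the host, vSwitch and port group names as well as the VLAN ID are invented examples.

  # VST sketch: create a port group tagged with VLAN 100 on vSwitch0 (names and IDs are examples)
  $esx = Get-VMHost -Name esx01.example.local
  $vs  = Get-VirtualSwitch -VMHost $esx -Name vSwitch0
  New-VirtualPortGroup -VirtualSwitch $vs -Name "Prod-VLAN100" -VLanId 100
  # VMs in Prod-VLAN100 see untagged frames; the vSwitch adds and strips the 802.1Q tag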

Virtual Guest Tagging

On some occasions you might need to deliver traffic with VLAN tags directly to a VM and let the VM decide what to do with it. To achieve this you need to assign the VM to a port group with VLAN ID 4095 and configure the Guest OS NICs with the needed VLANs. Interestingly, as soon as you enter VLAN ID 4095 it is automatically displayed as All (4095).




I didn't pay too much attention to reading about EST and VGT for the simple reason that they are not used at all in our vSphere farms, and I don't want to spend time on something I will never have hands-on experience with. However, when I asked myself how I could sniff traffic on a vSwitch, Google led me directly to VLAN 4095.

Here are two popular use cases for VLAN 4095:

1. Traffic sniffing - from time to time you face networking problems and would like to use a sniffing tool (or network protocol analyzer, in the case of Wireshark) to see what is going on behind the scenes. Usually you could enable promiscuous mode on a specific port group and use one of the VMs in this port group to listen to the traffic. However, this leads to additional security risks, as promiscuous mode lets all VMs in this port group detect all frames passed on the vSwitch in the allowed VLAN. It also doesn't let you sniff traffic from different VLANs simultaneously. That's why you would prefer to use a special port group with VLAN 4095 and promiscuous mode enabled, and connect the virtual adapter of the VM you will use for traffic sniffing to it (see the sketch after this list).


2. IDS - another good use case of VLAN 4095 is to give your virtual IDS the ability to inspect all vSwitch traffic. According to the book "VMware vSphere and Virtual Infrastructure Security: Securing the Virtual Environment", this is quite a common scenario for virtual IDS placement.
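Here is a rough PowerCLI sketch of the sniffer port group from use case 1. The names are made up, and the Get-SecurityPolicy/Set-SecurityPolicy cmdlets may require a newer PowerCLI build than the one you run.

  # Sketch: trunked (VLAN 4095) port group plus promiscuous mode for a sniffer VM
  $esx = Get-VMHost -Name esx01.example.local
  $vs  = Get-VirtualSwitch -VMHost $esx -Name vSwitch0
  $pg  = New-VirtualPortGroup -VirtualSwitch $vs -Name "Sniffer-Trunk" -VLanId 4095
  # Let a VM connected to Sniffer-Trunk see frames from all VLANs crossing this vSwitch
  $pg | Get-SecurityPolicy | Set-SecurityPolicy -AllowPromiscuous $true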

If you know more situations where VLAN 4095 can be useful, feel free to share them in the comments. I really like to learn vSphere features through real-life examples.


If you find this post useful please share it with any of the buttons below. 

Monday, 18 July 2011

vSphere 5 - Virtual Storage Appliance

In one of the comments to the numerous blog articles I have been reading lately I noticed quite a sceptical opinion about the vSphere 5 Virtual Storage Appliance (VSA) with regard to its capabilities and price. That was motivating enough to spend a day reading about VSA and forming my own opinion about it.

So, here is a short overview of VSA from a vSphere admin's point of view.

The main goal of VSA is to provide SMB companies with shared storage without buying a physical NAS or SAN. Instead, VSA uses the internal storage of your ESXi hosts to create shared NFS storage that is presented as a single entity to all hosts in your vSphere environment. This is how it looks in a 3-node configuration.



Saturday, 16 July 2011

Unexpected benefit of SnS

I just found out that if you are going to buy a Support and Subscription (SnS) contract you are entitled to a free copy of SUSE Linux Enterprise Server (SLES) for VMware. This also includes a free subscription to all patches and updates for SLES. Moreover, you get free technical support as well.
I guess it is a good reason for me to start studying Linux.



If you find this post useful please share it with any of the buttons below. 

Thursday, 14 July 2011

vSphere 5 licensing - what upgrade path to choose?

I found a good licensing calculation for the vSphere 5 upgrade based on quite a big production environment.
So I just followed this guy's advice and tried to make the calculations for our production farm.

  • 180 Virtual Machines with 427 GB of consumed RAM.
  • 10 licensed HP BL460 hosts in 1 cluster: 20 CPUs, 720 GB of RAM in total.
  • 20 Enterprise Edition licenses.
If we want to keep vRAM equal to the current physical RAM and stay with the Enterprise edition once we upgrade to vSphere 5, we will need 720 GB / 32 GB = 23 licenses. Since we have a support contract for another 2 years, the upgrade of the 20 current licenses will cost us zero. We will only need to buy 3 more licenses, and the pricing hasn't changed, so it will cost us $2,845 x 3 = $8,535.

There is another, more interesting way to upgrade to vSphere 5. We can move to vSphere 5 for free and upgrade to the Enterprise Plus edition for $685 per license. It will cost us $685 x 20 = $13,700, and we are fully safe with the 48 GB vRAM entitlement of Enterprise Plus: we will be able to use up to 48 GB x 20 = 960 GB. The arithmetic for both options is sketched below.
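For what it's worth, here is the back-of-the-envelope maths as a tiny PowerShell snippet, using the prices quoted above:

  # Option 1: stay on Enterprise and keep the vRAM pool >= 720 GB of physical RAM
  $licensesNeeded = [math]::Ceiling(720 / 32)      # = 23 Enterprise licenses
  $option1 = ($licensesNeeded - 20) * 2845         # 3 extra licenses -> $8,535
  # Option 2: upgrade the existing 20 licenses to Enterprise Plus
  $option2 = 20 * 685                              # -> $13,700, with a 20 x 48 GB = 960 GB vRAM pool
  "Option 1: `$$option1   Option 2: `$$option2"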

In a nutshell, we have 2 options:
1. Stay with the Enterprise edition and upgrade to vSphere 5 for $8,535.
2. Move to the Enterprise Plus edition of vSphere 5 for $13,700, but have some spare vRAM entitlement and all the fantastic new features of vSphere 5.

It is almost obvious that option number 2 is the winner; roughly $5,000 extra is not a big deal for such a great package.

The only information I lack is what happens to the support contract once we upgrade to vSphere 5. Do we get new support for vSphere 5? Our contract is still valid, and it would be a waste of money to upgrade before it is close to expiry. This information can definitely influence my preliminary decision; I will update the post once I obtain it.

All these calculations and figures just prove my first thoughts about the new vSphere 5 licensing model - it is beneficial and flexible for owners of big farms and Enterprise Plus edition licenses. It is aimed at big players, and I am afraid smaller virtualization customers may swing towards free MS Hyper-V solutions. It also seems like VMware has decided to give up on the SMB market and focus on large cloud providers.

The value of the main overcommitment technology, TPS, is significantly decreased with the new licensing. The overall rate of memory overcommitment will also go down, although my assumption was that resource overcommitment was one of the main drivers towards virtualization.
Another bad impact I can think of is that vSphere admins will try to size their VMs to fit vRAM pools, and such an approach can badly affect VM performance, thus undermining trust and confidence in virtualization technologies.

Update 1: As I understood it, if you have a Production support contract you are entitled to a free upgrade to vSphere 5, and your support contract will be transferred to vSphere 5 as well. I guess it is time to revise your current support contracts and their expiry dates.

Update 2: I just read something really interesting for those whose Support & Subscription contracts have expired. Normally this would force you to buy vSphere 5 plus a new SnS contract. However, the information below means you can save quite a few thousand of your currency on the upgrade to vSphere 5 by renewing your expired SnS contracts.

Reinstatement Options for Customers with Inactive SnS Contracts:
  • The applicable SnS fees for the current contract term
  • Fees that would have been paid for the period of time that the customer’s SnS contract was not active
  • A 20 percent fee on the sum of the fees in the preceding two items
Update 3: It seems we need to wait until VMware changes the vRAM entitlements today and then recalculate the price of the upgrade to vSphere 5.




If you find this post useful please share it with any of the buttons below. 

Wednesday, 13 July 2011

vSphere 5 - new features from the admin's perspective

It is really hard to write about vSphere products in a more interesting and intelligent way than Duncan Epping does. However, I am so excited about everything I have heard and read today (and I am sure I will be reading more for at least the next 2 hours) that I decided to publish some short notes I could catch in the waterfall of vSphere 5 new features.

• Currently about 40% of server workloads run in virtual environments; this number will reach 50% by the end of 2011.
• The main point of the first part of the presentation was the global move to the cloud computing model - either private or public. It is no longer only about virtualizing your physical servers, but about providing a Virtual Machine AppStore to the clients. Basically, VMware is pushing all vSphere implementations towards the IT-as-a-Service model.
• The new cloud infrastructure suite now consists of the following products:
  • vSphere 5
  • vCenter SRM 5
  • vCenter Operations 1.0
  • vShield Security 5
  • vCloud Director 1.5
My short experience covers mostly vSphere, so I tried to write down everything about it, and therefore I missed the other products' new features.

Let's concentrate on the most important part, the vSphere 5 improvements:

• Profile Driven Storage – allows you to create storage tiers, also called Datastore Clusters. For instance, you create 3 datastore clusters with different performance specs, using flash disks for Tier 1, SAS disks for Tier 2 and FATA disks for Tier 3. When you create a VM you just assign it to the proper Datastore Cluster according to the VM's service level requirements. If the performance of the current VMFS datastore doesn't meet the VM's requirements, the VM will be migrated to a better VMFS datastore. To me it seems like a software replacement for multi-tier hardware storage solutions like HP 3PAR arrays.
• Storage DRS – it is very close to regular DRS. It will also take care of the initial placement of a VM with regard to available space, you can create affinity rules to keep some of your VMs on separate datastores, and it will migrate your VMs according to the I/O balancing level you set and the space allocation rules you create. Finally, you can move a datastore into Maintenance mode, so all VMs will be moved to other datastores for the period of maintenance.
• VMFS 5:
  • All VMFS datastores are formatted with a standard 1 MB block size
  • The new VMFS can grow up to 64 TB
  • A VMDK file is limited to 2 TB
  • vSphere 5 supports both the new and the old versions of VMFS
  • If you want to upgrade to VMFS 5, it is easy and fast to do
• vSphere Storage Appliance – not sure if I got all the details correctly. Briefly, it will let you use the local disks of your ESXi hosts to create a kind of virtual shared storage. You don't need a shared storage array, but you still get some of its benefits. Nothing was said about its limitations and restrictions, though – something for me to investigate. I guess it is mostly aimed at small vSphere implementations.
• New virtual hardware version 8, including 3D graphics and Mac OS X Server support
• New VM maximums - now you can create a Monster VM:
  • 32 vCPUs
  • 1 TB of RAM
  • 36 Gbps of network throughput
  • 1,000,000 IOPS
• High Availability
  • The concept of HA has been changed completely to a Master/Slave model with an automated election process. There is now only one Master and the rest of the nodes are slaves; the Master coordinates all HA actions. With this new model vSphere admins don't need to worry about the placement and distribution of HA primary hosts across blade enclosures.
  • HA doesn't rely on DNS anymore.
  • The big change for HA is that all nodes will now use storage paths and subsystems for communication in addition to the network. This will help nodes better understand the health of their neighbour nodes.
  • FT VMs are still limited to 1 vCPU, but perhaps we can use more than 1 vCore per vCPU with an FT VM - I have to check. VMware has also increased the range of FT-supported CPUs and Operating Systems.
• Auto Deploy – a tool for easy bulk deployment of ESXi hosts. vCenter can keep Images and Host Profiles, and you can create rules that tell vCenter which Image, Host Profile and Cluster to use while installing a new host. The bad thing for me is that I need to start learning PowerCLI – you need these skills to create auto deployment rules (see the sketch right after this list).
• Enhanced Network IO Control – now it is per-Virtual-Machine control. In vSphere 4.1 it was per port with the vDS.
• Storage IO Control – control added for NFS storage
• vMotion – VMware has finally implemented load balancing of vMotion traffic over several vMotion-enabled VMkernel NICs. vMotion is now supported with latency of up to 10 ms.
• vCenter Linux-based Virtual Appliance – you can still use vCenter on Windows. I didn't get what the difference is between the two.
• vCenter SRM 5
  • No more requirement for hardware array-based replication; it can all be done in software now, so you can have different storage in your Disaster Recovery site.
  • Failed VMs can fail back to the main vSphere site once it is fully restored.
  • Proactive migration using SRM – for instance, if you expect a power outage in your main datacenter you can manually initiate a failover to the DR site.
• vCenter Operations 1.0 - SLA Monitoring
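Since Auto Deploy was the item that worried me about PowerCLI, here is roughly what a deployment rule looks like, as far as I understand it; the depot URL, image profile name, cluster name and pattern are all invented examples.

  # Rough Auto Deploy sketch (depot URL, image profile, cluster and pattern are invented)
  Add-EsxSoftwareDepot http://depot.example.local/vsphere5/index.xml
  $image   = Get-EsxImageProfile -Name "ESXi-5.0.0-standard"
  $cluster = Get-Cluster -Name "Prod-Cluster"
  # New hosts whose vendor string matches HP get this image and land in Prod-Cluster
  $rule = New-DeployRule -Name "HP-Blades" -Item $image, $cluster -Pattern "vendor=HP"
  Add-DeployRule -DeployRule $rule      # activate the rule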

Here comes the biggest concern of all admins that have already built some plans for the upgrade to vSphere 5 – licensing!
Licensing is still counted in CPU units: if you have 4 CPUs in 2 ESXi hosts you will need 4 licenses. The good news is that your CPUs are no longer limited by the number of cores. More good news is the elimination of the RAM limit per host.

Now we can proceed with the bad news. VMware has created a new definition – vRAM entitlement, that is, how much RAM you can assign to your VMs per license. For instance, with 1 license for the vSphere 5 Enterprise edition you are entitled to use 32 GB of RAM. Even if your host has 48 GB of RAM, you won't be able to assign more than 32 GB to your virtual machines. You can merge vRAM entitlements into pooled vRAM, that is, the sum of your vRAM entitlements across all hosts connected to your vCenter or across all linked vCenter instances. Consumed vRAM has to be less than pooled vRAM. The amount of entitled vRAM differs between vSphere editions.

Here is another example of pooling vRAM. You have 2 hosts with 2 CPUs and 96 GB of RAM each. With 4 Enterprise Edition licenses your pooled vRAM equals 128 GB, but you want to use all 192 GB your hosts have. Then you will need to buy 2 more Enterprise edition licenses: even though you don't have two more CPUs, you can use the vRAM entitlement that comes with each per-CPU license. The same arithmetic is spelled out in the snippet below.
As I understand it, powered-off and suspended VMs are not counted against pooled vRAM. I think this will significantly change the approach to VM memory provisioning and management. I can imagine that some admins will start giving VMs less RAM than they need, trying to cut licensing expenses.
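To make the pooling example concrete, a couple of PowerShell lines with the same numbers:

  # Pooling example: 2 hosts x 2 CPUs, 96 GB RAM each, Enterprise = 32 GB vRAM per license
  $physicalRam = 2 * 96                                                 # 192 GB installed
  $pooledVram  = 4 * 32                                                 # 4 CPU licenses -> 128 GB pool
  $extraLicenses = [math]::Ceiling(($physicalRam - $pooledVram) / 32)   # 2 more licenses needed
  "Extra Enterprise licenses to cover all 192 GB: $extraLicenses"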

Those companies that overcommit memory heavily will need to pay extra for the upgrade to vSphere 5, which makes all memory overcommitment technologies less valuable now. No matter how many GBs you can save using Transparent Page Sharing – you still have to pay for all allocated vRAM.
There is also a tool that can help you calculate the amount of vRAM you consume and give you an idea of how many licenses to buy and which edition to choose.

For owners of an Enterprise License Agreement the upgrade process is quite straightforward – just contact VMware and they will provide the licenses to you.

It seems that with the new licensing model VMware is aiming more at the market of big companies, where the flexibility of pooled vRAM across multiple vCenter instances can save some money. SMB companies will definitely need to pay more for vSphere 5.

Update 1: I should never use MS Word again for blogging. It screwed up all my text formatting.

Update 2: Guys, I will really appreciate your feedback and comments about the quality of the blog content.

Update 3: There is no hard restriction on vRAM allocation for the Enterprise and Enterprise Plus editions. If you use more vRAM than you are entitled to, you will just be given a warning.


If you find this post useful please share it with any of the buttons below. 

Tuesday, 5 July 2011

12th of July - Don't miss the vSphere 5 announcement

I knew the new version of vSphere would be released before VMworld 2011, but I didn't realize it was so close. According to the VMware.com site, the presentation of vSphere 5 is scheduled for the 12th of July. Don't forget to register for the live webcast called "Raising the Bar, Part V". To make the time go faster you can read the following blog posts about new features in vSphere 5.0 - 1, 2, 3.

I personally like the new Storage DRS, the increased size of VMFS partitions and the Auto Deploy features. I am very eager to read some tech papers on what new CPU and memory virtualization technologies will be implemented in the new ESXi. However, the first books about vSphere 5 are not going to be released before September 2011.

The only concern not covered in all these rumours about vSphere 5 is whether the licensing will change. I remember that previously you only had to have a proper support contract in order to upgrade to vSphere 4. If it is still the same, I guess we will be one of the first companies to upgrade our virtual farms. At least, I can promise I will not rest until this happens. I feel like I am 14 again, waiting for a new version of Civilization to be released :)

One week to go!

Why Virtual Standard Switch (vSS) doesn't need Spanning Tree Protocol

Today I want to write down the new things I have learnt recently about the vNetwork Standard Switch in vSphere 4.1 and why it doesn't need the Spanning Tree Protocol.


I assume you already have basic knowledge of switching, VLANs, switching loops, the Spanning Tree Protocol and link aggregation protocols. I will go very quickly through the main features of standard vSwitches, focusing on facts that are not very obvious from the official documentation, at least not to me. Generally speaking, this article will be more useful for people who already have some experience with vSphere networking.


The main goal of the standard vSwitch is to provide connectivity between your virtual machines and the physical network infrastructure. Additionally, it provides logical division of your VMs with port groups, offers different load balancing algorithms in case you have more than one uplink, supplies an egress traffic shaping tool (from VMs towards the physical switches) and, finally, provides network uplink failover detection.
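If you want to see which load balancing and failover detection settings are currently applied on a host, here is a quick PowerCLI sketch; the host and vSwitch names are just examples.

  # Inspect the NIC teaming policy of a standard vSwitch (names are examples)
  $esx = Get-VMHost -Name esx01.example.local
  $vs  = Get-VirtualSwitch -VMHost $esx -Name vSwitch0
  Get-NicTeamingPolicy -VirtualSwitch $vs    # shows the load balancing and failover detection settings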