Monday, 4 April 2016

StarWind Virtual SAN review - Part 2 - Network/iSCSI topology

In this chapter I will be focusing on Networking topology and Multipathing configuration.

Multipathing heavily depends on the networking configuration, so these two topics shouldn't be discussed separately.

As I already mentioned, StarWind Virtual SAN has two deployment alternatives:

1. Traditional setup where the compute and storage layers are separated. In this scenario StarWind Virtual SAN runs on a separate set of servers.


2. Hyperconverged setup where StarWind Virtual SAN servers run on top of the hypervisors. It should be noted that there is a major distinction between Hyper-V and other hypervisors: with ESXi or XenServer, StarWind runs inside guest VMs, whereas with Hyper-V the Virtual SAN is installed as a native Windows application.

Please note that I am reviewing the hyperconverged scenario with StarWind virtual servers running on top of ESXi hosts. The storage and networking configuration for Hyper-V or XenServer may differ noticeably.

Let's have a look at a standard diagram of the StarWind HA solution running on 2 x ESXi servers.




As you can see, there are 4 different types of networks required by Virtual SAN. Let's briefly explain what each type of network does:

1. VM Network - that's where you connect the management interface of your Virtual SAN servers.

2. Synchronization network is used to mirror storage data between StarWind Virtual SAN appliances. According to the best practices document, the total bandwidth of all SYNC interfaces should be at least equal to the sum of the iSCSI throughput between the ESXi hosts and the StarWind servers. For example, if each ESXi host pushes iSCSI traffic over 2 x 1 GbE links, the SYNC interfaces should be able to carry at least 2 Gbps. That makes perfect sense since every single packet sent by the ESXi host to the local Virtual SAN server has to be mirrored to the other Virtual SAN server.

3. iSCSI network - that's where the actual iSCSI packets are exchanged between ESXi hosts (iSCSI Initiators) and StarWind Virtual SAN (iSCSI Targets). For HA scenarios there always have to be at least two iSCSI networks per host:
a. Local iSCSI network - that's where the ESXi host talks to the locally hosted StarWind Virtual SAN.
b. Remote iSCSI network - this network allows the ESXi host to communicate with the StarWind VSAN on the other ESXi host in case the local Virtual SAN server fails. The networking configuration differs slightly between Round Robin and Fixed multipathing; that will be discussed a bit later.
4. Heartbeat Network is used to avoid split-brain scenarios. When a StarWind node fails to contact its HA partner via the Synchronization network, it will try to reach the partner node over the heartbeat network. Therefore, it is very important to keep SYNC and Heartbeat traffic on separate sets of physical NICs and physical switches. If you want to save your company some bucks, you could just use a pair of crossover links. Heartbeat traffic can also coexist with other types of networks, e.g. Management. An example addressing scheme for these networks is sketched below.
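
To make the layout more concrete, here is a purely hypothetical addressing scheme for a two-node setup (the subnets and the decision to share Heartbeat with Management are my own assumptions, not StarWind requirements):

    Management / VM Network:  192.168.0.0/24   (also carries Heartbeat)
    Synchronization:          172.16.20.0/24   (dedicated NICs, crossover or separate switch)
    iSCSI (local):            172.16.10.0/24   (vSwitch with no physical uplink)
    iSCSI (remote):           172.16.11.0/24 and 172.16.12.0/24   (one dedicated uplink each)
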
Now let's go back to the iSCSI multipathing choice. Basically, I can see two options here, and your choice depends on which Path Selection Policy (PSP) you use: Fixed or Round Robin. With Fixed, your ESXi host always talks to the locally hosted Virtual SAN server; if it fails, all iSCSI paths are switched to the StarWind VSAN hosted on the second ESXi server. This is the only supported PSP when running Virtual SAN on ESXi according to the StarWind Virtual SAN Best Practices document.
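
If you want to verify which PSP is currently applied to the StarWind devices, you can check from the ESXi shell:

    # Lists every device together with its Storage Array Type Plugin and Path Selection Policy
    esxcli storage nmp device list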

There are also some blog posts and StarWind white papers stating that Round Robin can significantly improve READ operations, as the ESXi host will talk to both StarWind VSAN servers, evenly distributing the load. Basically, it is like RAID-10 over the network: the disk spindles of all Virtual SAN servers will be servicing READ requests. However, it comes with a slight performance penalty for WRITE operations. Therefore, you need to thoroughly analyze your workload profile and make a call.


Let's talk about Fixed PSP first.


The StarWind installation guide instructs you to use at least 3 pairs of iSCSI initiators and targets per ESXi host. Each pair should be placed in its own subnet/portgroup, and each portgroup should be placed on a separate vSwitch.
*Note: Even though separate vSwitches are the best practice, you can still use a single vSwitch as long as you correctly configure the active uplinks for each portgroup. Both Standard and Distributed vSwitches are supported.

One of the vSwitches doesn't require physical uplinks, as it will be used for local iSCSI traffic only. The other two each need a single dedicated physical uplink.
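
As a rough illustration (the vSwitch, portgroup, vmkernel, vmnic and IP names below are my own placeholders, not values from the StarWind guide), the local-only iSCSI vSwitch could be created from the ESXi shell like this:

    # vSwitch for local iSCSI traffic - note that no physical uplink is attached to it
    esxcli network vswitch standard add --vswitch-name vSwitch-iSCSI-Local
    esxcli network vswitch standard portgroup add --portgroup-name iSCSI-Local --vswitch-name vSwitch-iSCSI-Local
    esxcli network ip interface add --interface-name vmk2 --portgroup-name iSCSI-Local
    esxcli network ip interface ipv4 set --interface-name vmk2 --ipv4 172.16.10.1 --netmask 255.255.255.0 --type static

    # The two "remote" iSCSI vSwitches are created the same way, each with one dedicated uplink, e.g.:
    esxcli network vswitch standard add --vswitch-name vSwitch-iSCSI-1
    esxcli network vswitch standard uplink add --uplink-name vmnic2 --vswitch-name vSwitch-iSCSI-1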

So here is what your pairs of iSCSI initiators and targets should look like:
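
As a rough textual sketch (the IP addresses continue the hypothetical scheme above, and the vmhba name is a placeholder), the ESXi software iSCSI initiator ends up with one dynamic discovery entry per iSCSI network:

    # Dynamic discovery entries on the ESXi software iSCSI adapter (adapter name is host-specific)
    esxcli iscsi adapter discovery sendtarget add --adapter vmhba33 --address 172.16.10.2:3260   # local StarWind VM
    esxcli iscsi adapter discovery sendtarget add --adapter vmhba33 --address 172.16.11.2:3260   # remote StarWind VM
    esxcli iscsi adapter discovery sendtarget add --adapter vmhba33 --address 172.16.12.2:3260   # remote StarWind VM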



So we end up with a situation where the two physical links between the ESXi hosts are doing nothing, as all iSCSI traffic stays local to the ESXi host. Therefore, you can safely use the same links for SYNC traffic. The thing is that these links are NOT used for iSCSI traffic while both StarWind VSAN appliances are online, and if one of them goes down there will be no SYNC traffic either. So, with the Fixed PSP there is always just one type of traffic between the Virtual SAN servers at any given time. You don't even need a separate pair of virtual NICs in the Virtual SAN servers - just use secondary IP addresses for SYNC traffic.
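
For reference, here is a hedged example of pinning a StarWind device to the Fixed policy with the local path as preferred from the ESXi shell; the naa identifier and the runtime path name are placeholders, not real values:

    # Set the Fixed path selection policy on the StarWind device (placeholder naa ID)
    esxcli storage nmp device set --device naa.60003ff0000000000000000000000001 --psp VMW_PSP_FIXED
    # Pin the preferred path to the locally hosted StarWind target (placeholder path name)
    esxcli storage nmp psp fixed deviceconfig set --device naa.60003ff0000000000000000000000001 --path vmhba33:C0:T0:L0
    # Verify which path is currently active/preferred
    esxcli storage nmp path list --device naa.60003ff0000000000000000000000001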


There are cases where Round Robin is preferable, as it provides a significant increase in read IOPS since the traffic is evenly balanced across both StarWind VSAN servers. Basically, all disk spindles from both ESXi hosts contribute to read IOPS.
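
Switching a StarWind device to Round Robin can be done per device from the ESXi shell; the naa identifier below is again a placeholder:

    # Select the Round Robin path selection policy for the StarWind device (placeholder naa ID)
    esxcli storage nmp device set --device naa.60003ff0000000000000000000000001 --psp VMW_PSP_RR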

In this scenario I would recommend using separate 1 GbE physical links for SYNC and iSCSI traffic to avoid network congestion.

Another option would be switching to 10/40 GbE links, keeping both traffic types on the same set of physical links and thoroughly monitoring the bandwidth.


It also has to be noted that StarWind supports directly connected 10/40/100 GbE links, which lets you achieve the best performance while saving some bucks on expensive switching gear.

But if you have iSCSI and SYNC traffic competing for bandwidth on the same set of physical links, there is a special setting in Virtual SAN which can be used to set the priority for each of the traffic types.



Another important setting to consider with Round Robin is the number of iSCSI paths from ESXi to StarWind. You need to make sure the ESXi host has an even number of paths, split evenly between the local and the remote StarWind node. For instance, with the network topology we used in the previous scenario (one local path and two remote paths) and multipathing set to Round Robin, for every one I/O the ESXi server sends to the local StarWind, two I/Os are sent over the network to the StarWind hosted on the second ESXi host. This is a pretty imbalanced config which can result in unpredictable storage performance.
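
To illustrate the point (the device ID is once more a placeholder), you can count the paths per device and, if you want to experiment, lower the Round Robin IOPS limit from its default of 1000. Note that this knob only changes how often ESXi switches paths; it does not fix an odd path count:

    # List the paths for the StarWind device - ideally you want an even number here,
    # balanced between local and remote targets
    esxcli storage nmp path list --device naa.60003ff0000000000000000000000001
    # Optionally switch paths after every single I/O instead of the default 1000
    esxcli storage nmp psp roundrobin deviceconfig set --device naa.60003ff0000000000000000000000001 --type iops --iops 1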


