Monday, 31 August 2015

Another budget vSphere home lab post – Part 2


The first part of the post can be found here

Finally, I've got some time to write the second part about my home lab.

We'll start with some pictures of the build process, but if you find them boring just scroll down to the next section.





I just have to mention that the assembly process was very simple and straightforward, even for a guy who built his last PC about 15 years ago. It took me only a couple of hours to get both servers powered on.


ASRock and IPMI


Well, first of all I would like to note that ASRock's documentation for this motherboard sucks. It covers the configuration of the IPMI interfaces very poorly, and its undocumented behaviour cost me a couple of hours before I figured out what was going on.


So, the ASRock C2750D4I motherboard comes with 3 network interfaces - 1 dedicated IPMI port and 2 Gigabit LAN ports.





That's what it looks like - simple and clear, right? Well, I thought so too, and I was wrong.


When I plugged all 3 network cables into my switch and powered on the server, I found that it had received 2 IP addresses. How come? It had no OS installed, and I expected only the IPMI interface to get an IP address.

It turns out the IPMI functionality is fault tolerant. Apart from the dedicated IPMI interface (number 2 in the illustration above), the LAN1 port (number 3) is also used by IPMI for redundancy. So basically LAN1 is shared between IPMI and the OS.

To make it even more confusing, ASRock gave the interfaces odd names in the IPMI web interface and the BIOS: the shared LAN1 port is called Eth0 and the dedicated IPMI interface is called Eth1. These names have nothing to do with the regular Gigabit interfaces the OS sees - they are used solely for IPMI.

To get things back to a more conventional setup, you may want to disable the Eth0 interface in the IPMI web interface and configure Eth1 with a static IP address.
I bet you will have to read this paragraph twice to understand what ASRock completely ignored in their documentation.
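
If you prefer a command line over the IPMI web interface, the static-IP part can usually be done with ipmitool, either locally from the installed OS or remotely against the BMC. This is only a sketch and not how I actually configured mine: the channel number (1) and the addresses below are assumptions for illustration, so check yours with "ipmitool channel info" first.

    # Show the current LAN settings of BMC channel 1 (channel number is an assumption)
    ipmitool lan print 1

    # Pin the BMC to a static address instead of DHCP
    ipmitool lan set 1 ipsrc static
    ipmitool lan set 1 ipaddr 192.168.1.20      # example address
    ipmitool lan set 1 netmask 255.255.255.0
    ipmitool lan set 1 defgw ipaddr 192.168.1.1

Disabling the shared Eth0 port itself is still easiest to do in the IPMI web interface, as described above.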


Networking


I hadn't had any experience with SOHO switches, so I asked the guys at Spiceworks for a recommendation, and their advice was to go with the Netgear ProSafe range. However, my colleague had just bought a TP-Link smart switch and, to my surprise, its CLI was very similar to Cisco's. I used to be a Cisco networking guy, so I just fell in love with TP-Link switches.

I did a quick count of the minimum number of ports I needed: 2 servers with 3 NICs each, a Synology DS415 with 2 NICs, and an uplink to my ADSL modem. So the minimum was 9 ports. Given that I'm planning to throw another 2 or 4 Gigabit ports into each of the servers, I went with a 16-port Gigabit switch - the TL-SG2216.




It has most of the L2 features - VLANs, trunks, EtherChannels (including LACP support) - and that's pretty much all I needed.
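
On the ESXi side of those trunks, all that is needed is to tag the port groups with the matching VLAN IDs. Here is a minimal sketch with esxcli, assuming a standard vSwitch called vSwitch0; the port group names and VLAN IDs are just examples, not my actual config.

    # Create tagged port groups on a standard vSwitch (names and VLAN IDs are examples)
    esxcli network vswitch standard portgroup add -v vSwitch0 -p Management
    esxcli network vswitch standard portgroup set -p Management --vlan-id 10
    esxcli network vswitch standard portgroup add -v vSwitch0 -p iSCSI
    esxcli network vswitch standard portgroup set -p iSCSI --vlan-id 20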


Another great feature of the switch is that it is fanless and produces no noise at all.

It would be great to have a decent L3 Gigabit switch, but all the options I looked at were above my budget.
So I ended up with Sophos UTM Home Edition as my router, which comes with a lot of other nice features for free. If you haven't used it before, give it a go. I have never seen a simpler or more reliable virtual router appliance, even though it is not really fair to call it just a router - it can do much more, and it is just great for home labs.



Synology - LACP and Trunk configuration


When I started configuring the Synology I noticed that I could easily bond two NICs and even assign a VLAN ID to the bond, but I couldn't configure multiple VLAN IDs on the bond interface and therefore couldn't assign more than 1 IP address. Basically, it is not possible to configure a trunk in the GUI.
A quick search revealed that it is pretty easy to configure multiple VLANs/IPs on the same network bond.

Basically, you create a bond in the GUI and then simply copy the existing bond configuration file and customize it for a new VLAN/IP.
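
For anyone curious what that looks like under the hood, here is a rough sketch of the idea over SSH. The file path follows DSM's Red Hat-style ifcfg layout, but the exact key names, the VLAN ID and the addresses below are assumptions for illustration - mirror whatever keys the bond file created by the GUI already uses.

    # SSH into the Synology and look at the bond the GUI created
    cd /etc/sysconfig/network-scripts
    cat ifcfg-bond0

    # Copy it once per additional VLAN ("10" and the addressing are examples only)
    cp ifcfg-bond0 ifcfg-bond0.10
    vi ifcfg-bond0.10
    #   DEVICE=bond0.10
    #   VLAN_ID=10
    #   IPADDR=192.168.10.5
    #   NETMASK=255.255.255.0
    #   ONBOOT=yes
    # (keep/adjust whatever keys your DSM version already has in ifcfg-bond0)

    # Reboot the NAS (or restart its networking) so the new interface shows up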







And after that you can see all of them in the GUI, though each has the same name.





The only thing I haven't figured out yet is how to bind a particular service to a particular bond interface. For instance, I would like to have the media server available only on the first interface, management on the second and iSCSI on the third. Currently all of the services are available on all IP addresses assigned to the Synology. Not a big deal for a home lab, but I really hope Synology will address this in DSM 6.0, which is due to be announced at the Synology 2016 event in mid-September.

I have already registered for this event in Sydney.


Performance of the Avoton CPUs

I will show you some screenshots to give an idea of what kind of load the lab is handling right now. By the way, I originally bought 16GB of RAM per host, but I also had 4 x 4GB DIMMs lying around, so currently each host has 24GB of RAM.

Here is the list of my powered-on VMs. Most of the memory is consumed by the vROps nodes and vCenter.




Here are the CPU stats for the last week. I haven't been doing anything CPU-intensive in the lab lately, but even when I was running CPU-intensive tasks I never saw CPU usage go higher than 60% on either of the servers.






These are the RAM usage stats. RAM is definitely my bottleneck for now. Even though I have 48GB in total, I have to choose which VMs to run for each lab scenario - currently, for instance, I can't run vROps and vRA at the same time. RAM will definitely be the first thing I upgrade in the lab; I'm looking forward to the Christmas and Black Friday sales.




And here are the datastore latency stats. Even though there are some spikes above 200ms, I have no complaints about performance, considering that all the VMs are running on just 4 x 1TB 7.2K drives in a RAID5 configuration and that the Synology is also shared with the rest of my family as a home media library.
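
The screenshots above come from vCenter and vROps. If you want to watch the same CPU, memory and storage counters live from the ESXi shell, esxtop does the job - a quick sketch, with the sampling values below picked arbitrarily:

    # Interactive mode: press "c" for CPU, "m" for memory, "d"/"u" for disk adapter/device views
    esxtop

    # Batch mode: 30 samples at 5-second intervals saved to CSV for later analysis
    esxtop -b -d 5 -n 30 > stats.csv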

So I really see no performance issues with this lab, even though I was pretty uncertain whether the Avoton CPUs would be able to handle the load. I think I can easily double the amount of RAM and the number of VMs and still have no issues with CPU performance, but at that point the storage will probably become the bottleneck.


With all that said, I should note that some tasks do take longer to complete, especially single-threaded workloads such as Windows Update: it took twice as long to apply all the Windows updates to my Windows VM template, and the vCenter installation took about an hour and a half. So don't expect this CPU to post top benchmark scores, but it delivers very decent performance for a home lab, and its position at the bottom of the high-end CPU chart proves it.


I didn't have time to come up with a tangible benchmark so I just recorded a short video where you can see the following:

  • How responsive vCenter with the vSphere Web Client is
  • How long it takes to vMotion a small VM with FlashCache
  • How long Storage vMotion takes
  • How responsive vROps 6.0 is





Power consumption and noise level

As I expected, this entire setup consumes only slightly more power than the light bulb we used to have when I was 7 years old.


With regard to noise, surprisingly the Synology is the loudest part of the lab, but the whole setup is still very quiet and not annoying at all. Maybe a bit later I will swap the stock fans in the Synology and the servers for Noctua sound-optimised fans.


Here is how it all looks now




A 10RU server rack that fits under the desk would probably be the ideal cherry on the cake. There are a few options on eBay, but I haven't made my decision yet.

In a nutshell, I believe this has been the most valuable investment in my self-education. I can test almost every VMware product and feature, and I can definitely deploy pretty complex lab scenarios, given that the Avoton CPUs have proved very decent performance-wise and that I can expand the current setup to 128GB of RAM.

I will be happy to answer any questions about this setup.

20 comments:

  1. The 128W figure is for the entire setup shown in the "final look" picture? Can you measure the consumption of one server?

  2. I have no access to the lab right now, but I have measured the Synology separately and it was consuming 48-50W, even though its documentation says it should be less than 40W.
    The switch consumes about 10W.

    That means that each server consumes about 35W.

  3. Your parts list has 2 C2750D4I server motherboards and one C2550D4I motherboard, but you have only built two systems in total. Which motherboards did you use?

    1. Thanks for pointing this out. I think I just messed up the tables with the specs. The C2550 was supposed to be an option for replacing the Synology with a whitebox NAS.

      I will update the post.

  4. Hi, thanks for this article. I want to build a home lab server too and I was curious about the performance. How many VMs can you run at the same time? How fast are these VMs? Any information about performance will be welcome... :)

    1. Hi James,

      so far my 48GB of RAM has been the only performance bottleneck. I normally run about 10 VMs - mostly VMware products like vCenter, vRA and vROps, plus some Windows servers. Sure, you can see that it takes a bit longer to install applications on the VMs, but I have never seen the hosts' CPUs using more than 60% of their capacity. So I believe I can easily double the number of VMs.

    2. So this means that one Avoton (C2750D4I) + 48GB RAM runs 10 VMs without problems, and you also think it will be possible to double the RAM and the VMs and the CPU will be able to handle it? Thanks!

    3. That's correct. If you don't run heavy servers, e.g. databases, you are safe to go with this CPU. It definitely runs slower than regular CPUs - restarting the vCSA takes 10-15 minutes - but overall performance is satisfactory. And I like my home lab's wattage.

  5. How about CPU and case fans? Which ones did you use?

    1. I used the Thermaltake Black Core V1 Mini ITX chassis, which comes with a 140mm fan. Not sure what model the fan is. The CPU doesn't require a fan.

  6. Does your registered RAM work with the ASRock motherboard? It says only UDIMMs are supported.

    1. What I did was go to Crucial's website and select the motherboard model. Even though Crucial isn't listed on ASRock's HCL, you can buy their modules after checking compatibility on their site:

      http://uk.crucial.com/gbr/en/compatible-upgrade-for//c2750d4i

      (UK link, but they'll probably redirect you to your country's website)

  7. Hi, excellent report - and it's finally swaying me into buying a (very similar) lab setup.

    Quick question - I'm presuming that with the IPMI interface, there is no need for a KVM (i.e. can you remotely install OS's, etc via the IPMI interface)?

    Finally, the Asrock motherboard only has 4 slots (which, unless I'm mistaken - I frequently am! - allows a maximum of 4 x 16GB = 32GB). However, you've indicated that you're using 48GB? Probably my misunderstanding, but thought I'd check?

    Cheers,


    Nick

    1. Of course, 4x16GB does not equal 32GB, but 64GB! I need a coffee... So, you've obviously bought 3x16GB modules...

      I'll go and lie down now! :p

    2. Hi Nick,

      yeah, these babies rock. I now have two more servers with Xeon D CPUs, but the Avoton servers still do their job.

      Yes, it has the IPMI interface, so you can mount an ISO file via IPMI and install the OS.

      I have 48GB in total - 24GB in each host (2 x 8GB + 2 x 4GB). The server doesn't support an odd number of modules.

      4 x 16GB is a very expensive configuration (almost 600 pounds), to be honest, if you compare it with DDR4 RAM prices. So if you need more than 32GB per server, I would consider another solution.

    3. Many thanks for clarification on IPMI - I thought that was the case, just thought I'd ask!

      I've considered a number of solutions thus far, and with the soaring pound/dollar scenario, I've plumped for the two Avotons. Yes, the RAM is expensive, but the Motherboard/CPU option is relatively cheap, so potentially offsets the RAM.

      I did consider some of the Supermicro Superservers and the Intel NUCs. The Intel NUCs are indeed limited only to 32GB, and the Supermicro servers are only available via one reseller here in the UK. Maybe if I'd been purchasing these later on in the year/New Year, I might have plumped for the Supermicro servers (E300/E200/5028D). Oh well, maybe next year! :)

    4. No worries. If the 32GB RAM limit is a showstopper for you, I would strongly recommend looking at Supermicro. However, I have to warn you - the E200/E300 are pretty loud; this is a common problem with 1U servers. So I would recommend buying the motherboard they install in the E200 (http://www.supermicro.com/products/motherboard/Xeon/D/X10SDV-6C-TLN4F.cfm). Then you can put it in a bigger case of your choice, which will be super quiet, and still have all the benefits of the Xeon D servers - performance and 128GB of memory.

      I have bought an E200 myself and it is an amazing server, but it is still too loud to keep in the living room. So for my next server I bought the x10sdv-6c+-TLN4F mobo and, literally just 30 minutes ago, finished building the server in a Thermaltake F1 Suppressor case. Apart from the quietness of this case, you also get more slots for disks, and you can use the PCI slot for anything you need. The mobo is not that expensive - I got mine for less than 700 USD.

    5. Ah-ha - now that might have been an option - I hadn't thought about using the same motherboard but in a different case!

      My theory about the 32GB limit was to buy 32GB now (2 x 16GB in each Avoton) and then (if required) buy more RAM in the future (even if it is expensive). But, yes, c. 145 GBP per 16GB module (directly from Crucial) is a bit steep...

    6. Yes, DDR4 for the Xeon D is way cheaper. I bought 2 x 32GB DIMMs for 175 USD each.
