Wednesday, 16 November 2011

iSCSI multipath configuration from the CLI only (ESX/ESXi 4.0)

I have already booked my VCAP-DCA exam for the 20th of December to give myself a deadline. So I have again started reading through the VCAP-DCA blueprint and the study guide by Edward Grigson. At the same time I am making a task list of things to practice on the command line, as it is my weakest point in VMware knowledge, alongside scripting and PowerCLI. I still have 35 days to cover at least the basics of those.

Yesterday I played with iSCSI multipathing in a lab that I had just rebuilt from scratch and decided to document all the commands I used, so that I can refresh this knowledge a couple of days before taking the exam. I have also included all the steps I took before I could get SSH access to my ESXi host. Surely, all this information can easily be found on Google, but it is nice to have everything you need in one place, isn't it?

Brief description of the lab config:
I have two hosts, one ESXi 4.0 and one ESX 4.0, plus one FreeNAS virtual appliance. Each host has 3 vmnics. Vmnic0 is solely for management purposes. Vmnic1 and vmnic2 are configured in a separate network for iSCSI connections. FreeNAS has only 1 NIC and 1 IP address, on which 2 iSCSI targets are presented.

Let's start.


1a. Getting SSH access to the ESXi 4.0 host

SSH access (also called Remote Tech Support Mode) is not enabled by default in ESXi. Therefore you need to get to the server's console and press Alt+F1 there.
  • type unsupported (the command is not echoed on screen) and you will get a password prompt
  • enter the root password
  • use vi to edit /etc/inetd.conf
  • scroll down to the Remote Shell Access section, remove the # before the ssh line, save the file and quit

  • to get the change applied you need to restart the inetd process. Find the inetd process ID by running ps | grep inetd, then use this ID in the command kill -HUP xxxx
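
Put together, the console part after Alt+F1 looks roughly like this; the inetd PID shown is just an example:

  • vi /etc/inetd.conf - remove the # in front of the ssh line, then save and quit with :wq
  • ps | grep inetd - note the inetd process ID, e.g. 1234
  • kill -HUP 1234 - use the PID from the previous command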



If you haven't made any mistakes you can proceed to step 2.

1b. Getting SSH access to the ESX 4.0 host

Here the procedure is a bit different.


  • run vi /etc/ssh/sshd_config and change PermitRootLogin to yes
  • restart sshd service with command service sshd restart
If you haven't made any mistakes you can proceed to step 2.

2. Create a vSwitch
Basically, you will already have a preconfigured standard vSwitch0, but for the sake of the study I deleted it first. So at this point I still have no SSH access :)
  • esxcfg-vswitch -a vSwitch0 

If you see just one command, it means the same command is used for both ESXi and ESX 4.0.
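
As a quick sanity check, you can list the vSwitch configuration at any point; the same command also shows the portgroups and uplinks you are about to add in the next steps:

  • esxcfg-vswitch -l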

3. Create Management and 2 iSCSI portgroups
  • esxcfg-vswitch vSwitch0 -A "Management Network"
  • esxcfg-vswitch vSwitch0 -A "iSCSI-1"
  • esxcfg-vswitch vSwitch0 -A "iSCSI-2"

4. Assign Uplinks to vSwitch and Portgroups

Note that all physical uplinks will automatically be assigned to all Portgroups. Therefore, you will need to adjust the physical vmnic assignment for each Portgroup in the next step.

  • esxcfg-vswitch vSwitch0 -L vmnic0 - for Management Network Portgroup
  • esxcfg-vswitch vSwitch0 -L vmnic1 - for iSCSI-1 Portgroup
  • esxcfg-vswitch vSwitch0 -L vmnic2 - for iSCSI-2 Portgroup

5. Adjust the physical vmnic assignment for each Portgroup
I will put just one command here to keep it short. However, you will need to run more commands to end up with one vmnic per Portgroup, as shown in the full set below.

  •  esxcfg-vswitch vSwitch0 -N vmnic1 -p "Management Network"
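
For reference, the full set of removals for this layout could look like the following, assuming vmnic0 should stay on the Management Network, vmnic1 on iSCSI-1 and vmnic2 on iSCSI-2:

  • esxcfg-vswitch vSwitch0 -N vmnic1 -p "Management Network"
  • esxcfg-vswitch vSwitch0 -N vmnic2 -p "Management Network"
  • esxcfg-vswitch vSwitch0 -N vmnic0 -p "iSCSI-1"
  • esxcfg-vswitch vSwitch0 -N vmnic2 -p "iSCSI-1"
  • esxcfg-vswitch vSwitch0 -N vmnic0 -p "iSCSI-2"
  • esxcfg-vswitch vSwitch0 -N vmnic1 -p "iSCSI-2"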

6. Create the Management vmknic and Service Console interfaces

  • esxcfg-vmknic -a -i 192.168.11.1 -n 255.255.255.0 "Management Network" - for ESXi
  • esxcfg-vswif -a -p "Management Network" -i 192.168.11.2 -n 255.255.255.0 vswif0 - for ESX
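
Optionally, if your management network is routed, you may also need a default gateway for the VMkernel interface on ESXi; the gateway address below is just a placeholder for your environment (on classic ESX the Service Console gateway is set in /etc/sysconfig/network instead):

  • esxcfg-route 192.168.11.254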

7. Create vmknics for the iSCSI Portgroups
From here on I can already use SSH, since we now have everything configured for remote access to the hosts.
  •  esxcfg-vmknic -a -i 172.16.11.1 -n 255.255.255.0 -p iSCSI-1
  •  esxcfg-vmknic -a -i 172.16.11.2 -n 255.255.255.0 -p iSCSI-2

Traffic from/to each vmknic will always go via a different Portgroup and therefore via a different physical uplink. We will do the iSCSI traffic load balancing at the NMP level, not at the networking level (an optional Round Robin example follows step 13 below).

At this stage you should have the following result on your vSwitch.




Virtual machine networking in VMware Workstation 8 does not support Jumbo Frames, so I haven't used them here. If you test this configuration on real hardware, you just need to enable Jumbo Frames on the vSwitch, on the vmknics, on the physical switch between the host and the iSCSI storage, and on the iSCSI storage itself.
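
A minimal sketch of those host-side MTU changes, assuming vSwitch0 as above and that your build of esxcfg-vmknic supports the -m MTU option (the vmknics have to be recreated with the larger MTU; the physical switch and storage side depend on your hardware):

  • esxcfg-vswitch -m 9000 vSwitch0
  • esxcfg-vmknic -d iSCSI-1
  • esxcfg-vmknic -a -i 172.16.11.1 -n 255.255.255.0 -m 9000 -p iSCSI-1
  • esxcfg-vmknic -d iSCSI-2
  • esxcfg-vmknic -a -i 172.16.11.2 -n 255.255.255.0 -m 9000 -p iSCSI-2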

8. Enable iSCSI
By default the software iSCSI initiator is disabled.

  • esxcfg-swiscsi -e
8a. Enable iSCSI traffic on the ESX firewall
By default, traffic from the ESX host to TCP port 3260 is blocked.
  • esxcfg-firewall -e swISCSIClient
9. Bind both vmknics to the iSCSI initiator
On both my ESX and ESXi hosts the software iSCSI initiator shows up as vmhba33; if yours gets a different vmhba number (it depends on the other storage adapters in the host), substitute it in the commands below.

  • esxcli swiscsi nic add -d vmhba33 -n vmk1
  • esxcli swiscsi nic add -d vmhba33 -n vmk2
You can run esxcli swiscsi nic list -d vmhba33 to check if you configured it correctly



10. Configure iSCSI Dynamic Discovery
  • vmkiscsi-tool -D -a 172.16.1.100 vmhba33

You can check that the discovery address was added correctly using this command:

  • vmkiscsi-tool -D -l vmhba33
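
To list the individual targets rather than the discovery address, the target listing option should also work (assuming the -T -l combination behaves the same on your build of vmkiscsi-tool):

  • vmkiscsi-tool -T -l vmhba33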


11. Rescan devices on vmhba33
  • esxcfg-rescan vmhba33
12. Check if your host can see the new iSCSI devices
  • esxcfg-scsidevs -u

13. Check if your host can see 2 paths for each iSCSI target
  • esxcfg-mpath -L | grep vmhba33
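
Optionally, if you want I/O to actually use both paths, you can switch the path selection policy of the iSCSI device to Round Robin. This is just a minimal sketch; the naa identifier is a placeholder for the device ID reported by the list command:

  • esxcli nmp device list
  • esxcli nmp device setpolicy --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR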

Sorry for the bad formatting, I didn't have time to make it look good.
I hope you will find it useful, perhaps as a hint in situations when you know the process in general but have forgotten the order of the steps or the syntax of the commands.

PS. Removing discovered iSCSI targets is not very obvious. I spent some time until I figured out how to do it:

  • vmkiscsi-tool -D -r 172.16.1.100 vmhba33
  • vmkiscsi-tool -S -r "172.16.1.100 iqn.2007-09.jp.ne.peach.istgt:VMFS-1" vmhba33
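
After removing the entries, rescanning the adapter (the same command as in step 11) should make the stale devices and paths disappear:

  • esxcfg-rescan vmhba33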

