In a past post I covered how to configure iSCSI over a LAG to provide some path redundancy over a single VMkernel IP. You can read about that here. For several reasons this is not the best way to configure multipathing, so here is a write-up on the proper way to set up the Multipathing Plugin on a VMware ESXi 5 server (I've also included steps to undo what may have been set up in the past).
- Download and install WinSCP from here.
- Download the EqualLogic Multipathing Agent for VMware.
- Download, install, and configure the VMware vSphere Management Assistant (vMA); read about how to do that here.
- Optionally, install VMware Update Manager, which can be used to install the MEM in the event that the setup.pl --install script does not work.
If you’ve already had iSCSI configured on this host, it’s time to make note of a few things and then clean up before we get the EqualLogic MEM installed.
- Make note of all IPs that are being used by a host for iSCSI
- Make note of which NICs are being used by the vSwitch setup for iSCSI
- Delete the VM Kernel ports that are attached to the iSCSI vSwitch
- Delete the iSCSI vSwitch
Disable iSCSI on the Host
- Connect to the vMA using PuTTY, and then attach to your host using the following command:
vifptarget -s <host's FQDN>
- For ESXi 4.x enter the following command:
esxcfg-swiscsi -d
- For ESXi 5 enter the following command:
esxcli iscsi software set -e false
- Reboot the Host
Enable iSCSI on the Host
- For ESXi 4.x enter the following command:
esxcfg-swiscsi -e
- For ESXi 5 enter the following command:
esxcli iscsi software set -e true
Remove the old VMK bindings from the iSCSI HBA
For each of the VMkernel ports that you made note of before, run the following command, where <vmk_interface> is your vmk port (such as vmk1 or vmk2) and <vmhba_device> is your iSCSI vmhba adapter (such as vmhba38):
- For ESXi 4.x:
esxcli swiscsi nic remove -n <vmk_interface> -d <vmhba_device>
- For ESXi 5:
esxcli iscsi networkportal remove -n <vmk_interface> -A <vmhba_device>
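If several vmk ports were bound, a small loop run from the vMA shell can print the ESXi 5 unbind commands for review before you run them. The interface and adapter names below are just the examples from the text; substitute your own:

```shell
# Ports and adapter noted earlier -- replace with your own values.
VMK_PORTS="vmk1 vmk2"
VMHBA="vmhba38"

# Print one unbind command per port; drop the 'echo' to execute instead.
for vmk in $VMK_PORTS; do
  echo "esxcli iscsi networkportal remove -n $vmk -A $VMHBA"
done
```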
Installing the EqualLogic Multipathing Agent
Now that our host is so fresh and so clean (in terms of iSCSI, anyway), it's time to start configuring the Multipathing Extension Module.
Move the Setup Script and Bundle to the vMA
- Connect to your vMA using WinSCP; it should drop you into the home directory for the user 'vi-admin'.
- Locate the files that were extracted from the zip file you downloaded from EqualLogic; you are looking for "setup.pl" and "dell-eql-mem-esx5-X-X.X.XXXXXX.zip". The version of the .zip file will depend on whether you're installing on ESXi 4.x or ESXi 5, so make sure you copy the right file.
- Once you've moved both files to the vMA, right-click the "setup.pl" file from within WinSCP and select "Properties". Under the "Permissions" section, change the "Octal" value to "0777"; this will allow you to execute the script.
- Close WinSCP.
Configuring the MEM
- Connect to your vMA using ssh.
- You should automatically be logged into the home directory of the 'vi-admin' user; verify this by running ls and making sure you see the two files you uploaded.
- Enter the following command to get started:
./setup.pl --configure --server=<esxi server's FQDN>
- Follow the bouncing ball once the script gets started. It will ask you for a username and password for the host, a name for the new virtual switch, which NICs to use (list each one with a space between them), an IP for each VMkernel port it creates, the Group IP you want to connect to, and a few other details such as subnet mask, MTU size, and whether or not to use CHAP. Use the information you collected above and the configuration of the array to answer the questions. When the script completes you should see the new vSwitch and VMkernel ports in your configuration.
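It can help to jot the answers down before starting the script. Here is a sketch of the details it asks for, with made-up example values; substitute what you recorded from the old iSCSI setup:

```shell
# Pre-flight answers for setup.pl --configure.
# All values are examples -- substitute your own details.
VSWITCH_NAME="vSwitchISCSI"
NICS="vmnic2 vmnic3"              # list each NIC with a space between them
VMK_IPS="10.10.5.21 10.10.5.22"   # one IP per VMkernel port created
GROUP_IP="10.10.5.10"             # EqualLogic group IP
NETMASK="255.255.255.0"
MTU=9000
USE_CHAP="no"
echo "vSwitch=$VSWITCH_NAME nics=[$NICS] group=$GROUP_IP mtu=$MTU"
```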
Installing the Bundle
- While still logged into your vMA run the following command: ./setup.pl --install --server=<esxi server's FQDN>
- If you receive an error about being unable to install it, try disabling Admission Control on your HA cluster and re-running the command.
If for some reason you are unable to get the setup.pl --install command to work properly, you can use VMware Update Manager to install the bundle.
- Install and configure vUM, according to VMware instructions.
- Import the MEM offline bundle into the vUM package repository by selecting the “Import Patches” option and browsing to the dell-eql-mem-esxn-version.zip.
- Create a baseline containing the MEM bundle. Be sure to choose a “Host Extension” type for the baseline.
- Optionally add the new baseline to a baseline group.
- Attach the baseline or baseline group to one or more hosts.
- Scan and remediate to install the MEM on the desired hosts. Update Manager will put the hosts in maintenance mode and reboot if necessary as part of the installation process.
- If you get the error fault.com.vmware.vcIntegrity.NoEntities.summary, disable Admission Control and then try to remediate again.
Verifying that everything is working properly
- Once both the --configure and the --install commands have been run, you can run the following command to make sure everything is working properly:
./setup.pl --query --server=<esxi server's FQDN>
It's a little bit more work than the LAG setup, but this is the proper way to get a full and complete EqualLogic multipathing setup installed and working.
Here is a quick guide to installing and configuring vMA 4.1 in a vSphere 4.1 installation. vMA is a management appliance that allows you to more easily manage your hosts or vCenter server. Follow these instructions:
- First download the vMA ovf file from here.
- Open your vSphere client and connect to your vCenter server. Click on the “File” menu and then click “Deploy OVF template…”.
- Click "Browse…" and then locate your downloaded vMA ovf file, click "Next >".
- Click “Next >”, Agree to the EULA, and then click “Next >”.
- Give the vMA a name, and then select the Data center it will be deployed to. Click “Next >”.
- Select the host or cluster it will run on, and then click “Next >”.
- Select the Data store to place the files on, and then click “Next >”.
- Select your disk provision format, and then click “Next >”.
- Select your network from the drop down list, and then click “Next >”.
- Click Finish.
Once the import is finished we can start the wizard to configure the vMA tool. Open your vSphere client, connect to your vCenter server. Follow these steps:
- Find your vMA VM, open its console and click start.
- The vMA will boot to a prompt asking to use DHCP to assign an IP. Enter “no” and press “Enter”.
- It will now prompt for an IP address, enter an IP address and then press "enter".
- It will now prompt for a Subnet mask, enter a mask and then press “enter”.
- It will now prompt for a gateway, enter the IP address of your gateway and then press “enter”.
- It will now prompt you twice for your primary and secondary DNS, enter the IP addresses and press “enter” after each.
- It will prompt you for the vMA's hostname, enter a FQDN and then press "enter".
- Type “yes” to confirm the settings and then press “enter”.
- The vMA VM will now reboot, and when it comes back up it will prompt you twice for a password.
- The VM will now display a screen telling you how to SSH into the box. For now press "Alt+F2" to enter the virtual terminal. Login with "vi-admin" and the password you just created.
Before we continue we should make sure that our Active Directory contains a security group named EXACTLY "ESX Admins" containing the accounts that we want to have Administrator access to our ESX/ESXi hosts. During the domain join process this group will automatically be granted the Administrator role on each ESX/ESXi host.
Now we need to join the vMA to the Active Directory domain. If you're not already logged into the virtual terminal on the vMA VM, follow step 10 above and then perform the following:
- Enter the command "sudo domainjoin-cli join <your domain fqdn> <your AD domain username>" and press "enter".
- The vMA will now prompt you for the password for the “vi-admin” account created on the vMA. Enter it and then press “enter”.
- The vMA will now prompt you for the password for the Active Directory user account you are trying to use to join it to the domain, enter the password and then press “enter”.
- You should now receive an error about the PAM module, and the word "SUCCESS" at the bottom of the screen. You've successfully joined the Active Directory domain.
If we’ve not already joined our ESXi servers to the Active Directory domain now is a good time to do so. This is not a required step, but it will allow us to cut down on the amount of usernames and passwords we’ll need to use to configure our ESXi hosts when using the vMA. Follow these steps:
- Open the vSphere client and connect to your vCenter Server.
- Navigate to “Inventory” and then “Hosts and Clusters”.
- Select the first ESXi host, and then click on the “Configuration” tab.
- Click on “Authentication Services” and then click on “Properties…”.
- Change the “User Directory Service” from “Local Authentication” to “Active Directory”.
- Enter your domain name in the box titled “Domain:” and then click “Join Domain”.
- When prompted, enter your Active Directory username and password, and then click "OK".
- Click the “Permissions” tab.
- Right Click and select “Add Permission…”.
- Change the drop down box to “Administrator” and then click the button titled “Add…”.
- Highlight users and/or groups that should be added to the list of local administrators on your ESXi server. Click the button titled “Add”. Click “OK”.
- Click “OK” again to add the permission.
The next thing we need to do is configure our vMA with a list of servers to manage, and which authentication type to use to manage them. Follow these steps:
- Open the console for your vMA
- If you’re not already logged in, log in as “vi-admin”
- Enter the following command to add your servers: "vifp addserver <host's FQDN> --authpolicy adauth" and then press "enter".
- When prompted for a username, enter the <domain>\<username> of a user who was granted administrator permissions on that ESXi host. Make sure the host is not in standby mode, otherwise you'll get an error.
- Repeat this step for each host and the vCenter server.
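With several hosts to register, the addserver commands can be stamped out in a loop from the vMA shell and reviewed before running them. The hostnames below are placeholders:

```shell
# Servers to register with the vMA -- example names only.
SERVERS="esx1.example.com esx2.example.com vcenter.example.com"

# Print one addserver command per host; drop the 'echo' to execute instead.
for s in $SERVERS; do
  echo "vifp addserver $s --authpolicy adauth"
done
```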
Now that we've got all of our servers in the list, we can issue commands to them by appending --server <Host's FQDN> to each command. If you get tired of having to specify the server each time, you can set which server to use by issuing the following command: vifptarget -s <host's FQDN>. To clear the currently selected server, issue the following command to the vMA: vifptarget -c. Also, if you get tired of having to type your username and password in each time, you can just append the --passthroughauth flag to the end of each command.
I ran into a problem recently when configuring vMA for ESX/ESXi 4.1. I was able to join it, as well as the ESXi hosts, to the domain, but I was unable to log into the ESXi hosts with my AD credentials from either the vMA or the vSphere client. I double-checked that my AD account did have Administrator permissions on the hosts, but still I could not log in. I was given the following error by the vSphere Client, as well as the vMA console:
Error connecting to server at 'https://<hostname>/sdk/vimService.wsdl':
Fault string: A general system error occurred: gss_acquire_cred failed
Fault detail: SystemErrorFault
The interesting thing is this: if I manually specified which account to use, instead of checking the box to use the account I was logged in with, I could connect and perform the actions I wanted to do. If I checked the box, then I got the error "gss_acquire_cred failed". The same was true with vMA: if I used the --passthroughauth option the command would fail, but if I allowed vMA to prompt me for a username and password the command would succeed. Only integrated authentication between Windows and the VMware software was failing.
I did some research, and it turns out that when ESXi is installed on a USB drive, SD card, or flash memory, it does not automatically create persistent scratch space. This is the space that's used to store temporary data, among other things. This lack of persistent scratch space was somehow affecting the login process, but only when trying to pass credentials from a Windows session and not when typing them in manually.
Here is how you can configure Persistent Scratch space on either local storage or a vmfs volume using the vSphere client:
- Connect to vCenter Server or the ESXi host using the vSphere Client.
- Select the ESXi host in the inventory.
- Click the “Configuration” tab.
- Click “Storage”.
- Right-click a datastore and select “Browse”.
- Create a uniquely-named directory for this ESXi host (for example, one named after the host).
- Close the Datastore Browser.
- Click “Advanced Settings” under “Software”.
- Select the “ScratchConfig” section.
- Change the ScratchConfig.ConfiguredScratchLocation configuration option, specifying the full path to the directory (for example, /vmfs/volumes/<datastore>/<directory>).
- Click “OK”.
- Put the ESXi host in maintenance mode and reboot for the configuration change to take effect.
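The scratch location is just a path on a VMFS volume. A quick sketch of how the value is assembled, with example datastore and directory names (substitute the ones you created above):

```shell
# Example scratch path for ScratchConfig.ConfiguredScratchLocation.
DATASTORE="datastore1"        # the datastore you browsed above
HOST_DIR=".locker-esx1"       # the uniquely-named per-host directory
SCRATCH="/vmfs/volumes/$DATASTORE/$HOST_DIR"
echo "$SCRATCH"
```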
Once the host is rebooted you'll be able to use vMA with the --passthroughauth flag, or log in by checking the box in the vSphere client to use the account you're already logged in with. To read more about this, check out VMware's KB1033696.
When we set up our switches, we changed the MTU on our VLAN for iSCSI traffic. Now we need to edit the MTU on our iSCSI port groups and vSwitch to also allow jumbo frames.
The first thing we need to do is take stock of which virtual switch and port group we're using for iSCSI traffic on each ESXi host. Follow these steps:
- Log into your host or vCenter server and then navigate over to your host’s “Configuration” tab.
- Click “Networking” on the left.
- Verify the Port Group name, Virtual Switch name, vmk number, IP address, and which vmnics are being used. See Figure 1.
- If you’ve not already installed either vCLI or vMA see the posts on how to install and configure them Here and Here.
- Open either vCLI or ssh into your vMA VM.
- Enter the following command: esxcfg-vswitch -m 9000 <vSwitch's name> --server <Host's FQDN>
- When prompted for a username and password enter the name and password of an account with Administrator permissions on that host.
- Verify that this change has taken effect by running the following command: esxcfg-vswitch -l --server <Host's FQDN>. The MTU for your vSwitch should now be displayed as 9000.
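If you have several hosts to update, the same MTU command can be stamped out per host; a sketch with placeholder host and vSwitch names:

```shell
# Example hosts and vSwitch -- replace with your own.
HOSTS="esx1.example.com esx2.example.com"
VSWITCH="vSwitch1"

# Print the MTU change command for each host; drop 'echo' to execute.
for h in $HOSTS; do
  echo "esxcfg-vswitch -m 9000 $VSWITCH --server $h"
done
```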
We can't modify the MTU on our port group, so we'll need to migrate any VMs on iSCSI storage off of this host and then remove our iSCSI port group. Once you've migrated any running VMs, follow these steps:
- Open the Properties of the vSwitch that we just modified.
- Select the port group in question, and then click "Remove".
- Now enter the following command in either vCLI or the vMA: esxcfg-vswitch -A "iSCSI" <vSwitch's name> --server <Host's FQDN>. This command re-creates our iSCSI port group and attaches it to our vSwitch, but does not add a vmknic to the port group.
- Now enter the following command to re-create the vmknic: esxcfg-vmknic -a -i <ip address> -n <netmask> -m 9000 "iSCSI" --server <Host's FQDN>.
- We can now verify that our port group has the correct MTU by running the following commands: esxcfg-vswitch -l --server <Host's FQDN> and esxcfg-vmknic -l --server <Host's FQDN>. Check the MTU settings on both your port group and NICs; they should now both be 9000.
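A sketch of the vmknic re-create command above with concrete example values filled in (the IP, netmask, and hostname are placeholders; the "iSCSI" port group name comes from the steps above):

```shell
# Example values for the vmknic re-create command -- substitute your own.
IP="10.10.5.21"
MASK="255.255.255.0"
HOST="esx1.example.com"
CMD="esxcfg-vmknic -a -i $IP -n $MASK -m 9000 iSCSI --server $HOST"
echo "$CMD"   # run this from vCLI or the vMA
```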
We now need to rescan our iSCSI software adapter, and refresh our storage view to make sure our iSCSI Datastores are re-connected. Follow these steps:
- Click on “Storage Adapters” under the “Configuration” tab of your Host.
- Scroll down to your iSCSI Software Adapter, and then click “Rescan All…” in the top right, Verify that the iSCSI LUN(s) have re-appeared.
- Now click on “Storage” under the “Configuration” tab of your Host.
- Click “Rescan All…” in the top right of the screen. Verify that your iSCSI Datastores have re-appeared.
Finally let’s verify that our iSCSI network is fully supporting our jumbo frame size. Follow these steps:
- Log into the console of your ESXi Host.
- Press F2 to customize your host.
- When prompted, log into your Host. Scroll down to “Troubleshooting Options”. Press “enter”.
- Press enter on “Enable Local Tech Support” to enable it.
- Now press "Alt+F1" to enter the console, and then log in again.
- Enter the following command: vmkping -s 9000 <IP address of your SAN's iSCSI interface>. The ping should work and confirm that the MTU is 9000. If this does not succeed, double-check the MTU settings on your switches and SAN.
- Press "Alt+F2" to exit the local console.
- Press enter on “Disable Local Tech Support” to disable the local console on your host.
- Exit your host's console.
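A stricter variant of the ping test accounts for header overhead: a 9000-byte MTU carries at most 8972 bytes of ICMP payload once the 20-byte IP header and 8-byte ICMP header are subtracted, and vmkping's -d flag sets don't-fragment so the ping only succeeds if the full jumbo frame passes end to end. The arithmetic:

```shell
# Max ICMP payload for a 9000-byte MTU: subtract IP (20) and ICMP (8) headers.
MTU=9000
PAYLOAD=$((MTU - 20 - 8))
echo "vmkping -d -s $PAYLOAD <SAN iSCSI IP>"
```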
That’s it, your host is now configured to use jumbo frames, and now you can repeat these steps on the remaining Hosts.
Now that we've got our vCenter server set up and running, it's time to finish up its basic configuration and get our ESXi servers added to it.
The first thing we’re going to need to do is create a datacenter. Follow these steps:
- Right click on the vCenter server in the upper left part of the screen.
- Select “New Datacenter”, assign it a name.
Now we’ll add the Hosts to the newly created Data Center.
- Right click on the Datacenter you just created and select “Add Host…”.
- Enter the Host's name, the username (root) and the password configured during the ESXi Host's original setup process. Click "Next >".
- Click “Yes” when the Security Alert appears.
- Click "Next >" to confirm the summary.
- Assign a license to the Host, or choose evaluation, and then click “Next >”.
- Check “Enable Lockdown Mode” if you want it enabled, Click “Next >”.
- Select the location for your VMs, if there are any. Click “Next >”.
- Click “Finish”.
Repeat this for each of your Hosts, and when you’ve added them all we can move on to creating a HA / DRS cluster.
- Right click on the Datacenter you just created. Select “New Cluster…”.
- Give your new cluster a name, and then select if you want to enable HA or DRS or both. For the purposes of this write up, we’ll be enabling both. Click “Next >”.
- The first section asks you to configure your DRS automation level. I configure this as "Fully automated" with Priority 1, 2, 3, and 4 recommendations being performed. Click "Next >".
- The next section asks how to configure Power Management automation. I configure this to be automatic, and leave the DPM Threshold at the default. Click “Next >”.
- The next section asks about how to configure HA. I leave these at the default settings. Make changes if you wish and then click “Next >”.
- The next section asks about how to handle VMs that stop responding and Hosts that stop responding. I leave these settings at their defaults. Make changes if you wish and then click “Next >”.
- The next section asks about monitoring the guest VMs. Enable VM Monitoring if you want, and then set your sensitivity level. Click “Next >”.
- The next section asks about EVC, if you are running hosts with different versions of processors, then you should enable this, if all of your hosts are identical, you can leave this disabled. Click “Next >”.
- The next section asks about the VM Swap file location. Unless you have a specific reason to do so I would not modify this. I leave it at the default unless I’ve got a raid 0 volume setup somewhere. Click “Next >”.
- Click "Finish" to create your cluster.
Now we need to add our hosts to the newly created cluster. Drag your first host into the cluster, and when you drop it you'll be put into the "Add Host Wizard". Follow these steps to add the host to the cluster:
- The first section will ask you where you want to place the host's VMs, if there are any. If you've configured resource pools you can select one; otherwise leave this at the default setting and click "Next >".
- Click “Finish”.
The last thing we need to do for our hosts is configure their Power Management settings. I'm using Dell servers, so I'm going to configure the Power Management settings with the IP address, MAC address, and username/password of the built-in iDRAC on each server. Follow these steps:
- From the Hosts and Clusters Inventory, click on the first host, and then click on the "Configuration" tab.
- Under the “Software” section click “Power Management”.
- Click “Properties…” in the top right corner of the screen.
- Enter the Username, Password, IP address, and MAC address of the host’s iDRAC interface. Click “OK”.
- If Power Management is configured on your cluster, the cluster can now put this host to sleep and wake it up when it’s needed.
Finally, the last thing we need to do to finish basic configuration is configure email alerts on the vCenter server. Follow these steps:
- Go to the “Home” screen in the vCenter client.
- Click on “vCenter Server Settings”.
- Click “Mail” in the left hand pane.
- Enter your SMTP server’s address, and enter a sender account for vCenter server. Click “OK”.
That's it. We're done with the basic configuration of vCenter server, our hosts, and our first cluster. We'll move on to more advanced topics, such as Resource Pools and Cloning, in future posts.