Category Archives: Storage Area Networks

Set Up and Install the EqualLogic Multipathing Agent for VMware ESXi 5

In a past post I went into how to configure iSCSI over a LAG to give you some path redundancy over a single VMK IP; you can read about that here. For several reasons that is not the best way to configure multipathing, so here is a write-up on the proper way to set up the EqualLogic Multipathing Extension Module (MEM) on a VMware ESXi 5 server (I’ve also included steps to undo what may have been set up in the past).

Prerequisites

  1. Download and install WinSCP from here.
  2. Download the EqualLogic Multipathing Agent for VMware.
  3. Download, install, and configure the vSphere Management Assistant (vMA); read about how to do that here.
  4. Optionally, install VMware Update Manager, which can be used to install the MEM in the event that the setup.pl --install script does not work.

Cleaning Up

If you’ve already had iSCSI configured on this host, it’s time to make note of a few things and then clean up before we get the EqualLogic MEM installed.

  1. Make note of all IPs that are being used by a host for iSCSI
  2. Make note of which NICs are being used by the vSwitch setup for iSCSI
  3. Delete the VM Kernel ports that are attached to the iSCSI vSwitch
  4. Delete the iSCSI vSwitch

Disable iSCSI on the Host

  1. Connect to the vMA using PuTTY, and then attach to your host using the following command: vifptarget -s <host's FQDN>
  2. For ESXi 4.x enter the following command: esxcfg-swiscsi -d
  3. For ESXi 5 enter the following command: esxcli iscsi software set -e false
  4. Reboot the Host

Enable iSCSI on the Host

  1. For ESXi 4.x enter the following command: esxcfg-swiscsi -e
  2. For ESXi 5.0 enter the following command (see the example below): esxcli iscsi software set -e true
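
For example, re-enabling the software iSCSI initiator on an ESXi 5 host from the vMA might look like this (the FQDN is a placeholder; substitute your own host):

  vifptarget -s esxi01.example.local
  esxcli iscsi software set -e true
  esxcli iscsi software get

The last command simply reports whether the software iSCSI initiator is enabled, so you can confirm the change took effect.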

Remove the old VMK bindings from the iSCSI HBA

For each of the VM Kernel ports that you made note of before, run the following command, where <vmk_interface> is your vmk port (such as vmk1 or vmk2) and <vmhba_device> is your vmhba adapter for iSCSI (such as vmhba38); a worked example follows the list:

  1. For ESXi 4.x: esxcli swiscsi nic remove -n <vmk_interface> -d <vmhba_device>
  2. For ESXi 5: esxcli iscsi networkportal remove -n <vmk_interface> -A <vmhba_device>
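
As a sketch, on an ESXi 5 host where vmk1 and vmk2 had been bound to vmhba38 (placeholder names; use the values you noted earlier), the cleanup run from the vMA would look like:

  vifptarget -s esxi01.example.local
  esxcli iscsi networkportal remove -n vmk1 -A vmhba38
  esxcli iscsi networkportal remove -n vmk2 -A vmhba38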

Installing the EqualLogic Multipathing Agent

Now that our host is so fresh and so clean (in terms of iSCSI, anyway), it’s time to start configuring the Multipathing Extension Module.

Move the Setup Script and Bundle to the vMA

  1. Connect to your vMA using WinSCP; it should drop you into the home directory for the user ‘vi-admin’.
  2. Locate the files that were extracted from the zip file you downloaded from EqualLogic. You are looking for “setup.pl” and “dell-eql-mem-esx5-X-X.X.XXXXXX.zip”; the exact name of the .zip file will depend on whether you’re installing on ESXi 4.x or ESXi 5, so just make sure you copy the right file name.
  3. Once you’ve moved both files to the vMA, right-click the “setup.pl” file from within WinSCP and select “Properties”. Under the “Permissions” section, change the “Octal” value to “0777”; this will allow you to execute the script (you can also do this from the command line, as shown below).
  4. Close WinSCP.
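
If you prefer the command line, the same permission change can be made from an SSH session on the vMA (assuming setup.pl landed in vi-admin’s home directory):

  chmod 0777 ~/setup.pl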

Configuring the MEM

  1. Connect to your vMA using ssh.
  2. You should automatically be logged into the home directory of the ‘vi-admin’ user; verify this by running ls and making sure you see the two files you uploaded.
  3. Enter the following command to get started: ./setup.pl --configure --server=<esxi server's FQDN>
  4. Follow the bouncing ball once the script gets started. It will ask you for a username and password for the host, a name for the new virtual switch, and which NICs to use (list each one with a space in between). It will also ask for an IP address for each VMK port it creates, the Group IP you want to connect to, and a few other things such as the subnet mask, the MTU size, and whether or not to use CHAP. Use the information you collected above and the configuration of the array to answer the questions; when the script completes you should see the new vSwitch and VMK ports in your host’s configuration. A rough sketch of the exchange is shown below.
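
As a sketch only (the exact prompts vary by MEM version, and every value here is a placeholder; substitute your own names, NICs, and addresses):

  ./setup.pl --configure --server=esxi01.example.local
  # Example answers:
  #   vSwitch name:         vSwitchISCSI
  #   Uplink NICs:          vmnic2 vmnic3
  #   VMkernel IPs:         192.168.5.11 192.168.5.12
  #   Netmask:              255.255.255.0
  #   MTU:                  9000
  #   EqualLogic group IP:  192.168.5.10
  #   CHAP:                 no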

Installing the Bundle

  1. While still logged into your vMA run the following command: ./setup.pl --install --server=<esxi server's FQDN>
  2. If you receive an error about being unable to install it, try disabling Admission Control on your HA cluster and re-running the command.

If for some reason you are unable to get the setup.pl --install command to work properly, you can use VMware Update Manager to install the bundle.

  1. Install and configure vUM, according to VMware instructions.
  2. Import the MEM offline bundle into the vUM package repository by selecting the “Import Patches” option and browsing to the dell-eql-mem-esxn-version.zip.
  3. Create a baseline containing the MEM bundle. Be sure to choose a “Host Extension” type for the baseline.
  4. Optionally add the new baseline to a baseline group.
  5. Attach the baseline or baseline group to one or more hosts.
  6. Scan and remediate to install the MEM on the desired hosts. Update Manager will put the hosts in maintenance mode and reboot if necessary as part of the installation process.
  7. If you get the error fault.com.vmware.vcIntegrity.NoEntities.summary, disable Admission Control and then try to remediate again.

Verifying that everything is working properly

  1. Once both the --configure and the --install commands have been run, you can run the following command to make sure everything is working properly (an example follows): ./setup.pl --query --server=<esxi server's FQDN>
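
For example (the FQDN is a placeholder), the query plus a quick check of the path selection policy in use:

  ./setup.pl --query --server=esxi01.example.local
  esxcli storage nmp device list

The second command, run against the host (for instance after vifptarget -s), should show your EqualLogic volumes using the EqualLogic path selection policy (typically named DELL_PSP_EQL_ROUTED) once the MEM is installed and active.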

 

It’s a little bit more work than the LAG setup, but this is the proper way to get a full and complete EqualLogic multipathing setup installed and working.

 

Enabling Jumbo Frames on your iSCSI vmnics and vSwitches in ESXi 4.1

When we setup our switches, we changed the mtu on our vlan for iSCSI traffic. Now we need to edit the mtu on our iSCSI port groups, and vSwitch to also allow jumbo frames.

The first thing we need to do is take stock of which virtual switch and port group we’re using for iSCSI traffic on each ESXi host. Follow these steps:

  1. Log into your host or vCenter server and then navigate over to your host’s “Configuration” tab.
  2. Click “Networking” on the left.
  3. Verify the Port Group name, Virtual Switch name, vmk number, IP address, and which vmnics are being used. See Figure 1 (iSCSI Port Group).

  4. If you’ve not already installed either vCLI or vMA see the posts on how to install and configure them Here and Here.
  5. Open either vCLI or ssh into your vMA VM.
  6. Enter the following command: “esxcfg-vswitch -m 9000 <vSwitch's name> --server <Host's FQDN>”
  7. When prompted for a username and password enter the name and password of an account with Administrator permissions on that host.
  8. Verify that this change has taken effect by running the following command: “esxcfg-vswitch -l --server <Host's FQDN>”. The MTU for your vSwitch should now be displayed as 9000 (an example with concrete names follows below).
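
For instance, with a vSwitch named vSwitch1 and a host named esxi01.example.local (both placeholders), the two commands would be:

  esxcfg-vswitch -m 9000 vSwitch1 --server esxi01.example.local
  esxcfg-vswitch -l --server esxi01.example.local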

We can’t modify the MTU on our existing port group and vmknic, so we’ll need to migrate any VMs on iSCSI storage off of this Host and then remove our iSCSI port group. Once you’ve migrated any running VMs, follow these steps:

  1. Open the Properties of the vSwitch that we just modified.
  2. Select the port group in question, and then click “Remove”.
  3. Now enter the following command in either vCLI or the vMA: “esxcfg-vswitch -A "iSCSI" <vSwitch's name> --server <Host's FQDN>”. This command re-creates our iSCSI port group and attaches it to our vSwitch, but does not add a vmknic to the port group.
  4. Now enter the following command to re-create the vmknic: “esxcfg-vmknic -a -i <ip address> -n <netmask> -m 9000 -p "iSCSI" --server <Host's FQDN>”.
  5. We can now verify that our port group and vmknic have the correct MTU by running the following commands: “esxcfg-vswitch -l --server <Host's FQDN>” and “esxcfg-vmknic -l --server <Host's FQDN>”. Check the MTU settings on both your port group’s vSwitch and your vmknic; they should now both be 9000. A worked example follows below.
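
Putting it together with placeholder names and addresses (an “iSCSI” port group on vSwitch1, host esxi01.example.local, vmknic IP 192.168.5.21), the sequence might look like:

  esxcfg-vswitch -A "iSCSI" vSwitch1 --server esxi01.example.local
  esxcfg-vmknic -a -i 192.168.5.21 -n 255.255.255.0 -m 9000 -p "iSCSI" --server esxi01.example.local
  esxcfg-vswitch -l --server esxi01.example.local
  esxcfg-vmknic -l --server esxi01.example.local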

We now need to rescan our iSCSI software adapter, and refresh our storage view to make sure our iSCSI Datastores are re-connected. Follow these steps:
  1. Click on “Storage Adapters” under the “Configuration” tab of your Host.
  2. Scroll down to your iSCSI Software Adapter, and then click “Rescan All…” in the top right. Verify that the iSCSI LUN(s) have re-appeared.
  3. Now click on “Storage” under the “Configuration” tab of your Host.
  4. Click “Rescan All…” in the top right of the screen. Verify that your iSCSI Datastores have re-appeared.

Finally, let’s verify that our iSCSI network is fully supporting our jumbo frame size. Follow these steps:
  1. Log into the console of your ESXi Host.
  2. Press F2 to customize your host.
  3. When prompted, log into your Host.  Scroll down to “Troubleshooting Options”. Press “enter”.
  4. Press enter on “Enable Local Tech Support” to enable it.
  5. Now press “Alt and F1” to enter the console, and then log in again.
  6. Enter the following command: “vmkping -d -s 8972 <IP Address of your SAN's iSCSI interface>”. The 8972-byte payload plus the 28 bytes of ICMP and IP headers adds up to a full 9000-byte packet, and -d sets the do-not-fragment bit, so a successful ping confirms that jumbo frames are passing end to end. If this does not succeed, double check the mtu settings on your switches and SAN.
  7. Press “Alt F2” to exit the local console.
  8. Press enter on “Disable Local Tech Support” to disable the local console on your host.
  9. Exit your hosts’s console.

That’s it, your host is now configured to use jumbo frames. You can now repeat these steps on the remaining Hosts.


Finishing the configuration of the EqualLogic PS4000E

In another post, we’ve already got the basic setup of the SAN completed, now we just need to finish a few things and then provision some storage.

First let’s get the firmware updated. If you’ve not already configured an account with EqualLogic, do so now by going to http://support.equallogic.com and signing up.

Once you’ve downloaded the firmware we’ll update it by following these steps:

  1. Login to the management group ip of your device, expand “Members” in the left hand pane.
  2. Highlight the unit, and then click on the “Maintenance” tab.
  3. Click “Update firmware…”, enter the admin password, and then click “OK”.
  4. Navigate to the .tgz file that you’ve downloaded from EqualLogic, and then press “OK”.
  5. In the “Action” column click the link to upgrade and follow the steps to upgrade and reboot.

We’ll now configure some email alerting. Log back into your management group IP and perform the following:
  1. Click the “Notifications” tab.
  2. Check the box labeled “Send e-mail to addresses”.
  3. In the “E-mail recipients” window, click “Add” and enter the email address you’d like to receive alert emails.
  4. In the “SMTP Servers” window click “Add”, and enter the IP address of your SMTP server.
  5. Check the box labeled “Send e-mail alerts to customer support (E-Mail Home)”.
  6. Enter a reply email address so that customer support can return an email to you in the event that the SAN reports a hardware failure.
  7. Enter the email address you want the SAN to use when it sends out emails in the box labeled “Sender e-mail address”.

That’s it for notifications. Now let’s configure the first volume that we’ll make available to our ESXi hosts. Follow these steps:
  1. First expand “Storage Pools” in the left pane, and then click on the “Default” Storage Pool.
  2. Click on “Create Volume”, give the volume a name and description, and then click “Next >”.
  3. Give the volume a size. It’s important to remember that ESXi has a LUN size limit of 2 TB minus 512 bytes, so for simplicity don’t make the volume larger than 2047 GB. Uncheck “thin provisioned volume” unless you want it to be thinly provisioned. For the snapshot reserve, leave it at 100% if you are planning on using snapshots; otherwise, if you are going to be backing up the SAN without using snapshots, change it to 0% to conserve storage space. Click “Next >”.
  4. Click “No access” for now; we’ll add access later. “Access Type” should be set to “read-write” and the box for “allow simultaneous connections from initiators with different IQN names” should be checked. Click “Next >”.
  5. Click “Finish”.
  6. Highlight the newly created volume, and then click the “Access” tab.
  7. Click “Add”, Check the box labeled “Limit access by IP address”, Enter the IP address of the first ESXi server (use the IP address for the nic team on the LAG we created for iSCSI in this post). Click “OK”.
  8. Repeat steps 6 & 7 for each of your ESXi hosts.
That’s it. We’ve got our SAN configured, at least enough to get vCenter installed and running properly. Time to get vCenter installed.

 

Finishing up the ESXi installation

Once all of our networking is configured, we just need to do a few more things to complete the configuration of ESXi. After we’re done with this, we’ll complete the SAN configuration, install vCenter on a VM, and get these hosts connected to it.

First let’s configure our time and NTP settings:

  1. Open the vSphere Client and connect to your ESXi host.
  2. Click the “Configuration tab” at the top, and then click on “Time Configuration” in the left pane.
  3. Click “Properties…” in the top right hand corner of the screen.
  4. Set a time close to that of your NTP server, and then click the “Options…” button.
  5. Click “NTP Settings” in the left pane, click “Add…”, and then Enter the IP address of your NTP server.
  6. Check the box labeled “Restart NTP service to apply changes” and then click on “OK”, Click “OK” on the last window.

That takes care of the Time Configuration, let’s now configure local storage on the Host.

  1. Remaining on the “Configuration” tab, Click “Storage” in the top left hand pane.
  2. Click “Add Storage…” in the top right hand part of the screen.
  3. Select the option for “Disk/LUN” and then click “Next >”.
  4. Select the local RAID or Disk controller on your Host, and then click “Next >”.
  5. If prompted, select to use all available space and partitions, unless you’ve got utility partitions on your system you want to keep. Click “Next >”.
  6. Give your local Datastore a name, it’s a good idea to specifically note that it’s local storage in the name. Click “Next >”.
  7. Choose your block size. On VMFS-3 the block size determines the largest single file (virtual disk) the datastore can hold: 1 MB allows 256 GB, 2 MB allows 512 GB, 4 MB allows 1 TB, and 8 MB allows 2 TB minus 512 bytes. Pick the smallest block size that covers the largest virtual disk you expect to store here. Click “Next >”.
  8. Click “Finish”.

Now let’s add the iSCSI storage that we configured on the SAN.
  1. Remaining on the “Configuration” tab, Click “Storage Adapters” in the left hand pane.
  2. Scroll down to the “iSCSI Software Adapter”, select it, and then click on “Properties…” in the top right corner of the screen.
  3. Click “Configure”, Check the box for “Enabled”, and then click “OK”
  4. Click the “Dynamic Discovery” tab, click “Add…”, Enter the IP address of the SAN GROUP (not any individual IP on any one controller), leave the default port and then click “OK”.
  5. When prompted, click “Yes” to rescan the adapters for storage. The LUN on the SAN that we just provisioned should now appear in the list.
  6. Remaining on the “Configuration” tab, Click “Storage” in the left hand pane.
  7. Click “Add…”, Select “Disk/LUN” and click “Next >”.
  8. Select the LUN from the SAN that we just added, and click “Next >”.
  9. Click “Next >” to add a new partition, Enter a name for this volume, preferably one that matches the volume on the SAN, and then click “Next >”.
  10. Select the Maximum file size that you want and then click “Next >”.
  11. Click “Finish”.
That’s all we need to do for now on the ESXi hosts, let’s move on and get vCenter installed on a VM.

Initial Configuration of an EqualLogic PS Series Storage Array

Okay so here are a few things that I wish someone had told me about the EqualLogic SANs before I turned on one and started configuring it for the first time:

  1. Each NIC on the SAN will get its own IP, but each NIC purpose will also get a GROUP IP. What this means is that each NIC performing iSCSI will have an IP, plus there will be a GROUP IP shared by all iSCSI NICs; the same goes for management: each Management NIC has an IP and there is also a GROUP IP for all Management NICs. Also, if you’re setting up more than one SAN, the GROUP IPs are cumulative, and encompass all NICs on each SAN.
  2. The controller modules are Active/Passive; only one is enabled at a time. So if you are planning on using 4 NICs for iSCSI traffic, you’d better upgrade to a 6000 series unit that has 4 NICs on EACH controller module.
  3. When you are running the setup wizard and it starts asking for IP information, it’s asking for iSCSI interface IP information, not management NIC IP information; we’ll configure that after the initial turn-up.

So, once you’ve got your PS4000 or PS6000 series plugged in and turned on, go ahead and plug Interface 0 into the switch ports configured for iSCSI (if you’ve not configured your switches yet, you can head over here to find out how). Plug a laptop into the same vlan, and run the “Remote Setup Wizard” from the CD that came with the SAN. Then follow these steps:

  1. Make sure that you’ve got “Initialize a PS Series Array” selected, and then click Next >.
  2. Allow the wizard to discover your array, and then select it, then click Next >.
  3. On the “Initialize Array” screen you’ll need to enter the Name for the Array, the IP address, subnet, and Gateway of the First iSCSI NIC, and then click Next >.
  4. On the “Create New Group” screen you’ll need to enter the Name of the Array Group as well as the iSCSI Group IP, which we talked about above. We’ll also need to select a RAID type, enter credential information for the admin account (username: grpadmin), and create a service account to be used for VDS/VSS features later. Then click Next >.
  5. You’ll then be told to wait for a bit, and then more than likely also be told that it failed to configure your registration with the iSCSI Initiator. Don’t worry about the error; it just means you either didn’t have the iSCSI Initiator installed, had the wrong IP information configured, or something else, and it doesn’t matter at this point. Click OK, click OK again, and then click Finish.
  6. Now assign your computer an IP address in the subnet used for iSCSI traffic, and then connect to the GROUP IP you just configured.
  7. Login with the username of grpadmin, and the administrator password you configured in step 4.
  8. Expand “Members” in the left hand pane, and then click on the array you just configured.
  9. Click on the “Network” tab at the top, and then click on each network interface that you’ve not already assigned an IP address to. Assign an IP address, subnet, and description to each interface, and once it’s configured, enable the interface.
  10. Now click on “Group Configuration” in the left hand pane, then click on the tab “Advanced” at the top.
  11. Click the button called “Configure Management Network…”
  12. Check the box for “Enable Dedicated Management Network”. This is where you assign the GROUP IP for the management interfaces on this and all future Arrays; once you assign the IP and gateway, select Interface 2 from the list of interfaces and then click OK.
  13. Make sure your Management NICs are plugged into your MGMT vlan, and then you should be able to manage your array(s) using the new GROUP IP you just assigned.

That’s it, the array is now configured and online. In some future posts we’ll look at configuring SMTP alerts, updating firmware, and creating volumes, but for now let’s get our ESXi servers configured by going here.

Configuring a Dell 6248 Switch Stack for use with an EqualLogic PS4000E Storage Array

I’m going to be doing some write ups over the next few days pertaining to getting a small VMWare vSphere 4.1 installation set up. We’ll be using a pair of Dell 6248 Switches, configured in a stack, and a Dell EqualLogic PS4000E iSCSI Storage Array as our back end. In preparation for that I’m going to be going over our switch and network configuration in this post so that it’s clear as to how the network is configured.

We’ll have vlans for each of the following purposes:

  • Native vlan 1: we’ll use this as our isolated, un-trunked vlan for this switch, the vlan where unconfigured ports are placed by default. (vlan 1)
  • Management: things like DRACs, iLos, UPS management NICs, SAN  Management NICs, etc (vlan 2)
  • vMotion: Moving Virtual machines from one host to another host (vlan 3)
  • HA: VMWare Fault Tolerance (vlan 4)
  • iSCSI traffic (vlan 5)
  • and finally all vlans needed for the production virtual servers (vlans 6 & 7 )

As a prerequisite, we’re going to be doing some basic setup of the switch stack; if you’ve not set up the switches in a stack yet, please see this post.
Log into the switch and enter the following commands:
  1. switchstack> enable
  2. switchstack# config
  3. switchstack(config)# vlan database
  4. switchstack(config-vlan)# vlan 2-7
  5. switchstack(config-vlan)# exit
  6. switchstack(config)# interface vlan 2
  7. switchstack(config-if-vlan2)# name MGMT_VLAN
  8. switchstack(config-if-vlan2)# exit
  9. Repeat steps 6-8 for each vlan, giving each a descriptive name (see the example below)
  10. switchstack(config)# spanning-tree mode rstp (assuming you are using rstp with your other switches in your network)
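
For example, naming a few of the remaining vlans (the names here are only suggestions) would look like this, and so on for vlans 6 and 7:

  switchstack(config)# interface vlan 3
  switchstack(config-if-vlan3)# name VMOTION_VLAN
  switchstack(config-if-vlan3)# exit
  switchstack(config)# interface vlan 4
  switchstack(config-if-vlan4)# name FT_VLAN
  switchstack(config-if-vlan4)# exit
  switchstack(config)# interface vlan 5
  switchstack(config-if-vlan5)# name ISCSI_VLAN
  switchstack(config-if-vlan5)# exit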

Now let’s configure some access ports for the MGMT vlan devices to plug into; we’ll use the last five ports on each switch (g44 through g48).
  1. switchstack(config)# interface range ethernet 1/g44-1/g48,2/g44-2/g48
  2. switchstack(config-if)# switchport mode access
  3. switchstack(config-if)# switchport access vlan 2
  4. switchstack(config-if)# spanning-tree portfast
  5. switchstack(config-if)# exit

We used spanning-tree portfast because we know these ports will be plugged into end devices, and we want them to come up instantly if the switch is rebooted or a cable is unplugged and then plugged back in; we don’t want to wait for spanning tree to check for switching loops.

We’ll also need to define a few access ports for vlan 5, where we’ll be plugging in our PS4000E. Follow the exact same steps we used above to configure vlan 2, but substitute vlan 5 for vlan 2. Make sure you plug ports 0 and 1 on the EqualLogic controller modules into the vlan 5 ports of your switch, and port 2 on your controller modules into the switch ports for vlan 2 (port 2 on the SAN controller module is strictly for management, and therefore should not be on the vlans used for iSCSI traffic). We’ll also need to enable jumbo frames on the switch ports that will be moving iSCSI traffic, and disable unicast storm control; a consolidated example follows the list below. To do this enter the following commands:

  1. switchstack(config)# interface ethernet 1/g20
  2. switchstack(config-if-1/g20)# mtu 9216
  3. switchstack(config-if-1/g20)# no storm-control unicast
  4. switchstack(config-if-1/g20)# exit
  5. Repeat steps 1-3 for each port that connects to a storage array port (only ports 0 and 1; port 2 is for management only)
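
Pulling the vlan 5 access-port and jumbo frame settings together, and assuming (purely as an example) that ports 1/g20-1/g21 and 2/g20-2/g21 are the ones cabled to the array’s iSCSI ports, the whole block would look something like:

  switchstack(config)# interface range ethernet 1/g20-1/g21,2/g20-2/g21
  switchstack(config-if)# switchport mode access
  switchstack(config-if)# switchport access vlan 5
  switchstack(config-if)# spanning-tree portfast
  switchstack(config-if)# mtu 9216
  switchstack(config-if)# no storm-control unicast
  switchstack(config-if)# exit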

Note: typically the mtu would be set to 9000, but when you run the “iscsi enable” option on these switches it is set to 9216, which is what I’ve chosen to implement here. I’ll update this post in the future if this turns out to be a problem with either the ESXi hosts or the EqualLogic SAN.

Also, I normally would not disable unicast storm-control, but when you enable the iSCSI optimization on these Dell switches, they do this automatically when an EqualLogic SAN is detected on a port. If anyone has an explanation of why this happens, please feel free to share it.

Finally, we’ll also need to enable flow control at the switch level. To do this, enter the following command:

  1. switchstack(config)# flowcontrol

We’re also going to place this switch into the MGMT_VLAN so that its management interface is on the same vlan as everything else we’re going to manage. Enter the following commands:
  1. switchstack(config)# ip address vlan 2
  2. switchstack(config)# ip address x.x.x.x y.y.y.y
  3. switchstack(config)# ip default-gateway z.z.z.z
Where x.x.x.x is the IP address of your switch on the new vlan, y.y.y.y is your subnet mask, and z.z.z.z is your gateway on the mgmt_vlan.
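
For example, with purely hypothetical addressing on the management vlan:

  switchstack(config)# ip address vlan 2
  switchstack(config)# ip address 10.0.2.5 255.255.255.0
  switchstack(config)# ip default-gateway 10.0.2.1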

That’s all of the configuration we’ll need at this point. We’ll now set up the EqualLogic SAN here, and later we’ll configure the switches for Link Aggregation Groups to handle the connections to our ESXi servers.