Category Archives: ESX/ESXi Networking

Setup and Install the EqualLogic Multipathing Agent for VMware ESXi 5

In a past post I went into how to configure iSCSI over a LAG to give you some path redundancy over a single VMK IP. You can read about that here. For a number of reasons that is not the best way to configure multipathing, so here is a write-up on the proper way to set up the Multipathing Extension Module (MEM) on a VMware ESXi 5 server. I’ve also included steps to undo what may have been set up in the past.

Prerequisites

  1. Download and install WinSCP from here.
  2. Download the EqualLogic Multipathing Extension Module (MEM) for VMware.
  3. Download, install, and configure the vSphere Management Assistant (vMA); read about how to do that here.
  4. Optionally, install VMware Update Manager, which can be used to install the MEM in the event that the setup.pl --install script does not work.

Cleaning Up

If you’ve already had iSCSI configured on this host, it’s time to make note of a few things and then clean up before we get the EqualLogic MEM installed (the example commands after this list can help you gather this information).

  1. Make note of all IPs that are being used by a host for iSCSI
  2. Make note of which NICs are being used by the vSwitch setup for iSCSI
  3. Delete the VM Kernel ports that are attached to the iSCSI vSwitch
  4. Delete the iSCSI vSwitch
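If you’d rather collect this information from the command line, something like the following should work from the vMA. This is only a sketch; esxi01.example.local is a placeholder for your own host, and the adapter name will vary:

  vifptarget -s esxi01.example.local    # point the vMA at the host
  esxcfg-vswitch -l                     # note the iSCSI vSwitch and which vmnics it uses
  esxcfg-vmknic -l                      # note the vmk numbers and IP addresses used for iSCSI
  esxcli iscsi adapter list             # ESXi 5: note the software iSCSI adapter name (e.g. vmhba38)
  vifptarget -c                         # clear the target when you're done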

Disable iSCSI on the Host

  1. Connect to the vMA using PuTTY, and then attach to your host using the following command: vifptarget -s <host's FQDN>
  2. For ESXi 4.x enter the following command: esxcfg-swiscsi -d
  3. For ESXi 5 enter the following command: esxcli iscsi software set -e false
  4. Reboot the Host

Enable iSCSI on the Host

  1. For ESXi 4.x enter the following command: esxcfg-swiscsi -e
  2. For ESXi 5 enter the following command: esxcli iscsi software set -e true (you can verify the result with the commands shown below)
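If you want to double check the state of the software iSCSI initiator before moving on, the following query commands should report it (a sketch, assuming the host is the current vifptarget; otherwise append --server <host's FQDN>):

  esxcli iscsi software get    # ESXi 5: prints true when software iSCSI is enabled
  esxcfg-swiscsi -q            # ESXi 4.x: queries whether software iSCSI is enabled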

Remove the old VMK bindings from the iSCSI HBA

For each of the VM Kernel ports that you made note of before, run the following command, where <vmk_interface> is your vmk port (such as vmk1 or vmk2) and <vmhba_device> is your vmhba adapter for iSCSI (such as vmhba38). A worked example follows the list:

  1. For ESXi 4.x: esxcli swiscsi nic remove -n <vmk_interface> -d <vmhba_device>
  2. For ESXi 5: esxcli iscsi networkportal remove -n <vmk_interface> -A <vmhba_device>
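As a concrete illustration, here is what the ESXi 5 removal might look like for a host whose software iSCSI adapter is vmhba38 with vmk1 and vmk2 bound to it. These names are just examples; substitute the vmk and vmhba values you recorded during cleanup:

  esxcli iscsi networkportal list -A vmhba38     # show which vmk ports are currently bound
  esxcli iscsi networkportal remove -n vmk1 -A vmhba38
  esxcli iscsi networkportal remove -n vmk2 -A vmhba38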

Installing the EqualLogic Multipathing Agent

Now that our host is fresh and so clean clean (in terms of iSCSI, anyway), it’s time to start configuring the Multipathing Extension Module.

Move the Setup Script and Bundle to the vMA

  1. Connect to your vMA using WinSCP; it should drop you into the home directory for the user ‘vi-admin’.
  2. Locate the files that were extracted from the zip file you downloaded from EqualLogic. You are looking for “setup.pl” and “dell-eql-mem-esx5-X-X.X.XXXXXX.zip”. The version in the .zip file name will depend on whether you’re installing on ESXi 4.x or ESXi 5, so just make sure you copy the right file.
  3. Once you’ve moved both files to the vMA, right click on the “setup.pl” file from within WinSCP and select “Properties”. Under the “Permissions” section, change the “Octal” value to “0777”; this will allow you to execute the script.
  4. Close WinSCP.

Configuring the MEM

  1. Connect to your vMA using ssh.
  2. You should automatically be logged into the home directory of the ‘vi-admin’ user; verify this by running ls and making sure you see the two files you uploaded.
  3. Enter the following command to get started: ./setup.pl --configure --server=<esxi server's FQDN>
  4. Follow the bouncing ball once the script gets started. It will ask for a username and password for the host, a name for the new virtual switch, which nics to use (list each one with a space in between them), an IP address for each VMK port it creates, and the Group IP you want to connect to, plus a few other questions such as subnet mask, MTU size, and whether or not to use CHAP. Use the information you collected above and the configuration of the array to answer the questions. When the script completes you should see the new vSwitch and VMK ports in your configuration. (An example run is sketched after this list.)
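To give you an idea of what you’ll be answering, here is a hypothetical run. The hostname, vSwitch name, nics, and IP addresses below are made-up examples, not values the script will suggest; answer with the details you gathered above and from your array:

  ./setup.pl --configure --server=esxi01.example.local
    # example answers: vSwitch name vSwitchISCSI, nics vmnic2 vmnic3,
    # VMK IPs 192.168.5.11 and 192.168.5.12, netmask 255.255.255.0,
    # MTU 9000, EqualLogic Group IP 192.168.5.5, CHAP disabled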

Installing the Bundle

  1. While still logged into your vMA run the following command: ./setup.pl --install --server=<esxi server's FQDN>
  2. If you receive an error about being unable to install it, try disabling Admission Control on your HA cluster and re-running the command.

If for some reason you are unable to get the setup.pl --install command to work properly, you can use VMware Update Manager to install the bundle.

  1. Install and configure vUM, according to VMware instructions.
  2. Import the MEM offline bundle into the vUM package repository by selecting the “Import Patches” option and browsing to the dell-eql-mem-esxn-version.zip.
  3. Create a baseline containing the MEM bundle. Be sure to choose a “Host Extension” type for the baseline.
  4. Optionally add the new baseline to a baseline group.
  5. Attach the baseline or baseline group to one or more hosts.
  6. Scan and remediate to install the MEM on the desired hosts. Update Manager will put the hosts in maintenance mode and reboot if necessary as part of the installation process.
  7. If you get the error fault.com.vmware.vcIntegrity.NoEntities.summary, disable Admission Control and then try to remediate again.

Verifying that everything is working properly

  1. Once both the --configure and the --install commands have been run, you can run the following command to make sure everything is working properly: ./setup.pl --query --server=<esxi server's FQDN>

 

It’s a little bit more work than the LAG setup, but this is the proper way to get a full and complete EqualLogic multipathing setup installed and working.

 

Installing vMA 4.1 in vSphere 4.1

Here is a quick guide to installing and configuring vMA 4.1 in a vSphere 4.1 installation. The vMA (vSphere Management Assistant) is an appliance that allows you to more easily manage your hosts and vCenter server from the command line. Follow these instructions:

  1. First download the vMA ovf file from here.
  2. Open your vSphere client and connect to your vCenter server. Click on the “File” menu and then click “Deploy OVF template…”.
  3. Click “Browse…” and then locate your downloaded vMA OVF file, click “Next >”.
  4. Click “Next >”, Agree to the EULA, and then click “Next >”.
  5. Give the vMA a name, and then select the Data center it will be deployed to. Click “Next >”.
  6. Select the host or cluster it will run on, and then click “Next >”.
  7. Select the Data store to place the files on, and then click “Next >”.
  8. Select your disk provision format, and then click “Next >”.
  9. Select your network from the drop down list, and then click “Next >”.
  10. Click Finish.

Once the import is finished we can start the wizard to configure the vMA tool. Open your vSphere client, connect to your vCenter server. Follow these steps:

  1. Find your vMA VM, open its console and click start.
  2. The vMA will boot to a prompt asking to use DHCP to assign an IP. Enter “no” and press “Enter”.
  3. It will now prompt for an IP address, enter an IP address and then press “enter”.
  4. It will now prompt for a Subnet mask, enter a mask and then press “enter”.
  5. It will now prompt for a gateway, enter the IP address of your gateway and then press “enter”.
  6. It will now prompt you twice for your primary and secondary DNS, enter the IP addresses and press “enter” after each.
  7. It will prompt you for the vMA’s hostname, enter a FQDN and then press “enter”
  8. Type “yes” to confirm the settings and then press “enter”.
  9. The vMA VM will now reboot, and when it comes back up it will prompt you twice for a password.
  10. The VM will now display a screen telling you how to SSH into the box. For now press “Alt” and “F2” to enter the virtual terminal. Login with “vi-admin” and the password you just created.

Before we continue we should make sure that our Active Directory contains a security group named EXACTLY “ESX Admins”, containing the accounts that we want to have Administrator access to our ESX/ESXi hosts. During the domain join process this group will automatically be granted the Administrator role on each ESX/ESXi host.

Now we need to join the vMA to the Active Directory domain. If you’re not already logged into the virtual terminal on the vMA VM, then follow step 10 above and then perform the following (a worked example appears after the list):

  1. Enter the command “sudo domainjoin-cli join <your domain fqdn> <your AD domain username>” press “enter”
  2. The vMA will now prompt you for the password for the “vi-admin” account created on the vMA. Enter it and then press “enter”.
  3. The vMA will now prompt you for the password for the Active Directory user account you are trying to use to join it to the domain, enter the password and then press “enter”.
  4. You may receive a warning about the PAM module, but as long as you see the word “SUCCESS” at the bottom of the screen you’ve successfully joined the Active Directory domain.
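With made-up values (corp.example.com for the domain and aduser for the account), the join looks something like this; the query subcommand should confirm the result:

  sudo domainjoin-cli join corp.example.com aduser
    # first prompt: the vi-admin password (for sudo)
    # second prompt: the Active Directory password for aduser
  sudo domainjoin-cli query
    # should report the domain the vMA is now joined to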

If we’ve not already joined our ESXi servers to the Active Directory domain now is a good time to do so. This is not a required step, but it will allow us to cut down on the amount of usernames and passwords we’ll need to use to configure our ESXi hosts when using the vMA. Follow these steps:

  1. Open the vSphere client and connect to your vCenter Server.
  2. Navigate to “Inventory” and then “Hosts and Clusters”.
  3. Select the first ESXi host, and then click on the “Configuration” tab.
  4. Click on “Authentication Services” and then click on “Properties…”.
  5. Change the “User Directory Service” from “Local Authentication” to “Active Directory”.
  6. Enter your domain name in the box titled “Domain:” and then click “Join Domain”.
  7. When prompted enter your Active Directory name and password, and then Click “OK”.
  8. Click the “Permissions” tab.
  9. Right Click and select “Add Permission…”.
  10. Change the drop down box to “Administrator” and then click the button titled “Add…”.
  11. Highlight users and/or groups that should be added to the list of local administrators on your ESXi server. Click the button titled “Add”. Click “OK”.
  12. Click “OK” again to add the permission.

The next thing we need to do is configure our vMA with a list of servers to manage, and which authentication type to use to manage them. Follow these steps:

  1. Open the console for your vMA
  2. If you’re not already logged in, log in as “vi-admin”
  3. Enter the following command to add your servers “vifp addserver <host's FQDN> --authpolicy adauth” and then press “enter”
  4. When prompted for a username enter the <domain>\<username> of a user who was granted administrator permissions on that ESXi host. Make sure the host is not in standby mode, otherwise you’ll get an error.
  5. Repeat these steps for each host and the vCenter server.

Now that we’ve got all of our servers in the list, we can issue commands to them by appending the following to each command: --server <Host's FQDN>. If you get tired of having to specify the server each time, you can set which server to use by issuing the following command: vifptarget -s <host's FQDN>. To clear the currently selected server, issue the following command to the vMA: vifptarget -c. Also, if you get tired of having to type your username and password in each time, you can just append the following flag to the end of each command: --passthroughauth
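Putting it all together, a session on the vMA might look like the sketch below; esxi01.example.local and vcenter.example.local are placeholders for your own host and vCenter server:

  vifp addserver esxi01.example.local --authpolicy adauth
  vifp addserver vcenter.example.local --authpolicy adauth
  vifp listservers                      # confirm both servers were registered
  vifptarget -s esxi01.example.local    # make the host the current target
  esxcfg-vswitch -l                     # runs against esxi01 without a --server flag
  vifptarget -c                         # clear the target when you're done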

Enabling Jumbo Frames on your iSCSI vmnics and vSwitches ESXi 4.1

When we setup our switches, we changed the mtu on our vlan for iSCSI traffic. Now we need to edit the mtu on our iSCSI port groups, and vSwitch to also allow jumbo frames.

The first thing we need to do is take stock of which virtual switch and port group we’re using for iSCSI traffic on each ESXi host. Follow these steps:

  1. Log into your host or vCenter server and then navigate over to your host’s “Configuration” tab.
  2. Click “Networking” on the left.
  3. Verify the Port Group name, Virtual Switch name, vmk number, IP address, and which vmnics are being used. See Figure 1 (iSCSI Port Group).

  4. If you’ve not already installed either vCLI or vMA see the posts on how to install and configure them Here and Here.
  5. Open either vCLI or ssh into your vMA VM.
  6. Enter the following command: “esxcfg-vswitch -m 9000 <vSwitch's name> --server <Host's FQDN>”
  7. When prompted for a username and password enter the name and password of an account with Administrator permissions on that host.
  8. Verify that this change has taken effect by running the following command: “esxcfg-vswitch -l --server <Host's FQDN>”. The MTU for your vSwitch should now be displayed as 9000. (A filled-in example follows this list.)
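For example, if the iSCSI vSwitch were vSwitch1 on a host named esxi01.example.local (both made-up names), the pair of commands would look like this:

  esxcfg-vswitch -m 9000 vSwitch1 --server esxi01.example.local    # raise the vSwitch MTU to 9000
  esxcfg-vswitch -l --server esxi01.example.local                  # the MTU column should now read 9000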

We can’t modify the mtu on our port group, so we’ll need to migrate any VMs on iSCSI storage off of this host and then remove our iSCSI port group. Once you’ve migrated any running VMs follow these steps:

  1. Open the Properties of the vSwitch that we just modified.
  2. Select the port group in question, and then click “Remove”.
  3. Now enter the following command in either vCLI or the vMA: “esxcfg-vswitch -A "iSCSI" <vSwitch's name> --server <Host's FQDN>”. This command re-creates our iSCSI port group and attaches it to our vSwitch, but does not add a vmknic to the port group.
  4. Now enter the following command to re-create the vmknic: “esxcfg-vmknic -a -i <ip address> -n <netmask> -m 9000 "iSCSI" --server <Host's FQDN>”.
  5. We can now verify that our port group and vmknic have the correct mtu by running the following commands: “esxcfg-vswitch -l --server <Host's FQDN>” and “esxcfg-vmknic -l --server <Host's FQDN>”. Check the MTU settings on both your port group and nics; they should now both be 9000. (A worked example follows this list.)
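With made-up values (vSwitch1, a vmk IP of 192.168.5.11, and a host named esxi01.example.local), the whole sequence might look like this:

  esxcfg-vswitch -A "iSCSI" vSwitch1 --server esxi01.example.local
  esxcfg-vmknic -a -i 192.168.5.11 -n 255.255.255.0 -m 9000 "iSCSI" --server esxi01.example.local
  esxcfg-vswitch -l --server esxi01.example.local    # the "iSCSI" port group should be back on vSwitch1
  esxcfg-vmknic -l --server esxi01.example.local     # the new vmknic should show an MTU of 9000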
We now need to rescan our iSCSI software adapter, and refresh our storage view to make sure our iSCSI Datastores are re-connected. Follow these steps:
  1. Click on “Storage Adapters” under the “Configuration” tab of your Host.
  2. Scroll down to your iSCSI Software Adapter, and then click “Rescan All…” in the top right, Verify that the iSCSI LUN(s) have re-appeared.
  3. Now click on “Storage” under the “Configuration” tab of your Host.
  4. Click “Rescan All…” in the top right of the screen. Verify that your iSCSI  Datastores have re-appeared.
Finally let’s verify that our iSCSI network is fully supporting our jumbo frame size. Follow these steps:
  1. Log into the console of your ESXi Host.
  2. Press F2 to customize your host.
  3. When prompted, log into your Host.  Scroll down to “Troubleshooting Options”. Press “enter”.
  4. Press enter on “Enable Local Tech Support” to enable it.
  5. Now press “Alt and F1” to enter the console, and then log in again.
  6. Enter the following command: “vmkping -d -s 8972 <IP Address of your SAN's iSCSI interface>”. The -d flag prevents fragmentation, and 8972 bytes of payload plus 28 bytes of IP/ICMP headers adds up to a full 9000-byte frame, so a successful ping confirms that jumbo frames are working end to end. If this does not succeed, double check the mtu settings on your switches and SAN. (See the example after this list.)
  7. Press “Alt” and “F2” to exit the local console.
  8. Press enter on “Disable Local Tech Support” to disable the local console on your host.
  9. Exit your host’s console.
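As an example, if your SAN’s iSCSI interface sat at 192.168.5.5 (a placeholder address), the test from the host’s console would be:

  vmkping -d -s 8972 192.168.5.5    # -d = don't fragment; 8972 bytes of payload + 28 bytes of headers = one 9000-byte frame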
That’s it, your host is now configured to use jumbo frames, and now you can repeat these steps on the remaining Hosts.


Adding your ESXi Host to vCenter and finishing its configuration

Now that we’ve got our vCenter server set up and running, it’s time to finish up its basic configuration and get our ESXi servers added to it.

The first thing we’re going to need to do is create a datacenter. Follow these steps:

  1. Right click on the vCenter server in the upper left part of the screen.
  2. Select “New Datacenter”, assign it a name.
Now we’ll add the Hosts to the newly created Data Center.
  1. Right click on the Datacenter you just created and select “Add Host…”.
  2. Enter the Host’s name, the username (root) and the password configured during the ESXi Host’s original setup process. Click “Next >”.
  3. Click “Yes” when the Security Alert appears.
  4. Click “Next >” to confirm the summary .
  5. Assign a license to the Host, or choose evaluation, and then click “Next >”.
  6. Check “Enable Lockdown Mode” if you want it enabled, Click “Next >”.
  7. Select the location for your VMs, if there are any. Click “Next >”.
  8. Click “Finish”.
Repeat this for each of your Hosts, and when you’ve added them all we can move on to creating an HA/DRS cluster.
  1. Right click on the Datacenter you just created. Select “New Cluster…”.
  2. Give your new cluster a name, and then select if you want to enable HA or DRS or both. For the purposes of this write up, we’ll be enabling both. Click “Next >”.
  3. The first section asks to configure your DRS automation level. I configure this as “Fully automated” and with Priority 1,2,3, & 4 recommendations being performed. Click “Next >”.
  4. The next section asks how to configure Power Management automation. I configure this to be automatic, and leave the DPM Threshold at the default. Click “Next >”.
  5. The next section asks about how to configure HA. I leave these at the default settings. Make changes if you wish and then click “Next >”.
  6. The next section asks about how to handle VMs that stop responding and Hosts that stop responding. I leave these settings at their defaults. Make changes if you wish and then click “Next >”.
  7. The next section asks about monitoring the guest VMs. Enable VM Monitoring if you want, and then set your sensitivity level. Click “Next >”.
  8. The next section asks about EVC, if you are running hosts with different versions of processors, then you should enable this, if all of your hosts are identical, you can leave this disabled. Click “Next >”.
  9. The next section asks about the VM Swap file location. Unless you have a specific reason to do so I would not modify this. I leave it at the default unless I’ve got a raid 0 volume setup somewhere. Click “Next >”.
  10. Click “Finish” to create your cluster.
Now we need to add our hosts to the newly created cluster. Drag your first host into the cluster, and when you drop it you’ll be put into the “Add Host Wizard”. Follow these steps to add the host to the cluster:
  1. The first section will ask you where you want to place the host’s VMs if there are any; if you’ve configured resource pools you can select one, otherwise leave this at the default setting and click “Next >”.
  2. Click “Finish”.
The last thing we need to do for our hosts is configure their Power Management settings. I’m using Dell servers, so I’m going to configure the Power Management settings with the IP address, MAC address, and username/password of the built-in iDRAC on each server. Follow these steps:
  1. From the Hosts and Clusters Inventory, click on the first host, and then click on the “Configuration” tab.
  2. Under the “Software” section click “Power Management”.
  3. Click “Properties…” in the top right corner of the screen.
  4. Enter the Username, Password, IP address, and MAC address of the host’s iDRAC interface. Click “OK”.
  5. If Power Management is configured on your cluster, the cluster can now put this host to sleep and wake it up when it’s needed.
Finally, the last thing we need to do to finish basic configuration is configure email alerts on the vCenter server. Follow these steps:
  1. Go to the “Home” screen in the vCenter client.
  2. Click on “vCenter Server Settings”.
  3. Click “Mail” in the left hand pane.
  4. Enter your SMTP server’s address, and enter a sender account for vCenter server. Click “OK”.
That’s it. We’re done with the basic configuration of vCenter server, our hosts, and our first cluster. We’ll move onto more advanced topics in future posts, such as Resource Pools, Cloning, Creating Templates, and Backing up VMs.

Configuring LAG Groups between Dell 62xx Series Switches and ESXi 4.1

Okay, so we’ve already configured the basics on both our switches, and ESXi servers, now it’s time to configure the LAG groups, and vSwitches for each of our necessary purposes.

We’re going to configure one LAG group for each of the following:

  • Production network traffic for the VMs
  • iSCSI Traffic
  • Management and vMotion
  • We’re only going to be using one NIC for Fault Tolerance, so we’re not going to configure a LAG group for that.
Let’s start by first identifying which ports we’ll use on each switch, and for which purpose we’ll use each group. When we started we said we’ll be using vlan 2 for Management, vlan 3 for vMotion, vlan 4 for Fault Tolerance, vlan 5 for iSCSI, and vlans 6 & 7 for various production VMs (also vlan 2 if you are going to virtualize the vCenter server, which we are).
So we’ll need a total of 3 LAG groups, two of which will be trunking more than one vlan. Let’s start by configuring the first LAG group. This one is going to be for the Management and vMotion purposes; we’ll need 1 port on each switch in the stack, so let’s use port 10 on both the first and second switch in the stack. Start by doing the following:

 

  1. Open your connection to your switch stack
  2. switchstack> enable
  3. switchstack# config
  4. switchstack(config)# interface range ethernet 1/g10,2/g10
  5. switchstack(config-if)# channel-group 10 mode on
  6. switchstack(config-if)#exit
  7. switchstack(config)# interface port-channel 10
  8. switchstack(config-if-ch10)# spanning-tree portfast
  9. switchstack(config-if-ch10)# hashing-mode 6
  10. switchstack(config-if-ch10)# switchport mode trunk
  11. switchstack(config-if-ch10)# switchport trunk allowed vlan add 2-3
  12. switchstack(config-if-ch10)# exit
What we just did was build a new Link Aggregation Group: we added port 10 on both of the switches in the stack to the LAG group, enabled portfast so the ports transition to the forwarding state right away, set the LAG group load balancing method to IP-Source-Destination (hashing-mode 6), converted the LAG group to a trunk, and added vlans 2 & 3 as tagged vlans on that trunk.
We’ll be doing the same thing for our next LAG, only we’re going to add some commands because this LAG will be handling iSCSI traffic. We’re going to use port 11 on each switch for this next LAG group. Start by entering the following:

UPDATE: if you are configuring iSCSI for an EqualLogic Array, please see this post instead of configuring LAGs for your iSCSI traffic.

  1. switchstack(config)# interface range ethernet 1/g11,2/g11
  2. switchstack(config-if)# channel-group 11 mode on
  3. switchstack(config-if)#exit
  4. switchstack(config)# interface port-channel 11
  5. switchstack(config-if-ch11)# spanning-tree portfast
  6. switchstack(config-if-ch11)# hashing-mode 6
  7. switchstack(config-if-ch11)# switchport mode access
  8. switchstack(config-if-ch11)# switchport access vlan 5
  9. switchstack(config-if-ch11)# mtu 9216
  10. switchstack(config-if-ch11)# exit
What we’ve done here is pretty much what we did for the first LAG, but we made this LAG an access port for only one vlan, instead of a trunk port for more than one. We also adjusted the mtu to support jumbo frames for the iSCSI traffic, because that’s what this vlan is used for.
Our final LAG group is going to contain three ports: two on the first switch, and just one port on the second. Let’s start with the following:
  1. switchstack(config)# interface range ethernet 1/g12-1/g13,2/g12
  2. switchstack(config-if)# channel-group 12 mode on
  3. switchstack(config-if)#exit
  4. switchstack(config)# interface port-channel 12
  5. switchstack(config-if-ch12)# spanning-tree portfast
  6. switchstack(config-if-ch12)# hashing-mode 6
  7. switchstack(config-if-ch12)# switchport mode trunk
  8. switchstack(config-if-ch12)# switchport trunk allowed vlan add 2,6-7
  9. switchstack(config-if-ch12)# exit

Don’t forget to “copy run start” on your switch, you don’t want to lose all that work you’ve just done! Okay, our first few LAGs are configured, time to set up our first ESXi server’s network configuration:

Now comes time to configure the networking on the first ESXi server. The first thing we’re going to do is setup the vSwitch that corresponds to the LAG group for the Management and vMotion vlans. Follow these steps:

  1. Log into your ESXi server using the vSphere Client.
  2. Click on the Configuration tab at the top.
  3. Click on “Networking” under the hardware section, in the left pane.
  4. We’re going to be adding a new vSwitch, so click on “Add Networking…” in the top right hand corner of the screen.
  5. Select the option for “VMkernel”, because this vSwitch will be supporting non-Virtual Machine tasks, click Next.
  6. Select “Create New Virtual Switch” and then check two vmnics (make sure these two are plugged into port 10 on each switch) and then press “Next”.
  7. Give this network the label of “MGMT_Network” or whatever you’ve named vlan 2 on the switches, for VLAN ID, enter the value of “2”, Check the box labeled “use this port group for management traffic”, click “Next”.
  8. Assign an IP address and subnet mask that are within the subnet of vlan 2. Click Next.
  9. Click “Finish”.
  10. Find the newly created vSwitch and click on “Properties”.
  11. Click “Add” to add a new port group.
  12. Select “VMkernel” again, and then click “Next”.
  13. Give this port group a name of “vMotion”, and a VLAN ID of “3”, Check the box labeled “use this port group for VMotion”, click “Next”.
  14. Click Finish.
  15. Select the “vSwitch”, which should be the first item in the list when the Port Group window closes, click “Edit…”.
  16. Click on the “NIC Teaming” tab.
  17. Change the “Load Balancing:” setting to “Route based on IP hash”.
  18. Leave the defaults of “Link status only” and “Yes” for the middle two settings, and then change the setting “Failback:” to “No”.
  19. Verify that both vmnics are listed under the “Active Adapters” section.
  20. Close all of the windows.
What we’ve just done is this: We’ve created a vSwitch, added two NICs to it, both of which are plugged into the LAG on the switches, configured IP hashing as the load balancing method (which is the ONLY method you can use with a LAG group), and set Failback to “No” on this vSwitch. We also created two Port Groups, and assigned each a VLAN ID and an IP address/subnet mask that match our existing vlans configured on the switches. We identified that these networks should be used for either management or vMotion, and gave them descriptive names that match the vlans on the switches.
We’ll repeat this process to create new vSwitches 3 more times; here are the breakdowns:
  • iSCSI port group, two vmnics: both plugged into the ports that make up LAG 11 on the switches, assigned vlan 5, assigned the name “iSCSI” or whatever you named the vlan on the switch, assigned an IP address in that subnet, NIC teaming configuration exactly the same as the first vSwitch we configured.
  • Fault Tolerance port group, one vmnic: plugged into one of the switch ports configured as an access port on vlan 4, VLAN ID of 4, a name that matches the vlan name on the switches, check the box for “Fault Tolerance Logging”, and an IP address in the corresponding subnet; leave all of the NIC Teaming settings in their default states.
  • And finally a vSwitch that contains a port group for each of your production VM networks. Assign VLAN IDs to each, and plug them into the switch ports that make up your final LAG group. Make sure the NIC Teaming settings match the example LAG group above. Don’t forget to create a Port Group for MGMT traffic, otherwise your vCenter server won’t be able to communicate with the ESXi servers later.
That’s it for the ESXi side. One note: changing the Management port groups may require a reboot of the ESXi host. It’s not supposed to, but sometimes it does, so if you reconfigure the management networks and then lose the ability to ping or connect to the host, reboot the system before you start other troubleshooting. Also, you’re going to want to make sure all of your LAG groups came up properly on the switches; you can use the following commands to test:
  • show interfaces port-channel – this will display the status of all interfaces in all LAG groups
  • show interfaces switchport port-channel XX – this will display a list of all tagged or untagged vlans on this particular LAG group or Ethernet port
That’s it, we’re now ready to finish up our ESXi configurations, Install a VM to run vCenter, and configure our iSCSI storage.

Configuring a Dell 6248 Switch Stack for use with a EqualLogic PS4000E Storage Array

I’m going to be doing some write-ups over the next few days pertaining to getting a small VMware vSphere 4.1 installation set up. We’ll be using a pair of Dell 6248 switches, configured in a stack, and a Dell EqualLogic PS4000E iSCSI storage array as our back end. In preparation for that, I’m going to go over our switch and network configuration in this post so that it’s clear how the network is configured.

We’ll have vlans for each of the following purposes:

  • Native vlan 1: we’ll use this as our isolated, un-trunked vlan for this switch, the vlan where unconfigured ports are placed by default. (vlan 1)
  • Management: things like DRACs, iLos, UPS management NICs, SAN  Management NICs, etc (vlan 2)
  • vMotion: Moving Virtual machines from one host to another host (vlan 3)
  • HA: VMWare Fault Tolerance (vlan 4)
  • iSCSI traffic (vlan 5)
  • and finally all vlans needed for the production virtual servers (vlans 6 & 7 )
As a prerequisite, we’re going to be doing some basic setup of the switch stack; if you’ve not set up the switches in a stack yet, please see this post.
Log into the switch and enter the following commands:
  1. switchstack> enable
  2. switchstack# config
  3. switchstack(config)# vlan database
  4. switchstack(config-vlan)# vlan 2-7
  5. switchstack(config-vlan)# exit
  6. switchstack(config)# interface vlan 2
  7. switchstack(config-if-vlan2)# name MGMT_VLAN
  8. switchstack(config-if-vlan2)# exit
  9. repeat steps 6-8 for each vlan, giving each a descriptive name
  10. switchstack(config)# spanning-tree mode rstp (assuming you are using rstp with your other switches in your network)
Now let’s configure some access ports for the MGMT Vlan devices to plug into, we’ll use the last 4 ports on each switch.
  1. switchstack(config)# interface range ethernet 1/g44-1/g48,2/g44-2/g48
  2. switchstack(config-if)# switchport mode access
  3. switchstack(config-if)# switchport access vlan 2
  4. switchstack(config-if)# spanning-tree portfast
  5. switchstack(config-if)# exit
We used spanning-tree portfast because we know these ports will be plugged into end devices, and we want them to come up instantly if the switch is rebooted, or a cable is unplugged and then plugged back in, we don’t want to wait for spanning tree to check for switching loops.

We’ll also need to define a few access ports for vlan 5, where we’ll be plugging in our PS4000E. Follow the exact same steps we used above to configure vlan 2, but substitute vlan 5 for vlan 2. Make sure you plug ports 0 and 1 on the EqualLogic controller modules into the vlan 5 ports of your switch, and port 2 on your controller modules into the switch ports for vlan 2 (port 2 on the SAN controller module is strictly for management, and therefore should not be on the vlans used for iSCSI traffic). We’ll also need to enable jumbo frames on the switch ports that will be moving iSCSI traffic, and disable unicast storm control. To do this enter the following commands:

  1. switchstack(config)# interface ethernet 1/g20
  2. switchstack(config-if-1/g20)# mtu 9216
  3. switchstack(config-if-1/g20)# no storm-control unicast
  4. switchstack(config-if-1/g20)# exit
  5. Repeat steps 1 – 3 for each port that connects to a storage array port (only ports 0 and 1; port 2 is for management only). See the example below for a way to do several ports at once.
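If you’d rather configure all of the array-facing ports in one pass, an interface range works just as well. The port numbers below (1/g20-1/g21 and 2/g20-2/g21) are only examples; use whichever ports your controller modules actually plug into:

  switchstack(config)# interface range ethernet 1/g20-1/g21,2/g20-2/g21
  switchstack(config-if)# mtu 9216
  switchstack(config-if)# no storm-control unicast
  switchstack(config-if)# exit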
Note: typically the mtu would be set to 9000, but when you run the “iSCSI enable” option on these switches it’s set to 9216, which is what I’ve chosen to implement here. I’ll update this post in the future if this turns out to be a problem with either the ESXi hosts or the EqualLogic SAN.

Also, I normally would not disable unicast storm-control, but when you enable the iSCSI optimization on the Dell switches, they do this automatically when an EqualLogic SAN is detected on a port. If anyone has an explanation of why this happens, please feel free to share it.

Finally we’ll also need to enable flow control at the switch level, to do this enter the following command:

  1. switchstack(config)# flowcontrol
We’re also going to place this switch into the MGMT_VLAN so that its management interface is on the same vlan as everything else we’re going to manage. Enter the following commands:
  1. switchstack(config)# ip address vlan 2
  2. switchstack(config)# ip address x.x.x.x y.y.y.y
  3. switchstack(config)# ip default-gateway z.z.z.z
Where x.x.x.x is the IP address of your switch on the new vlan, y.y.y.y is your subnet mask, and z.z.z.z is your gateway on the mgmt_vlan.
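For instance, if your management subnet were 192.168.2.0/24 (a made-up example), the commands would look like this; don’t forget to save your work when you’re done:

  switchstack(config)# ip address vlan 2
  switchstack(config)# ip address 192.168.2.10 255.255.255.0
  switchstack(config)# ip default-gateway 192.168.2.1
  switchstack(config)# exit
  switchstack# copy running-config startup-config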

That’s all of the configuration we’ll need at this point. We’ll set up the EqualLogic SAN next (covered here), and later we’ll configure the switches with Link Aggregation Groups to handle the connections to our ESXi servers.