
Setup and Install the EqualLogic Multipathing Agent for VMware ESXi 5

In a past post I went into how to configure iSCSI over a LAG to give you some path redundancy over a single VMK IP. You can read about that here. A LAG, however, still leaves the iSCSI initiator with a single VMkernel path and hash-based balancing, so it's not the best way to configure multipathing. Here is a write-up on the proper way to set up the Multipathing Plugin on a VMware ESXi 5 server (I've also included steps to undo what may have been set up in the past).

Prerequisites

  1. Download and install WinSCP from here.
  2. Download the EqualLogic Multipathing Agent for VMware.
  3. Download, install, and configure the VMware vSphere Management Assistant (vMA); read about how to do that here.
  4. Optionally, install VMware Update Manager (vUM), which can be used to install the MEM in the event that the setup.pl --install script does not work.

Cleaning Up

If you’ve already had iSCSI configured on this host, it’s time to make note of a few things and then clean up before we get the EqualLogic MEM installed.

  1. Make note of all IPs that are being used by a host for iSCSI
  2. Make note of which NICs are being used by the vSwitch setup for iSCSI
  3. Delete the VM Kernel ports that are attached to the iSCSI vSwitch
  4. Delete the iSCSI vSwitch

Disable iSCSI on the Host

  1. Connect to the vMA using PuTTY, and then attach to your host using the following command: vifptarget -s <host's FQDN>
  2. For ESXi 4.x enter the following command: esxcfg-swiscsi -d
  3. For ESXi 5 enter the following command: esxcli iscsi software set -e false
  4. Reboot the Host

Enable iSCSI on the Host

  1. For ESXi 4.x enter the following command: esxcfg-swiscsi -e
  2. For ESXi 5.0 enter the following command: esxcli iscsi software set -e true
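
For reference, here's roughly what the whole disable/re-enable cycle looks like on an ESXi 5 host from the vMA. This is just a sketch, and the hostname is hypothetical; the final "get" simply confirms that software iSCSI is back on:

  vifptarget -s esx01.example.local
  esxcli iscsi software set -e false
  (reboot the host, wait for it to come back up, then reconnect)
  esxcli iscsi software set -e true
  esxcli iscsi software get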

Remove the old VMK bindings from the iSCSI HBA

For each of the VMkernel ports that you made note of before, run the appropriate command below, where <vmk_interface> is your vmk port (such as vmk1 or vmk2) and <vmhba_device> is your iSCSI vmhba adapter (such as vmhba38):

  1. For ESXi 4.x: esxcli swiscsi nic remove -n <vmk_interface> -d <vmhba_device>
  2. For ESXi 5: esxcli iscsi networkportal remove -n <vmk_interface> -A <vmhba_device>
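
If you didn't make note of the bindings earlier, you can list them before removing anything. A quick sketch, assuming vmhba38 is your iSCSI adapter:

  For ESXi 4.x: esxcli swiscsi nic list -d vmhba38
  For ESXi 5:   esxcli iscsi networkportal list -A vmhba38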

Installing the EqualLogic Multipathing Agent

Now that our host is so fresh and so clean (in terms of iSCSI, anyway), it's time to start configuring the Multipathing Extension Module.

Move the Setup Script and Bundle to the vMA

  1. Connect to your vMA using WinSCP; it should drop you into the home directory for the user 'vi-admin'.
  2. Locate the files that were extracted from the zip file you downloaded from EqualLogic: you are looking for "setup.pl" and "dell-eql-mem-esx5-X-X.X.XXXXXX.zip". The version in the .zip file name will depend on whether you're installing on ESXi 4.x or ESXi 5, so make sure you copy the right file.
  3. Once you've moved both files to the vMA, right-click on the "setup.pl" file from within WinSCP and select "Properties". Under the "Permissions" section, change the "Octal" value to "0777"; this will allow you to execute the script.
  4. Close WinSCP.

Configuring the MEM

  1. Connect to your vMA using ssh.
  2. You should automatically be dropped into the home directory of the 'vi-admin' user; verify this by running ls and making sure you see the two files you uploaded.
  3. Enter the following command to get started: ./setup.pl --configure --server=<esxi server's FQDN>
  4. Follow the bouncing ball once the script gets started. It will ask you for a username and password for the host, a name for the new virtual switch, and which NICs to use (list each one with a space in between them). It will also ask for an IP for each VMkernel port it creates, the Group IP of the array you want to connect to, and a few other things such as the subnet mask, MTU size, and whether or not to use CHAP. Use the information you collected above and the configuration of the array to answer the questions; a sample answer sheet follows this list. When the script completes you should see the new vSwitch and VMkernel ports in your configuration.
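
A hypothetical answer sheet for a two-NIC host; every value here is illustrative, so substitute the addresses and names from your own notes and array configuration:

  vSwitch name:         vSwitchISCSI
  NICs:                 vmnic2 vmnic3
  VMkernel IPs:         10.10.5.21 10.10.5.22
  Subnet mask:          255.255.255.0
  MTU:                  9000
  EqualLogic Group IP:  10.10.5.10
  CHAP:                 no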

Installing the Bundle

  1. While still logged into your vMA run the following command: ./setup.pl --install --server=<esxi server's FQDN>
  2. If you receive an error about being unable to install it, try disabling Admission Control on your HA cluster and re-running the command.

If for some reason you are unable to get the setup.pl --install command to work properly, you can use VMware Update Manager to install the bundle.

  1. Install and configure vUM, according to VMware instructions.
  2. Import the MEM offline bundle into the vUM package repository by selecting the “Import Patches” option and browsing to the dell-eql-mem-esxn-version.zip.
  3. Create a baseline containing the MEM bundle. Be sure to choose a “Host Extension” type for the baseline.
  4. Optionally add the new baseline to a baseline group.
  5. Attach the baseline or baseline group to one or more hosts.
  6. Scan and remediate to install the MEM on the desired hosts. Update Manager will put the hosts in maintenance mode and reboot if necessary as part of the installation process.
  7. If you get the error fault.com.vmware.vcIntegrity.NoEntities.summary, disable Admission Control and then try to remediate again.

Verifying that everything is working properly

  1. Once both the --configure and --install commands have been run, you can run the following command to make sure everything is working properly: ./setup.pl --query --server=<esxi server's FQDN>
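
Beyond setup.pl --query, you can also check from the host side that the MEM has claimed your EqualLogic volumes. A hedged sketch for ESXi 5 (device names will vary): run the following and look for "DELL_PSP_EQL_ROUTED" listed as the Path Selection Policy on your EqualLogic devices.

  esxcli storage nmp device list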

 

It's a little more work than the LAG setup, but this is the proper way to get a full and complete EqualLogic multipathing setup installed and working.

 

Installing vMA 4.1 in vSphere 4.1

Here is a quick guide to installing and configuring vMA 4.1 in a vSphere 4.1 installation. vMA (the vSphere Management Assistant) is an appliance that lets you more easily manage your hosts and vCenter server from a command line. Follow these instructions:

  1. First download the vMA ovf file from here.
  2. Open your vSphere client and connect to your vCenter server. Click on the “File” menu and then click “Deploy OVF template…”.
  3. Click "Browse…" and then locate your downloaded vMA ovf file, click "Next >".
  4. Click “Next >”, Agree to the EULA, and then click “Next >”.
  5. Give the vMA a name, and then select the Data center it will be deployed to. Click “Next >”.
  6. Select the host or cluster it will run on, and then click “Next >”.
  7. Select the Data store to place the files on, and then click “Next >”.
  8. Select your disk provision format, and then click “Next >”.
  9. Select your network from the drop down list, and then click “Next >”.
  10. Click Finish.

Once the import is finished we can start the wizard to configure the vMA tool. Open your vSphere client, connect to your vCenter server. Follow these steps:

  1. Find your vMA VM, open its console and click start.
  2. The vMA will boot to a prompt asking to use DHCP to assign an IP. Enter “no” and press “Enter”.
  3. It will now prompt for an IP address, enter an IP address and then press "enter".
  4. It will now prompt for a Subnet mask, enter a mask and then press “enter”.
  5. It will now prompt for a gateway, enter the IP address of your gateway and then press “enter”.
  6. It will now prompt you twice for your primary and secondary DNS, enter the IP addresses and press “enter” after each.
  7. It will prompt you for the vMA’s hostname, enter a FQDN and then press “enter”
  8. Type “yes” to confirm the settings and then press “enter”.
  9. The vMA VM will now reboot, and when it comes back up it will prompt you twice for a password.
  10. The VM will now display a screen telling you how to SSH into the box. For now press "Alt+F2" to enter the virtual terminal. Login with "vi-admin" and the password you just created.

Before we continue we should make sure that our Active Directory contains a security group named EXACTLY "ESX Admins", containing the accounts that we want to have Administrator access to our ESX/ESXi hosts. During the domain join process this group will automatically be granted the Administrator role on each ESX/ESXi host.

Now we need to join the vMA to the Active Directory domain. If you're not already logged into the virtual terminal on the vMA VM, then follow step 10 above and then perform the following:

  1. Enter the command “sudo domainjoin-cli join <your domain fqdn> <your AD domain username>” press “enter”
  2. The vMA will now prompt you for the password for the “vi-admin” account created on the vMA. Enter it and then press “enter”.
  3. The vMA will now prompt you for the password for the Active Directory user account you are trying to use to join it to the domain, enter the password and then press “enter”.
  4. You may see an error about the PAM module; as long as the word "SUCCESS" appears at the bottom of the screen, you've successfully joined the Active Directory domain.
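
For example, a hypothetical join (the domain and username are placeholders):

  sudo domainjoin-cli join corp.example.com administrator

If your Likewise build includes it, "sudo domainjoin-cli query" should afterwards report the domain you just joined.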

If we haven't already joined our ESXi servers to the Active Directory domain, now is a good time to do so. This is not a required step, but it will cut down on the number of usernames and passwords we'll need when configuring our ESXi hosts from the vMA. Follow these steps:

  1. Open the vSphere client and connect to your vCenter Server.
  2. Navigate to “Inventory” and then “Hosts and Clusters”.
  3. Select the first ESXi host, and then click on the “Configuration” tab.
  4. Click on “Authentication Services” and then click on “Properties…”.
  5. Change the “User Directory Service” from “Local Authentication” to “Active Directory”.
  6. Enter your domain name in the box titled “Domain:” and then click “Join Domain”.
  7. When prompted enter your Active Directory username and password, and then click "OK".
  8. Click the “Permissions” tab.
  9. Right Click and select “Add Permission…”.
  10. Change the drop down box to “Administrator” and then click the button titled “Add…”.
  11. Highlight users and/or groups that should be added to the list of local administrators on your ESXi server. Click the button titled “Add”. Click “OK”.
  12. Click “OK” again to add the permission.

The next thing we need to do is configure our vMA with a list of servers to manage, and which authentication type to use to manage them. Follow these steps:

  1. Open the console for your vMA
  2. If you’re not already logged in, log in as “vi-admin”
  3. Enter the following command to add your servers “vifp addserver <host's FQDN> --authpolicy adauth” and then press “enter”
  4. When prompted for a username enter <domain>\<username> of a user who was granted administrator permissions on that ESXi host. Make sure the host is not in standby mode, otherwise you'll get an error.
  5. Repeat this step for each host and the vCenter server.

Now that we've got all of our servers in the list, we can issue commands to them by appending --server <Host's FQDN> to each command. If you get tired of having to specify the server each time, you can set which server to use with: vifptarget -s <host's FQDN>. To clear the currently selected server, issue: vifptarget -c. And if you get tired of having to type your username and password in each time, just append the --passthroughauth flag to the end of each command. A short example session follows.
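
Everything below is hypothetical (substitute your own FQDNs); the esxcfg-nics -l is just a harmless read-only command to prove the target is working:

  vifp addserver esx01.example.local --authpolicy adauth
  vifptarget -s esx01.example.local
  esxcfg-nics -l
  vifptarget -c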

ESXi 4.1 Embedded (Installed on USB, SD, Flash) Does not allow Integrated Authentication to work. Error: gss_acquire_cred failed

I ran into a problem recently when configuring vMA for ESX/ESXi 4.1. I was able to join it, as well as the ESXi hosts, to the domain, but I was unable to log into the ESXi hosts with my AD credentials from either the vMA or the vSphere client. I double checked that my AD account did have Administrator permissions on the hosts, but still I could not log in. Both the vSphere Client and the vMA console gave me the error "gss_acquire_cred failed":

The interesting thing is this: if I manually specified which account to use, instead of checking the box to use the account I was logged in with, I could connect and perform the actions I wanted to do. If I checked the box, then I got the error "gss_acquire_cred failed". The same was true with vMA: if I used the --passthroughauth option the command would fail, but if I allowed vMA to prompt me for a username and password the command would succeed. Only integrated authentication between Windows and the VMware software was failing.

I did some research, and it turns out that when ESXi is installed on a USB drive, SD card, or flash memory, it does not automatically create persistent scratch space. This is the space that's used to store temporary data, among other things. This lack of persistent scratch space was somehow affecting the login process, but only when trying to pass credentials from a Windows session and not when typing them in manually.

Here is how you can configure Persistent Scratch space on either local storage or a vmfs volume using the vSphere client:

  1. Connect to vCenter Server or the ESXi host using the vSphere Client.
  2. Select the ESXi host in the inventory.
  3. Click the “Configuration” tab.
  4. Click “Storage”.
  5. Right-click a datastore and select “Browse”.
  6. Create a uniquely-named directory for this ESX host (ex. .locker-<ESXHostname> )
  7. Close the Datastore Browser.
  8. Click “Advanced Settings” under “Software”.
  9. Select the “ScratchConfig” section.
  10. Change the ScratchConfig.ConfiguredScratchLocation configuration option, specifying the full path to the directory. For example: /vmfs/volumes/DatastoreName/.locker-<ESXHostname>
  11. Click “OK”.
  12. Put the ESXi host in maintenance mode and reboot for the configuration change to take effect.
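
If you'd rather make the change from the vMA or vCLI instead of the vSphere Client, something like the following should work (the datastore path is illustrative, and the same maintenance-mode reboot still applies):

  vicfg-advcfg --server <host's FQDN> -s /vmfs/volumes/DatastoreName/.locker-<ESXHostname> ScratchConfig.ConfiguredScratchLocation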

Once the host is rebooted you'll be able to use vMA with the --passthroughauth flag, or log in by checking the box in the vSphere client to use the account you're already logged in with. To read more about this, check out this link to VMware's KB1033696.

Enabling Jumbo Frames on your iSCSI vmnics and vSwitches ESXi 4.1

When we set up our switches, we changed the MTU on our vlan for iSCSI traffic. Now we need to edit the MTU on our iSCSI port groups and vSwitch to also allow jumbo frames.

The first thing we need to do is take stock of which virtual switch and port group we're using for iSCSI traffic on each ESXi host. Follow these steps:

  1. Log into your host or vCenter server and then navigate over to your host’s “Configuration” tab.
  2. Click “Networking” on the left.
  3. Verify the Port Group name, Virtual Switch name, vmk number, IP address, and which vmnics are being used. See Figure 1.

    [Figure 1: iSCSI Port Group]

  4. If you’ve not already installed either vCLI or vMA see the posts on how to install and configure them Here and Here.
  5. Open either vCLI or ssh into your vMA VM.
  6. Enter the following command: "esxcfg-vswitch -m 9000 <vSwitch's name> --server <Host's FQDN>"
  7. When prompted for a username and password enter the name and password of an account with Administrator permissions on that host.
  8. Verify that this change has taken effect by running the following command: "esxcfg-vswitch -l --server <Host's FQDN>". The MTU for your vSwitch should now be displayed as 9000.

We can't modify the MTU on an existing port group, so we'll need to migrate any VMs that are on iSCSI storage off of this host and then remove and re-create our iSCSI port group. Once you've migrated any running VMs, follow these steps:

  1. Open the Properties of the vSwitch that we just modified.
  2. Select the port group in question, and then click "Remove".
  3. Now enter the following command in either vCLI or the vMA: "esxcfg-vswitch -A "iSCSI" <vSwitch's name> --server <Host's FQDN>". This re-creates our iSCSI port group and attaches it to our vSwitch, but does not add a vmknic to the port group.
  4. Now enter the following command to re-create the vmknic: "esxcfg-vmknic -a -i <ip address> -n <netmask> -m 9000 "iSCSI" --server <Host's FQDN>".
  5. We can now verify that our port group has the correct MTU by running the following commands: "esxcfg-vswitch -l --server <Host's FQDN>" and "esxcfg-vmknic -l --server <Host's FQDN>". Check the MTU settings on both your port group and NICs; they should now both be 9000.
We now need to rescan our iSCSI software adapter, and refresh our storage view to make sure our iSCSI Datastores are re-connected. Follow these steps:
  1. Click on “Storage Adapters” under the “Configuration” tab of your Host.
  2. Scroll down to your iSCSI Software Adapter, and then click “Rescan All…” in the top right, Verify that the iSCSI LUN(s) have re-appeared.
  3. Now click on “Storage” under the “Configuration” tab of your Host.
  4. Click “Rescan All…” in the top right of the screen. Verify that your iSCSI  Datastores have re-appeared.
Finally let’s verify that our iSCSI network is fully supporting our jumbo frame size. Follow these steps:
  1. Log into the console of your ESXi Host.
  2. Press F2 to customize your host.
  3. When prompted, log into your Host.  Scroll down to “Troubleshooting Options”. Press “enter”.
  4. Press enter on “Enable Local Tech Support” to enable it.
  5. Now press “Alt and F1” to enter the console, and then log in again.
  6. Enter the following command: "vmkping -d -s 8972 <IP Address of your SAN's iSCSI interface>". (8972 bytes of payload plus the 28 bytes of ICMP/IP headers makes a full 9000-byte packet, and -d forbids fragmentation, so a successful ping confirms the MTU end to end.) If this does not succeed, double check the MTU settings on your switches and SAN.
  7. Press "Alt+F2" to exit the local console.
  8. Press enter on “Disable Local Tech Support” to disable the local console on your host.
  9. Exit your hosts’s console.
That’s it, your host is now configured to use jumbo frames, and now you can repeat these steps on the remaining Hosts.
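
For reference, here is the whole per-host command sequence from the vMA in one place. All values are illustrative; substitute your own vSwitch name, IPs, and host FQDNs:

  esxcfg-vswitch -m 9000 vSwitch1 --server esx01.example.local
  esxcfg-vswitch -A "iSCSI" vSwitch1 --server esx01.example.local
  esxcfg-vmknic -a -i 10.10.5.21 -n 255.255.255.0 -m 9000 "iSCSI" --server esx01.example.local
  esxcfg-vswitch -l --server esx01.example.local
  esxcfg-vmknic -l --server esx01.example.local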


Adding your ESXi Host to vCenter and finishing its configuration

Now that we've got our vCenter server setup and running, it's time to finish up its basic configuration and get our ESXi servers added to it.

The first thing we’re going to need to do is create a datacenter. Follow these steps:

  1. Right click on the vCenter server in the upper left part of the screen.
  2. Select “New Datacenter”, assign it a name.
Now we’ll add the Hosts to the newly created Data Center.
  1. Right click on the Datacenter you just created and select “Add Host…”.
  2. Enter the Host's name, the username (root), and the password configured during the ESXi Host's original setup process. Click "Next >".
  3. Click "Yes" when the Security Alert appears.
  4. Click "Next >" to confirm the summary.
  5. Assign a license to the Host, or choose evaluation, and then click “Next >”.
  6. Check “Enable Lockdown Mode” if you want it enabled, Click “Next >”.
  7. Select the location for your VMs, if there are any. Click “Next >”.
  8. Click “Finish”.
Repeat this for each of your Hosts, and when you’ve added them all we can move on to creating a HA / DRS cluster.
  1. Right click on the Datacenter you just created. Select “New Cluster…”.
  2. Give your new cluster a name, and then select if you want to enable HA or DRS or both. For the purposes of this write up, we’ll be enabling both. Click “Next >”.
  3. The first section asks to configure your DRS automation level. I configure this as “Fully automated” and with Priority 1,2,3, & 4 recommendations being performed. Click “Next >”.
  4. The next section asks how to configure Power Management automation. I configure this to be automatic, and leave the DPM Threshold at the default. Click “Next >”.
  5. The next section asks about how to configure HA. I leave these at the default settings. Make changes if you wish and then click “Next >”.
  6. The next section asks about how to handle VMs that stop responding and Hosts that stop responding. I leave these settings at their defaults. Make changes if you wish and then click “Next >”.
  7. The next section asks about monitoring the guest VMs. Enable VM Monitoring if you want, and then set your sensitivity level. Click “Next >”.
  8. The next section asks about EVC, if you are running hosts with different versions of processors, then you should enable this, if all of your hosts are identical, you can leave this disabled. Click “Next >”.
  9. The next section asks about the VM Swap file location. Unless you have a specific reason to do so I would not modify this. I leave it at the default unless I’ve got a raid 0 volume setup somewhere. Click “Next >”.
  10. Click "Finish" to create your cluster.
Now we need to add our hosts to the newly created cluster. Drag your first host into the cluster and when you drop it you’ll be put into the “Add Host Wizard” Follow these steps to add the host to the cluster:
  1. The first section will ask you where you want to place the host's VMs, if there are any; if you've configured resource pools you can select one, otherwise leave this at the default setting and click "Next >".
  2. Click “Finish”.
The last thing we need to do for our hosts is configure their Power Management settings. I'm using Dell servers, so I'm going to configure the Power Management settings with the IP address, MAC address, and username/password of the built-in iDRAC on each server. Follow these steps:
  1. From the Hosts and Clusters Inventory, click on the first host, and then click on the "Configuration" tab.
  2. Under the “Software” section click “Power Management”.
  3. Click “Properties…” in the top right corner of the screen.
  4. Enter the Username, Password, IP address, and MAC address of the host’s iDRAC interface. Click “OK”.
  5. If Power Management is configured on your cluster, the cluster can now put this host to sleep and wake it up when it’s needed.
Finally, the last thing we need to do to finish basic configuration is configure email alerts on the vCenter server. Follow these steps:
  1. Go to the “Home” screen in the vCenter client.
  2. Click on “vCenter Server Settings”.
  3. Click “Mail” in the left hand pane.
  4. Enter your SMTP server’s address, and enter a sender account for vCenter server. Click “OK”.
That’s it. We’re done with the basic configuration of vCenter server, our hosts, and our first cluster. We’ll move onto more advanced topics in future posts, such as Resource Pools, Cloning, Creating Templates, and Backing up VMs.

Finishing the configuration of the EqualLogic PS4000E

In another post, we’ve already got the basic setup of the SAN completed, now we just need to finish a few things and then provision some storage.

First let's get the firmware updated. If you've not already configured an account with EqualLogic, do so now by going to http://support.equallogic.com and signing up.

Once you’ve downloaded the firmware we’ll update it by following these steps:

  1. Login to the management group ip of your device, expand “Members” in the left hand pane.
  2. Highlight the unit, and then click on the “Maintenance” tab.
  3. Click "Update firmware…", enter the admin password, and then click "OK".
  4. Browse to the .tgz file that you downloaded from EqualLogic, and then press "OK".
  5. In the “Action” column click the link to upgrade and follow the steps to upgrade and reboot.
We’ll now configure some email alerting, Log back into your management group IP and perform the following:
  1. Click the “Notifications” tab.
  2. Check the box labeled “Send e-mail to addresses”.
  3. In the “E-mail recipients” window, click “Add” and enter the email address you’d like to receive alert emails.
  4. In the "SMTP Servers" window click "Add", and enter the IP address of your SMTP server.
  5. Check the box labeled "Send e-mail alerts to customer support (E-Mail Home)".
  6. Enter a reply email address so that customer support can return an email to you in the event that the SAN reports a hardware failure.
  7. Enter the email address you want the SAN to use when it sends out emails in the box labeled “Sender e-mail address”.
That’s it for notifications, now let’s configure our first volume that we’ll make available to our ESXi hosts. Follow these steps:
  1. First expand “Storage Pools” in the left pane, and then click on the “Default” Storage Pool.
  2. Click on “Create Volume”, Give the volume a name and description and then click “Next >”.
  3. Give the volume a size; it's important to remember that ESXi has a LUN size limit of 2TB minus 512 bytes, so for simplicity, don't make the volume larger than 2047GB. Uncheck "thin provisioned volume" unless you want it to be thinly provisioned. For the snapshot reserve: if you are planning on using snapshots, leave it at 100%; if you are going to back up the SAN without using snapshots, change it to 0% to conserve storage space. Click "Next >".
  4. Click “No access” for now, we’ll add access later. “Access Type” should be set to “read-write” and the box for “allow simultaneous connections from initiators with different IQN names” should be checked. Click “Next >”.
  5. Click “Finish”.
  6. Highlight the newly created volume, and then click the “Access” tab.
  7. Click “Add”, Check the box labeled “Limit access by IP address”, Enter the IP address of the first ESXi server (use the IP address for the nic team on the LAG we created for iSCSI in this post). Click “OK”.
  8. Repeat steps 6 & 7 for each of your ESXi hosts.
That’s it. We’ve got our SAN configured, at least enough to get vCenter installed and running properly. Time to get vCenter installed.

 

Finishing up the ESXi installation

Once all of our networking is configured, we just need to do a few more things to complete the configuration of ESXi. After we're done with this, we'll complete the SAN configuration, install vCenter on a VM, and get these hosts connected to it.

First let’s configure our time and NTP settings:

  1. Open the vSphere Client and connect to your ESXi host.
  2. Click the "Configuration" tab at the top, and then click on "Time Configuration" in the left pane.
  3. Click “Properties…” in the top right hand corner of the screen.
  4. Set a time close to that of your NTP server, and then click the “Options…” button.
  5. Click “NTP Settings” in the left pane, click “Add…”, and then Enter the IP address of your NTP server.
  6. Check the box labeled “Restart NTP service to apply changes” and then click on “OK”, Click “OK” on the last window.

That takes care of the Time Configuration, let’s now configure local storage on the Host.

  1. Remaining on the “Configuration” tab, Click “Storage” in the top left hand pane.
  2. Click “Add Storage…” in the top right hand part of the screen.
  3. Select the option for “Disk/LUN” and then click “Next >”.
  4. Select the local RAID or Disk controller on your Host, and then click “Next >”.
  5. If prompted, select to use all available space and partitions, unless you’ve got utility partitions on your system you want to keep. Click “Next >”.
  6. Give your local Datastore a name, it’s a good idea to specifically note that it’s local storage in the name. Click “Next >”.
  7. Choose your block size. The block size determines the maximum file (virtual disk) size on the datastore, so pick the smallest block size whose maximum file size covers what you need. Click "Next >".
  8. Click “Finish”.
Now let’s add the iSCSI storage that we configured on the SAN.
  1. Remaining on the “Configuration” tab, Click “Storage Adapters” in the left hand pane.
  2. Scroll down to the “iSCSI Software Adapter”, select it, and then click on “Properties…” in the top right corner of the screen.
  3. Click “Configure”, Check the box for “Enabled”, and then click “OK”
  4. Click the “Dynamic Discovery” tab, click “Add…”, Enter the IP address of the SAN GROUP (not any individual IP on any one controller), leave the default port and then click “OK”.
  5. When prompted, click “Yes” to rescan the adapters for storage. The LUN on the SAN that we just provisioned should now appear in the list.
  6. Remaining on the “Configuration” tab, Click “Storage” in the left hand pane.
  7. Click “Add…”, Select “Disk/LUN” and click “Next >”.
  8. Select the LUN from the SAN that we just added, and click “Next >”.
  9. Click “Next >” to add a new partition, Enter a name for this volume, preferably one that matches the volume on the SAN, and then click “Next >”.
  10. Select the Maximum file size that you want and then click “Next >”.
  11. Click “Finish”.
That’s all we need to do for now on the ESXi hosts, let’s move on and get vCenter installed on a VM.

Configuring LAG Groups between Dell 62xx Series Switches and ESXi 4.1

Okay, so we’ve already configured the basics on both our switches, and ESXi servers, now it’s time to configure the LAG groups, and vSwitches for each of our necessary purposes.

We’re going to configure one LAG group for each of the following:

  • Production network traffic for the VMs
  • iSCSI Traffic
  • Management and vMotion
  • We're only going to be using one NIC for Fault Tolerance, so we're not going to configure a LAG group for that (see the access-port sketch after the third LAG below).
Let's start by first identifying which ports we'll use on each switch, and for which purpose we'll use each group. When we started we said we'd be using vlan 2 for Management, vlan 3 for vMotion, vlan 4 for Fault Tolerance, vlan 5 for iSCSI, and vlans 6 & 7 for various production VMs (also vlan 2 if you are going to virtualize the vCenter server, which we are).
So we'll need a total of 3 LAG groups, two of which will be trunking more than one vlan. Let's start by configuring the first LAG group. This one is going to be for the Management and vMotion traffic; we'll need 1 port on each switch in the stack, so let's use port 10 on both the first and second switch in the stack. Start by doing the following:

 

  1. Open your connection to your switch stack
  2. switchstack> enable
  3. switchstack# config
  4. switchstack(config)# interface range ethernet 1/g10,2/g10
  5. switchstack(config-if)# channel-group 10 mode on
  6. switchstack(config-if)#exit
  7. switchstack(config)# interface port-channel 10
  8. switchstack(config-if-ch10)# spanning-tree portfast
  9. switchstack(config-if-ch10)# hashing-mode 6
  10. switchstack(config-if-ch10)# switchport mode trunk
  11. switchstack(config-if-ch10)# switchport trunk allowed vlan add 2-3
  12. switchstack(config-if-ch10)# exit
What we just did was build a new Link Aggregation Group: we added port 10 on both of the switches in the stack to the LAG group, enabled the ports to transition to the forwarding state right away by enabling portfast, set the LAG group's load balancing method to IP-Source-Destination (hashing-mode 6), converted the LAG group to a trunk, and added vlans 2 & 3 as tagged vlans on that trunk.
We'll be doing the same thing for our next LAG, only we're going to add some commands because this LAG will be handling iSCSI traffic. We're going to use port 11 on each switch for this next LAG group; start by entering the following:

 UPDATE: if you are configuring iSCSI for an EqualLogic array, please see this post instead of configuring LAGs for your iSCSI traffic.

  1. switchstack(config)# interface range ethernet 1/g11,2/g11
  2. switchstack(config-if)# channel-group 11 mode on
  3. switchstack(config-if)#exit
  4. switchstack(config)# interface port-channel 11
  5. switchstack(config-if-ch11)# spanning-tree portfast
  6. switchstack(config-if-ch11)# hashing-mode 6
  7. switchstack(config-if-ch11)# switchport mode access
  8. switchstack(config-if-ch11)# switchport access vlan 5
  9. switchstack(config-if-ch11)# mtu 9216
  10. switchstack(config-if-ch11)# exit
What we've done here is pretty much what we did for the first LAG, but we made this LAG an access port for only one vlan instead of a trunk port for more than one. We also raised the MTU to 9216 to support jumbo frames, because this vlan carries the iSCSI traffic.
Our final LAG group is going to contain three ports: two on the first switch and just one on the other. Start with:
  1. switchstack(config)# interface range ethernet 1/g12-1/g13,2/g12
  2. switchstack(config-if)# channel-group 12 mode on
  3. switchstack(config-if)#exit
  4. switchstack(config)# interface port-channel 12
  5. switchstack(config-if-ch12)# spanning-tree portfast
  6. switchstack(config-if-ch12)# hashing-mode 6
  7. switchstack(config-if-ch12)# switchport mode trunk
  8. switchstack(config-if-ch12)# switchport trunk allowed vlan add 2,6-7
  9. switchstack(config-if-ch12)# exit
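
The Fault Tolerance NICs don't get a LAG, but the switch ports they plug into still need to be untagged members of vlan 4. A minimal sketch, assuming (hypothetically) that port 14 on each switch carries a host's FT NIC:

  switchstack(config)# interface range ethernet 1/g14,2/g14
  switchstack(config-if)# spanning-tree portfast
  switchstack(config-if)# switchport mode access
  switchstack(config-if)# switchport access vlan 4
  switchstack(config-if)# exit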

Don't forget to "copy run start" on your switch; you don't want to lose all that work you've just done! Okay, our first few LAGs are configured, time to set up our first ESXi server's network configuration:

Now comes time to configure the networking on the first ESXi server. The first thing we’re going to do is setup the vSwitch that corresponds to the LAG group for the Management and vMotion vlans. Follow these steps:

  1. Log into your ESXi server using the vSphere Client.
  2. Click on the Configuration tab at the top.
  3. Click on “Networking” under the hardware section, in the left pane.
  4. We’re going to be adding a new vSwitch, so click on “Add Networking…” in the top right hand corner of the screen.
  5. Select the option for "VMkernel", because this vSwitch will be supporting non-Virtual Machine tasks, click Next.
  6. Select “Create New Virtual Switch” and then check two vmnics (make sure these two are plugged into port 10 on each switch) and then press “Next”.
  7. Give this network the label of “MGMT_Network” or whatever you’ve named vlan 2 on the switches, for VLAN ID, enter the value of “2”, Check the box labeled “use this port group for management traffic”, click “Next”.
  8. Assign an IP address and subnet mask that are within the subnet of vlan 2. Click Next.
  9. Click “Finish”.
  10. Find the newly created vSwitch and click on “Properties”.
  11. Click “Add” to add a new port group.
  12. Select “VMkernel” again, and then click “Next”.
  13. Give this port group a name of “vMotion”, and a VLAN ID of “3”, Check the box labeled “use this port group for VMotion”, click “Next”.
  14. Click Finish.
  15. Select the “vSwitch”, which should be the first item in the list when the Port Group window closes, click “Edit…”.
  16. Click on the “NIC Teaming” tab.
  17. Change the “Load Balancing:” setting to “Route based on IP hash”.
  18. Leave the defaults of “Link status only” and “Yes” for the middle two settings, and then change the setting “Failback:” to “No”.
  19. Verify that both vmnics are listed under the “Active Adapters” section.
  20. Close all of the windows.
What we’ve just done is this: We’ve created a vSwitch, added two NICs to it, both of which are plugged into the LAG on the switches, and we configured ip hashing as the method of balancing (which is the ONLY method you can use with a LAG group), and we disabled link failover on this vSwitch. We also created two Port Groups, assigned each a VLAN ID, and an IP address/subnet mask that match our existing vlans configured on the switches. We identified that these networks should be used for either management or vMotion, and gave them descriptive names that match the vlans on the switches.
We'll repeat this process, creating new vSwitches, 3 more times; here are the breakdowns:
  • iSCSI port group, two vmnics: both plugged into the ports that make up LAG 11 on the switches, assigned vlan 5, assigned the name "iSCSI" or whatever you named the vlan on the switch, assigned an IP address in that subnet, with the NIC teaming configuration exactly the same as the first vSwitch we configured.
  • Fault Tolerance port group, one vmnic: plugged into one of the switch ports configured as an access port on vlan 4, VLAN ID of 4, a name that matches the vlan name on the switches, the box for "Fault Tolerance Logging" checked, and an IP address in the corresponding subnet; leave all of the NIC Teaming settings in their default states.
  • and finally a vSwitch that contains a port group for each of your production VM networks. Assign VLAN IDs to each, and plug the vmnics into the switch ports that make up your final LAG group. Make sure the NIC Teaming settings match the example LAG group above. Don't forget to create a Port Group for MGMT traffic, otherwise your vCenter server won't be able to communicate with the ESXi servers later.
That's it. After it's all configured on the ESXi side, it may take a reboot of the ESXi host when configuring and changing the Management port groups; it's not supposed to require that, but sometimes it does. So if you reconfigure the management networks and then lose the ability to ping or connect to the host, reboot the system before you start other troubleshooting. Also, you're going to want to make sure all of your LAG groups came up properly on the switches; you can use the following commands to check:
  • show interfaces port-channel – displays the status of all interfaces in all LAG groups
  • show interfaces switchport port-channel XX – displays a list of all tagged or untagged vlans on a particular LAG group or Ethernet port
That’s it, we’re now ready to finish up our ESXi configurations, Install a VM to run vCenter, and configure our iSCSI storage.

Initial Configuration of ESXi 4.1 Servers

The servers that we've chosen for this particular VMware deployment are Dell R710s with ESXi 4.1 Embedded on an SD card in the server itself. The good thing about this is that we didn't have to buy any local storage for the servers, and as such we save on RAID controllers and local disks; the downside is that we don't have any cheap local storage on any of the host systems, so everything has to go on the SAN.

So once the servers are racked, plugged in, and turned on, you'll watch them boot, and after everything is said and done you'll be left with an unfulfilling "no boot device found" BIOS error message. Here is how we resolve this first hurdle:

  1. Reboot and enter the BIOS
  2. Scroll down to the section labeled “Integrated Devices” and then press Enter
  3. Scroll down to the section labeled “Internal SD Card Port”, change it to “ON” and then press Enter.
  4. Now reboot, and re-enter the BIOS
  5. Scroll down to the section labeled “Boot Settings” and then press Enter
  6. Scroll down to the section labeled “Hard-Disk Drive Sequence” and then press Enter
  7. Change the first boot device to "Internal SD Card: Flash Reader", then save and exit the BIOS. Now when the computer boots it will load ESXi.
Once you’ve booted into ESXi’s configuration tool, login with the username of root and a blank password. You’ll now want to do a few things such as:
  • Configure a new password
  • Select which NIC to use for management, and configure an IP, subnet, and gateway
  • Configure DNS servers (and if you’ve not done this yet configure A records on your DNS servers for the ESXi servers now)
  • Identify which physical NIC ports on your servers correspond to the logical vmnics listed by ESXi. You can do this by entering the "Configure Management Network" section and selecting "Network Adapters"; move a single cable from one NIC to another, then exit and re-enter this screen to see which physical NICs correspond to which vmnics in ESXi.
Now that we've configured enough of our ESXi servers to be able to use them, we can exit out of all of these screens, head over to a Windows computer, install the vSphere client, and connect in to finish the network setup that we started back in this post.

Let’s move on and configure the LAG groups on our Switches and ESXi servers here.