Monthly Archives: November 2011

Enabling Jumbo Frames on your iSCSI vmnics and vSwitches in ESXi 4.1

When we set up our switches, we changed the mtu on the vlan used for iSCSI traffic. Now we need to edit the mtu on our iSCSI port groups and vSwitch so they also allow jumbo frames.

The first thing we need to do is take stock of which virtual switch and port group we're using for iSCSI traffic on each ESXi host. Follow these steps:

  1. Log into your host or vCenter server and then navigate over to your host’s “Configuration” tab.
  2. Click “Networking” on the left.
  3. Verify the Port Group name, Virtual Switch name, vmk number, IP address, and which vmnics are being used. See Figure 1.

    Figure 1: iSCSI Port Group

  4. If you've not already installed either the vCLI or the vMA, see the posts on how to install and configure them here and here.
  5. Open either vCLI or ssh into your vMA VM.
  6. Enter the following command: "esxcfg-vswitch -m 9000 <vSwitch's name> --server <Host's FQDN>".
  7. When prompted for a username and password enter the name and password of an account with Administrator permissions on that host.
  8. Verify that this change has taken effect by running the following command: "esxcfg-vswitch -l --server <Host's FQDN>". The MTU for your vSwitch should now be displayed as 9000.

We can't modify the mtu on our port group directly, so we'll need to migrate any VMs using iSCSI storage off of this host and then remove and re-create our iSCSI port group. Once you've migrated any running VMs, follow these steps:

  1. Open the Properties of the vSwitch we just modified.
  2. Select the port group in question, and then click "Remove".
  3. Now enter the following command in either vCLI or the vMA: "esxcfg-vswitch -A "iSCSI" <vSwitch's name> --server <Host's FQDN>". This command re-creates our iSCSI port group and attaches it to our vSwitch, but does not add a vmknic to the port group.
  4. Now enter the following command to re-create the vmknic: "esxcfg-vmknic -a -i <IP address> -n <netmask> -m 9000 "iSCSI" --server <Host's FQDN>".
  5. We can now verify that our port group has the correct mtu by running the following commands: "esxcfg-vswitch -l --server <Host's FQDN>" and "esxcfg-vmknic -l --server <Host's FQDN>". Check the MTU settings on both your port group and vmknic; they should now both be 9000.
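Taken together, the whole jumbo-frame change for one host can be sketched as a small dry-run script. The host name, vSwitch name, IP, and netmask below are placeholders, and every command is prefixed with echo so the script only prints what it would do; remove the echo prefixes to run the commands for real from vCLI or the vMA (you'll be prompted for credentials).

```shell
HOST="esxi01.example.local"   # assumed host FQDN
VSWITCH="vSwitch2"            # assumed vSwitch carrying iSCSI
PG="iSCSI"                    # iSCSI port group name from Figure 1
IP="192.168.5.11"             # example vmknic address on the iSCSI vlan
MASK="255.255.255.0"

echo esxcfg-vswitch -m 9000 "$VSWITCH" --server "$HOST"             # raise vSwitch MTU
echo esxcfg-vswitch -A "$PG" "$VSWITCH" --server "$HOST"            # re-create port group
echo esxcfg-vmknic -a -i "$IP" -n "$MASK" -m 9000 "$PG" --server "$HOST"  # re-create vmknic
echo esxcfg-vswitch -l --server "$HOST"                             # verify vSwitch MTU
echo esxcfg-vmknic -l --server "$HOST"                              # verify vmknic MTU
```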
We now need to rescan our iSCSI software adapter and refresh our storage view to make sure our iSCSI datastores are reconnected. Follow these steps:
  1. Click on "Storage Adapters" under the "Configuration" tab of your Host.
  2. Scroll down to your iSCSI Software Adapter, and then click "Rescan All…" in the top right. Verify that the iSCSI LUN(s) have re-appeared.
  3. Now click on "Storage" under the "Configuration" tab of your Host.
  4. Click "Rescan All…" in the top right of the screen. Verify that your iSCSI datastores have re-appeared.
Finally let’s verify that our iSCSI network is fully supporting our jumbo frame size. Follow these steps:
  1. Log into the console of your ESXi Host.
  2. Press F2 to customize your host.
  3. When prompted, log into your Host.  Scroll down to “Troubleshooting Options”. Press “enter”.
  4. Press enter on “Enable Local Tech Support” to enable it.
  5. Now press Alt+F1 to enter the console, and then log in again.
  6. Enter the following command: "vmkping -d -s 8972 <IP address of your SAN's iSCSI interface>". The -d flag forbids fragmentation, and 8972 is the largest ICMP payload that fits in a 9000-byte MTU once the IP and ICMP headers are counted. If the ping succeeds, jumbo frames are working end to end; if not, double-check the mtu settings on your switches and SAN.
  7. Press Alt+F2 to exit the local console.
  8. Press enter on "Disable Local Tech Support" to disable the local console on your host.
  9. Exit your host's console.
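A note on ping sizes: vmkping's -s flag sets the ICMP payload, and the IP and ICMP headers ride on top of it, so to truly verify a 9000-byte MTU the payload should be 8972 with fragmentation disallowed (-d). A quick check of that arithmetic:

```shell
MTU=9000          # vSwitch/port group MTU
IPV4_HEADER=20    # bytes
ICMP_HEADER=8     # bytes
PAYLOAD=$((MTU - IPV4_HEADER - ICMP_HEADER))
echo "$PAYLOAD"                                # prints 8972
echo "vmkping -d -s $PAYLOAD <SAN iSCSI IP>"   # the command to run on the host
```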
That's it, your host is now configured to use jumbo frames, and you can repeat these steps on the remaining hosts.

Installing the VMware vCLI on a vCenter 4.1 server

This is a quick guide on downloading and installing the vCLI on your vCenter Server. Follow these steps to install the vCLI.

  1. Open a browser and head over to the vCLI download page on VMware's site.
  2. Select Release 4.1.
  3. Click the link to download the installer; when prompted, log in with your VMware account.
  4. Agree to the EULA, and then download the file.
  5. Run the downloaded .exe file, and click "Next >" on the first screen.
  6. Accept the EULA, and then click “Next >”.
  7. Click “Next >” one more time, and then click “Install”.
  8. Click “Finish”.
  9. You can now access the vCLI by clicking on START > All Programs > VMware > VMware vSphere CLI > Command Prompt.

Adding your ESXi Host to vCenter and finishing its configuration

Now that we've got our vCenter server set up and running, it's time to finish up its basic configuration and get our ESXi servers added to it.

The first thing we’re going to need to do is create a datacenter. Follow these steps:

  1. Right click on the vCenter server in the upper left part of the screen.
  2. Select "New Datacenter" and assign it a name.
Now we’ll add the Hosts to the newly created Data Center.
  1. Right click on the Datacenter you just created and select “Add Host…”.
  2. Enter the Host's name, the username (root) and the password configured during the ESXi Host's original setup process. Click "Next >".
  3. Click “Yes” when the Security Alert appears.
  4. Click "Next >" to confirm the summary.
  5. Assign a license to the Host, or choose evaluation, and then click “Next >”.
  6. Check "Enable Lockdown Mode" if you want it enabled, and click "Next >".
  7. Select the location for your VMs, if there are any. Click “Next >”.
  8. Click “Finish”.
Repeat this for each of your Hosts, and when you've added them all we can move on to creating an HA/DRS cluster.
  1. Right click on the Datacenter you just created. Select “New Cluster…”.
  2. Give your new cluster a name, and then select if you want to enable HA or DRS or both. For the purposes of this write up, we’ll be enabling both. Click “Next >”.
  3. The first section asks you to configure your DRS automation level. I configure this as "Fully automated", with priority 1, 2, 3, and 4 recommendations being performed. Click "Next >".
  4. The next section asks how to configure Power Management automation. I configure this to be automatic, and leave the DPM Threshold at the default. Click “Next >”.
  5. The next section asks about how to configure HA. I leave these at the default settings. Make changes if you wish and then click “Next >”.
  6. The next section asks about how to handle VMs that stop responding and Hosts that stop responding. I leave these settings at their defaults. Make changes if you wish and then click “Next >”.
  7. The next section asks about monitoring the guest VMs. Enable VM Monitoring if you want, and then set your sensitivity level. Click “Next >”.
  8. The next section asks about EVC, if you are running hosts with different versions of processors, then you should enable this, if all of your hosts are identical, you can leave this disabled. Click “Next >”.
  9. The next section asks about the VM Swap file location. Unless you have a specific reason to do so I would not modify this. I leave it at the default unless I’ve got a raid 0 volume setup somewhere. Click “Next >”.
  10. Click "Finish" to create your cluster.
Now we need to add our hosts to the newly created cluster. Drag your first host into the cluster, and when you drop it you'll be put into the "Add Host" wizard. Follow these steps to add the host to the cluster:
  1. The first section will ask where you want to place the host's VMs, if there are any; if you've configured resource pools you can select one, otherwise leave this at the default setting and click "Next >".
  2. Click “Finish”.
The last thing we need to do for our hosts is configure their Power Management settings. I'm using Dell servers, so I'm going to configure the Power Management settings with the IP address, MAC address, and username/password of the built-in iDRAC on each server. Follow these steps:
  1. From the Hosts and Clusters inventory, click on the first host, and then click on the "Configuration" tab.
  2. Under the “Software” section click “Power Management”.
  3. Click “Properties…” in the top right corner of the screen.
  4. Enter the Username, Password, IP address, and MAC address of the host’s iDRAC interface. Click “OK”.
  5. If Power Management is configured on your cluster, the cluster can now put this host to sleep and wake it up when it’s needed.
Finally, the last thing we need to do to finish basic configuration is configure email alerts on the vCenter server. Follow these steps:
  1. Go to the “Home” screen in the vCenter client.
  2. Click on “vCenter Server Settings”.
  3. Click “Mail” in the left hand pane.
  4. Enter your SMTP server’s address, and enter a sender account for vCenter server. Click “OK”.
That’s it. We’re done with the basic configuration of vCenter server, our hosts, and our first cluster. We’ll move onto more advanced topics in future posts, such as Resource Pools, Cloning, Creating Templates, and Backing up VMs.

Script to set internal and external URLs on all of your Exchange Virtual Directories at once

I stumbled upon this script earlier today. It's great: a one-stop shop for setting all of the necessary URLs needed for both internal and external access.

Copy this to a text file, save it with a .ps1 extension, and then run it from the Exchange Management Shell (./<scriptname>.ps1).

#Changing InternalURL path
$urlpath = Read-Host "Type Internal Client Access FQDN starting with http:// or https://"
Set-AutodiscoverVirtualDirectory -Identity * -InternalUrl "$urlpath/autodiscover/autodiscover.xml"
Set-ClientAccessServer -Identity * -AutodiscoverServiceInternalUri "$urlpath/autodiscover/autodiscover.xml"
Set-WebServicesVirtualDirectory -Identity * -InternalUrl "$urlpath/ews/exchange.asmx"
Set-OabVirtualDirectory -Identity * -InternalUrl "$urlpath/oab"
Set-OwaVirtualDirectory -Identity * -InternalUrl "$urlpath/owa"
Set-EcpVirtualDirectory -Identity * -InternalUrl "$urlpath/ecp"
Set-ActiveSyncVirtualDirectory -Identity * -InternalUrl "$urlpath/Microsoft-Server-ActiveSync"
#Changing ExternalURL path
$urlpath2 = Read-Host "Type External Client Access FQDN starting with http:// or https://"
Set-AutodiscoverVirtualDirectory -Identity * -ExternalUrl "$urlpath2/autodiscover/autodiscover.xml"
#Note: AutodiscoverServiceInternalUri stays on the internal FQDN; external clients find Autodiscover via DNS
Set-WebServicesVirtualDirectory -Identity * -ExternalUrl "$urlpath2/ews/exchange.asmx"
Set-OabVirtualDirectory -Identity * -ExternalUrl "$urlpath2/oab"
Set-OwaVirtualDirectory -Identity * -ExternalUrl "$urlpath2/owa"
Set-EcpVirtualDirectory -Identity * -ExternalUrl "$urlpath2/ecp"
Set-ActiveSyncVirtualDirectory -Identity * -ExternalUrl "$urlpath2/Microsoft-Server-ActiveSync"
#Get commands to double-check the config
Get-AutodiscoverVirtualDirectory | fl Identity,InternalUrl,ExternalUrl
Get-ClientAccessServer | fl Identity,AutodiscoverServiceInternalUri
Get-WebServicesVirtualDirectory | fl Identity,InternalUrl,ExternalUrl
Get-OabVirtualDirectory | fl Identity,InternalUrl,ExternalUrl
Get-OwaVirtualDirectory | fl Identity,InternalUrl,ExternalUrl
Get-EcpVirtualDirectory | fl Identity,InternalUrl,ExternalUrl
Get-ActiveSyncVirtualDirectory | fl Identity,InternalUrl,ExternalUrl

Installing vCenter Server 4.1 on Server 2008 R2, and SQL 2008

Well we’re almost there, it’s now time to install vCenter Server. If you haven’t already done so, create a new VM and install Server 2008 R2 on it. Afterwards complete these steps:

  1. Join to the Active Directory Domain.
  2. Install the .NET Framework 3.5.1 Feature.
  3. Run Windows Updates.
  4. Disable and stop the "World Wide Web Publishing Service". It's unneeded, and it will get in the way of our vCenter installation.

First we'll need to install SQL 2008 and update it. Insert your SQL 2008 disk and then double-click its icon to launch the installer. Follow these steps:

  1. From the “Installation” section of the SQL launcher select “New SQL Server stand-alone installation or add Features…” .
  2. Run through the steps until you get to the section for selecting which features you want to install. Select the following: Instance Features: Database Engine Services; Instance Features: Full-Text Search; Shared Features: Management Tools - Complete. Click "Next".
  3. On the “Instance Configuration” screen, rename your instance to “vCenter” or some other descriptive name. Click Next.
  4. On the "Server Configuration" screen, check the box for "Use the same account for all SQL Server Services" and enter a domain user account with local admin privileges on this computer. Make sure both of the services running in that user's context are set to "Automatic". Click "Next".
  5. On the "Database Engine Configuration" screen, select the option for "Mixed Mode" and set a password for your SA account. Under the "Specify SQL Server Administrators" section, add the service account you want to use for vCenter to this list, as well as your other admin accounts. Click Next.
  6. Once completed, update SQL to the latest service pack level.

Once all of the updates have been installed, reboot the server and log back in as your vCenter service account (a domain user with local admin permissions on this box). We'll now create our database. Follow these steps:

  1. Open SQL Management Studio, and connect to your vCenter instance using Windows Authentication (you’re logged in as your vCenter service account right?).
  2. Right click on the server and instance at the top of Management Studio, and select "Properties". Click "Memory" in the left pane. Assign a maximum memory value in MB. Click "OK".
  3. Right Click on “Databases”, Click “New Database…”, in the General Section name your database “VCDB”, click “Options”, set your recovery model to “Simple”. Click “OK”.
  4. Right Click on the “Security” folder, Click “New” and then click “Login…”. Create a new SQL Server Authentication user called “vpxuser”, Assign a password and then clear the check box “Enforce Password Policy”. Set the Default Database to “VCDB”.
  5. Click “User Mapping” in the left pane. Check the box labeled “Map” on both the “msdb” and “VCDB” databases. Click the button “…” for each, and select the schema “dbo” for each. Assign the role “db_owner” for each database. Click “OK”.
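Steps 3 through 5 above can also be scripted. The T-SQL below is a hedged sketch of the same setup (the database name VCDB and login vpxuser come from the steps; the password is an example you should replace). Save it as a .sql file and run it with sqlcmd -S .\vCenter -E -i <file>.sql:

```sql
-- Create the vCenter database with the simple recovery model (step 3)
CREATE DATABASE VCDB;
ALTER DATABASE VCDB SET RECOVERY SIMPLE;
GO
-- Create the vpxuser SQL login without password-policy enforcement (step 4)
CREATE LOGIN vpxuser WITH PASSWORD = 'Example-Passw0rd',  -- example password only
    DEFAULT_DATABASE = VCDB, CHECK_POLICY = OFF;
GO
-- Map vpxuser into VCDB and msdb with the dbo schema and db_owner role (step 5)
USE VCDB;
CREATE USER vpxuser FOR LOGIN vpxuser WITH DEFAULT_SCHEMA = dbo;
EXEC sp_addrolemember 'db_owner', 'vpxuser';
GO
USE msdb;
CREATE USER vpxuser FOR LOGIN vpxuser WITH DEFAULT_SCHEMA = dbo;
EXEC sp_addrolemember 'db_owner', 'vpxuser';
GO
```

The db_owner mapping on msdb is only needed by the vCenter installer and can typically be revoked once the installation completes.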

A few final SQL Configuration steps and then we’ll install vCenter server. First let’s configure Microsoft SQL Server TCP/IP settings for JDBC. Follow these steps:

  1. Start the “SQL Server Configuration Manager”.
  2. Select "SQL Server Network Configuration", then "Protocols for <instance name>".
  3. Enable “TCP/IP”.
  4. Open TCP/IP Properties.
  5. On the “Protocol” tab verify the following settings: Enabled: Yes, Listen All: Yes, Keep Alive: 30000.
  6. On the “IP Addresses” tab, verify the following settings: Active: Yes, TCP Dynamic Ports: 0.
  7. Restart the SQL Services if you made any changes.

Now let’s grant SQL 2008 “Local Launch” permissions in Component Services. Follow these instructions:

  1. Open “Administrative Tools”, Open “Component Services”.
  2. Navigate to "Console Root > Component Services > Computers > My Computer > DCOM Config > MsDtsServer100".
  3. Right Click on “MsDtsServer100”, select Properties.
  4. Click the “Security” tab, Click “Customize” under the section labeled “Launch and Activation Permissions”. Click Edit.
  5. Click “Add…” Add the account that’s used to run your SQL Services. Check the box labeled “Allow” for “Local Launch”. Click OK on all boxes.

Okay, we’re done with the SQL Configuration, it’s now time to create our ODBC driver. Follow these steps:

  1. Open up Administrative Tools, and then click on “Data Sources (ODBC)”.
  2. Click the “System DSN” tab and then click “Add…”.
  3. Give a name and description to your driver, and then specify your server\instance name in the “Server:” section. Click “Next >”.
  4. Change the Authentication type to “With SQL Server authentication…” and enter the username of “vpxuser” and the password you created for this account. Click “Next >”
  5. Check the box for “Change the default database to:” and then select “VCDB”. Click “Next >”.
  6. Click “Finish”. Click “Test” to verify that the driver is working.

Okay! We’re here, we’re finally going to install vCenter Server. Follow these steps:

  1. This is actually pretty straight forward. Insert your installation media and select to install vCenter Server.
  2. When prompted to select SQL 2005 Express or select a DSN, choose the option to select a DSN, and then choose your DSN from the list. Click "Next".
  3. Enter the username and password for the dsn, which will be “vpxuser” and the password you set for that account in SQL. Click “Next”.
  4. When prompted which account to use to run the VMWare services, change from “SYSTEM” to the account you created for this task, the one that was added to the SQL admins group during the SQL installation. Enter the password and click “Next”.
  5. When the installation finishes, open "Services" and change both "VMware VirtualCenter Management Webservices" and "VMware VirtualCenter Server" to "Automatic (Delayed Start)".
  6. Reboot your server.

That’s it. You can now connect to your vCenter server using the vSphere client and any Active Directory “Domain Admin” account.

Finishing the configuration of the EqualLogic PS4000E

In another post, we’ve already got the basic setup of the SAN completed, now we just need to finish a few things and then provision some storage.

First let's get the firmware updated. If you've not already configured an account with EqualLogic, do so now by going to their support site and signing up.

Once you’ve downloaded the firmware we’ll update it by following these steps:

  1. Login to the management group ip of your device, expand “Members” in the left hand pane.
  2. Highlight the unit, and then click on the “Maintenance” tab.
  3. Click “Update firmware….”, Enter the admin password, and then click “OK”.
  4. Navigate to the .tgz file that you've downloaded from EqualLogic, and then press "OK".
  5. In the “Action” column click the link to upgrade and follow the steps to upgrade and reboot.
We'll now configure some email alerting. Log back into your management group IP and perform the following:
  1. Click the “Notifications” tab.
  2. Check the box labeled “Send e-mail to addresses”.
  3. In the “E-mail recipients” window, click “Add” and enter the email address you’d like to receive alert emails.
  4. In the "SMTP Servers" window click "Add", and enter the IP address of your SMTP server.
  5. Check the box labeled "Send e-mail alerts to customer support (E-Mail Home)".
  6. Enter a reply email address so that customer support can return an email to you in the event that the SAN reports a hardware failure.
  7. Enter the email address you want the SAN to use when it sends out emails in the box labeled “Sender e-mail address”.
That’s it for notifications, now let’s configure our first volume that we’ll make available to our ESXi hosts. Follow these steps:
  1. First expand “Storage Pools” in the left pane, and then click on the “Default” Storage Pool.
  2. Click on “Create Volume”, Give the volume a name and description and then click “Next >”.
  3. Give the volume a size. It's important to remember that ESXi has a LUN size limit of 2 TB minus 512 bytes, so for simplicity, don't make the volume larger than 2047 GB. Uncheck "thin provisioned volume" unless you want it to be thinly provisioned. If you plan on using snapshots, leave the snapshot reserve at 100%; if you'll be backing up the SAN without using snapshots, change it to 0% to conserve storage space. Click "Next >".
  4. Click “No access” for now, we’ll add access later. “Access Type” should be set to “read-write” and the box for “allow simultaneous connections from initiators with different IQN names” should be checked. Click “Next >”.
  5. Click “Finish”.
  6. Highlight the newly created volume, and then click the “Access” tab.
  7. Click “Add”, Check the box labeled “Limit access by IP address”, Enter the IP address of the first ESXi server (use the IP address for the nic team on the LAG we created for iSCSI in this post). Click “OK”.
  8. Repeat steps 6 & 7 for each of your ESXi hosts.
That’s it. We’ve got our SAN configured, at least enough to get vCenter installed and running properly. Time to get vCenter installed.


Finishing up the ESXi installation

Once all of our networking is configured, we just need to do a few more things to complete the configuration of ESXi. After that, we'll complete the SAN configuration, install vCenter on a VM, and get these hosts connected to it.

First let’s configure our time and NTP settings:

  1. Open the vSphere Client and connect to your ESXi host.
  2. Click the "Configuration" tab at the top, and then click on "Time Configuration" in the left pane.
  3. Click “Properties…” in the top right hand corner of the screen.
  4. Set a time close to that of your NTP server, and then click the “Options…” button.
  5. Click “NTP Settings” in the left pane, click “Add…”, and then Enter the IP address of your NTP server.
  6. Check the box labeled “Restart NTP service to apply changes” and then click on “OK”, Click “OK” on the last window.

That takes care of the Time Configuration, let’s now configure local storage on the Host.

  1. Remaining on the “Configuration” tab, Click “Storage” in the top left hand pane.
  2. Click “Add Storage…” in the top right hand part of the screen.
  3. Select the option for “Disk/LUN” and then click “Next >”.
  4. Select the local RAID or Disk controller on your Host, and then click “Next >”.
  5. If prompted, select to use all available space and partitions, unless you’ve got utility partitions on your system you want to keep. Click “Next >”.
  6. Give your local Datastore a name; it's a good idea to note in the name that it's local storage. Click "Next >".
  7. Choose your block size: select the maximum file size that most closely matches, or is just below, the amount of free storage. Click "Next >".
  8. Click “Finish”.
Now let’s add the iSCSI storage that we configured on the SAN.
  1. Remaining on the “Configuration” tab, Click “Storage Adapters” in the left hand pane.
  2. Scroll down to the “iSCSI Software Adapter”, select it, and then click on “Properties…” in the top right corner of the screen.
  3. Click "Configure", check the box for "Enabled", and then click "OK".
  4. Click the “Dynamic Discovery” tab, click “Add…”, Enter the IP address of the SAN GROUP (not any individual IP on any one controller), leave the default port and then click “OK”.
  5. When prompted, click “Yes” to rescan the adapters for storage. The LUN on the SAN that we just provisioned should now appear in the list.
  6. Remaining on the “Configuration” tab, Click “Storage” in the left hand pane.
  7. Click “Add…”, Select “Disk/LUN” and click “Next >”.
  8. Select the LUN from the SAN that we just added, and click “Next >”.
  9. Click “Next >” to add a new partition, Enter a name for this volume, preferably one that matches the volume on the SAN, and then click “Next >”.
  10. Select the Maximum file size that you want and then click “Next >”.
  11. Click “Finish”.
That’s all we need to do for now on the ESXi hosts, let’s move on and get vCenter installed on a VM.

Configuring LAG Groups between Dell 62xx Series Switches and ESXi 4.1

Okay, so we’ve already configured the basics on both our switches, and ESXi servers, now it’s time to configure the LAG groups, and vSwitches for each of our necessary purposes.

We’re going to configure one LAG group for each of the following:

  • Production network traffic for the VMs
  • iSCSI Traffic
  • Management and vMotion
  • We’re only going to be using one NIC for Fault Tolerance, so we’re not going to configure a LAG group for that.
Let's start by first identifying which ports we'll use on each switch, and for which purpose we'll use each group. When we started, we said we'd be using vlan 2 for Management, vlan 3 for vMotion, vlan 4 for Fault Tolerance, vlan 5 for iSCSI, and vlans 6 & 7 for various production VMs (also vlan 2 if you are going to virtualize the vCenter server, which we are).
So we'll need a total of 3 LAG groups, two of which will be trunking more than one vlan. Let's start by configuring the first LAG group. This one is going to be for the Management and vMotion purposes; we'll need 1 port on each switch in the stack, so let's use port 10 on both the first and second switch in the stack. Start by doing the following:


  1. Open your connection to your switch stack
  2. switchstack> enable
  3. switchstack# config
  4. switchstack(config)# interface range ethernet 1/g10,2/g10
  5. switchstack(config-if)# channel-group 10 mode on
  6. switchstack(config-if)#exit
  7. switchstack(config)# interface port-channel 10
  8. switchstack(config-if-ch10)# spanning-tree portfast
  9. switchstack(config-if-ch10)# hashing-mode 6
  10. switchstack(config-if-ch10)# switchport mode trunk
  11. switchstack(config-if-ch10)# switchport trunk allowed vlan add 2-3
  12. switchstack(config-if-ch10)# exit
What we just did was build a new Link Aggregation Group: we added port 10 on both of the switches in the stack to the LAG group, enabled the ports to transition to the forwarding state right away by enabling portfast, set the LAG group's load-balancing method to IP-Source-Destination (hashing-mode 6), converted the LAG group to a trunk, and added vlans 2 & 3 as tagged vlans on that trunk.
We’ll be doing the same thing for our next LAG, only we’re going to add some commands because this LAG will be handling iSCSI traffic. We’re going to use ports 11 on each switch for this next LAG group, start by entering the following:

 UPDATE: if you are configuring iSCSI for an EqualLogic array, please see this post instead of configuring LAGs for your iSCSI traffic.

  1. switchstack(config)# interface range ethernet 1/g11,2/g11
  2. switchstack(config-if)# channel-group 11 mode on
  3. switchstack(config-if)#exit
  4. switchstack(config)# interface port-channel 11
  5. switchstack(config-if-ch11)# spanning-tree portfast
  6. switchstack(config-if-ch11)# hashing-mode 6
  7. switchstack(config-if-ch11)# switchport mode access
  8. switchstack(config-if-ch11)# switchport access vlan 5
  9. switchstack(config-if-ch11)# mtu 9216
  10. switchstack(config-if-ch11)# exit
What we've done here is pretty much what we did for the first LAG, except we made this LAG an access port for a single vlan instead of a trunk port for more than one. We also adjusted the mtu to 9216 to support jumbo frames, because this vlan carries the iSCSI traffic.
Our final LAG group is going to contain three ports: two on one switch, and just one port on the other. Let's start:
  1. switchstack(config)# interface range ethernet 1/g12-1/g13,2/g12
  2. switchstack(config-if)# channel-group 12 mode on
  3. switchstack(config-if)#exit
  4. switchstack(config)# interface port-channel 12
  5. switchstack(config-if-ch12)# spanning-tree portfast
  6. switchstack(config-if-ch12)# hashing-mode 6
  7. switchstack(config-if-ch12)# switchport mode trunk
  8. switchstack(config-if-ch12)# switchport trunk allowed vlan add 2,6-7
  9. switchstack(config-if-ch12)# exit

Don't forget to "copy run start" on your switch; you don't want to lose all that work you've just done! Okay, our first few LAGs are configured, time to set up our first ESXi server's network configuration:

Now it's time to configure the networking on the first ESXi server. The first thing we're going to do is set up the vSwitch that corresponds to the LAG group for the Management and vMotion vlans. Follow these steps:

  1. Log into your ESXi server using the vSphere Client.
  2. Click on the Configuration tab at the top.
  3. Click on “Networking” under the hardware section, in the left pane.
  4. We’re going to be adding a new vSwitch, so click on “Add Networking…” in the top right hand corner of the screen.
  5. Select the option for "VMkernel", because this vSwitch will be supporting non-Virtual-Machine tasks, and click Next.
  6. Select “Create New Virtual Switch” and then check two vmnics (make sure these two are plugged into port 10 on each switch) and then press “Next”.
  7. Give this network the label of “MGMT_Network” or whatever you’ve named vlan 2 on the switches, for VLAN ID, enter the value of “2”, Check the box labeled “use this port group for management traffic”, click “Next”.
  8. Assign an IP address and subnet mask that are within the subnet of vlan 2. Click Next.
  9. Click “Finish”.
  10. Find the newly created vSwitch and click on “Properties”.
  11. Click “Add” to add a new port group.
  12. Select “VMkernel” again, and then click “Next”.
  13. Give this port group a name of “vMotion”, and a VLAN ID of “3”, Check the box labeled “use this port group for VMotion”, click “Next”.
  14. Click Finish.
  15. Select the “vSwitch”, which should be the first item in the list when the Port Group window closes, click “Edit…”.
  16. Click on the “NIC Teaming” tab.
  17. Change the “Load Balancing:” setting to “Route based on IP hash”.
  18. Leave the defaults of “Link status only” and “Yes” for the middle two settings, and then change the setting “Failback:” to “No”.
  19. Verify that both vmnics are listed under the “Active Adapters” section.
  20. Close all of the windows.
What we've just done is this: we created a vSwitch and added two NICs to it, both of which are plugged into the LAG on the switches; we configured IP hashing as the load-balancing method (which is the ONLY method you can use with a LAG group), and we set Failback to "No" on this vSwitch. We also created two port groups, assigned each a VLAN ID and an IP address/subnet mask that match our existing vlans on the switches, identified which network should be used for management and which for vMotion, and gave them descriptive names that match the vlans on the switches.
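For reference, here's a rough vCLI equivalent of the GUI steps above. This is a sketch, not a tested recipe: the host FQDN, vmnic numbers, and addresses are assumptions, and each command is echoed by a dry-run helper rather than executed (drop the helper to run them for real). Note that the IP-hash teaming policy itself still has to be set in the vSphere Client; esxcfg-vswitch doesn't manage it.

```shell
HOST="esxi01.example.local"   # assumed host FQDN
run() { echo "$@"; }          # dry-run helper: print each command instead of executing it

run esxcfg-vswitch -a vSwitch1 --server "$HOST"                    # create the vSwitch
run esxcfg-vswitch -L vmnic1 vSwitch1 --server "$HOST"             # link the two NICs
run esxcfg-vswitch -L vmnic2 vSwitch1 --server "$HOST"             #   that sit in the LAG
run esxcfg-vswitch -A MGMT_Network vSwitch1 --server "$HOST"       # management port group
run esxcfg-vswitch -v 2 -p MGMT_Network vSwitch1 --server "$HOST"  # VLAN ID 2
run esxcfg-vswitch -A vMotion vSwitch1 --server "$HOST"            # vMotion port group
run esxcfg-vswitch -v 3 -p vMotion vSwitch1 --server "$HOST"       # VLAN ID 3
run esxcfg-vmknic -a -i 192.168.2.11 -n 255.255.255.0 MGMT_Network --server "$HOST"
```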
We'll repeat this process to create new vSwitches three more times; here are the breakdowns:
  • iSCSI port group, two vmnics: both plugged into the ports that make up LAG 11 on the switches, assigned vlan 5, assigned the name "iSCSI" or whatever you named the vlan on the switch, assigned an IP address in that subnet, with the NIC teaming configuration exactly the same as the first vSwitch we configured.
  • Fault Tolerance port group, one vmnic: plugged into one of the switch ports configured as an access port on vlan 4, VLAN ID of 4, a name that matches the vlan name on the switches, check the box for “Fault Tolerance Logging”, and an ip address in the corresponding subnet, leave all of the NIC Teaming settings in their default states.
  • and finally a vSwitch that contains a port group for each of your production VM networks. Assign VLAN IDs to each, and plug the vmnics into the switch ports that make up your final LAG group. Make sure the NIC Teaming settings match the example LAG group above. Don't forget to create a port group for MGMT traffic, otherwise your vCenter server won't be able to communicate with the ESXi servers later.
That's it. One caveat: changing the management port groups may require a reboot of the ESXi host. It's not supposed to, but sometimes it does, so if you reconfigure the management networks and then lose the ability to ping or connect to the host, reboot it before you start other troubleshooting. Also, you're going to want to make sure all of your LAG groups came up properly on the switches; you can use the following commands to check:
  • show interfaces port-channel – this will display the status of all interfaces in all LAG groups
  • show interfaces switchport port-channel XX – This will display a list of all tagged or untagged vlans on this particular LAG group or Ethernet port
That’s it. We’re now ready to finish up our ESXi configuration, install a VM to run vCenter, and configure our iSCSI storage.

Initial Configuration of ESXi 4.1 Servers

The servers we’ve chosen for this particular VMware deployment are Dell R710s with ESXi 4.1 Embedded on an SD card inside the server itself. The upside is that we didn’t have to buy any local storage for the servers, so we saved on RAID controllers and local disks; the downside is that we don’t have any cheap local storage on any of the hosts, so everything has to go on the SAN.

So once the servers are racked, plugged in, and turned on, you’ll watch them boot, and after everything is said and done you’ll be left with an unfulfilling “no boot device found” BIOS error message. Here is how we get past this first hurdle:

  1. Reboot and enter the BIOS
  2. Scroll down to the section labeled “Integrated Devices” and then press Enter
  3. Scroll down to the section labeled “Internal SD Card Port”, change it to “ON” and then press Enter.
  4. Now reboot, and re-enter the BIOS
  5. Scroll down to the section labeled “Boot Settings” and then press Enter
  6. Scroll down to the section labeled “Hard-Disk Drive Sequence” and then press Enter
  7. Change the first boot device to “Internal SD Card: Flash Reader”, then save and exit the BIOS. Now when the server boots, it will load ESXi.
Once you’ve booted into ESXi’s configuration tool, log in with the username root and a blank password. You’ll now want to do a few things:
  • Configure a new password
  • Select which NIC to use for management, and configure an IP, subnet, and gateway
  • Configure DNS servers (and if you’ve not done this yet configure A records on your DNS servers for the ESXi servers now)
  • Identify which physical NIC ports on your servers correspond to the logical vmnics listed by ESXi. Go into the “Configure Management Network” section and select “Network Adapters”, then move a single cable from one NIC to another and exit and re-enter this screen to see which physical NICs correspond to which vmnics.
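If you already have vCLI or the vMA installed (see the earlier posts), you can do the same NIC identification remotely: list the physical NICs with their link state, pull one cable, and re-run the command to see which vmnic went down. A quick sketch; the hostname is an example:

```shell
# List all physical NICs with their driver, speed, MAC, and link state
# (sketch; the hostname is an example)
esxcfg-nics -l --server esxi01.example.local

# Pull one cable, then re-run the command: the vmnic whose link state
# changed to "Down" is the physical port you just unplugged.
```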
Now that we’ve configured enough of our ESXi servers to be able to use them, we can exit out of all of these screens, head over to a Windows computer, install the vSphere client, and connect in to finish the network setup that we started back in this post.

Let’s move on and configure the LAG groups on our Switches and ESXi servers here.

Initial Configuration of an EqualLogic PS Series Storage Array

Okay, so here are a few things I wish someone had told me about the EqualLogic SANs before I turned one on and started configuring it for the first time:

  1. Each NIC on the SAN gets its own IP, but each NIC purpose also gets an IP. What this means is that each NIC performing iSCSI will have an IP, but there will also be a GROUP IP for all iSCSI NICs; the same goes for management NICs: each NIC has an IP, and there is also a GROUP IP for all management NICs. Also, if you’re setting up more than one SAN, the GROUP IPs are cumulative and encompass the corresponding NICs on every SAN.
  2. The controller modules are Active/Passive. Only one is enabled at a time, so if you are planning on using 4 NICs for iSCSI traffic, you’d better upgrade to a 6000 series unit that has 4 NICs on EACH controller module.
  3. When you are running the setup wizard and it starts asking for IP information, it’s asking for iSCSI interface IP information, not management NIC IP information; we’ll configure that after the initial turn-up.

So, once you’ve got your PS4000 or PS6000 series plugged in and turned on, go ahead and plug Interface 0 into the switch ports configured for iSCSI. If you’ve not configured your switches yet, you can head over here to find out how. Plug a laptop into the same VLAN and run the “Remote Setup Wizard” from the CD that came with the SAN, then follow these steps:

  1. Make sure that you’ve got “Initialize a PS Series Array” selected, and then click Next >.
  2. Allow the wizard to discover your array, and then select it, then click Next >.
  3. On the “Initialize Array” screen you’ll need to enter the name for the array, and the IP address, subnet, and gateway of the first iSCSI NIC, then click Next >.
  4. On the “Create New Group” screen you’ll need to enter the name of the array group, as well as the iSCSI GROUP IP we talked about above. You’ll also need to select a RAID type, enter credential information for the admin account (username: grpadmin), and create a service account to be used for VDS/VSS features later. Then click Next >.
  5. You’ll then be told to wait for a bit, and more than likely also be told that the wizard failed to configure your registration with the iSCSI Initiator. Don’t worry about the error; it just means you either didn’t have the iSCSI Initiator installed, had the wrong IP information configured, or something else, and it does not matter at this point. Click OK, click OK again, and then click Finish.
  6. Now assign your computer an IP address in the subnet used for iSCSI traffic, and then connect to the GROUP IP you just configured.
  7. Log in with the username grpadmin and the administrator password you configured in step 4.
  8. Expand “Members” in the left hand pane, and then click on the array you just configured.
  9. Click the “Network” tab at the top, then click on each network interface that doesn’t already have an IP address. Assign an IP address, subnet, and description to each interface, and once it’s configured, enable the interface.
  10. Now click on “Group Configuration” in the left hand pane, then click on the tab “Advanced” at the top.
  11. Click the button called “Configure Management Network…”
  12. Check the box for “Enable Dedicated Management Network”. This is where you assign the GROUP IP for the management interfaces on this and all future arrays. Once you assign the IP and gateway, select Interface 2 from the list of interfaces and click OK.
  13. Make sure your management NICs are plugged into your MGMT VLAN; you should then be able to manage your array(s) using the new GROUP IP you just assigned.
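As an aside, steps 8–9 can also be done from the EqualLogic Group Manager CLI (over the serial console, or telnet/SSH to the group IP) instead of the web interface. This is only a rough sketch: the member name, interface number, and addresses are examples, and the exact syntax may vary between firmware releases, so check your array’s CLI reference.

```shell
# After logging in as grpadmin, configure and enable a second iSCSI interface
# (sketch; the member name "psarray1" and all addressing are examples)
member select psarray1 eth select 1 ipaddress 10.0.5.13 netmask 255.255.255.0
member select psarray1 eth select 1 description "iSCSI-eth1"
member select psarray1 eth select 1 up

# Confirm the interface configuration on that member
member select psarray1 show eths
```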
That’s it, the array is now configured and online. In some future posts we’ll look at configuring SMTP alerts, updating firmware, and creating volumes, but for now let’s get our ESXi servers configured by going here.