Category Archives: Dell

Installing or Updating OpenManage on ESXi Hosts

Assuming you are using Dell servers, you might be interested to know that you can install the Dell OpenManage Server Administrator application on your ESXi hosts and manage their hardware in nearly the same way you would on your Windows servers. First, OpenManage needs to be downloaded; you can find it here: OpenManage. Make sure to download both the Windows version (for the system that will be managing the ESXi host) as well as the version that matches your ESXi version.

Next, we need to move the .vib over to the host. The way I do this is with a tool called WinSCP, which can be found here.

Enabling SSH on ESXi host

  1. Connect to the host with the vSphere Client
  2. Select the Host, and then click the “Configuration” tab
  3. Click “Security Profile” from the bottom right-hand box
  4. Click “Properties…” in the row titled “Services”
  5. Highlight “SSH” and then click “Options…”
  6. Click “Start”
  7. Click “OK”

Moving file to host with WinSCP

  1. Open WinSCP and enter the following:
    1. File Protocol: SCP
    2. Hostname: <IP Address of Host>
    3. Username: root
    4. Password: <root password>
  2. When prompted, click “Yes” to accept the host key of the server
  3. In the right-hand pane, find the folder called /tmp and double-click on it.
  4. In the left-hand pane, locate the .vib on your PC and then copy it into the /tmp folder.
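If you prefer the command line, the same copy can be done with scp instead of WinSCP. This is a sketch, assuming OpenSSH is available on your workstation; the file name and IP are placeholders:

```
# Copy the OpenManage VIB to the host's /tmp directory.
# Replace the file name and host IP with your own values.
scp OM-SrvAdmin-Dell-Web.vib root@192.0.2.10:/tmp/
```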

Installing OpenManage

  1. Place the host in maintenance mode
  2. SSH into the host using PuTTY
  3. Enter the following command, adjusting the file name to match that of your .vib:
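The command itself was omitted above; on ESXi 5.x it is typically esxcli software vib install. The file name here is a placeholder, so adjust it to match your download:

```
# Install the VIB that was copied to /tmp (host should be in maintenance mode).
esxcli software vib install -v /tmp/OM-SrvAdmin-Dell-Web.vib
```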

Now reboot the host.

Updating the OpenManage on the host

  1. Place the host in maintenance mode
  2. SSH into the host using PuTTY
  3. Enter the following command to confirm OpenManage is installed on the host; scroll up looking for VIBs from “Dell” and verify that “OpenManage” is in the list
  4. Next, run the following command to remove the existing version of OpenManage:
  5. Lastly, follow the installation instructions above to install the newest version.
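The list and remove commands referenced above were also omitted; with esxcli they typically look like the following. The VIB name in the remove command is an example, so confirm the exact name from the list output before removing:

```
# List installed VIBs and look for entries from "Dell".
esxcli software vib list | grep -i dell

# Remove the old OpenManage VIB by the name reported above.
esxcli software vib remove -n OpenManage-Srvadmin
```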

Managing the Host

  1. Install the Windows version of the software that was downloaded earlier on your management system
  2. Double-click on the OpenManage application once installed
  3. Enter the following:
    1. Hostname/IP Address: This is your host’s IP
    2. Username: root
    3. Password: <root password>
    4. Ignore Certificate Warnings: Checked

Now you can manage your host’s Dell hardware as if it were any Windows system with OpenManage installed.

NOTE: Annoyingly, both the management system and the hosts must be running the same version of the software; in this example that was OpenManage 8.2.

Reenable a Port on Dell PowerConnect switch after BPDU Guard disable

If you’ve enabled BPDU Guard on your endpoint-facing ports (which you should do), you’ve probably asked yourself what to do when those ports auto-disable themselves after a switch is plugged into them. It’s pretty simple: first remove whatever caused the port to disable, such as a loop or another switch, and then enter the following command on your PowerConnect:

# set interface active gigabitethernet 1/0/13

Assuming that port 13 is the port you want to reactivate.
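If you’re not sure which port was disabled, you can list the port states first; on a PowerConnect the session looks roughly like this (output format varies by model):

```
# Find the disabled port, then reactivate it.
console# show interfaces status
console# set interface active gigabitethernet 1/0/13
```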

Configure Stacking and update firmware on Dell 55xx Series Switches

Here is a quick and dirty guide to getting a Dell 55xx switch stack up and running and get the firmware updated across the stack.

  1. First download the most recent firmware and a TFTP server, then start the TFTP server and extract the firmware files into the TFTP server’s directory.
  2. Plug your HDMI stacking cables into each switch (cabled in such a way that the switches form a ring).
  3. Once the HDMI cables are plugged in, connect to the console and power up the first switch. You’ll have to use HyperTerminal, PuTTY, Tera Term, or some other console tool to run the initial wizard. Set the IP address of the switch, and when the wizard completes you can power up the second (and 3rd, 4th, etc.) switch in the stack.
  4. Set the Master switch by using the following command: stack master unit 1
  5. Upload the firmware to each unit in the stack with the following command: copy tftp://z.z.z.z/powerconnect_55xx-yyyy.ros unit://*/image replacing the z.z.z.z and yyyy values with the IP address of the TFTP server and the version of the firmware you downloaded.
  6. Upload the boot code to each unit in the stack with the following command: copy tftp://z.z.z.z/powerconnect_55xx_boot-yyyyy.rfb unit://*/boot replacing the z.z.z.z and yyyyy values with the IP address of the TFTP server and the version of the firmware you downloaded.
  7. After the boot files and firmware have been uploaded you can issue the following command to check which image location it was placed in: show bootvar
  8. Finally, once you know which image location it’s in, you can issue this command to boot from that firmware: boot system image-2 all, assuming your firmware was placed in image location 2 in the “show bootvar” output.
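Put together, the console session for steps 5 through 8 looks roughly like this (z.z.z.z, yyyy, and the image number are the same placeholders described above):

```
console# copy tftp://z.z.z.z/powerconnect_55xx-yyyy.ros unit://*/image
console# copy tftp://z.z.z.z/powerconnect_55xx_boot-yyyyy.rfb unit://*/boot
console# show bootvar
console# boot system image-2 all
```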

Setup and Install the EqualLogic Multipathing Agent for VMWare ESXi 5

In a past post I went into how to configure iSCSI over a LAG to give you some path redundancy over a single VMK IP. You can read about that here. For multiple reasons this is not the best way to configure multipathing, so here is a write-up on the proper way to set up the Multipathing Extension Module (MEM) on a VMware ESXi 5 server (I’ve also included steps to undo what may have been set up in the past).


  1. Download and install WinSCP from here.
  2. Download the EqualLogic Multipathing Agent for VMware.
  3. Download, install, and configure the vSphere Management Assistant (vMA); read about how to do that here.
  4. Optionally, install VMware Update Manager, which can be used to install the MEM in the event that the --install script does not work.

Cleaning Up

If you’ve already had iSCSI configured on this host, it’s time to make note of a few things and then clean up before we get the EqualLogic MEM installed.

  1. Make note of all IPs that are being used by a host for iSCSI
  2. Make note of which NICs are being used by the vSwitch setup for iSCSI
  3. Delete the VM Kernel ports that are attached to the iSCSI vSwitch
  4. Delete the iSCSI vSwitch

Disable iSCSI on the Host

  1. Connect to the vMA using PuTTY, and then attach to your host using the following command: vifptarget -s <host's FQDN>
  2. For ESXi 4.x, enter the following command: esxcfg-swiscsi -d
  3. For ESXi 5, enter the following command: esxcli iscsi software set -e false
  4. Reboot the Host

Enable iSCSI on the Host

  1. For ESXi 4.x, enter the following command: esxcfg-swiscsi -e
  2. For ESXi 5.0, enter the following command: esxcli iscsi software set -e true

Remove the old VMK bindings from the iSCSI HBA

For each of the VM Kernel ports that you made note of before, run the following command where <vmk_interface> is your vmk port such as vmk1, vmk2, and <vmhba_device> is your vmhba adapter for iSCSI such as vmhba38:

  1. For ESXi 4.x: esxcli swiscsi nic remove -n <vmk_interface> -d <vmhba_device>
  2. For ESXi 5: esxcli iscsi networkportal remove -n <vmk_interface> -A <vmhba_device>
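For example, using the vmk1 and vmhba38 values mentioned above, the ESXi 5 form would be:

```
esxcli iscsi networkportal remove -n vmk1 -A vmhba38
```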

Installing the EqualLogic Multipathing Agent

Now that our host is fresh and so clean clean, well in terms of iSCSI anyway, it’s time to start configuring the Multipathing Extension Module.

Move the Setup Script and Bundle to the vMA

  1. Connect to your vMA using WinSCP; it should drop you into the home directory for the user ‘vi-admin’.
  2. Locate the files that were extracted from the zip file you downloaded from EqualLogic: you are looking for the setup script and the offline bundle. The file names depend on whether you’re installing on ESXi 4.x or ESXi 5, so make sure you copy the right ones.
  3. Once you’ve moved both files to the vMA, right-click on the setup script from within WinSCP and select “Properties”. Under the “Permissions” section, change the “Octal” value to “0777”; this will allow you to execute the script.
  4. Close WinSCP.

Configuring the MEM

  1. Connect to your vMA using SSH.
  2. You should automatically be logged into the home directory of the ‘vi-admin’ user; verify this by running ls and making sure you see the two files you uploaded.
  3. Enter the following command to get started: ./<setup script> --configure --server=<esxi server's FQDN>
  4. Follow the bouncing ball once the script gets started. It’s going to ask you for a username and password for the host, a name for the new virtual switch, which NICs to use (list each one with a space in between them), an IP for each VMK port it creates, and the Group IP you want to connect to, plus a few other questions such as subnet mask, MTU size, and whether or not to use CHAP. Use the information you collected above and the configuration of the array to answer the questions; when the script completes you should see the new vSwitch and VMK ports in your configuration.

Installing the Bundle

  1. While still logged into your vMA, run the following command: ./<setup script> --install --server=<esxi server’s FQDN>
  2. If you receive an error about being unable to install it, try disabling Admission Control on your HA cluster and re-running the command.

If for some reason you are unable to get the --install command to work properly, you can use VMware Update Manager to install the bundle.

  1. Install and configure vUM, according to VMware instructions.
  2. Import the MEM offline bundle into the vUM package repository by selecting the “Import Patches” option and browsing to the offline bundle .zip file.
  3. Create a baseline containing the MEM bundle. Be sure to choose a “Host Extension” type for the baseline.
  4. Optionally add the new baseline to a baseline group.
  5. Attach the baseline or baseline group to one or more hosts.
  6. Scan and remediate to install the MEM on the desired hosts. Update Manager will put the hosts in maintenance mode and reboot if necessary as part of the installation process.
  7. If you get an error, disable Admission Control and then try to remediate again.

Verifying that everything is working properly

  1. Once both the --configure and --install commands have been run, you can run the following command to make sure everything is working properly: ./<setup script> --query --server=<esxi server's FQDN>


It’s a little more work than the LAG setup, but this is the proper way to get a full and complete EqualLogic multipathing setup installed and working.


MAC Laptop can’t connect to Dell 55xx Series Switch

I ran into a problem with various Mac laptops being unable to obtain an IP, or determine network speed, when plugged into a Dell PowerConnect 55xx series switch. It turns out this isn’t just affecting Apple products; it’s also a problem with some PCs that have newer Intel network cards. The problem stems from some of the newer Green Ethernet standards: in this case the switch and computer are unable to work out power settings on the NIC and cannot negotiate the proper speed and duplex. If you set the computer’s network card to full duplex and set the speed, you should be able to connect, but this becomes burdensome. The best way to fix this issue is to disable EEE (Energy-Efficient Ethernet) on the 55xx series switch. Follow these steps:

  1. Console into your switch and enter config mode by typing “config”.
  2. Enter the command “no eee enable”.
  3. Save the running config and then reboot the switch.
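As a console session, the steps above amount to roughly the following (prompt names are approximate):

```
console# config
console(config)# no eee enable
console(config)# exit
console# copy running-config startup-config
console# reload
```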

After the switch reboots, connect the Mac and verify that you can obtain network connectivity with the nic set to automatic.

Configuring LAG Groups between Dell 62xx Series Switches and ESXi 4.1

Okay, so we’ve already configured the basics on both our switches, and ESXi servers, now it’s time to configure the LAG groups, and vSwitches for each of our necessary purposes.

We’re going to configure one LAG group for each of the following:

  • Production network traffic for the VMs
  • iSCSI Traffic
  • Management and vMotion
  • We’re only going to be using one NIC for Fault Tolerance, so we’re not going to configure a LAG group for that.
Let’s start by first identifying which ports we’ll use on each switch, and for which purpose we’ll use each group. When we started we said we’d be using vlan 2 for Management, vlan 3 for vMotion, vlan 4 for Fault Tolerance, vlan 5 for iSCSI, and vlans 6 & 7 for various production VMs (also vlan 2 if you are going to virtualize the vCenter server, which we are).
So we’ll need a total of 3 LAG groups, two of which will be trunking more than one vlan. Let’s start by configuring the first LAG group. This one is going to be for the Management and vMotion purposes; we’ll need 1 port on each switch in the stack, so let’s use port 10 on both the first and second switch in the stack. Start by doing the following:


  1. Open your connection to your switch stack
  2. switchstack> enable
  3. switchstack# config
  4. switchstack(config)# interface range ethernet 1/g10,2/g10
  5. switchstack(config-if)# channel-group 10 mode on
  6. switchstack(config-if)#exit
  7. switchstack(config)# interface port-channel 10
  8. switchstack(config-if-ch10)# spanning-tree portfast
  9. switchstack(config-if-ch10)# hashing-mode 6
  10. switchstack(config-if-ch10)# switchport mode trunk
  11. switchstack(config-if-ch10)# switchport trunk allowed vlan add 2-3
  12. switchstack(config-if-ch10)# exit
What we just did was build a new Link Aggregation Group: we added port 10 on both of the switches in the stack to the LAG group, enabled the ports to transition to the forwarding state right away by enabling portfast, set the LAG group load-balancing method to IP-Source-Destination (hashing-mode 6), converted the LAG group to a trunk, and added vlans 2 & 3 as tagged vlans on that trunk.
We’ll be doing the same thing for our next LAG, only we’re going to add some commands because this LAG will be handling iSCSI traffic. We’re going to use port 11 on each switch for this next LAG group. Start by entering the following:

UPDATE: if you are configuring iSCSI for an EqualLogic Array, please see this post instead of configuring LAGs for your iSCSI traffic.

  1. switchstack(config)# interface range ethernet 1/g11,2/g11
  2. switchstack(config-if)# channel-group 11 mode on
  3. switchstack(config-if)#exit
  4. switchstack(config)# interface port-channel 11
  5. switchstack(config-if-ch11)# spanning-tree portfast
  6. switchstack(config-if-ch11)# hashing-mode 6
  7. switchstack(config-if-ch11)# switchport mode access
  8. switchstack(config-if-ch11)# switchport access vlan 5
  9. switchstack(config-if-ch11)# mtu 9216
  10. switchstack(config-if-ch11)# exit
What we’ve done here is pretty much what we did for the first LAG, but we made this LAG an access port for only one vlan instead of a trunk port for more than one. We also adjusted the MTU to support jumbo frames for the iSCSI traffic, because that’s what this vlan is used for.
Our final LAG group is going to contain three ports: two on one switch, and just one port on the other. Let’s start by entering:
  1. switchstack(config)# interface range ethernet 1/g12-1/g13,2/g12
  2. switchstack(config-if)# channel-group 12 mode on
  3. switchstack(config-if)#exit
  4. switchstack(config)# interface port-channel 12
  5. switchstack(config-if-ch12)# spanning-tree portfast
  6. switchstack(config-if-ch12)# hashing-mode 6
  7. switchstack(config-if-ch12)# switchport mode trunk
  8. switchstack(config-if-ch12)# switchport trunk allowed vlan add 2,6-7
  9. switchstack(config-if-ch12)# exit

Don’t forget to “copy run start” on your switch; you don’t want to lose all that work you’ve just done! Okay, our first few LAGs are configured, so it’s time to set up our first ESXi server’s network configuration:

Now comes time to configure the networking on the first ESXi server. The first thing we’re going to do is setup the vSwitch that corresponds to the LAG group for the Management and vMotion vlans. Follow these steps:

  1. Log into your ESXi server using the vSphere Client.
  2. Click on the Configuration tab at the top.
  3. Click on “Networking” under the hardware section, in the left pane.
  4. We’re going to be adding a new vSwitch, so click on “Add Networking…” in the top right hand corner of the screen.
  5. Select the option for “VMkernel”, because this vSwitch will be supporting non-Virtual-Machine tasks, then click Next.
  6. Select “Create New Virtual Switch” and then check two vmnics (make sure these two are plugged into port 10 on each switch) and then press “Next”.
  7. Give this network the label of “MGMT_Network” or whatever you’ve named vlan 2 on the switches, for VLAN ID, enter the value of “2”, Check the box labeled “use this port group for management traffic”, click “Next”.
  8. Assign an IP address and subnet mask that are within the subnet of vlan 2. Click Next.
  9. Click “Finish”.
  10. Find the newly created vSwitch and click on “Properties”.
  11. Click “Add” to add a new port group.
  12. Select “VMkernel” again, and then click “Next”.
  13. Give this port group a name of “vMotion”, and a VLAN ID of “3”, Check the box labeled “use this port group for VMotion”, click “Next”.
  14. Click Finish.
  15. Select the “vSwitch”, which should be the first item in the list when the Port Group window closes, click “Edit…”.
  16. Click on the “NIC Teaming” tab.
  17. Change the “Load Balancing:” setting to “Route based on IP hash”.
  18. Leave the defaults of “Link status only” and “Yes” for the middle two settings, and then change the setting “Failback:” to “No”.
  19. Verify that both vmnics are listed under the “Active Adapters” section.
  20. Close all of the windows.
What we’ve just done is this: we’ve created a vSwitch, added two NICs to it (both of which are plugged into the LAG on the switches), configured IP hashing as the method of load balancing (which is the ONLY method you can use with a LAG group), and disabled failback on this vSwitch. We also created two port groups, assigned each a VLAN ID and an IP address/subnet mask that match our existing vlans configured on the switches, identified that these networks should be used for either management or vMotion, and gave them descriptive names that match the vlans on the switches.
We’ll repeat this process to create new vSwitches 3 more times; here are the breakdowns:
  • iSCSI port group, two vmnics: both plugged into the ports that make up LAG 11 on the switches, assigned vlan 5, assigned the name “iSCSI” or whatever you named the vlan on the switch, assigned a IP address in that subnet, NIC teaming configuration exactly the same as the first vSwitch we configured.
  • Fault Tolerance port group, one vmnic: plugged into one of the switch ports configured as an access port on vlan 4, VLAN ID of 4, a name that matches the vlan name on the switches, check the box for “Fault Tolerance Logging”, and an ip address in the corresponding subnet, leave all of the NIC Teaming settings in their default states.
  • and finally a vSwitch that contains a port group for each of your production VM networks. Assign VLAN IDs to each, and plug them into the switch ports that make up your final LAG group. Make sure the NIC Teaming settings match the example LAG group above. Don’t forget to create a port group for MGMT traffic, otherwise your vCenter server won’t be able to communicate with the ESXi servers later.
That’s it for the ESXi side. Note that changing the Management port groups can require a reboot of the host; it’s not supposed to, but sometimes it does. So if you reconfigure the management networks and then lose the ability to ping or connect to the host, reboot it before you start other troubleshooting. You’ll also want to make sure all of your LAG groups came up properly on the switches; you can use the following commands to check:
  • show interfaces port-channel – displays the status of all interfaces in all LAG groups
  • show interfaces switchport port-channel XX – displays a list of all tagged or untagged vlans on a particular LAG group or Ethernet port
That’s it; we’re now ready to finish up our ESXi configurations, install a VM to run vCenter, and configure our iSCSI storage.

Initial Configuration of an EqualLogic PS Series Storage Array

Okay so here are a few things that I wish someone had told me about the EqualLogic SANs before I turned on one and started configuring it for the first time:

  1. Each NIC on the SAN will get its own IP, but each NIC purpose will also get an IP. What this means is that each NIC performing iSCSI will have an IP, but there will also be a GROUP IP for all iSCSI NICs; the same goes for Management NICs: each NIC has an IP and then there is also a GROUP IP for all Management NICs. Also, if you’re setting up more than one SAN, the GROUP IPs are cumulative and encompass all NICs on each SAN.
  2. The controller modules are Active/Passive. Only one is enabled at a time, so if you are planning on using 4 NICs for iSCSI traffic, better upgrade to a 6000 series unit that has 4 NICs on EACH controller module.
  3. When you are running the setup wizard, and it starts asking for IP information, it’s asking for iSCSI interface IP information, not management NIC IP information, we’ll configure that after the initial turn up.

So, once you’ve got your PS4000 or PS6000 series plugged in and turned on, go ahead and plug Interface 0 into the switch ports configured for iSCSI, if you’ve not configured your switches yet you can head over here to find out how to configure them. Plug a laptop into the same vlan, and run the “Remote Setup Wizard” from the CD that came with the SAN. Then follow these steps:

  1. Make sure that you’ve got “Initialize a PS Series Array” selected, and then click Next >.
  2. Allow the wizard to discover your array, and then select it, then click Next >.
  3. On the “Initialize Array” screen you’ll need to enter the Name for the Array, the IP address, subnet, and Gateway of the First iSCSI NIC, and then click Next >.
  4. On the “Create New Group” screen you’ll need to enter the name of the Array Group as well as the iSCSI Group IP, which we talked about above. We’ll also need to select a RAID type, enter credential information for the admin account (username: grpadmin), and create a service account to be used for VDS/VSS features later, then click Next >.
  5. You’ll then be told to wait for a bit, and then more than likely also be told that it failed to configure your registration with the iSCSI Initiator. Don’t worry about the error; it just means you either didn’t have the iSCSI Initiator installed, had the wrong IP information configured, or something else, but it does not matter at this point. Click OK, click OK again, and then click Finish.
  6. Now assign your computer an IP address in the subnet used for iSCSI traffic, and then connect to the GROUP IP you just configured.
  7. Login with the username of grpadmin, and the administrator password you configured in step 4.
  8. Expand “Members” in the left hand pane, and then click on the array you just configured.
  9. Click on the “Network” tab at the top, then click on each network interface that you’ve not already assigned an IP address to, and assign an IP address, subnet, and description to the interface. Once it’s configured, enable the interface.
  10. Now click on “Group Configuration” in the left hand pane, then click on the tab “Advanced” at the top.
  11. Click the button called “Configure Management Network…”
  12. Check the box for “Enable Dedicated Management Network”. Here is where you assign the GROUP IP for the management interfaces on this and all future arrays; once you assign the IP and gateway, select Interface 2 from the list of interfaces and then click OK.
  13. Make sure your Management NICs are plugged into your MGMT vlan, and then you should be able to manage your array(s) using the new GROUP IP you just assigned.
That’s it, the array is now configured and online. In some future posts we’ll look at configuring SMTP alerts, updating firmware, and creating volumes, but for now let’s get our ESXi servers configured by going here.

Configuring a Dell 6248 Switch Stack for use with an EqualLogic PS4000E Storage Array

I’m going to be doing some write ups over the next few days pertaining to getting a small VMWare vSphere 4.1 installation set up. We’ll be using a pair of Dell 6248 Switches, configured in a stack, and a Dell EqualLogic PS4000E iSCSI Storage Array as our back end. In preparation for that I’m going to be going over our switch and network configuration in this post so that it’s clear as to how the network is configured.

We’ll have vlans for each of the following purposes:

  • Native vlan 1: we’ll use this as our isolated, un-trunked vlan for this switch, the vlan where unconfigured ports are placed by default. (vlan 1)
  • Management: things like DRACs, iLOs, UPS management NICs, SAN management NICs, etc. (vlan 2)
  • vMotion: Moving Virtual machines from one host to another host (vlan 3)
  • HA: VMWare Fault Tolerance (vlan 4)
  • iSCSI traffic (vlan 5)
  • and finally all vlans needed for the production virtual servers (vlans 6 & 7 )
As a prerequisite, we’re going to be doing some basic setup of the switch stack; if you’ve not set up the switches in a stack yet, please see this post.
Log into the switch and enter the following commands:
  1. switchstack> enable
  2. switchstack# config
  3. switchstack(config)# vlan database
  4. switchstack(config-vlan)# vlan 2-7
  5. switchstack(config-vlan)# exit
  6. switchstack(config)# interface vlan 2
  7. switchstack(config-if-vlan2)# name MGMT_VLAN
  8. switchstack(config-if-vlan2)# exit
  9. repeat steps 6-8 for each vlan, giving each a descriptive name
  10. switchstack(config)# spanning-tree mode rstp (assuming you are using rstp with your other switches in your network)
Now let’s configure some access ports for the MGMT Vlan devices to plug into, we’ll use the last 4 ports on each switch.
  1. switchstack(config)# interface range ethernet 1/g44-1/g48,2/g44-2/g48
  2. switchstack(config-if)# switchport mode access
  3. switchstack(config-if)# switchport access vlan 2
  4. switchstack(config-if)# spanning-tree portfast
  5. switchstack(config-if)# exit
We used spanning-tree portfast because we know these ports will be plugged into end devices, and we want them to come up instantly if the switch is rebooted, or a cable is unplugged and then plugged back in, we don’t want to wait for spanning tree to check for switching loops.

We’ll also need to define a few access ports for vlan 5, where we’ll be plugging in our PS4000E. Follow the exact same steps we used above to configure vlan 2, but substitute vlan 5 for vlan 2. Make sure you plug ports 0 and 1 on the EqualLogic controller modules into the vlan 5 ports of your switch, and port 2 on your controller modules into the switch ports for vlan 2 (port 2 on the SAN controller module is strictly for management, and therefore should not be on the vlans used for iSCSI traffic). We’ll also need to enable jumbo frames on the switch ports that will be moving iSCSI traffic, and disable unicast storm control. To do this, enter the following commands:

  1. switchstack(config)# interface ethernet 1/g20
  2. switchstack(config-if-1/g20)# mtu 9216
  3. switchstack(config-if-1/g20)# no storm-control unicast
  4. switchstack(config-if-1/g20)# exit
  5. Repeat steps 1 – 3 for each port that connects to a storage array port (only 0 and 1; 2 is for management only)
Note: typically the MTU would be set to 9000, but when you run the “iSCSI enable” option on these switches it’s set to 9216, which is what I’ve chosen to implement here. I’ll update this post in the future if this turns out to be a problem with either the ESXi hosts or the EqualLogic SAN.

Also, I normally would not disable unicast storm control, but when you enable the iSCSI optimization on the Dell switches, they do this automatically when an EqualLogic SAN is detected on a port. If anyone has an explanation of why this happens, please feel free to share it.

Finally we’ll also need to enable flow control at the switch level, to do this enter the following command:

  1. switchstack(config)# flowcontrol
We’re also going to place this switch into the MGMT_VLAN so that its management interface is on the same vlan as everything else we’re going to manage. Enter the following commands:
  1. switchstack(config)# ip address vlan 2
  2. switchstack(config)# ip address x.x.x.x y.y.y.y
  3. switchstack(config)# ip default-gateway z.z.z.z
Where x.x.x.x is the IP address of your switch on the new vlan, y.y.y.y is your subnet mask, and z.z.z.z is your gateway on the mgmt_vlan.

That’s all of the configuration we’ll need at this point. We’ll now set up the EqualLogic SAN here, and later we’ll configure the switches for Link Aggregation Groups to handle the connections to our ESXi servers.

Configuring Stacking on Dell 6248 Switches

I opened up a set of new PowerConnect 6248s today, along with 4 stacking modules. I installed the stacking modules, connected the stacking cables as laid out in the installation manual (Switch 1 Port 1 going to Switch 2 Port 2, and Switch 2 Port 1 going to Switch 1 Port 2), and then turned them on.

To my surprise, they both had the “Master” light lit, and both had the stack ID light of “1” lit. I consoled into each of them, and neither saw the other switch, even though all of the cables were correct and the instruction manual said that nothing else needed to be performed. The manual stated that the first switch started would automatically become master and the others would just fall in line after it; this was not the case for me.

Here is what I had to do to get them working properly, and performing like stacked switches:

  1. log into the first switch via the console
  2. Type “show stack-port” and then press enter; this should show that your stack ports are set to “ethernet” instead of “stack”, which is why they are not forming a stack.
  3. Type “config”, press enter, and then type “stack” and press enter.
  4. Type “stack-port 1/xg1 stack” and then press enter.
  5. Type “stack-port 1/xg2 stack” and then press enter.
  6. Repeat these steps on the other switch, and then reboot both of them. Don’t forget to type “copy run start” before rebooting.
  7. Once they both reboot, only one should be displaying the “Master” light. Move your console cable to this switch, log into the console, and type “show switch”. Both switches should now be listed: one as the master (the one you are console-connected to), and the other(s) in “Oper Stby”, waiting to assume the master role if the master fails.
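On each switch, the console session from the steps above amounts to roughly this (prompt names are approximate):

```
console# show stack-port
console# config
console(config)# stack
console(config-stack)# stack-port 1/xg1 stack
console(config-stack)# stack-port 1/xg2 stack
console(config-stack)# exit
console(config)# exit
console# copy run start
console# reload
```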

Dell T710 R710 Servers Running 2008 R2 with Hyper-V Blue Screen

I had a problem not too long ago where a Dell PowerEdge R710 (a T710 in a rack chassis) blue-screened about once a week. The server was running Server 2008 R2 with only the Hyper-V role installed, and it was fully patched. The output of the dump file was:

MODULE_NAME: Unknown_Module
IMAGE_NAME: Unknown_Image
Followup: MachineOwner

I performed a little research and this appears to be related to systems running Xeon 5500 Series processors and a power saving feature in the BIOS called Enhanced Halt State (C1E). I went into the BIOS hoping to disable this, but I couldn’t find the option to disable it anywhere.

I then turned to Dell’s support site and downloaded the newest revision of the BIOS, which at the time was 2.1.15_1. After the update there was an option to disable C1E in the BIOS, which I promptly did.

After 7 days of testing I did end up getting this blue screen one more time. I double-checked that C1E was disabled, which it was, but it blue-screened nonetheless. On a whim I also disabled the power-saving S3 feature in the BIOS and patiently waited another week.

After 7 days it had not blue-screened. Two weeks went by, no blue screen. After 4 weeks it was still running without a hitch, so I called the case closed. It appears the combination of disabling C1E and S3 worked for me, but for many others disabling just C1E worked.

UPDATE: it appears that Microsoft has released an OS Hotfix for this that can be requested here.