Enabling Jumbo Frames on your iSCSI vmnics and vSwitches ESXi 4.1

When we set up our switches, we changed the MTU on our iSCSI VLAN. Now we need to edit the MTU on our iSCSI port groups and vSwitch to also allow jumbo frames.

The first thing we need to do is take stock of which virtual switch and port group we’re using for iSCSI traffic on each ESXi host. Follow these steps:

  1. Log into your host or vCenter server and then navigate over to your host’s “Configuration” tab.
  2. Click “Networking” on the left.
  3. Verify the Port Group name, Virtual Switch name, vmk number, IP address, and which vmnics are being used. See Figure 1.

    Figure 1: iSCSI Port Group

  4. If you’ve not already installed either vCLI or vMA, see the posts on how to install and configure them here and here.
  5. Open either vCLI or ssh into your vMA VM.
  6. Enter the following command: “esxcfg-vswitch -m 9000 <vSwitch's name> --server <Host's FQDN>“.
  7. When prompted for a username and password enter the name and password of an account with Administrator permissions on that host.
  8. Verify that this change has taken effect by running the following command: “esxcfg-vswitch -l --server <Host's FQDN>“. The MTU for your vSwitch should now be displayed as 9000.
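The two vCLI commands above can be sketched as a small shell wrapper. This is a dry run that only echoes the commands (esxcfg-vswitch exists only where vCLI or the vMA is installed), and the vSwitch name and host FQDN below are assumed placeholders, not values from this environment:

```shell
#!/bin/sh
# Dry-run sketch of the vSwitch MTU change via vCLI.
# VSWITCH and HOST are assumed placeholder values; substitute your own.
VSWITCH="vSwitch1"          # assumed name of the vSwitch carrying iSCSI
HOST="esxi01.example.com"   # assumed ESXi host FQDN

# Raise the vSwitch MTU to 9000 to allow jumbo frames:
SET_CMD="esxcfg-vswitch -m 9000 ${VSWITCH} --server ${HOST}"
# List vSwitches and confirm the MTU column now reads 9000:
LIST_CMD="esxcfg-vswitch -l --server ${HOST}"

echo "$SET_CMD"
echo "$LIST_CMD"
```

You will be prompted for credentials when the real commands run against the host, as described in step 7.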

We can’t modify the MTU on our port group, so we’ll need to migrate any VMs on iSCSI storage off of this host and then remove our iSCSI port group. Once you’ve migrated any running VMs, follow these steps:

  1. Open the Properties of the vSwitch we just modified.
  2. Select the port group in question, and then click “Remove”.
  3. Now enter the following command in either vCLI or the vMA: “esxcfg-vswitch -A "iSCSI" <vSwitch's name> --server <Host's FQDN>“. This command re-creates our iSCSI port group and attaches it to our vSwitch, but does not add a vmknic to the port group.
  4. Now enter the following command to re-create the vmknic, “esxcfg-vmknic -a -i <ip address> -n <netmask> -m 9000 "vmk#" -p "iSCSI" --server <Host's FQDN>“.
  5. We can now verify that our port group has the correct MTU by running the following commands: “esxcfg-vswitch -l --server <Host's FQDN>” and “esxcfg-vmknic -l --server <Host's FQDN>“. Check the MTU settings on both your port group and vmknics; they should now both be 9000.
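Putting steps 3–5 together, the re-create sequence looks like the sketch below. Again this is a dry run that only prints the commands, and every value is an assumed placeholder for illustration (the port group name “iSCSI” matches the one removed above):

```shell
#!/bin/sh
# Dry-run sketch of re-creating the iSCSI port group and its vmknic.
# All values below are assumed placeholders; substitute your own.
VSWITCH="vSwitch1"          # assumed vSwitch name
HOST="esxi01.example.com"   # assumed ESXi host FQDN
IP="192.168.50.11"          # assumed vmknic IP on the iSCSI VLAN
MASK="255.255.255.0"        # assumed netmask

# 1) Re-create the port group on the vSwitch (no vmknic yet).
# 2) Add a vmknic to that port group with a 9000-byte MTU.
# 3) List vmknics to confirm the MTU.
CMDS="esxcfg-vswitch -A iSCSI ${VSWITCH} --server ${HOST}
esxcfg-vmknic -a -i ${IP} -n ${MASK} -m 9000 -p iSCSI --server ${HOST}
esxcfg-vmknic -l --server ${HOST}"

echo "$CMDS"
```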
We now need to rescan our iSCSI software adapter, and refresh our storage view to make sure our iSCSI Datastores are re-connected. Follow these steps:
  1. Click on “Storage Adapters” under the “Configuration” tab of your Host.
  2. Scroll down to your iSCSI Software Adapter, then click “Rescan All…” in the top right. Verify that the iSCSI LUN(s) have re-appeared.
  3. Now click on “Storage” under the “Configuration” tab of your Host.
  4. Click “Rescan All…” in the top right of the screen. Verify that your iSCSI  Datastores have re-appeared.
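If you’d rather stay in vCLI or the vMA, a rescan can also be triggered from the command line with vicfg-rescan. The sketch below only echoes the command; the adapter name vmhba33 is an assumption — check your Storage Adapters list for the actual name of your iSCSI software adapter:

```shell
#!/bin/sh
# Dry-run sketch: rescanning the iSCSI software adapter from vCLI/vMA.
# Both values are assumed placeholders.
HOST="esxi01.example.com"   # assumed ESXi host FQDN
ADAPTER="vmhba33"           # assumed iSCSI software adapter name; yours may differ

RESCAN_CMD="vicfg-rescan --server ${HOST} ${ADAPTER}"
echo "$RESCAN_CMD"
```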
Finally let’s verify that our iSCSI network is fully supporting our jumbo frame size. Follow these steps:
  1. Log into the console of your ESXi Host.
  2. Press F2 to customize your host.
  3. When prompted, log into your Host.  Scroll down to “Troubleshooting Options”. Press “enter”.
  4. Press enter on “Enable Local Tech Support” to enable it.
  5. Now press “Alt and F1” to enter the console, and then log in again.
  6. Enter the following command: “vmkping -s 9000 <IP Address of your SAN's iSCSI interface>“. The ping should succeed, confirming that the MTU is 9000. If it does not, double-check the MTU settings on your switches and SAN.
  7. Press “Alt F2” to exit the local console.
  8. Press enter on “Disable Local Tech Support” to disable the local console on your host.
  9. Exit your host’s console.
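One note on vmkping sizes: the -s flag sets the ICMP payload size, not the on-wire frame size. The IP packet is the payload plus a 20-byte IP header and an 8-byte ICMP header, so the largest payload that fits in a single 9000-byte MTU frame is 8972 bytes. A quick sanity check of that arithmetic:

```shell
#!/bin/sh
# vmkping -s sets the ICMP payload size; the on-wire IP packet is
# payload + 20 (IP header) + 8 (ICMP header). With MTU 9000 the largest
# unfragmented payload is therefore 9000 - 28 = 8972 bytes.
MTU=9000
IP_HDR=20
ICMP_HDR=8
MAX_PAYLOAD=$((MTU - IP_HDR - ICMP_HDR))
echo "$MAX_PAYLOAD"    # prints 8972
```

A plain “vmkping -s 9000” still succeeds because the packet is fragmented; a payload at or below 8972 is what confirms an unfragmented jumbo frame made it through.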
That’s it! Your host is now configured to use jumbo frames, and you can repeat these steps on your remaining hosts.


2 thoughts on “Enabling Jumbo Frames on your iSCSI vmnics and vSwitches ESXi 4.1”

  1. Mike

    Thanks for the useful article Sean. I have a few comments after using your instructions this weekend to enable jumbo frames in my vmware 5.0 environment. Maybe these differences can be attributed to using vmware 4.0 vs. 5.0, but I thought I would post in case it’s helpful to anyone else.

    On the iSCSI port group, I was able to change the MTU using the vsphere client. You mentioned you can’t change the MTU on the port group, did you mean that it’s not possible at the command line? I wanted to avoid recreating the portgroup because last time I had to do this, vmware had to assist using commands to configure iscsi port binding that I didn’t have handy. Without the bind commands, vcenter would send alerts about lost storage path redundancy.

    Also a note on the vmkping. Once all the devices were enabled for 9000 MTU, I still don’t get a response using vmkping -s 9000. I do get a response for anything up to -s 8991. But 8992 and higher do not respond, I assume the packet overhead causes the packets to be larger than 9000.

    ~ # vmkping -s 8991
    PING ( 8991 data bytes
    8999 bytes from icmp_seq=0 ttl=64 time=1.487 ms
    8999 bytes from icmp_seq=1 ttl=64 time=1.442 ms
    8999 bytes from icmp_seq=2 ttl=64 time=1.491 ms
    — ping statistics —
    3 packets transmitted, 3 packets received, 0% packet loss
    round-trip min/avg/max = 1.442/1.473/1.491 ms

    ~ # vmkping -s 8992
    PING ( 8992 data bytes
    — ping statistics —
    3 packets transmitted, 0 packets received, 100% packet loss


    1. Sean LaBrie


      You are correct, there are some changes between 5 and 4, most noticeably that you can make those changes in the GUI now.

      As for your vmkping, what is the MTU set to on your switch? It should be 9000 on your ESXi box and the SAN, but higher on the switch, I typically go as high as the switch will allow, to make sure there is room for overhead.

      We’re pretty much using EqualLogic for everything now, and they have a nic setup script for taking care of the iSCSI settings for you.



