
Configure Jumbo Frames in ESX 4 (vSphere) for Better Network Performance



This document describes how to configure a jumbo Maximum Transmission Unit (MTU) end-to-end across Cisco data center devices in a network that consists of a VMware ESXi host installed on the Cisco Unified Computing System (UCS), Cisco Nexus 1000V Series Switches (N1kV), Cisco Nexus 5000 Series Switches (N5k), and a NetApp storage controller.




Configure Jumbo Frames in ESX 4 (vSphere)



Whether the upstream network is 1 GbE or 10 GbE, the use of jumbo frames (an MTU size of 9000, for example) improves performance because it reduces the number of individual frames that must be sent for a given amount of data and reduces the need to split iSCSI data blocks across multiple Ethernet frames. Jumbo frames also lower host and storage CPU utilization.


If jumbo frames are used, you must ensure that the UCS and the storage target, as well as all of the network equipment between them, are capable of and configured to support the larger frame size. This means that the jumbo MTU must be configured end-to-end (initiator to target) in order for it to be effective across the domain.
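
As a concrete example of what that looks like on one of the intermediate hops, jumbo MTU on a Nexus 5000 is typically enabled globally through a network-qos policy. This is only a sketch, and the policy name "jumbo-mtu" is just a placeholder:

policy-map type network-qos jumbo-mtu
  class type network-qos class-default
    mtu 9216
system qos
  service-policy type network-qos jumbo-mtu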


Jumbo frames are not enabled by default. They can't be, really: when you install ESXi 4 there is only a single default vSwitch and the management/console network, and you build out the rest of your network configuration from there. You also won't want to enable jumbo frames on all of your networks and switches anyway.


You must enable jumbo frames on every hop in the network; otherwise it won't work, and you may find yourself in a position where performance on the storage network is actually worse than with jumbo frames disabled.
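
A quick way to catch a hop that was missed is a do-not-fragment ping from the ESX(i) host: 8972 bytes of ICMP payload plus 28 bytes of IP/ICMP headers make up a full 9000-byte packet, so the ping only succeeds if every device in the path passes jumbo frames. The target address below is a placeholder for your storage interface, and on newer ESXi builds you can add -I vmkX to choose the outgoing VMkernel interface.

vmkping -d -s 8972 192.168.10.50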


Could someone direct me to the documentation for the proper way to enable jumbo frame support on UCS? I'm trying to troubleshoot a problem and have configured MTU of 9000 from ESX running on a blade to an NFS server off a Nexus 5k. I have good documentation for configuring jumbo frame support on the Nexus 5k and on ESX but nothing really specific for UCS.


The MTU is set on a per-CoS basis in UCS. If you do not have a QoS policy defined for the vNIC that is going to the vSwitch, the traffic goes to the Best Effort class. All you need to do to enable jumbo frames is to set the MTU to 9000 in the MTU field for whatever class the traffic is going to.
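
For reference, the same change can be made from the UCS Manager CLI. This is only an outline from memory rather than a verified procedure; the scope names and allowed MTU values can differ between UCSM releases, so check the QoS System Class section of your UCSM configuration guide:

scope eth-server
 scope qos
  scope eth-best-effort
   set mtu 9216
   commit-buffer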


I also wanted to add to this post that if you SSH into the Fabric Interconnect and run a show interface ethernet 1/x, you may see an MTU of 1500 listed there, even though you may have, say, NFS traffic going over an appliance port and vNICs configured for jumbo frames in UCSM.


Note: certain vmkNIC parameters (such as the jumbo frame configuration) can only be set when the vmkNIC is initially created. Changing them later requires removing and re-adding the vmkNIC. For the jumbo frame example, see that section later in this post.


The idea is that larger frames mean less overhead on the wire and less processing on each end to segment TCP/IP packets into Ethernet frames and reassemble them for iSCSI. Note that the more recent Ethernet enhancements TSO (TCP Segmentation Offload) and LRO (Large Receive Offload) lessen the need to save host CPU cycles, but jumbo frames are still often configured to extract any last bit of benefit possible.


Inside ESX, jumbo frames must be configured on the physical NICs, on the vSwitch, and on the vmkNICs used by iSCSI. The physical uplinks and the vSwitch are handled by configuring the MTU of the vSwitch; once this is set, any physical NICs that are capable of passing jumbo frames are configured as well. For iSCSI, the vmkNICs must also be configured to pass jumbo frames.
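
A minimal sketch of that sequence on ESX(i) 4 from the service console or Tech Support Mode, assuming the iSCSI VMkernel port group is named "iSCSI1" on vSwitch1 (names and addresses are placeholders):

# set the vSwitch MTU, which also applies to its jumbo-capable uplinks
esxcfg-vswitch -m 9000 vSwitch1

# the vmkNIC MTU cannot be changed in place: remove it and re-create it with -m 9000
esxcfg-vmknic -d "iSCSI1"
esxcfg-vmknic -a -i 192.168.10.11 -n 255.255.255.0 -m 9000 "iSCSI1"

# verify the vSwitch and vmkNIC MTU values
esxcfg-vswitch -l
esxcfg-vmknic -l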


When I rescan my iSCSI adapter I get four paths with a default setting of "Fixed". This host has four NICs configured per the article with jumbo frames enabled. Each of the paths seems to attach directly to one of the nodes (".112").
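
For anyone following along in vSphere 4: if you did the port-binding part of the article, each iSCSI vmkNIC is bound one-to-one to the software iSCSI adapter, which is what produces a path per NIC. The binding is done roughly like this, where vmhba33 and the vmk names are placeholders for your own adapter and VMkernel ports:

esxcli swiscsi nic add -n vmk1 -d vmhba33
esxcli swiscsi nic add -n vmk2 -d vmhba33
esxcli swiscsi nic list -d vmhba33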


We are currently running ESX 3.5 in our data center with redundant 10 Gb NIC connections (Neterion) with jumbo frames (I know it's not officially supported, but we're not seeing any issues with it) and a LeftHand SAN. With the LeftHand SAN providing a different target for each LUN and our use of link aggregation, I'm seeing very consistent load balancing across both NICs in our current 3.5 deployment. Given that link aggregation will provide faster convergence in the event of a NIC/link failure, are there other compelling reasons for me to use iSCSI MPIO instead of my current setup as we migrate to vSphere?


However, for the jumbo frame section, I would add a line about configuring the vSwitch MTU. It seems that every article I could find about multipathing, iSCSI, and jumbo frames includes configuration for the NIC and HBA but not for the vSwitch.


Just as an aside, Cisco introduced iSCSI multipath support for the Nexus 1000v DVS in their most recent release, with some additional improvements (one of them addressing an issue with jumbo frames) coming hopefully later this month or early next.


Then I found this post. With some time, testing and patience I have gotten my speeds up to 250-300 MB/s on the 4 port GB cards (with other hosts connected to the SAN in the background). And that is without jumbo frames enabled.


You will get Ethernet errors if the jumbo frame configuration is missing on a device, for example a switch, but those will show up as either "CRC errors" or possibly "giant errors". However, there is no mechanism for fragmentation in Ethernet, as stated above, so to a device in the middle the larger frame will simply look corrupt.


We had an issue with iSCSI. The iSCSI network was lost: one of the guys deleted the VLAN and all the nodes (configured with iSCSI) failed to respond. They were running, but we could not connect to them with vSphere, so effectively they could not be vMotioned, etc., during the outage.


We run jumbo frames (9000-byte MTU) for our iSCSI SAN. MTU end-to-end on the parent Nexus 9k and the FEX ports is set at 9216. All blades except the Gen9 can successfully pass 9k frames to the NetApp. The Gen9 server is dropping ICMP frames larger than 2344 bytes with the DF bit set.


In the meantime, I hope this post can help someone experiencing the same scenario. The end result is that 9k jumbo frames are being passed properly, the iSCSI vmk interfaces have been attached to the BCM 57810 dependent iSCSI adapters, and the blade successfully added to the cluster.
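
For anyone reproducing this on ESXi 5.x or later, attaching the iSCSI vmk interfaces to the dependent hardware iSCSI adapters is done through the esxcli iscsi namespace; the adapter and vmk names below are placeholders for the BCM 57810 vmhbas and your iSCSI VMkernel ports:

esxcli iscsi networkportal add -A vmhba32 -n vmk1
esxcli iscsi networkportal add -A vmhba33 -n vmk2
esxcli iscsi networkportal list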


We need to enable jumbo frames both on the new vSwitch and on each new VMkernel port. iSCSI performance benefits greatly from the use of jumbo frames. Jumbo frames are Ethernet frames with a payload greater than the standard maximum transmission unit (MTU) of 1500 bytes. Typically, for iSCSI and other such use cases, we use an MTU value of 9000.
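
On current ESXi releases both settings can be made with esxcli; a short sketch, assuming a standard vSwitch named vSwitch1 and a VMkernel adapter vmk1 (adjust the names to your environment):

esxcli network vswitch standard set -v vSwitch1 -m 9000
esxcli network ip interface set -i vmk1 -m 9000
esxcli network ip interface list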


It would be highly desirable to be able to increase the MTU size over the WAN. If the MTU size could be increased throughout the path across the WAN, then the added encapsulation overhead could be absorbed by the WAN interfaces of the routers. This would eliminate the need to reduce the MTU size on the tunnel interfaces and adjust the MSS, and it would relieve the routers of performing any fragmentation. That's where jumbo frames come in.
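
To make that concrete, this is the kind of per-tunnel workaround (generic Cisco IOS syntax with example values) that a jumbo-capable WAN path makes unnecessary: shrinking the tunnel IP MTU and clamping the TCP MSS so the encapsulated packet still fits within a 1500-byte WAN MTU.

interface Tunnel0
 ip mtu 1400
 ip tcp adjust-mss 1360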


Jumbo frames are Ethernet frames with a size much larger than the typical 1,500-byte Ethernet MTU. In some situations, jumbo frames can be used to allow for much larger frame sizes if the networking hardware is capable of this configuration. Most modern routers and switches, as well as most data center networking hardware, can support jumbo frames.


The key concept to keep in mind is that all the network devices along the communication path must support jumbo frames. Jumbo frames need to be configured to work on the ingress and egress interface of each device along the end-to-end transmission path. Furthermore, all devices in the topology must also agree on the maximum jumbo frame size. If there are devices along the transmission path that have varying frame sizes, then you can end up with fragmentation problems. Also, if a device along the path does not support jumbo frames and it receives one, it will drop it.
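
A simple way to confirm that every device along a path agrees on the jumbo size is a do-not-fragment ping sized just below the target MTU (9000 bytes minus 28 bytes of IP and ICMP headers); from a Linux host, for example, where the address is a placeholder:

ping -M do -s 8972 192.168.10.50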


Most network devices support a jumbo frame size of 9,216 bytes. This isn't standardized like Ethernet's 1,500 byte MTU, though, so you want to check with your particular manufacturer on the largest frame size their devices support and how to configure the changes. Even within a single manufacturer's line of network products, the MTU capabilities may vary greatly, so it is important to do a thorough investigation of all your devices in the communication paths and validate their settings. For instance, some Intel Gigabit adapters support jumbo frames but many do not.


Jumbo frames may be beneficial to your network's performance. However, it is important to explore if and how your network devices support jumbo frames before you turn this feature on. Some of the biggest gains from using jumbo frames can be realized within and between data centers. But you should be cognizant of the fragmentation, or outright drops, that may occur if those large frames try to cross a link that has a smaller MTU size.


Several vSphere components can benefit from using a larger network frame size (MTU) than the regular 1500 bytes. vMotion and storage traffic (NFS, iSCSI, and vSAN) are examples that gain some performance from an increased frame size. In most cases, you would configure the MTU to the jumbo frame size of 9000.


I wanted to share an alternate method for getting through this problem for those that might still be having issues and want to use jumbo frames on a vDS. Essentially you use a vSwitch as a workshop to create a vmknic with a 9000 MTU and then migrate it over to your vDS.
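
In outline, the "workshop" approach reuses the esxcfg commands shown earlier: set the temporary standard vSwitch to MTU 9000, create a port group and the vmknic on it with -m 9000, and then migrate that VMkernel adapter to the vDS from the vSphere Client. The names and address below are placeholders.

esxcfg-vswitch -m 9000 vSwitch1
esxcfg-vswitch -A "jumbo-staging" vSwitch1
esxcfg-vmknic -a -i 192.168.10.12 -n 255.255.255.0 -m 9000 "jumbo-staging"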


Default implementations of TCP/IP typically set the MTU (maximum transmission unit) size at 1500 bytes. This is to ensure optimal interactions with legacy systems and Internet pathways. Using larger values for the MTU (jumbo frames) can increase the speed of large data transfers because, when all goes well, there are more useful data bytes per overhead header and encapsulation bytes.

