
Support for Large Frames/Packets ("Jumbo MTU")

On current research networks (and most other parts of the Internet), end nodes (hosts) are restricted to a 1500-byte IP Maximum Transmission Unit (MTU). Because a larger MTU would save effort for both hosts and network elements - fewer packets per volume of data means less work - many people have advocated raising this limit, in particular on research network initiatives such as Internet2 and GÉANT.

Impact of Large Packets

Improved Host Performance for Bulk Transfers

It has been argued that TCP is most efficient when the payload portions of TCP segments are integer multiples of the underlying virtual memory system's page size, because this permits "page flipping" techniques in zero-copy TCP implementations. A 9000-byte MTU "naturally" leads to 8960-byte segments, which do not correspond to any common page size. However, a TCP implementation should be smart enough to adapt its segment size to the page size where this actually matters, for example by using 8192-byte segments even though 8960-byte segments are permitted. In addition, Large Send Offload (LSO), which is becoming increasingly common with high-speed network adapters, removes the direct correspondence between TCP segments and driver transfer units, making this issue moot.
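As a back-of-the-envelope check of these numbers (a shell sketch; the values are the ones used in the text):

  $ # MSS for a 9000-byte IP MTU over IPv4: subtract 20-byte IP and 20-byte TCP headers
  $ echo $((9000 - 20 - 20))
  8960
  $ # a page-sized 8192-byte segment still fits within that MSS
  $ echo $((8192 <= 8960))
  1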

On the other hand, LSO and interrupt coalescence remove most of the host-performance motivation for large MTUs: the segmentation and reassembly function between (large) application data units and (smaller) network packets is mostly moved to the network interface controller.
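On Linux hosts, ethtool can show whether such offloads are active. A minimal sketch, assuming a Linux system with ethtool installed; the interface name eth0 and the output shown are illustrative:

  $ ethtool -k eth0 | grep -i segmentation
  tcp-segmentation-offload: on
  generic-segmentation-offload: on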

Lower Packet Rate in the Backbone

Another benefit of large frames is that their use reduces the number of packets that have to be processed by routers, switches, and other devices in the network. However, most high-speed networks are not limited by per-packet processing costs, and packet processing capability is often dimensioned for the worst case, i.e. the network should continue to work even when confronted with a flood of small packets. On the other hand, per-packet processing overhead may be an issue for devices such as firewalls, which have to do significant processing of the headers (but possibly not the contents) of each packet.
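To put numbers on this, the packet rate needed to fill a 10 Gbit/s link drops by a factor of six when moving from 1500-byte to 9000-byte packets (shell arithmetic, ignoring framing overhead):

  $ echo $((10**10 / (1500 * 8)))   # 1500-byte packets per second at 10 Gbit/s
  833333
  $ echo $((10**10 / (9000 * 8)))   # 9000-byte packets per second at 10 Gbit/s
  138888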

Reduced Framing Overhead

Another advantage of large packets is that they reduce the overhead for framing (headers) in relation to payload capacity. A typical TCP segment over IPv4 carries 40 bytes of IP and TCP headers, plus link-dependent framing (e.g. 14 bytes over Ethernet). Including the Ethernet framing, this represents about 3.6% overhead with the customary 1500-byte MTU, whereas an MTU of 9000 bytes reduces this overhead to about 0.6%.
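The percentages can be reproduced with a one-line calculation (54 bytes = 40 bytes IP+TCP plus 14 bytes Ethernet framing):

  $ awk 'BEGIN { printf "1500-byte MTU: %.2f%%  9000-byte MTU: %.2f%%\n", 54*100/1500, 54*100/9000 }'
  1500-byte MTU: 3.60%  9000-byte MTU: 0.60%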

Network Support for Large MTUs

Research Networks

GÉANT and most other research and education backbones now support a 9000-byte IP MTU. The access MTUs of the NRENs connecting to GÉANT are listed in the GÉANT Monthly Report (not publicly available).

Commercial Internet Service Providers

Many commercial ISPs support larger-than-1500-byte MTUs in their backbones (4470 bytes is a typical value) and on certain types of access interfaces, e.g. Packet over SONET (POS). But when Ethernet is used as the access interface, as is more and more frequently the case, the access MTU is usually set to the 1500 bytes corresponding to the Ethernet standard frame size limit. In addition, many inter-provider connections run over shared Ethernet networks at public exchange points, which also implies a 1500-byte limit, with very few exceptions - MAN LAN is a rare example of an Ethernet-based exchange point that explicitly supports the use of larger frames.

Possible Issues

Path MTU Discovery issues

Moving to larger MTUs can expose problems with the traditional Path MTU Discovery mechanism - see that topic for more information.
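One way to see what MTU actually survives along a path is tracepath (part of the Linux iputils suite), which probes with the Don't Fragment bit set and reports the discovered path MTU. The hostname is a placeholder and the output is abbreviated and illustrative:

  $ tracepath remote.example.net
  ...
   Resume: pmtu 1500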

Inconsistent MTUs within a subnet

There are other deployment considerations that make the introduction of large MTUs tricky, in particular the requirement that all hosts on a logical IP subnet must use identical MTUs; this makes gradual introduction hard for large bridged (switched) campus or data center networks, and also for most Internet Exchange Points.

When different MTUs are used on a link (logical subnet), this can go unnoticed for a long time. Packets smaller than the minimum MTU will always pass through the link, and the end with the smaller configured MTU will always fragment larger packets towards the other side. The only packets that are affected are packets from the larger-MTU side to the smaller-MTU side that exceed the smaller MTU. Those will typically be sent unfragmented and dropped at the receiving end (the one with the smaller MTU). Since such packets may rarely occur in normal operation, the misconfiguration is often not detected immediately.
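A mismatch of this kind can be checked for explicitly, for example by sending pings with the Don't Fragment bit set from the larger-MTU side (a sketch using Linux ping syntax; the address and interface name are placeholders):

  $ # 8972 = 9000 - 20 (IP header) - 8 (ICMP header); -M do forbids fragmentation
  $ ping -M do -s 8972 -c 3 192.0.2.1
  $ # compare the configured MTU on both ends of the link
  $ ip link show eth0 | grep -o 'mtu [0-9]*'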

Some routing protocols such as OSPF and IS-IS detect MTU mismatches and will refuse to build adjacencies in this case. This helps diagnose misconfigurations. Other protocols, in particular BGP, will appear to work as long as only small amounts of data are sent (e.g. during the initial handshake and option negotiation), but get stuck when larger amounts of data (i.e. the initial route advertisements) must be sent in the larger-to-smaller-MTU direction.

Problems with large MTUs on end-system network interface cards (NICs)

On some NICs it is possible to configure jumbo frames (for example MTU=9000), and the NIC appears to work fine when its functionality is checked with pings (jumbo-sized ICMP packets), even though the NIC vendor states that jumbo frames are not supported. In such cases the NIC drops packets when it receives jumbo frames at typical production data rates.

Therefore the NIC vendor's documentation should be consulted before activating large MTUs on a host interface. High-data-rate tests with large MTUs should then be performed, and the host's interface statistics should be checked for input packet drops. Typical commands on Unix systems are 'ifconfig <ifname>' and 'ethtool -S <ifname>'.
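As a sketch of such a check on Linux (counter names vary by NIC driver, and eth0 is a placeholder):

  $ # raise the MTU, run a high-rate transfer, then look for receive drops/errors
  $ ip link set dev eth0 mtu 9000
  $ ethtool -S eth0 | grep -i -E 'drop|err'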


Simon Leinen – 2005-03-16 – 2021-09-09

Hank Nussbacher – 2005-07-18 (added Phil Dykstra paper)
