Link Aggregation for VMware ESXi and ESX

Requirements:

• An ESXi/ESX host supports NIC teaming only on a single physical switch or stacked switches.
• Link aggregation is never supported on disparate trunked switches.
• The switch must be set to perform 802.3ad link aggregation in static mode ON, and the virtual switch must have its load balancing method set to Route based on IP hash (a CLI check of these settings is sketched after this list). Ensure that the participating NICs are connected to ports on the same physical switch.
• Enabling Route based on IP hash without 802.3ad aggregation (or the reverse) disrupts networking, so make the change on the virtual switch first. While the two sides are mismatched, the service console is unreachable, but the physical switch's management interface still is, so you can then enable aggregation on the ports involved to restore networking.
• LACP support in vSphere Distributed Switch 5.1 is limited to IP hash load balancing. vSphere Distributed Switch 5.5 and later support all LACP load balancing algorithms. For more information, see LACP Support on a vSphere Distributed Switch.
• Do not use beacon probing with IP HASH load balancing.
• Do not configure standby or unused uplinks with IP HASH load balancing.
• VMware supports only one EtherChannel bond per Virtual Standard Switch (vSS). Prior to vSphere 5.5, when using vSphere Distributed Switches (vDS), each ESXi/ESX host can have only one EtherChannel bond configured per vDS.
• ESXi 5.1, 5.5, 6.0 and 6.5 support LACP on vDS only. For more information, see Enabling or disabling LACP on an Uplink Port Group using the vSphere Web Client (2034277).
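
A quick way to check these settings from the ESXi Shell is sketched below (vSwitch0 is an example name; the command assumes ESXi 5.x or later esxcli):

esxcli network vswitch standard policy failover get --vswitch-name=vSwitch0

The output should show IP hash load balancing, link-status (not beacon) failure detection, and no standby or unused adapters.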

Link aggregation concepts:

EtherChannel: This is a link aggregation (port trunking) method used to provide fault tolerance and high-speed links between switches, routers, and servers by grouping two to eight physical Ethernet links into a single logical Ethernet link, with additional links available for failover. For additional information on Cisco EtherChannel, see the EtherChannel Introduction by Cisco.
LACP or IEEE 802.3ad: The Link Aggregation Control Protocol (LACP) is included in the IEEE 802.3ad specification as a method to control the bundling of several physical ports into a single logical channel. LACP allows a network device to negotiate automatic bundling of links by sending LACP packets to the peer (a directly connected device that also implements LACP). For more information on LACP, see the Link Aggregation Control Protocol whitepaper by Cisco.

Note: LACP is supported only in vSphere 5.1, 5.5, 6.0, and 6.5 when using vSphere Distributed Switches (vDS) or the Cisco Nexus 1000V.

EtherChannel vs. 802.3ad: EtherChannel and IEEE 802.3ad are very similar and accomplish the same goal. Aside from EtherChannel being Cisco proprietary while 802.3ad is an open standard, there are only a few differences between the two.
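
On the switch side, the practical difference shows up in the channel-group mode. The snippet below is a sketch only; the interface names and channel-group numbers are examples:

! Static EtherChannel (no negotiation) - pairs with Route based on IP hash on a vSwitch or vDS
interface GigabitEthernet1/1
 channel-group 1 mode on
! LACP negotiation - supported only with a LAG on a vSphere Distributed Switch (see the note above)
interface GigabitEthernet1/2
 channel-group 2 mode active

For LACP, mode passive is also valid on one side, as long as at least one side is set to active.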

EtherChannel supported scenarios:
• One IP to many IP connections. (Host A making two connection sessions to Hosts B and C)
• Many IP to many IP connections. (Hosts A and B making multiple connection sessions to Hosts C, D, and so on)

Note: One IP to one IP connections over multiple NICs are not supported. (One connection session from Host A to Host B uses only one NIC; see the worked example after this list.)

• Compatible with all ESXi/ESX VLAN configuration modes: VST, EST, and VGT. For more information on these modes, see VLAN Configuration on Virtual Switch, Physical Switch, and Virtual Machines (1003806).
• Supported Cisco configuration: EtherChannel Mode ON (enable EtherChannel only)
• Supported HP configuration: Trunk Mode
• Supported switch Aggregation algorithm: IP-SRC-DST (short for IP-Source-Destination)
• Supported Virtual Switch NIC Teaming mode: IP HASH. However, see this note:

Note: LACP support in vSphere Distributed Switch 5.1 is limited to IP hash load balancing. vSphere Distributed Switch 5.5 and later support all LACP load balancing algorithms.

• Do not use beacon probing with IP HASH load balancing.
• Do not configure standby or unused uplinks with IP HASH load balancing.
• vSphere Distributed Switch 5.1 only supports one EtherChannel per vNetwork Distributed Switch (vDS). However, vSphere Distributed Switch 5.5 and later supports multiple LAGs.

• Lower-end Cisco switch models may have MAC-SRC-DST set by default and may require additional configuration. For more information, see the Understanding EtherChannel Load Balancing and Redundancy on Catalyst Switches article from Cisco.
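
As a simplified illustration of the one-IP-to-one-IP limitation noted above (the exact hash each platform uses differs, so treat this as a sketch): an IP source-destination hash reduces, roughly, to (source IP XOR destination IP) modulo the number of links in the channel. With two uplinks, every session between 10.0.0.10 and 10.0.0.20 produces the same hash value and is therefore placed on the same single link; only traffic to or from a different IP address can hash onto the other link. This is why EtherChannel with IP hash increases aggregate throughput across many peers but never provides more than one link's bandwidth between a single pair of hosts.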

Sample Cisco Configuration:

interface Port-channel1
switchport
switchport access vlan 100
switchport mode access
no ip address

interface GigabitEthernet1/1
switchport
switchport access vlan 100
switchport mode access
no ip address
channel-group 1 mode on

Cisco Verification Commands:

Switch# show etherchannel load-balance
EtherChannel Load-Balancing Configuration:
src-dst-ip
mpls label-ip
EtherChannel Load-Balancing Addresses Used Per-Protocol:
Non-IP: Source XOR Destination MAC address
IPv4: Source XOR Destination IP address
IPv6: Source XOR Destination IP address
MPLS: Label or IP

Switch# show etherchannel summary
Flags:  D - down        P - bundled in port-channel
        I - stand-alone s - suspended
        H - Hot-standby (LACP only)
        R - Layer3      S - Layer2
        U - in use      f - failed to allocate aggregator
        M - not in use, minimum links not met
        u - unsuitable for bundling
        w - waiting to be aggregated
Number of channel-groups in use: 2
Number of aggregators:           2
Group  Port-channel  Protocol    Ports
------+-------------+-----------+--------------------------
1      Po1(SU)       -           Gi1/15(P)  Gi1/16(P)
2      Po2(SU)       -           Gi1/1(P)   Gi1/2(P)

Switch# show etherchannel protocol
Channel-group listing:
-----------------------
Group: 1
----------
Protocol: - (Mode ON)
Group: 2
----------
Protocol: - (Mode ON)

Sample VMware configuration:

Configuring load balancing within the vSphere/VMware Infrastructure Client
To configure vSwitch properties for load balancing:
1. Click the ESXi/ESX host.
2. Click the Configuration tab.
3. Click the Networking link.
4. Click Properties.
5. Click the virtual switch in the Ports tab and click Edit.
6. Click the NIC Teaming tab.
7. From the Load Balancing dropdown, select Route based on IP hash. However, see the note below.

Verify that there are two or more network adapters listed under Active Adapters.

[Screenshot: vSwitch NIC Teaming settings]
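
The same setting can also be applied from the ESXi Shell (a sketch; vSwitch0 is an example name and the command assumes ESXi 5.x or later esxcli):

esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 --load-balancing=iphash

Re-run the failover get command shown earlier to confirm that the policy and the list of active adapters are as expected.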

Note: LACP support in vSphere Distributed Switch 5.1 is limited to IP hash load balancing. vSphere Distributed Switch 5.5 and later support all LACP load balancing algorithms.
• You must set NIC teaming to IP HASH on both the vSwitch and the included port group containing the kernel management port (a CLI sketch follows this list). For additional information on NIC teaming with EtherChannel, see the Additional Information section.
• Do not use beacon probing with IP HASH load balancing.
• Do not configure standby or unused uplinks with IP HASH load balancing.
• vSphere Distributed Switch 5.1 only supports one EtherChannel per vNetwork Distributed Switch (vDS). However, vSphere Distributed Switch 5.5 and later supports multiple LAGs.
• ESXi/ESX hosts running on a blade system do not require IP hash load balancing if the EtherChannel exists between the blade chassis and the upstream switch. IP hash is required only if an EtherChannel exists between the blade and the internal chassis switch, or if the blade operates in network pass-through mode with an EtherChannel to the upstream switch. For more information on these scenarios, contact your blade hardware vendor.
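
A sketch of how the port group level setting can be matched from the ESXi Shell (the port group name "Management Network" and vSwitch0 are examples; the commands assume ESXi 5.x or later esxcli):

esxcli network vswitch standard portgroup policy failover set --portgroup-name="Management Network" --load-balancing=iphash
esxcli network vswitch standard portgroup policy failover get --portgroup-name="Management Network"

The get command confirms that the port group override now matches the vSwitch policy.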

Removing an EtherChannel configuration from a running ESX/ESXi host

To remove EtherChannel, only one network adapter can remain active on the vSwitch/dvSwitch. Ensure that the other host NICs in the EtherChannel configuration are disconnected (link down) by performing one of these options:
• Disconnect the network cables from the network adapters (ensure that one is left online).
• Shut down the network port from the physical switch.
• Disable the vmnic network cards in ESXi. For more information, see Forcing a link state up or down for a vmnic interface on ESXi 5.x (2006074).
With only a single network card online, you can then remove the port-channel configuration from the physical network switch and change the NIC teaming settings on the vSwitch/dvSwitch from IP HASH back to Route based on originating virtual port ID, as sketched below. For more information about teaming, see NIC teaming in ESXi and ESX (1004088).
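
The same sequence can be sketched from the command line (names are examples only: vSwitch0, vmnic1, Port-channel1; the esxcli commands assume ESXi 5.x or later):
1. Bring down all but one uplink on the host (see KB 2006074): esxcli network nic down -n vmnic1
2. On the physical switch, remove the channel: no interface Port-channel1, then no channel-group under each member interface.
3. Change the teaming policy back to the default: esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 --load-balancing=portid
4. Bring the remaining uplinks back up: esxcli network nic up -n vmnic1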