
KVM: Implement BIG-IP NIC bonding

This topic discusses the basic setup of bonding, teaming, and aggregation of BIG-IP VE NICs on a Linux KVM hypervisor. The following descriptions distinguish the terms used when managing failover and throughput across multiple NICs in a BIG-IP VE:

  • Bonding - combines multiple interfaces to achieve link failure redundancy or aggregation. For example, this approach can combine two 10G NICs in an ACTIVE/PASSIVE configuration, allowing for continued operation when one link fails (see the sketch after this list).

    Caution

    Bonding may not give you aggregation, because the total bandwidth is limited to the throughput of the lowest speed NIC. Traffic will only flow across the ACTIVE link.

  • Teaming - typically used in the context of the Windows Server OS; teaming can provide both increased performance and fault tolerance in the event of a network adapter failure.

  • Load balancing - refers to an ACTIVE/ACTIVE configuration, where the bandwidth is limited to the throughput of the lowest speed NIC.

    Note

    Traffic in this mode is expected to be routed across multiple interfaces.

  • Aggregation - utilizes multiple network interface ports to combine the collective bandwidth (for example, 10 Gbps + 10 Gbps = 20 Gbps).

    Caution

    Aggregation may not give you link redundancy; for example, when utilizing MAC cloning on multiple interfaces, the upstream switch can fail to forward packets correctly after a link drops.
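
The following is a minimal sketch of an ACTIVE/PASSIVE (active-backup) bond on the Linux KVM host, assuming iproute2 is available; the NIC names eth1 and eth2 are placeholders, and miimon 100 enables link monitoring every 100 ms:

    # sudo ip link add bond0 type bond mode active-backup miimon 100
    # sudo ip link set eth1 down && sudo ip link set eth1 master bond0
    # sudo ip link set eth2 down && sudo ip link set eth2 master bond0
    # sudo ip link set bond0 up

To verify which member is currently ACTIVE, type:

    # cat /proc/net/bonding/bond0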

Evaluate the options

This evaluation uses the following terms:

  • Link redundancy - failover (active/passive) or load balancing (active/active) in case of an interface outage.
  • Aggregation - combining the bandwidth of two or more NICs.
  • Trunking - a trunk is a logical grouping of interfaces on the BIG-IP system. When you create a trunk, this logical group of interfaces functions as a single interface.

Generally, a PCIe x8 mechanical slot only achieves about 63 Gbps, while an x16 mechanical/electrical slot achieves up to about 126 Gbps, due to CPU lane throughput limitations. For example, if you have a dual-port 40G NIC in an x8 slot, then the maximum speed you will achieve is 63 Gbps rather than 80 Gbps; this is a hardware limitation, not a BIG-IP VE limitation.
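
To confirm the PCIe link width and speed your NIC actually negotiated, you can query the device (a quick check; the PCI address 3b:00.0 below is a placeholder taken from your own lspci or lshw output):

    # lspci | grep -i ethernet
    # sudo lspci -vv -s 3b:00.0 | grep -E 'LnkCap|LnkSta'

LnkCap shows what the device supports, and LnkSta shows the width and speed currently negotiated with the slot.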

Options include:

Tip

Make sure that the operating system on the hypervisor has the latest Intel/Mellanox drivers installed, enabling the (guest) BIG-IP VE to change the MAC address in a trusted or untrusted mode, based on recent driver implementations.
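
For example, with a recent iproute2 and SR-IOV-capable driver, you can mark a virtual function (VF) assigned to BIG-IP VE as trusted on its physical function (PF) so the guest can change the MAC; this is a sketch only, and the PF name enp59s0f0 and VF index 0 are placeholders:

    # sudo ip link set dev enp59s0f0 vf 0 trust on
    # sudo ip link set dev enp59s0f0 vf 0 spoofchk off
    # ip link show enp59s0f0

Whether trust and spoof-check changes are required depends on your NIC and driver version.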

OPTION 1: LACP bonding at the hypervisor and switch level

Ideally, you want both link redundancy and aggregation in a trunk interface on BIG-IP VE; however, this approach reduces flexibility in resource allocation.

Caution

There is an issue with bonding at the hypervisor level: the BIG-IP VE loads the socket driver instead of a high-speed SR-IOV driver. Therefore, you will not achieve line rate at the full bond speed; instead, you will experience an approximate 50% - 70% reduction in speed.

Link Aggregation Control Protocol (LACP)

Use LACP to handle multiple physical ports collectively, so that they are seen as a single channel for network traffic purposes. This protocol is defined by the Link Aggregation standard IEEE 802.1AX-2008 (formerly IEEE 802.3ad). The standard offers both increased bandwidth and link failure redundancy at layer 2.

In the case of BIG-IP VE, LACP active monitoring in the guest is not possible, because the guest does not receive bridge control packets; therefore, F5 removed the LACP setting from BIG-IP VE. You can, however, configure the hypervisor to bond interfaces and present a BOND interface to BIG-IP VE. In the case of two 40 Gbps interfaces, you would see an 80 Gbps interface in VE when both interfaces are running. When an interface fails, the connection speed decreases, but traffic routes over the remaining interfaces automatically.
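
The following is a minimal sketch of a Mode 4 (802.3ad) bond on a RHEL/CentOS-style KVM host using the legacy network scripts; the member names ens1f0/ens1f1 are placeholders, and the bond is typically attached to a bridge or otherwise presented to the BIG-IP VE guest by your hypervisor networking setup:

    # cat /etc/sysconfig/network-scripts/ifcfg-bond0
    DEVICE=bond0
    TYPE=Bond
    BONDING_MASTER=yes
    BONDING_OPTS="mode=802.3ad miimon=100 lacp_rate=1"
    ONBOOT=yes
    BOOTPROTO=none

    # cat /etc/sysconfig/network-scripts/ifcfg-ens1f0
    DEVICE=ens1f0
    MASTER=bond0
    SLAVE=yes
    ONBOOT=yes
    BOOTPROTO=none

Create a matching ifcfg-ens1f1 file for the second member, restart the network service, and confirm LACP negotiation with cat /proc/net/bonding/bond0. The upstream switch ports must be configured as an LACP port channel.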

../_images/bond1.png

The previous figure illustrates a setup with the following requirements:

  • Must configure the switch and the hypervisor to support LACP Mode 4 bonding.
  • Must dedicate the NIC interface to BIG-IP VE:
    • When the signaling notifies the LACP bond to come up at the switch, the resource is dedicated.
    • Avoids additional SR-IOV VFs that appear to function, but do not pass traffic.
  • Configure the trunk in BIG-IP VE with VLANs to separate internal and external traffic; F5 recommends keeping the management interface outside of this fast data path (see the example after this list).
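
The following is a minimal sketch of the trunk and VLAN configuration inside BIG-IP VE; the trunk name, interface number, VLAN names, and tags are placeholders for your own values:

    # tmsh create net trunk data_trunk interfaces add { 1.1 }
    # tmsh create net vlan external interfaces add { data_trunk { tagged } } tag 4093
    # tmsh create net vlan internal interfaces add { data_trunk { tagged } } tag 4094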

Caution

A bond created by the hypervisor is NOT an SR-IOV virtual function (VF). Therefore, the custom high-speed drivers in BIG-IP VE will NOT load; your only available options are the basic Virtio or Socket drivers, leaving you with 50 percent line rate at best, and the VM guest will also use more CPU resources. For example, if you have a 10G plus 10G bond, then your maximum line rate is 10G. Avoid using this option whenever SR-IOV is available. Using a single larger interface is always preferred over bonding smaller interfaces.
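
To check whether SR-IOV VFs are available before falling back to a hypervisor bond, you can inspect the physical function on the host (a quick sketch; the PF name enp59s0f0 is a placeholder):

    # cat /sys/class/net/enp59s0f0/device/sriov_totalvfs
    # lspci -nn | grep -i 'Virtual Function'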

Troubleshooting guide

  1. To troubleshoot iptables settings, do the following:

    • To check/list iptables, type:

      # sudo iptables -L
      
    • To temporarily disable iptables, type:

      # iptables -F
      
    • To stop iptables, type:

      # service iptables stop
      
  2. Depending on your application, you can do the following to disable SELinux (this can affect security):

    • Disable SELinux by editing this file: /etc/selinux/config.
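    • For example, on a RHEL/CentOS-style host you might switch to permissive mode immediately and make the change persistent (it takes full effect after a reboot); the sed one-liner is an illustrative sketch:

      # sudo setenforce 0
      # sudo sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config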
  3. To disable the firewall (this can affect security), type:

    # sudo systemctl disable firewalld
    # sudo systemctl stop firewalld
    
  4. To disable the Network Manager, type:

    # sudo systemctl disable NetworkManager
    # sudo systemctl stop NetworkManager
    # sudo systemctl enable network
    # sudo systemctl start network
    
  5. To set the host name, type:

    # sudo hostnamectl set-hostname <newhostname>
    
  6. Use other useful commands:

    • Show network bus info:

      # lshw -c network -businfo
      
    • Determine running driver:

      # ethtool -i <interface> | grep ^driver
      
    • Set MTU on interface:

      # ifconfig <interface> mtu 9100