KVM: Configure Intel X710 series NICs for High Performance

This document explains the basic driver and SR-IOV setup of the Intel X710 series of NICs on Linux.

This document assumes that the built-in driver is loaded in the base OS and that BIG-IP 13.0.0 and later uses the default optimized driver.

To configure your KVM host, verify the required prerequisites, and then complete the following steps:

  1. Add Intel IOMMU to the Linux grub file
  2. Modify driver settings to enable SR-IOV
    1. Verify the OS has loaded the Intel driver
    2. Install the i40e Linux Base Driver
    3. Install the supplied Intel IAVF driver
  3. Upgrade X710 NIC firmware using supplied NVM tool
  4. Create VFs
    1. Use the rc.local file
    2. Initialize VFs
    3. Initialize the VFs for the driver
  5. Deploy BIG-IP VE in KVM
  6. Diagnostics and troubleshooting tips

Prerequisites

Before you begin, ensure you have completed the following tasks.

  1. Enable Intel® Virtualization Technology (Intel® VT) in the host machine BIOS.
  2. Enable SR-IOV in the BIOS.
  3. Optional. Optimize power management settings:
    1. Turn off speed-stepping.
    2. Change power management from Balanced to Performance.
    3. Disable C-State power controls.

Tip

Linux lshw utility

Use the lshw tool to extract detailed information on the hardware configuration.

  • To install lshw, type: yum install -y lshw

  • To look up i40e driver information, type: modinfo i40e

  • Other useful commands include:

    lshw -c network -businfo
    ip l | grep vf
    virsh nodedev-list --tree
    ifconfig -a
    lsmod | grep i40e
    ip link show
    ethtool -i enp134s0f0
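
  • To confirm that an adapter port advertises the SR-IOV capability, you can also inspect its PCI capabilities. This is a quick sketch; the PCI address 0000:86:00.0 is only an example, so substitute the address that lspci reports on your system:

    # Show the PCI capabilities of one physical function and look for SR-IOV
    sudo lspci -vvv -s 0000:86:00.0 | grep -i "single root"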
    

Add Intel IOMMU to the Linux grub file

Modify the Linux grub file to add Intel input–output memory management unit (IOMMU) support. Depending on the Linux distribution, use grub or grub2. Grub files are located in the following directories:

/boot/grub/grub.conf

/boot/grub2/grub.cfg

  1. View the current config by typing:

    grubby --info=ALL

  2. When using SR-IOV, configure intel_iommu=on in the grub file and also add iommu=pt (pass-through). In pass-through mode, the adapter does not use DMA translation to memory, which improves performance.

  3. Append the IOMMU settings to the grub file:

    grubby --update-kernel=ALL --args="intel_iommu=on iommu=pt"

  4. On Debian or Ubuntu systems, type update-grub to regenerate the grub configuration. (On RHEL-based systems, grubby updates the configuration directly.)

For an example on RHEL 7.6 using Grubby, consult this RHEL article.

To modify the huge page size settings, use this command:

grubby --update-kernel=ALL --args="hugepagesz=2M hugepages=320 default_hugepagesz=1G hugepagesz=1G hugepages=16"
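
After rebooting, you can confirm that the kernel picked up the IOMMU and huge page parameters. This is a minimal verification sketch; the exact dmesg wording varies by kernel version:

    # Confirm the boot arguments took effect
    cat /proc/cmdline

    # Look for IOMMU/DMAR initialization messages
    dmesg | grep -i -e dmar -e iommu

    # Check that the requested huge pages were reserved
    grep Huge /proc/meminfo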

Modify driver settings to enable SR-IOV

Intel NICs ship with the number of SR-IOV Virtual Functions (VFs) set to zero. You must modify the operating system driver settings so the VFs persist (even after an OS reload).

PF and VF drivers for the X710 and XL710 server adapters are included in Red Hat Enterprise Linux, CentOS, and Ubuntu; in the 7.x distributions, they are named i40e and i40evf, respectively. Newer versions of these drivers are available on the Intel Downloads site.

The driver or software for your Intel® component may have been changed or replaced by the computer manufacturer. F5 recommends that you work with your computer manufacturer before installing the mainstream Intel driver, so you do not lose OEM features or customizations.
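
Before replacing the in-tree driver, you can check which i40e module build and version the host currently loads. This is a small sketch; the filename path differs between the stock kernel module and an out-of-tree build installed under updates/:

    # Report the module file and version that modprobe would load
    modinfo i40e | grep -E '^(filename|version)'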

Verify the OS has loaded the Intel driver

  1. Check that the adapters are recognized by running the following lspci command:

    sudo lspci -D | grep Ethernet

    A list of network adapters similar to the following is returned:

    (Image: intel_driverlist.png, showing lspci output that lists the detected Ethernet controllers)

    In the previous list, you see the onboard I350 and the dual-port Intel XL710:

    • Port 0 of the PF is at PCI address 0000:86:00.0
    • Port 1 of the PF is at PCI address 0000:86:00.1
  2. OPTIONAL: If you do not see the Intel XL710 listed, then load the OEM or Intel driver.
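
You can also confirm which driver and firmware each port is currently using. The interface name enp134s0f0 below is only an example (taken from the lshw tip earlier); substitute the interface names on your system:

    # Show the bound driver, driver version, and firmware version for a PF
    ethtool -i enp134s0f0

    # Confirm that the i40e module is loaded
    lsmod | grep i40e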

Install the i40e Linux Base Driver

Download the Intel® i40e Linux Base Driver tar file (for example, i40e-2.15.9.tar.gz) from the Intel download site.

  1. Move the base driver tar file to the desired directory. For example, use /home/username/i40e or /usr/local/src/i40e.

  2. Unpack the archive, where <x.x.x> is the version number for the driver tar file:

    tar zxf i40e-<x.x.x>.tar.gz

  3. Change to the driver src directory, where <x.x.x> is the version number for the driver tar:

    cd i40e-<x.x.x>/src/

  4. Compile and install the driver module:

    make install

    The binary will be installed as: /lib/modules/<KERNEL VER>/updates/drivers/net/ethernet/intel/i40e/i40e.ko

    The previous install location is the default location and can differ for other Linux distributions.

    Note

    To gather and display additional statistics, compile with the I40E_ADD_PROBES pre-processor macro:

    make CFLAGS_EXTRA=-DI40E_ADD_PROBES

    Collecting additional statistics can affect performance.

  5. Load the module using the modprobe command.

    To check the version of the driver, and then load the driver:

    modinfo i40e
    modprobe i40e [parameter=port1_value,port2_value]

    Alternatively, remove older versions of the i40e driver from the kernel before loading the new module:

    rmmod i40e; modprobe i40e

  6. To assign an IP address to the interface, type the following where <ethX> is the interface name that was shown in dmesg after modprobe:

    ip address add <IP_address>/<netmask bits> dev <ethX>

  7. Verify that the interface works. Type the following, where IP_address is the IP address for another machine on the same subnet as the interface that is being tested:

    ping <IP_address>

    Note

    For certain distributions, such as (but not limited to) Red Hat Enterprise Linux 7, Ubuntu, and SUSE Linux Enterprise Server (SLES) 11, once the driver is installed, you may need to update the initrd/initramfs file to prevent the OS from loading older versions of the i40e driver.

    • Red Hat distributions:

      dracut --force

    • Ubuntu:

      update-initramfs -u

    • SLES:

      mkinitrd
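
After loading the new module and, if needed, rebuilding the initrd/initramfs, you can confirm that the new driver version is the one in use. This is a brief sketch; enp134s0f0 is an example interface name:

    # Version of the i40e module file that modprobe will load
    modinfo i40e | grep ^version

    # Driver and version actually bound to a running interface
    ethtool -i enp134s0f0 | grep -E '^(driver|version)'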

Install the supplied Intel IAVF driver

The Intel X710 and XL710 NIC series both use the IAVF driver.

  1. Set the number of VFs to zero before upgrading the IAVF driver:

    echo 0 > /sys/class/net/ens818f0/device/sriov_numvfs
    echo 0 > /sys/class/net/ens818f1/device/sriov_numvfs
    
  2. Download the Network Adapter Linux* Virtual Function Driver for Intel® Ethernet Controller 700 Series iavf-4.1.1.tar.gz file.

  3. To determine bus information, device ID, and description, use the following command:

    lshw -class network -businfo
    
    # example output
    [root@prototype ~]# lshw -class network -businfo
    
    Bus info          Device          Class          Description
    ============================================================
    pci@0000:b1:01.0                  network        Ethernet Adaptive Virtual Function
    pci@0000:b1:01.1                  network        Ethernet Adaptive Virtual Function
    pci@0000:b1:11.0                  network        Ethernet Adaptive Virtual Function
    pci@0000:b1:11.1                  network        Ethernet Adaptive Virtual Function
    
  4. Unpack the new IAVF driver from the iavf-4.1.1.tar.gz file that you downloaded in step 2.

  5. OPTIONAL: If you see compile errors, change to the src directory and type: chmod +x *.

  6. To compile and install the driver module, type:

    make
    sudo make install
    
  7. To ensure that all older i40evf drivers are removed from the kernel before loading the new module, type:

    rmmod i40evf

  8. To load the new driver module, type:

    modprobe iavf

    Note

    The make install command creates /etc/modprobe.d/iavf-blacklist-i40evf.conf, which denylists the i40evf module, and adds the line alias i40evf iavf to the modprobe configuration.

  9. Reboot the server.
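
After the reboot, you can confirm that the iavf module replaced i40evf. This is a quick check, assuming the denylist entry described in the previous note is in place:

    # iavf should be loaded; i40evf should not appear
    lsmod | grep -e iavf -e i40evf

    # Version of the installed iavf module
    modinfo iavf | grep ^version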

Upgrade X710 NIC firmware using supplied NVM tool

This is an optional step for most hypervisors. However, for VMware, upgrading the Intel X710 firmware is a requirement. Consult the VMware setup guide for firmware details.

Create VFs

Create as many virtual functions (VFs) as needed using the following procedures:

Use the rc.local file

The following example initializes the VFs using two VFs per PF. Assigning MACs is optional.

sudo vi /etc/rc.d/rc.local
echo 2 > /sys/class/net/ens801f0/device/sriov_numvfs
ip link set ens801f0 vf 0 trust on
ip link set ens801f0 vf 0 spoofchk off
#ip link set ens801f0 vf 0 mac [insert mac address]
ip link set ens801f0 vf 1 trust on
ip link set ens801f0 vf 1 spoofchk off
#ip link set ens801f0 vf 1 mac [insert mac address]
echo 2 > /sys/class/net/ens801f1/device/sriov_numvfs
ip link set ens801f1 vf 0 trust on
ip link set ens801f1 vf 0 spoofchk off
#ip link set ens801f1 vf 0 mac [insert mac address]
ip link set ens801f1 vf 1 trust on
ip link set ens801f1 vf 1 spoofchk off
#ip link set ens801f1 vf 1 mac [insert mac address]
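
To confirm that the VFs were created with the expected trust and spoof-check settings, you can inspect the PFs afterward. The interface name follows the example above:

    # List the VFs attached to the PF, including trust and spoofchk state
    ip link show ens801f0

    # The VFs also appear as separate PCI functions
    lspci -D | grep "Virtual Function"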

Initialize VFs

  1. Set the number of VFs to zero before changing the VF count (sriov_numvfs must be zero before you can write a new value):

    echo 0 > /sys/class/net/ens818f0/device/sriov_numvfs
    echo 0 > /sys/class/net/ens818f1/device/sriov_numvfs
    
  2. On Linux kernel version 3.8.x and later, query the maximum number of VFs supported by the adapter by reading the sriov_totalvfs parameter through the sysfs interface:

    cat /sys/class/net/<device_name>/device/sriov_totalvfs

  3. Create the desired number of VFs (up to the maximum reported in the previous step) by writing to the sriov_numvfs parameter:

    echo <number of VFs> > /sys/class/net/<device_name>/device/sriov_numvfs

    For example, to create 24 VFs on a port:

    echo 24 > /sys/class/net/<device_name>/device/sriov_numvfs

  4. Verify your changes:

    cat /sys/class/net/<device_name>/device/sriov_numvfs
    lspci -D | grep "Virtual Function"
    
  5. OPTIONAL: Download the latest Intel X710 series device drivers for Linux.

Note

For an Intel XL710 dual port NIC, each port is identified by a unique number. To determine the adapter ID, use ip link show.

Initialize the VFs for the driver

Module options are not persistent from one boot to the next. To ensure that the desired number of VFs is created each time you power cycle the server, append the VF-creation commands to the rc.local file, located in the /etc/rc.d/ directory. The Linux OS executes the rc.local script at the end of the boot process. Edit /etc/rc.d/rc.local to initialize the VFs for the driver.

  1. Modify the rc.local file to initialize the VFs for the driver. On a new install, the rc.local file may not be executable, so it does not run at startup. To allow for initialization, modify the file attributes:

    sudo chmod +x /etc/rc.d/rc.local
    
  2. Open the /etc/rc.d/rc.local file, to which you will add an entry for each device port (for example, enp175s0f0, enp175s0f1, enp24s0f0, enp24s0f1):

    sudo vi /etc/rc.d/rc.local
    
  3. Add the following information by using vi (i = insert mode, Esc = exit insert mode, :w = write, :q = quit):

    echo 24 > /sys/class/net/enp24s0f0/device/sriov_numvfs
    echo 24 > /sys/class/net/enp24s0f1/device/sriov_numvfs
    

    This example assumes 24 VFs on each of two ports. The variables are <#ofVFs> and <portname>:

    echo <#ofVFs> > /sys/class/net/<portname>/device/sriov_numvfs
    
  4. Save the file and reboot.

  5. Start and enable the rc-local service:

    sudo systemctl start rc-local
    sudo systemctl enable rc-local
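
After the next reboot, you can confirm that the rc-local service ran and that the VF counts were applied. This brief check uses the example port names from step 3:

    # Check that the service ran at boot
    systemctl status rc-local

    # Each port should report the number of VFs you configured
    cat /sys/class/net/enp24s0f0/device/sriov_numvfs
    cat /sys/class/net/enp24s0f1/device/sriov_numvfs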
    

Deploy BIG-IP VE in KVM

To deploy BIG-IP VE, download an image from F5 and deploy it in your environment.

Important

  • Do not change the configuration (CPU, RAM, and network adapters) of the KVM guest environment to settings less powerful than those recommended and described here.
  • When using F5’s virtio synthetic driver, use the default i440FX machine type. The QEMU Q35 machine type is not supported.
  1. In a browser, open the F5 Downloads page and log in.

  2. On the Downloads Overview page, click Find a Download.

  3. Under Product Line, click the link similar to BIG-IP v.x/Virtual Edition.

  4. Click the link similar to x.x.x_Virtual-Edition.

  5. If the End User Software License is displayed, read it and then click I Accept.

  6. Download the BIG-IP VE file package ending with qcow2.zip.

  7. Extract the file from the Zip archive and save it where your qcow2 files reside on the KVM server.

  8. Use VNC to access the KVM server, and then start Virt Manager.

  9. Right click localhost (QEMU), and from the popup menu, select New.

    The Create a new virtual machine, Step 1 of 4 dialog box opens.

  10. In the Name field, type a name for the connection.

  11. Select import existing disk image as the method for installing the operating system, and click Forward.

  12. Type the path to the extracted qcow2 file, or click Browse to navigate to the file location; select the file, and then click the Choose Volume button to fill in the path.

  13. In the OS type setting, select Linux, for the Version setting, select Red Hat Enterprise Linux 6, and click Forward.

  14. In the Memory (RAM) field, type the appropriate amount of memory (in megabytes) for your deployment. (For example 4096 for a 4GB deployment). From the CPUs list, select the number of CPU cores appropriate for your deployment, and click Forward.

  15. Select Customize configuration before install, and click the Advanced options arrow.

  16. Select the network interface adapter that corresponds to your management IP address, and click Finish.

    The Virtual Machine configuration dialog box opens.

  17. Click Add Hardware.

    The Add New Virtual Hardware dialog box opens.

  18. If SR-IOV is not required, select Network.

  19. From the Host device list, select the network interface adapter for your external network, and from the Device model list, select virtio. Then click Finish.

    Do this again for your internal and HA networks.

  20. If SR-IOV is required, select PCI Host Device and then select the PCI device corresponding to the virtual function mapped to your host device’s external VLAN. Then click Finish.

    Be sure to use the Virtual Function (VF) PCI Host Device instead of the Physical Function (PF) to take advantage of VE high-speed drivers.

    The following image illustrates adding a PCI VF Network Interface within the Virtual Machine Manager:

    (Image: kvm_qemu.png, showing the Add New Virtual Hardware dialog box with a PCI VF network interface selected)
  21. Repeat step 20 for your host device’s internal VLAN and HA VLAN.

  22. From the left pane, select Disk 1.

  23. Click the Advanced options button.

  24. From the Disk bus list, select Virtio.

  25. From the Storage format list, select qcow2.

  26. Click Apply.

  27. Click Begin Installation.

Virtual Machine Manager creates the virtual machine just as you configured it.
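
If you prefer to script the deployment instead of using the Virt Manager wizard, the same import can be sketched with virt-install. This outline is only illustrative, not an F5-provided procedure; the VM name, image path, sizing, bridge name, and VF PCI addresses are placeholders you must adapt to your environment:

    # Import an existing BIG-IP VE qcow2 image and pass through two SR-IOV VFs
    virt-install \
      --name bigip-ve \
      --memory 8192 \
      --vcpus 4 \
      --import \
      --disk path=/var/lib/libvirt/images/BIGIP-VE.qcow2,format=qcow2,bus=virtio \
      --network bridge=br-mgmt,model=virtio \
      --hostdev 0000:86:02.0 \
      --hostdev 0000:86:02.1 \
      --os-variant generic \
      --noautoconsole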

Diagnostics and troubleshooting tips

When using ifconfig to bring down the ports, the physical link can continue to indicate that it is up on the switch side.

  1. Shut down any VMs and zero out the VFs, and then use the following commands at the host OS level (not within BIG-IP VE) to change this behavior (a verification example follows this list).

    Note

    This behavior does not persist after rebooting the server.

    ethtool --set-priv-flags ens801f0 link-down-on-close on
    
    ethtool --set-priv-flags ens801f1 link-down-on-close on
    
  2. To set different media speeds, use the Intel Port Configuration Tool.

  3. For a list of supported NICs with SR-IOV capability, consult the K17204 support article.
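
To verify that the link-down-on-close private flag change from step 1 took effect (remember that the setting does not persist across a server reboot):

    # Confirm link-down-on-close is now enabled on each port
    ethtool --show-priv-flags ens801f0 | grep link-down-on-close
    ethtool --show-priv-flags ens801f1 | grep link-down-on-close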