F5 VE SmartNIC Deployment Guide

When running BIG-IP VE 15.1.0.4, you can use Intel PAC N3000 (25GbE version) technology on any of the Intel-qualified servers to program and accelerate multiple BIG-IP VE virtual machines and the AFM DDoS module in a Linux KVM hypervisor. The F5 SmartNIC Orchestrator v1.0.7 provides the bit file and a configuration/orchestration utility that uses SR-IOV to isolate the VMs, and exposes a REST API in a Docker container for orchestration.

../_images/smartnic_setup_1.png

Utilizing the SmartNIC technology in your qualified server accelerates your VE VMs and provides support for the following:

  • 40 Gbps throughput
  • SR-IOV for 2 physical functions (PFs) and 8 virtual functions (VFs)
  • 8 direct memory access (DMA) channels with 4 rings
  • Global DDoS mitigation (100+ vectors)
  • Custom behavioral DoS vectors
  • Global SYN attack mitigation using cookies
  • Global allow-listing
  • Tunnel support for IP-in-IP, GRE, VXLAN, VXLAN-GPE, Geneve, NSH, NVGRE, Ether-IP (HW DDoS, checksum offload)
  • QinQ (HW DDoS, checksum offload)
  • TCP and UDP checksum offload on transmit
  • Transmit VLAN filtering

Setup Guide

This setup guide describes the system prerequisites, supported components, installation, and configuration processes for all systems required by the BIG-IP VE for SmartNIC solution and Orchestrator utility.

Prerequisites

Your system must meet the following requirements:

  • Intel-qualified servers with:
    • 6c/12t 3.6 GHz (E5-1650 v4) CPU (Broadwell-EP or better)
    • Dedicated 16X mechanical and 16X electrical lanes for the N3000 riser
    • RHEL or CentOS 7.6 or later
    • 103 GB of disk space for the configuration utility and BIG-IP VE, including all modules
  • Connection recommendations:
    • A 100G switch, with the port split into 4x25G, a 100G optic at each end, connected by a single fiber cable.
    • Only use port 0, the port closest to the USB connectors.
    • The N3000 SmartNIC must have both QSFP ports active, with two MACs active on each port, running at 25G.
    • To achieve 40G+, a virtual function from each PCI bus must be a member of the trunk. Consult the Trunking using BIG-IP - link redundancy topic for details.
  • Intel PAC N3000 25GbE version
  • Linux KVM hypervisor
  • BIG-IP AFM, DHD, or VNF Better HPVE plus SmartNIC DDoS add-on license (to enable DDoS acceleration)
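The 40G+ trunk requirement above can be sketched in tmsh on the BIG-IP VE. A minimal sketch; the trunk name and the interface numbers (1.1, 1.2) are placeholders for your two SmartNIC VF interfaces on different PCI buses:

```shell
# Create a trunk from two SmartNIC VF interfaces (interface names are placeholders);
# LACP stays disabled because it is not supported in this configuration
tmsh create net trunk smartnic_trunk interfaces add { 1.1 1.2 } lacp disabled

# Verify the trunk membership and operational state
tmsh show net trunk smartnic_trunk
```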

Add-on license

To enable SmartNIC DDoS offload on the appropriate BIG-IP AFM, DHD, or VNF HPVE, you need an add-on license applied to an existing or new BIG-IP image running v15.1.0.4. For complete details, consult the Activating add-on modules procedure in the K7752 article.

Supported components

This section lists the supported versions of hardware and software qualified by Intel for use with PAC N3000 and the BIG-IP VE 15.1.0.4.

Servers

You can use any of the Intel-qualified servers listed.

Operating Systems on KVM Hypervisor

The following lists the operating system versions that F5 tested and recommends using in a KVM hypervisor with the Intel PAC N3000 and the BIG-IP VE 15.1.0.4. For complete details, consult the Intel Acceleration Stack.

Operating System                               Kernel
CentOS version 7.6                             3.10 and 4.19
CentOS version 7.7                             3.10
Red Hat Enterprise Linux (RHEL) version 7.6    3.10

Switches

Intel supported switches MUST support the following features:

  • QSFP28 optical ports
  • 4x25Gbps breakouts of ports
  • Reed Solomon (RS)-FEC on 25G breakout ports

Caution

Using non-RS-FEC mode is intended for diagnostic purposes only and is NOT an F5-supported production configuration (not supported by the F5 Support team). Contact your switch manufacturer about enabling support for FEC options, such as RS-FEC (also known as Reed-Solomon) or BASE-R FEC (also known as Firecode).

Optics and Cables

You can use any of the Intel-qualified optics.

Set up an Intel-qualified server

Do the following on an Intel-qualified server PRIOR to installing the Intel PAC N3000 SmartNIC.

Optimize the BIOS

Caution

Before installing the SmartNIC, you MUST optimize your server BIOS settings and set the Fan Profile setting to Performance; otherwise, you can damage the SmartNIC hardware. If the server airflow is inadequate, then the N3000 can initiate a thermal protection mode and appear unresponsive. If this occurs, change the appropriate BIOS settings, and then shut down the server for 60 minutes, allowing the card to cool.

Update your BIOS to the latest manufacturer’s release version; consult your manufacturer’s support web site. Updating your BIOS can take up to two hours. If your server is already in production, plan your backup and redundancy options accordingly.

  1. To optimize the BIOS, during the boot process, press the F11 or DELETE key.
  2. At the BIOS screen, navigate to the Fan Profile menu and set it to Performance/Maximum.
  3. Verify that the client has the following settings defined:
    • In the Processor Configuration menu, enable Virtualization support.
    • In the Integrated IO Configuration menu, enable Intel VT-x.
    • In the PCI Configuration menu, enable SR-IOV support in the BIOS.
  4. Turn OFF speed-stepping.
  5. Change the Power Management setting to Performance.
  6. Change the Workload Configuration setting to I/O Sensitive.
  7. Disable the C-State power controls.

Configure the switch, cables, and optics

Consult the following quick tips for an overview of recommended/required optics and cable configuration. For more in-depth configuration information and diagrams, consult the following section.

Tip

To configure the simplest solution, consult the following quick tips:

  • Use two 100G optics and a standard 12-strand fiber cable - use port A only off the SmartNIC, as there is no need/advantage to running dual ports.
  • A 100G switch is required; you must split the individual 100G switch port into 25G x 4 channel mode.
  • In normal operating mode, the first two channels become enabled and the other two channels remain disabled (offline).
  • Enabling the Reed-Solomon Forward Error Correction (RS-FEC) setting on the switch is the normal operating mode; however, if your switch does NOT support RS-FEC, use the Settings tab in the F5 SmartNIC Orchestrator tool to run with FEC disabled.
  • Add both interfaces to a BIG-IP VE trunk, allowing you to achieve 50G. LACP is not supported, so if you require redundancy to dual switches, you MUST use two servers.
  • Setting the IOMMU kernel arguments is REQUIRED: grubby --update-kernel=ALL --args="intel_iommu=on pci=realloc". The pci=realloc setting is CRITICAL; without it, the SmartNIC card will NOT initialize.
  • Verify that the fan settings on the servers are set to maximum/performance mode, as the card is passively cooled and adequate airflow is CRITICAL.

The Intel® FPGA PAC N3000 has two Quad Small Form-Factor Pluggable (QSFP) 28 cages on the faceplate panel; therefore, there are two possible ordering part numbers (OPNs). You must obtain the 25GbE OPN. The 25GbE network configuration requirements include:

  • 2 x 25 GbE per QSFP28

  • Programmable Forward Error Correction (FEC) including Reed-Solomon Forward Error Correction (RS-FEC), BASE-R FEC (also known as Firecode) and no FEC

    Note

    Support is provided for IEEE 802.3 clause 108 and clause 74. Clause 91 is not supported.

  • 25GBASE-CR

  • 25GBASE-SR

The following diagram illustrates which 25G channels of the QSFP 100G Optic are utilized for both 1-optic configuration and 2-optic configuration:

../_images/smartnic_n3000_connect.png

Figure: Supported Intel® FPGA PAC N3000 Port-Optic Connection

Important

  • F5 is running 100G Optics on channels 1 and 2 on Port A – (optionally) on Port B, supporting channels 3 and 4 off the SmartNIC.
  • If you use a 100G optic and split the channels into 4 x 25G, then the SmartNIC channels will connect on ports 1 and 2 on your split switch ports.

For example, on both of the 100G switch ports split to 4 x 25G, channels 3 and 4 will be inactive, while channels 1 and 2 support 25G.

Recommended single-switch configurations:

Option 1: Single-Port 2-Channel Non-Bonded
../_images/snic_singOpt1.png

Option 2: Single-Port 2-Channel Bonded-VE Trunk
../_images/snic_singOpt2.png

Option 3: Dual-Port 2-Channel Non-Bonded
../_images/snic_singOpt3.png

Option 4: Dual-Port 2-Channel Bonded-VE Trunk
../_images/snic_singOpt4.png

Recommended Multi-Chassis Link Aggregation (MLAG) configurations:

Option 1: Single Port
../_images/snic_mlag1.png

Option 2: Dual-Port
../_images/snic_mlag2.png

Switch recommendations

  • Reed-Solomon Forward Error Correction (RS-FEC)—enable this setting on the switch ports in use on each individual interface. By default, the SmartNIC has RS-FEC enabled. To achieve uplink, these settings must be identical on both the SmartNIC and the switch.

    An example using RS-FEC command:

    localhost(config)# show interfaces status
    localhost(config)# show interfaces error-correction
    localhost(config-if-<interface>)# error-correction encoding reed-solomon
    
  • Splitting switch channels and optic types—the default configuration uses a 100G optic in Port A of the SmartNIC. The optics should match brand and part number on each side. The recommended configuration is a 100G optic split into 4 X 25G channels from the 100G port.

    Caution

Avoid configuring a 4 x 25G in a 40G switch port. Use only a switch port that supports 100G and has the ability to split into 4 x 25G mode with RS-FEC support enabled. Many switch manufacturers mix 40G and 100G ports. A 100G optic will fit in both, but only certain ports support both 100G and split mode with FEC support. Contact your switch manufacturer BEFORE contacting F5 Support.

To change 100G QSFP ports from 100GbE mode to 4 x 25G mode

Configure the desired speed as 25G. The following example uses port 33:

(config)# interface Et33/1-4
(config-if-Et33/1-4)# speed forced 25gfull

Important

Make sure all channels in use have RS-FEC enabled. If not possible, turn OFF FEC on the SmartNIC to enable diagnostic mode (not supported in production).

Optics and switches

Running F5 optics, or any third-party optic, in a switch requires a bypass code unique to each customer. Consult your switch manufacturer’s unsupported-optic bypass documentation for the code to use in the following command example:

service unsupported-transceiver F5 [code]

Different firmware versions of the same switch handle the 4 X 25G mode differently. For example, newer firmware on an Arista switch supports FEC on each sub-channel while older firmware does not, failing to achieve an uplink to the SmartNIC. Therefore, F5 recommends using matching optics and the same firmware on both sides of the link.

  1. Physical ports may support FEC only for certain ranges. For example, ports 13 - 23 support FEC while all others do not. To enable FEC for ports, use the F5 SmartNIC Orchestrator and set the FEC option to ON next to each port you want to FEC-enable.

  2. Connect your N3000 card to your upstream Ethernet switch or traffic generator. F5 recommends using QSFP28 (100G) optics and changing the port mode of your switch/traffic generator to 4 X 25G. Currently, the N3000 version F5 supports operates only in a 2 x 2 x 25G mode; therefore, only two 25G MACs exist per QSFP port. For example, if you connect an N3000 QSFP port to a 4 x 25G switch port, only two of the four 25G links appear as active/linked.

  3. Additionally, the 25G MACs in the N3000 SmartNIC support RS-FEC (Reed-Solomon). You MUST enable RS-FEC for ports on your upstream switch. If your switch cannot support FEC, you can dynamically turn on/off FEC on the N3000. However, this is NOT recommended or supported, because 100GBASE-SR4 must have FEC enabled for reliable data transfer. Currently, the orchestrator does not have an option in the GUI to turn on/off FEC, but in Docker, you can run the following command:

    fecmode

    The N3000 supports the default Reed Solomon FEC as well as the following FEC modes:

    fec_mode   Mode
    no         No FEC
    kr         Fire Code Error Correction (IEEE 802.3 Clause 74)
    rs         Reed Solomon Error Correction (IEEE 802.3 Clause 108)

    The configurable FEC is supported only for 2 x 2 x 25G network configurations. For 8 x 10G, the FEC setting has no effect.

    To set FEC mode

    $ sudo fecmode -B [bus] [mode]
    [mode] = 'no', 'kr', 'rs'
    [bus] = PCIe bus of FPGA in the format "0xyz"
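
For example, assuming the FPGA appears on PCIe bus 0x05 (the bus value is a placeholder; locate yours with lspci first):

```shell
# Locate the N3000 FPGA on the PCI bus (class/description varies by driver)
lspci | grep -i 'fpga\|processing accelerator'

# Set Reed-Solomon FEC on the card at bus 0x05 (placeholder bus value)
sudo fecmode -B 0x05 rs
```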
    

Install the OS and basic options

  1. Use SSH/PuTTY to remote into the server, and install CentOS 7.6 or 7.7 with either kernel 3.10 or kernel 4.19.

  2. To install basic software, type the following:

    sudo yum update
    
  3. To set the required kernel boot arguments using grubby, type:

    grubby --update-kernel=ALL --args="intel_iommu=on pci=realloc"
    

    Or

    vi /etc/sysconfig/grub
    

    Edit /etc/sysconfig/grub and add "intel_iommu=on pci=realloc" to the end of the GRUB_CMDLINE_LINUX= line, then regenerate the grub configuration (for example, grub2-mkconfig -o /boot/grub2/grub.cfg on BIOS-boot systems).

  4. Reboot your machine.

  5. To install the rest of the tools and packages, type:

    sudo yum groupinstall -y "Development and Creative Workstation" "Additional Development" "Compatibility Libraries" "Development Tools" "Platform Development" "Python" "Virtualization Host"
    
  6. To get a working GUI desktop, type: yum groupinstall -y "GNOME Desktop" "Graphical Administration Tools".

  7. To install common libraries and tools, and enable SR-IOV, type:

    sudo yum install python27-python-pip python27-python-devel numactl-libs libpciaccess-devel parted-devel yajl-devel libxml2-devel glib2-devel libnl-devel libxslt-devel libyaml-devel numactl-devel redhat-lsb kmod-ixgbe libvirt-daemon-kvm numactl telnet net-tools
    sudo yum install epel-release -y
    sudo yum install emacs -y
    
  8. To install the KVM hypervisor and disable the Linux firewall, type:

    yum install qemu-kvm qemu-img virt-manager libvirt libvirt-python libvirt-client virt-install virt-viewer bridge-utils -y
    systemctl enable libvirtd
    systemctl start libvirtd
    systemctl stop firewalld
    systemctl disable firewalld
    
  9. To install and start xRDP on the Linux server, type:

    sudo yum -y update
    sudo yum install -y epel-release
    sudo yum install -y xrdp
    sudo systemctl enable xrdp
    sudo systemctl start xrdp
    sudo firewall-cmd --add-port=3389/tcp --permanent
    sudo firewall-cmd --reload
    sudo reboot (may be necessary)
    
  10. Set up the Remote Desktop client already installed on your Windows, Mac, or Linux environments:

    1. Type the IP address used for the hypervisor; define the user as root, and enter the root user password.
    2. If you see a self-signed certificate error, ignore it, along with any warnings about connecting to the computer for the first time.
    3. The auto-scaling of the display usually depends on your primary monitor; when you drag to a second monitor, the resolution may be off. To fix this, pre-set the Remote Desktop client to the fixed resolution of your desired target monitor.
  11. Reboot your machine.
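After the reboot, you can confirm that the IOMMU kernel arguments from step 3 took effect. A minimal sketch; the helper function check_iommu_args is illustrative, not part of any F5 tooling:

```shell
# check_iommu_args: succeeds only if both required kernel arguments are present
# (illustrative helper; run it against the live kernel command line)
check_iommu_args() {
  case "$1" in
    *intel_iommu=on*pci=realloc*|*pci=realloc*intel_iommu=on*) return 0 ;;
    *) return 1 ;;
  esac
}

if check_iommu_args "$(cat /proc/cmdline)"; then
  echo "IOMMU kernel arguments present"
else
  echo "IOMMU kernel arguments MISSING - rerun grubby and reboot"
fi
```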

Install Docker

This process takes several minutes to complete.

  1. Type:

    curl -fsSL https://get.docker.com/ | sh

    If that fails, point your browser to: https://docs.docker.com/install/linux/docker-ce/centos/.

  2. Then do the following to start, get status, and enable Docker:

    sudo systemctl start docker
    sudo systemctl status docker
    sudo systemctl enable docker
    
  3. OPTIONAL: log in to Docker (if required): docker login --username <username>. F5 recommends using an access token as your password.
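
To confirm the installation succeeded before pulling the Orchestrator, a quick sanity check:

```shell
# Print the installed Docker version
docker --version

# Run and auto-remove Docker's stock test image to confirm the daemon works
sudo docker run --rm hello-world
```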

Pull F5 VE SmartNIC Docker container

Before deploying the F5 VE SmartNIC Orchestrator Docker container, read the F5 Licensing EULA, and then use the following commands to pull and run the SmartNIC Orchestrator Docker container.

  1. To verify that no container is running, type:

    1. Type docker ps and verify that nothing is returned.
    2. If something does return, type docker stop <container ID>, where <container ID> is the value returned in the previous step.
  2. To pull the stable F5 SmartNIC Orchestrator Docker container, type the following: docker pull f5networks/smartnic-orchestrator:stable.

  3. To run the container, type the following, which will auto-accept the F5 EULA:

    sudo docker run  -d -t -e TZ=America/Los_Angeles -e ACCEPT_EULA=Y -e DEBUG=Y --name f5smartnic-orchtool --mount src=/lib/modules,target=/lib/modules,type=bind --mount src=/usr/src,target=/usr/src,type=bind --mount src=/dev,target=/dev,type=bind --mount src=/var/log,target=/var/log,type=bind --mount src=/var/lib,target=/var/lib,type=bind --mount src=/usr/share/hwdata,target=/usr/share/hwdata,type=bind --cap-add=ALL -p 8443:8443 --privileged=true f5networks/smartnic-orchestrator:stable
    

    Note

    Command line parameter options include:

    • -e DEBUG=Y - enables reading/writing SmartNIC registers and executing remote bash commands using the API or UI; when omitted, this capability is disabled.
    • -p <any port>:8443 - defines the port on which the Orchestrator’s Redfish API is listening. This parameter is case sensitive. The first port is the incoming port you want to use, and the second port is the port to which the Orchestrator is listening inside the Docker container (consult the Docker CLI reference guide).
    • --mount src=<path to log file on local host> ,target=/var/log,type=bind - defines the location of the logfile on the host.
    • --mount src=<path to config file on local host, for example /var/lib/f5snic/f5snic.config>,target=/var/lib,type=bind - defines the location of the config file on the host.
  4. Point your browser to https://{hostip}:8443/, and enter the following:

    • User name: admin
    • Password: admin

    Wait for the pipeline to finish.

    ../_images/smartnic_orch1.png

Note

You can find the orchestration f5snic.log.[date] file in the /var/log/ directory.
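
You can confirm the container is up and the UI is answering with a quick check (the container name is taken from the docker run command above; adjust it if you used a different name):

```shell
# Show the Orchestrator container status
docker ps --filter name=f5smartnic-orchtool --format '{{.Names}}: {{.Status}}'

# Probe the HTTPS UI; -k skips verification of the self-signed certificate
curl -sk -o /dev/null -w '%{http_code}\n' https://localhost:8443/
```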

Auto-start the F5 SmartNIC Docker container

Once you set up the Docker container, you can configure it to restart automatically upon system reboot. This is useful if your server crashes unexpectedly. You can use a Docker restart policy to control whether your container starts automatically when it exits or when Docker restarts.
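
If you prefer Docker's native restart policy to a systemd unit, a minimal sketch (container name from the docker run command earlier):

```shell
# Restart the container automatically after daemon restarts or crashes,
# unless it was explicitly stopped
sudo docker update --restart unless-stopped f5smartnic-orchtool

# Confirm the policy took effect
sudo docker inspect -f '{{.HostConfig.RestartPolicy.Name}}' f5smartnic-orchtool
```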

Additionally, you can use systemd.

  1. To create the service file used by systemd (systemctl command), in your shell/terminal get your container name:

    $ docker ps -a
    

    Output looks similar to:

    ../_images/smartnic-autoStrtOutput.png

    Note the container name in the last column.

  2. Create a file (the filename must use all lowercase). This example uses docker-f5smartnic.service:

    sudo vi /etc/systemd/system/docker-f5smartnic.service
    
  3. Paste the following into that file, enter a description, and then update the container name in ExecStart and ExecStop:

    [Unit]
    Description=SmartNIC Orch Tool Container
    Requires=docker.service
    After=docker.service
    
    [Service]
    Restart=always
    ExecStart=/usr/bin/docker start -a f5smartnic-orchtool
    ExecStop=/usr/bin/docker stop -t 2 f5smartnic-orchtool
    
    [Install]
    WantedBy=multi-user.target
    

    Tip

    • This file is called a unit file for systemd.
    • Avoid any extra line breaks within the sections, like Unit or Service.
    • The -a option in the Docker command for ExecStart ensures it is running in attached mode; for example, attaching STDOUT/STDERR and forwarding signals.
    • The -t option in the Docker command for ExecStop specifies the number of seconds to wait for the container to stop before killing it.
  4. Before activating the service, you must reload the unit file, and then run the following command anytime you modify the unit file:

    $ sudo systemctl daemon-reload
    
  5. To auto-start and enable:

    $ sudo systemctl start docker-f5smartnic.service
    $ sudo systemctl enable docker-f5smartnic.service
    
  6. OPTIONAL: To disable the auto-start service, and then reboot your system (remember to change the service name):

    $ sudo systemctl stop docker-f5smartnic.service
    $ sudo systemctl disable docker-f5smartnic.service
    $ sudo reboot
    
  7. Reboot your system to apply changes:

    $ sudo reboot
    

Your container will now start on a server reboot, Docker restart, or a crash.
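
To confirm the unit behaves as expected after a reboot:

```shell
# Verify the unit is registered for auto-start
systemctl is-enabled docker-f5smartnic.service

# Check that the unit (and therefore the container) is active
systemctl status docker-f5smartnic.service --no-pager
```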

Deploy BIG-IP VE in KVM

F5 VE SmartNIC supports only BIG-IP VE 15.1.0.4.

Tip

Consult the following Define SmartNIC configuration settings topic when configuring your SmartNIC as PCI interfaces and enabling SR-IOV (steps 18-20 in this procedure).

To deploy BIG-IP VE, download an image from F5 and deploy it in your environment.

Important

Do not configure the KVM guest environment (CPU, RAM, and network adapters) with settings less powerful than those recommended and described here.

  1. In a browser, open the F5 Downloads page and log in.

  2. On the Downloads Overview page, click Find a Download.

  3. Under Product Line, click the link similar to BIG-IP v.x/Virtual Edition.

  4. Click the link similar to x.x.x_Virtual-Edition.

  5. If the End User Software License is displayed, read it and then click I Accept.

  6. Download the BIG-IP VE file package ending with qcow2.zip.

  7. Extract the file from the Zip archive and save it where your qcow2 files reside on the KVM server.

  8. Use VNC to access the KVM server, and then start Virt Manager.

  9. Right click localhost (QEMU), and from the popup menu, select New.

    The Create a new virtual machine, Step 1 of 4 dialog box opens.

  10. In the Name field, type a name for the connection.

  11. Select import existing disk image as the method for installing the operating system, and click Forward.

  12. Type the path to the extracted qcow2 file, or click Browse to navigate to the path location; select the file, and then click the Choose Volume button to fill in the path.

  13. In the OS type setting, select Linux, for the Version setting, select Red Hat Enterprise Linux 6, and click Forward.

  14. In the Memory (RAM) field, type the appropriate amount of memory (in megabytes) for your deployment. (For example 4096 for a 4GB deployment). From the CPUs list, select the number of CPU cores appropriate for your deployment, and click Forward.

  15. Select Customize configuration before install, and click the Advanced options arrow.

  16. Select the network interface adapter that corresponds to your management IP address, and click Finish.

    The Virtual Machine configuration dialog box opens.

  17. Click Add Hardware.

    The Add New Virtual Hardware dialog box opens.

  18. If SR-IOV is not required, select Network.

  19. From the Host device list, select the network interface adapter for your external network, and from the Device model list, select virtio. Then click Finish.

    Do this again for your internal and HA networks.

  20. If SR-IOV is required, select PCI Host Device, and then select the PCI device for the virtual function mapped to your host device’s external VLAN. Then click Finish.

    Be sure to use the Virtual Function (VF) PCI Host Device instead of the Physical Function (PF) to take advantage of VE high-speed drivers.

    The following image illustrates adding a PCI VF Network Interface within the Virtual Machine Manager:

    ../_images/kvm_qemu1.png
  21. Repeat step 20 for your host device’s internal VLAN and HA VLAN.

  22. From the left pane, select Disk 1.

  23. Click the Advanced options button.

  24. From the Disk bus list, select Virtio.

  25. From the Storage format list, select qcow2.

  26. Click Apply.

  27. Click Begin Installation.

Virtual Machine Manager creates the virtual machine just as you configured it.

To assist with configuring the management IP, consult the BIG-IP configuration utility tool.

Define SmartNIC configuration settings

Do the following to define the specific settings that enable your BIG-IP VE SmartNIC on your server:

  1. On your server, set the following:

    • Under NIC, change the network source to Host device em1: macvtap, set the source mode to Bridge, and set the device model to virtio.

    • To add SmartNIC interfaces, click Add Hardware, click PCI Host Device, scroll down and select F5 Networks VF (PCIe device ID = 0x0100), and then click Finish.

    • You can add multiple interfaces to each BIG-IP VE. When configuring the interfaces as a trunk, for optimal performance, you MUST configure the trunk members on different PCI Buses in the operating system. For example:

      ../_images/snic_OS_PCI.png

      Note

      Do NOT use the physical functions.

  2. OPTIONAL: To verify that the SmartNIC driver is properly bound to the N3000 PAC, check the /var/log/tmm file. If the N3000 SmartNIC (HSBse) was properly discovered on the PCI bus and the SmartNIC driver was bound to the device, you will see something similar to the following:

    <13> May 15 07:28:01 www notice f5hsb1[0000:00:08.0]: ---------XNET PROBE of HSBse successful -----------
    
  3. Start the VE, type: bigstart restart tmm.

  4. Confirm that the xnet driver (also known as the HSBse driver in log files) is registered with the VE: tmctl -dblade -i tmm/device_probed.

    ../_images/smartnic-driverList.png
  5. Confirm that the XNET probe of HSBse succeeded: grep "XNET PROBE of HSBse successful" /var/log/ltm.

  6. License the BIG-IP VE plus the add-on SmartNIC license.

  7. Consult this AFM DoS/DDoS Protection topic to validate which vectors are hardware-accelerated DoS vectors on BIG-IP Virtual Edition 15.1.0.4 - AFM module for use with BIG-IP VE SmartNIC.

A single SmartNIC supports up to eight VEs with one virtual function (VF) assigned.
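
To see the virtual functions the SmartNIC exposes to the host, a minimal sketch (the bus address 0000:5e:00.0 is a placeholder for your card's physical function):

```shell
# List PCI devices that are SR-IOV virtual functions
lspci | grep -i 'virtual function'

# Show the VFs attached to one physical function (placeholder bus address)
ls -l /sys/bus/pci/devices/0000:5e:00.0/ | grep virtfn
```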

Orchestrator User Guide

To optimize your Intel PAC N3000 SmartNIC for accelerating the BIG-IP VE, use the F5 SmartNIC Orchestrator utility.

Note

Screenshots of the F5 BIG-IP VE SmartNIC Orchestrator depicted in this guide may vary, depending upon the version you are using.

Deploy

Before deploying the F5 SmartNIC Orchestrator Docker container, read the F5 Licensing EULA.

  1. To verify that no container is running, type:

    1. Type docker ps and verify that nothing is returned.
    2. If something does return, type docker stop <container ID>, where <container ID> is the value returned in the previous step.
  2. OPTIONAL: log in to Docker (if required): docker login.

  3. To pull the stable F5 SmartNIC Orchestrator Docker container, type the following: docker pull f5networks/smartnic-orchestrator:stable.

  4. To run the container, type the following, which will auto-accept the F5 EULA:

    sudo docker run  -d -t -e TZ=America/Los_Angeles -e ACCEPT_EULA=Y -e DEBUG=Y --name f5smartnic-orchtool --mount src=/lib/modules,target=/lib/modules,type=bind --mount src=/usr/src,target=/usr/src,type=bind --mount src=/dev,target=/dev,type=bind --mount src=/var/log,target=/var/log,type=bind --mount src=/var/lib,target=/var/lib,type=bind --mount src=/usr/share/hwdata,target=/usr/share/hwdata,type=bind --cap-add=ALL -p 8443:8443 --privileged=true f5networks/smartnic-orchestrator:stable
    

    Note

    Use the -e DEBUG=Y option to enable reading/writing SmartNIC registers and executing remote bash commands using the API or UI; when omitted, this capability is disabled.

  5. Point your browser to https://{hostip}:8443/, and enter the following:

    • User name: admin
    • Password: admin

    Wait for the pipeline to finish.

  6. OPTIONAL: You can also set up the Docker Container to run automatically as a service.

Update

To pull the new Orchestrator from the Docker repository, do the following:

  1. Stop all VEs/VMs; in the BIG-IP VE terminal, type shutdown -H now for an immediate shutdown.

  2. Stop the Orchestrator; type docker stop <container ID>. If you do not know the container ID, fetch it by typing docker ps.

  3. Pull the stable F5 SmartNIC Orchestrator Docker container, type: docker pull f5networks/smartnic-orchestrator:stable

  4. Run the new version of the F5 SmartNIC Orchestrator, type:

    sudo docker run  -d -t -e TZ=America/Los_Angeles -e ACCEPT_EULA=Y -e DEBUG=Y --name f5smartnic-orchtool --mount src=/lib/modules,target=/lib/modules,type=bind --mount src=/usr/src,target=/usr/src,type=bind --mount src=/dev,target=/dev,type=bind --mount src=/var/log,target=/var/log,type=bind --mount src=/var/lib,target=/var/lib,type=bind --mount src=/usr/share/hwdata,target=/usr/share/hwdata,type=bind --cap-add=ALL -p 8443:8443 --privileged=true f5networks/smartnic-orchestrator:stable
    
  5. Update the PCI devices assigned to your VMs, if they changed during PCI re-enumeration.

  6. Restart your VMs.

Change login credentials

To change login credentials, in the top-right corner of the window, click Change Credentials, change the username and password accordingly, confirm password, and then click Update.

../_images/snic_login.png

Configuration pipeline

Click the Configuration Pipeline tab or the Home menu to view the status of, and run, the following pipeline stages:

  • SmartNIC present
  • SmartNIC Management Driver
  • SmartNIC Base image – detailing image and build version
  • SmartNIC F5 image – current bitfile version loaded
  • Enable SR-IOV – enablement status.
  • Configure SmartNIC
  • Enable Network Interface
../_images/smartnic_orch1.png
  1. To run an individual pipeline stage, click retry Retry in a stage row.

  2. To run the entire pipeline, in the top-right corner of the tab, click retryAll.

  3. Monitor your pipeline progress using the built-in terminal window:

    ../_images/smartnic_terminal.png

Settings

  1. Click the Settings tab to set possible optic modes:

    • 1 Optic
    • 2 Optics
    ../_images/smartnic_settings1.png

    Consult the Configure the switch, cables, and optics topic for optic mode recommendations.

Note

For intrinsic mapping of the MACs to the actual VFs that they can reach, connect your Intel N3000 card to your upstream Ethernet switch or traffic generator. F5 recommends using QSFP28 (100G) optics and changing the port mode of your switch/traffic generator to 4 x 25G. F5 SmartNIC supports a single port with two channels, in one-optic mode and two-optic mode, using one MAC from each channel.

  1. Click Submit.

  2. To set the FEC mode, click one of the following options that your switch supports, and then click Submit. Consult your switch manufacturer’s documentation for details regarding the supported FEC mode(s) for a 100G port split into 4x25G.

    • no (No FEC) - To remain in compliance with IEEE standards, use this mode for diagnostic purposes only.
      1. BEFORE using this mode, you must stop any running SmartNIC VE VMs.
      2. Click the Diagnostics tab, and slide the SR-IOV switch to OFF.
      3. Click the Settings tab and select the no (no FEC) option.
      4. Click the Configuration Pipeline tab and in the top-right corner, click retryAll.
    • kr (Fire Code Forward Error Correction (IEEE 802.3 Clause 74))
    • rs (Reed Solomon Forward Error Correction (IEEE 802.3 Clause 108))
    ../_images/smartnic-settings-fec.png

Diagnostics

../_images/snic_diagnostics.png

Click the Diagnostics tab to do the following:

  1. SmartNIC F5 image - do the following to reload an updated SmartNIC F5 image, reconnect it if disconnected, and/or restore the factory default settings for your SmartNIC:

    1. Stop all VEs/VMs; in the BIG-IP VE terminal, type shutdown -H now for an immediate shutdown.
    2. In the SR-IOV row, slide the toggle to OFF. Wait for the SR-IOV status to display disabled.
    3. In the SmartNIC F5 image row click Retry retry.
    4. To rerun the pipeline, click the Configuration Pipeline tab, and then click retryAll.
    5. Restart/reboot your VMs, type: reboot.
  2. SR-IOV - slide the Enable/Disable switch accordingly. After enabling/disabling SR-IOV, on the Configuration Pipeline tab, you may need to click retry Retry for the following stages:

    • Configure SmartNIC
    • Enable Network Interface
  3. Run Diagnostics - do the following to validate the SmartNIC hardware (except for optical interfaces):

    1. Stop all VEs/VMs; in the BIG-IP VE terminal, type shutdown -H now for an immediate shutdown.
    2. In the SR-IOV row, slide the toggle to OFF. Wait for SR-IOV status to display disabled.
    3. In the Run Diagnostics row, click Retry retry. Wait for the status to return a Pass/Fail value.
    4. If you receive a Fail status, then consult the /var/log/f5snic/f5snic.log.[date] log files for more information.
    5. To reload the SmartNIC functionality, in the SmartNIC F5 image row, click Retry retry, and then on the Configuration Pipeline tab, click retryAll.
    6. Restart/reboot your VMs, type: reboot.
  4. SmartNIC Management Driver - Click Uninstall to delete the currently installed SmartNIC Management Driver BEFORE updating to a new version.

  5. F5 SmartNIC bitfile - Click Retry to validate the current F5 bitfile.

  6. Collect Snapshot - Click Collect Snapshot to create a smartnic_snapshot.json file containing F5 VE SmartNIC diagnostic data, and then click Download. The smartnic_snapshot.json file downloads locally; provide this file to the F5 Support team.

    To upload this smartnic_snapshot.json to iHealth for analysis:

    1. Copy or move the smartnic_snapshot.json file to the /config folder on the VE from which you want to upload to iHealth, for example:

      scp smartnic_snapshot.json root@10.238.8.196:/config/smartnic_snapshot.json
      
    2. On the VE, navigate to System -> Support -> New Support Snapshot.

    3. Either generate a qkview file or generate and upload to iHealth. The qkview uploaded to iHealth will contain the smartnic_snapshot.json file. Consult this video demo for complete iHealth instructions.
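When Run Diagnostics returns a Fail status, the log files noted above are the first place to look. A minimal shell sketch for finding the most recent one (the log path is taken from this guide; the [date] suffix varies per system):

```shell
# Show the tail of the most recent f5snic log; prints a message if none exist.
LOGDIR=/var/log/f5snic
latest=$(ls -t "$LOGDIR"/f5snic.log.* 2>/dev/null | head -n 1)
if [ -n "$latest" ]; then
  tail -n 50 "$latest"
else
  echo "no f5snic logs found in $LOGDIR"
fi
```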

Tip

Advanced diagnostics: Processing malicious traffic consumes CPU. The FPGA on board the SmartNIC (FPGA + SmartNIC) actively blocks malicious traffic before it can reach your CPU. Depending upon the attack, you will see a large difference in the amount of malicious traffic saturating your CPU. To TPS-test your SmartNIC, do the following:

  1. To disable the SmartNIC hardware offload, use the following command. Without the protection of the FPGA + SmartNIC, approximately 500 Mbps of malicious traffic will max out your CPU at 100 percent:

    modify sys db dos.forceswdos value TRUE

  2. To re-enable the FPGA + SmartNIC (the default mode of operation), use the following command. Because the FPGA + SmartNIC monitors, actively detects, and blocks malicious traffic BEFORE it reaches your CPU/software layer, your SmartNIC can process over 30 Gbps of malicious traffic:

    modify sys db dos.forceswdos value FALSE

  3. Consult your BIG-IP AFM dashboard for traffic statistics.
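The toggle above can also be scripted from the BIG-IP VE shell. A guarded sketch, so the commands are skipped on any machine that does not have tmsh:

```shell
# Re-enable FPGA + SmartNIC hardware offload (the default) and confirm it.
# tmsh is only present on the BIG-IP VE, so this is a no-op elsewhere.
if command -v tmsh >/dev/null 2>&1; then
  tmsh modify sys db dos.forceswdos value FALSE
  MSG=$(tmsh list sys db dos.forceswdos)   # shows the current value
else
  MSG="tmsh not found: run these commands on the BIG-IP VE"
fi
echo "$MSG"
```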

Status

Click the Status tab to verify the following:

  • MAC link status for hardware information and channel connection status in 1-optic or 2-optics mode
  • MAC stats for MAC channels 0 through 3
../_images/snic_status.png

Depending upon the optics setting you defined, you will see two or four MAC links for the channels in use. Click Clear MAC Stats to reset the statistics for the MAC channels.

API

  1. To access the API, in the top-right corner, click the API tab, and then enter your login credentials at the prompt.

    ../_images/smartic_API1.png
  2. The F5 SmartNIC API tab describes how to use the following GET, POST, and PUT commands, which use an HTTPS scheme. Click each GET, POST, or PUT request to view the full descriptions.

    GET commands

    • smartnic - Returns SmartNIC Configuration Pipeline Status
    • services - Returns SmartNIC services inventory
    • configuration pipeline status - Returns SmartNIC configuration pipeline status
    • operating conditions - Returns SmartNIC operating conditions
    • drivers - Returns SmartNIC drivers inventory
    • network - Returns the status of the network interfaces and optical configuration
    • virtualization - Returns the status of the SR-IOV configuration
    • NicInitialization - Returns the SmartNIC configuration status
    • BaseImg - Returns the SmartNIC Factory FPGA status
    • F5Img - Returns the SmartNIC FPGA F5 Networks Application Image status
    • Diagnostics - Get Intel Diagnostic test results
    • Diagnostics.GetSnapshot - Get all snapshot objects
    • registers - Returns the value of the register at the provided PCI coordinates

    Post commands

    • Run Configuration Pipeline - Run Configuration Pipeline to prepare the system for SmartNIC
    • Drivers - Modules.install and modules.uninstall Intel FPGA driver modules
    • Network:
      • /{nic}/Network/Actions/Interfaces.SetNumberOfOptics - Configures the network interface to use single or dual optics
      • /{nic}/Network/Actions/Interfaces.Enable - Configures the Intel MAC network interface CRC mode
      • /{nic}/Network/Actions/Interfaces.SetFECMode - Configures the network interface FEC Mode
      • /{nic}/Network/Actions/Interfaces.ClearMACStats - Clears the network mac stats.
    • virtualization - Enable/disable SRIOV for SmartNIC device
    • NicInitialization - /{nic}/NicInitialization/Actions/NIC.Initialize to initialize SmartNIC device
    • BaseImg - /{nic}/BaseImg/Actions/BaseImg.Install to upgrade the SmartNIC Factory FPGA bitfile
    • F5Img:
      • /{nic}/F5Img/Actions/F5Img.Install to upgrade the SmartNIC F5 Networks FPGA bitfile
      • /{nic}/F5Img/Actions/F5Img.ReloadF5Img to reload the SmartNIC F5 Networks FPGA bitfile
    • Diagnostics - /{nic}/Diagnostics/Actions/Diagnostics.Run to run Intel Diagnostic tests
    • registers - /{nic}/Registers/Actions/Register.Write/{bus}/{pf}/{vf}/{addr} to write a value to the register at the provided PCI coordinates
    • Shell Execution - /{nic}/Commands/Actions/Command.Run to run remote shell commands

    Put commands

    • /{nic}/AccountService/Accounts - to update username and password credentials
  3. Expand Models, and then expand each request to view complete code examples.

    ../_images/smartic_API2.png
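As a quick smoke test, the GET requests above can be exercised with curl. The sketch below only composes and prints the request rather than executing it; the host, admin credentials, and the exact URL prefix are assumptions, so confirm them against the API tab on your installation.

```shell
# Sketch only: compose (not execute) a GET for the SmartNIC Configuration
# Pipeline Status. HOST and the credentials are placeholders; the "smartnic"
# resource name comes from the GET commands list above.
HOST=n3000-host.example.com
CMD="curl -sk -u admin:admin https://${HOST}/smartnic"
echo "$CMD"
```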

Uninstall F5 VE SmartNIC Orchestrator

To completely remove the F5 BIG-IP VE SmartNIC Orchestrator/restore your SmartNIC default factory settings, do the following:

  1. Stop all VE VMs: in the BIG-IP VE terminal, type shutdown -H now for an immediate shutdown.

  2. On the Diagnostics tab, in the SR-IOV row, slide the toggle to OFF.

  3. On the Diagnostics tab, in the SmartNIC Management Driver row, click Uninstall to delete the currently installed SmartNIC Management Driver.

  4. To stop the SmartNIC Orchestrator container, in your terminal, type docker stop <container ID>. If you do not know the container ID, fetch it by typing docker ps.

  5. To delete the SmartNIC container and Docker image, in your terminal, type the following, where the image/container ID or name identifies the image or container you want to delete. If you do not know the container ID, fetch it by typing docker ps.

    • Remove container:

      $ docker ps -a
      $ docker rm [OPTIONS] CONTAINER [CONTAINER...]
      

      Or remove all stopped containers:

      $ docker rm $(docker ps -a -q)
      
    • Remove image:

      $ docker images -a
      $ docker image rm [OPTIONS] IMAGE [IMAGE...]
      

Consult the Docker documentation for command usage details.
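The container steps above can be combined into one guarded cleanup script. A sketch only: the `name=smartnic` filter is an assumption, so verify the actual container and image names with `docker ps -a` and `docker images -a` first.

```shell
# Stop and remove the Orchestrator container, if present.
# The "smartnic" name filter is an assumption; verify with `docker ps -a`.
# After removing the container, remove its image with `docker image rm <ID>`.
if command -v docker >/dev/null 2>&1; then
  CID=$(docker ps -aq --filter "name=smartnic" 2>/dev/null)
  if [ -n "$CID" ]; then
    docker stop "$CID" >/dev/null 2>&1
    docker rm "$CID" >/dev/null 2>&1
  fi
  MSG="orchestrator cleanup attempted"
else
  MSG="docker not found: nothing to clean up"
fi
echo "$MSG"
```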