Last updated on: 2024-04-19 09:21:35.

F5 VE SmartNIC Deployment Guide

When running BIG-IP VE 15.1.0.4, 15.1.4, or 15.1.6, you can use the Intel PAC N3000 25GbE technology on any of the Intel-qualified servers to program and accelerate multiple BIG-IP VE virtual machines and the AFM DDoS module in a Linux KVM hypervisor. The F5 SmartNIC Orchestrator is a bit-file and configuration/orchestration utility that uses SR-IOV to isolate the VMs and provides a REST API in a Docker container for orchestration.

../_images/smartnic_setup_1.png

Utilizing the SmartNIC technology in your qualified server accelerates your VE VMs and provides support for the following features and capabilities:

  • Throughput capacity:
    • 40 Gbps throughput for SmartNIC v1.0
    • 50 Gbps throughput for SmartNIC v2.0-2.0.1
  • SR-IOV capability:
    • SmartNIC 1.0 has 2 physical functions (PFs) and 8 virtual functions (VFs)
    • SmartNIC 2.0-2.0.1 has 2 physical functions (PFs) and 2 virtual functions (VFs)
  • 8 direct memory access (DMA) channels with 4 rings
  • Global DDoS mitigation (100+ vectors)
  • Custom behavioral DoS vectors
  • Global SYN attack mitigation using cookies
  • Global allow-listing
  • Tunnel support for IP-in-IP, GRE, VXLAN, VXLAN-GPE, Geneve, NSH, NVGRE, and Ether-IP (HW DDoS, checksum offload)
  • QinQ (HW DDoS, checksum offload)
  • TCP and UDP checksum offload on transmit
  • Transmit VLAN filtering

F5 VE SmartNIC 2.0-2.0.1 provides support for the previous feature set plus the following new features:

  • Offloads compute-intensive functions on behalf of the VE to which it is connected, realizing performance and cost-of-ownership benefits.
  • L4 flow optimization, including CGNAT
  • VIP-based (per app) SYN Cookie and DDoS mitigation
  • Updated F5 SmartNIC Orchestrator support for SmartNIC 2.0-2.0.1

Setup Guide

This setup guide describes the system prerequisites, supported components, installation, and configuration processes for all systems required by the BIG-IP VE for SmartNIC solution and Orchestrator utility.

Caution

  • Do NOT install any Intel SDK tools. The F5 SmartNIC Orchestrator utility will install all of the required tools, automatically.
  • Be sure to install ONLY one N3000 SmartNIC per Intel-qualified server. Installing multiple SNICs can result in failures.
  • In dual-CPU machines, different PCIe slots are associated with either CPU0 or CPU1 (also known as the NUMA domain). For optimal performance, pin the guest VM to the same CPU with which your SmartNIC is associated.
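
The NUMA pinning described above can be sketched from the Linux host. This is a minimal sketch, not part of any F5 tool: the PCI address (0000:b2:00.0), the VM name (bigip-ve1), and the `cpus_on_node` helper are examples of ours.

```shell
#!/bin/sh
# Sketch: pin a guest VM to the NUMA node of the SmartNIC (names are examples).

# Build a comma-separated CPU list for one NUMA node from `lscpu -p=CPU,NODE`
# style lines read on stdin (lines beginning with '#' are comments).
cpus_on_node() {
  awk -F, -v node="$1" '/^[0-9]/ && $2 == node {out = out (out ? "," : "") $1} END {print out}'
}

# On a real host you would run something like:
#   node=$(cat /sys/bus/pci/devices/0000:b2:00.0/numa_node)
#   cpulist=$(lscpu -p=CPU,NODE | cpus_on_node "$node")
#   for v in 0 1 2 3; do virsh vcpupin bigip-ve1 --vcpu "$v" --cpulist "$cpulist"; done
```

The sysfs `numa_node` file reports which NUMA domain the PCIe slot belongs to, which is what the caution above asks you to match.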

Prerequisites

Your system must meet the following requirements:

Intel-qualified server

  • 6c/12t 3.6 GHz (E5-1650 v4) CPU (Broadwell-EP or better)
  • Dedicated 16X mechanical and 16X electrical lanes for the N3000 riser
  • RHEL or CentOS 7.6 or later
  • 103G disk space for the configuration utility and BIG-IP VE, including all modules

Connection recommendations

  • 100G switch port, split into 4 x 25G, with a 100G optic on each end connected by a single fiber cable.
  • Only use port 0, the port closest to the USB connectors.
  • The N3000 SmartNIC must have both QSFP ports active, with two MACs active on each port, running at 25G.
  • To achieve 40G+, a virtual function from each PCI bus must be a member of the trunk. Consult the Trunking using BIG-IP - link redundancy topic for details.
  • Tagged VLANs: for both SmartNIC 1.0 and 2.0-2.0.1, the BIG-IP VE requires tagged interfaces:
    • SmartNIC 1.0 requires a tagged EXTERNAL network.
    • For SmartNIC 2.0-2.0.1 deployed in an untagged network, use Orchestrator Settings to route that traffic on the SmartNIC.
  • Configure your virtual machines with a performance profile.

Intel PAC N3000 25GbE

  For complete information about performance and benchmark results, visit www.intel.com/benchmarks.

Linux KVM hypervisor

  For complete information, visit www.linux-kvm.org/page/Main_Page.

For SmartNIC 1.0: BIG-IP AFM, DHD, or VNF Better HPVE plus SmartNIC DDoS add-on license

  SmartNIC 1.0 requires a specific SmartNIC DDoS add-on license, applied to an existing or new BIG-IP VE image running v15.1.0.4, to enable the SmartNIC offloads.

For SmartNIC 2.0-2.0.1: DDoS and Traffic Acceleration offloads add-on licenses

You require the following add-on license:

  • F5-ADD-BIG-VE-SMARTT BIG-IP ADD-ON (SMARTNIC TRAFFIC ACCELERATION VIRTUAL EDITION ADD-ON LICENSE (50G, PER INTEL FPGA PAC N3000))

SmartNIC 2.0-2.0.1 requires a separate SmartNIC 2.0-2.0.1 Traffic Acceleration add-on license for a BIG-IP image running v15.1.4 or v15.1.6.1 to enable L4 and CGNAT offloads.

Note

If you require only traffic acceleration offloads, then you only need the F5-ADD-BIG-VE-SMARTT license. If you require both DDoS and traffic acceleration, then you require both add-on licenses. Consult the Activating add-on modules procedure in the K7752 article.

Add-on license

F5 SmartNIC requires two add-on licenses if you are using both DDoS and Traffic Acceleration.

  • SmartNIC 1.0 - To enable SmartNIC DDoS offload for the appropriate BIG-IP AFM, DHD, or VNF HPVE, you need an add-on license applied to an existing or new BIG-IP VE image running v15.1.0.4 or higher, including v15.1.4.
  • SmartNIC 2.0-2.0.1 – For CGNAT, L4 flow acceleration, and other features specific to the SNIC 2.0-2.0.1 release in 15.1.4 and 15.1.6.1 respectively, procure the [F5-ADD-BIG-VE-SMARTT] add-on license for an existing or new BIG-IP VE image running v15.1.4 or v15.1.6.1.

For complete details about add-on licenses, consult the Activating add-on modules procedure in the K7752 article.

Supported components

This section lists the supported versions of hardware and software qualified by Intel for use with PAC N3000 and the BIG-IP VE 15.1.0.4 or BIG-IP VE 15.1.4.

Servers

You can use any of the Intel-qualified servers listed.

Operating Systems on KVM Hypervisor

The following lists the operating system versions that F5 tested and recommends using in a KVM hypervisor with the Intel PAC N3000 and the BIG-IP VE 15.1.0.4 or BIG-IP VE 15.1.4. For complete details, consult the Intel Acceleration Stack.

Operating System Kernel
CentOS version 7.6 3.10 and 4.19
CentOS version 7.7 3.10
Red Hat Enterprise Linux (RHEL) version 7.6 3.10

Switches

Intel-supported switches MUST support the following features:

  • QSFP28 optical ports
  • 4x25Gbps breakouts of ports
  • Reed Solomon (RS)-FEC on 25G breakout ports

Caution

Using non-RS-FEC mode is intended for diagnostic purposes only and is NOT an F5-supported production configuration (not supported by the F5 Support team). Contact your switch manufacturer to enable support for FEC options, such as RS-FEC (also known as Reed-Solomon) or BASE-R FEC (also known as Firecode).

Optics and Cables

You can use any of the Intel qualified optics.

Set up an Intel qualified server

Do the following on your Intel-qualified server PRIOR to installing the Intel PAC N3000 SmartNIC.

Optimize the BIOS

Caution

Before installing the SmartNIC, you MUST optimize your server BIOS settings and set the Fan Profile setting to Performance; otherwise, you can damage the SmartNIC hardware. If the server airflow is inadequate, then the N3000 can initiate a thermal protection mode and appear unresponsive. If this occurs, change the appropriate BIOS settings, and then shut down the server for 60 minutes, allowing the card to cool.

Update your BIOS to the latest manufacturer release version. Consult your manufacturer’s support website. Updating your BIOS can take up to two hours. If your server is already in production, plan your backup and redundancy options accordingly.

  1. To optimize the BIOS, during the boot process, press the F11 or DELETE key.
  2. At the BIOS screen, navigate to the Fan Profile menu and set it to Performance/Maximum.
  3. Verify that the system has the following settings defined:
    • In the Processor Configuration menu, enable Virtualization support.
    • In the Integrated IO Configuration menu, enable Intel VT-x.
    • In the PCI Configuration menu, enable SR-IOV support in the BIOS.
  4. Turn OFF speed-stepping.
  5. Change the Power Management setting to Performance.
  6. Change the Workload Configuration setting to I/O Sensitive.
  7. Disable the C-State power controls.
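
After rebooting, you can sanity-check some of these BIOS settings from the installed OS. This is a hedged sketch of ours, not an F5 utility; the `has_vmx` helper simply inspects a CPU flags string like the one in /proc/cpuinfo.

```shell
#!/bin/sh
# Rough post-BIOS checks from a booted Linux host (sketch, not exhaustive).

# Report whether a CPU flags string indicates Intel VT-x (the "vmx" flag).
has_vmx() {
  case " $1 " in
    *" vmx "*) echo yes ;;
    *) echo no ;;
  esac
}

# On a real host:
#   flags=$(awk -F: '/^flags/ {print $2; exit}' /proc/cpuinfo)
#   has_vmx "$flags"                      # expect "yes" when VT-x is enabled
#   dmesg | grep -i -e DMAR -e IOMMU      # expect IOMMU/DMAR lines when intel_iommu=on
```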

Configure the switch, cables, and optics

Consult the following quick tips for an overview of recommended/required optics and cable configuration. For more in-depth configuration information and diagrams, consult the following section.

Tip

To configure the simplest solution, consult the following quick tips:

  • On your switch, turn off (disable) the port speed auto-negotiation setting.

  • Use two 100G optics and a standard 12-strand fiber cable - use port A only off the SmartNIC, as there is no need/advantage to running dual ports.

  • A 100G switch is required; you must split the individual 100G switch port into 25G x 4 channel mode.

  • In normal operating mode, the first two channels are enabled and the other two channels remain disabled (offline).

  • Enabling the Reed-Solomon Forward Error Correction (RS-FEC) setting on the switch is the normal operating mode; however, if your switch does NOT support the RS-FEC option, use the Settings tab in the F5 SmartNIC Orchestrator tool to run with FEC disabled.

  • Add both interfaces to a BIG-IP VE trunk, allowing you to achieve 50G. Because LACP is not supported, if you require redundancy to dual switches, you MUST use two servers.

  • You MUST configure the IOMMU settings (REQUIRED):

    1. The pci=realloc setting is CRITICAL or the SmartNIC card will NOT initialize:

      grubby --update-kernel=ALL --args="intel_iommu=on pci=realloc"

    2. After changing the Intel IOMMU settings, you must also regenerate the GRUB configuration:

      1. To determine whether the system is under BIOS (legacy) or Unified Extensible Firmware Interface (UEFI) mode, type:

        ls -ld /sys/firmware/efi

      2. If a "No such file or directory" message appears, the system is using BIOS firmware; type:

        grub2-mkconfig -o /boot/grub2/grub.cfg

        Otherwise, the system is using UEFI; type:

        grub2-mkconfig -o /boot/efi/EFI/centos/grub.cfg

  • Verify that the fan settings on the servers are set to maximum/performance mode, as it is CRITICAL that the card is passively cooled.
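
The IOMMU steps above can be combined into one small script. This is a sketch under our own naming: the firmware-directory check is parameterized so the decision logic is visible, and the paths are the standard CentOS ones from the steps above.

```shell
#!/bin/sh
# Choose the grub2-mkconfig output path based on firmware type (sketch).
# The EFI sysfs directory is passed as an argument so the logic is testable;
# on a real host, call: grub_cfg_path /sys/firmware/efi

grub_cfg_path() {
  if [ -d "$1" ]; then
    echo /boot/efi/EFI/centos/grub.cfg   # UEFI mode
  else
    echo /boot/grub2/grub.cfg            # legacy BIOS mode
  fi
}

# On a real host you would then run:
#   grubby --update-kernel=ALL --args="intel_iommu=on pci=realloc"
#   grub2-mkconfig -o "$(grub_cfg_path /sys/firmware/efi)"
```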

The Intel® FPGA PAC N3000 has two Quad Small Form-Factor Pluggable 28 (QSFP28) cages on the faceplate panel; therefore, there are two possible ordering part numbers (OPNs). You must obtain the 25GbE OPN. The 25GbE network configuration requirements include:

  • 2 x 25 GbE per QSFP28

  • Programmable Forward Error Correction (FEC) including Reed-Solomon Forward Error Correction (RS-FEC), BASE-R FEC (also known as Firecode) and no FEC

    Note

    Support is provided for IEEE 802.3 clause 108 and clause 74. Clause 91 is not supported.

  • 25GBASE-CR

  • 25GBASE-SR

The following diagram illustrates which 25G channels of the QSFP 100G Optic in SmartNIC 1.0 are utilized for both 1-optic configuration and 2-optic configuration:

../_images/smartnic1-0_n3000_connect.png

Figure: Supported Intel® FPGA PAC N3000 Port-Optic Connection for SmartNIC 1.0

The following diagram illustrates which 25G channels of the QSFP 100G Optic in SmartNIC 2.0-2.0.1 are utilized for both 1-optic configuration and 2-optic configuration:

../_images/smartnic2-0_n3000_connect.png

Figure: Supported Intel® FPGA PAC N3000 Port-Optic Connection for SmartNIC 2.0-2.0.1

Important

  • F5 runs 100G optics on channels 1 and 2 on Port A and (optionally) on Port B, supporting channels 3 and 4 off the SmartNIC.
  • If you use a 100G optic and split the channels into 4 x 25G, then the SmartNIC channels will connect on ports 1 and 2 on your split switch ports.

For example, on both of the 100G switch ports split into 4 x 25G, channels 3 and 4 will be inactive, while channels 1 and 2 support 25G.

Recommended single-switch configurations:

Option 1 Option 2
Single-Port 2-Channel Non-Bonded Single-Port 2-Channel Bonded-VE Trunk
../_images/snic_singOpt1.png
../_images/snic_singOpt2.png

Option 3 Option 4
Dual-Port 2-Channel Non-Bonded Dual-Port 2-Channel Bonded-VE Trunk
../_images/snic_singOpt3.png
../_images/snic_singOpt4.png

Recommended Multi-Chassis Link Aggregation (MLAG) configurations:

Option 1 Option 2
Single Port Dual-Port
../_images/snic_mlag1.png
../_images/snic_mlag2.png

Switch recommendations

  • Reed-Solomon Forward Error Correction (RS-FEC)—enable this setting on each individual switch port in use. By default, the SmartNIC has RS-FEC enabled. To achieve uplink, these settings must match on both the SmartNIC and the switch.

    An example using RS-FEC command:

    localhost(config)# show interfaces status
    localhost(config)# show interfaces error-correction
    localhost(config-if-<interface>)# error-correction encoding reed-solomon
    
  • Splitting switch channels and optic types—the default configuration uses a 100G optic in Port A of the SmartNIC. The optics should match brand and part number on each side. The recommended configuration is a 100G optic split into 4 x 25G channels from the 100G port.

    Caution

    Avoid configuring 4 x 25G on a 40G switch port. Use only a port on your switch that supports 100G and can split into 4 x 25G mode with RS-FEC support enabled. Many switch manufacturers mix 40G and 100G ports; a 100G optic fits in both, but only certain ports support both 100G and split mode with FEC support. Contact your switch manufacturer BEFORE contacting F5 Support.

To change 100G QSFP ports from 100GbE mode to 4 x 25G mode

The following process applies to SmartNIC 1.0 ONLY.

  1. On your switch, turn off (disable) the port speed auto-negotiation setting.

  2. Configure the desired speed as 25G. The following example uses port 33:

    (config) # interface Et33/1-4
    (config-if-Et33/1-4) # speed forced 25gfull
    

    Important

    Make sure all channels in use have RS-FEC enabled. If not possible, turn OFF FEC on the SmartNIC to enable diagnostic mode (not supported in production).

Optics and switches

Running F5 optics, or any optic, in a switch requires a bypass code unique to each customer. Consult your manufacturer’s unsupported-optic bypass documentation for the code to use in the following command example:

service unsupported-transceiver F5 [code]

Different firmware versions of the same switch handle 4 x 25G mode differently. For example, newer firmware on an Arista switch supports FEC on each sub-channel while older firmware does not, failing to achieve an uplink to the SmartNIC. Therefore, F5 recommends using matching optics running the same firmware on both ends of the link.

  1. Physical ports support FEC only for certain ranges. For example, ports 13 - 23 support FEC; all others do not. To enable FEC for ports, use the F5 SmartNIC Orchestrator and set the FEC option to ON next to each port you want to FEC-enable.

  2. Connect your N3000 card to your upstream Ethernet switch or traffic generator. F5 recommends using QSFP28 (100G) optics and changing the port mode of your switch/traffic generator to 4 x 25G. Currently, the N3000 version that F5 supports operates only in 2 x 2 x 25G mode; therefore, only two 25G MACs exist per QSFP port. For example, if you connect an N3000 QSFP port to a 4 x 25G switch port, only two of the four 25G links appear as active/linked.

  3. Additionally, the 25G MACs in the N3000 SmartNIC support RS-FEC (Reed-Solomon). You MUST enable RS-FEC for ports on your upstream switch. If your switch cannot support FEC, you can dynamically turn on/off FEC on the N3000. However, this is NOT recommended or supported, because 100GBASE-SR4 must have FEC enabled for reliable data transfer. Currently, you can use the orchestrator Settings tab to set FEC, or in Docker, you can run the following command:

    fecmode

    The N3000 supports the default Reed Solomon FEC as well as the following FEC modes:

    fec_mode Mode
    no No FEC
    kr Fire Code Error Correction (IEEE 802.3 Clause 74)
    rs Reed Solomon Error Correction (IEEE 802.3 Clause 108)

    Configurable FEC is supported only for 2 x 2 x 25G network configurations. For 8 x 10G, the FEC setting has no effect.

    To set FEC mode

    $ sudo fecmode -B [bus] [mode]
    [mode] = ‘no’, ‘kr’, ‘rs’
    [bus] = PCIe bus of FPGA in the format “0xyz”
    

Install the OS and basic options

  1. Use SSH/PuTTY to remote into the server, and install CentOS 7.6 or 7.7 with either kernel 3.10 or kernel 4.19.

  2. To install basic software, type the following:

    sudo yum update
    
  3. To set the required kernel arguments with grubby, type:

    grubby --update-kernel=ALL --args="intel_iommu=on pci=realloc"
    

    Or

    vi /etc/sysconfig/grub
    

    Edit /etc/sysconfig/grub and add intel_iommu=on pci=realloc to the end of the GRUB_CMDLINE_LINUX= line.

  4. Reboot your machine.

  5. To install the rest of the tools and packages, type:

    sudo yum groupinstall -y "Development and Creative Workstation" "Additional Development" "Compatibility Libraries" "Development Tools" "Platform Development" "Python" "Virtualization Host"
    
  6. If you want to install a working GUI desktop, type: yum groupinstall -y "GNOME Desktop" "Graphical Administration Tools".

    If installing via the CLI, do one of the following:

    • OPTION 1: Type virsh edit <VM name> to open the VM’s XML configuration in a text editor, and then add the following to the Devices section:

      
          <hostdev mode='subsystem' type='pci' managed='yes'>
            <driver name='vfio'/>
            <source>
              <address domain='0x0000' bus='0xb2' slot='0x00' function='0x3'/>
            </source>
          </hostdev>
      
      
    • OPTION 2: Create a file with the XML configuration, and use the attach-device command to add the device. For example, for a file named n3000_nic2.xml, type: virsh attach-device <VM name> --config --persistent -f n3000_nic2.xml.

      
          <hostdev mode='subsystem' type='pci' managed='yes'>
            <driver name='vfio'/>
            <source>
              <address domain='0x0000' bus='0xb4' slot='0x00' function='0x3'/>
            </source>
          </hostdev>
      
      

    Note

    The values in both options represent the PCI bus, slot, and virtual-function of the N3000 being assigned to VE. You can assign multiple functions to the VE, and each is recognized as a separate interface by the VE.

  7. Install common libraries, tools, and enable SR-IOV, type:

    sudo yum install python27-python-pip python27-python-devel numactl-libs libpciaccess-devel parted-devel yajl-devel libxml2-devel glib2-devel libnl-devel libxslt-devel libyaml-devel numactl-devel redhat-lsb kmod-ixgbe libvirt-daemon-kvm numactl telnet net-tools
    sudo yum install epel-release -y
    sudo yum install emacs -y
    
  8. Install KVM Hypervisor and disable the Linux firewall, type:

    yum install qemu-kvm qemu-img virt-manager libvirt libvirt-python libvirt-client virt-install virt-viewer bridge-utils -y
    systemctl enable libvirtd
    systemctl start libvirtd
    systemctl stop firewalld
    systemctl disable firewalld
    

    Tip

    If you do not want to disable the Linux firewall, you can restrict access to the BIG-IP VE and the SmartNIC orchestrator web application. Configure the firewall using the following:

    1. Add allow rules.
    2. Set up a zone.
    3. On the client, assign a CIDR to that zone.

    For firewall configuration information, see the Secure your Linux network with firewall-cmd in the RedHat Sys Admin Guide.
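
As a hedged sketch of the allow-rule/zone/CIDR approach above, the following firewall-cmd sequence restricts the web applications to one management subnet. The zone name (f5mgmt) and the client CIDR (10.1.0.0/24) are example values of ours, not F5 defaults.

```shell
# Sketch: restrict access instead of disabling firewalld (example values).
sudo firewall-cmd --permanent --new-zone=f5mgmt
sudo firewall-cmd --permanent --zone=f5mgmt --add-source=10.1.0.0/24   # client CIDR
sudo firewall-cmd --permanent --zone=f5mgmt --add-port=8443/tcp       # SmartNIC Orchestrator
sudo firewall-cmd --permanent --zone=f5mgmt --add-port=443/tcp        # BIG-IP VE GUI
sudo firewall-cmd --reload
```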

  9. If installing the GUI desktop, install and start xRDP on the Linux server, type:

    sudo yum -y update
    sudo yum install -y epel-release
    sudo yum install -y xrdp
    sudo systemctl enable xrdp
    sudo systemctl start xrdp
    sudo firewall-cmd --add-port=3389/tcp --permanent
    sudo firewall-cmd --reload
    sudo reboot   # may be necessary
    
  10. If installing the GUI desktop, set up the Remote Desktop client already installed on your Windows, Mac, or Linux environments:

    1. Type the IP address used for the hypervisor; define the user as root and enter the root user password.
    2. If you see a self-signed certificate error, ignore it, along with any warnings about connecting to the computer for the first time.
    3. The auto-scaling of the monitor size depends on your primary monitor; when you drag to a second monitor, the resolution may be off. To fix this, pre-set the Remote Desktop client to the fixed resolution of your desired target monitor.
  11. Reboot your machine.
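
The two hostdev fragments shown in the CLI options of step 6 differ only in the PCI address, so a small helper can generate them. The helper name and the bus/slot/function values are examples of ours, not part of any F5 tooling.

```shell
#!/bin/sh
# Generate a libvirt <hostdev> fragment for an N3000 virtual function (sketch).
# Usage: hostdev_xml <bus> <slot> <function>, for example: hostdev_xml 0xb2 0x00 0x3

hostdev_xml() {
  cat <<EOF
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='$1' slot='$2' function='$3'/>
  </source>
</hostdev>
EOF
}

# Example: write a fragment for the VF at bus 0xb4, then attach it as in step 6:
#   hostdev_xml 0xb4 0x00 0x3 > n3000_nic2.xml
#   virsh attach-device <VM name> --config --persistent -f n3000_nic2.xml
```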

Install Docker

To install on CentOS

This process takes several minutes to complete.

  1. Type:

    curl -fsSL https://get.docker.com/ | sh

    If that fails, point your browser to: https://docs.docker.com/install/linux/docker-ce/centos/.

  2. Then do the following to start, get status, and enable Docker:

    sudo systemctl start docker
    sudo systemctl status docker
    sudo systemctl enable docker
    
  3. OPTIONAL: Log in to Docker (if required): docker login --username. F5 recommends creating an access token to use as your password.

To install on Red Hat Enterprise Linux (RHEL)

This process takes several minutes to complete.

Tip

The RHEL SELinux default mode is set to enforcing. To set up Docker to auto-start, set the RHEL SELinux mode to permissive. To persist this change, you must update the /etc/selinux/config file:

# Default setting
SELINUX=enforcing
# New setting
SELINUX=permissive

For complete details, consult the SELinux states and modes topic.

  1. Uninstall any native Red Hat Docker application.
  2. Clean the cache, type: dnf clean dbcache.
  3. Remove Podman and associated manpages, type: sudo dnf remove podman-manpages.
  4. Install Docker CE, type: dnf install docker-ce.x86_64 --allowerasing.
  5. Start Docker, type: systemctl start docker.
  6. Enable Docker, type: systemctl enable docker.

Pull F5 VE SmartNIC Docker container

Before deploying the F5 VE SmartNIC Orchestrator Docker container, read the F5 Licensing EULA, then use the following commands to pull and run the SmartNIC Orchestrator Docker container.

  1. To verify that no container is running, type:

    1. docker ps and verify that nothing is returned.
    2. If something does return, type: docker stop <container id>, where <container ID> is the value returned in the previous step.
  2. To pull the stable F5 SmartNIC Orchestrator Docker container, type the following: docker pull f5networks/smartnic-orchestrator:stable.

  3. To run the container, type the following, which will auto-accept the F5 EULA:

    sudo docker run  -d -t -e TZ=America/Los_Angeles -e ACCEPT_EULA=Y -e DEBUG=N --name f5smartnic-orchtool --mount src=/lib/modules,target=/lib/modules,type=bind --mount src=/usr/src,target=/usr/src,type=bind --mount src=/dev,target=/dev,type=bind --mount src=/var/log,target=/var/log,type=bind --mount src=/var/lib,target=/var/lib,type=bind --mount src=/usr/share/hwdata,target=/usr/share/hwdata,type=bind --cap-add=ALL -p 8443:8443 --privileged=true f5networks/smartnic-orchestrator:stable
    

    Note

    Command line parameter options include:

    • -e DEBUG=N - this variable is mandatory and disables executing remote bash commands using the API or UI. If not set, this flag is treated as disabled. If instructed by F5 Support, enable (-e DEBUG=Y) read/write operations to the SmartNIC registers.
    • -p <any port>:8443 - defines the port on which the Orchestrator’s Redfish API is listening. This parameter is case sensitive. The first port is the incoming port you want to use, and the second port is the port on which the Orchestrator listens inside the Docker container (consult the Docker CLI reference guide).
    • --mount src=<path to log file on local host> ,target=/var/log,type=bind - defines the location of the logfile on the host.
    • --mount src=<path to config file on local host  /var/lib/f5snic/f5snic.config>,target=/var/lib,type=bind - defines the location of the config file on the host.
  4. Point your browser to https://{hostip}:8443/ and enter the following:

    • User name: admin
    • Password: admin

    Wait for the pipeline to finish.

    ../_images/smartnic_orch1.png

Note

You can find the orchestration f5snic.log.[date] file in the /var/log/ directory.

Tip

If you do NOT see a green checkmark next to the “SmartNIC present” message, then the Orchestrator is NOT detecting the SmartNIC. This can happen for several reasons: the BIOS fan setting was not readjusted to performance mode, causing the SmartNIC to overheat; a power cable was not connected to the card; or the Orchestrator was launched BEFORE a SmartNIC was installed in the server. You can confirm this detection error when the following command returns nothing:

$ lspci -d 8086:b30

Auto-start the F5 SmartNIC Docker container

Once you set up the Docker container, you can configure it to restart automatically upon system reboot. This is useful if your server crashes unexpectedly. You can use the Docker restart policy to control whether your container starts automatically when it exits or when Docker restarts.
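
If you prefer the Docker restart policy, one hedged approach is to update the container created earlier. The name f5smartnic-orchtool matches the docker run example above; substitute your own container name.

```shell
# Sketch: apply a restart policy to the existing Orchestrator container.
sudo docker update --restart unless-stopped f5smartnic-orchtool

# Verify that the policy took effect:
sudo docker inspect -f '{{.HostConfig.RestartPolicy.Name}}' f5smartnic-orchtool
```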

Additionally, you can use systemd.

  1. To create the service file used by systemd (systemctl command), in your shell/terminal get your container name:

    $ docker ps -a
    

    Output looks similar to:

    ../_images/smartnic-autoStrtOutput.png

    Note the container name in the last column.

  2. Create a file (the filename must use all lowercase). This example uses docker-f5smartnic.service:

    sudo vi /etc/systemd/system/docker-f5smartnic.service
    
  3. Paste the following into that file, enter a description, and then update the container name in ExecStart and ExecStop:

    [Unit]
    Description=SmartNIC Orch Tool Container
    Requires=docker.service
    After=docker.service
    
    [Service]
    Restart=always
    ExecStart=/usr/bin/docker start -a f5smartnic.service
    ExecStop=/usr/bin/docker stop -t 2 f5smartnic.service
    
    [Install]
    WantedBy=multi-user.target
    

    Tip

    • This file is called a unit file for systemd.
    • Avoid any extra line breaks within the sections, such as Unit or Service.
    • The -a option in the Docker command for ExecStart ensures it is running in attached mode; for example, attaching STDOUT/STDERR and forwarding signals.
    • The -t option in the Docker command for ExecStop specifies seconds to wait for it to stop before killing the container.
    • Unit files are processed alphabetically. Therefore, F5 recommends adding a delay before installing docker-f5smartnic.service, so everything else has a chance to execute before Docker auto-starts.
  4. Before activating the service, you must reload the unit file, and then run the following command anytime you modify the unit file:

    $ sudo systemctl daemon-reload
    
  5. To auto-start and enable:

    $ sudo systemctl start docker-f5smartnic.service
    $ sudo systemctl enable docker-f5smartnic.service
    
  6. OPTIONAL: To disable the auto-start service, and then reboot your system (remember to change the service name):

    $ sudo systemctl stop docker-f5smartnic.service
    $ sudo systemctl disable docker-f5smartnic.service
    $ sudo reboot
    
  7. Reboot your system to apply changes:

    $ sudo reboot
    

Your container will now start on a server reboot, Docker restart, or a crash.

Deploy BIG-IP VE in KVM

F5 VE SmartNIC supports only BIG-IP VE 15.1.0.4 or BIG-IP VE 15.1.4.

Tip

Consult the following Define SmartNIC configuration settings topic when configuring your SmartNIC as PCI interfaces and enabling SR-IOV (steps 18-20 in this procedure).

To deploy BIG-IP VE, download an image from F5 and deploy it in your environment.

Important

  • Do not change the configuration (CPU, RAM, and network adapters) of the KVM guest environment with settings less powerful than those recommended and described here.
  • When using F5’s virtio synthetic driver, use the default i440FX machine type. The Quick Emulator (QEMU) Q35 machine type is not supported.
  1. In a browser, open the F5 Downloads page and log in.
  2. On the Downloads Overview page, do the following:
    1. Click Find a Download.
    2. Under Product Line, click the link similar to BIG-IP v.x/Virtual Edition.
    3. If the End User Software License is displayed, click I Accept.
    4. Download the BIG-IP VE file package ending with qcow2.zip.
  3. Extract the file from the Zip archive and save it where your qcow2 files reside on the KVM server.
  4. Use VNC to access the KVM server, and then start Virt Manager.

Warning

If you are using QEMU v8.1.0 or later, issues have been identified with System Management BIOS (SMBIOS) v3.x (64-bit entry point). F5 recommends downgrading SMBIOS to v2.x (32-bit entry point). When configuring a virtual machine (VM), use the following command to enforce the 32-bit entry point:

-machine smbios-entry-point-type=32

  1. Right-click localhost (QEMU), and on the popup menu, select New.

    The Create a new virtual machine, Step 1 of 4 dialog box opens.

    1. In the Name field, enter a name for the connection.

    2. Select the import existing disk image method for installing the operating system, and then click Forward.

    3. Enter the path to the extracted qcow file, or click Browse and navigate to the file.

    4. Select the file, and then click Choose Volume.

    5. Expand OS type, select Linux, expand Version, select Red Hat Enterprise Linux 6, and then click Forward.

    6. In the Memory (RAM), enter the appropriate amount of memory (in megabytes) for your deployment (for example 4096 for a 4GB deployment).

    7. In the CPUs list, select the number of CPU cores appropriate for your deployment, and click Forward.

    8. Select Customize configuration before installing, and then click Advanced options.

    9. Select the network interface adapter that corresponds to your management IP address, and click Finish.

      The Virtual Machine configuration dialog box opens.

  2. Click Add Hardware.

    The Add New Virtual Hardware dialog box opens. Do one of the following:

    • If SR-IOV is NOT required, select Network.

      1. In the Host device list, select the network interface adapter for your external network, in the Device model list, select virtio, and then click Finish.
      2. Repeat the previous step for your internal and HA networks.
    • If SR-IOV is required, select PCI Host Device.

      1. Select the PCI device for the virtual function that is mapped to your host device’s external VLAN, and then click Finish.

        Tip

        Be sure to use the Virtual Function (VF) PCI Host Device instead of the Physical Function (PF) to take advantage of VE high-speed drivers.

        The following image illustrates adding a PCI VF Network Interface within the Virtual Machine Manager:

        ../_images/kvm_qemu1.png
      2. Repeat the previous step for your host device’s internal VLAN and HA VLAN.

  3. In the left pane, select Disk 1, and then click Advanced options.

    1. From the Disk bus list, select Virtio.
    2. In the Storage format list, select qcow2.
    3. Click Apply.
  4. Click Begin Installation.

    The Virtual Machine Manager creates the virtual machine configured as you defined.

To assist with configuring the management IP, consult the BIG-IP configuration utility tool.
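
The virt-manager procedure above can also be scripted with virt-install. This is a hedged sketch: the VM name, memory/CPU sizing, image path, and bridge name are example values of ours, not F5-prescribed settings.

```shell
# Sketch: CLI equivalent of the virt-manager import steps (names are examples).
sudo virt-install \
  --name bigip-ve1 \
  --memory 4096 \
  --vcpus 4 \
  --import \
  --disk path=/var/lib/libvirt/images/BIGIP-15.1.x.qcow2,format=qcow2,bus=virtio \
  --os-variant rhel6.0 \
  --network bridge=br-mgmt,model=virtio \
  --noautoconsole

# SR-IOV virtual functions can then be attached with virsh attach-device,
# as described in the Define SmartNIC configuration settings section.
```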

Define SmartNIC configuration settings

Do the following to define the settings that enable your BIG-IP VE SmartNIC on your server:

  1. On your server, set the following:

    • Under NIC, change the network source to Host device em1: macvtap, set the source mode to Bridge, and set the device model to virtio.

    • To add SmartNIC interfaces, click Add Hardware, click PCI Host Device, scroll down and select F5 Inc. VF (PCIe device ID = 0x0100), and then click Finish.

    • You can add multiple interfaces to each BIG-IP VE. When configuring the interfaces as a trunk, for optimal performance, you MUST configure the trunk members on different PCI buses in the operating system. For example:

      ../_images/snic1-0_OS_PCI.png

      SmartNIC 1.0 PCI Virtual Function

      ../_images/snic2-0_OS_PCI.png

      SmartNIC 2.0-2.0.1 PCI Virtual Function

      Note

      Do NOT use the physical functions.

  2. OPTIONAL: To verify that the SmartNIC driver is properly bound to the N3000 PAC, check the /var/log/tmm file. If the N3000 SmartNIC (HSBse) was properly discovered on the PCI bus and the SmartNIC driver was bound to the device, you will see something similar to the following:

    <13> May 15 07:28:01 www notice f5hsb1[0000:00:08.0]: ---------XNET PROBE of HSBse successful -----------
    
  3. Start the VE, and then type: bigstart restart tmm.

  4. Confirm that the xnet driver (also known as HSBse driver in log files) is registered with VE: tmctl -dblade -i tmm/device_probed.

    ../_images/smartnic-driverList.png
  5. Confirm that the XNET probe of HSBse succeeded: grep "XNET PROBE of HSBse successful" /var/log/ltm.

  6. License the BIG-IP VE plus the add-on SmartNIC license.

  7. To validate which DoS vectors are hardware-accelerated on the BIG-IP Virtual Edition v15.1.0.4, v15.1.4, or v15.1.6.1 AFM module for use with BIG-IP VE SmartNIC, consult the AFM DoS/DDoS Protection topic.

A single SmartNIC supports up to eight VEs, each with one virtual function (VF) assigned.
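As an aid to the trunk-member guidance in step 1 above (trunk members MUST sit on different PCI buses), this sketch shows how to group VF addresses by bus from lspci-style output. The device lines and PCI addresses below are fabricated samples so the parsing is clear; on the host, substitute the real lspci listing of your F5 VFs:

```shell
# Sample lspci-style lines (hypothetical PCI addresses) standing in for real output.
cat <<'EOF' > /tmp/f5_vfs.txt
af:00.1 Ethernet controller: F5 Inc. Device 0100
af:00.2 Ethernet controller: F5 Inc. Device 0100
b1:00.1 Ethernet controller: F5 Inc. Device 0100
EOF
# Group by the bus portion of the address; trunk members should come from different buses.
awk -F: '{print $1}' /tmp/f5_vfs.txt | sort -u
```

In this sample, a trunk built from af:00.1 and b1:00.1 spans two buses, while af:00.1 and af:00.2 would share one.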

Orchestrator User Guide

To optimize your Intel PAC N3000 SmartNIC for accelerating the BIG-IP VE, use the F5 SmartNIC Orchestrator utility.

Note

Screenshots of the F5 BIG-IP VE SmartNIC Orchestrator depicted in this guide may vary, depending upon the version you are using.

Deploy

Before deploying the F5 SmartNIC Orchestrator Docker container, read the F5 Licensing EULA.

  1. To verify that no container is running, type:

    1. Type docker ps and verify that nothing is returned.
    2. If a container is returned, type: docker stop <container ID>, where <container ID> is the value returned in the previous step.
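If other containers are running on the host, you can pick out just the Orchestrator's container ID rather than stopping everything. This sketch parses sample docker ps output (the container ID shown is fabricated); on the host, pipe real docker ps output through the same awk filter:

```shell
# Sample `docker ps` output with a fabricated container ID, for illustration only.
cat <<'EOF' > /tmp/docker_ps.txt
CONTAINER ID   IMAGE                                     NAMES
1a2b3c4d5e6f   f5networks/smartnic-orchestrator:stable   f5smartnic-orchtool
EOF
# On the host: docker ps | awk '/f5smartnic-orchtool/ {print $1}'
awk '/f5smartnic-orchtool/ {print $1}' /tmp/docker_ps.txt
```

The printed ID is what you pass to docker stop.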
  2. OPTIONAL: If required, log in to Docker: docker login.

  3. To pull the stable F5 SmartNIC Orchestrator Docker container, type the following: docker pull f5networks/smartnic-orchestrator:stable.

  4. To run the container, type the following, which will auto-accept the F5 EULA:

    sudo docker run -d -t -e TZ=America/Los_Angeles -e ACCEPT_EULA=Y -e DEBUG=N \
      --name f5smartnic-orchtool \
      --mount src=/lib/modules,target=/lib/modules,type=bind \
      --mount src=/usr/src,target=/usr/src,type=bind \
      --mount src=/dev,target=/dev,type=bind \
      --mount src=/var/log,target=/var/log,type=bind \
      --mount src=/var/lib,target=/var/lib,type=bind \
      --mount src=/usr/share/hwdata,target=/usr/share/hwdata,type=bind \
      --cap-add=ALL -p 8443:8443 --privileged=true \
      f5networks/smartnic-orchestrator:stable
    

    Note

    If instructed by F5 Support, enable the -e DEBUG=Y option for read/write capabilities to SmartNIC registers. When the flag is set to -e DEBUG=N, executing remote bash commands using the API or UI is disabled.

  5. Point your browser to https://{hostip}:8443/, and enter the following:

    • User name: admin
    • Password: admin

    Wait for the pipeline to finish.

  6. OPTIONAL: You can also set up the Docker Container to run automatically as a service.
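One way to run the container as a service (step 6) is a small systemd unit. This is a sketch under assumptions: the unit name and paths are hypothetical, and it reuses the f5smartnic-orchtool container name from the docker run command above. It is written to /tmp here for illustration only; on the host, install it to /etc/systemd/system/ and run systemctl daemon-reload && systemctl enable --now f5snic-orch:

```shell
# Hypothetical unit file; written to /tmp for illustration only.
cat <<'EOF' > /tmp/f5snic-orch.service
[Unit]
Description=F5 SmartNIC Orchestrator container (example)
After=docker.service
Requires=docker.service

[Service]
Restart=always
ExecStart=/usr/bin/docker start -a f5smartnic-orchtool
ExecStop=/usr/bin/docker stop f5smartnic-orchtool

[Install]
WantedBy=multi-user.target
EOF
grep -c '^Exec' /tmp/f5snic-orch.service
```

A simpler alternative, if you only need restart-on-boot behavior, is Docker's own restart policy: sudo docker update --restart unless-stopped f5smartnic-orchtool.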

Upgrade

Upgrade the Orchestrator BEFORE upgrading/updating your BIG-IP VEs. Pulling a new Docker container installs the latest common vulnerabilities and exposures (CVE) fixes for the container.

To pull the new Orchestrator from the Docker repository, do the following:

  1. Stop all VEs/VMs. For an immediate shutdown, in the BIG-IP VE terminal type: shutdown -H now.

  2. Stop the Orchestrator by typing: docker stop <container ID>. If you do not know the container ID, fetch it by typing: docker ps.

  3. Pull the stable F5 SmartNIC Orchestrator Docker container by typing: docker pull f5networks/smartnic-orchestrator:stable

  4. Run the new version of the F5 SmartNIC Orchestrator by typing:

    sudo docker run -d -t -e TZ=America/Los_Angeles -e ACCEPT_EULA=Y -e DEBUG=N \
      --name f5smartnic-orchtool \
      --mount src=/lib/modules,target=/lib/modules,type=bind \
      --mount src=/usr/src,target=/usr/src,type=bind \
      --mount src=/dev,target=/dev,type=bind \
      --mount src=/var/log,target=/var/log,type=bind \
      --mount src=/var/lib,target=/var/lib,type=bind \
      --mount src=/usr/share/hwdata,target=/usr/share/hwdata,type=bind \
      --cap-add=ALL -p 8443:8443 --privileged=true \
      f5networks/smartnic-orchestrator:stable
    
  5. Update the PCI devices assigned to your VMs, if they changed during PCI re-enumeration.

  6. Upgrade your BIG-IP VEs, and BEFORE restarting your VMs, update the Trunk Mode setting (step 4) in the Orchestrator.

Tip

  • The f5snic.config file in the /var/lib/f5snic/ directory contains all the configuration for the Orchestrator settings, such as optics. This file persists across updates and upgrades, so when you update to the latest container or upgrade to the newest version, the SmartNIC loads that f5snic.config file for the new Orchestrator, including passwords and other similar settings you defined in the previous version.
  • When upgrading an Orchestrator version (for example, Orchestrator 1.0.8 to Orchestrator 2.0.8), your old system settings persist and any new settings for the new version contain default values, until you change these values.
  • If you see this error during the upgrade process: “Error response from daemon: driver failed programming external connectivity on endpoint interesting_hodgkin (7b905f032370fe953153f837d8f64af78b248280cf17747ef373ad0609f57d6e): Bind for 0.0.0.0:8443 failed: port is already allocated”, then stop the Docker container: docker stop [container ID], and run the Docker container script again (step 4).

Change login credentials

To change login credentials, in the top-right corner of the window, click Change Credentials, change the username and password accordingly, confirm the password, and then click Update.

../_images/snic_login.png

Tip

If you forget the password, or to return the Orchestrator to its factory default configuration, delete the f5snic.config file in the /var/lib/f5snic/ directory. The f5snic.config file contains all the configuration for the Orchestrator settings, such as optics. Because this file persists across upgrades, when you upgrade, the SmartNIC loads that f5snic.config file for the new Orchestrator, including passwords and other similar settings.

To set the login credentials using the API

In the case of automation, consult the Credentials section of the API to change login credentials.

  1. Start the Orchestrator with the default admin/admin credentials.
  2. Use PUT /{nic}/AccountService/Accounts as shown in the following screenshot:
../_images/snic-apiPswd.png
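For automation, the credential change can be issued with curl. This is a hedged sketch: the host IP, the {nic} identifier (nic0), and the JSON field names are assumptions — confirm all of them against the Models section of the API tab before use. The command is echoed for review; remove the echo to execute:

```shell
HOSTIP="192.0.2.10"   # placeholder Orchestrator host IP
NIC="nic0"            # placeholder {nic} identifier
PAYLOAD='{"username":"admin","password":"newS3cret"}'   # field names are assumptions
echo curl -sk -u admin:admin -X PUT \
  -H "Content-Type: application/json" \
  -d "$PAYLOAD" \
  "https://${HOSTIP}:8443/${NIC}/AccountService/Accounts"
```

The -k flag skips certificate verification, which matches the Orchestrator's default self-signed certificate; drop it if you install a trusted certificate.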

Configuration pipeline

Click the Configuration Pipeline tab or the Home menu to view the status for and run the following pipeline stages:

  • SmartNIC present
  • SmartNIC Management Driver
  • SmartNIC Base image – detailing image and build version
  • SmartNIC F5 image – current bitfile version loaded
  • Enable SR-IOV – enablement status.
  • Configure SmartNIC
  • Enable Network Interface
../_images/smartnic_orch1.png

  1. To run an individual pipeline stage, click Retry in a stage row.

  2. To run the entire pipeline, in the top-right corner of the tab, click retryAll.

  3. Monitor your pipeline progress using the built-in terminal window:

    ../_images/smartnic_terminal.png

Tip

If you do NOT see a green checkmark next to the “SmartNIC present” message, then the Orchestrator is NOT detecting the SmartNIC. This can happen for several reasons: the fan setting in the BIOS was not readjusted to performance mode, causing the SmartNIC to overheat; a power cable was not connected to the card; or the Orchestrator was launched BEFORE a SmartNIC was installed in the server. You can confirm this detection error when running the following command returns nothing:

$ lspci -d 8086:b30

Settings

  1. Click the Settings tab to set possible optic modes:

    • 1 Optic
    • 2 Optics
    ../_images/smartnic_settings1.png

Consult the Configure the switch, cables, and optic topic for optic mode recommendations.

Note

For intrinsic mapping of the MACs to the actual VFs that they can reach, connect your Intel N3000 card to your upstream Ethernet switch or traffic generator. F5 recommends using QSFP28 (100G) optics and changing the port mode of your switch/traffic generator to 4 x 25G. The F5 SmartNIC supports a single port with two channels: one-optic mode and two-optic mode, using one MAC from each channel.

  2. Click Submit.

  3. To set the FEC mode, click one of the following options that your switch supports, and then click Submit. Consult your switch manufacturer's documentation for details regarding the supported FEC mode(s) for a 100G split mode to 4x25G.

    • no (No FEC) - To remain in compliance with IEEE standards, use this mode for diagnostic purposes only.
      1. BEFORE using this mode, you must stop any running SmartNIC VE VMs.
      2. Click the Diagnostics tab, and slide the SR-IOV switch to OFF.
      3. Click the Settings tab and select the no (no FEC) option.
      4. Click the Configuration Pipeline tab and in the top-right corner, click retryAll.
    • kr (Fire Code Forward Error Correction (IEEE 802.3 Clause 74))
    • rs (Reed Solomon Forward Error Correction (IEEE 802.3 Clause 108))
    ../_images/smartnic_settings2-0-fec.png
  4. To set the Trunk Mode, select Enable or Disable, and then click Submit.

    Tip

    When upgrading from SmartNIC 1.0 to 2.0-2.0.1, to optimize your VLAN traffic configuration, F5 recommends configuring this Trunk Mode setting accordingly, BEFORE passing traffic.

    • Enable the Trunk Mode in the Orchestrator when you aggregate/bond two 25 gig ports on the switch (trunk ports at your switch). Doing so ensures that packets are sent evenly between the two ports. When enabling trunk mode, VE will see a 50 gig VF; therefore, you have a few options for configuring VFs on VE:

      • Assign one VF interface to the VE and assign internal and external VLANs to the trunk
      • Assign one VF interface to the VE as a trunk, and assign internal and external VLANs to that trunk
      • Assign each VF interface to the same VE tenant, or assign the VFs to separate VEs, if operating in a multi-tenant configuration

      Note

      This is NOT true LACP signaling; therefore, F5 recommends turning OFF port channel or IEEE 802.3ad LACP bonding.

    • Disable the Trunk Mode in the Orchestrator if you are using only one 25 gig port and NOT trunking your ports at the switch. This guarantees that the hardware will learn the VLAN port for ingress packets and will send egress packets using that same VLAN port. You can still use two 25 gig channels into the Intel N3000 and disable the Trunk Mode. The system will store the port ID used on packet-ingress and will use that same port on packet-egress.

      ../_images/smartnic_settings2-0-trunk.png

Tip

When Trunk Mode is disabled, do the following:

  1. Select 1 optic mode with two 25 gig channels configured at the upstream switch.

  2. Use 2 VLANs; one for client traffic, and the other for server traffic.

  3. Verify that each 25 gig port at the switch is configured with 1 VLAN only (either client or server).

    The hardware records the VLANs used for ingress traffic, and will send egress traffic out using that same port. However, this topology limits you to approximately 25 gig throughput, because your client and server traffic are only ingress/egress using a single 25 gig link.
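The two-VLAN layout above can be expressed in tmsh. This is a hedged sketch: the VLAN names, interface numbers, and tag values are all placeholders, and the commands are echoed here for review rather than executed; run them on the BIG-IP VE after adjusting the placeholders for your deployment:

```shell
# Placeholders throughout: VLAN names, interfaces (1.1/1.2), and tags are examples.
# Client traffic on one 25 gig port, server traffic on the other.
echo "tmsh create net vlan client_vlan interfaces add { 1.1 { tagged } } tag 1101"
echo "tmsh create net vlan server_vlan interfaces add { 1.2 { tagged } } tag 1102"
```

Keeping each VLAN on its own interface matches the one-VLAN-per-port rule in step 3 above.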

Caution

When Trunk Mode is disabled, AVOID the following:

  • Assigning both VLANs to bonded/trunked 25 gig switch ports (bond/trunk).

    This topology is NOT recommended because it passes traffic inefficiently: the hardware must relearn the port used for ingress VLAN packets, requiring the switch to hash the ports across trunk lanes.

  • Using 1 VLAN with 2 switch ports trunked, as this also has the same inefficient effect on traffic.

  5. If the VE is deployed on an untagged network, in the Port Default VLAN section, enter in the QSFP text box the VLAN port ID used for sending untagged traffic from the VE. In this case, you MUST tag the SmartNIC VE VF interfaces and enter the tagged value in the Port Default VLAN text box. Typically, VE uses tagged VLANs for sending traffic.

    ../_images/smartnic_settings2-0-qsfp.png

Diagnostics

../_images/snic_diagnostics.png

Use the Diagnostics tab, to do the following:

  1. SmartNIC F5 image - do the following to reload an updated SmartNIC F5 image, reconnect the SmartNIC if it is disconnected, and/or restore the factory default settings for your SmartNIC:

    1. Stop all VEs/VMs. For an immediate shutdown, in the BIG-IP VE terminal type: shutdown -H now.
    2. In the SR-IOV row, slide the toggle to OFF. Wait for the SR-IOV status to display disabled.
    3. In the SmartNIC F5 image row, click Retry.
    4. To rerun the pipeline, click the Configuration Pipeline tab, and then click retryAll.
    5. Restart/reboot your VMs by typing: reboot.
  2. SR-IOV - slide the Enable/Disable switch accordingly. After enabling/disabling SR-IOV, on the Configuration Pipeline tab, you may need to click Retry for the following stages:

    • Configure SmartNIC
    • Enable Network Interface
  3. Run Diagnostics - do the following to validate the SmartNIC hardware (except for optical interfaces):

    1. Stop all VEs/VMs. For an immediate shutdown, in the BIG-IP VE terminal type: shutdown -H now.
    2. In the SR-IOV row, slide the toggle to OFF. Wait for the SR-IOV status to display disabled.
    3. In the Run Diagnostics row, click Retry. Wait for the status to return a Pass/Fail value.
    4. If you receive a Fail status, consult the /var/log/f5snic/f5snic.log.[date] log files for more information.
    5. To reload the SmartNIC functionality, in the SmartNIC F5 image row, click Retry, and then on the Configuration Pipeline tab, click retryAll.
    6. Restart/reboot your VMs by typing: reboot.
  4. SmartNIC Management Driver - click Uninstall to delete the currently installed SmartNIC Management Driver BEFORE updating to a new version.

  5. F5 SmartNIC bitfile - click Retry to validate the current F5 bitfile.

  6. Collect Snapshot - click Collect Snapshot to create a smartnic_snapshot.json file containing F5 VE SmartNIC diagnostic data, and then click Download. The smartnic_snapshot.json file downloads locally; provide it to the F5 Support team.

    To upload this smartnic_snapshot.json to iHealth for analysis:

    1. Copy/move the smartnic_snapshot.json file to the /config folder of the VE from which you want to upload to iHealth, for example:

      scp smartnic_snapshot.json root@10.238.8.196:/config/smartnic_snapshot.json
      
    2. On the VE, navigate to System -> Support -> New Support Snapshot.

    3. Either generate a qkview file or generate and upload to iHealth. The qkview uploaded to iHealth will contain the smartnic_snapshot.json file. Consult this video demo for complete iHealth instructions.

Tip

Advanced diagnostics: Malicious traffic consumes more CPU. The FPGA on board the SmartNIC (FPGA + SmartNIC) actively blocks malicious traffic before it can reach your CPU. Depending upon the attack, you will see a big difference in the amount of malicious traffic saturating your CPU. To TPS test your SmartNIC, do the following:

  1. To disable the SmartNIC hardware offload, use the following command. Without the protection of the FPGA + SNIC, approximately 500 Mbps of malicious traffic will max out your CPU to 100 percent:

    modify sys db dos.forceswdos value TRUE

  2. To enable the FPGA + SmartNIC (the default mode of operation), use the following command. The SmartNIC can then process over 30 Gbps of malicious traffic, because the FPGA + SmartNIC monitors, actively detects, and blocks the malicious traffic BEFORE it reaches your CPU/software layer:

    modify sys db dos.forceswdos value FALSE

  3. Consult your BIG-IP AFM dashboard for traffic statistics.

Status

Click the Status tab to verify the following:

  • MAC link status for hardware information, channel connection status for 1-optic and/or 2-optics mode
  • MAC stats for MAC channels 0 through 3
../_images/snic_status.png

You will see two or four MAC links for the channels used, depending upon the optic setting you defined. Click Clear MAC Stats to reset the stats for the MAC channels.

API

  1. To access the API, in the top-right corner, click the API tab, and then enter your login credentials at the prompt.

    ../_images/smartic_API1.png
  2. The F5 SmartNIC API tab describes how to use the following GET, POST, and PUT commands, which use an HTTPS scheme. Click each GET, POST, or PUT request to view the full descriptions.

    GET commands

    • smartnic - Returns SmartNIC Configuration Pipeline Status
    • services - Returns SmartNIC services inventory
    • configuration pipeline status - returns SmartNIC configuration pipeline status
    • operating conditions - Returns SmartNIC operating conditions
    • drivers - Returns SmartNIC drivers inventory
    • network - Returns the status of the network interfaces and optical configuration
    • virtualization - Returns the status of the PCI SR-IOV configuration
    • NicInitialization - Returns the SmartNIC configuration status
    • BaseImg - Returns the SmartNIC Factory FPGA status
    • F5Img - Returns the SmartNIC FPGA F5 Inc. Application Image status
    • Diagnostics - Get Intel Diagnostic test results - Diagnostics.GetSnapshot - Get all snapshot objects
    • registers - Returns the value of the register at the provided pci coordinates

    Post commands

    • Run Configuration Pipeline - Run Configuration Pipeline to prepare the system for SmartNIC
    • Drivers - Modules.install and Modules.uninstall Intel FPGA driver modules
    • Network:
      • /{nic}/Network/Actions/Interfaces.SetNumberOfOptics - Configures the network interface to use single or dual optics
      • /{nic}/Network/Actions/Interfaces.Enable - Configures the Intel MAC network interface CRC mode
      • /{nic}/Network/Actions/Interfaces.SetFECMode - Configures the network interface FEC Mode
      • /{nic}/Network/Actions/Interfaces.ClearMACStats - Clears the network mac stats.
      • /{nic}/Virtualization/Actions/Interfaces.TrunkMode.Enable - Enable Trunk Mode for SmartNIC device
      • /{nic}/Virtualization/Actions/Interfaces.TrunkMode.Disable - Disable Trunk Mode for SmartNIC device
      • /{nic}/Virtualization/Actions/Interfaces.SetDefaultVlan - Set Default VLAN for SmartNIC device
    • virtualization - Enable/disable SRIOV for SmartNIC device
    • NicInitialization - /{nic}/NicInitialization/Actions/NIC.Initialize to initialize SmartNIC device
    • BaseImg - /{nic}/BaseImg/Actions/BaseImg.Install to upgrade the SmartNIC Factory FPGA bitfile
    • F5Img:
      • /{nic}/F5Img/Actions/F5Img.Install to upgrade the SmartNIC F5 Inc. FPGA bitfile
      • /{nic}/F5Img/Actions/F5Img.ReloadF5Img to reload the SmartNIC F5 Inc. FPGA bitfile
    • Diagnostics - /{nic}/Diagnostics/Actions/Diagnostics.Run to run Intel Diagnostic tests
    • registers - /{nic}/Registers/Actions/Register.Write/{bus}/{pf}/{vf}/{addr} to write a value to the register at the provided PCI coordinates
    • Shell Execution - /{nic}/Commands/Actions/Command.Run to run remote shell commands
  3. Use the Credentials section to change Orchestrator access credentials.

    Put commands:

    • /{nic}/AccountService/Accounts - to update username and password credentials
  4. Expand Models, and then expand each request to view complete code examples.

    ../_images/smartic_API2.png

Uninstall F5 VE SmartNIC Orchestrator

To remove the F5 BIG-IP VE SmartNIC Orchestrator and/or restore your SmartNIC default factory settings, do the following:

  1. Stop all VEs/VMs. For an immediate shutdown, in the BIG-IP VE terminal type: shutdown -H now.

  2. On the Diagnostics tab, in the SR-IOV row, slide the toggle to OFF.

  3. On the Diagnostics tab, in the SmartNIC Management Driver row, click Uninstall to delete the currently installed SmartNIC Management Driver.

  4. To stop the SmartNIC Orchestrator container, in your terminal type: docker stop <container ID>. If you do not know the container ID, fetch it by typing: docker ps.

  5. To delete the SmartNIC container and Docker image, in your terminal type the following, where the image/container ID/name is the image/container you want to delete. If you do not know the container ID, fetch it by typing: docker ps.

    • Remove container:

      $ docker ps -a
      $ docker rm [OPTIONS] CONTAINER [CONTAINER...]
      

      Or remove all stopped containers:

      $ docker rm $(docker ps -a -q)
      
    • Remove image:

      $ docker images -a
      $ docker image rm [OPTIONS] IMAGE [IMAGE...]
      

Consult the Docker documentation for command usage details.