Configure SR-IOV Network Device Plugin

The SR-IOV Network Device Plugin is used to discover and advertise networking resources available on a Kubernetes host in the following forms:

  • SR-IOV virtual functions (VFs)

  • PCI physical functions (PFs)

  • Auxiliary network devices, in particular Subfunctions (SFs).

SFs created on the DPU nodes must be exposed to the TMM pods for traffic processing by configuring the sriovdp-config ConfigMap. To enable this exposure, create the ConfigMap with the SFs and their corresponding PF#sf numbers for each PCI device on which they were created.
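The PF#sf notation identifies a range of SF indexes on a given physical function. As an illustrative fragment only (actual PF names and index ranges depend on your environment; the complete ConfigMap example appears in the next section), a selector such as the following exposes SFs 1 through 2 on port `p0`:

```json
"selectors": [{
    "pfNames": ["p0#1-2"],
    "auxTypes": ["sf"]
}]
```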

Configure the SFs in the SR-IOV Network Device Plugin ConfigMap

To create multiple SFs (SF pool) for each Physical Function (PF), see Scalable Functions.

Note: For a new DPU setup or after a DPU reboot, SFs need to be created again; see Scalable Functions.

To configure the SFs in the SR-IOV Network Device Plugin ConfigMap, follow the instructions below:

  1. Create the sf-cm.yaml file with the example contents below.

vi sf-cm.yaml
Example Contents:

> **Note**:
> Make sure to create multiple SFs on the DPU, update the OVS bridge with the SF interfaces, and specify them in the SR-IOV ConfigMap using the `pfNames` parameter. In the example ConfigMap below, eight SFs are specified.
> Make sure to update the values in the example content with the actual values for your environment.

```{literalinclude} _static/codesnippets/sf-cm.yaml
```
  1. Apply the SF ConfigMap.

    kubectl apply -f sf-cm.yaml
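Optionally, confirm that the ConfigMap exists. This check assumes the ConfigMap is named sriovdp-config and is created in the kube-system namespace, as defined in your sf-cm.yaml:

    kubectl get configmap sriovdp-config -n kube-system -o yaml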
    

Install or Update SR-IOV Network Device Plugin for Kubernetes

To install or update the SR-IOV Network Device Plugin for Kubernetes, follow the instructions below:

A. If sriov-dp is already installed, apply the following patch:

kubectl patch daemonset kube-sriov-device-plugin -n kube-system --type='json' -p='[{"op": "add", "path": "/spec/template/spec/tolerations", "value": [{"effect": "NoSchedule", "operator": "Exists"}]}]'
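Optionally, confirm that the patched pods restart cleanly (a standard kubectl check):

    kubectl rollout status daemonset/kube-sriov-device-plugin -n kube-system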

B. If sriov-dp is not installed, follow the steps below:

  1. Download the sriovdp-daemonset.yaml.

wget https://raw.github.com/k8snetworkplumbingwg/sriov-network-device-plugin/master/deployments/sriovdp-daemonset.yaml

  1. Add tolerations to the sriovdp-daemonset.yaml file to allow the sriovdp pods to run on the DPU nodes. Add the section below under the pod template `spec` (that is, under `spec.template.spec`).

tolerations:
  - key: "dpu"
    value: "true"
    operator: "Equal"
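For orientation, the tolerations sit under the DaemonSet's pod template, which matches the `/spec/template/spec/tolerations` path used by the patch above. An illustrative excerpt of sriovdp-daemonset.yaml (surrounding fields omitted):

```yaml
spec:
  template:
    spec:
      tolerations:
        - key: "dpu"
          value: "true"
          operator: "Equal"
```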
  1. Apply the SR-IOV device plugin DaemonSet.

kubectl apply -f sriovdp-daemonset.yaml
  1. Verify that an SR-IOV device plugin pod is created for each node in the cluster.

kubectl get pods -owide -n kube-system

Sample Output:

kube-sriov-device-plugin-nstrs    1/1     Running   0              2d2h    <IP address>   localhost.localdomain   <none>           <none>
kube-sriov-device-plugin-p8mv5    1/1     Running   0              2d22h   <IP address>    sm-hgx1                 <none>           <none>
  1. Verify the pod deployed for the DPU by checking its logs. You should see the SF ConfigMap read by the pod and resources created for each SF. The pod iterates through all the PCI resources but eventually locates the correct ones.

kubectl logs pod/kube-sriov-device-plugin-nstrs -n kube-system

In the example logs below, look for the new resource servers created for the bf3_p0_sf and bf3_p1_sf ResourcePools.

I0814 15:33:51.566759       1 manager.go:57] Using Kubelet Plugin Registry Mode
I0814 15:33:51.567877       1 main.go:46] resource manager reading configs
I0814 15:33:51.568002       1 manager.go:86] raw ResourceList: {
    "resourceList": [
        {
            "resourceName": "bf3_p0_sf",
            "resourcePrefix": "nvidia.com",
            "deviceType": "auxNetDevice",
            "selectors": [{
                "vendors": ["15b3"],
                "devices": ["a2dc"],
                "pciAddresses": ["0000:03:00.0"],
                "pfNames": ["p0#1-2"],
                "auxTypes": ["sf"]
            }]
        },
        {
            "resourceName": "bf3_p1_sf",
            "resourcePrefix": "nvidia.com",
            "deviceType": "auxNetDevice",
            "selectors": [{
                "vendors": ["15b3"],
                "devices": ["a2dc"],
                "pciAddresses": ["0000:03:00.1"],
                "pfNames": ["p0#1-2"],
                "auxTypes": ["sf"]
            }]
        }
    ]
}

Sample Output:

I0306 06:37:22.534749       1 factory.go:203] *types.AuxNetDeviceSelectors for resource bf3_p0_sf is [0x400039d520]
I0306 06:37:22.534796       1 factory.go:203] *types.AuxNetDeviceSelectors for resource bf3_p1_sf is [0x400039d6c0]
I0306 06:37:22.534805       1 manager.go:106] unmarshalled ResourceList: [{ResourcePrefix:nvidia.com ResourceName:bf3_p0_sf DeviceType:auxNetDevice ExcludeTopology:false Selectors:0x4000302408 AdditionalInfo:map[] SelectorObjs:[0x400039d520]} {ResourcePrefix:nvidia.com ResourceName:bf3_p1_sf DeviceType:auxNetDevice ExcludeTopology:false Selectors:0x4000302420 AdditionalInfo:map[] SelectorObjs:[0x400039d6c0]}]
I0306 06:37:22.534876       1 manager.go:217] validating resource name "nvidia.com/bf3_p0_sf"
I0306 06:37:22.534894       1 manager.go:217] validating resource name "nvidia.com/bf3_p1_sf"
I0306 06:37:22.534901       1 main.go:62] Discovering host devices
I0306 06:37:22.619234       1 auxNetDeviceProvider.go:84] auxnetdevice AddTargetDevices(): device found: 0000:03:00.0  02            Mellanox Technolo...  MT43244 BlueField-3 integrated Connec...
I0306 06:37:22.619318       1 auxNetDeviceProvider.go:84] auxnetdevice AddTargetDevices(): device found: 0000:03:00.1  02            Mellanox Technolo...  MT43244 BlueField-3 integrated Connec...
I0306 06:37:22.619329       1 netDeviceProvider.go:67] netdevice AddTargetDevices(): device found: 0000:03:00.0  02            Mellanox Technolo...  MT43244 BlueField-3 integrated Connec...
I0306 06:37:22.621099       1 netDeviceProvider.go:67] netdevice AddTargetDevices(): device found: 0000:03:00.1  02            Mellanox Technolo...  MT43244 BlueField-3 integrated Connec...
I0306 06:37:22.623488       1 main.go:68] Initializing resource servers
I0306 06:37:22.623530       1 manager.go:117] number of config: 2
I0306 06:37:22.623555       1 manager.go:121] Creating new ResourcePool: bf3_p0_sf
I0306 06:37:22.623561       1 manager.go:122] DeviceType: auxNetDevice
  1. Examine the DPU node to ensure that it has the correct resources available.

kubectl describe node localhost.localdomain

Sample Output:

Name:               localhost.localdomain
Capacity:
  nvidia.com/bf3_p0_sf:  2
  nvidia.com/bf3_p1_sf:  2
Allocatable:
  nvidia.com/bf3_p0_sf:  2
  nvidia.com/bf3_p1_sf:  2
  1. Verify the Multus logs on the DPUs (stored at the logFile path configured in the Network Attachment Definition) to confirm SF movement.

time="2025-03-12T11:11:35.683978599Z" level="debug" msg="moving SF from container netns to hostns" cniName="f5-eowyn" containerID="f485946e6e46e36596127d8c8e40b48899beae012b6297f7df4907c028ad88e7" netns="/var/run/netns/cni-92a4caa0-ab6c-b782-3d71-07cee13bda02" ifname="net1" container-netns="/var/run/netns/cni-92a4caa0-ab6c-b782-3d71-07cee13bda02" host-netns="/var/run/netns/cni-92a4caa0-ab6c-b782-3d71-07cee13bda02"

Multus Network Attachment Definition

A Network Attachment Definition is a Multus configuration that connects an underlying network (SF) to a Kubernetes pod (TMM).
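The TMM pods themselves are deployed by the BIG-IP Next for Kubernetes installation, so you do not write their specifications. For reference only, Multus attaches a Network Attachment Definition to a pod through the `k8s.v1.cni.cncf.io/networks` annotation, along the lines of this hypothetical sketch (all names and the image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-consumer                      # hypothetical pod, for illustration only
  annotations:
    k8s.v1.cni.cncf.io/networks: sf-external  # name of the Network Attachment Definition
spec:
  containers:
    - name: app
      image: registry.example.com/app:latest  # placeholder image
      resources:
        requests:
          nvidia.com/bf3_p0_sf: "1"           # SF resource advertised by the device plugin
        limits:
          nvidia.com/bf3_p0_sf: "1"
```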

Prerequisites

Before you install BIG-IP Next for Kubernetes, ensure that the following prerequisites are met:

Note: Make sure that the Network Attachment Definition is created in the same namespace where you plan to install FLO and BIG-IP Next for Kubernetes.

  1. Create a net-attach-def.yaml file with the contents below. Configure the `name` fields (both `metadata.name` and the `name` inside `config`) as per your configuration:

vi net-attach-def.yaml

Example Contents:

apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
    name: sf-external    # configure the name as per your configuration
    annotations:
        k8s.v1.cni.cncf.io/resourceName: nvidia.com/bf3_p0_sf
spec:
    config: '{
    "type": "sf",
    "cniVersion": "0.3.1",
    "name": "sf-external",
    "logLevel": "debug",
    "logFile": "/tmp/sf-external.log"
}'

---
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
    name: sf-internal    # configure the name as per your configuration
    annotations:
        k8s.v1.cni.cncf.io/resourceName: nvidia.com/bf3_p1_sf
spec:
    config: '{
    "type": "sf",
    "cniVersion": "0.3.1",
    "name": "sf-internal",
    "logLevel": "debug",
    "logFile": "/tmp/sf-internal.log"
}'
  1. Create the Network Attachment Definition custom resource in the f5-operators namespace or a namespace of your choosing.

kubectl apply -f net-attach-def.yaml -n f5-operators
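To confirm that the definitions were created (adjust the namespace if you applied them elsewhere):

    kubectl get network-attachment-definitions -n f5-operators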

Multus Network Attachment Definition Parameters

The table below describes the spec.networkAttachment parameters.

| Parameter | Type | Description |
| --- | --- | --- |
| config.cniVersion | string | Specifies the version of the CNI (Container Network Interface) used for networking within Kubernetes. |
| config.ipam.gateway | string | Specifies the gateway IP address for the network managed by the IP Address Management (IPAM) system. |
| config.ipam.rangeEnd | string | Specifies the last IP address available for the network. |
| config.ipam.rangeStart | string | Specifies the first IP address available for the network. |
| config.ipam.routes.dst | string | Specifies the destination IP address for a route. |
| config.ipam.subnet | string | Specifies the subnet (IP address range) for the network. |
| config.ipam.type | string | Specifies the type of IPAM plugin used to manage IP address assignment for the network. |
| config.name | string | Specifies the name of the network configuration. |
| config.pciBusID | string | Specifies the PCI (Peripheral Component Interconnect) bus ID of the network device (such as a network interface card or SR-IOV device) used for the network attachment. |
| config.type | string | Specifies the type of network attachment or the network plugin being used. |
| name | string | Specifies the name of the network attachment. It must match the netAttachments applied in the same namespace as the infra/instance. |
| namespace | string | Specifies the namespace where the network configuration is defined, ensuring that the correct network setup is applied to the pod or container. Not required if the definition is applied in the correct namespace. |
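The example Network Attachment Definition above uses only a minimal `config`. When the attachment also carries its own IP addressing, the IPAM-related parameters from this table are combined inside the `config` JSON. The following is a hypothetical sketch only; all addresses are placeholders, and the exact fields supported depend on your CNI and IPAM plugin versions:

```json
{
    "type": "sf",
    "cniVersion": "0.3.1",
    "name": "sf-external",
    "pciBusID": "0000:03:00.0",
    "ipam": {
        "type": "host-local",
        "subnet": "192.0.2.0/24",
        "rangeStart": "192.0.2.10",
        "rangeEnd": "192.0.2.20",
        "gateway": "192.0.2.1",
        "routes": [
            { "dst": "0.0.0.0/0" }
        ]
    }
}
```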