SR-IOV PFs/VFs

SR-IOV uses Physical Functions (PFs) to segment compliant PCIe devices into multiple Virtual Functions (VFs), which appear as multiple PCIe devices to cluster nodes and Pods. Prior to installing the Ingress Controller on a multiprocessor system, ensure that the SR-IOV VFs defined in the OpenShift networking configuration connect to the same non-uniform memory access (NUMA) node.

This document demonstrates how to verify the OpenShift network configuration and determine which NUMA node the associated VFs connect to.
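
As background, the PF-to-VF relationship can be inspected directly on a worker node through the standard Linux SR-IOV sysfs attributes. The following is a minimal sketch, run from an oc debug Pod on a worker node (see the NUMA IDs procedure below), assuming the PF names ens1f0 and ens1f1 used in the examples in this document:

    # For each PF, show how many VFs the device can expose and how many are
    # currently created (spec.numVfs in the SriovNetworkNodePolicy controls this).
    for pf in ens1f0 ens1f1; do
      echo "PF: ${pf}"
      echo "  VFs supported: $(cat /sys/class/net/${pf}/device/sriov_totalvfs)"
      echo "  VFs created:   $(cat /sys/class/net/${pf}/device/sriov_numvfs)"
    done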

Requirements

Ensure you have:

Procedures

SR-IOV configurations

Use these steps to verify that the network node policies (SriovNetworkNodePolicy) and network attachment definitions (net-attach-def) are configured correctly. A consolidated query that combines these checks is sketched after the procedure.

  1. Verify the SriovNetworkNodePolicy objects have been created:

    oc get sriovnetworknodepolicies -n openshift-sriov-network-operator \
    -o "custom-columns=Node Policy:.metadata.name" | grep -iv default
    

    In this example, the object names include the name of the associated PF:

    Node Policy
    ens1f0-netdev-policy
    ens1f1-netdev-policy
    
  2. Obtain the Resource Name of the SriovNetworkNodePolicy objects:

    Note: Network attachment definitions use the Resource Name to reference a network node policy.

    oc get sriovnetworknodepolicies -n openshift-sriov-network-operator \
    -o "custom-columns=Resource Name:.spec.resourceName" 
    

    In this example, the Resource Name resembles the network node policy name, in camelCase format:

    Resource Name
    ens1f0NetdevPolicy
    ens1f1NetdevPolicy
    
  3. Obtain the PFs that each SriovNetworkNodePolicy uses to create its VFs:

    Note: This helps determine which NUMA node the VFs connect to, because VFs connect to the same NUMA node as their parent PF.

    oc get sriovnetworknodepolicies -n openshift-sriov-network-operator \
    -o "custom-columns=PFs:.spec.nicSelector.pfNames[0]" | grep -iv none
    

    In this example, the PFs are ens1f0 and ens1f1:

    PFs
    ens1f0
    ens1f1
    
  4. Verify the network attachments have been created in the Project:

    oc get net-attach-def -n <project>
    

    In this example, the network attachments are in the spk-ingress Project:

    oc get net-attach-def -n spk-ingress
    
    ens1f0-netdevice-internal
    ens1f1-netdevice-external
    
  5. Verify the SriovNetworkNodePolicy Resource Name referenced by the network attachments:

    oc get net-attach-def -n <project> \
    -o "custom-columns=Resource Names:.metadata.annotations"
    

    In this example, the network attachments are in the spk-ingress Project:

    oc get net-attach-def -n spk-ingress \
    -o "custom-columns=Resource Names:.metadata.annotations"
    

    In this example, ens1f0-netdevice-internal references ens1f0NetdevPolicy, and ens1f1-netdevice-external references ens1f1NetdevPolicy as expected:

    Resource Names
    map[k8s.v1.cni.cncf.io/resourceName:openshift.io/ens1f0NetdevPolicy]
    map[k8s.v1.cni.cncf.io/resourceName:openshift.io/ens1f1NetdevPolicy]
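
The queries in steps 2, 3, and 5 can also be combined into one consolidated check. The following is a minimal sketch, assuming the spk-ingress Project and the object names used in the examples above, and that every net-attach-def in the Project was created by the SR-IOV Network Operator (and therefore carries the resourceName annotation):

    # Print each non-default node policy with its Resource Name and selected PF.
    oc get sriovnetworknodepolicies -n openshift-sriov-network-operator \
    -o "custom-columns=POLICY:.metadata.name,RESOURCE:.spec.resourceName,PF:.spec.nicSelector.pfNames[0]" \
    | grep -iv default

    # Print each network attachment with the Resource Name it references.
    oc get net-attach-def -n spk-ingress -o \
    jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.annotations.k8s\.v1\.cni\.cncf\.io/resourceName}{"\n"}{end}'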
    

NUMA IDs

Use these steps to launch an oc debug Pod and obtain the NUMA node IDs for the PFs (and their VFs) and CPUs. A combined check is sketched after the procedure.

  1. Obtain the names of the OpenShift cluster worker nodes:

    oc get nodes | grep worker
    
    NAME                  STATUS   ROLES
    worker-1.ocp.f5.com   Ready    worker
    worker-2.ocp.f5.com   Ready    worker
    worker-3.ocp.f5.com   Ready    worker
    
  2. Select one of the workers, and launch an oc debug Pod on that worker:

    oc debug node/worker-1.ocp.f5.com
    
  3. Verify the NUMA node IDs for the PFs:

    cat /sys/class/net/<pf>/device/numa_node
    

    In this example, both PFs (and therefore their VFs) connect to NUMA node 0:

    cat /sys/class/net/ens1f0/device/numa_node
    0
    
    cat /sys/class/net/ens1f1/device/numa_node
    0
    
  4. Verify the NUMA node IDs for the CPUs:

    Note: The NUMA node CPU IDs can be used to bind TMM threads to CPU cores.

    lscpu | grep NUMA
    
    NUMA node(s):        2
    NUMA node0 CPU(s):   0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38
    NUMA node1 CPU(s):   1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39
    
  5. Continue to the Manual CPU Allocation guide to properly map TMM threads to CPU cores.
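
The checks in steps 3 and 4 can be combined into one short script. The following is a minimal sketch, run inside the oc debug Pod from step 2, assuming the PF names ens1f0 and ens1f1 from the examples above and that TMM threads will be bound to CPUs on NUMA node 0:

    # Print the NUMA node of each PF; VFs created from a PF report the same node.
    for pf in ens1f0 ens1f1; do
      printf '%s: NUMA node %s\n' "${pf}" "$(cat /sys/class/net/${pf}/device/numa_node)"
    done

    # List each CPU with its NUMA node in parseable form, then keep only the
    # CPU IDs on NUMA node 0 (candidates for binding TMM threads).
    lscpu -p=CPU,NODE | grep -v '^#' | awk -F, '$2 == 0 {print $1}'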

Supplemental