Manual CPU Allocation

On multiprocessor servers, the Service Proxy Traffic Management Microkernel (TMM) CPU cores must bind to core IDs on the same NUMA node. As described in the Cluster Requirements guide, the Ingress Controller relies on the OpenShift Performance Addon Operator to properly allocate and align TMM’s CPU cores during installation. If the OpenShift CPU management solutions cannot be used, you can manually allocate TMM CPU cores using this guide.

Important: Manual core allocation is not recommended, and leads to sub-optimal performance when scaling the TMM Pods.

Allocation guidelines

Use these guidelines to manually allocate TMM CPU cores when installing the Ingress Controller:

  • Avoid core IDs 0 and 1, which are used by various Kubernetes processes.
  • Select an even number of CPU core IDs, such as 2, 4, or 8.
  • Ensure the core IDs and the SR-IOV PFs/VFs share the same NUMA node.
  • Do not bind TMM threads to SMT hyperthreaded (HT) cores.
  • Do not run other processes on SMT HT cores that share physical cores with TMM threads.
  • Use Kubernetes labels to install TMM on cluster nodes with additional resources (see the labeling sketch after this list).
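
As a minimal sketch of the last guideline, you can label a worker node that has the additional CPU and hugepages resources, and then schedule TMM to it. The node name and the label key/value below are hypothetical examples, and the exact chart parameter that consumes the label (for example, a nodeSelector field) depends on the Ingress Controller Helm chart, so consult its values file:

oc label node worker-2.ocp.f5.com tmm=true

You can confirm the label with the oc get nodes --show-labels command.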

Core selection

Service Proxy TMM CPU cores are manually bound to specified NUMA node core IDs using the PAL_CPU_SET environment variable. To view and select CPU core IDs per NUMA node, use the following Linux CLI commands:

  1. Log in to the baremetal server command line interface (CLI), or launch an oc debug Pod:

    Note: The oc debug command creates a copy of the node in a new Pod, and opens a command shell.

    oc debug node/<node name>
    

    In this example, a debug Pod is created for the worker-2.ocp.f5.com node:

    oc debug node/worker-2.ocp.f5.com
    
  2. The lscpu command displays the number of NUMA nodes and the CPU IDs on each node (a non-interactive, per-core mapping is also sketched after this procedure):

    lscpu |grep NUMA
    
    NUMA node(s):        2
    NUMA node0 CPU(s):   0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38
    NUMA node1 CPU(s):   1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39
    
  3. The top command provides a real-time view of per-CPU utilization for a selected NUMA node. Enter the top command, press the 3 key, and at the expand which node (0-1) prompt, enter a NUMA node ID:

    top
    

    In this example, the NUMA node 0 CPU core IDs are displayed:

    top - 21:27:19 up 6 days,  4:37,  0 users,  load average: 4.73, 5.44, 6.07
    Tasks: 2169 total,   1 running, 2168 sleeping,   0 stopped,   0 zombie
    %Node0 :   3.2/2.7     6[||||||                
    %Cpu0  :  14.9/16.6   31[||||||||||||||||||||||||||||||||    
    %Cpu2  :   3.0/23.4   26[||||||||||||||||||||||||||      
    %Cpu4  :   6.3/5.3    12[|||||||||||                 
    %Cpu6  :   3.6/2.6     6[|||||||                
    %Cpu8  :   3.6/3.3     7[|||||||    
    %Cpu10 :   7.6/1.7     9[||||||||||        
    %Cpu12 :   2.0/1.6     4[||||             
    
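If you prefer a non-interactive, per-core view, the parsable lscpu output maps each CPU core ID to its NUMA node (a generic util-linux option, not specific to this product):

lscpu --parse=CPU,NODE

After a few comment lines, each line of the output is a cpu,node pair; with the layout shown above, core ID 6 appears as 6,0.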

Hugepages

Service Proxy TMM requires hugepages to enable DMA (direct memory access). When allocating TMM CPU cores, hugepages must be pre-allocated using the resources.limits.hugepages-2Mi parameter. To calculate the minimum amount of hugepages, use the following formula: 1.5GB x TMM thread count. For example, to bind 4 TMM threads, allocate 6GB of hugepages memory.

Note: The number of TMM threads will be reduced if the allocation value is insufficient.
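
The worker node itself must also have enough 2Mi hugepages pre-allocated. As a generic Linux check (not specific to this product), and assuming the node's default hugepage size is 2MB, you can view the node's hugepage pool from the oc debug shell shown earlier:

grep -i hugepages /proc/meminfo

The HugePages_Total value, multiplied by Hugepagesize, must be at least the amount set with the resources.limits.hugepages-2Mi parameter.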

Example override

When the tmm parameters below are placed in the Ingress Controller values file, the TMM container deploys with 2 CPU cores that bind to core IDs 6 and 8 on NUMA node 0:

Important: Do not use fractional CPU (millicpu) values, and use either Mi or Gi suffixes to set the hugepages and memory values.

tmm:

  customEnvVars:
   - name: PAL_CPU_SET
     value: "6,8"

  resources:
    limits:
      cpu: "2"
      hugepages-2Mi: "3Gi"
      memory: "2Gi"
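
For reference, if the Ingress Controller is installed with Helm, the override values file is passed with the -f option. The release name, chart reference, and file name below are placeholders; substitute the values used in your installation:

helm install <release name> <chart> -f <values file>.yaml -n <project>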

Verifying threads

Use the following commands to verify that the TMM CPU cores have bound successfully:

  1. Log in to the TMM Debug Sidecar:

    oc exec -it deploy/f5-tmm -c debug -n <project> -- bash
    

    In this example, the TMM Pod is in the spk-ingress Project:

    oc exec -it deploy/f5-tmm -c debug -n spk-ingress -- bash
    
  2. Run the process status command and filter for the tmm.0 process:

    ps aux | grep tmm.0
    

    In this example, --cpu 6,8 indicates the TMM CPU cores are properly bound:

    root          20 19.9  0.0 23527884 163244 ?     S<Ll 02:38   0:10 tmm.0 --cpu 6,8
    
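As an additional generic Linux check (not specific to the debug sidecar), you can also list the CPU affinity of each tmm.0 thread; the PID (20) comes from the ps output above, and the pinned TMM data-plane threads should each report one of the bound core IDs:

grep Cpus_allowed_list /proc/20/task/*/status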

Feedback

Provide feedback to improve this document by emailing spkdocs@f5.com.