System Requirements

Overview

Prior to integrating Service Proxy for Kubernetes (SPK) into the OpenShift cluster, review this document to ensure the required software components are installed and properly configured.

Software support

The SPK and Red Hat software versions listed below are the tested versions. F5 recommends these versions for the best performance and installation experience.

SPK             | OpenShift
2.2.0 and 2.2.1 | 4.14, 4.16, 4.18, and 4.20
2.1.0           | 4.14 and 4.16
2.0.0 - 2.0.2   | 4.14 and 4.16
1.9.2           | 4.14
1.9.1           | 4.14
1.9.0           | 4.12 and 4.14
1.8.2           | 4.12
1.8.0           | 4.12
1.7.15 and 1.7  | 4.14
1.7.14          | 4.12 and 4.14

Note

See the NIC Table for the supported OpenShift patch versions for each NIC.

Pod Networking

To support low-latency 5G workloads, SPK relies on Single Root I/O Virtualization (SR-IOV) and the Open Virtual Network with Kubernetes (OVN-Kubernetes) CNI. SPK supports Kubernetes versions based on vanilla Kubernetes 1.x and CNIs such as Calico and OpenShift's OVN.

To ensure the cluster supports multi-homed Pods, that is, the ability to select either the default (virtual) CNI or the SR-IOV / OVN-Kubernetes (physical) CNI, review the sections below.
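As a minimal sketch of what a multi-homed Pod looks like, the manifest below attaches two secondary networks through the `k8s.v1.cni.cncf.io/networks` annotation in addition to the default CNI. The Pod, image, and network attachment names are hypothetical; the actual attachment names come from your network attachment definitions.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: multihomed-example                 # hypothetical name
  annotations:
    # Attach the SR-IOV (physical) networks alongside the default CNI;
    # these names must match existing NetworkAttachmentDefinitions.
    k8s.v1.cni.cncf.io/networks: sriov-net-external, sriov-net-internal
spec:
  containers:
    - name: app
      image: registry.example.com/app:latest   # hypothetical image
```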

Network Operator

To properly manage the cluster networks, the OpenShift Cluster Network Operator must be installed.

Important

OpenShift 4.8 requires configuring local gateway mode using the steps below:

  1. Create the manifest files:

    openshift-install --dir=<install dir> create manifests
    
  2. Create a ConfigMap in the new manifest directory, and add the following YAML code:

    apiVersion: v1
    kind: ConfigMap
    metadata:
        name: gateway-mode-config
        namespace: openshift-network-operator
    data:
        mode: "local"
    immutable: true
    
  3. Create the cluster:

    openshift-install create cluster --dir=<install dir>
    
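The three steps above can be sketched as a single shell sequence. This is a hypothetical walkthrough assuming the install directory is `./ocp-install`; the `openshift-install` invocations are shown as comments because they require the installer binary and cluster credentials.

```shell
# Step 1: generate the manifests (commented; requires openshift-install):
# openshift-install --dir=ocp-install create manifests

mkdir -p ocp-install/manifests

# Step 2: add the gateway-mode ConfigMap to the manifests directory:
cat <<'EOF' > ocp-install/manifests/gateway-mode-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
    name: gateway-mode-config
    namespace: openshift-network-operator
data:
    mode: "local"
immutable: true
EOF

# Step 3: create the cluster (commented; requires openshift-install):
# openshift-install create cluster --dir=ocp-install
```

Placing the ConfigMap in the manifests directory before running `create cluster` lets the installer apply it during cluster bring-up.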

Note

See the Cluster Network Operator installation documentation on GitHub.

SR-IOV

Supported NICs

This table lists the NICs validated in the lab and the corresponding OpenShift Container Platform (OCP) versions. You can use any OCP version that is the same as or later than the listed patch level. When the major and minor versions match, all patches within that major.minor version are compatible.

Note

If you require explicit validation, contact the F5 sales team.

NICs | VF PCI IDs | PF PCI IDs | Kernel Network Driver | Network Driver Version | Firmware | OCP Version
Intel XXV710 | 8086:154c | 8086:158b | i40e | 5.14.0-284.52.1.el9_2.x86_64 | 9.10 | 4.14.13
Intel E810 | 8086:1889 | 8086:1592 | ice | 1.17.2 | 4.0.0 -4.400x8001c9671.3534.0 | 4.14.14
Mellanox ConnectX-5 | 15b3:1018 | 15b3:1017 | mlx5_core | 4.18.0-372.58.1.el8_6.x86_64 | 16.35.3006 | 4.14.51/4.16.47
Mellanox ConnectX-6 Dx* | 15b3:101f | 15b3:101e | mlx5_core | 5.14.0-284.52.1.el9_2.x86_64 | 26.41.1000 | 4.14.51
Mellanox ConnectX-7* | 15b3:1021 | 15b3:101e | mlx5_core | 5.14.0-427.97.1.el9_4.x86_64 | 28.41.1000 (MT_0000001045) | 4.16.52
Broadcom BCM57414 NetXtreme-E 10Gb/25Gb | 14e4:16d7 | 14e4:16dc | bnxt_en | 5.14.0-427.97.1.el9_4.x86_64 | 234.0.150.0/pkg 234.1.124.0 | 4.16.52/4.18.28
Broadcom BCM57508 NetXtreme-E 10Gb/25Gb/40Gb/50Gb/100Gb/200Gb | 14e4:1750 | 14e4:1806 | bnxt_en | 5.14.0-427.97.1.el9_4.x86_64 | 234.0.145.0/pkg 234.1.124.0 | 4.16.40
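To verify that a worker node exposes one of the VF PCI IDs listed above, you can match `lspci -nn` output against the table. The sketch below uses a captured sample line (an assumption) instead of running `lspci` so the check works anywhere; on a real node, substitute the actual `lspci -nn` output.

```shell
# Hypothetical check of a node's VF PCI IDs against the supported-NICs table.
# Sample line standing in for real `lspci -nn` output (assumption):
sample='3b:02.0 Ethernet controller [0200]: Intel Corporation Ethernet Virtual Function 700 Series [8086:154c]'

# VF PCI IDs from the supported-NICs table above:
supported='8086:154c 8086:1889 15b3:1018 15b3:101f 15b3:1021 14e4:16d7 14e4:1750'

found=''
for id in $supported; do
  case "$sample" in
    *"$id"*) found="$id"; echo "supported VF: $id" ;;
  esac
done
[ -n "$found" ] || echo "no supported VF PCI ID found in sample"
```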

VF Configuration

To define the SR-IOV Virtual Functions (VFs) used by the Service Proxy Traffic Management Microkernel (TMM), configure the following OpenShift network objects:

  • An external and internal Network node policy.

  • An external and internal Network attachment definition.

    • Set the spoofChk parameter to off.

    • Set the trust parameter to on.

    • Set the capabilities parameter to '{"mac": true, "ips": true}'.

    • Do not set the vlan parameter; set the F5SPKVlan tag parameter instead.

    • Do not set the ipam parameter; set the F5SPKVlan internal parameter instead.
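The bullets above can be sketched as a network attachment definition. This is a minimal illustration, not the authoritative SPK configuration (see the SPK Config File Reference for that); the metadata name and resource name are hypothetical.

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: sriov-net-external                  # hypothetical name
  annotations:
    k8s.v1.cni.cncf.io/resourceName: openshift.io/sriov_ext   # hypothetical resource
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "sriov",
    "spoofchk": "off",
    "trust": "on",
    "capabilities": { "mac": true, "ips": true }
  }'
```

Note that the `vlan` and `ipam` keys are deliberately omitted: VLAN tagging and internal addressing are handled by the F5SPKVlan custom resource instead.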

Note

Refer to the SPK Config File Reference for examples.

CPU Allocation

Multiprocessor servers divide memory and CPUs into multiple NUMA nodes, each with a non-shared system bus. When installing the SPK Controller, the CPUs and SR-IOV VFs allocated to the Service Proxy TMM container must share the same NUMA node. To ensure the cluster handles CPU NUMA node alignment properly, set the following parameters:

  • Set the Performance Profile’s Topology Manager Policy to single-numa-node.

  • Set the CPU Manager Policy to static in the Kubelet configuration.

  • When the full-pcpus-only policy option is specified along with the static CPU Manager policy, an additional check in the static policy's allocation logic ensures that only full cores are allocated. Because of this check, a Pod never has to acquire single threads to fill partially-allocated cores.

  • If the Kubernetes instance is configured properly with the static CPU policy and the full-pcpus-only option, then when TMM starts with the correct count of CPU resources (for example, 2), it is assigned both threads of the same core, meaning it is assigned the whole core. No thread of that core should be assigned to another workload.

  • However, the behavior still depends on how Kubernetes is configured and how TMM starts. There are two stages:

    1. When everything is configured properly and TMM receives two threads of the same core, mapres currently detects the two threads as separate cores and therefore starts two TMM threads. This initial mode should log a warning about the performance implications.

    2. In a future implementation, when TMM starts with two threads, mapres (or a similar component) will detect SMT, validate that the threads belong to the same core, and use only one of them, i.e., start only a single TMM thread per core, effectively utilizing the entire core.
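The first two bullets above can be sketched as cluster configuration. This is an illustrative fragment under assumed names and CPU sets, not the authoritative SPK setup: the profile names, label selector, and `isolated`/`reserved` ranges are hypothetical and must match your worker nodes.

```yaml
# Topology Manager policy via a PerformanceProfile (hypothetical name and CPU sets):
apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: spk-numa-profile
spec:
  cpu:
    isolated: "4-31"          # example: CPUs handed to latency-sensitive Pods
    reserved: "0-3"           # example: CPUs kept for system/housekeeping
  numa:
    topologyPolicy: single-numa-node
---
# Static CPU Manager with full-pcpus-only via a KubeletConfig (hypothetical name):
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: spk-cpumanager
spec:
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/worker: ""
  kubeletConfig:
    cpuManagerPolicy: static
    cpuManagerPolicyOptions:
      full-pcpus-only: "true"
```

On OpenShift, a PerformanceProfile can also manage the kubelet CPU Manager itself; the separate KubeletConfig is shown only to make both bullets explicit.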

Simultaneous Multithreading (SMT)

CNFs support deployments in hyperthreading-enabled environments, enhancing scalability and resource utilization. This feature allows TMM to effectively manage logical CPUs, ensuring high performance in hyperthreaded setups.

For more information on managing this feature, see Simultaneous Multithreading and TMM Values sections.

Persistent storage

The required Fluentd logging collector, the dSSM database, and the Traffic Management Microkernel (TMM) Debug Sidecar require available Kubernetes persistent storage to bind to during installation.
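As a minimal sketch of the kind of claim these components bind to, the PersistentVolumeClaim below requests storage from an existing StorageClass. All names, the namespace, the StorageClass, and the size are hypothetical; the actual claims are defined by the SPK installation, and the cluster only needs a StorageClass (or pre-provisioned PersistentVolumes) able to satisfy them.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fluentd-storage            # hypothetical name
  namespace: spk-utilities         # hypothetical namespace
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: managed-nfs    # hypothetical StorageClass; must exist in the cluster
  resources:
    requests:
      storage: 10Gi                # example size
```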

Feedback

Provide feedback to improve this document by emailing spkdocs@f5.com.

Supplemental