Cluster Requirements¶
Overview¶
Prior to integrating Cloud-Native Network Functions (CNFs) into the OpenShift cluster, review this document to ensure the required software components are installed and properly configured.
Software support¶
The CNFs and Red Hat software versions listed below are the tested versions. F5 recommends these versions for the best performance and installation experience.
| CNFs | OpenShift |
|---|---|
| 2.1.0 | 4.16.30 |
| 2.0.0-2.0.1 | 4.16.30 |
| 1.2.1 - 1.4.0 | 4.12 and 4.14 |
| 1.1.1 | 4.12 |
| 1.1.0 | 4.10.32 |
Pod Networking¶
To support low-latency 5G workloads, the CNFs software relies on Single Root I/O Virtualization (SR-IOV) and the Open Virtual Network with Kubernetes (OVN-Kubernetes) CNI. Review the sections below to ensure the cluster supports multi-homed Pods: the ability to select either the default (virtual) CNI or the SR-IOV / OVN-Kubernetes (physical) CNI.
Network Operator¶
To properly manage the cluster networks, the OpenShift Cluster Network Operator must be installed.
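To confirm the operator is installed and healthy before continuing, one option is to check the status of the network cluster operator (assuming the `oc` CLI is logged in to the cluster):

```bash
# The network ClusterOperator should report Available=True and Degraded=False.
oc get clusteroperator network
```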
Important: OpenShift 4.8 requires configuring local gateway mode using the steps below:
1. Create the manifest files:

   ```bash
   openshift-install --dir=<install dir> create manifests
   ```

2. Create a `ConfigMap` in the new manifest directory, and add the following YAML code:

   ```yaml
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: gateway-mode-config
     namespace: openshift-network-operator
   data:
     mode: "local"
   immutable: true
   ```

3. Create the cluster:

   ```bash
   openshift-install create cluster --dir=<install dir>
   ```
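After the cluster is up, one way to confirm the setting took effect is to read back the `ConfigMap` created above:

```bash
# The data.mode field should report "local".
oc get configmap gateway-mode-config -n openshift-network-operator -o yaml
```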
For more information, refer to the Cluster Network Operator installation documentation on GitHub.
SR-IOV¶
Supported NICs¶
The table below lists the currently supported NICs.
| NICs | VF PCI IDs | PF PCI IDs | Kernel Network Driver | Network Driver Version | Firmware | OCP Version |
|---|---|---|---|---|---|---|
| Mellanox ConnectX-5 | 15b3:1018 | 15b3:1017 | mlx5_core | 4.18.0-372.58.1.el8_6.x86_64 | 16.32.1010 | 4.16.30 |
| Mellanox ConnectX-6 | 15b3:101d | 15b3:101e | mlx5_core | 5.14.0-284.52.1.el9_2.x86_64 | 22.36.1010 | 4.14.13 |
| E810-C* | 8086:1889 | 8086:1592 | ice | 5.14.0-284.54.1.el9_2.x86_64 | 3.10 0x8000ad67 1.3106.0 | 4.14.14 |
| Intel XXV710 | 8086:154c | 8086:158b | i40e | 5.14.0-284.52.1.el9_2.x86_64 | NVM 9.10 | 4.14.13 |
VF Configuration¶
To define the SR-IOV Virtual Functions (VFs) used by the Service Proxy Traffic Management Microkernel (TMM), configure the following OpenShift network objects:
- An external and internal Network node policy.
- An external and internal Network attachment definition.
- Set the `spoofChk` parameter to `off`.
- Set the `trust` parameter to `on`.
- Set the `capabilities` parameter to `'{"mac": true, "ips": true}'`.
- Do not set the `vlan` parameter; set the F5BigNetVlan `tag` parameter instead.
Refer to the CNFs Config File Reference for examples.
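For orientation only, the sketch below shows one possible external network node policy and network attachment pair using the OpenShift SR-IOV Network Operator CRDs, with the parameters above applied; the object names, namespaces, node selector, interface name, device type, and VF count are placeholders and should be taken from the CNFs Config File Reference rather than from this example.

```yaml
# Hypothetical sketch: names, namespaces, NIC selector, deviceType, and VF count are placeholders.
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: sriov-external                          # repeat with adjusted values for the internal network
  namespace: openshift-sriov-network-operator
spec:
  resourceName: sriovExternal
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: "true"
  numVfs: 8
  nicSelector:
    pfNames: ["ens1f0"]
  deviceType: vfio-pci
---
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetwork
metadata:
  name: sriov-external
  namespace: openshift-sriov-network-operator
spec:
  resourceName: sriovExternal
  networkNamespace: cnf-gateway                 # namespace where the CNFs Pods run (placeholder)
  spoofChk: "off"                               # spoofChk set to off, as required above
  trust: "on"                                   # trust set to on, as required above
  capabilities: '{"mac": true, "ips": true}'
  # No vlan parameter here; VLAN tagging is configured with the F5BigNetVlan tag parameter.
```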
CPU Allocation¶
Multiprocessor servers divide memory and CPUs into multiple NUMA nodes, each having a non-shared system bus. When installing the CNFs Controller, the CPUs and SR-IOV VFs allocated to the Service Proxy TMM container must share the same NUMA node. To ensure the proper handling of the CPU NUMA node alignment by the cluster, install the Performance Addon Operator and set the following parameters:
- Set the Performance Profile's Topology Manager Policy to `single-numa-node`.
- Set the CPU Manager Policy to `static` in the Kubelet configuration.
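As a rough illustration, a PerformanceProfile along the lines below sets the Topology Manager policy to `single-numa-node`; the profile name, CPU ranges, and node selector are placeholders, and applying such a profile typically also configures the static CPU Manager policy on the selected nodes.

```yaml
# Hypothetical sketch: name, CPU ranges, and node selector are placeholders.
apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: cnf-performance
spec:
  cpu:
    reserved: "0-1"                  # housekeeping CPUs
    isolated: "2-31"                 # CPUs available to latency-sensitive Pods such as TMM
  numa:
    topologyPolicy: "single-numa-node"
  nodeSelector:
    node-role.kubernetes.io/worker-cnf: ""
```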
Simultaneous Multithreading (SMT)¶
CNFs supports deployments in hyperthreading-enabled environments, enhancing scalability and resource utilization. This feature allows TMM to effectively manage logical CPUs, ensuring high performance in hyperthreaded setups.
For more information on managing this feature, see the Simultaneous Multithreading and TMM Values sections.
Scheduler Limitations¶
The OpenShift Topology Manager dynamically allocates CPU resources; however, the version 4.7 Scheduler currently lacks the NUMA topology awareness feature required to support low-latency 5G applications:

Without NUMA topology awareness, the scheduler can allocate CPUs to NUMA core IDs that provide poor performance, or leave insufficient resources within a NUMA node to schedule Pods. To ensure the Service Proxy TMM Pods install with sufficient NUMA resources:

Use Labels or Node Affinity: To assign Pods to worker nodes with sufficient resources, use labels or node affinity. For a brief overview of using labels, refer to the Using Node Labels guide.
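As a simple illustration of the label approach, the node label and Pod `nodeSelector` below are hypothetical; the actual label key and value are site-specific.

```bash
# Hypothetical label identifying worker nodes with sufficient NUMA resources.
oc label node worker-1.example.com cnf.example.com/numa-capable="true"
```

```yaml
# Pod template snippet selecting the labeled nodes.
spec:
  nodeSelector:
    cnf.example.com/numa-capable: "true"
```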
Common Vulnerabilities and Exposures (CVE)¶
CVE, short for Common Vulnerabilities and Exposures, is a list of publicly disclosed computer security flaws. When someone refers to a CVE, they mean a security flaw that has been assigned a CVE ID number.
Check the RH version patched for a CVE¶
There are a few ways to check whether a specific kernel has been patched for a specific CVE, including the following commands:
RPM command
If you have access to the rpm (Red Hat Package Manager) utility on your system, use the rpm command to check the change log and grep for the CVE name.
Example:
```bash
rpm -qp kernel-3.10.0-862.11.6.el7.x86_64.rpm --changelog | grep CVE-2017-12190
```
YUM command
If the kernel package for the kernel in question is in a repo that is configured and enabled on your server, you could use yum as follows:
```bash
yum list --cve CVE-2017-12190 | grep kernel.x86_64
```

```
kernel.x86_64    3.10.0-327.22.2.el7    @rhel-7-server-eus-rpms
kernel.x86_64    3.10.0-514.2.2.el7     @rhel-7-server-rpms
kernel.x86_64    3.10.0-693.2.2.el7     @rhel-7-server-rpms
kernel.x86_64    3.10.0-862.14.4.el7    rhel-7-server-rpms
```
This output shows that the kernels listed above include patches for CVE-2017-12190.
Verify if the OpenShift version is vulnerable to the CVEs¶
Red Hat OpenShift Container Platform (RHOCP) 4 includes a fully managed node operating system, Red Hat Enterprise Linux CoreOS, commonly referred to as RHCOS.
The OpenShift cluster updates RHCOS when applying cluster updates, sometimes moving between RHEL minor releases. The table below details the underlying RHEL minor versions present in currently supported versions of OCP, showing which RHCOS/OCP version maps to which RHEL version:
| RHCOS/OCP Versions | RHEL Versions |
|---|---|
| 4.6 | RHEL 8.2 |
| 4.7.0 - 4.7.23 | RHEL 8.3 |
| 4.7.24+ | RHEL 8.4 |
| 4.8 | RHEL 8.4 |
| 4.9 | RHEL 8.4 |
| 4.10 | RHEL 8.4 |
| 4.11 | RHEL 8.6 |
| 4.12 | RHEL 8.6 |
| 4.13 | RHEL 9.2 |
For example, OCP 4.12 uses RHEL 8.6; because the patches above were applied on RHEL 8, they are also available in OCP 4.12.
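To confirm the RHEL content underlying RHCOS on a running node, one option is to read the OS release information from a debug shell (the node name is a placeholder):

```bash
# The VERSION field reports the RHCOS release; RHEL_VERSION (when present) reports the underlying RHEL minor release.
oc debug node/<node-name> -- chroot /host cat /etc/os-release
```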
Mitigations for CPU Resource Allocation¶
Following are a few mitigations and troubleshooting steps for addressing CPU resource allocation challenges in environments where hyperthreading (SMT) and Kubernetes static CPU policies are used.
Apply Intel microcode patches, kernel/OS patches, or compiler mitigations such as "return trampoline" (retpoline).
Prevent untrusted processes or deployments from sharing the same CPU; configure deployments to use two or more full physical cores.
Enable the full-pcpus-only policy with a static CPU Manager policy to ensure pods are allocated whole physical cores, avoiding partial allocations.
If the Kubernetes instance is configured properly with a static CPU policy and the full-pcpus-only policy, and TMM starts with the correct count of CPU resources (for example, 2), it is assigned both threads of the same core, meaning the whole core is assigned. No thread of that core should be assigned to another workload.

However, this still depends on how Kubernetes is configured and how TMM starts. There are two stages here:

Current behavior: when everything is properly configured and TMM receives two threads of the same core, mapres detects these two threads as separate cores and starts two TMM threads. This initial mode should log a warning about the performance implications.

Future implementation: when TMM starts with two threads, mapres (or a similar component) detects simultaneous multithreading, validates which threads belong to the same core, and uses only one of these threads. This starts a single TMM thread per core while still utilizing the entire core effectively.
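For reference, a KubeletConfig along the lines below is one way to enable the static CPU Manager policy with the full-pcpus-only option on OpenShift; the object name, machine config pool selector, and CPU ranges are placeholders, and on some OpenShift versions the cpuManagerPolicyOptions field requires the corresponding feature gate to be enabled.

```yaml
# Hypothetical sketch: name, pool selector, and reserved CPU range are placeholders.
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: cnf-cpumanager
spec:
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/worker: ""
  kubeletConfig:
    cpuManagerPolicy: static
    cpuManagerPolicyOptions:
      full-pcpus-only: "true"      # allocate whole physical cores only
    cpuManagerReconcilePeriod: 5s
    reservedSystemCPUs: "0-1"      # housekeeping CPUs (placeholder range)
```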
Persistent storage¶
The required Fluentd logging collector, dSSM database, and Traffic Management Microkernel (TMM) Debug Sidecar require available Kubernetes persistent storage to bind to during installation.
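One quick way to confirm that persistent storage is available before installation, and that claims bind afterward, is shown below; the commands assume the `oc` CLI is logged in to the cluster.

```bash
# At least one StorageClass (ideally marked "(default)") should be listed.
oc get storageclass

# After installation, the CNFs PersistentVolumeClaims should reach the Bound state.
oc get pvc -A
```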
Next step¶
Continue to the Getting Started guide to begin integrating the CNFs software components.
Feedback¶
Provide feedback to improve this document by emailing cnfdocs@f5.com.
Supplemental¶
The CNI project.
CNFs Networking Overview.