Before integrating the Service Proxy for Kubernetes (SPK) software into the OpenShift cluster, ensure the required software components are installed and properly configured.
Note: SPK software supports Red Hat OpenShift versions 4.7 and later.
SPK relies on Single Root I/O Virtualization (SR-IOV) and the Open Virtual Network with Kubernetes (OVN-Kubernetes) CNI to support low-latency 5G workloads. To ensure the cluster supports multi-homed Pods, that is, the ability to select either the default (overlay) CNI or the OVN-Kubernetes (underlay) CNI, review each of the sections below.
To properly manage cluster networking, install and configure the OpenShift Cluster Network Operator.
Important: OpenShift 4.8 requires configuring local gateway mode using the steps below:
Create the manifest files:
openshift-install create manifests --dir=<install dir>
Create a ConfigMap in the new manifest directory, and add the following YAML code:
apiVersion: v1
kind: ConfigMap
metadata:
  name: gateway-mode-config
  namespace: openshift-network-operator
data:
  mode: "local"
immutable: true
Create the cluster:
openshift-install create cluster --dir=<install dir>
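Once the cluster is up, you can confirm that the gateway mode setting was applied. The command below is an illustration and assumes the `oc` CLI is logged in to the cluster:

```shell
# Display the gateway-mode ConfigMap created from the manifest above
oc get configmap gateway-mode-config -n openshift-network-operator -o yaml
```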
For more information, refer to the Cluster Network Operator installation guide on GitHub.
To define the SR-IOV Virtual Functions (VFs) injected into the Service Proxy Traffic Management Microkernel (TMM) container, configure the following OpenShift network objects:
- An external and an internal SR-IOV network node policy.
- An external and an internal SR-IOV network attachment definition.
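As an illustration, an internal network node policy and its matching network attachment (created through the SR-IOV Network Operator's SriovNetwork resource) might look like the sketch below. All names, namespaces, interface names, and VF counts are placeholder assumptions, not values from this guide:

```yaml
# Sketch: SR-IOV network node policy (placeholder names and values)
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: internal-node-policy            # placeholder name
  namespace: openshift-sriov-network-operator
spec:
  resourceName: internalvfs             # placeholder resource name
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: "true"
  numVfs: 8                             # placeholder VF count
  nicSelector:
    pfNames: ["ens1f0"]                 # placeholder physical interface
  deviceType: vfio-pci
---
# Sketch: network attachment for the same VF pool
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetwork
metadata:
  name: internal-netattach              # placeholder name
  namespace: openshift-sriov-network-operator
spec:
  resourceName: internalvfs             # must match the node policy above
  networkNamespace: default             # placeholder target namespace
```

An external policy and attachment would follow the same pattern with a second physical interface and resource name.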
Multiprocessor servers divide memory and CPUs into multiple NUMA nodes, each having a non-shared system bus. When installing the Ingress Controller, the CPUs and SR-IOV VFs allocated to the Service Proxy TMM container must share the same NUMA node. To ensure the CPU NUMA node alignment is handled properly by the cluster, install the Performance Addon Operator and ensure the following parameters are set:
- Set the Topology Manager Policy to single-numa-node.
- Set the CPU Manager Policy to static in the Kubelet configuration.
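The Performance Addon Operator applies these settings through a PerformanceProfile custom resource. The sketch below shows one possible shape; the profile name and CPU ranges are placeholder assumptions for a 16-CPU worker:

```yaml
# Sketch PerformanceProfile (placeholder name and CPU ranges)
apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: spk-performance                 # placeholder name
spec:
  cpu:
    reserved: "0-3"                     # CPUs kept for system daemons
    isolated: "4-15"                    # CPUs available to workloads
  numa:
    topologyPolicy: single-numa-node    # Topology Manager policy
  nodeSelector:
    node-role.kubernetes.io/worker: ""
```

When a PerformanceProfile defines reserved and isolated CPU sets, the operator also configures the static CPU Manager policy on the selected nodes.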
The OpenShift Topology Manager dynamically allocates CPU resources; however, the version 4.7 Scheduler currently lacks two features required to support low-latency 5G applications:
- Simultaneous Multi-threading (SMT), or hyper-threading awareness.
- NUMA topology awareness.
Lacking these features, the Scheduler can allocate CPUs to NUMA core IDs that provide poor performance, or insufficient resources within a single NUMA node. To ensure the Service Proxy TMM Pods install with sufficient NUMA resources:
- Disable SMT - To install Pods with Guaranteed QoS, each OpenShift worker node must have Simultaneous Multi-threading (SMT) disabled in the BIOS.
- Use Labels or Node Affinity - To assign Pods to worker nodes with sufficient resources, use Labels or Node Affinity. For a brief overview of using labels, refer to the Using Node Labels guide.
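For example, workers verified to have sufficient NUMA-aligned resources could be labeled and then targeted from the Pod or Helm configuration. The node name and label key/value below are hypothetical:

```shell
# Apply a hypothetical label to a worker with sufficient NUMA resources
oc label node worker-1.example.com spk=tmm
```

A nodeSelector of `spk: tmm` in the Pod spec (or the chart's values) would then restrict scheduling to the labeled workers.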
The optional Fluentd logging collector, dSSM database, and Traffic Management Microkernel (TMM) Debug Sidecar require available Kubernetes persistent storage to bind to during installation.
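As one illustration, an NFS-backed PersistentVolume such as the following sketch could satisfy the binding. The volume name, capacity, storage class, server address, and export path are all placeholder assumptions; use values appropriate to your storage provider:

```yaml
# Sketch PersistentVolume (placeholder names and values)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: spk-pv                          # placeholder name
spec:
  capacity:
    storage: 10Gi                       # placeholder capacity
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs-storage         # placeholder storage class
  nfs:
    server: 192.0.2.10                  # placeholder NFS server
    path: /exports/spk                  # placeholder export path
```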