SPK Controller¶
Overview¶
The Service Proxy for Kubernetes (SPK) Controller and Service Proxy Traffic Management Microkernel (TMM) Pods install together, and are the primary application traffic management software components. Once integrated, Service Proxy TMM can be configured to proxy and load balance high-performance 5G workloads using SPK CRs.
This document guides you through creating the Controller and TMM Helm values file, installing the Pods, and creating TMM’s internal and external VLAN interfaces.
Requirements¶
Ensure you have:
- Uploaded the Software images.
- Installed the gRPC Secrets.
- A Linux-based workstation with Helm installed.
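As a quick, optional sanity check (not an additional requirement), you can confirm the client tooling is available on the workstation by querying the client versions:

helm version
oc version --client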
Procedures¶
Helm values¶
The Controller and Service Proxy Pods rely on a number of custom Helm values to install successfully. Use the steps below to obtain important cluster configuration data, and create the proper Helm values file for the installation procedure.
Switch to the Controller Project:
Note: The Controller Project was created during the gRPC Secrets installation.
oc project <project>
In this example, the spk-ingress Project is selected:
oc project spk-ingress
As described in the Networking Overview, the Controller uses OpenShift network node policies and network attachment definitions to create Service Proxy TMM’s interface list. Use the steps below to obtain the node policies and attachment definition names, and configure the TMM interface list:
A. Obtain the names of the network attachment definitions:
oc get net-attach-def
In this example, the network attachment definitions are named internal-netdevice and external-netdevice:
internal-netdevice external-netdevice
B. Obtain the names of the network node policies using the network attachment definition resourceName parameter:

oc describe net-attach-def | grep openshift.io
In this example, the network node policies are named internalNetPolicy and externalNetPolicy:
Annotations:  k8s.v1.cni.cncf.io/resourceName: openshift.io/internalNetPolicy
Annotations:  k8s.v1.cni.cncf.io/resourceName: openshift.io/externalNetPolicy
C. Create a Helm values file named ingress-values.yaml and set the node attachment and node policy names to configure the TMM interface list:
In this example, the cniNetworks: parameter references the network attachments, and orders TMM's interface list as: 1.1 (internal) and 1.2 (external):

tmm:
  # References the network attachment definitions.
  # Orders TMM's interface list.
  cniNetworks: "project/internal-netdevice,project/external-netdevice"

  # References the network node policies.
  # Must be in the same order as the network attachment definitions.
  customEnvVars:
  - name: OPENSHIFT_VFIO_RESOURCE_1
    value: "internalNetPolicy"
  - name: OPENSHIFT_VFIO_RESOURCE_2
    value: "externalNetPolicy"
SPK supports Ethernet frames over 1500 bytes (jumbo frames), up to a maximum transmission unit (MTU) size of 8000 bytes. To modify the MTU size, adapt the customEnvVars parameter:

tmm:
  customEnvVars:
  - name: TMM_DEFAULT_MTU
    value: "8000"
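If you raise TMM_DEFAULT_MTU, the node interfaces backing the network attachment definitions must also support jumbo frames. As a rough check (this example assumes a node named worker-1; adjust for your cluster), list the host interfaces and their MTU values:

oc debug node/worker-1 -- chroot /host ip link show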
The Controller relies on the OpenShift Performance Addon Operator to dynamically allocate and properly align TMM’s CPU cores. Use the steps below to enable the Performance Addon Operator:
A. Obtain the full performance profile name from the runtimeClass parameter:

oc get performanceprofile -o jsonpath='{..runtimeClass}{"\n"}'
In this example, the performance profile name is performance-spk-loadbalancer:
performance-spk-loadbalancer
B. Use the performance profile name to configure the runtimeClassName parameter, and set the parameters below in the Helm values file:

tmm:
  topologyManager: "true"
  runtimeClassName: "performance-spk-loadbalancer"
  pod:
    annotations:
      cpu-load-balancing.crio.io: disable
Open Virtual Network with Kubernetes (OVN-Kubernetes) annotations are applied to the Service Proxy TMM Pod, enabling application Pods to use TMM's internal interface as their default gateway for egress traffic. To enable OVN-Kubernetes annotations, set the tmm.icni2.enabled parameter to true:

tmm:
  icni2:
    enabled: true
To load balance application traffic between networks, or to scale Service Proxy TMM beyond a single instance in the Project, the f5-tmm-routing container must be enabled, and a Border Gateway Protocol (BGP) session must be established with an external neighbor. The parameters below configure an external BGP peering session:
Note: For additional BGP configuration parameters, refer to the BGP Overview guide.
tmm:
  dynamicRouting:
    enabled: true
    tmmRouting:
      image:
        repository: "registry.com"
      config:
        bgp:
          asn: 123
          neighbors:
          - ip: "192.168.10.100"
            asn: 456
            acceptsIPv4: true
    tmrouted:
      image:
        repository: "registry.com"
The f5-toda-logging container is enabled by default, and requires setting the f5-toda-logging.fluentd.host parameter.

A. If you installed the Fluentd Logging collector, set the host parameters:

controller:
  # Sends logging data from the Ingress Controller.
  fluentbit_sidecar:
    fluentd:
      host: 'f5-toda-fluentd.spk-utilities.svc.cluster.local.'

f5-toda-logging:
  # Sends logging data from TMM.
  fluentd:
    host: 'f5-toda-fluentd.spk-utilities.svc.cluster.local.'
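The host value is the in-cluster DNS name of the Fluentd Service, following the standard <service>.<project>.svc.cluster.local pattern. Assuming the collector was installed to the spk-utilities Project as in this example, you can confirm the Service name with:

oc get svc f5-toda-fluentd -n spk-utilities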
B. If you did not install the Fluentd Logging collector, set the f5-toda-logging.enabled parameter to false:

f5-toda-logging:
  enabled: false
The Controller and Service Proxy TMM Pods install to a different Project than the internal application (Pods). Set the watchNamespace parameter to the Pod Project:

Important: Ensure the Project already exists in the cluster; the Controller does not discover Projects created after installation.

controller:
  watchNamespace: "internal-app"
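As a quick check before installing, confirm the watched Project exists (this example assumes the Pod Project is named internal-app):

oc get project internal-app

If the command returns an error, create the Project first, for example with oc new-project internal-app.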
The completed Helm values file should appear similar to the following:
Note: Set the image.repository parameter for each container to your local container registry.

tmm:
  replicaCount: 1
  image:
    repository: "local.registry.com"
  icni2:
    enabled: true
  cniNetworks: "spk-ingress/internal-netdevice,spk-ingress/external-netdevice"
  customEnvVars:
  - name: OPENSHIFT_VFIO_RESOURCE_1
    value: "internalNetPolicy"
  - name: OPENSHIFT_VFIO_RESOURCE_2
    value: "externalNetPolicy"
  - name: TMM_DEFAULT_MTU
    value: "8000"
  topologyManager: "true"
  runtimeClassName: "performance-spk-loadbalancer"
  pod:
    annotations:
      cpu-load-balancing.crio.io: disable
  dynamicRouting:
    enabled: true
    tmmRouting:
      image:
        repository: "local.registry.com"
      config:
        bgp:
          asn: 123
          neighbors:
          - ip: "192.168.10.200"
            asn: 456
            acceptsIPv4: true
    tmrouted:
      image:
        repository: "local.registry.com"

controller:
  image:
    repository: "local.registry.com"
  watchNamespace: "internal-apps"
  fluentbit_sidecar:
    enabled: true
    fluentd:
      host: 'f5-toda-fluentd.spk-utilities.svc.cluster.local.'
    image:
      repository: "local.registry.com"

f5-toda-logging:
  fluentd:
    host: "f5-toda-fluentd.spk-utilities.svc.cluster.local."
  sidecar:
    image:
      repository: "local.registry.com"
  tmstats:
    config:
      image:
        repository: "local.registry.com"
  debug:
    image:
      repository: "local.registry.com"
Installation¶
Change into the local directory with the SPK files, and list the files in the tar directory:
cd <directory>
ls -1 tar
In this example, the SPK files are in the spkinstall directory:
cd spkinstall
ls -1 tar
In this example, the Controller and Service Proxy TMM Helm chart is named f5ingress-3.0.17.tgz:

f5-dssm-0.22.4.tgz
f5-toda-fluentd-1.8.13.tgz
f5ingress-3.0.17.tgz
spk-docker-images.tgz
Switch to the Controller Project:
Note: The Controller Project was created during the gRPC Secrets installation.
oc project <project>
In this example, the spk-ingress Project is selected:
oc project spk-ingress
Install the Controller and Service Proxy TMM Pods, referencing the Helm values file created in the previous procedure:
helm install <release name> tar/f5ingress-<version>.tgz -f <values>.yaml
In this example, the Controller installs using Helm chart version 3.0.17:
helm install f5ingress tar/f5ingress-3.0.17.tgz -f ingress-values.yaml
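Optionally, confirm Helm recorded the release before checking the Pods (the release name f5ingress matches the example above):

helm list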
Verify the Pods have installed successfully, and all containers are Running:
oc get pods
In this example, all containers have a STATUS of Running as expected:
NAME                                   READY   STATUS
f5ingress-f5ingress-744d4fb88b-4ntrx   2/2     Running
f5-tmm-79b6d8b495-mw7xt                5/5     Running
Interfaces¶
The F5SPKVlan Custom Resource (CR) configures the Service Proxy TMM interfaces, and should install to the same Project as the Service Proxy TMM Pod. It is important to set the F5SPKVlan spec.internal parameter to true on the internal VLAN interface to apply the OVN-Kubernetes annotations, and to select an IP address from the same subnet as the OpenShift nodes. Use the steps below to install the F5SPKVlan CR:
Verify the IP address subnet of the OpenShift nodes:
oc get nodes -o yaml | grep ipv4
In this example, the nodes are on the IPv4 10.144.175.0/24 subnet:
k8s.ovn.org/node-primary-ifaddr: '{"ipv4":"10.144.175.15/24","ipv6":"2620:128:e008:4018::15/128"}'
k8s.ovn.org/node-primary-ifaddr: '{"ipv4":"10.144.175.16/24","ipv6":"2620:128:e008:4018::16/128"}'
k8s.ovn.org/node-primary-ifaddr: '{"ipv4":"10.144.175.17/24","ipv6":"2620:128:e008:4018::17/128"}'
k8s.ovn.org/node-primary-ifaddr: '{"ipv4":"10.144.175.18/24","ipv6":"2620:128:e008:4018::18/128"}'
k8s.ovn.org/node-primary-ifaddr: '{"ipv4":"10.144.175.19/24","ipv6":"2620:128:e008:4018::19/128"}'
Configure external and internal F5SPKVlan CRs. You can place both CRs in the same YAML file:
Note: Set the external facing F5SPKVlan to the external BGP peer router’s IP subnet.
apiVersion: "k8s.f5net.com/v1"
kind: F5SPKVlan
metadata:
  name: "vlan-internal"
  namespace: spk-ingress
spec:
  name: net1
  interfaces:
  - "1.1"
  internal: true
  selfip_v4s:
  - 10.144.175.200
  prefixlen_v4: 24
  mtu: 8000
---
apiVersion: "k8s.f5net.com/v1"
kind: F5SPKVlan
metadata:
  name: "vlan-external"
  namespace: spk-ingress
spec:
  name: net2
  interfaces:
  - "1.2"
  selfip_v4s:
  - 192.168.100.1
  prefixlen_v4: 24
  mtu: 8000
Install the VLAN CRs:
oc apply -f <crd_name.yaml>
In this example, the VLAN CR file is named spk_vlans.yaml.
oc apply -f spk_vlans.yaml
List the VLAN CRs:
oc get f5-spk-vlans
In this example, the VLAN CRs are installed:

NAME
vlan-external
vlan-internal
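To review the full configuration of an installed CR, you can also retrieve it by name; for example, for the internal VLAN created above:

oc get f5-spk-vlans vlan-internal -o yaml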
If a BGP peer is provisioned, refer to the Advertising virtual IPs section of the BGP Overview to verify the BGP session has reached the Established state.
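As a rough illustration of that check (the authoritative steps are in the BGP Overview; this sketch assumes the f5-tmm Deployment in the spk-ingress Project runs the f5-tmm-routing container and exposes the imish routing shell):

oc exec -it deploy/f5-tmm -c f5-tmm-routing -n spk-ingress -- imish -e "show ip bgp neighbors"

A healthy peering session reports the neighbor state as Established.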
Feedback¶
Provide feedback to improve this document by emailing spkdocs@f5.com.