SPK Controller¶
Overview¶
The Service Proxy for Kubernetes (SPK) Controller and Service Proxy Traffic Management Microkernel (TMM) Pods install together, and are the primary application traffic management software components. Once integrated, Service Proxy TMM can be configured to proxy and load balance high-performance 5G workloads using SPK CRs.
This document guides you through creating the Controller and TMM Helm values file, installing the Pods, and creating TMM’s internal and external VLAN interfaces.
Requirements¶
Ensure you have:
- Uploaded the SPK Software.
- Installed the SPK Cert Manager.
- Completed the SPK Licensing.
- A Linux based workstation with Helm installed.
Procedures¶
The Controller and Service Proxy Pods rely on a number of custom Helm values to install successfully. Use the steps below to obtain important cluster configuration data, and create the proper Helm values file for the installation procedure.
If you haven’t already, create a new Project for the Controller and Service Proxy deployments:
oc new-project <project>
In this example, a new Project named spk-ingress is created:
oc new-project spk-ingress
Switch to the Controller Project:
In this example, the spk-ingress Project is selected:
oc project spk-ingress
TMM values¶
Use these steps to configure the TMM Proxy Helm values for your environment.
Ensure Helm has the location of your local image registry to download the TMM container:
f5-tmm:
  enabled: true
  tmm:
    image:
      repository: "local.registry.com"
Add the ServiceAccount for the TMM Pod to the privileged security context constraint (SCC):
A. By default, TMM uses the default ServiceAccount:
oc adm policy add-scc-to-user privileged -n <project> -z default
In this example, the default ServiceAccount is added to the privileged SCC for the spk-ingress Project:
oc adm policy add-scc-to-user privileged -n spk-ingress -z default
B. To use a custom ServiceAccount, you must also update the SPK Controller Helm values file:
In this example, the custom spk-tmm ServiceAccount is added to the privileged SCC.
oc adm policy add-scc-to-user privileged -n spk-ingress -z spk-tmm
In this example, the custom spk-tmm ServiceAccount is added to the Helm values file.
f5-tmm:
  tmm:
    serviceAccount:
      name: spk-tmm
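If the custom ServiceAccount does not already exist in the Project, it can be created before granting the SCC. A minimal sketch using the spk-tmm name from this example:
oc create serviceaccount spk-tmm -n spk-ingress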
As described in the Networking Overview, the Controller uses OpenShift network node policies and network attachment definitions to create Service Proxy TMM’s interface list. Use the steps below to obtain the node policies and attachment definition names, and configure the TMM interface list:
A. Obtain the names of the network attachment definitions:
oc get net-attach-def
In this example, the network attachment definitions are named internal-netdevice and external-netdevice:
internal-netdevice external-netdevice
B. Obtain the names of the network node policies using the network attachment definition resourceName parameter:
oc describe net-attach-def | grep openshift.io
In this example, the network node policies are named internalNetPolicy and externalNetPolicy:
Annotations: k8s.v1.cni.cncf.io/resourceName: openshift.io/internalNetPolicy
Annotations: k8s.v1.cni.cncf.io/resourceName: openshift.io/externalNetPolicy
C. Create a Helm values file named ingress-values.yaml, and set the network attachment definition and network node policy names to configure the TMM interface list:
In this example, the cniNetworks: parameter references the network attachments, and orders TMM's interface list as: 1.1 (internal) and 1.2 (external):
f5-tmm:
  tmm:
    cniNetworks: "project/internal-netdevice,project/external-netdevice"
    customEnvVars:
      - name: OPENSHIFT_VFIO_RESOURCE_1
        value: "internalNetPolicy"
      - name: OPENSHIFT_VFIO_RESOURCE_2
        value: "externalNetPolicy"
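For reference, a network attachment definition such as internal-netdevice listed in step A carries the resourceName annotation shown in step B. The sketch below is illustrative only; apart from the definition name and annotation taken from this example, the namespace and CNI config values are assumptions that will vary by environment:
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: internal-netdevice
  namespace: spk-ingress
  annotations:
    k8s.v1.cni.cncf.io/resourceName: openshift.io/internalNetPolicy
spec:
  config: '{ "cniVersion": "0.3.1", "type": "sriov" }'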
SPK supports Ethernet frames over 1500 bytes (Jumbo frames), up to a maximum transmission unit (MTU) size of 9000 bytes. To modify the MTU size, adapt the TMM_DEFAULT_MTU parameter:
Important: The same MTU value must be set in each of the installed F5SPKVlan CRs. SPK does not currently support different MTU sizes.
f5-tmm:
  tmm:
    customEnvVars:
      - name: TMM_DEFAULT_MTU
        value: "9000"
The Controller relies on the OpenShift Performance Profile to dynamically allocate and properly align TMM’s CPU cores. Use the steps below to reference the installed Performance Profile:
A. Obtain the full performance profile name from the runtimeClass parameter:
oc get performanceprofile -o jsonpath='{..runtimeClass}{"\n"}'
In this example, the performance profile name is performance-spk-loadbalancer:
performance-spk-loadbalancer
B. Use the performance profile name to configure the runtimeClassName parameter, and set the parameters below in the Helm values file:
f5-tmm:
  tmm:
    topologyManager: "true"
    runtimeClassName: "performance-spk-loadbalancer"
    pod:
      annotations:
        cpu-load-balancing.crio.io: disable
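Optionally, you can confirm that the RuntimeClass created by the Performance Profile exists before referencing it. A quick check using the profile name from this example:
oc get runtimeclass performance-spk-loadbalancer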
Open Virtual Network with Kubernetes (OVN-Kubernetes) annotations are applied to the Service Proxy TMM Pod, enabling application Pods to use TMM's internal interface as their egress traffic default gateway. To enable OVN-Kubernetes annotations, set the tmm.icni2.enabled parameter to true:
f5-tmm:
  tmm:
    icni2:
      enabled: true
To load balance application traffic between networks, or to scale Service Proxy TMM beyond a single instance in the Project, the f5-tmm-routing container must be enabled, and a Border Gateway Protocol (BGP) session must be established with an external neighbor. The parameters below configure an external BGP peering session with a single neighbor. For additional BGP configuration parameters, refer to the BGP Overview guide.
Note: The SPK Controller can also load native ZebOS ConfigMaps, enabling config modifications while the routing container is running.
f5-tmm:
  tmm:
    dynamicRouting:
      enabled: true
      exportZebosLogs: true
      tmmRouting:
        image:
          repository: "registry.com"
        config:
          bgp:
            asn: 123
            neighbors:
              - ip: "192.168.10.100"
                asn: 456
                acceptsIPv4: true
      tmrouted:
        image:
          repository: "registry.com"
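After the Controller is installed, you can confirm that the f5-tmm-routing container was added to the TMM Pod. A minimal check, assuming the TMM Pods carry an app=f5-tmm label (an assumption; adjust the selector to your environment):
oc get pods -l app=f5-tmm -n spk-ingress -o jsonpath='{.items[0].spec.containers[*].name}{"\n"}'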
AFM values¶
To enable the Edge Firewall feature, and to ensure Helm can obtain the image from the local image registry, add the following Helm values:
global:
  afm:
    enabled: true
  pccd:
    enabled: true
f5-afm:
  enabled: true
  cert-orchestrator:
    enabled: true
  afm:
    pccd:
      enabled: true
      image:
        repository: "local.registry.com"
The Edge Firewall’s default firewall mode accepts all network packets not matching an F5BigFwPolicy firewall rule. You can modify this behavior using the F5BigContextGlobal Custom Resource (CR). For additional details about the default firewall mode and logging parameters, refer to the Firewall mode section of the F5BigFwPolicy overview.
The Fluentd Logging collector is enabled by default, and requires setting the fluentbit_sidecar.fluentd.host parameter. If you installed Fluentd, ensure the host parameter targets the Fluentd Pod’s namespace.
In this example, the host value includes the Fluentd Pod’s spk-utilities Namespace:
f5-afm:
  afm:
    fluentbit_sidecar:
      fluentd:
        host: 'f5-toda-fluentd.spk-utilities.svc.cluster.local.'
      image:
        repository: "local.registry.com"
global:
  afm:
    enabled: true
  pccd:
    enabled: true
Controller values¶
To ensure Helm can obtain the Controller image from the local image registry, add the following Helm values:
The example below also includes the SPK CWC values.
controller:
  image:
    repository: "local.registry.com"
  f5_lic_helper:
    enabled: true
    rabbitmqNamespace: "spk-telemetry"
    image:
      repository: "local.registry.com"
The f5-toda-logging container is enabled by default, and requires setting the f5-toda-logging.fluentd.host parameter.
A. If you installed the Fluentd Logging collector, set the host parameters:
controller:
  fluentbit_sidecar:
    fluentd:
      host: 'f5-toda-fluentd.spk-utilities.svc.cluster.local.'
f5-toda-logging:
  fluentd:
    host: 'f5-toda-fluentd.spk-utilities.svc.cluster.local.'
B. If you did not install the Fluentd Logging collector, set the f5-toda-logging.enabled parameter to false:
f5-toda-logging:
  enabled: false
The Controller and Service Proxy TMM Pods install to a different Project than the internal application Pods. Set the watchNamespace parameter to the Pod Project(s):
controller:
  watchNamespace:
    - "spk-apps"
    - "spk-apps2"
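If the application Project(s) do not exist yet, they can be created ahead of time. A minimal sketch using the names from this example:
oc new-project spk-apps
oc new-project spk-apps2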
If you intend to install the F5SPKIngressHTTP2 Custom Resource, set the tmm.tlsStore.enabled parameter to true. This enables TMM to mount the Secrets located in the secret store named tls-keys-certs-secret:
Important: The tls-keys-certs-secret Secret must be created before the SPK Controller is installed, otherwise the mount will fail and cause the TMM to enter a restart loop.
f5-tmm:
  tmm:
    tlsStore:
      enabled: true
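A minimal sketch of creating the tls-keys-certs-secret Secret before installation; the certificate and key file names and the data keys used here are assumptions, and should match what your F5SPKIngressHTTP2 configuration expects:
oc create secret generic tls-keys-certs-secret \
  --from-file=tls.crt=./server.crt \
  --from-file=tls.key=./server.key \
  -n spk-ingress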
Completed values¶
The completed Helm values file should appear similar to the following:
f5-tmm:
  enabled: true
  tmm:
    replicaCount: 1
    image:
      repository: "local.registry.com"
    topologyManager: "true"
    runtimeClassName: "performance-spk-loadbalancer"
    pod:
      annotations:
        cpu-load-balancing.crio.io: disable
    tlsStore:
      enabled: true
    icni2:
      enabled: true
    cniNetworks: "spk-ingress/internal-netdevice,spk-ingress/external-netdevice"
    sessiondb:
      useExternalStorage: "true"
    customEnvVars:
      - name: OPENSHIFT_VFIO_RESOURCE_1
        value: "internalNetPolicy"
      - name: OPENSHIFT_VFIO_RESOURCE_2
        value: "externalNetPolicy"
      - name: TMM_DEFAULT_MTU
        value: "9000"
      - name: SESSIONDB_DISCOVERY_SENTINEL
        value: "true"
      - name: SESSIONDB_EXTERNAL_SERVICE
        value: "f5-dssm-sentinel.spk-utilities"
      - name: SSL_SERVERSIDE_STORE
        value: "/tls/tmm/mds/clt"
      - name: SSL_TRUSTED_CA_STORE
        value: "/tls/tmm/mds/clt"
    dynamicRouting:
      enabled: true
      tmmRouting:
        image:
          repository: "local.registry.com"
        config:
          bgp:
            asn: 123
            neighbors:
              - ip: "192.168.10.200"
                asn: 456
                acceptsIPv4: true
      tmrouted:
        image:
          repository: "local.registry.com"
  blobd:
    image:
      repository: "local.registry.com"
  debug:
    image:
      repository: "local.registry.com"
  rabbitmqNamespace: "spk-telemetry"
global:
  afm:
    enabled: true
  pccd:
    enabled: true
f5-afm:
  enabled: true
  cert-orchestrator:
    enabled: true
  afm:
    pccd:
      enabled: true
      image:
        repository: "local.registry.com"
    fluentbit_sidecar:
      fluentd:
        host: 'f5-toda-fluentd.spk-utilities.svc.cluster.local.'
      image:
        repository: "local.registry.com"
controller:
  image:
    repository: "local.registry.com"
  f5_lic_helper:
    enabled: true
    rabbitmqNamespace: "spk-telemetry"
    image:
      repository: "local.registry.com"
  watchNamespace:
    - "spk-apps"
    - "spk-apps-2"
  fluentbit_sidecar:
    enabled: true
    fluentd:
      host: 'f5-toda-fluentd.spk-utilities.svc.cluster.local.'
    image:
      repository: "local.registry.com"
f5-toda-logging:
  fluentd:
    host: "f5-toda-fluentd.spk-utilities.svc.cluster.local."
  sidecar:
    image:
      repository: "local.registry.com"
  tmstats:
    config:
      image:
        repository: "local.registry.com"
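Optionally, before installing, you can render the chart locally to catch indentation or values errors in the file. A quick sanity check, assuming the chart archive location described in the Installation procedure below:
helm template f5ingress tar/f5ingress-<version>.tgz -f ingress-values.yaml > /dev/null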
Installation¶
Change into the local directory with the SPK files, and list the files in the tar directory:
cd <directory>
ls -1 tar
In this example, the SPK files are in the spkinstall directory:
cd spkinstall
ls -1 tar
In this example, the Controller and Service Proxy TMM Helm chart is named f5ingress-v12.0.35.tgz:
csrc-v0.4.10.tgz
cwc-5.0.10.tgz
f5-cert-gen-0.9.2.tgz
f5-cert-manager-0.22.10.tgz
f5-crdconversion-0.4.14.tgz
f5-dssm-4.0.5.tgz
f5-toda-fluentd-7.0.5.tgz
f5ingress-v12.0.35.tgz
log-doc-f5ingress-12.0.35.tgz
rabbitmq-7.0.2.tgz
spk-docker-images.tgz
Switch to the Controller Project:
In this example, the spk-ingress Project is selected:
oc project spk-ingress
Install the Controller and Service Proxy TMM Pods, referencing the Helm values file created in the previous procedure:
helm install <release name> tar/f5ingress-<version>.tgz -f <values>.yaml
In this example, the Controller installs using Helm chart version v12.0.35:
helm install f5ingress tar/f5ingress-v12.0.35.tgz -f ingress-values.yaml
Verify the Pods have installed successfully, and all containers are Running:
oc get pods
In this example, all containers have a STATUS of Running as expected:
NAME                                   READY   STATUS
f5ingress-f5ingress-744d4fb88b-4ntrx   2/2     Running
f5-tmm-79b6d8b495-mw7xt                5/5     Running
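If a Pod does not reach the Running state, describing it can reveal scheduling, SCC, or image pull issues. A generic check (the Pod name is from this example and will differ in your environment):
oc describe pod f5-tmm-79b6d8b495-mw7xt -n spk-ingress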
Verify the f5ingress Pod has successfully licensed:
oc logs f5ingress-f5ingress-5dbd74df49-dqtk2 -c f5-lic-helper \
-n spk-ingress | grep -i LicenseVerified
In this example, the f5ingress Pod’s f5-lic-helper indicates Entitlement: paid.
2023-02-03 22:00:44.221|A|informational|1|Message="Payload type: ResponseCM20LicenseVerified Entitlement: paid Expiry Date: 2024-01-29T00:01:03Z"
Continue to the next procedure to configure the TMM interfaces.
Interfaces¶
The F5SPKVlan Custom Resource (CR) configures the Service Proxy TMM interfaces, and should install to the same Project as the Service Proxy TMM Pod. It is important to set the F5SPKVlan spec.internal parameter to true on the internal VLAN interface to apply OVN-Kubernetes annotations, and to select an IP address from the same subnet as the OpenShift nodes. Use the steps below to install the F5SPKVlan CR:
Verify the IP address subnet of the OpenShift nodes:
For version 4.10.x and later, when the OpenShift Extra Bridge (br-ex1) feature is enabled, use the exgw-ip-addresses subnet:
oc get nodes -o json | grep --color exgw-ip-addresses
"k8s.ovn.org/l3-gateway-config": \"exgw-ip-address\":\"172.20.1.201/24\",\"next-hops\":[\"10.144.174.254\"],
For version 4.7.x and earlier, or when the OpenShift Extra Bridge (br-ex1) feature is disabled, use the node-primary-ifaddr subnet:
oc get nodes -o yaml | grep node-primary-ifaddr
k8s.ovn.org/node-primary-ifaddr: '{"ipv4":"10.144.175.15/24","ipv6":"2620:128:e008:4018::15/128"}'
Configure external and internal F5SPKVlan CRs. You can place both CRs in the same YAML file:
Note: Set the external facing F5SPKVlan to the external BGP peer router’s IP subnet.
apiVersion: "k8s.f5net.com/v1" kind: F5SPKVlan metadata: name: "vlan-internal" namespace: spk-ingress spec: name: net1 interfaces: - "1.1" internal: true selfip_v4s: - 10.144.175.200 prefixlen_v4: 24 mtu: 9000 --- apiVersion: "k8s.f5net.com/v1" kind: F5SPKVlan metadata: name: "vlan-external" namespace: spk-ingress spec: name: net2 interfaces: - "1.2" selfip_v4s: - 192.168.100.1 prefixlen_v4: 24 mtu: 9000
Install the VLAN CRs:
oc apply -f <crd_name.yaml>
In this example, the VLAN CR file is named spk_vlans.yaml.
oc apply -f spk_vlans.yaml
List the VLAN CRs:
oc get f5-spk-vlans
In this example, the VLAN CRs are installed:
NAME
vlan-external
vlan-internal
If the Debug Sidecar is enabled (the default), you can verify the f5-tmm container’s interface configuration:
oc exec -it deploy/f5-tmm -c debug -n spk-ingress -- ip a
The interfaces should appear at the bottom of the list:
8: net1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000
    inet 10.144.175.200/24 brd 192.168.10.0 scope global client
       valid_lft forever preferred_lft forever
    inet6 2002::192:168:10:100/112 scope global
       valid_lft forever preferred_lft forever
9: net2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000
    link/ether 1e:80:c1:e8:81:15 brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.1/24 brd 10.10.10.0 scope global server
       valid_lft forever preferred_lft forever
    inet6 2002::10:10:10:100/112 scope global
       valid_lft forever preferred_lft forever
Continue to the Next step.
Uninstallation¶
The following steps are mandatory for cleaning up product installations:
Delete all configured CRs:
kubectl delete -f <cr-file> -n <*namespace>
For example:
kubectl delete -f spk_vlans.yaml -n spk-ingress
Uninstall the product:
helm uninstall <helm-installation-name> -n <*namespace>
For example:
helm uninstall f5ingress -n spk-ingress
Note: In the above commands, the namespace can be either the tmmNamespace or the watchNamespace.
Important: If the above order is not followed, the script below can be used to remove the finalizers from the CRs, and then proceed with uninstalling the product and namespace.
#!/bin/sh
if [ $# -ne 1 ] ; then
  echo "Invalid Arguments, provide namespace as argument"
  exit 1
fi
echo "This will remove finalizers of all usecase CRs of namespace $1"
crs=$(kubectl api-resources --namespaced=true --verbs=list -o name | egrep 'f5-big|f5-spk' | xargs -n 1 kubectl get --show-kind --ignore-not-found -n $1 | grep f5 | cut -d ' ' -f 1)
for cr in $crs; do
  result=$(kubectl -n $1 patch $cr -p '{"metadata":{"finalizers":[]}}' --type=merge)
  echo $result
done
echo ""
echo "Removed finalizers of all CRs of namespace $1"
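For example, if the script is saved as remove-finalizers.sh (a hypothetical file name), it can be run against the Controller Project:
chmod +x remove-finalizers.sh
./remove-finalizers.sh spk-ingress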
For more details, refer to the Finalizers section of the SPK CRs guide.
Feedback¶
Provide feedback to improve this document by emailing spkdocs@f5.com.