BIG-IP Controller¶
Overview¶
The Cloud-Native Network Functions (CNFs) BIG-IP Controller, Edge Firewall, and Traffic Management Microkernel (TMM) Proxy Pods are the primary CNFs software components, and install together using Helm. Once integrated, Edge Firewall and the TMM Proxy Pods can be configured to process and protect high-performance 5G workloads using CNFs CRs.
This document guides you through creating the CNFs installation Helm values file, installing the Pods, and creating TMM’s clientside (upstream) and serverside (downstream) F5BigNetVlan interfaces.
Requirements¶
Ensure you have:
- Installed the CNFs Software.
- Installed the CNFs Cert Manager.
- Completed the CNFs Licensing.
- A Linux based workstation with Helm installed.
Procedures¶
The CNFs Pods rely on a number of custom Helm values to install successfully. Use the steps below to obtain important cluster configuration data, and create the proper BIG-IP Controller Helm values file for the installation.
If you haven’t already, create a new Project for the CNFs Pods:
oc new-project <project>
In this example, a new Project named cnf-gateway is created:
oc new-project cnf-gateway
Switch to the CNFs Project:

oc project <project>
In this example, the cnf-gateway Project is selected:
oc project cnf-gateway
Defining the Platform type¶
When deploying the cluster on the OpenShift platform, set the platformType parameter to ocp in the ingress-values.yaml values file:

global:
  platformType: "ocp"
TMM values¶
Use these steps to enable and configure the TMM Proxy Helm values for your environment.
To enable the TMM Proxy Helm values and to ensure Helm can obtain the image from the local image registry, add the following Helm values:
f5-tmm:
  enabled: true
  tmm:
    image:
      repository: "local.registry.com"
Add the ServiceAccount for the TMM Pod to the privileged security context constraint (SCC):
A. By default, TMM uses the default ServiceAccount:
oc adm policy add-scc-to-user privileged -n <project> -z default
In this example, the default ServiceAccount is added to the privileged SCC for the cnf-gateway Project:
oc adm policy add-scc-to-user privileged -n cnf-gateway -z default
B. To use a custom ServiceAccount, you must also update the BIG-IP Controller Helm values file:
In this example, the custom spk-tmm ServiceAccount is added to the privileged SCC.
oc adm policy add-scc-to-user privileged -n cnf-gateway -z spk-tmm
In this example, the custom spk-tmm ServiceAccount is added to the Helm values file.
f5-tmm:
  tmm:
    serviceAccount:
      create: true
      name: spk-tmm
As described in the Networking Overview, the Controller uses OpenShift network node policies and network attachment definitions to create the TMM Proxy Pod interface list. Use the steps below to obtain the node policies and attachment definition names, and configure the TMM interface list:
A. Obtain the names of the network attachment definitions:
oc get net-attach-def
In this example, the network attachment definitions are named clientside-netdevice and serverside-netdevice:
clientside-netdevice serverside-netdevice
B. Obtain the names of the network node policies using the network attachment definition resourceName parameter:

oc describe net-attach-def | grep openshift.io
In this example, the network node policies are named clientsideNetPolicy and serversideNetPolicy:
Annotations: k8s.v1.cni.cncf.io/resourceName: openshift.io/clientsideNetPolicy
Annotations: k8s.v1.cni.cncf.io/resourceName: openshift.io/serversideNetPolicy
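The mapping from annotation to Helm value can be sketched with standard text tools. A minimal illustration (the describe output is inlined here; on a live cluster you would pipe `oc describe net-attach-def` instead):

```shell
# Simulated `oc describe net-attach-def | grep openshift.io` output
# (assumption: real values come from your cluster).
describe_output='Annotations: k8s.v1.cni.cncf.io/resourceName: openshift.io/clientsideNetPolicy
Annotations: k8s.v1.cni.cncf.io/resourceName: openshift.io/serversideNetPolicy'

# Everything after "openshift.io/" is the node policy name to use for the
# OPENSHIFT_VFIO_RESOURCE_1 / OPENSHIFT_VFIO_RESOURCE_2 Helm values.
policies=$(printf '%s\n' "$describe_output" | sed -n 's#.*openshift\.io/##p')
printf '%s\n' "$policies"
```

This prints one node policy name per line, in the order the annotations appear.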
C. Create a Helm values file named ingress-values.yaml and set the node attachment and node policy names to configure the TMM interface list:
In this example, the cniNetworks parameter references the network attachments, and orders TMM’s interface list as: 1.1 (clientside) and 1.2 (serverside):

f5-tmm:
  tmm:
    cniNetworks: "cnf-gateway/clientside-netdevice,cnf-gateway/serverside-netdevice"
    customEnvVars:
    - name: OPENSHIFT_VFIO_RESOURCE_1
      value: "clientsideNetPolicy"
    - name: OPENSHIFT_VFIO_RESOURCE_2
      value: "serversideNetPolicy"
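The order of entries in cniNetworks determines TMM's interface numbering: the first attachment becomes interface 1.1, the second 1.2. A small sketch of that mapping (values are hypothetical, matching the example):

```shell
# cniNetworks as it would appear in the Helm values file.
cniNetworks="cnf-gateway/clientside-netdevice,cnf-gateway/serverside-netdevice"

# Each comma-separated entry maps, in order, to TMM interface 1.1, 1.2, ...
i=1
for net in $(printf '%s' "$cniNetworks" | tr ',' ' '); do
  echo "interface 1.$i -> ${net#*/}"   # strip the namespace prefix
  i=$((i + 1))
done
```

This prints `interface 1.1 -> clientside-netdevice` and `interface 1.2 -> serverside-netdevice`, matching the ordering described above.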
CNFs supports Ethernet frames over 1500 bytes (jumbo frames), up to a maximum transmission unit (MTU) size of 9000 bytes. To modify the MTU size, adapt the TMM_DEFAULT_MTU parameter:

Important: The same MTU value must be set in each of the installed F5BigNetVlan CRs. CNFs does not currently support different MTU sizes.

f5-tmm:
  tmm:
    customEnvVars:
    - name: TMM_DEFAULT_MTU
      value: "9000"
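The "same MTU everywhere" requirement above can be checked mechanically before installing. A minimal sketch (the VLAN CR file contents are inlined for illustration; point the commands at your real CR file instead):

```shell
# Inlined stand-in for the mtu fields of the F5BigNetVlan CR file
# (assumption: your real file is e.g. cnf_vlans.yaml).
cat > /tmp/cnf_vlans.yaml <<'EOF'
mtu: 9000
mtu: 9000
EOF

# The TMM_DEFAULT_MTU value from the Helm values file.
tmm_mtu=9000

# Collect any mtu lines in the CR file that do not match TMM_DEFAULT_MTU.
bad=$(grep 'mtu:' /tmp/cnf_vlans.yaml | awk -v m="$tmm_mtu" '$2 != m')
if [ -z "$bad" ]; then
  echo "MTU values consistent"
else
  echo "MTU mismatch: $bad" >&2
fi
```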
The Controller relies on the OpenShift Performance Addon Operator to dynamically allocate and properly align TMM’s CPU cores. Use the steps below to enable the Performance Addon Operator:
A. Obtain the full performance profile name from the runtimeClass parameter:

oc get performanceprofile -o jsonpath='{..runtimeClass}{"\n"}'
In this example, the performance profile name is performance-cnf-loadbalancer:
performance-cnf-loadbalancer
B. Use the performance profile name to configure the runtimeClassName parameter, and set the parameters below in the Helm values file:

f5-tmm:
  tmm:
    topologyManager: "true"
    runtimeClassName: "performance-cnf-loadbalancer"
    pod:
      annotations:
        cpu-load-balancing.crio.io: disable
To advertise routing information between networks, or to scale TMM beyond a single instance, the f5-tmm-routing container must be enabled, and a Border Gateway Protocol (BGP) session must be established with an external neighbor. The parameters below configure an external BGP peering session:
Note: For additional BGP configuration parameters, refer to the BGP Overview guide.
f5-tmm:
  tmm:
    dynamicRouting:
      enabled: true
      exportZebosLogs: true
      tmmRouting:
        image:
          repository: "registry.com"
        resources:
          limits:
            cpu: "700m"
            memory: "1Gi"
          requests:
            cpu: "700m"
            memory: "1Gi"
        config:
          bgp:
            asn: 123
            neighbors:
            - ip: "192.168.10.100"
              asn: 456
              acceptsIPv4: true
      tmrouted:
        image:
          repository: "registry.com"
        resources:
          limits:
            cpu: "300m"
            memory: "512Mi"
          requests:
            cpu: "300m"
            memory: "512Mi"
Set the TMM default Gateway either using BGP or the F5BigNetStaticroute Custom Resource (CR) and perform the following:
a. Set the add_k8s_routes parameter to true. Sample configuration:

f5-tmm:
  tmm:
    add_k8s_routes: true
b. Provide RBAC user permissions for the ClusterRole or kubeadm-config configmaps. Sample configuration:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cr-f5-5gc-cgnat-{{.user}}
  # "namespace" omitted since ClusterRoles are not namespaced
rules:
- apiGroups:
  - "" # "" indicates the core API group
  resources:
  - configmaps
  resourceNames:
  - kubeadm-config
  - tmm-k8s-routes-configmap-{{.ns}}
  verbs:
  - get
  - update
  - patch
  - list
  - watch
Important: If you enable the default gateway using either BGP or the F5BigNetStaticroute Custom Resource (CR) without enabling the add_k8s_routes parameter, pod-to-pod communication will fail.

When the add_k8s_routes parameter is enabled, and you intend to perform a product installation as a non-cluster admin, set the parameters below:

create_k8s_routes_sa: false
k8s_routes_sa_name: <sa-name>
For this service account, the permissions are only needed to run a prehook job to get pods and service networks on the cluster. This prehook job will be deleted once tmm-k8s-routes-configmap is created.
Use the below RBAC settings while creating a service account and user:
i. ClusterRoles:
Bind the service account with the following RBAC rules:
- apiGroups:
  - config.openshift.io
  resources:
  - networks
  verbs:
  - get
The F5BigDownloaderPolicy feature requires the following additional permissions:
- apiGroups:
  - apiextensions.k8s.io
  resources:
  - customresourcedefinitions
  verbs:
  - get
  - list
  - watch
  - update
ii. Role: Bind the below rules to the user:
- apiGroups:
  - batch
  resources:
  - jobs
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - update
  - watch
The Fluentd Logging collector is enabled by default, and requires setting the f5-toda-logging.fluentd.host parameter.

A. When Fluentd is enabled, ensure the fluentd.host parameter targets the BIG-IP Controller Namespace. In this example, the host value includes the Fluentd Pod’s cnf-gateway Namespace:

f5-tmm:
  f5-toda-logging:
    enabled: true
    fluentd:
      host: 'f5-toda-fluentd.cnf-gateway.svc.cluster.local.'
    sidecar:
      image:
        repository: "local.registry.com"
B. When Fluentd is disabled, set the f5-toda-logging.enabled parameter to false:

f5-tmm:
  f5-toda-logging:
    enabled: false
The default resources for the TMM Pod are as follows:
f5-tmm:
  tmm:
    resources:
      limits:
        cpu: 2
        hugepages-2Mi: "3Gi"
        memory: "2Gi"
      requests:
        cpu: 2
        hugepages-2Mi: "3Gi"
        memory: "2Gi"
Note: The CPU, memory, and hugepages-2Mi values between limits and requests must be the same for the TMM pod.
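The limits-equal-requests requirement can be sanity-checked before installing. A sketch (the resources fragment is inlined here to keep the example self-contained; run the awk commands against your real values file instead):

```shell
# Inlined copy of the TMM resources block shown above.
cat > /tmp/tmm-resources.yaml <<'EOF'
      limits:
        cpu: 2
        hugepages-2Mi: "3Gi"
        memory: "2Gi"
      requests:
        cpu: 2
        hugepages-2Mi: "3Gi"
        memory: "2Gi"
EOF

# Extract the lines under limits: and under requests: and compare them.
limits=$(awk '/limits:/{f=1; next} /requests:/{f=0} f' /tmp/tmm-resources.yaml)
requests=$(awk '/requests:/{f=1; next} f' /tmp/tmm-resources.yaml)

if [ "$limits" = "$requests" ]; then
  echo "OK: limits match requests"
else
  echo "MISMATCH: limits and requests differ" >&2
fi
```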
AFM values¶
To enable the Edge Firewall feature, and to ensure Helm can obtain the image from the local image registry, add the following Helm values:
global:
  afm:
    enabled: true
  pccd:
    enabled: true
f5-afm:
  enabled: true
  cert-orchestrator:
    enabled: true
  afm:
    pccd:
      enabled: true
      image:
        repository: "local.registry.com"
The Edge Firewall’s default firewall mode accepts all network packets not matching an F5BigFwPolicy firewall rule. You can modify this behavior using the F5BigContextGlobal Custom Resource (CR). For additional details about the default firewall mode and logging parameters, refer to the Firewall mode section of the F5BigFwPolicy overview.
The Fluentd Logging collector is enabled by default, and requires setting the f5-toda-logging.fluentd.host parameter. If you installed Fluentd, ensure the host parameter targets the Fluentd Pod’s namespace. In this example, the host value includes the Fluentd Pod’s cnf-gateway Namespace:

f5-afm:
  afm:
    fluentbit_sidecar:
      fluentd:
        host: 'f5-toda-fluentd.cnf-gateway.svc.cluster.local.'
      image:
        repository: "local.registry.com"
      resources:
        limits:
          cpu: "0.5"
          memory: "512Mi"
        requests:
          cpu: "0.25"
          memory: "256Mi"
The default resources for the Edge Firewall are as follows:
f5-afm:
  afm:
    pccd:
      resources:
        limits:
          cpu: "1"
          memory: "512Mi"
        requests:
          cpu: "0.5"
To install the AFM pod, execute the following command:
oc adm policy add-scc-to-user privileged -n <f5ingress-namespace> -z f5-afm
IPSD values¶
Use these steps to enable and configure the Intrusion Prevention System Helm values for your environment.
To enable the IPSD Pod, and to ensure Helm can obtain the image from the local image registry, add the following Helm values:
f5-ipsd:
  enabled: true
  ipsd:
    image:
      repository: "local.registry.com"
The Fluentd Logging collector is enabled by default, and requires setting the f5-toda-logging.fluentd.host parameter. If you installed Fluentd, ensure the host parameter targets the Fluentd Pod’s namespace. In this example, the host value includes the cnf-gateway Namespace:

f5-ipsd:
  ipsd:
    fluentbit_sidecar:
      fluentd:
        host: "f5-toda-fluentd.cnf-gateway.svc.cluster.local."
      image:
        repository: "local.registry.com"
      resources:
        limits:
          cpu: "0.5"
          memory: "512Mi"
        requests:
          cpu: "0.25"
          memory: "256Mi"
The default resources for the IPSD are as follows:
f5-ipsd:
  ipsd:
    resources:
      limits:
        cpu: "1"
        memory: "512Mi"
      requests:
        cpu: "0.5"
        memory: "256Mi"
To install the IPSD pod, execute the following command:
oc adm policy add-scc-to-user privileged -n <f5ingress-namespace> -z f5-ipsd
Controller values¶
To ensure Helm can obtain the image from the local image registry, add the following Helm values:
The example below also includes the CNFs CWC values.
controller:
  image:
    repository: "local.registry.com"
The following section is needed for the controller to fetch the license information. It communicates with CWC via RabbitMQ to obtain the license. Additionally, it includes an alternative path to read the Kubernetes secret where CWC stores the license details. This alternative option is intended to minimize the impact of issues in CWC/RabbitMQ on the control plane and data plane:
controller:
  cwcNamespace: default
  f5_lic_helper:
    enabled: true
    rabbitmqNamespace: "cnf-telemetry"
    image:
      repository: "local.registry.com"
To remove the singular reliance on RabbitMQ for fetching license information in the controller, do the following:
Install CNF with the CWC namespace:
controller:
  cwcNamespace: default
Note: The default value of the cwcNamespace parameter is default.

If using default RBAC, no change is required. For custom RBAC, to allow the controller to access the license from CWC, create a ClusterRole and ClusterRoleBinding with the F5Ingress controller service account as shown in the example:

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: <ClusterRoleName>
rules:
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["licensestatus"]
  verbs: ["list", "get", "watch"]
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: <ClusterRoleBindingName>
roleRef:
  kind: ClusterRole
  name: <ClusterRoleName>
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: <cnff5ingressServiceAccount>
  namespace: <cnff5ingressNamespace>
To enable the TMM pod manager, add the following Helm values to the values file:

tmm_pod_manager:
  image:
    repository: repo.f5.com
    pullPolicy: always
The Fluentd Logging collector is enabled by default, and requires setting the f5-toda-logging.fluentd.host parameter. If you installed Fluentd, ensure the host parameter targets the Fluentd Pod’s namespace. In this example, the host value includes the cnf-gateway Namespace:

controller:
  fluentbit_sidecar:
    fluentd:
      host: 'f5-toda-fluentd.cnf-gateway.svc.cluster.local.'
    image:
      repository: "local.registry.com"
The default resources for the Controller Pod are as follows:
controller:
  resources:
    limits:
      cpu: 1
      memory: 1Gi
    requests:
      cpu: 0.5
      memory: 1Gi
To install the Controller/f5ingress pod, execute the following command:
oc adm policy add-scc-to-user privileged -n <f5ingress-namespace> -z f5ingress-f5ingress
Downloader values¶
Use these steps to enable and configure the Downloader Helm values for your environment.
To connect to the RabbitMQ open source message broker and ensure proper functioning of the downloader pod, enable the RabbitMQ namespace in the values.yaml file.
downloader:
  name: f5-downloader
  debug: false
  rabbitmqNamespace: "cnf-telemetry"
  storage:
    enabled: true
    storageClassName: standard
    access: ReadWriteOnce
To enable the Downloader Pod, and to ensure Helm can obtain the image from the local image registry, add the following Helm values:
f5-downloader:
  enabled: true
  downloader:
    image:
      repository: "local.registry.com"
The Fluentd Logging collector is enabled by default, and requires setting the f5-toda-logging.fluentd.host parameter. If you installed Fluentd, ensure the host parameter targets the Fluentd Pod’s Namespace. In this example, the host value includes the cnf-gateway Namespace:

f5-downloader:
  downloader:
    fluentbit_sidecar:
      image:
        repository: "local.registry.com"
      fluentd:
        host: "f5-toda-fluentd.cnf-gateway.svc.cluster.local."
      resources:
        limits:
          cpu: "0.5"
          memory: "512Mi"
        requests:
          cpu: "0.25"
          memory: "256Mi"
If CNF is deployed on a cluster with multiple worker nodes, the downloader Pod requires a persistent volume with a storage class that supports the ReadWriteMany access mode, and access: ReadWriteMany must be specified in the values file. The following is an example of the values needed:

f5-downloader:
  enabled: true
  downloader:
    …
    storage:
      enabled: true
      access: ReadWriteMany
      storageClassName: managed-nfs-storage
To enable the Downloader, the CRD updater must also be enabled. To enable the CRD updater, add the following to the values.yaml file:

crdupdater:
  name: crdupdater
  image:
    repository: "local.registry.com"
    pullPolicy:
The default resources for the Downloader Pod are as follows:
f5-downloader:
  downloader:
    resources:
      limits:
        cpu: "1"
        memory: "2000Mi"
      requests:
        cpu: "500m"
        memory: "1000Mi"
To install the Downloader pod, execute the following command:
oc adm policy add-scc-to-user privileged -n <f5ingress-namespace> -z f5-downloader
Note: Along with this Pod, the CRD updater sidecar will also come up in the BIG-IP Controller Pod.
Completed values¶
The completed Helm values file should appear similar to the following:
f5-tmm:
  enabled: true
  tmm:
    image:
      repository: "local.registry.com"
    hugepages:
      enabled: true
    sessiondb:
      useExternalStorage: "true"
    topologyManager: true
    runtimeClassName: "performance-cnf-loadbalancer"
    pod:
      annotations:
        cpu-load-balancing.crio.io: disable
    cniNetworks: "cnf-gateway/clientside-netdevice,cnf-gateway/serverside-netdevice"
    add_k8s_routes: true
    customEnvVars:
    - name: OPENSHIFT_VFIO_RESOURCE_1
      value: "clientsideNetPolicy"
    - name: OPENSHIFT_VFIO_RESOURCE_2
      value: "serversideNetPolicy"
    - name: TMM_DEFAULT_MTU
      value: "9000"
    - name: SESSIONDB_DISCOVERY_SENTINEL
      value: "true"
    - name: SESSIONDB_EXTERNAL_SERVICE
      value: "f5-dssm-sentinel.cnf-gateway"
    - name: SSL_SERVERSIDE_STORE
      value: "/tls/tmm/mds/clt"
    - name: SSL_TRUSTED_CA_STORE
      value: "/tls/tmm/mds/clt"
    dynamicRouting:
      enabled: true
      tmmRouting:
        image:
          repository: "local.registry.com"
        resources:
          limits:
            cpu: "700m"
            memory: "1Gi"
          requests:
            cpu: "700m"
            memory: "1Gi"
        config:
          bgp:
            asn: 123
            neighbors:
            - ip: "192.168.10.200"
              asn: 456
              acceptsIPv4: true
      tmrouted:
        image:
          repository: "local.registry.com"
        resources:
          limits:
            cpu: "300m"
            memory: "512Mi"
          requests:
            cpu: "300m"
            memory: "512Mi"
    blobd:
      image:
        repository: "local.registry.com"
      resources:
        limits:
          cpu: "1"
          memory: "4Gi"
        requests:
          cpu: "0.5"
          memory: "3.5Gi"
  f5-toda-logging:
    enabled: true
    fluentd:
      host: 'f5-toda-fluentd.cnf-gateway.svc.cluster.local.'
    sidecar:
      image:
        repository: "local.registry.com"
  debug:
    enabled: true
    rabbitmqNamespace: "cnf-telemetry"
    image:
      repository: "local.registry.com"
    resources:
      limits:
        cpu: "500m"
        memory: "1Gi"
      requests:
        cpu: "500m"
        memory: "1Gi"
global:
  afm:
    enabled: true
  pccd:
    enabled: true
  imageCredentials:
    name: far-secret
  controller:
    crdupdater:
      enabled: true
f5-afm:
  enabled: true
  cert-orchestrator:
    enabled: true
  afm:
    pccd:
      enabled: true
      image:
        repository: "local.registry.com"
      resources:
        limits:
          cpu: "1"
          memory: "512Mi"
        requests:
          cpu: "0.5"
    fluentbit_sidecar:
      fluentd:
        host: 'f5-toda-fluentd.cnf-gateway.svc.cluster.local.'
      image:
        repository: "local.registry.com"
      resources:
        limits:
          cpu: "0.5"
          memory: "512Mi"
        requests:
          cpu: "0.25"
          memory: "256Mi"
f5-ipsd:
  enabled: true
  ipsd:
    image:
      repository: "local.registry.com"
    resources:
      limits:
        cpu: "1"
        memory: "512Mi"
      requests:
        cpu: "0.5"
        memory: "256Mi"
    fluentbit_sidecar:
      fluentd:
        host: "f5-toda-fluentd.cnf-gateway.svc.cluster.local."
      image:
        repository: "local.registry.com"
      resources:
        limits:
          cpu: "0.5"
          memory: "512Mi"
        requests:
          cpu: "0.25"
          memory: "256Mi"
f5-downloader:
  downloader:
    fluentbit_sidecar:
      image:
        repository: "local.registry.com"
      fluentd:
        host: "f5-toda-fluentd.cnf-gateway.svc.cluster.local."
      resources:
        limits:
          cpu: "0.5"
          memory: "512Mi"
        requests:
          cpu: "0.25"
          memory: "256Mi"
controller:
  image:
    repository: "local.registry.com"
  crdupdater:
    name: crdupdater
    image:
      repository: "local.registry.com"
      pullPolicy:
    resources:
      limits:
        cpu: 0.5
        memory: 512Mi
      requests:
        cpu: 0.25
        memory: 256Mi
  resources:
    limits:
      cpu: 1
      memory: 1Gi
    requests:
      cpu: 0.5
      memory: 1Gi
  f5_lic_helper:
    enabled: true
    rabbitmqNamespace: "cnf-telemetry"
    image:
      repository: "local.registry.com"
    resources:
      limits:
        cpu: 0.5
        memory: 512Mi
      requests:
        cpu: 0.25
        memory: 256Mi
  tmm_pod_manager:
    image:
      repository: repo.f5.com
      pullPolicy: always
    resources:
      limits:
        cpu: 0.5
        memory: 512Mi
      requests:
        cpu: 0.25
        memory: 256Mi
  fluentbit_sidecar:
    enabled: true
    fluentd:
      host: 'f5-toda-fluentd.cnf-gateway.svc.cluster.local.'
    image:
      repository: "local.registry.com"
    resources:
      limits:
        cpu: 0.5
        memory: 512Mi
      requests:
        cpu: 0.25
        memory: 256Mi
f5-stats_collector:
  enabled: true
  image:
    repository: "local.registry.com"
  stats_collector:
    resources:
      limits:
        cpu: 0.5
        memory: 512Mi
      requests:
        cpu: 0.25
        memory: 256Mi
Installation¶
Change into the directory containing the latest CNFs Software, and obtain the f5ingress Helm chart version:
In this example, the CNF files are in the cnfinstall directory:
cd cnfinstall
ls -1 tar | grep f5ingress
The example output should appear similar to the following:
f5ingress-v0.542.0-0.0.154.tgz
If you haven’t already, create a new Project for the CNFs Pods using the following command syntax:
oc create ns <project name>
In this example, a new Project named cnf-gateway is created:
oc create ns cnf-gateway
Install the BIG-IP Controller using the following command syntax:
helm install f5ingress tar/<helm chart> \
  -f <values file> -n <namespace>
For example:
helm install f5ingress tar/f5ingress-v0.542.0-0.0.154.tgz \
  -f ingress-values.yaml -n cnf-gateway
Verify the Pods have installed successfully, and all containers are Running:
oc get pods -n cnf-gateway
In this example, all containers have a STATUS of Running as expected:
NAME                                   READY   STATUS
f5-afm-d67cd45d5-z6tch                 2/2     Running
f5-ipsd-d886bbb78-wb5w7                2/2     Running
f5-tmm-7458484b8c-fmbgd                4/4     Running
f5ingress-f5ingress-76d8679d4b-w989t   2/2     Running
Verify the f5ingress Pod has been successfully licensed:
kubectl logs f5ingress-f5ingress-7965947785-b8f5c -c f5-lic-helper \
  -n cnf-gateway | grep -i LicenseVerified
In this example, the f5ingress Pod’s f5-lic-helper indicates Entitlement: paid.
2023-02-03 22:00:44.221|A|informational|1|Message="Payload type: ResponseCM20LicenseVerified Entitlement: paid Expiry Date: 2024-01-29T00:01:03Z"
When the BIG-IP controller pod is deployed or restarted, and the status of the f5ingress pod changes to Running, wait for at least 15 seconds before applying any other CRs. If any CRs are applied during this time, they might be rejected with the following error:
Internal error occurred: failed calling webhook "f5validate.f5net.com": failed to call webhook: Post "https://f5-validation-svc.cnf-gateway.svc:5000/f5-validator?timeout=10s": no endpoints available for service "f5-validation-svc"
Since the BIG-IP Controller may not yet be ready to consume CRs, it may not serve requests for the f5-validation-svc service at that moment.

Note: If this error occurs, re-apply the CRs after 15 seconds; they should then apply without validating-webhook errors.
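The wait-and-retry guidance above can be automated. A sketch of a generic retry helper (retry_apply and apply_cr are hypothetical names; in practice "$@" would be `oc apply -f <cr-file> -n <namespace>`):

```shell
# Seconds to wait between retries; the doc recommends 15.
RETRY_DELAY="${RETRY_DELAY:-15}"

# Retry a command until it succeeds, waiting RETRY_DELAY seconds between tries.
retry_apply() {
  until "$@"; do
    echo "apply rejected; retrying in ${RETRY_DELAY}s"
    sleep "$RETRY_DELAY"
  done
}

# Demo with a stub that fails once, then succeeds. RETRY_DELAY=0 keeps the
# demo fast; use the default 15 against a real cluster.
RETRY_DELAY=0
attempts=0
apply_cr() {
  attempts=$((attempts + 1))
  [ "$attempts" -ge 2 ]
}
retry_apply apply_cr
echo "applied after $attempts attempts"
```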
Continue to the next procedure to configure the TMM interfaces.
Interfaces¶
The F5BigNetVlan Custom Resource (CR) applies TMM’s interface configuration: IP addresses, VLAN tags, MTU, and so on. Use the steps below to configure and install clientside and serverside F5BigNetVlan CRs:
Configure external and internal F5BigNetVlan CRs. You can place both of the example CRs into a single YAML file:
Important: Set the cmp_hash parameter values to SRC_ADDR on the clientside (upstream) VLAN, and DST_ADDR on the serverside (downstream) VLAN.

apiVersion: "k8s.f5net.com/v1"
kind: F5BigNetVlan
metadata:
  name: "subscriber-vlan"
  namespace: "cnf-gateway"
spec:
  name: clientside
  interfaces:
  - "1.1"
  selfip_v4s:
  - 10.10.10.100
  - 10.10.10.101
  prefixlen_v4: 24
  selfip_v6s:
  - 2002::10:10:10:100
  - 2002::10:10:10:101
  prefixlen_v6: 116
  mtu: 9000
  cmp_hash: SRC_ADDR
---
apiVersion: "k8s.f5net.com/v1"
kind: F5BigNetVlan
metadata:
  name: "application-vlan"
  namespace: "cnf-gateway"
spec:
  name: serverside
  interfaces:
  - "1.2"
  selfip_v4s:
  - 192.168.10.100
  - 192.168.10.101
  prefixlen_v4: 24
  selfip_v6s:
  - 2002::192:168:10:100
  - 2002::192:168:10:101
  prefixlen_v6: 116
  mtu: 9000
  cmp_hash: DST_ADDR
Install the VLAN CRs:
oc apply -f cnf_vlans.yaml
List the VLAN CRs:
oc get f5-big-net-vlan -n cnf-gateway
In this example, the VLAN CRs are installed:
NAME
subscriber-vlan
application-vlan
If the Debug Sidecar is enabled (the default), you can verify the f5-tmm container’s interface configuration:
oc exec -it deploy/f5-tmm -c debug -n cnf-gateway -- ip a
The interfaces should appear at the bottom of the list:
8: clientside: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000
    inet 10.10.10.100/24 brd 10.10.10.255 scope global clientside
       valid_lft forever preferred_lft forever
    inet6 2002::10:10:10:100/116 scope global
       valid_lft forever preferred_lft forever
9: serverside: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000
    link/ether 1e:80:c1:e8:81:15 brd ff:ff:ff:ff:ff:ff
    inet 192.168.10.100/24 brd 192.168.10.255 scope global serverside
       valid_lft forever preferred_lft forever
    inet6 2002::192:168:10:100/116 scope global
       valid_lft forever preferred_lft forever
Uninstallation¶
The following steps are mandatory to clean up the product installation:

Note: The namespace in the delete and uninstall commands can be either tmmNamespace or watchNamespace.
Delete all configured CRs:
oc delete -f <cr-file> -n <*namespace>
For example:
oc delete -f cnf_vlans.yaml -n cnf-gateway
Uninstall the product:
helm uninstall <helm-installation-name> -n <*namespace>
For example:
helm uninstall f5ingress -n cnf-gateway
Important: If the uninstallation order is missed, use the following script to remove finalizers from CRs, then proceed with uninstalling the product and namespace.
#!/bin/sh
if [ $# -ne 1 ]; then
  echo "Invalid Arguments, provide namespace as argument"
  exit 1
fi
echo "This will remove finalizers of all usecase CRs of namespace $1"
crs=$(oc api-resources --namespaced=true --verbs=list -o name | egrep 'f5-big|f5-cnf' | \
  xargs -n 1 oc get --show-kind --ignore-not-found -n $1 | grep f5 | cut -d ' ' -f 1)
for cr in $crs; do
  result=$(oc -n $1 patch $cr -p '{"metadata":{"finalizers":[]}}' --type=merge)
  echo $result
done
echo ""
echo "Removed finalizers of all CRs of namespace $1"
For more details, refer to the Finalizers section in the CNF CRs guide.
Clean up the cluster. For this, the CA secret must be deleted manually (the secret is located in the f5-cert-manager namespace with the ca-key-pair name):

oc delete secret <ca-key-pair> -n <F5-CERT-MANAGER-NAMESPACE>
For example:
oc delete secret ca-key-pair -n cnf-cert-manager
Next step¶
To begin processing application traffic, continue to the CNFs CRs guide.
Feedback¶
Provide feedback to improve this document by emailing cnfdocs@f5.com.
Supplemental