BIG-IP Controller¶
Overview¶
The Cloud-Native Network Functions (CNFs) BIG-IP Controller, Edge Firewall, and Traffic Management Microkernel (TMM) Proxy Pods are the primary CNFs software components, and install together using Helm. Once integrated, Edge Firewall and the TMM Proxy Pods can be configured to process and protect high-performance 5G workloads using CNFs CRs.
This document guides you through creating the CNFs installation Helm values file, installing the Pods, and creating TMM’s clientside (upstream) and serverside (downstream) F5BigNetVlan interfaces.
Requirements¶
Ensure you have:
- Installed the CNFs Software.
- Installed the CNFs Cert Manager.
- Completed the CNFs Licensing.
- A Linux based workstation with Helm installed.
Procedures¶
The CNFs Pods rely on a number of custom Helm values to install successfully. Use the steps below to obtain important cluster configuration data, and create the proper BIG-IP Controller Helm values file for the installation.
If you haven’t already, create a new Project for the CNFs Pods:
oc new-project <project>
In this example, a new Project named cnf-gateway is created:
oc new-project cnf-gateway
Switch to the CNFs Project:
oc project <project>
In this example, the cnf-gateway Project is selected:
oc project cnf-gateway
TMM values¶
To ensure Helm can obtain the TMM image from the local image registry, add the following Helm values:
tmm:
  image:
    repository: "local.registry.com"
Add the ServiceAccount for the TMM Pod to the privileged security context constraint (SCC):
A. By default, TMM uses the default ServiceAccount:
oc adm policy add-scc-to-user privileged -n <project> -z default
In this example, the default ServiceAccount is added to the privileged SCC for the cnf-gateway Project:
oc adm policy add-scc-to-user privileged -n cnf-gateway -z default
B. To use a custom ServiceAccount, you must also update the BIG-IP Controller Helm values file:
In this example, the custom spk-tmm ServiceAccount is added to the privileged SCC.
oc adm policy add-scc-to-user privileged -n cnf-gateway -z spk-tmm
In this example, the custom spk-tmm ServiceAccount is added to the Helm values file.
tmm:
  serviceAccount:
    name: spk-tmm
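Optionally, you can confirm the ServiceAccount was granted access by inspecting the privileged SCC's users list. This is a hedged check; it assumes oc adm policy add-scc-to-user updated the SCC's users field (its default behavior):
oc get scc privileged -o jsonpath='{.users}{"\n"}'
The output should include an entry such as system:serviceaccount:cnf-gateway:default, or the custom ServiceAccount name.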
As described in the Networking Overview, the Controller uses OpenShift network node policies and network attachment definitions to create the TMM Proxy Pod interface list. Use the steps below to obtain the node policies and attachment definition names, and configure the TMM interface list:
A. Obtain the names of the network attachment definitions:
oc get net-attach-def
In this example, the network attachment definitions are named clientside-netdevice and serverside-netdevice:
clientside-netdevice
serverside-netdevice
B. Obtain the names of the network node policies using the network attachment definition resourceName parameter:
oc describe net-attach-def | grep openshift.io
In this example, the network node policies are named clientsideNetPolicy and serversideNetPolicy:
Annotations: k8s.v1.cni.cncf.io/resourceName: openshift.io/clientsideNetPolicy
Annotations: k8s.v1.cni.cncf.io/resourceName: openshift.io/serversideNetPolicy
C. Create a Helm values file named ingress-values.yaml and set the node attachment and node policy names to configure the TMM interface list:
In this example, the cniNetworks parameter references the network attachments, and orders TMM's interface list as: 1.1 (clientside) and 1.2 (serverside):
tmm:
  cniNetworks: "namespace/clientside-netdevice,namespace/serverside-netdevice"
  customEnvVars:
    - name: OPENSHIFT_VFIO_RESOURCE_1
      value: "clientsideNetPolicy"
    - name: OPENSHIFT_VFIO_RESOURCE_2
      value: "serversideNetPolicy"
CNFs supports Ethernet frames over 1500 bytes (Jumbo frames), up to a maximum transmission unit (MTU) size of 9000 bytes. To modify the MTU size, adapt the TMM_DEFAULT_MTU parameter:
Important: The same MTU value must be set in each of the installed F5BigNetVlan CRs. CNFs does not currently support different MTU sizes.
tmm:
  customEnvVars:
    - name: TMM_DEFAULT_MTU
      value: "9000"
The Controller relies on the OpenShift Performance Addon Operator to dynamically allocate and properly align TMM’s CPU cores. Use the steps below to enable the Performance Addon Operator:
A. Obtain the full performance profile name from the runtimeClass parameter:
oc get performanceprofile -o jsonpath='{..runtimeClass}{"\n"}'
In this example, the performance profile name is performance-cnf-loadbalancer:
performance-cnf-loadbalancer
B. Use the performance profile name to configure the runtimeClassName parameter, and set the parameters below in the Helm values file:
tmm:
  topologyManager: "true"
  runtimeClassName: "performance-cnf-loadbalancer"
  pod:
    annotations:
      cpu-load-balancing.crio.io: disable
To advertise routing information between networks, or to scale TMM beyond a single instance, the f5-tmm-routing container must be enabled, and a Border Gateway Protocol (BGP) session must be established with an external neighbor. The parameters below configure an external BGP peering session:
Note: For additional BGP configuration parameters, refer to the BGP Overview guide.
tmm:
  dynamicRouting:
    enabled: true
    exportZebosLogs: true
    tmmRouting:
      image:
        repository: "registry.com"
      config:
        bgp:
          asn: 123
          neighbors:
            - ip: "192.168.10.100"
              asn: 456
              acceptsIPv4: true
    tmrouted:
      image:
        repository: "registry.com"
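After installation, you can optionally verify the BGP peering from the Debug Sidecar, which includes the imish routing shell. This is a hedged example; it assumes the Debug Sidecar remains enabled (the default):
oc exec -it deploy/f5-tmm -c debug -n cnf-gateway -- imish
At the imish prompt, run show bgp neighbors and confirm the session state is Established.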
To set TMM's default gateway using either BGP or the F5BigNetStaticroute Custom Resource (CR), set the add_k8s_routes parameter to true:
tmm:
  add_k8s_routes: true
Important: If you enable the default gateway using either BGP or the F5BigNetStaticroute Custom Resource (CR) without enabling the add_k8s_routes parameter, pod-to-pod communication will fail.
When the add_k8s_routes parameter is enabled, and you intend to perform the product installation as a non-cluster admin, also set the parameters below:
create_k8s_routes_sa: false
k8s_routes_sa_name: <sa-name>
This service account only needs permissions to run a prehook job that obtains the Pod and Service networks on the cluster. The prehook job is deleted once the tmm-k8s-routes-configmap is created.
Use the following RBAC settings when creating the service account and user (a combined manifest sketch follows the rules below):
i. ClusterRole: Bind the service account with the following RBAC rules:
- apiGroups:
    - config.openshift.io
  resources:
    - networks
  verbs:
    - get
ii. Role: Bind the below rules to the user:
- apiGroups:
    - batch
  resources:
    - jobs
  verbs:
    - create
    - delete
    - get
    - list
    - patch
    - update
    - watch
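The following is a minimal, hypothetical sketch of how those rules could be assembled into manifests. The names tmm-routes-sa (the value supplied for k8s_routes_sa_name), tmm-routes-network-reader, and tmm-routes-jobs are placeholders, and <installing-user> is the non-cluster-admin user performing the installation; adjust all names and the namespace to your environment:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tmm-routes-sa
  namespace: cnf-gateway
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: tmm-routes-network-reader
rules:
  - apiGroups:
      - config.openshift.io
    resources:
      - networks
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tmm-routes-network-reader
subjects:
  - kind: ServiceAccount
    name: tmm-routes-sa
    namespace: cnf-gateway
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: tmm-routes-network-reader
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tmm-routes-jobs
  namespace: cnf-gateway
rules:
  - apiGroups:
      - batch
    resources:
      - jobs
    verbs:
      - create
      - delete
      - get
      - list
      - patch
      - update
      - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tmm-routes-jobs
  namespace: cnf-gateway
subjects:
  - kind: User
    name: <installing-user>
    apiGroup: rbac.authorization.k8s.io
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: tmm-routes-jobs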
The Fluentd Logging collector is enabled by default, and requires setting the f5-toda-logging.fluentd.host parameter.
A. When Fluentd is enabled, ensure the fluentd.host parameter targets the BIG-IP Controller Namespace:
In this example, the host value includes the Fluentd Pod's cnf-gateway Namespace.
f5-toda-logging:
  fluentd:
    host: 'f5-toda-fluentd.cnf-gateway.svc.cluster.local.'
  sidecar:
    image:
      repository: "local.registry.com"
B. When Fluentd is disabled, set the f5-toda-logging.enabled parameter to false:
f5-toda-logging:
  enabled: false
AFM values¶
To enable the Edge Firewall feature, and to ensure Helm can obtain the image from the local image registry, add the following Helm values:
global:
  afm:
    enabled: true
  pccd:
    enabled: true
f5-afm:
  enabled: true
  cert-orchestrator:
    enabled: true
  afm:
    pccd:
      enabled: true
    image:
      repository: "local.registry.com"
The Edge Firewall's default firewall mode accepts all network packets not matching an F5BigFwPolicy firewall rule. You can modify this behavior using the F5BigContextGlobal Custom Resource (CR). For additional details about the default firewall mode and logging parameters, refer to the Firewall mode section of the F5BigFwPolicy overview.
The Fluentd Logging collector is enabled by default, and requires setting the f5-toda-logging.fluentd.host parameter. If you installed Fluentd, ensure the host parameter targets the Fluentd Pod's namespace:
In this example, the host value includes the Fluentd Pod's cnf-gateway Namespace.
f5-afm:
  afm:
    fluentbit_sidecar:
      fluentd:
        host: 'f5-toda-fluentd.cnf-gateway.svc.cluster.local.'
      image:
        repository: "local.registry.com"
IPSD values¶
Use these steps to enable the Intrusion Prevention System (IPS) and configure its Helm values for your environment.
To enable the IPSD Pod, and to ensure Helm can obtain the image from the local image registry, add the following Helm values:
f5-ipsd:
  enabled: true
  ipsd:
    image:
      repository: "local.registry.com"
The Fluentd Logging collector is enabled by default, and requires setting the f5-toda-logging.fluentd.host parameter. If you installed Fluentd, ensure the host parameter targets the Fluentd Pod's namespace:
In this example, the host value includes the cnf-gateway Namespace.
f5-ipsd:
  ipsd:
    fluentbit_sidecar:
      fluentd:
        host: "f5-toda-fluentd.cnf-gateway.svc.cluster.local."
      image:
        repository: "local.registry.com"
Controller values¶
To ensure Helm can obtain the image from the local image registry, add the following Helm values:
The example below also includes the CNFs CWC values.
controller:
  image:
    repository: "local.registry.com"
  f5_lic_helper:
    enabled: true
    rabbitmqNamespace: "cnf-telemetry"
    image:
      repository: "local.registry.com"
The Fluentd Logging collector is enabled by default, and requires setting the f5-toda-logging.fluentd.host parameter. If you installed Fluentd, ensure the host parameter targets the Fluentd Pod's namespace:
In this example, the host value includes the cnf-gateway Namespace.
controller:
  fluentbit_sidecar:
    fluentd:
      host: 'f5-toda-fluentd.cnf-gateway.svc.cluster.local.'
    image:
      repository: "local.registry.com"
Completed values¶
The completed Helm values file should appear similar to the following:
tmm:
  image:
    repository: "local.registry.com"
  hugepages:
    enabled: true
  sessiondb:
    useExternalStorage: "true"
  topologyManager: true
  runtimeClassName: "performance-cnf-loadbalancer"
  pod:
    annotations:
      cpu-load-balancing.crio.io: disable
  cniNetworks: "cnf-gateway/clientside-netdevice,cnf-gateway/serverside-netdevice"
  add_k8s_routes: true
  customEnvVars:
    - name: OPENSHIFT_VFIO_RESOURCE_1
      value: "clientsideNetPolicy"
    - name: OPENSHIFT_VFIO_RESOURCE_2
      value: "serversideNetPolicy"
    - name: TMM_DEFAULT_MTU
      value: "9000"
    - name: SESSIONDB_DISCOVERY_SENTINEL
      value: "true"
    - name: SESSIONDB_EXTERNAL_SERVICE
      value: "f5-dssm-sentinel.cnf-gateway"
    - name: SSL_SERVERSIDE_STORE
      value: "/tls/tmm/mds/clt"
    - name: SSL_TRUSTED_CA_STORE
      value: "/tls/tmm/mds/clt"
  dynamicRouting:
    enabled: true
    tmmRouting:
      image:
        repository: "local.registry.com"
      config:
        bgp:
          asn: 123
          neighbors:
            - ip: "192.168.10.200"
              asn: 456
              acceptsIPv4: true
    tmrouted:
      image:
        repository: "local.registry.com"
global:
  afm:
    enabled: true
  pccd:
    enabled: true
f5-afm:
  enabled: true
  cert-orchestrator:
    enabled: true
  afm:
    pccd:
      enabled: true
    image:
      repository: "local.registry.com"
    fluentbit_sidecar:
      fluentd:
        host: 'f5-toda-fluentd.cnf-gateway.svc.cluster.local.'
      image:
        repository: "local.registry.com"
f5-ipsd:
  enabled: true
  ipsd:
    image:
      repository: "local.registry.com"
    fluentbit_sidecar:
      fluentd:
        host: "f5-toda-fluentd.cnf-gateway.svc.cluster.local."
      image:
        repository: "local.registry.com"
controller:
  image:
    repository: "local.registry.com"
  f5_lic_helper:
    enabled: true
    rabbitmqNamespace: "cnf-telemetry"
    image:
      repository: "local.registry.com"
  fluentbit_sidecar:
    enabled: true
    fluentd:
      host: 'f5-toda-fluentd.cnf-gateway.svc.cluster.local.'
    image:
      repository: "local.registry.com"
f5-toda-logging:
  fluentd:
    host: 'f5-toda-fluentd.cnf-gateway.svc.cluster.local.'
  sidecar:
    image:
      repository: "local.registry.com"
debug:
  rabbitmqNamespace: "cnf-telemetry"
  image:
    repository: "local.registry.com"
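Optionally, you can render the chart locally with helm template before installing to catch indentation or values errors in ingress-values.yaml; this hedged check uses the chart archive described in the Installation section below:
helm template f5ingress tar/<helm chart> -f ingress-values.yaml -n <namespace> > /dev/null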
Installation¶
Change into the directory containing the latest CNFs Software, and obtain the f5ingress Helm chart version:
In this example, the CNF files are in the cnfinstall directory:
cd cnfinstall
ls -1 tar | grep f5ingress
The example output should appear similar to the following:
f5ingress-9.2.11.tgz
If you haven’t already, create a new Project for the CNFs Pods using the following command syntax:
oc create ns <project name>
In this example, a new Project named cnf-gateway is created:
oc create ns cnf-gateway
Install the BIG-IP Controller using the following command syntax:
helm install f5ingress tar/<helm chart> \
  -f <values file> -n <namespace>
For example:
helm install f5ingress tar/f5ingress-9.2.11.tgz \
  -f ingress-values.yaml -n cnf-gateway
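Optionally, confirm the Helm release deployed before checking the Pods:
helm list -n cnf-gateway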
Verify the Pods have installed successfully, and all containers are Running:
oc get pods -n cnf-gateway
In this example, all containers have a STATUS of Running as expected:
NAME                                   READY   STATUS
f5-afm-d67cd45d5-z6tch                 2/2     Running
f5-ipsd-d886bbb78-wb5w7                2/2     Running
f5-tmm-7458484b8c-fmbgd                4/4     Running
f5ingress-f5ingress-76d8679d4b-w989t   2/2     Running
Continue to the next procedure to configure the TMM interfaces.
Interfaces¶
The F5BigNetVlan Custom Resource (CR) applies TMM's interface configuration: IP addresses, VLAN tags, MTU, and so on. Use the steps below to configure and install the clientside and serverside F5BigNetVlan CRs:
Configure external and internal F5BigNetVlan CRs. You can place both of the example CRs into a single YAML file:
Important: Set the cmp_hash parameter values to SRC_ADDR on the clientside (upstream) VLAN, and DST_ADDR on the serverside (downstream) VLAN.
apiVersion: "k8s.f5net.com/v1"
kind: F5BigNetVlan
metadata:
  name: "subscriber-vlan"
  namespace: "cnf-gateway"
spec:
  name: clientside
  interfaces:
    - "1.1"
  selfip_v4s:
    - 10.10.10.100
    - 10.10.10.101
  prefixlen_v4: 24
  selfip_v6s:
    - 2002::10:10:10:100
    - 2002::10:10:10:101
  prefixlen_v6: 116
  mtu: 9000
  cmp_hash: SRC_ADDR
---
apiVersion: "k8s.f5net.com/v1"
kind: F5BigNetVlan
metadata:
  name: "application-vlan"
  namespace: "cnf-gateway"
spec:
  name: serverside
  interfaces:
    - "1.2"
  selfip_v4s:
    - 192.168.10.100
    - 192.168.10.101
  prefixlen_v4: 24
  selfip_v6s:
    - 2002::192:168:10:100
    - 2002::192:168:10:101
  prefixlen_v6: 116
  mtu: 9000
  cmp_hash: DST_ADDR
Install the VLAN CRs:
kubectl apply -f cnf_vlans.yaml
List the VLAN CRs:
kubectl get f5-big-net-vlan -n cnf-gateway
In this example, the VLAN CRs are installed:
NAME
subscriber-vlan
application-vlan
If the Debug Sidecar is enabled (the default), you can verify the f5-tmm container’s interface configuration:
kubectl exec -it deploy/f5-tmm -c debug -n cnf-gateway -- ip a
The interfaces should appear at the bottom of the list:
8: clientside: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000
    inet 10.10.10.100/24 brd 10.10.10.255 scope global clientside
       valid_lft forever preferred_lft forever
    inet6 2002::10:10:10:100/116 scope global
       valid_lft forever preferred_lft forever
9: serverside: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000
    link/ether 1e:80:c1:e8:81:15 brd ff:ff:ff:ff:ff:ff
    inet 192.168.10.100/24 brd 192.168.10.255 scope global serverside
       valid_lft forever preferred_lft forever
    inet6 2002::192:168:10:100/116 scope global
       valid_lft forever preferred_lft forever
Feedback¶
Provide feedback to improve this document by emailing cnfdocs@f5.com.