BIG-IP Controller¶
Overview¶
The Cloud-Native Network Functions (CNFs) BIG-IP Controller, Edge Firewall, and Traffic Management Microkernel (TMM) Proxy Pods are the primary CNFs software components, and install together using Helm. Once integrated, Edge Firewall and the TMM Proxy Pods can be configured to process and protect high-performance 5G workloads using CNFs CRs.
This document guides you through creating the CNFs installation Helm values file, installing the Pods, and creating TMM’s clientside (upstream) and serverside (downstream) F5BigNetVlan interfaces.
Requirements¶
Ensure you have:
- Installed the CNFs Software.
- Installed the CNFs Secrets.
- A Linux based workstation with Helm installed.
Procedures¶
Helm values¶
The CNFs Helm values file requires a number of custom parameter values to successfully integrate the CNFs software. Use the steps below to obtain important cluster configuration data, and configure the CNFs parameter values for a successful installation.
CNFs relies on the Kubernetes Topology Manager to dynamically allocate and properly align TMM's CPU cores. Create a new Helm values file named ingress-values.yaml and add the tmm.topologyManager parameter:

```yaml
tmm:
  topologyManager: "true"
```
Robin ip-pools provide information required to discover and order TMM’s SR-IOV network interface list. The interface numbers will be required later when configuring and installing the F5BigNetVlan Custom Resource (CR). Use the steps below to obtain the Robin ip-pool information:
Note: The BIG-IP Controller Namespace, cnf-gateway, was created during the CNFs Secrets installation.
A. To configure the tmm.cniNetworks parameter, obtain the Names of the clientside (upstream) and serverside (downstream) ip-pools:

```bash
robin ip-pool list
```

In this example, the clientside ip-pool Name is e810-180 and the serverside ip-pool Name is e810-181:

```
Name      | Driver | Network           | VLAN
----------+--------+-------------------+------
e810-180  | sriov  | 192.168.10.100/24 | -
e810-181  | sriov  | 10.10.10.100/24   | -
```
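If you automate this discovery step, the ip-pool Names and Networks can be extracted programmatically. The following is an illustrative sketch that parses sample robin ip-pool list output; the exact column layout is an assumption based on the example above and may vary by Robin version:

```python
# Parse sample `robin ip-pool list` output into dictionaries.
# The pipe-separated column layout is assumed from the example above.
SAMPLE = """\
Name      | Driver | Network           | VLAN
----------+--------+-------------------+------
e810-180  | sriov  | 192.168.10.100/24 | -
e810-181  | sriov  | 10.10.10.100/24   | -
"""

def parse_ip_pools(text):
    # Drop blank lines and the separator row (only '-', '+', spaces).
    lines = [l for l in text.splitlines() if l and not set(l) <= set("-+ ")]
    header = [c.strip() for c in lines[0].split("|")]
    pools = []
    for line in lines[1:]:
        values = [c.strip() for c in line.split("|")]
        pools.append(dict(zip(header, values)))
    return pools

pools = parse_ip_pools(SAMPLE)
print([p["Name"] for p in pools])  # ['e810-180', 'e810-181']
```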
B. To configure the tmm.customEnvVars parameters, obtain the NIC Tags value for the clientside ip-pool:

```bash
robin ip-pool info e810-180 | grep NIC
```

In this example, the NIC Tags value is p1p1:

```
NIC Tags: [{'name': 'p1p1'}]
```
C. Obtain the NIC Tags value for the serverside ip-pool:

```bash
robin ip-pool info e810-181 | grep NIC
```

In this example, the NIC Tags value is p1p2:

```
NIC Tags: [{'name': 'p1p2'}]
```
D. Use the ip-pool Name and NIC Tags values to create TMM's interface list in the BIG-IP Controller's Helm values. The interface maximum transmission unit (MTU) size can also be set here. In this example, TMM's clientside interface is 1.1, and the serverside interface is 1.2:

```yaml
tmm:
  cniNetworks: '[{"ippool": "e810-180", "mtu": 9000}, {"ippool": "e810-181", "mtu": 9000}]'
  robinNetworks: "true"
  customEnvVars:
  - name: ROBIN_VFIO_RESOURCE_1
    value: "P1P1_VFIOPCI"
  - name: ROBIN_VFIO_RESOURCE_2
    value: "P1P2_VFIOPCI"
```
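The cniNetworks JSON string and the ROBIN_VFIO_RESOURCE_* values can be generated from the values discovered in steps A through C. A minimal sketch, assuming the uppercase-NIC-Tag-plus-_VFIOPCI naming pattern shown in this example holds for your environment:

```python
import json

# Example values discovered in the previous steps.
pools = ["e810-180", "e810-181"]   # clientside, serverside ip-pool Names
nic_tags = ["p1p1", "p1p2"]        # NIC Tags for each pool
mtu = 9000

# cniNetworks is a JSON list embedded in a YAML string scalar.
cni_networks = json.dumps([{"ippool": p, "mtu": mtu} for p in pools])

# ROBIN_VFIO_RESOURCE_<n> values follow the <NICTAG>_VFIOPCI pattern
# seen in this example (an assumption, not a documented rule).
env_vars = [
    {"name": f"ROBIN_VFIO_RESOURCE_{i}", "value": f"{tag.upper()}_VFIOPCI"}
    for i, tag in enumerate(nic_tags, start=1)
]

print(cni_networks)
print(env_vars[0]["value"])  # P1P1_VFIOPCI
```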
To use the Calico CNI, set the TMM_CALICO_ROUTER parameter. If the CNI relies on a router to perform proxy ARP, set the TMM_IGNORE_GATEWAYS parameter to ensure TMM does not configure a default gateway:

Important: Enabling TMM_IGNORE_GATEWAYS may cause cluster (Pod-to-Pod) traffic to fail. To set routes for specific cluster IPs, review Cluster Traffic in the F5BigNetStaticroute CR guide.

```yaml
tmm:
  customEnvVars:
  - name: TMM_CALICO_ROUTER
    value: "default"
  - name: TMM_IGNORE_GATEWAYS
    value: "TRUE"
```
To advertise routing information between networks, or to scale TMM beyond a single instance, the f5-tmm-routing container must be enabled, and a Border Gateway Protocol (BGP) session must be established with an external neighbor. The parameters below configure an external BGP peering session:
Note: For additional BGP configuration parameters, refer to the BGP Overview guide.
```yaml
tmm:
  dynamicRouting:
    enabled: true
    exportZebosLogs: true
    tmmRouting:
      image:
        repository: "local.registry.com"
      config:
        bgp:
          asn: 123
          neighbors:
          - ip: "192.168.10.200"
            asn: 456
            acceptsIPv4: true
    tmrouted:
      image:
        repository: "local.registry.com"
```
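An external (eBGP) peering like the one above requires the local and neighbor autonomous system numbers to differ. The following illustrative check of that invariant uses plain dicts that mirror the bgp section of the values file (inlined rather than parsed, to stay stdlib-only):

```python
# Sanity-check an external BGP (eBGP) configuration: the local ASN and
# each neighbor ASN must differ, and ASNs must be in 1..4294967295.
bgp = {
    "asn": 123,
    "neighbors": [{"ip": "192.168.10.200", "asn": 456, "acceptsIPv4": True}],
}

def check_ebgp(cfg):
    errors = []
    if not 1 <= cfg["asn"] <= 4294967295:
        errors.append(f"invalid local asn {cfg['asn']}")
    for n in cfg["neighbors"]:
        if not 1 <= n["asn"] <= 4294967295:
            errors.append(f"invalid neighbor asn {n['asn']}")
        if n["asn"] == cfg["asn"]:
            errors.append(f"neighbor {n['ip']} shares the local asn (iBGP, not eBGP)")
    return errors

print(check_ebgp(bgp))  # []
```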
The Fluentd Logging collector is enabled by default, and requires setting the f5-toda-logging.fluentd.host parameter. If you installed Fluentd, ensure the host parameter targets the BIG-IP Controller Namespace, and update the afm, controller, and f5-toda-logging sections as follows:

Note: In this example, the BIG-IP Controller, Edge Firewall, and TMM Proxy Pods are installing to the cnf-gateway Namespace:

```yaml
afm:
  fluentbit_sidecar:
    fluentd:
      host: 'f5-toda-fluentd.cnf-gateway.svc.cluster.local.'
    image:
      repository: "local.registry.com"
controller:
  fluentbit_sidecar:
    fluentd:
      host: 'f5-toda-fluentd.cnf-gateway.svc.cluster.local.'
    image:
      repository: "local.registry.com"
f5-toda-logging:
  fluentd:
    host: "f5-toda-fluentd.cnf-gateway.svc.cluster.local."
  sidecar:
    image:
      repository: "local.registry.com"
```
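The host value is simply the cluster-internal DNS name of the f5-toda-fluentd Service. A sketch of how that FQDN is composed from the Service name and Namespace, assuming the default cluster.local cluster domain:

```python
def fluentd_host(service="f5-toda-fluentd", namespace="cnf-gateway",
                 cluster_domain="cluster.local"):
    """Build the fully qualified, root-anchored Service DNS name.

    The trailing dot marks the name as fully qualified, so the
    resolver skips search-domain expansion.
    """
    return f"{service}.{namespace}.svc.{cluster_domain}."

print(fluentd_host())  # f5-toda-fluentd.cnf-gateway.svc.cluster.local.
```

If you install the Pods to a different Namespace, only the namespace component of the host value changes.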
By default, the Edge Firewall accepts all network packets that do not match an F5BigFwPolicy firewall rule. You can modify this behavior using the defaultFirewallRule.action parameter. For additional details about the default firewall mode and logging parameters, refer to the Firewall mode section of the F5BigFwPolicy overview:

```yaml
afm:
  defaultFirewallRule:
    action: accept
    log: true
```
By default, the TMM container uses the default Kubernetes serviceAccount. Use the parameters below to modify the serviceAccount used by TMM:

```yaml
tmm:
  serviceAccount:
    create: false
    name: tmm-sa
```
The completed Helm values file should appear similar to the following:

Note: Set the image.repository parameter for each container to your local container registry.

```yaml
tmm:
  image:
    repository: "local.registry.com"
  hugepages:
    enabled: true
  sessiondb:
    useExternalStorage: "true"
  topologyManager: "true"
  cniNetworks: '[{"ippool": "e810-180", "mtu": 9000}, {"ippool": "e810-181", "mtu": 9000}]'
  robinNetworks: "true"
  customEnvVars:
  - name: ROBIN_VFIO_RESOURCE_1
    value: "P1P1_VFIOPCI"
  - name: ROBIN_VFIO_RESOURCE_2
    value: "P1P2_VFIOPCI"
  - name: TMM_IGNORE_GATEWAYS
    value: "TRUE"
  - name: TMM_CALICO_ROUTER
    value: "default"
  - name: REDIS_CA_FILE
    value: "/etc/ssl/certs/dssm-ca.crt"
  - name: REDIS_AUTH_CERT
    value: "/etc/ssl/certs/dssm-cert.crt"
  - name: REDIS_AUTH_KEY
    value: "/etc/ssl/private/dssm-key.key"
  - name: SESSIONDB_EXTERNAL_STORAGE
    value: "true"
  - name: SESSIONDB_DISCOVERY_SENTINEL
    value: "true"
  - name: SESSIONDB_EXTERNAL_SERVICE
    value: "f5-dssm-sentinel.cnf-gateway"
  dynamicRouting:
    enabled: true
    exportZebosLogs: true
    tmmRouting:
      image:
        repository: "local.registry.com"
      config:
        bgp:
          asn: 123
          neighbors:
          - ip: "192.168.10.200"
            asn: 456
            acceptsIPv4: true
    tmrouted:
      image:
        repository: "local.registry.com"
afm:
  enabled: true
  defaultFirewallRule:
    action: reject
    log: true
  pccd:
    enabled: true
    image:
      repository: "local.registry.com"
  fluentbit_sidecar:
    enabled: true
    image:
      repository: "local.registry.com"
    fluentd:
      host: 'f5-toda-fluentd.cnf-gateway.svc.cluster.local.'
  ipsd:
    enabled: true
    image:
      repository: "local.registry.com"
controller:
  image:
    repository: "local.registry.com"
  fluentbit_sidecar:
    enabled: true
    image:
      repository: "local.registry.com"
    fluentd:
      host: 'f5-toda-fluentd.cnf-gateway.svc.cluster.local.'
f5-toda-logging:
  fluentd:
    host: "f5-toda-fluentd.cnf-gateway.svc.cluster.local."
  sidecar:
    image:
      repository: "local.registry.com"
  tmstats:
    config:
      image:
        repository: "local.registry.com"
    debug:
      image:
        repository: "local.registry.com"
  stats_collector:
    enabled: true
    image:
      repository: "local.registry.com"
```
Installation¶
Change into the local directory with the CNF files, and list the files in the tar directory:
In this example, the CNF files are in the cnfinstall directory:
```bash
cd cnfinstall
ls -1 tar
```
In this example, the Helm chart for the BIG-IP Controller, Service Proxy TMM and Edge Firewall is named f5ingress-6.0.32.tgz:
```
cnf-docker-images.tgz
f5-cert-gen-0.3.0.tgz
f5-dssm-0.22.14.tgz
f5-toda-fluentd-1.8.40.tgz
f5ingress-6.0.32.tgz
```
Install the BIG-IP Controller, TMM Proxy and Edge Firewall Pods, referencing the Helm values file created in the previous procedure:
```bash
helm install f5ingress <helm chart> -f <values file> -n <namespace>
```
For example:
```bash
helm install f5ingress tar/f5ingress-6.0.32.tgz -f ingress-values.yaml -n cnf-gateway
```
Verify the Pods have installed successfully, and all containers are Running:
```bash
kubectl get pods -n cnf-gateway
```
In this example, all containers have a STATUS of Running as expected:
```
NAME                                    READY   STATUS
f5-afm-5bb5cd989b-b8qpf                 2/2     Running
f5-dssm-db-0                            1/1     Running
f5-dssm-db-1                            1/1     Running
f5-dssm-db-2                            1/1     Running
f5-dssm-sentinel-0                      1/1     Running
f5-dssm-sentinel-1                      1/1     Running
f5-dssm-sentinel-2                      1/1     Running
f5-ingress-f5ingress-7965947785-b8f5c   1/1     Running
f5-ipsd-74f5754b5d-kjzft                1/1     Running
f5-tmm-576df78f88-mpzbq                 4/4     Running
```
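Checking that every container is ready and Running can be scripted rather than eyeballed. An illustrative sketch that inspects a captured kubectl get pods listing (in practice you would feed it live kubectl output):

```python
# Report any pod in a `kubectl get pods` listing that is not fully
# ready and Running. SAMPLE mimics the example output above.
SAMPLE = """\
NAME                                    READY   STATUS
f5-afm-5bb5cd989b-b8qpf                 2/2     Running
f5-tmm-576df78f88-mpzbq                 4/4     Running
"""

def pods_not_ready(listing):
    problems = []
    for line in listing.splitlines()[1:]:      # skip the header row
        name, ready, status = line.split()
        have, want = ready.split("/")
        if status != "Running" or have != want:
            problems.append(name)
    return problems

print(pods_not_ready(SAMPLE))  # [] -> everything is Running and ready
```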
Interfaces¶
The F5BigNetVlan Custom Resource (CR) applies TMM's interface configuration: IP addresses, VLAN tags, MTU, and so on. Use the steps below to configure and install the clientside and serverside F5BigNetVlan CRs:
You can place both of the example CRs into a single YAML file:
Important: Set the cmp_hash parameter value to SRC_ADDR on the clientside (upstream) VLAN, and to DST_ADDR on the serverside (downstream) VLAN.

```yaml
apiVersion: "k8s.f5net.com/v1"
kind: F5BigNetVlan
metadata:
  name: "subscriber-vlan"
  namespace: "cnf-gateway"
spec:
  name: clientside
  interfaces:
  - "1.1"
  selfip_v4s:
  - 192.168.10.100
  - 192.168.10.101
  prefixlen_v4: 24
  selfip_v6s:
  - 2002::192:168:10:100
  - 2002::192:168:10:101
  prefixlen_v6: 116
  mtu: 9000
  cmp_hash: SRC_ADDR
---
apiVersion: "k8s.f5net.com/v1"
kind: F5BigNetVlan
metadata:
  name: "application-vlan"
  namespace: "cnf-gateway"
spec:
  name: serverside
  interfaces:
  - "1.2"
  selfip_v4s:
  - 10.10.10.100
  - 10.10.10.101
  prefixlen_v4: 24
  selfip_v6s:
  - 2002::10:10:10:100
  - 2002::10:10:10:101
  prefixlen_v6: 116
  mtu: 9000
  cmp_hash: DST_ADDR
```
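The cmp_hash convention from the Important note can be checked mechanically before applying the CRs. An illustrative sketch using plain dicts that mirror the F5BigNetVlan spec fields (inlined rather than parsed from YAML, to stay stdlib-only):

```python
# Verify the cmp_hash convention: SRC_ADDR on the clientside VLAN,
# DST_ADDR on the serverside VLAN.
EXPECTED = {"clientside": "SRC_ADDR", "serverside": "DST_ADDR"}

def check_cmp_hash(vlans):
    problems = []
    for v in vlans:
        want = EXPECTED.get(v["name"])
        if want and v["cmp_hash"] != want:
            problems.append(f'{v["name"]}: expected {want}, got {v["cmp_hash"]}')
    return problems

vlans = [
    {"name": "clientside", "cmp_hash": "SRC_ADDR"},
    {"name": "serverside", "cmp_hash": "DST_ADDR"},
]
print(check_cmp_hash(vlans))  # []
```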
Install the VLAN CRs:
```bash
kubectl apply -f cnf_vlans.yaml
```
List the VLAN CRs:
```bash
kubectl get f5-big-net-vlan -n cnf-gateway
```
In this example, the VLAN CRs are installed:
```
NAME
application-vlan
subscriber-vlan
```
If the Debug Sidecar is enabled (the default), you can verify the f5-tmm container’s interface configuration:
```bash
kubectl exec -it deploy/f5-tmm -c debug -n cnf-gateway -- ip a
```
The interfaces should appear at the bottom of the list:
```
8: clientside: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000
    inet 192.168.10.100/24 brd 192.168.10.255 scope global clientside
       valid_lft forever preferred_lft forever
    inet6 2002::192:168:10:100/116 scope global
       valid_lft forever preferred_lft forever
9: serverside: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000
    link/ether 1e:80:c1:e8:81:15 brd ff:ff:ff:ff:ff:ff
    inet 10.10.10.100/24 brd 10.10.10.255 scope global serverside
       valid_lft forever preferred_lft forever
    inet6 2002::10:10:10:100/116 scope global
       valid_lft forever preferred_lft forever
```
Feedback¶
Provide feedback to improve this document by emailing cnfdocs@f5.com.