BIG-IP Controller

Overview

The Cloud-Native Network Functions (CNFs) BIG-IP Controller, Edge Firewall, and Traffic Management Microkernel (TMM) Proxy Pods are the primary CNFs software components, and install together using Helm. Once integrated, Edge Firewall and the TMM Proxy Pods can be configured to process and protect high-performance 5G workloads using CNFs CRs.

This document guides you through creating the CNFs installation Helm values file, installing the Pods, and creating TMM’s clientside (upstream) and serverside (downstream) F5BigNetVlan interfaces.

Requirements

Ensure you have:

Procedures

The CNFs Pods rely on a number of custom Helm values to install successfully. Use the steps below to obtain important cluster configuration data, and create the proper BIG-IP Controller Helm values file for the installation.

  1. If you haven’t already, create a new Project for the CNFs Pods:

    oc new-project <project>
    

    In this example, a new Project named cnf-gateway is created:

    oc new-project cnf-gateway
    
  2. Switch to the CNFs Project:

    In this example, the cnf-gateway Project is selected:

    oc project cnf-gateway
    

Defining the Platform type

  • When deploying on an OpenShift cluster, set the platformType parameter to ocp in the ingress-values.yaml values file:

     global: 
       platformType: "ocp"  
    

TMM values

Use these steps to enable and configure the TMM Proxy Helm values for your environment.

  1. To enable the TMM Proxy Helm values and to ensure Helm can obtain the image from the local image registry, add the following Helm values:

    f5-tmm:
      enabled: true 
      tmm:
        image:
          repository: "local.registry.com"
    
  2. Add the ServiceAccount for the TMM Pod to the privileged security context constraint (SCC):

    A. By default, TMM uses the default ServiceAccount:

    oc adm policy add-scc-to-user privileged -n <project> -z default
    

    In this example, the default ServiceAccount is added to the privileged SCC for the cnf-gateway Project:

    oc adm policy add-scc-to-user privileged -n cnf-gateway -z default
    

    B. To use a custom ServiceAccount, you must also update the BIG-IP Controller Helm values file:

    In this example, the custom spk-tmm ServiceAccount is added to the privileged SCC.

    oc adm policy add-scc-to-user privileged -n cnf-gateway -z spk-tmm
    

    In this example, the custom spk-tmm ServiceAccount is added to the Helm values file.

     f5-tmm:
       tmm:
         serviceAccount:
           name: spk-tmm
    
  3. As described in the Networking Overview, the Controller uses OpenShift network node policies and network attachment definitions to create the TMM Proxy Pod interface list. Use the steps below to obtain the node policies and attachment definition names, and configure the TMM interface list:

    A. Obtain the names of the network attachment definitions:

    oc get net-attach-def
    

    In this example, the network attachment definitions are named clientside-netdevice and serverside-netdevice:

    clientside-netdevice
    serverside-netdevice
    

    B. Obtain the names of the network node policies using the network attachment definition resourceName parameter:

    oc describe net-attach-def | grep openshift.io
    

    In this example, the network node policies are named clientsideNetPolicy and serversideNetPolicy:

    Annotations:  k8s.v1.cni.cncf.io/resourceName: openshift.io/clientsideNetPolicy
    Annotations:  k8s.v1.cni.cncf.io/resourceName: openshift.io/serversideNetPolicy
    

    C. Create a Helm values file named ingress-values.yaml and set the node attachment and node policy names to configure the TMM interface list:

    In this example, the cniNetworks: parameter references the network attachments, and orders TMM’s interface list as: 1.1 (clientside) and 1.2 (serverside):

    f5-tmm:
      tmm:
        cniNetworks: "namespace/clientside-netdevice,namespace/serverside-netdevice"
        customEnvVars:
          - name: OPENSHIFT_VFIO_RESOURCE_1
            value: "clientsideNetPolicy"
          - name: OPENSHIFT_VFIO_RESOURCE_2
            value: "serversideNetPolicy"
    
  4. CNFs supports Ethernet frames over 1500 bytes (Jumbo frames), up to a maximum transmission unit (MTU) size of 9000 bytes. To modify the MTU size, adapt the TMM_DEFAULT_MTU parameter:

    _images/spk_warn.png Important: The same MTU value must be set in each of the installed F5BigNetVlan CRs. CNFs does not currently support different MTU sizes.

    f5-tmm:
      tmm:
        customEnvVars:
          - name: TMM_DEFAULT_MTU
            value: "9000"
    
  5. The Controller relies on the OpenShift Performance Addon Operator to dynamically allocate and properly align TMM’s CPU cores. Use the steps below to enable the Performance Addon Operator:

    A. Obtain the full performance profile name from the runtimeClass parameter:

    oc get performanceprofile -o jsonpath='{..runtimeClass}{"\n"}'
    

    In this example, the performance profile name is performance-cnf-loadbalancer:

    performance-cnf-loadbalancer
    

    B. Use the performance profile name to configure the runtimeClassName parameter, and set the parameters below in the Helm values file:

    f5-tmm:
      tmm:
        topologyManager: "true"
        runtimeClassName: "performance-cnf-loadbalancer"

        pod:
          annotations:
            cpu-load-balancing.crio.io: disable
    
  6. To advertise routing information between networks, or to scale TMM beyond a single instance, the f5-tmm-routing container must be enabled, and a Border Gateway Protocol (BGP) session must be established with an external neighbor. The parameters below configure an external BGP peering session:

    _images/spk_info.png Note: For additional BGP configuration parameters, refer to the BGP Overview guide.

    f5-tmm:
      tmm:
        dynamicRouting:
          enabled: true
          exportZebosLogs: true
          tmmRouting:
            image:
              repository: "registry.com"
            config:
              bgp:
                asn: 123
                neighbors:
                  - ip: "192.168.10.100"
                    asn: 456
                    acceptsIPv4: true

          tmrouted:
            image:
              repository: "registry.com"
    
  7. To set TMM’s default gateway using either BGP or the F5BigNetStaticroute Custom Resource (CR), set the add_k8s_routes parameter to true:

    f5-tmm:
      tmm:
        add_k8s_routes: true
    

    _images/spk_warn.png Important: If you configure the default gateway using either BGP or the F5BigNetStaticroute Custom Resource (CR) without enabling the add_k8s_routes parameter, pod-to-pod communication will fail.

  8. When the add_k8s_routes parameter is enabled and you intend to perform the product installation as a non-cluster admin, set the parameters below:

    create_k8s_routes_sa: false
    k8s_routes_sa_name: <sa-name>
    

    This service account only needs permissions to run a prehook job that retrieves the pod and service networks on the cluster. The prehook job is deleted once the tmm-k8s-routes-configmap is created.

    Use the following RBAC settings when creating the service account and user (example manifests combining these rules appear after these steps):

    i. ClusterRoles:

    • Bind the service account with the following RBAC rules:

      - apiGroups:
          - config.openshift.io
        resources:
          - networks
        verbs:
          - get
      
    • Add the following permissions, which the F5BigDownloaderPolicy feature requires:

      - apiGroups:
          - apiextensions.k8s.io
        resources:
          - customresourcedefinitions
        verbs:
          - get
          - list
          - watch
          - update
      

    ii. Role: Bind the following rules to the user:

    - apiGroups:
       - batch
      resources:
       - jobs
      verbs:
       - create
       - delete
       - get
       - list
       - patch
       - update
       - watch
    
  9. The Fluentd Logging collector is enabled by default, and requires setting the f5-toda-logging.fluentd.host parameter.

    A. When Fluentd is enabled, ensure the fluentd.host parameter targets the Fluentd Pod’s Namespace:

    In this example, the host value includes the Fluentd Pod’s cnf-utilities Namespace.

     f5-tmm:
       f5-toda-logging:
         enabled: true
         fluentd:
           host: 'f5-toda-fluentd.cnf-utilities.svc.cluster.local.'
         sidecar:
           image: 
             repository: "local.registry.com"
    

    B. When Fluentd is disabled, set the f5-toda-logging.enabled parameter to false:

     f5-tmm:
       f5-toda-logging:
         enabled: false
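
The RBAC rules from step 8 can be assembled into complete manifests. The following is a minimal sketch, assuming a ServiceAccount named cnf-prehook-sa in the cnf-gateway Project; the ServiceAccount name, the role and binding names, and the <non-cluster-admin-user> placeholder are illustrative only and should be replaced with your own values:

  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: cnf-prehook-sa
    namespace: cnf-gateway
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    name: cnf-prehook-clusterrole
  rules:
    - apiGroups:
        - config.openshift.io
      resources:
        - networks
      verbs:
        - get
    - apiGroups:
        - apiextensions.k8s.io
      resources:
        - customresourcedefinitions
      verbs:
        - get
        - list
        - watch
        - update
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRoleBinding
  metadata:
    name: cnf-prehook-clusterrolebinding
  subjects:
    - kind: ServiceAccount
      name: cnf-prehook-sa
      namespace: cnf-gateway
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: cnf-prehook-clusterrole
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: Role
  metadata:
    name: cnf-prehook-role
    namespace: cnf-gateway
  rules:
    - apiGroups:
        - batch
      resources:
        - jobs
      verbs:
        - create
        - delete
        - get
        - list
        - patch
        - update
        - watch
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    name: cnf-prehook-rolebinding
    namespace: cnf-gateway
  subjects:
    - kind: User
      name: <non-cluster-admin-user>
      apiGroup: rbac.authorization.k8s.io
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: Role
    name: cnf-prehook-role

After applying the manifests, set the k8s_routes_sa_name parameter to the ServiceAccount name in the Helm values file.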
    

AFM values

  1. To enable the Edge Firewall feature, and to ensure Helm can obtain the image from the local image registry, add the following Helm values:

    global:
      afm:
        enabled: true
        pccd:
          enabled: true

    f5-afm:
      enabled: true
      cert-orchestrator:
        enabled: true
      afm:
        pccd:
          enabled: true
          image:
            repository: "local.registry.com"
    
  2. The Edge Firewall’s default firewall mode accepts all network packets that do not match an F5BigFwPolicy firewall rule. You can modify this behavior using the F5BigContextGlobal Custom Resource (CR). For additional details about the default firewall mode and logging parameters, refer to the Firewall mode section of the F5BigFwPolicy overview.

  3. The Fluentd Logging collector is enabled by default, and requires setting the f5-toda-logging.fluentd.host parameter. If you installed Fluentd, ensure the host parameter targets the Fluentd Pod’s namespace:

    In this example, the host value includes the Fluentd Pod’s cnf-gateway Namespace.

    f5-afm:
      afm:
        fluentbit_sidecar:
          fluentd:
            host: 'f5-toda-fluentd.cnf-gateway.svc.cluster.local.'
          image:
            repository: "local.registry.com"
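
The Fluentd host values used throughout this guide follow the pattern f5-toda-fluentd.<namespace>.svc.cluster.local. If you are unsure which Namespace the Fluentd Pod was installed into, a check similar to the following can confirm the Service name and Namespace to use (this assumes the Service is named f5-toda-fluentd, as in the examples above):

  oc get svc --all-namespaces | grep f5-toda-fluentd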
    

IPSD values

Use these steps to enable and configure the Intrusion Prevention System Helm values for your environment.

  1. To enable the IPSD Pod, and to ensure Helm can obtain the image from the local image registry, add the following Helm values:

    f5-ipsd:
      enabled: true
      ipsd:
        image:
          repository: "local.registry.com"
    
  2. The Fluentd Logging collector is enabled by default, and requires setting the f5-toda-logging.fluentd.host parameter. If you installed Fluentd, ensure the host parameter targets the Fluentd Pod’s namespace:

    In this example, the host value includes the cnf-gateway Namespace.

    f5-ipsd:
      ipsd:
        fluentbit_sidecar:
          fluentd:
            host: "f5-toda-fluentd.cnf-gateway.svc.cluster.local."
          image:
            repository: "local.registry.com"
    

Controller values

  1. To ensure Helm can obtain the image from the local image registry, add the following Helm values:

    The example below also includes the CNFs CWC values.

    controller:
      image:
        repository: "local.registry.com"
    
      f5_lic_helper:
        enabled: true
        rabbitmqNamespace: "cnf-telemetry"
        image:
          repository: "local.registry.com"
    
  2. The Fluentd Logging collector is enabled by default, and requires setting the f5-toda-logging.fluentd.host parameter. If you installed Fluentd, ensure the host parameter targets the Fluentd Pod’s namespace:

    In this example, the host value includes the cnf-gateway Namespace.

    controller:
      fluentbit_sidecar:
        fluentd:
          host: 'f5-toda-fluentd.cnf-gateway.svc.cluster.local.'
        image:
          repository: "local.registry.com"
    

Downloader values

Use these steps to enable and configure the Downloader Helm values for your environment.

  1. To connect to the RabbitMQ open source message broker and ensure the Downloader Pod functions properly, set the RabbitMQ namespace in the values.yaml file:

    downloader:
      name: f5-downloader
      debug: false
      rabbitmqNamespace: "cnf-telemetry"
      storage:
        enabled: true
        storageClassName: standard
        access: ReadWriteOnce
    
  2. To enable the Downloader Pod, and to ensure Helm can obtain the image from the local image registry, add the following Helm values:

    f5-downloader:
      enabled: true
      downloader:
        image:
          repository: "local.registry.com"
    
  3. The Fluentd Logging collector is enabled by default, and requires setting the f5-toda-logging.fluentd.host parameter. If you installed Fluentd, ensure the host parameter targets the Fluentd Pod’s Namespace:

    In this example, the host value includes the cnf-gateway Namespace.

    f5-downloader:
      downloader:
        fluentbit_sidecar:
          image:
            repository: "local.registry.com"
          fluentd:
            host: "f5-toda-fluentd.cnf-gateway.svc.cluster.local."
    
  4. If CNFs is deployed on a cluster with multiple worker nodes, set the Downloader persistent storage access mode to ReadWriteMany, and ensure the storage class supports ReadWriteMany volumes.

    The following example shows the required values:

    f5-downloader:
      enabled: true
      downloader:
        storage:
          enabled: true
          access: ReadWriteMany
          storageClassName: robin-rwx
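
    Because ReadWriteMany support depends on the cluster’s storage provisioner, you can list the available storage classes to confirm one that supports RWX volumes; the robin-rwx class shown above is only an example:

    oc get storageclass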
    

_images/spk_info.png Note: Along with this Pod, a CRD updater sidecar also runs in the BIG-IP Controller Pod.

Completed values

The completed Helm values file should appear similar to the following:

f5-tmm:
  enabled: true
  
  tmm:
    image:
      repository: "local.registry.com"

    hugepages:
      enabled: true

    sessiondb:
      useExternalStorage: "true"

    topologyManager: true
    runtimeClassName: "performance-cnf-loadbalancer"

    pod:
      annotations:
        cpu-load-balancing.crio.io: disable

    cniNetworks: "cnf-gateway/clientside-netdevice,cnf-gateway/serverside-netdevice"

    add_k8s_routes: true

    customEnvVars:
      - name: OPENSHIFT_VFIO_RESOURCE_1
        value: "clientsideNetPolicy"
      - name: OPENSHIFT_VFIO_RESOURCE_2
        value: "serversideNetPolicy"
      - name: TMM_DEFAULT_MTU
        value: "9000"
      - name: SESSIONDB_DISCOVERY_SENTINEL
        value: "true"
      - name: SESSIONDB_EXTERNAL_SERVICE
        value: "f5-dssm-sentinel.cnf-gateway"
      - name: SSL_SERVERSIDE_STORE
        value: "/tls/tmm/mds/clt"
      - name: SSL_TRUSTED_CA_STORE
        value: "/tls/tmm/mds/clt"

    dynamicRouting:
      enabled: true	
      tmmRouting:
        image:
          repository: "local.registry.com"
        config:
          bgp:
            asn: 123
            neighbors:
              - ip: "192.168.10.200"
                asn: 456
                acceptsIPv4: true
 
      tmrouted:
        image:
          repository: "local.registry.com"
    
  blobd:
    image:
      repository: "local.registry.com"

  f5-toda-logging:
    enabled: true
    fluentd:
      host: 'f5-toda-fluentd.cnf-utilities.svc.cluster.local.'
    sidecar:
      image:
        repository: "local.registry.com"

  debug:
    enabled: true
    rabbitmqNamespace: "cnf-telemetry"
    image:
      repository: "local.registry.com"

global:
  afm:
    enabled: true
    pccd:
      enabled: true 

f5-afm:
  enabled: true
  cert-orchestrator:
    enabled: true
  afm:
    pccd:  
      enabled: true
      image:
        repository: "local.registry.com"
    fluentbit_sidecar:
      fluentd:
        host: 'f5-toda-fluentd.cnf-gateway.svc.cluster.local.'
      image:
        repository: "local.registry.com"

f5-ipsd:
  enabled: true
  ipsd:
    image:
      repository: "local.registry.com"
    fluentbit_sidecar:
      fluentd:
        host: "f5-toda-fluentd.cnf-gateway.svc.cluster.local."
      image:
        repository: "local.registry.com"

controller:
  image:
    repository: "local.registry.com"

  f5_lic_helper:
    enabled: true
    rabbitmqNamespace: "cnf-telemetry"
    image:
      repository: "local.registry.com"

  fluentbit_sidecar:
    enabled: true
    fluentd:
      host: 'f5-toda-fluentd.cnf-utilities.svc.cluster.local.'
    image:
      repository: "local.registry.com"
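
Before installing, you can optionally render the chart with the completed values file to catch indentation or schema errors early. This optional check uses Helm’s dry-run mode and the chart archive referenced in the Installation section below:

  helm install f5ingress tar/f5ingress-v0.480.0-0.1.30.tgz \
  -f ingress-values.yaml -n cnf-gateway --dry-run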

Installation

  1. Change into the directory containing the latest CNFs Software, and obtain the f5ingress Helm chart version:

    In this example, the CNF files are in the cnfinstall directory:

    cd cnfinstall
    
    ls -1 tar | grep f5ingress
    

    The example output should appear similar to the following:

    f5ingress-v0.480.0-0.1.30.tgz
    
  2. If you haven’t already, create a new Project for the CNFs Pods using the following command syntax:

    oc create ns <project name>
    

    In this example, a new Project named cnf-gateway is created:

    oc create ns cnf-gateway
    
  3. Install the BIG-IP Controller using the following command syntax:

    helm install f5ingress tar/<helm chart> \
    -f <values file> -n <namespace>
    

    For example:

    helm install f5ingress tar/f5ingress-v0.480.0-0.1.30.tgz \
    -f ingress-values.yaml -n cnf-gateway
    
  4. Verify the Pods have installed successfully, and all containers are Running:

    oc get pods -n cnf-gateway
    

    In this example, all containers have a STATUS of Running as expected:

    NAME                                   READY   STATUS    
    f5-afm-d67cd45d5-z6tch                 2/2     Running
    f5-ipsd-d886bbb78-wb5w7                2/2     Running
    f5-tmm-7458484b8c-fmbgd                4/4     Running
    f5ingress-f5ingress-76d8679d4b-w989t   2/2     Running
    
  5. Continue to the next procedure to configure the TMM interfaces.
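
You can also confirm the f5ingress Helm release at any time; a healthy release reports a STATUS of deployed:

  helm list -n cnf-gateway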

Interfaces

The F5BigNetVlan Custom Resource (CR) applies TMM’s interface configuration: IP addresses, VLAN tags, MTU, etc. Use the steps below to configure and install the clientside and serverside F5BigNetVlan CRs:

  1. Configure the clientside and serverside F5BigNetVlan CRs. You can place both of the example CRs into a single YAML file, for example cnf_vlans.yaml:

    _images/spk_warn.png Important: Set the cmp_hash parameter values to SRC_ADDR on the clientside (upstream) VLAN, and DST_ADDR on the serverside (downstream) VLAN.

    apiVersion: "k8s.f5net.com/v1"
    kind: F5BigNetVlan
    metadata:
      name: "subscriber-vlan"
      namespace: "cnf-gateway"
    spec:
      name: clientside
      interfaces:
        - "1.1"
      selfip_v4s:
        - 10.10.10.100
        - 10.10.10.101
      prefixlen_v4: 24
      selfip_v6s:
        - 2002::10:10:10:100
        - 2002::10:10:10:101
      prefixlen_v6: 116
      mtu: 9000
      cmp_hash: SRC_ADDR
    ---
    apiVersion: "k8s.f5net.com/v1"
    kind: F5BigNetVlan
    metadata:
      name: "application-vlan"
      namespace: "cnf-gateway"
    spec:
      name: serverside
      interfaces:
        - "1.2"
      selfip_v4s:
        - 192.168.10.100
        - 192.168.10.101
      prefixlen_v4: 24
      selfip_v6s:
        - 2002::192:168:10:100
        - 2002::192:168:10:101
      prefixlen_v6: 116
      mtu: 9000
      cmp_hash: DST_ADDR
    
  2. Install the VLAN CRs:

    oc apply -f cnf_vlans.yaml
    
  3. List the VLAN CRs:

    oc get f5-big-net-vlan -n cnf-gateway
    

    In this example, the VLAN CRs are installed:

    NAME
    subscriber-vlan
    application-vlan
    
  4. If the Debug Sidecar is enabled (the default), you can verify the f5-tmm container’s interface configuration:

    oc exec -it deploy/f5-tmm -c debug -n cnf-gateway -- ip a
    

    The interfaces should appear at the bottom of the list:

    8: clientside: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000
        inet 10.10.10.100/24 brd 10.10.10.255 scope global clientside
           valid_lft forever preferred_lft forever
        inet6 2002::10:10:10:100/116 scope global
           valid_lft forever preferred_lft forever
    9: serverside: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000
        link/ether 1e:80:c1:e8:81:15 brd ff:ff:ff:ff:ff:ff
        inet 192.168.10.100/24 brd 192.168.10.255 scope global serverside
           valid_lft forever preferred_lft forever
        inet6 2002::192:168:10:100/116 scope global
           valid_lft forever preferred_lft forever
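
    To review the full configuration of an installed VLAN CR, you can also describe it; this example uses the subscriber-vlan CR created earlier:

    oc describe f5-big-net-vlan subscriber-vlan -n cnf-gateway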
    

Uninstallation

The following steps are mandatory for cleaning up product installations:

  1. Delete all configured CRs:

    oc delete -f <cr-file> -n <*namespace>
    
  2. Uninstall the product:

    helm uninstall <helm-installation-name> -n <*namespace>
    
  3. Delete the namespace:

    oc delete ns <*namespace>
    
  4. Delete the CRDs:

    oc delete crd <crd-name> 
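
    To identify the CNFs CRDs to delete, you can list the CRDs on the cluster; this assumes the CNFs CRD names contain f5, as in the CR short names used above:

    oc get crd | grep f5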
    

_images/spk_info.png Note: In the above commands, the namespace can be either tmmNamespace or watchNamespace.

_images/spk_warn.png Important: If the above order is not followed, the script below can be used to remove finalizers from the CRs so that the product and namespace uninstallation can proceed.

  #!/bin/sh

  if [ $# -ne 1 ] ; then
      echo "Invalid Arguments, provide namespace as argument"
      exit 1
  fi

  echo "This will remove finalizers of all usecase CRs of namespace $1"

  crs=$(oc api-resources --namespaced=true --verbs=list -o name | egrep 'f5-big|f5-cnf' | xargs -n 1 oc get --show-kind --ignore-not-found -n $1 | grep f5 | cut -d ' ' -f 1)

  for cr in $crs; do
     result=$(oc -n $1 patch $cr -p '{"metadata":{"finalizers":[]}}' --type=merge)
     echo $result
  done

  echo ""
  echo "Removed finalizers of all CRs of namespace $1"
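
For example, assuming the script above is saved as remove_finalizers.sh, run it against the CNFs Project:

  sh remove_finalizers.sh cnf-gateway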

For more details, refer to the Finalizers section in the CNF CRs guide.

Next step

To begin processing application traffic, continue to the CNFs CRs guide.

Feedback

Provide feedback to improve this document by emailing cnfdocs@f5.com.