Ingress Controller

Overview

The Service Proxy for Kubernetes (SPK) Ingress Controller and Service Proxy Traffic Management Microkernel (TMM) Pods install together, and are the primary application traffic management software components. Once integrated, Service Proxy TMM can be configured to proxy and load balance high-performance 5G workloads using SPK’s Custom Resources (CRs).

This document guides you through creating the Ingress Controller and TMM Helm values file, installing the Pods, and creating TMM’s internal and external VLAN interfaces.

CRD workaround

SPK Custom Resource Definitions (CRDs) install with the Ingress Controller, and do not install over existing CRDs of the same type. To install a newer F5SPKIngressTCP CRD, ensure you first delete the existing CRD. Use the following steps to delete CRDs prior to installing the Ingress Controller:

  1. List the currently installed CRDs:

    oc get crds | grep f5net.com
    
  2. To delete an existing F5SPKIngressTCP CRD, use the following command:

    oc delete crd f5-spk-ingresstcps.ingresstcp.k8s.f5net.com
    
  3. Repeat the previous steps for each new CRD you intend to configure for application traffic processing.
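When many SPK CRDs must be removed, the list-and-delete steps above can be collapsed into a single filter. The snippet below is a hypothetical sketch, not from the SPK documentation; a here-document stands in for real `oc get crds` output so the filtering logic can be seen in isolation:

```shell
# Hypothetical sketch: filter a saved `oc get crds` listing down to the
# SPK CRD names (the f5net.com API group). The here-doc below stands in
# for live cluster output.
awk '/f5net\.com/ {print $1}' <<'EOF'
f5-spk-ingresstcps.ingresstcp.k8s.f5net.com        2021-06-01T00:00:00Z
machineconfigs.machineconfiguration.openshift.io   2021-06-01T00:00:00Z
EOF
# Each printed name can then be removed with: oc delete crd <name>
```

Against a live cluster, the same filter would be fed from `oc get crds` directly; review the filtered list before deleting anything.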

Requirements

Ensure you have:

Procedures

Helm values file

The Ingress Controller and Service Proxy Pods rely on a number of custom Helm values to install successfully. Use the steps below to obtain important cluster configuration data, and create the proper Helm values file for the Controller Installation procedure.

  1. Switch to the Ingress Controller Project:

    Note: The Ingress Controller Project was created during the gRPC Secrets installation.

    oc project <project>
    

    In this example, the spk-ingress Project is selected:

    oc project spk-ingress
    
  2. As described in the Networking Overview, the Ingress Controller uses OpenShift network node policies and network attachment definitions to create Service Proxy TMM’s SR-IOV interface list. Use the steps below to obtain the node policies and attachment definition names, and configure the TMM interface list:

    A. Obtain the names of the network attachment definitions:

    oc get net-attach-def
    

    In this example, the network attachment definitions are named internal-netdevice and external-netdevice:

    internal-netdevice
    external-netdevice
    

    B. Obtain the names of the network node policies using the network attachment definition resourceName parameter:

    oc describe net-attach-def | grep openshift.io
    

    In this example, the network node policies are named internalNetPolicy and externalNetPolicy:

    Annotations:  k8s.v1.cni.cncf.io/resourceName: openshift.io/internalNetPolicy
    Annotations:  k8s.v1.cni.cncf.io/resourceName: openshift.io/externalNetPolicy
    

    C. Create a Helm values file named ingress-values.yaml and set the node attachment and node policy names to configure the TMM interface list:

    In this example, OPENSHIFT_VFIO_RESOURCE_1 creates interface 1.1, and OPENSHIFT_VFIO_RESOURCE_2 creates interface 1.2:

    tmm:
    
      # Orders the network attachment definitions.
      cniNetworks: "project/internal-netdevice,project/external-netdevice"
    
      # Orders the network node policies.
      customEnvVars:
        - name: OPENSHIFT_VFIO_RESOURCE_1
          value: "internalNetPolicy"
        - name: OPENSHIFT_VFIO_RESOURCE_2
          value: "externalNetPolicy"
    
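The name extraction in steps 2A and 2B can be scripted. The snippet below is a hypothetical helper, not part of the SPK documentation; a here-document stands in for live `oc describe net-attach-def` output and shows how the node policy names can be pulled out of the resourceName annotations:

```shell
# Hypothetical helper: extract just the network node policy names from
# the resourceName annotations. The here-doc stands in for the output of:
#   oc describe net-attach-def | grep openshift.io
grep -o 'openshift\.io/[[:alnum:]]*' <<'EOF' | cut -d/ -f2
Annotations:  k8s.v1.cni.cncf.io/resourceName: openshift.io/internalNetPolicy
Annotations:  k8s.v1.cni.cncf.io/resourceName: openshift.io/externalNetPolicy
EOF
```

The two printed names are exactly the values needed for the OPENSHIFT_VFIO_RESOURCE_1 and OPENSHIFT_VFIO_RESOURCE_2 parameters.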
  3. SPK supports Ethernet frames over 1500 bytes (Jumbo frames), up to a maximum transmission unit (MTU) size of 8000 bytes. The MTU size must be configured using the customEnvVars parameters. Add and adapt the parameters below in the tmm section of the Helm values file:

    tmm:
    
      customEnvVars:
      - name: TMM_DEFAULT_MTU
        value: "8000"
    
  4. Service Proxy TMM installs with two CPU cores by default. On multiprocessor servers, the CPU cores must bind to core IDs in the same NUMA node. The Ingress Controller relies on the OpenShift Performance Addon Operator to dynamically allocate and properly align TMM’s CPU cores. Use the steps below to enable the Performance Addon Operator:

    A. Obtain the full performance profile name from the runtimeClass parameter:

    oc get performanceprofile -o jsonpath='{..runtimeClass}{"\n"}'
    

    In this example, the performance profile name is performance-spk-loadbalancer:

    performance-spk-loadbalancer
    

    B. Use the performance profile name to configure the runtimeClassName parameter, and set the parameters below in the Helm values file:

    tmm:
    
      topologyManager: "true"
      runtimeClassName: "performance-spk-loadbalancer"
    
      pod:
        annotations:
          cpu-load-balancing.crio.io: disable
    
  5. Open Virtual Network with Kubernetes (OVN-Kubernetes) annotations are applied to the Service Proxy TMM Pod, enabling Pods to use TMM’s internal interface as their default gateway for egress traffic. To enable OVN-Kubernetes annotations, set the tmm.icni2.enabled parameter to true:

    tmm:
    
      icni2:
        enabled: true
    
  6. To load balance application traffic between networks, or to scale Service Proxy TMM beyond a single instance in the Project, the f5-tmm-routing container must be enabled, and a Border Gateway Protocol (BGP) session must be established with an external neighbor. The parameters below configure an external BGP peering session:

    Note: For additional BGP configuration parameters, refer to the BGP Overview guide.

    tmm:
    
      dynamicRouting:
        enabled: true
        tmmRouting:
          image:
            repository: "registry.com"
          config:
            bgp:
              asn: 123
              neighbors:
              - ip: "192.168.10.100"
                asn: 456
                acceptsIPv4: true
    
        tmrouted:
          image:
            repository: "registry.com"
    
  7. The f5-toda-logging container is enabled by default, and requires setting the f5-toda-logging.fluentd.host parameter.

    A. If you installed the Fluentd Logging collector, set the host parameters:

    controller:
    
      # Sends logging data from the Ingress Controller.
      fluentbit_sidecar:
        fluentd:
          host: 'f5-toda-fluentd.spk-utilities.svc.cluster.local.'
    
    f5-toda-logging:
    
      # Sends logging data from TMM.
      fluentd:
        host: 'f5-toda-fluentd.spk-utilities.svc.cluster.local.'
    

    B. If you did not install the Fluentd Logging collector, set the f5-toda-logging.enabled parameter to false:

    f5-toda-logging:
    
      enabled: false
    
  8. The Ingress Controller and Service Proxy TMM Pods install to a different Project than the internal application (Pods) being proxied and load balanced. Set the watchNamespace parameter to the Pod Project:

    Important: Ensure the Project exists in the cluster; the Ingress Controller does not discover Projects created after installation.

    controller:
    
       watchNamespace: "internal-app"
    
  9. The completed Helm values file should appear similar to the following:

    Note: Set the image.repository parameter for each container to your local container registry.

    tmm:
    
      replicaCount: 1
    
      image:
        repository: "local.registry.com"
    
      icni2:
        enabled: true
    
      cniNetworks: "spk-ingress/internal-netdevice,spk-ingress/external-netdevice"
    
      customEnvVars:
      - name: OPENSHIFT_VFIO_RESOURCE_1
        value: "internalNetPolicy"
      - name: OPENSHIFT_VFIO_RESOURCE_2
        value: "externalNetPolicy"
      - name: TMM_DEFAULT_MTU
        value: "8000"
    
      topologyManager: "true"
      runtimeClassName: "performance-spk-loadbalancer"
    
      pod:
        annotations:
          cpu-load-balancing.crio.io: disable
    
      dynamicRouting:
        enabled: true
        tmmRouting:
          image:
            repository: "local.registry.com"
          config:
            bgp:
              asn: 123
              neighbors:
              - ip: "192.168.10.200"
                asn: 456
                acceptsIPv4: true
    
        tmrouted:
          image:
            repository: "local.registry.com"
    
    controller:
      image:
        repository: "local.registry.com"
    
      watchNamespace: "internal-apps"
    
      fluentbit_sidecar:
        enabled: true
        fluentd:
          host: 'f5-toda-fluentd.spk-utilities.svc.cluster.local.'
    
        image:
          repository: "local.registry.com"
    
    f5-toda-logging:  
      fluentd:
        host: "f5-toda-fluentd.spk-utilities.svc.cluster.local."
    
      sidecar:
        image: 
          repository: "local.registry.com"
    
      tmstats:
        config:
          image:
            repository: "local.registry.com"
    
    debug:
      image: 
        repository: "local.registry.com"
    

Controller Installation

  1. Change into the local directory containing the SPK files, and list the files in the tar directory:

    cd <directory>
    
    ls -1 tar
    

    In this example, the SPK files are in the spkinstall directory:

    cd spkinstall
    
    ls -1 tar
    

    In this example, the Ingress Controller and Service Proxy TMM Helm chart is named f5ingress-2.0.19.tgz:

    f5-dssm-0.16.3.tgz
    f5-toda-fluentd-1.7.7.tgz
    f5ingress-2.0.19.tgz
    spk-docker-images.tgz
    
  2. Switch to the Ingress Controller Project:

    Note: The Ingress Controller Project was created during the gRPC Secrets installation.

    oc project <project>
    

    In this example, the spk-ingress Project is selected:

    oc project spk-ingress
    
  3. Install the Ingress Controller and Service Proxy TMM Pods, referencing the Helm values file created in the previous procedure:

    helm install <release name> tar/f5ingress-<version>.tgz -f <values>.yaml 
    

    In this example, the Ingress Controller installs using Helm chart version 2.0.19:

    helm install f5ingress tar/f5ingress-2.0.19.tgz -f ingress-values.yaml 
    
  4. Verify the Pods have installed successfully, and all containers are Running:

    oc get pods 
    

    In this example, all containers have a STATUS of Running as expected:

    NAME                                   READY   STATUS    
    f5ingress-f5ingress-744d4fb88b-4ntrx   2/2     Running   
    f5-tmm-79b6d8b495-mw7xt                5/5     Running   
    

VLAN configuration

The F5SPKVlan Custom Resource (CR) configures the Service Proxy TMM interfaces, and should install to the same Project as the Service Proxy TMM Pod. It is important to set the F5SPKVlan spec.internal parameter to true on the internal VLAN interface to apply OVN-Kubernetes Annotations, and to select an IP address from the same subnet as the OpenShift nodes. Use the steps below to install the F5SPKVlan CR:

  1. Verify the IP address subnet of the OpenShift nodes:

    oc get nodes -o yaml | grep ipv4
    

    In this example, the nodes are on the IPv4 10.144.175.0/24 subnet:

    k8s.ovn.org/node-primary-ifaddr: '{"ipv4":"10.144.175.15/24","ipv6":"2620:128:e008:4018::15/128"}'
    k8s.ovn.org/node-primary-ifaddr: '{"ipv4":"10.144.175.16/24","ipv6":"2620:128:e008:4018::16/128"}'
    k8s.ovn.org/node-primary-ifaddr: '{"ipv4":"10.144.175.17/24","ipv6":"2620:128:e008:4018::17/128"}'
    k8s.ovn.org/node-primary-ifaddr: '{"ipv4":"10.144.175.18/24","ipv6":"2620:128:e008:4018::18/128"}'
    k8s.ovn.org/node-primary-ifaddr: '{"ipv4":"10.144.175.19/24","ipv6":"2620:128:e008:4018::19/128"}'
    
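The subnet check above can be narrowed to just the IPv4 addresses. The one-liner below is a hypothetical sketch, not part of the SPK documentation; a here-document stands in for live `oc get nodes -o yaml` output:

```shell
# Hypothetical sketch: print only the node IPv4 addresses from the
# node-primary-ifaddr annotations. The here-doc stands in for the
# output of: oc get nodes -o yaml | grep ipv4
grep -o '"ipv4":"[0-9./]*"' <<'EOF' | cut -d'"' -f4
k8s.ovn.org/node-primary-ifaddr: '{"ipv4":"10.144.175.15/24","ipv6":"2620:128:e008:4018::15/128"}'
k8s.ovn.org/node-primary-ifaddr: '{"ipv4":"10.144.175.16/24","ipv6":"2620:128:e008:4018::16/128"}'
EOF
```

The printed CIDR values confirm the subnet from which the internal F5SPKVlan self IP address should be selected.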
  2. Configure external and internal F5SPKVlan CRs. You can place both CRs in the same YAML file:

    Note: Set the external-facing F5SPKVlan to the external BGP peer router’s IP subnet.

    apiVersion: "k8s.f5net.com/v1"
    kind: F5SPKVlan
    metadata:
      name: "vlan-internal"
      namespace: spk-ingress
    spec:
      name: net1
      interfaces:
        - "1.1"
      internal: true
      selfip_v4s:
        - 10.144.175.200
      prefixlen_v4: 24
      mtu: 8000
    ---
    apiVersion: "k8s.f5net.com/v1"
    kind: F5SPKVlan
    metadata:
      name: "vlan-external"
      namespace: spk-ingress
    spec:
      name: net2
      interfaces:
        - "1.2"
      selfip_v4s:
        - 192.168.100.1
      prefixlen_v4: 24
      mtu: 8000
    
  3. Install the VLAN CRs:

    oc apply -f <file_name>.yaml
    

    In this example, the VLAN CR file is named spk_vlans.yaml:

    oc apply -f spk_vlans.yaml
    
  4. List the VLAN CRs:

    oc get f5-spk-vlans
    

    In this example, the VLAN CRs are installed:

    NAME           
    vlan-external 
    vlan-internal 
    
  5. If a BGP peer is provisioned, refer to the Advertising virtual IPs section of the BGP Overview to verify the session state is Established.

Next step

To begin processing application traffic, continue to the Custom Resources guide.

Feedback

Provide feedback to improve this document by emailing spkdocs@f5.com.