SPK Controller

Overview

The Service Proxy for Kubernetes (SPK) Controller and Service Proxy Traffic Management Microkernel (TMM) Pods install together, and are the primary application traffic management software components. Once integrated, Service Proxy TMM can be configured to proxy and load balance high-performance 5G workloads using SPK CRs.

This document guides you through creating the Controller and TMM Helm values file, installing the Pods, and creating TMM’s internal and external VLAN interfaces.

Requirements

Ensure you have:

  - Uploaded the SPK software images to a local container registry.
  - Installed the SPK Secrets in the Controller Project.
  - Installed the Fluentd Logging collector, or plan to disable the f5-toda-logging container.
  - Created a performance profile with the OpenShift Performance Addon Operator.

Procedures

Helm values

The Controller and Service Proxy Pods rely on a number of custom Helm values to install successfully. Use the steps below to obtain important cluster configuration data, and create the proper Helm values file for the installation procedure.

  1. Switch to the Controller Project:

    Note: The Controller Project was created during the SPK Secrets installation.

    oc project <project>
    

    In this example, the spk-ingress Project is selected:

    oc project spk-ingress
    
  2. As described in the Networking Overview, the Controller uses OpenShift network node policies and network attachment definitions to create Service Proxy TMM’s interface list. Use the steps below to obtain the node policies and attachment definition names, and configure the TMM interface list:

    A. Obtain the names of the network attachment definitions:

    oc get net-attach-def
    

    In this example, the network attachment definitions are named internal-netdevice and external-netdevice:

    internal-netdevice
    external-netdevice
    

    B. Obtain the names of the network node policies using the network attachment definition resourceName parameter:

    oc describe net-attach-def | grep openshift.io
    

    In this example, the network node policies are named internalNetPolicy and externalNetPolicy:

    Annotations:  k8s.v1.cni.cncf.io/resourceName: openshift.io/internalNetPolicy
    Annotations:  k8s.v1.cni.cncf.io/resourceName: openshift.io/externalNetPolicy
    

    C. Create a Helm values file named ingress-values.yaml, and set the network attachment and network node policy names to configure the TMM interface list:

    In this example, the cniNetworks: parameter references the network attachments, and orders TMM’s interface list as: 1.1 (internal) and 1.2 (external):

    tmm:
      cniNetworks: "project/internal-netdevice,project/external-netdevice"
    
      customEnvVars:
        - name: OPENSHIFT_VFIO_RESOURCE_1
          value: "internalNetPolicy"
        - name: OPENSHIFT_VFIO_RESOURCE_2
          value: "externalNetPolicy"
    
  3. SPK supports Ethernet frames over 1500 bytes (Jumbo frames), up to a maximum transmission unit (MTU) size of 9000 bytes. To modify the MTU size, adapt the customEnvVars parameter:

    tmm:
      customEnvVars:
      - name: TMM_DEFAULT_MTU
        value: "9000"
    
  4. The Controller relies on the OpenShift Performance Addon Operator to dynamically allocate and properly align TMM’s CPU cores. Use the steps below to enable the Performance Addon Operator:

    A. Obtain the full performance profile name from the runtimeClass parameter:

    oc get performanceprofile -o jsonpath='{..runtimeClass}{"\n"}'
    

    In this example, the performance profile name is performance-spk-loadbalancer:

    performance-spk-loadbalancer
    

    B. Use the performance profile name to configure the runtimeClassName parameter, and set the parameters below in the Helm values file:

    tmm:
      topologyManager: "true"
      runtimeClassName: "performance-spk-loadbalancer"
    
      pod:
        annotations:
          cpu-load-balancing.crio.io: disable
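
    Optionally, you can confirm that the runtime class reported by the performance profile exists as a RuntimeClass resource before referencing it. This sketch uses the example profile name from the previous step:

    oc get runtimeclass performance-spk-loadbalancer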
    
  5. Open Virtual Network with Kubernetes (OVN-Kubernetes) annotations are applied to the Service Proxy TMM Pod, enabling application Pods to use TMM’s internal interface as their default gateway for egress traffic. To enable OVN-Kubernetes annotations, set the tmm.icni2.enabled parameter to true:

    tmm:
      icni2:
        enabled: true
    
  6. To load balance application traffic between networks, or to scale Service Proxy TMM beyond a single instance in the Project, the f5-tmm-routing container must be enabled, and a Border Gateway Protocol (BGP) session must be established with an external neighbor. The parameters below configure an external BGP peering session:

    Note: For additional BGP configuration parameters, refer to the BGP Overview guide.

    tmm:
      dynamicRouting:
        enabled: true
        exportZebosLogs: true
        tmmRouting:
          image:
            repository: "registry.com"
          config:
            bgp:
              asn: 123
              neighbors:
              - ip: "192.168.10.100"
                asn: 456
                acceptsIPv4: true
    
        tmrouted:
          image:
            repository: "registry.com"
    
  7. The f5-toda-logging container is enabled by default, and requires setting the f5-toda-logging.fluentd.host parameter.

    A. If you installed the Fluentd Logging collector, set the host parameters:

    controller:
      fluentbit_sidecar:
        fluentd:
          host: 'f5-toda-fluentd.spk-utilities.svc.cluster.local.'
    
    f5-toda-logging:
      fluentd:
        host: 'f5-toda-fluentd.spk-utilities.svc.cluster.local.'
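
    Optionally, you can confirm the collector Service behind this hostname exists; the Service and Project names below are taken from the example host value:

    oc get service f5-toda-fluentd -n spk-utilities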
    

    B. If you did not install the Fluentd Logging collector, set the f5-toda-logging.enabled parameter to false:

    f5-toda-logging:
      enabled: false
    
  8. The Controller and Service Proxy TMM Pods install to a different Project than the internal application Pods. Set the watchNamespace parameter to the application Pod Project(s):

    controller:
      watchNamespace:
        - "spk-apps"
        - "spk-apps2"
    
  9. If you intend to install the F5SPKIngressHTTP2 Custom Resource, set the tmm.tlsStore.enabled parameter to true. This enables TMM to mount the Secrets located in the secret store named tls-keys-certs-secret:

    Important: The tls-keys-certs-secret Secret must be created before the SPK Controller is installed, otherwise the mount will fail and cause the TMM to enter a restart loop.

    tmm:
      tlsStore:
        enabled: true
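
    Optionally, you can confirm the Secret exists in the Controller Project before installing; in this example, the spk-ingress Project is checked:

    oc get secret tls-keys-certs-secret -n spk-ingress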
    
  10. The completed Helm values file should appear similar to the following:

    Note: Set the image.repository parameter for each container to your local container registry.

    tmm:
      replicaCount: 1
    
      image:
        repository: "local.registry.com"
    
      tlsStore:
        enabled: true
    
      icni2:
        enabled: true
    
      cniNetworks: "spk-ingress/internal-netdevice,spk-ingress/external-netdevice"
    
      customEnvVars:
      - name: OPENSHIFT_VFIO_RESOURCE_1
        value: "internalNetPolicy"
      - name: OPENSHIFT_VFIO_RESOURCE_2
        value: "externalNetPolicy"
      - name: TMM_DEFAULT_MTU
        value: "9000"
    
      topologyManager: "true"
      runtimeClassName: "performance-spk-loadbalancer"
    
      pod:
        annotations:
          cpu-load-balancing.crio.io: disable
    
      dynamicRouting:
        enabled: true
        tmmRouting:
          image:
            repository: "local.registry.com"
          config:
            bgp:
              asn: 123
              neighbors:
              - ip: "192.168.10.200"
                asn: 456
                acceptsIPv4: true
    
        tmrouted:
          image:
            repository: "local.registry.com"
    
    controller:
      image:
        repository: "local.registry.com"
    
      f5_lic_helper:
        enabled: true
        cwcNamespace: "spk-telemetry"
        image:
          repository: "local.registry.com"
        rabbitmqCerts:
          ca_root_cert: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk
          client_cert: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1
          client_key: LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1J
    
      watchNamespace: 
        - "spk-apps"
        - "spk-apps-2"
    
      fluentbit_sidecar:
        enabled: true
        fluentd:
          host: 'f5-toda-fluentd.spk-utilities.svc.cluster.local.'
        image:
          repository: "local.registry.com"
    
    f5-toda-logging:  
      fluentd:
        host: "f5-toda-fluentd.spk-utilities.svc.cluster.local."
    
      sidecar:
        image: 
          repository: "local.registry.com"
    
      tmstats:
        config:
          image:
            repository: "local.registry.com"
    
    debug:
      image: 
        repository: "local.registry.com"
    
    stats_collector:
      enabled: true
      image:
        repository: "local.registry.com"
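
    Once you have extracted the SPK Helm charts (see the Installation procedure below), you can optionally render the chart with this values file to catch YAML or schema errors before installing. This is a client-side check only; the chart version is a placeholder:

    helm template f5ingress tar/f5ingress-<version>.tgz -f ingress-values.yaml > /dev/null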
    

Installation

  1. Change into the local directory with the SPK files, and list the files in the tar directory:

    cd <directory>
    
    ls -1 tar
    

    In this example, the SPK files are in the spkinstall directory:

    cd spkinstall
    
    ls -1 tar
    

    In this example, the Controller and Service Proxy TMM Helm chart is named f5ingress-7.0.13.tgz:

    csrc-0.1.4.tgz
    cwc-0.5.0.tgz
    f5-cert-gen-0.2.4.tgz
    f5-dssm-0.22.18.tgz
    f5-toda-fluentd-1.10.1.tgz
    f5ingress-7.0.13.tgz
    spk-docker-images.tgz
    
  2. Switch to the Controller Project:

    Note: The Controller Project was created during the SPK Secrets installation.

    oc project <project>
    

    In this example, the spk-ingress Project is selected:

    oc project spk-ingress
    
  3. Install the Controller and Service Proxy TMM Pods, referencing the Helm values file created in the previous procedure:

    helm install <release name> tar/f5ingress-<version>.tgz -f <values>.yaml 
    

    In this example, the Controller installs using Helm chart version 7.0.13:

    helm install f5ingress tar/f5ingress-7.0.13.tgz -f ingress-values.yaml 
    
  4. Verify the Pods have installed successfully, and all containers are Running:

    oc get pods 
    

    In this example, all containers have a STATUS of Running as expected:

    NAME                                   READY   STATUS    
    f5ingress-f5ingress-744d4fb88b-4ntrx   2/2     Running   
    f5-tmm-79b6d8b495-mw7xt                5/5     Running   
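
    Optionally, instead of repeatedly running oc get pods, you can wait for all Pods in the Project to become Ready; the timeout value is an arbitrary example:

    oc wait --for=condition=Ready pods --all -n spk-ingress --timeout=300s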
    
  5. Verify the f5ingress Pod has successfully licensed:

    oc logs f5ingress-f5ingress-744d4fb88b-4ntrx -c f5-lic-helper \
    -n spk-ingress | grep -i LicenseVerified
    

    In this example, the f5ingress Pod’s f5-lic-helper indicates Entitlement: paid.

    2023-02-03 22:00:44.221|A|informational|1|Message="Payload type:
    ResponseCM20LicenseVerified Entitlement: paid Expiry Date: 2024-01-29T00:01:03Z"
    
  6. Continue to the next procedure to configure the TMM interfaces.

Interfaces

The F5SPKVlan Custom Resource (CR) configures the Service Proxy TMM interfaces, and should install to the same Project as the Service Proxy TMM Pod. It is important to set the F5SPKVlan spec.internal parameter to true on the internal VLAN interface to apply OVN-Kubernetes Annotations, and to select an IP address from the same subnet as the OpenShift nodes. Use the steps below to install the F5SPKVlan CR:

  1. Verify the IP address subnet of the OpenShift nodes:

    For version 4.10.x and later, when the OpenShift Extra Bridge (br-ex1) feature is enabled, use the exgw-ip-addresses subnet:

    oc get nodes -o json | grep --color exgw-ip-addresses
    
    "k8s.ovn.org/l3-gateway-config": 
       \"exgw-ip-address\":\"172.20.1.201/24\",\"next-hops\":[\"10.144.174.254\"],
    

    For version 4.7.x and earlier, or when the OpenShift Extra Bridge (br-ex1) feature is disabled, use the node-primary-ifaddr subnet:

    oc get nodes -o yaml | grep node-primary-ifaddr
    
    k8s.ovn.org/node-primary-ifaddr: '{"ipv4":"10.144.175.15/24","ipv6":"2620:128:e008:4018::15/128"}'
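
    As an optional variation, you can extract only the annotation value for each node with a JSONPath query. This is a sketch; the dots in the annotation key are escaped with backslashes:

    oc get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.annotations.k8s\.ovn\.org/node-primary-ifaddr}{"\n"}{end}'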
    
  2. Configure external and internal F5SPKVlan CRs. You can place both CRs in the same YAML file:

    Note: Set the external facing F5SPKVlan to the external BGP peer router’s IP subnet.

    apiVersion: "k8s.f5net.com/v1"
    kind: F5SPKVlan
    metadata:
      name: "vlan-internal"
      namespace: spk-ingress
    spec:
      name: net1
      interfaces:
        - "1.1"
      internal: true
      selfip_v4s:
        - 10.144.175.200
      prefixlen_v4: 24
      mtu: 9000
    ---
    apiVersion: "k8s.f5net.com/v1"
    kind: F5SPKVlan
    metadata:
      name: "vlan-external"
      namespace: spk-ingress
    spec:
      name: net2
      interfaces:
        - "1.2"
      selfip_v4s:
        - 192.168.100.1
      prefixlen_v4: 24
      mtu: 9000
    
  3. Install the VLAN CRs:

    oc apply -f <crd_name.yaml>
    

    In this example, the VLAN CR file is named spk_vlans.yaml.

    oc apply -f spk_vlans.yaml
    
  4. List the VLAN CRs:

    oc get f5-spk-vlans
    

    In this example, the VLAN CRs are installed:

    NAME           
    vlan-external 
    vlan-internal 
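
    To review the full configuration that was applied, you can also print an installed CR; for example:

    oc get f5-spk-vlans vlan-internal -o yaml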
    
  5. If a BGP peer is provisioned, refer to the Advertising virtual IPs section of the BGP Overview to verify the BGP session is Established.
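
    As a quick sketch of such a check, assuming the TMM Deployment is named f5-tmm and the f5-tmm-routing container exposes the ZebOS imish shell, you can review the neighbor state directly; adjust the names to your environment:

    oc exec -it deploy/f5-tmm -c f5-tmm-routing -n spk-ingress -- imish -e "show bgp neighbors"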

  6. Continue to the Next step.

Next step

To begin processing application traffic, continue to the SPK CRs guide.

Feedback

Provide feedback to improve this document by emailing spkdocs@f5.com.