F5SPKIngressNGAP

Overview

This overview discusses the F5SPKIngressNGAP CR. For the full list of CRs, refer to the SPK CRs overview. The F5SPKIngressNGAP Custom Resource (CR) configures the Service Proxy Traffic Management Microkernel (TMM) to provide low-latency datagram load balancing using the Stream Control Transmission Protocol (SCTP) and NG Application Protocol (NGAP) signaling protocols. The F5SPKIngressNGAP CR also provides options to tune how connections are processed, and to monitor the health of Service object Endpoints.

Note: The NGAP CR does not currently support multi-homing.

This document guides you through understanding, configuring and installing a simple F5SPKIngressNGAP CR.

CR integration

SPK CRs should be integrated into the cluster after the Kubernetes application Deployment and application Service object have been installed. The SPK Controller uses the CR service.name to discover the application Endpoints, and uses them to create TMM’s load balancing pool. The recommended method for installing SPK CRs is to include them in the application’s Helm release. Refer to the Helm CR Integration guide for more information.
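
For example, the CR manifest can be placed in the application chart's templates directory so that it is installed and removed together with the application. The layout below is only an illustrative sketch; the chart and file names are placeholders:

ngap-app/
  Chart.yaml
  values.yaml
  templates/
    deployment.yaml
    service.yaml
    f5spkingressngap.yaml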

CR parameters

The table below describes the CR parameters used in this document.

Option Description
service.name Selects the Service object name for the internal applications (Pods), and creates a round-robin load balancing pool using the Service Endpoints.
service.port Selects the Service object port value.
spec.ipfamilies Should match the Service object ipFamilies parameter, ensuring SNAT Automap is applied correctly: IPv4 (default), IPv6, and IPv4andIPv6.
spec.destinationAddress Creates an IPv4 virtual server address for ingress connections.
spec.v6destinationAddress Creates an IPv6 virtual server address for ingress connections.
spec.destinationPort Defines the service port for inbound connections. When the Kubernetes Service being load balanced has multiple ports, install one CR per service port, or use port 0 to listen on all ports.
spec.snatType Enables translating the source IP address of ingress packets to TMM's self IP addresses: SRC_TRANS_AUTOMAP to enable, or SRC_TRANS_NONE to disable (default).
spec.idleTimeout The connection idle timeout period in seconds. The default is 300.
spec.inboundSnatEnabled Enable source network address translation: true (default), or false.
spec.inboundSnatIP The source IP address to use for translating inbound connections.
spec.loadBalancingMethod Specifies the load balancing method used to distribute traffic across pool members: ROUND_ROBIN distributes connections evenly across all pool members (default), and RATIO_LEAST_CONN_MEMBER distributes connections first to members with the least number of active connections.
spec.clientSideMultihoming Enables client-side connection multihoming: true or false (default).
spec.alternateAddressList Specifies a list of alternate IP addresses used when clientSideMultihoming is enabled. Each TMM Pod requires a unique alternate IP address, and the IP addresses are advertised via BGP to the upstream router. Each defined list is allocated to the TMMs in order: the first list to the first TMM, the second list to the second TMM, and so on.
spec.vlans.vlanList Specifies a list of F5SPKVlan CRs to listen for ingress traffic, using the CR's metadata.name. The list can also be disabled using disableListedVlans.
spec.vlans.category Specifies an F5SPKVlan CR category to listen for ingress traffic. The category can also be disabled using disableListedVlans.
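
The fragment below sketches several of these options together; it is not a complete CR, and the destination address and VLAN category name are illustrative placeholders. Setting destinationPort to 0 listens on all Service ports, as described above.

spec:
  destinationAddress: "192.168.1.123"
  destinationPort: 0
  vlans:
    category: "external"
    disableListedVlans: false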

CR example

apiVersion: "k8s.f5net.com/v1"
kind: F5SPKIngressNGAP
metadata:
  name: "ngap-cr"
  namespace: "ngap-apps"
service:
  name: "ngap-svc"
  port: 38412
spec:
  destinationAddress: "192.168.1.123"
  destinationPort: 38412
  idleTimeout: 100
  loadBalancingMethod: "RATIO_LEAST_CONN_MEMBER"
  snatType: "SRC_TRANS_AUTOMAP"
  vlans:
    vlanList:
    - vlan-external

Application Project

The Ingress Controller and Service Proxy TMM Pods install to a different Project than the NGAP application (Pods). When installing the Ingress Controller, set the controller.watchNamespace parameter to the NGAP Pod Project in the Helm values file. For example:

Important: Ensure the Project currently exists in the cluster; the Ingress Controller does not discover Projects created after installation.

controller:
  watchNamespace: "ngap-apps"
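
The values file is then passed to the Controller Helm release when it is installed or upgraded. The command below is only a sketch; the release name, chart reference, values file name, and Project are placeholders:

helm install f5ingress <f5ingress chart> -f values.yaml -n <controller project>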

Dual-Stack environments

Service Proxy TMM’s load balancing pool is created by discovering the Kubernetes Service Endpoints in the Project. In IPv4/IPv6 dual-stack environments, to populate the load balancing pool with IPv6 members, or with both IPv6 and IPv4 members, set the Kubernetes Service ipFamilyPolicy parameter to PreferDualStack with IPv6 listed first under ipFamilies, and set the F5SPKIngressNGAP CR’s spec.ipfamilies parameter accordingly. For example:

Kubernetes Service

apiVersion: v1
kind: Service
metadata:
  name: ngap-svc
  namespace: ngap-apps
spec:
  ipFamilyPolicy: PreferDualStack
  ipFamilies:
  - IPv6
  - IPv4
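
After the Service is created, you can confirm that dual-stack cluster IP addresses were assigned. This is a quick check that assumes a dual-stack capable cluster and the example Service and Project names:

oc get service ngap-svc -n ngap-apps -o jsonpath='{.spec.clusterIPs}'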

Important: When enabling PreferDualStack, ensure TMM’s internal F5SPKVlan interface configuration includes both IPv4 and IPv6 addresses.

F5SPKIngressNGAP CR

apiVersion: "k8s.f5net.com/v1"
kind: F5SPKIngressNGAP
metadata:
  namespace: ngap-apps
  name: ngap-cr
service:
  name: ngap-svc
spec:
  ipfamilies:
  - IPv4andIPv6
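
As referenced in the Important note above, TMM’s internal F5SPKVlan must carry both IPv4 and IPv6 self IP addresses. The sketch below is illustrative only; the interface, addresses, and prefix lengths are placeholders, and the exact parameter names should be verified against the F5SPKVlan documentation:

apiVersion: "k8s.f5net.com/v1"
kind: F5SPKVlan
metadata:
  name: "vlan-internal"
  namespace: "spk-ingress"
spec:
  name: net-internal
  interfaces:
  - "1.2"
  internal: true
  selfip_v4s:
  - 10.244.100.1
  prefixlen_v4: 24
  selfip_v6s:
  - 2002::10:244:100:1
  prefixlen_v6: 112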

SNAT requirement

The F5SPKIngressNGAP destinationAddress and v6destinationAddress parameters create virtual servers on the Service Proxy TMM, and it is possible to configure both IPv4 and IPv6 virtual servers with only an IPv6 or only an IPv4 pool. When the virtual server and pool IP address versions differ, you must set the snatType parameter to SRC_TRANS_AUTOMAP. The table below describes when to set the snatType parameter:

TMM Virtuals   K8S Service   TMM configuration with SNAT
IPv4/IPv6      IPv4/IPv6     IPv4 virtual with IPv4 pool, and IPv6 virtual with IPv6 pool. No SNAT required.
IPv4/IPv6      IPv4          IPv4 virtual with IPv4 pool, and IPv6 virtual with IPv4 pool. Set SRC_TRANS_AUTOMAP.
IPv4/IPv6      IPv6          IPv4 virtual with IPv6 pool, and IPv6 virtual with IPv6 pool. Set SRC_TRANS_AUTOMAP.
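
For example, a CR spec fragment fronting an IPv4-only Service with both IPv4 and IPv6 virtual servers would enable SNAT automap. The addresses below are illustrative placeholders:

spec:
  destinationAddress: "192.168.1.123"
  v6destinationAddress: "2002::192:168:1:123"
  destinationPort: 38412
  snatType: "SRC_TRANS_AUTOMAP"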

Ingress traffic

To enable ingress network traffic, Service Proxy TMM must be configured to advertise virtual server IP addresses to external networks using the BGP dynamic routing protocol. Alternatively, you can configure appropriate routes on upstream devices. For BGP configuration assistance, refer to the BGP Overview.
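
As a sketch of the static route alternative, an upstream Linux router could be configured to reach the example virtual server address through TMM’s external self IP address; the next-hop value is a placeholder:

ip route add 192.168.1.123/32 via <tmm external self IP>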

Requirements

Ensure you have:

Installation

Use the following steps to verify the application’s Service object configuration, and install the example F5SPKIngressNGAP CR.

  1. Switch to the application Project:

    oc project <project>
    

    In this example, the application is in the ngap-apps Project:

    oc project ngap-apps
    
  2. Verify that the K8S Service object NAME and PORT match the values set in the CR service.name and service.port parameters:

    kubectl get service
    

    In this example, the Service object NAME ngap-svc and PORT 38412 are set in the example CR:

    NAME         TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)
    ngap-svc     NodePort   10.99.99.99   <none>        38412:30714/TCP
    
  3. Copy the example CR into a YAML file:

    apiVersion: "k8s.f5net.com/v1"
    kind: F5SPKIngressNGAP
    metadata:
      name: "ngap-cr"
      namespace: "ngap-apps"
    service:
      name: "ngap-svc"
      port: 38412
    spec:
      destinationAddress: "192.168.1.123"
      destinationPort: 38412
      idleTimeout: 100
      loadBalancingMethod: "RATIO_LEAST_CONN_MEMBER"
      snatType: "SRC_TRANS_AUTOMAP"
      vlans:
        vlanList:
        - vlan-external
    
  4. Install the F5SPKIngressNGAP CR:

    oc apply -f spk-ingress-ngap.yaml
    
  5. NGAP clients should now be able to connect to the application through the Service Proxy TMM.
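
As a basic reachability check from an external client, an SCTP association attempt can be made to the virtual server address. This sketch assumes the Ncat utility with SCTP support and the example address and port used in this document:

ncat --sctp 192.168.1.123 38412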

Verify connectivity

If you installed the Ingress Controller with the Debug Sidecar enabled, connect to the sidecar to view virtual server and pool member connectivity statistics.

  1. Log in to the Service Proxy Debug container:

     kubectl attach -it f5-tmm-546c7cb9b9-zvjsf -c debug -n spk-ingress
    
  2. View the virtual server connection statistics:

    tmctl -d blade virtual_server_stat -s name,clientside.tot_conns
    

    For example:

    name                                clientside.tot_conns
    ----------------------------------- --------------------
    ngap-apps-ngap-cr-virtual-server                       31
    
  3. View the load balancing pool connection statistics:

    tmctl -d blade pool_member_stat -s pool_name,serverside.tot_conns
    

    For example:

    pool_name                           serverside.tot_conns
    ----------------------------------- --------------------
    ngap-apps-ngap-cr-pool                                15
    ngap-apps-ngap-cr-pool                                16
    
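The pool member entries can also be compared against the application’s Service Endpoints in the watched Project. This check uses the example Service and Project names from this document:

oc get endpoints ngap-svc -n ngap-apps
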

Supplemental