F5SPKIngressUDP

Overview

The F5SPKIngressUDP Custom Resource (CR) configures the Service Proxy Traffic Management Microkernel (TMM) to proxy and load balance low-latency UDP application traffic between networks using a virtual server and load balancing pool. The F5SPKIngressUDP CR also provides options to tune how connections are processed, and to monitor the health of Service object Endpoints.

This document guides you through understanding, configuring, and installing a simple F5SPKIngressUDP CR.

CR integration stages

The graphic below displays the four integration stages used to begin processing application traffic. SPK CRs can also be integrated into your Helm release, managing all components with a single interface. Refer to the Helm CR Integration guide for more information.

(Figure: F5SPKIngressUDP CR integration stages)

CR Parameters

The table below describes the CR parameters used in this document; refer to the F5SPKIngressUDP Reference for the full list of parameters.

Parameter Description
service.name Selects the Service object name for the internal applications (Pods), and creates a round-robin load balancing pool using the Service Endpoints.
service.port Selects the Service object port value.
spec.destinationAddress Creates an IPv4 virtual server address for ingress connections.
spec.destinationPort Defines the service port for inbound connections.
spec.ipv6destinationAddress Creates an IPv6 virtual server address for ingress connections.
spec.idleTimeout The connection idle timeout period in seconds (1-4294967295). The default value is 60 seconds.
monitors.icmp.interval Specifies in seconds the monitor check frequency (1-86400). The default value is 5.
monitors.icmp.timeout Specifies in seconds the time in which the target must respond (1-86400). The default value is 16.
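
The dotted parameter names above map to the CR structure shown below. This is a minimal skeleton using placeholder values only; the full example installed later in this guide uses the same layout:

apiVersion: "ingressudp.k8s.f5net.com/v1"
kind: F5SPKIngressUDP
metadata:
  name: <CR name>
  namespace: <application Project>
service:
  name: <Service object name>
  port: <Service object port>
spec:
  destinationAddress: "<IPv4 virtual server address>"
  destinationPort: <inbound service port>
  ipv6destinationAddress: "<IPv6 virtual server address>"
  idleTimeout: <seconds>
monitors:
  icmp:
    interval: <seconds>
    timeout: <seconds>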

Application Project

The Ingress Controller and Service Proxy TMM Pods install to a different Project than the UDP application (Pods). When installing the Ingress Controller, set the controller.watchNamespace parameter to the UDP application's Project in the Helm values file. For example:

Important: Ensure the Project already exists in the cluster; the Ingress Controller does not discover Projects created after installation.

controller:
  watchNamespace: "udp-apps"
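
Because the Project must exist before the Ingress Controller is installed, create it first if needed. A minimal sketch, assuming the udp-apps name used in this guide; on OpenShift, oc new-project udp-apps achieves the same result:

apiVersion: v1
kind: Namespace
metadata:
  name: udp-apps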

Dual-Stack environments

Service Proxy TMM’s load balancing pool is created by discovering the Kubernetes Service Endpoints in the Project. In IPv4/IPv6 dual-stack environments, to populate the load balancing pool with IPv6 members, set the Service ipFamilyPolicy parameter to PreferDualStack and list IPv6 first under ipFamilies. For example:

kind: Service
metadata:
  name: bind-dns
  namespace: udp-apps
  labels:
    app: bind-dns
spec:
  ipFamilyPolicy: PreferDualStack
  ipFamilies:
  - IPv6
  - IPv4
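
A fuller sketch of such a Service is shown below. The NodePort type and UDP port 53 match the Installation steps later in this guide; the Pod selector app: bind-dns is an assumption and should match your application's labels:

apiVersion: v1
kind: Service
metadata:
  name: bind-dns
  namespace: udp-apps
  labels:
    app: bind-dns
spec:
  type: NodePort
  ipFamilyPolicy: PreferDualStack
  ipFamilies:
  - IPv6
  - IPv4
  selector:
    app: bind-dns
  ports:
  - name: dns-udp
    protocol: UDP
    port: 53
    targetPort: 53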

Ingress traffic

To enable ingress network traffic, the Service Proxy Pod must be configured to advertise virtual server IP addresses to remote networks, using the BGP dynamic routing protocol. Alternatively, you can configure appropriate routes on upstream devices. For BGP configuration assistance, refer to the BGP Overview.
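
As a simple illustration of the static-route alternative, the sketch below assumes a Linux upstream host, the virtual server addresses from the example CR in this guide, and a hypothetical <next-hop> address on Service Proxy TMM's external network:

# Route the IPv4 virtual server address toward TMM
ip route add 192.168.1.123/32 via <next-hop>
# Route the IPv6 virtual server address toward TMM
ip -6 route add 2001::100:100/128 via <next-hop>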

Requirements

Ensure you have:

Installation

Use the following steps to verify the application’s Service object configuration, and install the example F5SPKIngressUDP CR.

  1. Switch to the application Project:

    oc project <project>
    

    In this example, the application is installed to the udp-apps Project:

    oc project udp-apps
    
  2. Verify the Kubernetes Service object NAME and PORT for the application are set using the CR service.name and service.port parameters:

    oc get service
    

    In this example, the Service object name bind-dns and port 53 are set in the example CR:

    NAME        TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)       
    bind-dns    NodePort   10.99.99.99   <none>        53:30714/UDP 
    
  3. Copy the example CR into a YAML file:

    The code below creates an F5SPKIngressUDP CR file named spk-ingress-udp.yaml:

    cat << EOF > spk-ingress-udp.yaml
    apiVersion: "ingressudp.k8s.f5net.com/v1"
    kind: F5SPKIngressUDP
    metadata:
      namespace: udp-apps
      name: bind-dns-cr
    service:
      name: bind-dns
      port: 53
    spec:
      destinationAddress: "192.168.1.123"
      destinationPort: 53 
      ipv6destinationAddress: "2001::100:100"
      idleTimeout: 30
    monitors:
      icmp:
        interval: 3
        timeout: 10
    EOF
    
  4. Install the F5SPKIngressUDP CR (a verification sketch follows these steps):

    oc apply -f spk-ingress-udp.yaml
    
  5. DNS clients should now be able to connect to the application through the Service Proxy TMM.
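
To confirm the CR from step 4 was created, query it by file name; this works regardless of the CRD's resource name:

oc get -f spk-ingress-udp.yaml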

Verify connectivity

If you installed the Ingress Controller with the Debug Sidecar enabled, connect to the sidecar to view virtual server and pool member connectivity statistics.

  1. Log in to the Service Proxy Debug container:

    oc exec -it deploy/f5-tmm -c debug -n spk-ingress -- bash
    
  2. View the virtual server connection statistics:

    tmctl -f /var/tmstat/blade/tmm0 virtual_server_stat -s name,serverside.tot_conns
    

    For example:

    name                                serverside.tot_conns
    ----------------------------------- --------------------
    udp-apps-bind-dns-crd-virtual-server                 31
    
  3. View the load balancing pool connection statistics (the Endpoints check after these steps shows where these pool members come from):

    tmctl -f /var/tmstat/blade/tmm0 pool_member_stat -s pool_name,serverside.tot_conns 
    

    For example:

    udp-apps-bind-dns-crd-pool                        15
    udp-apps-bind-dns-crd-pool                        16
    
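The pool members shown above are created from the application's Service Endpoints. To confirm they correspond, list the Endpoints in the application Project (the Service name and Project are the examples used in this guide):

oc get endpoints bind-dns -n udp-apps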

Feedback

Provide feedback to improve this document by emailing spkdocs@f5.com.
