F5SPKIngressTCP

Overview

The F5SPKIngressTCP Custom Resource (CR) configures the Service Proxy Traffic Management Microkernel (TMM) to proxy and load balance low-latency TCP application traffic between networks using a virtual server and load balancing pool. The F5SPKIngressTCP CR also provides options to tune how connections are processed, and to monitor the health of Service object Endpoints.

This document guides you through understanding, configuring and installing a simple F5SPKIngressTCP CR.

CR integration stages

The graphic below displays the four integration stages used to begin processing application traffic. SPK CRs can also be integrated into your Helm release, managing all components with a single interface. Refer to the Helm CR Integration guide for more information.

[Figure: F5SPKIngressTCP CR integration stages]

CR Parameters

The table below describes the CR parameters used in this document, refer to the F5SPKIngressTCP Reference for the full list of parameters.

Parameter                      Description
service.name                   Selects the Service object name for the internal applications (Pods), and creates a round-robin load balancing pool using the Service Endpoints.
service.port                   Selects the Service object port value.
spec.destinationAddress        Creates an IPv4 virtual server address for ingress connections.
spec.destinationPort           Defines the service port for inbound connections.
spec.ipv6destinationAddress    Creates an IPv6 virtual server address for ingress connections.
spec.idleTimeout               The TCP connection idle timeout period in seconds (1-4294967295). The default value is 300 seconds.
monitors.tcp.interval          Specifies the monitor check frequency in seconds (1-86400). The default value is 5.
monitors.tcp.timeout           Specifies the time in seconds within which the target must respond (1-86400). The default value is 16.

Application Project

The Ingress Controller and Service Proxy TMM Pods install to a different Project than the TCP application (Pods). When installing the Ingress Controller, set the controller.watchNamespace parameter to the TCP Pod Project in the Helm values file. For example:

Important: Ensure the Project already exists in the cluster; the Ingress Controller does not discover Projects created after installation.

controller:
  watchNamespace: "web-apps"
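
If you install the Ingress Controller with Helm, the values file containing this setting is passed at install time. A minimal sketch, assuming placeholder release, repository, and chart names and a values file named ingress-values.yaml; the spk-ingress namespace matches the one used later in this guide:

helm install f5ingress <repository>/<ingress-chart> -f ingress-values.yaml -n spk-ingress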

Dual-Stack environments

Service Proxy TMM’s load balancing pool is created by discovering the Kubernetes Service Endpoints in the Project. In IPv4/IPv6 dual-stack environments, to populate the load balancing pool with IPv6 members, set the Service ipFamilyPolicy parameter to PreferDualStack and list IPv6 first in ipFamilies. For example:

apiVersion: v1
kind: Service
metadata:
  name: nginx-web-app
  namespace: web-apps
  labels:
    app: nginx-web-app
spec:
  ipFamilyPolicy: PreferDualStack
  ipFamilies:
  - IPv6
  - IPv4
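
One way to confirm that IPv6 endpoints are available for the pool is to list the Service's EndpointSlices; with the settings above, an IPv6 slice should appear alongside the IPv4 slice. The Service name and namespace below match the example:

oc get endpointslices -n web-apps -l kubernetes.io/service-name=nginx-web-app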

Ingress traffic

To enable ingress network traffic, Service Proxy TMM must be configured to advertise virtual server IP addresses to external networks using the BGP dynamic routing protocol. Alternatively, you can configure appropriate routes on upstream devices. For BGP configuration assistance, refer to the BGP Overview.
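
If you use static routes instead of BGP, the upstream device needs routes for the virtual server addresses that point to Service Proxy TMM's external interface. As a sketch, on a Linux-based upstream router the routes for the example addresses in this document might look like the following; the next-hop self IP addresses are placeholders:

ip route add 192.168.1.123/32 via <tmm-external-ipv4-selfip>
ip -6 route add 2001::100:100/128 via <tmm-external-ipv6-selfip>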

Requirements

Ensure you have:

Installation

Use the following steps to verify the application’s Service object configuration, and install the example F5SPKIngressTCP CR.

  1. Switch to the application Project:

    oc project <project>
    

    In this example, the application is in the web-apps Project:

    oc project web-apps
    
  2. Verify the K8S Service object NAME and PORT are set using the CR service.name and service.port parameters:

    oc get service 
    

    In this example, the Service object NAME nginx-web-app and PORT 80 are set in the example CR:

    NAME           TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S) 
    nginx-web-app  NodePort   10.99.99.99   <none>        80:30714/TCP
    
  3. Copy the example CR into a YAML file:

    The code below creates an F5SPKIngressTCP CR file named spk-ingress-tcp.yaml:

    cat << EOF > spk-ingress-tcp.yaml 
    apiVersion: "ingresstcp.k8s.f5net.com/v1"
    kind: F5SPKIngressTCP
    metadata:
      namespace: web-apps
      name: nginx-web-cr
    service:
      name: nginx-web-app
      port: 80
    spec:
      destinationAddress: "192.168.1.123"
      destinationPort: 80
      ipv6destinationAddress: "2001::100:100"
      idleTimeout: 30
    monitors:
      tcp:
        - interval: 3
          timeout: 10
    EOF
    
  4. Install the F5SPKIngressTCP CR:

    oc apply -f spk-ingress-tcp.yaml
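
    Optionally, confirm the CR object was created by querying the same file:

    oc get -f spk-ingress-tcp.yaml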
    
  5. Web clients should now be able to connect to the application through the Service Proxy TMM.
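
    As a quick check, connect from an external client that has a route to the advertised virtual server address; the address and port below are taken from the example CR:

    curl http://192.168.1.123:80/

    To test the IPv6 virtual server address:

    curl -g "http://[2001::100:100]:80/"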

Verify connectivity

If you installed the Ingress Controller with the Debug Sidecar enabled, connect to the sidecar to view virtual server and pool member connectivity statistics.

  1. Log in to the Service Proxy Debug container:

    oc exec -it deploy/f5-tmm -c debug -n spk-ingress -- bash
    
  2. View the virtual server connection statistics:

    tmctl -f /var/tmstat/blade/tmm0 virtual_server_stat -s name,serverside.tot_conns 
    

    For example:

    name                                serverside.tot_conns
    ----------------------------------- --------------------
    web-apps-nginx-web-crd-virtual-server                31
    
  3. View the load balancing pool connection statistics:

    tmctl -f /var/tmstat/blade/tmm0 pool_member_stat -s pool_name,serverside.tot_conns 
    

    For example:

    pool_name                           serverside.tot_conns
    ----------------------------------- --------------------
    web-apps-nginx-web-crd-pool                           15
    web-apps-nginx-web-crd-pool                           16
    

Feedback

Provide feedback to improve this document by emailing spkdocs@f5.com.

Supplemental