F5SPKIngressDiameter

Overview

This overview discusses the F5SPKIngressDiameter Custom Resource (CR). For the full list of CRs, refer to the SPK CRs overview. The F5SPKIngressDiameter CR configures the Service Proxy Traffic Management Microkernel (TMM) to proxy and load balance low-latency Diameter application traffic between networks using a virtual server and load balancing pool. The F5SPKIngressDiameter CR also provides options to tune how TCP or SCTP connections are processed, and to monitor the health of Service object Endpoints.

This document guides you through understanding, configuring and installing a simple F5SPKIngressDiameter CR.

CR integration

SPK CRs should be integrated into the cluster after the Kubernetes application Deployment and application Service object have been installed. The SPK Controller uses the CR service.name to discover the application Endpoints, and uses them to create TMM’s load balancing pool. The recommended method for installing SPK CRs is to include them in the application’s Helm release. Refer to the Helm CR Integration guide for more information.
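
For example, one common approach is to keep the CR manifest alongside the application's other templates in its Helm chart, so the CR installs and uninstalls with the application release. The chart layout below is illustrative only; the chart and file names are placeholders, not part of the SPK product:

diameter-app-chart/
├── Chart.yaml
├── values.yaml
└── templates/
    ├── deployment.yaml
    ├── service.yaml
    └── f5spkingressdiameter.yaml    # the F5SPKIngressDiameter CR described in this document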

CR parameters

The table below describes the CR parameters used in this document. For the full list of parameters, refer to the F5SPKIngressDiameter Reference.

service

The table below describes the CR service parameters.

Parameter   Description
name        Selects the Service object name for the internal applications (Pods), and creates a round-robin load balancing pool using the Service Endpoints.
port        Selects the Service object port value.

spec

The table below describes the CR spec parameters.

Parameter                           Description
externalTCP.destinationAddress      The IP address receiving ingress TCP connections.
externalTCP.destinationPort         The service port receiving ingress TCP connections. When the Kubernetes service being load balanced has multiple ports, install one CR per service, or use port 0 for all ports.
externalSession.originHost          The diameter host name sent to external peers in capabilities exchange messages.
externalSession.originRealm         The diameter realm name sent to external peers in capabilities exchange messages.
internalTCP.destinationAddress      The IP address receiving egress TCP connections.
internalTCP.destinationPort         The service port receiving egress TCP connections.
internalSession.persistenceKey      The diameter AVP to use as the ingress persistence record. The default is SESSION-ID[0].
internalSession.persistenceTimeout  The length of time in seconds that idle ingress persistence records remain valid. The default is 300.
loadBalancingMethod                 Specifies the load balancing method used to distribute traffic across pool members: ROUND_ROBIN distributes connections evenly across all pool members (default), and RATIO_LEAST_CONN_MEMBER distributes connections first to members with the least number of active connections.
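
For example, to accept ingress connections on all ports of a multi-port Service with a single CR, the externalTCP.destinationPort value can be set to 0, as described above. This is a partial sketch only, reusing the external address from the example CR below:

spec:
  externalTCP:
    destinationAddress: "192.168.10.50"
    destinationPort: 0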

CR example

apiVersion: "k8s.f5net.com/v1"
kind: F5SPKIngressDiameter
metadata:
  name: "diameter-app-cr"
  namespace: "diameter-apps"
service:
  name: "diameter-app"
  port: 3868
spec:
  externalTCP:
    destinationAddress: "192.168.10.50"
    destinationPort: 3868
  externalSession:
    originHost: "diameter.f5.com"
    originRealm: "f5"
  internalTCP:
    destinationAddress: "10.244.5.100"
    destinationPort: 3868
  internalSession:
    persistenceKey: "AUTH-APPLICATION-ID"
    persistenceTimeout: 100
  loadBalancingMethod: "RATIO_LEAST_CONN_MEMBER"

Application Project

The SPK Controller and Service Proxy TMM Pods install to a different Project than the Diameter application (Pods). When installing the SPK Controller, set the controller.watchNamespace parameter to the Diameter Pod Project(s) in the Helm values file. For example:

Note: The watchNamespace parameter accepts multiple namespaces.

controller:
  watchNamespace: 
    - "diameter-apps"
    - "diameter-apps2"

Dual-Stack environments

Service Proxy TMM’s load balancing pool is created by discovering the Kubernetes Service Endpoints in the Project. In IPv4/IPv6 dual-stack environments, to populate the load balancing pool with IPv6 members, set the Service object’s ipFamilyPolicy parameter to PreferDualStack and list IPv6 first in the ipFamilies parameter. For example:

apiVersion: v1
kind: Service
metadata:
  name: diameter-app
  namespace: diameter-apps
  labels:
    app: diameter-app
spec:
  ipFamilyPolicy: PreferDualStack
  ipFamilies:
  - IPv6
  - IPv4

Important: When enabling PreferDualStack, ensure TMM’s internal F5SPKVlan interface configuration includes both IPv4 and IPv6 addresses.
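
For reference, a dual-stack internal F5SPKVlan lists self IP addresses for both address families. The sketch below is illustrative only; the interface, addresses, and namespace are placeholders, and the exact parameters should be verified against the F5SPKVlan documentation:

apiVersion: "k8s.f5net.com/v1"
kind: F5SPKVlan
metadata:
  name: "vlan-internal"
  namespace: "spk-ingress"
spec:
  # Illustrative values only; replace with your environment's interface and addresses.
  name: net-internal
  interfaces:
    - "1.2"
  internal: true
  selfip_v4s:
    - "10.244.10.10"
  prefixlen_v4: 24
  selfip_v6s:
    - "2002::10:244:10:10"
  prefixlen_v6: 112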

Ingress traffic

To enable ingress network traffic, the Service Proxy Pod must be configured to advertise virtual server IP addresses to remote networks using the Border Gateway Protocol (BGP). Alternatively, you can configure appropriate routes on upstream devices. For BGP configuration assistance, refer to the BGP Overview.
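
For example, if BGP is not used, a static route for the external virtual server address can be added on an upstream Linux-based router instead. This is an illustrative sketch only; the next hop is a placeholder for TMM's external self IP address:

ip route add 192.168.10.50/32 via <tmm external self IP>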

Endpoint availability

Service Proxy TMM load balances ingress Diameter connections to the Pod Service Endpoints, and creates persistence records using the SESSION-ID[0] Attribute-Value Pair (AVP) by default. When a Service Endpoint is either removed from the Service object (scaling) or fails a Kubernetes health check, its connections are load balanced to the remaining available Endpoints.
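
To see the Service Endpoints currently available for pool membership, you can list them with a standard command; for example, using the Service and Project names from this document:

oc get endpoints diameter-app -n diameter-apps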

Requirements

Ensure you have:

  • Installed a K8S Service object and application.
  • Installed the SPK Controller Pods.
  • A Linux-based workstation.
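
You can confirm the first two requirements with standard commands; for example, using the application Project from this document and a placeholder for the Project where the Controller was installed:

oc get deployment,service -n diameter-apps
oc get pods -n <spk controller project>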

Installation

Use the following steps to verify the application’s Service object configuration, and install the example F5SPKIngressDiameter CR.

  1. Switch to the application Project:

    oc project <project>
    

    In this example, the application is in the diameter-apps Project:

    oc project diameter-apps
    
  2. Verify the K8S Service object NAME and PORT are set using the CR service.name and service.port parameters:

    oc get service
    

    In this example, the Service object NAME diameter-app and PORT 3868 are set in the example CR:

    NAME          TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)         
    diameter-app  NodePort   10.99.99.99   <none>        3868:30714/TCP  
    
  3. Copy the example CR into a YAML file, for example spk-ingress-diameter.yaml:

    apiVersion: "k8s.f5net.com/v1"
    kind: F5SPKIngressDiameter
    metadata:
      name: "diameter-app-cr"
      namespace: "diameter-apps"
    service:
      name: "diameter-app"
      port: 3868
    spec:
      externalTCP:
        destinationAddress: "192.168.10.50"
        destinationPort: 3868
      externalSession:
        originHost: "diameter.f5.com"
        originRealm: "f5"
      internalTCP:
        destinationAddress: "10.244.5.100"
        destinationPort: 3868
      internalSession:
        persistenceKey: "AUTH-APPLICATION-ID"
        persistenceTimeout: 100
      loadBalancingMethod: "RATIO_LEAST_CONN_MEMBER"
    
  4. Install the F5SPKIngressDiameter CR:

    oc apply -f spk-ingress-diameter.yaml
    
  5. Diameter clients should now be able to connect to the application through the Service Proxy TMM.
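
     Optionally, before running a Diameter client, you can confirm that the external TCP listener is reachable from a client network. This is a simple reachability check only, using the address and port from the example CR:

     nc -vz 192.168.10.50 3868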

Verify Connectivity

If you installed the SPK Controller with the Debug Sidecar enabled, connect to the sidecar to view virtual server and pool member connectivity statistics.

  1. Log in to the TMM Debug container:

    oc exec -it deploy/f5-tmm -c debug -n <project> -- bash
    

    In this example, the TMM Pod is in the spk-ingress Project:

    oc exec -it deploy/f5-tmm -c debug -n spk-ingress -- bash
    
  2. View the virtual server connection statistics:

    tmctl -f /var/tmstat/blade/tmm0 virtual_server_stat -s name,serverside.tot_conns 
    

    For example:

    name                                      serverside.tot_conns
    ---------------------------------         --------------------
    diameter-apps-diameter-app-int-vs                           19 
    diameter-apps-diameter-app-ext-vs                           31 
    
  3. View the load balancing pool connection statistics:

    tmctl -f /var/tmstat/blade/tmm0 pool_member_stat -s pool_name,serverside.tot_conns 
    

    For example:

    pool_name                                 serverside.tot_conns
    ---------------------------------         --------------------
    diameter-apps-diameter-app-pool                             15
    diameter-apps-diameter-app-pool                             16
    

Feedback

Provide feedback to improve this document by emailing spkdocs@f5.com.

Supplemental