F5SPKIngressEgressUDP¶
Overview¶
This overview discusses the F5SPKIngressEgressUDP Custom Resource (CR). For the full list of CRs, refer to the SPK CRs overview. The F5SPKIngressEgressUDP CR configures the Service Proxy Traffic Management Microkernel (TMM) to proxy and load balance low-latency 5G UDP datagrams using a virtual IP address (VIP), and a load balancing pool consisting of 5G Network Function endpoints. The F5SPKIngressEgressUDP CR is designed to ensure that network packets responding to client requests, or egressing the cluster, use the VIP as the source IP address.
This document guides you through understanding, configuring and installing a simple F5SPKIngressEgressUDP CR.
CR integration¶
SPK CRs should be integrated into the cluster after the Kubernetes application Deployment and application Service object have been installed. The SPK Controller uses the CR service.name parameter to discover the application Endpoints, and uses them to create TMM's load balancing pool. The recommended method for installing SPK CRs is to include them in the application's Helm release. Refer to the Helm CR Integration guide for more information.
CR parameters¶
The tables below describe the CR parameters. Each heading below represents a top-level parameter element. For example, to set the Kubernetes Service object name, use the services.name parameter.
Parameter | Description |
---|---|
services.name | A list of Service object names for the internal applications (Pods). The Controller creates a load balancing pool using the discovered Service Endpoints. |
services.port | The exposed port for the Service. |
spec¶
Parameter | Description |
---|---|
destinationAddress | Creates an IPv4 virtual server address to receive ingress connections. |
destinationPort | Creates a service port to receive ingress connections. When the Kubernetes service being load balanced has multiple ports, install one CR per service, or use port 0 for all ports. |
v6destinationAddress | Creates an IPv6 virtual server address to receive ingress connections. |
loadBalancingMethod | Specifies the load balancing method used to distribute traffic across pool members: ROUND_ROBIN distributes connections evenly across all pool members (default), and RATIO_LEAST_CONN_MEMBER distributes connections first to members with the fewest active connections. |
ingressVlans.vlanList | Specifies a list of F5SPKVlan CRs to listen on for ingress traffic, using each CR's metadata.name. The list can also be disabled using disableListedVlans. |
ingressVlans.category | Specifies an F5SPKVlan CR category to listen on for ingress traffic. The category can also be disabled using disableListedVlans. |
ingressVlans.disableListedVlans | Disables the VLANs specified with the vlanList parameter: true (default) or false. Disabling a few VLANs can be simpler than enabling many VLANs. |
egressVlans.vlanList | Specifies a list of F5SPKVlan CRs to listen on for egress traffic, using each CR's metadata.name. The list can also be disabled using disableListedVlans. |
egressVlans.category | Specifies an F5SPKVlan CR category to listen on for egress traffic. The category can also be disabled using disableListedVlans. |
egressVlans.disableListedVlans | Disables the VLANs specified with the vlanList parameter: true (default) or false. Disabling a few VLANs can be simpler than enabling many VLANs. |
idleTimeout | Specifies the time in seconds that a client connection may remain idle before it is closed. The default is 60. |
ipFamilyPolicy | Specifies the dual-stack configuration for this Service: SingleStack, PreferDualStack (default), or RequireDualStack. |
ipfamilies | Specifies the IP version capabilities of the application: IPv4 (default), IPv6, or IPv4andIPv6. |
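As an illustration of how the VLAN parameters combine, the following spec fragment restricts ingress traffic to one listed VLAN while excluding one VLAN from egress. This is a sketch, not a tested configuration, and the VLAN names are hypothetical F5SPKVlan metadata.name values:

```yaml
spec:
  destinationAddress: 10.20.2.214
  destinationPort: 8100
  loadBalancingMethod: ROUND_ROBIN
  ingressVlans:
    vlanList:
      - "vlan-external"          # hypothetical F5SPKVlan metadata.name
    disableListedVlans: false    # listen only on the listed VLAN
  egressVlans:
    vlanList:
      - "vlan-internal-legacy"   # hypothetical; excluded from egress
    disableListedVlans: true     # disable the listed VLAN, use all others
```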
monitors¶
Parameter | Description |
---|---|
icmp.interval | Specifies the monitor check frequency in seconds. The default is 5. |
icmp.timeout | Specifies the time in seconds within which the target must respond. The default is 16. |
icmp.username | Specifies the username for HTTP authentication. |
icmp.password | Specifies the password for HTTP authentication. |
icmp.serversslProfileName | Specifies the server-side SSL profile this monitor uses to ping the target. The default is _mon_ssl. |
CR example¶
```yaml
apiVersion: "k8s.f5net.com/v1"
kind: F5SPKIngressEgressUDP
metadata:
  name: "ingress-egress-udp-app"
  namespace: "spk-app"
service:
  name: "ingress-egress-udp"
  port: 8100
spec:
  destinationPort: 8100
  destinationAddress: 10.20.2.214
monitors:
  icmp:
  - interval: 3
    timeout: 10
```
Application Project¶
The SPK Controller and Service Proxy TMM Pods install to a different Project than the UDP application Pods. When installing the SPK Controller, set the controller.watchNamespace parameter to the UDP Pod Project(s) in the Helm values file. For example:
Note: The watchNamespace parameter accepts multiple namespaces.
```yaml
controller:
  watchNamespace:
    - "spk-app"
    - "spk-app2"
```
Dual-Stack environments¶
Service Proxy TMM’s load balancing pool is created by discovering the Kubernetes Service Endpoints in the Project. In IPv4/IPv6 dual-stack environments, to populate the load balancing pool with IPv6 members, set the Service ipFamilyPolicy parameter to PreferDualStack and list IPv6 first in the ipFamilies parameter. For example:
```yaml
kind: Service
metadata:
  name: nginx-web-app
  namespace: spk-app
  labels:
    app: http2-web-app
spec:
  ipFamilyPolicy: PreferDualStack
  ipFamilies:
    - IPv6
    - IPv4
```
Important: When enabling PreferDualStack, ensure TMM’s internal F5SPKVlan interface configuration includes both IPv4 and IPv6 addresses.
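A dual-stack internal F5SPKVlan might look like the following sketch. The interface and subnet values are placeholders, and the field names should be verified against the F5SPKVlan reference for your SPK version:

```yaml
apiVersion: "k8s.f5net.com/v1"
kind: F5SPKVlan
metadata:
  name: "vlan-internal"
  namespace: "spk-ingress"
spec:
  name: net-internal
  interfaces:
    - "1.2"                  # placeholder TMM interface
  internal: true
  selfip_v4s:
    - 10.20.2.10             # placeholder IPv4 self IP
  prefixlen_v4: 24
  selfip_v6s:
    - 2002::10:20:2:10       # placeholder IPv6 self IP
  prefixlen_v6: 116
```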
Ingress traffic¶
To enable ingress network traffic, Service Proxy TMM must be configured to advertise virtual server IP addresses to external networks using the BGP dynamic routing protocol. Alternatively, you can configure appropriate routes on upstream devices. For BGP configuration assistance, refer to the BGP Overview.
Virtual IPs and ports¶
The F5SPKIngressEgressUDP CR configures a single virtual IPv4 and a single IPv6 address to receive ingress connections. However, to ensure UDP connections flow between client and server, specifically when the TMM Pods have multiple replicas, the CR creates the following three virtual server types:
- An ingress virtual server listening on the configured IP address and the specified service port to support basic UDP protocols.
- An ingress virtual server listening on the configured IP address and all service ports to support UDP protocols allowing multiple sessions over the same connection.
- An egress virtual server listening on TMM’s internal IP address and all service ports to support the internal Pods’ egress responses. The responses use the virtual IP address as the source IP address.
Important considerations¶
Review the following important points prior to configuring the F5SPKIngressEgressUDP CR:
- Do not configure the destinationPort to use port 53 when installing the F5SPKEgress CR on the same TMM Pod.
- Do not configure the destinationAddress with an IP address used for other UDP traffic in the cluster, or the same network, as performance may be affected.
Requirements¶
Ensure you have:
- Installed a K8S Service object and application.
- Installed the SPK Controller.
- A Linux based workstation.
- Installed the dSSM Database to support persistence records.
Installation¶
Use the following steps to obtain the application’s Service object information to configure and install the F5SPKIngressEgressUDP CR.
Switch to the application Project. In this example, the application is in the spk-app Project:

```
oc project spk-app
```
Use the Service object NAME and PORT to configure the CR service.name and service.port parameters:

```
oc get service
```

In this example, the Service object NAME is ingress-egress-udp and the PORT is 8100:

```
NAME                 TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)
ingress-egress-udp   NodePort   10.99.99.99   <none>        8100/UDP
```
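If you script this step, the NAME and PORT(S) columns can be extracted from the captured oc get service output with awk. This is a minimal sketch; the sample line mirrors the listing above:

```shell
# Parse the Service NAME and numeric PORT from one captured line of
# 'oc get service' output (sample copied from the listing above).
svc_line='ingress-egress-udp   NodePort   10.99.99.99   <none>   8100/UDP'

svc_name=$(echo "$svc_line" | awk '{print $1}')
# PORT(S) is the 5th column, formatted as PORT/PROTOCOL; keep the port.
svc_port=$(echo "$svc_line" | awk '{split($5, a, "/"); print a[1]}')

echo "$svc_name $svc_port"
```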
Copy the example F5SPKIngressEgressUDP CR into a YAML file:

```yaml
apiVersion: "k8s.f5net.com/v1"
kind: F5SPKIngressEgressUDP
metadata:
  name: "ingress-egress-udp-app"
  namespace: "spk-app"
service:
  name: "ingress-egress-udp"
  port: 8100
spec:
  destinationPort: 8100
  destinationAddress: 10.20.2.214
monitors:
  icmp:
  - interval: 3
    timeout: 10
```
Install the F5SPKIngressEgressUDP CR:

```
oc apply -f spk-ingress-egress-udp.yaml
```
UDP app clients should now be able to connect to the application through the Service Proxy TMM.
Connection statistics¶
If you installed the SPK Controller with the Debug Sidecar enabled, connect to the sidecar to view virtual server and pool member ingress (ext) and egress (int) connectivity statistics.
Log in to the Service Proxy Debug container:

```
oc exec -it deploy/f5-tmm -c debug -n spk-ingress -- bash
```
View the virtual server connection statistics:

```
tmctl -d blade virtual_server_stat -s name,clientside.tot_conns
```

For example:

```
name                                       clientside.tot_conns
------------------------------------------ --------------------
spk-apps-ingress-egress-udp-app-ext-vs     17
spk-apps-ingress-egress-udp-app-int-vs     12
spk-apps-ingress-egress-udp-app-ext-any-vs 11
```
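To total the connection counts across the three virtual servers, the numeric column can be summed with awk. This is a sketch; the here-document reproduces the sample statistics above:

```shell
# Sum the clientside.tot_conns column from a captured tmctl listing,
# skipping the two header lines (column names and dashes).
awk 'NR > 2 { total += $2 } END { print total }' <<'EOF'
name                                       clientside.tot_conns
------------------------------------------ --------------------
spk-apps-ingress-egress-udp-app-ext-vs     17
spk-apps-ingress-egress-udp-app-int-vs     12
spk-apps-ingress-egress-udp-app-ext-any-vs 11
EOF
```

With the sample values above, the script prints 40.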
View the load balancing pool connection statistics:

```
tmctl -d blade pool_member_stat -s pool_name,serverside.tot_conns
```

For example:

```
pool_name                                        serverside.tot_conns
------------------------------------------------ --------------------
spk-apps-ingress-egress-udp-app-pool-member-list 15
spk-apps-ingress-egress-udp-app-pool-member-list 16
```
Feedback¶
Provide feedback to improve this document by emailing spkdocs@f5.com.