F5SPKIngressGTP¶
Overview¶
This overview discusses the F5SPKIngressGTP Custom Resource (CR). For the full list of CRs, refer to the SPK CRs overview. The F5SPKIngressGTP CR configures the Service Proxy Traffic Management Microkernel (TMM) to proxy and load balance low-latency GPRS Tunnelling Protocol (GTP) traffic between networks using a virtual server and load balancing pool.
This document guides you through understanding, configuring and installing a simple F5SPKIngressGTP CR.
CR integration¶
SPK CRs should be integrated into the cluster after the Kubernetes application Deployment and application Service object have been installed. The SPK Controller uses the CR service.name parameter to discover the application Endpoints, and uses them to create TMM's load balancing pool. The recommended method for installing SPK CRs is to include them in the application's Helm release. Refer to the Helm CR Integration guide for more information.
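For example, a minimal sketch of the recommended layout, assuming a hypothetical application chart directory named gtp-app with the CR saved as templates/spk-ingress-gtp.yaml:

gtp-app/
├── Chart.yaml
├── values.yaml
└── templates/
    ├── deployment.yaml
    ├── service.yaml
    └── spk-ingress-gtp.yaml

Installing or upgrading the release then creates the Deployment, Service, and CR together:

helm install gtp-app ./gtp-app -n spk-apps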
CR parameters¶
The table below describes the CR parameters.
service¶
The table below describes the CR service parameters.

Parameter | Description
---|---
name | Selects the Service object name for the internal applications (Pods), and creates a round-robin load balancing pool using the Service Endpoints.
port | Selects the Service object port value.
spec¶
The table below describes the CR spec parameters.

Parameter | Description
---|---
destinationAddress | The IPv4 address receiving ingress GTP connections.
v6destinationAddress | The IPv6 address receiving ingress GTP connections.
destinationPort | The service port receiving ingress GTP connections. When the Kubernetes service being load balanced has multiple ports, install one CR per service, or use port 0 for all ports (see the sketch after this table).
persistenceMethod | Specifies the persistence method: NONE (default), TEID, or TEID_PLUS_ULI. The dSSM Database is required to store persistence entries.
persistenceTimeout | The persistence timeout for the GTP session in seconds. The default is 180.
idleTimeout | The idle timeout for the UDP connections in seconds. The default is 300.
enableInboundSnat | When true, enables source address translation on ingress connections.
snatIP | The IPv4 address used as the source for packets egressing the TMM Pod.
v6snatIP | The IPv6 address used as the source for packets egressing the TMM Pod.
spec.ipfamilies | Should match the Service object ipFamilies parameter, ensuring SNAT Automap is applied correctly: IPv4 (default), IPv6, and IPv4andIPv6.
vlans.vlanList | Explicitly specifies the list of VLANs on which to pass traffic.
vlans.disableListedVlans | When true, traffic passes on all VLANs except those in vlanList; when false, traffic passes only on the listed VLANs.
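As a sketch of the multi-port case, a spec fragment (addresses illustrative) using port 0 so the virtual server matches all of the Service's ports:

spec:
  destinationAddress: "192.168.10.100"
  destinationPort: 0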
CR example¶
apiVersion: "k8s.f5net.com/v1"
kind: F5SPKIngressGTP
metadata:
  name: "spk-gtp-app"
  namespace: "spk-apps"
service:
  name: "gtp-app"
  port: 2123
spec:
  destinationAddress: "192.168.10.100"
  destinationPort: 2123
  idleTimeout: 301
  enableInboundSnat: true
  snatIP: "10.10.10.100"
  enableTeidPersistence: false
  persistenceMethod: "TEID_PLUS_ULI"
  persistenceTimeout: 180
Application Project¶
The SPK Controller and Service Proxy TMM Pods install to a different Project than the GTP application (Pods). When installing the SPK Controller, set the controller.watchNamespace parameter to the GTP Pod Project(s) in the Helm values file. For example:
Note: The watchNamespace parameter accepts multiple namespaces.
controller:
  watchNamespace:
    - "spk-apps"
    - "spk-apps2"
Dual-Stack environments¶
Service Proxy TMM's load balancing pool is created by discovering the Kubernetes Service Endpoints in the Project. In IPv4/IPv6 dual-stack environments, to populate the load balancing pool with IPv6 members, set the Service ipFamilyPolicy parameter to PreferDualStack, and list IPv6 first in the ipFamilies parameter. For example:
kind: Service
metadata:
  name: gtp-app
  namespace: spk-apps
  labels:
    app: gtp-app
spec:
  ipFamilyPolicy: PreferDualStack
  ipFamilies:
    - IPv6
    - IPv4
Important: When enabling PreferDualStack, ensure TMM's internal F5SPKVlan interface configuration includes both IPv4 and IPv6 addresses.
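As a sketch only, an internal F5SPKVlan with both address families configured; the interface, addresses, and prefix lengths are placeholders for your environment:

apiVersion: "k8s.f5net.com/v1"
kind: F5SPKVlan
metadata:
  name: "vlan-internal"
  namespace: "spk-ingress"
spec:
  name: net-internal
  interfaces:
    - "1.2"
  internal: true
  selfip_v4s:
    - 10.20.2.1
  prefixlen_v4: 24
  selfip_v6s:
    - 2002::10:20:2:1
  prefixlen_v6: 112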
Ingress traffic¶
To enable ingress network traffic, the Service Proxy Pod must be configured to advertise virtual server IP addresses to remote networks using the Border Gateway Protocol (BGP). Alternatively, you can configure appropriate routes on upstream devices. For BGP configuration assistance, refer to the BGP Overview.
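If static routing is used instead of BGP, the upstream device needs a route for the virtual server IP pointing at TMM's external self IP address. A minimal Linux sketch, assuming a hypothetical TMM external self IP of 192.168.10.1 and the example virtual server IP from this document:

ip route add 192.168.10.100/32 via 192.168.10.1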
Requirements¶
Ensure you have:
- Installed a K8S Service object and application.
- Installed the dSSM Database when enabling persistence.
- Installed the SPK Controller Pods.
- A Linux-based workstation.
Installation¶
Use the following steps to verify the application’s Service object configuration, and install the example F5SPKIngressGTP CR.
Switch to the application Project:
oc project <project>
In this example, the application is in the spk-apps Project:
oc project spk-apps
Verify the K8S Service object NAME and PORT are set using the CR service.name and service.port parameters:

oc get service
In this example, the Service object NAME gtp-app and PORT 2123 are set in the example CR:

NAME      TYPE      CLUSTER-IP    EXTERNAL-IP  PORT(S)
gtp-app   NodePort  10.99.99.99   <none>       2123:2123/UDP
Copy the example CR into a YAML file:
apiVersion: "k8s.f5net.com/v1"
kind: F5SPKIngressGTP
metadata:
  name: "spk-gtp-app"
  namespace: "spk-apps"
service:
  name: "gtp-app"
  port: 2123
spec:
  destinationAddress: "192.168.10.100"
  destinationPort: 2123
  idleTimeout: 301
  enableInboundSnat: true
  snatIP: "10.10.10.100"
  enableTeidPersistence: false
  persistenceMethod: "TEID_PLUS_ULI"
  persistenceTimeout: 180
Install the F5SPKIngressGTP CR:
oc apply -f spk-ingress-gtp.yaml
GTP clients should now be able to connect to the application through the Service Proxy TMM.
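Optionally, confirm the CR object was created in the application Project. The CR's resource name below is an assumption, so look it up first with oc api-resources:

oc api-resources | grep -i gtp
oc get f5-spk-ingressgtp -n spk-apps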
Verify Connectivity¶
If you installed the SPK Controller with the Debug Sidecar enabled, connect to the sidecar to view virtual server and pool member connectivity statistics.
Log in to the TMM Debug container:
oc exec -it deploy/f5-tmm -c debug -n <project> -- bash
In this example, the TMM Pod is in the spk-ingress Project:
oc exec -it deploy/f5-tmm -c debug -n spk-ingress -- bash
View the virtual server connection statistics:
tmctl -d blade virtual_server_stat -s name,clientside.tot_conns
For example:
name                      clientside.tot_conns
------------------------- --------------------
gtp-apps-gtp-app-int-vs                     19
gtp-apps-gtp-app-ext-vs                     31
View the load balancing pool connection statistics:
tmctl -d blade pool_member_stat -s pool_name,serverside.tot_conns
For example:
pool_name              serverside.tot_conns
---------------------- --------------------
gtp-apps-gtp-app-pool                    15
gtp-apps-gtp-app-pool                    16
Feedback¶
Provide feedback to improve this document by emailing spkdocs@f5.com.