F5SPKIngressTCP¶
Overview¶
This overview discusses the F5SPKIngressTCP Custom Resource (CR). For the full list of CRs, refer to the SPK CRs overview. The F5SPKIngressTCP CR configures the Service Proxy Traffic Management Microkernel (TMM) to proxy and load balance low-latency TCP application traffic between networks using a virtual server and load balancing pool. The F5SPKIngressTCP CR also provides options to tune how connections are processed, and to monitor the health of Service object Endpoints.
This document guides you through understanding, configuring and installing a simple F5SPKIngressTCP CR.
CR integration¶
SPK CRs should be integrated into the cluster after the Kubernetes application Deployment and application Service object have been installed. The SPK Controller uses the CR service.name to discover the application Endpoints, and uses them to create TMM's load balancing pool. The recommended method for installing SPK CRs is to include them in the application's Helm release. Refer to the Helm CR Integration guide for more information.
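For example, the CR can be templated into the application's chart so it installs and uninstalls together with the application. The sketch below is illustrative only; the file path, the release-derived names, and the app.* values keys are hypothetical and should be adapted to your chart.

# templates/f5-spk-ingress-tcp.yaml (hypothetical file in the application chart)
apiVersion: "ingresstcp.k8s.f5net.com/v1"
kind: F5SPKIngressTCP
metadata:
  name: {{ .Release.Name }}-ingress-tcp
  namespace: {{ .Release.Namespace }}
service:
  name: {{ .Values.app.serviceName }}       # must match the application Service object name
  port: {{ .Values.app.servicePort }}       # must match the application Service object port
spec:
  destinationAddress: {{ .Values.app.virtualAddress | quote }}
  destinationPort: {{ .Values.app.servicePort }}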
CR parameters¶
The table below describes the CR parameters used in this document, refer to the F5SPKIngressTCP Reference for the full list of parameters.
service¶
The table below describes the CR service parameters.

Parameter | Description |
---|---|
name | Selects the Service object name for the internal applications (Pods), and creates a round-robin load balancing pool using the Service Endpoints. |
port | Selects the Service object port value. |
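As an illustration, the service.name and service.port values in the CR example later in this document would select a Service object similar to the following sketch; the selector label and targetPort are hypothetical.

apiVersion: v1
kind: Service
metadata:
  name: nginx-web-app          # referenced by the CR service.name
  namespace: web-apps
spec:
  selector:
    app: nginx-web-app         # hypothetical Pod label; the matching Endpoints become TMM pool members
  ports:
    - port: 80                 # referenced by the CR service.port
      targetPort: 8080         # hypothetical container port
      protocol: TCP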
spec¶
The table below describes the CR spec parameters.

Parameter | Description |
---|---|
destinationAddress | Creates an IPv4 virtual server address for ingress connections. |
destinationPort | Defines the service port for inbound connections. When the Kubernetes service being load balanced has multiple ports, install one CR per service port, or use port 0 for all ports. |
ipv6destinationAddress | Creates an IPv6 virtual server address for ingress connections. |
idleTimeout | The TCP connection idle timeout period in seconds (1-4294967295). The default value is 300 seconds. |
loadBalancingMethod | Specifies the load balancing method used to distribute traffic across pool members: ROUND_ROBIN distributes connections evenly across all pool members (default), and RATIO_LEAST_CONN_MEMBER distributes connections first to members with the least number of active connections. |
snat | Enables translating the source IP address of ingress packets to TMM's self IP addresses: SRC_TRANS_AUTOMAP to enable, or SRC_TRANS_NONE to disable (default). |
vlans.vlanList | Specifies a list of F5SPKVlan CRs to listen for ingress traffic, using the CR's metadata.name. The list can also be disabled using disableListedVlans. |
vlans.category | Specifies an F5SPKVlan CR category to listen for ingress traffic. The category can also be disabled using disableListedVlans. |
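For example, the spec.vlans block can either name individual F5SPKVlan CRs or select them by category. The following is a minimal sketch, assuming an F5SPKVlan CR named vlan-external exists and that other F5SPKVlan CRs carry a category value of external; the disableListedVlans comment reflects the reference description above.

spec:
  # Listen only on the named F5SPKVlan CRs (their metadata.name values)
  vlans:
    vlanList:
      - vlan-external

spec:
  # Alternatively, select VLANs by F5SPKVlan category; setting disableListedVlans
  # disables (does not listen on) the VLANs selected by this category
  vlans:
    category: external
    disableListedVlans: true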
monitors¶
The table below describes the CR monitors parameters.

Parameter | Description |
---|---|
tcp.interval | Specifies in seconds the monitor check frequency: 1 to 86400. The default is 5. |
tcp.timeout | Specifies in seconds the time in which the target must respond: 1 to 86400. The default is 16. |
CR example¶
apiVersion: "ingresstcp.k8s.f5net.com/v1"
kind: F5SPKIngressTCP
metadata:
name: "nginx-web-cr"
namespace: "web-apps"
service:
name: "nginx-web-app"
port: 80
spec:
destinationAddress: "192.168.1.123"
destinationPort: 80
ipv6destinationAddress: "2001::100:100"
idleTimeout: 30
loadBalancingMethod: "ROUND_ROBIN"
snat: "SRC_TRANS_AUTOMAP"
persist:
mode: "PERSIST_TYPE_SRCADDR"
timeout: 60
ipv4PrefixLength: 24
vlans:
vlanList:
- vlan-external
monitors:
tcp:
- interval: 3
timeout: 10
Application Project¶
The SPK Controller and Service Proxy TMM Pods install to a different Project than the TCP application (Pods). When installing the SPK Controller, set the controller.watchNamespace parameter to the TCP Pod Project(s) in the Helm values file. For example:
Note: The watchNamespace parameter accepts multiple namespaces.
controller:
  watchNamespace:
    - "web-apps"
    - "web-apps2"
Dual-Stack environments¶
Service Proxy TMM’s load balancing pool is created by discovering the Kubernetes Service Endpoints in the Project. In IPv4/IPv6 dual-stack environments, to populate the load balancing pool with IPv6 members, set the Service object's ipFamilyPolicy parameter to PreferDualStack and list IPv6 first in the ipFamilies array. For example:
apiVersion: v1
kind: Service
metadata:
  name: nginx-web-app
  namespace: web-apps
  labels:
    app: nginx-web-app
spec:
  ipFamilyPolicy: PreferDualStack
  ipFamilies:
    - IPv6
    - IPv4
Important: When enabling PreferDualStack, ensure TMM’s internal F5SPKVlan interface configuration includes both IPv4 and IPv6 addresses.
Ingress traffic¶
To enable ingress network traffic, Service Proxy TMM must be configured to advertise virtual server IP addresses to external networks using the BGP dynamic routing protocol. Alternatively, you can configure appropriate routes on upstream devices. For BGP configuration assistance, refer to the BGP Overview.
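For example, when BGP is not used, static routes can be added on an upstream router that point the CR's virtual server addresses at TMM's external F5SPKVlan self IP addresses. The commands below are an illustrative Linux (iproute2) sketch only; the next-hop self IP addresses are placeholders.

# Route the IPv4 and IPv6 virtual server addresses to TMM's external self IPs (placeholder next hops)
ip route add 192.168.1.123/32 via 192.168.10.100
ip -6 route add 2001::100:100/128 via 2002::10:100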
Session persistence¶
Session persistence enables the Service Proxy TMM to direct session requests to the same endpoint based on the client’s source IP address. To enable persistence, set the F5SPKIngressTCP CR’s spec.persist.mode parameter to PERSIST_TYPE_SRCADDR.
Important: The spec.persist parameter requires the dSSM Database to store session persistence records.
The table below describes the spec.persist parameters.

Parameter | Description |
---|---|
spec.persist.mode | Specifies the type of persistence: PERSIST_TYPE_NONE (default) or PERSIST_TYPE_SRCADDR - direct session requests to the same endpoint based on the client's source IP address. Requires the dSSM Database. |
spec.persist.timeout | Specifies the duration in seconds for the session persistence entries. The default value is 180 seconds. |
spec.persist.hashAlg | Specifies the algorithm the system uses for hash persistence load balancing: PERSIST_HASH_DEFAULT (default) - use an index of the pool members (endpoints) to determine the hash, or PERSIST_HASH_CARP - use the Cache Array Routing Protocol (CARP) to determine the hash. |
spec.persist.ipv4PrefixLength | Specifies the IPv4 prefix length that you want to use as the mask: 0-32. The default value is 32. |
spec.persist.ipv6PrefixLength | Specifies the IPv6 prefix length that you want to use as the mask: 0-128. The default value is 128. |
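For example, a spec.persist block using each of the parameters above might look like the following sketch; the values are illustrative only.

spec:
  persist:
    mode: "PERSIST_TYPE_SRCADDR"
    timeout: 300                     # seconds to keep each persistence record
    hashAlg: "PERSIST_HASH_CARP"     # use CARP instead of the default pool member index hash
    ipv4PrefixLength: 24             # persist IPv4 clients per /24 subnet
    ipv6PrefixLength: 64             # persist IPv6 clients per /64 prefix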
Requirements¶
Ensure you have:
- Installed a K8S Service object and application.
- Installed the SPK Controller.
- Installed the dSSM Database when enabling persistence.
- A Linux-based workstation.
Installation¶
Use the following steps to obtain the application’s Service object configuration, and configure and install the F5SPKIngressTCP CR.
Switch to the application Project:
oc project <project>
In this example, the application is in the web-apps Project:
oc project web-apps
Use the Service object NAME and PORT to configure the CR service.name and service.port parameters:
oc get service
In this example, the Service object NAME is nginx-web-app and the PORT is 80:
NAME            TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)
nginx-web-app   NodePort   10.99.99.99   <none>        80:30714/TCP
Copy the example CR into a YAML file:
apiVersion: "ingresstcp.k8s.f5net.com/v1" kind: F5SPKIngressTCP metadata: name: "nginx-web-cr" namespace: "web-apps" service: name: "nginx-web-app" port: 80 spec: destinationAddress: "192.168.1.123" destinationPort: 80 ipv6destinationAddress: "2001::100:100" idleTimeout: 30 loadBalancingMethod: "ROUND_ROBIN" snat: "SRC_TRANS_AUTOMAP" persist: mode: "PERSIST_TYPE_SRCADDR" timeout: 60 ipv4PrefixLength: 24 vlans: vlanList: - vlan-external monitors: tcp: - interval: 3 timeout: 10
Install the F5SPKIngressTCP CR:
oc apply -f spk-ingress-tcp.yaml
Verify the status of the installed CR:
oc get f5-spk-ingresstcp -n web-apps
In this example, the CR has installed successfully. Installation failures may indicate a missing CR dependency, such as a referenced VLAN.
NAME           STATUS    MESSAGE
nginx-web-cr   SUCCESS   CR config sent to all grpc endpoints
Web clients should now be able to connect to the application through the Service Proxy TMM.
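For example, a quick check from a client workstation that has a route to the advertised virtual address might look like the following; the address and port are the ones configured in the CR above.

# Expect an HTTP response served by one of the nginx-web-app Pods behind TMM
curl -v http://192.168.1.123:80/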
Connection statistics¶
If you installed the SPK Controller with the Debug Sidecar enabled, connect to the sidecar to view virtual server and pool member connectivity statistics.
Log in to the Service Proxy Debug container:
oc exec -it deploy/f5-tmm -c debug -n spk-ingress -- bash
View the virtual server connection statistics:
tmctl -d blade virtual_server_stat -s name,clientside.tot_conns
For example:
name                                    clientside.tot_conns
-------------------------------------   --------------------
web-apps-nginx-web-crd-virtual-server   31
View the load balancing pool connection statistics:
tmctl -d blade pool_member_stat -s pool_name,serverside.tot_conns
For example:
web-apps-nginx-web-crd-pool   15
web-apps-nginx-web-crd-pool   16
Persistence records¶
If you installed the SPK Controller with the Debug Sidecar enabled, connect to the sidecar to view the persistence record entries.
Obtain the IP address of the dSSM Sentinel:
In this example, dSSM is installed in the spk-utilities Project.
oc get svc -n spk-utilities
In this example, the Sentinel IP address is 10.103.180.204.
NAME               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)
f5-dssm-db         ClusterIP   10.108.254.57    <none>        6379/TCP
f5-dssm-sentinel   ClusterIP   10.103.180.204   <none>        26379/TCP
Log in to the debug sidecar container:
In this example, the debug sidecar is in the spk-ingress Project.
oc exec -it deploy/f5-tmm -c debug -n spk-ingress -- bash
View the session persistence records:
mrfdb -ipport=10.103.180.204:26379 -serverName=server -display=all -type=tcp_udp_persist
ClientAddress    PoolMemberAddress   ClientPort   PoolMemberPort   Timeout   PoolName
--------------------------------------------------------------------------------------------------
192.168.43.128   192.168.238.109     45468        80               60        web-apps-nginx-web-crd-pool
Feedback¶
Provide feedback to improve this document by emailing spkdocs@f5.com.