F5SPKEgress¶
The BIG-IP Next for Kubernetes F5SPKEgress Custom Resource (CR) enables egress connectivity for internal Pods that need access to external networks. Additionally, the F5SPKEgress CR must reference an F5SPKDnscache CR to provide high-performance DNS caching. For the full list of CRs, refer to the BIG-IP Next for Kubernetes CRs.
Important: All Egress CRs should be applied to the namespace where the TMM Pod is configured.
The F5SPKEgress CR supports the following features:
- Egress CR with SNAT: SNATs modify the source IP address of outgoing packets leaving the cluster. Using the F5SPKEgress CR, you can configure SNAT in three ways: SNAT none, SNAT pools, or SNAT automap. For more information, see Egress SNAT.
- Egress CR with Shared SNAT with Flow-Forwarding: Multiple TMM Pods use the same SNAT addresses to conserve resources. If return traffic arrives at a different TMM Pod, it is forwarded to the original Pod so the flow is handled correctly. For more information, see Shared SNAT Address with Flow-Forwarding.
- Egress CR with PseudoCNI: In Whole Cluster mode, BIG-IP Next for Kubernetes uses the F5SPKEgress CR to manage egress traffic from application Pods through TMM Pods. The CSRC DaemonSet sets up routes on worker nodes to direct traffic to the correct TMM. For more information, see Egress CR with PseudoCNI.
CR modifications¶
Because the F5SPKEgress CR references a number of additional CRs, F5 recommends that you always delete and reapply the CR, rather than using kubectl apply to modify the running CR configuration.
Note: Each time you modify the egress or DNS configuration, TMM must redeploy.
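For example, the recommended modification flow looks like this (a sketch; the file name is a placeholder):

```bash
# Delete the running CR rather than patching it in place
kubectl delete -f spk-egress-cr.yaml
# ...edit spk-egress-cr.yaml as needed...
# Reapply the updated CR; expect TMM to redeploy
kubectl apply -f spk-egress-cr.yaml
```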
Egress SNAT¶
SNATs are used to modify the source IP address of egress packets leaving the cluster. When the BIG-IP Next for Kubernetes Traffic Management Microkernel (TMM) receives a packet from an internal Pod, the source IP address of the external (egress) packet is translated to a configured SNAT IP address. Using the F5SPKEgress CR, you can apply SNAT IP addresses using either SNAT pools or SNAT automap.
snatType

There are three types of SNAT:

Note: Currently, the Egress use case with the SNAT none type is not supported for LA.
SNAT none

When snatType is configured with SRC_TRANS_NONE, TMM uses the application Pod's IP address to communicate with an external server.
SNAT pools

When snatType is configured with SRC_TRANS_SNATPOOL, TMM uses the configured MBIP server-side SNAT pools to communicate with an external server.

SNAT pools are lists of routable IP addresses that TMM uses to translate the source IP address of egress packets. SNAT pools provide a greater number of available IP addresses, and offer more flexibility for defining the SNAT IP addresses used for translation. When snatType is configured with SRC_TRANS_SNATPOOL, egressSnatpool is used as the pool of addresses for source address translation for traffic that matches this F5SPKEgress CR. For more information about enabling SNAT pools, see F5SPKSnatpool.
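As a hedged sketch of how the two CRs fit together (the F5SPKSnatpool layout and its apiVersion here are assumptions drawn from the referenced F5SPKSnatpool documentation; all names and addresses are placeholders), spec.egressSnatpool references the SNAT pool by its spec.name:

```yaml
# Assumed minimal F5SPKSnatpool layout; see the F5SPKSnatpool
# documentation for the authoritative schema
apiVersion: "k8s.f5net.com/v1"
kind: F5SPKSnatpool
metadata:
  name: egress-snatpool-cr
  namespace: <project>
spec:
  name: "egress_snatpool"     # the value spec.egressSnatpool refers to
  addressList:
    - - 10.244.10.1           # routable SNAT IP addresses (placeholders)
      - 10.244.20.1
---
apiVersion: "k8s.f5net.com/v3"
kind: F5SPKEgress
metadata:
  name: egress-cr
  namespace: <project>
spec:
  snatType: SRC_TRANS_SNATPOOL
  egressSnatpool: "egress_snatpool"
```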
SNAT automap

When snatType is configured with SRC_TRANS_AUTOMAP, TMM uses the configured MBIP server-side self IP addresses to communicate with an external server.

SNAT automap uses TMM's external F5SPKVlan IP address as the source IP for egress packets. SNAT automap is easier to implement, and conserves IP address allocations. To use SNAT automap, leave the spec.egressSnatpool parameter undefined (default). Use the installation procedure below to enable egress connectivity using SNAT automap.
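For example, a minimal F5SPKEgress CR that relies on SNAT automap simply omits egressSnatpool (a sketch; the name and Project are placeholders):

```yaml
apiVersion: "k8s.f5net.com/v3"
kind: F5SPKEgress
metadata:
  name: egress-automap-cr
  namespace: <project>
spec:
  dualStackEnabled: false
  # egressSnatpool is left undefined, so TMM applies SNAT automap
```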
Note: In clusters with multiple BIG-IP Next for Kubernetes Controller instances, ensure the IP addresses defined in each F5SPKSnatpool CR do not overlap.
Parameters¶
The parameters used to configure TMM for SNAT automap:
| Parameter | Description |
| --- | --- |
| snatType | Specifies the type of SNAT address to use when forwarding requests that match this static route: SRC_TRANS_NONE, SRC_TRANS_SNATPOOL, or SRC_TRANS_AUTOMAP (default). For more information, see Egress SNAT. |
| spec.dualStackEnabled | Enables creating both IPv4 and IPv6 wildcard virtual servers for egress connections: true or false (default). |
| spec.egressSnatpool | References an installed F5SPKSnatpool CR using the spec.name parameter, or applies SNAT automap when undefined (default). |
Installation¶
Follow the instructions below to set up the SNAT automap configuration using the F5SPKEgress CR:
1. Copy the example F5SPKEgress CR below to a YAML file, then set the namespace parameter to the Controller's Project and save:

```yaml
apiVersion: "k8s.f5net.com/v3"
kind: F5SPKEgress
metadata:
  name: egress-cr
  namespace: <project>
spec:
  dualStackEnabled: <true|false>
  egressSnatpool: ""
```
In this example, the CR installs to the default Project:

```yaml
apiVersion: "k8s.f5net.com/v3"
kind: F5SPKEgress
metadata:
  name: egress-cr
  namespace: default
spec:
  dualStackEnabled: true
  egressSnatpool: ""
```
2. Install the F5SPKEgress CR that you have created:

```bash
kubectl apply -f <file name>
```
In this example, the CR file is named spk-egress-cr.yaml:

```bash
kubectl apply -f spk-egress-cr.yaml
```
Internal Pods can now connect to external resources using the external F5SPKVlan self IP address.
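To spot-check egress connectivity before inspecting statistics, you can attempt an outbound request from an application Pod (a sketch; the Pod name, namespace, and target URL are placeholders, and the container image must provide curl):

```bash
kubectl exec -it <app-pod-name> -n <app-namespace> -- curl -sv https://example.com
```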
3. To verify traffic processing statistics, log in to the Debug Sidecar:

```bash
kubectl exec -it <tmm-pod-name> -c debug -n <project> -- bash
```
In this example, the debug sidecar is in the default Project:

```bash
kubectl exec -it f5-tmm-wgrvm -c debug -n default -- bash
```
4. Run the following tmctl command:

```bash
tmctl -d blade virtual_server_stat -s name,serverside.tot_conns
```
In this example, 3 IPv4 connections and 2 IPv6 connections have been initiated by internal Pods:

```
name                            serverside.tot_conns
------------------------------  --------------------
default-egress-crd-egress-ipv6                     2
default-egress-crd-egress-ipv4                     3
```
Configuring PseudoCNI¶
In Whole Cluster mode, BIG-IP Next for Kubernetes uses the F5SPKEgress CR to carry the PseudoCNI configuration, which defines the settings required to direct egress traffic from application Pods through TMM Pods. The PseudoCNI Controller reads the PseudoCNI configuration from the F5SPKEgress CR, identifies the egress path for each application Pod (such as VLAN/VXLAN), and maps them to TMMs. These mappings are stored in a ConfigMap for the CSRC to use. The CSRC DaemonSet, running in privileged mode, configures rules and routes on the worker nodes hosting application Pods. These configurations redirect egress traffic from the application Pods toward the TMM.
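To observe the result of these configurations, you can inspect the policy rules and routes on a worker node that hosts application Pods (a hedged sketch; the specific rules, priorities, and routing tables are deployment-specific):

```bash
# On a worker node hosting application Pods (or from a privileged debug Pod):
ip rule show                # policy rules programmed by the CSRC DaemonSet
ip route show table all     # routes steering egress traffic toward TMM
```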
Parameters¶
The table below describes the F5SPKEgress CR spec parameters used to configure the PseudoCNI settings that route egress application Pod traffic through TMMs.

| Parameter | Description |
| --- | --- |
| egressSnatpool | Specifies the SNAT pool name. |
| pseudoCNIConfig.namespaces | Specifies the namespaces of the application Pods to which this egress configuration applies. Each egress CR is restricted to one application namespace at a time; this prevents multiple namespaces from sharing the same egress CR, so that egress traffic management remains isolated within a single namespace. |
| pseudoCNIConfig.appPodInterface | Specifies the name of the application Pod's network interface from which egress traffic originates, for example eth0. |
| pseudoCNIConfig.vlanName | Specifies the VLAN/VXLAN CR name. |
| pseudoCNIConfig.appNodeInterface | Specifies the network interface name on the worker node where the application Pod is deployed. Egress traffic is directed toward the TMM through this interface. |
Installation¶
Requirements
Before installing the F5SPKEgress CR to set up the PseudoCNI configuration that routes egress application Pod traffic through TMMs, ensure the following prerequisites are met:
- The F5SPKVLAN CR is installed, or
- The F5SPKVXLAN CR is installed
Note: Whether you install F5SPKVLAN or F5SPKVXLAN, ensure the pseudoCNIConfig.vlanName parameter is configured with the name of the corresponding VLAN or VXLAN CR.
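As an illustration of that correspondence (names are placeholders, and this assumes "CR name" refers to the CR's metadata.name), pseudoCNIConfig.vlanName must match the name of the installed VLAN or VXLAN CR:

```yaml
# Metadata of an installed F5SPKVlan CR (name is a placeholder)
kind: F5SPKVlan
metadata:
  name: vlan-3001
---
# The F5SPKEgress CR references that CR by name
spec:
  pseudoCNIConfig:
    vlanName: "vlan-3001"
```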
Follow the instructions below to set up the PseudoCNI configuration using the F5SPKEgress CR:
1. Copy the example F5SPKEgress CR below to a YAML file, then modify the PseudoCNI settings as required and save:

```yaml
apiVersion: "k8s.f5net.com/v3"
kind: F5SPKEgress
metadata:
  name: egress-cr-spk-app
spec:
  snatType: "SRC_TRANS_SNATPOOL"
  egressSnatpool: "<snat-pool-cr-name>"
  pseudoCNIConfig:
    namespaces:
      - "spk-app-1"
    appPodInterface: "eth0"
    vlanName: "vlan-3001"
    appNodeInterface: "spk-ms.3001"
```
2. Install the F5SPKEgress CR that you have created:

```bash
kubectl apply -f spk-egress-cr.yaml
```
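To confirm that the PseudoCNI Controller has published the Pod-to-TMM mappings described above, you can list the ConfigMaps in the relevant Project (a sketch; the ConfigMap's name is deployment-specific):

```bash
kubectl get configmaps -n <project>
```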
Uninstallation¶
Follow the steps below for the graceful uninstallation of the F5SPKEgress CR:

1. Delete the F5SPKEgress CR:

```bash
kubectl delete -f spk-egress-cr.yaml
```

2. Delete all dependent CRs of the F5SPKEgress CR. For example, if the F5SPKVlan or F5SPKVXLAN CRs are referenced in the F5SPKEgress CR, delete them only after removing the F5SPKEgress CR, as shown in Step 3.

3. Delete the referenced F5SPKVlan or F5SPKVXLAN dependent CRs:

```bash
kubectl delete -f f5-spk-vlan.yaml
```

or

```bash
kubectl delete -f f5-spk-vxlan.yaml
```

4. Uninstall the product:

```bash
kubectl delete -f spkinstance.yaml
```