F5SPKEgress¶
The BIG-IP Next for Kubernetes F5SPKEgress Custom Resource (CR) enables egress connectivity for internal pods that need access to external networks. Additionally, the F5SPKEgress CR must reference an F5SPKDnscache CR to provide high-performance DNS caching. For the full list of CRs, refer to the BIG-IP Next for Kubernetes CRs.
Important: All Egress CRs must be applied to the namespace where the TMM Pod is configured.
The F5SPKEgress CR supports the following features:
Egress CR with SNAT: SNATs modify the source IP address of outgoing packets leaving the cluster. Using the F5SPKEgress CR, you can configure SNAT in three ways: SNAT none, SNAT pools, or SNAT automap. For more information, see Egress SNAT.
Egress CR with Shared SNAT Address and Flow-Forwarding: Multiple TMM pods use the same SNAT addresses to save resources. If response traffic is sent to a different pod, it forwards the traffic to the original pod to handle it correctly. For more information, see Shared SNAT Address with Flow-Forwarding.
Egress CR with Shared SNAT Address and Security Protection: The security protection configuration is added to the F5SPKEgress CR to enhance the shared SNAT address with additional security. For more information, see Shared SNAT Address with Security Protection.
Egress CR with PseudoCNI: Whole Cluster BIG-IP Next for Kubernetes uses the F5SPKEgress CR to manage egress traffic from apps through TMM pods. The CSRC daemonset sets up routes on worker nodes to direct traffic to the correct TMM. For more information, see PseudoCNI.
Egress CR with VXLAN: Users now have the option to create a VXLAN directly when creating an F5SPKEgress CR. This streamlines the process by eliminating the need to create a separate F5SPKVXLAN CR and then reference it within the F5SPKEgress CR. For more information, see VXLAN with F5SPKEgress CR.
CR Modifications¶
Because the F5SPKEgress CR references a number of additional CRs, F5 recommends that you always delete and reapply the CR, rather than using kubectl apply to modify the running CR configuration.
Note: Each time you modify the egress or DNS configuration, TMM must redeploy.
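Following the recommendation above, a typical modification cycle deletes the running CR and reapplies the updated manifest rather than editing it in place (the file name `spk-egress-cr.yaml` is the example name used later in this document):

```shell
# Delete the running F5SPKEgress CR, then reapply the updated manifest.
# TMM redeploys after the change is applied.
kubectl delete -f spk-egress-cr.yaml
kubectl apply -f spk-egress-cr.yaml
```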
Egress SNAT¶
SNATs are used to modify the source IP address of egress packets leaving the cluster. When the BIG-IP Next for Kubernetes Traffic Management Microkernel (TMM) receives an internal packet from an internal Pod, the external (egress) packet source IP address will translate using a configured SNAT IP address. Using the F5SPKEgress CR, you can apply SNAT IP addresses using either SNAT pools, or SNAT automap.
snatType

There are three types of SNAT:

Note: Currently, the Egress use case with the SNAT none type is not supported for GA.

SNAT none

When snatType is configured with SRC_TRANS_NONE, TMM uses the application pod's IP address to communicate with an external server.
SNAT pools
When snatType is configured with SRC_TRANS_SNATPOOL, TMM uses the configured MBIP server-side SNAT pools to communicate with an external server.
SNAT pools are lists of routable IP addresses, used by TMM to translate the source IP address of egress packets. SNAT pools provide a greater number of available IP addresses, and offer more flexibility for defining the SNAT IP addresses used for translation. If snatType is configured with the SRC_TRANS_SNATPOOL type, egressSnatpool is used as the pool of addresses for source address translation for traffic that matches this F5SPKEgress CR. For more information on enabling SNAT pools, see F5SPKSnatpool.
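As an illustration, a minimal F5SPKEgress spec selecting the SNAT pool method might look like the following sketch. The CR name `egress-cr` and the pool name `egress-snatpool-cr` are placeholder values; the pool name must match the `spec.name` of an installed F5SPKSnatpool CR:

```yaml
apiVersion: "k8s.f5net.com/v3"
kind: F5SPKEgress
metadata:
  name: egress-cr
  namespace: <project>
spec:
  snatType: "SRC_TRANS_SNATPOOL"
  # Must reference an installed F5SPKSnatpool CR by its spec.name
  egressSnatpool: "egress-snatpool-cr"
```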
SNAT automap
When snatType is configured with SRC_TRANS_AUTOMAP, TMM uses the configured MBIP server side self IPs to communicate with an external server.
SNAT automap uses TMM’s external F5SPKVlan IP address as the source IP for egress packets. SNAT automap is easier to implement, and conserves IP address allocations. To use SNAT automap, leave the spec.egressSnatpool parameter undefined (default). Use the installation procedure below to enable egress connectivity using SNAT automap.
Note: In clusters with multiple BIG-IP Next for Kubernetes Controller instances, ensure the IP addresses defined in each F5SPKSnatpool CR do not overlap.
Parameters¶
The parameters used to configure TMM for egress SNAT:

| Parameter | Description |
|---|---|
| `snatType` | Specifies the type of SNAT address to use when forwarding requests that match this static route: `SRC_TRANS_NONE`, `SRC_TRANS_SNATPOOL`, or `SRC_TRANS_AUTOMAP` (default). For more information, see Egress SNAT. |
| `spec.dualStackEnabled` | Enables creating both IPv4 and IPv6 wildcard virtual servers for egress connections: `true` or `false` (default). |
| `spec.egressSnatpool` | References an installed F5SPKSnatpool CR using the `spec.name` parameter, or applies SNAT automap when undefined (default). |
Configure SNAT Automap¶
Follow the instructions below to set up the SNAT automap configuration using the F5SPKEgress CR:
Copy the below example F5SPKEgress CR to a YAML file, then set the `namespace` parameter to the Controller's Project and save.

```yaml
apiVersion: "k8s.f5net.com/v3"
kind: F5SPKEgress
metadata:
  name: egress-cr
  namespace: <project>
spec:
  dualStackEnabled: <true|false>
  egressSnatpool: ""
```
In this example, the CR installs to the default Project:
```yaml
apiVersion: "k8s.f5net.com/v3"
kind: F5SPKEgress
metadata:
  name: egress-cr
  namespace: default
spec:
  dualStackEnabled: <true|false>
  egressSnatpool: ""
```
Apply the F5SPKEgress CR that you have created.
kubectl apply -f <file name>
In this example, the CR file is named spk-egress-cr.yaml:
kubectl apply -f spk-egress-cr.yaml
Sample Output:
f5spkegress.k8s.f5net.com/spk-egress-cr created
Internal Pods can now connect to external resources using the external F5SPKVlan self IP address.
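To sanity-check egress after applying the CR, you can attempt an outbound connection from an internal Pod. The Pod name `my-app-pod` and namespace `my-app-ns` below are hypothetical placeholders for your own workload:

```shell
# From an internal application Pod, verify an external host is reachable.
# Substitute your own Pod and namespace names for the placeholders.
kubectl exec -it my-app-pod -n my-app-ns -- curl -sS --max-time 5 https://example.com
```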
Connection Statistics¶
Connect to the Debug Sidecar in the BIG-IP Next for Kubernetes to view the traffic processing statistics.
Log in to the BIG-IP Next for Kubernetes Debug container.
kubectl exec -it f5-tmm-wgrvm -c debug -n default -- bash
View the traffic processing statistics.
tmctl -d blade virtual_server_stat -s name,serverside.tot_conns
In this example, 3 IPv4 connections and 2 IPv6 connections have been initiated by internal Pods:

```
name                            serverside.tot_conns
------------------------------- --------------------
default-egress-crd-egress-ipv6                     2
default-egress-crd-egress-ipv4                     3
```
PseudoCNI¶
The Whole Cluster BIG-IP Next for Kubernetes uses the F5SPKEgress CR to incorporate the PseudoCNI configuration, which defines the settings required to egress traffic from application Pods through TMM Pods. The PseudoCNI Controller reads the PseudoCNI configuration from the F5SPKEgress CR and generates the settings needed to designate TMMs as egress gateways. These mappings are shared with CSRC over gRPC. The CSRC daemonSet, running in privileged mode, configures rules and routes on nodes hosting application Pods. The TMM running on the DPU local to the host node is given the highest priority compared to other TMMs.
Parameters¶
The table below describes the F5SPKEgress CR spec parameters used to configure PseudoCNI for egressing application Pod traffic via TMMs.

| Parameter | Description |
|---|---|
| `egressSnatpool` | Specifies the SNAT pool name. |
| `pseudoCNIConfig.namespaces` | Specifies the namespaces of the application Pods to which this egress configuration applies. This configuration restricts each egress CR to only one application namespace at a time. This restriction prevents multiple namespaces from sharing the same egress CR, ensuring that egress traffic management remains isolated and controlled within a single namespace. |
| `pseudoCNIConfig.appPodInterface` | Specifies the application Pod's network interface name from which egress traffic originates. Example: `eth0`. |
| `pseudoCNIConfig.appNodeInterface` | Specifies the network interface name on the worker node. Egress traffic is directed toward the TMM through this interface on the node where the application Pod is deployed. Note: This should be empty when `pseudoCNIConfig.vxlan.create` is set to `true`, and populated when `pseudoCNIConfig.vxlan.create` is set to `false`. For more information, see Configuring VXLAN with F5SPKEgress CR. |
| `pseudoCNIConfig.vxlan` | For the description of the `pseudoCNIConfig.vxlan` parameters, see the VXLAN Configuration Parameters table. |
Configuring PseudoCNI¶
Prerequisites
Ensure you have:
A Linux-based workstation.
The F5SPKVlan CR installed.
Follow the instructions below to set up the PseudoCNI configuration using the F5SPKEgress CR:
Copy the below example F5SPKEgress CR, including the VLAN, into a YAML file, then modify PseudoCNI settings as required and save.
```yaml
apiVersion: "k8s.f5net.com/v3"
kind: F5SPKEgress
metadata:
  name: egress-cr
spec:
  dualStackEnabled: true
  snatType: "SRC_TRANS_SNATPOOL"
  egressSnatpool: "<snat-pool-cr-name>"
  vlans:
    disableListedVlans: false
    vlanList:
      - f5-vlan-egress
  pseudoCNIConfig:
    namespaces:
      - "spk-app-1"
    appPodInterface: "eth0"
    appNodeInterface: "spk-ms.3001"
```
Apply the F5SPKEgress CR that you have created.
kubectl apply -f spk-egress-cr.yaml
Notes:
1. Using only a VLAN in the egress CR (without VXLAN) is supported.
2. Make sure to configure one egress CR per namespace and one VLAN per egress CR. A single VLAN cannot be referred to in multiple egress CRs.
Configuring Dynamic Port Selection for CSRC-CNE Controller Communication¶
Port Conflict Issue from CSRC Logs:
ERROR: Failed to listen: listen tcp :8751: bind: address already in use
Cause:
As the CSRC daemonSet runs in the host network namespace, the CSRC gRPC port (default: 8751) may conflict with other applications using the same port in the host network.
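To confirm the conflict on the affected worker node, you can check which listener already holds the port. This is a generic Linux check using `ss` (iproute2), not an F5-specific tool; 8751 is the default port:

```shell
# List any TCP listener already bound to the default CSRC gRPC port (8751);
# if nothing is listening, report that the port is free.
ss -ltnp | grep ':8751 ' || echo "port 8751 is free"
```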
FIX:
If a port conflict occurs during or after BIG-IP Next for Kubernetes installation, perform the following steps to fix this issue:
Update the Service Resource.
kubectl edit svc f5-csrc-grpc-svc
In the `csrc-grpc_service.yaml` file, modify:

```yaml
ports:
- name: f5-csrc-grpc
  port: 8751       # Change to a new port (e.g., 8875)
  protocol: TCP
  targetPort: 8751 # Update to match the new port (e.g., 8875)
```
Update the DaemonSet Configuration.
kubectl edit daemonset f5-spk-csrc
In the `daemonset.yaml` file, modify:

```yaml
- name: PSEUDOCNI_GRPC_PORT
  value: "8751" # Change to the new port (e.g., 8875)
```
The CSRC pods will automatically restart with the new port configuration, resolving the conflict.
VXLAN with F5SPKEgress CR¶
The setup process for the F5SPKVXLAN CR is automated to minimize manual input and prevent errors. Using the specified tmmInterfaceName, the node interface is automatically identified and the required underlay network information is obtained. A unique Virtual Network Identifier (VNI) and subnet are then chosen, and the VXLAN is associated with the F5SPKEgress CR. These improvements simplify the configuration process, improve efficiency, and ensure accurate configurations with minimal user intervention.
Parameters¶
The table below describes the F5SPKEgress CR spec parameters for VXLAN configuration.
| Parameter | Description |
|---|---|
| `vxlan.create` | Enables creating a VXLAN tunnel: `true` or `false` (default). |
| `vxlan.tmmInterfaceName` | Specifies the TMM VLAN interface on which the VXLAN is to be created. It must match the `metadata.name` of the VLAN Custom Resource (CR). This field is mandatory. |
| `vxlan.nodeInterfaceName` | This parameter is populated automatically and should not be manually configured. |
| `vxlan.mtu` | Specifies the Maximum Transmission Unit (MTU) to be set for the VXLAN tunnel. The default value is 1460. |
| `vxlan.port` | Specifies the port to be used for the VXLAN creation. The default value is 4789. |
| `vxlan.key` | Specifies the Virtual Network Identifier (VNI) to be used for the VXLAN tunnel. The default value is 0. This field is optional; if not configured, the PseudoCNI Controller generates the VNI and automatically populates this field. |
| `vxlan.ipv4Subnet` | Specifies the IPv4 addresses that can be used to assign the self IP on the tunnel interface in TMM and on nodes. This field is optional; if not configured, the PseudoCNI Controller generates a unique subnet for this VXLAN from the 10.0.0.0/8 network and automatically populates this field. |
| `vxlan.ipv4PrefixLen` | Specifies the prefix length for the IPv4 subnet assigned to the self IP. Note: Configure this only if `vxlan.ipv4Subnet` is set. |
| `vxlan.ipv6Subnet` | Specifies the IPv6 addresses that can be used to assign the self IP on the tunnel interface in TMM and on nodes. This field is optional; if not configured, the PseudoCNI Controller generates a unique subnet for this VXLAN from the fd00::/112 network and automatically populates this field. |
| `vxlan.ipv6PrefixLen` | Specifies the prefix length for the IPv6 subnet assigned to the self IP. Note: Configure this only if `vxlan.ipv6Subnet` is set. |
Configuring VXLAN with F5SPKEgress CR¶
Prerequisites:
- The underlying F5SPKVLAN CR, referred to in the F5SPKEgress CR as `tmmInterfaceName`, must be created before creating the F5SPKEgress CR.
- If `pseudoCNIConfig` is configured and `vxlan.create` is set to `true`:
  - `vlans.vlanList` should be empty.
  - `pseudoCNIConfig.appNodeInterface` should be empty.
- If `pseudoCNIConfig` is configured and `vxlan.create` is set to `false`:
  - `vlans.vlanList` should be populated, containing only one element.
- If `vxlan.nodeInterfaceName` is configured, the specified interface is used to create the VXLAN interface. If `vxlan.nodeInterfaceName` is not configured or set to default, the appropriate interface on the node is automatically detected: the interface that can connect to the VLAN interface specified by `tmmInterfaceName` in the TMM.
- When you configure a subnet for a VXLAN interface manually, ensure that unique VXLAN subnets are configured across all F5SPKEgress (VXLAN) CRs.
- When you configure a VNI manually, ensure that a unique VNI is configured for each F5SPKEgress (VXLAN) CR.
Follow the instructions below to set up the VXLAN configuration using the F5SPKEgress CR with default and specific configuration values:
Using Default Values
Copy the below example F5SPKEgress CR with default values into a YAML file, then modify VXLAN settings as required and save.
```yaml
apiVersion: "k8s.f5net.com/v3"
kind: F5SPKEgress
metadata:
  name: "egress-cr-spk-app"
spec:
  dualStackEnabled: true
  snatType: "SRC_TRANS_SNATPOOL"
  egressSnatpool: "egress-snat-vx100"
  pseudoCNIConfig:
    namespaces:
      - "spk-app-2"
    appPodInterface: "eth0"
    vxlan:
      create: true
      tmmInterfaceName: "internal"
```
Using Specific Configuration Values
Copy the below example F5SPKEgress CR with specific configuration values into a YAML file, then modify VXLAN settings as needed and save.
```yaml
apiVersion: "k8s.f5net.com/v3"
kind: F5SPKEgress
metadata:
  name: "egress-cr-spk-app"
spec:
  dualStackEnabled: true
  snatType: "SRC_TRANS_SNATPOOL"
  egressSnatpool: "egress-snat-vx100"
  pseudoCNIConfig:
    namespaces:
      - "spk-app-2"
    appPodInterface: "eth0"
    vxlan:
      create: true
      nodeInterfaceName: "eth0"
      tmmInterfaceName: "internal"
      port: 4789
      mtu: 1460
      key: 100
      ipv4Subnet: "40.10.0.0"
      ipv4PrefixLen: 24
      ipv6Subnet: "ab30::60:10:20:400"
      ipv6PrefixLen: 112
```
Apply the F5SPKEgress CR that you have created.
kubectl apply -f spk-egress-cr.yaml
Sample Output
f5spkegress.k8s.f5net.com/spk-egress-cr created
Uninstalling the F5SPKEgress CR¶
Follow the steps below for the graceful uninstallation of the F5SPKEgress CR:
Delete the F5SPKEgress CR.
kubectl delete -f spk-egress-cr.yaml
After deleting the F5SPKEgress CR, delete any dependent CRs, such as the F5SPKVlan CR.
kubectl delete -f f5-spk-vlan.yaml
Delete the BNKGatewayClass CR to remove all pods.
kubectl delete -f bnkgatewayclass-cr.yaml
Uninstall the F5 Lifecycle Operator helm chart.
helm uninstall flo -n default
