F5SPKEgress

The BIG-IP Next for Kubernetes F5SPKEgress Custom Resource (CR) enables egress connectivity for internal pods that need access to external networks. Additionally, the F5SPKEgress CR must reference an F5SPKDnscache CR to provide high-performance DNS caching. For the full list of CRs, refer to the BIG-IP Next for Kubernetes CRs.

Important: All the Egress CRs should be applied to the namespace where the TMM pod is configured.
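
The snippet below sketches how an F5SPKEgress CR might reference an F5SPKDnscache CR. The dnsCacheName parameter name is an assumption based on the F5SPKDnscache documentation; verify the exact field name against the CR reference for your release:

    apiVersion: "k8s.f5net.com/v3"
    kind: F5SPKEgress
    metadata:
      name: egress-cr
      namespace: <project>
    spec:
      # Assumed parameter: references the metadata.name of an installed
      # F5SPKDnscache CR; confirm the field name for your release.
      dnsCacheName: "<dnscache-cr-name>"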

The F5SPKEgress CR supports the following features:

  • Egress CR with SNAT: SNATs modify the source IP address of outgoing packets leaving the cluster. Using the F5SPKEgress CR, you can configure SNAT in three ways: SNAT none, SNAT pools, or SNAT automap. For more information, see Egress SNAT.
  • Egress CR with Shared SNAT with Flow-Forwarding: Multiple TMM pods use the same SNAT addresses to conserve resources. If return traffic arrives at a TMM pod other than the one that processed the original connection, that pod forwards it to the original pod so it is handled correctly. For more information, see Shared SNAT Address with Flow-Forwarding.
  • Egress CR with PseudoCNI: In Whole Cluster mode, BIG-IP Next for Kubernetes uses the F5SPKEgress CR to manage egress traffic from application pods through TMM pods. The CSRC DaemonSet sets up routes on worker nodes to direct traffic to the correct TMM. For more information, see Egress CR with PseudoCNI.

CR modifications

Because the F5SPKEgress CR references a number of additional CRs, F5 recommends that you always delete and reapply the CR, rather than using kubectl apply to modify the running CR configuration.

Note: Each time you modify the egress or DNS configuration, the TMM must redeploy.
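
For example, a minimal sketch of the recommended workflow, assuming the CR was originally applied from a file named spk-egress-cr.yaml as in the installation steps below:

    kubectl delete -f spk-egress-cr.yaml
    # edit spk-egress-cr.yaml as needed, then reapply it
    kubectl apply -f spk-egress-cr.yaml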

Requirements

Ensure you have:

  • Configured and installed an external and internal F5SPKVlan CR, as sketched below.
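
As a rough sketch of what an external F5SPKVlan CR can look like (the field names and values below are assumptions drawn from the F5SPKVlan reference; verify them against that documentation before use):

    # Sketch only: field names and values are assumptions; verify against
    # the F5SPKVlan reference before applying.
    apiVersion: "k8s.f5net.com/v1"
    kind: F5SPKVlan
    metadata:
      name: vlan-external
      namespace: <project>
    spec:
      name: external
      interfaces:
        - "1.2"
      selfip_v4s:
        - 192.168.100.1
      prefixlen_v4: 24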

Egress SNAT

SNATs are used to modify the source IP address of egress packets leaving the cluster. When the BIG-IP Next for Kubernetes Traffic Management Microkernel (TMM) receives a packet from an internal Pod, the external (egress) packet's source IP address is translated using a configured SNAT IP address. Using the F5SPKEgress CR, you can apply SNAT IP addresses using either SNAT pools or SNAT automap.

snatType

There are three types of SNAT:

Note: Currently, the Egress use case with the SNAT none type is not supported for LA.

SNAT none

When snatType is configured with SRC_TRANS_NONE, TMM uses the application pod’s IP address to communicate with an external server.

SNAT pools

When snatType is configured with SRC_TRANS_SNATPOOL, TMM uses the configured MBIP server-side SNAT pools to communicate with an external server.

SNAT pools are lists of routable IP addresses, used by TMM to translate the source IP address of egress packets. SNAT pools provide a greater number of available IP addresses, and offer more flexibility for defining the SNAT IP addresses used for translation. If snatType is configured with the SRC_TRANS_SNATPOOL type, egressSnatpool is used as a pool of addresses for source address translation for traffic that matches this F5SPKEgress CR. For more information on enabling SNAT pools, see F5SPKSnatpool.
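
For example, a minimal sketch of an F5SPKEgress CR that selects SNAT pool translation, using the parameters described in this document (the pool name is illustrative):

    apiVersion: "k8s.f5net.com/v3"
    kind: F5SPKEgress
    metadata:
      name: egress-cr
      namespace: <project>
    spec:
      snatType: SRC_TRANS_SNATPOOL
      # Must match the spec.name of an installed F5SPKSnatpool CR
      egressSnatpool: egress_snatpool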

SNAT automap

When snatType is configured with SRC_TRANS_AUTOMAP, TMM uses the configured MBIP server-side self IP addresses to communicate with an external server.

SNAT automap uses TMM’s external F5SPKVlan IP address as the source IP for egress packets. SNAT automap is easier to implement, and conserves IP address allocations. To use SNAT automap, leave the spec.egressSnatpool parameter undefined (default). Use the installation procedure below to enable egress connectivity using SNAT automap.

Note: In clusters with multiple BIG-IP Next for Kubernetes Controller instances, ensure the IP addresses defined in each F5SPKSnatpool CR do not overlap.

Parameters

The parameters used to configure TMM for SNAT automap:

Parameter Description
snatType Specifies the type of SNAT address to use when forwarding requests that match this static route: SRC_TRANS_NONE, SRC_TRANS_SNATPOOL, or SRC_TRANS_AUTOMAP (default). For more information, see Egress SNAT.
spec.dualStackEnabled Enables creating both IPv4 and IPv6 wildcard virtual servers for egress connections: true or false (default).
spec.egressSnatpool References an installed F5SPKSnatpool CR using the spec.name parameter, or applies SNAT automap when undefined (default).

Installation

Follow the instructions below to set up the SNAT automap configuration using the F5SPKEgress CR:

  1. Copy the below example F5SPKEgress CR to a YAML file, then set the namespace parameter to the Controller’s Project and save:

    apiVersion: "k8s.f5net.com/v3"
    kind: F5SPKEgress
    metadata:
      name: egress-cr
      namespace: <project>
    spec:
      dualStackEnabled: <true|false>
      egressSnatpool: ""
    

    In this example, the CR installs to the default Project:

    apiVersion: "k8s.f5net.com/v3"
    kind: F5SPKEgress
    metadata:
      name: egress-cr
      namespace: default
    spec:
      dualStackEnabled: true
      egressSnatpool: ""
    
  2. Install the F5SPKEgress CR that you have created:

    kubectl apply -f <file name>
    

    In this example, the CR file is named spk-egress-cr.yaml:

    kubectl apply -f spk-egress-cr.yaml
    
  3. Internal Pods can now connect to external resources using the external F5SPKVlan self IP address.

  4. To verify traffic processing statistics, log in to the Debug Sidecar:

    kubectl exec -it <tmm-pod-name> -c debug -n <project>
    

    In this example, the debug sidecar is in the default Project:

    kubectl exec -it f5-tmm-wgrvm -c debug -n default
    
  5. Run the following tmctl command:

    tmctl -d blade virtual_server_stat -s name,serverside.tot_conns
    

    In this example, 3 IPv4 connections and 2 IPv6 connections have been initiated by internal Pods:

    name                              serverside.tot_conns
    -----------------                 --------------------
    default-egress-crd-egress-ipv6                2
    default-egress-crd-egress-ipv4                3
    

Configuring Shared SNAT Address

When using the default SNAT pool mode, each TMM pod is assigned a unique SNAT address. To optimize resource usage and reduce the number of SNAT addresses in large-scale deployments, however, TMM can be configured to share SNAT addresses. In shared SNAT address mode, multiple TMM pods can scale out and use the same set of SNAT addresses for handling egress traffic. Because the SNAT addresses are shared among TMM pods, returning traffic from the router may be directed to an alternate TMM pod that did not originally process the outgoing traffic. To address this, the alternate TMM pod forwards the returning traffic to the original TMM pod so it can be processed properly.

Note: Only valid returning traffic is forwarded and processed. Non-valid traffic includes any traffic that does not belong to an active TMM pod or does not match an existing outgoing connection. Ensure that your configuration aligns with these criteria to maintain proper traffic handling.

The table below describes the F5SPKEgress CR spec parameters for Flow-Forwarding configuration:

Parameter Description
spec.egressSnatpoolMode Specifies the type of SNAT address for egress: UNIQUE_SNAT_ADDRESS (default), SHARED_SNAT_ADDRESS_LOOSE_INIT, or SHARED_SNAT_ADDRESS_FLOW_FORWARD.
* UNIQUE_SNAT_ADDRESS (default): Each TMM pod is assigned a unique SNAT address, which is not shared between TMMs.
* SHARED_SNAT_ADDRESS_LOOSE_INIT:
Note: This option is not supported in the Titan_LA Release.
* SHARED_SNAT_ADDRESS_FLOW_FORWARD: Flow forwarding is enabled. The egressSnatpool parameter in the F5SPKEgress CR refers to the SNAT pool used for egress. This SNAT pool must have the sharedSnatAddressEnabled parameter set to true, meaning the same set of SNAT addresses is shared among all TMM pods. For more information, see F5SPKSnatpool.
spec.egressSnatpoolFlowForwardVlan Specifies the VLAN to be used for flow-forwarding packets between TMM pods.

Flow-Forwarding Architecture: Egress Traffic


Diagram 1: Flow-Forwarding Architecture: Egress Traffic

Diagram 1 describes how egress traffic from a CNF (Cloud-Native Function) node is sent to an external client through the TMM (Traffic Management Microkernel) with shared SNAT (Source Network Address Translation) applied.

Flow Description:

  1. Initiating Egress Traffic:
    • The CNF Node (X:x) initiates egress traffic, sending packets to the external client (Y:y) via the TMM.
  2. Applying SNAT (Source Network Address Translation):
    • To send packets to the external client (Y:y), the TMM changes the CNF node’s (X:x) source address to the egress SNAT address (S:s).
    • The packet is then forwarded from the TMM to the router with the SNAT address (S:s).
  3. Routing to the External Client:
    • The router forwards the packet to the external client (Y:y).
  4. Return Traffic from the External Client:
    • The external client (Y:y) sends a response packet back to the router.
    • Since the SNAT address (S:s) is shared across all TMMs, the router may return the packet to any available TMM (e.g., TMM 1), without knowing which specific TMM handled the initial outgoing packet.
  5. Identifying the Original TMM:
    • Upon receiving the packet, the TMM checks which TMM initiated the original packet flow.
    • The packet is then forwarded to the correct TMM (e.g., TMM 2) that originally handled the outgoing traffic.
  6. Returning the Packet to the CNF Node:
    • The identified TMM (e.g., TMM 2) sends the packet back to the originating CNF node (X:x), completing the egress traffic flow.

Installation

Follow the instructions below to set up the shared SNAT Address configuration using the F5SPKEgress and F5SPKSnatpool CRs:

  1. Copy the below example F5SPKEgress CR to a YAML file, then modify shared SNAT Address settings as required and save:

    apiVersion: "k8s.f5net.com/v3"
    kind: F5SPKEgress
    metadata:
      name: egress-cr
      namespace: default
    spec:
      dualStackEnabled: true
      maxTMMReplicas: 1
      egressSnatpool: egress-snatpool
      egressSnatpoolMode: SHARED_SNAT_ADDRESS_FLOW_FORWARD  
      egressSnatpoolFlowForwardVlan: external
      vlans:
        vlanlist: [egress-3003]
        disableListedVlans: false
    

    Notes:

    • For the description of the vlanlist and disableListedVlans parameters, see spec.egressVlans.
    • The same egressSnatpool should not be configured across multiple F5SPKEgress CRs.
  2. Install the F5SPKEgress CR that you have created:

    kubectl apply -f spk-egress-cr.yaml
    
    f5spkegress.k8s.f5net.com/egress-cr created
    
  3. Copy the below F5SPKSnatpool CR to a YAML file, then modify shared SNAT Address settings as required and save:

    apiVersion: "k8s.f5net.com/v1"  
    kind: F5SPKSnatpool
    metadata:
       name: egress-snatpool-cr
       namespace: default
    spec:
       name: egress-snatpool 
       sharedSnatAddressEnabled: true
       addressList:
        - - 44.44.44.4
          - 2002::44:44:44:4 
    
  4. Install the F5SPKSnatpool CR that you have created:

    kubectl apply -f spk-snatpool-cr.yaml
    
    f5spksnatpool.k8s.f5net.com/egress-snatpool-cr created
    
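
  5. Optionally, verify that both CRs are installed by querying the resource types shown in the creation output above:

    kubectl get f5spkegress,f5spksnatpool -n default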

Configuring PseudoCNI

In Whole Cluster mode, BIG-IP Next for Kubernetes uses the F5SPKEgress CR to carry the PseudoCNI configuration, which defines the settings required for egress traffic from application pods through TMM pods. The PseudoCNI Controller reads the PseudoCNI configuration from the F5SPKEgress CR, identifies the egress path (such as VLAN/VXLAN) for each application pod, and maps the pods to TMMs. These mappings are stored in a ConfigMap for the CSRC to use. The CSRC DaemonSet, running in privileged mode, configures rules and routes on the worker nodes that host application pods, redirecting egress traffic from the application pods toward the TMM.
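
Conceptually, the rules and routes that CSRC programs resemble Linux policy routing. The commands below are purely illustrative of that mechanism; CSRC creates and owns the actual configuration, and the addresses and interface names are placeholders:

    # Hypothetical sketch: steer egress traffic from an application pod
    # into a dedicated routing table whose default route points at TMM
    ip rule add from <app-pod-ip> lookup 100
    ip route add default via <tmm-self-ip> dev spk-ms.3001 table 100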

Parameters

The table below describes the F5SPKEgress CR spec parameters used to configure PseudoCNI to route egress application pod traffic through TMMs.

Parameter Description
egressSnatpool Specifies the SNAT Pool name.
pseudoCNIConfig.namespaces Specifies the namespaces of the application pods to which this egress configuration applies.
The pseudoCNIConfig.namespaces configuration ensures that each egress CR is restricted to only one application namespace at a time. This restriction prevents multiple namespaces from sharing the same egress CR, keeping egress traffic management isolated within a single namespace (see the sketch after this table).
pseudoCNIConfig.appPodInterface Specifies the application pod’s network interface name from which egress traffic originates. Example: “eth0”.
pseudoCNIConfig.vlanName Specifies the VLAN/VXLAN CR name.
pseudoCNIConfig.appNodeInterface Specifies the network interface name on the worker node. Egress traffic is directed towards the TMM through this interface on the node where the application pod is deployed.
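
Because each F5SPKEgress CR is limited to a single application namespace, a second namespace requires its own CR, as shown in this minimal sketch (names and VLAN values are illustrative):

    apiVersion: "k8s.f5net.com/v3"
    kind: F5SPKEgress
    metadata:
      name: egress-cr-spk-app-2
    spec:
      snatType: "SRC_TRANS_SNATPOOL"
      egressSnatpool: "<snat-pool-cr-name>"
      pseudoCNIConfig:
        namespaces:
          - "spk-app-2"   # one application namespace per egress CR
        appPodInterface: "eth0"
        vlanName: "vlan-3002"
        appNodeInterface: "spk-ms.3002"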

Installation

Requirements

Before installing the F5SPKEgress CR to set up the PseudoCNI configuration that routes egress application pod traffic through TMMs, ensure the following prerequisite is met:

Note: If you install F5SPKVLAN or F5SPKVXLAN, ensure the pseudoCNIConfig.vlanName parameter is configured with the corresponding VLAN or VXLAN CR name.

Follow the instructions below to set up the PseudoCNI configuration using the F5SPKEgress CR:

  1. Copy the below example F5SPKEgress CR to a YAML file, then modify PseudoCNI settings as required and save:

    apiVersion: "k8s.f5net.com/v3"
    kind: F5SPKEgress
    metadata:
    name: egress-cr-spk-app
    spec:
      snatType: "SRC_TRANS_POOL"
      egressSnatpool: "<snat-pool-cr-name>"
      psudeoCNIConfig: 
         namespaces: 
            - "spk-app-1"
         appPodInterface: "etho"
         vlanName: "vlan-3001"
         appNodeInterface: "spk-ms.3001"
    
  2. Install the F5SPKEgress CR that you have created:

    kubectl apply -f spk-egress-cr.yaml
    

Uninstallation

Follow the steps below for the graceful uninstallation of the F5SPKEgress CR:

  1. Delete the F5SPKEgress CR.

    kubectl delete -f spk-egress-cr.yaml
    
  2. Delete all dependent CRs of F5SPKEgress:

    For example: if the F5SPKVlan or F5SPKVXLAN CRs are referenced in the F5SPKEgress CR, delete them only after removing the F5SPKEgress CR (see Step 3).

  3. Delete the referenced F5SPKVlan or F5SPKVXLAN dependent CRs:

    kubectl delete -f f5-spk-vlan.yaml 
    

    or

    kubectl delete -f f5-spk-vxlan.yaml 
    
  4. Uninstall the product:

    kubectl delete -f spkinstance.yaml