F5SPKEgress

The BIG-IP Next for Kubernetes F5SPKEgress Custom Resource (CR) enables egress connectivity for internal pods that need access to external networks. Additionally, the F5SPKEgress CR must reference an F5SPKDnscache CR to provide high-performance DNS caching. For the full list of CRs, refer to the BIG-IP Next for Kubernetes CRs.

Important: All Egress CRs should be applied to the namespace where the TMM pod is configured.

The F5SPKEgress CR supports the following features:

  • Egress CR with SNAT: SNATs modify the source IP address of outgoing packets leaving the cluster. Using the F5SPKEgress CR, you can configure SNAT in three ways: SNAT none, SNAT pools, or SNAT automap. For more information, see Egress SNAT.

  • Egress CR with Shared SNAT Address and Flow-Forwarding: Multiple TMM pods use the same SNAT addresses to save resources. If response traffic is sent to a different pod, it forwards the traffic to the original pod to handle it correctly. For more information, see Shared SNAT Address with Flow-Forwarding.

  • Egress CR with Shared SNAT Address and Security Protection: The security protection configuration is added to the F5SPKEgress CR to enhance the shared SNAT address with additional security. For more information, see Shared SNAT Address with Security Protection.

  • Egress CR with PseudoCNI: Whole Cluster BIG-IP Next for Kubernetes uses the F5SPKEgress CR to manage egress traffic from apps through TMM pods. The CSRC daemonset sets up routes on worker nodes to direct traffic to the correct TMM. For more information, see PseudoCNI.

  • Egress CR with VXLAN: Users now have the option to create a VXLAN directly when creating an F5SPKEgress CR. This streamlines the process by eliminating the need to create a separate F5SPKVXLAN CR and then reference it within the F5SPKEgress CR. For more information, see VXLAN with F5SPKEgress CR.

CR Modifications

Because the F5SPKEgress CR references a number of additional CRs, F5 recommends that you always delete and reapply the CR, rather than using kubectl apply to modify the running CR configuration.

Note: Each time you modify the egress or DNS configuration, the TMM redeploys.
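For example, the recommended delete-and-reapply workflow looks like this (the file name is illustrative):

```shell
# Remove the running F5SPKEgress CR, then reapply the updated file.
# spk-egress-cr.yaml is an example file name; substitute your own CR file.
kubectl delete -f spk-egress-cr.yaml
kubectl apply -f spk-egress-cr.yaml
```

This avoids modifying a running configuration in place, which can leave referenced CRs in an inconsistent state.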

Requirements

Ensure you have:

  • Configured and installed an external and internal F5SPKVlan CR.

Egress SNAT

SNATs are used to modify the source IP address of egress packets leaving the cluster. When the BIG-IP Next for Kubernetes Traffic Management Microkernel (TMM) receives a packet from an internal Pod, the external (egress) packet's source IP address is translated to a configured SNAT IP address. Using the F5SPKEgress CR, you can apply SNAT IP addresses using either SNAT pools or SNAT automap.

snatType

There are three types of SNAT:

Note: Currently, the Egress use case with SNAT none type is not supported for GA.

SNAT none

When snatType is configured with SRC_TRANS_NONE, TMM uses the application pod’s IP address to communicate with an external server.

SNAT pools

When snatType is configured with SRC_TRANS_SNATPOOL, TMM uses the configured MBIP server-side SNAT pools to communicate with an external server.

SNAT pools are lists of routable IP addresses used by TMM to translate the source IP address of egress packets. SNAT pools provide a greater number of available IP addresses and offer more flexibility for defining the SNAT IP addresses used for translation. If snatType is configured with the SRC_TRANS_SNATPOOL type, egressSnatpool is used as a pool of addresses for source address translation for traffic that matches this F5SPKEgress CR. For more information on enabling SNAT pools, see F5SPKSnatpool.
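As a minimal sketch of this pairing (the names and Project are illustrative, not from this document), a SNAT-pool egress configuration references the F5SPKSnatpool CR by its spec.name value:

```yaml
# Hypothetical F5SPKEgress CR selecting SNAT pool translation.
# egressSnatpool must match spec.name in an installed F5SPKSnatpool CR.
apiVersion: "k8s.f5net.com/v3"
kind: F5SPKEgress
metadata:
  name: egress-cr
  namespace: <project>
spec:
  snatType: SRC_TRANS_SNATPOOL
  egressSnatpool: "egress_snatpool"
```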

SNAT automap

When snatType is configured with SRC_TRANS_AUTOMAP, TMM uses the configured MBIP server-side self IP addresses to communicate with an external server.

SNAT automap uses TMM’s external F5SPKVlan IP address as the source IP for egress packets. SNAT automap is easier to implement, and conserves IP address allocations. To use SNAT automap, leave the spec.egressSnatpool parameter undefined (default). Use the installation procedure below to enable egress connectivity using SNAT automap.

Note: In clusters with multiple BIG-IP Next for Kubernetes Controller instances, ensure the IP addresses defined in each F5SPKSnatpool CR do not overlap.

Parameters

The parameters used to configure TMM for SNAT automap:

Parameter Description
snatType Specifies the type of SNAT address to use when forwarding requests that match this static route: SRC_TRANS_NONE, SRC_TRANS_SNATPOOL, or SRC_TRANS_AUTOMAP (default). For more information, see Egress SNAT.
spec.dualStackEnabled Enables creating both IPv4 and IPv6 wildcard virtual servers for egress connections: true or false (default).
spec.egressSnatpool References an installed F5SPKSnatpool CR using the spec.name parameter, or applies SNAT automap when undefined (default).

Configure SNAT Automap

Follow the instructions below to set up the SNAT automap configuration using the F5SPKEgress CR:

  1. Copy the below example F5SPKEgress CR to a YAML file, then set the namespace parameter to the Controller’s Project and save.

    apiVersion: "k8s.f5net.com/v3"
    kind: F5SPKEgress
    metadata:
      name: egress-cr
      namespace: <project>
    spec:
      dualStackEnabled: <true|false>
      egressSnatpool: ""
    

    In this example, the CR installs to the default Project:

    apiVersion: "k8s.f5net.com/v3"
    kind: F5SPKEgress
    metadata:
      name: egress-cr
      namespace: default
    spec:
      dualStackEnabled: <true|false>
      egressSnatpool: ""
    
  2. Apply the F5SPKEgress CR that you have created.

    kubectl apply -f <file name>
    

    In this example, the CR file is named spk-egress-cr.yaml:

    kubectl apply -f spk-egress-cr.yaml
    

    Sample Output:

    f5spkegress.k8s.f5net.com/spk-egress-cr created
    

Internal Pods can now connect to external resources using the external F5SPKVlan self IP address.
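To spot-check egress connectivity from the cluster side, you can open a connection from an internal Pod (the Pod name, namespace, and target below are placeholders, not from this document):

```shell
# From an internal pod, reach an external address. The external server
# sees the external F5SPKVlan self IP as the source address.
kubectl exec -it <internal-pod> -n <app-namespace> -- curl -sv https://example.com
```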

Connection Statistics

Connect to the Debug Sidecar in the BIG-IP Next for Kubernetes to view the traffic processing statistics.

  1. Log in to the BIG-IP Next for Kubernetes Debug container.

    kubectl exec -it f5-tmm-wgrvm -c debug -n default -- bash
    
  2. View the traffic processing statistics.

    tmctl -d blade virtual_server_stat -s name,serverside.tot_conns
    

    In this example, three IPv4 connections and two IPv6 connections have been initiated by internal Pods:

    name                              serverside.tot_conns
    -----------------                 --------------------
    default-egress-crd-egress-ipv6                2
    default-egress-crd-egress-ipv4                3
    

Shared SNAT Address and Security Protection

In the default SNAT pool mode, each TMM pod is assigned a unique SNAT address. However, to optimize resource usage and minimize the number of SNAT addresses in large-scale deployments, TMM can be configured to share a SNAT address. In shared SNAT address mode, multiple TMM pods use the same set of SNAT addresses to handle egress traffic. In this scenario, response traffic from the router may be directed to a different TMM pod than the one that originally handled the outgoing traffic. To address this, the alternate TMM pod forwards the valid response traffic to the original TMM pod so it can be processed properly.

Note: Response Traffic Handling: Only traffic that is valid and either belongs to an active TMM pod or matches an existing outgoing connection is forwarded and processed. Follow these rules to ensure smooth traffic management.

Shared SNAT Address with Flow-Forwarding

Parameters

The table below describes the F5SPKEgress CR spec parameters for Flow-Forwarding configuration:

Parameter Description
spec.egressSnatpoolMode Specifies the type of SNAT address for egress: UNIQUE_SNAT_ADDRESS (default) or SHARED_SNAT_ADDRESS_FLOW_FORWARD.
* UNIQUE_SNAT_ADDRESS (default): Each TMM pod is assigned a unique SNAT address, which is not shared between TMMs.
* SHARED_SNAT_ADDRESS_FLOW_FORWARD: Flow-forwarding is enabled. The egressSnatpool parameter in the F5SPKEgress CR refers to the SNAT pool used for egress. This SNAT pool must have the sharedSnatAddressEnabled parameter set to true, meaning the same set of SNAT addresses is shared among all TMM pods. For more information, see F5SPKSnatpool.
spec.egressSnatpoolFlowForwardVlan Specifies the VLAN to be used for packet flow-forwarding between TMM pods.

Flow-Forwarding Architecture: Egress Traffic

Note: ACL and Flow-Forwarding Compatibility: Access Control List (ACL) and Flow-Forwarding cannot be applied simultaneously to the same IP address. In the case of Flow-Forwarding, IP address specifically refers to the Source Network Address Translation (SNAT) address. This means that if Flow-Forwarding is enabled, ACL configurations for the SNAT address are not applicable.

The above diagram describes how egress traffic from a Cloud-Native Function (CNF) Node is sent to an External Client through TMM with Shared SNAT applied.

Egress Traffic Flow Description:

  1. Initiating Egress Traffic:

    • The CNF Node (X:x) initiates egress traffic, sending packets to the external client (Y:y) via the TMM.

  2. Applying SNAT (Source Network Address Translation):

    • To send packets to the external client (Y:y), the TMM changes the CNF node’s (X:x) source address to the egress SNAT address (S:s).

    • The packet is then forwarded from the TMM to the router with the SNAT address (S:s).

  3. Routing the egress traffic to the External Client:

    • The router forwards the packet to the external client (Y:y).

  4. Return Traffic from the External Client:

    • The external client (Y:y) sends a response packet back to the router.

    • Since the SNAT address (S:s) is shared across all TMMs, the router may return the packet to any available TMM (e.g., TMM 1), without knowing which specific TMM handled the initial outgoing packet.

  5. Identifying the Original TMM:

    • Upon receiving the packet, the TMM checks which TMM initiated the original packet flow.

    • The packet is then forwarded to the correct TMM (e.g., TMM 2) that originally handled the outgoing traffic.

  6. Returning the Packet to the CNF Node:

    • The identified TMM (e.g., TMM 2) sends the packet back to the originating CNF node (X:x), completing the egress traffic flow.

Configuring Shared SNAT Address with Flow-Forwarding

Follow the instructions below to set up the shared SNAT address configuration using the F5SPKEgress and F5SPKSnatpool CRs:

  1. Copy the below example F5SPKEgress CR to a YAML file, then modify shared SNAT Address settings as required and save.

    apiVersion: "k8s.f5net.com/v3"
    kind: F5SPKEgress
    metadata:
      name: egress-cr
      namespace: default
    spec:
      dualStackEnabled: true
      maxTMMReplicas: 1
      egressSnatpool: egress_snatpool 
      egressSnatpoolMode: SHARED_SNAT_ADDRESS_FLOW_FORWARD  
      egressSnatpoolFlowForwardVlan: external
      vlans:
        vlanList: [egress-3003]
        disableListedVlans: false
    

    Note: The same egressSnatpool should not be configured across multiple F5SPKEgress CRs.

  2. Apply the F5SPKEgress CR that you have created.

    kubectl apply -f spk-egress-cr.yaml
    

    Sample Output

    f5spkegress.k8s.f5net.com/spk-egress-cr created 
    
  3. Copy the below F5SPKSnatpool CR to a YAML file, then modify shared SNAT Address settings as required and save.

    apiVersion: "k8s.f5net.com/v1"  
    kind: F5SPKSnatpool
    metadata:
       name: egress-snatpool-cr
       namespace: default
    spec:
       name: egress-snatpool 
       sharedSnatAddressEnabled: true
       addressList:
        - - 44.44.44.4
          - 2002::44:44:44:4 
    
  4. Apply the F5SPKSnatpool CR that you have created.

    kubectl apply -f spk-snatpool-cr.yaml
    

    Sample Output

    f5spksnatpool.k8s.f5net.com/spk-snatpool-cr created
    

Shared SNAT Address with Security Protection

Parameters

The table below describes the F5SPKEgress CR spec parameters for shared SNAT address with Security Protection configuration:

Parameter Description
spec.egressSnatpoolMode Specifies the type of SNAT address for egress: UNIQUE_SNAT_ADDRESS (default) or SHARED_SNAT_ADDRESS_FLOW_FORWARD.
* UNIQUE_SNAT_ADDRESS (default): Each TMM pod is assigned a unique SNAT address, which is not shared between TMMs.
* SHARED_SNAT_ADDRESS_FLOW_FORWARD: Flow-forwarding is enabled. The egressSnatpool parameter in the F5SPKEgress CR refers to the SNAT pool used for egress. This SNAT pool must have the sharedSnatAddressEnabled parameter set to true, meaning the same set of SNAT addresses is shared among all TMM pods. For more information, see F5SPKSnatpool.
spec.egressSnatpoolFlowForwardVlan Specifies the VLAN to be used for packet flow-forwarding between TMM pods.
spec.egressSnatpoolProtectionEnabled Enables or disables the shared SNAT address with security protection for packet forwarding between TMM pods: true or false (default).

Configuring Shared SNAT Address with Security Protection

This section provides the configuration of shared SNAT address with security protection.

Security protection prevents packet forwarding of likely invalid flows. This can help reduce security risk and improve overall performance.

Follow the instructions below to set up the shared SNAT address with security protection using the F5SPKEgress CR:

  1. Copy the below example F5SPKEgress CR to a YAML file, then modify security protection settings as required and save.

    apiVersion: "k8s.f5net.com/v3"
    kind: F5SPKEgress
    metadata:
      name: egress-cr
      namespace: default
    spec:
      dualStackEnabled: true
      maxTMMReplicas: 1
      egressSnatpool: egress_snatpool
      egressSnatpoolMode: SHARED_SNAT_ADDRESS_FLOW_FORWARD
      egressSnatpoolFlowForwardVlan: external
      egressSnatpoolProtectionEnabled: true
      vlans:
        vlanList: [egress-3003]
        disableListedVlans: false

  2. Apply the F5SPKEgress CR that you have created.

    kubectl apply -f spk-egress-cr.yaml

    Sample Output:

    f5spkegress.k8s.f5net.com/spk-egress-cr created

Connection Statistics

Connect to the Debug Sidecar in the BIG-IP Next for Kubernetes TMM to view the security protection stats.

  1. Log in to the BIG-IP Next for Kubernetes Debug container.

    kubectl exec -it f5-tmm-6cdbc6bb65-j2r7d -c debug -n default -- bash
    
  2. View the security protection stats.

     tmctl -d blade profile_sharedscript_stat -P
    

    Sample Output:

     Name                             Value
     ---------------------------      --------
     cur_deny_listener                0
     tot_alternate_drop               0
     tot_origin_drop                  0
     tot_alternate_invalid_pkt        0
     tot_deny_listener                0
    

PseudoCNI

The Whole Cluster BIG-IP Next for Kubernetes uses the F5SPKEgress CR to incorporate the PseudoCNI configuration, which defines the settings required for egress traffic from application pods through TMM pods. The PseudoCNI Controller reads the PseudoCNI configuration from the F5SPKEgress CR and generates the settings needed to designate TMMs as egress gateways. These mappings are shared with CSRC over gRPC. The CSRC daemonSet, running in privileged mode, configures rules and routes on nodes hosting application pods. The TMM running on the DPU local to the host node is given the highest priority compared to other TMMs.

Parameters

The table below describes the F5SPKEgress CR spec parameters used to configure PseudoCNI configuration to egress application pod traffic via TMMs.

Parameter Description
egressSnatpool Specifies the SNAT Pool name.
pseudoCNIConfig.namespaces Specifies the namespaces of the application pods to which this egress configuration applies.
The pseudoCNIConfig.namespaces configuration ensures that each egress CR is restricted to only one application namespace at a time. This restriction prevents multiple namespaces from sharing the same egress CR, ensuring that egress traffic management remains isolated and controlled within a single namespace.
pseudoCNIConfig.appPodInterface Specifies the application pod’s network interface name from which egress traffic originates, for example “eth0”.
pseudoCNIConfig.appNodeInterface Specifies the network interface name on the worker node. The egress traffic is directed towards the TMM through this interface on the node, where the application pod is deployed.
Note: The pseudoCNIConfig.appNodeInterface should be empty when pseudoCNIConfig.vxlan.create is set to true, and it should be populated when pseudoCNIConfig.vxlan.create is set to false. For more information, see Configuring VXLAN with F5SPKEgress CR.
pseudoCNIConfig.vxlan For the parameters description of pseudoCNIConfig.vxlan, see VXLAN Configuration Parameters table.

Configuring PseudoCNI

Prerequisites

Ensure you have:

  • A Linux-based workstation.

  • An installed F5SPKVlan CR.

Follow the instructions below to set up the PseudoCNI configuration using the F5SPKEgress CR:

  1. Copy the below example F5SPKEgress CR, including the VLAN, into a YAML file, then modify PseudoCNI settings as required and save.

    apiVersion: "k8s.f5net.com/v3"
    kind: F5SPKEgress
    metadata:
      name: egress-cr
    spec:
      dualStackEnabled: true
      snatType: "SRC_TRANS_SNATPOOL"
      egressSnatpool: "<snat-pool-cr-name>"
      vlans:
        disableListedVlans: false
        vlanList:
          - f5-vlan-egress
      pseudoCNIConfig:
        namespaces:
          - "spk-app-1"
        appPodInterface: "eth0"
        appNodeInterface: "spk-ms.3001"
    
  2. Apply the F5SPKEgress CR that you have created.

    kubectl apply -f spk-egress-cr.yaml
    

Notes:
1. Using only VLAN in the egress CR (without VXLAN) is supported.
2. Make sure to configure one egress CR per namespace and one VLAN per egress CR. A single VLAN cannot be referred to in multiple egress CRs.

Configuring Dynamic Port Selection for CSRC-CNE Controller Communication

Port Conflict Issue from CSRC Logs:

    ERROR: Failed to listen: listen tcp :8751: bind: address already in use

Cause:

As the CSRC daemonSet runs in the host network namespace, the CSRC gRPC port (default: 8751) may conflict with other applications using the same port in the host network.
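Before changing the port, you can confirm which host process holds it (a sketch; run this on the affected node, and note the tool choice is an assumption, not part of the product):

```shell
# List the listener bound to TCP port 8751 in the host network namespace.
ss -tlnp | grep ':8751'
```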

FIX:

If a port conflict occurs during or after BIG-IP Next for Kubernetes installation, perform the following steps to fix this issue:

  1. Update the Service Resource.

    kubectl edit svc f5-csrc-grpc-svc
    

    In the csrc-grpc_service.yaml file, modify:

    ports:
      - name: f5-csrc-grpc
        port: 8751       # Change to a new port (e.g., 8875)
        protocol: TCP
        targetPort: 8751 # Update to match the new port (e.g., 8875)
    
  2. Update the DaemonSet Configuration.

    kubectl edit daemonset f5-spk-csrc
    

    In the daemonset.yaml file, modify:

    - name: PSEUDOCNI_GRPC_PORT
      value: 8751     # Change to the new port (e.g., 8875)
    

The CSRC pods will automatically restart with the new port configuration, resolving the conflict.

VXLAN with F5SPKEgress CR

The setup process for the F5SPKVXLAN CR is automated to minimize manual input and prevent errors. Using the specified tmmInterfaceName, the node interface is automatically identified and the required underlay network information is obtained. The PseudoCNI Controller then chooses a unique Virtual Network Identifier (VNI) and subnet, and associates the VXLAN with the F5SPKEgress CR. These improvements simplify the configuration process, improve efficiency, and ensure accurate configurations with minimal user intervention.

Parameters

The table below describes the F5SPKEgress CR spec parameters for VXLAN configuration.

Parameter Description
vxlan.create Enables creating a VXLAN tunnel: true or false (default).
vxlan.tmmInterfaceName Specifies the TMM VLAN interface on which the VXLAN is to be created. It must match the metadata.name of the VLAN Custom Resource (CR). This field is mandatory.
vxlan.nodeInterfaceName This parameter is populated automatically and should not be manually configured.
vxlan.mtu Specifies the Maximum Transmission Unit (MTU) to be set for the VXLAN tunnel. The default value is 1460.
vxlan.port Specifies the port to be used for the VXLAN creation. The default value is 4789.
vxlan.key Specifies the Virtual Network Identifier (VNI) to be used for the VXLAN tunnel. The default value is 0. This field is optional.
If not configured, the PseudoCNI Controller will generate the VNI and automatically populate this field.
vxlan.ipv4Subnet Specifies the IPv4 addresses that can be used to assign the self IP on the tunnel interface in TMM and on nodes. This field is optional.
If not configured, the PseudoCNI Controller shall generate a unique subnet for this VXLAN and automatically populate this field. The PseudoCNI Controller uses the 10.0.0.0/8 network to generate unique subnets.
vxlan.ipv4PrefixLen Specifies the prefix length for the IPv4 subnet assigned to the self IP.
Note: This should be configured only if vxlan.ipv4Subnet is set.
vxlan.ipv6Subnet Specifies the IPv6 addresses that can be used to assign the self IP on the tunnel interface in TMM and on nodes. This field is optional.
If not configured, the PseudoCNI Controller will generate a unique subnet for this VXLAN and automatically populate this field. The PseudoCNI Controller uses fd00::/112 network to generate unique subnets.
vxlan.ipv6PrefixLen Specifies the prefix length for the IPv6 subnet assigned to the self IP.
Note: This should be configured only if vxlan.ipv6Subnet is set.

Configuring VXLAN with F5SPKEgress CR

Prerequisites:

  • The underlying F5SPKVLAN CR, referred to in the F5SPKEgress CR as tmmInterfaceName, must be created before creating the F5SPKEgress CR.

  • If pseudoCNIConfig is configured and vxlan.create is set to true:

    • vlans.vlanList should be empty.

    • pseudoCNIConfig.appNodeInterface should be empty.

  • If pseudoCNIConfig is configured and vxlan.create is set to false:

    • vlans.vlanList should be populated, containing only one element.

  • If vxlan.nodeInterfaceName is configured, the specified interface will be used to create the VXLAN interface. If vxlan.nodeInterfaceName is not configured or set to default, the appropriate interface on the node is automatically detected. This detected interface will be the one that can connect to the VLAN interface specified by tmmInterfaceName in the TMM.

  • When you configure a subnet for a VXLAN interface manually, ensure that unique VXLAN subnets are configured for all F5SPKEgress (VXLAN) CRs.

  • When you configure a VNI manually, ensure that a unique VNI is configured for each F5SPKEgress (VXLAN) CR.

Follow the instructions below to set up the VXLAN configuration using the F5SPKEgress CR with default and specific configuration values:

  1. Using Default Values

    Copy the below example F5SPKEgress CR with default values into a YAML file, then modify VXLAN settings as required and save.

    apiVersion: "k8s.f5net.com/v3"
    kind: F5SPKEgress
    metadata:
      name: "egress-cr-spk-app"
    spec:
      dualStackEnabled: true
      snatType: "SRC_TRANS_SNATPOOL"
      egressSnatpool: "egress-snat-vx100"
      pseudoCNIConfig:
        namespaces:
          - "spk-app-2"
        appPodInterface: "eth0"
        vxlan:
          create: true
          tmmInterfaceName: "internal"
    
  2. Using Specific Configuration Values

    Copy the below example F5SPKEgress CR with specific configuration values into a YAML file, then modify VXLAN settings as needed and save.

    apiVersion: "k8s.f5net.com/v3"
    kind: F5SPKEgress
    metadata:
      name: "egress-cr-spk-app"
    spec:
      dualStackEnabled: true
      snatType: "SRC_TRANS_SNATPOOL"
      egressSnatpool: "egress-snat-vx100"
      pseudoCNIConfig:
        namespaces:
          - "spk-app-2"
        appPodInterface: "eth0"
        vxlan:
          create: true
          nodeInterfaceName: "eth0"
          tmmInterfaceName: "internal"
          port: 4789
          mtu: 1460
          key: 100
          ipv4Subnet: 40.10.0.0
          ipv4PrefixLen: 24
          ipv6Subnet: ab30::60:10:20:400
          ipv6PrefixLen: 112
    
  3. Apply the F5SPKEgress CR that you have created.

    kubectl apply -f spk-egress-cr.yaml  
    

    Sample Output

    f5spkegress.k8s.f5net.com/spk-egress-cr created
    

Uninstalling the F5SPKEgress CR

Follow the steps below for the graceful uninstallation of the F5SPKEgress CR:

  1. Delete the F5SPKEgress CR.

    kubectl delete -f spk-egress-cr.yaml
    
  2. After deleting the F5SPKEgress CR, delete any dependent CRs, such as the F5SPKVlan CR.

    kubectl delete -f f5-spk-vlan.yaml
    
  3. Delete the BNKGatewayClass CR to remove all pods.

    kubectl delete -f bnkgatewayclass-cr.yaml
    
  4. Uninstall the F5 Lifecycle Operator helm chart.

    helm uninstall flo -n default