Managing the source address for egress traffic from the TMM pod

One of the standard Service Proxy for Kubernetes (SPK) Custom Resource Definitions (CRDs) is IngressRouteSnatpool.k8s.f5net.com. This CRD allows the administrator to apply a Source Network Address Translation (SNAT) rule to egress (outbound) traffic from the TMM pod, which ensures that the internal Pod address is not exposed outside of the cluster. When a Pod in a namespace watched by SPK makes an outbound request, the source IP is translated to one of the available IP addresses in the SNAT pool. SNAT also ensures that all inbound responses are routed back through the TMM pod to the application. Additional details for the SNAT pool CR can be found in the Service Proxy for Kubernetes documentation on clouddocs.f5.com.
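
If you want to confirm that this CRD is present in your cluster before you start, a quick check like the one below should list it. This is only a minimal sketch; the exact resource name can vary by SPK release.

oc get crd | grep -i snatpool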

What are ingress and egress traffic

The terms ingress and egress refer to the direction of traffic flow. Generally, when working with OpenShift and Kubernetes, this traffic originates or terminates at the pod. Egress traffic can be defined as packets that originate from a pod inside the OpenShift or Kubernetes network and travel out through switches and routers to an external destination. Ingress is simply the opposite: traffic that originates outside of a given network and travels to a pod inside the network.

What does SNAT solve

In some situations you may find that return traffic to the pod takes an alternative route, one that works for other services or clients but causes issues in your case. By using a SNAT address, the external client's response is sent to the TMM instance and then proxied back to the pod on your node in the required namespace. Likewise, outbound connections from the Pod may be presented with multiple routes due to the use of Multus. This can result in asymmetric traffic that the destination will likely drop. Using a snatPool forces the pod to always use the TMM pod as the outbound route.

/// Todo add image for asymmetric traffic here

Enable SNAT for SPK

The SNAT functionality is not enabled by default. You can check whether it was enabled when SPK was deployed by reviewing the deployment configuration. We have deployed the F5 SPK instance to the namespace spk, so we can check that namespace for the available deployments and see f5ingress-f5ingress listed. Using oc get with the -o yaml option, we can grep for the SNAT settings as shown in the example screenshot below. Here we can see that -use-snatpools is set to false.

oc get deployments
oc get deployments f5ingress-f5ingress -o yaml | grep -i use-snat

snat disabled

In this example we can see that it is not enabled, so SPK will need to be redeployed with this feature enabled. When uninstalling SPK, be sure to confirm that all the related pods have terminated before starting the new install. Because we installed F5 SPK using Helm, we can uninstall it with Helm as well.

helm list
helm uninstall f5ingress
oc get pods

uninstall f5ingress

Next we need to add useSnatpools: true to the SPK override YAML file. If you want more details on the SPK install procedures, check the Service Proxy for Kubernetes (SPK) install use case and the instructions found online in the F5 Service Proxy for Kubernetes documentation.

Note: The top-level tmm key should already exist in your spk-override.yaml.
Note: The standard YAML indentation is two spaces; avoid using tabs as they will lead to YAML syntax errors.

tmm:
  egress:
    useSnatpools: true
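
With the override updated, SPK can be redeployed with Helm. The following is only a sketch: the chart reference (shown here as f5-spk-repo/f5ingress) and the values file name are assumptions, so substitute the chart and override file you used for the original install.

# chart reference and values file are examples - reuse the ones from your original SPK install
helm install f5ingress f5-spk-repo/f5ingress -f spk-override.yaml -n spk
oc get pods -n spk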

Deploy a SNAT pool configuration

Now that you have redeployed SPK you can configure your SNAT pools. You can confirm the useSnatpools setting again using oc get with the -o yaml output option. Assuming it is now set to true, you can create and apply your snatpool CR. The metadata name value is freeform; in this example I am including event1, the watch namespace that SPK is monitoring and the namespace that will be affected by this snatpool. The metadata namespace value is where we have deployed the F5 SPK instance, in this case spk. The spec.name value must be set to egress_snatpool. The addressList includes two IP addresses, each of which provides a pool of ports for the TMM instance.

apiVersion: "k8s.f5net.com/v1"
kind: IngressRouteSnatpool
metadata:
  name: "ns-event1-snatpool"
  namespace: spk
spec:
  name: "egress_snatpool"
  addressList:
    - - 10.10.99.50
      - 10.10.99.51
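
Save the CR to a file and apply it to the namespace where SPK is deployed. The file name below is just an example, and the plural resource name used to list the CRs is an assumption based on the CRD name, so adjust it if your cluster reports a different name.

oc apply -f ns-event1-snatpool.yaml
# resource name assumed from the CRD; check "oc api-resources | grep -i snatpool" if it differs
oc get ingressroutesnatpools -n spk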

If you are going to scale up your TMM instances, you will need to add an address list entry for each TMM instance using the same listing format.

spec:
  name: "egress_snatpool"
  addressList:
    - - 10.10.99.50
      - 10.10.99.51
    - - 10.10.99.52
      - 10.10.99.53
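
A simple way to confirm how many entries you need is to count the running TMM pods in the SPK namespace. This is a rough check that assumes the pod names contain tmm, which is the usual naming in an SPK deployment.

oc get pods -n spk | grep -i tmm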

If you want to use more than one subnet, you must include the same subnets in every entry. In the following example we are using IP addresses from the 10.10.99.0 and 10.10.100.0 subnets.

spec:
  name: "egress_snatpool"
  addressList:
    - - 10.10.99.50
      - 10.10.99.51
      - 10.10.100.99
    - - 10.10.99.52
      - 10.10.99.53
      - 10.10.100.100
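
Once the snatpool CR is applied, you can spot-check the egress source address from a client pod in the watched namespace. The sketch below makes assumptions about your environment: it launches a temporary busybox pod (assuming the busybox image is available to your cluster) and calls an external HTTP endpoint that echoes the caller's IP address; replace <external-echo-service> with a service that sits on the far side of TMM. The echoed address should be one of the snatpool addresses rather than the Pod IP.

# <external-echo-service> is a placeholder for any endpoint that returns the observed client IP
oc -n event1 run snat-test --image=busybox --rm -it --restart=Never -- wget -qO- http://<external-echo-service>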

/// Todo add container with functionality to return client details similar to the iRule used in the lab


End of use case