F5SPKEgress

Overview

The Service Proxy for Kubernetes (SPK) F5SPKEgress Custom Resource (CR) enables egress connectivity for internal Pods that require access to external networks. The F5SPKEgress CR provides egress connectivity using either Source Network Address Translation (SNAT), or the DNS/NAT46 feature, which supports communication between internal IPv4 Pods and external IPv6 hosts.

Note: The DNS/NAT46 feature does not rely on the Kubernetes IPv4/IPv6 dual-stack feature added in v1.21.

This overview describes configuring egress traffic using SNAT and DNS/NAT46, and provides simple deployment scenarios.

Requirements

Ensure you have:

  • Installed the SPK Ingress Controller.
  • Installed the dSSM database (required for the DNS/NAT46 feature).

SNAT

SNATs are used to modify the source IP address of egress packets leaving the cluster. When the Service Proxy Traffic Management Microkernel (TMM) receives a packet from an internal Pod, the source IP address of the external (egress) packet is translated to a configured SNAT IP address. Using the F5SPKEgress CR, you can apply SNAT IP addresses using either SNAT pools or SNAT automap.

SNAT pools

SNAT pools are lists of routable IP addresses that Service Proxy TMM uses to translate the source IP address of egress packets. SNAT pools provide a greater number of available IP addresses, and offer more flexibility for defining the SNAT IP addresses used for translation. For more background information, and to enable SNAT pools, review the F5SPKSnatpool CR guide.
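
For orientation, a minimal F5SPKSnatpool sketch follows; the metadata name, Project, and addresses are placeholders, and the full parameter reference is in the F5SPKSnatpool CR guide:

    apiVersion: "k8s.f5net.com/v1"
    kind: F5SPKSnatpool
    metadata:
      name: "egress-snatpool"
      namespace: spk-ingress
    spec:
      name: "egress_snatpool"
      addressList:
        - - 10.244.100.250
          - 10.244.100.251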

SNAT automap

SNAT automap uses Service Proxy TMM’s external F5SPKVlan self IP address as the source IP address for egress packets. SNAT automap is easier to implement, and conserves IP address allocations. To use SNAT automap, set the spec.egressSnatpool parameter to "" (an empty string, indicating no SNAT pool reference). To translate both IPv4 and IPv6 addresses, also set the spec.dualStackEnabled parameter to true. Use the deployment procedure below to enable SNAT automap.

Deployment

Use the following steps to configure the F5SPKEgress CR for SNAT automap, and verify the installation.

  1. Copy the F5SPKEgress CR to a YAML file, and set the namespace parameter to the Ingress Controller’s Project:

    apiVersion: "k8s.f5net.com/v1"
    kind: F5SPKEgress
    metadata:
      name: egress-crd
      namespace: <project>
    spec:
      dualStackEnabled: <true|false>
      egressSnatpool: ""
    

    In this example, the CR installs to the spk-ingress Project:

    apiVersion: "k8s.f5net.com/v1"
    kind: F5SPKEgress
    metadata:
      name: egress-crd
      namespace: spk-ingress
    spec:
      dualStackEnabled: true
      egressSnatpool: ""
    
  2. Install the F5SPKEgress CR:

    oc apply -f <file name>
    

    In this example, the CR file is named spk-egress-crd.yaml:

    oc apply -f spk-egress-crd.yaml
    
  3. Internal Pods can now connect to external resources using the external F5SPKVlan self IP address (a client-side check is shown after step 5).

  4. To verify traffic processing statistics, log in to the Debug Sidecar:

    oc exec -it deploy/f5-tmm -c debug -n <project>
    

    In this example, the debug sidecar is in the spk-ingress Project:

    oc exec -it deploy/f5-tmm -c debug -n spk-ingress
    
  5. Run the following tmctl command:

    tmctl -f /var/tmstat/blade/tmm0 virtual_server_stat \
    -s name,serverside.tot_conns
    

    In this example, 3 IPv4 connections, and 2 IPv6 connections have been initiated by internal Pods:

    name              serverside.tot_conns
    ----------------- --------------------
    egress-ipv6                          2
    egress-ipv4                          3
    
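
As a client-side check of step 3 (the Pod name, Project, and destination address are placeholders), open an egress connection from an internal Pod and confirm that the counters above increment:

    oc exec -it pod/example-client -n spk-apps -- curl -v http://203.0.113.10/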

DNS/NAT46

DNS Resolution

When the Service Proxy Traffic Management Microkernel (TMM) is configured for DNS/NAT46, it performs as both a domain name system (DNS) and network address translation (NAT) gateway, enabling connectivity between IPv4 and IPv6 hosts. Kubernetes DNS enables connectivity between Pods and Services by resolving their DNS requests. When Kubernetes DNS is unable to resolve a DNS request, it forwards the request to an external DNS server for resolution. When the Service Proxy TMM is positioned as a gateway for forwarded DNS requests, replies from external DNS servers are processed by TMM as follows:

  • When the reply contains only a type A record, it is returned unchanged.
  • When the reply contains both type A and AAAA records, it is returned unchanged.
  • When the reply contains only a type AAAA record, TMM performs the following:
    • Creates a new type A database (DB) entry pointing to an internal IPv4 NAT address.
    • Creates a NAT mapping in the DB between the internal IPv4 NAT address and the external IPv6 address in the response.
    • Returns the new type A record and the original type AAAA record.

Internal Pods then connect to the internal IPv4 NAT address, and Service Proxy TMM translates the packet to the external IPv6 host using a public IPv6 SNAT address. All TCP IPv4 and IPv6 traffic is then properly translated, and flows through Service Proxy TMM.

[Figure: Example DNS/NAT46 translation]
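
To illustrate the record processing (a hypothetical exchange; the resolver address, hostname, TTL, and addresses are placeholders), an internal Pod querying a name that resolves only to IPv6 receives the original type AAAA record, plus a type A record synthesized by TMM from the dnsNat46Ipv4Subnet range:

    dig @1.2.3.4 ipv6only.example.com AAAA

    ;; ANSWER SECTION:
    ipv6only.example.com. 30 IN AAAA 2002::10:20:2:206
    ipv6only.example.com. 30 IN A    10.244.50.7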

Parameters

The parameters used to configure Service Proxy TMM for DNS/NAT46 are:

Parameter                Description
spec.dnsNat46Enabled     Enables or disables the DNS/NAT46 feature (true/false). The default setting is false.
spec.dnsNat46Ipv4Subnet  The pool of private IPv4 addresses used to create DNS records for the internal Pods.
spec.egressSnatpool      The IP addresses to apply to egress packets. The value egress_snatpool references a SNAT pool; a null value ("") uses the external TMM self IP address.
spec.dnsNat46PoolIps     A pool of IP addresses representing external DNS servers, or gateways used to reach the DNS servers.
spec.dnsNat46SorryIp     The IPv4 address of an Oops page, returned if the NAT pool becomes exhausted.

DNS gateway

For DNS/NAT46 to function properly, you must enable Intelligent CNI 2 (iCNI2) when installing the Ingress Controller. With iCNI2 enabled, internal Pods use the Service Proxy Traffic Management Microkernel (TMM) as their default gateway, ensuring that Service Proxy TMM intercepts and processes all internal DNS requests.
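
To confirm that an internal Pod routes through TMM (the Pod name and Project are placeholders), inspect its default route:

    oc exec -it pod/example-client -n spk-apps -- ip route show default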

DNS server

The dnsNat46PoolIps parameter sets the DNS servers that Service Proxy TMM uses to resolve DNS requests. Because TMM intercepts all internal DNS traffic, the resolver address configured on the internal Pods does not need to be a reachable DNS server: for example, Pods can use resolver IP address 1.2.3.4 to request DNS resolution from Service Proxy TMM, which then proxies requests and responses to and from the dnsNat46PoolIps servers.
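
One way to point an internal Pod at the virtual resolver is the Pod's dnsConfig; the following is a sketch with a hypothetical Pod name, Project, and image:

    apiVersion: v1
    kind: Pod
    metadata:
      name: example-client
      namespace: spk-apps
    spec:
      dnsPolicy: "None"        # bypass cluster DNS; use only the resolver below
      dnsConfig:
        nameservers:
          - 1.2.3.4            # virtual resolver; TMM intercepts and proxies to dnsNat46PoolIps
      containers:
        - name: client
          image: registry.example.com/tools:latest
          command: ["sleep", "infinity"]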

Adding DB entries

If required, you can manually add static DNS/NAT46 entries to the dSSM database. To do so, refer to the Manual DB entry section below.

Deployment

Use the following steps to configure the Service Proxy TMM for DNS/NAT46 using the F5SPKSnatpool and F5SPKEgress CRs.

  1. Configure a SNAT pool using the F5SPKSnatpool CR, and install it to the Ingress Controller’s Project:

    Important: The spec.name parameter must be set to egress_snatpool, and you must install the F5SPKSnatpool CR before the F5SPKEgress CR.

    In this example, the CR deploys to the spk-ingress Project:

     apiVersion: "k8s.f5net.com/v1"
     kind: F5SPKSnatpool 
     metadata:
       name: "egress-snatpool"
       namespace: spk-ingress
     spec:
       name: "egress_snatpool"
       addressList:
         - - 2001::10:244:100:250
           - 2001::10:244:100:251
         - - 2001::10:244:100:252
           - 2001::10:244:100:253
    
  2. Install the F5SPKSnatpool CR:

    oc apply -f <file-name>.yaml
    

    In this example, the CR file is named spk-dns-snat-crd.yaml:

    oc apply -f spk-dns-snat-crd.yaml
    
  3. Configure the F5SPKEgress CR, and install it to the Ingress Controller’s Project:

    Important: The spec.egressSnatpool parameter must be set to egress_snatpool.

    In this example, the CR deploys to the spk-ingress Project:

    apiVersion: k8s.f5net.com/v1
    kind: F5SPKEgress
    metadata:
      name: egress-crd
      namespace: spk-ingress
    spec:
      dnsNat46Enabled: true
      dnsNat46Ipv4Subnet: 10.244.50.0/24
      dnsNat46PoolIps: 
        - 10.244.40.252
        - 10.244.40.253
      egressSnatpool: "egress_snatpool"
      dnsNat46SorryIp: 10.244.40.100
    
  4. Install the F5SPKEgress CR:

    oc apply -f <file-name>.yaml 
    

    In this example, the CR file is named spk-egress-crd.yaml:

    oc apply -f spk-egress-crd.yaml 
    
  5. Internal IPv4 Pods requesting access to IPv6 hosts (via DNS queries) can now connect to external IPv6 hosts (a client-side check is shown after step 8).

  6. To verify traffic processing statistics, log in to the Debug Sidecar:

    oc exec -it deploy/f5-tmm -c debug -n <project>
    

    In this example, the debug sidecar is in the spk-ingress Project:

    oc exec -it deploy/f5-tmm -c debug -n spk-ingress
    
  7. Run the following tmctl command:

    tmctl -f /var/tmstat/blade/tmm0 virtual_server_stat \
    -s name,serverside.tot_conns
    

    In this example, 19 IPv4/IPv6 connections have been translated, and 12 DNS queries have been performed:

    name              serverside.tot_conns
    ----------------- --------------------
    egress-ipv4-nat46                   19
    egress-dns-ipv4                     12
    
  8. If you experience DNS/NAT46 connectivity issues, refer to the Troubleshooting DNS/NAT46 guide.
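
As a client-side check of step 5 (the Pod name, Project, and hostname are placeholders), connect from an internal IPv4 Pod to an IPv6-only host; the connection is carried over the DNS/NAT46 mapping:

    oc exec -it pod/example-client -n spk-apps -- curl -v http://ipv6only.example.com/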

Manual DB entry

This section details how to manually create a DNS/NAT46 DB entry in the dSSM database. The tables below describe the DB parameters and their positions in the Redis DB key and value:

DB parameters

Position  Parameter        Description
1         ha_unit          Represents the high availability traffic group* ID. Traffic groups are not yet implemented in SPK; this must be set to 0.
2         bin_id           The DB key ID. This ID must be set to 0073c3b6eft_dns46.
3         key_component    The egress IPv4 NAT IP address for the internal Pods.
4         gen_id           The generation counter ID for the DB entry. This can remain set to 0001.
5         timeout          The maximum inactivity period before the DB entry is deleted. This should be set to 00000000 (indefinite).
6         user flags       The secondary DB entry payload. This should be set to 0000000000000002.
7         value_component  The remote host IPv6 destination address.

* Traffic groups are collections of configuration settings, including Virtual Servers, VLANs, NAT, SNAT, etc.

Redis CLI

Positions 1-3 form the DB key, and positions 4-7 form the DB value. For the example in this procedure:

    key:    0073c3b6eft_dns4610.144.175.220
            (ha_unit + bin_id + key_component)
    value:  "00010000000000000000000000022002::10:20:2:206"
            (gen_id + timeout + user flags + value_component)

Procedure

The following steps create a DNS/NAT46 DB entry, mapping internal IPv4 NAT address 10.144.175.220 to remote IPv6 host 2002::10:20:2:206.

  1. Log into the dSSM DB Pod:

    oc exec -it pod/f5-dssm-db-0 -n <project> -- bash
    

    In this example, the dSSM DB Pod is in the ingress-utils Project:

    oc exec -it pod/f5-dssm-db-0 -n ingress-utils -- bash
    
  2. Enter the Redis CLI:

    redis-cli --tls --cert /etc/ssl/certs/dssm-cert.crt \
    --key /etc/ssl/certs/dssm-key.key \
    --cacert /etc/ssl/certs/dssm-ca.crt
    
  3. Create the new DB entry:

    SETNX <key> "<value>"
    

    In this example, the DB entry maps IPv4 address 10.144.175.220 to IPv6 address 2002::10:20:2:206:

    SETNX 0073c3b6eft_dns4610.144.175.220 "00010000000000000000000000022002::10:20:2:206"
    
  4. View the DB key entries:

    KEYS *
    

    For example:

    1) "0073c3b6eft_dns4610.144.175.220"
    2) "0073c3b6eft_dns4610.144.175.221"
    
  5. Test connectivity to the remote host:

    curl http://10.144.175.220:8080
    
  6. Because entries are created with SETNX, the Redis DB will not overwrite an existing DB entry. To update an entry, you must first delete the existing entry:

    DEL <key>
    

    For example, to delete the DB entry created in this procedure, use:

    DEL 0073c3b6eft_dns4610.144.175.220
    
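
To read back an existing entry, for example before deleting it, use the Redis GET command; with the key from this procedure, it returns the value string set in step 3:

    GET 0073c3b6eft_dns4610.144.175.220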

Feedback

Provide feedback to improve this document by emailing spkdocs@f5.com.