F5SPKEgress¶
Overview¶
The Service Proxy for Kubernetes (SPK) F5SPKEgress Custom Resource (CR) enables egress connectivity for internal Pods requiring access to external networks. The F5SPKEgress CR enables egress connectivity using either Source Network Address Translation (SNAT), or the DNS/NAT46 feature that supports communication between internal IPv4 Pods and external IPv6 hosts.
Note: The DNS/NAT46 feature does not rely on Kubernetes IPv4/IPv6 dual-stack added in v1.21.
This overview describes configuring egress traffic using SNAT and DNS/NAT46, and provides simple deployment scenarios.
Requirements¶
Ensure you have:
- Installed the Ingress Controller.
- Configured and installed an external and internal F5SPKVlan CR.
- DNS/NAT46 only: Installed the dSSM database Pods.
SNAT¶
SNATs are used to modify the source IP address of egress packets leaving the cluster. When the Service Proxy Traffic Management Microkernel (TMM) receives a packet from an internal Pod, the source IP address of the external (egress) packet is translated to a configured SNAT IP address. Using the F5SPKEgress CR, you can apply SNAT IP addresses using either SNAT pools, or SNAT automap.
SNAT pools¶
SNAT pools are lists of routable IP addresses, used by Service Proxy TMM to translate the source IP address of egress packets. SNAT pools provide a greater number of available IP addresses, and offer more flexibility for defining the SNAT IP addresses used for translation. For more background information and to enable SNAT pools, review the F5SPKSnatpool CR guide.
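For illustration, a minimal F5SPKSnatpool CR might look like the following sketch. The name, namespace, and addresses are placeholders; refer to the F5SPKSnatpool CR guide for the authoritative schema:

```yaml
apiVersion: "k8s.f5net.com/v1"
kind: F5SPKSnatpool
metadata:
  name: "egress-snatpool"
  namespace: <project>
spec:
  name: "egress_snatpool"
  addressList:
    # Each inner list holds the SNAT IPs used by one TMM instance.
    - - 2001::10:244:100:250
      - 2001::10:244:100:251
    - - 2001::10:244:100:252
      - 2001::10:244:100:253
```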
SNAT automap¶
SNAT automap uses Service Proxy TMM’s external F5SPKVlan IP address as the source IP for egress packets. SNAT automap is easier to implement, and conserves IP address allocations. To use SNAT automap, set the spec.egressSnatpool parameter to "", or omit the reference entirely. To translate both IPv4 and IPv6 addresses, set the dualStackEnabled parameter to true. Use the deployment procedure below to enable SNAT automap.
Deployment¶
Use the following steps to configure the F5SPKEgress CR for SNAT automap, and verify the installation.
Copy the F5SPKEgress CR to a YAML file, and set the namespace parameter to the Ingress Controller’s Project:

```yaml
apiVersion: "k8s.f5net.com/v1"
kind: F5SPKEgress
metadata:
  name: egress-crd
  namespace: <project>
spec:
  dualStackEnabled: <true|false>
  egressSnatpool: ""
```
In this example, the CR installs to the spk-ingress Project:
```yaml
apiVersion: "k8s.f5net.com/v1"
kind: F5SPKEgress
metadata:
  name: egress-crd
  namespace: spk-ingress
spec:
  dualStackEnabled: true
  egressSnatpool: ""
```
Install the F5SPKEgress CR:
oc apply -f <file name>
In this example, the CR file is named spk-egress-crd.yaml:
oc apply -f spk-egress-crd.yaml
Internal Pods can now connect to external resources using the external F5SPKVlan self IP address.
To verify traffic processing statistics, log in to the Debug Sidecar:
oc exec -it deploy/f5-tmm -c debug -n <project>
In this example, the debug sidecar is in the spk-ingress Project:
oc exec -it deploy/f5-tmm -c debug -n spk-ingress
Run the following tmctl command:
```
tmctl -f /var/tmstat/blade/tmm0 virtual_server_stat \
  -s name,serverside.tot_conns
```
In this example, 3 IPv4 connections, and 2 IPv6 connections have been initiated by internal Pods:
```
name              serverside.tot_conns
----------------- --------------------
egress-ipv6                          2
egress-ipv4                          3
```
DNS/NAT46¶
DNS Resolution¶
When the Service Proxy Traffic Management Microkernel (TMM) is configured for DNS/NAT46, it performs as both a domain name system (DNS) and network address translation (NAT) gateway, enabling connectivity between IPv4 and IPv6 hosts. Kubernetes DNS enables connectivity between Pods and Services by resolving their DNS requests. When Kubernetes DNS is unable to resolve a DNS request, it forwards the request to an external DNS server for resolution. When the Service Proxy TMM is positioned as a gateway for forwarded DNS requests, replies from external DNS servers are processed by TMM as follows:
- When the reply contains only a type A record, it is returned unchanged.
- When the reply contains both type A and AAAA records, it is returned unchanged.
- When the reply contains only a type AAAA record, TMM performs the following:
  - Creates a new type A database (DB) entry pointing to an internal IPv4 NAT address.
  - Creates a NAT mapping in the DB between the internal IPv4 NAT address and the external IPv6 address in the response.
  - Returns the new type A record and the original type AAAA record.
Internal Pods now connect to the internal IPv4 NAT address, and Service Proxy TMM translates the packet to the external IPv6 host, using a public IPv6 SNAT address. All TCP IPv4 and IPv6 traffic will now be properly translated, and flow through Service Proxy TMM.
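The reply-processing behavior above can be sketched as a small conceptual model. This is not the TMM implementation; the function and variable names are hypothetical, chosen only to mirror the three cases described:

```python
import ipaddress

def process_dns_reply(a_records, aaaa_records, nat_pool, nat_map):
    """Conceptual model of DNS/NAT46 reply handling.

    a_records / aaaa_records: address strings from the DNS reply.
    nat_pool: iterator of free internal IPv4 NAT addresses.
    nat_map: dict of internal IPv4 NAT address -> external IPv6 address.
    """
    if a_records:
        # Reply contains a type A record (alone or with AAAA): unchanged.
        return a_records, aaaa_records
    if aaaa_records:
        # AAAA-only reply: allocate an internal IPv4 NAT address, record
        # the NAT mapping, and return a synthesized type A record along
        # with the original AAAA record.
        nat_ip = str(next(nat_pool))
        nat_map[nat_ip] = aaaa_records[0]
        return [nat_ip], aaaa_records
    return [], []

# Usage: a hypothetical pool of internal NAT addresses.
pool = iter(ipaddress.ip_network("10.244.50.0/29").hosts())
mapping = {}
a, aaaa = process_dns_reply([], ["2002::10:20:2:206"], pool, mapping)
```

After this call, internal Pods would connect to the allocated IPv4 NAT address, and the mapping tells the gateway which external IPv6 host to translate to.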
Example DNS/NAT46 translation:
Parameters¶
The parameters used to configure Service Proxy TMM for DNS/NAT46 are:
| Parameter | Description |
|---|---|
| spec.dnsNat46Enabled | Enables or disables the DNS/NAT46 feature (true/false). The default setting is false. |
| spec.dnsNat46Ipv4Subnet | The pool of private IPv4 addresses used to create DNS records for the internal Pods. |
| spec.egressSnatpool | The IP addresses to apply to egress packets. The value egress_snatpool references a SNAT pool; a null value uses the external TMM self IP address. |
| spec.dnsNat46PoolIps | A pool of IP addresses representing external DNS servers, or gateways to reach the DNS servers. |
| spec.dnsNat46SorryIp | The IP address of a sorry (error) page served if the NAT pool becomes exhausted. |
DNS gateway¶
For DNS/NAT46 to function properly, you must enable Intelligent CNI 2 (iCNI 2) when installing the Ingress Controller. With iCNI 2 enabled, internal Pods use the Service Proxy Traffic Management Microkernel (TMM) as their default gateway, ensuring that Service Proxy TMM intercepts and processes all internal DNS requests.
DNS server¶
The dnsNat46PoolIps parameter sets the DNS server that Service Proxy TMM uses to resolve DNS requests. This configuration enables you to define any non-reachable DNS server on the internal Pods. For example, Pods can use resolver IP address 1.2.3.4 to request DNS resolution from Service Proxy TMM, which then proxies requests and responses from the dnsNat46PoolIps.
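As an illustration, an internal Pod's resolver could be pointed at such a virtual DNS address (1.2.3.4, as in the text) using the standard Kubernetes dnsConfig field. This Pod spec is a hypothetical sketch; the Pod name and image are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-app
spec:
  dnsPolicy: "None"
  dnsConfig:
    nameservers:
      # Virtual DNS address answered by Service Proxy TMM.
      - 1.2.3.4
  containers:
    - name: app
      image: registry.example.com/app:latest
```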
Adding DB entries¶
If required, you can manually add static DNS/NAT46 entries to the dSSM database. To do so, refer to the Manual DB entry section below.
Deployment¶
Use the following steps to configure the Service Proxy TMM for DNS/NAT46 using the F5SPKSnatpool and F5SPKEgress CRs.
Configure a SNAT Pool using the F5SPKSnatpool CR, and install to the Ingress Controller Project. For example:
Important: The spec.name parameter must be set to egress_snatpool, and you must install the F5SPKSnatpool CR first.

In this example, the CR deploys to the spk-ingress Project:

```yaml
apiVersion: "k8s.f5net.com/v1"
kind: F5SPKSnatpool
metadata:
  name: "egress-snatpool"
  namespace: spk-ingress
spec:
  name: "egress_snatpool"
  addressList:
    - - 2001::10:244:100:250
      - 2001::10:244:100:251
    - - 2001::10:244:100:252
      - 2001::10:244:100:253
```
Install the F5SPKSnatpool CR:
oc apply -f <file-name>.yaml
In this example, the CR file is named spk-dns-snat-crd.yaml:
oc apply -f spk-dns-snat-crd.yaml
Configure the F5SPKEgress CR, and install to the Ingress Controller Project. For example:

Important: The spec.egressSnatpool parameter must be set to egress_snatpool.

In this example, the CR deploys to the spk-ingress Project:

```yaml
apiVersion: k8s.f5net.com/v1
kind: F5SPKEgress
metadata:
  name: egress-crd
  namespace: spk-ingress
spec:
  dnsNat46Enabled: true
  dnsNat46Ipv4Subnet: 10.244.50.0/24
  dnsNat46PoolIps:
    - 10.244.40.252
    - 10.244.40.253
  egressSnatpool: "egress_snatpool"
  dnsNat46SorryIp: 10.244.40.100
```
Install the F5SPKEgress CR:
oc apply -f <file-name>.yaml
In this example, the CR file is named spk-egress-crd.yaml:
oc apply -f spk-egress-crd.yaml
Internal IPv4 Pods requesting access to IPv6 hosts (via DNS queries) can now connect to external IPv6 hosts.
To verify traffic processing statistics, log in to the Debug Sidecar:
oc exec -it deploy/f5-tmm -c debug -n <project>
In this example, the debug sidecar is in the spk-ingress Project:
oc exec -it deploy/f5-tmm -c debug -n spk-ingress
Run the following tmctl command:
```
tmctl -f /var/tmstat/blade/tmm0 virtual_server_stat \
  -s name,serverside.tot_conns
```
In this example, 19 IPv4/IPv6 connections have been translated, and 12 DNS queries have been performed:
```
name              serverside.tot_conns
----------------- --------------------
egress-ipv4-nat46                   19
egress-dns-ipv4                     12
```
If you experience DNS/NAT46 connectivity issues, refer to the Troubleshooting DNS/NAT46 guide.
Manual DB entry¶
This section details how to manually create a DNS/NAT46 DB entry in the dSSM database. The tables below describe the DB parameters, and their position on the Redis DB CLI:
DB parameters
| Position | Parameter | Description |
|---|---|---|
| 1 | ha_unit | Represents the high availability traffic group* ID. Traffic groups are not yet implemented in SPK; this must be set to 0. |
| 2 | bin_id | The DB key ID. This must be set to 0073c3b6eft_dns46. |
| 3 | key_component | The egress IPv4 NAT IP address for the internal Pods. |
| 4 | gen_id | The generation counter ID for the DB entry. This can remain set to 0001. |
| 5 | timeout | The maximum inactivity period before the DB entry is deleted. This should be set to 00000000 (indefinite). |
| 6 | user flags | The secondary DB entry payload. This should be set to 0000000000000002. |
| 7 | value_component | The remote host IPv6 destination address. |
* Traffic groups are collections of configuration settings, including Virtual Servers, VLANs, NAT, SNAT, etc.
Redis CLI
| Key (positions 1-3) | Value (positions 4-7) |
|---|---|
| 0073c3b6eft_dns4610.144.175.220 | "00010000000000000000000000022002::10:20:2:206" |
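Following the tables above, the key and value are simple string concatenations of the positional fields. The sketch below models this; the helper name is hypothetical, and the defaults are the fixed values the tables require:

```python
def build_db_entry(ipv4_nat, ipv6_host,
                   bin_id="0073c3b6eft_dns46",
                   gen_id="0001",
                   timeout="00000000",
                   user_flags="0000000000000002"):
    """Concatenate the DB key (positions 1-3) and value (positions 4-7).

    The ha_unit of 0 is folded into the fixed key ID shown in the
    example key above.
    """
    key = bin_id + ipv4_nat
    value = gen_id + timeout + user_flags + ipv6_host
    return key, value

# Usage: reproduce the example entry from the table.
key, value = build_db_entry("10.144.175.220", "2002::10:20:2:206")
```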
Procedure
The following steps create a DNS/NAT46 DB entry, mapping internal IPv4 NAT address 10.144.175.220 to remote IPv6 host 2002::10:20:2:206.
Log into the dSSM DB Pod:
oc exec -it pod/f5-dssm-db-0 -n <project> -- bash
In the following example, the dSSM DB Pod is in the ingress-utils Project:
oc exec -it pod/f5-dssm-db-0 -n ingress-utils -- bash
Enter the Redis CLI:
```
redis-cli --tls --cert /etc/ssl/certs/dssm-cert.crt \
  --key /etc/ssl/certs/dssm-key.key \
  --cacert /etc/ssl/certs/dssm-ca.crt
```
Create the new DB entry:
SETNX <key> "<value>"
In this example, the DB entry maps IPv4 address 10.144.175.220 to IPv6 address 2002::10:20:2:206:
SETNX 0073c3b6eft_dns4610.144.175.220 "00010000000000000000000000022002::10:20:2:206"
View the DB key entries:
KEYS *
For example:
```
1) "0073c3b6eft_dns4610.144.175.220"
2) "0073c3b6eft_dns4610.144.175.221"
```
Test connectivity to the remote host:
curl http://10.144.175.220:8080
The Redis DB will not accept updates to an existing DB entry. To update an entry, you must first delete the existing entry:
DEL <key>
For example, to delete the DB entry created in this procedure, use:
DEL 0073c3b6eft_dns4610.144.175.220
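For example, to remap the entry created above to a different remote host (2002::10:20:2:207 is a hypothetical replacement address), delete the key and recreate it with the new value:

```
DEL 0073c3b6eft_dns4610.144.175.220
SETNX 0073c3b6eft_dns4610.144.175.220 "00010000000000000000000000022002::10:20:2:207"
```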
Feedback¶
Provide feedback to improve this document by emailing spkdocs@f5.com.