F5SPKSnatpool¶
Overview¶
This overview discusses the F5SPKSnatpool CR. For the full list of CRs, refer to the SPK CRs overview. The F5SPKSnatpool Custom Resource (CR) configures the Service Proxy for Kubernetes (SPK) Traffic Management Microkernel (TMM) to perform source network address translation (SNAT) on egress network traffic. When internal Pods connect to external resources, their internal cluster IP addresses are translated to one of the available IP addresses in the SNAT pool.
Note: In clusters with multiple SPK Controller instances, ensure the IP addresses defined in each F5SPKSnatpool CR do not overlap.
This document guides you through understanding, configuring, and deploying a simple F5SPKSnatpool CR.
Parameters¶
The table below describes the F5SPKSnatpool parameters used in this document:

Parameter | Description
---|---
metadata.name | The name of the F5SPKSnatpool object in the Kubernetes configuration.
spec.name | The name of the F5SPKSnatpool object referenced and used by other CRs, such as the F5SPKEgress CR.
spec.addressList | The list of IPv4 or IPv6 addresses used to translate source IP addresses as they egress TMM.
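Tying these parameters together, a minimal CR has the following shape. This is a single-address sketch that reuses the names from the examples later in this document:

apiVersion: "k8s.f5net.com/v1"
kind: F5SPKSnatpool
metadata:
  name: "egress-snatpool-cr"   # metadata.name: the Kubernetes object name
  namespace: spk-ingress
spec:
  name: "egress_snatpool"      # spec.name: referenced by other CRs, such as F5SPKEgress
  addressList:                 # spec.addressList: one list of SNAT addresses per TMM replica
    - - 10.244.10.1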
Scaling TMM¶
When scaling Service Proxy TMM beyond a single instance in the Project, the F5SPKSnatpool CR must be configured to provide a SNAT pool to each TMM replica. The first SNAT pool is applied to the first TMM replica, the second SNAT pool to the second TMM replica, and so on through the list.
Important: When configuring SNAT pools with multiple IP subnets, ensure all TMM replicas receive the same IP subnets.
Example CR:
apiVersion: "k8s.f5net.com/v1"
kind: F5SPKSnatpool
metadata:
  name: "egress-snatpool-cr"
  namespace: spk-ingress
spec:
  name: "egress_snatpool"
  addressList:
    - - 10.244.10.1
      - 10.244.20.1
    - - 10.244.10.2
      - 10.244.20.2
    - - 10.244.10.3
      - 10.244.20.3
Advertising address lists¶
By default, all SNAT Pool IP addresses are advertised (redistributed) to BGP neighbors. To advertise only specific SNAT Pool IP addresses, configure a prefixList and routeMaps when installing the Ingress Controller. For configuration assistance, refer to the BGP Overview.
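For illustration only, a prefixList and routeMaps definition in the Ingress Controller Helm values might look like the sketch below. The exact nesting and field names are documented in the BGP Overview and may differ by release; the names snat-pfx and snat-rtmap are hypothetical:

tmm:
  dynamicRouting:
    enabled: true
    tmmRouting:
      config:
        bgp:
          prefixList:
            - name: snat-pfx        # hypothetical: matches /32 SNAT addresses in the subnet
              seq: 10
              deny: false
              prefix: 10.244.10.0/24 le 32
          routeMaps:
            - name: snat-rtmap      # hypothetical route map referencing the prefix list
              seq: 10
              deny: false
              match: snat-pfx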
Referencing the SNAT Pool¶
Once the F5SPKSnatpool CR is configured, a virtual server is required to process egress Pod connections and apply the SNAT IP addresses. The F5SPKEgress CR creates the required virtual server, and is included in the Deployment procedure below.
Requirements¶
Ensure you have:
- Installed the Ingress Controller.
- Created an external and internal F5SPKVlan.
- A Linux-based workstation.
Deployment¶
Use the following steps to deploy the example F5SPKSnatpool CR and the required F5SPKEgress CR, and to verify the configurations.
Configure SNAT pools using the example CR, and deploy it to the same Project as the Ingress Controller. In this example, the CR installs to the spk-ingress Project:
apiVersion: "k8s.f5net.com/v1"
kind: F5SPKSnatpool
metadata:
  name: "egress-snatpool-cr"
  namespace: spk-ingress
spec:
  name: "egress_snatpool"
  addressList:
    - - 10.244.10.1
      - 10.244.20.1
      - 2002::10:244:10:1
      - 2002::10:244:20:1
    - - 10.244.10.2
      - 10.244.20.2
      - 2002::10:244:10:2
      - 2002::10:244:20:2
Install the F5SPKSnatpool CR. In this example, the CR file is named spk-snatpool-crd.yaml:
oc apply -f spk-snatpool-crd.yaml
To verify the SNAT pool IP address mappings on the TMMs, use either of the following methods:
Method 1: For individual TMMs, log in to the debug sidecar and run the following command to view the SNAT pool members:

tmctl -w 120 -d blade pool_member_stat -s pool_name,addr

The addresses are displayed as IPv6 addresses in hexadecimal format. For example, the 10.200.3.4 address is displayed as 00:00:00:00:00:00:00:00:00:00:FF:FF:0A:C8:03:04:00:00:00:00.
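The hexadecimal output is the IPv4-mapped IPv6 form of the address: the FF:FF marker is followed by the four IPv4 octets in hex. As a quick sanity check, you can reproduce the mapping from a shell:

# Print the IPv4-mapped hex form of 10.200.3.4 (FF:FF:0A:C8:03:04)
printf 'FF:FF:%02X:%02X:%02X:%02X\n' 10 200 3 4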
Method 2: If dynamicRouting is enabled in the TMM section of the Helm values override file during installation, use the following method to verify SNAT pool membership.
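For reference, dynamic routing is typically enabled under the tmm section of the Helm values override file; a minimal sketch, assuming the standard SPK values layout, looks like this:

tmm:
  dynamicRouting:
    enabled: true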
Execute into the f5-tmm-routing container:

oc exec -it f5-tmm-67d54df997-p7ntl -c f5-tmm-routing -- bash
Run the Integrated Management Interface Shell (IMISH) command:

I have no name!@f5-tmm-67d54df997-p7ntl:/code$ imish
Run the show ip route command:

f5-tmm-67d54df997-p7ntl[0]>show ip route
Codes: K - kernel, C - connected, S - static, R - RIP, B - BGP
       O - OSPF, IA - OSPF inter area
       N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
       E1 - OSPF external type 1, E2 - OSPF external type 2
       i - IS-IS, L1 - IS-IS level-1, L2 - IS-IS level-2, ia - IS-IS inter area
       * - candidate default

IP Route Table for VRF "default"
K    10.9.77.25/32 [0/0] is directly connected, tmm
C    10.244.99.91/32 is directly connected, eth0
C    11.11.11.0/24 is directly connected, tmm-client
C    22.22.22.0/24 is directly connected, tmm-server
C    127.20.0.0/16 is directly connected, tmm_bp
C    169.254.0.0/24 is directly connected, tmm

Gateway of last resort is not set
Run the show ipv6 route command:

f5-tmm-67d54df997-p7ntl[0]>show ipv6 route
IPv6 Routing Table
Codes: K - kernel route, C - connected, S - static, R - RIP, O - OSPF,
       IA - OSPF inter area, E1 - OSPF external type 1, E2 - OSPF external type 2,
       N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2,
       I - IS-IS, B - BGP
Timers: Uptime

IP Route Table for VRF "default"
C    2002::11:11:11:0/112 via ::, tmm-client, 00:04:52
C    2002::22:22:22:0/112 via ::, tmm-server, 00:04:52
K    2021::77:25/128 [0/0] via ::, tmm, 00:04:50
C    fd00:10:244:25:4aa:40cb:5dc6:8b9a/128 via ::, eth0, 00:05:20
C    fe80::/64 via ::, tmm-server, 00:04:52
Configure the F5SPKEgress CR, and install it to the same Project as the Ingress Controller. For example:
apiVersion: "k8s.f5net.com/v1"
kind: F5SPKEgress
metadata:
  name: egress-cr
  namespace: spk-ingress
spec:
  egressSnatpool: "egress_snatpool"
Install the F5SPKEgress CR:
In this example, the CR file is named spk-egress-crd.yaml:
oc apply -f spk-egress-crd.yaml
Verify the status of the installed CR:

oc get f5-spk-snatpool -n spk-ingress

In this example, the CR has installed successfully:

NAME                 STATUS    MESSAGE
egress-snatpool-cr   SUCCESS   CR config sent to all grpc endpoints
To verify connectivity statistics, log in to the Debug Sidecar:
oc exec -it deploy/f5-tmm -c debug -n <project>
In this example, the debug sidecar is in the spk-ingress Project:
oc exec -it deploy/f5-tmm -c debug -n spk-ingress
Verify the internal virtual servers have been configured:
tmctl -f /var/tmstat/blade/tmm0 virtual_server_stat -s name,serverside.tot_conns
In this example, 3 IPv4 connections and 2 IPv6 connections have been initiated by internal Pods:
name              serverside.tot_conns
----------------- --------------------
egress-ipv6                          2
egress-ipv4                          3
Feedback¶
Provide feedback to improve this document by emailing spkdocs@f5.com.