Calico Egress GW¶
Overview¶
When Service Proxy for Kubernetes (SPK) integrates with the Calico Container Network Interface (CNI) to process ingress application traffic, you can configure the cluster nodes and SPK software to provide egress gateway (GW) services to internal application Pods. The SPK software includes a daemon-set that is designed to run as a Pod on each cluster node to provide egress routing for internal applications.
This document guides you through installing the Calico Egress GW feature.
Network interfaces¶
When integrating the SPK daemon-set into the cluster, it is important to understand how the TMM Pod interfaces map to the worker node’s network interfaces. Both the TMM Pod and the worker node have an eth0 interface on the cluster management network. To support the Calico Egress GW feature, each worker node must have a second eth1 interface created on the management network that maps to TMM’s internal interface. TMM’s internal interface is managed by the host-device CNI. The host-device CNI routes traffic from the worker node’s eth1 interface in the host network namespace to the Service Proxy container in the TMM Pod’s network namespace.
Requirements¶
Ensure you have:
- Installed the Multus CNI plugin.
- Installed the SPK Cert Manager.
- A workstation with Helm installed.
Procedure¶
Use the following steps to prepare the worker nodes and to integrate the SPK software into the cluster.
Add the additional eth1 interface to the management network on each worker node.
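How the eth1 interface is added depends on your infrastructure (hypervisor, cloud provider, or bare-metal NIC configuration), so no single command applies. As a minimal check, assuming you have shell access to each worker node, you can confirm the interface exists in the host network namespace before continuing:

# Run on each worker node (example check; interface name eth1 per this guide)
ip -br link show eth1
ip -br addr show eth1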
Change into the directory with the SPK software:
cd <directory>
In this example, the SPK software is in the spkinstall directory:
cd spkinstall
Add the IP Pool IP subnet and the local image registry hostname to the daemon-set Helm values:
image:
  repository: "local.registry.net"

config:
  iptableid: 275
  interfacename: "eth0"

json:
  app-ip-pool: 10.124.0.0/16
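Optionally, before installing, you can render the chart locally to confirm the values file is picked up as expected. This sketch assumes the chart archive path (tar/csrc.tgz) and values filename (csrc-values.yaml) used in the install step that follows:

# Render the chart locally without installing (paths/names from the next step)
helm template spk-csrc tar/csrc.tgz -f csrc-values.yaml | less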
Install the CSRC daemon-set using the F5-provided Helm chart:
helm install spk-csrc tar/csrc.tgz -f csrc-values.yaml
Verify the daemon-set Pods are Running on each node:
kubectl get pods -n default | grep csrc
f5-spk-csrc-4q44h   1/1   Running
f5-spk-csrc-597z6   1/1   Running
f5-spk-csrc-9jprm   1/1   Running
f5-spk-csrc-f2f9v   1/1   Running
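You can also wait on the daemon-set rollout itself. The daemon-set name below (f5-spk-csrc) is inferred from the Pod names above; adjust it if your release names the daemon-set differently:

# List the daemon-set and wait for the rollout to complete (name is an assumption)
kubectl get daemonset -n default | grep csrc
kubectl rollout status daemonset/f5-spk-csrc -n default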
Create a NetworkAttachmentDefinition for the second eth1 network interface:
Note: Ensure the NetworkAttachmentDefinition is installed in the SPK Controller Project (namespace); in this example, spk-ingress.
apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: net1 namespace: spk-ingress spec: config: '{ "cniVersion": "0.3.0", "type": "host-device", "device": "eth1", "ipam": {"type": "static"},"ipMasq": false }'
Configure the TMM Pod interface by installing an F5SPKVlan Custom Resource (CR):
Note: For each TMM replica in the Project, configure at least one IP address using the selfip_v4s parameter.

apiVersion: "k8s.f5net.com/v1"
kind: F5SPKVlan
metadata:
  namespace: spk-ingress
  name: "vlan-internal"
spec:
  name: internal
  interfaces:
    - "1.1"
  selfip_v4s:
    - 10.146.164.160
    - 10.146.164.161
  prefixlen_v4: 22
  bonded: false
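As with other Kubernetes objects, the CR is installed by saving the manifest to a file and applying it; the filename below (spk-vlan.yaml) is only an example. The same pattern applies to the F5SPKEgress CR in the next step.

# Apply the F5SPKVlan CR and confirm it was created (example filename)
kubectl apply -f spk-vlan.yaml
kubectl get -f spk-vlan.yaml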
To enable connectivity through the TMM Pod(s) from internal Pod endpoints, install the F5SPKEgress CR:
Note: Set the maxTmmReplicas parameter value to the number of TMM replicas in the Project.

apiVersion: "k8s.f5net.com/v1"
kind: F5SPKEgress
metadata:
  name: egress-crd
  namespace: spk-ingress
spec:
  dualStackEnabled: false
  maxTmmReplicas: 2
Install the SPK Controller and TMM Pods.
Important: You can add additional SPK parameters to the values file, for example tmm.dynamicRouting. However, ensure the required parameters detailed below are included.

tmm:
  bigdb:
    verifyreturnroute:
      enabled: false
  replicaCount: 1
  pod:
    annotations:
      k8s.v1.cni.cncf.io/networks: |
        [
          {
            "name": "net1",
            "ips": ["10.2.2.42/32"]
          }
        ]
  debug:
    enabled: true
  grpc:
    enabled: true
  hugepages:
    enabled: false
  customEnvVars:
    - name: TMM_IGNORE_MEM_LIMIT
      value: "TRUE"
    - name: TMM_CALICO_ROUTER
      value: "default"
    - name: TMM_MAPRES_VERBOSITY
      value: debug
    - name: TMM_DEFAULT_MTU
      value: "8000"
    - name: TMM_LOG_LEVEL
      value: "DEBUG"
    - name: TMM_MAPRES_USE_VETH_NAME
      value: "TRUE"
    - name: TMM_MAPRES_PREFER_SOCK
      value: "TRUE"
    - name: TMM_MAPRES_DELAY_MS
      value: "10000"
    - name: TMM_MAPRES_ADDL_VETHS_ON_DP
      value: "TRUE"
    - name: ROBIN_VFIO_RESOURCE_1
      value: eth1
    - name: PCIDEVICE_INTEL_COM_ETH1
      value: "0000:03:00.0"
  resources:
    limits:
      cpu: 1
      hugepages-2Mi: "0"
      memory: "2Gi"
    requests:
      cpu: 1
      hugepages-2Mi: "0"
      memory: "2Gi"
  vxlan:
    enabled: false

controller:
  image:
    repository: "local.registry.com"
  f5_lic_helper:
    enabled: true
    rabbitmqNamespace: "spk-telemetry"
    image:
      repository: "local.registry.com"
  watchNamespace:
    - "spk-apps"
    - "spk-apps-2"
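Install the Controller chart with the values above, then confirm the Controller and TMM Pods reach the Running state. The release name, chart archive, and values filename below are placeholders; substitute the ones shipped with your SPK release:

# Placeholder release name, chart archive, and values filename
helm install <release-name> tar/<spk-controller-chart>.tgz -f ingress-values.yaml -n spk-ingress
kubectl get pods -n spk-ingress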
Modify the TMM deployment configMap to enable ping using the egress gateway:
Note: Be certain to update both namespace values. In the example below, the namespace is spk-ingress (top and bottom lines).
kubectl get cm tmm-init -o yaml -n spk-ingress | \
  sed "s/user_conf\.tcl: \"\"/user_conf\.tcl: \|/" | \
  sed "/user_conf\.tcl:/a \ \ \ \ bigdb arp\.verifyreturnroute disable" | \
  kubectl -n spk-ingress replace -f -
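You can confirm the substitution took effect by inspecting the updated configMap; a minimal check:

# Show the user_conf.tcl entry after the replace command above
kubectl get cm tmm-init -n spk-ingress -o yaml | grep -A 2 "user_conf.tcl"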
Verify the eth0 and internal interfaces are configured on the TMM Pod:
kubectl exec -it deploy/f5-tmm -c debug -n spk-ingress -- ip a | grep -iE 'eth0|tmm'
In this example, both interfaces exist and are configured.
4: eth0@if125: <BROADCAST,MULTICAST,ALLMULTI,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default
    link/ether 2a:5c:29:fa:a7:06 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 100.102.144.24/32 brd 100.102.144.24 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::285c:29ff:fefa:a706/64 scope link
       valid_lft forever preferred_lft forever
6: internal: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 1000
    link/ether 00:50:56:8c:37:43 brd ff:ff:ff:ff:ff:ff
    inet 10.146.164.160/22 brd 10.146.164.0 scope global internal
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe8c:3743/64 scope link
Create a new Project for the application:
In this example, a new Project named spk-apps is created.
kubectl create namespace spk-apps
Create the application Pod IPPool.
apiVersion: crd.projectcalico.org/v1
kind: IPPool
metadata:
  name: app-ip-pool
spec:
  cidr: 10.124.0.0/16
  ipipMode: Always
  natOutgoing: true
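Apply the IPPool manifest and confirm Calico accepted it; the filename below is only an example:

# Create the IPPool and read it back (example filename)
kubectl apply -f app-ip-pool.yaml
kubectl get ippools.crd.projectcalico.org app-ip-pool -o yaml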
Annotate the application namespace to reference the IPPool name:

kubectl annotate namespace <namespace> "cni.projectcalico.org/ipv4pools"=[\"<ippool name>\"]
Note: In this example, the application namespace is spk-apps and the IPPool name is app-ip-pool.
kubectl annotate namespace spk-apps "cni.projectcalico.org/ipv4pools"=[\"app-ip-pool\"]
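To verify the annotation was applied, you can print the namespace annotations:

# Show the annotations on the spk-apps namespace
kubectl get namespace spk-apps -o jsonpath='{.metadata.annotations}'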
Install the application Pod. For example:
apiVersion: v1
kind: Pod
metadata:
  name: centos
  annotations:
spec:
  nodeSelector:
    kubernetes.io/hostname: new-wl-cluster-md-0-76666d45d4-l8bv9
  containers:
    - name: centos
      image: dougbtv/centos-network
      # Just spin & wait forever
      command: [ "/bin/bash", "-c", "--" ]
      args: [ "while true; do sleep 30; done;" ]
The application Pods should now be able to access remote networks through the TMM Pod(s).
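A quick way to confirm egress is working is to exec into the application Pod installed above and reach a destination outside the cluster. This sketch assumes the centos Pod was created in the spk-apps namespace; the remote host address is a placeholder for a host on your external network:

# Inspect routing from inside the application Pod, then ping an external host
kubectl exec -it centos -n spk-apps -- ip route
kubectl exec -it centos -n spk-apps -- ping -c 3 <remote-host-ip>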
Feedback¶
Provide feedback to improve this document by emailing spkdocs@f5.com.