Enabling BGP for SPK with fault tolerance using BFD

The goal of this document is to provide a use case example in which BGP and BFD are configured with SPK, so that newly created Virtual Servers are advertised to neighboring routers and TMMs can be scaled up. This configuration information builds upon what was covered in the installation use case.

Assumptions

  • You have not yet deployed SPK (or are planning to redeploy). BGP and BFD must be configured when SPK is deployed.

  • You have a stable, working OpenShift environment.

  • You are deploying SPK version 1.2.3.3.

  • You have access to configure the BGP Peer, or have the cooperation of the BGP Peer Admin.

Review the F5 Service Proxy for Kubernetes docs for the latest release notes and installation updates.

This use case covers only the simplest BGP/BFD implementation. If, for example, you need to add authentication to your configuration, you can find those implementation details here.

Brief introduction to BGP

Border Gateway Protocol (BGP) is often described as the protocol that built the internet. BGP uses path vector routing between different Autonomous Systems (AS) to make routing decisions. An AS is an IP network, or group of IP networks, under the same administrative control with a well-defined routing policy. Consider two organizations, each with its own network; each would have a unique Autonomous System Number (ASN). ASN values are assigned by the Regional Internet Registries (RIRs), which in turn receive them from IANA. The RIR for North America is ARIN. The pool of 16-bit ASN values is shrinking, which led to the creation of a 32-bit ASN range.

In order for one AS to pass traffic to another AS, BGP is configured on the edge routers. As a path vector routing protocol, BGP maintains path information that is updated dynamically. Alternatively, BGP can be used within a single AS; this is known as Internal BGP, or iBGP. In that design the same BGP AS number is used in each BGP peer configuration. This lab covers iBGP.
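The difference between iBGP and eBGP shows up directly in the neighbor configuration: when a neighbor's remote-as matches the local AS number the session is iBGP, and when it differs the session is eBGP. A minimal sketch in the ZebOS-style CLI syntax used later in this lab (the addresses and ASNs here are illustrative):

router bgp 2015
 neighbor 10.10.255.252 remote-as 2015   ! same ASN as the local AS: iBGP peer
 ! neighbor 192.0.2.1 remote-as 64512    ! a different ASN would make this an eBGP peer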

SPK administrators who intend to enable dynamic routing will need to gather the BGP neighbor's IP address, AS number (ASN), and the BGP peer router password, if one is set. Authentication is not always required when working with BGP peers.

Brief introduction to BFD

The main purpose of Bidirectional Forwarding Detection (BFD) is to detect failures between neighbors, whether those neighbors use dynamic or static routes. BFD is a simple hello protocol that can detect failures in less than a second. BFD uses control packets that function like hello packets: each device sends the other endpoint a control packet, and if these stop arriving, BFD notifies the routing protocol to switch to an alternate path.
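To put a number on "less than a second": BFD timers are negotiated per session, but with a transmit interval of 100 ms and a detect multiplier of 3 (common illustrative values, not taken from this lab's configuration), a neighbor is declared down after roughly 100 ms × 3 = 300 ms without a control packet.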

There are four BFD session states, three of which are part of the normal process.

  • Init: The interface is ready to start, based on the initial BFD handshake.

  • Up: The link is up and the BFD session is normal.

  • Down: The link is down, based on a control packet response or the lack thereof; this state may change once control packets are exchanged again.

  • AdminDown: The link has been taken down for administrative purposes.

BFD is defined in RFC 5880.

Environment review

The following is a simple overview of the network. The end user (Edison) connects from spk-client.example.com. This client is connected to the internal network of the BIG-IP (10.11.33.0/24). This BIG-IP has BGP enabled and is the routing peer to the SPK instance. The hostname of this BIG-IP is bgppeer.example.com, and its external interface is 10.10.255.252. The external interface of the TMM instance in this lab is 10.10.6.10. The application podinfo is hosted on node-dev-6 and exposed using the ingressroutefastl4 custom resource configured on the f5-tmm. A text sketch of the topology follows.
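All names and addresses below are taken from the description above:

spk-client.example.com (Edison)
        |
  10.11.33.0/24 (internal network)
        |
bgppeer.example.com (BIG-IP, BGP peer)
  external interface: 10.10.255.252
        |
        |  iBGP + BFD (ASN 2015)
        |
f5-tmm (SPK), external interface: 10.10.6.10
        |
podinfo on node-dev-6 (exposed via ingressroutefastl4)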

BGP example environment

BGP and BFD are enabled at initial SPK deployment through the Helm override file. Notice that these values are placed in the tmm section of the override file. In the following example we configure Internal BGP using ASN 2015 with a peer at 10.10.255.252, and BFD is defined as well. Because this example environment is not exposed to the internet, the actual ASN value does not matter.
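The sketch below shows the general shape of these override values. The key names are based on the SPK dynamic routing documentation and may differ between releases, so verify them against the values reference for your SPK version; the interval and multiplier numbers are illustrative:

tmm:
  dynamicRouting:
    enabled: true
    tmmRouting:
      config:
        bgp:
          asn: 2015
          neighbors:
            - ip: 10.10.255.252
              asn: 2015                 # same ASN on both sides: iBGP
              acceptsIPv4: true
              fallover: true            # tie the BGP session to BFD
        bfd:
          interface: external
          interval: 100                 # transmit interval in ms (illustrative)
          multiplier: 3                 # missed packets before the peer is declared down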

Dynamic Routing config

Once we have deployed or redeployed SPK in our project, we now expect to find at least four containers in the f5-tmm pod: f5-tmm, debug, f5-tmm-routing, and f5-tmm-tmrouted. The new additions are f5-tmm-routing and f5-tmm-tmrouted.

oc project spk-ingress
oc get pods
oc get pods $(oc get pods -n spk-ingress | grep tmm | awk '{print $1}') -n spk-ingress -o jsonpath='{.spec.containers[*].name}'; echo
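If the routing containers came up, the last command should simply echo the four container names described above, along the lines of:

f5-tmm debug f5-tmm-routing f5-tmm-tmrouted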

Next we can view the Kubernetes ConfigMap for dynamic routing and confirm that it contains the expected ZebOS configuration.

Note: cm is the short name for configmaps; you can list all available short names with oc api-resources.

oc get configmaps | grep routing
oc describe cm f5-tmm-dynamic-routing
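Within the ConfigMap data you should recognize a ZebOS-style rendering of the Helm override values. A rough sketch of the lines to look for, based on this lab's values (the exact output is generated by SPK, and the BFD and fall-over syntax may vary by release):

interface external
 bfd interval 100 minrx 100 multiplier 3
!
router bgp 2015
 neighbor 10.10.255.252 remote-as 2015
 neighbor 10.10.255.252 fall-over bfd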

configuration maps

Next we can log in to the f5-tmm-routing container and review the configuration. Once inside the container we will start the imish shell. You can review the F5 ZebOS command reference for more details. In the show bgp summary output, a healthy neighbor should have reached the Established session state.

Note: To exit the ZebOS shell and return to the bastion host, you must enter exit multiple times.

oc exec -it deploy/f5-tmm -c f5-tmm-routing -- bash
imish
show bgp neighbors
show bgp summary
en
show running-config
exit
exit
exit

BFD

To review the BFD connection, log in to the f5-tmm-routing container and show the sessions. The BFD session details will only appear if both the client and peer are configured correctly and communicating successfully; per the session states described earlier, a healthy session shows Up.

oc exec -it deploy/f5-tmm -c f5-tmm-routing -- bash
imish
show bfd interface
show bfd session

BFD session details

BGP (BIGIP peer)

Now we are going to show an example in which we log in to the BGP peer, which in this case is a BIG-IP running ZebOS. Once logged in, we can start the imish shell and review the settings. You should see that the BIG-IP has an internal and an external network defined. The external interface should be in the same network as the external interface of the SPK TMM instance.

ssh bgpadmin@bgppeer.example.com
imish
show ip route
exit
quit
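On the peer side, the essential configuration mirrors what we deployed in SPK: an iBGP neighbor statement pointing back at the TMM's external interface, with BFD fall-over enabled. A minimal sketch in ZebOS syntax using this lab's addresses (the BIG-IP's own show running-config output is authoritative):

router bgp 2015
 neighbor 10.10.6.10 remote-as 2015
 neighbor 10.10.6.10 fall-over bfd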

The show ip route command will show all routes, including those learned via BGP, which represent Virtual Servers deployed on TMM using the FastL4 CRD. For more information on deploying FastL4 Virtual Servers, please review this document: Creating a FastL4 resource.

BGP show ip route

Now, when Virtual Servers (VIPs) are created in TMM, they’ll be advertised to this BGP Peer. The neighbor (10.10.6.10) listed here is our SPK instance, and the “B” to the left of the IP address means that this route was learned via BGP.
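To close the loop, advertising an additional VIP is simply a matter of creating another FastL4 resource. The following is only a rough sketch of the ingressroutefastl4 custom resource shape; the kind and field names are recalled from the SPK documentation and may differ in your release, and the name, namespace, VIP address, and service values here are hypothetical, so treat the Creating a FastL4 resource document linked above as authoritative:

apiVersion: "k8s.f5net.com/v1"
kind: IngressRouteFastL4
metadata:
  name: podinfo-fastl4              # hypothetical resource name
  namespace: spk-apps               # hypothetical application namespace
spec:
  destinationAddress: "10.10.6.100" # hypothetical VIP; this is what gets advertised via BGP
  destinationPort: 80
  service:
    name: podinfo                   # the podinfo Service from this lab
    port: 9898                      # podinfo's default service port (assumed)

Once TMM processes the resource, the new VIP should appear on the BIG-IP as another "B" route in the show ip route output.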