BGP Overview

Overview

A few configurations require the Service Proxy Traffic Management Microkernel (TMM) to establish a Border Gateway Protocol (BGP) session with an external BGP neighbor. The Service Proxy TMM Pod’s f5-tmm-routing container can be enabled and configured when installing the Ingress Controller. Review the sections below to determine if you require BGP prior to installing the Ingress Controller.

Note: The f5-tmm-routing container is disabled by default.

BGP parameters

The tables below describe the available BGP Helm parameters.

bgp

Configure and establish BGP peering relationships.

Parameter                 Description
asn                       The AS number of the f5-tmm-routing container.
hostname                  The hostname of the f5-tmm-routing container.
neighbors.ip              The IPv4 or IPv6 address of the BGP peer.
neighbors.asn             The AS number of the BGP peer.
neighbors.password        The BGP peer MD5 authentication password. Note: The password is stored unencrypted in the f5-tmm-dynamic-routing configmap.
neighbors.ebgpMultihop    Enables connectivity between external peers that do not have a direct connection, setting the maximum hop count (1-255).
neighbors.acceptsIPv4     Enables advertising IPv4 virtual server addresses to the peer (true / false). The default is false.
neighbors.acceptsIPv6     Enables advertising IPv6 virtual server addresses to the peer (true / false). The default is false.
neighbors.softReconf      Enables BGP4 policies to be activated without clearing the BGP session.
neighbors.maxPathsEbgp    The number of parallel eBGP (external peer) routes installed. The default is 2.
neighbors.maxPathsIbgp    The number of parallel iBGP (internal peer) routes installed. The default is 2.
neighbors.fallover        Enables bidirectional forwarding detection (BFD) between neighbors (true / false). The default is false.
neighbors.routeMap        References the routeMaps.name parameter, and applies the filter to the BGP neighbor.
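
For example, a neighbor entry using MD5 authentication might be configured as follows (a minimal sketch; the password value is illustrative, and note it is stored unencrypted as described above):

tmm:
  dynamicRouting:
    enabled: true
    tmmRouting:
      config:
        bgp:
          asn: 100
          hostname: spk-bgp
          neighbors:
          - ip: 10.10.10.200
            asn: 200
            password: bgp-secret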

prefixList

Create prefix lists to filter specified IP address subnets.

Parameter     Description
name          The name of the prefixList entry.
seq           The order of the prefixList entry.
deny          Allow (false) or deny (true) the prefixList entry.
prefix        The IP address subnet to filter.

routeMaps

Create route maps that apply to BGP neighbors, referencing specified prefix lists.

Parameter     Description
name          The name of the routeMaps object applied to the BGP neighbor.
seq           The order of the routeMaps entry.
deny          Allow (false) or deny (true) the routeMaps entry.
match         The name of the referenced prefixList.

bfd

Enable BFD and configure the control packet intervals.

Parameter     Description
interface     Selects the BFD peering interface.
interval      Sets the minimum transmission interval in milliseconds (50-999).
minrx         Sets the minimum receive interval in milliseconds (50-999).
multiplier    Sets the Hello multiplier value (3-50).

Advertising virtual IPs

Virtual server IP addresses are created on the f5-tmm container when deploying SPK’s application traffic Custom Resources. To have f5-tmm begin proxying and load balancing external traffic to the internal Pods, advertise the virtual server IP address to remote networks using BGP. Alternatively, static routes can be configured on upstream devices; however, this method is less scalable and more error-prone.

In this example, the f5-tmm-routing container peers with an IPv4 neighbor, and advertises any IPv4 virtual server address:

tmm:
  dynamicRouting:
    enabled: true
    tmmRouting:
      config:
        bgp:
          asn: 100
          hostname: spk-bgp
          neighbors:
          - ip: 10.10.10.200
            asn: 200
            ebgpMultihop: 10
            maxPathsEbgp: 4
            maxPathsIbgp: 'null'
            acceptsIPv4: true
            softReconf: true

Once the Ingress Controller is installed, verify that the neighbor relationship is established and that the virtual server IP address is being advertised.

  1. Log in to the f5-tmm-routing container:

    oc exec -it deploy/f5-tmm -c f5-tmm-routing -n <project> -- bash
    

    In this example, the f5-tmm-routing container is in the spk-ingress Project:

    oc exec -it deploy/f5-tmm -c f5-tmm-routing -n spk-ingress -- bash
    
  2. Log in to the IMI shell and turn on privileged mode:

    imish
    en
    
  3. Verify the IPv4 neighbor BGP state:

    show bgp ipv4 neighbors <ip address>
    

    In this example, the neighbor address is 10.10.10.200 and the BGP state is Established:

    show bgp ipv4 neighbors 10.10.10.200
    
    BGP neighbor is 10.10.10.200, remote AS 200, local AS 100, external link
    BGP version 4, remote router ID 10.10.10.200
    BGP state = Established
    
  4. Install one of the application traffic Custom Resources.
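
    For example, a minimal F5SPKIngressTCP CR might look like the following (a sketch; the virtual IP matches this example, while the namespace and Service name are hypothetical):

    apiVersion: "ingresstcp.k8s.f5net.com/v1"
    kind: F5SPKIngressTCP
    metadata:
      name: nginx-web-app
      namespace: web-apps
    spec:
      destinationAddress: "10.10.10.1"
      destinationPort: 80
      service:
        name: nginx-web-app
        port: 80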

  5. Verify the IPv4 virtual IP address is being advertised:

    show bgp ipv4 neighbors <ip address> advertised-routes
    

    In this example, the 10.10.10.1 virtual IP address is being advertised with a Next Hop of the TMM self IP address 10.10.10.250:

    show bgp ipv4 neighbors 10.10.10.200 advertised-routes
    
         Network         Next Hop        Metric    LocPrf    Weight
    *>   10.10.10.1/32   10.10.10.250    0         100       32768  
    
    Total number of prefixes 1
    
  6. External hosts should now be able to connect to any IPv4 virtual IP address configured on the f5-tmm container.
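
    For example, from an external client with a route to the advertised network (assuming a web application is bound to the virtual IP):

    curl http://10.10.10.1/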

Filtering Snatpool IPs

By default, all F5SPKSnatpool IP addresses are advertised (redistributed) to BGP neighbors. To advertise only specific SNAT pool IP addresses, configure a prefixList defining the IP addresses to advertise, and apply a routeMap referencing that prefixList to the BGP neighbor configuration. In the example below, only the 10.244.10.0/24 and 10.244.20.0/24 IP address subnets will be advertised to the BGP neighbor; the le 32 qualifier matches any prefix length up to /32 within each subnet, so the individual SNAT pool member addresses (installed as /32 host routes) are included:

dynamicRouting:
  enabled: true
  tmmRouting:
    config:
      prefixList:
        - name: 10pod
          seq: 10
          deny: false
          prefix: 10.244.10.0/24 le 32
        - name: 20pod
          seq: 10
          deny: false
          prefix: 10.244.20.0/24 le 32

      routeMaps:
        - name: snatpoolroutemap
          seq: 10
          deny: false
          match: 10pod
        - name: snatpoolroutemap
          seq: 11
          deny: false
          match: 20pod

      bgp: 
        asn: 100
        hostname: spk-bgp
        neighbors:
        - ip: 10.10.10.200 
          asn: 200
          routeMap: snatpoolroutemap

Once the Ingress Controller has been installed, verify the expected SNAT pool IP addresses are being advertised.

  1. Install the F5SPKSnatpool Custom Resource (CR).

  2. Log in to the f5-tmm-routing container:

    oc exec -it deploy/f5-tmm -c f5-tmm-routing -n <project> -- bash
    

    In this example, the f5-tmm-routing container is in the spk-ingress Project:

    oc exec -it deploy/f5-tmm -c f5-tmm-routing -n spk-ingress -- bash
    
  3. Log in to the IMI shell and turn on privileged mode:

    imish
    en
    
  4. Verify the SNAT pool IP addresses are being advertised:

    show bgp ipv4 neighbors <ip address> advertised-routes
    

    In this example, the SNAT pool IP addresses are being advertised, and TMM’s external interface is the next hop:

    show bgp ipv4 neighbors 10.10.10.200 advertised-routes
    
         Network          Next Hop            Metric    LocPrf        Weight
    *>   10.244.10.1/32   10.20.2.207         0         100           32768 
    *>   10.244.10.2/32   10.20.2.207         0         100           32768 
    *>   10.244.20.1/32   10.20.2.207         0         100           32768 
    *>   10.244.20.2/32   10.20.2.207         0         100           32768 
    
    Total number of prefixes 4
    

Scaling TMM Pods

When installing more than a single Service Proxy TMM Pod instance (scaling) in the Project, you must configure BGP with Equal-cost Multipath (ECMP) load balancing. Each of the Service Proxy TMM replicas advertises itself to the upstream BGP routers, and ingress traffic is distributed across the TMM replicas based on the external BGP neighbor’s load balancing algorithm. Distributing traffic over multiple paths offers increased bandwidth and a level of network path fault tolerance.

The example below configures ECMP for up to 4 TMM Pod instances:

tmm:
  dynamicRouting:
    enabled: true
    tmmRouting:
      config:
        bgp:
          asn: 100
          maxPathsEbgp: 4
          maxPathsIbgp: 'null'
          hostname: spk-bgp
          neighbors:
          - ip: 10.10.10.200
            asn: 200
            ebgpMultihop: 10
            acceptsIPv4: true
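
The external BGP peers must also accept multiple equal-cost paths. For example, on an FRR-based upstream router the equivalent setting is maximum-paths (an illustrative sketch; syntax varies by router vendor, and the TMM self IP addresses match the verification example below):

router bgp 200
 neighbor 10.10.10.250 remote-as 100
 neighbor 10.10.10.251 remote-as 100
 address-family ipv4 unicast
  maximum-paths 2
 exit-address-family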

Once the Ingress Controller has been installed, verify the virtual server IP addresses are being advertised by both TMMs.

  1. Deploy one of the application traffic Custom Resources.

  2. Log in to one of the external peer routers, and show the routing table for the virtual IP address:

    show ip route bgp
    

    In this example, 2 TMM replicas are deployed and configured with virtual IP address 10.10.10.1:

    show ip route bgp
    B       10.10.10.1/32 [20/0] via 10.10.10.250, external, 00:07:59
                          [20/0] via 10.10.10.251, external, 00:07:59
    
  3. The external peer routers should now distribute traffic flows to the TMM replicas based on the configured ECMP load balancing algorithm.

Enabling BFD

Bidirectional Forwarding Detection (BFD) rapidly detects loss of connectivity between BGP neighbors by exchanging periodic BFD control packets on the network link. After a specified interval, if a control packet is not received, the connection is considered down, enabling fast network convergence. The BFD configuration requires the interface name of the external BGP peer. Use the following command to obtain the external interface name:

oc get ingressroutevlan <external vlan> -o "custom-columns=VLAN Name:.spec.name"
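
In this example, the returned interface name is external, matching the bfd.interface value used below (output assumes the external VLAN's .spec.name is external):

VLAN Name
external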

The example below configures BFD between two BGP peers:

tmm:
  dynamicRouting:
    enabled: true
    tmmRouting:
      config:
        bgp:
          asn: 100
          hostname: spk-bgp
          neighbors:
          - ip: 10.10.10.200
            asn: 200
            ebgpMultihop: 10
            acceptsIPv4: true
            fallover: true
        bfd:
          interface: external
          interval: 100
          minrx: 100
          multiplier: 3
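
With these values, a neighbor is declared down after roughly minrx × multiplier = 100 ms × 3 = 300 ms without a received BFD control packet.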

Once the Ingress Controller has been installed, verify the BFD configuration is working.

  1. Log in to the f5-tmm-routing container:

    oc exec -it deploy/f5-tmm -c f5-tmm-routing -n <project> -- bash
    

    In this example, the f5-tmm-routing container is in the spk-ingress Project:

    oc exec -it deploy/f5-tmm -c f5-tmm-routing -n spk-ingress -- bash
    
  2. Log in to the IMI shell and turn on privileged mode:

    imish
    en
    
  3. View the BFD session status:

    Note: You can append the detail argument for verbose session information.

    show bfd session 
    

    In this example, the Sess-State is Up:

    BFD process for VRF: (DEFAULT VRF)
    =====================================================================================
    Sess-Idx   Remote-Disc  Lower-Layer  Sess-Type   Sess-State  UP-Time   Remote-Addr
    2          1            IPv4         Single-Hop  Up          00:03:16  10.10.10.200/32
    Number of Sessions:    1
    
  4. BGP should now quickly detect link failures between neighbors.

Feedback

Provide feedback to improve this document by emailing spkdocs@f5.com.

Supplemental