BGP Overview

Overview

A few configurations require the Service Proxy Traffic Management Microkernel (TMM) to establish a Border Gateway Protocol (BGP) session with an external BGP neighbor. The Service Proxy TMM Pod’s f5-tmm-routing container can be enabled and configured when installing the SPK Controller. Review the sections below to determine if you require BGP prior to installing the Controller.

Note: The f5-tmm-routing container is disabled by default.

ZebOS ConfigMaps

The SPK f5-tmm-routing container can reference native ZebOS.conf files as ConfigMaps using the SPK Controller Helm values. One of the benefits of referencing the ZebOS.conf file as a ConfigMap is the ability to modify BGP configurations while the SPK F5ingress and TMM Pods are running. The SPK Controller detects modifications made to the ConfigMap file, and applies the updates to the running f5-tmm-routing container. Refer to the ZebOS ConfigMaps overview.
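
A ConfigMap wrapping a native ZebOS.conf generally follows the sketch below; the ConfigMap name, namespace, data key, and BGP statements are illustrative only, and the exact values your release expects are covered in the ZebOS ConfigMaps overview:

apiVersion: v1
kind: ConfigMap
metadata:
  name: zebos-config        # illustrative name
  namespace: spk-ingress
data:
  ZebOS.conf: |             # native ZebOS routing configuration
    router bgp 100
     neighbor 10.10.10.200 remote-as 200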

BGP parameters

The tables below describe the SPK Controller BGP Helm parameters.

tmm.dynamicRouting

Parameter Description
enabled Enables the f5-tmm-routing container: true or false (default).
exportZebosLogs Enables sending f5-tmm-routing logs to Fluentd Logging: true (default) or false.
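
For example, a minimal Helm values snippet that enables the routing container and keeps log export enabled:

tmm:
  dynamicRouting:
    enabled: true           # default is false
    exportZebosLogs: true   # default is true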

tmm.dynamicRouting.tmmRouting.config.bgp

Configure and establish BGP peering relationships.

Parameter Description
asn The AS number of the f5-tmm-routing container.
hostname The hostname of the f5-tmm-routing container.
logFile Specifies a file used to capture BGP logging events: /var/log/zebos.log.
debugs Sets the BGP logging level to debug for troubleshooting purposes: ["bgp"]. Running at the debug level for extended periods is not recommended.
bgpSecret Sets the name of the Kubernetes Secret containing the BGP neighbor password. See the BGP Secrets section below.
neighbors.ip The IPv4 or IPv6 address of the BGP peer.
neighbors.asn The AS number of the BGP peer.
neighbors.password The BGP peer MD5 authentication password. Note: The password is stored in the f5-tmm-dynamic-routing configmap unencrypted.
neighbors.ebgpMultihop Enables connectivity between external peers that do not have a direct connection, setting the maximum hop count (1-255).
neighbors.acceptsIPv4 Enables advertising IPv4 virtual server addresses to the peer (true / false). The default is false.
neighbors.acceptsIPv6 Enables advertising IPv6 virtual server addresses to the peer (true / false). The default is false.
neighbors.softReconf Enables BGP4 policies to be activated without clearing the BGP session.
neighbors.maxPathsEbgp The number of parallel eBGP (external peer) routes installed. The default is 2.
neighbors.maxPathsIbgp The number of parallel iBGP (internal peer) routes installed. The default is 2.
neighbors.fallover Enables Bidirectional Forwarding Detection (BFD) between neighbors (true / false). The default is false.
neighbors.routeMap References the routeMaps.name parameter, and applies the filter to the BGP neighbor.
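
The sketch below maps a subset of these parameters into the Helm values; the password value is illustrative, and because neighbors.password is stored unencrypted in the f5-tmm-dynamic-routing configmap, prefer bgpSecret where possible:

tmm:
  dynamicRouting:
    tmmRouting:
      config:
        bgp:
          asn: 100
          hostname: spk-bgp
          logFile: /var/log/zebos.log
          neighbors:
          - ip: 10.10.10.200
            asn: 200
            password: swordfish   # illustrative; stored unencrypted
            fallover: true        # enables BFD; see the bfd table below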

tmm.dynamicRouting.tmmRouting.config.prefixList

Create prefix lists to filter specified IP address subnets.

Parameter Description
name The name of the prefixList entry.
seq The order of the prefixList entry.
deny Denies (true) or allows (false) the prefixList entry.
prefix The IP address subnet to filter, for example 10.244.10.0/24 le 32.

tmm.dynamicRouting.tmmRouting.config.routeMaps

Create route maps that apply to BGP neighbors, referencing specified prefix lists.

Parameter Description
name The name of the routeMaps object applied to the BGP neighbor.
seq The order of the routeMaps entry.
deny Denies (true) or allows (false) the routeMaps entry.
match The name of the referenced prefixList.

tmm.dynamicRouting.tmmRouting.config.bfd

Enable BFD and configure the control packet intervals.

Parameter Description
interface Selects the interface used for BFD peering.
interval Sets the minimum transmission interval in milliseconds: 50 (default) - 999.
minrx Sets the minimum receive interval in milliseconds: 50 (default) - 999.
multiplier Sets the Hello multiplier value 3 - 50. The default is 10.
multihop_peer Enables multi-hop BFD to BGP neighbor: true or false (default).

BGP Secrets

BGP neighbor passwords can be stored as Kubernetes Secrets using the bgpSecret parameter described in the BGP parameters section above. When using Secrets, each data key must be the neighbors.ip value, and each data value must be the base64 encoded password. When using IPv6, replace any colon (:) characters with dash (-) characters. For example:

  1. Base64 encode the password:

    echo -n password | base64
    
    cGFzc3dvcmQ=
    
  2. Copy the encoded password into the Secret:

    apiVersion: v1
    kind: Secret
    metadata:
     name: bgp-secret
     namespace: spk-ingress
    data:
     10.1.2.3: c3dvcmRmaXNo
     2002--10-1-2-3: cGFzc3dvcmQK
    
  3. Reference the Secret in the SPK Controller Helm values configuration:

    tmm:
      dynamicRouting:
        tmmRouting:
          config:
            bgp:
              bgpSecret: bgp-secret
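
    Alternatively, you can create the same Secret imperatively; oc (like kubectl) base64 encodes --from-literal values automatically, so the plaintext password is supplied. This sketch reuses the example names above:

    oc create secret generic bgp-secret -n spk-ingress \
      --from-literal=10.1.2.3=swordfish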
    

Advertising virtual IPs

Virtual server IP addresses are created on Service Proxy TMM after installing one of the application traffic SPK CRs. When TMM’s virtual server IP addresses are advertised to external networks via BGP, traffic begins flowing to TMM, and the connections are load balanced to the internal Pods, or endpoint pool members. Alternatively, static routes can be configured on upstream devices; however, this method is less scalable and more error-prone.

Important: The Kubernetes Service object referenced by the SPK CR must have at least one Endpoint for the virtual server IP to be created and advertised.

In this example, the f5-tmm-routing container peers with an IPv4 neighbor, and advertises any IPv4 virtual server address:

tmm:
  dynamicRouting:
    enabled: true
    tmmRouting:
      config:
        bgp:
          asn: 100
          hostname: spk-bgp
          neighbors:
          - ip: 10.10.10.200
            asn: 200
            ebgpMultihop: 10
            maxPathsEbgp: 4
            maxPathsIbgp: 'null'
            acceptsIPv4: true
            softReconf: true

Once the Controller is installed, verify the neighbor relationship has established, and the virtual server IP address is being advertised.

  1. Log in to the f5-tmm-routing container:

    oc exec -it deploy/f5-tmm -c f5-tmm-routing -n <project> -- bash
    

    In this example, the f5-tmm-routing container is in the spk-ingress Project:

    oc exec -it deploy/f5-tmm -c f5-tmm-routing -n spk-ingress -- bash
    
  2. Log in to the IMI shell and turn on privileged mode:

    imish
    en
    
  3. Verify the IPv4 neighbor BGP state:

    show bgp ipv4 neighbors <ip address>
    

    In this example, the neighbor address is 10.10.10.200 and the BGP state is Established:

    show bgp ipv4 neighbors 10.10.10.200
    
    BGP neighbor is 10.10.10.200, remote AS 200, local AS 100, external link
    BGP version 4, remote router ID 10.10.10.200
    BGP state = Established
    
  4. Install one of the application traffic SPK CRs.

  5. Verify the IPv4 virtual IP address is being advertised:

    show bgp ipv4 neighbors <ip address> advertised-routes
    

    In this example, the 10.10.10.1 virtual IP address is being advertised with a Next Hop of the TMM self IP address 10.10.10.250:

    show bgp ipv4 neighbors 10.10.10.200 advertised-routes
    
         Network         Next Hop        Metric    LocPrf    Weight
    *>   10.10.10.1/32   10.10.10.250    0         100       32768
    
    Total number of prefixes 1
    
  6. External hosts should now be able to connect to any IPv4 virtual IP address configured on the f5-tmm container.
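
For example, a quick connectivity check from an external host, assuming an HTTP application is deployed behind the 10.10.10.1 virtual IP address:

curl -v http://10.10.10.1/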

Filtering Snatpool IPs

By default, all F5SPKSnatpool IP addresses are advertised (redistributed) to BGP neighbors. To advertise specific SNAT pool IP addresses, configure a prefixList defining the IP addresses to advertise, and apply a routeMap to the BGP neighbor configuration referencing the prefixList. In the example below, only the 10.244.10.0/24 and 10.244.20.0/24 IP address subnets will be advertised to the BGP neighbor:

dynamicRouting:
  enabled: true
  tmmRouting:
    config:
      prefixList:
        - name: 10pod
          seq: 10
          deny: false
          prefix: 10.244.10.0/24 le 32
        - name: 20pod
          seq: 10
          deny: false
          prefix: 10.244.20.0/24 le 32

      routeMaps:
        - name: snatpoolroutemap
          seq: 10
          deny: false
          match: 10pod
        - name: snatpoolroutemap
          seq: 11
          deny: false
          match: 20pod

      bgp: 
        asn: 100
        hostname: spk-bgp
        neighbors:
        - ip: 10.10.10.200 
          asn: 200
          routeMap: snatpoolroutemap

Once the Controller is installed, verify the expected SNAT pool IP addresses are being advertised.

  1. Install the F5SPKSnatpool Custom Resource (CR).

  2. Log in to the f5-tmm-routing container:

    oc exec -it deploy/f5-tmm -c f5-tmm-routing -n <project> -- bash
    

    In this example, the f5-tmm-routing container is in the spk-ingress Project:

    oc exec -it deploy/f5-tmm -c f5-tmm-routing -n spk-ingress -- bash
    
  3. Log in to the IMI shell and turn on privileged mode:

    imish
    en
    
  4. Verify that only the expected SNAT pool IP addresses are being advertised:

    show bgp ipv4 neighbors <ip address> advertised-routes
    

    In this example, only the SNAT pool IP addresses within the specified prefixList subnet are advertised, and TMM’s external interface is the next hop:

    show bgp ipv4 neighbors 10.10.10.200 advertised-routes
    
         Network          Next Hop       Metric    LocPrf    Weight
    *>   10.244.10.1/32   10.20.2.207    0         100       32768
    *>   10.244.10.2/32   10.20.2.207    0         100       32768
    *>   10.244.20.1/32   10.20.2.207    0         100       32768
    *>   10.244.20.2/32   10.20.2.207    0         100       32768
    
    Total number of prefixes 4
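
You can also confirm the prefix list and route map configuration rendered into ZebOS from the IMI shell using the standard show running-config command. The abbreviated output below is illustrative, using the names from the example above:

show running-config
...
ip prefix-list 10pod seq 10 permit 10.244.10.0/24 le 32
ip prefix-list 20pod seq 10 permit 10.244.20.0/24 le 32
route-map snatpoolroutemap permit 10
 match ip address prefix-list 10pod
...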
    

Scaling TMM Pods

When installing more than a single Service Proxy TMM Pod instance (scaling) in the Project, you must configure BGP with Equal-cost Multipath (ECMP) load balancing. Each Service Proxy TMM replica advertises itself to the upstream BGP routers, and ingress traffic is distributed across the TMM replicas based on the external BGP neighbor’s load balancing algorithm. Distributing traffic over multiple paths offers increased bandwidth and a level of network path fault tolerance.

The example below configures ECMP for up to 4 TMM Pod instances:

tmm:
  dynamicRouting:
    enabled: true
    tmmRouting:
      config:
        bgp:
          asn: 100
          maxPathsEbgp: 4
          maxPathsIbgp: 'null'
          hostname: spk-bgp
          neighbors:
          - ip: 10.10.10.200
            asn: 200
            ebgpMultihop: 10
            acceptsIPv4: true

Once the Controller is installed, verify the virtual server IP addresses are being advertised by both TMMs.

  1. Deploy one of the SPK CRs that support application traffic.

  2. Log in to one of the external peer routers, and show the routing table for the virtual IP address:

    show ip route bgp
    

    In this example, 2 TMM replicas are deployed and configured with virtual IP address 10.10.10.1:

    show ip route bgp
    B       10.10.10.1/32 [20/0] via 10.10.10.250, external, 00:07:59
                          [20/0] via 10.10.10.251, external, 00:07:59
    
  3. The external peer routers should now distribute traffic flows to the TMM replicas based on the configured ECMP load balancing algorithm.

Enabling BFD

Bidirectional Forwarding Detection (BFD) rapidly detects loss of connectivity between BGP neighbors by exchanging periodic BFD control packets on the network link. If a control packet is not received within the specified interval, the connection is considered down, enabling fast network convergence. The BFD configuration requires the name of the interface connecting to the external BGP peer. Use the following command to obtain the external interface name:

oc get ingressroutevlan <external vlan> -o "custom-columns=VLAN Name:.spec.name"
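
For example, if the external F5SPKVlan’s spec.name is external (matching the bfd.interface value in the example below), the command returns output similar to the following illustrative sample:

VLAN Name
external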

The example below configures BFD between two BGP peers:

tmm:
  dynamicRouting:
    enabled: true
    tmmRouting:
      config:
        bgp:
          asn: 100
          hostname: spk-bgp
          neighbors:
          - ip: 10.10.10.200
            asn: 200
            ebgpMultihop: 10
            acceptsIPv4: true
            fallover: true
        bfd:
          interface: external
          interval: 100
          minrx: 100
          multiplier: 3

Once the Controller is installed, verify the BFD configuration is working.

  1. Log in to the f5-tmm-routing container:

    oc exec -it deploy/f5-tmm -c f5-tmm-routing -n <project> -- bash
    

    In this example, the f5-tmm-routing container is in the spk-ingress Project:

    oc exec -it deploy/f5-tmm -c f5-tmm-routing -n spk-ingress -- bash
    
  2. Log in to the IMI shell and turn on privileged mode:

    imish
    en
    
  3. View the BFD session status:

    Note: You can append the detail argument for verbose session information.

    show bfd session 
    

    In this example, the Sess-State is Up:

    BFD process for VRF: (DEFAULT VRF)
    =====================================================================================
    Sess-Idx   Remote-Disc  Lower-Layer  Sess-Type   Sess-State  UP-Time   Remote-Addr
    2          1            IPv4         Single-Hop  Up          00:03:16  10.10.10.200/32
    Number of Sessions:    1
    
  4. BGP should now quickly detect link failures between neighbors.

Troubleshooting

When BGP neighbor relationships fail to establish, begin troubleshooting by reviewing BGP log events to gather useful diagnostic data. If you installed the Fluentd logging collector, review the Log file locations and Viewing logs sections of the Fluentd Logging guide before proceeding to the steps below. If the Fluentd logging collector is not installed, use the steps below to verify the current BGP state, then enable and review log events to resolve a simple connectivity issue.

Note: BGP connectivity is established over TCP port 179.
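
As a first connectivity check, you can probe the neighbor’s TCP port 179 from inside the routing container. This sketch assumes the bash /dev/tcp feature and the timeout utility are available in the f5-tmm-routing container, and reuses the example neighbor address 10.10.10.200:

oc exec -it deploy/f5-tmm -c f5-tmm-routing -n spk-ingress -- \
  bash -c 'timeout 3 bash -c "</dev/tcp/10.10.10.200/179" && echo open || echo closed'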

  1. Run the following command to verify the BGP state:

    oc exec -it deploy/f5-tmm -c f5-tmm-routing -n spk-ingress \
    -- imish -e 'show bgp neighbors' | grep state
    

    In this example, the BGP state is Active, indicating neighbor relationships are not currently established:

     BGP state = Active
     BGP state = Active
    
  2. To enable BGP logging, log in to the f5-tmm-routing container:

    oc exec -it deploy/f5-tmm -c f5-tmm-routing -n spk-ingress \
    -- bash
    
  3. Run the following commands to enter configuration mode:

    imish
    en
    config t
    
  4. Enable BGP logging:

    log file /var/log/zebos.log
    
  5. Exit configuration mode, and return to the shell:

    exit
    exit
    exit
    
  6. View the BGP log file events as they occur:

    tail -f /var/log/zebos.log
    

    In this example, the log messages indicate the peers (neighbors) are not reachable:

    Jan 01 12:00:00 : BGP : ERROR [SOCK CB] Could not find peer for FD - 11 (error:107)
    Jan 01 12:00:01 : BGP : INFO 10.20.2.206-Outgoing [FSM] bpf_timer_conn_retry: Peer down,
    Jan 01 12:00:02 : BGP : ERROR [SOCK CB] Could not find peer for FD - 11 (error:107)
    Jan 01 12:00:01 : BGP : INFO 10.30.2.206-Outgoing [FSM] bpf_timer_conn_retry: Peer down,
    
  7. Fix the issue. In this example, the tag ID on the F5SPKVlan was set to the correct value:

    The log messages indicate the neighbors are now Up. It can take up to two minutes for the relationships to establish:

    Jan 01 12:00:05 : BGP : ERROR [SOCK CB] Could not find peer for FD - 13 (error:107)
    Jan 01 12:00:06 : BGP : INFO %BGP-5-ADJCHANGE: neighbor 10.20.2.206 Up
    Jan 01 12:00:07 : BGP : ERROR [SOCK CB] Could not find peer for FD - 11 (error:107)
    Jan 01 12:00:08 : BGP : INFO %BGP-5-ADJCHANGE: neighbor 10.30.2.206 Up
    
  8. The BGP state should now be Established:

    imish -e 'show bgp neighbors' | grep state
    
      BGP state = Established, up for 00:00:36
      BGP state = Established, up for 00:00:19
    
  9. If the BGP state is still not established, and there are issues other than connectivity, set BGP logging to debug, and continue reviewing the lower-level log events:

    debug bgp all  
    
  10. Once the BGP troubleshooting is complete, remove the BGP log and debug configurations:

    no log file
    
    no debug bgp
    

Feedback

Provide feedback to improve this document by emailing spkdocs@f5.com.
