Lab 1.4 - F5 Container Connector Usage

  1. This class and the following labs need these namespaces/projects created:

    oc create namespace f5demo
    oc create namespace demoproj
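
    If you want to confirm both projects exist before continuing, a quick check using standard oc syntax is:

      # List the two namespaces just created; both should show status Active
      oc get namespaces f5demo demoproj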
    
  2. For the following YAML files to work, you need to be in the “f5demo” project.

    Attention

    In the previous lab, upon OpenShift login, you were placed in the “default” project.

    oc project f5demo
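
    To double-check which project you are currently in, the oc client can print it directly:

      # Print only the name of the current project (should return "f5demo")
      oc project -q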
    
  3. Create the f5demo deployment

    oc create -f f5demo.yaml
    

    Tip

    This file can be found at /home/centos/agilitydocs/openshift/advanced/apps/f5demo

    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: f5demo
    spec:
      replicas: 3
      template:
        metadata:
          labels:
            app: f5demo
            tier: frontend
        spec:
          containers:
          - name: f5demo
            image: kmunson1973/f5demo:1.0.0
            ports:
            - containerPort: 8080
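
    Before moving on, you may want to confirm the three replicas are running. A minimal check, using the labels from the deployment above:

      # List the f5demo pods and wait for the rollout to finish
      oc get pods -n f5demo -l app=f5demo
      oc rollout status deployment/f5demo -n f5demo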
    
  4. Create the f5demo service

    oc create -f f5service.yaml
    

    Tip

    This file can be found at /home/centos/agilitydocs/openshift/advanced/apps/f5demo

    apiVersion: v1
    kind: Service
    metadata:
      name: f5demo
      labels:
        app: f5demo
        tier: frontend
    spec:
      # if your cluster supports it, uncomment the following to automatically create
      # an external load-balanced IP for the frontend service.
      # type: LoadBalancer
      ports:
      - port: 8080
      selector:
        app: f5demo
        tier: frontend
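
    To verify the service is selecting the pods created in the previous step, you can compare its endpoints with the pod IPs; a sketch using standard oc commands:

      # The endpoints list should contain one <podIP>:8080 entry per replica
      oc get svc f5demo -n f5demo
      oc get endpoints f5demo -n f5demo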
    
  5. Upload the pool-only ConfigMap to the OpenShift API server. It configures the f5demo project namespace on the BIG-IP.

    oc create -f pool-only.yaml
    

    Tip

    This file can be found at /home/centos/agilitydocs/openshift/advanced/ocp/

    kind: ConfigMap
    apiVersion: v1
    metadata:
      # name of the resource to create on the BIG-IP
      name: k8s.poolonly
      # the namespace to create the object in
      # As of v1.1, the k8s-bigip-ctlr watches all namespaces by default
      # If the k8s-bigip-ctlr is watching a specific namespace(s),
      # this setting must match the namespace of the Service you want to proxy
      # -AND- the namespace(s) the k8s-bigip-ctlr watches
      namespace: f5demo
      labels:
        # the type of resource you want to create on the BIG-IP
        f5type: virtual-server
    data:
      schema: "f5schemadb://bigip-virtual-server_v0.1.7.json"
      data: |
        {
          "virtualServer": {
            "backend": {
              "servicePort": 8080,
              "serviceName": "f5demo",
              "healthMonitors": [{
                "interval": 3,
                "protocol": "http",
                "send": "GET /\r\n",
                "timeout": 10
              }]
            },
            "frontend": {
              "virtualAddress": {
                "port": 80
              },
              "partition": "ocp",
              "balance": "round-robin",
              "mode": "http"
            }
          }
        }
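
    To confirm the ConfigMap was accepted by the API server (the BIG-IP objects themselves are verified in the next step), a quick check is:

      # Show the ConfigMap and its embedded virtual-server definition
      oc describe configmap k8s.poolonly -n f5demo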
    
  6. Check bigip1 and bigip2 to make sure the pool was created, and validate that the pools are marked green (a CLI check is sketched below).

    Attention

    Make sure you are looking at the “ocp” partition

    ../../../_images/pool-members.png
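    If you prefer the CLI to the GUI, a rough equivalent from a TMOS shell might look like the following; the exact object names created by the controller may differ in your environment:

      # Change to the ocp partition and show pool status, including member availability
      tmsh -c "cd /ocp; show ltm pool"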
  7. Increase the number of replicas of the f5demo deployment. The replica count specifies the required number of pod instances to run.

    oc scale --replicas=10 deployment/f5demo -n f5demo
    

    Note

    It may take time to have your replicas up and running.

  8. Track the rollout with the following command to check the number of AVAILABLE instances:

    oc get deployment f5demo -n f5demo
    
    ../../../_images/10-containers.png
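
    The same command also accepts a watch flag if you would rather follow the rollout live:

      # Refresh the output as replicas become available (Ctrl+C to stop)
      oc get deployment f5demo -n f5demo -w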

    Validate that bigip1 and bigip2 are updated with the additional pool members and that their health monitors pass. If a monitor is failing, check the tunnel and self IP configuration.

Validation and Troubleshooting

Now that you have HA configured and the deployment uploaded, it is time to generate traffic through the BIG-IPs.

Add a virtual IP to the ConfigMap by editing the pool-only.yaml file. There are multiple ways to edit a ConfigMap, which will be covered in module 3 (one alternative is sketched below). In this task, remove the deployed ConfigMap, edit the YAML file, and re-apply it.
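
For reference, one of those alternative methods is editing the live object in place; a minimal sketch, assuming the ConfigMap name used earlier in this lab:

  # Opens the ConfigMap in an editor; the controller picks up the change on save
  oc edit configmap k8s.poolonly -n f5demo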

  1. Remove the “pool-only” configmap.

    oc delete -f pool-only.yaml
    
  2. Edit the pool-only.yaml and add the bindAddr

    vi pool-only.yaml

    "frontend": {
       "virtualAddress": {
          "port": 80,
          "bindAddr": "10.3.10.220"
    

    Tip

    Do not use TAB characters in the file, only spaces, and don't forget the trailing “,” after "port": 80.
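
    For reference, after the edit the frontend section of the embedded virtual-server definition should look roughly like this (the other keys are unchanged from the original pool-only.yaml):

      "frontend": {
        "virtualAddress": {
          "port": 80,
          "bindAddr": "10.3.10.220"
        },
        "partition": "ocp",
        "balance": "round-robin",
        "mode": "http"
      }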

  3. Re-create the modified pool-only ConfigMap

    oc create -f pool-only.yaml
    
  4. From the jumpbox open a browser and try to connect to the virtual server at http://10.3.10.220. Does the connection work? If not, try the following troubleshooting options:

    1. Capture the HTTP request to see if the connection is established with the BIG-IP (a quick check from the command line is sketched below).
    2. Work through the Network Troubleshooting section below.
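
    A simple way to capture the HTTP exchange from the jumpbox command line is curl in verbose mode (any HTTP client will do):

      # -v prints the request and response headers so you can see whether the TCP
      # connection and HTTP exchange complete against the BIG-IP virtual server
      curl -v http://10.3.10.220/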

Network Troubleshooting

How do I verify connectivity between the BIG-IP VTEP and the OSE Node?

  1. Ping the Node’s VTEP IP address. Use the -s flag to set the packet size and test how large, VxLAN-sized packets are handled.

    ping -s 1600 -c 4 10.3.10.21 #(or .22 or .23)
    
  2. Ping the Pod’s IP address (use the output from looking at the pool members in the previous steps). Use the -s flag to set the packet size, as before.

    ping -s 1600 -c 4 10.130.0.8
    
  3. Now reduce the packet size to 1400

    ping -s 1400 -c 4 10.130.0.8
    

    Note

    When pinging the VTEP IP directly, the BIG-IP is L2 adjacent to the device and can send the large packets.

    In the second example, the packet is dropped across the VxLAN tunnel.

    In the third example, the packet is able to traverse the VxLAN tunnel.
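
    The arithmetic behind these results, assuming a standard 1500-byte underlay MTU:

      # VxLAN adds roughly 50 bytes of overhead (outer Ethernet + IP + UDP + VxLAN headers),
      # so a 1500-byte underlay MTU leaves about 1450 bytes for the inner packet: the
      # 1400-byte ping fits, the 1600-byte ping does not. On a Linux shell that supports
      # it, -M do (do-not-fragment) can make the oversized case fail with an explicit
      # MTU error rather than a silent drop:
      ping -M do -s 1600 -c 4 10.130.0.8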

  4. In a TMOS shell, run a tcpdump on the ocp-tunnel interface to inspect the tunnel traffic.

    tcpdump -i ocp-tunnel -c 10 -nnn
    
  5. In a TMOS shell, view the FDB (MAC address) entries for the OSE tunnel. This will show the MAC addresses and IP addresses of all of the OpenShift endpoints.

    tmsh show /net fdb tunnel ocp-tunnel
    
    ../../../_images/net-fdb-entries.png
  6. In a TMOS shell, view the ARP entries.

    This will show all of the ARP entries; you should see the VTEP entries on the ocpvlan and the Pod IP addresses on the ocp-tunnel.

    tmsh show /net arp
    
    ../../../_images/net-arp-entries.png
  7. Validate the floating IP address for ocp-tunnel. Check that the configuration from the earlier config step is correct: the self IP should be a floating IP, with its Traffic Group set to traffic-group-1 (floating). If it is set to a local, non-floating traffic group, change it to floating (a tmsh check is sketched below).

    ../../../_images/floating.png
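    A rough tmsh equivalent for this check; the self IP name used in the modify command is hypothetical, so substitute the name from your configuration:

      # List the self IPs and check which traffic group each belongs to
      tmsh list net self
      # If the ocp-tunnel self IP is non-floating, move it to the floating traffic group
      # (the self IP name below is an example)
      tmsh modify net self ocp-tunnel-float traffic-group traffic-group-1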
  8. Connect to the virtual IP address.

    ../../../_images/success.png
  9. Test failover and make sure you can still connect to the virtual server.

Attention

Congratulations on completing the HA clustering setup. Before moving to the next module, clean up the deployed resource:

oc delete -f pool-only.yaml
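
If you want to confirm the cleanup, the ConfigMap should no longer be listed, and the virtual server and pool should disappear from the “ocp” partition on the BIG-IPs shortly afterwards:

# No k8s.poolonly ConfigMap should remain in the f5demo project
oc get configmap -n f5demo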