Lab 1.1 - Install & Configure CIS in NodePort Mode

The BIG-IP Controller for Kubernetes (Container Ingress Services, or CIS) installs as a Deployment object.

See also

The official CIS documentation is here: Install the BIG-IP Controller: Kubernetes

In this lab we’ll use NodePort mode to deploy an application to the BIG-IP.

See also

For more information see BIG-IP Deployment Options

BIG-IP Setup

  1. Browse to the Deployment tab of your UDF lab session at https://udf.f5.com and connect to BIG-IP1 using the TMUI access method.

    ../../_images/TMUI.png
  2. Login with username: admin and password: F5site02@.

    ../../_images/TMUILogin.png ../../_images/TMUILicense.png

    Attention

    • Check BIG-IP is active and licensed.
    • If your BIG-IP has no license or its license has expired, renew the license. A BIG-IP LTM VE license is sufficient for this lab; no specific add-ons are required. (Ask a lab instructor for an eval license if yours has expired.)
    • Be sure to be in the Common partition before creating the following objects.
    ../../_images/f5-check-partition.png
  3. Create a partition, which is required for F5 Container Ingress Services.

    • Browse to: System ‣ Users ‣ Partition List

      Attention

      • Be sure to be in the Common partition before creating the following objects.

      ../../_images/f5-check-partition.png
    • Create a new partition called “kubernetes” (use default settings)

    • Click Finished

    ../../_images/f5-container-connector-bigip-partition-setup1.png

    # Via the CLI:

    tmsh create auth partition kubernetes
    
  4. Verify AS3 is installed.

    Attention

    This has been done to save time but is documented here for reference.

    See also

    For more info click here: Application Services 3 Extension Documentation

    • Browse to: iApps ‣ Package Management LX and confirm “f5-appsvcs” is in the list as shown below.

      ../../_images/confirm-as3-installed.png
  5. If AS3 is NOT installed follow these steps:

    • Click here to: Download latest AS3
    • Browse back to: iApps ‣ Package Management LX
      • Click Import
      • Browse and select downloaded AS3 RPM
      • Click Upload

Explore the Kubernetes Cluster

  1. Go back to the Deployment tab of your UDF lab session at https://udf.f5.com and connect to kube-master1 using the Web Shell access method.

    ../../_images/WEBSHELL.png
  2. The CLI will appear in a new window or tab. Switch to the ubuntu user account using the following “su” command.

    ../../_images/WEBSHELLroot.png
    su ubuntu
    
  3. Set the working directory with the “cd” command.

    Note

    All the files used in this lab are available in the working directory after logging in as the ubuntu user.

    cd ~/agilitydocs/docs/class1/kubernetes
    
  4. Check the Kubernetes cluster nodes.

    You can manage nodes in your instance using the CLI. The CLI interacts with node objects that are representations of actual node hosts. The master uses the information from node objects to validate nodes with health checks.

    To list all nodes that are known to the master:

    ../../_images/kube-get-nodes.png
    kubectl get nodes
    

    Attention

    If the node STATUS shows NotReady or SchedulingDisabled contact the lab proctor. The node is not passing the health checks performed from the master, therefore pods cannot be scheduled for placement on the node.
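The Ready check above can also be scripted. The snippet below is a minimal sketch that filters non-Ready nodes out of `kubectl get nodes` output; the sample output is hardcoded (node names match this lab, versions are placeholders), so it runs without a cluster:

```shell
# Sample `kubectl get nodes` output, hardcoded so this sketch runs without a cluster:
kubectl_output='NAME           STATUS     ROLES    AGE   VERSION
kube-master1   Ready      master   10d   v1.21.3
kube-node1     Ready      <none>   10d   v1.21.3
kube-node2     NotReady   <none>   10d   v1.21.3'

# Print the names of any nodes whose STATUS column is not exactly "Ready":
echo "$kubectl_output" | awk 'NR>1 && $2 != "Ready" {print $1}'
```

On a live cluster you would pipe the real `kubectl get nodes` into the same awk filter.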

  5. To get more detailed information about a specific node, including the reason for its current condition, use the kubectl describe node command. This provides a lot of useful information and can assist with troubleshooting issues.

    kubectl describe node kube-master1
    
    ../../_images/kube-describe-node.png

CIS Deployment

See also

For a more thorough explanation of all the settings and options see F5 Container Ingress Services - Kubernetes

Now that BIG-IP is licensed and prepped with the “kubernetes” partition, we need to define a Kubernetes deployment and create a Kubernetes secret to store our BIG-IP credentials.

  1. Create bigip login secret

    kubectl create secret generic bigip-login -n kube-system --from-literal=username=admin --from-literal=password=F5site02@
    

    You should see something similar to this:

    ../../_images/f5-container-connector-bigip-secret.png
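Note that a Kubernetes Secret only base64-encodes its values; it does not encrypt them. A quick sketch of how the username value above is stored inside the Secret object:

```shell
# Base64-encode the username the way Kubernetes stores it inside the Secret:
encoded=$(printf '%s' 'admin' | base64)
echo "$encoded"    # YWRtaW4=

# Decoding recovers the original value, which is why access to Secrets
# should be restricted via RBAC:
printf '%s' "$encoded" | base64 --decode
```

On the cluster, `kubectl get secret bigip-login -n kube-system -o yaml` shows these encoded values.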
  2. Create kubernetes service account for bigip controller

    kubectl create serviceaccount k8s-bigip-ctlr -n kube-system
    

    You should see something similar to this:

    ../../_images/f5-container-connector-bigip-serviceaccount.png
  3. Create a cluster role binding for the bigip service account (this grants admin rights, but can be narrowed for your environment)

    kubectl create clusterrolebinding k8s-bigip-ctlr-clusteradmin --clusterrole=cluster-admin --serviceaccount=kube-system:k8s-bigip-ctlr
    

    You should see something similar to this:

    ../../_images/f5-container-connector-bigip-clusterrolebinding.png
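The cluster-admin binding above is the quickest path for a lab. For production you would typically bind a narrower ClusterRole instead. The fragment below is an illustrative sketch only: the role name is made up, and the exact resources and verbs CIS needs vary by version, so consult the official CIS RBAC manifest before using it.

```yaml
# Illustrative only: a narrower ClusterRole to bind in place of cluster-admin.
# The exact resources and verbs required depend on your CIS version.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: k8s-bigip-ctlr-role   # hypothetical name
rules:
  - apiGroups: [""]
    resources: ["services", "endpoints", "nodes", "namespaces", "secrets", "configmaps"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["networking.k8s.io", "extensions"]
    resources: ["ingresses"]
    verbs: ["get", "list", "watch"]
```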
  4. At this point we have two deployment mode options, NodePort or ClusterIP. This class will feature both modes. For more information see BIG-IP Controller Modes

    Let’s start with NodePort mode.

    Note

    • For your convenience the file can be found in /home/ubuntu/agilitydocs/docs/class1/kubernetes (downloaded earlier in the clone git repo step).
    • Or you can copy and paste the file below and create your own file.
    • If you have issues with your YAML syntax (indentation MATTERS), you can try an online parser to help you: Yaml parser
    nodeport-deployment.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: k8s-bigip-ctlr-deployment
      namespace: kube-system
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: k8s-bigip-ctlr
      strategy:
        type: RollingUpdate
      template:
        metadata:
          labels:
            app: k8s-bigip-ctlr
          name: k8s-bigip-ctlr
        spec:
          serviceAccountName: k8s-bigip-ctlr
          containers:
            - args:
                - --bigip-username=$(BIGIP_USERNAME)
                - --bigip-password=$(BIGIP_PASSWORD)
                - --bigip-url=10.1.1.5
                - --bigip-partition=kubernetes
                - --pool-member-type=nodeport
                - --insecure=true
                - --agent=as3
                - --log-level=info
                - --custom-resource-mode=false
                - --log-as3-response=true
                - --as3-validation=true
              command:
                - /app/bin/k8s-bigip-ctlr
              env:
                - name: BIGIP_USERNAME
                  valueFrom:
                    secretKeyRef:
                      key: username
                      name: bigip-login
                - name: BIGIP_PASSWORD
                  valueFrom:
                    secretKeyRef:
                      key: password
                      name: bigip-login
              image: f5networks/k8s-bigip-ctlr:latest
              imagePullPolicy: IfNotPresent
              name: k8s-bigip-ctlr
          dnsPolicy: ClusterFirst
          #imagePullSecrets:
          #  - name: f5-docker-images
    
  5. Once you have your yaml file set up, you can launch your deployment. This will start the k8s-bigip-ctlr container on one of the nodes.

    Note

    This may take around 30 seconds to reach the Running state.

    kubectl create -f nodeport-deployment.yaml
    
  6. Verify the deployment “deployed”

    kubectl get deployment k8s-bigip-ctlr-deployment --namespace kube-system
    
    ../../_images/f5-container-connector-launch-deployment-controller.png
  7. To locate on which node the CIS service is running, you can use the following command:

    kubectl get pods -o wide -n kube-system
    

    We can see that our container is running on kube-node2 below.

    ../../_images/f5-container-connector-locate-controller-container.png
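Extracting the node name from that listing can be scripted as well. The sketch below hardcodes sample `kubectl get pods -o wide` output (the pod hash is illustrative, and real output may include extra trailing columns depending on your kubectl version):

```shell
# Sample `kubectl get pods -o wide -n kube-system` output, hardcoded for this sketch:
pods_wide='NAME                                         READY   STATUS    RESTARTS   AGE   IP           NODE
k8s-bigip-ctlr-deployment-7469c978f9-6hvbv   1/1     Running   0          1m    10.244.2.3   kube-node2'

# The node is the last column here; print it for the CIS pod:
echo "$pods_wide" | awk '/k8s-bigip-ctlr/ {print $NF}'
```

On a live cluster, a more robust approach is `kubectl get pods -n kube-system -l app=k8s-bigip-ctlr -o jsonpath='{.items[0].spec.nodeName}'`, which reads the node name directly from the pod spec instead of parsing columns.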

Troubleshooting

If you need to troubleshoot your container, you have two ways to check the logs: the kubectl command or the docker command.

Attention

Depending on your deployment, CIS can be running on either kube-node1 or kube-node2. In our example above it’s running on kube-node2.

  1. Using the kubectl command: you need to use the full name of your pod as shown in the previous image.

    # For example:

    kubectl logs k8s-bigip-ctlr-7469c978f9-6hvbv -n kube-system
    
    ../../_images/f5-container-connector-check-logs-kubectl.png
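When scanning long CIS logs, it helps to filter by log level. A minimal sketch with hardcoded sample lines (the messages are illustrative; real CIS log text will differ):

```shell
# Illustrative CIS-style log lines, hardcoded so this runs without a cluster:
logs='2024/01/01 12:00:00 [INFO] Starting: Container Ingress Services
2024/01/01 12:00:01 [ERROR] [AS3] posting AS3 declaration failed'

# Keep only ERROR-level lines:
echo "$logs" | grep '\[ERROR\]'
```

On the cluster you would pipe the real output through the same filter, e.g. `kubectl logs k8s-bigip-ctlr-7469c978f9-6hvbv -n kube-system | grep '\[ERROR\]'`.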