F5 Solutions for Containers > Class 1: Kubernetes with F5 Container Ingress Service > Module 1: CIS Using NodePort Mode
Lab 1.1 - Install & Configure CIS in NodePort Mode¶
The BIG-IP Controller for Kubernetes installs as a Deployment object.
See also
The official CIS documentation is here: Install the BIG-IP Controller: Kubernetes
In this lab we’ll use NodePort mode to deploy an application to the BIG-IP.
See also
For more information see BIG-IP Deployment Options
BIG-IP Setup¶
Connect via RDP to the UDF lab “jumpbox” host.
Note
Username and password are: ubuntu/ubuntu
Open Firefox and connect to the bigip1 management console. For your convenience there’s a shortcut on the Firefox toolbar.
Note
Username and password are: admin/admin
Attention
- Check BIG-IP is active and licensed.
- If your BIG-IP has no license or its license has expired, renew the license. An LTM VE license is sufficient for this lab; no specific add-ons are required (ask a lab instructor for an eval license if yours has expired).
- Be sure to be in the Common partition before creating the following objects.
First we need to set up a partition that will be used by F5 Container Ingress Service.
- GoTo:
- Create a new partition called “kubernetes” (use default settings)
- Click Finished
# From the CLI:
ssh admin@10.1.1.4
tmsh create auth partition kubernetes
Verify AS3 is installed.
Attention
This has been done to save time but is documented here for reference.
See also
For more info click here: Application Services 3 Extension Documentation
GoTo:
and confirm “f5-appsvcs” is in the list as shown below.
If AS3 is NOT installed follow these steps:
- Click here to: Download latest AS3
- Go back to:
- Click Import
- Browse and select downloaded AS3 RPM
- Click Upload
Explore the Kubernetes Cluster¶
On the jumphost open a terminal and start an SSH session with kube-master1.
# If directed to, accept the authenticity of the host by typing "yes" and hitting Enter to continue.
ssh kube-master1
“git” the demo files
Note
These files should already be present and are updated upon login. If not, use the following command to clone the repo.
git clone -b develop https://github.com/f5devcentral/f5-agility-labs-containers.git ~/agilitydocs
cd ~/agilitydocs/docs/class1/kubernetes
Check the Kubernetes cluster nodes.
You can manage nodes in your instance using the CLI. The CLI interacts with node objects that are representations of actual node hosts. The master uses the information from node objects to validate nodes with health checks.
To list all nodes that are known to the master:
kubectl get nodes
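If you want to script this health check rather than eyeball the table, the STATUS column can be filtered with standard text tools. The snippet below runs against a made-up sample table (node names and versions are illustrative, not real lab output); in the lab you would pipe the real `kubectl get nodes` output into the same awk filter.

```shell
# Print the name of any node whose STATUS is not exactly "Ready".
# The sample table is illustrative; in the lab, replace `echo "$sample"`
# with `kubectl get nodes`.
sample='NAME           STATUS     ROLES                  AGE   VERSION
kube-master1   Ready      control-plane,master   10d   v1.21.1
kube-node1     Ready      <none>                 10d   v1.21.1
kube-node2     NotReady   <none>                 10d   v1.21.1'
echo "$sample" | awk 'NR > 1 && $2 != "Ready" { print $1 }'
```

A node showing `Ready,SchedulingDisabled` in the STATUS column is also caught by this filter, since the field no longer equals `Ready`.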
Attention
If the node STATUS shows NotReady or SchedulingDisabled contact the lab proctor. The node is not passing the health checks performed from the master, therefore pods cannot be scheduled for placement on the node.
To get more detailed information about a specific node, including the reason for its current condition, use the kubectl describe node command. This provides a lot of very useful information and can assist with troubleshooting issues.
kubectl describe node kube-master1
CIS Deployment¶
See also
For a more thorough explanation of all the settings and options see F5 Container Ingress Services - Kubernetes
Now that BIG-IP is licensed and prepped with the “kubernetes” partition, we need to define a Kubernetes deployment and create a Kubernetes secret to store our BIG-IP credentials.
Create bigip login secret
kubectl create secret generic bigip-login -n kube-system --from-literal=username=admin --from-literal=password=admin
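Keep in mind that a Kubernetes secret only base64-encodes its values; this is encoding, not encryption, which is why access to secrets should be restricted. A quick local demonstration (no cluster needed):

```shell
# Kubernetes stores Secret data base64-encoded, not encrypted.
encoded=$(printf '%s' 'admin' | base64)
echo "$encoded"                     # -> YWRtaW4=
printf '%s' "$encoded" | base64 -d  # -> admin
```

On the cluster you can read a stored value back the same way, e.g. `kubectl get secret bigip-login -n kube-system -o jsonpath='{.data.password}' | base64 -d`.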
You should see something similar to this:
Create kubernetes service account for bigip controller
kubectl create serviceaccount k8s-bigip-ctlr -n kube-system
You should see something similar to this:
Create cluster role for bigip service account (admin rights, but can be modified for your environment)
kubectl create clusterrolebinding k8s-bigip-ctlr-clusteradmin --clusterrole=cluster-admin --serviceaccount=kube-system:k8s-bigip-ctlr
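The cluster-admin binding above is the quickest path for a lab, but production deployments usually grant the controller only the access it needs. A narrower RBAC sketch might look like the following; the resource and verb lists here are illustrative assumptions, so consult the CIS documentation for the authoritative requirements of your CIS version.

```yaml
# Illustrative least-privilege alternative to cluster-admin.
# The exact resources/verbs CIS needs are version-dependent; check the CIS docs.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: k8s-bigip-ctlr-role
rules:
- apiGroups: [""]
  resources: ["nodes", "services", "endpoints", "namespaces", "secrets", "configmaps", "events"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: k8s-bigip-ctlr-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: k8s-bigip-ctlr-role
subjects:
- kind: ServiceAccount
  name: k8s-bigip-ctlr
  namespace: kube-system
```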
You should see something similar to this:
At this point we have two deployment mode options, Nodeport or ClusterIP. This class will feature both modes. For more information see BIG-IP Controller Modes
Let’s start with Nodeport mode.
Note
- For your convenience the file can be found in /home/ubuntu/agilitydocs/docs/class1/kubernetes (downloaded earlier in the clone git repo step).
- Or you can copy and paste the file below and create your own file.
- If you have issues with your yaml and syntax (indentation MATTERS), you can use an online parser to help you: Yaml parser
nodeport-deployment.yaml¶
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-bigip-ctlr
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: k8s-bigip-ctlr
  template:
    metadata:
      name: k8s-bigip-ctlr
      labels:
        app: k8s-bigip-ctlr
    spec:
      serviceAccountName: k8s-bigip-ctlr
      containers:
        - name: k8s-bigip-ctlr
          image: "f5networks/k8s-bigip-ctlr:2.4.1"
          imagePullPolicy: IfNotPresent
          env:
            - name: BIGIP_USERNAME
              valueFrom:
                secretKeyRef:
                  name: bigip-login
                  key: username
            - name: BIGIP_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: bigip-login
                  key: password
          command: ["/app/bin/k8s-bigip-ctlr"]
          args: [
            "--bigip-username=$(BIGIP_USERNAME)",
            "--bigip-password=$(BIGIP_PASSWORD)",
            "--bigip-url=https://10.1.1.4:8443",
            "--insecure=true",
            "--bigip-partition=kubernetes",
            "--pool-member-type=nodeport"
          ]
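Since YAML indentation matters, one common and easy-to-miss parse failure is a stray tab character: YAML indentation must use spaces only. A quick pre-flight check you can run locally (the sample file written here is deliberately broken so the check fires):

```shell
# Write a deliberately broken sample file containing a tab-indented line.
printf 'spec:\n\treplicas: 1\n' > /tmp/sample.yaml
# Scan for tab characters; any hit will break `kubectl create -f`.
if grep -q "$(printf '\t')" /tmp/sample.yaml; then
  echo "tab characters found -- replace them with spaces"
fi
```

In the lab you would point the grep at your own nodeport-deployment.yaml instead of the sample file.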
Once you have your yaml file set up, you can launch your deployment. It will start the k8s-bigip-ctlr container on one of our nodes.
Note
This may take around 30sec to be in a running state.
kubectl create -f nodeport-deployment.yaml
Verify the deployment “deployed”
kubectl get deployment k8s-bigip-ctlr --namespace kube-system
To locate on which node the CIS service is running, you can use the following command:
kubectl get pods -o wide -n kube-system
We can see that our container is running on kube-node2 below.
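If you want the node name, or the full pod name you will need for `kubectl logs` shortly, in a variable rather than reading it off the screen, the wide output can be parsed with awk. The table below is an illustrative stand-in for real `kubectl get pods -o wide` output (the IP and ages are made up):

```shell
# Illustrative sample of `kubectl get pods -o wide -n kube-system` output.
sample='NAME                              READY   STATUS    RESTARTS   AGE   IP           NODE         NOMINATED NODE   READINESS GATES
k8s-bigip-ctlr-7469c978f9-6hvbv   1/1     Running   0          1m    10.244.2.5   kube-node2   <none>           <none>'
# Pod name is field 1, node is field 7.
echo "$sample" | awk '/k8s-bigip-ctlr/ { print $1, $7 }'
```

With a live cluster you can fetch the node directly, e.g. `kubectl get pods -n kube-system -l app=k8s-bigip-ctlr -o jsonpath='{.items[0].spec.nodeName}'`.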
Troubleshooting¶
If you need to troubleshoot your container, you have two ways to check the logs: the kubectl command or the docker command.
Attention
Depending on your deployment, CIS can be running on either kube-node1 or kube-node2. In our example above it’s running on kube-node2
Using the kubectl command: you need to use the full name of your pod as shown in the previous image.
# For example:
kubectl logs k8s-bigip-ctlr-7469c978f9-6hvbv -n kube-system
Using the docker logs command: from the previous check we know the container is running on kube-node2. From your current session on kube-master1, SSH to kube-node2 first and then run the docker command:
Important
Be sure to check which Node your “connector” is running on.
# If directed to, accept the authenticity of the host by typing "yes" and hitting Enter to continue.
ssh kube-node2
sudo docker ps
Here we can see our container ID is “e7f69e3ad5c6”
Now we can check our container logs:
sudo docker logs e7f69e3ad5c6
Important
The log messages here are identical to the log messages displayed in the previous kubectl logs command.
Exit kube-node2 back to kube-master1
exit
You can connect to your container with kubectl as well. This is something not typically needed but support may direct you to do so.
Important
Be sure the previous command to exit kube-node2 back to kube-master1 was successful.
kubectl exec -it k8s-bigip-ctlr-7469c978f9-6hvbv -n kube-system -- /bin/sh
cd /app
ls -la
exit