Install the BIG-IP Controller: Kubernetes

Use a Deployment to install the BIG-IP Controller for Kubernetes.

If you use Helm, you can use the f5-bigip-ctlr chart to create and manage the resources described below.
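For example, a minimal sketch of a Helm-based install, assuming F5's public charts repository and placeholder release and values-file names (the exact value names the chart expects are defined by the chart itself):

# Add F5's stable Helm chart repository
helm repo add f5-stable https://f5networks.github.io/charts/stable
helm repo update

# Helm 3 syntax; with Helm 2, use "helm install --name bigip-ctlr ..."
# my-values.yaml supplies the chart's required settings (BIG-IP URL,
# partition, login Secret, and so on; see the chart's values reference)
helm install bigip-ctlr f5-stable/f5-bigip-ctlr -f my-values.yaml -n kube-system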

Attention

These instructions are for a standard Kubernetes environment. If you are using OpenShift, see Install the BIG-IP Controller for Kubernetes in OpenShift Origin.

Task Summary

  1. Initial Setup
  2. Set up RBAC Authentication
  3. Create a Deployment
  4. Upload the Resources to the Kubernetes API server
  5. Verify Pod creation

Initial Setup

Important

The steps in this section require either Administrator or Resource Administrator permissions on the BIG-IP system.

  1. If you want to use BIG-IP High Availability (HA), set up two or more F5 BIG-IPs in a Device Service Cluster (DSC).

  2. Create a new partition on your BIG-IP system.

    Note

    • The BIG-IP Controller cannot manage objects in the /Common partition.
    • [Optional] The Controller can declare the IP addresses it configures on the BIG-IP with a Route Domain identifier. You may want to use route domains if you have many applications using the same IP address space that need isolation from one another. After you create the partition on your BIG-IP system, you can 1) create a route domain and 2) assign the route domain as the partition’s default. See create and set a non-zero default Route Domain for a partition for setup instructions.
    • [Optional] If you’re using a BIG-IP HA pair or cluster, sync your changes across the group.
  3. Store your BIG-IP login credentials in a Secret.

  4. If you need to pull the k8s-bigip-ctlr image from a private Docker registry, store your Docker login credentials as a Secret. (Example commands for steps 2-4 follow this list.)
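A sketch of the commands for steps 2-4, assuming the partition is named kubernetes, the login Secret is named bigip-login, and the registry Secret is named f5-docker-images (all of these names are placeholders):

# On the BIG-IP system: create the partition the Controller will manage
tmsh create auth partition kubernetes

# In Kubernetes: store the BIG-IP login credentials in a Secret
kubectl create secret generic bigip-login -n kube-system \
  --from-literal=username=admin --from-literal=password=<password>

# [Optional] Store private Docker registry credentials in a Secret
kubectl create secret docker-registry f5-docker-images -n kube-system \
  --docker-server=<registry> --docker-username=<user> \
  --docker-password=<password> --docker-email=<email>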

Set up RBAC Authentication

  1. Create a Service Account for the BIG-IP Controller.

    kubectl create serviceaccount bigip-ctlr -n kube-system
    serviceaccount "bigip-ctlr" created
    
  2. Create a Cluster Role and Cluster Role Binding.

    Important

    You can substitute a Role and RoleBinding if your Controller doesn’t need access to the entire Cluster.

    The example below shows the broadest supported permission set. You can narrow the permissions down to specific resources, namespaces, etc. to suit your needs; a namespace-scoped Role and RoleBinding sketch appears after the example. See the Kubernetes RBAC documentation for more information.

# for use in k8s clusters only
# for OpenShift, use the OpenShift-specific examples
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: bigip-ctlr-clusterrole
rules:
- apiGroups: ["", "extensions"]
  resources: ["nodes", "services", "endpoints", "namespaces", "ingresses", "pods"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["", "extensions"]
  resources: ["configmaps", "events", "ingresses/status"]
  verbs: ["get", "list", "watch", "update", "create", "patch"]
- apiGroups: ["", "extensions"]
  resources: ["secrets"]
  resourceNames: ["<secret-containing-bigip-login>"]
  verbs: ["get", "list", "watch"]

---

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: bigip-ctlr-clusterrole-binding
  namespace: <controller_namespace>
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: bigip-ctlr-clusterrole
subjects:
- apiGroup: ""
  kind: ServiceAccount
  name: bigip-ctlr
  namespace: <controller_namespace>

f5-k8s-sample-rbac.yaml
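If you narrow the permissions to a single namespace, a minimal sketch of an equivalent Role and RoleBinding follows. Note that nodes and namespaces are cluster-scoped and cannot be granted through a namespaced Role, so they are omitted here; the namespace placeholders and rule lists are assumptions you should adjust to match the resources your Controller actually watches.

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: bigip-ctlr-role
  namespace: <watched_namespace>
rules:
- apiGroups: ["", "extensions"]
  resources: ["services", "endpoints", "ingresses", "pods"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["", "extensions"]
  resources: ["configmaps", "events", "ingresses/status"]
  verbs: ["get", "list", "watch", "update", "create", "patch"]
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["<secret-containing-bigip-login>"]
  verbs: ["get", "list", "watch"]

---

kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: bigip-ctlr-role-binding
  namespace: <watched_namespace>
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: bigip-ctlr-role
subjects:
- kind: ServiceAccount
  name: bigip-ctlr
  namespace: <controller_namespace>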

Create a Deployment

Define a Kubernetes Deployment using valid YAML or JSON. See the k8s-bigip-ctlr configuration parameters reference for all supported configuration options.

Danger

Do not increase the replica count in the Deployment. Running duplicate Controller instances may cause errors and/or service interruptions.

Important

The BIG-IP Controller requires Administrator permissions in order to provide full functionality.

Basic Deployment

The example below shows a Deployment with the basic config parameters required to run the BIG-IP Controller in Kubernetes.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: k8s-bigip-ctlr-deployment
  namespace: kube-system
spec:
  # DO NOT INCREASE REPLICA COUNT
  replicas: 1
  template:
    metadata:
      name: k8s-bigip-ctlr
      labels:
        app: k8s-bigip-ctlr
    spec:
      # Name of the Service Account bound to a Cluster Role with the required
      # permissions
      serviceAccountName: bigip-ctlr
      containers:
        - name: k8s-bigip-ctlr
          image: "f5networks/k8s-bigip-ctlr"
          env:
            - name: BIGIP_USERNAME
              valueFrom:
                secretKeyRef:
                  # Replace with the name of the Secret containing your login
                  # credentials
                  name: bigip-login
                  key: username
            - name: BIGIP_PASSWORD
              valueFrom:
                secretKeyRef:
                  # Replace with the name of the Secret containing your login
                  # credentials
                  name: bigip-login
                  key: password
          command: ["/app/bin/k8s-bigip-ctlr"]
          args: [
            # See the k8s-bigip-ctlr documentation for information about
            # all config options
            # https://clouddocs.f5.com/products/connectors/k8s-bigip-ctlr/latest
            "--bigip-username=$(BIGIP_USERNAME)",
            "--bigip-password=$(BIGIP_PASSWORD)",
            "--bigip-url=<ip_address-or-hostname>",
            "--bigip-partition=<name_of_partition>",
            "--pool-member-type=nodeport"
            ]
      imagePullSecrets:
        # Secret that gives access to a private docker registry
        - name: f5-docker-images
        # Secret containing the BIG-IP system login credentials
        - name: bigip-login

f5-k8s-bigip-ctlr_basic.yaml

Run Health Checks

Kubernetes has two types of health checks:

  • Readiness Probes: To determine when a pod is ready
  • Liveness Probes: To determine when a pod is healthy or unhealthy after it has become ready

Readiness Probes
Kubernetes uses readiness probes to decide when a container is available to accept traffic. The readiness probe determines which Pods serve as backends for a Service: a Pod is considered ready when all of its containers are ready, and a Pod that is not ready is removed from Service load balancers. For example, if a container loads a large cache at start-up and takes minutes to become ready, requests should not be sent to it until it is ready; they should instead be routed to other Pods that can service them.

Liveness Probes
Kubernetes uses liveness probes to know when to restart a container. If a container is unresponsive (for example, the application is deadlocked by a multi-threading defect), restarting the container can make the application available again.

These are the methods you can use to check container status:

  • HTTP request to the pod
  • Command execution to the pod
  • TCP request to the pod

Probes are defined on a container in a deployment.
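The HTTP method is shown in the full example further below. For reference, minimal sketches of the command-execution and TCP variants (the command, file path, port, and timings here are illustrative only):

# Command execution: the probe succeeds if the command exits with status 0
livenessProbe:
   exec:
      command: ["cat", "/tmp/healthy"]
   initialDelaySeconds: 15
   periodSeconds: 15

# TCP: the probe succeeds if the kubelet can open a socket to the given port
readinessProbe:
   tcpSocket:
      port: 8080
   initialDelaySeconds: 15
   periodSeconds: 15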

The deployment example below uses the HTTP method. These parameters control probe behavior:

  • periodSeconds: how often (in seconds) the kubelet performs the probe.
  • initialDelaySeconds: how long the kubelet waits after the container starts before performing the first probe.
  • timeoutSeconds: how long to wait for the probe to finish. If this time is exceeded, Kubernetes considers the probe to have failed.

To perform a probe, the kubelet sends an HTTP GET request to the server running in the container and listening on port 8080. The check is successful if the handler for the server's /health path returns a response code between 200 and 399. For example:

livenessProbe:
   failureThreshold: 3
   httpGet:
      path: /health
      port: 8080
      scheme: HTTP
   initialDelaySeconds: 15
   periodSeconds: 15
   successThreshold: 1
   timeoutSeconds: 15
readinessProbe:
   failureThreshold: 3
   httpGet:
      path: /health
      port: 8080
      scheme: HTTP
   initialDelaySeconds: 30
   periodSeconds: 30
   successThreshold: 1
   timeoutSeconds: 15

To view the liveness and readiness of the deployed Pod, run kubectl describe pod <pod_name> -n kube-system. Here is example output:

   Args:
      --log-level=debug
      --namespace=default
      --route-label=systest
      --insecure=true
      --agent=cccl
   Liveness:      http-get http://:8080/health delay=15s timeout=15s period=15s #success=1 #failure=3
   Readiness:     http-get http://:8080/health delay=30s timeout=15s period=30s #success=1 #failure=3
   Environment:   <none>
   Mounts:        <none>
Volumes:          <none>

Running curl http://<self-ip>:<port>/health returns a response of OK when the Controller is healthy.

Deployments for flannel BIG-IP Integrations

If your BIG-IP device connects to the Cluster network via flannel VXLAN, you must include the following in your Deployment (an example args snippet appears after this list):

  • --pool-member-type=cluster (See Cluster mode for more information.)
  • --flannel-name=/Common/tunnel_name

Download example Deployment with flannel-name defined
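A minimal sketch of the corresponding args section, assuming a flannel VXLAN tunnel named /Common/fl-vxlan on the BIG-IP (the tunnel name and the other placeholder values are yours to replace):

args: [
  "--bigip-username=$(BIGIP_USERNAME)",
  "--bigip-password=$(BIGIP_PASSWORD)",
  "--bigip-url=<ip_address-or-hostname>",
  "--bigip-partition=<name_of_partition>",
  "--pool-member-type=cluster",
  "--flannel-name=/Common/fl-vxlan"
  ]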

Use BIG-IP SNAT Pools and SNAT automap

Note

By default, the BIG-IP Controller uses BIG-IP Automap SNAT for all of the virtual servers it creates. From k8s-bigip-ctlr v1.5.0 forward, you can designate a specific SNAT pool in the Controller Deployment instead of using SNAT automap.

In environments where the BIG-IP connects to the Cluster network, the self IP used as the BIG-IP VTEP serves as the SNAT pool for all origin addresses within the Cluster. The subnet mask you provide when you create the self IP defines the addresses available to the SNAT pool.

See BIG-IP SNATs and SNAT automap for more information.

To use a specific SNAT pool, add the following to the args section of any k8s-bigip-ctlr Deployment:

"--vs-snat-pool-name=<snat-pool>"

Replace <snat-pool> with the name of any SNAT pool that already exists in the /Common partition on the BIG-IP device. The BIG-IP Controller cannot define a new SNAT pool for you.
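For example, you could create the SNAT pool on the BIG-IP with tmsh before referencing it (the pool name and member addresses below are placeholders):

# On the BIG-IP system: create a SNAT pool in the /Common partition
tmsh create ltm snatpool k8s_snatpool members add { 10.10.10.1 10.10.10.2 }

You would then pass "--vs-snat-pool-name=k8s_snatpool" in the Controller's args.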

Download example Deployment with vs-snat-pool-name defined

Upload the Resources to the Kubernetes API server

Upload the Deployment, Cluster Role, and Cluster Role Binding to the Kubernetes API server using kubectl apply.

kubectl apply -f f5-k8s-bigip-ctlr_basic.yaml -f f5-k8s-sample-rbac.yaml [-n kube-system]
deployment "k8s-bigip-ctlr-deployment" created
clusterrole "bigip-ctlr-clusterrole" created
clusterrolebinding "bigip-ctlr-clusterrole-binding" created

Verify Pod creation

Use the kubectl get command to verify that the k8s-bigip-ctlr Pod launched successfully.

kubectl get pods -n kube-system
NAME                                  READY     STATUS    RESTARTS   AGE
k8s-bigip-ctlr-331478340-ke0h9        1/1       Running   0          1h
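If the Pod is not in the Running state, or you want to confirm that the Controller connected to the BIG-IP system, check its logs (the Pod name below matches the example output above):

kubectl logs k8s-bigip-ctlr-331478340-ke0h9 -n kube-system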

Note

If you use flannel and added your BIG-IP device to the Cluster network, you should now be able to send traffic through the BIG-IP system to and from endpoints within your Cluster.

What’s Next