F5 Container Integrations v1.3


Deploy the ASP Controller - Kubernetes

Summary

The ASP Controller for Kubernetes (f5-kube-proxy) is a container-based application that runs in a Pod on each Node in a Kubernetes Cluster. It takes the place of the standard Kubernetes kube-proxy component.

For Kubernetes v1.6.x and higher, see Patch kube-proxy daemonset with f5-kube-proxy.

For Kubernetes v1.4.x, see Replace kube-proxy with f5-kube-proxy in the Pod Manifests.
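
If you are not sure which Kubernetes version the cluster runs, kubectl can report it; the Server Version line in the output tells you which of the two procedures below applies.

    kubectl version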

Patch kube-proxy daemonset with f5-kube-proxy

  • You can patch the generic kube-proxy daemonset with the JSON block shown below. The updateStrategy section rolls the patch out across the daemonset; set maxUnavailable to a number higher than the number of nodes in the cluster so the patch applies to every node in one pass.

    {
      "spec": {
        "template": {
          "spec": {
            "containers": [
              {
                "name": "kube-proxy",
                "image": "f5networks/f5-kube-proxy:1.1.0",
                "volumeMounts": [
                  {
                    "mountPath": "/var/run/kubernetes/proxy-plugin",
                    "name": "plugin-config"
                  }
                ]
              }
            ],
            "volumes": [
              {
                "name": "plugin-config",
                "hostPath": {
                  "path": "/var/run/kubernetes/proxy-plugin",
                  "type": "DirectoryOrCreate"
                }
              }
            ]
          }
        },
        "updateStrategy": {
          "type": "RollingUpdate",
          "rollingUpdate": {
            "maxUnavailable": 10
          }
        }
      }
    }
    
  • Run the following command, replacing <JSON patch> with the JSON block above passed as a single string.

    kubectl patch daemonset kube-proxy -n kube-system -p <JSON patch>
    
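  • For example, you can save the JSON block to a file (the name f5-kube-proxy-patch.json used here is only illustrative) and let the shell pass its contents to kubectl:

    # apply the patch from a file
    kubectl patch daemonset kube-proxy -n kube-system -p "$(cat f5-kube-proxy-patch.json)"

    # confirm the daemonset template now references the f5-kube-proxy image
    kubectl get daemonset kube-proxy -n kube-system \
      -o jsonpath='{.spec.template.spec.containers[0].image}'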

Replace kube-proxy with f5-kube-proxy in the Pod Manifests

Important

Kubernetes “master” and “worker” nodes have distinct Pod Manifests. You need to update both to use f5-kube-proxy.

The CoreOS on Kubernetes Getting Started Guide provides instructions for setting up kube-proxy on master and worker nodes.

SSH to a node and edit the kube-proxy manifest
ssh core@172.16.1.21
Last login: Fri Feb 17 18:33:35 UTC 2017 from 172.16.1.20 on pts/0
CoreOS alpha (1185.3.0)
Update Strategy: No Reboots
core@k8s-worker-0 ~ $ sudo su
k8s-worker-0 core # vim /etc/kubernetes/manifests/kube-proxy.yaml
  1. Edit the kube-proxy manifest on each node to match the manifest examples; a quick check for confirming that a node picked up the change follows this list.

    The key additions/changes are:

    Change the command to /proxy in the worker pod manifest(s).
    spec:
      containers:
        command:
        - /proxy
    
    Replace the image with the f5-kube-proxy image in both master and worker manifests.
    spec:
      containers:
        image: f5networks/f5-kube-proxy:1.0.0
    
    Add a new mountPath to the volumeMounts section in both master and worker manifests.
    spec:
      containers:
        volumeMounts:
          ...
          - mountPath: /var/run/kubernetes/proxy-plugin
            name: plugin-config
            readOnly: false
    
    Add plugin-config to the volumes section in both master and worker manifests.
    spec:
      volumes:
        ...
        - name: plugin-config
          hostPath:
            path: /var/run/kubernetes/proxy-plugin
    

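Because kube-proxy runs as a static Pod defined by this manifest, the kubelet restarts it with the new settings as soon as you save your edits. To confirm a node picked up the new image, a check along these lines may help (it assumes Docker is the container runtime, as on the CoreOS nodes shown above):

    # on the node, after saving the edited manifest (assumes the Docker runtime)
    docker ps --format '{{.Image}}\t{{.Names}}' | grep kube-proxy
    # the output should list the f5networks/f5-kube-proxy image
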
Examples

kube-proxy manifest on MASTER node
apiVersion: v1
kind: Pod
metadata:
  name: kube-proxy
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-proxy
    image: f5networks/f5-kube-proxy:1.0.0
    # change the args below if your master kube-proxy settings differ
    # from those shown here
    command:
    - /proxy
    - --master=http://127.0.0.1:8080
    - --proxy-mode=iptables
    securityContext:
      privileged: true
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ssl-certs-host
      readOnly: true
    # add this volumeMount
    - mountPath: /var/run/kubernetes/proxy-plugin
      name: plugin-config
  volumes:
  - hostPath:
      path: /usr/share/ca-certificates
    name: ssl-certs-host
  # add this volume
  - name: plugin-config
    hostPath:
      path: /var/run/kubernetes/proxy-plugin

f5-kube-proxy-manifest-master.yaml

kube-proxy manifest on WORKER node
apiVersion: v1
kind: Pod
metadata:
  name: kube-proxy
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-proxy
    image: f5networks/f5-kube-proxy:1.0.0
    # replace the args from the original kube-proxy config with those below
    command:
    - /proxy
    # IP address of the master node
    - --master=https://172.16.1.19
    - --proxy-mode=iptables
    - --kubeconfig=/etc/kubernetes/worker-kubeconfig.yaml
    securityContext:
      privileged: true
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ssl-certs-host
      readOnly: true
    - mountPath: /etc/kubernetes/worker-kubeconfig.yaml
      name: "kubeconfig"
      readOnly: true
    - mountPath: /etc/kubernetes/ssl
      name: "etc-kube-ssl"
      readOnly: true
    # add this volumeMount
    - mountPath: /var/run/kubernetes/proxy-plugin
      name: plugin-config
  volumes:
  - hostPath:
      path: /usr/share/ca-certificates
    name: ssl-certs-host
  - name: "kubeconfig"
    hostPath:
      path: "/etc/kubernetes/worker-kubeconfig.yaml"
  - name: "etc-kube-ssl"
    hostPath:
      path: "/etc/kubernetes/ssl"
  # add this volume
  - name: plugin-config
    hostPath:
      path: /var/run/kubernetes/proxy-plugin

f5-kube-proxy-manifest-worker.yaml
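
After the manifests on all master and worker nodes have been updated, one way to spot-check the rollout from a machine with kubectl access is to list the kube-proxy mirror Pods and inspect the image they run:

    # list the kube-proxy mirror Pods and the nodes they run on
    kubectl get pods -n kube-system -o wide | grep kube-proxy

    # inspect one of them; the Image line should show f5networks/f5-kube-proxy
    kubectl describe pod <kube-proxy pod name> -n kube-system | grep Image: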