
F5 Kubernetes Container Integration

Overview

The F5 Kubernetes Container Integration consists of the F5 Kubernetes BIG-IP Controller and the F5 Application Services Proxy (ASP).

The F5 Kubernetes BIG-IP Controller configures BIG-IP Local Traffic Manager (LTM) objects for applications in a Kubernetes cluster, serving North-South traffic.

The F5 Application Services Proxy provides load balancing and telemetry for containerized applications, serving East-West traffic.

F5 Container Solution for Kubernetes

General Prerequisites

The F5 Kubernetes Integration’s documentation set assumes that you:

[1] Not required for the F5 Application Services Proxy and ASP controllers (f5-kube-proxy, marathon-asp-ctlr).

F5 Application Services Proxy

The F5 Application Services Proxy (ASP) provides container-to-container load balancing, traffic visibility, and inline programmability for applications. Its light form factor allows for rapid deployment in datacenters and across cloud services. The ASP integrates with container environment management and orchestration systems and enables application delivery service automation.

The F5 Application Services Proxy collects traffic statistics for the Services it load balances; these stats are either logged locally or sent to an external analytics application. You can set the location and type of the analytics application in the stats section of the Service annotation.
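For example, the stats configuration can be attached to a Service as an annotation. The sketch below points stats at a statsd-style sink; the annotation key, field names, and address are illustrative assumptions — check the ASP documentation for the exact schema:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp
  annotations:
    # Assumed annotation key and field names; verify against the ASP docs.
    asp.f5.com/config: |
      {
        "stats": {
          "type": "statsd",
          "url": "udp://10.0.0.50:8125"
        }
      }
spec:
  selector:
    app: myapp
  ports:
  - port: 80
```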

F5 Kubernetes Proxy

The F5 Kubernetes Proxy – f5-kube-proxy – replaces the standard Kubernetes network proxy, kube-proxy. The asp and f5-kube-proxy work together to proxy traffic for Kubernetes Services.

By default, the f5-kube-proxy forwards traffic to ASP on port 10000. You can change this, if needed, to avoid port conflicts. See the f5-kube-proxy product documentation for more information.

F5 Kubernetes BIG-IP Controller

The F5 Kubernetes BIG-IP Controller is a Docker container that runs in a Kubernetes Pod. To launch the k8s-bigip-ctlr application in Kubernetes, create a Deployment.
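A minimal Deployment sketch follows; the BIG-IP management address, partition name, and Secret name are placeholders, and the flags shown assume the standard k8s-bigip-ctlr options:

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: k8s-bigip-ctlr
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: k8s-bigip-ctlr
    spec:
      containers:
      - name: k8s-bigip-ctlr
        image: f5networks/k8s-bigip-ctlr
        env:
        # BIG-IP credentials pulled from a Kubernetes Secret (Secret name is a placeholder).
        - name: BIGIP_USERNAME
          valueFrom:
            secretKeyRef:
              name: bigip-login
              key: username
        - name: BIGIP_PASSWORD
          valueFrom:
            secretKeyRef:
              name: bigip-login
              key: password
        command: ["/app/bin/k8s-bigip-ctlr"]
        args:
        - "--bigip-url=https://192.0.2.10"      # placeholder management address
        - "--bigip-username=$(BIGIP_USERNAME)"
        - "--bigip-password=$(BIGIP_PASSWORD)"
        - "--bigip-partition=kubernetes"
        - "--pool-member-type=nodeport"
```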

Once the k8s-bigip-ctlr Pod is running, it watches the Kubernetes API for special Kubernetes “F5 Resource” ConfigMaps. These ConfigMaps contain an F5 Resource JSON blob that tells k8s-bigip-ctlr:

  • what Kubernetes Service we want it to manage, and
  • what BIG-IP LTM objects we want to create for that specific Service.

When the k8s-bigip-ctlr discovers new or updated virtual server F5 Resource ConfigMaps, it dynamically applies the desired settings to the BIG-IP device.
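A sketch of a virtual-server F5 Resource ConfigMap; the names, addresses, and schema version are placeholders, while the overall shape — an f5type label plus schema and data keys wrapping frontend and backend — follows the description above:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-vs
  namespace: default
  labels:
    f5type: virtual-server
data:
  schema: "f5schemadb://bigip-virtual-server_v0.1.2.json"  # schema version is a placeholder
  data: |
    {
      "virtualServer": {
        "frontend": {
          "partition": "kubernetes",
          "mode": "http",
          "balance": "round-robin",
          "virtualAddress": {
            "bindAddr": "192.0.2.240",
            "port": 80
          }
        },
        "backend": {
          "serviceName": "myapp",
          "servicePort": 80
        }
      }
    }
```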

You can use the F5 Kubernetes BIG-IP Controller to:

Key Kubernetes Concepts

Namespaces

Note

New in k8s-bigip-ctlr v1.1.0-beta.1.

See the k8s-bigip-ctlr beta documentation for more information.

The Kubernetes namespace allows you to create and manage multiple environments within a cluster. The F5 Kubernetes BIG-IP Controller can watch all namespaces, a single namespace, or any subset of namespaces in between.

When creating a BIG-IP front-end virtual server for a Kubernetes Service, you can:

  • specify a single namespace to watch;
  • specify multiple namespaces (pass in each as a separate flag); or
  • specify no namespace, meaning you want to watch all namespaces; this is the default setting as of k8s-bigip-ctlr v1.1.0-beta.1.
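In practice, these options are passed as arguments in the controller’s Deployment. A fragment, assuming the --namespace flag name:

```yaml
args:
- "--namespace=dev"    # watch the dev namespace...
- "--namespace=test"   # ...and the test namespace; omit the flag entirely to watch all namespaces
```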

F5 Resource Properties

The F5 Kubernetes BIG-IP Controller uses special ‘F5 Resources’ to identify what BIG-IP LTM objects it should create. An F5 resource is a JSON blob included in a Kubernetes ConfigMap.

The virtual server F5 Resource JSON blob must contain the following properties.

Property        Description
f5type          a label defining the type of resource to create on the BIG-IP; e.g., f5type: virtual-server
schema          identifies the schema k8s-bigip-ctlr uses to interpret the encoded data
data            a JSON blob
data.frontend   a subset of data; defines the BIG-IP LTM objects to create
data.backend    a subset of data; identifies the Kubernetes Service to proxy

The frontend property defines how to expose a Service on a BIG-IP device. You can define frontend using the standard k8s-bigip-ctlr virtualServer parameters or the k8s-bigip-ctlr iApp parameters.

The frontend iApp configuration parameters include a set of customizable iappVariables parameters. These custom user-defined parameters must correspond to fields in the iApp template you want to launch. In addition, you’ll need to define the iApp Pool Member Table that the iApp creates on the BIG-IP device.
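A sketch of a frontend that launches an iApp; the template path, pool member table columns, and variable names below are illustrative assumptions and must correspond to the iApp template you actually deploy:

```json
"frontend": {
  "partition": "kubernetes",
  "iapp": "/Common/f5.http",
  "iappPoolMemberTable": {
    "name": "pool__members",
    "columns": [
      {"name": "addr", "kind": "IPAddress"},
      {"name": "port", "kind": "Port"},
      {"name": "connection_limit", "value": "0"}
    ]
  },
  "iappVariables": {
    "monitor__monitor": "/#create_new#",
    "pool__addr": "192.0.2.240",
    "pool__port": "80"
  }
}
```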

The backend property identifies the Kubernetes Service that makes up the server pool. You can also define health monitors for your BIG-IP LTM virtual server(s) and pool(s) in this section.
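For example, a backend section with a health monitor attached — a sketch; the monitor fields follow the common protocol/interval/timeout/send pattern but should be checked against the virtual-server schema:

```json
"backend": {
  "serviceName": "myapp",
  "servicePort": 80,
  "healthMonitors": [
    {
      "protocol": "http",
      "interval": 30,
      "timeout": 20,
      "send": "GET /\r\n"
    }
  ]
}
```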

Kubernetes and OpenShift Origin

See F5 OpenShift Origin Integration.

Monitors and Node Health

When the F5 Kubernetes BIG-IP Controller runs with pool-member-type set to nodeport – the default setting – the k8s-bigip-ctlr is not aware when Kubernetes nodes go down, so all pool members on a downed Kubernetes node remain active on the BIG-IP. When using nodeport mode, it’s important to configure a BIG-IP health monitor for the virtual server so the BIG-IP marks a Kubernetes node as unhealthy when it’s rebooting or otherwise unavailable.