F5 Container Integrations v1.3


F5 Container Integration - Kubernetes

This document provides general information regarding the F5 Integration for Kubernetes. For deployment and usage instructions, please refer to the guides below.


The F5 Container Integration for Kubernetes consists of the BIG-IP Controller for Kubernetes and the Application Services Proxy (ASP).

The BIG-IP Controller for Kubernetes configures BIG-IP Local Traffic Manager (LTM) objects for applications in a Kubernetes cluster, serving North-South traffic.

The Application Services Proxy provides load balancing and telemetry for containerized applications, serving East-West traffic.

F5 Container Solution for Kubernetes

General Prerequisites

The F5 Integration for Kubernetes documentation set assumes that you:

  • already have a Kubernetes cluster running;
  • are familiar with the Kubernetes dashboard and kubectl;
  • already have a BIG-IP device licensed and provisioned for your requirements; [1] and
  • are familiar with BIG-IP LTM concepts and tmsh commands. [1]
[1] Not required for the Application Services Proxy (ASP) or the ASP Controller (f5-kube-proxy).


When using the BIG-IP Controller in OpenShift, make sure your BIG-IP license includes SDN services.

Application Services Proxy

The Application Services Proxy (ASP) provides container-to-container load balancing, traffic visibility, and inline programmability for applications. Its light form factor allows for rapid deployment in datacenters and across cloud services. The ASP integrates with container environment management and orchestration systems and enables application delivery service automation.


In Kubernetes, the ASP runs as a forward, or client-side, proxy.

Ephemeral store

The ASP ephemeral store is a distributed, in-memory, secure key-value store. It allows ASP instances to share non-persistent, or ephemeral, data.

Health monitors

The ASP health monitor detects endpoint health using both active and passive checks. The ASP adds and removes endpoints from load balancing pools based on the health status determined by these checks. The ASP’s health monitor enhances Kubernetes’ native “liveness probes” as follows:

  • provides a network view of service health;
  • adds/removes endpoints from load balancing pool automatically based on health status;
  • provides opportunistic health checks by observing client traffic;
  • combines data from various health check types – passive and active – to provide a more comprehensive view of endpoints’ health status.


The Application Services Proxy collects traffic statistics for the Services it load balances. These stats are either logged locally or sent to an external analytics application, like Splunk. Use the ASP stats configuration parameters to set the location and type of the analytics application in the ASP ConfigMap.
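As a rough sketch, a stats section in the ASP ConfigMap might look like the following. The data key and field names shown here (asp.config.json, stats, type, backend) are illustrative assumptions, not the authoritative parameter names; check the ASP stats configuration reference for the exact keys before use.

```yaml
# Hypothetical ASP ConfigMap sketch pointing stats at a Splunk endpoint.
# Key and field names are illustrative; verify them against the
# ASP configuration reference for your version.
kind: ConfigMap
apiVersion: v1
metadata:
  name: asp-config
  namespace: kube-system
data:
  asp.config.json: |
    {
      "global": { "console-log-level": "info" },
      "stats": {
        "type": "splunk",
        "backend": "https://splunk.example.com:8088"
      }
    }
```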


The ASP Controller for Kubernetes – f5-kube-proxy – replaces the standard Kubernetes network proxy, or kube-proxy.

The ASP and f5-kube-proxy work together to proxy traffic for Kubernetes Services.


By default, the f5-kube-proxy forwards traffic to ASP on port 10000. You can change this, if needed, to avoid port conflicts.

See the f5-kube-proxy reference documentation for more information.

BIG-IP Controller for Kubernetes

The BIG-IP Controller for Kubernetes is a Docker container that runs in a Kubernetes Pod. You can launch the k8s-bigip-ctlr application in Kubernetes using a Deployment.
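A minimal Deployment for launching the Controller might look like the sketch below. The image name and flags follow the published k8s-bigip-ctlr conventions, but the exact flag set, image tag, and Deployment apiVersion depend on your Controller and cluster versions, so verify them against the install guide; the BIG-IP address and credentials are placeholders.

```yaml
# Illustrative Deployment for the k8s-bigip-ctlr container.
# Verify image tag, apiVersion, and flag names for your versions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-bigip-ctlr
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: k8s-bigip-ctlr
  template:
    metadata:
      labels:
        app: k8s-bigip-ctlr
    spec:
      containers:
      - name: k8s-bigip-ctlr
        image: f5networks/k8s-bigip-ctlr
        command: ["/app/bin/k8s-bigip-ctlr"]
        args:
        - --bigip-url=https://10.190.25.10   # placeholder BIG-IP address
        - --bigip-username=admin             # placeholder credentials
        - --bigip-password=admin
        - --bigip-partition=kubernetes
        - --pool-member-type=nodeport
```

In practice, pass the BIG-IP credentials in from a Kubernetes Secret rather than as plain args.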

Once the BIG-IP Controller Pod is running, it watches the Kubernetes API for special “F5 Resource” ConfigMaps. These ConfigMaps contain an F5 Resource JSON blob that tells the BIG-IP Controller:

  • what Service it should manage, and
  • what objects it should create/update on the BIG-IP system for that Service.

When the BIG-IP Controller discovers new or updated virtual server or iApp F5 Resource ConfigMaps, it configures the BIG-IP system accordingly.


  • The BIG-IP Controller for Kubernetes cannot manage objects in the /Common partition.
  • The BIG-IP partition you want to manage must exist before you launch the BIG-IP Controller.
  • The BIG-IP Controller for Kubernetes does not create or destroy BIG-IP partitions.
  • You can use multiple BIG-IP Controller instances to manage separate BIG-IP partitions.
  • You can create one (1) BIG-IP virtual server per Service port. Create a separate virtual server F5 Resource ConfigMap for each Service port you wish to expose.
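Concretely, a virtual server F5 Resource ConfigMap has the general shape sketched below. The schema version in the f5schemadb URL and the bindAddr value are placeholders; pick the schema version from the F5 schema compatibility table for your Controller release.

```yaml
# Illustrative virtual server F5 Resource ConfigMap.
# Schema version and IP address are placeholders.
kind: ConfigMap
apiVersion: v1
metadata:
  name: myservice-vs
  namespace: default
  labels:
    f5type: virtual-server
data:
  schema: "f5schemadb://bigip-virtual-server_v0.1.2.json"
  data: |
    {
      "virtualServer": {
        "backend": {
          "serviceName": "myService",
          "servicePort": 3000
        },
        "frontend": {
          "partition": "kubernetes",
          "virtualAddress": {
            "bindAddr": "10.190.25.70",
            "port": 80
          }
        }
      }
    }
```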


Key Kubernetes Concepts

Cluster Network

The basic assumption of the Kubernetes Cluster Network is that pods can communicate with other pods, regardless of what host they’re on. You have a few different options when connecting your BIG-IP device (platform or Virtual Edition) to a Kubernetes cluster network and the BIG-IP Controller. How (or whether) you choose to integrate your BIG-IP device into the cluster network – and the framework you use – impacts how the BIG-IP system forwards traffic to your Kubernetes Services.

See Nodeport mode vs Cluster mode for more information.


Namespaces

The Kubernetes Namespace allows you to create and manage multiple cluster environments. The BIG-IP Controller for Kubernetes can watch all namespaces, a single namespace, or any set of namespaces in between.

When deploying the BIG-IP Controller, you can:

  • specify a single namespace to watch (this is the only supported mode in k8s-bigip-ctlr v1.0.0);
  • specify multiple namespaces by passing each in as a separate flag; or
  • watch all namespaces (by omitting the namespace flag); this is the default setting as of k8s-bigip-ctlr v1.1.0.
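These options map onto the Controller's --namespace flag. The three cases can be sketched as Deployment args fragments like the following; treat the exact syntax as illustrative and confirm it against the k8s-bigip-ctlr reference documentation.

```yaml
# Watch a single namespace:
args: ["--namespace=default"]

# Watch several namespaces (one flag per namespace):
args: ["--namespace=dev", "--namespace=test"]

# Watch all namespaces: omit --namespace entirely
# (the default as of k8s-bigip-ctlr v1.1.0).
```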

F5 Resource Properties

The BIG-IP Controller for Kubernetes uses special ‘F5 Resources’ to identify what BIG-IP objects it should create. An F5 resource is a JSON blob defined in a Kubernetes ConfigMap.

An F5 Resource JSON blob may contain the properties shown below.

F5 Resource properties
Property          Description                                                Required
f5type            A label property watched by the BIG-IP Controller.         Optional
schema            The schema the BIG-IP Controller uses to interpret
                  the encoded data. [2]                                      Required
data              A JSON object containing the properties below.             Required
  frontend        Defines the BIG-IP virtual server.
  backend         Identifies the Service you want to proxy.
  healthMonitors  Defines BIG-IP health monitor(s) for the Service.

[2]See the F5 schema compatibility table for more information.

The BIG-IP Controller uses the f5type property differently depending on the use case.

  • When used in a virtual server F5 Resource ConfigMap, set f5type: virtual-server. This tells the BIG-IP Controller what type of resource you want to create.
  • When used in OpenShift Route definitions, you can define it any way you like. You can set the BIG-IP Controller to watch for Routes configured with a specific f5type label. For example: f5type: App1 [3]
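As an illustration, the Route filtering described above is configured with label options on the Controller along these lines; the flag names shown (--manage-routes, --route-label) are assumptions to verify against your k8s-bigip-ctlr version's reference documentation.

```yaml
# Illustrative Deployment args fragment: process only Routes
# labeled f5type=App1. Verify flag names for your version.
args:
- --manage-routes=true
- --route-label=App1
```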

The frontend property defines how to expose a Service on a BIG-IP device.

The backend property identifies the Kubernetes Service that makes up the server pool. You can define BIG-IP health monitors in this section.

Example F5 virtual server resource

The example below creates one (1) virtual server for the Service named “myService”, with one (1) health monitor and one (1) pool. The Controller creates the virtual server in the kubernetes partition on the BIG-IP system.

Example F5 Resource definition
// Note: Remove all comments before using //
{
  "virtualServer": {
    "backend": {
      "servicePort": 3000,
      "serviceName": "myService",
      "healthMonitors": [{
        "interval": 30,
        "protocol": "http",
        "send": "GET",
        "timeout": 86400
      }]
    },
    "frontend": {
      "virtualAddress": {
        "port": 80,
        // Sets the IP address of the BIG-IP front-end virtual server //
        // omit if you want to create a pool without a virtual server //
        "bindAddr": ""
      },
      "partition": "kubernetes",
      // Accepts any BIG-IP load balancing mode; defaults to round-robin //
      // if mode is not provided //
      "balance": "round-robin",
      "mode": "http"
    }
  }
}
[3]The BIG-IP Controller supports Routes in OpenShift deployments. See OpenShift Routes for more information.

Node Health

When the BIG-IP Controller for Kubernetes runs in Nodeport mode – the default setting – the BIG-IP Controller doesn’t have visibility into the health of individual Kubernetes Pods. It knows when Nodes are down and when all Pods are down. Because of this limited visibility, a pool member may remain active on the BIG-IP system even if the corresponding Pod isn’t available.

When running in Cluster mode, the BIG-IP Controller has visibility into the health of individual Pods.


In either mode of operation, it’s good practice to add a BIG-IP health monitor to the virtual server to ensure the BIG-IP system knows when resources go down.


The BIG-IP Controller provides additional functionality in OpenShift deployments, including support for Routes.

Learn about using the BIG-IP Controller in OpenShift.