F5 Container Integrations v1.2

F5 Kubernetes Container Integration

Overview

The F5 Container Integration for Kubernetes consists of the F5 BIG-IP Controller for Kubernetes and the F5 Application Services Proxy (ASP).

The BIG-IP Controller for Kubernetes configures BIG-IP Local Traffic Manager (LTM) objects for applications in a Kubernetes cluster, serving North-South traffic.

The Application Services Proxy provides load balancing and telemetry for containerized applications, serving East-West traffic.

F5 Container Solution for Kubernetes

General Prerequisites

The F5 Integration for Kubernetes documentation set assumes that you are familiar with Kubernetes concepts, have a running Kubernetes cluster, and have administrative access to a BIG-IP device [1].

[1] Not required for the Application Services Proxy and ASP controllers (F5-proxy, ASP Controller).

Application Services Proxy

The Application Services Proxy (ASP) provides container-to-container load balancing, traffic visibility, and inline programmability for applications. Its lightweight form factor allows for rapid deployment in data centers and across cloud services. The ASP integrates with container environment management and orchestration systems and enables automation of application delivery services.

The Application Services Proxy collects traffic statistics for the Services it load balances; these stats are either logged locally or sent to an external analytics application. You can set the location and type of the analytics application in the stats section of the Service annotation.
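
For illustration only, the sketch below shows roughly how such a Service annotation might be structured. The annotation key (asp.f5.com/config) and the stats field names are assumptions made for this example, not confirmed syntax; consult the ASP product documentation for the exact annotation format.

    apiVersion: v1
    kind: Service
    metadata:
      name: myapp                      # placeholder Service name
      annotations:
        # Assumed annotation key and JSON layout; verify against the ASP docs.
        asp.f5.com/config: |
          {
            "stats": {
              "type": "analytics",
              "url": "http://analytics.example.com:8080"
            }
          }
    spec:
      selector:
        app: myapp
      ports:
        - port: 80
          targetPort: 8080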

Important

In Kubernetes, the ASP runs as a forward, or client-side, proxy.

F5-proxy for Kubernetes

The F5-proxy for Kubernetes (F5-proxy) replaces the standard Kubernetes network proxy, kube-proxy. The ASP and the F5-proxy work together to proxy traffic for Kubernetes Services.

By default, the F5-proxy forwards traffic to the ASP on port 10000. You can change this port, if needed, to avoid conflicts. See the f5-kube-proxy product documentation for more information.
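
As a rough sketch of what replacing kube-proxy means in practice, the stock kube-proxy container image is swapped for the F5 proxy image in the kube-proxy DaemonSet (or static manifest). The image name and tag below are assumptions for illustration; follow the f5-kube-proxy documentation for the actual installation steps.

    # Excerpt of a kube-proxy DaemonSet spec with the container image
    # swapped for the F5 proxy. Image name and tag are assumptions.
    spec:
      containers:
        - name: kube-proxy
          image: f5networks/f5-kube-proxy:1.0.0   # hypothetical tag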

BIG-IP Controller for Kubernetes

The BIG-IP Controller for Kubernetes is a Docker container that runs in a Kubernetes Pod. To launch the BIG-IP Controller application in Kubernetes, create a Deployment.
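
A minimal Deployment sketch appears below. It assumes the published f5networks/k8s-bigip-ctlr image and its documented command-line arguments; the BIG-IP address, credentials, and partition are placeholders, and in a real deployment the credentials would normally come from a Kubernetes Secret.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: k8s-bigip-ctlr
      namespace: kube-system
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: k8s-bigip-ctlr
      template:
        metadata:
          labels:
            app: k8s-bigip-ctlr
        spec:
          containers:
            - name: k8s-bigip-ctlr
              image: f5networks/k8s-bigip-ctlr:1.1.0
              args:
                - --bigip-url=10.0.0.10          # placeholder BIG-IP address
                - --bigip-username=admin         # use a Secret in practice
                - --bigip-password=admin
                - --bigip-partition=k8s-1        # partition must already exist
                - --pool-member-type=nodeport    # or "cluster"; see Cluster Network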

Once the BIG-IP Controller pod is running, it watches the Kubernetes API for special “F5 Resource” ConfigMaps. Each of these ConfigMaps contains an F5 Resource JSON blob that tells the BIG-IP Controller:

  • what Kubernetes Service it should manage, and
  • what objects it should create/update on the BIG-IP system for that Service.

When the BIG-IP Controller discovers new or updated virtual server or iApp F5 Resource ConfigMaps, it configures the BIG-IP system accordingly.

Caution

  • The BIG-IP Controller for Kubernetes cannot manage objects in the /Common partition.
  • The BIG-IP partition must exist before you launch the BIG-IP Controller.
  • The BIG-IP Controller for Kubernetes can’t create or destroy BIG-IP partitions.
  • If you’re running more than one (1) BIG-IP Controller, each must manage a separate BIG-IP partition (for example, k8s-1 and k8s-2).
  • Each virtual server F5 Resource defines a BIG-IP LTM virtual server object for one (1) port associated with one (1) Service. Create a separate virtual server F5 Resource ConfigMap for each Service port you wish to expose.

The BIG-IP Controller can create, update, and delete BIG-IP LTM objects (such as virtual servers, pools, pool members, and health monitors) for the Kubernetes Services defined in F5 Resource ConfigMaps.

Key Kubernetes Concepts

Cluster Network

The basic assumption of the Kubernetes Cluster Network is that pods can communicate with other pods, regardless of what host they’re on. You have a few different options when connecting your BIG-IP device (platform or Virtual Edition) to a Kubernetes cluster network and the BIG-IP Controller. How (or whether) you choose to integrate your BIG-IP device into the cluster network – and the framework you use – impacts how the BIG-IP system forwards traffic to your Kubernetes Services.

See Nodeport mode vs Cluster mode for more information.
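
For example, the integration mode is typically selected with the BIG-IP Controller's --pool-member-type argument, shown here as a fragment of the Deployment args sketched above:

    # In the k8s-bigip-ctlr container args:
    - --pool-member-type=nodeport   # default: BIG-IP forwards to each node's NodePort
    # or
    - --pool-member-type=cluster    # BIG-IP forwards directly to Pod addresses;
                                    # requires the BIG-IP device to reach the cluster network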

Namespaces

The Kubernetes namespace allows you to create and manage multiple environments within a cluster. The BIG-IP Controller for Kubernetes can watch all namespaces, a single namespace, or any group of namespaces in between.

When creating a BIG-IP front-end virtual server for a Kubernetes Service, you can:

  • specify a single namespace to watch (this is the only supported mode prior to k8s-bigip-ctlr v1.1.0);
  • specify multiple namespaces (pass in each as a separate flag); or
  • omit the namespace flag (meaning you want to watch all namespaces); this is the default setting as of k8s-bigip-ctlr v1.1.0 (see the example below).
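
As a sketch, the namespace behavior is set through the controller's --namespace argument(s) in the Deployment; the namespace names below are placeholders.

    # Watch a single namespace:
    - --namespace=production

    # Watch several namespaces (repeat the flag once per namespace):
    - --namespace=team-a
    - --namespace=team-b

    # Omit --namespace entirely to watch all namespaces
    # (the default as of k8s-bigip-ctlr v1.1.0).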

F5 Resource Properties

The BIG-IP Controller for Kubernetes uses special ‘F5 Resources’ to identify what BIG-IP LTM objects it should create. An F5 Resource is a JSON blob included in a Kubernetes ConfigMap.

An F5 Resource JSON blob may contain the properties shown below.

F5 Resource properties

  Property    Required    Description
  f5type      Optional    A label property watched by the BIG-IP Controller.
  schema      Required    The schema the BIG-IP Controller uses to interpret the
                          encoded data. Be sure to use the schema version that
                          matches your version of the controller (see below).
  data        Required    A JSON object with two sub-properties:
    frontend              Defines the BIG-IP LTM objects you want to create.
    backend               Identifies the Service you want to proxy.

F5 schema and k8s-bigip-ctlr version compatibility

  Schema version                                   k8s-bigip-ctlr version
  f5schemadb://bigip-virtual-server_v0.1.3.json    1.1.0
  f5schemadb://bigip-virtual-server_v0.1.2.json    1.0.0

The BIG-IP Controller uses the f5type property differently depending on the use case.

  • When used in a virtual server F5 Resource ConfigMap, set f5type: virtual-server. This tells the BIG-IP Controller what type of resource you want to create.
  • When used in Route definitions, you can set it to any value you like. You can configure the BIG-IP Controller to watch only for Routes that carry a specific f5type label, for example f5type: App1 (see the sketch below). [2]
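
For illustration, such a label is ordinary Kubernetes metadata on the Route object; the Route name, host, and backing Service below are placeholders.

    apiVersion: route.openshift.io/v1   # or "v1" on older OpenShift releases
    kind: Route
    metadata:
      name: app1-route
      labels:
        f5type: App1                    # label the BIG-IP Controller can filter on
    spec:
      host: app1.example.com
      to:
        kind: Service
        name: app1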

The frontend property defines how to expose a Service on a BIG-IP device.

The backend property identifies the Kubernetes Service that makes up the server pool. You can define health monitors for your BIG-IP LTM virtual server(s) and pool(s) in this section.

[2] The BIG-IP Controller only supports Routes in OpenShift deployments. See OpenShift Routes for more information.
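
Putting the frontend and backend properties together, a virtual server F5 Resource ConfigMap might look like the sketch below. The addresses and names are placeholders, and the frontend/backend property names should be validated against the schema version you reference (here, v0.1.3); treat the field names as assumptions rather than authoritative syntax.

    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: myapp-vs
      namespace: default
      labels:
        f5type: virtual-server          # marks this as a virtual server F5 Resource
    data:
      schema: "f5schemadb://bigip-virtual-server_v0.1.3.json"
      data: |
        {
          "virtualServer": {
            "frontend": {
              "partition": "k8s-1",
              "mode": "http",
              "balance": "round-robin",
              "virtualAddress": {
                "bindAddr": "10.128.10.240",
                "port": 80
              }
            },
            "backend": {
              "serviceName": "myapp",
              "servicePort": 80
            }
          }
        }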

Kubernetes and OpenShift

Find out more about using the BIG-IP Controller for Kubernetes in OpenShift.

Node Health

When the BIG-IP Controller for Kubernetes runs in Nodeport mode (the default setting), it doesn't have visibility into the health of individual Kubernetes Pods; it only knows when Nodes are down or when all Pods are down. Because of this limited visibility, a pool member may remain active on the BIG-IP system even if the corresponding Pod isn't available.

When running in Cluster mode, the BIG-IP Controller has visibility into the health of individual Pods.

Tip

In either mode of operation, it’s good practice to add a BIG-IP health monitor to the virtual server to ensure the BIG-IP system knows when resources go down.
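
For example, a health monitor can be declared in the backend section of a virtual server F5 Resource. The field names below follow the same assumed v0.1.3 schema layout as the ConfigMap sketch earlier; the values are placeholders.

    "backend": {
      "serviceName": "myapp",
      "servicePort": 80,
      "healthMonitors": [
        {
          "protocol": "http",
          "interval": 30,
          "timeout": 86,
          "send": "GET /health HTTP/1.0\r\n\r\n"
        }
      ]
    }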