BIG-IP Controller Modes

If you’re setting up a Kubernetes or OpenShift cluster with the BIG-IP Controller for the first time, you may be asking yourself,

“What is the pool-member-type setting and which mode should I choose?”

This document clarifies the available options and provides vital information to take into account when making this decision.

In brief: The pool-member-type setting determines what mode the Controller runs in – nodeport or cluster.
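As a sketch, the mode is set with the `--pool-member-type` argument on the k8s-bigip-ctlr container; everything else in this fragment (names, partition, URL) is a placeholder for illustration:

```yaml
# Fragment of a k8s-bigip-ctlr Deployment spec; only the args matter here.
spec:
  containers:
  - name: k8s-bigip-ctlr
    image: f5networks/k8s-bigip-ctlr
    args:
    - "--bigip-url=<BIG-IP management address>"  # placeholder
    - "--bigip-partition=kubernetes"             # illustrative partition name
    - "--pool-member-type=nodeport"              # or "cluster"
```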

NodePort mode

NodePort mode is the default mode of operation for the BIG-IP Controller in Kubernetes. It's the easier mode to set up, since it works with any Kubernetes cluster network, and it has no specific BIG-IP licensing requirements.

As shown in the diagram below, NodePort mode uses two-tier load balancing:

  1. The BIG-IP Platform load balances requests to Nodes (kube-proxy).
  2. Nodes (kube-proxy) load balance requests to Pods.

Important limitations to consider:

  • The Kubernetes Services you want to manage must use type: NodePort. [1]
  • The BIG-IP system can’t load balance directly to Pods, which means:
    • some BIG-IP services, like L7 persistence, won’t behave as expected;
    • there’s extra latency; and
    • BIG-IP Controller has limited visibility into Pod health.
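To illustrate the Service-type requirement above, a Service exposed for NodePort mode might look like the following (the name, labels, and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-frontend          # illustrative name
  labels:
    app: my-frontend
spec:
  type: NodePort             # required for NodePort mode
  selector:
    app: my-frontend
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30080          # optional; Kubernetes allocates a port if omitted
```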

If you want to use NodePort mode, continue on to Install the BIG-IP Controller in Kubernetes.

Cluster mode

You should use cluster mode if you intend to integrate your BIG-IP device into the Kubernetes cluster network.


Note: OpenShift users must run the BIG-IP Controller in cluster mode.

Cluster mode requires a Better or Best license that includes SDN services and advanced routing. While cluster mode requires additional network configuration, it has distinct benefits over NodePort mode:

  • You can use any type you like for your Kubernetes Services.
  • The BIG-IP system can load balance directly to any Pod in the Cluster, which means:
    • BIG-IP services - including L7 persistence - function as expected, and
    • the BIG-IP Controller has full visibility into Pod health via the Kubernetes API.
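Because the BIG-IP system reaches Pod IPs directly in cluster mode, the same Service could simply use the default ClusterIP type (this manifest is illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-frontend          # illustrative name
spec:
  type: ClusterIP            # any Service type works in cluster mode
  selector:
    app: my-frontend
  ports:
  - port: 80
    targetPort: 8080
```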

If you want to run BIG-IP Controller in cluster mode, continue on to Network considerations.


Network considerations

When deciding how to integrate your BIG-IP device into the cluster network, consider which operations you must perform manually versus which the BIG-IP Controller handles automatically. In general, the manual operations occur far less frequently than the automatic ones. The list below shows common operations for a typical Kubernetes cluster, ordered from most frequent to least frequent.

  • Add or remove Pods from an existing Service, or expose a Service with Pods.
  • Add or remove a Node from the Cluster.
  • Create a new Kubernetes Cluster from scratch.

The BIG-IP Controller always manages BIG-IP system configurations for Pods automatically. For Nodes and Clusters, you may have to perform some actions manually (or automate them using a different system, like Ansible). [2] Take these into consideration if you’re deciding how to set up your cluster network, or deciding how to integrate the BIG-IP Controller and a BIG-IP device into an existing cluster.


Note: BIG-IP platforms support several overlay networks, like VXLAN, NVGRE, and IPIP. The manual steps noted in the table apply when integrating a BIG-IP device into any overlay network, not just the examples shown here.

The examples below are for instructional purposes only.

Layer 2 networks

  • OpenShift SDN
    • Add Cluster: Create a new OpenShift HostSubnet for the BIG-IP self IP, then add a corresponding VXLAN network to the BIG-IP system. [3]
    • Add Node(s): None. The BIG-IP Controller automatically detects OpenShift Nodes and makes the necessary BIG-IP system configurations.
  • flannel VXLAN
    • Add Cluster: Create a VXLAN tunnel on the BIG-IP system, then add the BIG-IP device to the flannel overlay network.
    • Add Node(s): None. The BIG-IP Controller automatically detects Kubernetes Nodes and makes the necessary BIG-IP system configurations.

Layer 3 networks

  • Calico
    • Add Cluster: Set up BGP peering between the BIG-IP device and Calico.
    • Add Node(s): None; managed by BGP. Note: depending on the BGP configuration, you may need to update the BGP neighbor table.
  • flannel host-gw
    • Add Cluster: Configure routes in flannel and on the BIG-IP device for the per-node subnet(s).
    • Add Node(s): Add or update per-node subnet routes on the BIG-IP device.
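As one illustration of the OpenShift SDN case, a HostSubnet object that requests a subnet for the BIG-IP self IP might look roughly like this; the object name, host name, and address are placeholders, and the exact annotations should be checked against your OpenShift version:

```yaml
apiVersion: v1                 # network.openshift.io/v1 on newer OpenShift versions
kind: HostSubnet
metadata:
  name: f5-bigip-node          # placeholder name representing the BIG-IP "host"
  annotations:
    # Ask OpenShift to assign this host a subnet from the cluster network.
    pod.network.openshift.io/assign-subnet: "true"
host: f5-bigip-node
hostIP: 10.0.0.10              # placeholder: BIG-IP VTEP (self IP) address
```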

What’s Next

Review the k8s-bigip-ctlr configuration parameters.



[1] See Publishing Services - Service Types in the Kubernetes documentation.
[2] See the f5-ansible repo on GitHub for Ansible modules that can manipulate F5 products.
[3] Be sure to use the correct encapsulation format for your network.