If you’re setting up a Kubernetes or OpenShift cluster with the BIG-IP Controller for the first time, you may be asking yourself,
“What is the pool-member-type setting and which mode should I choose?”
This document clarifies the available options and provides vital information to take into account when making this decision.
In brief, the pool-member-type setting determines which mode the Controller runs in: nodeport (the default) or cluster.
Nodeport mode is the default mode of operation for the BIG-IP Controller in Kubernetes. From a configuration standpoint, it’s the easier mode to set up because it works with any Kubernetes Cluster Network. In addition, nodeport mode doesn’t have any specific BIG-IP licensing requirements.
As shown in the diagram below, nodeport mode uses 2-tier load balancing:

- The BIG-IP device load balances to the NodePort exposed on each Node.
- kube-proxy then load balances within the Cluster to the Service’s Pods.

Important limitations to consider:

- The BIG-IP device load balances to Node endpoints, not directly to Pods, so traffic takes an extra hop through kube-proxy.
- Every Service you want the BIG-IP device to manage must be exposed with type: NodePort (see the example manifest below).
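For reference, a Service exposed this way might look like the minimal sketch below; the names, labels, and ports are placeholders rather than values from this guide.

```yaml
# Hypothetical Service exposed as type: NodePort (placeholder names and ports)
apiVersion: v1
kind: Service
metadata:
  name: my-frontend
  labels:
    app: my-frontend
spec:
  type: NodePort        # required for the Controller's nodeport mode
  selector:
    app: my-frontend    # matches the Pods backing this Service
  ports:
  - name: http
    port: 80            # Service port inside the cluster
    targetPort: 8080    # container port on the Pods
    nodePort: 30080     # optional; Kubernetes assigns a port from the NodePort range if omitted
```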
If you want to use NodePort mode, continue on to Install the BIG-IP Controller in Kubernetes.
You should use cluster mode if you intend to integrate your BIG-IP device into the Kubernetes cluster network.
OpenShift users must run the BIG-IP Controller in cluster mode.
Cluster mode requires a Better or Best license that includes SDN services and advanced routing. While there are additional networking configurations to make, cluster mode has distinct benefits over nodeport mode:

- The BIG-IP device can load balance directly to Pods on the Cluster network, removing the extra kube-proxy hop.
- You aren’t limited to Services of type: NodePort.
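For orientation, the mode is selected with the Controller’s --pool-member-type argument in its Deployment. The sketch below is illustrative only; the BIG-IP address, partition, and tunnel name are placeholders you would replace with your own values.

```yaml
# Fragment of the Pod template in a k8s-bigip-ctlr Deployment (illustrative values only)
spec:
  containers:
  - name: k8s-bigip-ctlr
    image: f5networks/k8s-bigip-ctlr        # use the image tag you actually deploy
    args:
    - --bigip-url=https://10.0.0.10         # placeholder BIG-IP management address
    - --bigip-partition=kubernetes          # placeholder BIG-IP partition
    - --pool-member-type=cluster            # "nodeport" (the default) or "cluster"
    # - --flannel-name=fl-vxlan             # placeholder; used when integrating with a Flannel VXLAN network
```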
If you want to run BIG-IP Controller in cluster mode, continue on to Network considerations.
The following guides provide relevant information and instructions:
When thinking about how to integrate your BIG-IP device into the cluster network, you’ll probably want to take into account what you have to do manually vs what the BIG-IP Controller takes care of automatically. In general, the manual operations required occur far less frequently than those that are automatic. The list below shows common operations for a typical Kubernetes cluster, from most-frequent to least-frequent.
The BIG-IP Controller always manages BIG-IP system configurations for Pods automatically. For Nodes and Clusters, you may have to perform some actions manually (or automate them with another system, like Ansible). Take these into consideration when deciding how to set up your cluster network, or how to integrate the BIG-IP Controller and a BIG-IP device into an existing cluster.
BIG-IP platforms support several overlay networks, like VXLAN, NVGRE, and IPIP. The manual steps noted in the table apply when integrating a BIG-IP device into any overlay network, not just the examples shown here.
The examples below are for instructional purposes only.
| Network Type | Add Cluster | Add Node(s) |
|--------------|-------------|-------------|
| Layer 2 networks | | |
| OpenShift SDN (VXLAN) | Create a new OpenShift HostSubnet for the BIG-IP self IP. | None. The BIG-IP Controller automatically detects OpenShift routes and makes the necessary BIG-IP system configurations. |
| Flannel (VXLAN) | Allocate an overlay IP address from Flannel for the BIG-IP self IP. Create a VXLAN tunnel on the BIG-IP system with a VTEP in the Flannel VXLAN network. | Add an FDB entry and ARP record for each node. |
| Layer 3 networks | | |
| Calico | Set up BGP peering between the BIG-IP device and Calico. | None. Managed by BGP. NOTE: Depending on the BGP configuration, you may need to update the BGP neighbor table. |
| Flannel host-gw | Configure routes in Flannel and on the BIG-IP device for per-node subnet(s). | Add/update per-node subnet routes on the BIG-IP device. |

Notes:

- See Publishing Services - Service Types in the Kubernetes documentation.
- See the f5-ansible repo on GitHub for Ansible modules that can manipulate F5 products.
- Be sure to use the correct encapsulation format for your network.
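As an example of the first manual step for OpenShift SDN, a HostSubnet for the BIG-IP device might look like the sketch below; the resource name, host IP, and annotation are assumptions to verify against your OpenShift version’s documentation.

```yaml
# Illustrative HostSubnet asking OpenShift SDN to allocate a subnet for the BIG-IP VTEP
apiVersion: network.openshift.io/v1   # older clusters may use apiVersion: v1
kind: HostSubnet
metadata:
  name: f5-bigip-node                 # placeholder name representing the BIG-IP "host"
  annotations:
    pod.network.openshift.io/assign-subnet: "true"   # request a subnet allocation from the SDN
host: f5-bigip-node
hostIP: 10.20.30.40                   # placeholder: BIG-IP self IP used as the VTEP address
```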