VMware vSphere Kubernetes Service (VKS)

Overview

vSphere Kubernetes Service (VKS) is the Kubernetes runtime built directly into VMware Cloud Foundation (VCF). With CNCF-certified Kubernetes, VKS enables platform engineers to deploy and manage Kubernetes clusters while leveraging a comprehensive set of cloud services in VCF. Cloud admins benefit from support for N-2 Kubernetes versions, enterprise-grade security, and simplified lifecycle management for modern apps adoption.

Installing CIS in VKS

The recommended installation method is Helm. Follow the general CIS Helm installation instructions and specify the desired CNI integration mode with the pool_member_type parameter, as described in the sections below.

Supported CNIs

VKS supports the Antrea and Calico CNIs. Both have been validated with direct-to-POD and NodePort networking options. With Antrea, direct-to-POD networking is achieved with the NodePortLocal feature, while with Calico it is achieved by using the POD IPs of ClusterIP Services directly.

Using CIS with NodePort pool-member-type

This mode is mainly used when sending traffic to an in-cluster Ingress Controller (2nd tier of load balancing). In this mode the BIG-IP always uses the IPs of all the VKS nodes as pool members and doesn't require any CNI-specific configuration (for either Antrea or Calico).

To use CIS in this mode, only the following parameter needs to be specified in the Helm values file:

pool_member_type: nodeport
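For context, the parameter sits under the args section of the CIS Helm chart's values.yaml. The fragment below is an illustrative sketch; the BIG-IP address, partition, and secret name are placeholders that must be adapted to your environment:

```yaml
# Illustrative values.yaml fragment for the F5 CIS Helm chart.
# bigip_url, bigip_partition, and the login secret name are placeholders.
bigip_login_secret: f5-bigip-ctlr-login
args:
  bigip_url: 192.168.10.100
  bigip_partition: vks
  pool_member_type: nodeport   # use VKS node IPs as pool members (NodePort mode)
```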

Using Antrea in direct-to-POD mode

What is Antrea NodePortLocal?

For clusters using Antrea as the CNI, the NodePortLocal (NPL) feature can be used: with NPL, a Pod can be reached directly from an external network through a port on the Node. In this mode, instead of relying on NodePort Services implemented by kube-proxy, CIS consumes the NPL port mappings published by the Antrea Agent (as Kubernetes Pod annotations) to obtain the Node IP and port information needed to load balance traffic directly to the backend Pods.

Configuring Antrea NodePortLocal

Prerequisites

  • Deploy VKS with the Antrea CNI (this is the default CNI).
  • Make sure the NodePortLocal feature is enabled. Depending on the VKS version, NodePortLocal might or might not be enabled by default. To verify this, check the antrea-agent ConfigMap in the kube-system namespace. The entries required to enable and configure nodePortLocal are shown in the next example:
kind: ConfigMap
apiVersion: v1
metadata:
  name: antrea-config
  namespace: kube-system
data:
  antrea-agent.conf: |
    nodePortLocal:
      enable: true
      # Uncomment if you need to change the port range.
      # portRange: 61000-62000

Configuration

To enable the NPL feature, use the following pool member type configuration in the Helm chart's values.yaml:

pool_member_type: nodeportlocal

To allow CIS to discover the application PODs, the Services used should be annotated with nodeportlocal.antrea.io/enabled: "true" as shown in the next example:

service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    nodeportlocal.antrea.io/enabled: "true"
  labels:
    app: f5-hello-world
  name: f5-hello-world
spec:
  ports:
  - name: f5-hello-world
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: f5-hello-world
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: f5-hello-world
  name: f5-hello-world
spec:
  replicas: 2
  selector:
    matchLabels:
      app: f5-hello-world
  template:
    metadata:
      labels:
        app: f5-hello-world
    spec:
      containers:
        - env:
            - name: service_name
              value: f5-hello-world
          image: f5devcentral/f5-hello-world:latest
          imagePullPolicy: Always
          name: f5-hello-world
          ports:
            - containerPort: 8080
              protocol: TCP

If the NodePortLocal feature gate is enabled, all the Pods backing the annotated Service will be annotated with the nodeportlocal.antrea.io annotation. The value of this annotation is a serialized JSON array. For the example above, the Pod may look like this:

kubectl describe po f5-hello-world-6d859874b7-prf8l -n cis
Name:         f5-hello-world-6d859874b7-prf8l
Namespace:    cis
Priority:     0
Start Time:   Wed, 09 Mar 2022 00:17:30 -0800
Labels:       app=f5-hello-world
              pod-template-hash=6d859874b7
Annotations:  kubernetes.io/psp: cis-psp
              nodeportlocal.antrea.io: [{"podPort":8080,"nodeIP":"10.244.0.3","nodePort":40001}]

This annotation indicates that port 8080 of the Pod can be reached through port 40001 of the Node with IP Address 10.244.0.3. The nodeportlocal.antrea.io annotation is created and managed by Antrea.
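To illustrate the data CIS consumes, the annotation's JSON array can be parsed as in the following sketch (the annotation string is copied from the Pod output above; CIS performs this parsing internally):

```python
import json

# NPL annotation value as published by the Antrea Agent on the Pod above.
annotation = '[{"podPort":8080,"nodeIP":"10.244.0.3","nodePort":40001}]'

for mapping in json.loads(annotation):
    # Each entry maps a Pod port to a nodeIP:nodePort pair reachable externally.
    print(f'pod port {mapping["podPort"]} -> {mapping["nodeIP"]}:{mapping["nodePort"]}')
# prints: pod port 8080 -> 10.244.0.3:40001
```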

Note

NodePortLocal can only be used with Services of type ClusterIP. The nodeportlocal.antrea.io annotation has no effect for Services of type NodePort or ExternalName. The annotation also has no effect for Services with an empty or missing Selector.

Using Calico in direct-to-POD mode

Please follow the general Calico instructions and use the following pool member type configuration in the Helm chart’s values file:

pool_member_type: cluster

Solution Validation

The solution has been validated when using either NSX or vSphere networking. See the next tables for details:

Component                             Version        Notes
vSphere Kubernetes Service            3.6
VMware Cloud Foundation               9.0
Kubernetes                            1.35           VKR version
F5 BIG-IP                             v17 and v21
F5 Container Ingress Service (CIS)    2.20.3
F5 AS3                                3.56.0

The following combinations of CIS features and CNIs have been tested with VMware VKS:

Feature / Mode                               IngressCRD    MultiCluster    Notes
Antrea direct to POD using nodePortLocal     ✓*            NA
Calico direct to POD using clusterIP
Antrea and Calico to Node using NodePort

[*] See limitations.

Limitations

  • For Antrea NodePortLocal, CIS does not currently support TransportServer CRDs or the multi-cluster feature. Please consult with your F5 representative.

When deploying BIG-IP in an NSX network, the following must be considered:

  • The BIG-IP cannot have a leg (interface) in the same VPC segment where the VMware VKS cluster resides.
  • Specifically for Calico, cluster IP mode cannot be used in NSX because it would require the BIG-IP to have one leg in the same VPC segment as the VMware VKS cluster, which is not possible at present.