
VMware Tanzu

Overview of VMware Tanzu

VMware Tanzu is a suite of products that helps users run and manage multiple Kubernetes (K8s) clusters across public and private clouds. Tanzu Kubernetes Grid (TKG) is a multi-cloud Kubernetes footprint that you can run both on-premises in vSphere and in the public cloud on Amazon EC2 and Microsoft Azure. TKG uses the Antrea CNI by default.

What is NodePortLocal?

For clusters using Antrea as the CNI, the NodePortLocal (NPL) feature can be used to reach a Pod directly from an external network through a port on the Node. In this mode, instead of relying on NodePort Services implemented by kube-proxy, CIS consumes the NPL port mappings published by the Antrea Agent as Pod annotations to obtain the Node IP and Node port information it needs to load balance traffic to the backend Pods.
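
For reference, the port mapping that the Antrea Agent publishes on a selected Pod takes the form of the annotation below (values are illustrative; a full walkthrough appears later in this section):

nodeportlocal.antrea.io: '[{"podPort":8080,"nodeIP":"10.244.0.3","nodePort":40001}]'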

Configuring VMware Tanzu

Prerequisites

  • Deploy a Tanzu Kubernetes cluster with the Antrea CNI. See the VMware documentation for more information.
  • Enable nodePortLocal in the antrea-agent configuration. Prior to Antrea version 1.4, the NodePortLocal feature gate must be enabled on the antrea-agent for the feature to work. From version 1.4 onward, it is enabled by default.
antrea-agent configmap
kind: ConfigMap
apiVersion: v1
metadata:
  name: antrea-config-dcfb6k2hkm
  namespace: kube-system
data:
  antrea-agent.conf: |
    featureGates:
      # True by default starting with Antrea v1.4
      # NodePortLocal: true
    nodePortLocal:
      enable: true
      # Uncomment if you need to change the port range.
      # portRange: 61000-62000
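
After changing the ConfigMap, the antrea-agent Pods typically need to be restarted to pick up the new configuration. A minimal sketch, assuming the default DaemonSet name and namespace used by upstream Antrea:

kubectl -n kube-system rollout restart daemonset/antrea-agent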

Configuration

  1. To enable the NPL feature, set --pool-member-type to nodeportlocal in the CIS deployment arguments. This is applicable only to clusters using the Antrea CNI.

     args:
    --pool-member-type="nodeportlocal"
    

    Example:

    cis-deploy.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: k8s-bigip-ctlr-deployment
      namespace: kube-system
    spec:
    # DO NOT INCREASE REPLICA COUNT
      replicas: 1
      selector:
        matchLabels:
          app: k8s-bigip-ctlr-deployment
      template:
        metadata:
          labels:
            app: k8s-bigip-ctlr-deployment
        spec:
          # Name of the Service Account bound to a Cluster Role with the required
          # permissions
          containers:
            - name: k8s-bigip-ctlr
              image: "f5networks/k8s-bigip-ctlr:latest"
              env:
                - name: BIGIP_USERNAME
                  valueFrom:
                    secretKeyRef:
                    # Replace with the name of the Secret containing your login
                    # credentials
                      name: bigip-login
                      key: username
                - name: BIGIP_PASSWORD
                  valueFrom:
                    secretKeyRef:
                    # Replace with the name of the Secret containing your login
                    # credentials
                      name: bigip-login
                      key: password
              command: ["/app/bin/k8s-bigip-ctlr"]
              args: [
                # See the k8s-bigip-ctlr documentation for information about
                # all config options
                # https://clouddocs.f5.com/containers/latest/
                "--bigip-username=$(BIGIP_USERNAME)",
                "--bigip-password=$(BIGIP_PASSWORD)",
                "--bigip-url=<ip_address-or-hostname>",
                "--bigip-partition=<name_of_partition>",
                "--pool-member-type=nodeportlocal",
                "--insecure",
                ]
          serviceAccountName: bigip-ctlr
    
  2. Apply this configuration with the following command:

    kubectl apply -f cis-deploy.yaml
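
    You can verify that the controller started correctly by checking its logs (a hedged example, assuming the Deployment name and namespace from the manifest above):

    kubectl logs -f deploy/k8s-bigip-ctlr-deployment -n kube-system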
    
  3. Annotate the Services with nodeportlocal.antrea.io/enabled: "true" to select Pods for NodePortLocal. This enables NodePortLocal for all Pods that are selected by the Service through its selector, and the ports of these Pods become reachable through the allocated Node ports. Antrea annotates each selected Pod with the details of its allocated Node port.

    service.yaml
    apiVersion: v1
    kind: Service
    metadata:
      annotations:
        nodeportlocal.antrea.io/enabled: "true"
      labels:
        app: f5-hello-world
      name: f5-hello-world
    spec:
      ports:
      - name: f5-hello-world
        port: 8080
        protocol: TCP
        targetPort: 8080
      selector:
        app: f5-hello-world
      type: ClusterIP
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      labels:
        app: f5-hello-world
      name: f5-hello-world
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: f5-hello-world
      template:
        metadata:
          labels:
            app: f5-hello-world
        spec:
          containers:
            - env:
                - name: service_name
                  value: f5-hello-world
              image: f5devcentral/f5-hello-world:latest
              imagePullPolicy: Always
              name: f5-hello-world
              ports:
                - containerPort: 8080
                  protocol: TCP
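
    Alternatively, an existing Service can be annotated in place; a one-line sketch, assuming the f5-hello-world Service defined above:

    kubectl annotate service f5-hello-world nodeportlocal.antrea.io/enabled="true"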
    

    If the NodePortLocal feature gate is enabled, all the Pods in the Deployment will be annotated with the nodeportlocal.antrea.io annotation. The value of this annotation is a serialized JSON array. For the example above, the Pod may look like this:

    kubectl describe po f5-hello-world-6d859874b7-prf8l -n cis
    Name:         f5-hello-world-6d859874b7-prf8l
    Namespace:    cis
    Priority:     0
    Start Time:   Wed, 09 Mar 2022 00:17:30 -0800
    Labels:       app=f5-hello-world
                  pod-template-hash=6d859874b7
    Annotations:  kubernetes.io/psp: cis-psp
                  nodeportlocal.antrea.io: [{"podPort":8080,"nodeIP":"10.244.0.3","nodePort":40001}]
    

    This annotation indicates that port 8080 of the Pod can be reached through port 40001 of the Node with IP address 10.244.0.3. The nodeportlocal.antrea.io annotation is created and managed by Antrea.
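
    To spot-check the mapping, you can send a request directly to the Node IP and port reported in the annotation (illustrative values from the example above):

    curl http://10.244.0.3:40001/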

    Note

    NodePortLocal can only be used with Services of type ClusterIP. The nodeportlocal.antrea.io annotation has no effect for Services of type NodePort or ExternalName. The annotation also has no effect for Services with an empty or missing Selector.

Limitations

CIS currently supports the NPL feature with the Ingress, ConfigMap, and VirtualServer resources. The feature has been validated on Tanzu Kubernetes infrastructure.
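
For example, a VirtualServer custom resource can reference the annotated Service directly. A minimal sketch, assuming the f5-hello-world Service from the example above, with a placeholder host and virtual server address:

virtualserver.yaml
apiVersion: "cis.f5.com/v1"
kind: VirtualServer
metadata:
  name: f5-hello-world-vs
  labels:
    f5cr: "true"
spec:
  host: hello.example.com                # placeholder host
  virtualServerAddress: "192.0.2.10"     # placeholder virtual server address
  pools:
    - path: /
      service: f5-hello-world
      servicePort: 8080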

Pod Security Policies with Tanzu Kubernetes Clusters

Tanzu Kubernetes clusters are provisioned with the PodSecurityPolicy admission controller enabled, so a Pod security policy is required to deploy workloads. Cluster administrators can deploy Pods from their user account to any namespace, and from service accounts to the kube-system namespace. For all other use cases, you must explicitly bind to a PodSecurityPolicy object.

For example, if you create a Deployment without a PSP binding, Pod creation fails and you may observe the following in the namespace events:

kubectl get events -n test
LAST SEEN   TYPE      REASON         OBJECT                                 MESSAGE
19s         Warning   FailedCreate   replicaset/f5-hello-world-7bb546899    Error creating: pods "f5-hello-world-7bb546899-" is forbidden: PodSecurityPolicy: unable to admit pod: []

To resolve this, create a PodSecurityPolicy, a ClusterRole that grants access to use it, and a RoleBinding for the namespace, by following the steps below.

  1. Create a PodSecurityPolicy.

    kubectl create -f sample-psp.yaml
    
    sample-psp.yaml
    apiVersion: policy/v1beta1
    kind: PodSecurityPolicy
    metadata:
      name: test-psp
    spec:
      privileged: false  # Prevents creation of privileged Pods
      supplementalGroups:
        rule: RunAsAny
      runAsUser:
        rule: RunAsAny
      fsGroup:
        rule: RunAsAny
      seLinux:
        rule: RunAsAny
      volumes:
      - '*'
    
  2. Create a ClusterRole to grant access to use the policy.

    kubectl create -f clusterrole.yaml
    
    clusterrole.yaml
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: test-clusterrole
    rules:
    - apiGroups:
      - policy
      resources:
      - podsecuritypolicies
      verbs:
      - use
      resourceNames:
      - test-psp
    
  3. Create a RoleBinding for namespace access.

    kubectl create -f rolebinding.yaml
    
    rolebinding.yaml
    # Bind the ClusterRole to the desired set of service accounts.
    # Policies should typically be bound to service accounts in a namespace.
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: test-rolebinding
      namespace: test
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: test-clusterrole
    subjects:
    # Example: All service accounts in my-namespace
    - apiGroup: rbac.authorization.k8s.io
      kind: Group
      name: system:serviceaccounts:test
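
    With the binding in place, Pods in the test namespace can be admitted again. A quick check, assuming the default service account and the f5-hello-world Deployment from the earlier example:

    kubectl auth can-i use podsecuritypolicy/test-psp --as=system:serviceaccount:test:default -n test
    kubectl get pods -n test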
    

Examples Repository

View more examples on GitHub.


Note

To provide feedback on Container Ingress Services or this documentation, please file a GitHub Issue.