TMM Resources

Overview

Service Proxy for Kubernetes (SPK) uses the standard Kubernetes Requests and Limits parameters to manage container CPU and memory resources. Before modifying the Service Proxy Traffic Management Microkernel (TMM) resource allocations, it is important to understand how Requests and Limits are applied, to ensure the Service Proxy TMM Pod runs in the Guaranteed QoS class.

This document describes the default Requests and Limits values, and demonstrates how to properly modify the default values.

TMM Pod limit values

The containers in the Service Proxy TMM Pod install with the following default resources.limits:

Container        memory   cpu      hugepages-2Mi
f5-tmm           2Gi      "2"      3Gi
debug            1Gi      "500m"   None
f5-tmm-routing   1Gi      "700m"   None
f5-tmm-routed    512Mi    "300m"   None
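To confirm the limits of the deployed containers, you can query the TMM Pod with a jsonpath expression similar to the following sketch; the app=f5-tmm label and <project> placeholder match those used in the verification command later in this document:

oc get pod -l app=f5-tmm -o jsonpath='{range .items[0].spec.containers[*]}{.name}{": "}{.resources.limits}{"\n"}{end}' -n <project>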

Guaranteed QoS class

The Service Proxy TMM container must run in the Guaranteed QoS class: the top-priority class, whose Pods are killed only when they exceed their configured limits. To run in the Guaranteed QoS class, the Pod’s resources.limits and resources.requests parameters must specify the same values. By default, the Service Proxy Pod’s resources.limits are set to the following values:

Note: When the resources.requests parameter is omitted from the Helm values file, it inherits the resources.limits values.

tmm:
  resources:
    limits:
      cpu: "2"
      hugepages-2Mi: "3Gi"
      memory: "2Gi"

Important: Memory values must be set using either the Mi or Gi suffix. Do not use full byte values such as 1048576, or the G and M suffixes. Also, do not allocate CPU cores using fractional numbers. Any of these values will cause the TMM Pod to run in the BestEffort or Burstable QoS class instead.
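To illustrate these rules, the following sketch contrasts supported values with values to avoid (the commented lines show the forms that break the Guaranteed QoS class):

tmm:
  resources:
    limits:
      cpu: "2"                 # supported: whole CPU cores
      hugepages-2Mi: "3Gi"     # supported: Gi suffix
      memory: "2Gi"            # supported: Gi suffix
      # cpu: "1.5"             # avoid: fractional CPU cores
      # memory: "2G"           # avoid: G suffix instead of Gi
      # memory: "2147483648"   # avoid: full byte value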

Verify the QoS class

The TMM Pod’s QoS class can be determined by running the following command:

oc get pod -l app=f5-tmm -o jsonpath='{..qosClass}{"\n"}' -n <project>

In this example, the TMM Pod is in the spk-ingress Project:

oc get pod -l app=f5-tmm -o jsonpath='{..qosClass}{"\n"}' -n spk-ingress
Guaranteed

Modifying defaults

Service Proxy TMM requires hugepages to enable direct memory access (DMA). When allocating additional TMM CPU cores, hugepages must be pre-allocated using the hugepages-2Mi parameter. To calculate the minimum amount of hugepages memory, use the following formula: 1.5Gi x TMM CPU count. For example, allocating 4 TMM CPUs requires 6Gi of hugepages memory. To allocate 4 TMM CPU cores to the f5-tmm container, add the following limits to the SPK Controller Helm values file:

tmm:
  resources:
    limits:
      cpu: "4"
      hugepages-2Mi: "6Gi"
      memory: "2Gi"