TMM Resources¶
BIG-IP Next for Kubernetes uses standard Kubernetes Requests and Limits parameters to manage container CPU and memory resources. If you intend to modify the Traffic Management Microkernel (TMM) resource allocations, it is important to understand how Requests and Limits are applied to ensure the TMM Pod runs in Guaranteed QoS.
This document describes the default Requests and Limits values, and demonstrates how to properly modify the default values.
TMM Pod limit values¶
The containers in the TMM Pod install with the default deploymentSize set to Small. The following deploymentSize values are available for TMM.
Small:

| Container | memory | cpu | hugepages-2Mi | hugepages-1Gi |
|----|----|----|----|----|
| f5-tmm | 2Gi | 2 | 4Gi | - |

Medium:

| Container | memory | cpu | hugepages-2Mi | hugepages-1Gi |
|----|----|----|----|----|
| f5-tmm | 2Gi | 4 | 8Gi | - |

Large:

| Container | memory | cpu | hugepages-2Mi | hugepages-1Gi |
|----|----|----|----|----|
| f5-tmm | 2Gi | 8 | 16Gi | - |

Max:

| Container | memory | cpu | hugepages-2Mi | hugepages-1Gi |
|----|----|----|----|----|
| f5-tmm | 2Gi | 16 | 25Gi | - |
For deploymentSize: "Max", add a TMM environment variable for 16 threads: set the PAL_CPU_SET environment variable under spec.advanced.tmm.env in cneinstance-cr.yaml. See Apply CNEInstance CR.
advanced:
  tmm:
    env:
      - name: "PAL_CPU_SET"
        value: "0-15"
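For reference, the environment variable is set inside the CNEInstance CR (cneinstance-cr.yaml). The sketch below shows the surrounding structure; the placement of deploymentSize directly under spec is an assumption for illustration, not a confirmed schema:

```yaml
# Sketch of a CNEInstance CR fragment.
# NOTE: the exact position of deploymentSize under spec is assumed here.
spec:
  deploymentSize: "Max"
  advanced:
    tmm:
      env:
        - name: "PAL_CPU_SET"   # CPU identifiers dedicated to TMM instances
          value: "0-15"         # 16 threads for deploymentSize Max
```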
Guaranteed QoS class¶
The TMM container must run in the Guaranteed QoS class: the top-priority class, whose Pods are killed only when they exceed their configured limits. To run in the Guaranteed QoS class, the Pod's resources.limits and resources.requests parameters must specify the same values. By default, the TMM Pod's resources.limits and resources.requests specify the same values for the Small, Medium, and Large deploymentSize values.
Important: If deploymentSize is set to Max, the TMM Pod does not run in the Guaranteed QoS class, because resources.limits and resources.requests do not specify the same values.
Verify the QoS class¶
The TMM Pod’s QoS class can be determined by running the following command:
kubectl get pod -l app=f5-tmm -o jsonpath='{..qosClass}{"\n"}' -n <project>
In this example, the TMM Pod is in the default Project:
kubectl get pod -l app=f5-tmm -o jsonpath='{..qosClass}{"\n"}' -n default
Guaranteed
Modifying defaults¶
The TMM requires hugepages to enable direct memory access (DMA). When allocating additional TMM CPU cores, hugepages must be pre-allocated using the hugepages-2Mi parameter. To calculate the minimum amount of hugepages memory, use the following formula: 1.5GB x TMM CPU count. For example, allocating 4 TMM CPUs requires a minimum of 6GB of hugepages memory. Based on this requirement, select the correct deploymentSize and update the CNEInstance CR.
Note: To calculate the minimum tmmMapresHugepages value, use the following formula: 768 x TMM CPU count. For example, allocating 4 TMM CPUs requires a minimum tmmMapresHugepages of 3072.
For example, when allocating 4 TMM CPUs, update palCPUSet to 0-3 (the range of CPU identifiers dedicated to TMM instances).
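The two sizing formulas above can be checked with a short shell calculation; the CPU count below is an example value:

```shell
# Minimum hugepages memory and tmmMapresHugepages for a given TMM CPU count
TMM_CPUS=4

# 1.5GB x TMM CPU count (computed in tenths of a GB to stay in integer math)
MIN_HUGEPAGES_GB=$(( TMM_CPUS * 15 / 10 ))

# 768 x TMM CPU count
MIN_TMM_MAPRES_HUGEPAGES=$(( 768 * TMM_CPUS ))

echo "minimum hugepages memory: ${MIN_HUGEPAGES_GB}GB"          # 6GB for 4 CPUs
echo "minimum tmmMapresHugepages: ${MIN_TMM_MAPRES_HUGEPAGES}"  # 3072 for 4 CPUs
```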