CNF Fixes and Known Issues

This list highlights fixes and known issues for this CNF release.

Fixed Issues

1678517

F5 Validation webhook denies F5BigNetStaticroute CR when eth0 interface is used.

Component: Ingress

Symptoms:
F5 Validation webhook denies F5BigNetStaticroute CR when eth0 interface is used.

Conditions:
Create a StaticRoute CR using the eth0 interface or any other interface that does not contain a ‘.’ in its name.

Impact:
A StaticRoute configured with eth0 or any interface that does not include a ‘.’ in its name will not be permitted.

Fix
The validation webhook logic has been updated to allow interfaces without a ‘.’ in their names.
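For illustration, a static route CR of the kind described above might look like the following. This is a hypothetical sketch only: the apiVersion and spec field names are assumptions modeled on typical F5 CNF custom resources, not taken from this release note; consult the installed CRD schema for the actual fields.

```yaml
# Hypothetical example: apiVersion and spec field names are assumed,
# not confirmed by this release note. Adjust to your installed CRD schema.
apiVersion: "k8s.f5net.com/v1"
kind: F5BigNetStaticroute
metadata:
  name: eth0-static-route
spec:
  # Interface name without a '.' -- previously rejected by the webhook
  interface: eth0
  destination: "10.10.10.0"
  prefixLen: 24
  gateway: "192.168.1.1"
```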

Known Issues

1991001-1

TMM container /etc/hosts population limited to 24 hostAliases entries

Component: FSM

Symptoms: When deploying a TMM pod, the ‘/etc/hosts’ file inside the f5-tmm container is automatically populated by Kubernetes using the hostAliases section of the Helm values file. Only the 24 explicitly defined hostAliases entries are present.

When the TMM container is deployed with 32 threads, tmm24 through tmm31 report the following errors in the logs:

Failed to get host ip for tmm24
Failed to get host ip for tmm25
Failed to get host ip for tmm26
…
Failed to get host ip for tmm31

Conditions:
The TMM pod is running in an environment with hyperthreading (SMT) enabled and 16 physical cores, which results in TMM being deployed with 32 threads.

Impact:
Missing ‘/etc/hosts’ entries (tmm24 through tmm31) prevent proper hostname resolution inside the TMM container. Any service or process relying on those hostnames will fail to resolve them.

Workaround:
Add the missing host aliases directly in the Helm chart values file under the hostAliases section. This ensures Kubernetes pre-populates ‘/etc/hosts’ with all required entries during pod creation, without requiring runtime modification.


    - ip: "169.254.0.25"
      hostnames:
      - "tmm24"
    - ip: "169.254.0.26"
      hostnames:
      - "tmm25"
    - ip: "169.254.0.27"
      hostnames:
      - "tmm26"
    - ip: "169.254.0.28"
      hostnames:
      - "tmm27"
    - ip: "169.254.0.29"
      hostnames:
      - "tmm28"
    - ip: "169.254.0.30"
      hostnames:
      - "tmm29"
    - ip: "169.254.0.31"
      hostnames:
      - "tmm30"
    - ip: "169.254.0.32"
      hostnames:
      - "tmm31"

Fix:
In TMM deployments running with 32 threads, update the Helm chart values file’s hostAliases map to include the missing /etc/hosts entries for TMM threads 24 through 31.

2037497

Extra debug information is generated when a liveness or readiness probe fails.

Component: DSSM

Symptoms:
Extra debug information is generated when a liveness or readiness probe fails due to a timeout. This issue is not observed when the probe fails because of connectivity issues with the redis-server; it occurs only when the redis-cli command times out, for example when the NFS server is responding slowly.

Conditions:
The NFS server is unreachable or responding slowly, and commands time out without responding.

Impact:
This is only one symptom of the liveness probe failure and does not affect any functionality.

2038337

Protocol inspection signatures are not triggered when you delete and reapply only the F5BigIpsPolicy CR.

Component: IPS

Symptoms:
If an F5BigIpsPolicy CR is applied together with an F5BigDnsApp or F5BigContextSecure CR, deleting and reapplying only the F5BigIpsPolicy CR causes the protocol inspection signatures to stop processing.

Conditions:
The F5BigIpsPolicy CR is applied in conjunction with an F5BigDnsApp CR or F5BigContextSecure CR.

The F5BigIpsPolicy CR is then deleted and reapplied.

Impact:
The protocol inspection signatures will stop processing traffic.

Workaround:
Delete and reapply both the F5BigIpsPolicy CR and the F5BigDnsApp/F5BigContextSecure CR together, rather than reapplying the F5BigIpsPolicy CR alone.

1574561

The tmm-init ConfigMap is overwritten during a rolling upgrade.

Symptoms:
During an f5ingress upgrade, custom TMM user data stored in the ConfigMap is overwritten, resulting in the loss of custom configurations.

Conditions:
Upgrading the f5ingress Helm chart to a newer version.

Impact:
Overwriting custom configuration can lead to interruptions in services provided by CNF/SPK.

Workaround:
Save the tmm-init configuration before the upgrade. After updating the f5ingress Helm chart, transfer the custom configuration from the saved tmm-init file to the user_conf.tcl section of the new tmm-init configuration.
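As a sketch of where the restored configuration lands, the custom TMM settings go under the user_conf.tcl key of the new tmm-init ConfigMap. The ConfigMap name and key come from the workaround above; the namespace and sample content below are illustrative assumptions.

```yaml
# Illustrative sketch: ConfigMap name and user_conf.tcl key are from the
# workaround text; the namespace and sample Tcl content are assumptions.
apiVersion: v1
kind: ConfigMap
metadata:
  name: tmm-init
  namespace: default
data:
  user_conf.tcl: |
    # Re-insert the custom TMM configuration saved before the upgrade here.
```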

Fix:
If you have a custom TMM configuration, you must save the tmm-init configuration and restore it after the rolling upgrade.

1968153-1

Traffic stats are missing the drop counter for trunk use cases.

Component: FSM

Symptoms: Drop statistics are not recorded when packets are dropped on a trunk interface.

Conditions: The trunk does not have any interfaces to forward the traffic to.

Impact: Missing diagnostics

1492301-3

Pool member statistics continue to increase for some time after an HSL pool member goes down.

Component: FSM

Symptoms:
Pool member statistics continue to increment for the duration of the configured monitor timeout, even after the pool member goes down.

Conditions:
Traffic is running continuously, and an HSL pool member with an attached monitor goes down.

Impact:
The stats increment for a while (the monitor timeout period) after the pool member has gone down.

Workaround:
After the pool member goes down, wait for the duration of the configured monitor timeout value before starting the traffic.

Fix:
Once the pool member is down, the stats should stop incrementing.

2008705

If you enable session reporting on a flow filter, the drop counters increase when the CPU is loaded to 8%.

Component: PE

Symptoms:
The drop counters in the pem_actions_stat table continue to increase, and new subscriber logins are failing.

Conditions:
The Policy Enforcer is provisioned, and the PEM policy is configured for session reporting. A high number of subscribers are attempting to log in; the issue occurs when more than 9K new subscribers per TMM pod are added.

Impact:
New subscriber logins will NOT be successful, leading to traffic impact.

Workaround:
Disable the session reporting action in the PEM policy.

2035321-1

DPDK’s IAVF driver used for Intel E810 VFs requires advanced RSS configuration support.

Symptoms:
When using Intel E810 NICs with firmware versions below 4.0 (e.g. 3.10), the DPDK IAVF driver fails to configure Advanced RSS Hash. The following errors are observed in the logs:

iavf_add_del_rss_cfg(): Failed to execute command of OP_ADD_RSS_CFG
iavf_init_rss(): fail to set default RSS
iavf_dev_configure(): configure rss failed

As a result, DPDK fails to configure the port.

Conditions:

  • Hardware: Intel E810 NICs

  • Firmware versions: Earlier than v4.0 (not supporting advanced RSS configuration commands)

  • Ice Driver Version: Earlier than v1.17.2

  • Driver: DPDK IAVF driver with Advanced RSS hash enabled

Impact:
Port initialization fails when attempting to use advanced RSS configuration on unsupported firmware.

Application startup dependent on DPDK port configuration may be blocked.

Workaround:
Use the following firmware and PF driver versions for Intel E810:

  • Firmware versions: v4.0 or later.

  • Ice Driver Version: v1.17.2 or later.

The updated NIC firmware and PF driver provide the required support for advanced RSS configuration.
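The version requirements above can be sketched as a small check. The helper names below are illustrative (not part of any F5 or DPDK API); real version strings can be obtained on the host, for example from the firmware-version field of `ethtool -i <interface>` and from `modinfo ice`.

```python
# Illustrative sketch of the version gate described above: Advanced RSS
# needs E810 firmware >= 4.0 and an ice PF driver >= 1.17.2.
# Function names are hypothetical helpers, not an existing API.

def parse_version(v: str) -> tuple:
    """Parse a dotted version string such as '1.17.2' into a comparable tuple."""
    parts = []
    for piece in v.split("."):
        digits = "".join(ch for ch in piece if ch.isdigit())
        parts.append(int(digits) if digits else 0)
    return tuple(parts)

def supports_advanced_rss(firmware: str, ice_driver: str) -> bool:
    """True only if both the NIC firmware and the PF ice driver meet the minimums."""
    return (parse_version(firmware) >= (4, 0)
            and parse_version(ice_driver) >= (1, 17, 2))

print(supports_advanced_rss("3.10", "1.17.2"))  # False: firmware too old
print(supports_advanced_rss("4.20", "1.17.2"))  # True: both minimums met
```

Tuple comparison handles multi-digit components correctly (3.10 compares below 4.0, while 1.17.2 compares above 1.16.9), which a plain string comparison would get wrong.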

Fix:
Support for Advanced RSS requires E810 firmware version 4.0 or later. On older firmware (3.10 and earlier), Advanced RSS configuration is not supported and causes port initialization to fail. Upgrade the NIC firmware to version 4.0 or later, and update the PF ice driver to version 1.17.2 or later, to ensure compatibility and proper operation of Advanced RSS.