CNF Fixes and Known Issues¶
This list highlights fixes and known issues for this CNF release.
Fixed Issues¶
Bug ID 1824317-1¶
Controller fails to discover endpoints from services that use named ports.
Affected Product(s)
BIG-IP NEXT (CNF)
Known Affected Versions
CNF-2.0.0
Component
ingress
Symptoms
The controller may not be able to find the target port from the endpoints and application deployments. When this happens, the controller does not configure TMM for the respective ingress TCP CR, and the CR status may reflect a “False” state.
Impact
The controller may fail to configure TMM with ingress configurations even if an ingress TCP CR exists, application pods are deployed, and they appear to be running fine.
Conditions
There are three conditions that could lead to this issue:
- There may be a discrepancy in the pod and endpoint caches maintained by the controller. The caching mechanism is provided by the Kubernetes client-go library. If the pod cache is not updated and the service uses a named port, the controller may not get the target port. Even if the controller finds the target port from the pod cache, an out-of-date endpoint cache can still cause it to fail to get endpoints matching that port number. This can only happen if the pod or endpoint cache never gets synchronized with the Kubernetes API server due to issues related to the environment, application pods, and so on.
- The app’s pods or endpoints might be in a faulty or inconsistent state. This could lead to incorrect updates of the pod and endpoint resources on the Kubernetes API server.
- The Kubernetes API server itself might not be updating the pod or endpoint resources correctly due to environmental factors or underlying infrastructure issues. (A diagnostic sketch for checking the live state follows this list.)
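As a quick check of the last two conditions, you can compare the pod and endpoint state that the Kubernetes API server is currently reporting. This is a minimal diagnostic sketch; my-namespace, my-service, and app=my-app are placeholders for your own namespace, service, and pod selector.
# Confirm the backing pods are Running and Ready
kubectl -n my-namespace get pods -l app=my-app -o wide
# Confirm the API server publishes endpoints (addresses and ports) for the service
kubectl -n my-namespace get endpoints my-service -o yaml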
Workaround
You can try either of the following workarounds to resolve this issue:
- Scale down the application deployment and scale it back up. This triggers new events, which may help synchronize the pod and endpoint caches in the controller with the Kubernetes API server.
- Scale down the controller and scale it back up. This allows the controller to process all events related to the custom resource (CR), application service, endpoints, and so on, upon startup. As a result, the controller’s caches may synchronize. (Example scale commands follow this list.)
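A minimal sketch of both workarounds, assuming placeholder names: the application deployment my-app in my-namespace and the controller deployment f5-ingress in f5-namespace. Substitute your own names, namespaces, and original replica counts.
# Workaround 1: scale the application deployment down, then back up
kubectl -n my-namespace scale deployment my-app --replicas=0
kubectl -n my-namespace scale deployment my-app --replicas=1
# Workaround 2: scale the controller deployment down, then back up
kubectl -n f5-namespace scale deployment f5-ingress --replicas=0
kubectl -n f5-namespace scale deployment f5-ingress --replicas=1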
Fix Text
The following fix is made under the assumption that the issue stems from a cache synchronization problem. However, if the root cause is an application or environmental issue, the CNF controller will not be able to resolve it.
The implementation that relied on the application pod to retrieve the target port has been replaced; the same information is now obtained directly from the Endpoints/EndpointSlices resources. The controller now relies only on Endpoints/EndpointSlices resources to perform service discovery.
If an endpoint that belongs to the service of interest lacks the port number specified in the service, the controller records the endpoint details stored in the cache to collect useful data for debugging. In that case, the controller also retrieves the endpoint details directly from the Kubernetes API server and logs the information.
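For a service that uses a named port, the numeric target port now comes from the Endpoints/EndpointSlices resources. A minimal sketch for inspecting both sides, assuming the placeholder names my-namespace and my-service:
# Service side: the targetPort may be a name rather than a number
kubectl -n my-namespace get service my-service -o jsonpath='{.spec.ports}'
# EndpointSlice side: the resolved numeric ports the controller discovers
kubectl -n my-namespace get endpointslices -l kubernetes.io/service-name=my-service -o jsonpath='{.items[*].ports}'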
Bug ID 1691473-1¶
Unable to disable RBAC settings for F5 version validator.
Affected Product(s)
BIG-IP NEXT (CNF)
Known Affected Versions
CNF-2.0.0
Component
Cert-Mgr
Symptoms
RBAC settings for the F5 version validator are created as part of the installation and cannot be disabled.
Impact
- If the installing user role lacks permission to create RBAC settings, the CNF installation will fail.
- If RBAC settings are configured separately before the installation, conflicts may occur, leading to CNF installation failure.
Conditions
- Open source cert-manager is enabled.
- RBAC settings are configured separately by a specific persona.
Workaround
None
Fix Text
You can now enable or disable RBAC creation via the values YAML file.
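As an illustration only: if the toggle is exposed as a Helm value, it could be set at install time as shown below. The key name rbac.create is hypothetical; consult the values YAML file shipped with this CNF release for the actual key and chart names.
# Hypothetical key name; check the release's values YAML for the real one
helm install <release-name> <cnf-chart> -n <namespace> --set rbac.create=false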
Known Issues¶
Bug ID 1854693-1¶
The container logs are not accessible due to ‘fsnotify’ errors.
Affected Product(s)
BIG-IP NEXT (CNF)
Known Affected Versions
CNF-2.0.0
Component
ingress
Symptoms
Container logs show the following error: “failed to create fsnotify watcher: too many open files”, and the complete log is not accessible.
Impact
This affects all containers, not only SPK/CNF containers, because the limits are system-wide.
Conditions
Accessing or trying to view the logs of any container with the ‘kubectl logs’ command.
Workaround
The system limits for the following parameters have to be increased with the commands below:
sysctl -w fs.inotify.max_user_watches=<num>
sysctl -w fs.inotify.max_user_instances=<num>
where <num> should be a larger number than the current value. The current values can be retrieved by issuing:
sysctl fs.inotify.max_user_watches
sysctl fs.inotify.max_user_instances
This number needs to be set based on the overall system resources.
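To keep the increased limits across reboots, they can also be persisted in a sysctl configuration file. The file name and values below are illustrative only; size them to your system.
# Persist the limits (example values only; tune for your system)
cat <<EOF > /etc/sysctl.d/90-inotify.conf
fs.inotify.max_user_watches=1048576
fs.inotify.max_user_instances=8192
EOF
sysctl --system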
Bug ID 1920305-1¶
Low transactions per second (TPS) observed for CGNAT 44 traffic with DAG
Affected Product(s)
BIG-IP NEXT (CNF)
Known Affected Versions
CNF-2.0.0
Component
DAG
Symptoms
Low transactions per second observed for the CGNAT 44 traffic with the DAG layer.
Impact
Low transactions per second observed for CGNAT 44 traffic.
Conditions
This issue occurs when the looseInitiation parameter is enabled in the DAG namespace with a high timeout configuration.
Workaround
- If there is no requirement for inbound connections, F5 recommends setting the looseInitiation parameter to false.
- If there is a requirement for inbound connections, F5 recommends setting the looseInitiation parameter to true, with a low timeout value such as 2 seconds. (A purely illustrative configuration sketch follows this list.)
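The exact place where looseInitiation is set depends on how the DAG namespace was deployed. Purely as an illustration, assuming hypothetically that it is exposed as a Helm value of the chart running in that namespace, disabling it could look like the following; consult your DAG chart’s values YAML for the real key names.
# Hypothetical value key; check the DAG chart's values YAML for the real name
helm -n <dag-namespace> upgrade <release-name> <dag-chart> --set looseInitiation=false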
For additional support resources and technical documentation, see:
- The F5 Technical Support website: http://www.f5.com/support/
- The MyF5 website: https://my.f5.com/manage/s/
- The F5 DevCentral website: http://community.f5.com/