BIG-IP Next for Kubernetes Fixes and Known Issues
This list highlights fixes and known issues for this BIG-IP Next for Kubernetes release.
Version: 2.0.0-LA
Build: 2.0.0-LA
Cumulative fixes from BIG-IP Next for Kubernetes v2.0.0-LA that are included in this release
Known Issues in BIG-IP Next for Kubernetes 2.0.0-LA
Cumulative fixes from BIG-IP Next for Kubernetes 2.0.0-LA that are included in this release
There are no cumulative fixes in this release.
Known Issues in BIG-IP Next for Kubernetes v2.0.0-LA
BIG-IP Next for Kubernetes Issues
ID Number | Severity | Links to More Info | Description
1756581 | 3-Major | | Deleting F5SPKVlan CR before F5SPKVXlan CR can cause TMM core
1754205 | 2-Critical | | TCP Offloading for Ingress Traffic Requires Tagged or Untagged VLANs on Both External and Internal Interfaces
1753117 | 2-Critical | | F5SPKVlan CR Status Not Updating to True After F5Ingress Restart
1714065 | 2-Critical | | QKView Utility does not successfully generate QKView files for dSSM Containers
1754853 | 2-Critical | | Unable to Retrieve mrfdb Records via CWC Debug API
1755645 | 2-Critical | | HTTP2 bi-directional traffic is not working as expected
1753561 | 2-Critical | | Traffic to ServiceType LoadBalancer Application Fails
1750345 | 2-Critical | | TMM pod recovers in 2.5 minutes after deletion
1757573 | 2-Critical | | Logs from certain containers in the F5ingress pod are unavailable in TODA
1754025 | 3-Major | | Orchestrator supports attaching only one SNAT pool through the SPKInstance CR
1754001 | 3-Major | | Configuring the same F5SPKSnatpool CR in multiple F5SPKEgress CRs can impact the Catch-all listener functionality
1753689 | 3-Major | | iHealth Dashboard Displays Incorrect Platform and Version Information
1754865 | 3-Major | | F5Ingress and RabbitMQ Revision Numbers Continuously Incrementing in helm list Output
1750961 | 3-Major | | BIG-IP Next for Kubernetes deployment enters a restart loop when the application namespace (watchNamespace) is not found
1756665 | 3-Major | | Ingress traffic routes through eth0 instead of the internal interface when either L4Route or HttpRoute CR is deleted
1757625 | 3-Major | | QKView Does Not Collect Auxiliary Information for Some Containers
1757297 | 3-Major | | The Kubernetes Storage Class must be created as a prerequisite for the QKView API functionality to work
1755165 | 3-Major | | BIG-IP Next for Kubernetes Controller and TMM installation failure with Orchestrator
1757385 | 3-Major | | Logs for f5-tmm-routing container are not available through Fluentd/Fluentbit
1753021 | 3-Major | | Deleting the F5SPKVlan/F5SPKVXlan CR referenced in the F5SPKEgress CR without first deleting the F5SPKEgress CR impacts VXLAN interfaces on the nodes
Known Issue details for BIG-IP Next for Kubernetes v2.0.0-LA
1756581 : Deleting F5SPKVlan CR before F5SPKVXlan CR can cause TMM core
Component: BIG-IP Next for Kubernetes
Symptoms:
Communication on the VXLAN interface may fail and TMM may core when the F5SPKVlan CR referenced in an F5SPKVXlan CR is deleted before the F5SPKVXlan CR itself is deleted.
Conditions:
1. An F5SPKVXlan CR is configured to reference an F5SPKVlan CR.
2. The F5SPKVlan CR is deleted before the corresponding F5SPKVXlan CR is removed.
3. The VXLAN resource is removed from the F5SPKEgress CR.
Impact:
Deleting an F5SPKVlan CR while it is still referenced by an F5SPKVXlan CR can disrupt the VXLAN interface in TMM, potentially causing communication failures. To avoid this issue, always delete the F5SPKVXlan CR before removing the F5SPKVlan CR it references.
Workaround:
None
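For reference, a minimal shell sketch of the recommended deletion order; the CR names, namespace, and lowercase resource kinds are placeholders and assume the CRDs are registered under those names:
# Delete the F5SPKVXlan CR first, then the F5SPKVlan CR it references (placeholder names).
kubectl delete f5spkvxlan <vxlan-cr-name> -n <config-namespace>
kubectl delete f5spkvlan <vlan-cr-name> -n <config-namespace>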
1754205 : TCP Offloading for Ingress Traffic Requires Tagged or Untagged VLANs on Both External and Internal Interfaces
Component: BIG-IP Next for Kubernetes
Symptoms:
TCP offloading fails to function when there is a mismatch in VLAN configurations, such as one VLAN being tagged and the other untagged, between the external and internal VLAN interfaces.
Conditions:
1. F5SPKVlan CRD Configuration: One VLAN is configured as tagged, while the other is configured as untagged.
2. TCP Offloading: TCP offloading is enabled on the system.
Impact:
TCP offloading does not function, leading to longer-than-normal delays in traffic processing.
Workaround:
None
1753117 : F5SPKVlan CR Status Not Updating to True After F5Ingress Restart
Component: BIG-IP Next for Kubernetes
Symptoms:
When F5Ingress is restarted, some existing F5SPKVlan CRs do not update their "Ready" status to "True", even though the CR configuration is sent to all gRPC endpoints.
Conditions:
Restart of F5Ingress
Impact:
There is no impact on functionality. However, when queried for the F5SPKVlan CR, an incorrect "Ready" state and message are displayed.
Workaround:
The F5SPKVlan CR configuration sent to all gRPC endpoints can be found in the F5Ingress logs.
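For example, a sketch of searching those logs; the deployment name and namespace are placeholders that depend on your installation:
kubectl logs deploy/<f5ingress-deployment> -n <controller-namespace> | grep -i f5spkvlan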
1714065 : QKView Utility does not successfully generate QKView files for dSSM Containers
Component: BIG-IP Next for Kubernetes
Symptoms:
The QKView Utility is unable to generate a QKView TAR file for Distributed Session State Management (dSSM) containers.
Conditions:
A QKView is being generated on the BIG-IP Next for Kubernetes to collect system logs and configuration details.
Impact:
QKView is unable to collect information from the dSSM container, resulting in incomplete or missing data in the generated QKView file.
Workaround:
To obtain debug logs from the dSSM container, extract the f5-fluentd-f5-toda-fluentd.default.tar.gz archive, which is created as a result of the QKView API request.
The log is located at qkview/subpackages/qkview/default/dssm-f5-dssm-db-0/f5-dssm/f5-dssm.log
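For example, a sketch of extracting the archive and reading the log at the path given above:
tar -xzf f5-fluentd-f5-toda-fluentd.default.tar.gz
less qkview/subpackages/qkview/default/dssm-f5-dssm-db-0/f5-dssm/f5-dssm.log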
1757573 : Logs from certain containers in the F5ingress pod are unavailable in TODA
Component: BIG-IP Next for Kubernetes
Symptoms:
Only logs from the main F5ingress container are sent to Fluentd, while logs from other containers in the F5ingress pod are not captured.
Conditions:
1. TODA log collection and viewing
2. QkView API
Impact:
1. You may be unable to view all logs via Fluentd.
2. If a pod restarts or logs are rotated, the logs from before the restart/rotation will not be available in the QKView.
Workaround:
None
1754025 : Orchestrator supports attaching only one SNAT pool through the SPKInstance CR.
Component: BIG-IP Next for Kubernetes
Symptoms:
The SPKInstance CR includes a field to specify a Shared SNAT pool name:
  controller:
    egress:
      snatpoolName:
In wholeClusterMode, each application namespace requires its own F5SPKEgress CR and F5SPKSnatpool CR. However, the SPKInstance CR does not support the attachment of multiple SNAT pools, limiting the ability to manage separate SNAT pools for each namespace.
Conditions:
The 'snatpoolName' field in the SPKInstance CR accepts only a single SNAT pool name.
Impact:
If you set the 'snatpoolName' field in the SPKInstance configuration while using wholeClusterMode, the setting has no effect.
For example:
SPKInfrastructure Configuration:
  wholeClusterMode: "enabled"
SPKInstance Configuration:
  controller:
    egress:
      snatpoolName: "egress_snatpool"  # This setting will be ignored.
Instead, you must manually create separate SNAT pool configurations for each application namespace; the system does not let you use a single SNAT pool across all namespaces in wholeClusterMode.
Workaround:
If wholeClusterMode is enabled, you can ignore the 'snatpoolName' field in the SPKInstance CR.
Configuring the F5SPKEgress and F5SPKSnatpool CRs separately for each application namespace is sufficient for the egress use case to function correctly.
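For example, a minimal sketch of the per-namespace pattern; the manifest file names are placeholders, and each file is assumed to contain the F5SPKSnatpool or F5SPKEgress CR for that application namespace:
# One F5SPKSnatpool CR and one F5SPKEgress CR per application namespace (placeholder files).
kubectl apply -f snatpool-app-ns1.yaml -f egress-app-ns1.yaml
kubectl apply -f snatpool-app-ns2.yaml -f egress-app-ns2.yaml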
1754001 : Configuring the same F5SPKSnatpool CR in multiple F5SPKEgress CRs can impact the Catch-all listener functionality.
Component: BIG-IP Next for Kubernetes
Symptoms:
When you configure the same F5SPKSnatpool CR across multiple F5SPKEgress CRs (for example, with an IPv4 address in the SNAT pool), an EDENY error may occur for the IPv4 catch-all listener on the other F5SPKEgress CRs.
Conditions:
1. Apply two or more F5SPKVXLAN CRs, with a SNAT pool configured with an IPv4 address.
2. Configure F5SPKEgress CRs for each VXLAN.
3. Use the same F5SPKSnatpool CR across all the F5SPKEgress CRs.
Impact:
Configuring the same F5SPKSnatpool CR in multiple F5SPKEgress CRs can affect the Catch-all listener functionality, because the address used in the SNAT pool is already in use by one of the F5SPKEgress CRs. Therefore, it is recommended not to use the same SNAT pool across multiple Egress CRs, to avoid conflicts and ensure proper traffic handling.
Workaround:
None
1753689 : iHealth Dashboard Displays Incorrect Platform and Version Information
Component: BIG-IP Next for Kubernetes
Symptoms:
In the iHealth dashboard (ihealth.f5.com/qkview-analyzer), the Platform field displays "Could not determine," and the Version field shows the default version (1.0.0) instead of the actual Platform and Version data.
Conditions:
iHealth is unable to discover the hostname and Platform information from a QKView generated using the CWC Debug API.
Impact:
The iHealth dashboard does not display the actual Platform and Version information for the uploaded QKView.
Workaround:
None
1754853 : Unable to Retrieve mrfdb Records via CWC Debug API
Component: BIG-IP Next for Kubernetes
Symptoms:
When executing the mrfdb command using the job ID in the Debug API, no output is returned.
Conditions:
1. Install BIG-IP Next for Kubernetes using the orchestrator tool.
2. Deploy a UDP app and send ingress traffic to the app.
3. Execute the Debug API for the mrfdb command to retrieve the Job ID.
4. Query the Debug API using the Job ID.
5. Observe that no response is returned for the curl command.
Impact:
mrfdb details cannot be retrieved using the Debug API.
Workaround:
To retrieve mrfdb details, exec into the debug container within the TMM Pod and execute the mrfdb command:
1. Use the following command to access the debug container:
kubectl exec -it ds/f5-tmm -c debug -n <namespace> -- bash
For example:
kubectl exec -it ds/f5-tmm -c debug -n f5-utils -- bash
2. Run the mrfdb command to fetch the details:
/mrfdb -ipport <ip:port> -serverName <server-name> -displayAllBins
For example:
/mrfdb -ipport 10.101.197.166:26379 -serverName dssm-svc -displayAllBins
This allows you to manually retrieve the mrfdb information.
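The two steps can also be combined into a single command, reusing the example values above:
kubectl exec -it ds/f5-tmm -c debug -n f5-utils -- /mrfdb -ipport 10.101.197.166:26379 -serverName dssm-svc -displayAllBins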
1755645 : HTTP2 bi-directional traffic is not working as expected
Component: BIG-IP Next for Kubernetes
Symptoms:
HTTP2 bi-directional traffic is not working as expected, resulting in failed traffic routing and misdirected requests.
Conditions:
Traffic sent from an application pod, intended for bi-directional persistence, uses the Node's IP address instead of the application pod's IP address.
Impact:
The bi-directional persistence mechanism does not work as expected, leading to failed traffic routing and misdirected requests. This results in improper handling of application traffic and potential disruptions to the expected behavior of the application.
Workaround:
None
1753561 : Traffic to ServiceType LoadBalancer Application Fails
Component: BIG-IP Next for Kubernetes
Symptoms:
The kernel route for a ServiceType LoadBalancer application is not created in the Traffic Management Microkernel (TMM) when the F5SPKServiceTypeLBIpPool CR is deployed in the `app-ns` namespace.
Conditions:
F5SPKServiceTypeLBIpPool CR is deployed in the `app-ns` namespace.
Impact:
Traffic to the ServiceType LoadBalancer application fails.
Workaround:
None
1750345 : TMM pod recovers in 2.5 minutes after deletion
Component: BIG-IP Next for Kubernetes
Symptoms:
With containerd version 1.7.x, the CNI `Del` and `Add` interface encounters the following failure:
"Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400."
Conditions:
This issue is observed with containerd version 1.7.x on some systems, where moving the SFs from the TMM network namespace to the host network namespace takes about 100 seconds when the TMM pod is deleted.
Impact:
There is a delay in the TMM pod reaching a stable state after the TMM pod is deleted.
Workaround:
On some systems with containerd version 2.0.0 and runc version 1.2.1, no significant delay is observed when moving the SFs from the TMM network namespace to the host network namespace after deleting the TMM pod.
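To check which runtime versions a node is running, for example:
# Run on the Kubernetes node to check the container runtime versions.
containerd --version
runc --version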
1754865 : F5Ingress and RabbitMQ Revision Numbers Continuously Incrementing in helm list Output
Component: BIG-IP Next for Kubernetes
Symptoms:
After installing the Orchestrator and creating at least one SPKInstance and SPKInfrastructure combination, the revision numbers for f5ingress and rabbitmq in the helm list output continuously increase, even though no updates or changes are made.
Conditions:
1. The Orchestrator must be installed, and BIG-IP Next for Kubernetes must be deployed using the Orchestrator.
2. The Orchestrator pod must remain running during this process.
Impact:
There is no impact on functionality; however, the retried content clutters the Orchestrator logs and causes the revision numbers in the helm list to become less useful.
Workaround:
None
1750961 : BIG-IP Next for Kubernetes deployment enters a restart loop when the application namespace (watchNamespace) is not found
Component: BIG-IP Next for Kubernetes
Symptoms:
During BIG-IP Next for Kubernetes installation through the orchestrator, if the watchNamespace is set to a non-existent namespace, all pods continuously restart in a loop.
Conditions:
Install the orchestrator, create the SPKInfrastructure CR, and then create an SPKInstance CR with the watchNamespace filled with a non-existent namespace.
If the watchNamespace field in the SPKInstance CR is left empty, the pods will start correctly. However, the f5ingress will not monitor any namespaces for configuration.
Impact:
BIG-IP Next for Kubernetes enters a constant loop of pods being created and terminated, leaving the product unable to perform any other operations.
Workaround:
Ensure that the watchNamespace field is populated with at least one valid and existing namespace, and that the specified namespace (or all specified namespaces, if multiple) are created in the cluster before proceeding with the BIG-IP Next for Kubernetes installation.
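For example, a sketch with a placeholder namespace name:
# Create the application namespace before installation and confirm that it exists.
kubectl create namespace <watch-namespace>
kubectl get namespace <watch-namespace>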
1756665 : Ingress traffic routes through eth0 instead of the internal interface when either L4Route or HttpRoute CR is deleted
Component: BIG-IP Next for Kubernetes
Symptoms:
Ingress traffic routes through eth0 instead of the internal interface when either L4Route or HttpRoute CR is deleted.
Conditions:
L4Route and HttpRoute CRs are configured to be served by the same service (or any two ingress use cases (CRs) are configured to be served by the same service).
Impact:
Traffic flows through eth0 instead of the internal interface, leading to reduced throughput.
Workaround:
If you have multiple F5SPKIngress CRs using the same service and want to delete one of them, delete all CRs for that service and then re-apply the ones you want to keep, leaving out the one you want to delete.
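For example, a sketch with placeholder manifest files, keeping only the HttpRoute CR:
# Delete every CR that shares the service, then re-apply only the ones you want to keep.
kubectl delete -f l4route.yaml -f httproute.yaml
kubectl apply -f httproute.yaml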
1757625 : QKView Does Not Collect Auxiliary Information for Some Containers
Component: BIG-IP Next for Kubernetes
Symptoms:
QKView does not collect auxiliary information, such as disk space and system uptime, for some containers.
Conditions:
QKView API
Impact:
QKView will not include non-critical information, such as disk space usage or system uptime, for some containers.
Workaround:
None
1757297 : The Kubernetes Storage Class must be created as a prerequisite for the QKView API functionality to work
Component: BIG-IP Next for Kubernetes
Symptoms:
QKView logs panic errors in the CWC container and QKView API will not function as expected.
Conditions:
This issue occurs when a storage class is not installed on the cluster before SPK/CWC is installed.
Impact:
QKView API will not function as expected.
Workaround:
None
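Before installing SPK/CWC, you can confirm that a storage class exists on the cluster, for example:
kubectl get storageclass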
1755165 : BIG-IP Next for Kubernetes Controller and TMM installation failure with Orchestrator
Component: BIG-IP Next for Kubernetes
Symptoms:
1. Under specific conditions, the BIG-IP Next for Kubernetes Controller and TMM may fail to install via the Orchestrator.
2. The Orchestrator logs display the error: "f5-toda-logging.deployment not defined"
Conditions:
1. Installing BIG-IP Next for Kubernetes using the Orchestrator
2. The other conditions that trigger this issue are unknown.
Impact:
BIG-IP Next for Kubernetes cannot function because the Controller and TMM are not installed.
Workaround:
Uninstall BIG-IP Next for Kubernetes by deleting the SPKInfrastructure and SPKInstance CRs. Restart the Orchestrator, then reapply the CRs to reinstall BIG-IP Next for Kubernetes. Multiple retries may be needed to achieve a successful installation.
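For example, a sketch of that sequence; the manifest file names, the Orchestrator deployment name, and the namespace are placeholders:
kubectl delete -f spkinstance.yaml -f spkinfrastructure.yaml
kubectl rollout restart deployment/<orchestrator-deployment> -n <orchestrator-namespace>
kubectl apply -f spkinfrastructure.yaml -f spkinstance.yaml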
1757385 : Logs for f5-tmm-routing container are not available through Fluentd/Fluentbit
Component: BIG-IP Next for Kubernetes
Symptoms:
For Fluentd logging, processes are expected to write their standard output and error streams to specific named pipes in the filesystem. In the f5-tmm-routing container, the mapping of standard output and error to these pipes is incomplete. As a result, logs generated by this container are not captured by the standard logging package.
Conditions:
This issue occurs when the environment variable EXPORT_ZEBOS_LOGS="true" is set.
Impact:
1. Users may encounter an error message such as:
"Error opening pipe for write '/var/log/f5/f5-tmm-routing_f5-tmm-sbjlh_stdout.pipe': No such device or address (os error 6)".
2. Logs for the f5-tmm-routing container will not be accessible through Fluentd.
Workaround:
1. Ensure that the environment variable EXPORT_ZEBOS_LOGS is unset.
2. Use the kubectl logs command to retrieve logs directly.
For example:
kubectl logs <pod-name> -c f5-tmm-routing
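A fuller sketch, assuming the TMM DaemonSet is named f5-tmm as in the examples above and using a placeholder namespace:
# Confirm that EXPORT_ZEBOS_LOGS is not set on the routing container.
kubectl set env ds/f5-tmm -c f5-tmm-routing --list -n <tmm-namespace>
# Read the routing container logs directly from the DaemonSet.
kubectl logs ds/f5-tmm -c f5-tmm-routing -n <tmm-namespace>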
1753021 : Deleting the F5SPKVlan/F5SPKVXLAN CR referenced in the F5SPKEgress CR without first deleting the F5SPKEgress CR impacts VXLAN interfaces on the nodes
Component: BIG-IP Next for Kubernetes
Symptoms:
The F5SPKVlan/F5SPKVXLAN CR referenced in the F5SPKEgress CR can be removed. However, a warning will be displayed if you delete the F5SPKVlan/F5SPKVXLAN CR while it is still referenced in the F5SPKEgress CR.
Conditions:
1. F5SPKEgress CR with PseudoCNI enabled, referencing a VXLAN resource.
2. The VXLAN resource is removed from the F5SPKEgress CR.
Impact:
Removing the F5SPKVXLAN CR will affect the VXLAN interfaces on the nodes and may result in the deletion of PseudoCNI routes. Therefore, it is strongly recommended not to delete the F5SPKVlan/F5SPKVXLAN CR referenced in the F5SPKEgress CR without first deleting the F5SPKEgress CR.
Workaround:
None