BIG-IP Next Fixes and Known Issues
This list highlights fixes and known issues for this BIG-IP Next release.
Version: 20.3.0
Build: 2.716.2+0.0.50
Known Issues in BIG-IP Next v20.3.0
Cumulative fixes from BIG-IP Next v20.3.0 that are included in this release
Vulnerability Fixes
ID Number | CVE | Links to More Info | Description |
1449709-6 | CVE-2024-28889 | K000138912, BT1449709 | Possible TMM core under certain Client-SSL profile configurations |
1663073 | CVE-2024-24790 | K000141251 | CVE-2024-24790 golang: net/netip: Unexpected behavior from Is methods for IPv4-mapped IPv6 addresses |
1662993 | CVE-2024-24790 | K000141251 | CVE-2024-24790: golang: net/netip: Unexpected behavior from Is methods for IPv4-mapped IPv6 addresses |
1506949 | CVE-2024-0727 | K000138695 | CVE-2024-0727 openssl: denial of service via null dereference |
BIG-IP Next Fixes
ID Number | Severity | Links to More Info | Description |
1630885 | 0-Unspecified | | CVE-2023-45142 - OpenTelemetry - potential memory exhaustion in otelhttp |
1671813 | 1-Blocking | | Upgrade shows complete at less than 100% progress bar |
1671721 | 1-Blocking | | Central Manager may hang permanently during the initial install process★ |
1669641 | 1-Blocking | | Central Manager might not mount all filesystems properly during a reboot if external storage is configured★ |
1634149 | 1-Blocking | | Node Validation improvements |
1630093 | 1-Blocking | | CM silently requires an internet connection when creating instances using the F5OS-based provider |
1612377 | 1-Blocking | K000140722 | Central Manager cannot manage Provider if Provider certificate changes |
1611077 | 1-Blocking | | UI does not load after upgrading★ |
1602141 | 1-Blocking | | Invalid certificates can disrupt configuration and status updates |
1601221 | 1-Blocking | | CM erroneously reports failover has failed during BIG-IP Next upgrade★ |
1600445 | 1-Blocking | | Historic telemetry collected by BIG-IP Next Central Manager may be lost |
1599305 | 1-Blocking | | After upgrading, unable to edit the Central Manager part of policies attached to the applications★ |
1597037 | 1-Blocking | | Adding a new TLS instance to an existing application (a default TLS instance) fails to flow traffic as expected |
1593605 | 1-Blocking | | HTTPS Traffic not working on BIG-IP Next HA formed from Central Manager with SSL Orchestrator topology |
1590065 | 1-Blocking | | The same gateway address is not considered valid on multiple static routes |
1589069 | 1-Blocking | | AS3 application health status and alerts in the UI stay healthy and green, regardless of the application health |
1586501 | 1-Blocking | K000140380 | Configuring external logger in Instance Log Management halts telemetry reception in Central Manager and other configured external loggers |
1585793 | 1-Blocking | | The f5-fsm-tmm crashes upon configuring BADOS under traffic |
1584753 | 1-Blocking | K000139851 | TMM in BIG-IP Next expires the license after 50 days |
1576545-1 | 1-Blocking | | After upgrade, BIG-IP Next tenant is unable to export toda-otel (event logs) data to Central Manager★ |
1561053 | 1-Blocking | | Application migration status incorrectly labeled as green when certain properties are removed |
1269733-6 | 1-Blocking | BT1269733 | HTTP GET request with headers has incorrect flags causing timeout |
1670441 | 2-Critical | | Central Manager application migration tool is unable to receive very large UCS files |
1642165 | 2-Critical | | Central Manager could fail to onboard a BIG-IP Next instance even after setup appears complete |
1631197 | 2-Critical | | CVE-2024-41110 moby: Authz zero length regression |
1619945 | 2-Critical | | Boot time on KVM is excessively long when no management IP is assigned |
1612225 | 2-Critical | | Unable to Initiate BIG-IP Next Instance Upgrade |
1602697-3 | 2-Critical | | Full-proxy HTTP/2 may allow unconstrained buffering |
1602561 | 2-Critical | | Inspection services cannot be deployed when one of the instances managed by BIG-IP Next Central Manager is in unhealthy state |
1601949 | 2-Critical | | Moving a self IP from one VLAN to another VLAN across L1 networks may cause self IP unreachable |
1591209 | 2-Critical | | Unable to force re-authentication on IDP when BIG-IP Next is acting as SAML SP |
1587445 | 2-Critical | | WAF enforcer crash during handling of a specific HTTP POST request |
1587337 | 2-Critical | | HA cluster on CM UI could be unhealthy during standby upgrade★ |
1584741 | 2-Critical | | In the Table commands in iRule, the subtable count command fails in BIG-IP Next 20.x |
1584681 | 2-Critical | | Application service creation fails if name contains "fallback" |
1584073-1 | 2-Critical | | WAF enforcer might crash when application is removed during handling traffic |
1580181 | 2-Critical | | When BIG-IP Next HA is created using CM, the spinner does not refresh |
1579365 | 2-Critical | | Unsupported nested properties are not underlined during application migration process |
1571993 | 2-Critical | | Access Session data is not cleared after TMM restart |
1564157 | 2-Critical | | BIG-IP Next Central Manager requires VELOS/rSeries systems to use an SSL certificate containing the host IP address in the CN or SANs list★ |
1560493 | 2-Critical | | Inaccurate reflection of self IP prefix length in TMM statistics and "ip addr" output |
1455677-3 | 2-Critical | | ACCESS Policy hardening |
1399137 | 2-Critical | | "40001: bind: address already in use" failure logs on BIG-IP Next HA setup |
1329853 | 2-Critical | | Application traffic is intermittent when more than one virtual server is configured |
1678537-1 | 3-Major | | CVE-2024-6232: ReDoS vulnerability in Python tarfile module can cause crash |
1671069 | 3-Major | | CVE-2024-6119: OpenSSL vulnerability |
1634109 | 3-Major | | Instance Creation Failed for rSeries 2k and 4k★ |
1633977 | 3-Major | | On rSeries system, operations which involve reboot, might result in Tenant failure state★ |
1630889 | 3-Major | | CVE-2023-45288 - golang: x/net/http2: unlimited number of CONTINUATION frames causes DoS |
1630877 | 3-Major | | CVE-2023-44487: Multiple HTTP/2 enabled web servers are vulnerable to a DDoS attack (Rapid Reset Attack) |
1629793 | 3-Major | | WebSocket messages do not arrive at the server when using a WAF policy |
1627337 | 3-Major | | Removed deprecated APIs |
1610997 | 3-Major | | CM Scale: waf-policy-builder in CrashLoopBackoff during WAF policies deployment |
1607837 | 3-Major | | BIG-IP Next Central Manager does not support NTP configuration via cloud-init |
1593381-1 | 3-Major | | When upgrade fails, release version displayed in GUI is different from CLI release version★ |
1592929 | 3-Major | | Attaching or detaching of an iRule version is not supported for AS3 application |
1592589 | 3-Major | | Suggestion details page for WAF policy with "on-demand" learning mode includes incorrect operations options |
1589577 | 3-Major | | When no token exists, LLM log writes "LICENSING-1116:DecryptionFailed" |
1587497-1 | 3-Major | | WAF security report shows alerted requests even though no alerts were generated |
1585773-1 | 3-Major | | Unable to migrate large number of applications at once |
1585285 | 3-Major | | Unable to stage applications for migration when session contains large number of application services |
1580545 | 3-Major | | iRule allows function local variable |
1569589 | 3-Major | | Default values of Access policy are not migrated |
1560473 | 3-Major | | Traffic won't work with http monitor for L3, http-transparent service |
1472669-1 | 3-Major | | Idle timer in BIG-IP Next Central Manager can log out user during file uploads★ |
1348837 | 3-Major | | Admin can delete their own account |
1348833 | 3-Major | | A cryptographically insecure pseudo-random number generator was used to create passwords during the reset process |
1309265 | 3-Major | | CVE-2022-41723 golang.org/x/net vulnerable to Uncontrolled Resource Consumption |
1309257 | 3-Major | | CVE-2022-41715 potential golang regex DoS |
1308845 | 3-Major | | CVE-2022-46146 exporter-toolkit: authentication bypass via cache poisoning |
1251181 | 3-Major | | VLAN names longer than 15 characters can cause issues with troubleshooting |
1232521-6 | 3-Major | | SCTP connection sticking on BIG-IP even after connection terminated |
1572437 | 4-Minor | | CVE-2024-0450: python: The zipfile module is vulnerable to zip-bombs leading to denial of service |
1531845 | 4-Minor | | CVE-2023-27043: python: Parsing errors in email/_parseaddr.py lead to incorrect value in email address part of tuple |
1516785 | 4-Minor | | CVE-2023-49081: aiohttp: HTTP request modification |
1509361 | 4-Minor | | CVE-2023-50782 python-cryptography: Bleichenbacher timing oracle attack against RSA decryption |
1507021 | 4-Minor | | CVE-2023-45803: urllib3: Request body not stripped after redirect |
1498489 | 4-Minor | | LDAP Bind Password not re-populated in BIG-IP Next Central Manager GUI |
1490381 | 4-Minor | | Pagination for iRules page not supported with a large number of iRules |
1472337 | 4-Minor | | Missing object referenced in authenticationTrustCA |
1394625 | 4-Minor | | Application service fails to deploy even if marked as green (ready to deploy) |
Cumulative fix details for BIG-IP Next v20.3.0 that are included in this release
1678537-1 : CVE-2024-6232: ReDoS vulnerability in Python tarfile module can cause crash
Component: BIG-IP Next
Symptoms:
Regular expressions that allow excessive backtracking during tarfile.TarFile header parsing are vulnerable to ReDoS via a specially crafted UCS file.
Conditions:
A user uploads a UCS file for migration.
Impact:
UCS load might crash or timeout.
Workaround:
Never upload untrusted UCS files.
Fix:
Python has been updated to a non-vulnerable version.
1671813 : Upgrade shows complete at less than 100% progress bar
Component: BIG-IP Next
Symptoms:
The upgrade progress bar may incorrectly show that the upgrade is complete and allow users to close the progress bar.
Conditions:
Upgrade is from either 20.2.0 or 20.2.1 to 20.3.0 and CM is in HA mode (3 nodes).
Impact:
During a CM HA upgrade from 20.2.0 or 20.2.1 to 20.3.0, the "BIG-IP NEXT CM upgrade in Progress" dialog may incorrectly indicate that the upgrade is complete. This can allow users to close the progress bar and navigate the CM while the upgrade is still ongoing. However, users will soon be redirected back to the maintenance page as the upgrade continues.
Workaround:
Once the CM HA upgrade begins, wait until the progress bar reaches 100% before attempting to navigate the system.
Fix:
The upgrade progress bar now correctly displays the status of the upgrade.
1671721 : Central Manager may hang permanently during the initial install process★
Component: BIG-IP Next
Symptoms:
The CM install process can be initiated by clicking the button in the GUI after logging in for the first time.
CM install can also be done via CLI:
/opt/cm-bundle/cm install
After that the system may hang during the install.
To verify this specific defect, check the end of the /var/log/central-manager/central-manager-cli.log file and it will end here:
Release "kafka" does not exist. Installing it now.
NAME: kafka
LAST DEPLOYED: Sat Aug 3 11:02:57 2024
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
2024-08-03T11:03:21+00:00 info: Installing central manager application resources...
secret/cm-bootstrap-status patched
Release "mbiq" does not exist. Installing it now.
Conditions:
- New Central Manager deployment
- "setup" has been run
- The CM install process has been started via web UI or CLI
Impact:
Central Manager install never completes.
Workaround:
There are two options:
1. Delete the Central Manager instance and deploy a new one.
2. At the CLI, run uninstall and install again:
# /opt/cm-bundle/cm uninstall
Wait for this to complete, then:
/opt/cm-bundle/cm install
Fix:
Central Manager will not hang during install.
1671069 : CVE-2024-6119: OpenSSL vulnerability
Component: BIG-IP Next
Symptoms:
Applications performing certificate name checks (e.g., TLS clients checking server certificates) may attempt to read an invalid memory address resulting in abnormal termination of the application process.
Conditions:
Applications performing certificate name checks between an expected name and an `otherName` subject alternative name of an X.509 certificate.
The FIPS modules in 3.3, 3.2, 3.1 and 3.0 are not affected by this issue.
Impact:
Denial of service can occur only when the application also specifies an expected DNS name, Email address or IP address.
Workaround:
NA
Fix:
Applied the patch for openssl CVE-2024-6119
1670441 : Central Manager application migration tool is unable to receive very large UCS files
Component: BIG-IP Next
Symptoms:
A UCS file will fail to upload to the Central Manager application migration manager when the file is very large.
Conditions:
- Upload UCS file to Central Manager
- File is roughly over 1.4GB
Impact:
File will fail to upload.
Workaround:
UCS files don't need to be so large for the config migration. They can become very large due to archived epsec files.
An administrator can untar the file on a Linux system, remove the epsec files, and tar it back up again.
Alternatively and more easily, follow the instructions here and gather a new UCS file:
K21175584: Removing unnecessary OPSWAT EPSEC packages from the BIG-IP APM system
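The manual repackaging described above can be sketched in shell. This is a hedged sketch: the epsec path `var/lib/apm/epsec` inside the UCS and the file names are assumptions for illustration (list your real archive with `tar -tzf` to find the actual paths); a dummy archive stands in for a real UCS so the steps are runnable end to end.

```shell
#!/bin/sh
# Sketch: strip archived epsec files from a UCS (a UCS is a gzipped tarball).
# The epsec path below is an assumption; check your archive with: tar -tzf config.ucs
set -e

work=$(mktemp -d)
cd "$work"

# --- stand-in for a real UCS: build a dummy archive containing an epsec file ---
mkdir -p src/config src/var/lib/apm/epsec
echo "bigip config" > src/config/bigip.conf
echo "epsec blob"   > src/var/lib/apm/epsec/epsec-1.0.0.iso
tar -czf config.ucs -C src .

# --- the actual workaround: unpack, delete the epsec files, repack ---
mkdir unpacked
tar -xzf config.ucs -C unpacked
rm -rf unpacked/var/lib/apm/epsec
tar -czf config-small.ucs -C unpacked .

# The repacked UCS no longer contains the epsec payload
tar -tzf config-small.ucs
```

The repacked `config-small.ucs` is what you would then upload to the migration tool.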
1669641 : Central Manager might not mount all filesystems properly during a reboot if external storage is configured★
Component: BIG-IP Next
Symptoms:
The Central Manager VM might not properly mount all filesystems after a reboot when external storage is configured. The following symptoms have been observed in such cases:
- Subsequent upgrade to the BIG-IP Next 20.3.0 release failed due to an error writing to the /tmp directory as the admin user.
- Multiple pods remain stuck with the CreateContainerConfigError due to a missing volume.
- One or both of the /opt/cm-backup and /opt/cm-qkview directories are no longer using external storage.
Conditions:
When External storage is configured and the BIG-IP Next Central Manager is rebooted.
Impact:
If the Central Manager VM failed to mount all filesystems properly after a reboot, future operations on the Central Manager would fail.
Workaround:
Before rebooting the system, including prior to upgrading, modify the /etc/fstab and /etc/fstab.external-storage files to add the _netdev mount option immediately after the nosuid mount option for the mounts of the /opt/cm-backup and /opt/cm-qkview directories, as shown below. This must be done on every node in a multi-node cluster. If the system has already been rebooted and is experiencing any of the documented symptoms, this workaround can be applied, followed by rebooting the VM to recover.
$ cat /etc/fstab
...
10.1.1.1:/export/data /mnt/external-storage nfs auto,nofail,noatime,noexec,nosuid,nodev,nolock,tcp,actimeo=1800,retry=2,_netdev 0 0
/mnt/external-storage/a05d57ec-2703-47b8-a742-ca1c2148ad2b/cm-backup /opt/cm-backup none bind,rw,auto,nouser,nodev,noatime,exec,nosuid,_netdev 0 0
/mnt/external-storage/a05d57ec-2703-47b8-a742-ca1c2148ad2b/cm-qkview /opt/cm-qkview none bind,rw,auto,nouser,nodev,noatime,exec,nosuid,_netdev 0 0
$ cat /etc/fstab.external-storage
10.1.1.1:/export/data /mnt/external-storage nfs auto,nofail,noatime,noexec,nosuid,nodev,nolock,tcp,actimeo=1800,retry=2,_netdev 0 0
/mnt/external-storage/7cdb4e2f-4c2f-42bf-b759-f106da06100d/cm-backup /opt/cm-backup none bind,rw,auto,nouser,nodev,noatime,exec,nosuid,_netdev 0 0
/mnt/external-storage/7cdb4e2f-4c2f-42bf-b759-f106da06100d/cm-qkview /opt/cm-qkview none bind,rw,auto,nouser,nodev,noatime,exec,nosuid,_netdev 0 0
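The fstab edit can be applied mechanically on each node; a minimal sketch, demonstrated on a scratch copy rather than the live `/etc/fstab` (the UUID path is a placeholder, and the sed expression only touches the cm-backup and cm-qkview bind-mount lines):

```shell
#!/bin/sh
# Sketch: insert the _netdev option right after nosuid on the cm-backup and
# cm-qkview bind-mount lines. Shown on a temp file; on a real node you would
# edit /etc/fstab and /etc/fstab.external-storage (after taking backups).
set -e

f=$(mktemp)
cat > "$f" <<'EOF'
/mnt/external-storage/uuid/cm-backup /opt/cm-backup none bind,rw,auto,nouser,nodev,noatime,exec,nosuid 0 0
/mnt/external-storage/uuid/cm-qkview /opt/cm-qkview none bind,rw,auto,nouser,nodev,noatime,exec,nosuid 0 0
EOF

# Add _netdev after nosuid only on the relevant lines, and only if it is not
# already present (so the edit is safe to re-run).
sed -i \
  -e '/\/opt\/cm-backup/{/_netdev/!s/nosuid/nosuid,_netdev/;}' \
  -e '/\/opt\/cm-qkview/{/_netdev/!s/nosuid/nosuid,_netdev/;}' \
  "$f"
cat "$f"
```

Re-running the script is a no-op because the `/_netdev/!` guard skips lines that already carry the option.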
Fix:
The _netdev mount option will be automatically added during the upgrade to the BIG-IP Next 20.3.0 release. This new mount option will also be included in fresh installations starting with the BIG-IP Next 20.3.0 release.
1663073 : CVE-2024-24790 golang: net/netip: Unexpected behavior from Is methods for IPv4-mapped IPv6 addresses
Links to More Info: K000141251
1662993 : CVE-2024-24790: golang: net/netip: Unexpected behavior from Is methods for IPv4-mapped IPv6 addresses
Links to More Info: K000141251
1642165 : Central Manager could fail to onboard a BIG-IP Next instance even after setup appears complete
Component: BIG-IP Next
Symptoms:
When attempting to onboard a BIG-IP Next Instance that has been set up via console setup script, Central Manager may display an error "DEVICE-0206 BIG-IP Next Instance Discovery error" or "DEVICE-0207 Failed to Configure Analytics Service"
Conditions:
Installing a BIG-IP Next instance via the console setup script in VMWare or KVM.
Impact:
The BIG-IP Next instance is not able to be onboarded to Central Manager.
Workaround:
Use the Check Health API (/api/v1/health/ready) to check the Instance's Health Status until healthy status is received. This indicates the Instance is ready to be added to Central Manager.
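The readiness check from the workaround can be scripted; a minimal polling sketch, assuming the endpoint returns JSON containing `"status": "ready"` when healthy (the exact response shape is an assumption; adjust the grep to match what your instance actually returns):

```shell
#!/bin/sh
# Sketch: poll the BIG-IP Next readiness endpoint until it reports ready.
# The "status":"ready" field is an assumed response shape, not a documented one.
wait_for_ready() {
  url=$1
  tries=${2:-30}
  i=0
  while [ "$i" -lt "$tries" ]; do
    # -s silences progress output; -k tolerates the instance's self-signed cert
    if curl -sk "$url" 2>/dev/null | grep -q '"status"[[:space:]]*:[[:space:]]*"ready"'; then
      echo "instance ready"
      return 0
    fi
    i=$((i + 1))
    sleep 2
  done
  echo "timed out waiting for readiness" >&2
  return 1
}

# Usage (hypothetical management address):
# wait_for_ready https://203.0.113.10/api/v1/health/ready
```

Once the function reports ready, add the instance to Central Manager.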
Fix:
The setup script on the BIG-IP Next instance now ensures that all elements of the system are ready for use before indicating that setup has completed successfully.
1634149 : Node Validation improvements
Component: BIG-IP Next
Symptoms:
Instability or crashes in the Central Manager Node's Kubernetes Service.
Conditions:
N/A
Impact:
The Kubernetes on the Central Manager node could crash and disrupt availability.
Workaround:
N/A
Fix:
Node validation improvements have been completed.
1634109 : Instance Creation Failed for rSeries 2k and 4k★
Component: BIG-IP Next
Symptoms:
Trying to create a BIG-IP Next tenant on rSeries 2k and 4k fails.
Conditions:
Trying to create a BIG-IP Next tenant on rSeries 2k and 4k.
The logs indicate that "there are containers with unready status".
Impact:
Unable to create a BIG-IP Next instance. The tenant will not be deleted by Central Manager.
Workaround:
When creating an instance on rSeries 2k and 4k, in the Instance Creation wizard, under the "Troubleshooting" tab, select "1200" from the "Timeout (seconds)" dropdown.
Fix:
The timeout is now set to 1200 seconds when creating an instance from Central Manager for rSeries 2k and 4k.
1633977 : On rSeries system, operations which involve reboot, might result in Tenant failure state★
Component: BIG-IP Next
Symptoms:
After a reboot of the F5OS-A rSeries system during any operation (for example, a live upgrade or reboot) with multiple tenants deployed, some or all of the tenants might not become operational. This is due to a vfio device problem: the tenant pods get into a restart loop and never come up.
The tenant pod state can be checked with the below command on the host system.
[root@appliance-1:Active] vfio # kubectl get pods
NAME READY STATUS RESTARTS AGE
f5-resource-manager-bpnrr 1/1 Running 0 3h
virt-launcher-bigip-14-1-kz56l 1/1 Running 0 3h4m
virt-launcher-bigip-19-1-5m72j 1/1 Running 0 3h4m
virt-launcher-bigip-3-1-pn6c2 1/1 Running 0 3h4m
virt-launcher-bigip-4-1-8x4cc 1/1 Running 0 3h4m
virt-launcher-bigip-20-1-q99b7 1/1 Running 0 3h4m
virt-launcher-bigip-5-1-vr4cf 1/1 Running 0 3h4m
virt-launcher-bigip-18-1-zfrns 1/1 Running 0 162m
virt-launcher-bigip-1-1-qhjd5 1/1 Terminating 0 4m8s
virt-launcher-bigip-13-1-vjwwd 1/1 Terminating 0 3m19s
virt-launcher-bigip-12-1-7swfq 0/1 Completed 0 87s
virt-launcher-bigip-16-1-pqjx6 1/1 Running 0 43s
virt-launcher-bigip-15-1-56x2g 0/1 PodInitializing 0 5s
[root@appliance-1:Active] vfio #
Conditions:
The issue might occur during a live software upgrade or any situation that involves a reboot of the rSeries F5OS-A system with multiple tenants deployed.
The following log lines are observed repeatedly in the affected pod's logs for every retry of vfio device access by qemu-kvm.
Run `kubectl get pods` to find the pod name (the hash in the pod name changes on every restart of the pod), then view the log of the problem pod:
[root@appliance-1:Active] # kubectl logs <<Problem Pod name displayed in above command>> | grep busy
qemu-kvm: -device vfio-pci,host=0000:54:02.1,id=hostdev0,bus=pci.10,addr=0x0: vfio 0000:54:02.1: failed to open /dev/vfio/130: Device or resource busy
Impact:
Some or all of the vfio devices are in a problem state, so some or all tenants deployed on the rSeries host do not work as expected; they never reach the RUNNING state.
Workaround:
Because the vfio devices are in a problem state, a reboot of the appliance resolves the issue.
Fix:
NA
1631197 : CVE-2024-41110 moby: Authz zero length regression
Component: BIG-IP Next
Symptoms:
A vulnerability was found in Authorization plugins in Docker Engine (AuthZ). Using a specially-crafted API request, an Engine API client could make the daemon forward a request or response to an authorization plugin without the body. In certain circumstances, the authorization plugin may allow a request that it would have otherwise denied if the body had been forwarded to it.
Impact:
While vulnerable code is present, it is not exposed in default, recommended, or standard configurations.
Fix:
The logcli package is no longer included in the system.
1630889 : CVE-2023-45288 - golang: x/net/http2: unlimited number of CONTINUATION frames causes DoS
Component: BIG-IP Next
Symptoms:
A vulnerability was discovered with the implementation of the HTTP/2 protocol in the Go programming language. There were insufficient limitations on the amount of CONTINUATION frames sent within a single stream. An attacker could potentially exploit this to cause a Denial of Service (DoS) attack.
Impact:
While vulnerable code is present, it is not exposed in default, recommended, or standard configurations.
Fix:
The logcli package is no longer included in the system.
1630885 : CVE-2023-45142 - OpenTelemetry - potential memory exhaustion in otelhttp
Component: BIG-IP Next
Symptoms:
A memory leak was found in the otelhttp handler of open-telemetry. This flaw allows a remote, unauthenticated attacker to exhaust the server's memory by sending many malicious requests, affecting the availability.
Impact:
While vulnerable code is present, it is not exposed in default, recommended, or standard configurations.
Fix:
The logcli package is no longer included in the system.
1630877 : CVE-2023-44487: Multiple HTTP/2 enabled web servers are vulnerable to a DDoS attack (Rapid Reset Attack)
Component: BIG-IP Next
Symptoms:
The HTTP/2 protocol allows a denial of service (server resource consumption) because request cancellation can reset many streams quickly.
Impact:
While vulnerable code is present, it is not exposed in default, recommended, or standard configurations.
Fix:
The logcli package is no longer included in the system.
1630093 : CM silently requires an internet connection when creating instances using the F5OS-based provider
Component: BIG-IP Next
Symptoms:
In an internet-disconnected environment, creating an F5OS-based instance from CM fails because CM tries to connect to the phonehome URL, which requires internet connectivity.
Conditions:
-- Creating a new BIG-IP Next instance on VELOS
-- Central Manager does not have access to the Internet
Impact:
You cannot create an instance on F5OS from CM when the CM does not have any internet connectivity.
Workaround:
Create the F5OS-based instance directly on the F5OS provider, and then discover it in CM.
Fix:
A CM with no internet connectivity can now create F5OS-based instances.
1629793 : WebSocket messages do not arrive at the server when using a WAF policy
Component: BIG-IP Next
Symptoms:
WebSocket messages stall in BIG-IP Next and do not arrive at the server when a WAF policy is attached to the application.
Conditions:
-- WAF policy is attached to the application.
-- Websocket traffic is sent.
Impact:
The handshake appears successful, but WebSocket messages do not arrive at the server.
Workaround:
None.
Fix:
WebSocket messages now bypass WAF when there is no WebSocket configuration.
1627337 : Removed deprecated APIs
Component: BIG-IP Next
Symptoms:
Several WAF APIs are deprecated and are now removed:
GET /api/v1/security/filetype-violations
GET /api/v1/spaces/default/security/waf-policies/{id}/filetype-violations
GET /api/v1/spaces/default/security/waf-policies/{id}/filetype-violations/{viol_id}
PUT /api/v1/spaces/default/security/waf-policies/{id}/filetype-violations/{viol_id}
POST /api/v1/spaces/default/security/waf-policies/{id}/filetype-violations/update
Conditions:
Using the iControl REST API
Impact:
You will need to use the new API endpoints that replace the deprecated ones.
Workaround:
The following API endpoints should be used instead of the old ones:
Old API: /api/v1/spaces/default/security/waf-policies/{id}/filetype-violations
New API: /api/v1/spaces/default/security/waf-policies/{id}/violations
Old API: /api/v1/spaces/default/security/waf-policies/{id}/filetype-violations/update
New API: /api/v1/spaces/default/security/waf-policies/{id}/violations/update
Old API: /api/v1/spaces/default/security/waf-policies/{id}/filetype-violations/{viol_id}
New API: /api/v1/spaces/default/security/waf-policies/{id}/violations/{viol_id}
Old API: /api/v1/security/filetype-violations
New API: /api/v1/security/violations
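Client scripts that still call the removed endpoints can be updated mechanically, since every mapping above only replaces the `filetype-violations` path segment with `violations`; a minimal sketch (the policy ID is illustrative):

```shell
#!/bin/sh
# Sketch: rewrite a removed filetype-violations path to its replacement.
# The same substitution covers the per-policy and the global endpoint forms.
rewrite_endpoint() {
  printf '%s\n' "$1" | sed 's|/filetype-violations|/violations|'
}

rewrite_endpoint "/api/v1/spaces/default/security/waf-policies/42/filetype-violations/update"
# -> /api/v1/spaces/default/security/waf-policies/42/violations/update
```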
Fix:
Deprecated APIs removed.
1619945 : Boot time on KVM is excessively long when no management IP is assigned
Component: BIG-IP Next
Symptoms:
The BIG-IP Next VM takes 15 to 20 minutes to boot.
Conditions:
Attempting to create a BIG-IP Next instance on KVM without a management IP when no cloud-init datasource is provided.
Impact:
The BIG-IP Next instance will boot in about 15 to 20 minutes.
Workaround:
If the BIG-IP Next instance is created on KVM without using DHCP or cloud-init to assign the management IP, download and use the kvm-boot.iso to deploy your KVM instance.
This speeds up the boot time.
You can find a link to kvm-boot.iso at https://clouddocs.f5.com/bigip-next/latest/install/next_install_kvm_setup.html
Fix:
When no cloud-init datasource is available and DHCP is not used on KVM, you must download and use the kvm-boot.iso.
1612377 : Central Manager cannot manage Provider if Provider certificate changes
Links to More Info: K000140722
Component: BIG-IP Next
Symptoms:
Central Manager will no longer be able to manage Next instances with a Provider if the Provider certificate changes.
Conditions:
-- Central Manager has a Provider configured.
-- The Provider certificate changes.
-- Central Manager can no longer manage Next instance states via the Provider.
Impact:
Central Manager cannot deploy or delete any instance under that Provider.
Workaround:
Mitigation: In pre-orange releases, do not change the provider certificate once the provider is managed by CM.
Workaround: If the provider certificate has been changed in a pre-orange release while the provider is managed by CM, restore the certificate to exactly the original certificate.
Fix:
Re-trusting the provider certificate in CM has been fixed.
1612225 : Unable to Initiate BIG-IP Next Instance Upgrade
Component: BIG-IP Next
Symptoms:
The BIG-IP Next Central Manager UI sends an API request to the BIG-IP Next instance to retrieve the file ID of the uploaded upgrade bundle. If the API request encounters any file entries without a set 'fileName' attribute, the UI will display the following error message: "Failed to initialize upgrade process: Failed to initialize upgrade form: Failed to fetch instance files: Cannot read properties of undefined (reading 'endsWith')."
Conditions:
BIG-IP Next instance upgrade attempted via the BIG-IP Next Central Manager UI when files with no 'fileName' attribute exist on the BIG-IP Next instance.
Impact:
The BIG-IP Next Central Manager UI for initiating the BIG-IP Next instance upgrade will fail to load properly.
Workaround:
The following steps must be followed using the BIG-IP Next Central Manager API:
1. Use the 'POST /api/login' endpoint to obtain a token.
2. Use the 'GET api/v1/spaces/default/instances' endpoint to identify the instance ID exhibiting this issue.
3. Use the 'GET api/device/v1/proxy/<instance ID>?path=/files' endpoint to identify the list of files for the instance that do not contain a 'fileName' attribute.
4. Use the 'DELETE api/device/v1/proxy/<instance ID>?path=/files/<file ID>' endpoint to delete each identified file.
Fix:
Delete all file entries from each BIG-IP Next instance that do not have a 'fileName' attribute.
1611077 : UI does not load after upgrading★
Component: BIG-IP Next
Symptoms:
After upgrading Central Manager, you are logged out of the UI and when you attempt to reconnect you do not get the CM login page, but rather get an NGINX 404 error page.
Conditions:
Upgrading Central Manager from 20.1.0 to 20.2.0
Impact:
Central Manager upgrade fails and Central Manager won't start.
Workaround:
Perform a restore operation on one or more new machines.
Fix:
After upgrading BIG-IP Next Central Manager, you are no longer logged out of the UI.
1610997 : CM Scale: waf-policy-builder in CrashLoopBackoff during WAF policies deployment
Component: BIG-IP Next
Symptoms:
There are two issues:
1. The CPU utilization of waf-policy-builder was high (exceeded the K8s limit).
2. After deployment of 1,000 WAF policies, the waf-policy-builder pod was observed in CrashLoopBackoff after multiple restarts.
Conditions:
1. The high CPU utilization of waf-policy-builder occurs as soon as the system starts.
2. The CrashLoopBackoff might be due to waf-policy-builder exceeding the K8s memory limit after multiple policy deployments.
Impact:
1. The high CPU usage should not cause a crash, but it decreases performance.
2. CrashLoopBackoff is a fatal error for the waf-policy-builder pod.
Workaround:
The workaround is partial; it addresses only the crash.
You might be able to mitigate the crash by reducing the number of policies deployed (if applicable in your configuration). The waf-policy-builder pod should then be manually deleted (using kubectl) to clear the CrashLoopBackoff and start with the new configuration.
Fix:
The first of the two issues was fixed: the read of the `kafka_poll_interval_ms` value from the configuration was corrected. The wrong value caused high CPU usage in waf-policy-builder.
1607837 : BIG-IP Next Central Manager does not support NTP configuration via cloud-init
Component: BIG-IP Next
Symptoms:
If you supply cloud-init user-data that specifies NTP pools and/or sources, chrony will not be configured to use that data.
Conditions:
Cloud-init user data with NTP configuration supplied when first booting BIG-IP Next Central Manager.
Impact:
Custom NTP sources must be configured via the setup utility instead of via cloud-init.
Workaround:
Either run the setup utility to configure the custom NTP server IP addresses or modify the /etc/chrony/sources.d/central-manager.sources file to contain the sources being advertised by DHCP.
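For the manual file edit, the sources file takes one chrony directive per line; a sketch with placeholder addresses (the content shown is an assumption based on standard chrony `sources.d` syntax, not captured from a Central Manager system):

```
# /etc/chrony/sources.d/central-manager.sources (placeholder addresses)
server 192.0.2.10 iburst
server 192.0.2.11 iburst
```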
Fix:
If cloud-init user-data with pools or servers defined for the cloud-init NTP module is supplied when the BIG-IP Next Central Manager is first booted, it will now correctly configure chrony to use those sources.
1602697-3 : Full-proxy HTTP/2 may allow unconstrained buffering
Component: BIG-IP Next
Symptoms:
tmm crashes and restarts due to memory pressure
Conditions:
When using an HTTP/2 full-proxy configuration, under certain conditions, tmm restarts.
Impact:
Traffic disrupted while tmm restarts.
Workaround:
NA
Fix:
No unconstrained buffering is seen after the fix
1602561 : Inspection services cannot be deployed when one of the instances managed by BIG-IP Next Central Manager is in unhealthy state
Component: BIG-IP Next
Symptoms:
Inspection services cannot be deployed to the instances using the UI.
Conditions:
One of the three instances managed by BIG-IP Next Central Manager is in an unknown state.
Impact:
You won't be able to deploy inspection services.
Workaround:
1. Use Central Manager API to deploy on healthy instances.
or
2. Fix the state of the instance that is in the unknown state.
1602141 : Invalid certificates can disrupt configuration and status updates
Component: BIG-IP Next
Symptoms:
A virtual address with RHI configuration marked as Never may be advertised over BGP.
Conditions:
Multiple virtual servers share the same virtual address, the RHI configuration is marked as Never, and the RHI configuration is created before the application or stack is created.
Impact:
A virtual address that should not be advertised is advertised through BGP.
Workaround:
Create the RHI configuration for Never after the application or stack is configured.
1601949 : Moving a self IP from one VLAN to another VLAN across L1 networks may make the self IP unreachable
Component: BIG-IP Next
Symptoms:
-- Ping to the self IP fails after assigning it to a different VLAN
-- The Self IP address might not exist in the kernel
Conditions:
-- Two different L1 networks configured
-- Two VLANs configured each on different L1 networks
-- A Self IP is moved from one VLAN to another VLAN
Impact:
Traffic drop to the virtual servers/pools using the underlying self IP
Workaround:
Remove VLAN/Self IP and re-add it
1601221 : CM erroneously reports failover has failed during BIG-IP Next upgrade★
Component: BIG-IP Next
Symptoms:
After the first node of an HA pair has been upgraded, failover is triggered (either automatically by CM or manually by the user), and CM reports that failover has failed with "401 Failed to authenticate." even though failover has actually occurred.
Conditions:
-- BIG-IP Next HA pair is upgraded using CM.
-- VMware environment
Impact:
CM shows BIG-IP Next HA status as Unhealthy though the actual BIG-IP Next status is healthy.
Workaround:
1. Open the properties drawer for the instance and go to the HA section.
2. Confirm that the nodes have swapped roles in the cluster. The new active should be at the upgraded version and the standby should be at the older version.
2.1 Alternatively, use the cluster health API (for example, via Postman) to confirm that failover has finished.
GET https://{{CM-address}}/api/v1/spaces/default/instances/{{Big-IP-Next-ID}}/health
The response should show the nodes have swapped roles and one is ACTIVE and the other is STANDBY.
3. Disable the "Enable automatic failover" toggle, click upgrade for the standby node, and follow the normal upgrade workflow steps.
4. When the upgrade has finished, the HA instance in the Instance list grid will show the upgraded version and the cluster will be healthy.
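The role check in step 2.1 can be sketched in a few lines; the JSON shape of the health response below is an assumption for illustration, not the documented schema:

```python
# Sketch: decide whether failover has finished by checking that the
# nodes have swapped roles -- exactly one ACTIVE and one STANDBY.
# The node list shape is a hypothetical stand-in for the real
# /health response body.

def failover_complete(nodes):
    """Return True when exactly one node is ACTIVE and one is STANDBY."""
    roles = sorted(n["role"] for n in nodes)
    return roles == ["ACTIVE", "STANDBY"]

# Hypothetical parsed response from
# GET https://{{CM-address}}/api/v1/spaces/default/instances/{{Big-IP-Next-ID}}/health
nodes = [
    {"address": "192.0.2.11", "role": "ACTIVE"},   # newly upgraded node
    {"address": "192.0.2.12", "role": "STANDBY"},  # node on the older version
]
print(failover_complete(nodes))  # → True
```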
Fix:
None
1600445 : Historic telemetry collected by BIG-IP Next Central Manager may be lost
Component: BIG-IP Next
Symptoms:
If one of the BIG-IP Next Central Manager high availability (HA) nodes becomes unavailable, the BIG-IP Next instance telemetry may no longer be available through the BIG-IP Next Central Manager.
Conditions:
Any of the BIG-IP Next Central Manager high availability (HA) nodes becomes unavailable.
Impact:
Historic BIG-IP Next instance telemetry may no longer be available through BIG-IP Next Central Manager. Once the node is restored, or replaced by a new node, BIG-IP Next Central Manager will start collecting and presenting telemetry again.
Workaround:
None
1599305 : After upgrading, unable to edit the Central Manager part of policies attached to the applications★
Component: BIG-IP Next
Symptoms:
If applications with attached WAF policies exist before the upgrade, then after the upgrade, parts of those policies are not editable until the application is re-deployed.
Conditions:
Applications with attached WAF policies exist before the upgrade.
Impact:
Unable to edit part of the WAF policies that are attached to applications before upgrade.
Workaround:
Re-deploy the application to edit the policies.
1597037 : Adding a new TLS instance to an existing application (a default TLS instance) fails to flow traffic as expected
Component: BIG-IP Next
Symptoms:
Traffic flow does not work as expected when a new TLS instance is added to an existing application.
Conditions:
1. Create a default SSL certificate and a custom certificate from the Central Manager UI.
2. Deploy an HTTPS application and validate LTM traffic with the default certificate.
3. Edit the application to add a new certificate for the TLS instance under protocols and profiles.
4. Add the imported certificate (custom cert) using "enable https client side".
5. Save the application with the new TLS settings and certificate added.
6. Click Review and deploy.
7. Validate the changes made to the application.
8. If validation is successful, click Deploy application.
Impact:
Traffic flow does not work as expected
Workaround:
Suggested workarounds:
1. Delete the existing certificate in the UI and recreate the same certificate (either before or after adding the new certificate), then save the application.
2. Use the API with multiCerts set to true for each certificate block.
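Workaround 2 can be sketched as follows; the declaration structure here is hypothetical, and only the multiCerts flag itself comes from this note:

```python
import json

# Hypothetical TLS declaration fragment -- the real structure of the
# certificate blocks may differ; setting "multiCerts": true on each
# block is the point of the workaround.
declaration = {
    "certificates": [
        {"certificate": "default_cert"},
        {"certificate": "custom_cert"},  # newly added custom certificate
    ]
}

# Set multiCerts to true on each certificate block before deploying.
for block in declaration["certificates"]:
    block["multiCerts"] = True

print(json.dumps(declaration, indent=2))
```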
1593605 : HTTPS Traffic not working on BIG-IP Next HA formed from Central Manager with SSL Orchestrator topology
Component: BIG-IP Next
Symptoms:
HTTPS traffic is not working.
Conditions:
BIG-IP Next HA Setup with SSL Orchestrator Provisioned
Impact:
Users can experience traffic downtime if an instance goes down during the upgrade or due to network interruptions.
Workaround:
None
1593381-1 : When upgrade fails, release version displayed in GUI is different from CLI release version.★
Component: BIG-IP Next
Symptoms:
During a CM upgrade, if the upgrade fails (for example, in an air-gap environment where the CM is disconnected from the internet), the version displayed in the GUI is different from the version shown in the CLI. Typically, the GUI retains the current version, while the CLI shows the target version that failed to upgrade. This discrepancy in version causes confusion about whether the upgrade was successful or not.
Conditions:
Upgrade CM in an air-gapped environment.
Impact:
CM becomes dysfunctional. The failed upgrade leads to a discrepancy in version reporting: the CM GUI displays the current version while the CLI indicates the target version.
Workaround:
If the upgrade has failed due to an ephemeral condition (for example, a pod startup timeout), restore CM to a previous-version backup and retry the upgrade to version 20.2.1. Refer to How to: Back up and restore BIG-IP Next Central Manager (https://clouddocs.f5.com/bigip-next/20-2-0/use_cm/cm_backup_restore_using_ui_api.html).
If upgrading from 20.0.2->20.2.0 caused the failure and there is no backup, perform the following steps on your CLI to create a backup and restore back to 20.0.2:
helm rollback mbiq-vault 1 (will fail)
helm rollback mbiq-vault 1 (succeeds)
kubectl get statefulset mbiq-vault -o yaml > vault.yaml
vi vault.yaml #delete the line that reads "value: https://$(HOSTNAME).mbiq-vault-internal.default.svc.cluster.local:8200"
kubectl delete statefulset mbiq-vault
kubectl apply -f vault.yaml
#wait for vault to come back up
/opt/cm-bundle/cm backup
#create new 20.0.2 CM
#scp backup file to CM
#restore using backup
1592929 : Attaching or detaching of an iRule version is not supported for AS3 application
Component: BIG-IP Next
Symptoms:
In Central Manager, from the iRule space, attaching or detaching a different iRule version of a deployed AS3 application is not supported; it is supported only for FAST applications.
Conditions:
- Migrating an AS3 application that has an iRule
- Attaching or detaching a different iRule version from the iRule space
Impact:
Unable to deploy the application with different iRule version.
Workaround:
Redeploy the application with the new iRule version by directly editing the AS3 declaration from the application space.
Following is an example:
"iRules": [
{
"cm": "migrated_myfakeiRule2::v2"
}
],
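The version swap in the AS3 declaration can be sketched like this; the iRule name mirrors the example above, and the helper function is hypothetical:

```python
def set_irule_version(irules, name, version):
    """Rewrite matching "cm" iRule references (name::version) to a new version."""
    for ref in irules:
        ref_name = ref["cm"].split("::")[0]
        if ref_name == name:
            ref["cm"] = f"{ref_name}::{version}"
    return irules

# Fragment mirroring the example declaration above.
irules = [{"cm": "migrated_myfakeiRule2::v2"}]
set_irule_version(irules, "migrated_myfakeiRule2", "v3")
print(irules)  # → [{'cm': 'migrated_myfakeiRule2::v3'}]
```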
1592589 : Suggestion details page for WAF policy with "on-demand" learning mode includes incorrect operations options
Component: BIG-IP Next
Symptoms:
For event logs associated with a WAF policy that has "on-demand" learning mode, the suggestion details can include operations such as Ignore and Delete, which should not be available in that case. Only the Accept operation should be visible and available.
Conditions:
1. Configure a WAF policy with "On-demand" learning mode and an application attached to a BIG-IP Next instance.
2. Run traffic which produces suggestion(s).
3. For the event of this traffic, open its details and click the "Accept Request" button.
4. A table with at least one suggestion should appear. Click on one of the suggestions.
5. The suggestion details page appears with Ignore and Delete options.
Impact:
You cannot "undo" the operations in this scenario; the Accept Request case is intended to let you perform the "Accept" operation only.
Workaround:
Don't use the Ignore or Delete operations in suggestion details shown via Event logs details.
Fix:
Delete and Ignore operations for Accept Request case are not visible.
1591209 : Unable to force re-authentication on IDP when BIG-IP Next is acting as SAML SP
Component: BIG-IP Next
Symptoms:
When BIG-IP Next is configured as a SAML SP with force authentication enabled in the SAML Auth item, IDP still does not re-authenticate the user when trying to access the SP.
Conditions:
The issue is observed for all use cases where force authentication is enabled in the SAML Auth item.
Impact:
The user is not re-authenticated while trying to access the SP, even though the admin configured the SP to force re-authentication.
Workaround:
None
Fix:
Not available yet
1590065 : The same gateway address is not considered as valid on multiple static routes
Component: BIG-IP Next
Symptoms:
When multiple static routes are configured with the same gateway IP address, as shown below, the BIG-IP Next instance configures the first static route and does not configure the remaining static routes.
- destination prefix 192.17.17.17/24 with gateway IP 198.2.1.1
- destination prefix 192.18.18.18/24 with gateway IP 198.2.1.1
Conditions:
Multiple static routes with same gateway IP address.
Impact:
Unable to configure multiple static routes with same gateway IP address.
Workaround:
Change the environment variable 'DPVD_NETWORK_VALIDATOR_ENABLE' from True to False. The following is an example command to edit the deployment:
sudo kubectl edit deploy f5-fsm-tmm
Fix:
The same gateway IP address can be used on multiple static routes.
1589577 : When no token exists, LLM log writes "LICENSING-1116:DecryptionFailed"
Component: BIG-IP Next
Symptoms:
The LLM log will write error-level messages such as:
[ERRO] token/crypter.go:57 JWT decryption failed. Error: LICENSING-1116:DecryptionFailed:'zero token' text is too short to decrypt
[ERRO] Error while decrypting token - zero token error - LICENSING-1116:DecryptionFailed:'zero token' text is too short to decrypt
This message is benign and can be ignored.
Conditions:
Visiting the licensing page on Central Manager when no token is set up.
Impact:
There is no functional impact.
Workaround:
Ignore this log message under the described condition.
Fix:
Empty token does not cause LLM to write error logs.
1589069 : AS3 application health status and alerts in the UI stay healthy and green, regardless of the application health
Component: BIG-IP Next
Symptoms:
The health status and alerts for AS3 applications are not showing as expected in the UI. Regardless of the status, the AS3 application health always shows as healthy.
Conditions:
An AS3 application monitored by BIG-IP Next Central Manager encounters a health issue.
Impact:
The application health and alert status remains green and healthy in the UI, not reflecting the correct state.
Workaround:
None
Fix:
The BIG-IP Next Central Manager UI now correctly reflects the health and alert status of AS3 applications.
1587497-1 : WAF security report shows alerted requests even though no alerts were generated
Component: BIG-IP Next
Symptoms:
When creating a security report, the generated report might show alerts, even though none were reported in the WAF dashboards and event log.
Conditions:
Generate a security report for a policy that is blocking traffic.
Impact:
The generated report might incorrectly show blocked requests as alerts even though no alerts were reported.
1587445 : WAF enforcer crash during handling of a specific HTTP POST request
Component: BIG-IP Next
Symptoms:
WAF enforcer crashes while handling a very large POST request.
Conditions:
A very large POST request is received by the enforcer.
Impact:
Enforcer crashes. Traffic disrupted while waf-enforcer restarts.
Workaround:
None
Fix:
Fixed handling of the request.
1587337 : HA cluster on CM UI could be unhealthy during standby upgrade★
Component: BIG-IP Next
Symptoms:
This is an intermittent issue caused by a race condition between HA cluster creation and the standby writing self-signed certificates to the standby vault.
The expected HA workflow steps are:
1. Two BIG-IP Next instances (instance-1 and Instance-2) boot up as standalone on BIG-IP Next 20.2.0 image.
2. Both instances create and store self-signed certificates in vault DB.
3. HA cluster creation job is initiated.
4. Active instance creates self-signed certificates for new cluster IP and updates vault DB.
5. Standby instance creates self-signed certificates for new cluster IP and updates vault database.
6. During HA cluster join and database sync, active DB replaces standby DB.
In the above steps, if Step 5 occurs before Step 6, the HA cluster goes into an unknown state.
If Step 5 occurs after Step 6, the HA cluster is healthy and the upgrade works as expected.
Conditions:
After creating BIG-IP Next cluster, upgrade the version on standby.
Impact:
BIG-IP Next HA cluster is unreachable from CM.
Workaround:
During HA upgrades, if the standby node is not reachable, follow these steps:
1. Disable the "enable automatic failover" flag and force a failover.
2. On the CM UI, click the HA cluster name -> Certificates -> Establish Trust. The HA status on the CM UI changes from Unknown to Unhealthy.
3. Upgrade new standby instance to BIG-IP Next 20.2.1.
Both active and standby should then be on BIG-IP Next 20.2.1 and HA should be healthy in the CM UI.
Fix:
After upgrades, the HA cluster always comes up in a Healthy state.
1586501 : Configuring external logger in Instance Log Management halts telemetry reception in Central Manager and other configured external loggers
Links to More Info: K000140380
Component: BIG-IP Next
Symptoms:
When you go to Instance > Log Management in BIG-IP Next Central Manager and set up an external logger, the Central Manager will no longer receive any telemetry while the configured external logger will get all of the available logs from the BIG-IP Next instance. When new external loggers are configured, only the last will receive any telemetry, while all previously configured external loggers will stop receiving telemetry.
Conditions:
Configure external logger for the BIG-IP Next instance.
Impact:
Configured Central Manager stops receiving telemetry. Other external loggers stop receiving telemetry too.
Workaround:
F5 offers two bash scripts that you can run via the Central Manager CLI, one to identify impacted instances (find_broken_telemetry_instances.sh) and a second script to fix telemetry streaming to Central Manager (update_cm_logger.sh). There is no workaround for allowing streaming telemetry to Central Manager and external loggers without the fix.
Fix:
The creation or deletion of external loggers no longer interferes with other loggers, including telemetry. Although the certificates for the external loggers are visible in the “Certificates & Keys” screen, they cannot be deleted or updated from there.
1585793 : The f5-fsm-tmm crashes upon configuring BADOS under traffic
Component: BIG-IP Next
Symptoms:
The f5-fsm-tmm crashes.
Conditions:
Deploy BIG-IP Next WAF and perform external IP vulnerability scan.
Configure BADOS while traffic is running to the WAF application service.
Impact:
Traffic disrupted while tmm restarts.
Workaround:
None
Fix:
The f5-fsm-tmm works as expected after configuring BADOS under traffic.
1585773-1 : Unable to migrate large number of applications at once
Component: BIG-IP Next
Symptoms:
When you click the Deploy button to migrate a large number of applications (over 500 applications) at once, you might get the following error:
Cannot read properties of undefined (reading 'status_code')
Conditions:
Select more than 500 applications to migrate to BIG-IP Next Central Manager.
Impact:
More than 500 applications cannot be migrated at once.
Workaround:
None
1585285 : Unable to stage applications for migration when session contains large number of application services
Component: BIG-IP Next
Symptoms:
When you click Add application on Application Migration page, the following error is returned:
Unexpected Error: applications?limit-1000000
Conditions:
Migrate a UCS file that contains a large number of virtual servers (more than 2000).
Impact:
Applications cannot be migrated using UCS files that have a large number (more than 2000) of virtual servers.
Workaround:
Increase the amount of memory for the mbiq-journeys-feature deployment.
1. Log in to BIG-IP Next Central Manager using SSH
2. Execute following command: kubectl patch deployment mbiq-journeys-feature -p '{"spec":{"template":{"spec":{"containers":[{"name":"mbiq-journeys-feature","resources":{"limits":{"cpu":"1","memory":"1.5Gi"}}}]}}}}'
1584753 : TMM in BIG-IP Next expires the license after 50 days
Links to More Info: K000139851
Component: BIG-IP Next
Symptoms:
-- BIG-IP Next suddenly stops passing application traffic.
-- The TMM logs show that the license has expired
-- The TMM state changes to unlicensed.
Conditions:
-- BIG-IP Next instances
-- A valid license is applied, with more than 50 days until expiration
-- 50 (49.7) days elapse after the license activation
Impact:
TMM becomes unlicensed and stops passing application traffic
Workaround:
Restart the BIG-IP Next instance before 49.7 days have elapsed since license activation.
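To schedule the workaround restart in advance, the deadline implied by the ~49.7-day timer can be computed directly (a sketch; the 49.7-day constant comes from this note):

```python
from datetime import datetime, timedelta

LICENSE_TIMER_DAYS = 49.7  # timer described in this issue

def restart_deadline(activated_at):
    """Latest safe time to restart the instance after license activation."""
    return activated_at + timedelta(days=LICENSE_TIMER_DAYS)

activated = datetime(2024, 1, 1)
print(restart_deadline(activated))  # → 2024-02-19 16:48:00
```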
1584741 : In the Table commands in iRule, the subtable count command fails in BIG-IP Next 20.x
Component: BIG-IP Next
Symptoms:
The table iRule command allows storage of user data at runtime inside "subtables"; administrators use these to store state. The table command can count the number of records in a subtable, for example:
table keys -subtable TABLE -count
Conditions:
Using Table "count" command:
table keys -subtable MYSUBTABLE -count
Impact:
Count is incorrectly reported as 0.
Workaround:
None
Fix:
The Table count command returns the correct number of records.
1584681 : Application service creation fails if name contains "fallback"
Component: BIG-IP Next
Symptoms:
An application service will not be created if its name contains the "fallback" keyword.
Conditions:
The application service name contains the "fallback" keyword.
Impact:
Application service is not created.
Workaround:
Do not include the "fallback" keyword in the application service name.
1584073-1 : WAF enforcer might crash when application is removed during handling traffic
Component: BIG-IP Next
Symptoms:
WAF enforcer might crash when a configured application is removed while network traffic is passing through it.
Conditions:
-- Application was configured with a WAF profile
-- Traffic is ongoing
-- The application is removed
Impact:
Enforcer might sometimes crash. Traffic disrupted while waf-enforcer restarts.
Workaround:
None
Fix:
Fixed handling of configuration change.
1580545 : iRule allows function local variable
Component: BIG-IP Next
Symptoms:
This iRule code causes a DPVD core dump:
when RULE_INIT {
    set ::tmm [TMM::cmp_unit]
}
when HTTP_REQUEST {
    log local0. "Request $::tmm"
}
Conditions:
When using a function local variable like "set ::my_bool true"
Impact:
DPVD crashes.
Workaround:
None
Fix:
This code now works without a core dump:
when RULE_INIT {
    set ::tmm [TMM::cmp_unit]
}
when HTTP_REQUEST {
    log local0. "Request $::tmm"
}
1580181 : When BIG-IP Next HA is created using CM, the spinner does not refresh
Component: BIG-IP Next
Symptoms:
The spinner does not refresh upon completion of the HA task and keeps spinning.
Conditions:
BIG-IP Next HA is established via Central Manager.
Impact:
The spinner gives an impression that HA creation is still in progress even though the process has completed.
Workaround:
If you have waited 20+ minutes for high availability to be established and the spinner has not gone away, you can close the drawer to go back to the Instances screen and refresh the page. The instance status will be green and healthy if high availability was successfully established.
Fix:
The spinner now reflects the correct state of the task. It will spin when the task is in progress and stop once the task is complete.
1579365 : Unsupported nested properties are not underlined during application migration process
Component: BIG-IP Next
Symptoms:
During application migration, if there are unsupported nested properties (sub-properties of properties) of tmsh objects, they will not be underlined by the Configuration Analyzer.
Conditions:
Configuration of migrated application contains an object with nested properties that are not supported on BIG-IP Next, e.g.:
ltm pool /AS3_Tenant/AS3_Application/testItem2 {
members {
/AS3_Tenant/192.168.2.2:400 {
address 192.168.2.2
connection-limit 1000
dynamic-ratio 50
monitor min 1 of { /Common/http } // unsupported nested property that will not be underlined, but should be
priority-group 4
rate-limit 100
ratio 50
}
}
min-active-members 1
}
Impact:
Functionality of the application service might not work as expected.
Workaround:
You can check if all nested properties are present in the AS3 preview of the AS3 declaration. Those that are not present will not be migrated.
Refer to the AS3 Next schema reference: https://clouddocs.f5.com/bigip-next/latest/schemasupport/schema-reference.html
1576545-1 : After upgrade, BIG-IP Next tenant is unable to export toda-otel (event logs) data to Central Manager★
Component: BIG-IP Next
Symptoms:
After upgrade, the BIG-IP Next tenant is unable to export toda-otel (event logs) data to CM on VELOS.
Conditions:
Upgrading BIG-IP Next tenant from 20.1 to 20.2 on a VELOS system.
Impact:
After upgrade, the BIG-IP Next tenant is unable to export toda-otel (event logs) data to CM
Workaround:
For VELOS Standalone
====================
After the upgrade, if the f5-toda-otel-collector cannot connect to the host, change the tenant status from "DEPLOYED" to "CONFIGURED" and back to "DEPLOYED" to fix the issue. Note that it takes 5 to 10 minutes for the tenant status to change, and it might impact traffic.
For VELOS HA, follow these steps
=======================================
1. Set up CM on a Mango build.
2. Add 2 BIG-IP Next instances (Mango build) on the CM.
3. Bring up HA on CM with the Enable Auto Failover option unchecked.
4. Add a license to the HA instance.
5. Deploy a basic HTTP app in FAST mode with a WAF policy attached (Enforcement mode - Blocking, Log Events - all).
6. Send traffic and verify the WAF Dashboard under the Security section; the Total Requests and Blocked response fields should show non-zero values.
7. Upgrade the standby instance to the latest Nectarine build with the "auto-failover" button switched off.
8. The instances go into an unhealthy state on CM.
9. Change the status of the standby instance from Deployed to Configure mode and save it through the partition GUI/CLI.
10. After confirming the status of the pods, change the state of the standby instance back to Deployed from Configured. There should be no impact on traffic flow during this step.
11. Force a failover and check the health status of the instances. It will still show unhealthy because the instances are mid-upgrade (one instance on the Mango build (standby node) and the other on the Nectarine build (active node)).
12. Upgrade the new standby instance to the latest Nectarine build with the "auto-failover" button switched off.
13. HA should look healthy in this state and traffic should continue to flow.
14. Change the state of the standby instance from Deployed to Configure mode and save it using the partition GUI/CLI.
15. After confirming the status of the pods for the instance on the partition CLI, change the state of the standby instance back to Deployed from Configured.
16. Verify the event logs on the WAF Dashboard under the Security section on CM.
17. Also verify that the logs on the "f5-toda-otel-collector" pod show no export failures.
18. Upgrade the CM. Systems should be healthy.
1572437 : CVE-2024-0450: python: The zipfile module is vulnerable to zip-bombs leading to denial of service
Component: BIG-IP Next
Symptoms:
A flaw was found in the Python/CPython 'zipfile' that can allow a zip-bomb type of attack. An attacker may craft a zip file format, leading to a Denial of Service when processed.
Impact:
While vulnerable code is present, it is not exposed in default, recommended, or standard configurations.
1571993 : Access Session data is not cleared after TMM restart
Component: BIG-IP Next
Symptoms:
The session entry stored in the Redis server is not cleared when TMM restarts and traffic for the user session does not return to the BIG-IP after the restart.
Conditions:
-- A user session is created in the Redis server
-- TMM restarts
-- Traffic for the corresponding user session does not return to the BIG-IP
Impact:
The session record in the Redis server is not cleared in this specific TMM-restart scenario where traffic for the user session does not return to the BIG-IP.
Workaround:
None
1569589 : Default values of Access policy are not migrated
Component: BIG-IP Next
Symptoms:
Default values of an Access policy are not migrated to BIG-IP Next Central Manager.
Conditions:
Migrate a virtual server with an Access policy that contains default values.
Impact:
- Access Policy imported to BIG-IP Next Central Manager does not have default values populated.
- If the affected policy is deployed to a BIG-IP Next instance, it will use default values applied by BIG-IP Next.
Workaround:
From the BIG-IP Next Central Manager UI, you can edit the Access policy value for each property or leave it unselected.
1564157 : BIG-IP Next Central Manager requires VELOS/rSeries systems to use an SSL certificate containing the host IP address in the CN or SANs list.★
Component: BIG-IP Next
Symptoms:
BIG-IP Next Central Manager requires that virtualization providers use a valid SSL certificate. A self-signed certificate can also be explicitly accepted by BIG-IP Next Central Manager users, if the certificate otherwise passes SSL validation successfully.
When F5OS generates self-signed SSL certificates for its HTTPS services, it does not include the actual hostname or IP address in the Common Name or Subject Alternative Names (SANs) fields. As a result, this self-signed certificate will not pass SSL validation for strict TLS clients, because the HTTPS server name does not match any Subject names in the certificate.
Conditions:
A BIG-IP Next Central Manager user attempts to add a VELOS or rSeries system as a virtualization provider, when the VELOS or rSeries system is using the default self-signed certificate generated by the system.
Impact:
BIG-IP Next Central Manager cannot successfully add VELOS or rSeries systems as virtualization providers, and therefore cannot dynamically create new BIG-IP Next instances on VELOS or rSeries systems.
Workaround:
1. Create a self-signed SSL certificate that includes the F5OS system's actual IP address in the Subject Alternative Names (SANs) field. For example, the following steps can be used:
A. Save the following data into a file named "ip-san.cnf":
[req]
default_bits = 2048
distinguished_name = req_distinguished_name
req_extensions = req_ext
x509_extensions = v3_req
prompt = no
[req_distinguished_name]
countryName = XX
stateOrProvinceName = N/A
localityName = N/A
organizationName = Self-signed certificate
commonName = F5OS Self-signed certificate
[req_ext]
subjectAltName = @alt_names
[v3_req]
subjectAltName = @alt_names
[alt_names]
IP.1 = 127.0.0.1
DNS.1 = f5platform.host
B. Edit the file -- change IP.1 at the end to be the rSeries or VELOS partition management IP address. Optionally, other certificate fields may also be updated if the new cert should have specific values for them (e.g., commonName, organizationName, localityName, etc.).
C. Run the following command to create the two certificate files "ip-san-cert.pem" and "ip-san-key.pem":
openssl req -x509 -nodes -days 730 -newkey rsa:2048 -keyout ip-san-key.pem -out ip-san-cert.pem -config ip-san.cnf
2. In the VELOS Partition or rSeries Hardware UI:
A. Navigate to the AUTHENTICATION & ACCESS -> TLS Configuration page.
B. Locate and update the "TLS Certificate" and "TLS Key" text boxes to the new Cert file & Key file, respectively.
C. The F5OS system will then use this new certificate with its HTTPS services.
Fix:
F5OS (VELOS/rSeries) includes a valid default self-signed certificate as of F5OS version 1.8.0. With that, CM can now discover freshly-installed VELOS/rSeries systems.
1561053 : Application migration status incorrectly labeled as green when certain properties are removed
Component: BIG-IP Next
Symptoms:
When migrating applications to BIG-IP Next, certain unsupported properties might be removed during the migration process, but the virtual server status is incorrectly labeled as "Ready for migration" (green status) rather than flagged with a "Warning" (yellow status).
Conditions:
Migration of a UCS to BIG-IP Central Manager that contains application services with certain unsupported properties. Some examples are:
min-active-members
slow-ramp-time
Following migration, the following can be reviewed:
- Virtual server status is green ("Ready for migration")
- Virtual server contains configuration with tmsh objects that can be translated into AS3 classes supported in BIG-IP Next.
- tmsh objects that contain unsupported properties cannot be translated into configurable options for AS3 class.
AS3 Schema Reference: https://clouddocs.f5.com/bigip-next/latest/schemasupport/schema-reference.html
Impact:
Unsupported properties are silently dropped without logs and the status of the migration is incorrect (status is green, but should be yellow). The application service after migration might not be functional because of the missing properties.
Workaround:
None
1560493 : Inaccurate Reflection of Selfip Prefix Length in TMM Statistics and "ip addr" Output
Component: BIG-IP Next
Symptoms:
Changes to the prefix length of selfips are not reflected in TMM statistics or the "ip addr" output.
Conditions:
Configure an L1-network with a VLAN and a self IP with a certain prefix, then alter the prefix length or subnet of the self IP.
Impact:
The modifications made to the self-ip prefix length are not reflected in TMM statistics.
Workaround:
To address changes in self-ip subnets, it is necessary to delete the L1-network and subsequently re-add it.
Fix:
The selfip lookup in create/update configuration handler will now validate the prefix-length along with the address and route-domain.
1560473 : Traffic won't work with http monitor for L3, http-transparent service
Component: BIG-IP Next
Symptoms:
When an HTTP monitor is configured in an L3, http-transparent, or http-explicit service, no packets are seen.
Conditions:
An HTTP monitor is added to these services.
Impact:
Due to ID 1584485, traffic will not work for L3 and http-transparent services (that ID is for LTM).
Traffic for http-explicit services should pass.
Workaround:
Do not add an HTTP monitor.
Fix:
Traffic for http-explicit services passes after this fix.
1531845 : CVE-2023-27043: python: Parsing errors in email/_parseaddr.py lead to incorrect value in email address part of tuple
Component: BIG-IP Next
Symptoms:
The email module of Python through 3.11.3 incorrectly parses e-mail addresses that contain a special character. The wrong portion of an RFC2822 header is identified as the value of the addr-spec. In some applications, an attacker can bypass a protection mechanism in which application access is granted only after verifying receipt of e-mail to a specific domain (e.g., only @company.example.com addresses may be used for signup). This occurs in email/_parseaddr.py in recent versions of Python.
Impact:
While vulnerable code is present, it is not exposed in default, recommended, or standard configurations.
1516785 : CVE-2023-49081: aiohttp: HTTP request modification
Component: BIG-IP Next
Symptoms:
A flaw was found in the python-aiohttp package. This issue could allow a remote attacker to modify an existing HTTP request or create a new request that could have minor confidentiality or integrity impacts.
Impact:
While vulnerable code is present, it is not exposed in default, recommended, or standard configurations.
1509361 : CVE-2023-50782 python-cryptography: Bleichenbacher timing oracle attack against RSA decryption
Component: BIG-IP Next
Symptoms:
A flaw was found in the python-cryptography package. This issue may allow a remote attacker to decrypt captured messages in TLS servers that use RSA key exchanges, which may lead to exposure of confidential or sensitive data
Impact:
While vulnerable code is present, it is not exposed in default, recommended, or standard configurations.
1507021 : CVE-2023-45803: urllib3: Request body not stripped after redirect
Component: BIG-IP Next
Symptoms:
urllib3 previously did not remove the HTTP request body when an HTTP redirect response using status 301, 302, or 303 changed the request method from one that could accept a request body (like `POST`) to `GET`, as required by the HTTP RFCs.
Conditions:
NA
Impact:
The vulnerability requires a previously trusted service to become compromised to affect confidentiality.
Workaround:
NA
Fix:
The urllib3 module has been updated to a non-vulnerable version.
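The corrected redirect behavior can be sketched as a small helper. This is an illustrative sketch of the RFC-mandated rule, not urllib3's actual implementation; the function name is hypothetical.

```python
def redirect_method(method: str, status: int) -> tuple[str, bool]:
    """Return (new_method, keep_body) when following an HTTP redirect.

    Per the HTTP RFCs, a 303 response always switches the method to GET,
    and most clients (urllib3 included) also switch POST to GET on
    301/302. Whenever the method changes to GET, the request body must
    be dropped before the redirect is followed.

    Hypothetical helper for illustration only.
    """
    if status == 303 or (status in (301, 302) and method == "POST"):
        return "GET", False   # method changed to GET: strip the body
    return method, True       # method preserved (e.g. 307/308): body kept
```

For example, a `POST` redirected with 302 becomes a body-less `GET`, while a `PUT` redirected with 307 keeps both its method and its body.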
1506949 : CVE-2024-0727 openssl: denial of service via null dereference
Links to More Info: K000138695
1498489 : LDAP Bind Password not Re-populated in BIG-IP Next Central Manager GUI
Component: BIG-IP Next
Symptoms:
After successfully saving an LDAP configuration in the BIG-IP Next Central Manager GUI Auth Providers, the "Bind User Password" field does not re-populate when returning to the same LDAP configuration tab.
Any time an administrator needs to make further changes to the LDAP configuration or use the 'test' feature, the Bind User Password must be re-entered.
Conditions:
The LDAP Bind Password is not re-populated in the BIG-IP Next Central Manager GUI after initial configuration. Once the password is set and saved, navigating back to the LDAP configuration page does not show the bind password field populated. This issue may prevent users from confirming whether the password has been saved or requires re-entry during subsequent configuration updates.
Impact:
Users are required to re-enter the saved Bind user password, leading to a poor UI experience.
Workaround:
Re-enter the Bind User Password when making any changes to the LDAP authentication provider configuration settings.
Fix:
The GUI no longer requires the bind password be re-entered when making other configuration changes to the LDAP authentication provider or using the Test Auth Provider Settings feature.
1490381 : Pagination for iRules page not supported with a large number of iRules
Component: BIG-IP Next
Symptoms:
Pagination is not supported in the iRules data grid when hundreds of iRules are configured on BIG-IP Next Central Manager.
Conditions:
This issue occurs when there are hundreds of iRules on BIG-IP Next Central Manager, which do not fit in a single iRule view.
Impact:
If the number of iRules exceeds about 500, all of them are shown at once, making it difficult to find a specific iRule.
Workaround:
Search for the iRule name in the search bar to find a specific iRule.
1472669-1 : idle timer in BIG-IP Next Central Manager can log out user during file uploads★
Component: BIG-IP Next
Symptoms:
During a file upload, the UI idle timer logs the user out after approximately 20 minutes, possibly terminating the file upload, or making it appear as though the upload has not completed when it has.
Conditions:
Upload a file to BIG-IP Next Central Manager.
Impact:
File upload is incomplete.
Workaround:
Interact periodically with the UI by moving the mouse or pressing keys in the browser window during a file upload that takes longer than approximately 20 minutes. This resets the idle timer and prevents the UI from terminating the user session.
1472337 : Missing object referenced in authenticationTrustCA
Component: BIG-IP Next
Symptoms:
If an authenticationTrustCA is used in the declaration, the object it references (a certificate and its key) does not point to an available object in the declaration.
Conditions:
Application with authenticationTrustCA is migrated.
Impact:
The application will not be deployable to an instance; it can be migrated as a draft only.
Workaround:
Manually import the certificate and key to Central Manager, then create an object in the application declaration that matches the reference used in authenticationTrustCA.
Fix:
The object referenced in authenticationTrustCA is now generated in the application declaration.
1455677-3 : ACCESS Policy hardening
Component: BIG-IP Next
Symptoms:
Under certain traffic patterns, ACCESS policy may crash
Conditions:
ACCESS policy evaluation is enabled
Impact:
A TMM core.
Workaround:
None
Fix:
The core is resolved.
1449709-6 : Possible TMM core under certain Client-SSL profile configurations
Links to More Info: K000138912, BT1449709
1399137 : "40001: bind: address already in use" failure logs on BIG-IP Next HA setup
Component: BIG-IP Next
Symptoms:
Following error logs/events are displayed as part of HA cluster configuration of BIG-IP Next tenants:
"40001: bind: address already in use"
Conditions:
Errors are observed when HA is configured between two BIG-IP Next tenants.
Impact:
These are just error messages. No functional impact.
Workaround:
N/A
Fix:
N/A
1394625 : Application service fails to deploy even if marked as green (ready to deploy)
Component: BIG-IP Next
Symptoms:
Deployment fails for a migrated application service that is marked green (ready for deployment).
Conditions:
During application migration, upload a UCS with a virtual server that has a clientssl profile attached that points to a cert/key pair with an RSA 512 or 1024 key (unsupported).
Complete the migration and pre-deployment process, and deploy the application service.
Impact:
The application service will not have a deployment location option and can only be saved as a draft.
Fix:
Application services with unsupported certificates and key pairs, or other missing/unsupported objects are marked with a blue status to indicate that the application service requires changes once saved as a draft.
1348837 : Admin can delete their own account
Component: BIG-IP Next
Symptoms:
The default admin user can delete their own account via the API.
Conditions:
-- Admin user
-- Deleting the admin account via the API
Impact:
This inconsistency between the API and GUI can lead to a lockout of the BIG-IP Next Central Manager if an admin user deletes their account via the API and no other backup admin or non-admin users are available to log in.
Workaround:
Perform operations through the GUI.
Fix:
The issue has been resolved. Self-deletion of user accounts is now disallowed via the API.
1348833 : A cryptographically insecure pseudo-random number generator was used to create passwords during the reset process.
Component: BIG-IP Next
Symptoms:
The Math.random() function in JavaScript does not produce cryptographically secure random numbers.
Conditions:
When resetting a user password, using the “Randomly Generate” option from the reset drawer may result in a weak password being generated.
Impact:
This could result in weak and easily guessable passwords, increasing the risk of brute force attacks.
Workaround:
Manually input the password using the ‘Manually Enter’ option during the password reset process.
Fix:
The use of Math.random for password generation has been removed.
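For comparison, a cryptographically secure generator draws from an OS entropy source. The following Python sketch illustrates the general approach; it is not the Central Manager implementation, and the alphabet and length are arbitrary choices.

```python
import secrets
import string

# Illustrative sketch of CSPRNG-based password generation. Unlike
# JavaScript's Math.random() (or Python's random module), the secrets
# module draws from the operating system's entropy source, so the
# output is not predictable from previous outputs.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 16) -> str:
    """Generate a random password of the given length (hypothetical helper)."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

password = generate_password()
```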
1329853 : Application traffic is intermittent when more than one virtual server is configured
Component: BIG-IP Next
Symptoms:
After deploying an application containing multiple virtual servers, only one of the virtual servers responds to clients.
In the Central Manager GUI, one virtual server is marked as red and the other is marked as green, even though you can ping all of the pool members for each of the virtual servers.
Conditions:
-- The application contains multiple virtual servers
-- The virtual addresses and ports for each of the virtual servers are identical
Alternatively, you could encounter this by deploying two different applications where the virtual address and port are identical.
Impact:
The application will deploy without error even if an IP address/port conflict occurs, and traffic will be disrupted to one or both of the virtual addresses.
Workaround:
Assign different virtual addresses and/or virtual ports to different application services. If any two existing applications have the same listeners defined, change them to unique listeners and re-deploy.
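A quick pre-deployment check for such conflicts can be sketched as follows. This is an illustrative helper, not a Central Manager feature; the function and data names are hypothetical.

```python
from collections import Counter

def find_listener_conflicts(listeners):
    """Return the set of (virtual address, port) pairs used by more
    than one application service.

    listeners: iterable of (app_name, virtual_address, port) tuples.
    Illustrative sketch only; not part of the product.
    """
    counts = Counter((addr, port) for _, addr, port in listeners)
    return {key for key, n in counts.items() if n > 1}

# Hypothetical planned deployment: app1 and app2 share a listener.
planned = [
    ("app1", "10.1.10.50", 443),
    ("app2", "10.1.10.50", 443),  # conflicts with app1
    ("app3", "10.1.10.51", 443),
]
conflicts = find_listener_conflicts(planned)
```

Here `conflicts` contains the shared `("10.1.10.50", 443)` pair, flagging the listeners that must be made unique before deployment.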
1309265 : CVE-2022-41723 golang.org/x/net vulnerable to Uncontrolled Resource Consumption
Component: BIG-IP Next
Symptoms:
A maliciously crafted HTTP/2 stream could cause excessive CPU consumption in the HPACK decoder, sufficient to cause a denial of service from a small number of small requests.
Impact:
While vulnerable code is present, it is not exposed in default, recommended, or standard configurations.
Fix:
The logcli package is no longer included in the system.
1309257 : CVE-2022-41715 potential golang regex DoS
Component: BIG-IP Next
Symptoms:
Programs which compile regular expressions from untrusted sources may be vulnerable to memory exhaustion or denial of service. The parsed regexp representation is linear in the size of the input, but in some cases the constant factor can be as high as 40,000, making relatively small regexps consume much larger amounts of memory. After fix, each regexp being parsed is limited to a 256 MB memory footprint. Regular expressions whose representation would use more space than that are rejected. Normal use of regular expressions is unaffected.
Impact:
While vulnerable code is present, it is not exposed in default, recommended, or standard configurations.
Fix:
The logcli package is no longer included in the system.
1308845 : CVE-2022-46146 exporter-toolkit: authentication bypass via cache poisoning
Component: BIG-IP Next
Symptoms:
Prometheus Exporter Toolkit is a utility package to build exporters. Prior to versions 0.7.2 and 0.8.2, if someone has access to a Prometheus web.yml file and users' bcrypted passwords, they can bypass security by poisoning the built-in authentication cache. Versions 0.7.2 and 0.8.2 contain a fix for the issue. There is no workaround, but attacker must have access to the hashed password to use this functionality.
Impact:
While vulnerable code is present, it is not exposed in default, recommended, or standard configurations.
Fix:
The logcli package is no longer included in the system.
1269733-6 : HTTP GET request with headers has incorrect flags causing timeout
Links to More Info: BT1269733
Component: BIG-IP Next
Symptoms:
504 Gateway Timeout responses are generated by a Microsoft web server pool member handling HTTP/2 requests.
A tcpdump shows that the HTTP/2 stream sends the request without the appropriate End Stream flag on the HEADERS frame.
Conditions:
The server has to provide settings with max-frame-size small enough to force BIG-IP to split the headers across multiple HTTP/2 frames, otherwise this issue does not occur.
Impact:
The HTTP GET request times out.
Workaround:
None
1251181 : VLAN names longer than 15 characters can cause issues with troubleshooting
Component: BIG-IP Next
Symptoms:
If the VLAN name is longer than 15 characters, traffic originating from the debug-sidecar will not work correctly and can cause issues with troubleshooting.
Conditions:
The user creates an L1 network with a VLAN that has a name longer than 15 characters.
Impact:
Traffic that originates from the debug sidecar will not work correctly.
For example, if an internal VLAN is configured with a long name, the name in the output from 'ip addr' and 'ip route' on the debug sidecar will show a truncated name. Additionally, if a ping is attempted to a destination that is connected using this VLAN, the ping packets will be dropped and ping will fail.
Workaround:
Use VLAN names less than 16 characters long.
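The 15-character limit follows from the Linux network interface name length (IFNAMSIZ minus the terminating NUL), which is why longer names appear truncated on the debug sidecar. A simple pre-flight check can be sketched as follows; the helper name is hypothetical.

```python
# Linux interface names are limited to 15 visible characters
# (IFNAMSIZ - 1), so VLAN names longer than that are truncated
# on the debug sidecar. Illustrative validation helper only.
MAX_VLAN_NAME_LEN = 15

def vlan_name_ok(name: str) -> bool:
    """Return True if the VLAN name fits within the interface name limit."""
    return 0 < len(name) <= MAX_VLAN_NAME_LEN

assert vlan_name_ok("internal-vlan-1")       # 15 characters: OK
assert not vlan_name_ok("internal-vlan-10")  # 16 characters: truncated
```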
1232521-6 : SCTP connection sticking on BIG-IP even after connection terminated
Component: BIG-IP Next
Symptoms:
After an SCTP client has terminated, the BIG-IP still shows the connection when issuing "show sys conn protocol sctp"
Conditions:
Under certain conditions, an SCTP client connection may still exist even if the client has sent a SHUTDOWN request.
Impact:
Memory resources are consumed as these types of lingering connections accumulate.
Fix:
SCTP connections are properly internally closed when required.
Known Issues in BIG-IP Next v20.3.0
BIG-IP Next Issues
ID Number | Severity | Links to More Info | Description |
1694325-1 | 1-Blocking | Unable to save "new" HTTPS client-side certificates on HTTP2 app | |
1682089-1 | 1-Blocking | Configuration migration of an access policy from BIG-IP 17.1.0 or above to BIG-IP Next 20.3.0 Access is causing an invalid login page | |
1678561-1 | 1-Blocking | Application health remains stuck in Unknown state. | |
1674409-1 | 1-Blocking | High API load can render BIG-IP Next unresponsive. | |
1644653-1 | 1-Blocking | BIG-IP Next Central Manager displays a Failed status for High Availability (HA) when adding a third node | |
1596021-1 | 1-Blocking | serverTLS/clientTLS name in Service_TCP do not match the clientSSL/serverSSL profile name | |
1579441-1 | 1-Blocking | Connection requests on rSeries may not appear to be DAG distributed as expected | |
1574585-3 | 1-Blocking | Auto-Failback cluster cannot upgrade active node★ | |
1353589 | 1-Blocking | Provisioning of BIG-IP Next Access modules is not supported on VELOS, but containers continue to run | |
1352969 | 1-Blocking | Upgrades with TLS configuration can cause TMM crash loop | |
1350285-1 | 1-Blocking | Traffic is not passing after the tenant is licensed and network is configured | |
1696253-1 | 2-Critical | Failed to upload Instance QKView file to iHealth from BIG-IP Next Central Manager | |
1696129-1 | 2-Critical | Network Interface Instance Data Metrics show all available interfaces rather than only those that are used | |
1696113-1 | 2-Critical | Save button on application configure network may not be enabled. | |
1695977-1 | 2-Critical | BGP is unsupported in High Availability (HA) Mode on VELOS | |
1695053-1 | 2-Critical | QKView Generation may fail if upgrade file remains on BIG-IP Next Instance | |
1694241-1 | 2-Critical | CM Disconnected mode: Module provision is failing | |
1692209-1 | 2-Critical | Central Manager backup fails after upgrade | |
1691633-1 | 2-Critical | VELOS upgrade error alert states status 400 when image is not present or ready on tenant★ | |
1689457-1 | 2-Critical | Potential Issues with Debug Utility Activation for Users with Underscores (_) in Usernames. | |
1689421-1 | 2-Critical | Upgrade of BIG-IP Next instance may require reestablishing trust★ | |
1682029-1 | 2-Critical | Application and instances graphs may show traffic spikes after HA failover or active node shutdown | |
1682017-1 | 2-Critical | A possible gap in app/instance graphs might be shown after HA failover/blade shutdown | |
1679977-1 | 2-Critical | Creating multiple L1-Networks with same names will only create one L1-Network | |
1678817-1 | 2-Critical | Intermittent failure in Data Group deployment when added from SSL Orchestrator policy | |
1678677-1 | 2-Critical | Re-discover Active node | |
1678453-1 | 2-Critical | A high number of application creations and deletions can cause frequent Out of Memory (OOM) errors for WebSSO | |
1678009-1 | 2-Critical | CM may not display the complete FQDN if the FQDN is long | |
1677913-1 | 2-Critical | Redeployment fails when CRLDP Responder mode is changed | |
1677537-1 | 2-Critical | Inspection services endpoints related configuration are not propagated to BIG-IP Next instances during upgrades★ | |
1677141 | 2-Critical | Updating a L1-Network is not allowed if either HA Control-Plane VLAN or Data-Plane VLAN are part of the same L1-Network | |
1672109-1 | 2-Critical | Unable to reach backend application when configured with "host" in Network Access Optimized Application | |
1671645-1 | 2-Critical | Interfaces not properly mapped after switching port profiles from 8x10 to 4x25 | |
1670689-1 | 2-Critical | BIG-IP Next Central Manager High Availability Installation Failures in High Disk Latency Environments★ | |
1668017-1 | 2-Critical | Cannot add new VLANs to existing HA L1 Network | |
1644545-1 | 2-Critical | Central Manager (CM) restore fails when using a full backup file with an external storage configuration | |
1644157 | 2-Critical | "Error sending OCSP request" seen in apmd logs for OCSP authentication access policy | |
1641909 | 2-Critical | Applications created from the API sometimes can't be edited in the GUI. | |
1641901-1 | 2-Critical | App configs are not reflected in f5-fsm, causing traffic failure | |
1636229-1 | 2-Critical | A vCPU count change can stop traffic for up to 3 hours | |
1635421 | 2-Critical | License server unavailable when a node goes down★ | |
1634065-1 | 2-Critical | BIG-IP Next application telemetry data missing for a brief period from Central Manager when a CM node goes down | |
1632833-1 | 2-Critical | Upgrade to Release version 20.3.0 might create a core file★ | |
1629161-1 | 2-Critical | L1-Network cannot be deleted | |
1604997-1 | 2-Critical | Central Manager (CM) Prometheus pod in CrashLoopBackOff | |
1600809-1 | 2-Critical | Upgrading BIG-IP Next Central Manager does not show unsupported properties in migrations created before upgrade.★ | |
1600377-1 | 2-Critical | The BIG-IP Central Manager GUI does not support backup file uploads when external storage is configured. | |
1596929-1 | 2-Critical | Policy-compiler supports policy versions only up to 17.0.0. | |
1596801-1 | 2-Critical | Route Health Injection default for BIG-IP Next is "ANY"★ | |
1593613 | 2-Critical | When an upgrade fails, CM cannot be restored and becomes dysfunctional due to multiple containers entering the 'CrashLoopBackOff' state★ | |
1590037-1 | 2-Critical | Provisioning SSL Orchestrator on BIG-IP NEXT HA cluster fails when using Central Manager UI | |
1585309 | 2-Critical | Server-Side traffic flows using a default VRF even though pool is configured in a non-default VRF | |
1579977-1 | 2-Critical | BIG-IP Next instance telemetry data is missing from the BIG-IP Next Central Manager when a BIG-IP Next Central Manager High Availability node goes down. | |
1576277 | 2-Critical | 'Backup file creation failed' for instance after upgrade to v20.2.0 | |
1550345-2 | 2-Critical | BIG-IP Next API gateway takes a long time to respond to a large access policy payload | |
1492705 | 2-Critical | During upgrading to BIG-IP Next 20.1.0, the BIG-IP Next 20.1.0 Central Manager failed to connect with BIG-IP Next 20.0.2 instance | |
1474669-2 | 2-Critical | Fluentbit core may be generated when restarting the pod | |
1466305 | 2-Critical | Anomaly in factory reset behavior for DNS enabled BIG-IP Next deployment | |
1410241-1 | 2-Critical | Traffic for TAP is not seen on service interface when connection mirroring is turned on | |
1365005 | 2-Critical | Analytics data is not restored after upgrading to BIG-IP Next version 20.0.1★ | |
1354265 | 2-Critical | The icb pod may restart during install phase | |
1343005-1 | 2-Critical | Modifying L4 serverside after the stack is created can result in the update not being applied | |
1087937 | 2-Critical | API endpoints do not support page query | |
1696161-1 | 3-Major | Unable to update the OAuth client configuration | |
1695873-1 | 3-Major | BIG-IP Next Central Manager does not load as expected after initial password change★ | |
1692233-1 | 3-Major | AS3 Declaration fails to update serverTLS/clientTLS when multiple SSL profiles are configured | |
1682085-1 | 3-Major | OAuth Resource Server agent fails to deploy when using a private key to decrypt the access token | |
1682021-1 | 3-Major | Unable to save Service Provider configuration changes in the SAML Federation rule | |
1680361-1 | 3-Major | Expression of the first branch missed after a Rule node was moved and saved | |
1680189-1 | 3-Major | Creating instances does not enable the "Default VRF" field on VLANs by default | |
1679593-1 | 3-Major | User should provide an IPv4 address space when an IPv6 address space is provided | |
1671465 | 3-Major | FAST-0002: Internal Server Error: Unable to render template Examples/http: rpc error: code = Unknown desc = missed comma between flow collection entries | |
1670977-1 | 3-Major | The BIG-IP Next Central Manager backup fails when a node becomes unreachable | |
1635369-1 | 3-Major | CM pool has a mandatory monitor constraint. | |
1629897-1 | 3-Major | Shared object installation status might be incorrect on a migration resume. | |
1629105-1 | 3-Major | Incorrect conversion of DTLS virtual server★ | |
1629077-1 | 3-Major | BIG-IP Next Central Manager does not support NTP configuration via DHCP | |
1623609-1 | 3-Major | Skipped certificate marked as imported during application migration via the GUI. | |
1623533-1 | 3-Major | Observing drop in traffic throughput with debug-sidecar inline tcpdump packet capture | |
1623421-1 | 3-Major | External OpenAPI files cannot be used with HTTPS links | |
1622005-1 | 3-Major | OpenAPI files that are extremely large cannot be applied | |
1615257 | 3-Major | Application monitors edit drawer autosaves | |
1604657 | 3-Major | High CPU utilization and reduced throughput in certain conditions when connection mirroring is enabled in HA | |
1603561-1 | 3-Major | L1-Network name cannot be changed | |
1602001 | 3-Major | Upgrading from 20.2.1 or Earlier versions will delete all External Loggers★ | |
1601573 | 3-Major | UI elements related to virtual servers not shown after upgrade★ | |
1601233-1 | 3-Major | Multi-replica in HA not supported for alert feature | |
1600381-1 | 3-Major | WAF enforcer might crash during handling of response | |
1593805 | 3-Major | The air-gapped environment upgrade from BIG-IP Next 20.0.2-0.0.68 to BIG-IP Next 20.2.0-0.5.41 fails★ | |
1589865-1 | 3-Major | Licensing via CM fails with "400 The SSL certificate error" | |
1586869 | 3-Major | Unable to create the same standby instance, when Instance HA creation failed using CM-created instances★ | |
1584637 | 3-Major | After upgrade, 'Accept Request' will only work on events after policy redeploy★ | |
1584625 | 3-Major | Virtual server information of application containing multiple virtual IP addresses and WAF policies after upgrade is missing★ | |
1583541 | 3-Major | Re-establish trust with BIG-IP after upgrade to 20.2.1 using a 20.1.1 Central Manager★ | |
1583049-1 | 3-Major | Central Manager Logs | |
1582421-1 | 3-Major | BIG-IP Next Central Manager functionality impacted if the host IP address changes | |
1582409-1 | 3-Major | BIG-IP Next Central Manager will not start if the DNS server details are not provided | |
1574685 | 3-Major | Generated WAF report can be loaded without text | |
1574681 | 3-Major | Dynamic Parameter Extract from allowed URLs does not show in the parameter in the WAF policy | |
1574573 | 3-Major | Global Resiliency Group status not reflecting correctly on update | |
1574565 | 3-Major | Inability to edit Generic Host While Re-Enabling Global Resiliency | |
1568129 | 3-Major | During upgrade from BIG-IP Next 20.1.0 to BIG-IP Next 20.2.0, an issue was identified with instances that have L3-Forwards with a non-default VRF (L3-Network) configuration | |
1567129 | 3-Major | Unable to deploy Apps on BIG-IP Next v20.2.0 created using Instantiation from v20.1.x★ | |
1566745-1 | 3-Major | L3VirtualAddress set to ALWAYS advertise will not advertise if there is no associated Stack behind it | |
1495017 | 3-Major | BIG-IP Next Hostname, Group Name and FQDN name should adhere to RFC 1123 specification | |
1495005 | 3-Major | Cannot create Global Resiliency Group with multiple instances if the DNS instances have same hostname | |
1494997 | 3-Major | Deleting a GSLB instance results in record creation of GR group in BIG-IP Next Central Manager | |
1491197 | 3-Major | Server Name (TLS ClientHello) Condition in policy shouldn't be allowed when "Enable UDP" option is selected in application under Protocols & Profiles | |
1491121 | 3-Major | Patching a new application service's parameters overwrites entire application service parameters | |
1489945 | 3-Major | HTTPS applications with self-signed certificates traffic is not working after upgrading BIG-IP Next instances to new version of BIG-IP Next Central Manager★ | |
1474801 | 3-Major | BIG-IP Next Central Manager creates a default VRF for all VLANS of the onboarded Next device | |
1403861 | 3-Major | Data metrics and logs will not be migrated when upgrading BIG-IP Next Central Manager from 20.0.2 to a later release | |
1366321-1 | 3-Major | BIG-IP Next Central Manager behind a forward-proxy | |
1365433 | 3-Major | Creating a BIG-IP Next instance on vSphere fails with "login failed with code 501" error message★ | |
1360121-1 | 3-Major | Unexpected virtual server behavior due to removal of objects unsupported by BIG-IP Next | |
1360097-1 | 3-Major | Migration highlights and marks "net address-list" as unsupported, but addresses are converted to AS3 format | |
1360093-1 | 3-Major | Abbreviated IPv6 destination address attached to a virtual server is not converted to AS3 format | |
1359209-1 | 3-Major | The health of application service shown as "Good" when deployment fails as a result of invalid iRule syntax | |
1358985-1 | 3-Major | Failed deployment of migrated application services to a BIG-IP Next instance | |
1355605 | 3-Major | "NO DATA" is displayed when setting names for application services, virtual servers, and pools that exceed max characters | |
1314617 | 3-Major | Deleting an interface on a running BIG-IP Next instance can cause the system to behave unexpectedly | |
1134225 | 3-Major | K000138849 | AS3 declarations with a SNAT configuration do not get removed from the underlying configuration as expected |
1122689-3 | 3-Major | Cannot modify DNS configuration for a BIG-IP Next VE instance through API | |
1694333 | 4-Minor | Incorrect VRF VLANs when clicking on badge to show a view of VLANs | |
1660913-1 | 4-Minor | For API workflows switching between /declare and /documents is unsupported. | |
1634929 | 4-Minor | Parameter names in the API documentation are invalid for the metrics API | |
1633569-1 | 4-Minor | Default values for new entities in an attached OpenAPI file do not match the policy’s current configuration | |
1629537 | 4-Minor | Logged-in admin user will not be able to change password before Central Manager setup | |
1615261 | 4-Minor | Application page may show "No Data" for Active Alerts instead of zero. | |
1593745 | 4-Minor | Issues identified during Backup, Restore, and User Operations between two BIG-IP Next Central Managers for Standalone and High Availability Nodes. | |
1588813-1 | 4-Minor | CM Restore on a 3 node BIG-IP Next Central Manager with external storage fails with ES errors | |
1588101-1 | 4-Minor | Any changes made on the BIG-IP Next Central Manager after the BIG-IP Next instance backup will not be reflected on the BIG-IP Next Central Manager once the BIG-IP Next instance is restored. | |
1581877 | 4-Minor | An error is seen when no device certificates are present on the BIG-IP Next Instance | |
1576273 | 4-Minor | No L1-Networks in an instance causes BIG-IP Next Central Manager upgrade to v20.2.0 to fail★ | |
1575549 | 4-Minor | BIG-IP Next Central Manager discovery requires an instance to have both Default L2-Network and Default L3-Network if either one already exists | |
1574997 | 4-Minor | BIG-IP Next Central Manager HA node installation requires logout to add node★ | |
1560605 | 4-Minor | Global Resiliency functionality fails to meet expectations on Safari browsers | |
1498421 | 4-Minor | Restoring Central Manager (VE) with KVM HA Next instance fails on a new BIG-IP Next Central Manager | |
1498121 | 4-Minor | BIG-IP Next Central Manager upgrade alerts not visible in global bell icon | |
1365445 | 4-Minor | Creating a BIG-IP Next instance on vSphere fails with "login failed with code 401" error message★ | |
1365417 | 4-Minor | Creating a BIG-IP Next VE instance in vSphere fails when a backslash character is in the provider username★ | |
1360709 | 4-Minor | Application page can show an error alert that includes "FAST delete task failed for application" | |
1360621 | 4-Minor | Adding a Control Plane VLAN must be done only during BIG-IP Next HA instance creation | |
1354645 | 4-Minor | Error displays when clicking "Edit" on the Instance Properties panel | |
1350365 | 4-Minor | Performing licensing changes directly on a BIG-IP Next instance | |
1325713 | 4-Minor | Monthly backup cannot be scheduled for the days 29, 30, or 31 |
Known Issue details for BIG-IP Next v20.3.0
1696253-1 : Failed to upload Instance QKView file to iHealth from BIG-IP Next Central Manager
Component: BIG-IP Next
Symptoms:
The upload of an Instance QKView generated from BIG-IP Next Central Manager fails if the file size exceeds 64 MB.
Conditions:
Attempting to upload a BIG-IP Next Instance QKView file that is 64 MB or larger to iHealth.
Impact:
Uploading QKView files larger than 64 MB to iHealth fails.
Workaround:
Follow these steps to resolve the issue:
1. The QKView that failed to upload will still be created locally. Select the failed QKView and click the download button to save the generated QKView file to your local machine.
2. Log in to iHealth (https://account.f5.com/ihealth2) using your iHealth credentials.
3. Upload the QKView file from your local machine to the iHealth website, specifying a case number if applicable.
Note: If the QKView file is not generated when you click the download button, attempt to generate a new QKView file.
1696161-1 : Unable to update the OAuth client configuration
Component: BIG-IP Next
Symptoms:
Updating the value in the text input field causes it to be saved as a string, resulting in the following error:
"Request body has an error: doesn't match the schema: doesn't match schema due to: doesn't match schema due to: Error at '/server type': value is not one of the allowed values."
Conditions:
Create an access policy with the OAuth Federation. Change the default value of the "Token Validation Interval" on the OAuth Server or the "Access Token Expires In" on the OAuth Provider.
Impact:
An error occurs when saving the policy if the default values are changed.
Workaround:
The user needs to update the "Token Validation Interval" on the OAuth Server or the "Access Token Expires In" on the OAuth Provider for an access policy using the API.
1696129-1 : Network Interface Instance Data Metrics show all available interfaces rather than only those that are used
Component: BIG-IP Next
Symptoms:
Network Interface Instance Data Metrics show all available interfaces rather than showing only those that are used.
Additionally, the metric charts for the unused interfaces incorrectly show data.
Conditions:
Viewing data for a BIG-IP Next instance running on rSeries with at least one application deployed.
Impact:
Network Interface Instance Data Metrics show all available interfaces rather than only those that are used, and the data shown in the UI for the unused L1-Network interfaces is incorrect.
Workaround:
None
1696113-1 : Save button on application configure network may not be enabled.
Component: BIG-IP Next
Symptoms:
When all VLANs are removed from the list of VLANs to listen on, the Save button is disabled.
Conditions:
There must be VLANs listed in the selection box; removing all of them disables the Save button.
Impact:
Removing all VLANs prevents the user from disabling the "Enable VLANs" option by saving the page.
Workaround:
You can disable the "Enable VLANs" option, but only while VLANs are still selected in the selection box.
In other words, disable VLAN filtering before deleting any VLANs, so that the Save button remains enabled and the configuration update can be saved.
1695977-1 : BGP is unsupported in High Availability (HA) Mode on VELOS
Component: BIG-IP Next
Symptoms:
BGP connections will not be formed and routes will not be shared with BGP peers.
Conditions:
A pair of BIG-IP Next instances running on a VELOS chassis in High Availability (HA) mode will not send BGP peer requests to BGP neighbors. This issue occurs after HA pairs are established on both active and standby instances.
Impact:
Traffic will fail to reach the intended applications because BGP routes from the BIG-IP Next instance will not be advertised to BGP peers.
Workaround:
None
1695873-1 : BIG-IP Next Central Manager does not load as expected after initial password change★
Component: BIG-IP Next
Symptoms:
After updating the initial password, the BIG-IP Next Central Manager may not load as expected.
Conditions:
Fresh installation of BIG-IP Next Central Manager and initial password is changed.
Impact:
BIG-IP Next Central Manager information may not load as expected.
Workaround:
Log out and log in again to the BIG-IP Next Central Manager.
1695053-1 : QKView Generation may fail if upgrade file remains on BIG-IP Next Instance
Component: BIG-IP Next
Symptoms:
After upgrading a BIG-IP Next instance, if the upgrade file remains on the filesystem of the instance, a QKView request may fail.
Conditions:
A large file is stored in the BIG-IP Next file system.
Impact:
Cannot create QKView requests.
Workaround:
To delete the large file from the BIG-IP Next instance using a Central Manager API call and ensure successful QKView generation, follow these steps:
1. Send a DELETE request to:
https://{{cm_mgmt_ip}}/api/device/v1/proxy/{{instance_id}}?path=files/{{file_id}}
2. Wait 25 minutes before submitting the QKView request.
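The two steps above can be sketched as a shell snippet. CM_MGMT_IP, INSTANCE_ID, and FILE_ID are placeholders, and the bearer-token header is an assumption about how the call is authenticated:

```shell
# Placeholders -- substitute values from your environment.
CM_MGMT_IP="198.51.100.10"
INSTANCE_ID="instance-uuid"
FILE_ID="file-id"

# Step 1: build the DELETE request path given in the workaround.
URL="https://${CM_MGMT_IP}/api/device/v1/proxy/${INSTANCE_ID}?path=files/${FILE_ID}"
echo "DELETE ${URL}"

# Uncomment to send the request (auth header is an assumption):
# curl -sk -X DELETE "${URL}" -H "Authorization: Bearer ${TOKEN}"

# Step 2: wait 25 minutes before submitting the QKView request.
# sleep 1500
```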
1694333 : Incorrect VRF VLANs when clicking on badge to show a view of VLANs
Component: BIG-IP Next
Symptoms:
When you click the badge in the VLAN column (the second column) of the VRF grid, the VLANs shown are always the same, regardless of which VRF is chosen.
Conditions:
More than one VRF exists (including the Default VRF).
Impact:
Incorrect information is displayed.
Workaround:
View VLANs directly in the VRF by clicking the VRF Name column. To avoid seeing incorrect information, do not open the VRF VLANs via the badge.
1694325-1 : Unable to save "new" HTTPS client-side certificates on HTTP2 app
Component: BIG-IP Next
Symptoms:
The Save button is grayed out and you are unable to click Save when adding new HTTPS client certificates to an HTTP2 app.
Conditions:
- Using Central Manager.
- Having an HTTP2 app with Client-side TLS and Server-side TLS deployed to an instance.
- Updating or replacing the Client-side TLS.
Impact:
You are unable to save the updated application.
Workaround:
Change something on the Server-side TLS and click Save for that change. Revert the change on the Server-side TLS and click Save. Click Save for Protocols & Profiles.
1694241-1 : CM Disconnected mode: Module provision is failing
Component: BIG-IP Next
Symptoms:
While Central Manager is operating in disconnected mode, if you trigger a provisioning request and then start a second provisioning request before Central Manager completes the first, the Ack submission will fail.
Note:
In connected mode, the CM UI allows provisioning requests only in sequential order; a second provisioning request is allowed only after the first one completes.
Conditions:
-- Central Manager
-- Licensing mode is configured for "Disconnected mode"
-- Provision a second module before you have completed the Ack verification of the first transaction
Impact:
The updated report is not applied to the instance.
Workaround:
Activate one feature at a time, and perform Ack verification in sequential order.
1692233-1 : AS3 Declaration fails to update serverTLS/clientTLS when multiple SSL profiles are configured
Component: BIG-IP Next
Symptoms:
The migrated application does not contain the serverTLS and/or clientTLS property, which must be updated:
"Common_virtual_multi_ssl": {
"class": "Service_TCP",
"persistenceMethods": [],
"profileTCP": {
"use": "/tenantc3d2758c11a68/Common_virtual_multi_ssl/tcp_default_v14"
},
"serverTLS": "<Choose your SSL client-side profile>",
"snat": "none",
"virtualAddresses": [
"10.10.10.21"
],
"virtualPort": 443
},
Conditions:
The virtual server includes multiple SSL profiles on the client and/or server side.
Example:
ltm virtual /tenantf0154bc117746/Common_virtual_multi_ssl/Common_virtual_multi_ssl {
creation-time 2024-10-04:03:20:50
destination /tenantf0154bc117746/Common_virtual_multi_ssl/10.10.10.21:443
ip-protocol tcp
last-modified-time 2024-10-04:03:20:50
mask 255.255.255.255
profiles {
/tenantf0154bc117746/Common_virtual_multi_ssl/ssl_prof_client_ecdsa {
context clientside
}
/tenantf0154bc117746/Common_virtual_multi_ssl/ssl_prof_client_rsa {
context clientside
}
/tenantf0154bc117746/Common_virtual_multi_ssl/tcp_default_v14 { }
}
source 0.0.0.0/0
translate-port enabled
}
Saved AS3 application:
"Common_virtual_multi_ssl": {
"Common_virtual_multi_ssl": {
"class": "Service_TCP",
"persistenceMethods": [],
"profileTCP": {
"use": "/tenantf0154bc117746/virtual_multi_ssl/tcp_default_v14"
},
"snat": "none",
"virtualAddresses": [
"10.10.10.21"
],
"virtualPort": 443
},
Impact:
The virtual server configured with multiple SSL profiles fails to link to any SSL profile.
Workaround:
Update the saved Application Service with the serverTLS and/or clientTLS property:
"serverTLS": "<Choose your SSL client-side profile>"
and replace <Choose your SSL client-side profile> with a valid TLS_Server class object name.
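For example, if the declaration defines a TLS_Server object (here named webtls, with a hypothetical certificate reference), the corrected property would read:

```json
{
  "webtls": {
    "class": "TLS_Server",
    "certificates": [{ "certificate": "webcert" }]
  },
  "Common_virtual_multi_ssl": {
    "class": "Service_TCP",
    "serverTLS": "webtls"
  }
}
```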
1692209-1 : Central Manager backup fails after upgrade
Component: BIG-IP Next
Symptoms:
When NFS external storage is set up, the ownership of the /opt/cm-backup and /opt/cm-qkview directories is changed from admin:admin to root:root after the upgrade. This causes Central Manager backups and QKViews to fail.
Conditions:
NFS external storage was configured before the upgrade.
Impact:
Backups and QKViews in BIG-IP Next Central Manager may fail.
Workaround:
Run the following commands in the BIG-IP Next Central Manager CLI to correct ownership:
- sudo chown -R admin:admin /opt/cm-backup
- sudo chown -R admin:admin /opt/cm-qkview
1691633-1 : VELOS upgrade error alert states status 400 when image is not present or ready on tenant★
Component: BIG-IP Next
Symptoms:
A global alert is raised stating: "Initializing upgrade fails with status 400". This message does not sufficiently communicate what went wrong or what the user should do next.
Conditions:
When a user triggers a VELOS instance upgrade and specifies an image that is not present or ready on the tenant.
Impact:
Users will not know what they need to do next to achieve a successful upgrade.
Workaround:
Check that the image specified when triggering the upgrade is present and ready on the specified VELOS tenant, and try the upgrade again.
1689457-1 : Potential Issues with Debug Utility Activation for Users with Underscores (_) in Usernames.
Component: BIG-IP Next
Symptoms:
Enabling the debug utility from Central Manager fails with the error: "Failed to read request Query Param Error at: username. Reason: string does not match the regular expression '^[a-z][-a-z0-9]*$'".
Conditions:
- The user has a username that does not follow the regex pattern '^[a-z][-a-z0-9]*$'.
- Trying to enable debug utility from Central Manager.
Impact:
The user cannot enable the debug utility from Central Manager.
Workaround:
1. To enable the debug utility, create the username using lowercase letters (a-z) and numbers (0-9), starting with a lowercase letter. Although the following are allowed when creating a username, avoid them because they prevent enabling the debug utility:
- underscores (_), dashes (-), or dots (.)
- starting the username with an uppercase letter or number
2. For users with a dash (-) in their username, enable the debug utility using the OpenAPI.
1689421-1 : Upgrade of BIG-IP Next instance may require reestablishing trust★
Component: BIG-IP Next
Symptoms:
When upgrading a BIG-IP Next instance to the 20.3.0 release, it is possible that the certificate on the BIG-IP Next instance temporarily changes. When this happens, it requires the user to trust the temporary certificate to complete the upgrade and then to reestablish trust with the original certificate once the upgrade completes.
Conditions:
Upgrading BIG-IP Next instance to the 20.3.0 release.
Impact:
User intervention may be required to complete the upgrade and ensure the instance can continue to be managed by BIG-IP Next Central Manager after the upgrade. You may need to do this intervention multiple times.
Workaround:
If the BIG-IP Next instance upgrade process prompts to trust a new certificate, use the provided button to accept the new certificate.
If the BIG-IP Next instance shows up with a health status of UNKNOWN at any point after the upgrade, navigate to the instance properties screen and select the Certificates section. Ignore the errors that show up on these screens and click the Establish Trust button. When prompted, accept this certificate fingerprint. After the operation completes, click the Cancel & Exit button and wait for the next health update to bring the instance back to a valid health status.
1682089-1 : Configuration migration of an access policy from BIG-IP 17.1.0 or above to BIG-IP Next 20.3.0 Access is causing an invalid login page
Component: BIG-IP Next
Symptoms:
Configuration migration of an Access Policy from BIG-IP 17.1.0 or above to BIG-IP Next 20.3.0 Access is causing an invalid login page.
Conditions:
When using the application migration tool to migrate the access policy configuration from BIG-IP 17.1.0 or above to BIG-IP Next 20.3.0, customization objects are converted with empty strings.
Impact:
The login page does not load as expected.
Workaround:
Manually edit the access policy configuration to supply the customization strings by referring to the BIG-IP 17.1.0 config file. Then, use the BIG-IP Next Central Manager to save the access policy and deploy the application.
1682085-1 : OAuth Resource Server agent fails to deploy when using a private key to decrypt the access token
Component: BIG-IP Next
Symptoms:
The Scope agent validates the received Access Token against a list of JWT providers. Each provider has an associated JWT configuration.
A known issue occurs when the OAuth Resource Server agent fails to deploy and shows a pre-deploy error if the user uploads a private key to decrypt the token.
Conditions:
The Resource Server does not use the ID token and only requires the Access Token. Its primary function is token verification.
Even when the Access Token is attached, the API payload is missing the Access Token key, which results in a pre-deploy error.
Steps to Reproduce:
1. Create an Access policy with the OAuth Federation Resource Server, and set the validation mode to internal.
2. Choose JWE encryption and attach the private key for the Access Token.
3. Save and deploy the policy.
Impact:
An Access Policy with OAuth Federation will fail for F5 as a Resource Server when using internal validation mode.
Workaround:
Using the API, add the private keys to the allowedKeys field under jwtConfig.
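As a sketch, the request body change might look like the following; only the jwtConfig and allowedKeys names come from this note, and the key-entry shape with a PEM placeholder is illustrative:

```json
{
  "jwtConfig": {
    "allowedKeys": [
      "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----"
    ]
  }
}
```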
1682029-1 : Application and instances graphs may show traffic spikes after HA failover or active node shutdown
Component: BIG-IP Next
Symptoms:
A spike in the instance/application graphs might be shown after failover or active node shutdown.
Conditions:
BIG-IP Next high availability (HA) pair switched from active node to standby node.
Impact:
The instance/application graphs may show a spike in traffic.
Workaround:
None.
1682021-1 : Unable to save Service Provider configuration changes in the SAML Federation rule
Component: BIG-IP Next
Symptoms:
The Disable option in the Service Provider configuration does not remove the data from the access policy.
Conditions:
This issue occurs under the following conditions:
-- Create an access policy and add a SAML Federation rule.
-- Set the Service Provider configuration to Advanced and enable certain configuration options.
-- Set the value on the Providers page.
-- Save the new policy.
-- Reopen the policy from the policy list.
-- Disable Service Provider configuration options on the SAML Federation Rule Properties page.
-- Save the edited policy.
-- Reopen the policy. However, the changes to the Service Provider configuration have not been saved.
Impact:
Changes to the SAML Federation configuration cannot be saved.
Workaround:
After disabling the option in the SAML Federation configuration, the user needs to delete and recreate the Service Providers or Identity Providers on the SAML Federation Providers page.
1682017-1 : A possible gap in app/instance graphs might be shown after HA failover/blade shutdown
Component: BIG-IP Next
Symptoms:
Instance metrics are received at a fixed interval of 30 seconds, and application metrics are received at a fixed interval of 2 minutes.
When a failover occurs, switching from one node to the other causes a temporary disconnection between BIG-IP Next and Central Manager. This can cause a gap in metrics reporting.
Conditions:
BIG-IP Next HA pair switches from one node to the other.
Impact:
A gap in the app/instance graphs will occur.
Workaround:
None
1680361-1 : Expression of the first branch missed after a Rule node was moved and saved
Component: BIG-IP Next
Symptoms:
After moving a Rule node within the same flow and saving the policy, the moved nodes do not have accurate branch expressions when the policy is reopened.
Conditions:
Move a Rule node from one point to another inside a Flow.
Impact:
This may cause errors during policy deployment, and incorrect branch expressions could also impact traffic flow.
Workaround:
The move feature is designed to help users rearrange the policy without deleting nodes. Even if an error occurs while moving nodes, users can still create the policy using the traditional method of dropping nodes from the sidebar.
1680189-1 : Creating instances does not enable the "Default VRF" field on VLANs by default
Component: BIG-IP Next
Symptoms:
BIG-IP Next supports Virtual Routing and Forwarding (VRF), but it must be configured when the networks are being created. If you do not choose a default VRF this can lead to traffic issues later when applications are created.
Conditions:
Setting up networking for a new BIG-IP Next instance.
When creating a new BIG-IP Next instance, you are prompted to set the instance's network settings, including Self IPs, VLANs, and L1 Networks. The VLANs section includes a checkbox labeled "Default VRF". This checkbox is unchecked by default and can lead to an empty list of L3 Networks.
Impact:
No VLANs get marked as a default VRF, and this setting may be unchangeable after the instance is created.
Workaround:
If you do not have any VLANs set as a Default VRF, you will not pass any traffic. When deploying an application, the application will use your Default VRFs by default, but you may select a different VRF if you have created multiple VRFs.
When creating your instance, check the "Default VRF" checkbox for VLANs as appropriate. If you're uncertain which VLAN to select, select your instance's External VLAN.
1679977-1 : Creating multiple L1-Networks with same names will only create one L1-Network
Component: BIG-IP Next
Symptoms:
Creating multiple L1-Networks with the same name creates only one L1-Network on the instance; however, Central Manager reports that multiple L1-Networks were successfully created.
Conditions:
Creating multiple L1-Networks with the same name.
Impact:
The instance creates only one L1-Network; however, CM reports that multiple L1-Networks were successfully created.
Workaround:
Mitigation, if you have not yet created the L1-Networks: do not create multiple L1-Networks with the same name.
Workaround, if you have already created the L1-Networks:
1. Log in to CM using the CM login API.
2. Obtain the ID of the instance on which the L1-Networks were created by using the GET /api/v1/spaces/default/instances API.
3. Retrieve all the L1-Networks directly from the instance by sending a GET request to "api/device/v1/proxy/{INSTANCE_ID}?path=/L1-networks".
4. In the CM UI, delete all the L1-Networks that are not included in the results from step 3.
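Steps 2 through 4 can be sketched with curl. The CM address is a placeholder, authentication is summarized as a TOKEN variable obtained from the CM login API, and the two GET paths come directly from the workaround:

```shell
CM="https://198.51.100.10"   # Central Manager address (placeholder)

# Step 2: list instances to find the instance ID.
INSTANCES_URL="${CM}/api/v1/spaces/default/instances"

# Step 3: list the L1-Networks that actually exist on a given instance.
l1_networks_url() {
  echo "${CM}/api/device/v1/proxy/$1?path=/L1-networks"
}
echo "$(l1_networks_url my-instance-id)"

# Uncomment to run against a live CM (TOKEN from the CM login API):
# curl -sk -H "Authorization: Bearer ${TOKEN}" "${INSTANCES_URL}"
# curl -sk -H "Authorization: Bearer ${TOKEN}" "$(l1_networks_url my-instance-id)"
```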
1679593-1 : User should provide an IPv4 address space when an IPv6 address space is provided
Component: BIG-IP Next
Symptoms:
IPv6-only address space is not supported. You must configure either IPv4 or both IPv4 and IPv6 for split tunneling on Central Manager or BIG-IP Next.
Conditions:
This issue occurs under the following conditions:
-- Split tunneling is enabled on the Network Access resource.
-- Only an IPv6 address is specified in the Include Address Space field.
-- The application containing the access policy is deployed.
Impact:
BIG-IP Next Central Manager accepts the configuration, but the Edge Client gets stuck on Initializing when trying to connect.
Workaround:
Add an IPv4 address space with the IPv6 address space in the Include Address Spaces field under Split Tunneling in the Network Access resource.
1678817-1 : Intermittent failure in Data Group deployment when added from SSL Orchestrator policy
Component: BIG-IP Next
Symptoms:
When you add two or more data groups to an SSL Orchestrator policy and deploy them together, one of the data groups may fail to deploy.
Conditions:
Adding data groups to an SSL Orchestrator policy.
Impact:
The policy deployment fails; hence, application deployment may also fail for applications that have an SSL Orchestrator policy attached to them.
Workaround:
Retry the application deployment:
1) Retrying the application deployment redeploys the data group and policy.
2) If the retried deployment fails, create a new policy with the same data groups. Detach the existing policy from the application, attach the new policy, and retry the deployment.
1678677-1 : Re-discover Active node
Component: BIG-IP Next
Symptoms:
The health state of an HA instance may transition to Unknown when you attempt to re-discover it using the active device's management IP address after the HA pair was already discovered by Central Manager using the floating management IP address.
Conditions:
HA instance is managed on Central Manager.
Impact:
The health state of an HA instance may transition to Unknown.
Workaround:
Establish trust by clicking the name of the instance, then click Certificates, and then click Establish Trust.
1678561-1 : Application health remains stuck in Unknown state.
Component: BIG-IP Next
Symptoms:
After a successful application deployment, the health state does not change within several minutes; the application health remains in the "Unknown" state.
Conditions:
A new application deployment will start with a health state of "Unknown" as the data plane is being configured.
Impact:
When an application's health state remains "Unknown", it is an indication of a configuration error.
Workaround:
Find the configuration error: review the f5-fsm-tmm log (via qkview, the debug sidecar, or a third-party logging tool) for configuration errors; they can be found by searching for the keyword "TMC ERROR". If an error is found, delete the application that is in the Unknown state, resolve the configuration error, and create a new app.
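For example, the keyword search described above looks like the following. The log is written to /tmp only to keep the example self-contained; on a real system, point grep at the f5-fsm-tmm log collected via qkview or the debug sidecar:

```shell
# Sample log content standing in for a real f5-fsm-tmm log.
printf 'Jan 01 00:00:00 info app ready\nJan 01 00:00:01 TMC ERROR invalid pool member\n' > /tmp/f5-fsm-tmm.log

# Search for configuration errors, as described in the workaround.
matches=$(grep -c "TMC ERROR" /tmp/f5-fsm-tmm.log)
echo "found ${matches} configuration error line(s)"
```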
1678453-1 : A high number of application creations and deletions can cause frequent Out of Memory (OOM) errors for WebSSO
Component: BIG-IP Next
Symptoms:
When hundreds or thousands of applications are created, deployed, and deleted rapidly, the WebSSO container can run out of memory, leading to a loop where the OOM-killer is triggered, causing repeated WebSSO restarts.
Conditions:
Rapid creation, deployment, and deletion of hundreds or thousands of applications.
Impact:
The BIG-IP Next instance can become unresponsive.
Workaround:
-- Create an application and deploy it, checking for 200 OK responses.
-- Before creating the next application, poll the link provided in the response, waiting for the deployment task status to change from "pending". Once the application has been successfully deployed, the cycle restarts.
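The polling guideline above can be sketched as a small loop. The "status" field name and the fetch helper are assumptions about the deployment task payload, so adjust them to the actual response:

```shell
# fetch is split out so the loop can be exercised without a live CM;
# in practice it is a curl call with an auth header (assumed).
fetch() {
  curl -sk -H "Authorization: Bearer ${TOKEN}" "$1"
}

# Poll the task link from the deployment response until the status
# field (assumed name) leaves "pending".
poll_until_done() {
  task_url="$1"
  while :; do
    status=$(fetch "${task_url}" | sed -n 's/.*"status"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p')
    [ "${status}" != "pending" ] && break
    sleep 5
  done
  echo "${status}"
}
```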
1678009-1 : CM may not display the complete FQDN if the FQDN is long
Component: BIG-IP Next
Symptoms:
In the Instance grid, CM may not display the complete FQDN if the FQDN is long, depending on the screen resolution.
Conditions:
The instance is added using an FQDN and the instance has a long name.
Impact:
The user may not be able to view the entire FQDN on the Instance grid page at smaller screen resolutions.
Workaround:
Click on the instance and view the Instance name in the properties section.
1677913-1 : Redeployment fails when CRLDP Responder mode is changed
Component: BIG-IP Next
Symptoms:
A change in the CRLDP Responder mode triggers a corresponding change in the dependent networking objects.
Conditions:
This issue occurs under the following conditions:
-- Deploy the application with an Access policy that includes a CRLDP rule.
-- Update the Access policy to change the CRLDP Authentication Responder mode to a different value and save the policy.
-- Redeploy the application containing the updated Access policy.
Impact:
Redeployment of the application fails with an HTTP 400 error.
Workaround:
The user needs to delete the deployed application, recreate it with the same policy, and configure the deployment parameters.
1677537-1 : Inspection services endpoints related configuration are not propagated to BIG-IP Next instances during upgrades★
Component: BIG-IP Next
Symptoms:
1. Updates to the endpoints configuration of inspection services are not propagated to BIG-IP Next instances.
2. Central Manager shows a different configuration than what has been configured on the BIG-IP Next instance.
Conditions:
1. Deploy an inspection service with changes only to the global screens.
2. Update the endpoints configuration on the global screen.
3. Deploy the updates without modifying the instance-specific endpoints configuration on the deployment screen.
Impact:
Traffic continues to flow to the previously configured endpoints (the updates are not propagated).
Workaround:
Use the deployment screen to update the required configuration.
1677141 : Updating a L1-Network is not allowed if either HA Control-Plane VLAN or Data-Plane VLAN are part of the same L1-Network
Component: BIG-IP Next
Symptoms:
Updates to an L1-Network, such as adding or removing a VLAN, or adding or removing a VLAN from the defaultVRF, are not allowed if either the HA Control-Plane VLAN or the Data-Plane VLAN is part of the same L1-Network.
Conditions:
Creating an L1-Network with VLANs that include the HA Control-Plane VLAN or Data-Plane VLAN, and then attempting to update the L1-Network and its objects after the HA instance is created.
Impact:
You cannot update the L1-Network.
Workaround:
Prior to creating HA, exclude the HA VLANs from the other VLANs by creating a separate L1-Network for either the HA VLANs or the other VLANs.
1674409-1 : High API load can render BIG-IP Next unresponsive.
Component: BIG-IP Next
Symptoms:
Under extremely high API load with FAST applications, the BIG-IP Next instance may become unresponsive.
Conditions:
Sending thousands of application requests or deleting thousands of applications without sufficient rate limiting.
Impact:
The BIG-IP Next instance will become unresponsive.
Workaround:
For large-scale FAST API deployments, use the following guidelines.
-- Create an application and then deploy it, being sure to check for 200 OK responses.
-- Before creating the next application, poll the link in the response, waiting for the deployment task status to change from "pending". When that application has been successfully deployed, the cycle may be restarted.
1672109-1 : Unable to reach backend application when configured with "host" in Network Access Optimized Application
Component: BIG-IP Next
Symptoms:
The backend optimized application is not accessible when configured with an FQDN.
Conditions:
The optimized application is configured with an FQDN instead of an IP address.
Impact:
The backend optimized application is not reachable.
Workaround:
Configure with IP address instead of FQDN.
1671645-1 : Interfaces not properly mapped after switching port profiles from 4x25 to 8x10
Component: BIG-IP Next
Symptoms:
When a BIG-IP Next instance is initially onboarded using a 4x25 port profile, switching to an 8x10 port profile results in improper interface mapping.
Conditions:
This issue occurs when a BIG-IP Next instance, initially configured with a 4x25 port profile, is later modified to use an 8x10 port profile.
Impact:
Interfaces from the original 4x25 port profile remain active even after attempting to switch to the 8x10 port profile.
Workaround:
Avoid changing a BIG-IP Next instance from a 4x25 to an 8x10 port profile. Instead, onboard a new instance directly with the 8x10 port profile configuration.
1671465 : FAST-0002: Internal Server Error: Unable to render template Examples/http: rpc error: code = Unknown desc = missed comma between flow collection entries
Component: BIG-IP Next
Symptoms:
Pool members with IPv6 addresses that end in a colon (for example, 2001:db8::) fail config validation on Central Manager.
Conditions:
- Central Manager
- Deploying app from template
- A pool member uses an IPv6 address ending in a colon
Impact:
Address cannot be used for a pool member.
Workaround:
Write the full address with zeros (for example, 2001:db8::0 instead of 2001:db8::).
1670977-1 : The BIG-IP Next Central Manager backup fails when a node becomes unreachable
Component: BIG-IP Next
Symptoms:
The BIG-IP Next Central Manager (CM) backup may fail if one of the nodes in a High Availability (HA) group becomes unreachable (e.g., due to shutdown).
Conditions:
-- The node is shut down and not reachable.
-- The node has not been removed from the group.
Impact:
The BIG-IP Next Central Manager backup process will start, but an alert will indicate that the backup has failed.
Workaround:
Remove the unreachable node through the GUI.
1670689-1 : BIG-IP Next Central Manager High Availability Installation Failures in High Disk Latency Environments★
Component: BIG-IP Next
Symptoms:
The installation of BIG-IP Next Central Manager may fail with a “node not ready” status when there is high disk I/O latency.
Conditions:
High disk latency, such as when using NAS-mounted disk storage, can cause the High Availability installation of the BIG-IP Next Central Manager to fail.
Impact:
The installation of BIG-IP Next Central Manager in High Availability mode will fail.
Workaround:
It is recommended to use low-latency block storage devices for local storage volumes on Virtual Machine (VM) instances.
1668017-1 : Cannot add new VLANs to existing HA L1 Network
Component: BIG-IP Next
Symptoms:
The L1 Network of an HA pair that contains the control plane and data plane VLANs cannot be edited.
Conditions:
The HA L1 Network that is being edited uses control and data plane VLANs
Impact:
-- You cannot edit the L1 network containing the control plane and data plane VLANs
-- You are unable to add or remove VLANs or self IPs that are in that network
Workaround:
Create another L1 network and add the new VLAN to that new L1 network.
1660913-1 : For API workflows switching between /declare and /documents is unsupported.
Component: BIG-IP Next
Symptoms:
Applications created with /declare or /documents must be managed with the API endpoint that created them. You can't switch between endpoints.
Conditions:
Switching between the /declare and /documents APIs when managing applications.
Impact:
Errors will be encountered when switching endpoints from /declare to /documents.
Workaround:
Use the same API endpoint when managing applications.
1644653-1 : BIG-IP Next Central Manager displays a Failed status for High Availability (HA) when adding a third node
Component: BIG-IP Next
Symptoms:
The BIG-IP Next Central Manager displays a Failed status for High Availability when a third node is added to the group.
Conditions:
This issue occurs when nodes are added to a standalone BIG-IP Next Central Manager to form a High Availability group.
Impact:
The BIG-IP Next Central Manager displays a Failed status for High Availability. The last node must be removed and replaced with a refreshed node or a new node to achieve a healthy High Availability state.
Workaround:
When adding a third node to the High Availability cluster, if the status displays as "Failed", the last node must be removed and a refreshed node or a new node added back to the Central Manager High Availability group. The steps are as follows:
1. Log in to the Central Manager UI and remove the last node.
2. Reset the impacted node by running k3s_reset_cluster.py and rebooting it. Clear the contents of /var/lib/f5/infra-manager/registration-details.json.
3. Add the node back to the Central Manager High Availability group.
1644545-1 : Central Manager (CM) restore fails when using a full backup file with an external storage configuration
Component: BIG-IP Next
Symptoms:
Backup file restoration fails on BIG-IP Next Central Manager version 20.3.0.
Conditions:
For Standalone CM:
1. In the UI, configure external storage and start the CM services.
2. After installation completes, log in to the CM and create a full backup.
3. Attempt to restore the backup file on the same CM instance.
For HA CM:
1. In the UI, configure external storage, add nodes, and start the CM services.
2. After installation completes, log in to the CM and create a full backup.
3. Attempt to restore the backup file on the same CM instance.
Impact:
Backup file restoration fails on the BIG-IP Next Central Manager version 20.3.0.
Workaround:
For Standalone CM:
Follow the steps before taking a full backup of the CM:
1. SSH into the CM.
2. Run command "kubectl exec -it cmdb-elasticsearch-0 -c elasticsearch -- bash"
3. Inside the Elasticsearch pod, run the following command:
curl -X PUT "http://localhost:9200/application_cm" -H 'Content-Type: application/json' -d '{
"settings": {
"number_of_shards": 1,
"number_of_replicas": 0
},
"mappings": {
"properties": {
"@timestamp": {
"type": "date"
},
"_hash": {
"type": "keyword"
},
"action": {
"type": "keyword"
},
"level": {
"type": "keyword"
},
"msg": {
"type": "text"
},
"podname": {
"type": "keyword"
},
"source": {
"type": "keyword"
}
}
}
}'
4. In the CM UI, navigate to System > CM Maintenance > Backup & Restore.
5. Follow the instructions to create the full backup.
For HA CM:
Follow the steps before taking a full backup of the CM:
1. SSH into the main node of HA CM.
2. Run command "kubectl exec -it cmdb-elasticsearch-0 -c elasticsearch -- bash"
3. Inside the Elasticsearch pod, run the following command:
curl -X PUT "http://localhost:9200/application_cm" -H 'Content-Type: application/json' -d '{
"settings": {
"number_of_shards": 1,
"number_of_replicas": 1
},
"mappings": {
"properties": {
"@timestamp": {
"type": "date"
},
"_hash": {
"type": "keyword"
},
"action": {
"type": "keyword"
},
"level": {
"type": "keyword"
},
"msg": {
"type": "text"
},
"podname": {
"type": "keyword"
},
"source": {
"type": "keyword"
}
}
}
}'
4. In the CM UI, navigate to System > CM Maintenance > Backup & Restore.
5. Follow the instructions to create the full backup.
1644157 : "Error sending OCSP request" seen in apmd logs for OCSP authentication access policy
Component: BIG-IP Next
Symptoms:
OCSP request traffic does not reach the OCSP server if the DNS resolver is improperly configured at the first deployment of the application.
Conditions:
An application has been deployed that has an Access policy with an OCSP Authentication agent. The OCSP Authentication agent has been configured with an FQDN in the OCSP Responder URL. A DNS resolver has not been added to the configuration settings, and the default DNS resolver has not been configured.
Impact:
Once the application is deployed under these conditions, the OCSP Authentication agent will not be able to forward the request to the OCSP server because it cannot resolve the FQDN. Redeploying the application with the appropriate DNS resolver configuration fails to correct the issue.
Workaround:
Create a new application with the proper DNS resolver configuration and deploy it. Alternatively, an admin can restart TMM after the original application has been redeployed with the correct DNS resolver configuration.
1641909 : Applications created from the API sometimes can't be edited in the GUI.
Component: BIG-IP Next
Symptoms:
In some cases, users are unable to edit FAST applications that were created through the API.
Conditions:
Switching between the API and the UI can trigger this issue.
Impact:
An application cannot be edited in the UI.
Workaround:
Go to Protocols and Profiles, enable Server TLS, then disable it and save. Review & Deploy becomes enabled, and from there you can edit the application.
1641901-1 : App configs are not reflected in f5-fsm, causing traffic failure
Component: BIG-IP Next
Symptoms:
If license activation happens from the standby node after an odd number of failovers, the license is rejected on TMM due to a cluster locking failure.
Conditions:
Set up an HA cluster, perform an odd number of failovers, and trigger license activation.
Impact:
License activation is blocked if it is attempted on the standby server after an odd number of failovers.
Workaround:
Perform any one of the following:
1. Restart the licensing pod on the standby node once HA is assembled.
2. Restart the standby node once HA is assembled.
1636229-1 : A vCPU count change can stop traffic for up to 3 hours
Component: BIG-IP Next
Symptoms:
After changing the number of vCPUs for a BIG-IP Next instance, traffic stops passing through the instance. The instance is running but it will not pass traffic.
Conditions:
-- Central Manager managing one or more BIG-IP Next instances
-- The BIG-IP Next instances are licensed and passing traffic
-- From Central Manager, you change the number of vCPUs of a BIG-IP Next instance
Impact:
Traffic is disrupted until the telemetry report is sent to F5. This can take up to 3 hours.
Workaround:
A CM administrator can deactivate and then reactivate the license to resume traffic immediately.
1635421 : License server unavailable when a node goes down★
Component: BIG-IP Next
Symptoms:
The license feature (mbiq-llm) pod is designed as a single replica. When the node hosting mbiq-llm crashes, k3s tries to reschedule the pod onto another available node. This process takes time, and license features remain inaccessible during the transition.
Conditions:
-- Central Manager configured for high availability (three nodes)
-- One of the nodes where license server is scheduled goes down
Impact:
License features are inaccessible, and the instance properties page shows a "license service is unavailable" error.
Workaround:
The license feature (mbiq-llm) takes a couple of minutes to become ready; wait for the pod to come up, after which licensing works again.
1635369-1 : CM pool has a mandatory monitor constraint.
Component: BIG-IP Next
Symptoms:
When defining pools, the Central Manager UI requires a monitor to be configured.
Conditions:
-- Central Manager
-- Configuring a new pool
Impact:
A monitor must be defined when creating a pool.
Workaround:
Define a monitor for your environment that will be able to mark your pool members up/down.
1634929 : Parameter names in API documentation are invalid for metrics API
Component: BIG-IP Next
Symptoms:
The following error occurs when using the examples from the CM API specification (Retrieve applications time series metrics):
{
"status": 500,
"message": "ADO-QUERY-00001: Failed to get metrics: unknown metric name: cpu.idle"
}
Conditions:
CM applications metrics API
Impact:
Unable to retrieve application metrics via API calls, for example:
/api/v1/spaces/default/analytics/application-services/metrics?names=cpu.idle&start=now-1h
Workaround:
'cpu.idle' and 'cpu.system' are invalid; use 'cpu.idle.usage.percent' and 'cpu.system.usage.percent' instead.
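The corrected query can be sketched as a small shell snippet. CM_HOST is a placeholder for your Central Manager address, and the curl line is shown as a comment because it requires a live CM and an API token:

```shell
# Build the metrics query with a valid metric name; CM_HOST is a placeholder.
CM_HOST="${CM_HOST:-cm.example.com}"
METRIC="cpu.idle.usage.percent"   # 'cpu.idle' is rejected by the API
URL="https://${CM_HOST}/api/v1/spaces/default/analytics/application-services/metrics?names=${METRIC}&start=now-1h"
echo "$URL"
# Send the query with your CM API token, e.g.:
#   curl -sk -H "Authorization: Bearer $TOKEN" "$URL"
```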
1634065-1 : BIG-IP Next application telemetry data missing for a brief period from Central Manager when a CM node goes down
Component: BIG-IP Next
Symptoms:
If one of the nodes in BIG-IP Next Central Manager goes down, then the telemetry data for applications will show gaps in the telemetry charts.
Conditions:
Any of the nodes in the BIG-IP Next Central Manager HA Nodes becomes unavailable or goes down.
Impact:
BIG-IP Next instance application data metrics will be missing about 2 minutes of data.
Workaround:
None
1633569-1 : Default values for new entities in an attached OpenAPI file do not match the policy’s current configuration
Component: BIG-IP Next
Symptoms:
When a user modifies a policy’s configuration, it will impact the default values for new entities (such as URLs and parameters). If a new OpenAPI file is added to the policy, the default values for new entities created from the OpenAPI specification (OAS) file will not be determined by the policy’s configuration. Instead, they will be based on a policy template.
Conditions:
The user’s WAF policy has values that are different from the default template values, which impact the creation of entities.
Impact:
When a new OpenAPI file is added to the policy, the entities created from it will receive values based on the default template values instead of the current values in the policy.
Workaround:
Modify the fields of newly created entities manually to align with the desired values.
1632833-1 : Upgrade to Release version 20.3.0 might create a core file★
Component: BIG-IP Next
Symptoms:
When upgrading to release version 20.3.0 from a previous version, a core file might be generated.
Conditions:
Upgrade to release version 20.3.0 from a previous version.
Impact:
The cores are generated by the previously installed version; there is no functional impact.
Workaround:
Core files generated during the upgrade can be ignored or deleted.
1629897-1 : Shared object installation status might be incorrect on a migration resume.
Component: BIG-IP Next
Symptoms:
Shared object installation status (installed) reported in the Migration feature is incorrect when a session is resumed.
Conditions:
-- A shared object is installed in a new migration.
-- The object is removed from the Central Manager.
-- The migration is then resumed
Impact:
Application may be migrated as draft with a reference to an object that does not exist on Central Manager.
Workaround:
Start a new migration instead of resuming it to ensure the status of all shared objects is up to date.
1629537 : Logged-in admin user will not be able to change password before Central Manager setup
Component: BIG-IP Next
Symptoms:
The admin user is unable to change the admin password.
Conditions:
1. Log in as admin to a new CM that has not completed setup.
2. Change the default admin password.
Impact:
Admin user cannot change the password until Central Manager setup is completed.
Workaround:
If you want to change the password without completing the CM setup, you can use the following API:
API endpoint: '/api/change-password'
Method: POST
Payload: {
"username": "admin",
"temp_password": "current password",
"new_password": "new password"
}
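The call above can be sketched as follows. CM_HOST and both password values are placeholders, and the curl command is shown as a comment because it requires a live CM:

```shell
# Placeholder values -- substitute your CM address and real passwords.
CM_HOST="${CM_HOST:-cm.example.com}"
PAYLOAD='{"username":"admin","temp_password":"current password","new_password":"new password"}'
# Sanity-check the JSON before sending (python3 is assumed to be available).
echo "$PAYLOAD" | python3 -m json.tool > /dev/null && echo "payload ok"
# Send it with:
#   curl -sk -X POST "https://${CM_HOST}/api/change-password" \
#        -H 'Content-Type: application/json' -d "$PAYLOAD"
```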
1629161-1 : L1-Network cannot be deleted
Component: BIG-IP Next
Symptoms:
Created L1-Network cannot be deleted.
Conditions:
-- BIG-IP Next instance managed and onboarded by Central Manager
-- Network and proxy settings are already configured on the instance
-- Clean up the instance's network configuration (L1 networks, VLANs, and IPs)
Impact:
Central Manager is unable to delete the L1 Network objects on the BIG-IP Next instance.
Workaround:
1. Delete the L1-Network on the instance using the CM proxy API (see below).
2. Delete the L1-Network in CM (using the UI).
How to delete the L1-Network on the instance using the CM proxy API:
1. Log in to CM using the CM login API.
2. Get the ID of the instance whose L1-Network object needs to be modified using the GET /api/v1/spaces/default/instances API.
3. Get the ID of the L1-Network to delete by sending a GET request to "api/device/v1/proxy/{INSTANCE_ID}?path=/L1-networks".
4. Delete the L1-Network by sending a DELETE request to "/api/device/v1/proxy/{INSTANCE_ID}?path=/L1-networks/{L1_NETWORK_ID}". Take note of the Job ID returned in the "id" parameter of the response.
5. Confirm that the L1-Network deletion succeeded by sending a GET request to "/api/device/v1/proxy/{INSTANCE_ID}?path=/jobs/{JOB_ID}" and checking that the "title" under "message" has the value "jobUpdateSuccessful".
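The proxy paths used in steps 3-5 all follow one pattern, sketched below; the instance, network, and job IDs are placeholders:

```shell
# Build the CM device-proxy path for a given instance and device-API path.
proxy_path() {
  printf '/api/device/v1/proxy/%s?path=%s\n' "$1" "$2"
}
proxy_path my-instance-id /L1-networks                 # step 3: list (GET)
proxy_path my-instance-id /L1-networks/my-network-id   # step 4: delete (DELETE)
proxy_path my-instance-id /jobs/my-job-id              # step 5: job status (GET)
```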
1629105-1 : Incorrect conversion of DTLS virtual server★
Component: BIG-IP Next
Symptoms:
If a virtual server with a UDP and an SSL profile is migrated, the resulting AS3 declaration contains a Service_UDP class without a reference to the proper DTLS_server or DTLS_client class. Additionally, the declaration incorrectly includes a TLS_server or TLS_client class.
Conditions:
Migration of DTLS virtual server.
Impact:
The migration process creates a UDP application without TLS.
Workaround:
Manually modify the AS3 declaration using the CM AS3 editor before deployment to the BIG-IP Next instance, according to the latest BIG-IP Next schema documentation (DTLS_client/DTLS_server and Service_UDP classes):
https://clouddocs.f5.com/bigip-next/latest/schemasupport/schema-reference.html#service-udp
https://clouddocs.f5.com/bigip-next/latest/schemasupport/schema-reference.html#dtls-server
1629077-1 : BIG-IP Next Central Manager does not support NTP configuration via DHCP
Component: BIG-IP Next
Symptoms:
If you supply NTP server IP addresses via DHCP, chrony is not configured to use them.
Conditions:
DHCP server provides NTP server IP addresses to BIG-IP Next Central Manager.
Impact:
Custom NTP sources must be configured via the setup utility or cloud-init instead of DHCP.
Workaround:
Either run the setup utility to configure custom NTP server IP addresses, or modify the /etc/chrony/sources.d/central-manager.sources file to contain the sources advertised by DHCP.
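The sources file uses standard chrony directives. A minimal, hypothetical example (the address is a placeholder for the NTP server your DHCP server advertises):

```
# /etc/chrony/sources.d/central-manager.sources
server 192.0.2.123 iburst
```

After editing the file, reload the sources (for example, with "chronyc reload sources") so chrony picks up the change.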
1623609-1 : Skipped certificate marked as imported during application migration via the GUI.
Component: BIG-IP Next
Symptoms:
A certificate that is skipped during migration is incorrectly marked in the GUI as installed.
Conditions:
Migration of a virtual server configured with an unsupported or default certificate.
Impact:
Cosmetic only.
The declaration is fine and the skipped certificate is not installed on the CM.
Workaround:
None
1623533-1 : Observing drop in traffic throughput with debug-sidecar inline tcpdump packet capture
Component: BIG-IP Next
Symptoms:
Traffic throughput drops when packets are captured inline with tcpdump from the debug sidecar. This has also been observed in earlier release versions and might be a limitation of the existing tcpdump packet capture design. No throughput drop is observed when tcpdump captures to a file.
Conditions:
-- Heavy traffic (for example, 10G generated with an Ixia traffic generator)
-- Once the traffic is stable, tcpdump is run on the TMM debug sidecar container
Impact:
Traffic throughput is affected when an admin/debug user captures packets inline using the debug sidecar tcpdump.
Workaround:
Capture to a file instead; no traffic throughput drop is observed when tcpdump writes the capture to a file.
1623421-1 : External OpenAPI files cannot be used with HTTPS links
Component: BIG-IP Next
Symptoms:
Creating a new policy using an external OpenAPI file from an HTTPS address is not possible.
Conditions:
The user has an OpenAPI file located at an HTTPS address.
Impact:
Cannot create a new policy with an external OpenAPI file located at an HTTPS address.
Workaround:
Download the OpenAPI file locally, then create the policy by uploading the file to Central Manager.
1622005-1 : OpenAPI files that are extremely large cannot be applied
Component: BIG-IP Next
Symptoms:
Uploading a significantly large OpenAPI file containing numerous endpoints and parameters results in the failure to apply it to a policy.
Conditions:
The user has a very large OpenAPI file.
Impact:
Large OpenAPI files cannot be used for WAF policy applications.
Workaround:
None
1615261 : Application page may show "No Data" for Active Alerts instead of zero.
Component: BIG-IP Next
Symptoms:
The main application page may show "No Data" under the Active Alerts heading when the application service page will show zero.
Conditions:
Lack of active alerts.
Impact:
This is a cosmetic issue. "No Data" is the same state as zero active alerts.
Workaround:
None
1615257 : Application monitors edit drawer autosaves
Component: BIG-IP Next
Symptoms:
In the "Manage Monitors" drawer, there is no explicit Save button when selecting monitor types under the "Monitor Type" drop-down; the drawer auto-saves user input as changes are made. Custom monitors you have created can be deleted, but default monitors cannot.
Conditions:
The save/delete options on the application monitor page can be confusing.
Impact:
The delete and save operations on the Manage Monitors page can lead to confusion.
Workaround:
None needed; the Manage Monitors page auto-saves all user input.
1604997-1 : Central Manager (CM) Prometheus pod in CrashLoopBackOff
Component: BIG-IP Next
Symptoms:
The Prometheus pod is stuck in a CrashLoopBackOff state with 2 out of 3 containers running.
Conditions:
The Central Manager has accumulated a large amount of telemetry data.
Impact:
Prior to the BIG-IP Next 20.3.0 release, instance telemetry data will be unavailable.
Starting with the BIG-IP Next 20.3.0 release, there is no impact on functionality, as instance telemetry data is no longer stored in Prometheus. However, telemetry data for the BIG-IP Next Central Manager will not be available for debugging purposes. This does not affect any Central Manager functionality.
Workaround:
SSH into the Central Manager as the admin user and execute the following commands:
pvc_name="pvc/prometheus-pv-claim"
pv_name="pv/$(kubectl get ${pvc_name} -o jsonpath='{.spec.volumeName}')"
pod_name=$(kubectl get pod -l app.kubernetes.io/name=prometheus -o name)
echo "apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: prometheus-pv-claim
annotations:
"helm.sh/resource-policy": keep
labels:
helm.sh/chart: prometheus-0.1.0
app.kubernetes.io/instance: prometheus
app.kubernetes.io/name: prometheus
app.kubernetes.io/version: "0.0.0"
app.kubernetes.io/managed-by: Helm
spec:
storageClassName: local-path
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi" > pvc.yaml
kubectl delete "${pv_name}" "${pvc_name}" "${pod_name}"
kubectl apply -f pvc.yaml
Confirm the Prometheus pod is running successfully and then execute this command:
rm pvc.yaml
1604657 : High CPU utilization and reduced throughput in certain conditions when connection mirroring is enabled in HA
Component: BIG-IP Next
Symptoms:
High CPU utilization and reduced throughput in certain conditions when connection mirroring is enabled in HA
Conditions:
Connection mirroring is enabled in HA. System is bombarded with short-term connections.
Impact:
High CPU utilization and reduced throughput are seen in an HA setup as compared to a standalone system.
Workaround:
Connection mirroring is usually reserved for long-lived connections or sessions. If the traffic pattern is short-term connections, disable connection mirroring.
1603561-1 : L1-Network name cannot be changed
Component: BIG-IP Next
Symptoms:
Created L1-Network name cannot be changed.
Conditions:
- BIG-IP Next instance managed and onboarded by Central Manager
- Network and proxy settings are already configured on the instance.
- Edit the instance's configured L1-Network name.
Impact:
The Central Manager cannot change the name of the L1 Network on the BIG-IP Next instance. It may indicate that the update was successful, but the outcome will be as follows:
1. The previous L1-Network remains unchanged.
2. A new L1-Network (with the updated name) is added.
Workaround:
Use the following steps:
1. Delete the L1-Network in instance using Central Manager proxy API (see below).
2. Delete the L1-Network in Central Manager (using the UI).
How to Delete the L1-Network in an Instance Using the Central Manager Proxy API:
1. Log in to the Central Manager using the Central Manager login API.
2. Retrieve the instance ID for the L1 network object that needs to be modified by using the GET /api/v1/spaces/default/instances API.
3. Obtain the old L1 network ID that needs to be deleted by sending a GET request to api/device/v1/proxy/{INSTANCE_ID}?path=/L1-networks.
4. Delete the old L1 network by sending a DELETE request to /api/device/v1/proxy/{INSTANCE_ID}?path=/L1-networks/{L1_NETWORK_ID}. Note the Job ID returned in the "id" parameter of the response.
5. Ensure that the deletion of the L1 network is successful by sending a GET request to /api/device/v1/proxy/{INSTANCE_ID}?path=/jobs/{JOB_ID}. Confirm that the "title" under the "message" has the value "jobUpdateSuccessful."
1602001 : Upgrading from 20.2.1 or Earlier versions will delete all External Loggers★
Component: BIG-IP Next
Symptoms:
Upgrading BIG-IP Next Central Manager (CM) from 20.2.1 or earlier versions deletes all configured external loggers.
Conditions:
An external logger is configured for the BIG-IP Next Instance.
Impact:
After the upgrade, the external logger configuration must be reconfigured manually.
Workaround:
None
1601573 : UI elements related to virtual servers not shown after upgrade★
Component: BIG-IP Next
Symptoms:
After upgrading from 20.2.0 or below to 20.2.1 or above, several things no longer work until existing applications are redeployed:
1. In the L7 dashboard, the whole section that filters for virtual servers is not shown.
2. In Event Logs > L7 DoS, it is impossible to filter by virtual server (the dropdown is empty).
3. In Reports, if you click Create, select "Virtual Servers", and then click "Select", the dropdown is empty.
Conditions:
A WAF application was deployed on 20.2.1 or below.
Impact:
The UI elements listed under Symptoms are not shown until affected applications are redeployed.
Workaround:
Redeploy affected applications.
1601233-1 : Multi-replica in HA not supported for alert feature
Component: BIG-IP Next
Symptoms:
Because multi-replica support is not yet implemented for the alert feature in HA, it may take a while for the alert feature to become active on the receiving node. This can prevent QKViews from being generated, since they rely on the alert feature.
Conditions:
The problem occurs when the node does not have the alert feature.
Impact:
After restarting one of the instances in Central Manager HA, the QKView for the BIG-IP Next HA configuration remains in a running state.
Workaround:
Wait a couple of minutes for the alert feature to come up; it should then be functional.
1600809-1 : Upgrading BIG-IP Next Central Manager does not show unsupported properties in migrations created before upgrade.★
Component: BIG-IP Next
Symptoms:
Prior to 20.2.1, Journeys reported only supported objects; if any of an object's properties were unsupported, you were not informed about that gap.
After upgrading to 20.2.1, old migration sessions are not updated to report unsupported properties.
The Configuration Analyzer does not show unsupported properties underlined in the application's configuration files.
Conditions:
-- Migrations created in a version prior to 20.2.1
-- Upgrade to 20.2.1
-- Analyze the configuration
Impact:
Migration status of the applications is invalid.
Workaround:
Start a new migration using the same UCS archive to get the proper reporting of unsupported properties.
1600381-1 : WAF enforcer might crash during handling of response
Component: BIG-IP Next
Symptoms:
The WAF enforcer may experience a crash if it receives a response containing a specially designed, large Set-Cookie header.
Conditions:
The protected server sends a large Set-Cookie header.
Impact:
The Enforcer may experience occasional crashes, resulting in disrupted traffic until the WAF-enforcer is restarted.
Workaround:
None
1600377-1 : The BIG-IP Central Manager GUI does not support backup file uploads when external storage is configured.
Component: BIG-IP Next
Symptoms:
Steps:
1. Configure two BIG-IP Next Central Managers (BIG-IP Next Central Manager 1 and BIG-IP Next Central Manager 2) with external storage.
2. Create a backup on BIG-IP Next Central Manager 1 and download the backup file.
3. Open BIG-IP Next Central Manager 2 to restore.
4. The “Upload Backup File” button is not visible, preventing access to the restore functionality.
Conditions:
The BIG-IP Next Central Manager GUI does not allow uploading, downloading, or deleting backup files when external storage is configured.
Impact:
Users cannot upload a backup file or perform a restore through the BIG-IP Next Central Manager GUI.
Workaround:
Manually transfer the backup file from BIG-IP Next Central Manager 1 to BIG-IP Next Central Manager 2. For detailed instructions, refer to the “Restore the BIG-IP Next Central Manager with External Storage” section.
https://clouddocs.f5.com/bigip-next/latest/use_cm/cm_backup_restore_using_ui_api.html#restore-the-big-ip-next-central-manager-with-external-storage
1596929-1 : Policy-compiler supports policy versions only up to 17.0.0.
Component: BIG-IP Next
Symptoms:
Policy versions above 17.0.0 are rejected during import.
Conditions:
A policy includes the "softwareVersion" parameter with a value above 17.0.0.
Impact:
Policy import fails.
Workaround:
None
1596801-1 : Route Health Injection default for BIG-IP Next is "ANY"★
Component: BIG-IP Next
Symptoms:
The default for route health injection in BIG-IP Next is ANY. This is an intentional change in behavior from previous versions of BIG-IP (17.x and lower) where it is "Disabled".
Conditions:
Dynamic routing enabled (BGP, OSPF)
For more information on the available route health injection settings, see https://my.f5.com/manage/s/article/K15923612
Impact:
BIG-IP Next will always advertise virtual IP addresses by default. Previous versions of BIG-IP (17.x and lower) disable route health injection by default.
Workaround:
Set the appropriate default for RHI.
1596021-1 : serverTLS/clientTLS name in Service_TCP do not match the clientSSL/serverSSL profile name
Component: BIG-IP Next
Symptoms:
When you try to deploy the application service, if the serverTLS/clientTLS name in Service_TCP does not match the clientSSL/serverSSL profile name, you might get one of the following error messages:
serverTLS: must contain a path pointing to an existing reference
or
clientTLS: must contain a path pointing to an existing reference
Conditions:
Object names are truncated if application or partition names are too long.
Impact:
Application service deployment to the BIG-IP Next instance fails.
Workaround:
Ensure that the serverTLS/clientTLS name in the Service_TCP class and the clientSSL/serverSSL name in the declaration are the same.
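For illustration, a trimmed, hypothetical fragment of an AS3 declaration in which the names match; the virtual address and object names are placeholders. The key point is that the value of "serverTLS" in the Service_TCP class is identical to the name of the TLS_Server object it references:

```json
{
  "service": {
    "class": "Service_TCP",
    "virtualAddresses": ["192.0.2.10"],
    "serverTLS": "webtls"
  },
  "webtls": {
    "class": "TLS_Server",
    "certificates": [{ "certificate": "webcert" }]
  }
}
```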
1593805 : The air-gapped environment upgrade from BIG-IP Next 20.0.2-0.0.68 to BIG-IP Next 20.2.0-0.5.41 fails★
Component: BIG-IP Next
Symptoms:
When upgrading from BIG-IP Next version 20.0.2-0.0.68 to version 20.2.0-0.5.41, the process encounters a failure. Post-upgrade, the Central Manager GUI exhibits a continuous flashing behavior, persisting for a few minutes before returning to normal functionality. Furthermore, the failed upgrade leads to discrepancies in version representation, where the GUI displays the current version while the CLI indicates the target version.
Conditions:
Upgrade CM from version 20.0.2-0.0.68 to version 20.2.0-0.5.41.
Impact:
CM becomes dysfunctional. The failed upgrade leads to discrepancies in version representation: the CM GUI displays the current version while the CLI indicates the target version.
Workaround:
Back up and restore CM from version 20.0.2; refer to How to: Back up and restore BIG-IP Next Central Manager (https://clouddocs.f5.com/bigip-next/20-0-2/use_cm/cm_backup-restore.html).
1593745 : Issues identified during Backup, Restore, and User Operations between two BIG-IP Next Central Managers for Standalone and High Availability Nodes.
Component: BIG-IP Next
Symptoms:
Performing a backup on one BIG-IP Next Central Manager, followed by user operations, and then performing a restore on another BIG-IP Next Central Manager with subsequent user operations may result in the following issues:
-- You cannot download a QKView on the restored BIG-IP Next Central Manager if it was created by the previous BIG-IP Next Central Manager before the backup operation.
-- After you restore the backup on the new BIG-IP Next Central Manager setup, any BIG-IP Next instance deleted on the previous BIG-IP Next Central Manager enters an unknown state.
-- After you restore the backup on the new BIG-IP Next Central Manager, an app deleted on the previous BIG-IP Next Central Manager cannot process traffic until you redeploy the app.
-- The BIG-IP Next Central Manager does not support uploading and downloading backup files when configured with external storage.
Conditions:
Perform a backup on one BIG-IP Next Central Manager and restore it on another BIG-IP Next Central Manager.
Impact:
After restoring the new BIG-IP Next Central Manager, certain operations might not function properly.
Workaround:
If you delete an app after taking a backup on the BIG-IP Next Central Manager and then restore the backup on a new BIG-IP Next Central Manager, traffic will not pass through. Edit and redeploy the app for traffic to function properly.
1593613 : When an upgrade fails, CM cannot be restored and becomes dysfunctional due to multiple containers entering the 'CrashLoopBackOff' state★
Component: BIG-IP Next
Symptoms:
Upgrade from BIG-IP Next version 20.0.2 to version 20.2.0 fails with status of CrashLoopBackOff on several pods.
Conditions:
Upgrading from BIG-IP Next version 20.0.2 to version 20.2.0
Impact:
Central Manager is unusable due to pods not being in valid state.
Workaround:
Restore CM from a backup of the previous version and upgrade to the next minor version, 20.1.0. Refer to How to: Back up and restore BIG-IP Next Central Manager (https://clouddocs.f5.com/bigip-next/20-0-2/use_cm/cm_backup-restore.html).
1590037-1 : Provisioning SSL Orchestrator on BIG-IP NEXT HA cluster fails when using Central Manager UI
Component: BIG-IP Next
Symptoms:
After a user creates and licenses an HA cluster of BIG-IP Next instances using the Central Manager UI, provisioning SSL Orchestrator from the UI may make the UI unresponsive, displaying an "Enabling SSL Orchestrator is in progress..." message.
Conditions:
A user creates an HA cluster of BIG-IP Next instances and tries to provision SSL Orchestrator from the UI.
Impact:
Provisioning SSL Orchestrator on HA cluster may result in unresponsive UI.
Workaround:
Configure the HA cluster, license it, and provision SSL Orchestrator using the OpenAPI, prior to adding the cluster to Central Manager.
1589865-1 : Licensing via CM fails with "400 The SSL certificate error"
Component: BIG-IP Next
Symptoms:
An error occurs in the LLM logs on the Central Manager licensing screen during BIG-IP Next license activation:
...... error while getting Signed Ack. Response: <html>
<head><title>400 The SSL certificate error</title></head>
<body>
<center><h1>400 Bad Request</h1></center>
<center>The SSL certificate error</center>
<hr><center>server</center>
</body>
</html>
...... ack verification task failed with Error: LICENSING-1120::<html>
<head><title>400 The SSL certificate error</title></head>
<body>
</html>
Conditions:
- Central Manager
- License Activation
Impact:
Unable to perform License Activation on BIG-IP Next Instance from Central Manager.
Workaround:
For License Activation:
Log in to the CM shell as admin and perform the following steps:
-------------------------------------------------------------------
Step A:
-------------------------------------------------------------------
Copy the Vault client certificate from the LLM pod to CM so that access to the Vault server is possible.
Execute the following commands to get the tls.key, tls.crt, and ca.crt needed to perform operations on LLM objects:
kubectl get secrets/mbiq-llm-vault-client-cert -o 'go-template={{index .data "tls.key"}}' | base64 -d > tls.key
kubectl get secrets/mbiq-llm-vault-client-cert -o 'go-template={{index .data "tls.crt"}}' | base64 -d > tls.crt
kubectl get secrets/mbiq-vault-cert -o 'go-template={{index .data "ca.crt"}}' | base64 -d > ca.crt
-------------------------------------------------------------------
Step B:
-------------------------------------------------------------------
Get the client token needed to perform operations on the LLM objects.
1. Execute the following command to get the Vault IP:
kubectl get svc | grep mbiq-vault-active
Example:
$ kubectl get svc | grep mbiq-vault-active
mbiq-vault-active ClusterIP 10.1.1.1 <none> 8200/TCP,8201/TCP 11h
Note: If the IP is not available with "kubectl get svc | grep mbiq-vault-active", execute the following command to get the Vault IP:
kubectl get svc | grep mbiq-vault
Use the IP for mbiq-vault from the above command result.
Example:
$ kubectl get svc | grep mbiq-vault
mbiq-vault-internal ClusterIP None <none> 8200/TCP,8201/TCP 25d
mbiq-vault ClusterIP 10.1.1.2 <none> 8200/TCP,8201/TCP 25d
2. Use the IP obtained in the previous step to generate the client_token. A new token must be generated for every API call on LLM objects.
curl --insecure --request PUT --cacert ca.crt --cert tls.crt --key tls.key --data '{"name": "llm"}' https://<Vault IP>:8200/v1/auth/cert/login | jq '.auth.client_token'
Example:
$ curl --insecure --request PUT --cacert ca.crt --cert tls.crt --key tls.key --data '{"name": "llm"}' https://10.1.1.1:8200/v1/auth/cert/login | jq '.auth.client_token'
Example client_token output:
"hvs.CAESIABDIsdPQxrJzfCNqRhTzI4L2f26SOmjp1Wp2dKp2zIvGh4KHGh2cy5DWkRkbVpKVTRLNjZsWW1UejBDM1ZnN0I"
-------------------------------------------------------------------
Step C:
-------------------------------------------------------------------
Delete the LLM objects: certs, privateKey, digitalAssetID, and certificateChain.
Execute the following commands to delete the certs, privateKey, digitalAssetID, and certificateChain of the LLM pod.
Note: Each command requires a new client_token; use Step B-2 to generate one each time, as a client_token is valid for only a single operation.
Fetch the client_token from B-2
curl --insecure --cacert ca.crt --header "X-Vault-Token: <client_token>" -X DELETE https://<Vault IP>:8200/v1/secret/llm/certs
Fetch the client_token from B-2
curl --insecure --cacert ca.crt --header "X-Vault-Token: <client_token>" -X DELETE https://<Vault IP>:8200/v1/secret/llm/privateKey
Fetch the client_token from B-2
curl --insecure --cacert ca.crt --header "X-Vault-Token: <client_token>" -X DELETE https://<Vault IP>:8200/v1/secret/llm/digitalAssetID
Fetch the client_token from B-2
curl --insecure --cacert ca.crt --header "X-Vault-Token: <client_token>" -X DELETE https://<Vault IP>:8200/v1/secret/llm/certificateChain
Example:
curl --insecure --cacert ca.crt --header "X-Vault-Token: hvs.CAESIABDIsdPQxrJzfCNqRhTzI4L2f26SOmjp1Wp2dKp2zIvGh4KHGh2cy5DWkRkbVpKVTRLNjZsWW1UejBDM1ZnN0I" -X DELETE https://10.1.1.1:8200/v1/secret/llm/certs
curl --insecure --cacert ca.crt --header "X-Vault-Token: hvs.CAESIFiHjXY4LNxlKoIyO1NfdGQBs-bK3Cpkh4SSc_k4u75eGh4KHGh2cy5uVGR5RFFLd2d6dGhEaVRZeEpvUFdTT1E" -X DELETE https://10.1.1.1:8200/v1/secret/llm/privateKey
curl --insecure --cacert ca.crt --header "X-Vault-Token: hvs.CAESIJfIwIzEprlD5r589sK9YELSlhOuZtx-rpMx8sV-e6YvGh4KHGh2cy5vTFBzM0JYUlZkQzBlWE43bGQ4NG1uTXQ" -X DELETE https://10.1.1.1:8200/v1/secret/llm/digitalAssetID
curl --insecure --cacert ca.crt --header "X-Vault-Token: hvs.CAESIOJj_40Zd2z3vyXzRBaRMS-A_o-GR8ottySpNn4SvStKGh4KHGh2cy5pZHBOY0djWVNGUnZwS3ZIdFpPTnZCaHk" -X DELETE https://10.1.1.1:8200/v1/secret/llm/certificateChain
-------------------------------------------------------------------
Step D:
-------------------------------------------------------------------
Restart the LLM pod
Run the following command to retrieve the pod details:
kubectl get pods | grep llm
Restart the pod:
kubectl delete pod <pod name>
Check the status of the LLM pod:
kubectl get pods | grep llm
Because the LLM objects were cleared in Step C, they are recreated with the correct values when the pod restarts.
Example:
$ kubectl get pods | grep llm
mbiq-llm-84c56d748d-jn7jm 2/2 Running 0 29m
$ kubectl delete pod mbiq-llm-84c56d748d-jn7jm
pod "mbiq-llm-84c56d748d-jn7jm" deleted
$ kubectl get pods | grep llm
mbiq-llm-84c56d748d-5rdlz 0/2 PodInitializing 0 4s
$ kubectl get pods | grep llm
mbiq-llm-84c56d748d-5rdlz 2/2 Running 0 6s
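Instead of polling with repeated kubectl get pods, you can block until the replacement pod is Ready. The label selector below is an assumption (verify your pod's labels with kubectl get pods --show-labels); the command is built as a string and echoed here as a dry run rather than executed:

```shell
#!/bin/sh
# Assumed label selector; confirm with: kubectl get pods --show-labels
KWAIT="kubectl wait --for=condition=Ready pod -l app=mbiq-llm --timeout=120s"
echo "$KWAIT"   # dry run; run the command directly on the node
```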
Once the pod is in a Running state, initiate the license operation (Activate/Switch License) from CM.
After the activation succeeds, delete the certificates from Step A:
rm -f tls.key tls.crt ca.crt
1588813-1 : CM Restore on a 3 node BIG-IP Next Central Manager with external storage fails with ES errors
Component: BIG-IP Next
Symptoms:
BIG-IP Next Central Manager restore fails with critical alert raised with the description:
Error registering Elasticsearch snapshot repository: failed to register Elasticsearch snapshot repository: response not acknowledged. result: map[error:map[caused_by:map[caused_by:map[reason:/vol/elasticsearch-snapshot/restore-temp/elasticsearch type:access_denied_exception] reason:[elastic-repo] cannot create blob store type:repository_exception] reason:[elastic-repo] Could not determine repository generation from root blobs root_cause:[map[reason:[elastic-repo] cannot create blob store type:repository_exception]] type:repository_exception] status:500]
Conditions:
Configure a 3-node BIG-IP Next Central Manager and take a CM backup.
Then, on a fresh 3-node BIG-IP Next Central Manager, use the backup file to restore the BIG-IP Next Central Manager.
Impact:
CM restore succeeds but Elasticsearch is not restored.
Workaround:
When Elasticsearch is not restored, an alert message is raised. Run ./opt/cm-bundle/cm restore_es to ensure Elasticsearch is restored.
1588101-1 : Any changes made on the BIG-IP Next Central Manager after the BIG-IP Next instance backup will not be reflected on the BIG-IP Next Central Manager once the BIG-IP Next instance is restored.
Component: BIG-IP Next
Symptoms:
After an instance restore succeeds and all the data on the instance is restored, BIG-IP Next Central Manager still shows data that does not match the instance.
Conditions:
Create a BIG-IP Next Central Manager, and discover an instance.
Create an instance backup.
Make any changes on the instance using BIG-IP Next Central Manager. For example, create a QKview, modify networks or selfips, or delete application services on the instance using BIG-IP Next Central Manager UI.
Restore the instance with the backup file created.
The restore is reflected only on the instance, not on BIG-IP Next Central Manager. As a result, the state from before the backup is not visible on the BIG-IP Next Central Manager UI.
Impact:
There are discrepancies between BIG-IP Next Central Manager and the instance.
Workaround:
None
1586869 : Unable to create the same standby instance, when Instance HA creation failed using CM-created instances★
Component: BIG-IP Next
Symptoms:
With CM-created instances, if instance HA creation fails and the standby instance is removed from CM, you will not be able to create the same standby instance configuration again.
Conditions:
Creating Instance HA using CM-created instances.
Impact:
Unable to create the same standby instance.
Workaround:
Remove the active instance from CM. This deletes both the active and standby instances from CM and the provider. Then create both instances again.
1585309 : Server-Side traffic flows using a default VRF even though pool is configured in a non-default VRF
Component: BIG-IP Next
Symptoms:
Traffic flows when a default VRF is configured and a pool is configured in a non-default VRF that has no route to the pool.
Conditions:
- Default VRF is configured
- Pool is configured in non-default VRF
- Route to pool exists in default VRF, but not in non-default VRF.
Impact:
Traffic continues to work even though the pool is configured in a non-default VRF that has no route to the pool, which breaks expected network isolation.
Workaround:
For network isolation, do not configure a default VRF. Use all non-default VRFs in the configuration.
1584637 : After upgrade, 'Accept Request' will only work on events after policy redeploy★
Component: BIG-IP Next
Symptoms:
After BIG-IP Next Central Manager is upgraded to version 20.2.1 from a previous version, the 'Accept Request' option on events does not work.
Conditions:
Upgrade BIG-IP Next Central Manager that contains WAF events to version 20.2.1.
Click 'Accept Request' for an event. The results return:
'No policy builder data in event [support id]'
Impact:
All events (new or pre-update events) in the WAF event log will not return results when you select 'Accept Request'.
Workaround:
You can receive and accept results for new events from the event log when you manually redeploy the WAF policy.
1584625 : Virtual server information of application containing multiple virtual IP addresses and WAF policies after upgrade is missing★
Component: BIG-IP Next
Symptoms:
After upgrading BIG-IP Next Central Manager with multi-VIP applications, you cannot create a report based on specific virtual servers.
Filtering by virtual server in BaDoS logs and dashboards after the upgrade is also not possible.
Conditions:
Create a report for an application that contains multiple virtual servers.
Impact:
Limitations in the actions you can take in the Web Application dashboard and in reports when filtering by virtual servers.
Workaround:
Re-deploy the applications after upgrade.
1583541 : Re-establish trust with BIG-IP after upgrade to 20.2.1 using a 20.1.1 Central Manager★
Component: BIG-IP Next
Symptoms:
Central Manager will report BIG-IP upgrade as failed due to timeout waiting for it to complete.
Conditions:
Using a 20.1.1 Central Manager to upgrade a BIG-IP Next instance to 20.2.1
Impact:
The BIG-IP Next upgrade will have succeeded, but Central Manager will think it never completed. The version displayed for the BIG-IP in the Central Manager UI will be inaccurate. Until trust is re-established, all communication with the BIG-IP will fail.
Workaround:
1. Open instance properties drawer, click "Certificates" and establish trust with the instance manually
2. Select the instance in the grid and re-trigger the upgrade to the Nutmeg version.
The instance should detect that it is already at the target version and return success. The CM task will then fetch the new version and update its database, in turn updating the version in the UI grid.
1583049-1 : Central Manager Logs
Component: BIG-IP Next
Symptoms:
When a Kubernetes pod is restarted, the logs from the previously running pod are lost.
Conditions:
-- Viewing logs for a Kubernetes pod that was restarted
-- You wish to review log messages that occurred before the pod was restarted
Impact:
QkView does not have logs from before the pod was restarted
The new logs generated after the restart will not contain information about the cause of the restart.
Workaround:
To understand why a pod was restarted, collect the CM logs, which contain comprehensive feature-level logging of previous and current CM activity, including information about what might have caused the pod to restart. Follow the instructions at https://clouddocs.f5.com/bigip-next/latest/support/cm_qkview_script.html to generate a CM QKView file and upload it to F5 iHealth.
Once you have uploaded the generated CM QKView to iHealth:
1. Click the entry that you just uploaded on the iHealth webpage.
2. Go to the "Files" tab on the left side of your screen to see the whole file tree.
3. Navigate to
all -> host-qkview -> filesystem -> var -> log -> application
and download the latest application log (application.0.log).
1582421-1 : BIG-IP Next Central Manager functionality impacted if the host IP address changes
Component: BIG-IP Next
Symptoms:
If the IP address of BIG-IP Next Central Manager virtual machine changes, then BIG-IP Next Central Manager functionality will be impacted.
Conditions:
BIG-IP Next Central Manager virtual machine IP address is changed.
Impact:
Once the BIG-IP Next Central Manager service is up and running, changing the IP address of the host impairs Central Manager functionality.
Workaround:
Do not change the host IP address of BIG-IP Next Central Manager.
1582409-1 : BIG-IP Next Central Manager will not start if the DNS server details are not provided
Component: BIG-IP Next
Symptoms:
If no DNS server is configured via DHCP or the setup script, then BIG-IP Next Central Manager initialization fails.
Conditions:
No DNS server is configured on the BIG-IP Next Central Manager host.
Impact:
The BIG-IP Next Central Manager initialization fails if no DNS server is configured.
Workaround:
Run 'setup' from the BIG-IP Next Central Manager CLI and manually configure a static IP address along with one or more DNS servers.
1581877 : An error is seen when no device certificates are present on the BIG-IP Next Instance
Component: BIG-IP Next
Symptoms:
A certificate not found error is shown when the user has not uploaded the device certificate directly to BIG-IP Next. In this case, BIG-IP Next will use the self-signed certificate by default.
Conditions:
-- A BIG-IP Next instance is created outside of Central Manager
-- A device certificate was not uploaded during onboarding
-- The instance is added to Central Manager
Impact:
On the Certificate page for the BIG-IP device, an error is displayed "Unable to GET certificates, received 13167-01025".
This error message has no impact on the functionality of the BIG-IP Next instance.
Workaround:
None
1579977-1 : BIG-IP Next instance telemetry data is missing from the BIG-IP Next Central Manager when a BIG-IP Next Central Manager High Availability node goes down.
Component: BIG-IP Next
Symptoms:
BIG-IP Next instance telemetry can be missing from the BIG-IP Next Central Manager for five to ten minutes if any of the BIG-IP Next Central Manager HA nodes goes down or becomes unavailable.
- Instance data metrics such as Instance health, Traffic, and Network Interface metrics will be lost, as they are available only for the previous hour.
- All other data such as Application metrics and WAF logs, will not be lost. However, these metrics could be unavailable for 5-10 minutes during the node down event.
Conditions:
Any of the nodes in the BIG-IP Next Central Manager HA Nodes becomes unavailable or goes down.
Impact:
BIG-IP Next instance data metrics, such as instance health, traffic, and network interface metrics, will be lost. All other metrics, such as application metrics and WAF logs, might be missing for 5-10 minutes.
Workaround:
Wait for 5-10 minutes and BIG-IP Next Telemetry data will resume on the BIG-IP Next Central Manager.
Run the following command on the VM console of the BIG-IP Next Central Manager to resume the instance data metrics:
kubectl delete pods prometheus-mbiq-kube-prometheus-prometheus-0 --grace-period=0 --force
1579441-1 : Connection requests on rSeries may not appear to be DAG distributed as expected
Component: BIG-IP Next
Symptoms:
Connection requests on rSeries may not be distributed across TMM instances as expected. For example, TMM0 may appear to service more requests than other TMMs, when a round-robin even distribution across TMMs was expected. This may be due to the `port adjust` setting not having the default value of `xor5mid-xor5low`.
Conditions:
Multiple TMMs on rSeries, where connection requests are not distributed across TMMs as expected.
Impact:
Connection requests may be unevenly distributed across TMMs, causing some TMMs to be under heavier load than other TMMs.
Workaround:
Adjust traffic patterns for load balancing, or tune DAG behavior with additional DAG configuration options to adjust assignment of connection requests to TMMs.
1576277 : 'Backup file creation failed' for instance after upgrade to v20.2.0
Component: BIG-IP Next
Symptoms:
Instance backup fails on BIG-IP Next Central Manager version 20.2.0 with message:
'Backup file creation failed'
Conditions:
A BIG-IP Next Central Manager version 20.1.1 managing BIG-IP Next version 20.1.0 instances.
1. Upgrade the BIG-IP Next instances to version 20.2.0.
2. In the BIG-IP Next Central Manager UI, go to Infrastructure > Instances.
3. Create a backup file for the instance: select the desired BIG-IP Next instance, click Actions, select Back Up & Schedule, and input the required information.
4. In the BIG-IP Next Central Manager UI, go to Infrastructure > Instances > Backup & Restore.
Impact:
Instance backup file generation fails with message:
'Backup file creation failed'
Workaround:
Once you have upgraded the BIG-IP Next instances to version 20.2.0, you need to delete the large image files, as they prevent a successful backup. In addition, you need to delete the failed backup file.
You must send API calls to the instance to remove the large upgrade files and failed backup files before the backup will succeed. This example uses Postman to send the API calls. The following example procedure uses variables enclosed in {{ }}; you can use the variables or insert the actual value in each request:
1. Send a login request to BIG-IP Next Central Manager and record the “access_token” from the response. This is used to make all other API calls.
a. Use the command POST https://{{remote-CM-address}}/api/login, or if no variables are used, then use the command POST https://10.145.69.227/api/login
b. The body for the request is a JSON object with the credentials for the user.
{ "username": "username", "password": "password" }
2. Send a request to BIG-IP Next Central Manager's inventory and identify the instance that you want to delete the file from. Record the “id” from the response. The access_token from the previous step is used as the Bearer Token for the request. Repeat this for all other requests as well:
GET https://{{remote-CM-address}}/api/device/v1/inventory
3. Delete the large image files and failed backup files. Send a request for the files present on the instance. Note the instance ID from the previous step is used in the request URL. In the response, record the "id" for the "file name" or "description" in the response. Example files:
- The upgrade image file: BIG-IP Next 20.2.0....tgz
- The original backup file: backup and restore of the system
GET https://{{remote-CM-address}}/api/device/v1/proxy/{{remote-Big-IP-Next-ID}}?path=/files
4. Send a request to delete the file on the instance. Copy the file ID from the previous step and paste it at the end of the DELETE URL. For example:
DELETE https://{{remote-CM-address}}/api/device/v1/proxy/{{remote-Big-IP-Next-ID}}?path=/files/644fcd02-fa38-4383-ac1c-f67e0c899e0d
5. Wait at least 20 minutes after the deletion before initiating steps to create another instance backup.
IMPORTANT NOTE: The file deletion process can take up to 20 minutes to complete. If the files are not fully deleted, the new backup attempt will fail.
6. If required, repeat step 4 to delete any other large files, unrelated to upgrade, such as QKView or core files.
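The Postman steps above can be sketched with curl as follows. Addresses, credentials, and IDs are the same placeholders used in the procedure, not real values, and the calls are left commented out because they require a live Central Manager:

```shell
#!/bin/sh
CM="https://{{remote-CM-address}}"   # placeholder, as in the steps above

# 1. Log in and record the access_token from the JSON response:
#    curl -sk -X POST "$CM/api/login" \
#         -H "Content-Type: application/json" \
#         -d '{"username":"username","password":"password"}'
#
# 2. List the inventory and record the instance "id" (Bearer token from step 1):
#    curl -sk -H "Authorization: Bearer $TOKEN" "$CM/api/device/v1/inventory"
#
# 3. List files on the instance and record the "id" of the large file:
#    curl -sk -H "Authorization: Bearer $TOKEN" \
#         "$CM/api/device/v1/proxy/$INSTANCE_ID?path=/files"
#
# 4. Delete the file by appending its id to the DELETE URL:
#    curl -sk -X DELETE -H "Authorization: Bearer $TOKEN" \
#         "$CM/api/device/v1/proxy/$INSTANCE_ID?path=/files/$FILE_ID"
echo "$CM"
```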
1576273 : No L1-Networks in an instance causes BIG-IP Next Central Manager upgrade to v20.2.0 to fail★
Component: BIG-IP Next
Symptoms:
Upgrade of BIG-IP Next Central Manager to v20.2.0 fails.
Conditions:
BIG-IP Next Central Manager has an instance with no L1-Networks.
Impact:
Cannot upgrade to v20.2.0.
Workaround:
Add a blank DefaultL1Network to each instance using the instance editor.
1575549 : BIG-IP Next Central Manager discovery requires an instance to have both Default L2-Network and Default L3-Network if either one already exists
Component: BIG-IP Next
Symptoms:
BIG-IP Next Central Manager can only discover an instance that is configured with neither a Default L2-Network nor a Default L3-Network, or with both of them, with the Default L2-Network under the Default L3-Network.
Conditions:
A BIG-IP Next Central Manager user attempts to discover an instance they own. Before discovering this instance on BIG-IP Next Central Manager, the user configured it with an L2 Network named "Default L2-Network" and an L3 Network named something other than "Default L3-Network". When the user tries to discover the instance on CM, discovery fails, noting that a Default L2-Network was present but no Default L3-Network.
Impact:
BIG-IP Next Central Manager cannot discover an instance if either one of "Default L2-Network" or "Default L3-Network" exist, but not both.
Workaround:
If a user configures an instance with an L2 Network named Default L2-Network, they should also create an L3 Network named Default L3-Network with the default L2 as its L2 Network. If neither exists, or both exist with the Default L2 under the Default L3, discovery succeeds.
1574997 : BIG-IP Next Central Manager HA node installation requires logout to add node★
Component: BIG-IP Next
Symptoms:
As part of a BIG-IP Next Central Manager HA installation, you must log out of the user interface (UI) when the first node is added to the cluster.
Conditions:
1. Create 3 VM instances on BIG-IP Next Central Manager
2. From UI, change the BIG-IP Next Central Manager password for all 3 instances
3. Login to node-1
4. Click the Set up button
5. Fill out the form and click Add
Impact:
You must log out and then log back in to successfully add the new node.
Workaround:
Wait up to 5 minutes for the BIG-IP Next Central Manager cluster to be ready before logging back in.
1574685 : Generated WAF report can be loaded without text
Component: BIG-IP Next
Symptoms:
When generating a WAF report, the loaded print screen for PDF is displayed without text content. This issue is reported primarily on Mac OS and occurs intermittently.
Conditions:
No specific conditions apply; it happens intermittently and mainly on Mac operating systems.
Impact:
The report does not contain text and is not usable.
Workaround:
Retry generating a WAF report.
1574681 : Dynamic Parameter Extract from allowed URLs does not show in the parameter in the WAF policy
Component: BIG-IP Next
Symptoms:
After successfully creating a dynamic parameter with its respective extract URLs, re-entering the parameter settings does not show the saved extract URLs.
Conditions:
Configure a WAF policy parameter as 'Dynamic' with extract URLs.
Impact:
Inability to see configured extract URLs from the UI parameter configuration screen within the WAF policy.
Workaround:
Go to the WAF policy and select the Policy Editor from the Panel menu. Once in the policy editor, search for the keyword "extractions": the JSON shows the parameter extraction with its respective extract URLs.
1574585-3 : Auto-Failback cluster cannot upgrade active node★
Component: BIG-IP Next
Symptoms:
A cluster created with the auto-failback flag enabled will not upgrade the active node.
Conditions:
Enable the auto-failback flag.
Impact:
The active node cannot be upgraded.
Workaround:
Auto-failback cannot be configured through Central Manager GUI or API to prevent getting into this situation. Once the issue is resolved, this feature will be re-enabled in the product.
1574573 : Global Resiliency Group status not reflecting correctly on update
Component: BIG-IP Next
Symptoms:
After updating the Global Resiliency group, the group status may not immediately switch to "DEPLOYING," potentially causing the UI to inaccurately reflect the ongoing provisioning process, despite deployment being in progress.
Conditions:
During updates to the Global Resiliency group.
Impact:
The status of the Global Resiliency Group is incorrect after an update.
Workaround:
To mitigate this issue, wait for approximately 5 minutes after updating the Global Resiliency group. This will allow the DNS listener address to become available for the newly added instance.
1574565 : Inability to edit Generic Host While Re-Enabling Global Resiliency
Component: BIG-IP Next
Symptoms:
Following the re-enabling of Global Resiliency from a previously disabled state, users are unable to simultaneously add or edit Generic Hosts.
Conditions:
During the re-enabling process of Global Resiliency.
Impact:
Unable to add or edit Generic Host information.
Workaround:
Refrain from making any changes to the Generic Host when re-enabling Global Resiliency from a previously disabled state.
After the application has been deployed, you can then proceed to add or modify Generic Hosts during the next application edit.
1568129 : During upgrade from BIG-IP Next 20.1.0 to BIG-IP Next 20.2.0, issue identified with instances that have L3-Forwards with non-default VRF (L3-Network) configuration
Component: BIG-IP Next
Symptoms:
In BIG-IP Next 20.1.0, it is possible for instances to have an L3-Forward that uses a non-default L3-Network (VRF).
In BIG-IP Next 20.2.0, the L3-Network (VRF) parameter is completely removed from the L3-Forward GUI. Any L3-Forward in CM version 20.2.0 always uses the default VRF configuration.
In BIG-IP Next 20.2.0, Central Manager does not support creating or editing an L3-Forward with a non-default VRF configuration. Every L3-Forward shown in the L3-Forward GUI is assumed to use the default VRF configuration. If an L3-Forward uses a non-default VRF configuration, the only action a user can take is to delete that L3-Forward.
Conditions:
Upgrade from BIG-IP Next 20.1.0 to BIG-IP Next 20.2.0 with an L3-Forward configured to use a non-default VRF.
Impact:
In the CM UI, you cannot tell whether an existing L3-Forward uses the default or a non-default VRF. You will have to re-create the L3-Forward using the CM UI so that it uses the default VRF.
Workaround:
Delete the L3-Forward
1567129 : Unable to deploy Apps on BIG-IP Next v20.2.0 created using Instantiation from v20.1.x★
Component: BIG-IP Next
Symptoms:
1. Install BIG-IP Next Central Manager with the v20.1.x build BIG-IP-Next-CentralManager-20.1.1-0.0.1.
2. Deploy 2 tenants on rSeries via the IOD process, one with a v20.1.x build (20.1.0-2.279.0+0.0.75) and one with a 20.2.0 build (20.2.0-2.375.1+0.0.1). Configure L1-L3 during the IOD process on both tenants.
3. Deploy a FastL4 migrated app on the v20.2.0 tenant. The following error is observed during deployment:
The task failed, failure reason: AS3-0007: AS3 Deploy Error: Failed to accept request on BIG-IP Next instance: {"code":422,"message":"At least one L3-network object must be configured before applying a declaration.","errors":[]}
Conditions:
If v20.2.0 BIG-IP Next was created using instantiation from BIG-IP Next Central Manager.
Impact:
Since there are no default objects created for v20.1.x BIG-IP Next Central Manager and v20.2.0 BIG-IP Next combination, the application creation will fail as it expects the presence of a VRF object.
Workaround:
Upgrade BIG-IP Next Central Manager to v20.2.0, create VLANs by editing the BIG-IP Next instance, and make sure the "Default VRF" check box is checked.
1566745-1 : L3VirtualAddress set to ALWAYS advertise will not advertise if there is no associated Stack behind it
Component: BIG-IP Next
Symptoms:
L3VirtualAddress set to RHI Mode ALWAYS advertise will not advertise if there is no associated Application Stack behind it.
Conditions:
Configuration of RHI Mode to ALWAYS advertise on an L3VirtualAddress without an associated Application Stack.
Impact:
L3VirtualAddress will not be advertised as expected.
Workaround:
None
1560605 : Global Resiliency functionality fails to meet expectations on Safari browsers
Component: BIG-IP Next
Symptoms:
Global Resiliency Group UI main pane goes under the left navigation in Safari browser.
Conditions:
When creating a Global Resiliency group in Safari browser.
Impact:
Not able to create Global Resiliency Group.
Workaround:
Use Chrome browser for creating Global Resiliency Group.
1550345-2 : BIG-IP Next API gateway takes a long time to respond to a large access policy payload
Component: BIG-IP Next
Symptoms:
The BIG-IP Next API gateway takes a long time to respond to a large access policy config payload. API gateway timeouts could occur.
Conditions:
Create an access policy tree with a depth over 10 using the "nextItems" property.
Impact:
API performance is degraded, and the API gateway may time out.
Workaround:
Break the policy tree into multiple macros and stitch them together.
1498421 : Restoring Central Manager (VE) with KVM HA Next instance fails on a new BIG-IP Next Central Manager
Component: BIG-IP Next
Symptoms:
The user cannot restore BIG-IP Next Central Manager for the first time.
Conditions:
BIG-IP Next Central Manager on VE managing instances which includes a KVM HA instance.
Impact:
The first attempt to restore the backup archive into a new BIG-IP Next Central Manager fails.
Workaround:
The user must perform a second restoration of the backup archive into a new BIG-IP Next Central Manager.
1498121 : BIG-IP Next Central Manager upgrade alerts not visible in global bell icon
Component: BIG-IP Next
Symptoms:
BIG-IP Next Central Manager users are not able to see the alerts generated during an upgrade of BIG-IP Next Central Manager.
Conditions:
The upgrade of Central Manager from version 20.0.x to 20.1.x encounters errors.
Impact:
Alerts do not appear under the global bell icon if there are errors during the BIG-IP Next Central Manager upgrade.
1495017 : BIG-IP Next Hostname, Group Name and FQDN name should adhere to RFC 1123 specification
Component: BIG-IP Next
Symptoms:
The Hostname, Group Name, and FQDN Name used in the Global Resiliency feature must be lowercase.
Conditions:
Providing names with capital letters for the above-mentioned fields causes a failure.
Impact:
Group creation or FQDN creation fails when capital letters are used.
Workaround:
Always create names with lowercase letters that adhere to the RFC 1123 specification.
1495005 : Cannot create Global Resiliency Group with multiple instances if the DNS instances have same hostname
Component: BIG-IP Next
Symptoms:
The hostname is defaulted and cannot be modified when the hostname is not specified for the BIG-IP Next instances on BIG-IP Next Central Manager.
Conditions:
Create a Global Resiliency Group with more than one BIG-IP Next instance with the same hostname.
Impact:
Global Resiliency Group creation fails.
Workaround:
Make sure the hostname is set and unique for each BIG-IP Next instance that will be used in Global Resiliency Group creation.
1494997 : Deleting a GSLB instance results in record creation of GR group in BIG-IP Next Central Manager
Component: BIG-IP Next
Symptoms:
Deleting a BIG-IP Next instance from "Infrastructure -> My Instances" disrupts any Global Resiliency configuration using that instance.
Conditions:
The issue occurs when an instance is deleted directly while it is being used in a Global Resiliency Configuration.
Impact:
Deleting the instance under these conditions will break the Global Resiliency feature, leading to DNS resolution failure for the GR Group.
Workaround:
Refrain from deleting the instances when they are currently being used in a Global Resiliency Group.
1492705 : During upgrading to BIG-IP Next 20.1.0, the BIG-IP Next 20.1.0 Central Manager failed to connect with BIG-IP Next 20.0.2 instance
Component: BIG-IP Next
Symptoms:
BIG-IP Next 20.1.0 Central Manager is managing BIG-IP Next 20.0.2 instances.
When upgrading a BIG-IP Next instance from 20.0.2 to 20.1.0, Central Manager fails to connect with the instance.
Conditions:
BIG-IP Next 20.1.0 Central Manager managing BIG-IP Next 20.0.2 instances.
Impact:
Connection to BIG-IP Next instances fails.
Workaround:
Following is the workaround:
1. Start with BIG-IP Next Central Manager of 20.0.2 managing BIG-IP Next 20.0.2 instances
2. Upgrade Next instances of 20.0.2 version to 20.1.0 version
3. Upgrade Central Manager from 20.0.2 version to 20.1.0 version.
1491197 : Server Name (TLS ClientHello) Condition in policy shouldn't be allowed when "Enable UDP" option is selected in application under Protocols & Profiles
Component: BIG-IP Next
Symptoms:
Validation is not available in BIG-IP Next Central Manager for the mutually exclusive configurations "Enable UDP" in an application and a "TLS ClientHello" condition in SSL Orchestrator policies.
When you deploy an application with UDP enabled and then attach SSL Orchestrator policies to it, the policy should not have a "TLS ClientHello" condition based on "Server Name".
Conditions:
The conditions, in sequence:
1. Create an application with UDP enabled.
2. Create and attach an SSL Orchestrator policy that has a "TLS ClientHello" condition based on "Server Name" to that application, and deploy it to a BIG-IP Next instance.
Impact:
Traffic processing will not work: the configuration is not valid and will not be sent to TMM until it is fixed.
1491121 : Patching a new application service's parameters overwrites entire application service parameters
Component: BIG-IP Next
Symptoms:
When sending a PATCH API request to append to an application service's parameters, all parameters are completely replaced with the changes, rather than partially updated according to the PATCH request.
Conditions:
Use a PATCH API request to partially update application service parameters.
Deploy changes.
Impact:
If you send incomplete application service parameters, the changes will completely replace the existing parameters, and only partial parameters will be saved. This will lead to failed application service deployment as the parameters are incomplete.
Workaround:
When using the API request to change application service parameters, include the full application service parameters in the body of the request, not just the partial changes.
1489945 : HTTPS applications with self-signed certificates traffic is not working after upgrading BIG-IP Next instances to new version of BIG-IP Next Central Manager★
Component: BIG-IP Next
Symptoms:
HTTPS traffic is not working after upgrading the BIG-IP Next instances for the application service previously deployed using BIG-IP Next Central Manager version 20.0.x.
Conditions:
1. Install BIG-IP Next Central Manager version 20.0.x and add BIG-IP Next instance(s).
2. Deploy the HTTP application service with a self-signed certificate created on BIG-IP Next Central Manager to an instance.
3. Observe that traffic is working fine.
4. Upgrade from 20.0.x to the newest version and observe that HTTPS traffic has stopped working.
Impact:
This impacts HTTPS application service traffic.
Workaround:
1. Upgrade BIG-IP Next Central Manager to latest version.
2. Create new self-signed certificates for the already deployed self-signed certificates through application services.
3. Replace the existing self-signed certificate in the application service with the newly created self-signed certificate and re-deploy the application service.
4. After successfully re-deploying the application service, make sure traffic is working on the instance.
5. Delete the old self-signed certificate(s) created in the earlier versions of BIG-IP Next Central Manager.
1474801 : BIG-IP Next Central Manager creates a default VRF for all VLANS of the onboarded Next device
Component: BIG-IP Next
Symptoms:
BIG-IP Next Central Manager creates a default VRF for all VLANS of the onboarded Next device.
Conditions:
The user wants to use specific VLANS for application traffic.
Impact:
Users would not be able to select VLANS for an application.
Workaround:
1. The user must create VLANs using the /L1 Networks endpoint directly on BIG-IP Next before adding the device to BIG-IP Next Central Manager.
2. The user can then add the device to CM and choose the VLANs for SSL Orchestrator use cases.
Subsequently, the user should perform L1Network-related operations on BIG-IP Next only.
1474669-2 : Fluentbit core may be generated when restarting the pod
Component: BIG-IP Next
Symptoms:
When shutting down a pod, fluent-bit core files may be generated as a result of access to an invalid pointer.
Conditions:
Restarting the pod where fluentbit runs.
Impact:
Core files might be generated during pod shutdown. Since fluent-bit is third-party software, the core files cannot be used for debugging.
Workaround:
None
1466305 : Anomaly in factory reset behavior for DNS enabled BIG-IP Next deployment
Component: BIG-IP Next
Symptoms:
Factory reset API does not bring TMM to default provisioned modules. DNS pods along with cne-proxy and cne-controller are not deleted.
Conditions:
BIG-IP Next cluster with DNS provisioned and WAF disabled.
Impact:
A BIG-IP Next cluster with DNS provisioned will not return to the default deployment, and the user will have to deprovision DNS and re-provision WAF.
Workaround:
Deprovision DNS if the cluster needs to return to factory defaults.
1410241-1 : Traffic for TAP is not seen on service interface when connection mirroring is turned on
Component: BIG-IP Next
Symptoms:
Traffic for TAP is not seen on the service interface when connection mirroring is turned on.
Conditions:
-- Connection mirroring is turned on for a CM HA set up.
-- Application is configured with SSL Orchestrator service.
-- Traffic is passed.
Impact:
Traffic for TAP is not seen on the service interface.
Workaround:
Connection mirroring is not supported for SSL Orchestrator. Turn it off when configuring SSL Orchestrator policies or services.
1403861 : Data metrics and logs will not be migrated when upgrading BIG-IP Next Central Manager from 20.0.2 to a later release
Component: BIG-IP Next
Symptoms:
In version 20.1.0 of BIG-IP Next Central Manager, OpenSearch is replaced by Elasticsearch as the main storage for data metrics and logs.
Due to incompatibility between OpenSearch and Elasticsearch, metrics and logs that are stored on BIG-IP Next Central Manager in earlier versions will not be available after upgrading.
Conditions:
Upgrade BIG-IP Next Central Manager from a release version prior to 20.1.0.
Impact:
After the upgrade is complete, the data metrics and logs from the previous version will not be available on the upgraded BIG-IP Next Central Manager.
1366321-1 : BIG-IP Next Central Manager behind a forward-proxy
Component: BIG-IP Next
Symptoms:
Using "forward proxy" for external network calls from BIG-IP Next Central Manager fails.
Conditions:
The network environment in which BIG-IP Next Central Manager is deployed routes all external calls through a forward proxy.
Impact:
BIG-IP Next Central Manager does not currently support proxy configurations, so you cannot deploy BIG-IP Next instances in that environment.
Workaround:
Allow BIG-IP Next Central Manager to connect to external endpoints by bypassing the "forward proxy" until BIG-IP Next Central Manager supports proxy configurations.
1365445 : Creating a BIG-IP Next instance on vSphere fails with "login failed with code 401" error message★
Component: BIG-IP Next
Symptoms:
Creating a BIG-IP Next VE instance in vSphere fails.
Conditions:
This happens when the randomly generated initial admin password contains an unsupported character.
Impact:
Creating a BIG-IP Next VE instance fails.
Workaround:
Try recreating the BIG-IP Next VE instance.
1365433 : Creating a BIG-IP Next instance on vSphere fails with "login failed with code 503" error message★
Component: BIG-IP Next
Symptoms:
Creating a BIG-IP Next VE instance fails and returns a code 503 error.
Conditions:
Attempting to create a BIG-IP Next VE instance from BIG-IP Next Central Manager when the vSphere environment has insufficient resources.
Impact:
Creating a BIG-IP Next VE instance fails.
Workaround:
Use one of the following workarounds.
- Retry creating the BIG-IP Next instance.
- Create the BIG-IP Next instance directly in the vSphere provider environment then add it to BIG-IP Next Central Manager.
1365417 : Creating a BIG-IP Next VE instance in vSphere fails when a backslash character is in the provider username★
Component: BIG-IP Next
Symptoms:
If you include a backslash character in the provider username when creating a BIG-IP Next VE instance, creation fails because BIG-IP Next Central Manager parses the backslash as an escape character.
Conditions:
Creating a BIG-IP Next VE instance that includes a backslash character in the provider username.
Impact:
Creation of the BIG-IP Next instance fails.
Workaround:
Do not use the backslash character in the provider username.
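A quick shell pre-check can catch the unsupported character before the creation attempt. This sketch is illustrative; the variable name and example username are not from the issue text.

```shell
# Reject provider usernames containing a backslash before submitting them.
provider_user='CORP\admin'   # example value (single quotes keep the backslash literal)

case "$provider_user" in
  *\\*) echo "unsupported: backslash in provider username" ;;
  *)    echo "username ok" ;;
esac
# → unsupported: backslash in provider username
```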
1365005 : Analytics data is not restored after upgrading to BIG-IP Next version 20.0.1★
Component: BIG-IP Next
Symptoms:
After upgrading from BIG-IP Next version 20.0 to 20.0.1, analytics data is not restored.
Conditions:
Upgrading from BIG-IP Next version 20.0 to 20.0.1.
Impact:
Analytics data is not automatically restored after upgrading and cannot be restored manually.
1360709 : Application page can show an error alert that includes "FAST delete task failed for application"
Component: BIG-IP Next
Symptoms:
After you successfully delete a BIG-IP Next instance that has application services deployed to it, an alert banner on the Applications page states that the delete task failed even though it succeeded.
Conditions:
Delete a BIG-IP Next instance and then navigate to the Applications page.
Impact:
This can cause confusion.
Workaround:
None
1360621 : Adding a Control Plane VLAN must be done only during BIG-IP Next HA instance creation
Component: BIG-IP Next
Symptoms:
If you attempt to edit the properties of a BIG-IP Next HA instance to add a Control Plane VLAN, the edit fails.
Conditions:
Editing the properties for an existing BIG-IP Next VE HA instance and attempting to add a Control Plane VLAN.
Impact:
The attempt to add a Control Plane VLAN fails.
Workaround:
Create the Control Plane VLAN when you initially create the BIG-IP Next HA instance.
1360121-1 : Unexpected virtual server behavior due to removal of objects unsupported by BIG-IP Next
Component: BIG-IP Next
Symptoms:
The migration process ensures that application services are supported by BIG-IP Next. If a property value is not supported by BIG-IP Next, it is removed and does not appear in the AS3 declaration. If the removed property had a default, it is replaced by a default value that BIG-IP Next supports.
Conditions:
1. Migrate a UCS archive from BIG-IP to BIG-IP Next Central Manager.
2. Review the AS3 declaration during the Pre Deployment stage.
Example for "cache-size" property of "web-acceleration" profile:
- BIG-IP config cache-size = 500mb OR 0mb
- AS3 schema supported range = 1-375mb
- BIG-IP Next stack (clientSide/caching/cacheSize) supported range 1-375mb
- The AS3 output created by migration does not include the "cacheSize" property if cache-size is greater than 375mb or lower than 1mb.
- Deployment of the AS3 declaration uses BIG-IP Next defaults in both cases (cache-size 500mb or 0mb).
Impact:
Default values of virtual server's objects may change, impacting virtual server's behavior.
Workaround:
Although you cannot use values that are unsupported by BIG-IP Next, you can update the AS3 declaration with the missing properties to specify values other than the defaults added during the migration process.
To do so, see https://clouddocs.f5.com/bigip-next/latest/schemasupport/schema-reference.html and modify the AS3 declaration by adding the missing properties with values within the supported range.
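The range arithmetic in the cache-size example above can be sketched as a shell pre-check before migration. This is illustrative only; the variable name and messages are not part of any F5 tooling, and the 1-375mb range is the AS3 schema limit described in this issue.

```shell
# Check whether a web-acceleration cache-size value survives migration.
# The AS3 schema supports 1-375 MB; values outside that range are dropped.
cache_size_mb=500   # example value from the issue

if [ "$cache_size_mb" -ge 1 ] && [ "$cache_size_mb" -le 375 ]; then
  echo "cacheSize ${cache_size_mb}mb will be kept in the AS3 output"
else
  echo "cacheSize ${cache_size_mb}mb will be dropped; BIG-IP Next default applies"
fi
# → cacheSize 500mb will be dropped; BIG-IP Next default applies
```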
1360097-1 : Migration highlights and marks "net address-list" as unsupported, but addresses are converted to AS3 format
Component: BIG-IP Next
Symptoms:
Objects of type "net address-list" are incorrectly marked as unsupported, while virtual servers in the AS3 output contain the "virtualAddresses" property.
Conditions:
If an address list is used to configure a virtual server, it will be highlighted as unsupported in the configuration editor even if it is properly translated to AS3 "virtualAddresses" property.
Example of the object:
net address-list /tenant3892a81b1f9e6/application_11/IPv6AddressList {
addresses {
fe80::1ff:fe23:4567:890a-fe80::1ff:fe23:4567:890b { }
fe80::1ff:fe23:4567:890c { }
fe80::1ff:fe23:4567:890d { }
}
description IPv6
}
Example of an AS3 property:
"virtualAddresses": [
"fe80::1ff:fe23:4567:890a-fe80::1ff:fe23:4567:890b",
"fe80::1ff:fe23:4567:890c",
"fe80::1ff:fe23:4567:890d"
],
Impact:
- The object is translated to the virtualAddresses property in the AS3 output, but the application is marked yellow.
- The object is translated, but one of the values from the address list is not supported on BIG-IP Next (IPv6 value range).
Workaround:
Verify that all addresses from 'net address-list' object are configured as "virtualAddresses" property value list in the AS3 output.
Verify that all addresses from 'net address-list' are supported on BIG-IP Next. Remove or modify virtualAddresses value list if needed.
1360093-1 : Abbreviated IPv6 destination address attached to a virtual server is not converted to AS3 format
Component: BIG-IP Next
Symptoms:
Service class in AS3 output does not have 'virtualAddresses' property, for example:
"Common_virtual_test": {
"snat": "none",
"class": "Service_TCP",
"profileTCP": {
"use": "/tenant017b16b41f5c7/application_9_SMtD/tcp_default_v14"
},
"persistenceMethods": []
}
Conditions:
Migrate an application service with abbreviated IPv6 address:
ltm virtual-address /tenant017b16b41f5c7/application_9_SMtD/aa::b {
address aa::b
arp enabled
traffic-group /Common/traffic-group-1
}
Impact:
Virtual server is misconfigured, no listener on a specific IP address is created.
Workaround:
All application services containing virtual servers configured with abbreviated IPv6 addresses should be updated once they are migrated to BIG-IP Next Central Manager.
Go to Applications -> My Application Services, find your application service name and edit it.
Find your virtual server name and update it with a property
"virtualAddresses": [
"aa::b",
]
like this:
"Common_virtual_test": {
"snat": "none",
"class": "Service_TCP",
"virtualAddresses": [
"aa::b",
],
"profileTCP": {
"use": "/tenant017b16b41f5c7/application_9_SMtD/tcp_default_v14"
},
"persistenceMethods": []
}
1359209-1 : The health of application service shown as "Good" when deployment fails as a result of invalid iRule syntax
Component: BIG-IP Next
Symptoms:
When an application service with an invalid iRule is deployed to an instance from BIG-IP Next Central Manager, the deployment is shown as successful but post-deployment iRule validation fails on the instance. The health status should change to "Critical/Warning" but is still shown as "Good".
Conditions:
Deploy an application service with an invalid iRule.
Impact:
An incorrect status for the application service is shown on the My Application Services page.
Workaround:
Ensure the iRule is valid before deploying it to BIG-IP Next.
1358985-1 : Failed deployment of migrated application services to a BIG-IP Next instance
Component: BIG-IP Next
Symptoms:
Deployment of a migrated application service to a BIG-IP Next instance might fail even if the declaration is valid. This can occur after the application service was successfully saved as a draft on BIG-IP Next Central Manager.
The following can appear in the deployment logs:
- No event with error code from deployment to instance in migration logs
- 202 response code "in progress" from deployment to instance in migration logs
- 503 response code "Configuration in progress" from deployment to instance in migration logs
Conditions:
1. Migrate an application service during a migration session
2. Select a deployment location and deploy the application service.
Review the migration log: the application service was successfully saved to BIG-IP Next Central Manager, but the deployment to the selected location failed with error.
Impact:
Three different errors can appear in the deployment logs (Deployment Summary > View logs):
Reason 1:
Migration process started.
Application: <application name> saved as draft to BIG-IP Next Central Manager.
Migration process failed.
Reason 2:
Migration process started
Application: <application name> saved as draft to BIG-IP Next Central Manager.
Log Message: Deployment to <BIG-IP Next IP address> failed with the error: '{'code': 202, 'host': '<hostname>', 'message': 'in progress', 'runTime': 0, 'tenant': '<tenant name>'}'.
Migration process failed.
Reason 3:
If you are currently processing the same AS3 declaration sent from a different source or migration session:
Migration process started.
Application: <application name> saved as draft to BIG-IP Next Central Manager.
Log message: Deployment to <BIG-IP Next IP address> failed with the error: '{'code': 503, 'errors': [], 'message': 'Configuration operation in progress on device, please try again later.'}'.
Migration process failed.
Workaround:
The application service was successfully saved as a draft on BIG-IP Next Central Manager.
You can go to My Application Services, select the application service that failed to deploy, and deploy the application service to a selected instance location.
1355605 : "NO DATA" is displayed when names set for application services, virtual servers, and pools exceed the maximum character length
Component: BIG-IP Next
Symptoms:
"NO DATA" is displayed in the application metrics charts when setting a name that exceeds 33 characters for an application service, pool, or virtual server.
Conditions:
1. Create an application service with a virtual server and a pool.
2. Set the name of each of the objects above to be 34 characters or longer.
3. Add an endpoint to the pool.
4. Deploy the application service, and wait for the application service to pass traffic.
Impact:
"NO DATA" is displayed in the application service, pool and virtual server data metrics charts.
Workaround:
When creating an application service, ensure that the names of application services, pools, and virtual servers do not exceed 33 characters.
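The 33-character limit can be checked before deployment with POSIX `${#name}` length expansion. This sketch is illustrative; the example name is not from the issue.

```shell
# Names longer than 33 characters cause "NO DATA" in the metrics charts.
name="my-very-long-application-service-name-01"   # 40 characters

if [ "${#name}" -le 33 ]; then
  echo "ok: ${#name} characters"
else
  echo "too long: ${#name} characters (limit 33)"
fi
# → too long: 40 characters (limit 33)
```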
1354645 : Error displays when clicking "Edit" on the Instance Properties panel
Component: BIG-IP Next
Symptoms:
When editing the properties of a BIG-IP Next instance, an "Error: unsupported platform type" message displays.
Conditions:
On the Instances page, click a BIG-IP Next instance's hostname to view its properties, then click the Edit button on the Instance Properties panel before the panel finishes loading.
Impact:
This can cause confusion.
Workaround:
Wait for the BIG-IP Next instance's hostname to load on Instance Properties panel before clicking the Edit button.
1354265 : The icb pod may restart during install phase
Component: BIG-IP Next
Symptoms:
The icb pod may generate a core file during the install phase, which causes the icb pod to restart. However, icb has been observed to restart cleanly with no issues.
Conditions:
The issue is seen during the upgrade install.
Impact:
After the initial panic, icb restarts cleanly and no adverse impact has been observed.
Workaround:
None
1353589 : Provisioning of BIG-IP Next Access modules is not supported on VELOS, but containers continue to run
Component: BIG-IP Next
Symptoms:
1) Containers that belong to the BIG-IP Next Access module keep running on BIG-IP Next all the time on VELOS & rSeries.
2) On VE, the containers run only if the BIG-IP Next Access module is provisioned using: /api/v1/systems/{systemID}/provisioning api
Conditions:
This is observed all the time when BIG-IP Next is deployed on VELOS/r-series.
Impact:
Containers that belong to the BIG-IP Next Access module keep running all the time and this can lead to wastage of resources on VELOS & rSeries.
Workaround:
If you do not want to run BIG-IP Next Access containers as part of a BIG-IP Next tenant deployment, you can use this workaround before installing the tenant:
1) Run the following command on the standby controller:
sed -i 's/access: true/access: false/g' /var/F5/partition<partition-ID>/SPEC/<IMAGE_VERSION>.yaml
2) Trigger failover from the partition CLI:
system redundancy go-standby
3) Install the tenant.
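The step-1 substitution can be previewed safely on a scratch copy before touching the real SPEC file. The path and file contents below are stand-ins for illustration, not the actual partition layout.

```shell
# Demo of the step-1 sed substitution on a scratch file (not the real SPEC path).
spec=./spec-demo.yaml
printf 'modules:\n  access: true\n  waf: true\n' > "$spec"

sed -i 's/access: true/access: false/g' "$spec"
grep 'access:' "$spec"   # the file now contains "access: false"
```

Note that `sed -i` edits the file in place; on the real controller, take a backup of the SPEC file first.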
1352969 : Upgrades with TLS configuration can cause TMM crash loop
Component: BIG-IP Next
Symptoms:
After upgrading from a version prior to 20.0.1, connectivity is lost and TMM may enter a crash loop.
Conditions:
- Keys and certificates are configured as files in TLS configuration.
- Upgrading from a version prior to 20.0.1.
Impact:
An error similar to the following is logged: Failed to connect to <IP address port: xx> No route to host
Workaround:
After upgrading, reconfigure the private key files so that validation properly occurs.
Fix any mismatched keys and certificates.
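Finding a mismatch is the first half of fixing it. Comparing public-key digests is a standard openssl technique for checking that a private key belongs to a certificate; this sketch generates a matching demo pair, and in practice you would point the two digest commands at your real files.

```shell
# Verify a private key matches its certificate by comparing public-key digests.
# Demo pair generated here; substitute your real key and certificate files.
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo.key -out demo.crt \
  -days 1 -subj "/CN=match-check" 2>/dev/null

cert_pub=$(openssl x509 -in demo.crt -pubkey -noout | openssl sha256)
key_pub=$(openssl pkey -in demo.key -pubout 2>/dev/null | openssl sha256)

if [ "$cert_pub" = "$key_pub" ]; then
  echo "key and certificate match"
else
  echo "MISMATCH: fix before re-deploying"
fi
```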
1350365 : Performing licensing changes directly on a BIG-IP Next instance
Component: BIG-IP Next
Symptoms:
BIG-IP Next Central Manager will become out of sync with a managed BIG-IP Next instance if you perform licensing actions directly to the BIG-IP Next instance.
Conditions:
Add a BIG-IP Next instance to BIG-IP Next Central Manager. Perform licensing actions directly on the BIG-IP Next instance.
Impact:
BIG-IP Next Central Manager is no longer synchronized with its managed instance.
1350285-1 : Traffic is not passing after the tenant is licensed and network is configured
Component: BIG-IP Next
Symptoms:
After configuring and licensing the BIG-IP Next tenant, such as after an upgrade, traffic is not passing.
Conditions:
A BIG-IP Next tenant is configured without VLANs, a PUT to create the L1 networking interface is performed, and VLANs are later allocated to the tenant. In this scenario, the later-allocated VLANs will not take effect for the previously configured L1 network interface.
Impact:
Data traffic associated with the later-added VLANs will not be processed.
Workaround:
Allocate VLANs to the BIG-IP Next tenant before the PUT call that creates the L1 network interface; the L1 network interface will then be associated with a VLAN allocated to that BIG-IP Next instance.
1343005-1 : Modifying L4 serverside after the stack is created can result in the update not being applied
Component: BIG-IP Next
Symptoms:
Any updates to L4 server-side settings after the stack is created can result in the update not being applied, leading to traffic disruption.
Conditions:
Modifying an L4 serverside that has a SNAT pool attached to an application stack, changing the SNAT type to AUTOMAP.
Impact:
Traffic is disrupted after updating the L4 serverside configuration.
Workaround:
Resend the stack configuration after updating the L4 serverside to restore traffic.
1325713 : Monthly backup cannot be scheduled for the days 29, 30, or 31
Component: BIG-IP Next
Symptoms:
You cannot schedule a monthly backup on the last 3 days of the month (29, 30, or 31) because some months do not contain these days (for example, February).
Conditions:
Creating a monthly backup schedule from BIG-IP Next Central Manager that contains the days 29, 30, or 31.
Impact:
If you select these days for your schedule, BIG-IP Next Central Manager returns a 500 error.
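The constraint implies that only days 1-28 are safe to schedule, since every month contains them. A client-side sketch of that check (illustrative only; not part of the Central Manager API):

```shell
# Monthly backup schedules should use days 1-28; 29-31 return a 500 error.
backup_day=30   # example value

if [ "$backup_day" -ge 1 ] && [ "$backup_day" -le 28 ]; then
  echo "day $backup_day is schedulable every month"
else
  echo "day $backup_day is rejected: not every month has this day"
fi
# → day 30 is rejected: not every month has this day
```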
1314617 : Deleting an interface on a running BIG-IP Next instance can cause the system to behave unexpectedly
Component: BIG-IP Next
Symptoms:
Once a BIG-IP Next instance is configured to use certain interfaces at first boot, deleting one of them puts the system in an unpredictable state. Avoid this update to the instance.
Conditions:
Remove an interface from an existing BIG-IP Next instance.
Impact:
The BIG-IP Next behaves unpredictably once a network interface is removed from a running instance.
Workaround:
None
1134225 : AS3 declarations with a SNAT configuration do not get removed from the underlying configuration as expected
Links to More Info: K000138849
Component: BIG-IP Next
Symptoms:
The AS3-configured L4-serversides object contains a SNAT property when it should not, because SNAT was previously configured in the declaration and then removed.
Conditions:
SNAT configuration was specified in the AS3 declaration and then subsequently removed.
Impact:
A SNAT cannot be removed once it has been added.
Workaround:
Remove the L4-serversides object, either by removing the relevant configuration from the AS3 declaration or by using DELETE /api/v1/L4-serversides, and then re-POST the AS3 declaration without the SNAT.
1122689-3 : Cannot modify DNS configuration for a BIG-IP Next VE instance through API
Component: BIG-IP Next
Symptoms:
Making updates to BIG-IP Next Virtual Edition (VE) DNS configuration through onboarding or the API does not update the DNS configuration as expected.
Conditions:
Making updates to a BIG-IP Next DNS configuration through the API.
Impact:
The BIG-IP Next instance continues to use the DNS servers supplied by DHCP on the interface by default.
Workaround:
Prior to updating the BIG-IP Next DNS configuration through the API, issue the following commands.
$ rm -f /etc/resolv.conf; touch /etc/resolv.conf
This removes all DNS configurations. DNS can then be managed through the BIG-IP Next instance's API, and the DNS provided by DHCP is ignored.
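The workaround can be rehearsed on a scratch path before running it on the instance, where the target is /etc/resolv.conf. The demo path and nameserver below are placeholders.

```shell
# Demonstration of the workaround on a scratch path; on the instance itself
# the target is /etc/resolv.conf.
resolv=./resolv.conf.demo
printf 'nameserver 10.0.0.2\n' > "$resolv"   # simulate a DHCP-written entry

rm -f "$resolv"; touch "$resolv"
test ! -s "$resolv" && echo "resolv.conf cleared; DHCP-provided DNS is ignored"
```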
1087937 : API endpoints do not support page query
Component: BIG-IP Next
Symptoms:
The 'page' query is not supported.
Conditions:
This issue is seen when the API is called directly. There is no impact on the functionality if BIG-IP Next Central Manager or AS3 is used.
Impact:
Pagination of results does not function correctly.
Workaround:
Remove the 'limit' parameter. This causes all objects to be returned in the response.
★ This issue may cause the configuration to fail to load or may significantly impact system performance after upgrade
For additional support resources and technical documentation, see:
- The F5 Technical Support website: http://www.f5.com/support/
- The MyF5 website: https://my.f5.com/manage/s/
- The F5 DevCentral website: http://devcentral.f5.com/