BIG-IP Next Fixes and Known Issues
This list highlights the fixes and known issues for this BIG-IP Next release.
Version: 20.2.1
Build: 2.430.2+0.0.48
Known Issues in BIG-IP Next v20.2.1
Cumulative fixes from BIG-IP Next v20.2.1 that are included in this release
BIG-IP Next Fixes
ID Number | Severity | Links to More Info | Description |
1599305-1 | 1-Blocking | After upgrading, unable to edit the Central Manager part of policies attached to the applications | |
1590065-1 | 1-Blocking | The same gateway address is not considered as valid on multiple static routes | |
1585793-1 | 1-Blocking | The f5-fsm-tmm crashes upon configuring BADOS under traffic | |
1584753-1 | 1-Blocking | K000139851 | TMM in BIG-IP Next expires the license after 50 days |
1575393 | 1-Blocking | WaitingForPlatformNetworkError error message during instance edit in HA post upgrade★ | |
1575073 | 1-Blocking | Missing file(s) reported in iHealth when looking at BIG-IP Next commands list | |
1567089 | 1-Blocking | Outbound requests sent on the network during BIG-IP Next Central Manager installation★ | |
1561053-2 | 1-Blocking | Application migration status incorrectly labeled as green when certain properties are removed | |
1474081 | 1-Blocking | Central Manager upgrade fails, leaving VM in maintenance mode★ | |
1584741-1 | 2-Critical | In the Table commands in iRule, the subtable count command fails in BIG-IP Next 20.x | |
1575261 | 2-Critical | The Setup may fail to complete due to fluentd in CrashLoopBackOff. | |
1575157 | 2-Critical | Access session creation failure with ERR_TYPE logged in TMM | |
1576613 | 3-Major | PB crashes upon detaching a policy | |
1576101 | 3-Major | Unable to login with LDAP user to BIG-IP Next Central Manager | |
1573565 | 3-Major | Access policy using SAML authentication deploy incorrectly due to the certificate that is installed without the signing key | |
1572681 | 3-Major | Data Group Changes not reflected on BIG-IP after deployment when removing key-value pairs | |
1560561-1 | 3-Major | Instance Manager and Security Manager roles are required to deploy Inspection Service with VLAN changes |
Cumulative fix details for BIG-IP Next v20.2.1 that are included in this release
1599305-1 : After upgrading, unable to edit the Central Manager part of policies attached to the applications
Component: BIG-IP Next
Symptoms:
If applications with attached WAF policies exist before the upgrade, then after the upgrade parts of those policies are not editable until the application is re-deployed.
Conditions:
Applications attached with WAF policies exist before the upgrade.
Impact:
Unable to edit part of the WAF policies that are attached to applications before upgrade.
Workaround:
Re-deploy the application to edit the policies.
1590065-1 : The same gateway address is not considered as valid on multiple static routes
Component: BIG-IP Next
Symptoms:
When multiple static routes are configured with the same gateway IP address, as in the example below, the BIG-IP Next instance configures only the first static route and does not configure the remaining static routes.
- destination prefix 192.17.17.17/24 with gateway IP 198.2.1.1
- destination prefix 192.18.18.18/24 with gateway IP 198.2.1.1
Conditions:
Multiple static routes with same gateway IP address.
Impact:
Unable to configure multiple static routes with same gateway IP address.
Workaround:
Set the environment variable 'DPVD_NETWORK_VALIDATOR_ENABLE' from True to False. For example, edit the TMM deployment:
sudo kubectl edit deploy f5-fsm-tmm
Fix:
The same gateway IP address can be used on multiple static routes.
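The corrected validation can be sketched in a few lines of Python. This is a hypothetical model for illustration only (the actual BIG-IP Next validator is not public): duplicate destination prefixes are invalid, but reusing one gateway IP across several routes must be accepted.

```python
def validate_static_routes(routes):
    """Each route is a (destination_prefix, gateway_ip) pair.

    Duplicate destination prefixes are rejected; sharing one
    gateway IP across several routes is legal.
    """
    seen_prefixes = set()
    for prefix, _gateway in routes:
        if prefix in seen_prefixes:
            raise ValueError(f"duplicate destination prefix: {prefix}")
        seen_prefixes.add(prefix)
    return True

# Two routes sharing gateway 198.2.1.1 must pass validation.
routes = [
    ("192.17.17.17/24", "198.2.1.1"),
    ("192.18.18.18/24", "198.2.1.1"),
]
print(validate_static_routes(routes))  # → True
```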
1585793-1 : The f5-fsm-tmm crashes upon configuring BADOS under traffic
Component: BIG-IP Next
Symptoms:
The f5-fsm-tmm crashes.
Conditions:
Deploy BIG-IP Next WAF and perform external IP vulnerability scan.
Configure BADOS while traffic is running to the WAF application service.
Impact:
The f5-fsm-tmm crashes, traffic is disrupted.
Workaround:
None
Fix:
The f5-fsm-tmm works as expected after configuring BADOS under traffic.
1584753-1 : TMM in BIG-IP Next expires the license after 50 days
Links to More Info: K000139851
Component: BIG-IP Next
Symptoms:
-- BIG-IP Next suddenly stops passing application traffic.
-- The TMM logs show that the license has expired
-- The TMM state changes to unlicensed.
Conditions:
-- BIG-IP Next instances
-- A valid license is applied, with more than 50 days until expiration
-- 50 (49.7) days elapse after the license activation
Impact:
TMM becomes unlicensed and stops passing application traffic
Workaround:
Restart the BIG-IP Next instance before 49.7 days have elapsed since license activation.
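The 49.7-day figure is almost exactly the rollover period of an unsigned 32-bit millisecond counter. That match is our inference only; the release note does not state the root cause:

```python
# 2^32 milliseconds expressed in days: the classic rollover period
# of an unsigned 32-bit millisecond timer (inference, not a stated
# root cause for this issue).
rollover_days = 2**32 / 1000 / 86400  # ms -> s -> days
print(round(rollover_days, 1))  # → 49.7
```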
1584741-1 : In the Table commands in iRule, the subtable count command fails in BIG-IP Next 20.x
Component: BIG-IP Next
Symptoms:
The Table commands in iRules allow user data to be stored at runtime inside "subtables"; administrators use these to store state. The table command can count the number of records in a subtable, for example:
table keys -subtable TABLE -count
Conditions:
Using Table "count" command:
table keys -subtable MYSUBTABLE -count
Impact:
Count is incorrectly reported as 0.
Workaround:
None
Fix:
The Table count command returns the correct number of records.
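The fixed semantics can be modeled with a few lines of Python. This is a conceptual stand-in for the iRule session table, not actual iRule code:

```python
# In-memory stand-in for the iRule session table.
subtables = {}

def table_set(subtable, key, value):
    """Model of: table set -subtable <name> <key> <value>"""
    subtables.setdefault(subtable, {})[key] = value

def table_count(subtable):
    """Model of: table keys -subtable <name> -count"""
    return len(subtables.get(subtable, {}))

table_set("MYSUBTABLE", "conn1", "stateA")
table_set("MYSUBTABLE", "conn2", "stateB")
print(table_count("MYSUBTABLE"))  # → 2 (the bug incorrectly reported 0)
```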
1576613 : PB crashes upon detaching a policy
Component: BIG-IP Next
Symptoms:
Upon detaching a policy, policy builder (PB) crashes.
Conditions:
Steps to Reproduce:
1. Create an application with a fundamental policy.
2. Delete the application.
Impact:
When PB crashes, any data not yet saved to persistence is lost (statistics, which in turn affects suggestions).
The user-visible consequences are the loss of unsaved data and the downtime of the crash itself.
Workaround:
None
Fix:
Fixed an issue causing policy builder to crash.
1576101 : Unable to login with LDAP user to BIG-IP Next Central Manager
Component: BIG-IP Next
Symptoms:
If a user upgrades HA instances and BIG-IP Next Central Manager to the latest build, and then configures LDAP and creates a new user available in LDAP, it is not possible to log in with the new LDAP user.
Conditions:
1. Upgrade BIG-IP Next Central Manager from 20.1.x to 20.2.x.
2. Configure LDAP and create a user available in LDAP.
3. Attempt to login using the new LDAP user credentials.
Impact:
Users are unable to log in with the new LDAP user after upgrading to 20.2.0.
Workaround:
As a workaround, use the following steps:
- Use ssh to log in to the machine using machine credentials
- Restart the system and gateway deployments using these commands:
kubectl rollout restart deployment mbiq-system-feature
kubectl rollout restart deployment mbiq-gateway-feature
1575393 : WaitingForPlatformNetworkError error message during instance edit in HA post upgrade★
Component: BIG-IP Next
Symptoms:
Users with instances configured in High Availability (HA) setups might experience a waitingForPlatformNetworkError error when trying to make modifications to an instance from Configuration Manager or apply changes to the /onboard endpoint configuration following an upgrade of the instances.
Conditions:
-- Two instances/tenants configured as HA pair.
-- Upgrade of HA pair.
-- Modifying instance configuration such as DNS, NTP, or HA attributes.
Impact:
The modification request fails and returns error waitingForPlatformNetworkError
Workaround:
Initiate a manual failover.
Fix:
Users will be able to make edits to the instance post HA upgrade.
1575261 : The Setup may fail to complete due to fluentd in CrashLoopBackOff.
Component: BIG-IP Next
Symptoms:
In some cases, you might be unable to complete the BIG-IP Next Central Manager initial setup (script) because fluentd is in CrashLoopBackOff and the script times out.
/var/log/central-manager/central-manager-cli.log:
Error: INSTALLATION FAILED: client rate limiter Wait returned an error: context deadline exceeded
/var/log/syslog:
Mar 7 14:08:11 central-manager k3s[1940]: E0307 14:08:11.499777 1940 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"fluentd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=fluentd pod=mbiq-fluentd-0_default(fa83f8e7-9ffd-451e-912e-f6fb037a083d)\"" pod="default/mbiq-fluentd-0" podUID=fa83f8e7-9ffd-451e-912e-f6fb037a083
Conditions:
The BIG-IP Next Central Manager Virtual Machine is deployed on a hypervisor with high resource utilization (e.g., long or slow storage responses).
Impact:
You may be unable to complete the initial setup of the BIG-IP Next Central Manager.
Workaround:
- Make sure you start from a configuration with the default number of vCPUs.
- Run uninstall:
/opt/cm-bundle/cm uninstall
- Start a new setup, fill in all details:
setup
- When the installer asks the following, stop for a moment.
Would you like to start the BIG-IP Next Central Manager application installation (Y/n) [Y]:
- Open a new ssh session and navigate to:
cd /var/opt/cm-bundle/artifacts/
- Create a new directory and unpack umbrella spec to it:
mkdir custom
tar -C custom -zxvf umbrella-20.1.1-1.1.tgz
- Save the old umbrella spec:
mv umbrella-20.1.1-1.1.tgz umbrella-20.1.1-1.1.tgz.old
- Open fluentd chart:
sudo chmod 644 custom/umbrella/charts/fluentd/values.yaml
vi custom/umbrella/charts/fluentd/values.yaml
- Within the chart, navigate to the startupProbe section. NOTE: there are two sections like this that you need to modify:
startupProbe:
  enabled: false
  httpGet:
    path: /fluentd.healthcheck?json=%7B%22ping%22%3A+%22pong%22%7D
    port: http
  initialDelaySeconds: 60
  periodSeconds: 10
  timeoutSeconds: 5
  failureThreshold: 6
  successThreshold: 1
- To allow more startup time, edit initialDelaySeconds to 180 (or increase periodSeconds). NOTE: modify both sections. Save the changes.
initialDelaySeconds: 180
- Create the new tarball from the custom directory (make sure the file structure remains the same as in the original archive):
root@lab:/var/opt/cm-bundle/artifacts/custom# tar -cvzf ../umbrella-20.1.1-1.1.tgz umbrella/
- From the previous ssh window, proceed with installation:
Would you like to start the BIG-IP Next Central Manager application installation (Y/n) [Y]: Y
- When the fluentd install completes, check from that separate SSH session whether the changes were applied successfully; you should see a section similar to the one you edited before:
kubectl edit pod mbiq-fluentd-0
Fix:
The Fluentd configuration has been updated to provide additional time for the initial startup.
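The edit extends fluentd's startup window. Under standard Kubernetes startup-probe semantics, the kubelet allows roughly initialDelaySeconds + periodSeconds × failureThreshold seconds before giving up on the container; a quick calculation shows what the change buys:

```python
def startup_budget(initial_delay, period, failure_threshold):
    # Worst-case seconds before the kubelet gives up on the startup probe
    # (standard Kubernetes probe semantics, approximate).
    return initial_delay + period * failure_threshold

print(startup_budget(60, 10, 6))   # original chart values → 120
print(startup_budget(180, 10, 6))  # after the edit → 240
```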
1575157 : Access session creation failure with ERR_TYPE logged in TMM
Component: BIG-IP Next
Symptoms:
The end user receives a connection reset when trying to connect to an application protected by an access policy.
Conditions:
Whenever the license keys are read from Redis back into the internal sessiondb.
Impact:
The end user cannot access the application protected by the access policy.
Workaround:
None
Fix:
None
1575073 : Missing file(s) reported in iHealth when looking at BIG-IP Next commands list
Component: BIG-IP Next
Symptoms:
When viewing the Commands list on iHealth for a qkview that came from a BIG-IP Next instance, multiple yellow warnings are displayed at the top of the page:
Missing file(s): ["/qkview/subpackages/f5-fsm-tmm-7b5d4cd86c-9mzm7-f5-fsm-f5dr/qkview/commands/ab457c83eed053c6cb8f84ca82dd5403/0/out","/qkview/subpackages/f5-fsm-tmm-7b5d4cd86c-9mzm7-f5-fsm-f5dr/qkview/commands/d69345dc5a0f984a9533a5bb704d6392/0/out",
...
]
Conditions:
-- iHealth
-- BIG-IP Next qkview
-- Standard commands list
Impact:
A large yellow banner listing missing file(s) is displayed.
The banner can be ignored.
Workaround:
None
1573565 : Access policy using SAML authentication deploy incorrectly due to the certificate that is installed without the signing key
Component: BIG-IP Next
Symptoms:
SAML authentication does not work when the application is deployed on BIG-IP Next.
Observed logs on BIG-IP Next:
Creating an agent for [5bddf66c-ea1b-526b-b4f9-7481fc587d6d__converted_policy__SAML_Auth_ag], type [47]
AccessAgentManager: createAccessAgent() , Policy name [5bddf66c-ea1b-526b-b4f9-7481fc587d6d__converted_policy], agent name [5bddf66c-ea1b-526b-b4f9-7481fc587d6d__converted_policy__SAML_Auth_ag], agent type [47]
Library [libsamlAuthAgent] return [1]
AccessAgentFactory::Creating Agent Instance: type [47], name [5bddf66c-ea1b-526b-b4f9-7481fc587d6d__converted_policy__SAML_Auth_ag]
getExternalDependencies/798: Returning external dependency size: 0
File metadata for [test13_1] not found
Error (APM_ERR_NOT_FOUND) in get SP Signing key [test13_1] for [5bddf66c-ea1b-526b-b4f9-7481fc587d6d__converted_policy__SAML_Auth__test_saml]
Conditions:
Object of the "apm aaa saml" type with sp-certificate and sp-signkey configured.
Example:
apm aaa saml /Common/test_saml {
    entity-id https://sp.journeys.com
    idp-connectors {
        /Common/test_cbip_idp_2 { }
    }
    is-authn-request-signed true
    name-id-policy-format urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified
    sp-certificate /Common/test13
    sp-signkey /Common/test13
    want-assertion-signed false
}
sys file ssl-cert /tenant84a57b1912699/application_1/test13 {
    cache-path /config/filestore/files_d/Common_d/certificate_d/:Common:test13_59918_1
    revision 1
    source-path /var/run/key_mgmt/38SULC/ssl.crt/test13
}
sys file ssl-key /tenant84a57b1912699/application_1/test13_1 {
    cache-path /config/filestore/files_d/Common_d/certificate_key_d/:Common:test13_59915_1
    revision 1
    source-path /var/run/key_mgmt/XxuApS/ssl.key/test13
}
Impact:
The certificate is imported to BIG-IP Next Central Manager by Journeys with the "Access" tag, meaning it is imported without the signing key.
As a result, the Access policy cannot pre-deploy the key to the BIG-IP Next instance, resulting in a misconfigured application.
Workaround:
Manually import the certificate with its signing key, then update the imported policy by selecting the proper certificate and key.
1572681 : Data Group Changes not reflected on BIG-IP after deployment when removing key-value pairs
Component: BIG-IP Next
Symptoms:
Deleting a key from a data group fails to trigger a redeployment of the data group.
Conditions:
-- A data group is deployed
-- You remove a row (key & value) from the data group
Impact:
The changes are not redeployed.
Workaround:
To remove a key-value pair, you must create a new data group. Reference the new data group inside the SSL Orchestrator policy, and redeploy the SSL Orchestrator policy.
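This failure mode is typical of change detection that only walks the new document: keys that were added or modified are found, keys that vanished are not. A correct diff must also walk the old document. The sketch below is illustrative Python, not the Central Manager implementation:

```python
def datagroup_diff(old, new):
    """Return (added, changed, removed) key sets between two data groups."""
    added = {k for k in new if k not in old}
    changed = {k for k in new if k in old and old[k] != new[k]}
    # Walking the OLD keys is the step a new-document-only diff skips,
    # which is why deletions went undetected.
    removed = {k for k in old if k not in new}
    return added, changed, removed

old = {"10.0.0.1": "pool_a", "10.0.0.2": "pool_b"}
new = {"10.0.0.1": "pool_a"}
print(datagroup_diff(old, new))  # the removal of 10.0.0.2 is detected
```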
1567089 : Outbound requests sent on the network during BIG-IP Next Central Manager installation★
Component: BIG-IP Next
Symptoms:
Software used by BIG-IP Next Central Manager sends outbound requests during the installation process.
Conditions:
Installing the BIG-IP Next Central Manager application.
Impact:
None
Workaround:
None
Fix:
Outbound connection requests by BIG-IP Next Central Manager are now disabled.
1561053-2 : Application migration status incorrectly labeled as green when certain properties are removed
Component: BIG-IP Next
Symptoms:
When migrating applications to BIG-IP Next, certain unsupported properties might be removed during the migration process, but the virtual server status is incorrectly labeled as "Ready for migration" (green status) rather than flagged with a "Warning" (yellow status).
Conditions:
Migration of a UCS that contains application services with certain unsupported properties to BIG-IP Next Central Manager. Some examples are:
min-active-members
slow-ramp-time
After migration, the following can be observed:
- Virtual server status is green ("Ready for migration")
- Virtual server contains configuration with tmsh objects that can be translated into AS3 classes supported in BIG-IP Next.
- tmsh objects that contain unsupported properties cannot be translated into configurable options for AS3 class.
AS3 Schema Reference: https://clouddocs.f5.com/bigip-next/latest/schemasupport/schema-reference.html
Impact:
Unsupported properties are silently dropped without logs and the status of the migration is incorrect (status is green, but should be yellow). The application service after migration might not be functional because of the missing properties.
1560561-1 : Instance Manager and Security Manager roles are required to deploy Inspection Service with VLAN changes
Component: BIG-IP Next
Symptoms:
Security Manager alone cannot deploy the inspection service because the inspection service depends on instance APIs.
Conditions:
When navigating from the inspection service to the instance-level screen, the instance API call fails and no instances are shown in the grid, because the user lacks the permissions required to access the API.
Impact:
The inspection service cannot be deployed.
Workaround:
Assign both the Security Manager and Instance Manager roles.
Fix:
Ensure that security manager has the right permissions to access instance APIs.
1474081 : Central Manager upgrade fails, leaving VM in maintenance mode★
Component: BIG-IP Next
Symptoms:
After upgrading Central Manager, the admin GUI no longer allows you to log in.
The /api/system-info API endpoint indicates that the upgrade is in progress and the system is in maintenance mode.
Fluentd logs stop happening, with the last line indicating that a file is being copied.
nats.fluent_log: {"level":"info","msg":"Initiating platform upgrade"}
nats.fluent_log: {"level":"info","msg":"file copy destination: /vol/local/upgrade/packages/BIG-IP-Next-CentralManager-20.0.2-0.0.68-Update.iso.tmp"}
Lastly, the file at /vol/local/upgrade/packages/ is incomplete.
Conditions:
-- Upgrading Central Manager
-- The exact conditions that trigger this have not yet been determined.
Impact:
The upgrade fails and the system remains in maintenance mode.
Workaround:
echo $(kubectl get secret mbiq-db-postgresql -o jsonpath='{.data}' | jq -r '."postgres-password"' | base64 -d)
# Copy the password and supply it to the following command.
kubectl exec mbiq-db-postgresql-0 -c postgresql -i -t -- psql -h localhost -U postgres -d bigiq_db -c "UPDATE mbiq_shared.tasks SET state='resumeFromSelfUpgrade', status='running' WHERE task_type='cm-upgrade-task' AND status='running';"
kubectl rollout restart deployment mbiq-upgrade-manager-feature
Known Issues in BIG-IP Next v20.2.1
BIG-IP Next Issues
ID Number | Severity | Links to More Info | Description |
1601413-1 | 1-Blocking | During BIG-IP Next upgrade, the Central Manager reports that the BIG-IP Next HA failover has failed | |
1600445-1 | 1-Blocking | Historic telemetry collected by BIG-IP Next Central Manager may be lost | |
1597037-1 | 1-Blocking | Adding a new TLS instance to an existing application (a default TLS instance) fails to flow traffic as expected | |
1596021-1 | 1-Blocking | serverTLS/clientTLS name in Service_TCP does not match the clientSSL/serverSSL profile name | |
1593605-1 | 1-Blocking | HTTPS Traffic not working on BIG-IP Next HA formed from Central Manager with SSL Orchestrator topology | |
1593381 | 1-Blocking | When upgrade fails, release version displayed in GUI is different from CLI release version. | |
1587337-1 | 1-Blocking | HA cluster on CM UI could be unhealthy during standby upgrade★ | |
1586501-1 | 1-Blocking | Configuring external logger in instance log management causes Central Manager to stop receiving telemetry | |
1579977-1 | 1-Blocking | BIG-IP Next instance telemetry data is missing from the BIG-IP Next Central Manager when a BIG-IP Next Central Manager High Availability node goes down. | |
1579441-1 | 1-Blocking | Connection requests on rSeries may not appear to be DAG distributed as expected | |
1576545-1 | 1-Blocking | After upgrade, BIG-IP Next tenant is unable to export toda-otel (event logs) data to Central Manager★ | |
1574585 | 1-Blocking | Auto-Failback cluster cannot upgrade active node★ | |
1353589 | 1-Blocking | Provisioning of BIG-IP Next Access modules is not supported on VELOS, but containers continue to run | |
1352969 | 1-Blocking | Upgrades with TLS configuration can cause TMM crash loop | |
1350285-1 | 1-Blocking | Traffic is not passing after the tenant is licensed and network is configured | |
1329853-1 | 1-Blocking | Application traffic is intermittent when more than one virtual server is configured | |
1602561-1 | 2-Critical | Inspection services cannot be deployed when one of the instances managed by BIG-IP Next Central Manager is in unhealthy state | |
1591209-1 | 2-Critical | Unable to force re-authentication on IDP when BIG-IP Next is acting as SAML SP | |
1590037-1 | 2-Critical | Provisioning SSL Orchestrator on BIG-IP NEXT HA cluster fails when using Central Manager UI | |
1585309-1 | 2-Critical | Server-Side traffic flows using a default VRF even though pool is configured in a non-default VRF | |
1584681-1 | 2-Critical | Application service creation fails if name contains "fallback" | |
1579365-1 | 2-Critical | Unsupported nested properties are not underlined during application migration process | |
1576277 | 2-Critical | 'Backup file creation failed' for instance after upgrade to v20.2.0 | |
1571993-1 | 2-Critical | Access Session data is not cleared after TMM restart | |
1560493-1 | 2-Critical | Inaccurate Reflection of Selfip Prefix Length in TMM Statistics and "ip addr" Output | |
1492705 | 2-Critical | During upgrading to BIG-IP Next 20.1.0, the BIG-IP Next 20.1.0 Central Manager failed to connect with BIG-IP Next 20.0.2 instance | |
1466305 | 2-Critical | Anomaly in factory reset behavior for DNS enabled BIG-IP Next deployment | |
1365005 | 2-Critical | Analytics data is not restored after upgrading to BIG-IP Next version 20.0.1 | |
1354265 | 2-Critical | The icb pod may restart during install phase | |
1602141-1 | 3-Major | Invalid certificates can disrupt configuration and status updates | |
1593805 | 3-Major | The air-gapped environment upgrade from BIG-IP Next 20.0.2-0.0.68 to BIG-IP Next 20.2.0-0.5.41 fails | |
1592929-1 | 3-Major | Attaching or detaching of an iRule version is not supported for AS3 application | |
1587497 | 3-Major | WAF security report shows alerted requests even though no alerts were generated | |
1586869 | 3-Major | Unable to create the same standby instance, when Instance HA creation failed using CM-created instances★ | |
1585773 | 3-Major | Unable to migrate large number of applications at once | |
1585285 | 3-Major | Unable to stage applications for migration when session contains large number of application services | |
1584637 | 3-Major | After upgrade, 'Accept Request' will only work on events after policy redeploy | |
1584625 | 3-Major | Virtual server information of application containing multiple virtual IP addresses and WAF policies after upgrade is missing★ | |
1583541 | 3-Major | Re-establish trust with BIG-IP after upgrade to 20.2.1 using a 20.1.1 Central Manager★ | |
1574685 | 3-Major | Generated WAF report can be loaded without text | |
1574681 | 3-Major | Dynamic Parameter Extract from allowed URLs doesn't show in the parameter in the WAF policy | |
1574573 | 3-Major | Global Resiliency Group status not reflecting correctly on update | |
1574565 | 3-Major | Inability to edit Generic Host While Re-Enabling Global Resiliency | |
1569969-1 | 3-Major | WAF policy with default DoS profile cannot be migrated | |
1569589-2 | 3-Major | Default values of Access policy are not migrated | |
1568129 | 3-Major | During upgrade from BIG-IP Next 20.1.0 to BIG-IP Next 20.2.0, issue identified with instances that has L3-Forwards with non default VRF (L3-Network) configuration | |
1567129 | 3-Major | Unable to deploy Apps on BIG-IP Next v20.2.0 created using Instantiation from v20.1.x★ | |
1566745-1 | 3-Major | L3VirtualAddress set to ALWAYS advertise will not advertise if there is no associated Stack behind it | |
1505193-1 | 3-Major | Draft applications in CM will show Good status before deployment | |
1495017 | 3-Major | BIG-IP Next Hostname, Group Name and FQDN name should adhere to RFC 1123 specification | |
1495005 | 3-Major | Cannot create Global Resiliency Group with multiple instances if the DNS instances have same hostname | |
1494997 | 3-Major | Deleting a GSLB instance results in record creation of GR group in BIG-IP Next Central Manager | |
1491197 | 3-Major | Server Name (TLS ClientHello) Condition in policy shouldn't be allowed when "Enable UDP" option is selected in application under Protocols & Profiles | |
1491121 | 3-Major | Patching a new application service's parameters overwrites entire application service parameters | |
1489945 | 3-Major | HTTPS applications with self-signed certificates traffic is not working after upgrading BIG-IP Next instances to new version of BIG-IP Next Central Manager | |
1474801 | 3-Major | BIG-IP Next Central Manager creates a default VRF for all VLANS of the onboarded Next device | |
1472669 | 3-Major | idle timer in BIG-IP Next Central Manager can log out user during file uploads★ | |
1403861 | 3-Major | Data metrics and logs will not be migrated when upgrading BIG-IP Next Central Manager from 20.0.2 to a later release | |
1366321-1 | 3-Major | BIG-IP Next Central Manager behind a forward-proxy | |
1365433 | 3-Major | Creating a BIG-IP Next instance on vSphere fails with "login failed with code 501" error message★ | |
1360121-1 | 3-Major | Unexpected virtual server behavior due to removal of objects unsupported by BIG-IP Next | |
1360097-1 | 3-Major | Migration highlights and marks "net address-list" as unsupported, but addresses are converted to AS3 format | |
1360093-1 | 3-Major | Abbreviated IPv6 destination address attached to a virtual server is not converted to AS3 format | |
1359209-1 | 3-Major | The health of application service shown as "Good" when deployment fails as a result of invalid iRule syntax | |
1358985-1 | 3-Major | Failed deployment of migrated application services to a BIG-IP Next instance | |
1355605 | 3-Major | "NO DATA" is displayed when setting names for application services, virtual servers, and pools that exceed max characters | |
1134225 | 3-Major | K000138849 | AS3 declarations with a SNAT configuration do not get removed from the underlying configuration as expected |
1122689-3 | 3-Major | Cannot modify DNS configuration for a BIG-IP Next VE instance through API | |
1593745 | 4-Minor | Issues identified during Backup, Restore, and User Operations between two BIG-IP Next Central Managers for Standalone and High Availability Nodes. | |
1588813 | 4-Minor | CM Restore on a 3 node BIG-IP Next Central Manager with external storage fails with ES errors | |
1588101 | 4-Minor | Any changes made on the BIG-IP Next Central Manager after the BIG-IP Next instance backup will not be reflected on the BIG-IP Next Central Manager once the BIG-IP Next instance is restored. | |
1576273 | 4-Minor | No L1-Networks in an instance causes BIG-IP Next Central Manager upgrade to v20.2.0 to fail★ | |
1575549 | 4-Minor | BIG-IP Next Central Manager discovery requires an instance to have both Default L2-Network and Default L3-Network if either one already exists | |
1564157-1 | 4-Minor | BIG-IP Next Central Manager requires VELOS/rSeries systems to use an SSL certificate containing the host IP address in the CN or SANs list.★ | |
1560605 | 4-Minor | Global Resiliency functionality fails to meet expectations on Safari browsers | |
1498421 | 4-Minor | Restoring Central Manager (VE) with KVM HA Next instance fails on a new BIG-IP Next Central Manager | |
1498121 | 4-Minor | BIG-IP Next Central Manager upgrade alerts not visible in global bell icon | |
1490381-1 | 4-Minor | Pagination for iRules page not supported with a large number of iRules | |
1394625-1 | 4-Minor | Application service fails to deploy even if marked as green (ready to deploy) | |
1365445 | 4-Minor | Creating a BIG-IP Next instance on vSphere fails with "login failed with code 401" error message★ | |
1365417 | 4-Minor | Creating a BIG-IP Next VE instance in vSphere fails when a backslash character is in the provider username★ | |
1360709 | 4-Minor | Application page can show an error alert that includes "FAST delete task failed for application" | |
1360621 | 4-Minor | Adding a Control Plane VLAN must be done only during BIG-IP Next HA instance creation | |
1354645 | 4-Minor | Error displays when clicking "Edit" on the Instance Properties panel | |
1350365 | 4-Minor | Performing licensing changes directly on a BIG-IP Next instance | |
1325713-2 | 4-Minor | Monthly backup cannot be scheduled for the days 29, 30, or 31 |
Known Issue details for BIG-IP Next v20.2.1
1602561-1 : Inspection services cannot be deployed when one of the instances managed by BIG-IP Next Central Manager is in unhealthy state
Component: BIG-IP Next
Symptoms:
Inspection services cannot be deployed to the instances using the UI.
Conditions:
One of the three instances managed by BIG-IP Next Central Manager is in an unknown state.
Impact:
You won't be able to deploy inspection services.
Workaround:
1. Use the Central Manager API to deploy on the healthy instances, or
2. Fix the state of the instance that is in the unknown state.
1602141-1 : Invalid certificates can disrupt configuration and status updates
Component: BIG-IP Next
Symptoms:
A virtual address with RHI configuration marked as Never may be advertised over BGP.
Conditions:
Multiple virtual servers share the same virtual address, the RHI configuration is marked as Never, and the RHI configuration is created before the application or stack is created.
Impact:
A virtual address that should not be advertised is advertised through BGP.
Workaround:
Create the RHI configuration for Never after the application or stack is configured.
1601413-1 : During BIG-IP Next upgrade, the Central Manager reports that the BIG-IP Next HA failover has failed
Component: BIG-IP Next
Symptoms:
After the upgrade of the first node completes and failover is triggered, either automatically by CM or manually, CM reports that the failover has failed due to a 401 Failed to authenticate error.
Even though CM reports a failure, the failover operation proceeds on the instance.
Conditions:
During BIG-IP Next upgrade using CM.
Impact:
CM shows BIG-IP Next HA status as Unhealthy though the actual BIG-IP Next status is healthy.
Workaround:
Use the following steps as a workaround:
1. Open the properties drawer for the instance and go to the HA section.
2. Confirm that the nodes have swapped roles in the cluster. The new active should be at the upgraded version and the standby should be at the earlier version.
2.1 Also, the cluster health API can be used through Postman to confirm that failover has finished.
GET https://{{CM-address}}/api/v1/spaces/default/instances/{{Big-IP-Next-ID}}/health
The response should show the nodes have swapped roles, and one is ACTIVE and the other is STANDBY.
3. Disable the toggle for "Enable automatic failover" and click upgrade for the standby node and follow normal upgrade workflow procedure.
4. When upgrade has finished, in the Instance list page, the HA instance will show the upgraded version and the cluster will be healthy.
1600445-1 : Historic telemetry collected by BIG-IP Next Central Manager may be lost
Component: BIG-IP Next
Symptoms:
If one of the BIG-IP Next Central Manager high availability (HA) nodes becomes unavailable, the BIG-IP Next instance telemetry may no longer be available through the BIG-IP Next Central Manager.
Conditions:
Any of the BIG-IP Next Central Manager high availability (HA) nodes becomes unavailable.
Impact:
Historic BIG-IP Next instance telemetry may no longer be available through BIG-IP Next Central Manager. Once the node is restored, or replaced by a new node, BIG-IP Next Central Manager will start collecting and presenting telemetry again.
Workaround:
None.
1597037-1 : Adding a new TLS instance to an existing application (a default TLS instance) fails to flow traffic as expected
Component: BIG-IP Next
Symptoms:
Traffic flow does not work as expected when a new TLS instance is added to an existing application.
Conditions:
1. Create a default SSL certificate and a custom certificate from the Central Manager UI.
2. Deploy an HTTPS application and validate LTM traffic with the default certificate.
3. Edit the application to add the new certificate for the TLS instance under Protocols & Profiles.
4. Add the imported certificate (custom cert) using "enable https client side".
5. Save the application with the new TLS settings and certificate added.
6. Click Review and deploy.
7. Validate the changes made to the application.
8. If validation is successful, click Deploy application.
Impact:
Traffic flow does not work as expected
Workaround:
Suggested workarounds:
1. Delete the existing certificate in the UI, recreate the same certificate (either before or after adding the new certificate), and save the application.
2. Use the API with multiCerts set to true for each certificate block.
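As a sketch of workaround 2, each certificate block in the AS3 declaration would carry multiCerts set to true. The exact property placement below is an assumption; verify it against the AS3 Next schema reference:

```json
"certificates": [
  { "certificate": "defaultCert", "multiCerts": true },
  { "certificate": "customCert", "multiCerts": true }
]
```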
1596021-1 : serverTLS/clientTLS name in Service_TCP do not match the clientSSL/serverSSL profile name
Component: BIG-IP Next
Symptoms:
When you try to deploy the application service, if the serverTLS/clientTLS name in Service_TCP does not match the clientSSL/serverSSL profile name, you might get one of the following error messages:
serverTLS: must contain a path pointing to an existing reference
or
clientTLS: must contain a path pointing to an existing reference
Conditions:
Object names are truncated if application or partition names are too long.
Impact:
Application service deployment to the BIG-IP Next instance fails.
Workaround:
Ensure that the serverTLS/clientTLS name in the Service_TCP class and the clientSSL/serverSSL name in the declaration are the same.
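For example, the serverTLS value must exactly match the name of the TLS profile object defined in the same application (the names below are illustrative):

```json
"serviceMain": {
  "class": "Service_TCP",
  "virtualAddresses": ["192.0.2.10"],
  "serverTLS": "webtls"
},
"webtls": {
  "class": "TLS_Server",
  "certificates": [{ "certificate": "webcert" }]
}
```

If object names are truncated (for example, because the application or partition name is too long), the truncated reference no longer points to an existing object and deployment fails with the errors above.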
1593805 : The air-gapped environment upgrade from BIG-IP Next 20.0.2-0.0.68 to BIG-IP Next 20.2.0-0.5.41 fails
Component: BIG-IP Next
Symptoms:
When upgrading from BIG-IP Next version 20.0.2-0.0.68 to version 20.2.0-0.5.41, the process fails. Post-upgrade, the Central Manager GUI flashes continuously for a few minutes before returning to normal. Furthermore, the failed upgrade leads to a version discrepancy: the GUI displays the current version while the CLI indicates the target version.
Conditions:
Upgrade CM from version 20.0.2-0.0.68 to version 20.2.0-0.5.41.
Impact:
CM becomes dysfunctional. The failed upgrade leads to a version discrepancy: the CM GUI displays the current version while the CLI indicates the target version.
Workaround:
Back up and restore CM from version 20.0.2. Refer to How to: Back up and restore BIG-IP Next Central Manager (https://clouddocs.f5.com/bigip-next/20-0-2/use_cm/cm_backup-restore.html).
1593745 : Issues identified during Backup, Restore, and User Operations between two BIG-IP Next Central Managers for Standalone and High Availability Nodes.
Component: BIG-IP Next
Symptoms:
Performing a backup on one BIG-IP Next Central Manager, followed by user operations, and then performing a restore on another BIG-IP Next Central Manager with subsequent user operations may result in the following issues:
- You cannot download a QKView on the restored BIG-IP Next Central Manager if it was created by the previous BIG-IP Next Central Manager before the backup operation.
- After you restore the backup on the new BIG-IP Next Central Manager setup, any BIG-IP Next instance deleted on the previous BIG-IP Next Central Manager enters an unknown state.
- After you restore the backup on the new BIG-IP Next Central Manager, an app deleted on the previous BIG-IP Next Central Manager cannot process traffic until you redeploy the app.
- The BIG-IP Next Central Manager does not support uploading and downloading backup files when configured with external storage.
Conditions:
Perform a backup on one BIG-IP Next Central Manager and restore it on another BIG-IP Next Central Manager.
Impact:
After restoring the new BIG-IP Next Central Manager, certain operations might not function properly.
Workaround:
If you delete the app after taking a backup on the BIG-IP Next Central Manager and then restore the backup on a new BIG-IP Next Central Manager, traffic will not pass through. Users must edit and redeploy the app for traffic to function properly.
1593605-1 : HTTPS Traffic not working on BIG-IP Next HA formed from Central Manager with SSL Orchestrator topology
Component: BIG-IP Next
Symptoms:
HTTPS traffic is not working.
Conditions:
A BIG-IP Next HA setup with SSL Orchestrator provisioned.
Impact:
Users can experience traffic downtime if an instance goes down during an upgrade or due to network interruptions.
Workaround:
None
1593381 : When upgrade fails, release version displayed in GUI is different from CLI release version.
Component: BIG-IP Next
Symptoms:
During a CM upgrade, if the upgrade fails (for example, in an air-gap environment where the CM is disconnected from the internet), the version displayed in the GUI is different from the version shown in the CLI. Typically, the GUI retains the current version, while the CLI shows the target version that failed to upgrade. This discrepancy in version causes confusion about whether the upgrade was successful or not.
Conditions:
Upgrade CM in an air-gapped environment.
Impact:
CM becomes dysfunctional. The failed upgrade leads to a version discrepancy: the CM GUI displays the current version while the CLI indicates the target version.
Workaround:
None
1592929-1 : Attaching or detaching of an iRule version is not supported for AS3 application
Component: BIG-IP Next
Symptoms:
In Central Manager, from the iRule space, attaching or detaching a different iRule version for a deployed AS3 application is not supported; it is supported only for FAST applications.
Conditions:
- Migrate an AS3 application that has an iRule.
- Attach or detach a different iRule version from the iRule space.
Impact:
Unable to deploy the application with a different iRule version.
Workaround:
Redeploy the application with the new iRule version by directly editing the AS3 declaration from the application space.
Following is an example:
"iRules": [
{
"cm": "migrated_myfakeiRule2::v2"
}
],
1591209-1 : Unable to force re-authentication on IDP when BIG-IP Next is acting as SAML SP
Component: BIG-IP Next
Symptoms:
When BIG-IP Next is configured as a SAML SP with force authentication enabled in the SAML Auth item, the IdP still does not re-authenticate the user when the user tries to access the SP.
Conditions:
The issue is observed for all use cases where force authentication is enabled in the SAML Auth item.
Impact:
The user is not re-authenticated when trying to access the SP, even though the admin configured the SP to force re-authentication.
Workaround:
None
1590037-1 : Provisioning SSL Orchestrator on BIG-IP Next HA cluster fails when using Central Manager UI
Component: BIG-IP Next
Symptoms:
When a user creates an HA cluster of BIG-IP Next instances using the Central Manager UI, after successful creation and licensing of the instance, provisioning SSL Orchestrator from the UI may make the UI unresponsive and display the "Enabling SSLO is in progress..." message.
Conditions:
A user creates an HA cluster of BIG-IP Next instances and tries to provision SSL Orchestrator from the UI.
Impact:
Provisioning SSLO on HA cluster may result in unresponsive UI.
Workaround:
Configure HA cluster, license, and provision SSL Orchestrator using OpenAPI, prior to adding cluster to Central Manager.
1588813 : CM Restore on a 3 node BIG-IP Next Central Manager with external storage fails with ES errors
Component: BIG-IP Next
Symptoms:
BIG-IP Next Central Manager restore fails with critical alert raised with the description:
Error registering Elasticsearch snapshot repository: failed to register Elasticsearch snapshot repository: response not acknowledged. result: map[error:map[caused_by:map[caused_by:map[reason:/vol/elasticsearch-snapshot/restore-temp/elasticsearch type:access_denied_exception] reason:[elastic-repo] cannot create blob store type:repository_exception] reason:[elastic-repo] Could not determine repository generation from root blobs root_cause:[map[reason:[elastic-repo] cannot create blob store type:repository_exception]] type:repository_exception] status:500]
Conditions:
Configure a 3-node BIG-IP Next Central Manager and take a CM backup.
Then, on a fresh 3-node BIG-IP Next Central Manager, use the backup file to restore the BIG-IP Next Central Manager.
Impact:
CM restore succeeds but Elasticsearch is not restored.
Workaround:
When Elasticsearch is not restored, an alert message is raised. Run ./opt/cm-bundle/cm restore_es to ensure ES is restored.
1588101 : Any changes made on the BIG-IP Next Central Manager after the BIG-IP Next instance backup will not be reflected on the BIG-IP Next Central Manager once the BIG-IP Next instance is restored.
Component: BIG-IP Next
Symptoms:
After an instance restore succeeds and all the data present on the instance is restored, BIG-IP Next Central Manager still shows incorrect data.
Conditions:
1. Create a BIG-IP Next Central Manager and discover an instance.
2. Create an instance backup.
3. Make changes on the instance using BIG-IP Next Central Manager. For example, create a QKView, modify networks or self IPs, or delete application services on the instance using the BIG-IP Next Central Manager UI.
4. Restore the instance with the backup file created.
5. The change is reflected only on the instance but not on BIG-IP Next Central Manager. As a result, changes made before the backup are not visible on the BIG-IP Next Central Manager UI.
Impact:
There are discrepancies between BIG-IP Next Central Manager and the instance.
Workaround:
None
1587497 : WAF security report shows alerted requests even though no alerts were generated
Component: BIG-IP Next
Symptoms:
When creating a security report, the generated report might show alerts, even though none were reported in the WAF dashboards and event log.
Conditions:
Generate a security report for a policy that is blocking traffic.
Impact:
The generated report might incorrectly show blocked requests as alerts even though no alerts were reported.
1587337-1 : HA cluster on CM UI could be unhealthy during standby upgrade★
Component: BIG-IP Next
Symptoms:
This is an intermittent issue caused by a race condition between HA cluster creation and the standby writing self-signed certificates to the standby vault.
The following are the expected HA workflow steps:
1. Two BIG-IP Next instances (instance-1 and Instance-2) boot up as standalone on BIG-IP Next 20.2.0 image.
2. Both instances create and store self-signed certificates in vault DB.
3. HA cluster creation job is initiated.
4. Active instance creates self-signed certificates for new cluster IP and updates vault DB.
5. Standby instance creates self-signed certificates for new cluster IP and updates vault database.
6. During HA cluster join and database sync, active DB replaces standby DB.
If step 5 occurs before step 6, the HA cluster goes into an unknown state.
If step 5 occurs after step 6, the HA cluster is healthy and the upgrade works as expected.
Conditions:
After creating BIG-IP Next cluster, upgrade the version on standby.
Impact:
BIG-IP Next HA cluster is unreachable from CM.
Workaround:
During HA upgrades, if the standby node is not reachable, follow these steps:
1. Disable the "Enable automatic failover" flag and force a failover.
2. On the CM UI, click the HA cluster name > Certificates > Establish Trust. The HA status on the CM UI changes from Unknown to Unhealthy.
3. Upgrade the new standby instance to BIG-IP Next 20.2.1.
Both active and standby should now be on BIG-IP Next 20.2.1 and HA should be healthy in the CM UI.
1586869 : Unable to create the same standby instance, when Instance HA creation failed using CM-created instances★
Component: BIG-IP Next
Symptoms:
Using CM-created instances, if instance HA creation fails and the standby instance is removed from CM, you cannot create the same standby instance configuration again.
Conditions:
Creating Instance HA using CM-created instances.
Impact:
Unable to create the same standby instance.
Workaround:
Remove the active instance from CM. This deletes both the active and standby instances from CM and the provider. Create both instances again.
1586501-1 : Configuring external logger in instance log management causes Central Manager to stop receiving telemetry
Component: BIG-IP Next
Symptoms:
In BIG-IP Next Central Manager, navigating to Instance > Log management and creating an external logger causes the CM to stop receiving all telemetry.
Conditions:
Configure external logger for the instance.
Impact:
The configured CM stops receiving telemetry, and container logs cannot be sent to external loggers.
Workaround:
None
1585773 : Unable to migrate large number of applications at once
Component: BIG-IP Next
Symptoms:
When you click the Deploy button to migrate a large number of applications (over 500 applications) at once, you might get the following error:
Cannot read properties of undefined (reading 'status_code')
Conditions:
Select more than 500 applications to migrate to BIG-IP Next Central Manager.
Impact:
More than 500 applications cannot be migrated at once.
Workaround:
None
1585309-1 : Server-Side traffic flows using a default VRF even though pool is configured in a non-default VRF
Component: BIG-IP Next
Symptoms:
Traffic flows when a default VRF is configured and a pool is configured in a non-default VRF without a route in the non-default VRF.
Conditions:
- Default VRF is configured
- Pool is configured in non-default VRF
- Route to pool exists in default VRF, but not in non-default VRF.
Impact:
Traffic continues to work when the pool is configured in a non-default VRF, but there is no route in the non-default VRF.
Workaround:
For network isolation, do not configure a default VRF. Use all non-default VRFs in the configuration.
1585285 : Unable to stage applications for migration when session contains large number of application services
Component: BIG-IP Next
Symptoms:
When you click Add application on the Application Migration page, the following error is returned:
Unexpected Error: applications?limit-1000000
Conditions:
Migrate a UCS file that contains a large number of virtual servers (more than 2000).
Impact:
Applications cannot be migrated using UCS files that have a large number (more than 2000) of virtual servers.
Workaround:
Increase the amount of memory for the mbiq-journeys-feature deployment.
1. Log in to BIG-IP Next Central Manager using SSH.
2. Execute the following command: kubectl patch deployment mbiq-journeys-feature -p '{"spec":{"template":{"spec":{"containers":[{"name":"mbiq-journeys-feature","resources":{"limits":{"cpu":"1","memory":"1.5Gi"}}}]}}}}'
1584681-1 : Application service creation fails if name contains "fallback"
Component: BIG-IP Next
Symptoms:
An application service will not be created if its name contains the "fallback" keyword.
Conditions:
The application service name contains the "fallback" keyword.
Impact:
Application service is not created.
Workaround:
Do not include the "fallback" keyword in the application service name.
1584637 : After upgrade, 'Accept Request' will only work on events after policy redeploy
Component: BIG-IP Next
Symptoms:
After BIG-IP Next Central Manager is upgraded to version 20.2.1 from a previous version, the 'Accept Request' option on events does not work.
Conditions:
Upgrade BIG-IP Next Central Manager that contains WAF events to version 20.2.1.
Click 'Accept Request' for an event. The results return:
'No policy builder data in event [support id]'
Impact:
All events (new or pre-update events) in the WAF event log will not return results when you select 'Accept Request'.
Workaround:
You can receive and accept results for new events from the event log when you manually redeploy the WAF policy.
1584625 : Virtual server information of application containing multiple virtual IP addresses and WAF policies after upgrade is missing★
Component: BIG-IP Next
Symptoms:
After upgrading BIG-IP Next Central Manager with multi-VIP applications, you cannot create a report based on specific virtual servers.
Filtering by virtual server in BaDoS logs and dashboards after upgrade is also not possible.
Conditions:
Create a report for an application that contains multiple virtual servers.
Impact:
Limitations in the actions you can take in the Web Application dashboard and in reports when filtering by virtual servers.
Workaround:
Re-deploy the applications after upgrade.
1583541 : Re-establish trust with BIG-IP after upgrade to 20.2.1 using a 20.1.1 Central Manager★
Component: BIG-IP Next
Symptoms:
Central Manager will report BIG-IP upgrade as failed due to timeout waiting for it to complete.
Conditions:
Using a 20.1.1 Central Manager to upgrade a BIG-IP Next instance to 20.2.1
Impact:
The BIG-IP Next upgrade will have succeeded, but Central Manager will think it never completed. The version displayed for the BIG-IP in the Central Manager UI will be inaccurate. Until trust is re-established, all communication with the BIG-IP will fail.
Workaround:
1. Open the instance properties drawer, click "Certificates", and establish trust with the instance manually.
2. Select the instance in the grid and re-trigger the upgrade to the Nutmeg version.
The instance upgrade should detect that the instance is already at that version and return success. The CM task then fetches the new version and updates its DB, in turn updating the version in the UI grid.
1579977-1 : BIG-IP Next instance telemetry data is missing from the BIG-IP Next Central Manager when a BIG-IP Next Central Manager High Availability node goes down.
Component: BIG-IP Next
Symptoms:
BIG-IP Next instance telemetry can be missing from the BIG-IP Next Central Manager for five to ten minutes if any of the BIG-IP Next Central Manager HA nodes goes down or becomes unavailable.
- Instance data metrics such as Instance health, Traffic, and Network Interface metrics will be lost, as they are available only for the previous hour.
- All other data, such as application metrics and WAF logs, will not be lost. However, these metrics could be unavailable for 5-10 minutes during the node-down event.
Conditions:
Any of the BIG-IP Next Central Manager HA nodes becomes unavailable or goes down.
Impact:
BIG-IP Next instance data metrics, such as instance health, traffic, and network interface metrics, will be lost. All other metrics, such as application metrics and WAF logs, might be missing for 5-10 minutes.
Workaround:
Wait for 5-10 minutes and BIG-IP Next Telemetry data will resume on the BIG-IP Next Central Manager.
Run the following command on the VM console of the BIG-IP Next Central Manager to resume the instance data metrics:
kubectl delete pods prometheus-mbiq-kube-prometheus-prometheus-0 --grace-period=0 --force
1579441-1 : Connection requests on rSeries may not appear to be DAG distributed as expected
Component: BIG-IP Next
Symptoms:
Connection requests on rSeries may not be distributed across TMM instances as expected. For example, TMM0 may appear to service more requests than other TMMs, when a round-robin even distribution across TMMs was expected. This may be due to the `port adjust` setting not having the default value of `xor5mid-xor5low`.
Conditions:
Multiple TMMs on rSeries, where connection requests are not distributed across TMMs as expected.
Impact:
Connection requests may be unevenly distributed across TMMs, causing some TMMs to be under heavier load than other TMMs.
Workaround:
Adjust traffic patterns for load balancing, or tune DAG behavior with additional DAG configuration options to adjust assignment of connection requests to TMMs.
1579365-1 : Unsupported nested properties are not underlined during application migration process
Component: BIG-IP Next
Symptoms:
During application migration, if there are unsupported nested properties (sub-properties of properties) of tmsh objects, they are not underlined by the Configuration Analyzer.
Conditions:
Configuration of migrated application contains an object with nested properties that are not supported on BIG-IP Next, e.g.:
ltm pool /AS3_Tenant/AS3_Application/testItem2 {
members {
/AS3_Tenant/192.168.2.2:400 {
address 192.168.2.2
connection-limit 1000
dynamic-ratio 50
monitor min 1 of { /Common/http } // unsupported nested property that will not be underlined, but should be
priority-group 4
rate-limit 100
ratio 50
}
}
min-active-members 1
}
Impact:
Functionality of the application service might not work as expected.
Workaround:
You can check if all nested properties are present in the AS3 preview of the AS3 declaration. Those that are not present will not be migrated.
Refer to the AS3 Next schema reference: https://clouddocs.f5.com/bigip-next/latest/schemasupport/schema-reference.html
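For the tmsh example above, the AS3 preview of the pool might look similar to the following sketch; the monitor min 1 of nested property is absent, which indicates it will not be migrated (property names follow classic AS3 and are illustrative here):

```json
"testItem2": {
  "class": "Pool",
  "minActiveMembers": 1,
  "members": [{
    "servicePort": 400,
    "serverAddresses": ["192.168.2.2"],
    "connectionLimit": 1000,
    "ratio": 50,
    "priorityGroup": 4,
    "rateLimit": 100
  }]
}
```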
1576545-1 : After upgrade, BIG-IP Next tenant is unable to export toda-otel (event logs) data to Central Manager★
Component: BIG-IP Next
Symptoms:
After upgrade, the BIG-IP Next tenant is unable to export toda-otel (event logs) data to CM in VELOS
Conditions:
Upgrading BIG-IP Next tenant from 20.1 to 20.2 on a VELOS system.
Impact:
After upgrade, the BIG-IP Next tenant is unable to export toda-otel (event logs) data to CM
Workaround:
For VELOS Standalone
====================
After upgrade, if the f5-toda-otel-collector cannot connect to the host, change the tenant status from "DEPLOYED" to "CONFIGURED" and back to "DEPLOYED" to fix the issue. Note that it takes 5 to 10 minutes for the tenant status to change, and this might impact traffic.
For VELOS HA, follow these steps
=======================================
1. Set up CM on a Mango build.
2. Add 2 BIG-IP Next instances (Mango build) on the CM.
3. Bring up HA on CM with the Enable Auto Failover option unchecked.
4. Add a license to the HA instance.
5. Deploy a basic HTTP app in FAST mode with a WAF policy attached (Enforcement mode - Blocking, Log Events - all).
6. Send traffic and verify the WAF Dashboard under the Security section; the Total Requests and Blocked response fields should show non-zero values.
7. Upgrade the standby instance to the latest Nectarine build with the "auto-failover" button switched off.
8. The instances go into an unhealthy state on CM.
9. Change the status of the standby instance from Deployed to Configured mode and save it through the partition GUI/CLI.
10. After confirming the status of the pods, change the state of the standby instance back to Deployed from Configured. There should be no impact on the traffic flow during this step.
11. Force a failover and check the health status of the instances; it still shows unhealthy because the instances are mid-upgrade (one instance on the Mango build (standby node) and the other on the Nectarine build (active node)).
12. Upgrade the new standby instance to the latest Nectarine build with the "auto-failover" button switched off.
13. HA should look healthy in this state and traffic should continue to flow.
14. Change the state of the standby instance from Deployed to Configured mode and save it using the partition GUI/CLI.
15. After confirming the status of the pods for the instance on the partition CLI, change the state of the standby instance back to Deployed from Configured.
16. Verify the event logs on the WAF Dashboard under the Security section on CM.
17. Verify that the logs on the "f5-toda-otel-collector" pod show no export failures.
18. Upgrade the CM. Systems should be healthy.
1576277 : 'Backup file creation failed' for instance after upgrade to v20.2.0
Component: BIG-IP Next
Symptoms:
Instance backup fails on BIG-IP Next Central Manager version 20.2.0 with message:
'Backup file creation failed'
Conditions:
Start with BIG-IP Next Central Manager version 20.1.1 and BIG-IP Next version 20.1.0.
1. Upgrade the BIG-IP Next instances to version 20.2.0.
2. In the BIG-IP Next Central Manager UI, go to Infrastructure > Instances.
3. Create a backup file for the instance: select the desired BIG-IP Next instance, click Actions, select Back Up & Schedule, and input the required information.
4. In the BIG-IP Next Central Manager UI, go to Infrastructure > Instances > Backup & Restore.
Impact:
Instance backup file generation fails with message:
'Backup file creation failed'
Workaround:
Once you have upgraded the BIG-IP Next instances to version 20.2.0, you need to delete the large image files, as they prevent a successful backup. In addition, you need to delete the failed backup file.
You must send API calls to the instance to remove the large upgrade files and failed backup files before the backup will succeed. This example uses Postman to send the API calls. The following is an example procedure with variables in {{ }}. You can use variables or insert the actual value for each request:
1. Send a login request to BIG-IP Next Central Manager and record the “access_token” from the response. This is used to make all other API calls.
a. Use the command POST https://{{remote-CM-address}}/api/login, or if no variables are used, then use the command POST https://10.145.69.227/api/login
b. The body for the request is a JSON object with the credentials for the user.
{ "username": "username", "password": "password" }
2. Send a request to BIG-IP Next Central Manager's inventory and identify the instance that you want to delete the file from. Record the “id” from the response. The access_token from the previous step is used as the Bearer Token for the request. Repeat this for all other requests as well:
GET https://{{remote-CM-address}}/api/device/v1/inventory
3. Delete the large image files and failed backup files. Send a request for the files present on the instance. Note the instance ID from the previous step is used in the request URL. In the response, record the "id" for the "file name" or "description" in the response. Example files:
- The upgrade image file: BIG-IP Next 20.2.0....tgz
- The original backup file: backup and restore of the system
GET https://{{remote-CM-address}}/api/device/v1/proxy/{{remote-Big-IP-Next-ID}}?path=/files
4. Send a request to delete the file on the instance. Take the file ID from the previous step and paste it at the end of the delete URL. For example:
DELETE https://{{remote-CM-address}}/api/device/v1/proxy/{{remote-Big-IP-Next-ID}}?path=/files/644fcd02-fa38-4383-ac1c-f67e0c899e0d
5. Wait at least 20 minutes after the deletion before initiating steps to create another instance backup.
IMPORTANT NOTE: The file deletion process can take up to 20 minutes to complete. If the files are not fully deleted, the new backup attempt will fail.
6. If required, repeat step 4 to delete any other large files, unrelated to upgrade, such as QKView or core files.
1576273 : No L1-Networks in an instance causes BIG-IP Next Central Manager upgrade to v20.2.0 to fail★
Component: BIG-IP Next
Symptoms:
Upgrade of BIG-IP Next Central Manager to v20.2.0 fails.
Conditions:
BIG-IP Next Central Manager has an instance with no L1-Networks.
Impact:
Cannot upgrade to v20.2.0.
Workaround:
Add a blank DefaultL1Network to each instance using instance editor.
1575549 : BIG-IP Next Central Manager discovery requires an instance to have both Default L2-Network and Default L3-Network if either one already exists
Component: BIG-IP Next
Symptoms:
An instance can be discovered on BIG-IP Next Central Manager only if it is configured with neither a Default L2-Network nor a Default L3-Network, or with both of them, with the Default L2-Network under the Default L3-Network.
Conditions:
A BIG-IP Next Central Manager user attempts to discover an instance they own. Before discovering this instance on BIG-IP Next Central Manager, the user configured it with an L2 network named "Default L2-Network" and an L3 network named something other than "Default L3-Network". When the user tries discovering the instance on CM, discovery fails, noting that a Default L2-Network is present but no Default L3-Network.
Impact:
BIG-IP Next Central Manager cannot discover an instance if either one of "Default L2-Network" or "Default L3-Network" exist, but not both.
Workaround:
If a user configures an instance with an L2 network named Default L2-Network, they should also create an L3 network named Default L3-Network with its L2 network set to the default L2. If neither exists, or both exist and the Default L2 is under the Default L3, discovery succeeds.
1574685 : Generated WAF report can be loaded without text
Component: BIG-IP Next
Symptoms:
When generating a WAF report, the loaded print screen for the PDF is displayed without text content. This issue is reported intermittently, primarily on macOS.
Conditions:
No specific conditions apply; it happens intermittently and mainly on macOS.
Impact:
The report doesn't contain text and is not usable.
Workaround:
Retry generating a WAF report.
1574681 : Dynamic Parameter Extract from allowed URLs doesn't show in the parameter in the WAF policy
Component: BIG-IP Next
Symptoms:
After successfully creating a dynamic parameter with its respective extract URLs, reentering the parameter settings does not show the saved extract URLs.
Conditions:
Configure a WAF policy parameter as 'Dynamic' with extract URLs.
Impact:
Inability to see configured extract URLs from the UI parameter configuration screen within the WAF policy.
Workaround:
Go to the WAF policy and select the Policy Editor from the panel menu. Once in the policy editor, search for the keyword "extractions". The JSON shows the parameter extraction with its respective extract URLs.
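In the policy editor JSON, the extraction section might look similar to the following sketch (the structure shown is an illustrative assumption; search for "extractions" to find the actual section in your policy):

```json
"parameters": [
  {
    "name": "session_id",
    "type": "dynamic",
    "extractions": [
      { "urlName": "/login" },
      { "urlName": "/account" }
    ]
  }
]
```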
1574585 : Auto-Failback cluster cannot upgrade active node★
Component: BIG-IP Next
Symptoms:
A cluster created with the auto-failback flag enabled will not upgrade the active node.
Conditions:
Enable the auto-failback flag.
Impact:
The active node cannot be upgraded.
Workaround:
Disassemble the cluster and upgrade each node individually or create another cluster with the auto-failback flag disabled.
1574573 : Global Resiliency Group status not reflecting correctly on update
Component: BIG-IP Next
Symptoms:
After updating the Global Resiliency group, the group status may not immediately switch to "DEPLOYING," potentially causing the UI to inaccurately reflect the ongoing provisioning process, despite deployment being in progress.
Conditions:
During updates to the Global Resiliency group.
Impact:
The update status of the Global Resiliency group is incorrect.
Workaround:
To mitigate this issue, wait for approximately 5 minutes after updating the Global Resiliency group. This will allow the DNS listener address to become available for the newly added instance.
1574565 : Inability to edit Generic Host While Re-Enabling Global Resiliency
Component: BIG-IP Next
Symptoms:
Following the re-enabling of Global Resiliency from a previously disabled state, users are unable to simultaneously add or edit Generic Hosts.
Conditions:
During the re-enabling process of Global Resiliency.
Impact:
Unable to add or edit Generic Host information.
Workaround:
Refrain from making any changes to the Generic Host when re-enabling Global Resiliency from a previously disabled state.
After the application has been deployed, you can then proceed to add or modify Generic Hosts during the next application edit.
1571993-1 : Access Session data is not cleared after TMM restart
Component: BIG-IP Next
Symptoms:
The session entry stored in the Redis server is not cleared when TMM restarts and the user session does not come back to the BIG-IP after the restart.
Conditions:
- A user session is created in the Redis server.
- TMM restarts.
- Traffic for the corresponding user session does not come back to the BIG-IP.
Impact:
The session record in the Redis server is not cleared in the specific scenario where TMM restarts and traffic for the user session does not come back to the BIG-IP.
Workaround:
None
1569969-1 : WAF policy with default DoS profile cannot be migrated
Component: BIG-IP Next
Symptoms:
A WAF policy with a default L7 DoS profile cannot be imported/migrated to BIG-IP Next Central Manager.
Conditions:
Migrate a virtual server containing a WAF policy and default DoS profile to BIG-IP Next Central Manager.
Impact:
During the application migration (application selection) step of the migration wizard:
- Application status is blue (application can be only saved as draft).
- WAF policy is underlined in yellow.
- WAF policy cannot be imported to BIG-IP Next Central Manager (skipped status).
1569589-2 : Default values of Access policy are not migrated
Component: BIG-IP Next
Symptoms:
Default values of an Access policy are not migrated to BIG-IP Next Central Manager.
Conditions:
Migrate a virtual server with an Access policy that contains default values.
Impact:
- Access Policy imported to BIG-IP Next Central Manager does not have default values populated.
- If the affected policy is deployed to a BIG-IP Next instance, it will use default values applied by BIG-IP Next.
Workaround:
From the BIG-IP Next Central Manager UI, you can edit the Access policy property values or leave them unselected.
1568129 : During upgrade from BIG-IP Next 20.1.0 to BIG-IP Next 20.2.0, an issue is identified with instances that have L3-Forwards with a non-default VRF (L3-Network) configuration
Component: BIG-IP Next
Symptoms:
In BIG-IP Next 20.1.0, instances can have an L3-Forward that uses a non-default L3-Network (VRF).
In BIG-IP Next 20.2.0, the L3-Network (VRF) parameter is completely removed from the L3-Forward GUI. Any L3-Forward in CM version 20.2.0 always uses the default VRF configuration.
In BIG-IP Next 20.2.0, Central Manager does not support creating or editing an L3-Forward with a non-default VRF configuration. Every L3-Forward shown in the L3-Forward GUI is assumed to use the default VRF configuration. If an L3-Forward uses a non-default VRF configuration, the only action available is to delete that L3-Forward.
Conditions:
Upgrade from BIG-IP Next 20.1.0 to BIG-IP Next 20.2.0 with an L3-Forward configured with a non-default VRF.
Impact:
In the CM UI, you cannot tell whether an existing L3-Forward uses the default VRF or a non-default VRF. You will have to re-create the L3-Forward using the CM UI so that it uses the default VRF.
Workaround:
Delete the L3-Forward and re-create it using the CM UI so that it uses the default VRF.
1567129 : Unable to deploy Apps on BIG-IP Next v20.2.0 created using Instantiation from v20.1.x★
Component: BIG-IP Next
Symptoms:
1. Install BIG-IP Next Central Manager with the v20.1.x build BIG-IP-Next-CentralManager-20.1.1-0.0.1.
2. Deploy two tenants on rSeries via the IOD process, one with the v20.1.x build (20.1.0-2.279.0+0.0.75) and one with the 20.2.0 build (20.2.0-2.375.1+0.0.1). Configure L1-L3 during the IOD process on both tenants.
3. Deploy a FastL4 migrated app on the v20.2.0 tenant. The following error is observed during deployment:
The task failed, failure reason: AS3-0007: AS3 Deploy Error: Failed to accept request on BIG-IP Next instance: {"code":422,"message":"At least one L3-network object must be configured before applying a declaration.","errors":[]}
Conditions:
If v20.2.0 BIG-IP Next was created using instantiation from BIG-IP Next Central Manager.
Impact:
Since no default objects are created for the v20.1.x BIG-IP Next Central Manager and v20.2.0 BIG-IP Next combination, application creation fails because it expects the presence of a VRF object.
Workaround:
Upgrade BIG-IP Next Central Manager to v20.2.0, then create VLANs by editing the BIG-IP instance, making sure to select the "Default VRF" check box.
1566745-1 : L3VirtualAddress set to ALWAYS advertise will not advertise if there is no associated Stack behind it
Component: BIG-IP Next
Symptoms:
L3VirtualAddress set to RHI Mode ALWAYS advertise will not advertise if there is no associated Application Stack behind it.
Conditions:
Configuration of RHI Mode to ALWAYS advertise on an L3VirtualAddress without an associated Application Stack.
Impact:
L3VirtualAddress will not be advertised as expected.
1564157-1 : BIG-IP Next Central Manager requires VELOS/rSeries systems to use an SSL certificate containing the host IP address in the CN or SANs list.★
Component: BIG-IP Next
Symptoms:
BIG-IP Next Central Manager requires that virtualization providers use a valid SSL certificate. A self-signed certificate can also be explicitly accepted by BIG-IP Next Central Manager users, if the certificate otherwise passes SSL validation successfully.
When F5OS generates self-signed SSL certificates for its HTTPS services, it does not include the actual hostname or IP address in the Common Name or Subject Alternative Names (SANs) fields. As a result, this self-signed certificate will not pass SSL validation for strict TLS clients, because the HTTPS server name does not match any Subject names in the certificate.
Conditions:
A BIG-IP Next Central Manager user attempts to add a VELOS or rSeries system as a virtualization provider, when the VELOS or rSeries system is using the default self-signed certificate generated by the system.
Impact:
BIG-IP Next Central Manager cannot successfully add VELOS or rSeries systems as virtualization providers, and therefore cannot dynamically create new BIG-IP Next instances on VELOS or rSeries systems.
Workaround:
1. Create a self-signed SSL certificate that includes the F5OS system's actual IP address in the Subject Alternative Names (SANs) field. For example, the following steps can be used:
A. Save the following data into a file named "ip-san.cnf":
[req]
default_bits = 2048
distinguished_name = req_distinguished_name
req_extensions = req_ext
x509_extensions = v3_req
prompt = no
[req_distinguished_name]
countryName = XX
stateOrProvinceName = N/A
localityName = N/A
organizationName = Self-signed certificate
commonName = F5OS Self-signed certificate
[req_ext]
subjectAltName = @alt_names
[v3_req]
subjectAltName = @alt_names
[alt_names]
IP.1 = 127.0.0.1
DNS.1 = f5platform.host
B. Edit the file and change the IP address at the end to the IP address of the F5OS system. Optionally, update other certificate fields if the new certificate should have specific values for them (e.g., commonName, organizationName, localityName).
C. Run the following command to create the two certificate files "ip-san-cert.pem" and "ip-san-key.pem":
openssl req -x509 -nodes -days 730 -newkey rsa:2048 -keyout ip-san-key.pem -out ip-san-cert.pem -config ip-san.cnf
2. In the VELOS Partition or rSeries Hardware UI:
A. Navigate to the AUTHENTICATION & ACCESS -> TLS Configuration page.
B. Locate and update the "TLS Certificate" and "TLS Key" text boxes with the new certificate file and key file, respectively.
C. The F5OS system will then use this new certificate with its HTTPS services.
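Steps A through C can be combined into one shell sketch that also verifies the result before upload. The 127.0.0.1 below is a placeholder for the F5OS system's IP address, and the config is trimmed to the fields required for the SAN:

```shell
# Minimal sketch of the workaround above. Replace 127.0.0.1 with the F5OS
# system's IP address before generating the real certificate.
cat > ip-san.cnf <<'EOF'
[req]
default_bits = 2048
distinguished_name = req_distinguished_name
x509_extensions = v3_req
prompt = no
[req_distinguished_name]
commonName = F5OS Self-signed certificate
[v3_req]
subjectAltName = @alt_names
[alt_names]
IP.1 = 127.0.0.1
EOF

# Create the certificate and key files.
openssl req -x509 -nodes -days 730 -newkey rsa:2048 \
  -keyout ip-san-key.pem -out ip-san-cert.pem -config ip-san.cnf

# Confirm the IP address appears in the Subject Alternative Name extension
# before uploading the files on the TLS Configuration page.
openssl x509 -in ip-san-cert.pem -noout -text | grep -A1 'Subject Alternative Name'
```

If the final grep does not print an "IP Address:" entry matching the F5OS system, strict TLS clients such as BIG-IP Next Central Manager will still reject the certificate.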
1560605 : Global Resiliency functionality fails to meet expectations on Safari browsers
Component: BIG-IP Next
Symptoms:
The Global Resiliency Group UI main pane is rendered beneath the left navigation in the Safari browser.
Conditions:
When creating a Global Resiliency group in Safari browser.
Impact:
Unable to create a Global Resiliency Group.
Workaround:
Use the Chrome browser to create the Global Resiliency Group.
1560493-1 : Inaccurate Reflection of Selfip Prefix Length in TMM Statistics and "ip addr" Output
Component: BIG-IP Next
Symptoms:
Changes to the prefix length of selfips are not reflected in TMM statistics or the "ip addr" output.
Conditions:
Configure an L1-network with a VLAN and a self-IP with a certain prefix, then alter the prefix length or subnet of the self-IP.
Impact:
The modifications made to the self-ip prefix length are not reflected in TMM statistics.
Workaround:
To address changes in self-ip subnets, it is necessary to delete the L1-network and subsequently re-add it.
1505193-1 : Draft applications in CM will show Good status before deployment
Component: BIG-IP Next
Symptoms:
BIG-IP Next CM uses the "Good" status as the default when creating a new application. As a result, an application that does not receive health data (for example, an application without a monitor) stays in a "Good" status.
When the application is deployed, health data is received and the health status is updated.
Conditions:
Drafting applications in BIG-IP Next CM.
Impact:
Undeployed applications show a Good status even though they are not passing traffic.
Workaround:
Once the application is deployed, CM receives health data and updates the status.
1498421 : Restoring Central Manager (VE) with KVM HA Next instance fails on a new BIG-IP Next Central Manager
Component: BIG-IP Next
Symptoms:
The user cannot restore BIG-IP Next Central Manager for the first time.
Conditions:
BIG-IP Next Central Manager on VE managing instances that include a KVM HA instance.
Impact:
The first attempt to restore the backup archive into a new BIG-IP Next Central Manager fails.
Workaround:
The user must perform a second restoration of the backup archive into a new BIG-IP Next Central Manager.
1498121 : BIG-IP Next Central Manager upgrade alerts not visible in global bell icon
Component: BIG-IP Next
Symptoms:
BIG-IP Next Central Manager users cannot see the alerts generated during a BIG-IP Next Central Manager upgrade.
Conditions:
An upgrade of Central Manager from version 20.0.x to 20.1.x encounters errors.
Impact:
Alerts are not reflected in the 'Global Bell Icon' if errors occur during the BIG-IP Next Central Manager upgrade.
1495017 : BIG-IP Next Hostname, Group Name and FQDN name should adhere to RFC 1123 specification
Component: BIG-IP Next
Symptoms:
The Hostname, Group Name, and FQDN Name used in the Global Resiliency feature must be lowercase.
Conditions:
Providing names with capital letters for the above fields causes a failure.
Impact:
Group creation or FQDN creation fails when capital letters are used in them.
Workaround:
Always create names with lowercase letters that adhere to the RFC 1123 specification.
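As a quick pre-check, a candidate name can be tested against the RFC 1123 pattern (lowercase letters, digits, and hyphens; labels of 1-63 characters that neither start nor end with a hyphen) before it is used as a hostname, group name, or FQDN. This is an illustrative sketch, not a BIG-IP Next tool:

```shell
# Return success (0) if the given name is RFC 1123-compliant.
is_rfc1123() {
  printf '%s' "$1" | grep -Eq \
    '^([a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?\.)*[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?$'
}

is_rfc1123 "gr-group-1.example.com" && echo "valid"
is_rfc1123 "GR-Group-1" || echo "rejected: capital letters are not allowed"
```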
1495005 : Cannot create Global Resiliency Group with multiple instances if the DNS instances have same hostname
Component: BIG-IP Next
Symptoms:
The hostname is defaulted and cannot be modified when a hostname is not specified for the BIG-IP Next instances on BIG-IP Next Central Manager.
Conditions:
Create a Global Resiliency Group with more than one BIG-IP Next instance that has the same hostname.
Impact:
Global Resiliency Group creation fails.
Workaround:
Make sure the hostname is set and unique for each BIG-IP Next instance that will be used in Global Resiliency Group creation.
1494997 : Deleting a GSLB instance results in record creation of GR group in BIG-IP Next Central Manager
Component: BIG-IP Next
Symptoms:
Deleting a BIG-IP Next instance from "Infrastructure -> My Instances" disrupts any Global Resiliency configuration using that instance.
Conditions:
The issue occurs when an instance is deleted directly while it is being used in a Global Resiliency Configuration.
Impact:
Deleting the instance under these conditions will break the Global Resiliency feature, leading to DNS resolution failure for the GR Group.
Workaround:
Refrain from deleting the instances when they are currently being used in a Global Resiliency Group.
1492705 : During upgrading to BIG-IP Next 20.1.0, the BIG-IP Next 20.1.0 Central Manager failed to connect with BIG-IP Next 20.0.2 instance
Component: BIG-IP Next
Symptoms:
BIG-IP Next 20.1.0 Central Manager is managing BIG-IP Next 20.0.2 instances.
When a BIG-IP Next instance is upgraded from 20.0.2 to 20.1.0, Central Manager fails to connect to the instance.
Conditions:
BIG-IP Next 20.1.0 Central Manager managing BIG-IP Next 20.0.2 instances.
Impact:
Connection to BIG-IP Next instances fails.
Workaround:
Use the following upgrade order:
1. Start with BIG-IP Next Central Manager 20.0.2 managing BIG-IP Next 20.0.2 instances.
2. Upgrade the BIG-IP Next instances from 20.0.2 to 20.1.0.
3. Upgrade Central Manager from 20.0.2 to 20.1.0.
1491197 : Server Name (TLS ClientHello) Condition in policy shouldn't be allowed when "Enable UDP" option is selected in application under Protocols & Profiles
Component: BIG-IP Next
Symptoms:
BIG-IP Next Central Manager does not validate the mutually exclusive configurations "Enable UDP" in an application and the "TLS ClientHello" condition in SSL Orchestrator policies.
When an application is deployed with UDP enabled and an SSL Orchestrator policy is attached to it, the policy must not have a "TLS ClientHello" condition based on "Server Name".
Conditions:
The conditions, in sequence:
1. Create an application with UDP enabled.
2. Create and attach an SSL Orchestrator policy that has a "TLS ClientHello" condition based on "Server Name" to that application, and deploy it to a BIG-IP Next instance.
Impact:
Traffic processing will not work as the configuration is not valid and will not be sent to TMM until fixed.
1491121 : Patching a new application service's parameters overwrites entire application service parameters
Component: BIG-IP Next
Symptoms:
When sending a PATCH API request to append to an application service's parameters, all parameters are completely replaced by the request body rather than partially updated according to the PATCH request.
Conditions:
Use a PATCH API request to partially update application service parameters.
Deploy changes.
Impact:
If you send incomplete application service parameters, they completely replace the existing parameters, so only the partial parameters are saved. This leads to a failed application service deployment because the parameters are incomplete.
Workaround:
When using the API request to change application service parameters, include the full set of application service parameters in the request body, not just the partial changes.
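The workaround can be sketched as follows. The file names and parameter names are illustrative only, not actual Central Manager API objects; the point is that the PATCH body must be the complete parameters document with the change merged in, not a partial diff:

```shell
# Illustrative stand-in for the full parameters fetched via GET before patching.
cat > current-parameters.json <<'EOF'
{"pool_name": "web_pool", "virtual_port": 443, "enable_snat": true}
EOF

# Merge the single desired change into the complete document; patch-body.json
# is what you would send as the PATCH body instead of only the changed key.
python3 - <<'EOF'
import json

with open("current-parameters.json") as f:
    params = json.load(f)

params["virtual_port"] = 8443  # the one change you actually want

with open("patch-body.json", "w") as f:
    json.dump(params, f, indent=2)
EOF

cat patch-body.json
```

Sending only `{"virtual_port": 8443}` would wipe the other parameters, which is exactly the failure this issue describes.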
1490381-1 : Pagination for iRules page not supported with a large number of iRules
Component: BIG-IP Next
Symptoms:
Pagination is not supported in the iRules data grid when hundreds of iRules are configured on BIG-IP Next Central Manager.
Conditions:
This issue occurs when there are hundreds of iRules on BIG-IP Next Central Manager, which do not fit in a single iRule view.
Impact:
If there are more than about 500 iRules, all of them are shown at once, making it difficult to find specific iRules.
Workaround:
Search for the iRule name in the search bar to find a specific iRule.
1489945 : Traffic for HTTPS applications with self-signed certificates is not working after upgrading BIG-IP Next instances to a new version of BIG-IP Next Central Manager
Component: BIG-IP Next
Symptoms:
HTTPS traffic stops working after upgrading the BIG-IP Next instances for an application service previously deployed using BIG-IP Next Central Manager version 20.0.x.
Conditions:
1. Install BIG-IP Next Central Manager version 20.0.x and add BIG-IP Next instance(s).
2. Deploy the HTTP application service with a self-signed certificate created on BIG-IP Next Central Manager to an instance.
3. Observe that traffic is working fine.
4. Upgrade from 20.0.x to the newest version and observe that HTTPS traffic has stopped working.
Impact:
This impacts HTTPS application service traffic.
Workaround:
1. Upgrade BIG-IP Next Central Manager to latest version.
2. Create new self-signed certificates to replace the self-signed certificates already deployed through application services.
3. Replace the existing self-signed certificate in each application service with the newly created self-signed certificate and re-deploy the application service.
4. After successfully re-deploying the application service, make sure traffic is working on the instance.
5. Delete the old self-signed certificate(s) created in the earlier versions of BIG-IP Next Central Manager.
1474801 : BIG-IP Next Central Manager creates a default VRF for all VLANS of the onboarded Next device
Component: BIG-IP Next
Symptoms:
BIG-IP Next Central Manager creates a default VRF for all VLANS of the onboarded Next device.
Conditions:
The user wants to use specific VLANS for application traffic.
Impact:
Users cannot select VLANs for an application.
Workaround:
1. Create VLANs using the /L1 Networks endpoint directly on BIG-IP Next before adding the device to BIG-IP Next Central Manager.
2. Add the device to CM and choose the VLANs for SSL Orchestrator use cases.
Subsequently, perform L1Network-related operations on BIG-IP Next only.
1472669 : idle timer in BIG-IP Next Central Manager can log out user during file uploads★
Component: BIG-IP Next
Symptoms:
During a file upload, the UI idle timer logs the user out after approximately 20 minutes, possibly terminating the file upload or making it appear as though the upload has not completed when it has.
Conditions:
Upload a file to BIG-IP Next Central Manager.
Impact:
File upload is incomplete.
Workaround:
Interact periodically with the UI by moving the mouse or pressing keys in the browser window during a file upload that takes longer than ~20 minutes. This will reset the idle timer and prevent the UI from terminating the user session.
1466305 : Anomaly in factory reset behavior for DNS enabled BIG-IP Next deployment
Component: BIG-IP Next
Symptoms:
Factory reset API does not bring TMM to default provisioned modules. DNS pods along with cne-proxy and cne-controller are not deleted.
Conditions:
BIG-IP Next cluster with DNS provisioned and WAF disabled.
Impact:
A BIG-IP Next cluster with DNS provisioned will not go back to the default deployment; the user will have to deprovision DNS and re-provision WAF.
Workaround:
Deprovision DNS if the cluster needs to return to factory defaults.
1403861 : Data metrics and logs will not be migrated when upgrading BIG-IP Next Central Manager from 20.0.2 to a later release
Component: BIG-IP Next
Symptoms:
In version 20.1.0 of BIG-IP Next Central Manager, Elasticsearch replaces OpenSearch as the main storage for data metrics and logs.
Due to incompatibility between OpenSearch and Elasticsearch, metrics and logs that are stored on BIG-IP Next Central Manager in earlier versions will not be available after upgrading.
Conditions:
Upgrade BIG-IP Next Central Manager from a release version prior to 20.1.0.
Impact:
After the upgrade is complete, the data metrics and logs from the previous version will not be available on the upgraded BIG-IP Next Central Manager.
1394625-1 : Application service fails to deploy even if marked as green (ready to deploy)
Component: BIG-IP Next
Symptoms:
Deployment fails for a migrated application service that is marked green (ready for deployment).
Conditions:
During application migration, upload a UCS archive with a virtual server that has a clientssl profile attached that points to a cert/key pair with an unsupported RSA 512- or 1024-bit key.
Complete the migration and pre-deployment process, and deploy the application service.
Impact:
The application service will not have a deployment location option and can only be saved as a draft.
1366321-1 : BIG-IP Next Central Manager behind a forward-proxy
Component: BIG-IP Next
Symptoms:
Using "forward proxy" for external network calls from BIG-IP Next Central Manager fails.
Conditions:
When the network environment BIG-IP Next Central Manager is deployed in has a policy of routing all external calls through a forward proxy.
Impact:
BIG-IP Next Central Manager does not currently support proxy configurations, so you cannot deploy BIG-IP Next instances in that environment.
Workaround:
Allow BIG-IP Next Central Manager to connect to external endpoints by bypassing the "forward proxy" until BIG-IP Next Central Manager supports proxy configurations.
1365445 : Creating a BIG-IP Next instance on vSphere fails with "login failed with code 401" error message★
Component: BIG-IP Next
Symptoms:
Creating a BIG-IP Next VE instance in vSphere fails.
Conditions:
This happens when the randomly generated initial admin password contains an unsupported character.
Impact:
Creating a BIG-IP Next VE instance fails.
Workaround:
Try recreating the BIG-IP Next VE instance.
1365433 : Creating a BIG-IP Next instance on vSphere fails with "login failed with code 501" error message★
Component: BIG-IP Next
Symptoms:
Creating a BIG-IP Next VE instance fails and returns a code 503 error.
Conditions:
Attempting to create a BIG-IP Next VE instance from BIG-IP Next Central Manager when the vSphere environment has insufficient resources.
Impact:
Creating a BIG-IP Next VE instance fails.
Workaround:
Use one of the following workarounds.
- Retry creating the BIG-IP Next instance.
- Create the BIG-IP Next instance directly in the vSphere provider environment, then add it to BIG-IP Next Central Manager.
1365417 : Creating a BIG-IP Next VE instance in vSphere fails when a backslash character is in the provider username★
Component: BIG-IP Next
Symptoms:
If the provider username includes a backslash character, BIG-IP Next VE instance creation fails because BIG-IP Next Central Manager parses it as an escape character.
Conditions:
Creating a BIG-IP Next VE instance that includes a backslash character in the provider username.
Impact:
Creation of the BIG-IP Next instance fails.
Workaround:
Do not use the backslash character in the provider username.
1365005 : Analytics data is not restored after upgrading to BIG-IP Next version 20.0.1
Component: BIG-IP Next
Symptoms:
After upgrading from BIG-IP Next version 20.0 to 20.0.1, analytic data is not restored.
Conditions:
After upgrading from BIG-IP Next version 20.0 to 20.0.1.
Impact:
Analytics data is not automatically restored after upgrading and cannot be restored manually.
1360709 : Application page can show an error alert that includes "FAST delete task failed for application"
Component: BIG-IP Next
Symptoms:
After you successfully delete a BIG-IP Next instance that has application services deployed to it, an alert banner on the Applications page states that the delete task failed even though it's successful.
Conditions:
Delete a BIG-IP Next instance and then navigate to the Applications page.
Impact:
This can cause confusion.
1360621 : Adding a Control Plane VLAN must be done only during BIG-IP Next HA instance creation
Component: BIG-IP Next
Symptoms:
If you attempt to edit a BIG-IP Next HA instance properties to add a Control Plane VLAN, it fails.
Conditions:
Editing the properties for an existing BIG-IP Next VE HA instance and attempting to add a Control Plane VLAN.
Impact:
The attempt to edit/add Control Plane VLAN fails.
Workaround:
Create the Control Plane VLAN when you initially create the BIG-IP Next HA instance.
1360121-1 : Unexpected virtual server behavior due to removal of objects unsupported by BIG-IP Next
Component: BIG-IP Next
Symptoms:
The migration process ensures that application services are supported by BIG-IP Next. If a property value is not currently supported by BIG-IP Next, it is removed and is not present in the AS3 declaration. If the object was a default value, the object is replaced by a default value that is supported by BIG-IP Next.
Conditions:
1. Migrate a UCS archive from BIG-IP to BIG-IP Next Central Manager.
2. Review the AS3 declaration during the Pre Deployment stage.
Example for "cache-size" property of "web-acceleration" profile:
- BIG-IP config cache-size = 500mb OR 0mb
- AS3 schema supported range = 1-375mb
- BIG-IP Next stack (clientSide/caching/cacheSize) supported range 1-375mb
- AS3 output created by migration does not produce "cacheSize" property if cache-size is greater than 375mb or lower than 1mb.
- Deployment of AS3 declaration uses BIG-IP Next defaults in both cases (cache-size 375 or 0mb)
Impact:
Default values of virtual server's objects may change, impacting virtual server's behavior.
Workaround:
Although you cannot use values that are unsupported by BIG-IP Next, you can update the AS3 declaration with the missing properties to specify values other than the defaults added during the migration process.
To do so, see https://clouddocs.f5.com/bigip-next/latest/schemasupport/schema-reference.html and modify the AS3 declaration by adding the missing properties with values in the supported range.
1360097-1 : Migration highlights and marks "net address-list" as unsupported, but addresses are converted to AS3 format
Component: BIG-IP Next
Symptoms:
Objects of type "net address-list" are incorrectly marked as unsupported, even though virtual servers in the AS3 output contain the "virtualAddresses" property.
Conditions:
If an address list is used to configure a virtual server, it is highlighted as unsupported in the configuration editor even if it is properly translated to the AS3 "virtualAddresses" property.
Example of the object:
net address-list /tenant3892a81b1f9e6/application_11/IPv6AddressList {
addresses {
fe80::1ff:fe23:4567:890a-fe80::1ff:fe23:4567:890b { }
fe80::1ff:fe23:4567:890c { }
fe80::1ff:fe23:4567:890d { }
}
description IPv6
}
Example of an AS3 property:
"virtualAddresses": [
"fe80::1ff:fe23:4567:890a-fe80::1ff:fe23:4567:890b",
"fe80::1ff:fe23:4567:890c",
"fe80::1ff:fe23:4567:890d"
],
Impact:
- The object is translated to the virtualAddresses property in the AS3 output, but the application is marked as yellow.
- The object is translated, but one of the values from the address list is not supported on BIG-IP Next (IPv6 value range).
Workaround:
Verify that all addresses from the 'net address-list' object appear in the "virtualAddresses" property value list in the AS3 output.
Verify that all addresses from the 'net address-list' are supported on BIG-IP Next. Remove or modify the virtualAddresses value list if needed.
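The first check can be scripted against a saved copy of the AS3 output. This is a sketch assuming the declaration was exported to a local file named declaration.json; the sample content below is illustrative, not a real export:

```shell
# Stand-in for your exported AS3 declaration.
cat > declaration.json <<'EOF'
{"tenant3892a81b1f9e6": {"application_11": {"service": {
  "class": "Service_HTTP",
  "virtualAddresses": ["fe80::1ff:fe23:4567:890c", "fe80::1ff:fe23:4567:890d"]
}}}}
EOF

# Walk the declaration and list every "virtualAddresses" entry so they can be
# compared against the addresses in the original 'net address-list' object.
python3 - <<'EOF' > virtual-addresses.txt
import json

def walk(node):
    if isinstance(node, dict):
        for key, value in node.items():
            if key == "virtualAddresses":
                for addr in value:
                    print(addr)
            else:
                walk(value)
    elif isinstance(node, list):
        for item in node:
            walk(item)

with open("declaration.json") as f:
    walk(json.load(f))
EOF

cat virtual-addresses.txt
```

Any address present in the 'net address-list' but missing from this output was dropped during translation and needs to be added back by hand.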
1360093-1 : Abbreviated IPv6 destination address attached to a virtual server is not converted to AS3 format
Component: BIG-IP Next
Symptoms:
The service class in the AS3 output does not have a 'virtualAddresses' property, for example:
"Common_virtual_test": {
"snat": "none",
"class": "Service_TCP",
"profileTCP": {
"use": "/tenant017b16b41f5c7/application_9_SMtD/tcp_default_v14"
},
"persistenceMethods": []
}
Conditions:
Migrate an application service with abbreviated IPv6 address:
ltm virtual-address /tenant017b16b41f5c7/application_9_SMtD/aa::b {
    address aa::b
    arp enabled
    traffic-group /Common/traffic-group-1
}
Impact:
Virtual server is misconfigured, no listener on a specific IP address is created.
Workaround:
All application services containing virtual servers configured with abbreviated IPv6 addresses should be updated once they are migrated to BIG-IP Next Central Manager.
Go to Applications -> My Application Services, find your application service name and edit it.
Find your virtual server name and update it with a property
"virtualAddresses": [
"aa::b"
]
like this:
"Common_virtual_test": {
"snat": "none",
"class": "Service_TCP",
"virtualAddresses": [
"aa::b"
],
"profileTCP": {
"use": "/tenant017b16b41f5c7/application_9_SMtD/tcp_default_v14"
},
"persistenceMethods": []
}
1359209-1 : The health of application service shown as "Good" when deployment fails as a result of invalid iRule syntax
Component: BIG-IP Next
Symptoms:
When an application service with an invalid iRule is deployed to an instance from BIG-IP Next Central Manager, the deployment is shown as successful, but the post-deployment iRule validation fails on the instance. The health status should change to "Critical/Warning" but is still shown as "Good".
Conditions:
Deploy an application service with an invalid iRule.
Impact:
Incorrect status of the application service is shown in the My Application Services page.
Workaround:
Always use a valid iRule when deploying to BIG-IP Next.
1358985-1 : Failed deployment of migrated application services to a BIG-IP Next instance
Component: BIG-IP Next
Symptoms:
Deployment of a migrated application service to a BIG-IP Next instance might fail even if the declaration is valid. This can occur after the application service was successfully saved as a draft on BIG-IP Next Central Manager.
The following can appear in the deployment logs:
- No event with error code from deployment to instance in migration logs
- 202 response code "in progress" from deployment to instance in migration logs
- 503 response code "Configuration in progress" from deployment to instance in migration logs
Conditions:
1. Migrate an application service during a migration session
2. Select a deployment location and deploy the application service.
Review the migration log: the application service was successfully saved to BIG-IP Next Central Manager, but the deployment to the selected location failed with an error.
Impact:
Three different errors can appear in the deployment logs (Deployment Summary > View logs):
Reason 1:
Migration process started.
Application: <application name> saved as draft to BIG-IP Next Central Manager.
Migration process failed.
Reason 2:
Migration process started
Application: <application name> saved as draft to BIG-IP Next Central Manager.
Log Message: Deployment to <BIG-IP Next IP address> failed with the error: '{'code': 202, 'host': '<hostname>', 'message': 'in progress', 'runTime': 0, 'tenant': '<tenant name>'}'.
Migration process failed.
Reason 3:
If you are currently processing the same AS3 declaration sent from a different source or migration session:
Migration process started.
Application: <application name> saved as draft to BIG-IP Next Central Manager.
Log message: Deployment to <BIG-IP Next IP address> failed with the error: '{'code': 503, 'errors': [], 'message': 'Configuration operation in progress on device, please try again later.'}'.
Migration process failed.
Workaround:
The application service was successfully saved as a draft on BIG-IP Next Central Manager.
You can go to My Application Services, select the application service that failed to deploy, and deploy the application service to a selected instance location.
1355605 : "NO DATA" is displayed when setting names for application services, virtual servers, and pools that exceed the maximum characters
Component: BIG-IP Next
Symptoms:
"NO DATA" is displayed in the application metrics charts when setting a name that exceeds 33 characters for an application service, pool, or virtual server.
Conditions:
1. Create an application service with a virtual server and a pool.
2. Set the name of each of the objects above to be 34 characters or longer.
3. Add an endpoint to the pool.
4. Deploy the application service, and wait for the application service to pass traffic.
Impact:
"NO DATA" is displayed in the application service, pool and virtual server data metrics charts.
Workaround:
When creating an application, keep the names of application services, pools, and virtual servers to 33 characters or fewer.
1354645 : Error displays when clicking "Edit" on the Instance Properties panel
Component: BIG-IP Next
Symptoms:
When editing the properties of a BIG-IP Next instance, an "Error: unsupported platform type" message displays.
Conditions:
On the Instances page, click the BIG-IP Next instance's hostname to view its properties. On the Instance Properties panel, click the Edit button.
Impact:
This can cause confusion.
Workaround:
Wait for the BIG-IP Next instance's hostname to load on the Instance Properties panel before clicking the Edit button.
1354265 : The icb pod may restart during install phase
Component: BIG-IP Next
Symptoms:
The icb pod may generate a core during the install phase, which causes it to restart. However, icb has been observed to restart cleanly with no issues.
Conditions:
The issue is seen during the upgrade install.
Impact:
After the first panic, icb restarts cleanly and no adverse impact has been observed.
Workaround:
None
1353589 : Provisioning of BIG-IP Next Access modules is not supported on VELOS, but containers continue to run
Component: BIG-IP Next
Symptoms:
1) Containers that belong to the BIG-IP Next Access module always run on BIG-IP Next on VELOS and rSeries.
2) On VE, the containers run only if the BIG-IP Next Access module is provisioned using the /api/v1/systems/{systemID}/provisioning API.
Conditions:
This is observed whenever BIG-IP Next is deployed on VELOS/rSeries.
Impact:
Containers that belong to the BIG-IP Next Access module keep running all the time, which can lead to wasted resources on VELOS and rSeries.
Workaround:
If you do not want to run BIG-IP Next Access containers as part of a BIG-IP Next tenant deployment, you can use this workaround before installing the tenant:
1) Run the following command on the standby controller:
sed -i 's/access: true/access: false/g' /var/F5/partition<partition-ID>/SPEC/<IMAGE_VERSION>.yaml
2) Trigger a failover from the partition CLI:
system redundancy go-standby
3) Install the tenant.
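For reference, on VE the Access module's provisioning state can be inspected through the endpoint noted in the symptoms above. A hedged sketch, in which the management address, token, and systemID are placeholders and the exact response shape may differ by release:

```
# Query the current module provisioning state for a system (sketch only;
# substitute a real management address, bearer token, and system ID)
curl -sk -X GET \
  -H "Authorization: Bearer $TOKEN" \
  "https://<big-ip-next-address>/api/v1/systems/<systemID>/provisioning"
```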
1352969 : Upgrades with TLS configuration can cause TMM crash loop
Component: BIG-IP Next
Symptoms:
After upgrading from a version prior to 20.0.1, TMM can enter a crash loop and connectivity is lost.
Conditions:
- Keys and certificates are configured as files in TLS configuration.
- Upgrading from a version prior to 20.0.1.
Impact:
An error similar to the following is logged: Failed to connect to <IP address port: xx> No route to host
Workaround:
After upgrading, reconfigure the private key files so that validation occurs properly, and fix any existing mismatched keys and certificates.
1350365 : Performing licensing changes directly on a BIG-IP Next instance
Component: BIG-IP Next
Symptoms:
BIG-IP Next Central Manager will become out of sync with a managed BIG-IP Next instance if you perform licensing actions directly on the BIG-IP Next instance.
Conditions:
Add a BIG-IP Next instance to BIG-IP Next Central Manager. Perform licensing actions directly on the BIG-IP Next instance.
Impact:
BIG-IP Next Central Manager is no longer synchronized with its managed instance.
1350285-1 : Traffic is not passing after the tenant is licensed and network is configured
Component: BIG-IP Next
Symptoms:
After configuring and licensing the BIG-IP Next tenant, such as after an upgrade, traffic is not passing.
Conditions:
A BIG-IP Next tenant is configured without VLANs, a PUT request is made to create the L1 networking interface, and VLANs are then allocated to the tenant. In this scenario, the later-allocated VLANs do not take effect for the previously configured L1 network interface.
Impact:
Data traffic associated with the later-added VLANs will not be processed.
Workaround:
Allocate VLANs to the BIG-IP Next tenant before making the PUT call to create the L1 network interface. The L1 network interface is then associated with a VLAN allocated to that BIG-IP Next instance.
1329853-1 : Application traffic is intermittent when more than one virtual server is configured
Component: BIG-IP Next
Symptoms:
After deploying an application containing multiple virtual servers, only one of the virtual servers responds to clients.
In the Central Manager GUI, one virtual server is marked as red and the other is marked as green, even though you can ping all of the pool members for each of the virtual servers.
Conditions:
-- The application contains multiple virtual servers
-- The virtual servers share an identical virtual address and port
Alternatively, you can encounter this by deploying two different applications whose virtual address and port are identical.
Impact:
The application will deploy without error even if an IP address/port conflict occurs, and traffic will be disrupted to one or both of the virtual addresses.
Workaround:
Assign different virtual addresses and/or virtual ports to different application services. If any two existing applications have the same listeners defined, add unique listeners and redeploy.
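For example, two application services can be given unique listeners by varying the virtual address (or the port). A hedged AS3-style fragment in which the class and property names follow the AS3 schema but all values are illustrative:

```json
{
  "app1": {
    "serviceMain": {
      "class": "Service_HTTP",
      "virtualAddresses": ["10.1.10.20"],
      "virtualPort": 80
    }
  },
  "app2": {
    "serviceMain": {
      "class": "Service_HTTP",
      "virtualAddresses": ["10.1.10.21"],
      "virtualPort": 80
    }
  }
}
```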
1325713-2 : Monthly backup cannot be scheduled for the days 29, 30, or 31
Component: BIG-IP Next
Symptoms:
You cannot schedule a monthly backup on the last 3 days of the month (29, 30, or 31) because some months do not contain these days (for example, February).
Conditions:
Creating a monthly backup schedule from BIG-IP Next Central Manager that contains the days 29, 30, or 31.
Impact:
If you select these days for your schedule, BIG-IP Next Central Manager returns a 500 error.
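The limitation tracks the calendar itself: a schedule naming day 29, 30, or 31 refers to a date that some months simply do not have. GNU date illustrates the same gap:

```shell
# February 2025 has no day 30; GNU date rejects the nonexistent date --
# the same calendar gap that breaks a monthly schedule on days 29-31.
date -d "2025-02-30" 2>/dev/null || echo "invalid date"   # prints "invalid date"
date -d "2025-02-28" +%Y-%m-%d                            # prints "2025-02-28"
```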
1134225 : AS3 declarations with a SNAT configuration do not get removed from the underlying configuration as expected
Links to More Info: K000138849
Component: BIG-IP Next
Symptoms:
The AS3-configured L4-serversides object retains a SNAT property that it should not have: SNAT was previously configured in the declaration and subsequently removed, but the property persists.
Conditions:
SNAT configuration was specified in the AS3 declaration and then subsequently removed.
Impact:
A SNAT cannot be removed once it has been added.
Workaround:
Remove the L4-serversides object, either by removing the relevant configuration from the AS3 declaration or by using DELETE /api/v1/L4-serversides, and then re-POST the AS3 declaration without the SNAT.
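The API steps in the workaround can be sketched with curl as follows. The management addresses and token are placeholders, and the AS3 document path in the second request is illustrative and may differ by release; only the DELETE /api/v1/L4-serversides path is taken from the workaround above:

```
# 1) Remove the stale L4-serversides object
curl -sk -X DELETE \
  -H "Authorization: Bearer $TOKEN" \
  "https://<big-ip-next-address>/api/v1/L4-serversides"

# 2) Re-POST the AS3 declaration with the SNAT property removed
#    (document path is illustrative)
curl -sk -X POST \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d @declaration-without-snat.json \
  "https://<central-manager-address>/api/v1/spaces/default/appsvcs/documents"
```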
1122689-3 : Cannot modify DNS configuration for a BIG-IP Next VE instance through API
Component: BIG-IP Next
Symptoms:
Making updates to BIG-IP Next Virtual Edition (VE) DNS configuration through onboarding or the API does not update the DNS configuration as expected.
Conditions:
Making updates to a BIG-IP Next DNS configuration through the API.
Impact:
The BIG-IP Next instance continues to use the DNS servers supplied by DHCP on the interface by default.
Workaround:
Prior to updating the BIG-IP Next DNS configuration through the API, issue the following commands.
$ rm -f /etc/resolv.conf; touch /etc/resolv.conf
This removes all DNS configurations. DNS can then be managed through the BIG-IP Next instance's API, and the DNS provided by DHCP is ignored.
★ This issue may cause the configuration to fail to load or may significantly impact system performance after upgrade
For additional support resources and technical documentation, see:
- The F5 Technical Support website: http://www.f5.com/support/
- The MyF5 website: https://my.f5.com/manage/s/
- The F5 DevCentral website: http://devcentral.f5.com/