How to: Install and configure BIG-IP Next Central Manager on KVM

Prerequisites

Before you can install a BIG-IP Next Central Manager image in an OpenStack environment, you need to configure the following on your KVM host:

  • Security group

  • Key pair

  • Floating IP address for each externally accessible interface

  • KVM QEMU 6.2 on Ubuntu 22.04 (machine type i440fx)

  • CLI utilities to work with KVM images: virt-install (virtinst package), virt-viewer, virsh, cloud-localds

  • Instance sizes for deployment:

    Deployment Type               Resources
    Standalone Node               8 vCPUs, 16 GB RAM, 350 GB disk
    High Availability (3 nodes)   3 × 8 vCPUs, 3 × 16 GB RAM, 3 × 350 GB disk

    Each BIG-IP Next Central Manager virtual machine (VM) requires 350 GB of disk.

  • Access to MyF5 for downloads

  • Inputs for your network

  • Review the appropriate Release Notes

If you are unfamiliar with these prerequisites, refer to the OpenStack documentation for details.
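If you are deploying on a plain KVM host rather than through the OpenStack dashboard, the CLI utilities listed above can drive the equivalent deployment. The following is only a sketch: the image path, VM name, and bridge name are placeholders rather than values from this guide, and the script builds the virt-install command for review instead of executing it, since disk layout and network names vary by site.

```shell
# Sketch only: file names, VM name, and bridge are assumptions, not
# values from this guide. Review before running on a real KVM host.
IMAGE=BIG-IP-Next-CentralManager.qcow2   # the downloaded qcow2 image
SEED=seed.img                            # cloud-init seed built by cloud-localds

# cloud-localds packs a cloud-init user-data file into a seed disk:
#   cloud-localds "$SEED" user-data

# virt-install sized per the standalone-node prerequisites above
# (8 vCPUs, 16 GB RAM, >= 350 GB disk, i440fx machine type).
CMD="virt-install --name central-manager --machine pc-i440fx-6.2 \
--vcpus 8 --memory 16384 \
--disk $IMAGE --disk $SEED,device=cdrom \
--network bridge=br0 --import --noautoconsole"
echo "$CMD"
```

The echoed command can be pasted into the host shell once the placeholders match your environment.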

Procedures

Upload a BIG-IP Next Central Manager image to your OpenStack environment

Download a KVM Cloud Image (qcow2)

To install the BIG-IP Next Central Manager in OpenStack, the software image must be in the OpenStack environment.

  1. Log in to MyF5 Downloads.

  2. Accept the EULA and click Next.

  3. Under Group, select BIG-IP_Next.

  4. Under Product Line, select Central Manager.

  5. Under Select a product container, choose the appropriate version.

  6. Under Select a download file, select the qcow2 image file.

  7. Under Download locations, select the appropriate location.

  8. Click Download.

  9. Repeat these steps to also download the appropriate checksum file.

  10. Save the qcow2 file temporarily to your workstation’s local storage.

  11. Log in to your OpenStack environment dashboard.

  12. Navigate to Project → Compute → Images to display the Images page, and then click Create Image.

  13. On the Create Image dialog box, type a name and an optional description for the BIG-IP Next Central Manager image in the Image Name and Image Description fields.

  14. Click Browse, and then navigate to the location where you saved the qcow2 image and select it.

  15. To comply with your policies or business requirements, you can specify values for other fields on this page and on the next two pages, but these additional values are optional.

  16. Click Create Image to start the image upload to the OpenStack environment. Although a progress bar should display to indicate the progress of this process, timeouts in the OpenStack user interface sometimes occur. If the process seems to be taking longer than it should, refresh to update the view.
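Before uploading, it is worth verifying the qcow2 file against the checksum file you downloaded in step 9. A minimal sketch follows; the file name is a placeholder, and a stand-in file is created here only so the verification step is runnable as shown (in practice both the image and the .sha256 file come from MyF5):

```shell
image="BIG-IP-Next-CentralManager.qcow2"

# Stand-in for the real download, so the check below can run as-is;
# in practice, $image and the .sha256 file both come from MyF5.
printf 'qcow2-data' > "$image"
sha256sum "$image" > "$image.sha256"

# The actual verification step: compare the file against the checksum.
sha256sum -c "$image.sha256" && echo "checksum verified"
```

If the checksum does not match, re-download the image before uploading it.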

Create a new disk volume

Before you can create a new disk volume for a BIG-IP Next Central Manager, you must have the F5 software image in your OpenStack environment.

You need a disk volume in your OpenStack environment that you can use to launch the new BIG-IP Next Central Manager. Depending on your business practices, permissions, or personal preference, you might choose to create the new volume as part of the launch process, instead of as a separate task. The workflow provided here is just one way to get the job done.

  1. From the OpenStack dashboard, click Project → Volumes → Volumes to display the Volumes page, and then click Create Volume.

  2. Type a Volume Name and Description. From Volume Source, select Image.

  3. From Use image as a source, select the image you uploaded for install.

  4. In Size (GiB), type the minimum disk size for the new volume you are creating. The volume size you specify must not be less than the actual size of the original image (QCOW2) file.
    For BIG-IP Next Central Manager, 350 GB is the recommended size.

  5. Click Create Volume to start the process.
    Although a progress bar should display to indicate the progress of this process, timeouts in the OpenStack user interface sometimes occur. If the process seems to be taking longer than it should, refresh to update.

Use the wizard to launch a new BIG-IP Next Central Manager

Before you launch a new BIG-IP Next Central Manager instance, you must have created the disk volume in your OpenStack environment.

To use a BIG-IP Next Central Manager in your OpenStack environment, you need to launch the BIG-IP Next Central Manager virtual machine.
Note: Do not use configuration settings (CPU, RAM, and network adapters) that provide fewer resources than those recommended and described here.

  1. From the OpenStack dashboard, click Project → Compute → Instances to display the Instances page, and then click Launch Instance.
    The Launch Instance dialog box opens on the Details page.

  2. Type an Instance Name, and make sure the Count reflects how many BIG-IP Next Central Manager virtual machines you want to create, then click Next.
    The Source page opens.

  3. From Select Boot Source, choose Volume.

  4. Identify the volume you want, click to select it, and then click Next.
    Note: If many volumes are available, you can type into the filters box so only volumes matching your criteria are listed, and then sort the columns to find the volume you are looking for more easily.
    The Flavor page opens. In the OpenStack environment, a Flavor is a predefined virtual hardware profile used to size new instances.

  5. Identify the flavor you want to use, click to select it, and then click Next.
    Select the appropriate flavor to support the required specs (8 vCPU, 16 GB RAM, 350 GB Total Disk).
    Note: Again, you can use the filter and column sort to find the flavor more quickly.
    The Networks page opens.

  6. Starting with the management network, identify the network interfaces that you want the BIG-IP Next Central Manager to have, and click to add them.
    It is essential that you add the management network first. If the new BIG-IP Next Central Manager virtual machine requires further customization to comply with your business processes, you can use the Next and Back buttons to access the remaining pages and specify additional detail. Otherwise, this virtual machine is ready to launch.

  7. When you are satisfied with the specifications for the new BIG-IP Next Central Manager, click Launch Instance.
    Although a progress bar should display to indicate the progress of this process, timeouts in the OpenStack user interface sometimes occur. If the process seems to be taking longer than it should, refresh to update.

Set up the BIG-IP Next Central Manager management network

How you set up the BIG-IP Next Central Manager management user interface depends on how the network interfaces are attached.

  • If the interfaces attach to a tenant network subnet that allocates IP addresses from a pre-defined static IP pool, an IP address is automatically assigned to the interface during deployment. In this case, you must use this address to access the BIG-IP Next Central Manager user interface or use the command line interface (CLI).

  • If the interfaces (whether attached to tenant or external networks) allocate their IP addresses using a DHCP server, then an IP address is automatically assigned to the BIG-IP Next Central Manager interface during deployment. You can use this address to access the BIG-IP Next Central Manager user interface or use the CLI.

  • If the interfaces attach to a tenant or external network without a mechanism for allocating IP addresses, you must manually assign an unused address to the network interface that complies with the required subnet criteria.

Change the BIG-IP Next Central Manager default password

After the system completes the initialization process, a built-in admin account is enabled that provides you with the access you need to complete initial configuration and setup.

The admin account provides initial user access.
The initial admin account password is admin.

You should change the password for the admin account before bringing a system into production.

  1. From the OpenStack dashboard, click Project → Compute → Instances to display the Instances page.

  2. On the right side of the screen, from the Actions list of your BIG-IP Next Central Manager virtual machine, select Console to open a console session for this virtual machine.

  3. At the login prompt, type admin.

  4. At the password prompt, type admin.
    You are prompted to change the default password the first time you log in.

  5. Follow the prompts and set a new password.
    After setting the new password, the BIG-IP Next Central Manager Console will open.

  6. When you log in as admin with the new password, the system displays a “welcome” banner along with information unique to your new BIG-IP Next Central Manager, similar to the following:

    ->  Pre-authentication banner message from server:
    |    ________   ___  _________    _______    _  __        __
        / __/ __/  / _ )/  _/ ___/___/  _/ _ \  / |/ /____ __/ /_
      / _//__ \  / _  |/ // (_ /___// // ___/ /    / -_) \ / __/
      /_/ /____/ /____/___/\___/   /___/_/    /_/|_/\__/_\_\__/
        _____         __           __  __  ___
      / ___/__ ___  / /________ _/ / /  |/  /__ ____  ___ ____ ____ ____
      / /__/ -_) _ \/ __/ __/ _ `/ / / /|_/ / _ `/ _ \/ _ `/ _ `/ -_) __/
      \___/\__/_//_/\__/_/  \_,_/_/ /_/  /_/\_,_/_//_/\_,_/\_, /\__/_/
    
    
      --- Welcome to the F5 BIG-IP Next Central Manager Console ---
    
    +-----------------------------------------------------------------------------------+
    | * To set up networking and install the software bundle, use the following command:|
    | -> setup                                                                          |
    +-----------------------------------------------------------------------------------+
    
    ->Platform Details
      Hostname:..........central-manager
      Release:...........20.1.0
      Platform Version:..0.8.109
      App Version:.......0.178.14
      BuildDate:.........2024.01.23    
      Flavor:............Small
      K8s Platform:......v1.27.7+k3s1
    

Install BIG-IP Next Central Manager

Run the setup script

Note: Running setup is required if you want to configure a static IP address or DNS servers for the VM instance. These settings are available only during the initial setup; after the CM services are started, they cannot be added. Follow the instructions below.

  1. While still on the CM console, at the $ prompt, type setup.
    A welcome message and instructions display.

    Note: Message if BIG-IP Next Central Manager is already installed:

    BIG-IP Next Central Manager has already been installed.
    Running setup again will destroy all current configuration and data.
    Please run /opt/cm-bundle/cm uninstall -c prior to running setup if you wish to continue.

  2. Enter the inputs as prompted.

    Example values are shown within parentheses. If there is a default value, it will be shown within square brackets and will automatically be used if no value is entered.

Network with DHCP

Hostname (example.com):
['10.145.77.192'] found on the management interface.
Do you want to configure a static IP address (N/y) [N]:  
Primary NTP server address (0.pool.ntp.org) (optional):
Alternate NTP server address (1.pool.ntp.org) (optional):

Network with a management IP address (No DHCP)

Hostname (e.g. example.com): central-manager-server-1
IP address(es) ['10.192.10.136'] found on the management interface.
Do you want to configure a static IP address (N/y) [N]: Y
Management IP Address & Network Mask [192.168.1.245/24]: 10.192.10.139/24
Management Network Default Gateway [192.168.1.1]: 10.192.10.1
Primary DNS nameserver (e.g. 192.168.1.2): 10.196.1.1
Alternate DNS nameserver (e.g. 192.168.1.3) (optional): 10.196.1.1
Primary NTP server address (e.g. 0.ubuntu.pool.ntp.org) (optional):
Alternate NTP server address (e.g. 1.ubuntu.pool.ntp.org) (optional):
IPv4 network CIDR to use for service IPs [100.75.0.0/16]:
IPv4 network CIDR to use for pod IPs [100.76.0.0/14]:

Note: About the two inputs for service and pod IPs: the system uses these two internal IP ranges for communication between individual containers. Make sure the defaults listed do not conflict with the existing IP address space on your network. If they do, choose a different IP range for the service and pod IPs to resolve the conflict.
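As a quick sanity check before accepting the defaults, you can test whether the service and pod CIDRs overlap any of your local subnets. A rough sketch using shell integer arithmetic; the subnets compared below are examples, not values from your network:

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip2int() { set -- $(printf '%s' "$1" | tr '.' ' '); echo $(( ($1<<24)|($2<<16)|($3<<8)|$4 )); }

# Print "conflict" if two IPv4 CIDRs overlap, else "ok".
overlap() {
  i1=$(ip2int "${1%/*}"); i2=$(ip2int "${2%/*}")
  p1=${1#*/}; p2=${2#*/}
  m=$(( p1 < p2 ? p1 : p2 ))                          # compare on the shorter prefix
  mask=$(( m == 0 ? 0 : (0xFFFFFFFF << (32 - m)) & 0xFFFFFFFF ))
  if [ $(( i1 & mask )) -eq $(( i2 & mask )) ]; then echo conflict; else echo ok; fi
}

overlap 100.75.0.0/16 10.192.10.0/24   # default service CIDR vs an example site subnet
overlap 100.76.0.0/14 100.77.0.0/16    # default pod CIDR vs an overlapping example range
```

Here the first check passes, while the second reports a conflict because 100.77.0.0/16 falls inside 100.76.0.0/14.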

Summary and Installation

Summary
-------
Hostname: central-manager-server-1
Management Network Already Configured: False
Management IP Address: 10.192.10.139/24
Management Gateway: 10.192.10.1
DNS Servers: 10.196.1.1, 10.196.1.1
IPv4 network CIDR to use for service IPs: 100.75.0.0/16
IPv4 network CIDR to use for pod IPs: 100.76.0.0/14
Would you like to complete configuration with these parameters (Y/n) [N]:

    Type Y to complete.

Access the BIG-IP Next Central Manager GUI

  1. From a web browser, navigate to the address you configured earlier: https://<cm-ip-address-or-hostname>.

  2. Verify that the CM GUI appears.

    Note: The CLI password for admin and the GUI password are not the same. The default GUI credentials are admin/admin. Setting the CLI password for admin does not change the GUI password.

Proceed by creating a BIG-IP Next Instance to secure apps.

Set up the Standalone Node or High Availability (HA) using the BIG-IP Next Central Manager GUI

Follow these steps to configure the BIG-IP Next Central Manager using the GUI.

  1. From the web browser, enter the IP address of your Virtual Machine (VM) instance to access the Central Manager GUI.

  2. Log in to the Central Manager GUI for the first time using the default admin/admin credentials. You will be prompted to create a new password the first time you log in.

  3. Type the Current Password, specify a New Password, re-enter it in Confirm New Password, and then click Save. The password must meet the criteria displayed on the screen.

  4. You can now use this new password to sign in to BIG-IP Next Central Manager.

  5. Click Setup on the BIG-IP Next Central Manager window. Follow the instructions and click Next to proceed.

  6. If you want to deploy BIG-IP Next Central Manager as a Standalone Node, skip steps 8–10 and proceed to step 11.

  7. If you want to deploy BIG-IP Next Central Manager in High Availability with three nodes, make sure to change the default credentials for the additional two nodes as described in step 3.

  8. From the BIG-IP Next Central Manager GUI Setup, click Nodes first, then click the +Add button to add a node to the Central Manager HA setup.

    Note: The +Add option is available only during the initial setup. After the CM services are started, adding more nodes is not possible, and the +Add option is disabled.

    a. Enter the Username, Password, and IP Address of the Virtual Machines (VMs) to be added.

    b. Click Save.

    c. Click Add in the Add Node and Enable Clustering? pop-up window.

    d. Verify the fingerprint of the Node and click Accept in the Continue Connecting? pop-up window.

    When the second node is added to the Central Manager HA setup, clustering must be enabled. During this process, you are logged out of the Central Manager GUI.

    Note: Wait for up to 15 minutes for Central Manager Services to start and become operational.

  9. Repeat step 8 to add more nodes to the Central Manager HA setup. Verify that the status of all added Central Manager nodes is Ready.

  10. This step is optional, but setting up external storage (NFS or SAMBA) is highly recommended. Click Next and follow the procedure below to configure it for the BIG-IP Next Central Manager. External storage provides benefits such as storing instance and CM backup files, storing analytics, and preventing CM disk space from filling up.

    Note: The external storage can only be enabled and configured during the BIG-IP Next Central Manager installation and cannot be enabled or modified after installation.

    a. Toggle Enable external storage for the BIG-IP Next Central Manager System to on.

    b. From the Select the Storage Type dropdown menu, choose either an NFS or SAMBA server.

    c. Enter the Storage Server IP Address.

    d. Enter the Storage Share Directory. This is the source directory in which the backup file will be stored.

    e. Enter the Storage Server Share Path. This is the destination directory from which the restore will be performed.

    f. Set the Username and Password for the Samba Storage Server.

    g. Click Test Connection to verify that the external storage is successfully configured.

    Please wait until you see the Test connection status Success message.

  11. Click Start CM services. Wait for up to 15 minutes for Central Manager Services to start and become operational.

  12. After installation, log in to BIG-IP Next Central Manager as admin, click the Workspace icon next to the F5 icon, click System → CM Maintenance, and then click Properties. The screen displays the CM status as Completed.

Set up the Standalone Node or High Availability (HA) using the BIG-IP Next Central Manager API

Prerequisites

  • Make sure that you create three Virtual Machine (VM) instances to configure high availability. It might take 5–10 minutes for each instance to boot completely.

  • Authenticate with the BIG-IP Next Central Manager API. For details, refer to How to: Authenticate with the BIG-IP Next Central Manager API.

  • Change the default Central Manager password for all three VM instances by using the following API call.

    Note: You don’t need to log in to the VM over SSH. If you do for diagnostic purposes, make sure to change the default SSH password.

    POST  https://{{CM_Node_IP}}/api/change-password
    
    {
        "username": "admin",
        "temp_password": "temppwd",
        "new_password": "password"
    }
    

Create HA group and Start CM Services

  1. Log in to CM_Node_1 by sending a POST request to the /api/login endpoint.

    POST  https://{{CM_Node_1_IP}}/api/login
    
    {
      "username": "username",
      "password": "password"
    }
    

    Important

    • If you select Node_1 as your first instance, make sure you perform all operations on that same node.

  2. Optional: Check the node status by sending a GET request to the system/infra/nodes endpoint. Note each node’s address; you will use it to collect the fingerprints in the next step.

    GET  https://{{CM_Node_1_IP}}/api/v1/system/infra/nodes
    
  3. Collect the fingerprints of the nodes by sending a GET request to Node_1 using the system/infra/nodes/cert-fingerprint?address=<node_address> endpoint. Replace node_address with each node’s address to get that node’s fingerprint.

    GET  https://{{CM_Node_1_IP}}/api/v1/system/infra/nodes/cert-fingerprint?address=<node_address>
    
  4. Create the three-node group by sending a POST request to the system/infra/nodes endpoint on Node 1.

    POST  https://{{CM_Node_1_IP}}/api/v1/system/infra/nodes
    

    For the request payload, use the following example, modifying the values as required.
    node_address is the IP address of the node being added.
    fingerprint is that node’s fingerprint, used to validate its certificate when the node is added.

    [
        {
            "node_address": "{{CM_Node_2_IP}}",
            "username": "user1",
            "password": "password",
            "fingerprint": "{{CM_Node_2_Fingerprint}}"
        },
        {
            "node_address": "{{CM_Node_3_IP}}",
            "username": "user2",
            "password": "password",
            "fingerprint": "{{CM_Node_3_Fingerprint}}"
        }
    ]
    
  5. Check the node status again by sending a GET request to the /system/infra/nodes endpoint until you see that the nodes are in the ready state.

    GET  https://{{CM_Node_1_IP}}/api/v1/system/infra/nodes
    

    Note: It might take about 30 seconds for the cluster to reach the ready state.

  6. Optional: Configure the external storage by sending the POST request to /system/infra/external-storage endpoint on Node_1.

    POST https://{{CM_Node_1_IP}}/api/v1/system/infra/external-storage
    

    For the request payload, use the following example, modifying the values as required.

    {
        "storage_type": "NFS",
        "storage_address": "xxx.xxx.xxx.xxx",
        "storage_share_path": "/export/data",
        "storage_share_dir": ""
    }
    
  7. Optional: Verify the configured external storage by sending a GET request to the system/infra/external-storage endpoint on Node_1.

    GET https://{{CM_Node_1_IP}}/api/v1/system/infra/external-storage
    
  8. Start the CM services by sending the POST request to /system/infra/bootstrap endpoint on Node 1.

    POST  https://{{CM_Node_1_IP}}/api/v1/system/infra/bootstrap
    
  9. Check the bootstrap status by sending the GET request to system/infra/bootstrap endpoint on Node 1. Ensure that the bootstrap status is in the completed state.

    GET  https://{{CM_Node_1_IP}}/api/v1/system/infra/bootstrap
    

    Note: The status displays the progress of the Central Manager startup sequence, which takes approximately 15 minutes to complete.

  10. If needed, delete a node by sending a DELETE request to the /system/infra/nodes/{{NODE_NAME}} endpoint.

    DELETE https://{{CM_Node_1_IP}}/api/v1/system/infra/nodes/{{NODE_NAME}}
    

    Note: You can delete the node only before bootstrapping the system.
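The API calls in the steps above can be collected into a single script. A hedged sketch using curl follows: the endpoint paths match the steps above, but the IP addresses and credentials are placeholders, and the Authorization header is an assumption (use whatever token scheme How to: Authenticate with the BIG-IP Next Central Manager API describes). With DRY_RUN=1 the script prints each call for review instead of sending it.

```shell
DRY_RUN=1                          # set to 0 only after reviewing the output
CM1=10.192.10.136                  # Node_1: run every call against this node
CM2=10.192.10.137                  # a node to be added (placeholder)
TOKEN="<token-from-/api/login>"    # placeholder; see the authentication how-to

cm_api() {  # usage: cm_api METHOD PATH [JSON_BODY]
  if [ "$DRY_RUN" = 1 ]; then
    # Dry run: print the call that would be made.
    echo "curl -sk -X $1 https://$CM1$2 ${3:+-d '$3'}"
  else
    curl -sk -X "$1" "https://$CM1$2" \
      -H "Authorization: Bearer $TOKEN" \
      -H 'Content-Type: application/json' ${3:+-d "$3"}
  fi
}

cm_api POST /api/login '{"username":"admin","password":"password"}'
cm_api GET  "/api/v1/system/infra/nodes/cert-fingerprint?address=$CM2"
cm_api POST /api/v1/system/infra/bootstrap
cm_api GET  /api/v1/system/infra/bootstrap
```

The node-creation POST from step 4 fits the same pattern; pass the JSON payload from that step as the third argument to cm_api.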

Troubleshooting

This section describes some known issues related to the deployment of the BIG-IP Next Central Manager application and the possible remedies.

BIG-IP Next Central Manager installation times out

The BIG-IP Next Central Manager installation script (/opt/cm-bundle/cm install) can take up to 20 minutes to complete. If the application deployment times out instead of displaying Installation Complete, the system displays the following error:

Error: timed out waiting for the condition

In this case, a simple system reboot can sometimes address the issue. Run the following command on the BIG-IP Next Central Manager’s command-line terminal:

sudo systemctl reboot

After the system reboots, log in to the BIG-IP Next Central Manager terminal and make sure the BIG-IP Next Central Manager application Pods are all in a running state. Use the following command:

kubectl get pods

Additional Kubernetes deployment issues

If you run kubectl get pods and all Pods are running normally, your output should look similar to this:

13:29 $ kubectl get pods
NAME                                               READY   STATUS      RESTARTS   AGE
mbiq-vault-0                                       2/2     Running     0          25h
mbiq-db-postgresql-0                               2/2     Running     0          25h
mbiq-db-postgres-flyway-init-job-xm4ck             0/2     Completed   0          25h
mbiq-db-pgadmin4-78cb4c5bc7-vj55p                  2/2     Running     1          25h
mbiq-ado-feature-8b7579847-24jfp                   2/2     Running     0          25h
svclb-mbiq-ingress-nginx-controller-qgxq4          2/2     Running     0          25h
mbiq-nats-0                                        2/2     Running     0          25h
mbiq-kube-state-metrics-695f868d9-wm42v            1/1     Running     0          25h
mbiq-system-feature-5d9f6774df-fcvft               2/2     Running     0          25h
mbiq-ingress-nginx-controller-b4486c9d7-cklsj      1/1     Running     0          25h
alertmanager-mbiq-kube-prometheus-alertmanager-0   2/2     Running     0          25h
as3-workflow-feature-flyway-init-job-7znsl         0/2     Completed   0          25h
mbiq-node-exporter-59wc4                           1/1     Running     0          25h
mbiq-ui-667849dd97-ptk59                           1/1     Running     0          25h
as3-feature-flyway-init-job-ld5gk                  0/2     Error       0          25h
mbiq-app-deploy-utils-service-5569d44688-z2cwn     2/2     Running     0          25h
mbiq-license-feature-58fdb86c49-8k527              2/2     Running     0          25h
mbiq-ado-query-feature-68bfb68d69-5rbvq            2/2     Running     0          25h
mbiq-device-feature-6f994bfd5f-9xc42               2/2     Running     0          25h
mbiq-proxy-service-56557c5986-ckkb7                2/2     Running     1          25h
mbiq-kube-prometheus-operator-6c85cd89f9-cjf56     1/1     Running     0          25h
device-feature-flyway-init-job-zjgvt               0/2     Completed   0          25h
mbiq-apm-feature-85476b6654-h4x8m                  2/2     Running     0          25h
mbiq-as3-feature-84dc5fb498-gtgks                  2/2     Running     0          25h
mbiq-alert-feature-66c896bd4f-pcm6k                2/2     Running     1          25h
apm-feature-flyway-init-job-jqt2k                  0/2     Completed   0          25h
sslo-feature-flyway-init-job-pw55p                 0/2     Completed   0          25h
alert-feature-flyway-init-job-ck4tf                0/2     Completed   0          25h
mbiq-fast-feature-6b78b98689-hb7gx                 2/2     Running     0          25h
prometheus-mbiq-kube-prometheus-prometheus-0       2/2     Running     1          25h
mbiq-as3-workflow-feature-99674f76d-wcbcw          2/2     Running     1          25h
license-feature-flyway-init-job-zdgq5              0/2     Completed   0          25h
ado-query-feature-flyway-init-job-9pht9            0/2     Completed   0          25h
mbiq-certificate-feature-588ff78dd-97tw6           2/2     Running     0          25h
system-feature-flyway-init-job-47qn5               0/2     Completed   0          25h
mbiq-sslo-feature-bf9c466bc-khgpc                  2/2     Running     1          25h
certificate-feature-flyway-init-job-xqzzd          0/2     Completed   0          25h
fast-feature-flyway-init-job-lswrk                 0/2     Completed   0          25h
as3-feature-flyway-init-job-hvm8z                  0/2     Completed   0          25h
mbiq-waf-feature-666698bf57-6x4cd                  2/2     Running     1          25h
waf-feature-flyway-init-job-xd6sf                  0/2     Completed   0          25h
mbiq-fluentd-0                                     2/2     Running     0          25h
mbiq-gateway-feature-f9fd9b4d9-jmw99               2/2     Running     0          24h
mbiq-loki-0                                        2/2     Running     0          20h
mbiq-fast-service-85d97bcd6d-tzgm5                 2/2     Running     10         25h

If there is an issue with a specific Pod/container, you can check the logs for additional information on that container.
Note: When a Pod fails to start successfully, Kubernetes automatically attempts to start that Pod again. For troubleshooting purposes, you only need to be concerned with Pods that aren’t either Running or Completed after repeated attempts.
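Rather than scanning the whole table by eye, you can filter the STATUS column down to Pods that need attention. A small sketch follows, run here against a captured sample so it works standalone; in practice, pipe kubectl get pods straight into the awk filter:

```shell
# Captured sample of `kubectl get pods` output (abbreviated).
sample='NAME                                   READY  STATUS     RESTARTS  AGE
mbiq-vault-0                           2/2    Running    0         25h
as3-feature-flyway-init-job-ld5gk      0/2    Error      0         25h
device-feature-flyway-init-job-zjgvt   0/2    Completed  0         25h'

# Keep the header plus any Pod that is neither Running nor Completed.
echo "$sample" | awk 'NR==1 || ($3 != "Running" && $3 != "Completed")'

# Live equivalent:
#   kubectl get pods | awk 'NR==1 || ($3 != "Running" && $3 != "Completed")'
```

In this sample, only the Pod in the Error state survives the filter, which is exactly the set worth investigating further.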

To check the log for a specific container, use the following command:

kubectl logs <pod name> -c <container name>

The following example provides a list of log entries for the container named mbiq-system-feature in the Pod named mbiq-system-feature-76ccf87577-nsdlc.

kubectl logs mbiq-system-feature-76ccf87577-nsdlc -c mbiq-system-feature

You can use a similar command syntax to investigate issues with other Pods.

You can also get information about the Kubernetes node that runs the BIG-IP Next Central Manager. Use the following command to get resource allocation details for the Kubernetes node:

kubectl describe node central-manager

The following is an excerpt from a typical response you can expect from this command:

Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests      Limits
  --------           --------      ------
  cpu                3840m (48%)   8800m (110%)
  memory             3856Mi (12%)  7508Mi (23%)
  ephemeral-storage  0 (0%)        0 (0%)
  hugepages-1Gi      0 (0%)        0 (0%)
  hugepages-2Mi      0 (0%)        0 (0%)

For a list of current known issues, refer to the release notes: BIG-IP Next Fixes and Known Issues.