Last updated on: September 13, 2023.

Warning

The F5OS TRM is no longer maintained and will be decommissioned on 9/29/2023.

F5OS-A 1.0.0 - Software Architecture Overview

Feature Overview

F5 rSeries platforms run an all-new base operating system called F5OS-A, which is derived from the F5OS-C software used on VELOS hardware. It can be used to launch classic BIG-IP Virtual Edition instances (starting with version 15.1.5) and BIG-IP NEXT instances.

Feature Deeper Overview

  • BIG-IP experts: F5OS-A replaces the vCMP host in legacy F5 hardware. It supports rSeries appliances.

  • VELOS experts: F5OS-A is derived from F5OS-C. Because appliances have no system controllers, the redundant system-controller layer is removed, as is partition support.

F5OS-A provides everything needed to configure and manage the rSeries High platforms (r10xxx/r5xxx) and the rSeries Low platforms (r2xxx, r4xxx).

Base OS, Service, and ISO Software

Customers will primarily download and install a .iso image, but F5OS-A images are split into two parts, each with its own, somewhat independent version: the Base OS version and the Service version. This follows the model of F5OS-C for VELOS platforms.

  • The “Base OS” includes the Linux operating system. (Note that Red Hat has announced the discontinuation of CentOS; see https://access.redhat.com/announcements/6597981.)

  • The “Service” software is the F5-specific software that runs on top of the Base OS.

  • The Base OS software and Service software are bundled together into a single .iso image.

Installing or upgrading from a .iso sets both the Base OS version and the Service version to the version of the ISO. However, it is possible to apply a Base OS version that differs from the Service version, and vice versa, should the need arise.

Kubernetes is included - but not exposed

F5OS-A uses Kubernetes (K8s) for container orchestration, health monitoring, and resource allocation.

For K8s experts:

  • On rSeries appliances, K8s is running as a single-node cluster with the K8s master able to run regular pod workloads.

  • BIG-IP tenants are implemented as K8s pods:

    • For classic BIG-IP, KubeVirt is used to launch a BIG-IP Virtual Edition (KVM) instance. Instances are launched as pods in the default namespace (an illustrative listing follows this list).

    • For BIG-IP Modular Architecture, the instance is deployed as a collection of pods that implement the various subsystems.
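
As an illustration, tenant pods can be listed with kubectl from the root shell. The tenant and pod names below are hypothetical:

kubectl get pods (illustrative)
[root@appliance-1 ~]# kubectl get pods -n default
NAME        READY   STATUS    RESTARTS   AGE
tenant1-1   1/1     Running   0          2d1h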

F5OS-A is able to run a combination of classic BIG-IP tenants and BIG-IP Modular Architecture tenants.

The Kubernetes software will not be exposed to BIG-IP administrators; administrators will interact with the GUI, the CLI (ConfD), or the API to perform system tasks such as tenant provisioning, user management, interface management (including VLANs and trunks), and licensing.

Kubernetes comes in many flavors. For F5OS-A, K3s is used, which is a lightweight Kubernetes distribution suitable for single-node clusters. For more information on K3s, see https://rancher.com/products/k3s.

systemctl status k3s output
[root@appliance-1 ~]# systemctl status k3s
k3s.service - Lightweight Kubernetes
   Loaded: loaded (/etc/systemd/system/k3s.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2021-11-04 17:59:46 UTC; 2h 8min ago
     Docs: https://k3s.io
  Process: 37468 ExecStartPre=/bin/cp -f /usr/libexec/k3s/images/virtctl /usr/local/bin/virtctl (code=exited, status=0/SUCCESS)
  Process: 36322 ExecStartPre=/bin/sleep 20 (code=exited, status=0/SUCCESS)
 Main PID: 37470 (k3s-server)
    Tasks: 347
   Memory: 903.3M
   CGroup: /system.slice/k3s.service
           ├─ 1541 /var/lib/rancher/k3s/data/2f28cf14a020a87eb72de81cb148d862849392eb2345d13ba28b8cf8c3beb64c/bin/containerd-shim-runc-v2 -namespace k8s.io -id 2c9f7cc8c42ee7d3e428...
           ├─23067 /var/lib/rancher/k3s/data/2f28cf14a020a87eb72de81cb148d862849392eb2345d13ba28b8cf8c3beb64c/bin/containerd-shim-runc-v2 -namespace k8s.io -id 22f2f1f117c51b60e1ab...
           ├─37470 /usr/local/bin/k3s server
           ├─37590 containerd
           ├─37678 /var/lib/rancher/k3s/data/2f28cf14a020a87eb72de81cb148d862849392eb2345d13ba28b8cf8c3beb64c/bin/containerd-shim-runc-v2 -namespace k8s.io -id 169b7562f2b760ce4be1...
           ├─41751 /var/lib/rancher/k3s/data/2f28cf14a020a87eb72de81cb148d862849392eb2345d13ba28b8cf8c3beb64c/bin/containerd-shim-runc-v2 -namespace k8s.io -id 3c8c2fbd1409814d46b2...
           ├─41944 /var/lib/rancher/k3s/data/2f28cf14a020a87eb72de81cb148d862849392eb2345d13ba28b8cf8c3beb64c/bin/containerd-shim-runc-v2 -namespace k8s.io -id 42b65fcfadbc631bfbbb...
           ├─42394 /var/lib/rancher/k3s/data/2f28cf14a020a87eb72de81cb148d862849392eb2345d13ba28b8cf8c3beb64c/bin/containerd-shim-runc-v2 -namespace k8s.io -id b9db7d93c21e5dc23b63...
           ├─42465 /var/lib/rancher/k3s/data/2f28cf14a020a87eb72de81cb148d862849392eb2345d13ba28b8cf8c3beb64c/bin/containerd-shim-runc-v2 -namespace k8s.io -id 54893d367a23ef70ad88...
           ├─42728 /var/lib/rancher/k3s/data/2f28cf14a020a87eb72de81cb148d862849392eb2345d13ba28b8cf8c3beb64c/bin/containerd-shim-runc-v2 -namespace k8s.io -id ef73bce4d256c2893202...
           ├─42973 /var/lib/rancher/k3s/data/2f28cf14a020a87eb72de81cb148d862849392eb2345d13ba28b8cf8c3beb64c/bin/containerd-shim-runc-v2 -namespace k8s.io -id e63bc4771ba7b5cd9628...
           ├─43015 /var/lib/rancher/k3s/data/2f28cf14a020a87eb72de81cb148d862849392eb2345d13ba28b8cf8c3beb64c/bin/containerd-shim-runc-v2 -namespace k8s.io -id 730b9341414b455976d3...
           ├─43917 /var/lib/rancher/k3s/data/2f28cf14a020a87eb72de81cb148d862849392eb2345d13ba28b8cf8c3beb64c/bin/containerd-shim-runc-v2 -namespace k8s.io -id ff5feb74187be536c36c...
           ├─44270 /var/lib/rancher/k3s/data/2f28cf14a020a87eb72de81cb148d862849392eb2345d13ba28b8cf8c3beb64c/bin/containerd-shim-runc-v2 -namespace k8s.io -id 663fe5aa42918e433cbf...
           ├─44616 /var/lib/rancher/k3s/data/2f28cf14a020a87eb72de81cb148d862849392eb2345d13ba28b8cf8c3beb64c/bin/containerd-shim-runc-v2 -namespace k8s.io -id 04c308e82830a7eef50f...
           ├─47890 /var/lib/rancher/k3s/data/2f28cf14a020a87eb72de81cb148d862849392eb2345d13ba28b8cf8c3beb64c/bin/containerd-shim-runc-v2 -namespace k8s.io -id e405c676e8cd0fed9c65...
           └─48160 /var/lib/rancher/k3s/data/2f28cf14a020a87eb72de81cb148d862849392eb2345d13ba28b8cf8c3beb64c/bin/containerd-shim-runc-v2 -namespace k8s.io -id 04161dda5a9c77cd419f...

Nov 04 20:03:48 appliance-1.chassis.local k3s[37470]: E1104 20:03:48.696080   37470 fieldmanager.go:191] "[SHOULD NOT HAPPEN] failed to update managedFields" err="failed ..."/, Kind="
Nov 04 20:03:48 appliance-1.chassis.local k3s[37470]: E1104 20:03:48.698568   37470 node_controller.go:363] Error patching node with cloud ip addresses = [failed to patch...ce-1.chass
Nov 04 20:04:48 appliance-1.chassis.local k3s[37470]: E1104 20:04:48.707656   37470 fieldmanager.go:191] "[SHOULD NOT HAPPEN] failed to update managedFields" err="failed ..."/, Kind="
Nov 04 20:04:48 appliance-1.chassis.local k3s[37470]: E1104 20:04:48.710085   37470 node_controller.go:363] Error patching node with cloud ip addresses = [failed to patch...ce-1.chass
Nov 04 20:05:48 appliance-1.chassis.local k3s[37470]: E1104 20:05:48.717845   37470 fieldmanager.go:191] "[SHOULD NOT HAPPEN] failed to update managedFields" err="failed ..."/, Kind="
Nov 04 20:05:48 appliance-1.chassis.local k3s[37470]: E1104 20:05:48.720122   37470 node_controller.go:363] Error patching node with cloud ip addresses = [failed to patch...ce-1.chass
Nov 04 20:06:48 appliance-1.chassis.local k3s[37470]: E1104 20:06:48.728881   37470 fieldmanager.go:191] "[SHOULD NOT HAPPEN] failed to update managedFields" err="failed ..."/, Kind="
Nov 04 20:06:48 appliance-1.chassis.local k3s[37470]: E1104 20:06:48.731268   37470 node_controller.go:363] Error patching node with cloud ip addresses = [failed to patch...ce-1.chass
Nov 04 20:07:48 appliance-1.chassis.local k3s[37470]: E1104 20:07:48.739068   37470 fieldmanager.go:191] "[SHOULD NOT HAPPEN] failed to update managedFields" err="failed ..."/, Kind="
Nov 04 20:07:48 appliance-1.chassis.local k3s[37470]: E1104 20:07:48.741517   37470 node_controller.go:363] Error patching node with cloud ip addresses = [failed to patch...ce-1.chass
Hint: Some lines were ellipsized, use -l to show in full.

K3s, Kubernetes, OpenShift, and kubectl/oc

OpenShift is used for container orchestration on VELOS chassis hardware running F5OS-C, but not on rSeries hardware running F5OS-A.

K3s is a lightweight Kubernetes distribution, and kubectl is bundled with it; kubectl can be used to run commands against the K3s API.

OpenShift is Red Hat’s implementation of Kubernetes, and Red Hat’s command-line tool, ‘oc’, is not bundled in F5OS-A.
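
A quick illustration of this difference (the output below is hypothetical, not captured from a system): kubectl is on the path of the root shell, while oc is not.

checking for kubectl and oc (illustrative)
[root@appliance-1 ~]# which kubectl
/usr/local/bin/kubectl
[root@appliance-1 ~]# which oc
/usr/bin/which: no oc in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin)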

F5OS Versioning Scheme

F5OS releases fall into four types:

1. Major Release
  • Description/Frequency: Usually every year, when the major number changes or per PM business need. Gives an opportunity to deprecate/retire older APIs (not the previous release).
  • Lifecycle: 3 years
  • Test Scope: Full regression testing of existing features; full manual and automated testing of new features.
  • Example Release: 1.x.y-BuildNum, 2.x.y-BuildNum

2. Minor Release
  • Description/Frequency: Feature release based on PM discretion, usually 2-3 per year.
  • Lifecycle: Up to 3 years
  • Test Scope: Full regression testing of existing features; full manual and automated testing of new features.
  • Example Release: 1.1.x-BuildNum, 1.2.x-BuildNum

3. Patch Release
  • Description/Frequency: Bug fixes, usually on demand based on the criticality of the bugs.
  • Lifecycle: Up to 3 years
  • Test Scope: Limited manual testing focusing on the impacted area, plus the fully automated test suite.
  • Example Release: 1.1.1-BuildNum, 1.1.2-BuildNum

4. Engineering Hot Fix
  • Description/Frequency: Nightly build plus a blocking customer bug fix.
  • Lifecycle: Until the next point release containing the fix is out.
  • Test Scope: NSIT testing plus focused manual testing of the impacted area.
  • Example Release: 1.1.1-BuildNum-EHF-N

GUI Screen Shots

The BIG-IQ GUI framework is used for F5OS, so elements and navigation should be familiar to those who have seen and used it before.

CLI

The CLI is implemented with ConfD. This is the same approach taken in F5OS-C (VELOS chassis), but with reduced functionality (for example, no support for partitions). The CLI documentation is posted publicly and can be found at https://clouddocs.f5.com/api/velos-api/cli-index.html

API

F5OS-A listens on TCP port 443 for API requests. It uses RESTCONF, defined in RFC 8040, to implement the API. RESTCONF uses a data modeling language called YANG (Yet Another Next Generation) v1.1; see RFC 7950. The API is published publicly at https://clouddocs.f5.com/api/velos-api/.

Restconf example: get the system license
$ curl -sku admin:admin https://<rSeries_mgmt_ip>/api/data/openconfig-system:system/f5-system-licensing:licensing/f5-system-licensing:config
...<outputs the license>...

$ curl -sku admin:admin -H "Accept: application/yang-data+json" https://<rSeries_mgmt_ip>/api/data/openconfig-system:system/f5-system-licensing:licensing/f5-system-licensing:config
...<outputs the license as a json object>...
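
If a JSON-capable tool such as jq is available on the client machine (an assumption; it is not part of F5OS-A), the JSON response can be pretty-printed or filtered client-side:

Restconf example: pretty-print the license with jq
$ curl -sku admin:admin -H "Accept: application/yang-data+json" https://<rSeries_mgmt_ip>/api/data/openconfig-system:system/f5-system-licensing:licensing/f5-system-licensing:config | jq .
...<outputs the license as pretty-printed json>...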

iHealth

Generate a qkview in the GUI via System Settings :: System Reports. At the CLI, use system diagnostics qkview capture. Qkviews from F5OS-A can be uploaded to iHealth.
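
For example, from the ConfD CLI (the prompt and elided output are illustrative):

qkview capture at the CLI (illustrative)
appliance-1# system diagnostics qkview capture
...<qkview file is generated and can then be uploaded to iHealth>...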

Statistics

Note: For BIG-IP tenants, BIG-IP statistics continue to function normally when you are logged into the BIG-IP tenant. You cannot access BIG-IP tenant statistics from the F5OS-A host.

Interface statistics can be found in the GUI in Network Settings :: Interface Statistics. At the command line (ConfD), use show interfaces | tab to see the same data. (Tip: piping to tab tells ConfD to print the data as a table.)
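
For example (the prompt and elided output are illustrative):

show interfaces | tab (illustrative)
appliance-1# show interfaces | tab
...<interface counters displayed as a table>...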

Logs

F5OS-A uses rsyslogd for system logging. For more information on logging commands, see F5OS-A - system logging.

Licensing, Provisioning, and other Requirements

As of this writing (November 2021), F5OS-A uses the same Auth5b licensing that classic BIG-IP uses. Standard BIG-IP entitlement checking applies.

Platform list for the license server:

  • R10600/10800/10900

  • R5600/5800/5900

  • R2600/2800

Good/better/best licenses are available, as well as individual module licenses. The F5OS-A host passes the license to the BIG-IP tenants.

You can apply a license using the GUI or via ConfD:

appliance-1(config)# system licensing install registration-key <key>
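
To confirm that the license was applied, the same licensing subtree can be read back with a show command (illustrative; see the CLI reference linked above for exact output):

show system licensing (illustrative)
appliance-1# show system licensing
...<license summary, including registration key and service dates>...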

Is It Working or Is It Failing?

From the ConfD CLI:

show cluster
vanquish-01# show cluster
cluster state
cluster disk-usage-threshold state warning-limit 85
cluster disk-usage-threshold state error-limit 90
cluster disk-usage-threshold state critical-limit 97
cluster disk-usage-threshold state growth-rate-limit 10
cluster disk-usage-threshold state interval 60
cluster nodes node node-1
state enabled true
state node-running-state running
state platform fpga-state FPGA_RDY
state platform dma-agent-state DMA_AGENT_RDY
state node-info creation-time 2021-10-26T20:55:26Z
state node-info cpu 48
state node-info pods 110
state node-info memory 15729680Ki
state ready-info ready true
state ready-info last-transition-time 2021-10-26T21:08:54Z
state ready-info message "kubelet is posting ready status"
state out-of-disk-info last-transition-time ""
state out-of-disk-info message ""
state disk-pressure-info disk-pressure false
state disk-pressure-info last-transition-time 2021-10-26T21:02:46Z
state disk-pressure-info message "kubelet has no disk pressure"
state disk-usage used-percent 31
state disk-usage growth-rate 1
state disk-usage status in-range
DISK DATA    DISK DATA
NAME         VALUE
------------------------
available    77968138240
used         33831608320

STAGE NAME STATUS TIMESTAMP VERSION
--------------------------------------------------------------
K3SClusterInstall done 2021/10/26-20:55:53 1.21.1.1.8.0

cluster cluster-status summary-status "K3S cluster is initialized and ready for use."
INDEX STATUS
---------------------------------------------------------------------------------------------
0 2021-11-04 17:59:10.651696 - applianceMainEventLoop::Orchestration manager startup.
1 2021-11-04 17:59:10.656161 - Can now ping appliance-1.chassis.local (100.65.60.1).
2 2021-11-04 17:59:11.082261 - Successfully ssh'd to appliance 127.0.0.1.
3 2021-11-04 17:59:19.381679 - Appliance 1 is ready in k3s cluster.
4 2021-11-04 17:59:19.381735 - K3S cluster is ready.
5 2021-11-04 17:59:47.134061 - K3s IMAGE update is succeeded.

From the root prompt:

kubectl cluster-info
[root@appliance-1 ~]# kubectl cluster-info
Kubernetes control plane is running at https://100.75.3.71:6443
CoreDNS is running at https://100.75.3.71:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://100.75.3.71:6443/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
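
Another quick check from the root prompt is to confirm that the node reports Ready (the output shown is an illustrative sketch consistent with the cluster state above, not a verbatim capture):

kubectl get nodes (illustrative)
[root@appliance-1 ~]# kubectl get nodes
NAME                        STATUS   ROLES                  AGE   VERSION
appliance-1.chassis.local   Ready    control-plane,master   9d    v1.21.1+k3s1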

Tenants

F5OS-A’s primary function is to create, provision, and deploy BIG-IP tenants. F5OS-A does not pass live network application traffic; that is the job of the tenants. In this version, only classic BIG-IP tenants are supported.

Before creating tenants, the rSeries administrator first needs to set up L2 networking (create VLANs and assign interfaces to them), because these are needed in order to configure a BIG-IP tenant.
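
A rough sketch of that L2 setup from the ConfD CLI follows; the VLAN ID, VLAN name, and interface are hypothetical, and exact syntax may vary by release (consult the CLI reference linked above):

example L2 setup (illustrative)
appliance-1(config)# vlans vlan 100 config vlan-id 100 name app-vlan
appliance-1(config-vlan-100)# exit
appliance-1(config)# interfaces interface 1.0 ethernet switched-vlan config trunk-vlans [ 100 ]
appliance-1(config-interface-1.0)# exit
appliance-1(config)# commit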

F5OS-A is multi-tenant, which means that you can create and deploy one or more BIG-IP tenants. BIG-IP tenants are assigned to VLANs, and VLANs are assigned to interfaces.

Tenants can be in one of three states (similar to vCMP):

State         Meaning
Configured    Tenant exists on partition; hardware resources not allocated
Provisioned   Virtual disks created
Deployed      Tenant (VM) is running; resources (CPU, memory, disk) in use
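
Putting it together, a deployed tenant could then be created from the ConfD CLI along these lines; the tenant name, image file, addresses, and resource values are hypothetical, and exact attribute names may vary by release (consult the CLI reference linked above):

example tenant deployment (illustrative)
appliance-1(config)# tenants tenant bigip1 config image BIGIP-15.1.5-x.x.x.ALL-F5OS.qcow2.zip.bundle vlans [ 100 ] vcpu-cores-per-node 4 memory 14848 mgmt-ip 192.0.2.10 prefix-length 24 gateway 192.0.2.1 running-state deployed
appliance-1(config-tenant-bigip1)# commit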