Points of Management in VELOS¶
There are three main points of management within the VELOS chassis: the system controllers, the chassis partitions, and the individual tenants. Each supports its own CLI, webUI, and API access, and each has its own authentication and users.
Additionally, each runs its own version of software: tenants can run specific versions of TMOS that have been approved to run on the VELOS platform, while system controllers and chassis partitions each run their own version of F5OS-C software. The supported TMOS tenant versions and their corresponding F5OS versions for the various VELOS platforms are summarized below:
In general, TMOS versions 14.1.4 and later, 15.1.4 and later, and 17.1.x and later are supported on the BX110 blades in the CX410 chassis. There are no plans to support versions 16.0.x, 16.1.x, or 17.0.x, and there are no plans to support versions prior to 14.1.4.
The F5OS-C platform layer in VELOS runs its own version of F5OS, which is unique to the VELOS chassis. On downloads.f5.com, the VELOS versions of F5OS are referred to as F5OS-C, where the C stands for chassis. The rSeries appliances also run F5OS, but that version is designated as F5OS-A, where A stands for appliance. Most of the code and configuration interfaces of F5OS are common between VELOS and rSeries, but VELOS has unique F5OS features that are chassis specific. VELOS has two layers of F5OS (system controller and chassis partition), and each of these has its own software image, in addition to the tenants that run TMOS.
At the F5OS-C platform layer, initial configuration consists of out-of-band management IP addresses, routing, and other system parameters such as DNS and NTP. Licensing is also configured at the F5OS layer and is similar to VIPRION with vCMP configured, in that it is applied at the chassis level and inherited by all tenants. All these items are configured at the system controller layer. The administrator also configures chassis partitions (groupings of VELOS blades/slots), each of which has its own management IP address and CLI, GUI, and API interfaces.
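As an illustrative sketch only, system-level settings such as NTP can be pushed to the system controller through the F5OS RESTCONF API, which exposes OpenConfig YANG models. The management IP, credentials, port, and exact endpoint path below are assumptions for illustration; consult the F5OS-C API reference for the authoritative schema.

```shell
# Hypothetical sketch: add an NTP server on the VELOS system controller via
# RESTCONF. Host 192.0.2.10, port 8888, and admin credentials are placeholders.
curl -sk -u admin:admin \
  -H "Content-Type: application/yang-data+json" \
  -X POST "https://192.0.2.10:8888/restconf/data/openconfig-system:system/ntp/servers" \
  -d '{"openconfig-system:server": [{"address": "192.0.2.123", "config": {"address": "192.0.2.123"}}]}'
```

The same settings can equally be made from the system controller webUI or CLI; the API is shown here only because it lends itself to a compact example.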
Inside the chassis partition F5OS layer, in-band networking (interfaces, VLANs, and Link Aggregation Groups) is configured. Once networking is set up, tenants can be provisioned and deployed from the F5OS chassis partition management interfaces. Once a tenant is deployed, it is managed like any other BIG-IP instance. This is very similar to how vCMP guests are managed on iSeries or VIPRION. Please refer to the VELOS Systems Administration Guide on my.f5.com for more detailed information.
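To make the partition-level networking step concrete, the hypothetical sketch below creates a VLAN through the chassis partition's RESTCONF interface using the standard OpenConfig VLAN model. The partition management IP, credentials, VLAN ID, and name are placeholders, and the exact path should be verified against the F5OS-C API reference.

```shell
# Hypothetical sketch: create VLAN 100 ("app-vlan") at the chassis partition
# layer. Host 192.0.2.20 and admin credentials are placeholders.
curl -sk -u admin:admin \
  -H "Content-Type: application/yang-data+json" \
  -X POST "https://192.0.2.20:8888/restconf/data/openconfig-vlan:vlans" \
  -d '{"openconfig-vlan:vlan": [{"vlan-id": 100, "config": {"vlan-id": 100, "name": "app-vlan"}}]}'
```

Once created, the VLAN can be attached to interfaces or LAGs and then assigned to tenants during tenant deployment.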
Differences from iSeries and VIPRION¶
The management of VELOS/F5OS has many similarities to how vCMP is managed on iSeries or VIPRION, in that there are two distinct layers of management. In the diagram below on the left, a typical vCMP environment has a host layer and a guest layer. At the vCMP host layer, all networking is configured, including interfaces, trunks, and VLANs. When vCMP guests are configured, the administrator assigns each guest a set of VLANs that it will have access to. The administrator may not want to give a guest access to all VLANs in the system and may assign only a small subset of VLANs from the host layer to a specific guest. The TMOS layer inside the guest does not require manual configuration of interfaces or trunks; the VLANs inherited from the vCMP host configuration provide connectivity. The guest will only have access to the VLANs specifically assigned to it when it was created.

On the right-hand side is an F5OS environment (in this case VELOS). At the F5OS chassis partition platform layer, all networking is configured, including interfaces, trunks (now called LAGs), and VLANs. When F5OS tenants are configured, the administrator assigns them the VLANs they will have access to. The tenant itself does not require configuration of interfaces or LAGs; VLANs are inherited from the F5OS platform layer configuration. The F5OS tenant will only have access to the VLANs specifically assigned to it when it was created.
Comparing the management of a non-vCMP (bare-metal) iSeries or VIPRION to VELOS is a little different because of the introduction of the F5OS platform layer. With a bare-metal deployment on iSeries/VIPRION, configuration objects such as interfaces, trunks, and VLANs are directly configurable from within the TMOS layer. Monitoring of the lower-layer networking can also be done within the TMOS layer. When moving to VELOS, configuration and monitoring of the lower-level networking objects are done at the F5OS platform layer. For SNMP monitoring, there are separate SNMP MIBs for the F5OS layer that can be used to monitor interfaces and platform-level statistics. F5OS doesn't use the term trunk to represent aggregated links; it uses the term Link Aggregation Group (LAG). There are also F5OS APIs to monitor and configure the platform layer. The F5OS tenants themselves still support monitoring of the higher layers.
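As a sketch of F5OS-layer SNMP monitoring, the command below walks the standard IF-MIB interface counters on the platform layer's management address. It assumes SNMP has been enabled on the F5OS side with an SNMPv2c community; the host and community string are placeholders, and the F5OS-specific MIBs (for platform statistics beyond the standard interface tables) should be obtained from the F5OS MIB download for the exact OIDs.

```shell
# Hypothetical sketch: poll F5OS-layer interface byte counters via SNMP using
# the standard IF-MIB. Host and community are placeholders.
snmpwalk -v2c -c public 192.0.2.20 IF-MIB::ifHCInOctets
```

Tenant-level (L4-L7) SNMP monitoring continues to use the familiar TMOS MIBs against the tenant's own management address.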
VLANs are created in the F5OS platform layer, and then they can be assigned to separate interfaces or LAGs. When a tenant is created, the administrator can then assign one or more of those VLANs to be accessible by the F5OS tenant. Once the tenant is deployed, the configured VLANs will automatically be inherited and will show up in the VLAN configuration inside TMOS. VLANs will automatically show up in Route Domain 0 by default. If you need to assign these VLANs to another Route Domain inside the tenant, you may delete them from Route Domain 0 inside TMOS and then recreate them with the same VLAN ID inside the proper Route Domain, and connectivity to the lower F5OS layer will be restored. This is the same behavior a vCMP guest would have on VIPRION or iSeries, as outlined in the following link.
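The delete-and-recreate procedure described above can be sketched with tmsh inside the tenant. The VLAN name "app-vlan", tag 100, and Route Domain 1 are placeholders for illustration; the target route domain is assumed to already exist, and keeping the same tag is what restores connectivity to the F5OS layer.

```shell
# Hypothetical sketch, run inside the TMOS tenant: move an inherited VLAN
# from Route Domain 0 to Route Domain 1 by recreating it with the same tag.
tmsh delete net vlan app-vlan
tmsh create net vlan app-vlan tag 100
tmsh modify net route-domain 1 vlans add { app-vlan }
tmsh save sys config
```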
Monitoring for a bare-metal iSeries or VIPRION is all done within TMOS, whereas in VELOS there are now two layers that can be monitored. Interfaces, LAGs, and other platform-layer objects such as CPU, memory, temperature, and disks can be monitored at the F5OS layer via CLI, GUI, API, or SNMP. Higher-level monitoring of virtual servers, pools, and L4-L7 objects continues to be done inside the TMOS layer of the F5OS tenant.
In general, F5OS tenants in the VELOS platforms have no visibility into the underlying physical interfaces or LAGs that are configured at the F5OS layer. The tenant will be connected to specific interfaces or LAGs based on its VLAN membership. The only exception to this is the HA Group functionality inside the tenant, which has visibility into LAG state and membership to facilitate proper redundancy/failover. As an example, an F5OS tenant on a VELOS BX110 blade has no visibility into the physical interfaces at the F5OS layer. Instead, the tenant will see virtual interfaces, and the number of interfaces within a tenant is based on the number of CPUs assigned to the tenant. The screenshot below shows the interfaces inside the tenant lining up with the number of physical CPU cores per tenant. In the example, there are 22 vCPUs assigned to a single F5OS tenant; this equates to 11 physical CPUs due to hyperthreading. As seen in the output below, the tenant has 22 vCPUs assigned.
If you look inside the tenant, you'll notice that the number of interfaces correlates to the number of CPU cores assigned to the tenant, in this case 11. Note that the tenant does not see the physical interfaces at the F5OS layer.
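The vCPU-to-interface relationship above reduces to simple arithmetic, assuming (per the example) two vCPUs per physical core due to hyperthreading and one tenant-visible virtual interface per physical core:

```shell
# 22 vCPUs assigned to the tenant, 2 hyperthreads (vCPUs) per physical core,
# one virtual interface per physical core.
vcpus=22
physical_cores=$((vcpus / 2))
tenant_interfaces=$physical_cores
echo "$tenant_interfaces"   # prints 11
```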