BIG-IP Next Central Manager Sizing Guidelines¶
Overview¶
This document details the supported scale limits for F5® BIG-IP® Next™ Central Manager (BIG-IP Next Central Manager), both standalone and in a three-node high availability (HA) setup, for the 20.2.1 release, considering various configurations and usage dimensions. Due to the extensive range of services supported by Central Manager and the diverse customer configurations, F5 cannot test all possible combinations. The information provided here offers guidance on the maximum numbers and averages that customers can anticipate in their environments. These figures should serve as a starting point for customers to conduct their own sizing exercises, tailored to their specific configuration and workload. Note that in addition to Central Manager product performance, scale limits are influenced by factors beyond F5's control, such as host CPU speed, memory, networking, storage performance, dedicated versus shared infrastructure, virtualization software, and more. As a result, the scale limits experienced in a customer environment may vary significantly. Work with your F5 representative before enabling and using any system in a production environment.
BIG-IP Next Central Manager Latency requirement¶
The network latency between the BIG-IP Next Central Manager High Availability (HA) nodes should not exceed 200 milliseconds.
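As a quick sanity check, a measured round-trip time between HA nodes (for example, as reported by `ping`) can be compared against this limit. A minimal sketch; the function name is illustrative and the 200 ms threshold comes from the requirement above:

```python
# Compare a measured inter-node round-trip time against the documented
# 200 ms limit for BIG-IP Next Central Manager HA nodes.
MAX_HA_LATENCY_MS = 200  # documented maximum inter-node latency

def ha_latency_ok(rtt_ms: float) -> bool:
    """Return True if the measured RTT (in milliseconds) is within the limit."""
    return rtt_ms <= MAX_HA_LATENCY_MS

# Example RTT values as a ping tool might report between two HA nodes
print(ha_latency_ok(35.0))   # well within the limit
print(ha_latency_ok(250.0))  # exceeds the limit; HA is not supported
```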
BIG-IP Next Central Manager and BIG-IP Next version¶
These guidelines apply to BIG-IP-Next-CentralManager-20.2.1.
BIG-IP Next Central Manager Standalone Setup¶
Device | Version |
---|---|
BIG-IP Next Central Manager | BIG-IP-Next-CentralManager-20.2.1-0.3.25 |
BIG-IP Next | BIG-IP-Next-20.2.1-2.430.2+0.0.48 |
BIG-IP Next Central Manager High Availability (HA) Setup¶
Device | Version |
---|---|
BIG-IP Next Central Manager | BIG-IP-Next-CentralManager-20.2.1-0.3.25 |
BIG-IP Next | BIG-IP-Next-20.2.1-2.430.2+0.0.48 |
BIG-IP Next Central Manager Hardware Configuration¶
The scale figures in this document assume the following hardware configuration for BIG-IP Next Central Manager and the managed BIG-IP Next instances:
Component | vCPUs | RAM | Disk space |
---|---|---|---|
BIG-IP Next Central Manager | 8 | 16 GB | 350 GB |
BIG-IP Next | 4 | 8 GB | 80 GB |
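When sizing host capacity, the per-VM figures above can be multiplied out for a planned deployment. A minimal sketch; the resource figures come from the table above, while the function name and node counts are illustrative:

```python
# Aggregate host resources for a planned deployment, using the per-VM
# hardware figures from the table above.
SPECS = {
    "central_manager": {"vcpus": 8, "ram_gb": 16, "disk_gb": 350},
    "next_instance":   {"vcpus": 4, "ram_gb": 8,  "disk_gb": 80},
}

def total_resources(cm_nodes: int, instances: int) -> dict:
    """Sum vCPU, RAM, and disk for the given numbers of CM nodes and instances."""
    totals = {"vcpus": 0, "ram_gb": 0, "disk_gb": 0}
    for spec, count in ((SPECS["central_manager"], cm_nodes),
                        (SPECS["next_instance"], instances)):
        for key in totals:
            totals[key] += spec[key] * count
    return totals

# Example: a 3-node HA Central Manager managing 50 BIG-IP Next instances
print(total_resources(cm_nodes=3, instances=50))
```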
Scale guidance configuration for BIG-IP Next Central Manager specific objects¶
Modules | Metrics | Maximum in CM Standalone (1 node) | Maximum in CM High Availability (3 nodes) |
---|---|---|---|
 | No. of Instances | 50 | 50 |
 | No. of Concurrent Sessions¹ | 4 | 4 |
LTM | LTM apps² | 500 | 500 |
 | No. of Pools² | 500 | 500 |
 | No. of Pool Members / End Points² | 500 | 500 |
 | No. of iRules supported | 500 | 500 |
 | No. of Certificates | 500 | 500 |
WAF | WAF apps² | 500 | 500 |
 | No. of WAF Policies - non-rating | 250 | 250 |
 | No. of WAF Policies - rating | 250 | 250 |
 | No. of WAF Policies - non-rating (max per instance) | 100 | 100 |
 | No. of WAF Policies - rating (max per instance) | 100 | 100 |
 | WAF Events peak throughput | 400 events/second | 1300 events/second |
 | WAF Logs | 6 million | 6 million |
Access | Access apps² | 1000 | 1000 |
 | Max access sessions (per instance) | 600 | 600 |
SSLO | SSLO apps² | 500 | 500 |
 | Layer 3 inspection services (per instance) | 10 | 10 |
 | SSLO service chains with 2+ inspection services assigned (per instance) | 10 | 10 |
 | SSLO policies with 30+ rules and 10 service chains (per instance) | 20 | 20 |
Retention | WAF Analytics³ | 24 days | 24 days |
 | WAF Events⁴ | 15 hrs | 15 hrs |
 | Server Error Analytics⁵ | 20 min | 20 min |
 | Rate of data injection⁶ | 1000 events/min per instance | 1400 events/min per instance |
Table Notes¶
1. Number of Concurrent Sessions: The user is simultaneously deploying apps and invoking APIs for app listings, WAF policy listings, and other use cases.
2. Apps: The applications (LTM, WAF, Access, SSLO) are evenly distributed across all BIG-IP Next instances discovered by BIG-IP Next Central Manager.
3. WAF Analytics: The retention of the WAF Analytics index is based on the deployed applications and is estimated accordingly.
4. WAF Events: The retention of the WAF events index is determined by the events generated per second during the traffic tests.
5. Server Error Analytics: According to the retention policy, the server-error-analytics index rolls over every 10 minutes with two index shards, so it retains 20 minutes of data.
6. Rate of data injection: The data injection rate is the number of events generated per unit of time by the applications deployed on an instance.
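The data-injection limits above (1000 events/min per instance standalone, 1400 events/min per instance in HA) can be checked against a planned aggregate event rate. A minimal sketch with illustrative inputs; the function name and the assumption of an even event spread across instances are for illustration:

```python
# Check whether the per-instance event generation rate stays within the
# documented data-injection limits.
INJECTION_LIMIT_PER_MIN = {"standalone": 1000, "ha": 1400}

def injection_within_limit(events_per_min: float, instances: int,
                           topology: str = "standalone") -> bool:
    """Assume events are spread evenly across the discovered instances."""
    per_instance = events_per_min / instances
    return per_instance <= INJECTION_LIMIT_PER_MIN[topology]

# Example: 40,000 events/min across 50 instances = 800 events/min each
print(injection_within_limit(40_000, 50, "standalone"))
print(injection_within_limit(40_000, 25, "standalone"))  # 1600/min each
```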
The table below describes the types of metrics/events stored in the indices.
Index | Type of Metrics/Events |
---|---|
Server Error Analytics | Endpoint responses |
WAF events | All WAF traffic events |
WAF Analytics | Aggregations of WAF events (blocked, legal, alarmed, dropped, etc.) |