Getting Familiar with the Lab

Welcome to the NGINX Performance Tuning and Security Hardening lab! This lab is divided into several sections:

  • Environment setup
  • Performance assessment & tuning
  • Security assessment & hardening

These sections are further subdivided into modules, accessible via the navigation on the left.

Materials

During this lab, we will be using the following systems, all running in F5’s Unified Demonstration Framework (UDF) cloud environment:

  • NGINX Proxy: A reverse proxy / load balancer on which we will be tuning the configuration to achieve better performance and security for our backend applications
  • App Server: Our mock backend application server that serves data and returns information used to assess our security posture
  • Locust Worker: Generates traffic load for our Performance Tuning modules and contains scripts to simulate various security attacks
  • Locust Controller: Manages the Locust Worker instance and provides a GUI for monitoring load
  • NGINX Instance Manager: A web-based application for managing NGINX instances. You will use this to update the NGINX Proxy’s configuration file.

System Architecture Diagram

This diagram represents how the individual systems in the lab environment connect:

../../_images/lab-diagram.jpg

The Locust load generation tool will be retrieving a 1.5MB file from the backend application server.

Get to Know the Environment

Let’s familiarize ourselves with the UDF platform, lab instances and key utilities.

Note

This section will focus on the Performance Tuning portion of the lab. Security hardening systems, concepts and utilities will be covered in a later module.

  1. Log in to NGINX Proxy

Click on ACCESS and then WEB SHELL

../../_images/udf-proxy-webshell.png
  2. Review the nginx.conf file and some of the parameters already set

view /etc/nginx/nginx.conf

As an example, the upstream app_servers config block includes:

  • zone backend 64k: Defines a shared memory zone allowing NGINX worker processes to synchronize information on the backend’s run-time state.

The server config block includes:

  • status_zone my_proxy: Defines a shared memory zone allowing NGINX worker processes to collect and synchronize information on the status of the server. This enables us to monitor HTTP server statistics in the NGINX Plus dashboard.
../../_images/codeblock.png
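
For reference, here is a minimal sketch of how these two directives might fit together in an nginx.conf. The backend address, port, and zone size are illustrative placeholders, not the lab’s actual values:

upstream app_servers {
    zone backend 64k;               # shared memory zone for backend run-time state
    server app1.example.com:80;     # backend application server (placeholder address)
}

server {
    listen 80;
    status_zone my_proxy;           # collect server statistics for the NGINX Plus dashboard

    location / {
        proxy_pass http://app_servers;
    }
}

Because both zones live in shared memory, every NGINX worker process reads and updates the same counters, which keeps the dashboard statistics consistent across workers.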

Quit out of the view application by typing :q followed by the Enter/Return key

  3. Go to NGINX Proxy Dashboard and review

Under ACCESS for the NGINX Proxy, select NGINX+ DASHBOARD

../../_images/udf-proxy-dashboard.png

Review the Dashboard and what is included under the tabs across the top of the page

../../_images/n-dashboard.png
  • HTTP Zones: this section contains the zone we defined in the proxy’s server block. It tracks collective requests, responses, traffic and SSL statistics. Note that SSL statistics are missing because, for simplicity, we do not use SSL in this lab.
  • HTTP Upstreams: this section contains statistics on the upstreams, or backends, that we defined in the proxy’s upstream block. It tracks connections, requests, responses, health statistics and other information related to the proxy’s connection to the application server.
  • Workers: this section contains statistics that are specific to individual NGINX worker processes.
  • Caches: this section is not yet visible. Later in the lab we will turn caching on, and it will then display statistics related to the health of our proxy’s cache (see the preview sketch after this list).
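
As a preview, enabling caching later will involve directives roughly like the following. This is only a sketch; the cache path, zone name, and sizes are placeholders rather than the lab’s actual settings:

proxy_cache_path /var/cache/nginx keys_zone=my_cache:10m max_size=100m;   # defined in the http context

location / {
    proxy_cache my_cache;           # serve eligible responses from the shared cache zone
    proxy_pass  http://app_servers;
}

Once a cache zone like this is active, the Caches tab appears and reports statistics for it.
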
  4. Start up the Locust Controller software

Log on to the Locust Controller WEB SHELL

../../_images/udf-locust-controller-webshell.png

Review the Locust configuration files

cat /home/ubuntu/run_locust_controller.sh
cat /home/ubuntu/locustfile.py

Notice that the Locust load script is configured to get a file called “1.5MB.txt”, effectively putting load on the proxy.

Now start up the Locust Controller and web interface.

/home/ubuntu/run_locust_controller.sh
  5. Access the Locust Controller Web Interface

Under Locust (Controller) ACCESS click on LOCUST to bring up the Web Interface

../../_images/udf-locust-controller-locust.png
  6. Set up the Locust Worker node

Log on to the Locust (Worker) WEB SHELL

../../_images/udf-locust-worker-webshell.png

Verify that this is an 8-core machine by running this command, which shows the CPUs and their associated statistics.

mpstat -P ALL

Warning

This command will output average statistics, including CPU %idle, since instance startup. In upcoming steps, we will append a 1 to the end of this command, which instructs it to report data averaged over the preceding second instead.

../../_images/locus-cpu.png

Start up locust workers by running this command:

/home/ubuntu/start_locust_workers.sh

This script will start all 8 workers (1 per CPU) with nohup, meaning you can close the shell window and they will keep running. However, it’s best to keep this window open to monitor the workers, which log their output to nohup.out.

Tail the nohup.out file to monitor Locust workers

tail -f /home/ubuntu/nohup.out

Sometimes, overloading Locust may cause worker processes to quit. We’ve tuned this lab so that shouldn’t happen, but if it does, you’ll want to terminate the workers and restart them. You can restart the workers with the previously shown script; to terminate them, we’ve included the following script:

/home/ubuntu/terminate_locust_workers.sh

  7. In the Locust GUI, start the load generation

Let’s begin with a basic test to get a performance baseline with our default settings.

Number of Users: 100

Spawn rate: 10

Host: http://10.1.1.9/

Advanced Options, Run time: 30s

../../_images/locus-10-100-30.png

Click the ‘Start swarming’ button

  8. Review graphs as they are generated

Click the Charts tab to review graphs as they are generated

../../_images/locust-menu.png

Note

What is happening with the Total Requests per Second and Response Time graphs?

  9. Run the same test again

Run the same test a 2nd time by clicking ‘New test’ at the top-right under ‘Status STOPPED’. Keep the settings the same as before and click the ‘Start swarming’ button.

../../_images/locus-new-test.png

Review the NGINX Proxy CPUs while the test is running. Back on the NGINX Proxy WEB SHELL:

mpstat -P ALL 1

Note

How much CPU is being used? Is the system fully saturated?

Review Locust GUI Charts

Note

Even when all test parameters are the same, tests will exhibit different results due to a multitude of external factors influencing system and network resources.

  10. Run another test, but this time with more load

Number of Users: 500

Spawn rate: 50

Host: http://10.1.1.9/

Advanced Options, Run time: 30s

../../_images/locus-50-500-30.png

Review NGINX Proxy CPUs and the Locust GUI Charts

Note

How much CPU is being used? Is the system fully saturated? How was Total Requests per Second affected by this additional load?

  11. Run the same test again and review the NGINX Dashboard

How many Active Connections do you see?

Under HTTP Zones, review the total requests and responses count.