Work with the F5 DNS Load Balancer Service

The F5 DNS Load Balancer Service allows you to create individual load balancing services for your apps and websites, providing intelligent distribution of traffic across server resources located in multiple geographies for better speed and reliability. It does this based on Load Balanced Records (LBRs) that hold the top-level information on how the load balancer is to operate. The LBR allows you to specify which hosts you are load balancing, and the rules to use to select the best DNS server for each end-user request.

When an end-user request comes in, the DNS load balancer matches the request against the list of hosts in the LBR. It then determines which region the request is coming from and chooses a pool of IP endpoints based on the proximity rules specified in the LBR. Then it uses the load balancing method specified for that pool to determine which IP endpoint will handle the request. It also maintains a health status of each IP endpoint so that only healthy IP endpoints are returned for use by end users.
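This resolution flow can be sketched in a few lines of Python. The data shapes and the resolve() function here are hypothetical, not part of the F5 API; a real LBR also applies host wildcards and per-pool load balancing methods:

```python
# Illustrative sketch of the LBR resolution flow: match the host, pick the
# highest-scoring proximity rule for the request's region, then return only
# a healthy endpoint from that rule's pool.
def resolve(hostname, request_region, lbr):
    """Return a healthy IP endpoint for a request, following the LBR."""
    # 1. Match the request against the LBR's host list.
    if hostname not in lbr["hosts"]:
        return None
    # 2. Choose the pool via the highest-scoring proximity rule that
    #    covers the request's region.
    rules = [r for r in lbr["rules"] if request_region in r["regions"]]
    if not rules:
        return None
    pool = max(rules, key=lambda r: r["score"])["pool"]
    # 3. Only healthy endpoints are returned for use by end users.
    healthy = [addr for addr, ok in pool if ok]
    return healthy[0] if healthy else None
```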

Use the DNS Load Balancer Cloud Service dashboard

Access the DNS Load Balancer Cloud Service dashboard by using the DNS Load Balancer tab in the Cloud Services navigation menu.


On this page, you can:

  • Survey the overall health and status of your DNS Load Balancer Cloud Service configuration (see Health Indicators below)
  • View the service overview of each load balancer service in your DNS Load Balancer Cloud Service environment
  • Follow the links in the table cells to view the details for a specific load balancer service like pools, regions, and monitors
  • Create and enable a load balancer service

Set up load balancing for a zone

If you have a good understanding of how load balancing works, you can follow the steps below to set up the DNS Load Balancer service. Alternatively, you can take the free Getting Started with F5 DNS Load Balancer training course available through LearnF5. The steps below are based on the same example used in the training course.

Load balancing overview: As traffic comes in, the DNS Load Balancer looks at each incoming request and chooses an IP endpoint to service that request, based on the origin of the request, the configuration of the IP endpoints, and the health of each IP endpoint. To set up a load balancing solution, you'll go through a few steps. First, the load balancer must understand the health of each IP endpoint to make sure it can service requests, so you will assign monitors to the IP endpoints. Next, you will likely want to create pools of IP endpoints, allowing you to prioritize groups of IP endpoints based on criteria or a load balancing strategy. Then you'll create regions to categorize incoming requests. Your last step is to define how requests from the various regions are distributed among the pools you have created. Here are two examples of possible load balancing strategies.

Geography traffic routing example: This example routes traffic based on request origin and pool location. Looking at the two proximity rules, all requests from Europe go to the EU pool because it has a higher score. Conversely, all requests from non-Europe locations go to the Global pool because they are not covered by the EU pool's region. You might choose this strategy to help satisfy the EU's General Data Protection Regulation (GDPR). You can think of these two proximity rules as the following logical statement: if the request is from Europe, send it to the EU pool; otherwise, send it to the Global pool.


Disaster recovery example: This example routes all traffic to a primary pool, but if the primary pool should go off-line, then all traffic would be routed to the secondary pool. This happens because both the primary and secondary pools service requests from the same region, but the one with the highest score (the primary pool) will service the request. However, if all endpoints at the primary pool become unable to service requests, then that entire pool goes offline and the load balancer starts sending requests to the secondary pool. Thus, the secondary site is only used as a backup for a disaster at the primary site or other situations like routine maintenance of the site.
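The failover behavior described above can be sketched as follows. The rule and pool shapes are illustrative only; the point is that a pool with no healthy endpoints is treated as offline, so the highest-scoring rule with a usable pool handles the request:

```python
# Sketch of score-based pool selection with failover: the primary pool wins
# on score while it has healthy endpoints; otherwise traffic falls through
# to the secondary pool.
def pick_pool(rules):
    """rules: list of {"name", "score", "endpoints": [(addr, healthy)]}."""
    usable = [r for r in rules if any(ok for _, ok in r["endpoints"])]
    if not usable:
        return None  # no pool can service the request
    return max(usable, key=lambda r: r["score"])["name"]
```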


Create a DNS Load Balancer service

The following steps will create a DNS Load Balancer service based on the geography traffic routing example shown above.

  1. Click the DNS Load Balancer tab in the Cloud Services navigation menu.

  2. On the DNS Load Balancer tab, click the Create button.

  3. The DEFINE NEW SERVICE slide panel will appear. Enter the zone and, optionally, the division. If you have already created regions, pools, IP endpoints, and monitors, you can click Configure to assemble them into an LBR. For your first LBR, it might be easier to click Create for the new service and then build the pieces prior to building the LBR. That is the method this example uses.

  4. Click the newly created service name to see the Load Balanced Records (LBR) list. Above the list are five tabs that allow you to build the LBRs. The Service overview tab provides an overview of the service. Because you have only created a shell so far, this view will be empty. The other four tabs are where you will fill in the information for how to load balance your traffic. The following steps will walk through those tabs to create an LBR.

  5. Create the monitors required for this load balancing service using the Monitor tab. A monitor defines a health test method for an application (IP endpoint). For example, using the HTTP protocol, the monitor sends a test string to a specific port on the IP endpoint and then evaluates the response against the expected receive string. Different protocols have different options for testing the application. IP endpoints that don’t respond, take too long to respond, or don’t return the correct information are marked as unhealthy so that the load balancer doesn’t return them to end users. For more information, see Details for creating an application health monitor.

    Enter the following values for the monitor properties:

    • Name this monitor - you will use this name when setting up the IP endpoints
    • Specify a unique ID
    • Choose protocol - the protocol will determine the other data entry required

  6. Click the Manage IP endpoints tab. An IP endpoint is the IP address of the network endpoint—the server hosting an instance of an app or website. When an end-user request comes in, one of these IP addresses will be selected to handle the end user’s request. The Manage IP endpoints tab allows you to define where those application instances exist and choose which monitor to use for a health check.

    For each application instance, click Create, enter the values for the endpoint properties, and then click Save:


    After creating your endpoints, your setup should look something like the following image. This example is creating two endpoints that are located in Europe and two that are elsewhere. This is done in anticipation of directing traffic based on regions.

  7. Click the Maintain pools tab. This will allow you to create pools of endpoints so that the load balancer can direct traffic to the appropriate endpoint. The load balancer will choose different endpoints within the pool for each DNS request based on the load balancing method chosen for the pool. If there is a problem with one of the endpoints as determined by its monitor, then that endpoint will be taken offline, and the load balancer will only use the other endpoints.

    For each pool you want to create, click Create in the Maintain Pools tab, enter the values for the pool properties, and then click Next. You will then be able to add pool members.



    One of the properties for a pool is the load balancing method, which determines how the service chooses an IP endpoint within the pool. For more information on the available load balancing methods, see the FAQ document for the DNS Load Balancer cloud service - F5 DNS Load Balancer FAQ.
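As an illustration only (see the FAQ for the methods the service actually supports), two common load balancing methods, round robin and ratio (weighted random), can be sketched like this:

```python
import itertools
import random

# Round robin: hand out pool members in order, one per DNS request,
# wrapping back to the start of the list.
def round_robin(members):
    return itertools.cycle(members)

# Ratio: pick a member with probability proportional to its configured
# weight, so heavier members receive more of the traffic over time.
def ratio_pick(weighted_members):
    members, weights = zip(*weighted_members)
    return random.choices(members, weights=weights, k=1)[0]
```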

  8. If you have more than one pool, you need to tell the load balancer how to split the traffic between the pools. To do this, click Group into regions, and then click Create to create each region.

    Regions are geographic areas specified by continents, countries, and states/provinces. These will be used in the LBR for specifying proximity rules for end-user requests. In other words, you can direct end-user requests to specific pools based on their region of origin.



    The Region name and Include these continents fields are required, but the Include these countries and Include these states/provinces fields default to “none” if left blank. Furthermore, these fields are all independent of each other, and they are added together to create the region. For instance, if you select Europe, Japan, and California in these fields, then any request coming from the continent of Europe, the country of Japan, or the US state of California will be covered by this region.
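The union semantics can be sketched as follows; the dictionary shape is illustrative, not the service's actual schema:

```python
# A request falls in the region if its continent, country, OR
# state/province appears in the region's lists.
def region_covers(region, origin):
    return (origin.get("continent") in region.get("continents", ())
            or origin.get("country") in region.get("countries", ())
            or origin.get("state") in region.get("states", ()))

# The Europe / Japan / California example from the text:
example_region = {"continents": ["Europe"],
                  "countries": ["Japan"],
                  "states": ["California"]}
```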

  9. Now that you have all the pieces needed for the LBR (monitors, IP endpoints, pools, and regions), the next step is to create the LBR for this service. To do so, click the Service overview tab and then click Create.



    • Hosts - You can add more than one host by clicking the + at the right of the field. Clicking the - at the end of the field will delete that host. ‘*’ and ‘?’ wildcards are allowed, e.g. ww* will match anything that starts with ww, and ww? will match any three-character string that starts with ww. Specifically, a valid host is defined by this regular expression: ‘^([a-zA-Z0-9\*\?]|([a-zA-Z0-9\*\?]+-[a-zA-Z0-9\*\?]+)){0,253}$’. Note, however, that you cannot have overlapping entries, so ww* and www2 will produce a validation error because they are not unique.
    • Checkbox Cache responses so that clients receive persistent answers - Once the connection (the relationship) between an app instance and a client is established, checking this box will tell the load balancer to try to continue giving that client the same app instance. This is useful for apps that are stateful, because sending a client to a different app instance will require the user to log in again. Checking this box will also reduce latency because the DNS doesn’t need to search for an app instance.
    • IPv4 Clients and IPv6 Clients - These are network size parameters for grouping clients and how to handle persistence.
    • PROXIMITY RULES - These allow you to direct traffic coming from a specific region to a specific pool. You must provide at least one proximity rule with a valid pool to enable an LBR. Each proximity rule has a score, which defines the priority of the rule. Rules with larger scores have higher priority, so if a request matches two regions, it will be handled by the rule with the higher score.
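The host wildcard semantics described above behave like Python's fnmatch patterns, where '*' matches any run of characters and '?' matches exactly one character; the helper name here is hypothetical:

```python
from fnmatch import fnmatchcase

# Case-sensitive wildcard matching of a hostname against a host entry.
def host_matches(hostname, pattern):
    return fnmatchcase(hostname, pattern)
```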
  10. The service is now created, and the last step is to enable the service. To do this, click the DNS Load Balancer tab in the Cloud Services navigation menu to see an overview of all your load balancer services. Select the service name of the newly created service using the checkbox next to it, and then click Enable. You will see the service health update, and the status for the service will go from Disabled to Enabling… to Enabled.


Health Indicators

An important part of the DNS Load Balancer Cloud Service is to provide a quick and easy way to see the overall health of the service, and the ability to quickly find problem areas so they can be fixed with minimal downtime. To accomplish this goal, DNS Load Balancer shows a health status for both the general service as well as the individual load-balancing services. The image below shows the health for both the DNS and the DNS Load Balancer services on the Your F5 Cloud tab in the Cloud Services navigation menu. You can click on the health status to get more details.


The image below shows an example of the DNS Load Balancer Cloud Service dashboard with the health status highlighted. The SERVICE HEALTH shown at the top left gives an account-level health aggregation of all your active load-balancing services, whereas the list of Health statuses on the right shows the health of each individual load-balancing service. This allows you to see at a glance whether you have any service problems and, if so, to scroll down the list of services to see which ones are having a problem.


The SERVICE HEALTH shown above provides a general sense of the problem based on the state and color shown. The table below shows the different health states along with their color and meaning.

State            Description
HEALTHY          All individual load-balancing services are blue/healthy
DEGRADED         One or more services are yellow/degraded, and no services are red/not in service
NOT IN SERVICE   One or more services are red/not in service
N/A              All services are currently inactive
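A minimal sketch of this aggregation, assuming inactive (N/A) services do not affect the account-level state:

```python
# Aggregate individual service states into the account-level
# SERVICE HEALTH value, worst state first.
def service_health(states):
    active = [s for s in states if s != "N/A"]
    if not active:
        return "N/A"
    if "NOT IN SERVICE" in active:
        return "NOT IN SERVICE"
    if "DEGRADED" in active:
        return "DEGRADED"
    return "HEALTHY"
```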

The service-level Health column shows the health of each service. The meaning of each color/state is shown in the table below.

State            Description
HEALTHY          This service is up and running in all deployed regions
DEGRADED         This service is up and running but requires attention, possibly due to configuration problems, or the service is down in at least one of the deployed regions, but not all of them
NOT IN SERVICE   This service is down due to a critical error or is down in all deployed regions
N/A              This service is currently inactive

Details for creating an application health monitor

Monitors are used to verify the health of an IP endpoint (application/server) so that DNS Load Balancer can properly direct traffic only to IP endpoints that are in good working order. A monitor does this verification by sending a request to the IP endpoint and then comparing the response to an expected return string. For these monitor requests to always have access to your application, you should add all of the IP addresses used for health monitors, listed in the deployment regions download below, to your application’s allow list (whitelist).

Deployment regions download: DNS Load Balancer Deployment Regions

Below is a table showing the different monitor types and the parameters required. Standard monitor types do not have user definable send and receive strings; instead, DNS Load Balancer will use a simple, standard send string (like a simple HEAD request for HTTP_Standard) and look for an error/non-error return to determine health.

Monitor Type     Send/Receive Strings
HTTP_Standard    Not available
TCP_Standard     Not available
ICMP_Standard    Not available
HTTP_Advanced    Required
HTTPS_Advanced   Required
TCP_Advanced     Required
UDP_Advanced     Required


In previous versions of the API, valid monitor types were HTTP, HTTPS, and ICMP. These values have been deprecated in favor of the standard and advanced variants of those monitor types (except HTTPS which is always advanced). Furthermore, there are new TCP and UDP monitors. Using the old names or using a standard monitor type with a send or receive string will result in a “400 Bad Request” status.

Advanced monitors allow you to specify both a send and receive string for maximum flexibility. The send string is typically a request for a specific file from the IP endpoint being evaluated, like “GET /admin/monitor.html”, which will return the contents of the file. If there is an error, like Status: 404 Not Found, then the monitor will show the IP endpoint as unhealthy. If there is no error, the monitor will compare the returned contents to the receive string to determine the health of the IP endpoint. The receive string can be a simple string to match, or it can be a regular expression for more complex evaluations.
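As a rough illustration (not the service's actual implementation), an advanced monitor's check behaves like this sketch, which sends the send string over TCP and matches the response against the receive string, treated here as a regular expression:

```python
import re
import socket

# Probe an IP endpoint: connect, send the send string, and check the
# response against the receive string. Endpoints that don't respond,
# respond too slowly, or don't match are reported unhealthy.
def probe(host, port, send_string, receive_string, timeout=5.0):
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.sendall(send_string.encode())
            response = sock.recv(4096).decode(errors="replace")
    except OSError:
        return False  # no response, or the response took too long
    return re.search(receive_string, response) is not None
```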

Line termination characters:

By default, DNS Load Balancer uses HTTP 0.9 when sending monitor requests. An HTTP 0.9 request consists of only a request line, which must be terminated by a line feed (LF) character. The request line consists of a single line containing the HTTP request method, a space, and the name of the object requested (including the path, if necessary). No version is specified. No headers or request body are supported in HTTP 0.9.

You can also send an HTTP 1.0 or 1.1 request that accommodates a wider range of request options. An HTTP 1.0 or 1.1 request begins with a request line that consists of a single line containing the HTTP request method, a space, the name of the object requested (including the path, if necessary), a space, and the HTTP version of the request. The request line may be followed by a list of headers. The request line and each header line in an HTTP 1.0 or 1.1 request must be terminated with a single carriage return/line feed (CR/LF) sequence, and the end of the request headers is indicated by a terminating double carriage return/line feed sequence (CR/LF/CR/LF). For some request methods, the header list may be followed by a request body.

The header and body requirements vary depending on the request version and the request method, so the appearance of HTTP requests differs widely. For example, in an HTTP 1.0 request, no headers are required. However, in an HTTP 1.1 request, the Host header is required, although it may contain a null value. The Connection header was also added in HTTP 1.1, allowing management of Keep-Alive connections intended to serve multiple requests. While this header was not officially part of the HTTP 1.0 specification, many HTTP 1.0 implementations implicitly support or expect Keep-Alive behavior based on this header.

For detailed specifications for each HTTP version, refer to the following locations:

  • For more information about HTTP 0.9 and 1.0, refer to RFC 1945.
  • For more information about HTTP 1.1, refer to RFC 2616.

Constructing HTTP/HTTPS monitor send strings

The CR and LF line termination characters are key to the accurate construction and parsing of HTTP requests, regardless of the HTTP version. Depending on the tool used to examine or construct the request, the CR and LF characters may be represented by the following characters:

Character             Text (used to construct monitor Send Strings)   Hex
Carriage Return (CR)  \r                                              0x0d
Line Feed (LF)        \n                                              0x0a

Steps to create a monitor send string:

  1. Type the request line, including the HTTP method, the path to the requested object, and the HTTP version (omitted for HTTP 0.9), followed by a single \r\n sequence.
  2. Type any desired or required headers, following all but the last header with a single \r\n sequence.
  3. If the request contains headers but does not contain a body, terminate the Send String with a double \r\n sequence.
  4. If a request body is specified, precede it with a double \r\n sequence. Note: A double \r\n sequence must separate the last header from the request body.
  5. If the request contains a body, no terminating sequence is required.
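The steps above can be sketched as a small helper; the function name is illustrative, and its output matches the example send strings in the next sections:

```python
# Assemble a monitor Send String from a request line, optional headers,
# and an optional body, per the construction steps described above.
def build_send_string(request_line, headers=(), body=None, http09=False):
    parts = [request_line, *headers]
    s = "\r\n".join(parts) + "\r\n"   # request line and each header end in \r\n
    if http09:
        return s                      # HTTP 0.9: the request line only
    s += "\r\n"                       # blank line terminates the header section
    if body is not None:
        s += body                     # body follows, with no terminating sequence
    return s
```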

HTTP 1.1 examples

GET /index.html HTTP/1.1\r\nHost:\r\nConnection: Close\r\n\r\n

POST /form.cgi HTTP/1.1\r\nHost:\r\nConnection: Close\r\n\r\nFirst=Joe&Last=Cool

HTTP 1.0 examples

GET /index.html HTTP/1.0\r\n\r\n

GET /index.html HTTP/1.0\r\nUser-agent: Mozilla/3.0Gold\r\nReferer:\r\n\r\n

POST /form.cgi HTTP/1.0\r\n\r\nFirst=Joe&Last=Cool

HTTP 0.9 example

GET /index.html\r\n


View/edit the JSON configuration

Sometimes it is convenient to see an entire load-balancing service at once, maybe to quickly compare the details of different pools in the service, or to copy some or all of your configuration for pasting into an external document. To see the JSON configuration for your service, click DNS Load Balancer in the Cloud Services navigation menu to go to the DNS Load Balancer dashboard, and then click on the service you’d like to see in detail. Toward the top, click JSON configuration to see all the LBRs for that service.


This window displays the service in its JSON view, just as you would see when using the API, but it also provides standard text editing capabilities, including copy/paste and column editing.

View the details for a load balancing service

To view the details for a load balancing service, click DNS Load Balancer in the Cloud Services navigation menu to go to the DNS Load Balancer dashboard.

On the details page for the load balancing service, you can view information about the zone, such as its status, its fully qualified domain name (FQDN), its IPv4 and IPv6 addresses, and its zone file. You can also change some details about the zone, such as its name and the IP address for its primary DNS server.

  • Manage your IP endpoints
  • View and define the monitors for your endpoints
  • Maintain the pools for your load balancing
  • View and create regions for your proximity rules