Overview: Virtual Servers¶
Introduction to virtual servers¶
A virtual server is one of the most important components of any BIG-IP Next configuration. A virtual server is a traffic-management object on BIG-IP Next, represented by a virtual IP address and a service port, for example, <ip-address>:<port number>. When clients on an external network send traffic, the virtual server listens on that IP address and port, and directs the traffic to the destination using destination address translation mapping.
The virtual server distributes traffic across the servers you specify. To further customize traffic handling, you can modify protocols and profiles. For TCP, UDP, HTTP, or other traffic types, choose a default or custom profile. For example, using protocols and profiles you can enable features such as HTTP request data compression, SSL connection decryption and re-encryption, and SSL certificate verification. You can specify the pools that you want to use as the destination for any traffic coming from the virtual server.
To work effectively with virtual servers, familiarize yourself with the following concepts:
Types of virtual servers¶
You can select different types of virtual servers, depending on your particular configuration needs.
Following are the types of virtual servers:
| Type | Description |
| --- | --- |
| Standard | A standard virtual server (also known as a load balancing virtual server) directs client traffic to a load balancing pool and is the most basic type of virtual server. When you first create a virtual server, you assign a pool to it. From then on, the virtual server automatically directs traffic to that pool. |
| Forwarding (IP) | A Forwarding (IP) virtual server has no pool members to load balance. The virtual server forwards a packet directly to the configured destination IP address, based on what is defined in the BIG-IP Next routing table. Address translation is disabled when you create a forwarding (IP) virtual server, leaving the destination address in the packet unchanged. When creating a forwarding (IP) virtual server, as with all virtual servers, you can configure either a host IP forwarding virtual server, which forwards traffic for a single host address, or a network IP forwarding virtual server, which forwards traffic for an entire network or subnet. An example of a Forwarding (IP) virtual server is one that accepts all traffic on an external VLAN and forwards it to the virtual server destination IP address. Note: Pool is disabled when the virtual server type is set to Forwarding (IP). |
About rate limiting¶
When you create a virtual server, you can configure a connection rate limit, in connections per second allowed for that virtual server. Setting a rate limit helps the system detect Denial of Service attacks, where too many connection requests can flood a virtual server. When the connection rate exceeds the configured rate limit, the system handles the excessive connections in different ways, depending on the connection type, either TCP or UDP:
When the TCP connection rate exceeds the configured limit, BIG-IP Next resets the TCP connections and logs TCP reset messages. The logs record that the connection rate limit caused the resets.
When the rate limit is exceeded for UDP connections, BIG-IP Next simply drops the excess connections.
Note: Use the rateLimit parameter in the declaration to set the maximum number of connections per second allowed for a virtual server. For more details, refer to the AS3 schema in the CM Schema Reference.
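For example, a declaration snippet like the following caps the virtual server at 1,000 new connections per second. The tenant, application, service, and pool names are illustrative, and the exact declaration structure is defined in the CM Schema Reference:

```json
{
  "class": "ADC",
  "schemaVersion": "3.0.0",
  "Example_Tenant": {
    "class": "Tenant",
    "Example_App": {
      "class": "Application",
      "web_service": {
        "class": "Service_HTTP",
        "virtualAddresses": ["192.0.2.10"],
        "virtualPort": 80,
        "rateLimit": 1000,
        "pool": "web_pool"
      },
      "web_pool": {
        "class": "Pool",
        "members": [
          {
            "servicePort": 80,
            "serverAddresses": ["198.51.100.11", "198.51.100.12"]
          }
        ]
      }
    }
  }
}
```

With this setting, connection attempts beyond 1,000 per second are reset (TCP) or dropped (UDP), as described above.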
About connection limit¶
The connection limit (maxConnections) limits the number of concurrent connections to the virtual server, which helps mitigate DoS attacks and lets you plan for high-traffic events.
Note: Use the maxConnections parameter in the declaration to configure the connection limit for the virtual server. For more details, refer to the AS3 schema in the CM Schema Reference.
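Continuing the same illustrative declaration style, a maxConnections value limits concurrent connections to the virtual server. The service and pool names here are placeholders:

```json
"web_service": {
  "class": "Service_HTTP",
  "virtualAddresses": ["192.0.2.10"],
  "virtualPort": 80,
  "maxConnections": 5000,
  "pool": "web_pool"
}
```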
About fallback persistence¶
Fallback persistence (fallbackPersistenceMethod) creates a secondary persistence record for client connections, which the system uses when the primary persistence method cannot be applied to a connection.
Note: Use the fallbackPersistenceMethod parameter in the declaration to configure the fallback persistence profile. For more details, refer to the AS3 schema in the CM Schema Reference.
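As a sketch, a service might use cookie persistence as its primary method and fall back to source-address persistence. The property names follow the AS3-style schema; the service definition itself is illustrative:

```json
"web_service": {
  "class": "Service_HTTP",
  "virtualAddresses": ["192.0.2.10"],
  "virtualPort": 80,
  "persistenceMethods": ["cookie"],
  "fallbackPersistenceMethod": "source-address",
  "pool": "web_pool"
}
```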
About wildcard virtual servers¶
A virtual server can direct client connections destined for a specific IP address that the virtual server does not recognize to a transparent device. This type of virtual server is known as a wildcard virtual server. Examples of transparent devices are firewalls, routers, proxy servers, and cache servers.
Wildcard virtual servers are a special type of virtual server that have a network IP address as the specified destination address instead of a host IP address.
Unlike regular virtual servers, which are configured with specific destination IP addresses and ports, wildcard virtual servers offer greater flexibility. They use wildcard IP addresses and ports, allowing BIG-IP Next to handle a wider range of traffic.
Wildcard IP Address (0.0.0.0): Represents any IP address. When configured, the virtual server can accept traffic directed to any IP, making it adaptable to various network conditions.
Wildcard Port (0): Represents any port number. When set, the virtual server can manage traffic on any port, enabling it to handle different ports using the same IP address.
When the BIG-IP Next cannot find a specific virtual server that matches a client’s destination IP address, the BIG-IP Next matches the client’s destination IP address to a wildcard virtual server, designated by an IP address and port of 0.0.0.0:0. The BIG-IP Next then forwards the client’s packet to one of the firewalls or routers assigned to that virtual server. Wildcard virtual servers do not translate the destination IP address of the incoming packet.
Note: For wildcard virtual servers to accept any traffic and forward it to any pool member, you must set the Enable address translation field in the Protocols and Profiles section to enabled. When the pool member uses a wildcard port, both address translation and port translation must be disabled.
Note: If the pool member port is set to 0, then it indicates that the server port translation is disabled.
Note: Address and Port translation are disabled by default on Forwarding IP type virtual server and enabled by default for Standard type virtual server.
Note: Connections are reset when accessing pool members with wildcard ports while an access policy is enabled in Security Policies. Monitors such as HTTP, HTTPS, or TCP need specific service ports, so with pools that have wildcard members (port 0), the monitor incorrectly reports the pool member as down and fails to pass traffic. Use an ICMP monitor instead when wildcard pool members are involved.
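A wildcard virtual server can be sketched in a declaration as a forwarding service that listens on any address and any port. The class name and properties below follow the AS3-style schema and may differ in your schema version; verify against the CM Schema Reference:

```json
"wildcard_forwarder": {
  "class": "Service_Forwarding",
  "forwardingType": "ip",
  "virtualAddresses": ["0.0.0.0/0"],
  "virtualPort": 0
}
```

Because this is a forwarding service, no pool is assigned, and the destination address in each packet is left unchanged.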
About protocols and profiles¶
Protocols and profiles are fundamental to the operation of a BIG-IP Next. Protocols ensure devices can communicate effectively, while profiles standardize configurations and enforce policies. Together, they enable administrators to manage complex networks efficiently and securely.
Protocols¶
In BIG-IP Next, protocols refer to the set of rules that dictate how data is transmitted and received across a network. Protocols ensure that different devices can communicate with each other effectively.
A few configurable protocols include:
HTTP/HTTPS (Hypertext Transfer Protocol/Secure): Enables secure web-based interface access and management.
SSH (Secure Shell): Provides secure administrative access to network devices.
Profiles¶
Profiles in BIG-IP Next Central Manager are predefined configurations or settings that can be applied to devices or users to standardize and simplify management tasks.
Profiles might include:
User Profiles: These contain settings and permissions for individual users or groups, determining which resources they can access and which actions they can perform.
Device Profiles: These include configuration settings for different types of devices, ensuring they operate consistently and according to organizational policies.
Policy Profiles: These dictate various rules and policies that need to be enforced across the network, such as security settings, access control, or compliance requirements.
You can select multiple protocols and profiles. The following options are available:
Note: By default, SNAT, Auto SNAT, Address Translation, and Connection Mirroring are all enabled.
Enable HTTPS (Client-Side TLS) :
Enabling HTTPS (also known as Client-Side TLS) is an essential step for ensuring secure communication between clients and servers over the Internet. Below are a few scenarios in which HTTPS is used:
Website Deployment:
When launching a new website or web application, enabling HTTPS is crucial to ensure that all data transmitted between the user’s browser and the server is encrypted.
This is especially important for e-commerce sites, online banking, and any platform that handles sensitive user information like personal data or payment details.
Compliance Requirements:
Organizations may be required to enable HTTPS to comply with regulatory standards such as GDPR, HIPAA, or PCI-DSS, which require secure data transmission practices.
Government and healthcare websites often need to adhere to strict security protocols, including HTTPS.
Search Engine Optimization (SEO):
Search engines like Google prefer HTTPS-enabled websites in their ranking algorithms. Enabling HTTPS can improve a website’s search engine ranking.
Google Chrome and other browsers mark HTTP sites as “Not Secure,” which can deter visitors and impact user trust.
API Security:
When you make and deploy APIs that can be used over the internet, HTTPS is used to protect data sent between client applications and the API server.
Secure APIs are critical for mobile apps, web applications, and IoT devices that communicate with backend services.
Internal Systems:
Within corporate environments, HTTPS can be enabled for internal web applications and services to ensure secure communication within the network.
This is important for protecting internal data and maintaining the integrity of sensitive information.
Enable Server-side TLS : Enabling Server-side Transport Layer Security (TLS) is crucial for ensuring secure communication between servers and clients over the Internet. Here are several reasons why someone might enable Server-side TLS:
Data Encryption:
Purpose: TLS encrypts the data transmitted between the server and the client, making it unreadable to anyone who intercepts the communication.
Benefit: Protects sensitive information such as login credentials, personal data, and payment details from being accessed by malicious actors.
Authentication:
Purpose: TLS requires a digital certificate issued by a trusted Certificate Authority (CA), which verifies the server’s identity to the client.
Benefit: Ensures that clients are communicating with the legitimate server and not an imposter, preventing man-in-the-middle attacks.
Data Integrity:
Purpose: TLS includes mechanisms to ensure that the data sent between the server and the client has not been tampered with or altered during transmission.
Benefit: Guarantees the integrity of the data, protecting it from being corrupted or modified by attackers.
Compliance:
Purpose: Many industries have regulatory requirements that require the use of encryption for data transmission.
Benefit: Helps organizations comply with regulations such as GDPR, HIPAA, PCI-DSS, and others, avoiding legal penalties and maintaining trust.
User Trust and Confidence:
Purpose: Modern web browsers display visual indicators (like a padlock icon or green address bar) when a website uses TLS, signaling to users that the connection is secure.
Benefit: Makes users more likely to trust the website. This is especially important for online stores, online banking, and any platform that handles sensitive information.
SEO Benefits:
Purpose: Search engines like Google prefer HTTPS-enabled websites in their ranking algorithms.
Benefit: Improves search engine ranking and visibility, leading to increased traffic and better user engagement.
Protection Against Attacks:
Purpose: TLS helps protect against various types of cyberattacks, including eavesdropping, man-in-the-middle attacks, and session hijacking.
Benefit: Improves the overall security posture of the server and the network, reducing the risk of data breaches and other security incidents.
Secure API Communications:
Purpose: For applications that use APIs to communicate with backend services, enabling TLS ensures that the data exchanged via APIs is encrypted and secure.
Benefit: Protects sensitive data transmitted through APIs, which is critical for mobile apps, web applications, and IoT devices.
Internal Security:
Purpose: Within corporate environments, enabling TLS for internal web applications and services ensures secure communication within the network.
Benefit: Protects internal data from being intercepted and ensures that sensitive information remains confidential within the organization.
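In the AS3-style schema, the two TLS directions are attached with separately named properties: serverTLS references the client-side TLS (decryption) settings, while clientTLS references the server-side TLS (re-encryption) settings. The sketch below is illustrative; the certificate reference in particular depends on how certificates are managed in your deployment, so verify the exact structure against the CM Schema Reference:

```json
"secure_service": {
  "class": "Service_HTTPS",
  "virtualAddresses": ["192.0.2.10"],
  "virtualPort": 443,
  "serverTLS": "client_side_tls",
  "clientTLS": "server_side_tls",
  "pool": "web_pool"
},
"client_side_tls": {
  "class": "TLS_Server",
  "certificates": [{ "certificate": "app_cert" }]
},
"server_side_tls": {
  "class": "TLS_Client"
}
```

With both properties set, the virtual server decrypts incoming client traffic, can apply Layer 7 processing, and re-encrypts traffic before sending it to the pool members.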
Enable SNAT
SNAT stands for Source Network Address Translation. It’s a type of Network Address Translation (NAT) that modifies the source address of IP packets as they pass through a router or firewall. This is typically used to allow multiple devices on a local network to access external networks, such as the Internet, using a single public IP address. By default, SNAT is enabled.
Enable Auto SNAT
Auto SNAT (Automatic Source Network Address Translation) is a feature commonly used in cloud computing environments, particularly in services like Microsoft Azure. SNAT itself is a form of NAT where the source IP address of outgoing packets from a private network is replaced with a public IP address. This is essential for instances where internal resources need to communicate with external services on the Internet or other external networks. In the context of Azure, Auto SNAT simplifies the management of outbound connectivity for virtual machines (VMs) in a virtual network. When Auto SNAT is enabled, Azure automatically translates private IP addresses to public IP addresses, letting VMs initiate outbound connections without a manually configured NAT gateway or load balancer.
Enable FastL4
FastL4 refers to a setting or feature found in various network and application delivery controllers, such as those from F5 Networks. FastL4 is a configuration mode designed to improve the performance of Layer 4 (L4) traffic, which includes transport layer protocols like TCP and UDP. When you enable FastL4, you’re essentially configuring the system to handle traffic at the transport layer with minimal processing. This is particularly useful for scenarios where high throughput and low latency are critical, and where advanced application-layer (Layer 7) processing features are not required.
Enable HTTP2 Profile
HTTP2 Profile typically refers to a configuration setting or option in web servers, load balancers, or CDN services that allows you to enable HTTP/2 protocol support. Enabling HTTP/2 can provide several benefits, including:
Multiplexing: Multiple requests and responses can be sent in parallel over a single TCP connection, reducing latency.
Header Compression: HTTP/2 uses HPACK compression to reduce the size of HTTP headers, which can improve performance, especially for repetitive headers.
Stream Prioritization: Clients can specify the priority of different streams, allowing more important resources to be loaded first.
Binary Protocol: Unlike HTTP/1.1, which is text-based, HTTP/2 uses a binary format, reducing parsing overhead and making it more efficient.
To enable HTTP/2, you generally need to ensure that your web server or CDN supports it and then turn it on through the appropriate configuration settings.
Enable Address Translation
Enable Address Translation typically enables Network Address Translation (NAT) on a router or firewall. NAT is a method used in networking to remap one IP address space into another by modifying network address information in the IP header of packets while they are in transit across a traffic routing device. This process is most commonly used to enable multiple devices on a local network to access the internet using a single public IP address. By default, the Address Translation profile is enabled.
Enable HTTP Profile
Enable HTTP Profile refers to configuration settings or options related to the HTTP protocol used in web servers, load balancers, application delivery controllers, or similar networking equipment. These profiles help improve and manage HTTP traffic according to specific requirements or use cases. Enabling an HTTP profile can help improve performance, security, or compatibility. Configuring an HTTP profile can be a useful tool for managing web traffic.
Enable UDP Profile
Enable UDP Profile typically refers to configuration settings or options related to the User Datagram Protocol (UDP) in networking equipment such as load balancers, application delivery controllers (ADCs), firewalls, or even certain types of servers. These profiles help manage and optimize UDP traffic according to specific requirements or use cases. In UDP Idle Timeout, enter the time in seconds; this specifies how long a connection can remain idle (has no traffic) before the system deletes the connection.
Note: Enabling the UDP profile automatically disables the TCP and FastL4 profiles.
Enable TCP Profile
Enable TCP Profile generally refers to configuration settings or options related to the Transmission Control Protocol (TCP) in networking equipment such as load balancers, application delivery controllers (ADCs), firewalls, or even certain types of servers. TCP is a connection-oriented protocol that ensures reliable and ordered delivery of data between devices on a network. Enabling a TCP profile typically involves fine-tuning TCP settings to improve performance, reliability, and security. In TCP Idle Timeout, enter the time in seconds; this specifies how long a connection can remain idle (has no traffic) before the system deletes the connection.
Note: Enabling the TCP profile automatically disables the UDP and FastL4 profiles.
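As an illustrative sketch of configuring an idle timeout, a UDP service can reference a custom UDP profile in a declaration. The class and property names follow the AS3-style schema, and the service and pool names are placeholders; verify against the CM Schema Reference:

```json
"dns_service": {
  "class": "Service_UDP",
  "virtualAddresses": ["192.0.2.20"],
  "virtualPort": 53,
  "profileUDP": { "use": "custom_udp" },
  "pool": "dns_pool"
},
"custom_udp": {
  "class": "UDP_Profile",
  "idleTimeout": 60
}
```

Here, any UDP connection with no traffic for 60 seconds is deleted by the system.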
Enable Connection Mirroring
Enable Connection Mirroring is a feature for load balancers, firewalls, and application delivery controllers (ADCs). This feature allows the duplication or mirroring of network traffic for specific connections to another device or location. Connection mirroring can be useful for various purposes, including troubleshooting, monitoring, analytics, and security. By default, connection mirroring profiles are enabled.
About security policies in virtual servers¶
You can add, remove, or change security policies on virtual servers.
You can select multiple security policies. The following options are available:
Use a WAF Policy: Select the WAF policy to attach to the virtual server. Click Create to create a new WAF policy or click Clone to clone a selected policy. For more information about properties while creating a WAF policy, refer to Create a new WAF policy.
Use an Access Policy: Select the access policy to attach to the application; you can also select a per-request access policy for the application. The drop-down lists the available access policies. To create an access policy, refer to How To: Create and manage policies using BIG-IP Central Manager.
Use an SSL Orchestrator Policy: Select the SSL Orchestrator policy for the virtual server. The drop-down lists the available SSLO policies. To create a new SSLO policy, refer to How to: Manage Security Policies.
Use an SSL Orchestrator Static Service Chain: Select one or more inspection services for the virtual server. If no inspection services are available, click Start Adding and select the inspection services from the drop-down. To learn more about inspection services and how to create them, refer to Overview: Inspection Services.
About iRules in virtual servers¶
You can select one or more iRules to attach to the virtual server. The iRules execute in the listed order, with the first iRule in the list executing first. Use the up or down arrows to change the priority order. For more information, refer to How to: Create and manage iRules on BIG-IP Next Central Manager.
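In a declaration, attached iRules are typically listed as an ordered array, with the first entry executing first. The service and iRule names here are placeholders:

```json
"web_service": {
  "class": "Service_HTTP",
  "virtualAddresses": ["192.0.2.10"],
  "virtualPort": 80,
  "iRules": ["redirect_old_urls", "log_client_info"],
  "pool": "web_pool"
}
```

Reordering the array changes the execution priority, just as the up and down arrows do in the Central Manager interface.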
About network configurations in virtual servers¶
Network configurations in BIG-IP Next Central Manager refer to the set of properties used to manage the network.
The following properties are available:
Enable VLANs: Enable VLANs to filter the VLANs on which the virtual server listens for application traffic. Select the VLANs so that the system detects and manages traffic arriving from specific VLANs. The default VLAN options are the VLANs available on the instance selected for the application.
Enable VRFs: Enable VRFs, or VLANs on a VRF, to filter the VRFs on which the virtual server listens.
Auto Last Hop: When enabled, allows the BIG-IP Next to send return traffic from the pool to the MAC address of the original client, regardless of the network or interface listed in the routing table. This ensures that the client receives the return traffic even if there is no corresponding route, such as when the BIG-IP Next does not have a default route and the client is on a remote network. This is useful for load-balancing transparent devices that do not modify the source IP address of packets. Without Auto Last Hop, there is a risk of asymmetric routing if the BIG-IP Next sends return traffic to a different transparent node. The auto last hop is enabled by default.
Note: To configure virtual servers, refer to How to: Manage Virtual Servers.