Last updated on: 2024-03-26 06:01:13.

Xen Project: BIG-IP VE Setup

To deploy BIG-IP Virtual Edition (VE) on Xen Project, you will perform these tasks.

1. Choose the license you want to buy, the BIG-IP VE modules you want, and the throughput you need. See K14810: Overview of BIG-IP VE license and throughput limits on the AskF5 Knowledge Base for details.
2. Confirm that you are running a hypervisor version that is compatible with a BIG-IP VE release. See BIG-IP Virtual Edition Supported Platforms for details.
3. Verify that the host hardware meets the recommended requirements.
4. If you plan to use SR-IOV, enable it on the hypervisor.
5. Download a BIG-IP VE image and deploy it.
6. If you are running a multi-NIC configuration without DHCP, manually assign an IP address for the BIG-IP Configuration utility.

After you complete these tasks, you can log in to the BIG-IP VE system and run the Setup utility to perform basic network configuration.

About single NIC and multi-NIC configurations

A typical BIG-IP VE configuration can include four NICs: one for management, one for internal, one for external, and one for high availability.

However, if you want to create a VM for a quick test, you can create a configuration with just one NIC. In this case, BIG-IP VE creates basic networking objects for you.

When BIG-IP VE first boots, it determines the number of active NICs. If BIG-IP VE detects one NIC, then:

  • Networking objects (vNIC 1.0, a VLAN named Internal, and an associated self IP address) are created automatically for you.

  • The port for the Configuration utility is moved from 443 to 8443.

    Note

    If there is no DHCP server in your environment and no IP address is automatically assigned, the networking objects are not created and the port is not moved. In that case, configure the addresses manually. The following example uses the same IP address (192.168.80.53/24) for both the management IP and the self IP:

    1. Disable DHCP and enable setting a static address, tmsh modify sys global-settings mgmt-dhcp disabled. See this routes topic for more information.
    2. Disable single NIC auto-config, tmsh modify sys db provision.1nicautoconfig value disable. See this KVM topic for BIG-IP VE 13.1.X for more information.
    3. Ensure management route will persist, tmsh modify sys db provision.1nic value forced_enable.
    4. Move management port, tmsh modify sys httpd ssl-port 8443. See this K31003634 article for more information.
    5. Add TCP port to the default port lockdown protocols and services, tmsh modify net self-allow defaults add { tcp:8443 }.
    6. Configure static management IP address, tmsh create sys management-ip 192.168.80.53/24 description 'provisioned by tmos_static_mgmt'
    7. Create and attach the internal VLAN to interface 1.0, tmsh create net vlan internal { interfaces replace-all-with { 1.0 { } } tag 4094 mtu 1450 }. Be aware that this configuration may already exist and can produce the following error: “The requested VLAN (/Common/internal) already exists in partition Common.”
    8. Create self IP, assign the same IP as the management IP, and assign internal VLAN to default port lockdown policy, tmsh create net self self_1nic { address 192.168.80.53/24 allow-service default vlan internal }.
    9. Create management route gateway, tmsh create sys management-route default gateway 192.168.80.1.
    10. Define the TMM default route, tmsh create net route default network default gw 192.168.80.1.
    11. Save the configuration, tmsh save sys config base.
  • High availability (failover) is not supported, but config sync is supported.

  • VLANs must have untagged interfaces.

If BIG-IP VE detects multiple NICs, then you create the networking objects manually:

  • The port for the Configuration utility remains 443.
  • You can change the number of NICs after first boot and move from single to multi-NIC and vice versa.
  • VLANs can have tagged interfaces.

Prerequisites for BIG-IP Virtual Edition

Host CPU requirements

The host hardware CPU must meet the following requirements.

  • The CPU must have 64-bit architecture.
  • The CPU must have virtualization support (AMD-V or Intel VT-x) enabled in the BIOS.
  • The CPU must support a one-to-one, thread-to-defined virtual CPU ratio, or on single-threading architectures, support at least one core per defined virtual CPU.
  • If your CPU supports the Advanced Encryption Standard New Instruction (AES-NI), SSL encryption processing on BIG-IP VE will be faster. Contact your CPU vendor for details about which CPUs provide AES-NI support.
  • Reserve the appropriate CPU capacity for the required MHz per core. For example, if the hypervisor has 2.0 GHz cores and the VE is configured with 4 cores, you need 4 x 2.0 GHz reserved, or 8 GHz (8000 MHz) total.
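As a quick sanity check, the reservation arithmetic can be expressed in shell; the 2.0 GHz and 4-core figures are just the example values above:

```shell
# CPU reservation needed = cores assigned to the VE x core clock speed
cores=4            # vCPUs assigned to BIG-IP VE
mhz_per_core=2000  # hypervisor core speed in MHz (2.0 GHz)
reserve_mhz=$((cores * mhz_per_core))
echo "Reserve ${reserve_mhz} MHz"   # Reserve 8000 MHz
```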

Host memory requirements

Number of cores   Memory required
1 core            2 GB
2 cores           4 GB
4 cores           8 GB
8 cores           16 GB

Configure SR-IOV on the hypervisor

To increase performance, you can enable Single Root I/O Virtualization (SR-IOV). You need an SR-IOV-compatible network interface card (NIC) installed and the SR-IOV BIOS must be enabled.

See the Xen Project documentation for details.

To complete SR-IOV configuration, after you deploy BIG-IP VE, you must add three PCI device NICs and map them to your networks.

Virtual machine memory requirements

The guest should have a minimum of 4 GB of RAM for the initial 2 virtual CPUs. For each additional CPU, you should add an additional 2 GB of RAM.
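The sizing rule above can be sketched as a small shell calculation (vcpus=4 is just an illustrative value):

```shell
# RAM sizing: 4 GB covers the first 2 vCPUs, plus 2 GB for each additional vCPU
vcpus=4
ram_gb=$((4 + 2 * (vcpus - 2)))
echo "${ram_gb} GB"   # 8 GB for 4 vCPUs
```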

If you license additional modules, you should add memory.

  • 4 GB or fewer: two modules maximum. Application Acceleration Manager (AAM) can be provisioned as standalone only.
  • 4-8 GB: three modules maximum. BIG-IP DNS does not count toward the module limit. Exception: AAM cannot be provisioned with any other module; AAM is standalone only.
  • 8 GB: three modules maximum. BIG-IP DNS does not count toward the module-combination limit.
  • 12 GB or more: all modules.

Important

To achieve licensing performance limits, all allocated memory must be reserved.

Virtual machine storage requirements

The amount of storage you need depends on the BIG-IP modules you want to use, and whether or not you intend to upgrade.

  • 9 GB (LTM_1SLOT): Local Traffic Manager (LTM) module only; no space for LTM upgrades. You can increase storage if you need to upgrade LTM or provision additional modules.
  • 40 GB (LTM): LTM module only, with space for installing LTM upgrades. You can increase storage if you decide to provision additional modules. You can also install another instance of LTM on a separate partition.
  • 60 GB (ALL_1SLOT): all modules except Secure Web Gateway (SWG); no space for installing upgrades. The Application Acceleration Manager (AAM) module requires 20 GB of additional storage dedicated to AAM. If you are not using AAM, you can remove the datastore disk before starting the VM.
  • 82 GB (ALL): all modules except SWG, with space for installing upgrades. AAM requires 20 GB of additional storage dedicated to AAM. If you are not using AAM, you can remove the datastore disk before starting the VM.

For production environments, virtual disks should be deployed Thick (allocated up front). Thin deployments are acceptable for lab environments.

Note

To change the disk size after deploying the BIG-IP system, see Increase disk space for BIG-IP VE.

Virtual machine network interfaces

When you deploy BIG-IP VE, a specific number of virtual network interfaces (vNICs) are available.

Each virtual machine can have a maximum of 28 NICs.

Deploy BIG-IP VE in Xen Project

To deploy BIG-IP VE, you will create and execute a configuration file.

Important: Do not change the configuration (CPU, RAM, and network adapters) of the Xen Project guest environment to settings less powerful than those recommended and described here.

  1. In a browser, open the F5 Downloads page and log in.

  2. On the Downloads Overview page, click Find a Download.

  3. Under Product Line, click the link similar to BIG-IP v.x/Virtual Edition.

  4. Click the link similar to x.x.x_Virtual-Edition.

  5. If the End User Software License is displayed, read it and then click I Accept.

  6. Download the BIG-IP VE file package ending with qcow2.zip.

  7. Extract the file from the Zip archive and save it where your qcow2 files reside on the Xen Project server.

  8. Use VNC to access the Xen Project server, and then convert the qcow2 image to the raw format necessary for Xen Project. You can use the following syntax to convert the image.

    # qemu-img convert -f qcow2 -O raw <qcow_file_name>.qcow2 <raw_file_name>.raw
    
  9. Generate a MAC address for the network interface card associated with the management interface.

    Important: Be sure that the MAC address you create starts with the prefix 00:16:3e:.

    You can use a tool such as MAC Address Generator to create this address.
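    If you prefer not to use an online generator, a random address in the Xen-reserved 00:16:3e range can be produced from the shell (a sketch; any generation method works as long as the prefix is 00:16:3e):

```shell
# Build a MAC with the Xen-reserved OUI 00:16:3e and three random octets
mac=$(printf '00:16:3e:%02x:%02x:%02x' \
    $((RANDOM % 256)) $((RANDOM % 256)) $((RANDOM % 256)))
echo "$mac"
```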

  10. Use an editor to create a definition file that specifies the required parameters for your VM.

    Use the following example to create a configuration file with the parameters and settings you need.

    vi /etc/xen/<config_file_name>
    
    name = <config_file_name>
    maxmem = 4096
    memory = 4096
    vcpus = 2
    builder = "hvm"
    boot = "c"
    pae = 1
    acpi = 1
    apic = 1
    hpet = 1
    localtime = 0
    on_poweroff = "destroy"
    on_reboot = "restart"
    on_crash = "restart"
    sdl = 0
    vnc = 1
    vncunused = 1
    keymap = "en-us"
    disk = [ "file:/mnt/xen-bender/bigip/<raw_file_name.raw>,hda,w" ]
    vif = [ "mac=00:16:3e:<mgmt_interface_mac>,bridge=mgmtbr,script=vif-bridge,type=vif",
    "mac=00:16:3e:<external_interface_mac>,bridge=ext_bridge,script=vif-bridge,type=vif",
    "mac=00:16:3e:<internal_interface_mac>,bridge=int_bridge,script=vif-bridge,type=vif",]
    parallel = "none"
    serial = "pty"
    #pci = [ '05:10.0', '05:10.1' ]
    

    Important: The last line of the example shows an optional entry, used for SR-IOV, with the IDs of the PCI external and internal network interface card (NIC). If you use this entry, omit the external and internal bridges specified in the vif section.
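    For example, an SR-IOV variant of the file might keep only the management vif and pass the two virtual functions through directly; the PCI addresses shown are illustrative placeholders, not values from your host:

```
    vif = [ "mac=00:16:3e:<mgmt_interface_mac>,bridge=mgmtbr,script=vif-bridge,type=vif" ]
    pci = [ '05:10.0', '05:10.1' ]
```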

    After you have tested and saved your configuration file, you are ready to create the BIG-IP VE.

  11. Run the configuration file using an open source tool such as xm.

    xm create /etc/xen/<config_file_name>
    

    If the startup was successful, the console will display text such as: Started domain <config_file_name>(id=444).

  12. Allow sufficient time for the boot process to complete, and then connect to the BIG-IP VE console.

    # xm console <config_file_name>
    

Use the BIG-IP Configuration utility tool to set the management IP address

If your network has DHCP, an IP address is automatically assigned to BIG-IP VE during deployment. You can use this address to access the BIG-IP VE Configuration utility or tmsh command-line utility.

If no IP address was assigned, you can assign one by using the BIG-IP Configuration utility tool.

  1. Connect to the virtual machine by using the hypervisor’s console.

  2. At the login prompt, type root.

  3. At the password prompt, type default.

    Note

    If prompted, change your password.

  4. Type config and press Enter.

    The F5 Management Port Setup screen opens.

  5. Click OK.

  6. Select No and follow the instructions for manually assigning an IP address and netmask for the management port.

    You can use a tmsh command, such as tmsh show sys management-ip, to confirm that the management IP address was set properly.

    You can now log in to the BIG-IP VE Configuration utility using a browser, and license and provision BIG-IP VE.

See Also