Hierarchical Port Binding

Overview

Neutron Hierarchical Port Binding (HPB) allows users to dynamically allocate network segments for nodes connected to a switch fabric.

HPB relies on the Neutron ML2 drivers to identify network types and manage network resources. The F5 Integration for OpenStack Neutron LBaaS supports HPB on “vlan” and “opflex” networks. [1]

When using HPB, the F5 Agent needs to know which external provider network the BIG-IP device(s) connects to. This information allows the F5 Agent to discover Neutron provider attributes in that network and create corresponding network and LTM objects on the BIG-IP device(s).
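As a rough sketch, the F5 Agent is usually told which physical (provider) network the BIG-IP device connects to through its configuration file. The option name, ini path, section, service name, and the physnet1 value below are assumptions to verify against your f5-openstack-agent version's sample ini; they are not taken from this guide.

  # Assumed example only: point the F5 Agent at the provider (physical) network
  # the BIG-IP connects to, then restart the agent. Option name, ini path,
  # section, service name, and "physnet1" are placeholders to verify locally.
  sudo crudini --set /etc/neutron/services/f5/f5-openstack-agent.ini \
      DEFAULT f5_network_segment_physical_network physnet1
  sudo systemctl restart f5-openstack-agent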

Use Case - standard

A “standard” HPB deployment uses the built-in OpenStack ML2 drivers and doesn’t depend on any particular SDN controller or ML2 driver plugin. In this deployment, the F5 Agent can create services on networks with type: vlan.

In this use case, you can create LBaaS objects on an undercloud physical BIG-IP device/cluster for VLANs that are dynamically created in a specific network segment. As noted in the OpenStack documentation, this can be useful if you need your Neutron deployment to scale beyond the 4K-VLANs-per-physical-network limit. [2]
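For context, a minimal ML2 VLAN configuration on the Neutron controller might look like the following. This is a generic Neutron example rather than part of this guide; physnet1 and the 1600:1799 range are placeholders.

  # Generic Neutron ML2 example: enable the vlan type driver and map a
  # physical network to the VLAN range that dynamic segments are drawn from.
  # "physnet1" and the 1600:1799 range are placeholders.
  sudo crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers "vlan,vxlan"
  sudo crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types vxlan
  sudo crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_vlan network_vlan_ranges physnet1:1600:1799
  sudo systemctl restart neutron-server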

Figure: F5 LBaaSv2 Hierarchical Port Binding

Use Case - Cisco APIC/ACI, OpenStack OpFlex, and Red Hat OSP

This HPB deployment is specific to the Cisco ACI with OpenStack OpFlex Deployment Guide for Red Hat. It requires the Cisco Application Policy Infrastructure Controller (APIC) and Application Centric Infrastructure (ACI) fabric; Red Hat OpenStack Platform; and the OpenStack OpFlex ML2 plugin driver. In this deployment, the F5 Agent can create services on networks with type: vlan or type: opflex.

Note

This use case describes a reference architecture developed in partnership with Cisco and Red Hat.

Network topology

For this use case, the test topology consists of:

  • a small ACI Spine/Leaf network fabric;
  • 1 APIC cluster used to manage the fabric;
  • 1 OpenStack Neutron controller;
  • 2 OpenStack compute nodes;
  • 1 BIG-IP device with 2 NICs.
Physical Connectivity

  Interface                    Network connection
  BIG-IP mgmt interface        OpenStack management/API network
  BIG-IP NIC 1 (e.g., 1.1)     External network not managed by Neutron
  BIG-IP NIC 2 (e.g., 1.2)     Leaf switch ports in ACI fabric
  OpenStack compute nodes      Leaf switch ports in ACI fabric

Segmented VLANs from a specified VLAN pool (1600-1799) will carry traffic between the Neutron networks and the BIG-IP device. The BIG-IP device connects directly to an external network to simplify VIP allocation.

BIG-IP device setup

  • Two (2) VLANs configured in the Common partition: “external” and “internal”.
  • “Internal” connects to a switch port in the ACI fabric.
  • “External” connects to the external network (which Neutron doesn’t know about).
  • Each VLAN has a self IP with the following properties (a tmsh sketch follows this list):
    • Netmask: 255.255.255.0
    • Traffic Group: traffic-group-local-only
    • Partition: Common
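One way to create the two VLANs and their self IPs is with tmsh, as sketched below; the interface numbers and the 192.0.2.x / 198.51.100.x addresses are placeholders, not values from this guide.

  # Sketch, run on the BIG-IP: create the "external" and "internal" VLANs in
  # /Common and give each a non-floating self IP. Interfaces and addresses
  # are placeholders.
  tmsh create net vlan external interfaces add { 1.1 }
  tmsh create net vlan internal interfaces add { 1.2 }
  tmsh create net self self_external address 192.0.2.5/24 vlan external \
      traffic-group traffic-group-local-only allow-service default
  tmsh create net self self_internal address 198.51.100.5/24 vlan internal \
      traffic-group traffic-group-local-only allow-service default
  tmsh save sys config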

Note

You do not need to manually configure the VLANs in the VLAN pool on the BIG-IP device; HPB and the F5 Agent will create them automatically.

ACI setup

  • Follow the Cisco ACI with OpenStack OpFlex Deployment Guide for Red Hat to set up ACI, OpenStack, and the OpFlex ML2 plugin.
  • Create a VLAN pool in your desired range (1600-1799, in this example).
  • Create a physical domain for the BIG-IP device.
  • Associate the physical domain with the VLAN pool and the Attachable Access Entity Profile (AEP) you created for the OpenStack plugin.

Neutron setup

  • Two (2) subnets – Net100 and Net101
  • A “dummy” network: a flat network created using the CIDR of the external network connected to BIG-IP interface 1.1 (example Neutron CLI commands are sketched below).
  • L3-Out network representing traffic back out to the external network core.

Adding the “dummy” network to Neutron lets Neutron and the BIG-IP device reserve IPs from the network for allocation to LBaaS objects.
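The Neutron objects above might be created with CLI calls along these lines; the network names, the physical network label, and the CIDRs are placeholders rather than values from this guide (the L3-Out network is configured per the Cisco deployment guide and is not shown).

  # Illustrative only: two tenant networks/subnets plus the flat "dummy"
  # network that mirrors the external network on BIG-IP interface 1.1.
  # Names, the physnet label, and CIDRs are placeholders.
  neutron net-create net100
  neutron subnet-create net100 10.100.0.0/24 --name Net100
  neutron net-create net101
  neutron subnet-create net101 10.101.0.0/24 --name Net101
  neutron net-create dummy-net --shared \
      --provider:network_type flat --provider:physical_network physnet-external
  neutron subnet-create dummy-net 192.0.2.0/24 --name dummy-subnet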

Testing

  • Deploy a Neutron loadbalancer on subnet “Net100” (example commands for these steps are sketched after this list).
  • Create a listener (virtual server) on the loadbalancer.
  • Add a pool and two (2) members to the pool in subnet “Net101”.
  • Send traffic to the loadbalancer and verify that it is load balanced across the BIG-IP pool member endpoints.
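For reference, the test steps above correspond to neutron LBaaSv2 CLI calls roughly like the following; the names, member addresses, and the VIP are placeholders.

  # Sketch of the test sequence with the LBaaSv2 CLI; names and addresses
  # are placeholders.
  neutron lbaas-loadbalancer-create --name lb1 Net100
  neutron lbaas-listener-create --name listener1 --loadbalancer lb1 \
      --protocol HTTP --protocol-port 80
  neutron lbaas-pool-create --name pool1 --listener listener1 \
      --protocol HTTP --lb-algorithm ROUND_ROBIN
  neutron lbaas-member-create --subnet Net101 --address 10.101.0.10 \
      --protocol-port 80 pool1
  neutron lbaas-member-create --subnet Net101 --address 10.101.0.11 \
      --protocol-port 80 pool1
  # Send traffic to the loadbalancer's VIP and confirm both members respond
  curl http://<lb1-vip-address>/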

Footnotes

[1] The Cisco OpFlex ML2 plugin allows integration of the F5 Agent with Cisco ACI Fabric.
[2] OpenStack ML2 Hierarchical Port Binding specs.