Networking Overview


To support the high-performance networking demands of communication service providers (CoSPs), Service Proxy for Kubernetes (SPK) requires three primary networking components: SR-IOV, OVN-Kubernetes, and BGP. The sections below offer a high-level overview of each component, helping to visualize how they integrate in the container platform:


SR-IOV

SR-IOV uses Physical Functions (PFs) to segment compliant PCIe devices into multiple Virtual Functions (VFs). VFs are then injected into containers during deployment, enabling direct access to network interfaces. SR-IOV VFs are first defined in the OpenShift networking configuration, and then referenced using SPK Helm overrides. The sections below offer a bit more detail on these configuration objects:

OpenShift configuration

The OpenShift network node policies and network attachment definitions must be defined and installed first, providing SR-IOV virtual functions (VFs) to the cluster nodes and Pods.

In this example, bare metal interfaces are referenced in the network node policies, and the network attachment definitions reference node policies by Resource Name:
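
As a minimal sketch (the interface name, resource name, and namespaces below are illustrative assumptions, not values from this document), the two objects might look similar to this:

    # Illustrative only: interface names, resource names, and namespaces are assumptions.
    apiVersion: sriovnetwork.openshift.io/v1
    kind: SriovNetworkNodePolicy
    metadata:
      name: sriov-node-policy-external
      namespace: openshift-sriov-network-operator
    spec:
      resourceName: sriov_external            # referenced by the network attachment definition below
      nodeSelector:
        feature.node.kubernetes.io/network-sriov.capable: "true"
      numVfs: 16
      nicSelector:
        pfNames: ["ens1f0"]                   # bare metal PF interface
      deviceType: vfio-pci
    ---
    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: sriov-external
      namespace: spk-ingress
      annotations:
        k8s.v1.cni.cncf.io/resourceName: openshift.io/sriov_external   # matches resourceName above
    spec:
      config: '{ "cniVersion": "0.3.1", "type": "sriov" }'

A second, equivalent pair of objects would typically be defined for the internal network.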


SPK configuration

The Ingress Controller installation requires the following Helm tmm parameters to reference the OpenShift network node policies and network attachment definitions:

  • cniNetworks - References the SR-IOV network attachment definitions, and must be in the same order as the network node policies.
  • OPENSHIFT_VFIO_RESOURCE - References the SR-IOV network node policies, and orders the f5-tmm container’s interface list.

Once the Ingress Controller is installed, TMM’s external and internal interfaces are configured using the F5SPKVlan Custom Resource (CR).

In this example, the SR-IOV VFs are referenced and ordered using Helm values, and configured as interfaces using the F5SPKVlan CR:
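
As a sketch only (the network names, resource names, interface numbers, and addresses are illustrative, and the exact placement of these parameters in the values file may differ from this outline), the Helm overrides and the internal F5SPKVlan CR might look similar to this:

    # Helm overrides (illustrative values).
    tmm:
      # Network attachment definitions, in the same order as the network node policies.
      cniNetworks: "spk-ingress/sriov-external,spk-ingress/sriov-internal"
      customEnvVars:
        - name: OPENSHIFT_VFIO_RESOURCE       # network node policy resource names; orders TMM's interfaces
          value: "sriov_external,sriov_internal"
    ---
    # F5SPKVlan CR (illustrative values): configures TMM's internal interface.
    apiVersion: "k8s.f5net.com/v1"
    kind: F5SPKVlan
    metadata:
      name: vlan-internal
      namespace: spk-ingress
    spec:
      name: net-internal
      interfaces:
        - "1.2"                               # second interface in the ordered list above
      internal: true                          # marks this VLAN as TMM's internal (egress gateway) interface
      selfip_v4s:
        - 10.20.0.100
      prefixlen_v4: 24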



OVN-Kubernetes

The Open Virtual Network (OVN) Kubernetes CNI is based on Open vSwitch. OVN-Kubernetes must be used as the default CNI to enable features relevant to SPK such as egress-gw.

Note: OVN-Kubernetes is also referred to as iCNI2.0, or Intelligent CNI 2.0.
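
To confirm that OVN-Kubernetes is the cluster's default CNI, the cluster network configuration can be queried with a standard OpenShift command (not specific to SPK):

    oc get network.config/cluster -o jsonpath='{.status.networkType}'

The command should return OVNKubernetes.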

The egress-gw feature in OVN-Kubernetes enables Pods within a specific Project to use alternate gateways. This is important for enabling Pods in the controller.watchNamespace Project to route egress traffic through Service Proxy TMM using Virtual Functions (VFs), instead of the default (virtual) node network. This configuration is enabled by adding OVN-Kubernetes Annotations to the Service Proxy TMM Pod.

Gateway Annotations

Using OVN, each node in the cluster is assigned an IP address subnet for the Pods it hosts, and is designated as the default gateway for that subnet. When Pods initiate connections to external resources, OVN routes the network packets to Service Proxy TMM based on the following OVN annotations:

  • One annotation sets the Project for Pod egress traffic, using the Ingress Controller watchNamespace Helm parameter.
  • One annotation sets the Pod egress gateway, using the F5SPKVlan spec.internal parameter.
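
As a sketch (the annotation keys shown are the upstream OVN-Kubernetes external-gateway names, assumed here, and the Project and network values are illustrative), the annotations applied to the Service Proxy TMM Pod take roughly this form:

    # Sketch only: annotation keys are assumed from upstream OVN-Kubernetes; values are illustrative.
    metadata:
      annotations:
        k8s.ovn.org/routing-namespaces: spk-apps     # Project for Pod egress traffic (watchNamespace)
        k8s.ovn.org/routing-network: net-internal    # network carrying the F5SPKVlan internal self IP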

In this example, OVN creates mapping entries in the OVN DB, routing egress traffic to TMM’s internal VLAN self IP address:


Viewing OVN routes

Once the application Pods are installed in the Project, use the steps below to verify that the OVN DB routes point to Service Proxy TMM’s internal interface.

Note: The OVN-Kubernetes deployment is in the openshift-ovn-kubernetes Project.

  1. Log in to the OVN DB:

    oc exec -it ds/ovnkube-master -n openshift-ovn-kubernetes -- bash
  2. View the OVN routing table entries using TMM’s VLAN self IP address as a filter:

    ovn-nbctl --no-leader-only find Logical_Router_Static_Route nexthop=<tmm self IP>

    In this example, TMM’s self IP address is used as the nexthop filter value, and routing entries exist for two application Pod IP addresses, each pointing to TMM’s self IP address:

    _uuid               : 61b6f74d-2319-4e61-908c-0f27c927c450
    ip_prefix           : ""
    nexthop             : ""
    options             : {ecmp_symmetric_reply="true"}
    policy              : src-ip
    _uuid               : 04c121ff-34ca-4a54-ab08-c94b7d62ff1b
    ip_prefix           : ""
    nexthop             : ""
    options             : {ecmp_symmetric_reply="true"}
    policy              : src-ip

The OVN DB example confirms the routing configuration points to TMM’s VLAN self IP address. If these entries do not exist, the OVN annotations are not being applied, and further OVN-Kubernetes troubleshooting should be performed.


Scaling TMM

When TMM is scaled beyond a single instance in the Project, each TMM Pod receives a self IP address from the F5SPKVlan IP address list. OVN-Kubernetes also creates a routing entry in the DB for each of the Service Proxy TMM Pods, and routes egress traffic as follows:

  • OVN applies round robin load balancing across the TMM Pods for each new egress connection.
  • Connection tracking ensures traffic arriving on an ECMP route path returns via the same path.
  • Scaling TMM adds or deletes OVN DB routing entries for each Running TMM replica.

In this example, new connections are load balanced and connection tracked:
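
For illustration only (the addresses below are placeholders), with two TMM replicas the OVN DB might contain a pair of static routes for the same source Pod, one per TMM self IP address:

    ip_prefix           : "10.144.1.15"
    nexthop             : "10.20.0.100"
    options             : {ecmp_symmetric_reply="true"}
    policy              : src-ip

    ip_prefix           : "10.144.1.15"
    nexthop             : "10.20.0.101"
    options             : {ecmp_symmetric_reply="true"}
    policy              : src-ip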



BGP

SPK’s application traffic Custom Resources configure Service Proxy TMM with a virtual server IP address and load balancing pool. In order for external networks to learn TMM’s virtual server IP addresses, Service Proxy must deploy with the f5-tmm-routing container, and a Border Gateway Protocol (BGP) session must be established.

In this example, the f5-tmm-routing container advertises TMM’s virtual IP address to an external BGP peer:
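
As a sketch only (the parameter names and values below are illustrative assumptions; refer to the BGP Overview for the authoritative values schema), the Helm overrides that deploy the f5-tmm-routing container and define an external peer might look similar to this:

    # Illustrative only: parameter names and values are assumptions; see the BGP Overview.
    tmm:
      dynamicRouting:
        enabled: true                 # deploys the f5-tmm-routing container alongside f5-tmm
        tmmRouting:
          config:
            bgp:
              asn: 64512              # local AS number
              neighbors:
                - ip: 192.0.2.1       # external BGP peer
                  asn: 64511
                  acceptsIPv4: true   # advertise IPv4 virtual server addresses to this peer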


For assistance configuring BGP, refer to the BGP Overview.

Ingress packet path

With each of the networking components configured, and one of the SPK Custom Resources (CRs) installed, ingress packets traverse the network as follows:


