Service Proxy for Kubernetes (SPK) is a cloud-native application traffic management solution designed for communication service provider (CoSP) 5G networks. SPK integrates F5’s containerized Traffic Management Microkernel (TMM) and Custom Resource Definitions (CRDs) into the OpenShift container platform to proxy and load balance low-latency 5G workloads.
This document describes the SPK features and software components.
SPK supports the following protocols and features:
- TCP, UDP, SCTP, NGAP and Diameter traffic management
- OVN-Kubernetes CNI and SR-IOV interface networking
- Dual-stack IPv4/IPv6 networking capabilities
- Egress request routing for internal Pods
- Redundant data storage with persistence
- Diagnostics, statistics and debugging
- Centralized log collection
- Pod health monitoring
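To make the dual-stack capability concrete, the sketch below shows a standard Kubernetes dual-stack Service of the kind SPK can front; the Service name and selector label are illustrative assumptions, while `ipFamilyPolicy` and `ipFamilies` are upstream Kubernetes fields.

```yaml
# Illustrative only: a standard Kubernetes dual-stack Service.
# The name and selector are assumptions for this sketch.
apiVersion: v1
kind: Service
metadata:
  name: dual-stack-example     # hypothetical Service name
spec:
  ipFamilyPolicy: PreferDualStack   # request both IPv4 and IPv6 ClusterIPs
  ipFamilies:
    - IPv4
    - IPv6
  selector:
    app: example               # hypothetical Pod label
  ports:
    - protocol: TCP
      port: 80
```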
SPK software comprises three primary components:
Custom Ingress Controller

The Custom Ingress Controller watches the Kube-API for Custom Resource (CR) update events and configures the Service Proxy Pod based on each update. The Ingress Controller also monitors Kubernetes Service object Endpoints to dynamically update the Service Proxy TMM’s load balancing pools.
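The Endpoints the Ingress Controller monitors come from ordinary Kubernetes Service objects that select the application Pods. As a minimal sketch (the Service name, namespace, and label are assumptions, not values required by SPK), such a Service might look like:

```yaml
# Illustrative only: a standard Kubernetes Service whose Endpoints the
# Ingress Controller can watch to populate TMM load balancing pools.
apiVersion: v1
kind: Service
metadata:
  name: app-service          # hypothetical workload Service
  namespace: spk-apps        # hypothetical namespace
spec:
  selector:
    app: example-app         # Pods matching this label become Endpoints
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
```

When Pods matching the selector are added or removed, the Endpoints change, and the Ingress Controller updates TMM’s pool members accordingly.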
Custom Resource Definitions
Custom Resource Definitions (CRDs) extend the Kubernetes API, enabling Service Proxy TMM to be configured using SPK’s Custom Resource (CR) objects. CRs configure TMM to process application traffic using UDP, TCP, SCTP, NGAP and Diameter. CRs also configure TMM’s networking components such as self IP addresses and static routes.
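A hedged sketch of a TCP ingress CR follows, to show the shape of CR-driven configuration. SPK does ship an F5SPKIngressTCP CRD, but the apiVersion and field names below are assumptions for illustration and may differ from the CR reference for your SPK release.

```yaml
# Illustrative sketch of an SPK Custom Resource configuring TMM for TCP
# traffic. apiVersion, kind, and field names are assumptions; verify
# against the SPK CR reference for your release.
apiVersion: "ingresstcp.k8s.f5net.com/v1"
kind: F5SPKIngressTCP
metadata:
  name: tcp-ingress-example          # hypothetical CR name
  namespace: spk-apps                # hypothetical namespace
spec:
  destinationAddress: "192.168.10.50"   # external virtual server address
  destinationPort: 8080
  service:
    name: app-service                # Kubernetes Service providing pool members
    port: 8080
```

Applying a CR like this causes the Ingress Controller to configure a TMM virtual server listening on the destination address and port, with pool members drawn from the referenced Service’s Endpoints.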
Service Proxy

The Service Proxy Pod comprises one or more TMM containers that proxy and load balance low-latency application traffic between networks. Additional Service Proxy containers may also be installed to assist with dynamic routing, log collection and debugging.