Manage HTTP/2 Traffic for 5G Applications¶
HTTP/2¶
The Hypertext Transfer Protocol (HTTP) is an application protocol that has been the de facto standard for communication on the World Wide Web since its invention in 1989. From the release of HTTP/1.1 in 1997 until recently, there were few revisions to the protocol. In 2015, however, a reimagined version called HTTP/2 was introduced, which provides several ways of decreasing latency, especially when dealing with mobile platforms and server-intensive graphics and video.
5G Core (5GC)¶
Unlike 4G, most of the control plane signaling in the 5GC is based on HTTP/2; legacy protocols such as the GPRS Tunnelling Protocol (GTPv2) and Diameter are reserved for 4G/5G interworking. The 5GC adopts a new Service Based Architecture (SBA) in which control plane communication between Network Functions (NFs) is implemented using RESTful APIs carried over HTTP/2.
Such communication is both long-lived and peer-to-peer.
Long Lived¶
HTTP/2 connections are long lived because a single connection is used to send messages on behalf of multiple sessions (end-user devices). When a connection is established, the destination device specifies the maximum number of concurrent streams available (each stream identified by its own stream ID). The client device continues to use that connection, sending each message on a new stream, until all of the streams have been used; the connection is then closed and a new connection is opened. Each message (in its own stream) could well form part of a different session.
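As a quick illustration of this multiplexing (not part of the environment described here), curl can be asked to issue several requests in parallel; over HTTP/2 it carries them as separate streams on a single connection. The NF endpoint URL below is purely hypothetical.
# Hypothetical endpoint; with --parallel, curl multiplexes both requests over one HTTP/2 connection
curl -v --http2-prior-knowledge --parallel \
  http://nf-producer.example.com:8080/nsmf-pdusession/v1/sm-contexts \
  http://nf-producer.example.com:8080/nsmf-pdusession/v1/sm-contexts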
Peer to Peer¶
HTTP/2 connections are peer to peer because the relationship between different NFs is no longer a simple client-server one. Messages flow in both directions between NFs: one NF connects and sends request messages to another NF, and the receiving NF responds on the same connection. If the receiving NF needs to send a request to the originating NF, it opens another connection in the other direction.
Binding Indication¶
A NF Service Consumer can communicate either directly or indirectly with a NF Service Producer. When communication is indirect, a Service Communication Proxy (SCP) is inserted between the Consumer and Producer.
When a Consumer communicates with a Producer, the Producer may return a binding indication to the Consumer. The Consumer stores the received binding indication and uses it for subsequent requests about the data context.
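For reference, 3GPP TS 29.500 defines the 3gpp-Sbi-Binding custom HTTP header for carrying the binding indication. A response carrying one might look roughly like the following (illustrative values only; note that HTTP/2 header field names are lower case on the wire):
:status: 201
location: http://smf.example.com/nsmf-pdusession/v1/sm-contexts/12345
3gpp-sbi-binding: bl=nf-instance; nfinst=54804518-4191-46b3-955c-ac631f953ed8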
Service Based Interface (SBI)¶
The SBI is a core component of the Service Communication Proxy (SCP) which forms part of the control-plane in a 5G network. One of its functions is to control the routing of HTTP/2 messages between instances of consumer and producer NFs so that responses can be routed back to the originator, whether that be the consumer or producer.
Persistence is required to ensure that all communication for a session occurs between the same two instances of a NF. Because many Network Function APIs are implemented using a subscribe/notify pattern, this communication is often bi-directional, so the persistence must be bi-directional as well.
iRules will be used to store and retrieve session persistence data in dSSM.
Distributed Session State Management (dSSM)¶
The service proxy (TMM) Pods have no knowledge of each other, even when they are installed in the same namespace, so if their state is to be shared, it must be stored elsewhere.
dSSM provides centralised, persistent storage of session state, including persistence data, for all service proxy Pods, using a Redis database.
Further information about dSSM can be found here.
Redis is an open source (BSD licensed), in-memory data structure store which can be used as a database, cache or message broker. It supports data structures such as strings, hashes, lists and sets.
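A few generic redis-cli commands illustrate the data types mentioned above; this is purely illustrative and is not SPK's actual key layout, which is internal to TMM and dSSM.
redis-cli SET persist:example "10.128.2.15:6379"       # a string value
redis-cli HSET persist:example:meta route pool-member-1  # a field within a hash
redis-cli HGETALL persist:example:meta                  # read the whole hash back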
Architecture¶
The dSSM subsystem consists of a StatefulSet of three Sentinel Pods and a StatefulSet of three Redis database Pods, hosted on different physical nodes for redundancy.
Fluentbit is a sidecar which transmits Redis logs to a Fluentd Pod.
The dSSM and service proxy Pods should be in different Kubernetes namespaces.
NOTE: Sentinel and database Pods are co-located on the same node for faster inter-pod communication, and this could be extended to include the TMM. Bear in mind, however, that dSSM is a cluster-wide resource: a single dSSM instance could support many different TMM deployments in many different namespaces (10 or more), so maintaining any affinity of TMM to dSSM may not be feasible. Datapath latency is only a secondary goal; the primary goal is to keep Redis, and therefore the Sentinels, available so that they can quickly detect a Pod failure.
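The co-location and spreading described above are the kind of behaviour that standard Kubernetes pod affinity rules express. The following is a minimal sketch, illustrative only and not the dSSM chart's actual template (the app labels are taken from the Service selectors shown later in the Troubleshooting section).
affinity:
  podAffinity:
    # keep each Sentinel on the same node as a database Pod
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: f5-dssm-db
        topologyKey: kubernetes.io/hostname
  podAntiAffinity:
    # spread the Sentinel replicas across different nodes
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: f5-dssm-sentinel
        topologyKey: kubernetes.io/hostname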
Secrets¶
dSSM can operate in either TLS or plain-text mode. By default, TLS is enabled and dSSM clients use Kubernetes Secrets to establish a secure communication channel with dSSM. Presently the Sentinels and database use the same certificates and keys.
Run this script to generate the dSSM secret manifest files (certs-secret.yaml and keys-secret.yaml).
The secrets should be added to the namespace used by dSSM prior to installing the dSSM Helm chart.
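For example (using the manifest file names generated above):
oc apply -f certs-secret.yaml -n peter-dssm
oc apply -f keys-secret.yaml -n peter-dssm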
oc get secrets -n peter-dssm | grep dssm
dssm-certs-secret Opaque 3 4m9s
dssm-keys-secret Opaque 1 3m51s
Security Context Constraint (SCC)¶
A SCC is an OpenShift resource that, like a Kubernetes Security Context, restricts the resources a Pod can access. The primary purpose of both is to limit a Pod’s access to the host environment. The service account in the namespace where dSSM is installed must be granted the privileged SCC.
oc get sa -n peter-dssm (service account)
NAME SECRETS AGE
builder 2 146m
default 2 146m
deployer 2 146m
#
oc adm policy add-scc-to-user privileged -n peter-dssm -z default
clusterrole.rbac.authorization.k8s.io/system:openshift:scc:privileged added: "default"
#
oc describe rolebindings system:openshift:scc:privileged -n peter-dssm
Name: system:openshift:scc:privileged
Labels: <none>
Annotations: <none>
Role:
Kind: ClusterRole
Name: system:openshift:scc:privileged
Subjects:
Kind Name Namespace
---- ---- ---------
ServiceAccount default peter-dssm
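Once the dSSM Pods are running, the SCC actually applied to a Pod can be confirmed from its openshift.io/scc annotation (an illustrative check using a Pod name from later in this article):
oc get pod f5-dssm-db-0 -n peter-dssm -o jsonpath='{.metadata.annotations.openshift\.io/scc}{"\n"}'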
FluentD Helm Chart¶
Modify the default values.yaml in the Helm chart so that FluentD can access a persistent volume, and enable suitable logging.
# FluentD control to write logs to volume. FluentD PersistentVolumeClaim automatically created.
persistence:
  enabled: true
  ## Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is set, choosing the default provisioner.
  ##
  storageClass: "nfs-client"
# Configuration for f5ingress logs
f5ingress_logs:
  # Enable/disable f5ingress logs processing
  enabled: true
  # Enable/disable the output to stdout for the debug purposes
  stdout: true
# Configuration for dssm logs
dssm_logs:
  # Enable/disable dssm logs processing
  enabled: true
  # Enable/disable the output to stdout for the debug purposes
  stdout: true
dssm_sentinel_logs:
  # Enable/disable sentinel logs processing
  enabled: true
  # Enable/disable the output to stdout for the debug purposes
  stdout: true
oc get sc (storage class)
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
nfs-client (default) cluster.local/nfs-subdir-external-provisioner Delete Immediate true 55d
#
helm install fluentd f5-toda-fluentd-1.10.1.tgz -n peter-net --values values.yaml
NAME: fluentd
LAST DEPLOYED: Mon Jan 23 03:45:38 2023
NAMESPACE: peter-net
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Log aggregator - FluentD is deployed, which get logs from fluentbit sidecars.
FluentD outputs:
'stdout' is "true"
'persistent volume' is "true"
Persistent volume claim created with:
accessModes: "ReadWriteOnce"
storage: "3Gi"
storageClassName: nfs-client
FluentD hostname: f5-toda-fluentd.peter-net.svc.cluster.local.
FluentD port: "54321"
Use this info to connect to it:
--set f5-toda-logging.fluentd.host="f5-toda-fluentd.peter-net.svc.cluster.local."
--set f5-toda-logging.fluentd.port=54321
TLS is NOT enabled, connection is insecure. Deploy with --set tls.enabled=true
FluentD service IP family:
serviceIpFamily: .Values.serviceIpFamily
dSSM Helm Chart¶
The name of the Service Account (SA) must be specified. Here default is being used, and since it already exists it does not have to be created. Persistent storage is enabled by default; to disable it, use --set db.persistent_storage=disable
helm install f5-dssm f5-dssm-0.22.18.tgz -n peter-dssm --set serviceAccount.name="default" --set serviceAccount.create="false"
NAME: f5-dssm
LAST DEPLOYED: Mon Jan 23 02:22:57 2023
NAMESPACE: peter-dssm
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Check the status of all pods/service by running
kubectl --namespace peter-dssm get all
F5Ingress¶
tls-keys-certs-secret¶
Use the commands given below to generate a CA and a client SSL/TLS certificate and key, Base64-encode them, and then create the tls-keys-certs-secret Secret.
NOTE: In production SSL/TLS certificates should be signed by a well-known certificate authority (CA).
openssl genrsa -out ca.key 4096
openssl req -x509 -new -nodes -key ca.key -sha256 -days 365 -out ca.crt \
-subj "/C=US/ST=WA/L=Seattle/O=F5/OU=Dev/CN=ca"
openssl genrsa -out client.key 4096
openssl req -new -key client.key -out client.csr \
-subj "/C=US/ST=WA/L=Seattle/O=F5/OU=PD/CN=client.com"
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key \
-set_serial 101 -outform PEM -out client.crt -extensions req_ext -days 365 -sha256
openssl base64 -A -in client.crt -out client-encode.crt
openssl base64 -A -in client.key -out client-encode.key
echo "apiVersion: v1" > tls-keys-certs-secret.yaml
echo "kind: Secret" >> tls-keys-certs-secret.yaml
echo "metadata:" >> tls-keys-certs-secret.yaml
echo " name: tls-keys-certs-secret" >> tls-keys-certs-secret.yaml
echo "data:" >> tls-keys-certs-secret.yaml
echo -n " client.crt: " >> tls-keys-certs-secret.yaml
cat client-encode.crt >> tls-keys-certs-secret.yaml
echo " " >> tls-keys-certs-secret.yaml
echo -n " client.key: " >> tls-keys-certs-secret.yaml
cat client-encode.key >> tls-keys-certs-secret.yaml
oc apply -f tls-keys-certs-secret.yaml -n peter-net
Helm Chart¶
Modify values.yaml for the chart f5ingress to support dSSM.
NOTE: The Cluster Wide Controller (CWC) is responsible for the licensing and normal operation of SPK, so it must be set up beforehand.
NOTE: Any environment variable starting with TMM_MAPRES_ is reserved for F5 internal/testing. They are unsupported and their behaviour could change.
NOTE: PAL_CPU_SET determines how many TMM threads can be started. Without it, TMM can use all the CPU cores available on the node, which is neither recommended nor supported, since those cores are also needed by the node’s own management processes and other workloads. Do not use this environment variable when Topology Manager is enabled.
Further information on Kubernetes Topology Manager can be found here
NOTE: To use the F5SPKIngressHTTP2 CR, set tmm.tlsStore.enabled to true. This allows TMM to mount the secret tls-keys-certs-secret created previously.
NOTE: Here the application namespace peter-app is being watched for changes.
tmm:
  image:
    repository: sea.artifactory.f5net.com/f5-gsnpi-docker/spk
  topologyManager: false
  # these are the network attachment definitions
  cniNetworks: peter-net/macvlan-internal,peter-net/macvlan-external
  tlsStore:
    enabled: true
  customEnvVars:
    - name: "PAL_CPU_SET"
      value: "4"
    - name: REDIS_CA_FILE
      value: "/etc/ssl/certs/dssm-ca.crt"
    - name: REDIS_AUTH_CERT
      value: "/etc/ssl/certs/dssm-cert.crt"
    - name: REDIS_AUTH_KEY
      value: "/etc/ssl/private/dssm-key.key"
    - name: SESSIONDB_DISCOVERY_SENTINEL
      value: "true"
    - name: SESSIONDB_EXTERNAL_SERVICE
      value: "f5-dssm-sentinel.peter-dssm"
  sessiondb:
    useExternalStorage: "true"
controller:
  watchNamespace: peter-app
  fluentbit_sidecar:
    enabled: true
  vlan_grpc:
    enabled: true
  f5_lic_helper:
    enabled: true
    name: f5-lic-helper
    cwcNamespace: default
    rabbitmqCerts:
      ca_root_cert: LS0tLS1CRUdJTiBDRVJUSUZ...
...
f5-toda-logging is a subchart of the Ingress Helm chart.
helm install f5ingress f5ingress-dev/f5ingress --version 7.0.13 -n peter-net --values values.yaml
Error: INSTALLATION FAILED: execution error at (f5ingress/charts/f5-toda-logging/templates/fluentbit_cm.yaml:90:25): f5-toda-logging.fluentd.host is required
#
helm install f5ingress f5ingress-dev/f5ingress --version 7.0.13 -n peter-net --set f5-toda-logging.fluentd.host="f5-toda-fluentd.peter-net.svc.cluster.local." --values values.yaml
NAME: f5ingress
LAST DEPLOYED: Mon Jan 23 06:38:45 2023
NAMESPACE: peter-net
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The F5Ingress Controller has been installed.
TMM debug sidecar is deployed. To access: kubectl exec -it deployment/f5-tmm -c debug -n peter-net -- bash
VLANs¶
apiVersion: "k8s.f5net.com/v1"
kind: F5SPKVlan
metadata:
name: "vlan-internal"
spec:
name: internal
interfaces:
- "1.1"
selfip_v4s:
- 10.11.23.1
prefixlen_v4: 24
internal: true
---
apiVersion: "k8s.f5net.com/v1"
kind: F5SPKVlan
metadata:
name: "vlan-external"
spec:
name: external
interfaces:
- "1.2"
selfip_v4s:
- 10.12.23.1
prefixlen_v4: 24
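Assuming the two CRs above are saved to a file such as vlans.yaml (a hypothetical file name), they are applied to the Ingress Controller's namespace:
oc apply -f vlans.yaml -n peter-net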
Troubleshooting¶
The following checks can be performed to determine whether the SPK Container Native Engine (CNE) infrastructure required for this use case is complete and operational.
ns peter-dssm¶
Check whether all the expected Kubernetes objects are installed. The dSSM database and Sentinel Pods can run on up to three worker nodes (here only two are available). DB-0 is the master, whereas DB-1 and DB-2 are the replicas.
oc get all -n peter-dssm -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/f5-dssm-db-0 2/2 Running 0 7m28s 10.131.1.22 ocp-jwong-worker1 <none> <none>
pod/f5-dssm-db-1 2/2 Running 0 6m54s 10.128.2.15 ocp-jwong-worker2 <none> <none>
pod/f5-dssm-db-2 0/2 Pending 0 6m19s <none> <none> <none> <none>
pod/f5-dssm-sentinel-0 2/2 Running 0 7m28s 10.131.1.21 ocp-jwong-worker1 <none> <none>
pod/f5-dssm-sentinel-1 2/2 Running 0 6m47s 10.128.2.16 ocp-jwong-worker2 <none> <none>
pod/f5-dssm-sentinel-2 0/2 Pending 0 6m14s <none> <none> <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/f5-dssm-db ClusterIP 172.30.59.70 <none> 6379/TCP 7m28s app=f5-dssm-db
service/f5-dssm-sentinel ClusterIP 172.30.19.122 <none> 26379/TCP 7m28s app=f5-dssm-sentinel
NAME READY AGE CONTAINERS IMAGES
statefulset.apps/f5-dssm-db 2/3 7m28s f5-dssm,fluentbit artifactory.f5net.com/f5-mbip-docker/f5-dssm-store:v1.21.0,artifactory.f5net.com/f5-toda-docker/f5-fluentbit:v0.2.0
statefulset.apps/f5-dssm-sentinel 2/3 7m28s f5-dssm,fluentbit artifactory.f5net.com/f5-mbip-docker/f5-dssm-store:v1.21.0,artifactory.f5net.com/f5-toda-docker/f5-fluentbit:v0.2.0
Check whether all the correct secrets are installed.
oc get secrets -n peter-dssm | grep Opaque
dssm-certs-secret Opaque 3 30h
dssm-keys-secret Opaque 1 30h
ns peter-net¶
Check whether all the expected objects are installed.
oc get all -n peter-net
NAME READY STATUS RESTARTS AGE
pod/f5-tmm-6cfcb8f678-hs2st 3/3 Running 0 3m7s
pod/f5-toda-fluentd-5c67dbbf94-cjpfw 1/1 Running 0 5h35m
pod/f5ingress-f5ingress-7fb475854b-ghq5l 3/3 Running 0 3m7s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/f5-toda-fluentd ClusterIP 172.30.137.199 <none> 54321/TCP 5h35m
service/f5-validation-svc ClusterIP 172.30.73.124 <none> 5000/TCP 3m8s
service/grpc-svc ClusterIP 172.30.201.222 <none> 8750/TCP 3m8s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/f5-tmm 1/1 1 1 3m8s
deployment.apps/f5-toda-fluentd 1/1 1 1 5h35m
deployment.apps/f5ingress-f5ingress 1/1 1 1 3m8s
NAME DESIRED CURRENT READY AGE
replicaset.apps/f5-tmm-6cfcb8f678 1 1 1 3m7s
replicaset.apps/f5-toda-fluentd-5c67dbbf94 1 1 1 5h35m
replicaset.apps/f5ingress-f5ingress-7fb475854b 1 1 1 3m7s
Check whether all the expected secrets are installed.
oc get secrets -n peter-net | grep Opaque
certs-secret Opaque 5 46d
client-certs Opaque 3 2m37s
dssm-certs-secret Opaque 3 4d3h
dssm-keys-secret Opaque 1 4d3h
f5ingress-f5ingress-default-server-secret Opaque 2 2m37s
keys-secret Opaque 5 46d
server-certs Opaque 3 48d
tls-keys-certs-secret Opaque 2 3m52s
Check whether dSSM is reachable.
oc exec -it pod/f5-tmm-6c96c59856-lgmsw -n peter-net -c debug -- bash
debuguser@f5-tmm-6c96c59856-lgmsw:~$ ping f5-dssm-db-0.f5-dssm-db.peter-dssm
PING f5-dssm-db-0.f5-dssm-db.peter-dssm.svc.cluster.local (10.131.1.22) 56(84) bytes of data.
64 bytes from 10.131.1.22 (10.131.1.22): icmp_seq=1 ttl=64 time=4.08 ms
64 bytes from 10.131.1.22 (10.131.1.22): icmp_seq=2 ttl=64 time=2.47 ms
64 bytes from 10.131.1.22 (10.131.1.22): icmp_seq=3 ttl=64 time=1.44 ms
Check whether Internal and External Self IPs have been added to the f5-tmm container.
oc exec -it f5-tmm-6c96c59856-lgmsw -c f5-tmm -n peter-net -- sh
# ip a show dev internal
11: internal: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 1000
link/ether 72:d6:5a:47:c5:ab brd ff:ff:ff:ff:ff:ff
inet 10.11.23.1/24 brd 10.11.23.0 scope global internal
...
# ip a show dev external
12: external: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 1000
link/ether 96:46:7e:8e:ce:e6 brd ff:ff:ff:ff:ff:ff
inet 10.12.23.1/24 brd 10.12.23.0 scope global external
...
Persistent Volume Claim (PVC)¶
Check whether dSSM and FluentD are bound to persistent storage.
oc get pvc -n peter-net
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
data-f5-dssm-db-0 Bound pvc-ed25465f-c2d1-4e43-ad98-df0e786167d1 1Gi RWO nfs-client 3d
data-f5-dssm-db-1 Bound pvc-c769487d-49d6-4677-8c78-1cd3a87577a4 1Gi RWO nfs-client 2d20h
data-f5-dssm-db-2 Bound pvc-5b414684-ac9f-4008-9878-8b331bee944f 1Gi RWO nfs-client 2d20h
f5-toda-fluentd Bound pvc-7848848a-d5bd-4958-b2b9-d4f41b3b1f72 3Gi RWO nfs-client 110s
Redis¶
Check that the database is available, and check the number of masters and replicas and the number of connected clients.
oc exec -it f5-dssm-db-0 -n peter-dssm -- bash
Defaulted container "f5-dssm" out of: f5-dssm, fluentbit
f5docker@f5-dssm-db-0:/data$ redis-server -v
Redis server v=6.2.7 sha=00000000:0 malloc=libc bits=64 build=6451b1e2df9a9a8f
f5docker@f5-dssm-db-0:/data$ redis-cli -v
redis-cli 6.2.7
f5docker@f5-dssm-db-0:/data$ redis-cli --tls --cacert /etc/ssl/certs/dssm-ca.crt --cert /etc/ssl//certs/dssm-cert.crt --key /etc/ssl/certs/dssm-key.key
127.0.0.1:6379> role
1) "master"
2) (integer) 38141578
3) 1) 1) "10.128.2.15"
2) "6379"
3) "38141578"
127.0.0.1:6379> info
# Server
...
uptime_in_days:1
hz:10
...
# Clients
connected_clients:6
cluster_connections:0
...
# Stats
total_connections_received:86977
total_commands_processed:443292
...
# Replication
role:master
connected_slaves:1
slave0:ip=10.128.2.15,port=6379,state=online,offset=11974346,lag=1
master_failover_state:no-failover
...
Use redis-cli on port 26379 to connect to a Sentinel. It is also possible to tail /var/log/sentinel/sentinel.log on each Sentinel to observe the cluster interaction.
oc exec -it f5-dssm-sentinel-0 -n peter-dssm -- bash
Defaulted container "f5-dssm" out of: f5-dssm, fluentbit
f5docker@f5-dssm-sentinel-0:/data$ redis-cli --tls --cacert /etc/ssl/certs/dssm-ca.crt --cert /etc/ssl//certs/dssm-cert.crt --key /etc/ssl/certs/dssm-key.key -p 26379
127.0.0.1:26379> SENTINEL MASTERS
1) 1) "name"
2) "dssmmaster"
3) "ip"
...
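A couple of other Sentinel queries can also be useful; the master name dssmmaster is taken from the output above.
127.0.0.1:26379> SENTINEL get-master-addr-by-name dssmmaster
127.0.0.1:26379> SENTINEL replicas dssmmaster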
SPK HTTP/2 Custom Resource Definition (CRD)¶
The F5SPKIngressHTTP2 CR configures the Service Proxy Traffic Management Microkernel (TMM) to proxy and load balance low-latency 5G SBI messages using an HTTP/2 Virtual Server and a load-balancing pool consisting of 5G NF endpoints. The CR fully supports SSL/TLS termination, bi-directional traffic flow, and connection persistence based on HTTP/2 headers, pseudo-headers and JSON payload contents.
Static Route Field¶
By default, persistence is bidirectional.
Field | Description |
---|---|
persistBidirectional | Specifies whether persistence should be bidirectional when a packet match occurs: true (default) or false. |
persistField | Specifies a custom field that matching requests use as a persistence key. The field names a particular HTTP header (e.g. X-Session-ID) or a pseudo-header such as :m (method) or :u (uri). Paths to values in the JSON payload may also be specified, such as :JSON:key1:key2, with each :key in the path navigating one level deeper into the JSON object tree. |
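As an illustration of the persistField syntax (a hypothetical snippet, not the configuration used later in this article; the service name is a placeholder), a static route could persist on the subscriber's SUPI carried in the JSON payload of the request:
staticRoutes:
  - service: "example-nf-service"
    persistBidirectional: true
    persistTimeout: 60
    persistField: ":JSON:supi"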
oc get crd/f5-spk-ingresshttp2s.k8s.f5net.com -n default -o jsonpath='{.spec.versions[*].schema.openAPIV3Schema.properties.spec.properties.staticRoutes}{"\n"}' | jq
{
"description": "A set of custom routes applied in the order defined and before default routing behavior for a request\n",
"items": {
"description": "A custom route defined by a set of conditions and the resultant behavior for a request that matches these conditions\n",
"properties": {
"conditions": {
"description": "An array of up to four conditions that must all be met for this static route to be selected.",
"items": {
"properties": {
"caseSensitive": {
"default": true,
"description": "Specifies if the operation for this conditional should be evaluated in a case-sensitive manner.",
"type": "boolean"
},
"comparisonOp": {
"default": "SR_COMPARE_EQUALS",
"description": "The operation to perform as the comparison between the field-name and value to determine whether a condition is met.",
"enum": [
"SR_COMPARE_NONE",
"SR_COMPARE_EQUALS",
"SR_COMPARE_NOT_EQUALS",
"SR_COMPARE_STARTS_WITH",
"SR_COMPARE_ENDS_WITH",
"SR_COMPARE_CONTAINS",
"SR_COMPARE_EXISTS",
"SR_COMPARE_NOT_EXISTS"
],
"type": "string"
},
"fieldName": {
"description": "A string representing the name of a field in the message, such as an http header name, http pseudo header (for example, ':m'), or JSON path (for example, ':JSON:some:path:in:json:payload').",
"type": "string"
},
"values": {
"description": "A list of constant values used to compare against the field extracted from a request. If multiple are provided, the field will be compared against each, and the condition will be met if any of the values are matched.",
"items": {
"type": "string"
},
"type": "array"
}
},
"type": "object"
},
"maxItems": 4,
"type": "array"
},
"customIruleProc": {
"default": "",
"description": "The name of the custom irule proc to run on this static route\n",
"type": "string"
},
"persistBidirectional": {
"default": true,
"description": "Specifies whether persistence should be bidirectional for requests matching this route. The default value is true.\n",
"type": "boolean"
},
"persistField": {
"default": "",
"description": "Specifies a custom field that matching requests will use as their persistence key. The field specifies the name of an http header whose value should be used or a pseudo-header such as :m (method), or :u (uri). A path to a value in the JSON payload may also be specified for the field with the syntax :JSON:key1:key2..., with each :key in the path navigating one level deeper into the JSON object tree\n",
"maxLength": 200,
"type": "string"
},
"persistTimeout": {
"default": 0,
"description": "Specifies the persistence timeout for this static route. It overrides the global persist_timeout set in sbi profile. Default value 0 indicates this static route applies the global persistence_timeout\n",
"format": "int32",
"type": "integer"
},
"service": {
"default": "",
"description": "The name of the service that matching requests should be routed to, from the list of services in this custom resource.\n",
"maxLength": 255,
"type": "string"
},
"snatPool": {
"description": "If snat-type is SRC_TRANS_SNATPOOL, this value is the name of the snatpool to use when forwarding requests that match this static route.\n",
"type": "string"
},
"snatType": {
"default": "SRC_TRANS_AUTOMAP",
"description": "The type of snat to use when forwarding requests that match this static route.\n",
"enum": [
"SRC_TRANS_NONE",
"SRC_TRANS_SNATPOOL",
"SRC_TRANS_AUTOMAP"
],
"type": "string"
}
},
"type": "object"
},
"type": "array"
}
Message Routing¶
Much of the data required to route a message can be found in the path portion of the request URI, which is carried in the :path pseudo-header of the HTTP/2 request.
A number of built-in 'field names' are available for accessing data that, in HTTP/1.1, would appear in the first line of the message.
name | description |
---|---|
:m | method |
:u | uri |
:v | version |
:p | path |
:q | query |
:s | status |
:r | request/response header |
:k | keep alive |
:c | custom meta |
NOTE: “:p” will be extended to specify a particular section of the path. For example “:p:s3” will match the third section, leaving “:p” to match the full path. The MESSAGE::field iRule command can be used to select these fields (e.g. [MESSAGE::field value “:JSON:supi”], [MESSAGE::field value “:u”])
NOTE: With HTTP/2, all header field names are converted to lower case.
NOTE: The Subscription Permanent Identifier (SUPI) is a globally unique identifier allocated to each 5G subscriber and defined in 3GPP specification TS 23.501. A SUPI is usually a string of 15 decimal digits. The first three digits are the Mobile Country Code (MCC), the next two or three form the Mobile Network Code (MNC) identifying the network operator, and the remaining nine or ten digits are the Mobile Subscriber Identification Number (MSIN), which identifies the individual subscriber within that operator’s network. The 5G SUPI is equivalent to the 4G IMSI.
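As a worked illustration of that digit structure, a small iRule proc could split an IMSI-format SUPI (a sketch only; it assumes a 3-digit MNC, whereas real parsing must know the operator's MNC length):
proc split_supi { supi } {
    # e.g. supi = "310170123456789" -> MCC=310, MNC=170, MSIN=123456789
    set mcc  [string range $supi 0 2]
    set mnc  [string range $supi 3 5]
    set msin [string range $supi 6 end]
    log local0. "MCC=$mcc MNC=$mnc MSIN=$msin"
}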
Test Applications¶
To test this use case, a test application was deployed in the namespace (peter-app) watched by the F5 Ingress Controller. Its configuration can reference the following Custom Resource Definitions (CRDs) installed on the cluster.
oc get crd | grep spk
f5-spk-addresslists.k8s.f5net.com 2023-01-13T12:51:19Z
f5-spk-dnscaches.k8s.f5net.com 2023-01-13T12:51:19Z
f5-spk-egresses.k8s.f5net.com 2023-01-13T12:50:24Z
f5-spk-ingressdiameters.k8s.f5net.com 2023-01-13T12:50:24Z
f5-spk-ingressegressudps.k8s.f5net.com 2023-01-13T12:50:24Z
f5-spk-ingressgtps.k8s.f5net.com 2023-01-13T12:50:25Z
f5-spk-ingresshttp2s.k8s.f5net.com 2023-01-25T14:43:16Z
f5-spk-ingressngaps.k8s.f5net.com 2023-01-13T12:50:25Z
f5-spk-ingresssips.k8s.f5net.com 2023-01-13T15:53:29Z
f5-spk-ingresstcps.ingresstcp.k8s.f5net.com 2023-01-13T12:50:25Z
f5-spk-ingressudps.ingressudp.k8s.f5net.com 2023-01-13T12:50:26Z
f5-spk-portlists.k8s.f5net.com 2023-01-13T12:51:19Z
f5-spk-snatpools.k8s.f5net.com 2023-01-13T12:51:19Z
f5-spk-staticroutes.k8s.f5net.com 2023-01-13T12:51:19Z
f5-spk-vlans.k8s.f5net.com 2023-01-13T12:51:19Z
Below is an example of a Custom Resource (CR) configuration for this use case: the contents of the file h2-values.yaml, which is passed to the test-app Helm chart.
ingress:
  enabled: false
  externalPort: 11443
---
app:
  port: 11443
  protocol: TCP
  clientAuth: true
  clientTLSKey: tmp/ssl/tls-keys-certs/tls-client.key
  clientTLSCert: tmp/ssl/tls-keys-certs/tls-client.crt
  serverAuth: true
  serverTLSKey: tmp/ssl/tls-keys-certs/tls-server.key
  serverTLSCert: tmp/ssl/tls-keys-certs/tls-server.crt
irule: |
  proc insert_coble_header {} {
      log local0. "insert a new HTTP header"
      HTTP::header insert "x-custom-proc" "sbi-B"
      set acctVal [MESSAGE::field value ":JSON:account"]
      log local0. "######### acctValB = $acctVal"
  }
  proc insert_coble_header2 {} {
      log local0. "insert a new HTTP header"
      HTTP::header insert "x-custom-proc" "sbi-A"
      set acctVal [MESSAGE::field value ":JSON:account"]
      log local0. "######### acctValA = $acctVal"
  }
  proc insert_coble_header3 {} {
      log local0. "insert a new HTTP header"
      HTTP::header insert "x-custom-proc" "sbi-BB"
      set acctVal [MESSAGE::field value ":JSON:account"]
      log local0. "######### acctValBB = $acctVal"
  }
  proc get_path_header {} {
      log local0. "check :path header"
      set pathVal [HTTP2::header :path]
      log local0. "######### PATH HEADER :path= $pathVal"
      set contentTypeVal [HTTP::header content-type]
      log local0. "######### HEADER content-type= $contentTypeVal"
      set xEchoRequestVal [HTTP::header x-echo-request]
      log local0. "######### HEADER x-echo-request= $xEchoRequestVal"
      set methodVal [HTTP2::header :method]
      log local0. "######### HEADER :method= $methodVal"
      set fieldVal_s1 [MESSAGE::field value :p:s1]
      log local0. "######### HEADER fieldVal_s1= $fieldVal_s1"
      set fieldVal_p [MESSAGE::field value :p]
      log local0. "######### HEADER fieldVal_p= $fieldVal_p"
  }
services:
  - name: gen-5nfva
    port: 11441
  - name: gen-5nfvb
    port: 11442
staticRoutes:
  - persistField: ""
    persistTimeout: 60
    persistBidirectional: true
    customIruleProc: "get_path_header"
    service: "gen-5nfva"
    conditions:
      - fieldName: ":p:s1"
        comparisonOp: "SR_COMPARE_EQUALS"
        values:
          - "a"
        caseSensitive: false
  - persistField: ""
    persistTimeout: 60
    persistBidirectional: true
    customIruleProc: "get_path_header"
    service: "gen-5nfvb"
    conditions:
      - fieldName: ":p:s1"
        comparisonOp: "SR_COMPARE_EQUALS"
        values:
          - "b"
        caseSensitive: false
The test app is then deployed.
helm install mytestapp f5ingress-dev/h2-test-app --set app.ipfamilies=IPv4 --set app.ip=10.11.23.1 --set app.port=1144 -n peter-app -f h2-values.yaml
NAME: mytestapp
LAST DEPLOYED: Wed Jan 25 06:46:11 2023
NAMESPACE: peter-app
STATUS: deployed
REVISION: 1
TEST SUITE: None
oc get all -n peter-app
NAME READY STATUS RESTARTS AGE
pod/gen-5nfva-6f45db587c-n54b9 1/1 Running 0 17m
pod/gen-5nfvb-578c79c48d-wt8nl 1/1 Running 0 17m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/gen-5nfva ClusterIP 172.30.178.41 <none> 11441/TCP 17m
service/gen-5nfvb ClusterIP 172.30.242.182 <none> 11442/TCP 17m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/gen-5nfva 1/1 1 1 17m
deployment.apps/gen-5nfvb 1/1 1 1 17m
NAME DESIRED CURRENT READY AGE
replicaset.apps/gen-5nfva-6f45db587c 1 1 1 17m
replicaset.apps/gen-5nfvb-578c79c48d 1 1 1 17m
The F5Ingress Controller sends gRPC messages to the TMM (as shown below) to build the configuration; their absence may indicate that the wrong namespace is being watched. Below, a TCP profile is deployed as part of the installation.
oc logs deployment.apps/f5ingress-f5ingress -n peter-net -c f5ingress-f5ingress
...
I0126 17:06:48.342564 1 grpccfg2.go:443] gRPC - Send GRPC Message:
{
"embedded": {
"@type": "declTmm.transaction_start",
"transaction_number": 0
}
}
{
"embedded": {
"@type": "declTmm.create_msg",
"revision": 0,
"embedded": {
"@type": "declTmm.profile_tcp",
"id": "peter-app-mytestapp-tcp-profile",
"name": "peter-app-mytestapp-tcp-profile",
"reset_on_timeout": true,
"time_wait_recycle": true,
"delayed_acks": true,
"proxy_mss": false,
"ip_df_mode": "IP_PKT_DF_PMTU",
"ip_ttl_mode": "IP_PKT_TTL_PROXY"
...
As a result, two Virtual Servers are created:
oc exec -it deploy/f5-tmm -c debug -n peter-net -- tmctl -d blade virtual_server_stat -s name
name
--------------------------
peter-app-mytestapp-int-vs
peter-app-mytestapp-ext-vs
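A quick way to exercise the static routes is to send requests whose first path segment is "a" or "b" and confirm they reach gen-5nfva and gen-5nfvb respectively. The commands below are illustrative only: substitute the virtual server address and port configured for the F5SPKIngressHTTP2 CR, and reuse the client certificate and key created earlier if mutual TLS is enabled.
curl -vk --http2 --cert client.crt --key client.key https://<vs-address>:<vs-port>/a/echo
curl -vk --http2 --cert client.crt --key client.key https://<vs-address>:<vs-port>/b/echo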
NOTE: Command-line access to the TMM debug sidecar shell, which customers presently enjoy, will be phased out to improve security; customers will instead use the CWC API once it is fully supported.
More information on how to use the Debug API can be found here.