DAG CNFs

The CNFs-based Disaggregation (DAG) solution layer is placed either in the same cluster as the CNF layer or in a different cluster. This makes it highly scalable, easier to manage, and more efficient. The DAG layer requires a pool of destinations among which to distribute incoming traffic, and a persistence criterion that sticks each incoming connection to the same destination. The pool of destinations is continuously monitored so that incoming traffic is always distributed among available destinations.

For CNFs, the destination pool consists of self IPs of TMM pods installed during CNF product setup. These pods enable features such as CGNAT and firewall functionality. The DAG layer can exist in the same cluster as the CNF layer (Intra cluster) or it can exist in a different cluster (Inter cluster).

Packet flow

  • The DAG layer uses a wildcard virtual server with persistence and FastL4 profiles. The self IPs of CNF TMM Pods serve as pool members.
  • The client initiates traffic with the server’s IP as the destination.
  • Packets from the client reach a TMM pod in the DAG layer through a route pointing to the DAG TMM self IPs.
  • A TMM in the DAG layer picks a TMM pod in the CNF layer based on the hash algorithm, and forwards the packet without translating the destination IP.
  • The TMM pod in the CNF layer that receives the packet performs the regular CNF functions, such as NAT and firewall, and forwards the packet to the server.
  • The return traffic can go back along the same path, or it can be routed from the CNF layer directly to the client through Direct Server Return (DSR), using routes in the CNF layer.
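
The forwarding step above can be sketched in Python. This is a conceptual model only, not product code: the packet representation, the SHA-256 hash, and the helper names are illustrative assumptions (the pool addresses reuse the example self IPs from the Secure Context section below); the actual TMM uses its configured hash algorithm.

```python
import hashlib

def pick_member(src_ip: str, pool: list[str]) -> str:
    """Deterministically pick a CNF TMM self IP from the source address.

    SHA-256 is only a stand-in for TMM's configured hash algorithm; it
    gives the same stable, repeatable selection property.
    """
    digest = int(hashlib.sha256(src_ip.encode()).hexdigest(), 16)
    return pool[digest % len(pool)]

def dag_forward(packet: dict, pool: list[str]) -> dict:
    """Forward without translating the destination IP: the chosen pool
    member becomes the next hop, while the packet's dst is unchanged."""
    return {**packet, "next_hop": pick_member(packet["src"], pool)}

pool = ["20.1.1.150", "20.1.1.151", "20.1.1.152"]
pkt = {"src": "198.51.100.7", "dst": "203.0.113.10"}
fwd = dag_forward(pkt, pool)
```

Because the destination IP is left untouched, the CNF-layer TMM still sees the original server address and can apply NAT and firewall policy to it.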

Architecture

High level design view

The following diagram depicts the high-level design architecture of DAG CNF in Kubernetes cluster.

_images/cnf-dag-architecture.png

DAG on client side

The following diagram depicts the DAG architecture on client-side traffic flow.

_images/cnf-dag-on-client.png

Procedures

Install DAG and CNF layers

Intra cluster

When the DAG layer and CNF layer are present in the same cluster it is an Intra cluster architecture.

To install the DAG layer in an Intra cluster architecture, only the CRDs and the f5ingress Helm chart are required to install the f5ingress and TMM pods. However, installing the f5ingress and TMM pods alone works only if the required common components are already present in some namespace. For example, if the user has three namespaces, cnf, dag-cnf, and default, the common components can be installed in either the cnf or the default namespace.

Following is the procedure to install the f5ingress and TMM pods.

  1. To install the CRDs, follow the procedure on the Install CRDs page.

  2. Install the f5ingress Pod using the newer CNFs v2.0 version of the Helm chart:

    helm upgrade <release> tar/<helm-chart>.tgz -n <namespace> -f <values>.yaml
    

    In the following example, the new CNFs v2.0 Helm chart version is v0.761.1-0.0.216.

    helm upgrade f5ingress tar/f5ingress-v0.761.1-0.0.216.tgz -n cnf-gateway -f f5ingress_overrides.yaml
    

Inter cluster

When the DAG layer and CNF layer are present in different clusters, it is an Inter cluster architecture.

  1. To install the DAG layer and CNF layer in an Inter cluster architecture, follow the procedures on the CNFs Software page.
  2. Configure the CNF layer as per the desired use case. For more information on how to configure the CNFs CRs, see the CNFs CRs page.

DAG-CNF Configuration

  • (Optional) Create namespace for DAG layer and CNF layer

Note: This step is required only if the DAG and CNF layers exist on the same cluster.

The DAG layer and CNF layer should exist in different namespaces for the same Kubernetes cluster.

In the architecture diagrams provided in the Architecture section, the DAG layer resides in the dag-cnf namespace, and the CNF layer resides in the cnf namespace.

Following are the example commands to create the namespaces:

  • For DAG-CNF: kubectl create namespace dag-cnf
  • For CNF: kubectl create namespace cnf

Persistence Profile

  1. Copy the following example into the persistence-profile.yaml file.

    apiVersion: "k8s.f5net.com/v1"
    kind: F5BigPersistenceProfile
    metadata:
      name: source-address-persistence-profile
    spec:
      persistenceType: "SRC_ADDR"
      addressAffinity:
        ipv4PrefixLength: 24
        hashAlgorithm: "CARP"
    
  2. Apply the persistence profile CR in the DAG layer namespace.

    kubectl apply -f persistence-profile.yaml -n <dag layer namespace>
    
  3. Verify that the persistence profile is applied by checking the F5ingress logs.

    In the following example, the BIG-IP Controller logs indicate the F5BigPersistenceProfile CR was added/updated:

    I0218 12:56:15.796613      13 event.go:364] Event(v1.ObjectReference{Kind:"F5BigPersistenceProfile", Namespace:"dag-cnf", Name:"source-address-persistence-profile", UID:"bc41002a-0ab0-41af-ad4c-38c99b113d6b", APIVersion:"", ResourceVersion:"5533491", FieldPath:""}): type: 'Normal' reason: 'Added/Updated' PersistenceProfile dag-cnf/source-address-persistence-profile was added/updated
    

    For more information on the Persistence Profile CRD, see the F5PersistenceProfile page.
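
The ipv4PrefixLength: 24 setting above means persistence is keyed on the client's /24 network rather than the exact address, so all clients in one /24 stick to the same CNF TMM. A minimal Python sketch of that masking step, using a generic deterministic hash as a stand-in for CARP (the function names are illustrative, not TMM internals):

```python
import hashlib
import ipaddress

def persistence_key(src_ip: str, prefix_len: int = 24) -> str:
    """Mask the client address to the configured prefix length."""
    net = ipaddress.ip_network(f"{src_ip}/{prefix_len}", strict=False)
    return str(net.network_address)

def pick_member(src_ip: str, pool: list[str]) -> str:
    """Hash the masked address so every client in the same /24 lands
    on the same pool member."""
    digest = int(hashlib.sha256(persistence_key(src_ip).encode()).hexdigest(), 16)
    return pool[digest % len(pool)]

pool = ["20.1.1.150", "20.1.1.151", "20.1.1.152"]
```

For example, clients 10.1.2.3 and 10.1.2.200 both hash under the key 10.1.2.0 and therefore stick to the same member.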

Secure Context

On DAG Layer

  1. Copy the following example into the secure_context_dag.yaml file. Because the secure context must be a catch-all, the destination address should be 0.0.0.0/0. The pool members are the DAG layer-facing self IP addresses of the TMM pods in the CNF layer.

    apiVersion: "k8s.f5net.com/v1"
    kind: F5BigContextSecure
    metadata:
      name: f5-secure-context-dag
    spec:
      destinationAddress: 0.0.0.0/0
      ipProtocol: "any"
      destinationPort: 0
      persistenceProfile: source-address-persistence-profile
      profile: "fastL4"
      monitors:
        icmp:
          - interval: 5
      pool:
        members:
          - address: "20.1.1.150"
          - address: "20.1.1.151"
          - address: "20.1.1.152"
    
  2. Apply the Secure Context CR in the DAG layer namespace.

    kubectl apply -f secure_context_dag.yaml -n <dag layer namespace>
    
  3. Verify that the Secure Context CR is applied by checking the F5ingress logs.

    In the following example, the BIG-IP Controller logs indicate the F5BigContextSecure CR was added/updated:

    I0303 11:28:31.039758      13 event.go:364] Event(v1.ObjectReference{Kind:"F5BigContextSecure", Namespace:"dag-cnf", Name:"f5-secure-context-cnf", UID:"74b4348c-86ad-4da7-bc56-e093583cd85a", APIVersion:"", ResourceVersion:"8354565", FieldPath:""}): type: 'Normal' reason: 'Added/Updated' SecureContext dag-cnf/f5-secure-context-cnf was added/updated
    
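
The persistence and icmp monitor settings above work together: only members the monitor reports as available are considered, and CARP-style highest-score hashing keeps the key-to-member mapping stable when a member fails. A rough Python sketch of that property (the scoring function is an assumption, not TMM's implementation):

```python
import hashlib

def carp_pick(key: str, members: list[str], available: set[str]) -> str:
    """Highest-score (CARP-style) hashing over monitored-up members.

    Each available member is scored against the persistence key and the
    highest score wins; removing a failed member only remaps the keys
    that had scored that member highest.
    """
    live = [m for m in members if m in available]
    return max(live, key=lambda m: hashlib.sha256(f"{key}|{m}".encode()).digest())

members = ["20.1.1.150", "20.1.1.151", "20.1.1.152"]
```

Because a member that was not the highest scorer for a key can be removed without changing that key's winner, existing connections to surviving members keep their persistence when another member goes down.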

On CNF Layer

Configure the CNF layer as per the desired use cases. For more information on how to configure the CNFs CRs, see CNFs CRs page.

FastL4 Profile

The return traffic does not go back through the DAG layer when Direct Server Return (DSR) is configured. For the DSR configuration to work, the looseclose parameter must be enabled in the FastL4Profile CR, because the DAG-layer TMM then sees only one direction of each connection. For more information on FastL4 profile configuration, see the FastL4Profile page.
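
For illustration only, a FastL4 profile CR enabling loose close might look like the following sketch; the kind and field names here are assumptions based on the looseclose parameter mentioned above, so confirm the exact schema on the FastL4Profile page before use.

```yaml
# Hypothetical sketch -- verify the kind and field names on the FastL4Profile page.
apiVersion: "k8s.f5net.com/v1"
kind: F5BigFastL4Profile
metadata:
  name: fastl4-dsr-profile
spec:
  looseClose: "enabled"   # needed for DSR: the DAG TMM sees only one side of the flow
```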

DAG CNFs statistics

Verify the F5BigContextSecure virtual server statistics. From the TMM debug sidecar container, run the following tmctl command.

This verification can be done on the DAG layer.

tmctl -d blade virtual_server_stat -s name,clientside.pkts_in
name                                   clientside.pkts_in
------------------------------------- --------------------
dag-cnf-f5-secure-context-SecureContext_vs          15

Feedback

To provide feedback and help improve this document, please email us at cnfdocs@f5.com.