Deploying Applications and Testing Traffic

Prerequisites

  • Make sure that the external and internal VLAN CRs are configured; see F5SPKVLAN.
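
To confirm the VLAN CRs are in place before testing traffic, you can list them; this is a minimal check, since the exact resource name may vary by release:

    kubectl api-resources | grep -i vlan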

Ingress Traffic

To test ingress traffic, you can deploy a test nginx web application on your Kubernetes cluster's Host node. The ingress path originates from the external Client and is directed to the Virtual Server IP (the destination address). The Virtual Server then load balances the request across the available application Pods in its pool.

Example

  1. Create an nginx-app-deploy.yaml file with the contents below:

    vi nginx-app-deploy.yaml
    

    Contents:

    
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-deployment
      labels:
        app: nginx-tcp
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: nginx-tcp
      template:
        metadata:
          labels:
            app: nginx-tcp
        spec:
          containers:
          - name: nginx-tcp
            image: nginx:latest
            ports:
            - containerPort: 80
            imagePullPolicy: IfNotPresent
    --- 
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-app-svc
    spec:
      type: LoadBalancer
      ports:
      - port: 80
        targetPort: 80
      selector:
        app: nginx-tcp
    
  2. Deploy the nginx application into a namespace that your BIG-IP Next for Kubernetes is watching (watchNamespace).

    
    kubectl apply -f nginx-app-deploy.yaml -n <your application namespace>
    

    Example:

    kubectl apply -f nginx-app-deploy.yaml -n app-ns
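
    To verify that the Deployment and Service are running before continuing:

    kubectl get pods,svc -n app-ns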
    
  3. Create Gateway API custom resources to expose the nginx application through a service proxy endpoint. The IP address you choose must be within your external network’s CIDR range and must not already be in use.

    For example, if your external network’s CIDR range is 192.168.10.0/24, any unused IP address between 192.168.10.1 and 192.168.10.254 will work.

    Create a gateway-tcp-cr.yaml file with the contents below:

    vi gateway-tcp-cr.yaml
    

    Contents:

    apiVersion: gateway.networking.k8s.io/v1
    kind: GatewayClass
    metadata:
      name: f5-gateway-class
    spec:
      controllerName: "f5.com/f5-gateway-controller"
      description: "F5 BIG-IP Kubernetes Gateway"
    
    ---
    
    apiVersion: gateway.k8s.f5net.com/v1
    kind: Gateway
    metadata:
      name: my-l4route-tcp-gateway
      namespace: app-ns
    spec:
      addresses:
      - type: "IPAddress"
        value: <IP Address>
      gatewayClassName: f5-gateway-class
      listeners:
      - name: foo
        protocol: TCP
        port: 80
        allowedRoutes:
          kinds:
          - kind: L4Route
    
    ---
    
    apiVersion: gateway.k8s.f5net.com/v1
    kind: L4Route
    metadata:
      name: l4-tcp-app
      namespace: app-ns
    spec:
      protocol: TCP
      parentRefs:
      - name: my-l4route-tcp-gateway
        sectionName: foo
      rules:
      - backendRefs:
        - name: nginx-app-svc
          namespace: app-ns
          port: 80
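
    Apply the file once saved; the CRs above already specify the app-ns namespace in their metadata:

    kubectl apply -f gateway-tcp-cr.yaml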
    
  4. Once the gateway CRs are deployed, you can ping and curl the IP Address from a node outside of your cluster that falls within the same CIDR range. Run:

    curl <IP Address>
    

    Response:

    
    <!DOCTYPE html>
    <html>
    <head>
    <title>Welcome to nginx!</title>
    <style>
    html { color-scheme: light dark; }
    body { width: 35em; margin: 0 auto;
    font-family: Tahoma, Verdana, Arial, sans-serif; }
    </style>
    </head>
    <body>
    <h1>Welcome to nginx!</h1>
    <p>If you see this page, the nginx web server is successfully installed and
    working. Further configuration is required.</p>
    
    <p>For online documentation and support please refer to
    <a href="http://nginx.org/">nginx.org</a>.<br/>
    Commercial support is available at
    <a href="http://nginx.com/">nginx.com</a>.</p>
    
    <p><em>Thank you for using nginx.</em></p>
    </body>
    </html>
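
    If the curl does not respond, check whether the Gateway and L4Route CRs were accepted. The plural resource names below are assumptions; confirm them with kubectl api-resources if your release differs:

    kubectl get gateways.gateway.k8s.f5net.com,l4routes.gateway.k8s.f5net.com -n app-ns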
    

Egress Traffic

Egress traffic flows from a Pod running inside your Kubernetes cluster to a server outside the cluster. In the example egress test below, we perform a curl from the nginx application Pod to the Client on the external network.

In this setup, the Client itself runs an nginx web server, which serves as the external destination.
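
One way to stand up that web server on the Client, assuming Docker is available there (any standard nginx installation works just as well; the container name is illustrative):

    docker run -d --rm -p 80:80 --name egress-test-nginx nginx:latest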

Note: Ensure that the Client’s external interface has an IP address in the external VLAN’s CIDR range, matching the VLAN connectivity check.
  1. Check the ens17f0np0 interface for IPs within the /24 CIDR range. Make sure that there is at least one IP in the 192.168.10.x/24 range, such as 192.168.10.10/24. On the Client (external), run:

    ip a
    

    Response:

    4: ens17f0np0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
        link/ether b8:3e:de:c0:49:20 brd ff:ff:ff:ff:ff:ff
        altname enp152s0f0np0
        inet <IP Address>/24 scope global ens17f0np0
        valid_lft forever preferred_lft forever
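
    If no address from that range is present, you can add one temporarily, using the example values above:

    sudo ip addr add 192.168.10.10/24 dev ens17f0np0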
    
  2. You will need the nginx application Pod running in your Kubernetes cluster. See steps 1 and 2 of Ingress Traffic to install the nginx test application Pod.

Egress with VLAN

  1. Create a snatpool and egress custom resource file with the contents below:

    vi snatpool-cr.yaml
    

    The snatpool address is used as the SNAT IP that identifies egress traffic coming from BIG-IP Next for Kubernetes. The snatpool IP should be within your external network’s CIDR range.

    Contents:

    snatpool-cr:

    apiVersion: "k8s.f5net.com/v1"
    kind: F5SPKSnatpool
    metadata:
      name: "egress-snat-3000"
    spec:
      name: "egress_snatpool"
      sharedSnatAddressEnabled: true
      addressList:
        - - <IP Address> # <-- snat IP
    

    egress_with_vlan:

    apiVersion: k8s.f5net.com/v3
    kind: F5SPKEgress
    metadata:
      name: egress-vlan
    spec:
      dualStackEnabled: true
      snatType: SRC_TRANS_SNATPOOL
      maxTmmReplicas: 1
      egressSnatpool: egress_snatpool
      pseudoCNIConfig:
        namespaces:
          - app-ns
        appPodInterface: eth0
        appNodeInterface: <HOST NODE INTERFACE NAME>
        vlanName: <TMM INTERNAL VLAN INTERFACE NAME>
    
  2. Apply your snatpool-cr.yaml:

    kubectl apply -f snatpool-cr.yaml -n default
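
    To confirm the CRs were created, you can list exactly the resources defined in the file:

    kubectl get -f snatpool-cr.yaml -n default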
    
  3. To test egress traffic, you can shell (exec -it) into the Nginx application Pod inside the cluster.

    a. Get the pod ID for the nginx application:

     kubectl get pods -n app-ns
    

    Response:

     NAME                                 READY   STATUS    RESTARTS   AGE
     nginx-deployment-5798c85b9c-qtqnd    1/1     Running   0          4d21h
    

    b. Shell into the application pod:

     kubectl exec -it pod/nginx-deployment-5798c85b9c-qtqnd -n app-ns -- bash
    
  4. Perform a curl from inside the Nginx application pod in the Kubernetes cluster to the Client Server’s nginx web server. In the Nginx application pod, run:

    curl <IP Address>
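
    To confirm that the traffic leaves the cluster with the SNAT pool address as its source, you can capture on the Client while the curl runs, reusing the example interface name from above:

    sudo tcpdump -ni ens17f0np0 tcp port 80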
    
  5. If your setup is correct, you will receive a response from the nginx web server running on the Client, outside of your Kubernetes cluster.

    Response:

    
    <!DOCTYPE html>
    <html>
    <head>
    <title>Welcome to nginx!</title>
    <style>
    html { color-scheme: light dark; }
    body { width: 35em; margin: 0 auto;
    font-family: Tahoma, Verdana, Arial, sans-serif; }
    </style>
    </head>
    <body>
    <h1>Welcome to nginx!</h1>
    <p>If you see this page, the nginx web server is successfully installed and
    working. Further configuration is required.</p>
    
    <p>For online documentation and support please refer to
    <a href="http://nginx.org/">nginx.org</a>.<br/>
    Commercial support is available at
    <a href="http://nginx.com/">nginx.com</a>.</p>
    
    <p><em>Thank you for using nginx.</em></p>
    </body>
    </html>
    
    

Egress with VXLAN

Prerequisite

  • Make sure that the external and internal VLAN CRs are configured; see F5SPKVLAN.
  1. Create an egress-vxlan-cr.yaml file with the F5SPKEgress CR contents below (the VLAN CRs are covered by the prerequisite above):

    vi egress-vxlan-cr.yaml

    Contents:

    
    apiVersion: k8s.f5net.com/v3
    kind: F5SPKEgress
    metadata:
      name: egress-cr-primary-vxlan-105
    spec:
      dualStackEnabled: true
      snatType: SRC_TRANS_SNATPOOL
      maxTmmReplicas: 1
      egressSnatpool: egress_snatpool
      pseudoCNIConfig:
        namespaces:
          - app-ns
        appPodInterface: eth0
        appNodeInterface: <HOST NODE VXLAN INTERFACE NAME>
        vlanName: <TMM VXLAN INTERFACE NAME>
    
  2. Verify the VXLAN interfaces that have been created on the Host node.

    ip a | grep vxlan
    

    You should see the vxlan100 interface.

    
    
    66700: vxlan100.100: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
        inet <IP Address>/<subnet> scope global vxlan100
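
    For more detail on the VXLAN link, such as its VNI and endpoint addresses, you can inspect it with:

    ip -d link show vxlan100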
    
  3. Repeat the egress test from the Egress with VLAN section above (steps 3 through 5); a compact version is shown below.
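
    A compact way to rerun the curl, assuming the Deployment name from the ingress example and that curl is available in the Pod image:

    kubectl exec deploy/nginx-deployment -n app-ns -- curl -s <IP Address>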