
Frequently Asked Questions (FAQ)

What is Container Ingress Services?

Container Ingress Services (CIS) is an open-source F5 solution that provides Ingress control and BIG-IP app services in leading containerized and PaaS environments.

Container Ingress Services enables app self-service selection and service automation for DevOps' containerized application deployments. CIS integrates with native container environment management/orchestration systems, such as Kubernetes and the Red Hat OpenShift platform as a service (PaaS). CIS is delivered as the bigip-controller-<environment>, which integrates the BIG-IP control plane into the specific container environment:

  • F5 BIG-IP Controller for Kubernetes
  • F5 BIG-IP Controller for OpenShift

What is the latest version of Container Ingress Services?

Refer to the CIS Release Notes or the GitHub Releases page for the latest available release version.

CIS releases are available for download from Docker Hub and the Red Hat Container Registry.


What platforms has it been validated on?

CIS is tested on the latest updates to Kubernetes, OpenShift, and F5 BIG-IP.

Refer to the CIS Compatibility Matrix for each CIS version and its compatibility with Kubernetes, OpenShift, and F5 BIG-IP.


What are some limitations I should be aware of in CIS 2.0?

  • The master node label must be set to node-role.kubernetes.io/master=true when operating in nodeport mode on Kubernetes 1.13.4 or OpenShift Container Platform 4.1 and above. If the label is not set, BIG-IP treats the master node like any other pool member (see the example after this list).
  • CIS treats the secure-serverssl annotation as true irrespective of the configuration.
  • VXLAN tunnel names starting with the prefix “k8s” are not supported. CIS uses the prefix “k8s” to differentiate managed resources from user-created resources.
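
If the label is missing, it can be applied with kubectl. A minimal sketch, assuming a master node named master-0 (the node name is a placeholder):

    $> kubectl label node master-0 node-role.kubernetes.io/master=true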

What is F5’s positioning with regards to Container Ingress Services?

  • Dynamic application services integration for containerized environments: enables app delivery through self-service selection of app services in orchestration for DevOps, and automates spin-up and spin-down based on event discovery.
  • Improved user experience and productivity through integration with existing native app deployment workflows and new URL rewrite paths.
  • Control plane connectors that integrate BIG-IP into container environment management and orchestration systems. CIS supports Kubernetes and Red Hat OpenShift.

Note

Red Hat OpenShift PaaS uses Kubernetes for container management and orchestration, though OpenShift adds its own command-line utility and graphical user interface (UI). You can use the BIG-IP CIS for Kubernetes in both Kubernetes and Red Hat OpenShift environments.

  • Container Ingress Services allows BIG-IP to provide Ingress control services, including HTTP routing, URI routing, and API versioning, to the container environment. In addition, Ingress services include load balancing, scaling, security services, and programmability.
  • Simplified Container Ingress Services deployment using pre-configured Kubernetes Helm charts, with flexible deployment from pre-existing configurations.

What is the customer value proposition for Container Ingress Services?

Container Ingress Services delivers F5 inline integration for app performance and security services orchestration and management through native integrations with container environments. CIS enables self-service selection within orchestration management for container applications, and automated discovery and service insertion based on app events. In addition, CIS creates the appropriate service configurations on the inline BIG-IP for Ingress control within container environments, covering performance, security, and management of container app traffic. Finally, CIS simplifies deployment through pre-configured Kubernetes Helm charts and enables flexible deployment of OpenShift Routes with pre-existing configurations.


Who are the Target Customers?

Network architects, network operations (NetOps), AppDev, DevOps, and system infrastructure teams. Container Ingress Services exposes BIG-IP services to NetOps, network architects' customers, application owners, and system infrastructure teams through their container management/orchestration system for self-service and standardization. NetOps often cannot keep up with the many container app service IT requests that AppDev and system teams raise for the DevOps process; through this integration, NetOps can provide self-service and automation of app services within the container orchestration UI.


What are Container Ingress Services use cases?

  • Dynamic app services for container environments: BIG-IPs can have application-level objects (VIPs, pools, pool members) provisioned and managed from within the container orchestration environment, enabling auto-scaling of pool members up or down depending on app services demand.
  • Auto-scaling and security in cloud and on-premises container environments: self-service selection within the orchestration UI, or automated app performance and security services based on event discovery, for on-premises and cross-cloud container applications.
  • Advanced container app protection: Container Ingress Services integrated with BIG-IP and the container environment provides simplified and centralized app and network protection. Integrate with vulnerability assessment for patching, gain attack insights from F5, and export the data stream to Prometheus, Splunk, or a SIEM/analytics solution.
  • Streamlined app migration and scaling of multiple app versions simultaneously: blue/green deployments run multiple app versions in Red Hat OpenShift PaaS in production at the same time for scaling and moving to newer applications. A/B testing provides traffic management of two or more app versions in Red Hat OpenShift for development and testing at the same time.

What are the behavior changes for CIS version 2.0?

CIS behavior changes in version 2.0:

  • The default CIS agent is AS3.
  • When the CIS deployment parameter --agent is set to as3, CIS silently discards any ConfigMap that does not carry the label as3=true (see the sketch after this list).
  • Deletion of ConfigMaps is supported for both override ConfigMaps and user-defined ConfigMaps.
  • An informer is implemented for the override ConfigMap, so you do not need to restart CIS to modify it.
  • AS3 3.18 is required for the CIS 2.0 release.
  • CIS 2.0 uses the local AS3 3.18 schema and does not fetch the latest schema from the GitHub repository.
  • CIS populates ARP entries in BIG-IP on successful L4-L7 configuration.
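
The agent is selected through command-line arguments on the CIS container. A minimal sketch of the relevant part of the Deployment spec (the image tag and argument values are placeholders, not prescriptive):

containers:
  - name: k8s-bigip-ctlr
    image: "f5networks/k8s-bigip-ctlr:latest"
    args:
      # Select the AS3 agent (the default in CIS 2.0).
      - "--agent=as3"
      - "--bigip-url=<BIG-IP-URL>"
      - "--bigip-partition=<partition>"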


Why does CIS need admin credentials?

CIS needs admin credentials because it requires administrative privileges on the BIG-IP iControl REST partition to configure objects.

The BIG-IP user account must have the appropriate role defined: for both nodeport and cluster type pool members, the role must be Administrator.

CIS also lets you manage your BIG-IP credentials as Kubernetes (K8S) Secrets, so sensitive information can be stored and managed securely.

The credentials-directory option is an alternative to using the bigip-username, bigip-password, or bigip-url arguments.

When you use this argument, the controller looks for three files in the specified directory: “username”, “password”, and “url”. If any of these files do not exist, the controller falls back to using the CLI arguments as parameters.

Each file should contain only the username, password, and URL, respectively. You can create and mount the files as Kubernetes Secrets.

Important

Do not project the Secret keys to specific paths, because the controller looks for the “username”, “password”, and “url” files directly within the credentials directory.
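
A minimal sketch of this pattern, assuming a Secret named bigip-login mounted at /tmp/creds in the kube-system namespace (all three names are illustrative, not required by CIS):

apiVersion: v1
kind: Secret
metadata:
  name: bigip-login
  namespace: kube-system
type: Opaque
stringData:
  # The controller reads files named "username", "password", and "url" from the mounted directory.
  username: admin
  password: <BIG-IP-PASSWORD>
  url: https://<BIG-IP-ADDRESS>

The Secret is then mounted into the CIS Deployment and referenced with the credentials-directory argument:

containers:
  - name: k8s-bigip-ctlr
    args:
      - "--credentials-directory=/tmp/creds"
    volumeMounts:
      - name: bigip-creds
        mountPath: /tmp/creds
        readOnly: true
volumes:
  - name: bigip-creds
    secret:
      secretName: bigip-login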


What is Ingress and how is it different from ingress?

Ingress with a capital “I” refers to HTTP routing, or a collection of rules to reach the cluster’s services. By contrast, ingress with a lower-case “i” typically refers to inbound connections, app load balancing, and security services.


How can you use Client Certificate Constrained Delegation (C3D) in a declaration?

C3D is used to support complete end-to-end encryption when interception of SSL traffic in a reverse proxy environment is required and when client certificates are used for mutual authentication.

This declaration configures the following on the BIG-IP:

  • A partition (tenant) named ConfigMap.
  • An application named MyApps.
  • A serverTLS reference to the pre-existing BIG-IP Client SSL profile /Common/clientssl_c3d, which has C3D features enabled.
  • A clientTLS reference to the pre-existing BIG-IP Server SSL profile /Common/serverssl_c3d, which has C3D features enabled.
basic_deployment.yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: f5demo-as3-configmap
  labels:
    f5type: virtual-server
    as3: "true"
data:
  template: |
    {
      "class": "AS3",
      "declaration": {
        "class": "ADC",
        "schemaVersion": "3.1.0",
        "id": "f5demo",
        "label": "CIS AS3 Example",
        "remark": "Example of using CIS ConfigMap with C3D",
        "ConfigMap": {
          "class": "Tenant",
          "MyApps": {
            "class": "Application",
            "template": "generic",
            "frontend": {
              "class": "Service_HTTPS",
              "virtualAddresses": ["10.1.10.10"],
              "remark": "C3D demo",
              "persistenceMethods": [],
              "virtualPort": 443,
              "pool": "frontend_tls_pool",
              "serverTLS": { "bigip": "/Common/clientssl_c3d" },
              "clientTLS": { "bigip": "/Common/serverssl_c3d" },
              "profileHTTP": { "use": "XFF_HTTP_Profile" }
            },
            "frontend_tls_pool": {
              "class": "Pool",
              "monitors": ["tcp"],
              "members": [{
                "servicePort": 8443,
                "serverAddresses": [],
                "shareNodes": true
              }]
            },
            "XFF_HTTP_Profile": {
              "class": "HTTP_Profile",
              "xForwardedFor": true
            }
          }
        }
      }
    }
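
Assuming the ConfigMap above is saved as basic_deployment.yaml, you can apply it to the cluster that CIS is watching:

    $> kubectl apply -f basic_deployment.yaml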

What happens when CIS pod crashes?

CIS is a control-plane component, not a data-plane component, so traffic is not impacted during a CIS outage. Once a new replica is online, CIS gathers the state of the cluster and applies it to the associated F5 BIG-IP.


How do I deploy CIS in an air-gapped or disconnected environment?

Follow these steps to download and deploy CIS in air-gapped or disconnected environments.

  1. Find the mirror registry and its authentication credentials to mirror CIS release images.

  2. Pull the CIS image to the local environment using the command below.

    $> docker pull docker.io/f5networks/k8s-bigip-ctlr:latest
    
  3. Tag the image with the mirror registry URL.

    $> docker tag docker.io/f5networks/k8s-bigip-ctlr:latest <--MIRROR-REGISTRY-URL-->/k8s-bigip-ctlr
    
  4. Log in to the mirror registry and push the tagged CIS release image.

    $> docker login <--MIRROR-REGISTRY-URL-->/k8s-bigip-ctlr
    $> docker push <--MIRROR-REGISTRY-URL-->/k8s-bigip-ctlr
    
  5. Update the CIS deployment to use the release image from the mirror registry.

    containers:
      - image: "<--MIRROR-REGISTRY-URL-->/k8s-bigip-ctlr"
    
  6. If required, update imagePullSecrets in the CIS deployment with credentials to pull the image from the mirror registry (a sketch follows).
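
A minimal sketch of step 6, assuming the registry credentials are stored in a Secret named mirror-registry-credentials and that CIS runs in kube-system (both are illustrative; adjust to your environment):

    $> kubectl create secret docker-registry mirror-registry-credentials \
         --docker-server=<--MIRROR-REGISTRY-URL--> \
         --docker-username=<username> \
         --docker-password=<password> \
         -n kube-system

The Secret is then referenced in the CIS Deployment pod spec:

    imagePullSecrets:
      - name: mirror-registry-credentials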


What is the recommendation/guidance on auto config-sync feature when BIG-IP HA members are managed by CIS?

For CIS deployments in BIG-IP HA, F5 recommends automatic config-sync be disabled.
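
Auto-sync is a property of the BIG-IP device group. A minimal sketch of disabling it with tmsh, assuming a device group named cis-dg (the device group name is hypothetical):

    $> tmsh modify cm device-group cis-dg auto-sync disabled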


Is NodePortLocal available for all CNIs?

NodePortLocal can be used only with the Antrea CNI, and the feature must be enabled in the Antrea feature gates.
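
A minimal sketch of enabling the feature gate, assuming Antrea is configured through the antrea-config ConfigMap in the kube-system namespace (the ConfigMap name and namespace may differ by installation):

apiVersion: v1
kind: ConfigMap
metadata:
  name: antrea-config
  namespace: kube-system
data:
  antrea-agent.conf: |
    # Enable NodePortLocal on the antrea-agent.
    featureGates:
      NodePortLocal: true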


Can we use Kubernetes Service of type NodePort as backend of an Ingress/Virtualserver in NodePortLocal mode?

No. In this mode, only a Service of type ClusterIP can be used as the backend of an Ingress or VirtualServer.


Is NodePortLocal supported in k8s and Openshift clusters?

NodePortLocal is currently validated on Tanzu Kubernetes clusters.


How do I configure OVN Kubernetes CNI in NodePort Mode?

OpenShift Container Platform (OCP) versions 4.9 and 4.10 do not require the cluster-mode annotations on the project/namespace resource. To work in NodePort mode, remove the OVN-Kubernetes (OVN K8S) advanced networking CNI-specific annotations that were added for cluster mode from all namespaces that CIS monitors and configures on BIG-IP. This example uses the namespace default.

apiVersion: v1
kind: Namespace
metadata:
  name: default

Note

OVN K8S CNI-specific annotations are k8s.ovn.org/hybrid-overlay-external-gw and k8s.ovn.org/hybrid-overlay-vtep.
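
If the annotations are already present, they can be removed with kubectl (a trailing hyphen removes an annotation). A sketch for the default namespace:

    $> kubectl annotate namespace default k8s.ovn.org/hybrid-overlay-external-gw- k8s.ovn.org/hybrid-overlay-vtep-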


Why does CIS get a successful POST response on restart even if nothing has changed?

If CIS gets a success response when it posts a declaration on restart, check whether the virtual server health monitors are configured properly. BIG-IP expects health monitors to have a timeout greater than the interval; if this condition is not satisfied, BIG-IP resets the interval to a value less than the specified timeout. As a result, the declaration that CIS posts differs from the one already deployed on BIG-IP, so the POST is treated as a change and returns success.
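
A minimal sketch of an AS3 monitor definition that satisfies the timeout > interval requirement (the monitor name and values are illustrative):

    "frontend_http_monitor": {
      "class": "Monitor",
      "monitorType": "http",
      "interval": 5,
      "timeout": 16
    }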



Note

To provide feedback on Container Ingress Services or this documentation, please file a GitHub Issue.