Troubleshoot Your Kubernetes Deployment

How to get help

If the issue you’re experiencing isn’t covered here, try one of the following options:

General Kubernetes troubleshooting

The following troubleshooting docs may help with Kubernetes-specific issues.

BIG-IP Controller troubleshooting

Hint

You can use kubectl to check the BIG-IP Controller’s configuration from the command line.

kubectl get pod myBigIpCtlr -o yaml [--namespace=kube-system]  # Returns the Pod's YAML settings
kubectl describe pod myBigIpCtlr [--namespace=kube-system]     # Returns an information dump about the Pod you can use to troubleshoot specific issues

Hint

When in doubt, restart the Controller.

Just like your wifi at home, sometimes you just need to turn it off and turn it back on again. With the BIG-IP Controller, you can do this by deleting the k8s-bigip-ctlr Pod. A new Pod deploys automatically, thanks to the ReplicaSet.

kubectl get pod --namespace=kube-system
NAME                             READY     STATUS            RESTARTS   AGE
k8s-bigip-ctlr-687734628-7fdds   0/1       CrashLoopBackOff  2          15d

kubectl delete pod k8s-bigip-ctlr-687734628-7fdds --namespace=kube-system
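
After you delete the Pod, you can watch for the ReplicaSet to bring up its replacement:

kubectl get pod --namespace=kube-system -w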

I just deployed the Controller; how do I verify that it’s running?

  1. Find the name of the k8s-bigip-ctlr Pod.

    kubectl get pod --namespace=kube-system
    NAME                             READY     STATUS    RESTARTS   AGE
    k8s-bigip-ctlr-687734628-7fdds   1/1       Running   0          15d
    
  2. Check the status of the Pod.

    kubectl get pod k8s-bigip-ctlr-687734628-7fdds -o yaml --namespace=kube-system
    
  3. View the Controller logs.

    View the logs
    kubectl logs k8s-bigip-ctlr-687734628-7fdds --namespace=kube-system
    
    Follow the logs
    kubectl logs -f k8s-bigip-ctlr-687734628-7fdds --namespace=kube-system
    
    View logs for a container that isn’t responding
    kubectl logs --previous k8s-bigip-ctlr-687734628-7fdds --namespace=kube-system
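
    If the log output is long, you can also pipe it through grep to narrow it down to errors and warnings:

    kubectl logs k8s-bigip-ctlr-687734628-7fdds --namespace=kube-system | grep -i error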
    

How do I set the log level?

To change the log level for the BIG-IP Controller:

  1. Edit the Deployment YAML and add the following to the args section.

    "--log-level=DEBUG"
    
  2. Replace the BIG-IP Controller Deployment.

    kubectl replace -f f5-k8s-bigip-ctlr.yaml
    
  3. Verify the Deployment updated successfully.

    kubectl describe deployment k8s-bigip-ctlr --namespace=kube-system
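
    Optionally, confirm that the new flag shows up in the deployed args (a quick check that assumes the Deployment name used throughout this guide):

    kubectl get deployment k8s-bigip-ctlr --namespace=kube-system -o yaml | grep log-level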
    

Why didn’t the k8s-bigip-ctlr show up when I ran “get pods”?

If you launched the BIG-IP Controller in the kube-system namespace, you should add the --namespace flag to your kubectl get command.

kubectl get pod --namespace=kube-system
kubectl get pod myBigIpCtlr --namespace=kube-system
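
If you’re not sure which Namespace the Controller is running in, you can search across all of them:

kubectl get pods --all-namespaces | grep bigip-ctlr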

What happened to my BIG-IP configuration changes?

If you make changes to objects in the partition managed by the BIG-IP Controller – whether via configuration sync or manually – the Controller will overwrite them. By design, the BIG-IP Controller keeps the BIG-IP system in sync with what it knows to be the desired configuration. For this reason, F5 does not recommend making any manual changes to objects in the partition(s) managed by the BIG-IP Controller.


The BIG-IP pool members use the Kubernetes Node IPs instead of the Pod IPs

The BIG-IP Controller uses node IPs when running in its default mode, nodeport. See Nodeport mode vs Cluster mode for more information.
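
A quick way to see the difference is to compare the addresses Kubernetes reports for Nodes and Pods against the pool members on the BIG-IP system:

kubectl get nodes -o wide   # the INTERNAL-IP column shows the Node IPs used in nodeport mode
kubectl get pods -o wide    # the IP column shows the Pod IPs used in cluster mode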


Why didn’t the BIG-IP Controller create any objects on my BIG-IP?

Check the BIG-IP Controller settings against those of the Service you want it to watch to make sure everything aligns correctly.

Do the namespaces match?

By default, the BIG-IP Controller watches all Kubernetes Namespaces (as of v1.3.0). If you do specify a Namespace to watch in the k8s-bigip-ctlr Deployment, make sure it matches that of the Kubernetes Resources you want to manage.

In the examples below, the Namespace in the Service doesn’t match that provided in the sample Deployment. [1]

Sample Kubernetes Service
kind: Service
apiVersion: v1
metadata:
  name: hello
  namespace: test
Excerpt from a sample Deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: k8s-bigip-ctlr-deployment
  namespace: kube-system
...
          args: [
            "--bigip-username=$(BIGIP_USERNAME)",
            "--bigip-password=$(BIGIP_PASSWORD)",
            "--bigip-url=10.10.10.10",
            "--bigip-partition=kubernetes",
            # THE LINE BELOW TELLS THE CONTROLLER WHAT NAMESPACE TO WATCH
            "--namespace=prod",
            ]
...
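
To confirm where each object lives, you can list the Service across all Namespaces and check the Controller’s args (the names below come from the examples above):

kubectl get svc --all-namespaces | grep hello
kubectl get deployment k8s-bigip-ctlr-deployment --namespace=kube-system -o yaml | grep -e "--namespace"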

Are the Service name and port correct?

Make sure the name and port in your virtual server ConfigMap match those defined for the Service.

Service field                      ConfigMap data.data field
metadata.name                      virtualServer.backend.serviceName
spec.ports.[port | targetPort]     virtualServer.backend.servicePort

In the examples below, the servicePort and serviceName don’t match the name and port in the example Service. [1]

Sample Kubernetes Service
kind: Service
apiVersion: v1
metadata:
  name: hello
  namespace: test
spec:
  selector:
    app: hello
    tier: backend
  ports:
  - protocol: TCP
    port: 80
    targetPort: http
Excerpt from a sample virtual server ConfigMap
kind: ConfigMap
apiVersion: v1
...
data:
 schema: "f5schemadb://bigip-virtual-server_v0.1.7.json"
  data: |
    {
      "virtualServer": {
        "backend": {
          "servicePort": 8080,
          "serviceName": "helo",
        },
   ...
    }
[1] Example Service referenced from Connect a Front End to a Back End Using a Service
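
To cross-check the values before you edit the ConfigMap, describe the Service (the hello/test names come from the example above) and compare its Name and Port against serviceName and servicePort:

kubectl describe svc hello --namespace=test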

Do the service types match?

The default type used for Service resources in Kubernetes is ClusterIP. The corresponding setting for the k8s-bigip-ctlr – pool-member-type – defaults to nodeport.

If you didn’t specify a type in the Service definition or a pool-member-type in the BIG-IP Controller Deployment, you probably have a service type mismatch: the Controller’s default nodeport mode expects Services of type NodePort, while a Service with no type defaults to ClusterIP.

See Nodeport mode vs Cluster mode for more information about each service type and its recommended use.

Did you provide valid JSON?

The settings provided in the data.data section of your ConfigMap must be valid JSON. Run your desired configurations through a JSON linter before use to avoid potential object creation errors.
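
For example, if you save the data.data JSON to a local file (vs-config.json is a hypothetical name), any common linter will catch syntax errors before you apply the ConfigMap:

python3 -m json.tool vs-config.json
jq . vs-config.json    # alternative, if jq is installed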

Have you used the correct version of the F5 schema?

Each F5 schema release adds support for the features introduced in the corresponding Controller version. For example, if you use v1.3.0 of the Controller with v0.1.2 of the schema, the Controller’s core functionality still works, but you can’t use the features introduced in k8s-bigip-ctlr v1.3.0.

Are you looking in the correct partition on the BIG-IP system?

If you’re in the Common partition, switch to the partition managed by the BIG-IP Controller to find the objects it deployed.

  • In the BIG-IP configuration utility (aka, the GUI), check the partition drop-down menu.

  • In the BIG-IP Traffic Management shell (TMSH), check the name of the partition shown in the prompt.

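
For example, from the TMSH prompt you can change to the managed partition and list the objects in it (the partition name kubernetes below is an assumption; substitute your own):

cd /kubernetes
list ltm virtual
list ltm pool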

Why didn’t the BIG-IP Controller create the pools/rules for my Ingress?

When you create multiple rules in an Ingress that overlap, Kubernetes silently drops all but one of them. If you don’t see all of the pools and/or rules you expect to see on the BIG-IP system, double-check your Ingress resource for redundant or overlapping settings.

For example, say you want to create a pool for your website’s frontend app, with one (1) pool member for each of the Services comprising the app.

Good: 1 rule that includes both Services comprising the frontend app
host: mysite.example.com
   path: /frontend
   - service: svc1
   - service: svc2
Bad: 2 rules that both attempt to route traffic for the frontend app
host: mysite.example.com
   path: /frontend
   - service: svc1

host: mysite.example.com
   path: /frontend
   - service: svc2

In the latter case, Kubernetes would drop one of the overlapping rules and the BIG-IP Controller would only create one (1) pool member on the BIG-IP system.
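
To see which rules Kubernetes actually kept, describe the Ingress (myingress is a placeholder name):

kubectl describe ingress myingress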


Why don’t my Annotations work?

Are you using Annotations recommended for a different Kubernetes Ingress Controller?

Annotations aren’t universally applicable. You should only use Annotations included in the list of Ingress annotations supported by the BIG-IP Controller.
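
To review the Annotations currently set on an Ingress (myingress is a placeholder name):

kubectl get ingress myingress -o jsonpath='{.metadata.annotations}'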


Why did I see a traffic group error when I deployed my iApp?

You may see the error below when deploying an iApp. This error means the iApp is attempting to create a virtual IP address (VIP) in a traffic group that conflicts with the partition’s default traffic group.

Configuration error: Unable to to create virtual address (/myPartition/127.0.0.2) as part of application
(/myPartition/default_k8s.http.app/default_k8s.http) because it matches the self ip (/Common/selfip.external)
which uses a conflicting traffic group (/Common/traffic-group-local-only)

You should be able to resolve this error by either changing the default traffic group for the partition or specifying the desired traffic group in the F5 resource ConfigMap for the iApp. If these options do not resolve the issue, contact your F5 Support representative for assistance.

Note

The first option (changing the partition’s default traffic group) is preferred because it only requires a single configuration change on the BIG-IP system.

If you don’t update the default traffic group for your partition, you will have to specify the correct traffic group in your F5 Resource ConfigMap every time you use the BIG-IP Controller to deploy an iApp.

Change the default traffic group for the partition on the BIG-IP system

You can use a TMOS shell or the configuration utility to set the default traffic group for the partition the BIG-IP Controller manages.

  • In a TMOS shell:

    Run the commands shown below. Substitute the items in angle brackets with the appropriate information for your environment.

    modify /sys folder /<partition>/ traffic-group <desired-traffic-group>
    save /sys config partitions all
    
  • In the config utility:

    1. Go to System ‣ Users ‣ Partition List.
    2. Click on the partition you want to manage.
    3. Select the correct traffic group.
    4. Click Update.
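
Whichever method you use, you can confirm the partition’s current default traffic group from a TMOS shell (substitute your partition name for <partition>):

list /sys folder /<partition> traffic-group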

Set the desired traffic group in your F5 resource ConfigMap

Include the following in the frontend section of your iApp F5 resource:

"iappOptions": {
   "description": "os.routing-virtual iApp",
   "trafficGroup": "/Common/traffic-group-1"
},

Note

If you choose this option, you must specify the desired traffic group every time you use the BIG-IP Controller to deploy an iApp.