Troubleshoot Your Kubernetes Deployment

How to get help

If the issue you’re experiencing isn’t covered here, try one of the following options:

General Kubernetes troubleshooting

The following troubleshooting docs may help with Kubernetes-specific issues.

BIG-IP Controller troubleshooting


You can use kubectl commands to check the BIG-IP Controller configurations using the command line.

kubectl get pod -o yaml [--namespace=kube-system]          # Returns the Pod's YAML settings
kubectl describe pod myBigIpCtlr [--namespace=kube-system] # Returns detailed information about the Pod you can use to troubleshoot specific issues


When in doubt, restart the Controller.

Just like your wifi at home, sometimes you just need to turn it off and turn it back on again. With the BIG-IP Controller, you can do this by deleting the k8s-bigip-ctlr Pod. A new Pod deploys automatically, thanks to the ReplicaSet.

kubectl get pod --namespace=kube-system
NAME                             READY     STATUS            RESTARTS   AGE
k8s-bigip-ctlr-687734628-7fdds   0/1       CrashLoopBackOff  2          15d

kubectl delete pod k8s-bigip-ctlr-687734628-7fdds --namespace=kube-system

I just deployed the Controller; how do I verify that it’s running?

  1. Find the name of the k8s-bigip-ctlr Pod.

    kubectl get pod --namespace=kube-system
    NAME                             READY     STATUS    RESTARTS   AGE
    k8s-bigip-ctlr-687734628-7fdds   1/1       Running   0          15d
  2. Check the status of the Pod.

    kubectl get pod k8s-bigip-ctlr-687734628-7fdds -o yaml --namespace=kube-system
  3. View the Controller logs.

    View the logs
    kubectl logs k8s-bigip-ctlr-687734628-7fdds --namespace=kube-system
    Follow the logs
    kubectl logs -f k8s-bigip-ctlr-687734628-7fdds --namespace=kube-system
    View logs for a container that isn’t responding
    kubectl logs --previous k8s-bigip-ctlr-687734628-7fdds --namespace=kube-system

How do I set the log level?

To change the log level for the BIG-IP Controller:

  1. Edit the Deployment YAML and add the desired log level to the args section.
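    A minimal sketch of the args excerpt, assuming you want DEBUG-level logging (the k8s-bigip-ctlr accepts a --log-level option; substitute the level you need):

    ```yaml
    args: [
      # illustrative excerpt; keep your existing args and append the log level
      "--log-level=DEBUG",
    ]
    ```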

  2. Replace the BIG-IP Controller deployment

    kubectl replace -f f5-k8s-bigip-ctlr.yaml
  3. Verify the Deployment updated successfully.

    kubectl describe deployment k8s-bigip-ctlr --namespace=kube-system

Why didn’t the k8s-bigip-ctlr show up when I ran “get pods”?

If you launched the BIG-IP Controller in the kube-system namespace, add the --namespace flag to your kubectl get command.

kubectl get pod --namespace=kube-system
kubectl get pod myBigIpCtlr --namespace=kube-system

What happened to my BIG-IP configuration changes?

If you make changes to objects in the partition managed by the BIG-IP Controller – whether via configuration sync or manually – the Controller will overwrite them. By design, the BIG-IP Controller keeps the BIG-IP system in sync with what it knows to be the desired configuration. For this reason, F5 does not recommend making any manual changes to objects in the partition(s) managed by the BIG-IP Controller.

The BIG-IP pool members use the Kubernetes Node IPs instead of the Pod IPs

The BIG-IP Controller uses node IPs when running in its default mode, nodeport. See Nodeport mode vs Cluster mode for more information.
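In nodeport mode, the Controller expects the Services it manages to expose a NodePort. A minimal sketch of a matching Service, with illustrative names and ports:

```yaml
kind: Service
apiVersion: v1
metadata:
  name: hello            # illustrative name
spec:
  type: NodePort         # exposes the Service on each Node's IP, which is what nodeport mode uses
  ports:
  - port: 80
    targetPort: 8080     # illustrative container port
```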

Why didn’t the BIG-IP Controller create any objects on my BIG-IP?

Check the BIG-IP Controller settings against those of the Service you want it to watch to make sure everything aligns correctly.

Do the namespaces match?

By default, the BIG-IP Controller watches all Kubernetes Namespaces (as of v1.3.0). If you do specify a Namespace to watch in the k8s-bigip-ctlr Deployment, make sure it matches that of the Kubernetes Resources you want to manage.

In the examples below, the Namespace in the Service doesn’t match the Namespace provided in the sample Deployment. [1]

Sample Kubernetes Service
kind: Service
apiVersion: v1
metadata:
  name: hello
  namespace: test
Excerpt from a sample Deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: k8s-bigip-ctlr-deployment
  namespace: kube-system
spec:
  template:
    spec:
      containers:
      - name: k8s-bigip-ctlr
        args: [
          "--namespace=default",
        ]

Are the Service name and port correct?

Make sure the name and port in your virtual server ConfigMap match those defined for the Service.

Service field                    ConfigMap field
metadata.name                    virtualServer.backend.serviceName
spec.ports.[port | targetPort]   virtualServer.backend.servicePort

In the examples below, the servicePort and serviceName don’t match the name and port in the example Service. [1]

Sample Kubernetes Service
kind: Service
apiVersion: v1
metadata:
  name: hello
  namespace: test
spec:
  selector:
    app: hello
    tier: backend
  ports:
  - protocol: TCP
    port: 80
    targetPort: http
Excerpt from a sample virtual server ConfigMap
kind: ConfigMap
apiVersion: v1
data:
  schema: "f5schemadb://bigip-virtual-server_v0.1.7.json"
  data: |
    {
      "virtualServer": {
        "backend": {
          "servicePort": 8080,
          "serviceName": "helo",
[1] Example Service referenced from Connect a Front End to a Back End Using a Service

Do the service types match?

The default type for Service resources in Kubernetes is ClusterIP. The corresponding k8s-bigip-ctlr setting, pool-member-type, defaults to nodeport.

If you didn’t specify a type in the Service definition or a pool-member-type in the BIG-IP Controller Deployment, the defaults don’t align and you probably have a service type mismatch.

See Nodeport mode vs Cluster mode for more information about each service type and its recommended use.

Did you provide valid JSON?

The settings provided in the data section of your ConfigMap must be valid JSON. Run your desired configurations through a JSON linter before use to avoid potential object creation errors.
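For example, assuming python3 is available on your workstation, you can lint the JSON before pasting it into the ConfigMap; the virtualServer snippet below is illustrative:

```shell
# Lint the virtual server JSON before adding it to the ConfigMap's data section.
# python3 -m json.tool pretty-prints valid JSON and exits non-zero (with a
# parse error) on invalid JSON.
echo '{"virtualServer": {"backend": {"serviceName": "hello", "servicePort": 80}}}' \
  | python3 -m json.tool
```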

Have you used the correct version of the F5 schema?

Each F5 schema release adds support for the features introduced in the corresponding Controller version. For example, if you use v1.3.0 of the Controller with v0.1.2 of the schema, the Controller’s core functionality works fine, but you can’t use the features introduced in k8s-bigip-ctlr v1.3.0.

Are you looking in the correct partition on the BIG-IP system?

If you’re in the Common partition, switch to the partition managed by the BIG-IP Controller to find the objects it deployed.

  • In the BIG-IP configuration utility (aka, the GUI), check the partition drop-down menu.

  • In the BIG-IP Traffic Management shell (TMSH), check the name of the partition shown in the prompt.


Why didn’t the BIG-IP Controller create the pools/rules for my Ingress?

When you create multiple rules in an Ingress that overlap, Kubernetes silently drops all but one of them. If you don’t see all of the pools and/or rules you expect to see on the BIG-IP system, double-check your Ingress resource for redundant or overlapping settings.

For example, say you want to create a pool for your website’s frontend app, with one (1) pool member for each of the Services comprising the app.

Good: 1 rule that includes both Services comprising the frontend app
   path: /frontend
   - service: svc1
   - service: svc2
Bad: 2 rules that both attempt to route traffic for the frontend app
   path: /frontend
   - service: svc1

   path: /frontend
   - service: svc2

In the latter case, Kubernetes would drop one of the overlapping rules and the BIG-IP Controller would only create one (1) pool member on the BIG-IP system.
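One quick way to spot overlapping rules is to list any path that appears more than once in the Ingress manifest. The sketch below pipes in sample text for illustration; against a real file you would run the grep on your manifest instead:

```shell
# List any Ingress path that occurs more than once.
# With a real manifest: grep -o 'path: [^ ]*' my-ingress.yaml | sort | uniq -d
# (my-ingress.yaml is an illustrative file name)
printf 'path: /frontend\npath: /frontend\npath: /api\n' \
  | grep -o 'path: [^ ]*' | sort | uniq -d
# prints: path: /frontend
```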

Why don’t my Annotations work?

Are you using Annotations recommended for a different Kubernetes Ingress Controller?

Annotations aren’t universally applicable. You should only use Annotations included in the list of Ingress annotations supported by the BIG-IP Controller.