Debug Sidecar

The Service Proxy Pod’s debug sidecar provides a set of command-line utilities for obtaining low-level diagnostic data and statistics about the Service Proxy Traffic Management Microkernel (TMM). The debug sidecar deploys by default with the SPK Controller.

Command-Line Utilities

The table below lists and describes the available command-line utilities.

Utility      Description
tmctl        Displays various TMM traffic processing statistics, such as pool and virtual server connections.
core-tmm     Creates a diagnostic core file of the TMM process.
bdt_cli      Displays TMM networking information, such as ARP and route entries. See the bdt_cli section below.
mrfdb        Enables reading and writing dSSM database records. See the mrfdb section below.
configview   Displays Custom Resource (CR) configuration objects using their logged UUID.
tcpdump      Displays packets sent and received on the specified network interface.
ping         Sends ICMP ECHO_REQUEST packets to remote hosts.
traceroute   Displays the packet route in hops to a remote host.
netkvest     Performs connectivity checks to a remote host from the specified source SNAT pool using the ping and traceroute diagnostic utilities. See the netkvest section below.

Note: Type man f5-tools in the debug container to get a full list of TMM-specific commands.

Connecting to the debug sidecar

To connect to the debug sidecar and begin gathering diagnostic information, use the commands below.

  1. Connect to the debug sidecar.

    In this example, the debug sidecar is in the spk-ingress Project:

    oc exec -it deploy/f5-tmm -c debug -n spk-ingress -- bash
    
  2. Execute one of the available diagnostic commands.

    In this example, ping is used to test connectivity to a remote host with IP address 192.168.10.100:

    ping 192.168.10.100
    
    PING 192.168.10.100 (192.168.10.100): 56 data bytes
    64 bytes from 192.168.10.100: icmp_seq=0 ttl=64 time=0.067 ms
    64 bytes from 192.168.10.100: icmp_seq=1 ttl=64 time=0.067 ms
    64 bytes from 192.168.10.100: icmp_seq=2 ttl=64 time=0.067 ms
    64 bytes from 192.168.10.100: icmp_seq=3 ttl=64 time=0.067 ms
    
  3. Type exit to leave the debug sidecar.

Command Examples

tmctl

Use the tmctl utility to query Service Proxy TMM for application traffic processing statistics.

  1. Connect to the debug sidecar.

    oc exec -it deploy/f5-tmm -c debug -n <project> -- bash
    

    In this example, the debug sidecar is in the spk-ingress Project:

    oc exec -it deploy/f5-tmm -c debug -n spk-ingress -- bash
    
  2. To view virtual server connection statistics, run the following command.

    tmctl -d blade virtual_server_stat -s name,clientside.tot_conns
    
  3. To view pool member connection statistics, run the following command.

    tmctl -d blade pool_member_stat -s pool_name,serverside.tot_conns
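
Because tmctl writes plain-text columns, its output can be post-processed with standard shell tools. A minimal sketch that finds the busiest virtual server; the statistics below are made-up sample data standing in for live tmctl output:

```shell
# Rank virtual servers by total clientside connections by post-processing
# tmctl-style columnar output. The sample data is illustrative only.
stats='vs-web 1042
vs-dns 88
vs-sip 3177'
# Sort numerically (descending) on the connection count, keep the top row.
busiest=$(printf '%s\n' "$stats" | sort -k2 -rn | head -n1 | awk '{print $1}')
echo "$busiest"
```

On a live system, the same pipeline would be fed from the tmctl command in step 2 instead of the sample variable.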
    

bdt_cli

Use the bdt_cli utility to query the Service Proxy TMM for networking data.

Commands:

  • arp - Get ARP routes and their status.

  • check - Get TMM Check Magic.

  • completion - Generate the autocompletion script for the specified shell.

  • connection or connection list - Get the list of connections.

  • help - Help about any command.

  • l2forward - Get L2 Forwarding entries.

  • route - Get Route List.

  • logLevel - Set the TMM log level.

  • connection delete - Delete the connections based on filter operations.

Supported flags to filter connections for both list and delete commands:

  1. cs_client_addr - Clientside client IP address

  2. cs_client_port - Clientside client port

  3. cs_server_addr - Clientside server IP address

  4. cs_server_port - Clientside server port

  5. ss_server_addr - Serverside server IP address

  6. ss_server_port - Serverside server port

  7. ss_client_addr - Serverside client IP address

  8. ss_client_port - Serverside client port

  9. type - Connection Type

  10. protocol - Protocol

  11. idle_time - Idle Time

  12. connection_id - Connection ID

  13. vs_name - Virtual Server Name

  14. cs_client_prefix - Clientside client prefix

  15. cs_server_prefix - Clientside server prefix

  16. vlan_name - Vlan Name
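
The filter flags above can be combined on a single command line. A sketch using a hypothetical build_filter helper (not part of bdt_cli) that assembles a connection filter from shell variables; the flag names come from the supported-flags list above:

```shell
# Hypothetical helper: assemble a bdt_cli connection-list filter string
# from a clientside client IP and port.
build_filter() {
  printf 'connection list --cs_client_addr %s --cs_client_port %s' "$1" "$2"
}
build_filter 192.168.10.100 5506
```

The resulting string would then be appended to the usual invocation, for example: bdt_cli -u -s tmm0:8850 $(build_filter 192.168.10.100 5506).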

Command Example

  1. Connect to the debug sidecar.

    oc exec -it deploy/f5-tmm -c debug -n <project> -- bash 
    

    In this example, the debug sidecar is in the spk-ingress Project:

    oc exec -it deploy/f5-tmm -c debug -n spk-ingress -- bash
    
  2. Connect to TMM.

    bdt_cli -u -s tmm0:8850 [command] 
    
  3. For example, to show routes:

    bdt_cli -u -s tmm0:8850 route
    
    routeType:1 isIpv6:false destNet:{ip:{addr:<none>, rd:0} pl:0} gw:{ip:{addr:10.59.147.121, rd:0}} gwType:1 interface:external
    routeType:1 isIpv6:false destNet:{ip:{addr:10.19.148.120, rd:0} pl:29} gw:{ip:{addr:<none>, rd:0}} gwType:0 interface:external
    routeType:1 isIpv6:false destNet:{ip:{addr:192.168.202.0, rd:0} pl:24} gw:{ip:{addr:<none>, rd:0}} gwType:0 interface:internal
    routeType:0 isIpv6:false destNet:{ip:{addr:169.254.1.1, rd:0} pl:32} gw:{ip:{addr:<none>, rd:0}} gwType:0 interface:eth0
    routeType:1 isIpv6:false destNet:{ip:{addr:169.254.0.0, rd:0} pl:24} gw:{ip:{addr:<none>, rd:0}} gwType:0 interface:tmm
    
  4. To set the f5-tmm container’s logging level to Error, run the following command.

    The logging levels are listed below in the order of message severity. More severe levels generally log messages from the lower severity levels as well.

    1-Debug, 2-Informational, 3-Notice (Default), 4-Warning, 5-Error, 6-Critical, 7-Alert, 8-Emergency

    bdt_cli logLevel -l 5 
    
  5. List all connections.

    bdt_cli -u -s tmm0:8850 connection 
    

    or

    bdt_cli -u -s tmm0:8850 connection list 
    
  6. List connections with a filter.

    Note: The system supports both filter and wildcard operations for retrieving the list of connections.

    bdt_cli -u -s tmm0:8850 connection list --flag 
    

    In this example, connections are listed where the clientside client port is 5506:

    bdt_cli -u -s tmm0:8850 connection list --cs_client_port 5506 
    
  7. Delete connections with a filter.

    Note: Currently, the system only supports filter operations but not wildcard for deleting connections.

    bdt_cli -u -s tmm0:8850 connection delete --flag 
    

    In this example, connections are deleted where the serverside server port is 8051:

    bdt_cli -u -s tmm0:8850 connection delete --ss_server_port 8051 
    

mrfdb

The mrfdb utility enables reading and writing dSSM database records. The mrfdb tool queries the dSSM Database Sentinel Pod, sends commands to the dssmmaster DB, and relays the response back to the debug sidecar.

The mrfdb command takes these four arguments:

  • The IP address of the dSSM Sentinel service to be queried.

  • The serverName designating the dSSM server-farm controlled by the dssmmaster DB.

  • The type designating the command category: dns46, cgnat, custom.

  • The command that is specific to the chosen type (category).
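
The four arguments above compose into a single command line. A sketch, using an example Sentinel address and the -displayAllBins query that appears in the Command Example below:

```shell
# Compose an mrfdb command line from its parts. The Sentinel IP is an
# example value; 26379 is the Sentinel port shown by `oc get svc`.
SENTINEL_IP=10.103.180.204
SENTINEL_PORT=26379
cmd="mrfdb -ipport=${SENTINEL_IP}:${SENTINEL_PORT} -serverName=server -displayAllBins"
echo "$cmd"
```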

Command Example

  1. Obtain the IP address of the dSSM Sentinel.

    In this example, dSSM is installed in the spk-utilities Project.

    oc get svc -n spk-utilities
    

    In this example, the Sentinel IP address is 10.103.180.204.

    NAME               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)  
    f5-dssm-db         ClusterIP   10.108.254.57    <none>        6379/TCP 
    f5-dssm-sentinel   ClusterIP   10.103.180.204   <none>        26379/TCP
    
  2. Log in to the debug sidecar container.

    In this example, the debug sidecar is in the spk-ingress Project.

    oc exec -it deploy/f5-tmm -c debug -n spk-ingress -- bash
    
  3. Run the mrfdb utility.

    In this example, the mrfdb utility queries for all DB records.

    mrfdb -ipport=10.103.180.204:26379 -serverName=server -displayAllBins
    

Detailed Examples

For detailed examples using mrfdb, refer to the following:

configview

Use the configview utility to display the configuration objects created by installed SPK CRs.

  1. View the TMM deployment logs, and grep for UUID events.

    In this example, TMM is in the spk-ingress Project:

    oc logs deploy/f5-tmm -n spk-ingress | grep UUID
    

    In this example, the UUID from the first log entry, spk-ingress-net-external-vlan, will be used to query with configview.

    <134>Jan 1 1:10:11 f5-tmm-7d5b489c5b-fffgt tmm1[36]: 01010058:6: audit log: action: CREATE; UUID: spk-ingress-net-external-vlan; event: declTmm.vlan; Error: No error
    
  2. Connect to the debug sidecar.

    In this example, the debug sidecar is in the spk-ingress Project:

    oc exec -it deploy/f5-tmm -c debug -n spk-ingress -- bash
    
  3. Execute the configview utility.

    configview uuid spk-ingress-net-external-vlan
    

    The example output displays the CR parameters and values.

    request:[declTmm.vlan]:{name:"external" id:"spk-ingress-net-external-vlan" tag:3350 mtu:1500 tagged_interfaces:"1.2"}
    
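Because each audit log entry embeds the UUID in a fixed UUID: …; field, it can be extracted with sed rather than copied by hand. A sketch using the sample log line from step 1:

```shell
# Extract the CR UUID from a TMM audit log line so it can be passed to
# configview. The line is the sample audit log entry from step 1.
line='<134>Jan 1 1:10:11 f5-tmm-7d5b489c5b-fffgt tmm1[36]: 01010058:6: audit log: action: CREATE; UUID: spk-ingress-net-external-vlan; event: declTmm.vlan; Error: No error'
uuid=$(printf '%s\n' "$line" | sed -n 's/.*UUID: \([^;]*\);.*/\1/p')
echo "$uuid"
```

The extracted value can then be passed to the utility: configview uuid "$uuid".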

netkvest

Note: The netkvest utility supports only the ping and traceroute diagnostic utilities.

Use the netkvest utility to check connectivity to a remote host from a specified source SNAT pool.

  1. Connect to the debug sidecar.

    oc exec -it deploy/f5-tmm -c debug -n <project> -- bash
    

    In this example, the debug sidecar is in the spk-ingress Project:

    oc exec -it deploy/f5-tmm -c debug -n spk-ingress -- bash
    
  2. To check the connectivity to a remote host from a specified source SNAT pool using the ping diagnostic utility, run the following command.

    oc exec -it deploy/f5-tmm -c debug -- netkvest -s <source_SNAT_pool_name> -d <remote_host> -u <diagnostic_utility>
    

    In this example, the netkvest utility checks connectivity to destination 22.22.22.100 from the egress-snatpool source SNAT pool using the ping diagnostic utility.

    oc exec -it deploy/f5-tmm -c debug -- netkvest -s egress-snatpool -d 22.22.22.100 -u ping
    

    Sample Output

    PING 22.22.22.100 (22.22.22.100) 64 data bytes
    64 bytes from 22.22.22.100: icmp_seq=0 ttl=63 
    64 bytes from 22.22.22.100: icmp_seq=1 ttl=63
    64 bytes from 22.22.22.100: icmp_seq=2 ttl=63
    64 bytes from 22.22.22.100: icmp_seq=3 ttl=63
    64 bytes from 22.22.22.100: icmp_seq=4 ttl=63
    64 bytes from 22.22.22.100: icmp_seq=5 ttl=63
    64 bytes from 22.22.22.100: icmp_seq=6 ttl=63
    64 bytes from 22.22.22.100: icmp_seq=7 ttl=63
    64 bytes from 22.22.22.100: icmp_seq=8 ttl=63
    64 bytes from 22.22.22.100: icmp_seq=9 ttl=63
    64 bytes from 22.22.22.100: icmp_seq=10 ttl=63
    PING 22.22.22.100 (22.22.22.100) 64 data bytes
    64 bytes from 22.22.22.100: icmp_seq=0 ttl=63
    64 bytes from 22.22.22.100: icmp_seq=1 ttl=63
    64 bytes from 22.22.22.100: icmp_seq=2 ttl=63
    64 bytes from 22.22.22.100: icmp_seq=3 ttl=63
    64 bytes from 22.22.22.100: icmp_seq=4 ttl=63
    64 bytes from 22.22.22.100: icmp_seq=5 ttl=63
    64 bytes from 22.22.22.100: icmp_seq=6 ttl=63
    64 bytes from 22.22.22.100: icmp_seq=7 ttl=63
    64 bytes from 22.22.22.100: icmp_seq=8 ttl=63
    64 bytes from 22.22.22.100: icmp_seq=9 ttl=63
    64 bytes from 22.22.22.100: icmp_seq=10 ttl=63
    2025-06-18 14:21:12 [info]: main.main: Execution is successful
    
  3. To check the connectivity to a remote host from a specified source SNAT pool using the traceroute diagnostic utility, run the following command.

    oc exec -it deploy/f5-tmm -c debug -- netkvest -s <source_SNAT_pool_name> -d <remote_host> -u <diagnostic_utility> 
    

    In this example, the netkvest utility checks connectivity to destination 22.22.22.100 from the egress-snatpool source SNAT pool using the traceroute diagnostic utility.

    oc exec -it deploy/f5-tmm -c debug -- netkvest -s egress-snatpool -d 22.22.22.100 -u traceroute
    

    Sample Output

    traceroute to 22.22.22.100 (22.22.22.100), 64 hops max, 64 byte packets
    1 33.33.33.254
    2 22.22.22.100
    traceroute to 22.22.22.100 (22.22.22.100), 64 hops max, 64 byte packets
    1 33.33.33.254
    2 22.22.22.100
    2025-06-18 14:21:12 [info]: main.main: Execution is successful
    

    Limitations:

    The netkvest utility requires the source and destination IP addresses to be of the same family: either both IPv4 or both IPv6. If the families are mixed, the command fails with an error, as the examples below show.

    Example 1 – IPv4 source with IPv6 destination.

    oc exec -it deploy/tmm -c debug -- netkvest -s 11.11.11.11 -d 2002::22:22:22:100 -u ping
    

    Sample Output

    2025-06-18 12:01:06 [error] main.main: Execution failed: Destination type is IPv6, but no IPv6 addresses found in source. Command terminated with exit code 2.
    

    Similarly, if the user specifies a diagnostic command with an IPv6 source, but provides an IPv4 destination, the command will also fail:

    Example 2 – IPv6 source with IPv4 destination.

    oc exec -it deploy/tmm -c debug -- netkvest -s 2002::11:11:11:11 -d 22.22.22.100 -u ping
    

    Sample Output

    2025-06-18 12:05:45 [error] main.main: Execution failed: Destination type is IPv4, but no IPv4 addresses found in source. Command terminated with exit code 2.
    
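When the source is given as an IP literal, the family mismatch above can be caught before invoking netkvest. A minimal sketch; the ip_family and same_family helpers are illustrative, not part of netkvest:

```shell
# Pre-check that two IP literals belong to the same address family.
# Classifying by the presence of ':' is a simplification that covers
# plain IPv4 and IPv6 literals (not hostnames or SNAT pool names).
ip_family() { case "$1" in *:*) echo 6 ;; *) echo 4 ;; esac; }
same_family() { [ "$(ip_family "$1")" = "$(ip_family "$2")" ]; }
if same_family 22.22.22.100 2002::22:22:22:100; then echo match; else echo mismatch; fi
```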

Persisting files

Some diagnostic utilities, such as tcpdump, produce files that require further analysis by F5. When you install the SPK Controller, you can configure the debug.persistence Helm parameter to ensure diagnostic files created in the debug sidecar container are saved to a filesystem. Use the steps below to verify a PersistentVolume is available, and to configure persistence.

  1. Verify a StorageClass is available for the debug container.

    oc get storageclass
    
    NAME                  PROVISIONER      RECLAIMPOLICY   VOLUMEBINDINGMODE   
    managed-nfs-storage   storage.io/nfs   Delete          Immediate          
    
  2. Set the persistence.enabled parameter to true, and configure the storageClass name.

    Note: In this example, managed-nfs-storage value is obtained from the NAME field in step 1:

    debug:
    
      persistence:
        enabled: true
        storageClass: "managed-nfs-storage"
        accessMode: ReadWriteOnce
        size: 1Gi
    
  3. After you deploy the Controller and Service Proxy Pods, find the bound PersistentVolume.

    oc get pv | grep f5-debug-sidecar
    

    In this example, the PersistentVolume is Bound in the spk-ingress Project as expected:

    pvc-42a5ef7-5c5f-4518-930f-851abf32c67   1Gi   Bound  spk-ingress/f5-debug-sidecar  managed-nfs-storage
    
  4. Use the PersistentVolume ID to find the Server name and the Path (the location on the cluster node where diagnostic files are stored).

    Important: Files must be placed in the debug sidecar’s /shared directory to be persisted.

    oc describe pv <pv_id> | grep -iE 'path|server'
    

    In this example, the PersistentVolume ID is pvc-42a5ef7-5c5f-4518-930f-851abf32c67:

    oc describe pv pvc-42a5ef7-5c5f-4518-930f-851abf32c67 | grep -iE 'path|server'
    

    The Server and Path information will resemble the following:

    Server:  provisioner.ocp.f5.com
    Path:    /opt/local-path-provisioner/pvc-42a5ef7-5c5f-4518-930f-851abf32c67_ingress_f5-debug-sidecar
    
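The PersistentVolume ID in step 3's output is the first whitespace-delimited field, so it can be captured for step 4 instead of copied by hand. A sketch using the sample line from step 3:

```shell
# Extract the PersistentVolume ID from an `oc get pv` line so it can be
# fed to `oc describe pv`. The line is the sample from step 3; a live
# system would pipe `oc get pv | grep f5-debug-sidecar` instead.
pv_line='pvc-42a5ef7-5c5f-4518-930f-851abf32c67   1Gi   Bound  spk-ingress/f5-debug-sidecar  managed-nfs-storage'
pv_id=$(printf '%s\n' "$pv_line" | awk '{print $1}')
echo "$pv_id"
```

The captured ID would then be used as in step 4: oc describe pv "$pv_id" | grep -iE 'path|server'.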

Disabling the debug sidecar

The TMM debug sidecar installs by default with the Controller. You can disable the debug sidecar by setting the debug.enabled parameter to false in the Controller Helm values file:

debug:
  enabled: false

Feedback

Provide feedback to improve this document by emailing spkdocs@f5.com.

Supplemental