White box analysis of message flows, whether written in ESQL or Java, is essential during the development and test phases. The App Connect toolkit includes a flow debugger for exactly that purpose: it allows you to pause execution, inspect and change in-flight data and variables, and resume execution. However, the communication between the debugger and the runtime is not secured, so care must be taken when debugging a remote runtime. When the runtime is deployed to Kubernetes, there are specific networking aspects to consider. The port forwarder built into the Kubernetes CLI provides a secure and relatively straightforward mechanism for remotely debugging App Connect on Kubernetes.
When there is a need to debug issues with flows running in Kubernetes, your preferred option should be to recreate that issue on an integration server on a developer workstation and use the debugger locally.
In some cases, that is non-trivial. For example, your flows may connect to other endpoints on the Kubernetes cluster that are not accessible from outside the cluster.
It may be possible to use a port forwarder to expose those endpoints to the integration server running on your developer workstation. However, there are cases where that is not feasible or when the very issue that you are trying to debug is related to the networking configuration or some other condition that means the flow really does need to be debugged in situ.
As a last resort, the instructions in this article can be used to connect the flow debugger to an integration server running on a Kubernetes cluster.
Debugging remotely over a public network
Any communication between the toolkit and the Integration Server typically goes via the public IP addresses of the cluster. When deploying IBM Cloud Pak for Integration or App Connect Certified Containers on the OpenShift Container Platform, that ingress is provided by a “route”. Routes support the HTTP and HTTPS protocols, which is adequate and secure for tasks like deploying BAR files, running the flow exerciser, and invoking REST API flows or flows with HTTPInput nodes.
The debugger connects to the Integration Server using the Java Debug Wire Protocol (JDWP), which is a TCP-based protocol. The ingress described above using OpenShift routes only supports HTTP and HTTPS traffic. For ingress of non-HTTP traffic in Kubernetes, it is common to use a Service of type NodePort, which opens a port on the worker node’s public network interface. However, this is not recommended for the flow debugger because the JDWP port offers neither TLS encryption nor any authentication. Exposing this port on a public network using NodePort would be a security exposure and could give bad actors access to data and even the opportunity to inject code.
Fortunately, the Kubernetes command line includes a port forwarder that allows you to create a secure tunnel from your developer workstation into the cluster’s private network. Using this, you can connect the flow debugger to the runtime securely without the need to expose the debug port on the public network.
- First, you need to enable the jvmDebugPort. There are two parts to this: you need to configure the Integration Server to listen on that port, and you need to configure the Pod to expose that port to the cluster’s private network.
- Once you have done that, you can port forward from a port on your developer workstation to that port on the cluster’s private network.
- Finally, you connect your flow debugger to that port on your developer workstation.
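The steps above can be sketched end to end as a short shell session. The deployment and pod names below are placeholders for illustration, not names from your cluster:

```shell
# 1. Enable jvmDebugPort 8000 in serverconf.yaml and regenerate the configuration secret
# 2. Expose containerPort 8000 in the deployment's main container
kubectl edit deployment my-integration-server

# 3. Forward a local port on the workstation to the debug port on the pod
kubectl port-forward pod/my-integration-server-abc123 8000:8000

# 4. Attach the toolkit flow debugger to localhost:8000
```

Each step is described in detail in the sections that follow.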
Configure the Integration Server to listen on the jvmDebugPort
Before you created the Integration Server, you probably created a configuration secret using the generateSecrets.sh script. One of the options when doing that is to provide a serverconf.yaml, which is where the jvmDebugPort is specified. To enable the jvmDebugPort, place the following into your serverconf.yaml file and regenerate the configuration secret using generateSecrets.sh:

ResourceManagers:
  JVM:
    jvmDebugPort: 8000
See https://www.ibm.com/support/knowledgecenter/en/SSTTDS_11.0.0/com.ibm.ace.icp.doc/icp0007_.htm for more details about running generateSecrets.sh and configuring an Integration Server.
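If you prefer not to rerun generateSecrets.sh, the secret can also be updated in place with kubectl. This is a sketch only: the secret name (my-integration-server) and the key (serverconf.yaml) are assumptions, so check your own secret first with kubectl describe secret:

```shell
# Regenerate the secret from the updated serverconf.yaml and apply it in place.
# Secret name and key are assumptions for illustration.
kubectl create secret generic my-integration-server \
  --from-file=serverconf.yaml=./serverconf.yaml \
  --dry-run=client -o yaml | kubectl apply -f -
```

The Integration Server pod must be restarted to pick up the changed configuration.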
Expose the port to the cluster’s private network
Edit the PodSpec in the deployment for your Integration Server, e.g. using oc edit or kubectl edit. Add the following to the ports array in the main container. The main container is the one that has the same name as the deployment.
- containerPort: 8000
  name: ace-debug
  protocol: TCP
Note: I recommend that you change the value of the replicas field to 1. This will help to simplify your debug session. You will attach the debugger to a single pod; if you have multiple replicas, there is no guarantee which pod receives which input message, so you would need to send many messages to be sure of triggering the pod you attached to. If you do need multiple replicas to reproduce the problem that you are trying to debug, then you should attach the debugger to each of them: just repeat the next two steps for each pod.
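Instead of editing the deployment interactively, the same changes can be applied non-interactively. This is a sketch, assuming the deployment is named my-integration-server and the main container is the first in the containers list:

```shell
# Add the debug port to the main container's ports array
# (container index 0 and the deployment name are assumptions)
kubectl patch deployment my-integration-server --type=json -p '[
  {"op": "add",
   "path": "/spec/template/spec/containers/0/ports/-",
   "value": {"containerPort": 8000, "name": "ace-debug", "protocol": "TCP"}}
]'

# Scale down to a single replica to simplify the debug session
kubectl scale deployment my-integration-server --replicas=1
```

Patching the deployment triggers a rolling restart, so the pod name will change after this step.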
Run Port Forwarder
You can use either the kubectl or the oc CLI. The syntax is identical.
oc port-forward pod/<your-pod-name> 8000:8000
You should see the following response:
Forwarding from 127.0.0.1:8000 -> 8000
Forwarding from [::1]:8000 -> 8000
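The pod name used in the port-forward command can be found by listing the pods for your Integration Server. The deployment name here is a placeholder; pod names carry the deployment name as a prefix:

```shell
# List the pods for the Integration Server deployment (name is an assumption)
oc get pods | grep my-integration-server
```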
Note: If you do intend to attach to multiple pods at once, you will need to choose a different local port for each one, but you must forward to the same remote port as specified for the jvmDebugPort.
For example, the following will open port 8001 on the developer workstation and forward to port 8000 on the pod
oc port-forward pod/<your-pod-name> 8001:8000
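For instance, to attach to two replicas at once you might forward a distinct local port to each pod. The pod names below are placeholders:

```shell
# Forward a different local port to the same jvmDebugPort (8000) on each pod
oc port-forward pod/my-integration-server-abc12 8001:8000 &
oc port-forward pod/my-integration-server-def34 8002:8000 &
```

You would then create one debug configuration in the toolkit per local port.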
Connect flow debugger
- Open your App Connect Enterprise Toolkit and go to the Debug Perspective.
- On the window menu bar, click Run > Debug Configurations.
- Right-click on “IBM App Connect Enterprise Debug” and then select “New”.
- Provide a sensible name for this new configuration.
- Set Host Name to localhost.
- Set Java Debug Port to the local port that you forwarded (8000 in the example above).
- Finally, click “Apply” and then “Debug”, and you should see the debugger attach.
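If the toolkit fails to attach, you can verify that the tunnel and the JDWP port are working using jdb, the command-line debugger shipped with the JDK:

```shell
# Attach to the forwarded JDWP port over the tunnel;
# a successful connection confirms the tunnel and the debug port are working
jdb -attach localhost:8000
```

If jdb connects, the problem lies in the toolkit configuration rather than the tunnel; if it does not, re-check the jvmDebugPort setting and the port-forward session.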