by Ted Neward, Alex Theedom | Updated May 17, 2017 - Published June 29, 2010
When application performance suffers, most developers panic, and with good reason. Tracking the source of Java application bottlenecks has historically been a major pain, both because the Java virtual machine has a black-box effect, and because profiling tools for the Java platform have traditionally fallen short.
All of that changed with the introduction of JConsole, however. JConsole is a built-in Java performance profiler that works from the command line and in a GUI shell. It’s not perfect, but it’s a more than adequate first line of defense when the pointy-haired boss comes at you with a performance problem — and it’s a whole lot better than consulting Papa Google.
In this edition of the 5 things series, I’ll show you five easy ways to use JConsole (or its visually sophisticated cousin, VisualVM) to monitor Java application performance and track bottlenecks in your Java code.
So you think you know about Java programming? The fact is, most developers scratch the surface of the Java platform, learning just enough to get the job done. In this series, Ted Neward digs beneath the core functionality of the Java platform to uncover little-known facts that could help you solve even the stickiest programming challenges.
JConsole (or, for more recent Java platform releases, VisualVM) is a built-in profiler that is as easy to launch as the Java compiler. From a command prompt that has the JDK on the PATH, just run jconsole. From a GUI shell, navigate to the JDK installation directory, open the bin folder, and double-click jconsole.
When the profiler tool pops up (depending on which version of Java is running and how many other Java programs are running at the moment), it either presents a dialog box asking for a URL of a process to connect to, or lists a number of different local Java processes to connect to — sometimes, including the JConsole process itself.
Java processes are set up by default to be profiled. It is not necessary to pass the command-line argument -Dcom.sun.management.jmxremote at startup. You only need to start the application and it will automatically be available for monitoring. Once a process is picked up by JConsole, you can just double-click it to start profiling.
Profilers have their own overhead, so it’s a good idea to spend a few minutes figuring out what that is. The easiest way to discover JConsole’s overhead is to first run an application by itself, then run it under the profiler, and measure the difference. (The app shouldn’t be too large or too small; my favorite is the SwingSet2 demo app that ships with the JDK.) So, for instance, I tried running SwingSet2 with -verbose:gc to see garbage collection sweeps, then ran the same app and connected the JConsole profiler to it. When JConsole was connected, a steady stream of GC sweeps happened that didn’t occur otherwise. That was the performance overhead of the profiler.
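The same before-and-after comparison can be made programmatically: each garbage collector registers a GarbageCollectorMXBean that reports how many collections it has run and how long they took, so you can snapshot those numbers with and without a profiler attached. A minimal sketch (the GcWatch class name and the allocation loop are mine):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcWatch {
    public static void main(String[] args) {
        // Churn some short-lived allocations to provoke minor collections.
        for (int i = 0; i < 1_000_000; i++) {
            byte[] scratch = new byte[128];
            if (scratch.length == 0) System.out.print(""); // defeat dead-code elimination
        }
        // Each collector reports how often it ran and for how long.
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%s: %d collections, %d ms total%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}
```

Run it once standalone and once with JConsole attached; any jump in collection counts is overhead you can attribute to the profiler.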
Because JMX-based profilers communicate with the target JVM across a socket, you only need a little configuration to set up JConsole (or any other JMX client, for that matter) to monitor and profile applications running remotely.
For example, if Tomcat were running on a machine named “webserver” and that JVM had JMX enabled and listening on port 9004, connecting to it from JConsole (or any other JMX client) would require a JMX URL of “service:jmx:rmi:///jndi/rmi://webserver:9004/jmxrmi”.
In essence, all you need to profile an application server running in a remote data center is the JMX URL. (See Related links for more about remote monitoring and management with JMX and JConsole.)
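To see what that JMX URL buys you in code, here is a self-contained sketch that stands up a JMX connector server over the local platform MBeanServer (standing in for the remote “webserver” process, so the example runs anywhere) and then connects to it the way JConsole or any JMX client would. The JmxConnectDemo class name, the use of localhost, and the registry guard are my own stand-ins:

```java
import java.lang.management.ManagementFactory;
import java.rmi.registry.LocateRegistry;
import javax.management.MBeanServer;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXConnectorServer;
import javax.management.remote.JMXConnectorServerFactory;
import javax.management.remote.JMXServiceURL;

public class JmxConnectDemo {
    // Expose the local platform MBeanServer over RMI, then connect to it
    // through the same style of JMX URL the article describes.
    static int run() throws Exception {
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        try {
            LocateRegistry.createRegistry(9004); // RMI registry the URL points at
        } catch (java.rmi.server.ExportException alreadyRunning) {
            // registry was created by an earlier call; reuse it
        }
        JMXServiceURL url =
                new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:9004/jmxrmi");
        JMXConnectorServer server =
                JMXConnectorServerFactory.newJMXConnectorServer(url, null, mbs);
        server.start();
        try (JMXConnector client = JMXConnectorFactory.connect(url)) {
            // Ask the "remote" server how many MBeans it exposes.
            return client.getMBeanServerConnection().getMBeanCount();
        } finally {
            server.stop();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("MBeans visible over JMX: " + run());
    }
}
```

Point the JMXServiceURL at a real host and port instead of localhost and the client half of this code is all a remote monitoring tool needs.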
Common responses to discovering a performance problem in application code vary, but they’re predictable, too. Developers who have been programming since the early days of Java are likely to fire up the old IDE and start doing code reviews of major parts of the code base, looking for familiar “red flags” in the source like synchronized blocks, object allocations, and the like. With fewer years of programming, a developer will probably pore over the -X flags that the JVM supports, looking for ways to optimize the garbage collector. And newbies, of course, go straight to Google, hoping that somebody else out there has found the JVM’s magical “make it go fast” switch, so that they can avoid having to rewrite any code.
There’s nothing intrinsically wrong with any of these approaches, but they’re all a crapshoot. The most effective response to a performance problem is to use a profiler — and now that they’re built in to the Java platform, we really have no excuse not to!
JConsole has a number of tabs that are useful for collecting statistics, including Overview, Memory, Threads, Classes, VM Summary, and MBeans.
These tabs (and the associated graphs) are all courtesy of the JMX objects that every Java VM registers with the JMX server built into the JVM. The complete list of beans available within a given JVM appears in the MBeans tab, complete with some metadata and a limited user interface for viewing that data or executing those operations. (Registering for notifications is beyond the JConsole user interface, however.)
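You can pull that same list yourself from the platform MBeanServer. A minimal sketch (the ListMBeans class name is mine; the query API is standard javax.management):

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class ListMBeans {
    public static void main(String[] args) {
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        // A null name pattern and null query expression match every registered MBean --
        // the same population JConsole shows in its MBeans tab.
        for (ObjectName name : mbs.queryNames(null, null)) {
            System.out.println(name);
        }
    }
}
```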
Say a Tomcat process keeps dying from OutOfMemoryErrors. If you want to find out what’s going on, open JConsole, click the Classes tab, and keep a lazy eye on the class count as time goes by. If the count steadily rises, then you can assume that either the app server or your code has a ClassLoader leak somewhere and will run out of PermGen space before long. Check the Memory tab if you need to further confirm the problem.
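If you'd rather script that "lazy eye" than watch the tab, the ClassLoadingMXBean exposes the same counters the Classes tab plots. A sketch (the ClassCount class name is mine):

```java
import java.lang.management.ClassLoadingMXBean;
import java.lang.management.ManagementFactory;

public class ClassCount {
    public static void main(String[] args) {
        ClassLoadingMXBean cl = ManagementFactory.getClassLoadingMXBean();
        // A steadily rising loaded-class count, sampled over time,
        // is the same symptom the Classes tab would show you.
        System.out.println("Loaded now:   " + cl.getLoadedClassCount());
        System.out.println("Total loaded: " + cl.getTotalLoadedClassCount());
        System.out.println("Unloaded:     " + cl.getUnloadedClassCount());
    }
}
```

Sample it on a timer and log the numbers; if the loaded count climbs without ever coming back down, you have the same ClassLoader-leak evidence the Classes tab would give you.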
Things often move quickly in a production environment, and you may not have quality time to spend with your application profiler. Instead, you can take a snapshot of everything in your Java environment and save it to look at later. You can do this in JConsole, and do it even better in VisualVM.
Start by navigating to the MBeans tab, where you’ll open the com.sun.management node, followed by the HotSpotDiagnostic node. Now select Operations, and note the “dumpHeap” button that appears in the right-hand pane. If you pass dumpHeap a filename to dump to in the first (“String”) input box, it will take a snapshot of the entire JVM heap and dump it to that file.
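The same dumpHeap operation can be invoked from code through the JDK-specific com.sun.management.HotSpotDiagnosticMXBean interface. A sketch, where the HeapDump class name and the snapshot.hprof filename are my own choices:

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.io.File;
import java.lang.management.ManagementFactory;

public class HeapDump {
    public static void main(String[] args) throws Exception {
        File out = new File("snapshot.hprof");
        if (out.exists()) out.delete(); // dumpHeap refuses to overwrite an existing file
        // Proxy for the same HotSpotDiagnostic MBean JConsole shows
        // under com.sun.management in the MBeans tab.
        HotSpotDiagnosticMXBean diag = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        diag.dumpHeap(out.getPath(), true); // true = dump only live objects
        System.out.println("Wrote " + out.length() + " bytes to " + out);
    }
}
```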
Later, you can use a variety of different commercial profilers to analyze the file, or use VisualVM to analyze the snapshot. (Remember that VisualVM is available as a stand-alone download.)
As a profiler utility, JConsole is nice, but other tools are nicer. Some profilers come with analysis add-ons or a slick user interface, and some track more data by default than JConsole does.
What’s truly fascinating about JConsole is that the entire program is written in “plain old Java,” meaning that any Java developer could write a utility like it. In fact, the JDK even includes an example of how to customize JConsole by creating a new plug-in for it. VisualVM, being built on top of NetBeans, takes the plug-in concept much further.
If JConsole (or VisualVM, or any other tool) doesn’t quite do what you want, or track what you’re looking to track, or track in quite the way you want to track, you could write your own. And if Java code seems too cumbersome, there’s always Groovy or JRuby or any of a dozen other JVM languages to help you get it done faster.
All you really need is a quick-and-dirty command-line tool connected via JMX, and you can track exactly the data you’re interested in, exactly the way you want to.
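As one example of how little code such a tool takes, here is a sketch that polls heap usage through the standard MemoryMXBean; the HeapPoll class name, iteration count, and poll interval are all my own choices:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class HeapPoll {
    public static void main(String[] args) throws InterruptedException {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        for (int i = 0; i < 3; i++) {
            // Same numbers the Memory tab graphs, on your own schedule.
            MemoryUsage heap = mem.getHeapMemoryUsage();
            System.out.printf("heap used: %,d of %,d committed bytes%n",
                    heap.getUsed(), heap.getCommitted());
            Thread.sleep(250); // poll interval; tune to taste
        }
    }
}
```

Swap the local ManagementFactory call for an MBeanServerConnection obtained over a JMX URL and the same loop monitors a remote process instead.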
Java performance monitoring doesn’t end with JConsole or VisualVM — there’s a whole raft of tools hiding out in the JDK that most developers don’t know about. The next article in the series will dig into some experimental command-line tools that could help you dig out more of the performance data you need. Because these tools are generally focused on specific data, they’re smaller and more lightweight than a complete profiler, and so they don’t incur the same performance overhead.