Java EE Support Patterns

Java Heap Dump: Are you up to the task?

If you are as enthusiastic as I am about Java performance, heap dump analysis should not be a mystery to you. If it is, then the good news is that you have an opportunity to increase your Java troubleshooting skills and JVM knowledge.

The JVM has now evolved to the point where it is much easier today to generate and analyze a JVM heap dump than it was in the old JDK 1.0 – JDK 1.4 days.

That being said, JVM heap dump analysis should not be seen as a replacement for profiling & JVM analysis tools such as JProfiler or Plumbr, but as complementary to them. It is particularly useful when troubleshooting Java heap memory leaks and java.lang.OutOfMemoryError problems.

This post will provide you with an overview of a JVM heap dump and what to expect out of it. It will also provide recommendations on how and when you should spend time analyzing a heap dump. Future articles will include tutorials on the analysis process itself.

Java Heap Dump overview

A JVM heap dump is basically a “snapshot” of the Java heap memory at a given time. It is quite different from a JVM thread dump, which is a snapshot of the threads.

Such a snapshot contains low-level details about the Java objects and classes allocated on the Java heap, such as:

  • Java objects such as Class, fields, primitive values and references
  • Classloader related data including static fields (important for classloader leak problems)
  • Garbage collection roots or objects that are accessible from outside the heap (System classloader loaded resources such as rt.jar, JNI or native variables, Threads, Java Locals and more…)
  • Thread related data & stacks (very useful for sudden Java heap increase problems, especially when combined with thread dump analysis)
Please note that it is usually recommended to generate a heap dump following a full GC in order to eliminate unnecessary “noise” from non-referenced objects.
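On HotSpot, the live option of the jmap tool achieves exactly this: it forces a full GC before writing the dump, so only reachable objects are captured. A minimal sketch (the PID and file path below are placeholders):

```shell
# The "live" option forces a full GC first, so the resulting dump
# contains only reachable objects (HotSpot 1.5+; <JAVA_PID> is a placeholder)
jmap -dump:live,format=b,file=/tmp/app-heap.hprof <JAVA_PID>
```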

Analysis reserved for the Elite?

One common misconception I have noticed over the last 10 years of working with production support teams is the impression that deeper analysis tasks such as profiling, heap dump or thread dump analysis are reserved for the “elite” or the product vendor (Oracle, IBM…).

I could not disagree more.

As a Java developer, you write code that potentially runs in a highly concurrent thread environment, managing hundreds and hundreds of objects on the JVM. You have to worry not only about concurrency issues but also about garbage collection and the memory footprint of your application(s). You are in the best position to perform this analysis since you are the expert on the application.

Find below typical questions you should be able to answer:

  • How many concurrent threads are needed to run my application as per the load forecast? How much memory does each active thread consume before completing its tasks?
  • What is the static memory footprint of my application? (libraries, classloader footprint, in-memory cache data structures etc.)
  • What is the dynamic memory footprint of my application under load? (sessions footprint etc.)
  • Have I profiled my application for memory leaks?
Load testing, profiling your application and analyzing Java heap dumps (ex: captured during a load test or a production problem) will allow you to answer the above questions. You will then be in a position to achieve the following goals:

  • Reduce risk of performance problems post production implementation
  • Add value to your work and your client by providing extra guidance & facts to the production and capacity management team; allowing them to take proper IT improvement actions
  • Analyze the root cause of memory leak(s) or footprint problem(s) affecting your client IT production environment
  • Increase your technical skills by learning these performance analysis principles and techniques
  • Increase your JVM skills by improving your understanding of the JVM, garbage collection and Java object life cycles
The last thing you want to reach is a skill “plateau”. If you are not comfortable with this type of analysis then my recommendations are as follows:

  • Ask a more senior member of your team to perform the heap dump analysis and shadow their work and approach
  • Once you are more comfortable, volunteer to perform the same analysis (on a different problem case) and this time ask a more experienced member to shadow your analysis work
  • Eventually the student (you) will become the mentor
When to use

Analyzing JVM heap dumps should not be done every time you are facing a Java heap problem such as an OutOfMemoryError. Since this can be a time-consuming analysis process, I recommend it for the scenarios below:

  • The need to understand & tune the memory footprint of your application, the surrounding APIs or the Java EE container itself
  • Java heap memory leak troubleshooting
  • Java classloader memory leaks
  • Sudden Java heap increase problems or trigger events (has to be combined with thread dump analysis as a starting point)

Now, find below some limitations associated with heap dump analysis:

  • JVM heap dump generation is a computationally intensive task that will hang your JVM until it completes. Proper due diligence is required in order to reduce the impact to your production environment
  • Analyzing the heap dump will not give you the full Java process memory footprint e.g. the native heap. For that purpose, you will need to rely on other tools and OS commands
  • You may face problems opening & parsing heap dumps generated from older JDK versions such as 1.4 or 1.5
Heap dump generation techniques

JVM heap dumps are typically generated as a result of two actions:

  • Auto-generated or triggered as a result of a java.lang.OutOfMemoryError (e.g. Java Heap, PermGen or native heap depletion)
  • Manually generated via the usage of tools such as jmap, VisualVM (via JMX) or OS level command
# Auto-triggered heap dumps

If you are using the HotSpot Java VM 1.5+ or JRockit R28+ then you will need to add the following parameter below at your JVM start-up:
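For reference, the parameter in question on HotSpot (and JRockit R28+) is -XX:+HeapDumpOnOutOfMemoryError; the optional -XX:HeapDumpPath flag controls where the dump file is written (MyApp.jar below is a placeholder for your own application):

```shell
# Generate an HPROF heap dump automatically on OutOfMemoryError
java -XX:+HeapDumpOnOutOfMemoryError \
     -XX:HeapDumpPath=/tmp/dumps \
     -jar MyApp.jar
```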


The above parameter will enable the HotSpot VM to automatically generate a heap dump following an OOM event. The heap dump format for those JVM types is HPROF (*.hprof).

If you are using the IBM JVM 1.4.2+, heap dump generation as a result of an OOM event is enabled by default. The heap dump format for the IBM JVM is PHD (*.phd).

# Manually triggered heap dumps

Manual JVM heap dump generation can be achieved as follows:

  • Usage of jmap for HotSpot 1.5+
  • Usage of VisualVM for HotSpot 1.6+ * recommended *
** Please do your proper due diligence for your production environment since JVM heap dump generation is an intrusive process which will hang your JVM process until completion **
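On HotSpot 1.6+, a heap dump can also be triggered programmatically via JMX using the HotSpotDiagnosticMXBean, which is the same mechanism VisualVM relies on. The sketch below is a minimal example (the class and file names are illustrative):

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.io.IOException;
import java.lang.management.ManagementFactory;

public class HeapDumper {

    /**
     * Writes an HPROF heap dump of the current JVM to filePath.
     * If liveOnly is true, a full GC is forced first so the dump
     * contains only reachable objects.
     */
    public static void dumpHeap(String filePath, boolean liveOnly) throws IOException {
        // Proxy to the HotSpot-specific diagnostic MBean
        HotSpotDiagnosticMXBean diagBean = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        diagBean.dumpHeap(filePath, liveOnly);
    }

    public static void main(String[] args) throws IOException {
        // File name must end with .hprof on recent JDKs
        dumpHeap("manual.hprof", true);
        System.out.println("Heap dump written to manual.hprof");
    }
}
```

Keep in mind this is just as intrusive as jmap: the JVM pauses while the dump is written.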

If you are using the IBM JVM 1.4.2, you will need to add the following environment variables to your JVM start-up:

export IBM_HEAPDUMP=true
export IBM_HEAP_DUMP=true

For IBM JVM 1.5+ you will need to add the following arguments at the Java start-up:


java -Xdump:none -Xdump:heap:events=vmstop,opts=PHD+CLASSIC
JVMDUMP006I Processing Dump Event "vmstop", detail "#00000000" - Please Wait.
JVMDUMP007I JVM Requesting Heap Dump using
JVMDUMP010I Heap Dump written to
JVMDUMP007I JVM Requesting Heap Dump using
JVMDUMP010I Heap Dump written to
JVMDUMP013I Processed Dump Event "vmstop", detail "#00000000".

Please review the Xdump documentation for the IBM JVM 1.5+.

For Linux and AIX®, the IBM JVM heap dump signal is sent via kill -QUIT or kill -3. This OS command will trigger JVM heap dump generation (PHD format).

I recommend that you review the MAT summary page on how to acquire JVM heap dump via various JVM & OS combinations.

Heap dump analysis tools

My primary recommended tool for opening and analyzing a JVM heap dump is Eclipse Memory Analyzer (MAT). This is by far the best tool out there with contributors such as SAP & IBM. The tool provides a rich interface and advanced heap dump analysis capabilities, including a “leak suspect” report. MAT also supports both HPROF & PHD heap dump formats.

I recommend my earlier post for a quick tutorial on how to use MAT and analyze your first JVM heap dump. I also have a few heap dump analysis case studies useful for your learning process.

Final words

I really hope that you will enjoy JVM heap dump analysis as much as I do. Future articles will provide you with generic tutorials on how to analyze a JVM heap dump and where to start. Please feel free to provide your comments.


Nice post. Thank you. Although I like MAT, I also recommend IBM HeapAnalyzer to analyze heap dumps (it doesn't only analyze IBM JVM heapdumps). It has an awesome function called "Tree View" that lets you browse heap trees, helping you find patterns or root causes for OOME problems.

You can find it here.


Hi Esteban and thanks for sharing this one,

I will definitely have a look and use it for my upcoming heap dump analysis so I can compare with MAT.


nice post. but the issue i face is in our prod system , a single java jvm normally has 40-60G heap on unix box (yes we do hold lot of data in memory). sometime we have to do heap analysis on it. Doing a heap dump on this kind of size of jvm is extremely slow, although the server is most powerful one. and if you want to load the dump into your local tools like MAT,VisualVM) for analysis is mission impossible. Do you know any tools that could do the heap analysis on a running system without dump whole heap onto disc and works on unix system as well?

Hi Steven,

That is correct, binary heap dump generation can be a real challenge for a 64-bit JVM of that size. Can you please let me know which JVM vendor you are using? HotSpot, JRockit or IBM VM? Two recommendations for you:

1) Explore generating just a histogram for now and determine at least your top Java object consumers; you can use jmap for that purpose
2) You may want to explore and do POC of JVM memory analysis tools such as Plumbr (see link in the article intro) which are active agents keeping track of your JVM memory vs. memory dump on the server

Please let me know your JVM specifications so I can refine my recommendations.


thanks for your reply. we are using HotSpot 64-bit jvm on linux.

Hi Steven,

I suggest that you run jmap and generate only the histogram as per below process:

- Monitor your Java Heap and wait until the footprint of OldGen space is big enough e.g. 20+ GB

- Run the JDK jmap command under your JDK_HOME >> jmap -histo JAVA_PID
* make sure you run it outside your production traffic as it will affect the performance *

- jmap can take several minutes to run depending of the size and speed of your hardware

- Analyze the results; you can also send the output to me via email (http://javaeesupportpatterns.blogspot.com/p/java-ee-it-consulting.html) so I can have a look and pinpoint your top Java object consumers, or post it to the Java EE forum (link at the top of the page)


Hi P-H,

Thanks for your article, it's very interesting :)

Here you say that "Please note that it is usually recommended to generate a heap dump following a full GC in order to eliminate unnecessary “noise” from non-referenced objects."

I couldn't agree more, but afterwards, in the section "Heap dump generation techniques", there's no way to generate automatically a heap dump after a full GC.

I'm using an IBM JVM 5 on an AIX 6.1 system, and this article can't help me either:

Any tips to achieve this?


Hi Carlos,

For IBM J9, my main recommendation is to manually trigger a Heap Dump using kill -3 (after overriding the JVM settings via Xdump). Since the generation of a Heap Dump is intrusive, ensure you get maximum value with minimal risk.

Monitor your JVM utilization and, following a major collection, trigger a Heap Dump manually. It all depends on the goal of what you are trying to achieve e.g. leak analysis, memory footprint analysis etc.


I am new to Java. There are many resources on the JVM and JVM troubleshooting; can you kindly recommend a good resource to start with?


Post a Comment