java.lang.OutOfMemoryError: Java heap space - Analysis and resolution approach ~ Java EE Support Patterns


java.lang.OutOfMemoryError: Java heap space - Analysis and resolution approach

java.lang.OutOfMemoryError: Java heap space is one of the most complex problems you can face when supporting or developing complex Java EE applications.

This article will provide you with a description of this JVM HotSpot OutOfMemoryError message and how you should attack this problem until its resolution.

For a quick guide on how to determine which type of OutOfMemoryError you are dealing with, please consult the related posts found on this Blog. You will also find tutorials on how to analyze a JVM Heap Dump and identify potential memory leaks. A troubleshooting guide video for beginners is also available from our YouTube channel.

java.lang.OutOfMemoryError: Java heap space – what is it?

This error message is typically what you will see in your middleware server logs (WebLogic, WAS, JBoss etc.) following a JVM OutOfMemoryError condition:

·         It is generated from the actual Java HotSpot VM native code
·         It is triggered as a result of Java Heap (Young Gen / Old Gen spaces) memory allocation failure (due to Java Heap exhaustion)

Find below a code snippet from the OpenJDK project source code exposing the JVM HotSpot implementation. The code shows which condition triggers the OutOfMemoryError: Java heap space condition.

# collectedHeap.inline.hpp

I strongly recommend that you download the HotSpot VM source code from OpenJDK yourself for your own benefit and future reference.

Ok, so my application Java Heap is exhausted…how can I monitor and track my application Java Heap?

The simplest way to properly monitor and track the memory footprint of your Java Heap spaces (Young Gen & Old Gen spaces) is to enable verbose GC from your HotSpot VM. Please simply add the following parameters within your JVM start-up arguments:

-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:<app path>/gc.log

You can also follow this tutorial, which will help you understand how to read and analyze your HotSpot Java Heap footprint.
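To make the verbose GC output concrete, here is a small sketch (the class name is hypothetical and the sample log line is illustrative, in the HotSpot throughput-collector format) showing how the total Java Heap footprint can be extracted from a single -XX:+PrintGCDetails line:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative sketch: parse one -XX:+PrintGCDetails minor GC line and report
// the total Java Heap footprint before/after the collection. The sample line
// in main() is made up for demonstration purposes.
public class GcLineParser {

    // Matches the "<usedBefore>K-><usedAfter>K(<committed>K)" total-heap
    // section that follows the young generation section on a minor GC line.
    private static final Pattern TOTAL_HEAP =
            Pattern.compile("\\]\\s+(\\d+)K->(\\d+)K\\((\\d+)K\\)");

    // Returns {usedBeforeKB, usedAfterKB, committedKB}.
    public static long[] parse(String gcLine) {
        Matcher m = TOTAL_HEAP.matcher(gcLine);
        if (!m.find()) {
            throw new IllegalArgumentException("Unrecognized GC line: " + gcLine);
        }
        return new long[] { Long.parseLong(m.group(1)),
                            Long.parseLong(m.group(2)),
                            Long.parseLong(m.group(3)) };
    }

    public static void main(String[] args) {
        String sample = "2.869: [GC [PSYoungGen: 64000K->8000K(74240K)] "
                      + "120000K->66000K(243712K), 0.0312540 secs]";
        long[] heap = parse(sample);
        System.out.println("Heap used before GC: " + heap[0] + "K");
        System.out.println("Heap used after GC : " + heap[1] + "K");
        System.out.println("Heap committed     : " + heap[2] + "K");
    }
}
```

Tracking the "used after GC" figure of the OldGen space over time is what reveals a slowly growing leak.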

Ok thanks, now I can see that I have a big Java Heap memory problem…but how can I fix it?

There are multiple scenarios which can lead to Java Heap depletion such as:

·         Java Heap space too small vs. your application traffic & footprint
·         Java Heap memory leak (OldGen space slowly growing over time…)
·         Sudden and / or rogue Thread(s) consuming a large amount of memory in a short amount of time etc.

Find below a list of high level steps you can follow in order to further troubleshoot:

·         If not done already, enable verbose GC >> -verbose:gc
·         Analyze the verbose GC output and determine the memory footprint of the Java Heap for each of the Java Heap spaces (YoungGen & OldGen)
·         Analyze the verbose GC output or use a tool like JConsole to determine if your Java Heap is leaking over time. This can be observed via monitoring of the HotSpot old gen space.
·         Monitor your middleware Threads and generate JVM Thread Dumps on a regular basis, especially when a sudden increase of Java Heap utilization is observed. Thread Dump analysis will allow you to pinpoint potential long running Thread(s) allocating a lot of objects on your Java Heap in a short amount of time; if any
·         Add the following parameter within your JVM start-up arguments: -XX:+HeapDumpOnOutOfMemoryError. This will allow your HotSpot VM to generate a Heap Dump in binary (HPROF) format. A binary Heap Dump is critical data allowing you to analyze your application memory footprint and / or the source(s) of a Java Heap memory leak
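Putting the steps above together, a typical set of HotSpot start-up arguments might look like the sketch below (paths and heap sizes are placeholders for illustration; adjust them to your environment):

```shell
# Example HotSpot start-up arguments combining the flags discussed above.
# /app/logs, /app/dumps and the 2g heap size are placeholders.
JAVA_OPTS="-Xms2g -Xmx2g \
  -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps \
  -Xloggc:/app/logs/gc.log \
  -XX:+HeapDumpOnOutOfMemoryError \
  -XX:HeapDumpPath=/app/dumps"
```

With -XX:HeapDumpPath set, the HPROF file generated on the next OutOfMemoryError lands in a known location, ready for MAT analysis.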

From a resolution perspective, my recommendation to you is to analyze your Java Heap memory footprint using the generated Heap Dump. The binary Heap Dump (HPROF format) can be analyzed using the free Memory Analyzer Tool (MAT). This will allow you to understand your Java application footprint and / or pinpoint the source(s) of a possible memory leak. Once you have a clear picture of the situation, you will be able to resolve your problem by increasing your Java Heap capacity (via the -Xms & -Xmx arguments) or by reducing your application memory footprint and / or eliminating the memory leaks from your application code. Please note that memory leaks are often found in middleware server code and the JDK as well.
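To illustrate the "OldGen space slowly growing over time" scenario, here is a hypothetical sketch (class and field names are made up) of the classic leak pattern that a MAT analysis frequently pinpoints: a static collection that accumulates entries and is never cleared.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a classic Java Heap leak pattern: a static
// collection that only ever grows. Every entry added here stays strongly
// reachable for the life of the JVM, so the OldGen space slowly fills up
// until java.lang.OutOfMemoryError: Java heap space is thrown.
public class LeakyCache {

    // Static GC root: never collected, never cleared, no eviction policy.
    private static final Map<String, byte[]> CACHE = new HashMap<>();

    public static void put(String key, byte[] value) {
        CACHE.put(key, value);   // entries accumulate forever
    }

    public static int size() {
        return CACHE.size();
    }

    public static void main(String[] args) {
        // Each request "leaks" ~1 KB; over millions of requests
        // the Java Heap eventually exhausts.
        for (int i = 0; i < 1000; i++) {
            put("request-" + i, new byte[1024]);
        }
        System.out.println("Retained entries: " + size());
    }
}
```

In a MAT dominator tree, such a pattern shows up as one static field retaining a disproportionate share of the heap, exactly the kind of "Problem Suspect" the Leak Suspects report flags.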

I tried everything I could but I am still struggling to pinpoint the source of my OutOfMemoryError

Don’t worry, simply post a comment / question at the end of this article or email me directly at phcharbonneau@hotmail.com. I currently offer free IT / Java EE consulting. Please provide me with your problem description along with your generated data such as a download link to your Heap Dump, Thread Dump data, server logs etc. and I will analyze your problem for you.


Good job. Most of the time people don't differentiate between OOM in the Heap and OOM in the Perm Gen space, even though both are completely different and require different approaches to solve. I came to know this when I got this error on Tomcat; see the 2 ways to solve java.lang.OutOfMemoryError: PermGen Space in Java article, quite useful for me.

Thank you for your comments,

Fully agree, PermGen space issues are totally different and often related to memory / Class metadata leak triggered at the middleware side (Weblogic, JBoss etc.) following dynamic application redeploy operations.

I have an article describing the most common PermGen space problem patterns.

Thanks again.

Hi P-H,

I had an issue where the server went into Warning state. When I was going through the thread dump I found the message "Java heap is almost exhausted : 1% free Java heap". Is this the root cause, or could there be some other reason? I am also seeing java/lang/OutOfMemoryError(0x00000001143043C0) in the thread dump as well.


Thank you Aman for your comments.


Hello, let me suggest a case:

MyFaces app on Tomcat running on 14GB heap in a 1.6_18 JVM.

A 7GB heap dump containing 4.2 GB of objects was analyzed with MAT and showed large heap usage in some classes. Developers agree the size figure can be correct in the core classes (they hold a catalog of products copied to 160 countries), and memory use in sessions can be correct because session data is maintained for 45 minutes. That's the way it is as Marketing requested it.

Then, JConsole is connected to a production instance. Major and minor GCs display for the eden, survivor and tenured generations so beautifully that I'd like to print them on DIN A3 sheets and decorate the room of my company's own developers so that they finally understand GC innards.


PermGen grew steadily from 75MB to 95MB in 1:45 hours, and so did the Code Cache (nah, no hot redeployments in between). JVM options had been changed to allow 512MB for PermGen to avoid OOMs, and now the physical machines are rebooted (a 32-instance "cluster") as soon as they start to do some swapping, so I can't see actual logs with the OOM; all I know is they said "OOM in PermGen? Yes, I think I remember something..."

Back to MAT, which I understand analyzes *heap* dumps but should give clues to *non heap* memory spaces such as PermGen.
Relevant finding is that besides the system class loader and the Tomcat webapp classloader there are 1500+ instances of sun.reflect.DelegatingClassLoader, which at first looks like mucho weirdo.

This is as far as I got up to the moment of doing this writing.

Next steps are:
- Add the three famous -XX options for concurrent mark and sweeping, to see if they help in class GC (no -Xnoclassgc in options at present).
- Get some jmap -permstat from the (distant) production site.

Any hints are welcome, MAT stuff can be made available.

Thanks for the post, before I forget it :)

Hi Gudari and thanks for your comment,

Before we go any further in the analysis, can you please validate the 2 points below:

- Since your Java Heap is quite big e.g. 14 GB, can you please verify if the PermGen is cleaned up after a Full GC. The PermGen space is only cleaned up during major collections e.g. Full GC, so you may see an increase until a Full GC is triggered. If it is still growing after a Full GC then the leak is 100% confirmed

- Please verify your JVM settings and ensure -Xnoclassgc is not added in your JVM arguments. This option turns off GC of the PermGen space, which triggers leaks when using dynamic class loading / the Reflection API.
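For the jmap -permstat step mentioned earlier in this thread, a minimal command sequence might look like the sketch below (HotSpot JDK 6/7; <pid> is a placeholder for the JVM process id):

```shell
# List running JVMs to find the target process id.
jps -l

# Print per-class-loader statistics for the permanent generation
# (HotSpot JDK 6/7). <pid> is a placeholder.
jmap -permstat <pid>
# The output lists each class loader with its loaded-class count and bytes,
# which should make the 1500+ sun.reflect.DelegatingClassLoader instances
# and their retained classes visible.
```

Note that jmap -permstat can pause the target JVM for a while on a large heap, so run it against a distant production site with care.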


Could I send you an email with my garbage collection logs so you can assist me in optimizing it?

Hi Anyul,

Sure, send me a sample of the GC logs captured during peak load and/or an OutOfMemoryError condition and I will have a look.



Hi, I have received the below exception and it was resolved by restarting the Java application process. But is there any way to identify if this is because of a bad request or a memory leak? [org.apache.catalina.core.ContainerBase.[jboss.web].[localhost].[/].[jsp]] Servlet.service() for servlet jsp threw exception
java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOf(Arrays.java:2882)
at java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:100)
at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:390)
at java.lang.StringBuffer.append(StringBuffer.java:224)
at com.icc.dto.xmlapi.XMLProcessorResponse.toString(XMLProcessorResponse.java:119)
at java.lang.String.valueOf(String.java:2826)
at java.lang.StringBuilder.append(StringBuilder.java:115)
at com.icc.xmlapi.processors.RetrieveImageProcessor.execute(RetrieveImageProcessor.java:77)
at com.icc.gui.servlet.XMLServiceServlet.processRequest(XMLServiceServlet.java:94)
at com.icc.gui.servlet.XMLServiceServlet.doPost(XMLServiceServlet.java:171)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:710)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:803)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at org.apache.catalina.core.ApplicationDispatcher.invoke(ApplicationDispatcher.java:654)
at org.apache.catalina.core.ApplicationDispatcher.processRequest(ApplicationDispatcher.java:445)
at org.apache.catalina.core.ApplicationDispatcher.doForward(ApplicationDispatcher.java:379)
at org.apache.catalina.core.ApplicationDispatcher.forward(ApplicationDispatcher.java:292)
at org.apache.jasper.runtime.PageContextImpl.doForward(PageContextImpl.java:694)
at org.apache.jasper.runtime.PageContextImpl.forward(PageContextImpl.java:665)
at org.apache.jsp.xmlgateway.xml_002drequest_jsp._jspService(xml_002drequest_jsp.java:55)
at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:70)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:803)
at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:373)
at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:336)
at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:265)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:803)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at com.icc.gui.filter.XMLLoginFilter.filterRequest(XMLLoginFilter.java:207)
at com.icc.gui.filter.XMLLoginFilter.doFilter(XMLLoginFilter.java:147)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)

Thanks in anticipation

Hi Abdul,

The stack trace indicates you ran out of Java Heap memory when attempting to process your XML data. This could be a symptom of a very large request / XML payload, a capacity problem or a leak.

What JVM vendor & version are you using?
Do you have any logging of the size (in bytes) of the XML data that you are processing?

A JVM Heap Dump may help but first you need to understand if you are dealing with a leak vs. sudden event/requests.

I recommend that you enable JVM verbose:gc; it will help you narrow down a problem pattern related to a leak vs. a trigger event/requests.
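As an illustration related to the stack trace above (a sketch only, not a fix for the specific application; the class name is hypothetical): the OOM fired inside Arrays.copyOf while StringBuffer.append was growing its backing char[]. When the approximate final payload size is known up front, pre-sizing the builder avoids those transient grow-and-copy allocations, which can be the final straw on an already strained Java Heap.

```java
// Hypothetical sketch: build a large response from known fragments with a
// single up-front StringBuilder allocation, instead of letting the builder
// repeatedly double its backing char[] via Arrays.copyOf (the call visible
// in the stack trace above).
public class XmlResponseBuilder {

    public static String build(String[] fragments) {
        // Estimate the final length so only one backing array is allocated.
        int estimated = 0;
        for (String f : fragments) {
            estimated += f.length();
        }
        StringBuilder sb = new StringBuilder(estimated);
        for (String f : fragments) {
            sb.append(f);   // no capacity expansion, no transient copies
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String xml = build(new String[] {
                "<response>", "<image>...</image>", "</response>" });
        System.out.println("Built payload of " + xml.length() + " chars");
    }
}
```

This reduces peak allocation pressure, but it is only a mitigation; a genuinely oversized payload or a leak still needs the verbose:gc / Heap Dump analysis described above.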



While using an application (JDownloader 2)
with the parameter set

I keep getting the error

java.lang.OutOfMemoryError: GC overhead limit exceeded

How can I solve this problem?

Hi P-H,
I am struggling to find the root cause of an OutOfMemoryError being thrown in my application, which is leading to a server restart. Can you please provide some suggestions? In fact, a solution is highly appreciated :)
Please let me know if I should send the thread dump file. Thanks much in advance!


Hi P-H,

I am getting an OutOfMemoryError thrown by my application server; due to this my server is hanging and requires a restart. Kindly provide any suggestion based on the below given logs.

14/04/2014 18:39:37 INFO EmailAlertBatch - Entering EmailAlertBatch.checkDisputes [Status, Duration]: 1, 5
14/04/2014 18:39:37 INFO EmailAlertBatch - EmailAlertBatch.checkDisputes() Disputes list query: SELECT DSPT_ID, TO_CHAR(CREATEDDATE + 7, 'DD/MM/YYYY') FROM DISPUTE_TRANS WHERE STATUS_CODE = 1 AND SYSDATE >= CREATEDDATE + 5 AND SECOND_MAIL_DATE_DOF IS NULL
14/04/2014 18:40:02 INFO EmailAlertBatch - EmailAlertBatch.checkDisputes() Disputes list: []
14/04/2014 18:40:02 INFO EmailAlertBatch - Leaving EmailAlertBatch.run()
at java.nio.ByteBuffer.allocate(ByteBuffer.java:312)
at weblogic.socket.JSSEFilterImpl.(JSSEFilterImpl.java:47)
at weblogic.socket.JSSEFilterImpl.(JSSEFilterImpl.java:33)
at weblogic.server.channels.DynamicJSSEListenThread.registerSocket(DynamicJSSEListenThread.java:88)
Truncated. see log file for complete stacktrace
java.net.SocketException: Broken pipe
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92)
at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
at weblogic.utils.io.ChunkedOutputStream.writeTo(ChunkedOutputStream.java:193)
at weblogic.servlet.internal.ServletOutputStreamImpl.writeHeader(ServletOutputStreamImpl.java:167)
at weblogic.servlet.internal.ResponseHeaders.writeHeaders(ResponseHeaders.java:444)
at weblogic.servlet.internal.ServletResponseImpl.writeHeaders(ServletResponseImpl.java:1272)
at weblogic.servlet.internal.ServletOutputStreamImpl.sendHeaders(ServletOutputStreamImpl.java:281)
at weblogic.servlet.internal.ChunkOutput.flush(ChunkOutput.java:424)
at weblogic.servlet.internal.ChunkOutput$2.checkForFlush(ChunkOutput.java:648)
at weblogic.servlet.internal.ChunkOutput.write(ChunkOutput.java:333)
at weblogic.servlet.internal.ChunkOutputWrapper.write(ChunkOutputWrapper.java:148)
at weblogic.servlet.jsp.JspWriterImpl.write(JspWriterImpl.java:275)
at jsp_servlet.__requestprocessnew._jspService(__requestprocessnew.java:672)
at weblogic.servlet.jsp.JspBase.service(JspBase.java:34)
at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
Exception in thread "DynamicListenThread[Default]" java.lang.OutOfMemoryError: Java heap space
<[STUCK] ExecuteThread: '79' for queue: 'weblogic.kernel.Default (self-tuning)' has been busy for "680" seconds working on the request "weblogic.servlet.internal.ServletRequestImpl@35e91046[
POST /WSEntryweb/WSEntry HTTP/1.0
Content-Type: text/xml; charset=utf-8
Accept: application/soap+xml, application/dime, multipart/related, text/*
User-Agent: Axis/1.3
Cache-Control: no-cache
Pragma: no-cache
SOAPAction: ""
Content-Length: 948

added the logs to continue

<[STUCK] ExecuteThread: '81' for queue: 'weblogic.kernel.Default (self-tuning)' has been busy for "703" seconds working on the request "weblogic.servlet.internal.ServletRequestImpl@4a62c47d[
POST /Authentication/SPServlet HTTP/1.1
Accept: image/jpeg, application/x-ms-application, image/gif, application/xaml+xml, image/pjpeg, application/x-ms-xbap, */*
Referer: http://eservices.dubaitrade.ae/eMirsal/confirmPayment.do?referenceNumber=1-594569678
Accept-Language: en-US
User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; Trident/7.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; AskTbARS/
Content-Type: application/x-www-form-urlencoded
Content-Length: 143
Connection: Keep-Alive
Cache-Control: no-cache
Cookie: JSESSIONID=ZclyTLrT9sShNqghGpxr0hyBLGFKvTzLhJzj1tFQS3GVn8Tc4cM3!-724582756; ePayPROD=235214346.4821.0000

]", which is more than the configured time (StuckThreadMaxTime) of "600" seconds. Stack trace:
com.ipay.ejbs.ValidationBean_qzwmc2_EOImpl.__WL_invoke(Unknown Source)
com.ipay.ejbs.ValidationBean_qzwmc2_EOImpl.logDeptDetails(Unknown Source)
com.ipay.ejbs.EntryBean_7t1xog_EOImpl.__WL_invoke(Unknown Source)
com.ipay.ejbs.EntryBean_7t1xog_EOImpl.getRequestHandler(Unknown Source)
com.ipay.ejbs.EntryBean_7t1xog_EOImpl_WLSkel.invoke(Unknown Source)
com.ipay.ejbs.EntryBean_7t1xog_EOImpl_1034_WLStub.getRequestHandler(Unknown Source)

Hi S.K,

It is clear that your JVM ran out of memory. My first recommendation is for you to verify if you are dealing with a sudden Java Heap event vs. a memory leak. For this, if not done already, you will need to enable the verbose:gc logs. Please follow the instructions from this other article below:


If you already use a monitoring tool, revisit the Java heap history, determine any sudden increase or leak. Generating a JVM heap dump may also be required at some point, at least to enable the flag for any new re-occurrence.



Hi PH,

I got an OOM error on the AdminServer, which is allocated 1GB of space and runs OSM and EM. I have a Problem Suspect here:

The classloader/component "sun.misc.Launcher$AppClassLoader @ 0xc016c520" occupies 550,365,472 (62.79%) bytes. The memory is accumulated in one instance of "java.util.WeakHashMap$Entry[]" loaded by "".

Class Name                                                                                            Shallow Heap    Retained Heap
sun.misc.Launcher$AppClassLoader @ 0xc016c520                                                                   80    550,365,472 (62.79%)
\java.util.Vector @ 0xc07c8ad8                                                                                  32    547,047,568 (62.41%)
.\java.lang.Object[40960] @ 0xc8dfc258                                                                     163,856    547,047,536 (62.41%)
..\class weblogic.management.jmx.MBeanServerInvocationHandler$ObjectNameManagerFactory @ 0xc8d4ab88              8    537,225,144 (61.29%)
...\java.util.Collections$SynchronizedMap @ 0xc8d4abe8                                                          32    537,225,136 (61.29%)
....\java.util.WeakHashMap @ 0xc8d4ac08                                                                         56    537,225,104 (61.29%)
.....\java.util.WeakHashMap$Entry[256] @ 0xeb93ccb8                                                          1,040    537,225,000 (61.29%)
......+java.util.WeakHashMap$Entry @ 0xe4092768                                                                 40    13,875,800 (1.58%)
......+java.util.WeakHashMap$Entry @ 0xf2752280                                                                 40    13,566,232 (1.55%)
......+java.util.WeakHashMap$Entry @ 0xe30f45f8                                                                 40    13,561,776 (1.55%)
......+java.util.WeakHashMap$Entry @ 0xdf457688                                                                 40    13,559,952 (1.55%)
......+java.util.WeakHashMap$Entry @ 0xd9f44778                                                                 40    13,559,192 (1.55%)
......+java.util.WeakHashMap$Entry @ 0xf20b62b8                                                                 40    13,526,248 (1.54%)
......+java.util.WeakHashMap$Entry @ 0xe17a58f8                                                                 40    7,172,160 (0.82%)
......+java.util.WeakHashMap$Entry @ 0xf13950c0                                                                 40    7,166,608 (0.82%)
......+java.util.WeakHashMap$Entry @ 0xed041330                                                                 40    7,162,176 (0.82%)
......+java.util.WeakHashMap$Entry @ 0xecf51898                                                                 40    7,161,352 (0.82%)
......+java.util.WeakHashMap$Entry @ 0xd55847f0                                                                 40    7,160,072 (0.82%)
......+java.util.WeakHashMap$Entry @ 0xd70c5270                                                                 40    7,154,040 (0.82%)
......+java.util.WeakHashMap$Entry @ 0xede3d048                                                                 40    7,150,016 (0.82%)
......+java.util.WeakHashMap$Entry @ 0xdda09b00                                                                 40    7,147,536 (0.82%)
......+java.util.WeakHashMap$Entry @ 0xeeb5cc78                                                                 40    7,143,768 (0.81%)
......+java.util.WeakHashMap$Entry @ 0xec0396c8                                                                 40    7,136,824 (0.81%)
......+java.util.WeakHashMap$Entry @ 0xd7641608                                                                 40    7,118,888 (0.81%)
......+java.util.WeakHashMap$Entry @ 0xec76ed18                                                                 40    7,107,640 (0.81%)
......+java.util.WeakHashMap$Entry @ 0xdfaf9490                                                                 40    7,104,320 (0.81%)
......+java.util.WeakHashMap$Entry @ 0xdda0b578                                                                 40    7,101,536 (0.81%)
