As you may have seen from my previous tutorials and case studies, Java Heap Space OutOfMemoryError problems can be complex to pinpoint and resolve. One of the common problems I have observed in Java EE production systems is OutOfMemoryError: unable to create new native thread; this error is thrown when the HotSpot JVM is unable to create any more Java threads.
This article will revisit this HotSpot VM error and provide you with recommendations and resolution strategies.
If you are not familiar with the HotSpot JVM, I first recommend that you review a high-level view of its internal memory spaces. This knowledge is important for you to understand OutOfMemoryError problems related to the native (C-Heap) memory space.
OutOfMemoryError: unable to create new native thread – what is it?
Let’s start with a basic explanation. This HotSpot JVM error is thrown when the internal JVM native code is unable to create a new Java thread. More precisely, it means that the JVM native code was unable to create a new “native” thread from the OS (Solaris, Linux, Mac OS, Windows...).
We can clearly see this logic in the OpenJDK 1.6 and 1.7 implementations: when the JVM native code fails to obtain a new native thread from the OS, it throws java.lang.OutOfMemoryError with the message “unable to create new native thread”.
Unfortunately, at this point you won’t get any more detail than this error message, with no indication of why the JVM was unable to create a new thread from the OS…
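To make the failure concrete, here is a minimal, hypothetical Java reproducer (the class name is illustrative; do not run it on a production host): it keeps creating threads that block forever until the OS can no longer provide a native thread and the JVM throws this error.

public class NativeThreadOOMReproducer {

    public static void main(String[] args) {
        int created = 0;
        try {
            while (true) {
                Thread thread = new Thread(new Runnable() {
                    public void run() {
                        try {
                            Thread.sleep(Long.MAX_VALUE); // park the thread forever
                        } catch (InterruptedException ignored) {
                        }
                    }
                });
                thread.setDaemon(true);
                thread.start(); // each start() asks the OS for a native thread + stack
                created++;
            }
        } catch (OutOfMemoryError oom) {
            // Typically "unable to create new native thread" once the process
            // address space, OS memory or OS thread limits are exhausted
            System.out.println("Failed after " + created + " threads: " + oom.getMessage());
        }
    }
}

Depending on your OS limits (ulimit, pid_max) you may hit a different ceiling first, but the JVM error reported is the same.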
HotSpot JVM: 32-bit or 64-bit?
Before you go any further in the analysis, one fundamental fact that you must determine from your Java or Java EE environment is whether you are running a 32-bit or a 64-bit HotSpot VM.
Why is it so important? What you will learn shortly is that this JVM problem is very often related to native memory depletion, either at the JVM process or at the OS level. For now, please keep in mind the following:
- A 32-bit JVM process is in theory allowed to grow up to 4 GB (even much lower on some older 32-bit Windows versions).
- For a 32-bit JVM process, the C-Heap is in a race with the Java Heap and PermGen space e.g. C-Heap capacity = 2-4 GB – Java Heap size (-Xms, -Xmx) – PermGen size (-XX:MaxPermSize)
- A 64-bit JVM process is in theory allowed to use most of the OS virtual memory available, or up to 16 EB (16 million TB)
As you can see, if you allocate a large Java Heap (2 GB+) for a 32-bit JVM process, the native memory space capacity will be reduced automatically, opening the door for JVM native memory allocation failures.
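For example (illustrative numbers only): a 32-bit JVM started with -Xmx2g and -XX:MaxPermSize=512m on an OS that gives the process a ~4 GB address space leaves at most ~1.5 GB for the C-Heap, native libraries and thread stacks. With a 512 KB thread stack size (-Xss512k), that is a hard ceiling of roughly 3,000 threads, before counting any other native allocation.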
For a 64-bit JVM process, your main concern, from a JVM C-Heap perspective, is the capacity and availability of the OS physical, virtual and swap memory.
OK great, but how does native memory affect Java thread creation?
Now back to our primary problem. Another fundamental JVM aspect to understand is that Java threads created by the JVM require native memory from the OS. You should now start to understand the source of your problem…
The high-level thread creation process is as follows:
- A new Java thread is requested from the Java program & JDK
- The JVM native code then attempts to create a new native thread from the OS
- The OS then attempts to create a new native thread as per attributes which include the thread stack size. Native memory is then allocated (reserved) from the OS to the Java process native memory space; assuming the process has enough address space (e.g. 32-bit process) to honour the request
- The OS will refuse any further native thread & memory allocation if the 32-bit Java process size has depleted its memory address space e.g. 2 GB, 3 GB or 4 GB process size limit
- The OS will also refuse any further thread & native memory allocation if the virtual memory of the OS is depleted (including Solaris swap space depletion, since thread access to the stack can generate a SIGBUS error, crashing the JVM: http://bugs.sun.com/view_bug.do?bug_id=6302804)
In summary:
- Java thread creation requires native memory available from the OS, for both 32-bit & 64-bit JVM processes (the sketch below illustrates the native cost of each thread stack)
- For a 32-bit JVM, Java thread creation also requires memory available from the C-Heap or process address space
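Here is a small sketch of that native cost (class name, thread names and the 256 KB figure are illustrative only): every started thread asks the OS for a native thread plus a stack reservation, controlled globally by -Xss or, as below, per thread via the stackSize constructor argument, which the JVM is free to ignore on some platforms.

import java.util.ArrayList;
import java.util.List;

public class ThreadStackCost {

    public static void main(String[] args) {
        // Hypothetical stack size of 256 KB per thread; each started thread
        // reserves this amount of native memory from the OS, outside the Java Heap.
        long stackSize = 256 * 1024;
        List<Thread> threads = new ArrayList<Thread>();
        for (int i = 0; i < 100; i++) {
            Thread t = new Thread(null, new Runnable() {
                public void run() {
                    try {
                        Thread.sleep(60000); // keep the thread (and its native stack) alive
                    } catch (InterruptedException ignored) {
                    }
                }
            }, "native-cost-demo-" + i, stackSize);
            t.start();
            threads.add(t);
        }
        // ~100 threads x 256 KB = ~25 MB of native stack space reserved from the OS
        System.out.println("Started " + threads.size() + " threads");
    }
}

Multiply this by the hundreds or thousands of threads a busy Java EE container can create and you can see how quickly the native memory space gets consumed.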
Problem diagnostic
Now that you understand native memory and JVM thread creation a little better, it is time to look at your problem. As a starting point, I suggest that you follow the analysis approach below:
- Determine if you are using a HotSpot 32-bit or 64-bit JVM
- When the problem is observed, take a JVM Thread Dump and determine how many threads are active
- Monitor closely the Java process size utilization before and during the OOM problem replication
- Monitor closely the OS virtual memory utilization before and during the OOM problem replication, including the swap memory space utilization if using the Solaris OS
Proper data gathering as per above will allow you to collect the data points needed to perform the first level of investigation. The next step will be to look at the possible problem patterns and determine which one is applicable to your problem case.
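If you do not already have monitoring tooling in place, a simple sketch using the standard JMX ThreadMXBean (the class name below is illustrative) can help you track the live thread count while you replicate the problem:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class ThreadCountMonitor {

    public static void main(String[] args) throws InterruptedException {
        ThreadMXBean threadBean = ManagementFactory.getThreadMXBean();
        while (true) {
            // Live threads in this JVM, and the peak observed since JVM start
            System.out.println("Live threads: " + threadBean.getThreadCount()
                    + " | peak: " + threadBean.getPeakThreadCount());
            Thread.sleep(5000); // poll every 5 seconds
        }
    }
}

Note that this only reports on the JVM it runs inside; for a remote Java EE server you would connect through a remote JMX connection or rely on your existing monitoring tools.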
Problem pattern #1 – C-Heap depletion (32-bit JVM)
From my experience, OutOfMemoryError: unable to create new native thread is quite common for 32-bit JVM processes. This problem is often observed when too many threads are created relative to the available C-Heap capacity.
JVM Thread Dump analysis and Java process size monitoring will allow you to determine if this is the cause.
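On Linux, for example, a quick way to correlate the thread count with the process size is to read the JVM's own /proc/self/status (a minimal sketch; on Solaris you would rely on prstat / pmap instead, and on Windows on Task Manager or VMMap):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class ProcessSizeMonitor {

    public static void main(String[] args) throws IOException {
        // /proc/self/status exposes the virtual size (VmSize), resident size (VmRSS)
        // and thread count of the current JVM process; the virtual size includes the
        // Java Heap, PermGen and the C-Heap (thread stacks, native libraries...).
        BufferedReader reader = new BufferedReader(new FileReader("/proc/self/status"));
        try {
            String line;
            while ((line = reader.readLine()) != null) {
                if (line.startsWith("VmSize") || line.startsWith("VmRSS")
                        || line.startsWith("Threads")) {
                    System.out.println(line);
                }
            }
        } finally {
            reader.close();
        }
    }
}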
Problem pattern #2 – OS virtual memory depletion (64-bit JVM)
In this scenario, the OS virtual memory is fully depleted. This could be due to a few 64-bit JVM processes taking a lot of memory e.g. 10 GB+ and / or other high memory footprint rogue processes. Again, Java process size & OS virtual memory monitoring will allow you to determine if this is the cause.
Also, please verify that you are not hitting an OS-related threshold such as ulimit -u or NPROC (max user processes). Default limits are usually low and may prevent you from creating, say, more than 1024 threads per Java process.
Problem pattern #3 – OS virtual memory depletion (32-bit JVM)
The third scenario is less frequent but can still be observed. The diagnostic can be a bit more complex, but the key analysis point will be to determine which processes are causing a full OS virtual memory depletion. Your 32-bit JVM processes could be either the source or the victim, e.g. rogue processes using most of the OS virtual memory and preventing your 32-bit JVM processes from reserving more native memory for their thread creation.
Please note that this problem can also manifest itself as a full JVM crash (as per the sample below) when running out of OS virtual memory or swap space on Solaris.
#
# A fatal error has been detected by the Java Runtime Environment:
#
# java.lang.OutOfMemoryError: requested 32756 bytes for ChunkPool::allocate. Out of swap space?
#
# Internal Error (allocation.cpp:166), pid=2290, tid=27
# Error: ChunkPool::allocate
#
# JRE version: 6.0_24-b07
# Java VM: Java HotSpot(TM) Server VM (19.1-b02 mixed mode solaris-sparc )
# If you would like to submit a bug report, please visit:
#   http://java.sun.com/webapps/bugreport/crash.jsp
#

---------------  T H R E A D  ---------------

Current thread (0x003fa800): JavaThread "CompilerThread1" daemon [_thread_in_native, id=27, stack(0x65380000,0x65400000)]

Stack: [0x65380000,0x65400000], sp=0x653fd758, free space=501k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
………………
Native memory depletion: symptom or root cause?
You now understand your problem and know which problem pattern you are dealing with. You are now ready to provide recommendations to address the problem…are you?
Your work is not done yet; please keep in mind that this JVM OOM event is often just a “symptom” of the actual root cause of the problem. The root cause is typically much deeper, so before providing recommendations to your client I recommend that you perform a deeper analysis. The last thing you want to do is simply address and mask the symptoms. Solutions such as increasing the OS physical / virtual memory or upgrading all your JVM processes to 64-bit should only be considered once you have a good view of the root cause and of your production environment capacity requirements.
The next fundamental question to answer is: how many threads were active at the time of the OutOfMemoryError? In my experience with Java EE production systems, the most common root cause is actually the application and / or Java EE container attempting to create too many threads at a given time when facing non-happy paths such as threads stuck in remote IO calls, thread race conditions, etc. In this scenario, the Java EE container can start creating too many threads when attempting to honour incoming client requests, leading to increased pressure on the C-Heap and native memory allocation. Bottom line: before blaming the JVM, please perform your due diligence and determine if you are dealing with an application or Java EE container thread tuning problem as the root cause.
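As a simple application-level illustration of such a surge (the pool sizes below are purely illustrative, not a recommendation for your container): an unbounded, cached thread pool will keep creating one native thread per concurrent task when downstream calls are stuck, while a bounded pool caps the native thread footprint and queues the extra work instead.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ThreadPoolSurge {

    public static void main(String[] args) {
        // Unbounded: one new native thread per concurrent task when none are idle.
        // If a remote IO call hangs, every new request adds another native thread.
        ExecutorService unbounded = Executors.newCachedThreadPool();

        // Bounded: at most 50 native threads; extra work waits in the queue instead
        // of consuming more native memory.
        ExecutorService bounded = new ThreadPoolExecutor(
                10, 50, 60L, TimeUnit.SECONDS,
                new LinkedBlockingQueue<Runnable>(1000));

        unbounded.shutdown();
        bounded.shutdown();
    }
}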
Once you understand and address the root cause (the source of the thread creation surge), you can then work on tuning your JVM and OS memory capacity in order to make it more fault tolerant and better “survive” these sudden thread surge scenarios.
Recommendations:
- First, quickly rule out any obvious OS memory (physical & virtual memory) & process capacity (e.g. ulimit -u / NPROC) problem.
- Perform a JVM Thread Dump analysis and determine the source of all the active threads vs. an established baseline. Determine what is causing your Java application or Java EE container to create so many threads at the time of the failure (see the sketch after this list)
- Please ensure that your monitoring tools closely monitor both your Java VM process size & OS virtual memory. This crucial data will be required in order to perform a full root cause analysis. Please remember that a 32-bit Java process size is limited to between 2 GB and 4 GB depending on your OS
- Look at all running processes and determine if your JVM processes are actually the source of the problem or the victim of other processes consuming all the virtual memory
- Revisit your Java EE container thread configuration & JVM thread stack size. Determine if the Java EE container is allowed to create more threads than your JVM process and / or OS can handle
- Determine if the Java Heap size of your 32-bit JVM is too large, preventing the JVM from creating enough threads to fulfill your client requests. In this scenario, you will have to consider reducing your Java Heap size (if possible), vertical scaling or upgrading to a 64-bit JVM
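As a starting point for the Thread Dump analysis above, here is a small sketch (the class name and the grouping logic are illustrative only) that uses the standard ThreadMXBean to group live threads by name prefix, which usually makes the offending thread pool stand out against your baseline:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;
import java.util.Map;
import java.util.TreeMap;

public class ThreadSurgeReport {

    public static void main(String[] args) {
        ThreadMXBean threadBean = ManagementFactory.getThreadMXBean();
        // false/false: skip monitor & synchronizer details, we only need thread names here
        ThreadInfo[] threads = threadBean.dumpAllThreads(false, false);
        Map<String, Integer> countsByPrefix = new TreeMap<String, Integer>();
        for (ThreadInfo info : threads) {
            // Strip trailing digits so pool members ("http-worker-1", "http-worker-2"...) group together
            String prefix = info.getThreadName().replaceAll("[-#]?\\d+$", "");
            Integer current = countsByPrefix.get(prefix);
            countsByPrefix.put(prefix, current == null ? 1 : current + 1);
        }
        for (Map.Entry<String, Integer> entry : countsByPrefix.entrySet()) {
            System.out.println(entry.getValue() + "\t" + entry.getKey());
        }
    }
}

Comparing this output between a healthy baseline and the failure window usually points directly at the application or container thread pool responsible for the surge.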
Capacity planning analysis to the rescue
As you may have seen from my past article on the Top 10 Causes of Java EE Enterprise Performance Problems, lack of capacity planning analysis is often the source of the problem. Any comprehensive load and performance testing exercise should also properly determine the Java EE container thread, JVM & OS native memory requirements for your production environment, including impact measurements of "non-happy" paths. This approach will allow your production environment to stay away from this type of problem and lead to better system scalability and stability in the long run.
Please provide any comments and share your experience with JVM native thread troubleshooting.
8 comments:
Actually I would start with a fourth recommendation before anything else. You should check if your system limits are right, i.e. not too low. It's easy to miss something like "max user processes" on unix-like systems:
# ulimit -a
...
max user processes (-u) 709
virtual memory (kbytes, -v) unlimited
Hi Michal and thanks for your comments,
I agree, OS level configuration assessment is also important and can be ruled out fairly quickly as a possible root cause early in the root cause analysis process.
Regards,
P-H
Excellent article.
I'll also add that if you are on a Windows OS, look into the VMMap utility to explore how the OS is allocating its memory chunks for the native threads. Windows 2003 32-bit Server grabs increasingly larger chunks, to the point where one additional thread will cost you 1 GB of C-Heap.
Thanks David for your comments and tips regarding the VMMap utility.
Interesting observation regarding native memory allocation...I assume you are referring to internal C-Heap/Windows 2003 32-bit fragmentation?
Regards,
P-H
Threads can be implemented by extending the Thread class, implementing the Runnable interface or implementing the Callable interface.
If you want to return a value or throw an exception then use Callable; otherwise use Runnable, as extending the Thread class limits class inheritance and also makes the process heavyweight.
Check this :
Different ways to implement Threads in Java
Currently we got this issue in our Java app which is running on Solaris. We hadn't set the default heap space in the environment variable. The exception was encountered when trying to access an external web service.
-Muralikrishna.CN
On Linux, increasing the kernel.threads-max sysctl option might also help.
Good recommendation, thank you.