
2.01.2013

Java 8: From PermGen to Metaspace

As you may be aware, the JDK 8 Early Access is now available for download. This allows Java developers to experiment with some of the new language and runtime features of Java 8.

One of these features is the complete removal of the Permanent Generation (PermGen) space, which Oracle had announced since the release of JDK 7. Interned strings, for example, were already moved out of the PermGen space in JDK 7. The JDK 8 release finalizes its decommissioning.

This article will share the information we have found so far on the PermGen successor: Metaspace. We will also compare the runtime behavior of HotSpot 1.7 vs. HotSpot 1.8 (b75) when executing a Java program “leaking” class metadata objects.

The final specifications, tuning flags and documentation around Metaspace should be available once Java 8 is officially released.

Metaspace: A new memory space is born

The JDK 8 HotSpot JVM now uses native memory for the representation of class metadata; this space is called the Metaspace, similar to the Oracle JRockit and IBM JVMs.

The good news is that this means no more java.lang.OutOfMemoryError: PermGen space problems and no more need to tune and monitor this memory space… not so fast. While this change is invisible by default, we will show you next that you will still need to worry about the class metadata memory footprint. Please also keep in mind that this new feature does not magically eliminate class and classloader memory leaks. You will need to track down these problems using a different approach and by learning the new naming convention.

I recommend that you read the PermGen removal summary and comments from Jon on this subject. 

In summary:

PermGen space situation

  • This memory space is completely removed.
  • The PermSize and MaxPermSize JVM arguments are ignored and a warning is issued if present at start-up.

Metaspace memory allocation model

  • Most allocations for the class metadata are now allocated out of native memory.
  • The klasses that were used to describe class metadata have been removed.

Metaspace capacity

  • By default, class metadata allocation is limited only by the amount of available native memory (the effective capacity will of course depend on whether you use a 32-bit or 64-bit JVM, along with OS virtual memory availability).
  • A new flag is available (MaxMetaspaceSize), allowing you to limit the amount of native memory used for class metadata. If you do not specify this flag, the Metaspace will dynamically re-size depending on the application demand at runtime (see the example start-up line after this list).
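
For illustration only, a JDK 8 start-up line capping the Metaspace could look like the one below; the application class name and the sizes are placeholders, and -XX:MetaspaceSize sets the initial high-water mark that triggers the first Metaspace garbage collection:

java -Xmx1024m -XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=256m com.example.MyApp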

Metaspace garbage collection

  • Garbage collection of the dead classes and classloaders is triggered once the class metadata usage reaches MaxMetaspaceSize.
  • Proper monitoring & tuning of the Metaspace will obviously be required in order to limit the frequency or delay of such garbage collections. Excessive Metaspace garbage collections may be a symptom of a class/classloader memory leak or of inadequate Metaspace sizing for your application.

Java heap space impact

  • Some miscellaneous data has been moved to the Java heap space. This means you may observe an increase in Java heap space usage following a future JDK 8 upgrade.

Metaspace monitoring

  • Metaspace usage is available from the HotSpot 1.8 verbose GC log output.
  • Jstat & JVisualVM have not been updated at this point; based on our testing with b75, the old PermGen space references are still present (a programmatic alternative is sketched after this list).
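
Until the tooling catches up, Metaspace usage can also be sampled programmatically through the standard java.lang.management API. The small sketch below assumes the JDK 8 runtime exposes a memory pool whose name contains "Metaspace" (the exact pool names may differ between builds):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class MetaspaceUsagePrinter {
    public static void main(String[] args) {
        // Print the usage of every memory pool related to the Metaspace
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (pool.getName().contains("Metaspace")) {
                long usedMb = pool.getUsage().getUsed() / (1024 * 1024);
                long committedMb = pool.getUsage().getCommitted() / (1024 * 1024);
                System.out.println(pool.getName() + " used: " + usedMb
                        + " MB, committed: " + committedMb + " MB");
            }
        }
    }
}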

Enough theory now, let’s see this new memory space in action via our leaking Java program…

PermGen vs. Metaspace runtime comparison

In order to better understand the runtime behavior of the new Metaspace memory space, we created a class metadata leaking Java program. You can download the source here.
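
For readers who prefer not to download the source, here is a minimal sketch of the idea: generate and load a brand new class on each iteration so its metadata accumulates in the PermGen (JDK 7) or Metaspace (JDK 8). This sketch uses the javassist library; the actual program linked above may be implemented differently, and the package/class names are hypothetical:

import javassist.ClassPool;
import javassist.CtClass;

public class ClassMetadataLeakSimulator {

    private static final int DEFAULT_ITERATIONS = 50000;

    public static void main(String[] args) throws Exception {
        int iterations = args.length > 0 ? Integer.parseInt(args[0]) : DEFAULT_ITERATIONS;
        ClassPool pool = ClassPool.getDefault();

        for (int i = 0; i < iterations; i++) {
            // Each generated class is loaded by a live classloader and never
            // unloaded while the program runs, so its metadata is "leaked".
            CtClass ctClass = pool.makeClass("com.example.leak.GeneratedClass" + i);
            ctClass.toClass();
        }
        System.out.println("Done!");
    }
}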

The following scenarios will be tested:

  • Run the Java program using JDK 1.7 in order to monitor & deplete the PermGen memory space set at 128 MB.
  • Run the Java program using JDK 1.8 (b75) in order to monitor the dynamic increase and garbage collection of the new Metaspace memory space.
  • Run the Java program using JDK 1.8 (b75) in order to simulate the depletion of the Metaspace by setting the MaxMetaspaceSize value at 128 MB.

JDK 1.7 @64-bit – PermGen depletion

  • Java program with 50K configured iterations
  • Java heap space of 1024 MB
  • Java PermGen space of 128 MB (-XX:MaxPermSize=128m); a sample launch command is shown after this list
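
For reference, this baseline run can be launched with something along the lines of the command below, assuming the compiled simulator sketch from earlier and javassist.jar are on the classpath (-verbose:gc enables the GC output quoted in this article):

java -Xmx1024m -XX:MaxPermSize=128m -verbose:gc -cp .:javassist.jar ClassMetadataLeakSimulator 50000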


As you can see from JVisualVM, the PermGen depletion was reached after loading about 30K+ classes. We can also see this depletion from the program and GC output.

Class metadata leak simulator
Author: Pierre-Hugues Charbonneau
http://javaeesupportpatterns.blogspot.com
ERROR: java.lang.OutOfMemoryError: PermGen space


Now let’s execute the program using the HotSpot JDK 1.8 JRE.

JDK 1.8 @64-bit – Metaspace dynamic re-size

  • Java program with 50K configured iterations
  • Java heap space of 1024 MB
  • Java Metaspace space: unbounded (default)





As you can see from the verbose GC output, the JVM Metaspace expanded dynamically from 20 MB up to 328 MB of reserved native memory in order to honor the increased class metadata memory footprint from our Java program. We can also observe garbage collection events as the JVM attempts to destroy any dead class or classloader objects. Since our Java program is leaking, the JVM had no choice but to dynamically expand the Metaspace memory space.

The program was able to run its 50K iterations with no OOM event and loaded 50K+ classes.

Let's move to our last testing scenario.

JDK 1.8 @64-bit – Metaspace depletion

  • Java program with 50K configured iterations
  • Java heap space of 1024 MB
  • Java Metaspace space: 128 MB (-XX:MaxMetaspaceSize=128m)





As you can see from JVisualVM, the Metaspace depletion was reached after loading about 30K+ classes, very similar to the run with JDK 1.7. We can also see this from the program and GC output. Another interesting observation is that the reserved native memory footprint was twice the maximum size specified. This may indicate an opportunity to fine-tune the Metaspace re-size policy, if possible, in order to avoid native memory waste.

The exception we got from the Java program output is shown below.

Class metadata leak simulator
Author: Pierre-Hugues Charbonneau
http://javaeesupportpatterns.blogspot.com
ERROR: java.lang.OutOfMemoryError: Metadata space
Done!

As expected, capping the Metaspace at 128 MB, as we did for the baseline run with JDK 1.7, did not allow us to complete the 50K iterations of our program. A new OOM error was thrown by the JVM from the Metaspace following a memory allocation failure.

[HotSpot source snippet: metaspace.cpp]

Final words

I hope you appreciated this early analysis and experiment with the new Java 8 Metaspace. The current observations definitely indicate that proper monitoring & tuning will be required in order to stay away from problems such as excessive Metaspace GC or OOM conditions like the one triggered in our last testing scenario. Future articles may include performance comparisons in order to identify potential performance improvements associated with this new feature.

Please feel free to provide any comment.

23 comments:

Hi P-H, thanks for the early coverage of Metaspace. Does this mean no more java.lang.OutOfMemoryError: PermGen space in Java 8?

Thanks Javin for your comments,

Actually, it means the end of an era: java.lang.OutOfMemoryError: PermGen space, and the rise of a new Java exception: java.lang.OutOfMemoryError: Metadata space, along with other native memory related OOM errors.

Regards,
P-H

I see some GC (Allocation Failure) entries in the full GC log; do you know what they mean?

59.160: [GC (Allocation Failure) [PSYoungGen: 268992K->59744K(289728K)] 417350K->220502K(988800K), 0.1218960 secs]

Also, I don't see minor collections anymore; is this due to a change in the GC algorithm in JDK 8?

Hi Rajesh,

Those are normal and indicate the JVM was unable to allocate memory for the desired object, which is why a GC is triggered.

Minor collections are still there (YoungGen space) for this default parallel GC collector.

Regards,
P-H

Hi Sumit,

From a Java learning perspective, I usually recommend first learning the Java fundamentals, language syntax and OO programming principles.

You should also have specific training materials on the JVM itself and garbage collection.

Finally, you should also cover more advanced principles such as troubleshooting, performance tuning, tools, etc.

Regards,
P-H

Nice & Excellent article....keep posting !!!

Hi,

I am getting a similar error as Rajesh. I tried a few things but so far no solution.
Error:
-------
2014-10-27T16:37:06.664+0000: [GC (Allocation Failure) [PSYoungGen: 1376870K->1568K(1385984K)] 1387522K->12220K(4182528K), 0.0132978 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]

Could you please suggest a solution to get rid of this problem? My Java version is "1.8.0_05", 64-Bit Server VM (build 25.5-b02, mixed mode).
My JAVA_OPTS = -Xmx3072m -Xms3072m -Dfile.encoding=UTF-8 -XX:+UseParallelGC


Regards
Hassan

Seeing GC Allocation Failure in GC logs is totally NORMAL and not a problem by itself.

When using the Parallel GC collector/compactor, the heap is split between the Young and Old spaces. GC Allocation Failure in your case simply indicates the JVM reached the max capacity of the YoungGen space and now has to clean up the short-lived objects or "garbage" from your Java application.

This situation only becomes problematic if it is executed too frequently and/or takes too much time to complete, given it is a stop-the-world event.

Thanks.
P-H

Thank you for your post.

There is something I don't quite get.
From the JDK 8 version onward, is the JVM now able to control native memory, which as far as I know was only controlled by the OS or hardware?

So, what happens if no limit is set on the Metaspace and the demand for memory keeps increasing indefinitely?

Hi Mannan,

2 scenarios:

1) If using a 32-bit JVM, eventually the JVM will throw an OutOfMemoryError due to the depletion of the 32-bit memory address space, e.g. once reaching 4 GB for the entire Java process.
2) If using a 64-bit JVM, eventually the JVM process will deplete the RAM, followed by the virtual memory of the OS. Severe performance degradation, OS hangs and/or excessive disk paging activity are typical symptoms observed.

Thanks.
P-H

Hi P-H,

Thanks for such a nice article.

I have a situation where frequent GCs are happening.


S0C S1C S0U S1U EC EU OC OU MC MU CCSC CCSU YGC YGCT FGC FGCT GCT
2048.0 2048.0 0.0 479.5 440832.0 370728.9 890368.0 783255.7 147272.0 129316.0 19632.0 16553.7 7053 216.362 6 21.859 238.221
2048.0 2048.0 0.0 479.5 440832.0 426995.1 890368.0 783255.7 147272.0 129316.0 19632.0 16553.7 7053 216.362 6 21.859 238.221
2048.0 1536.0 479.5 0.0 441344.0 48575.9 890368.0 783263.7 147272.0 129316.0 19632.0 16553.7 7054 216.392 6 21.859 238.251
2048.0 1536.0 479.5 0.0 441344.0 104135.5 890368.0 783263.7 147272.0 129316.0 19632.0 16553.7 7054 216.392 6 21.859 238.251
2048.0 1536.0 479.5 0.0 441344.0 155950.0 890368.0 783263.7 147272.0 129316.0 19632.0 16553.7 7054 216.392 6 21.859 238.251
2048.0 1536.0 479.5 0.0 441344.0 209430.3 890368.0 783263.7 147272.0 129316.0 19632.0 16553.7 7054 216.392 6 21.859 238.251
2048.0 1536.0 479.5 0.0 441344.0 280164.1 890368.0 783263.7 147272.0 129316.0 19632.0 16553.7 7054 216.392 6 21.859 238.251
2048.0 1536.0 479.5 0.0 441344.0 339467.9 890368.0 783263.7 147272.0 129316.0 19632.0 16553.7 7054 216.392 6 21.859 238.251
2048.0 1536.0 479.5 0.0 441344.0 392331.2 890368.0 783263.7 147272.0 129316.0 19632.0 16553.7 7054 216.392 6 21.859 238.251
1536.0 1536.0 0.0 479.5 441344.0 17738.2 890368.0 783279.7 147272.0 129316.0 19632.0 16553.7 7055 216.410 6 21.859 238.269
1536.0 1536.0 0.0 479.5 441344.0 70740.9 890368.0 783279.7 147272.0 129316.0 19632.0 16553.7 7055 216.410 6 21.859 238.269


What options can I now try for JVM tuning, given that increasing the PermGen size will be ignored by Java 8?

Regards,
Arshad

Hi Arshad,

Can you please post fresh output of the GC logs (-verbose:gc) instead of jstat? It will be much easier to assess your current JVM GC health situation.

Thanks.

Hi PH,
Thanks for this write up. This is neat.

Right now I am investigating a MetaSpace issue. We are using G1 and the app is based on reflection. In our case, the number of classes loaded in MetaSpace remains the same (around 24K) after the initial "warm-up", but MetaSpace usage keeps growing. This is increasing the memory footprint of the Tomcat process on our Windows servers; VisualVM/PrintHeapAtGC/PerfMon data shows that. The Java heap is cleaning up normally with Young GCs. So it is confirmed that we are not loading too many classes, but we are doing something crazy for which the JVM has to load the same thing repeatedly - that's my theory. I tried bumping up the JIT compilation threshold from 10000 to 50000 to reduce the bytecode-to-native-code conversion footprint, but that did not help much. Right now I am going through heap dumps and GC.class_stats (from JMC) data. Have you run into this type of problem before?

I have not found any good tool to look inside MetaSpace except for JMC diagnostic commands.

Thanks!
Suvendu

One interesting update. With JRE 1.7 and the same app, we are not seeing any leak in Permgen. What is happening is that when the JVM hits the MaxPermSize limit, it forces a Full GC which cleans up 90% of the Permgen space.

That has left us in a more confused state. The question is: is it our app's bug or a JRE bug?

Hi Suvendu,

This is an interesting observation but I am not surprised. Here is what I suspect is happening and some facts:

- The G1 collector is designed to prevent any Full GC, in order to reduce response time latency as much as possible.

- The Metaspace objects, e.g. class metadata waste, are only collected during a Full GC event; this is by design, not a bug.

- JIT compilation tuning will not help here since your concern is excessive Metaspace growth, not a performance problem due to lack of native code execution.

In conclusion, with an optimal G1 collector, proper heap sizing and an application using frequent reflection, Metaspace growth can be observed since a Full GC is rarely or never triggered.

In order to confirm the above, can you please try to trigger a Full GC manually (outside business hours) and see if the Metaspace usage is reduced?
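
One simple way to trigger a Full GC on a running JVM, assuming the JDK command-line tools are available on your server, is the jcmd utility (replace <pid> with your Tomcat process id):

jcmd <pid> GC.run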

Another recommendation from me is to set -XX:MaxMetaspaceSize to a proper size such as 1 or 1.5 GB. This will force the JVM into a Full GC at some point, clean up the Metadata/Reflection waste and prevent excessive consumption of your physical/virtual memory.

Please keep me posted on your latest results...

Thanks.

Hi PH,
I agree, G1 tries to avoid Full GCs as much as it can. That's why I triggered Full GC manually but that did not help. Sorry, I should have mentioned that in my original post.

Our app runs on Tomcat. We can recreate this issue during load testing. So what I did was, once all the virtual users logged off from the app, I triggered a Full GC. It did not reduce the MetaSpace size but cleaned up the Java heap by 80%. Then I waited a few more hours and triggered another Full GC. MetaSpace usage remained the same. Then I thought probably something is retaining reference(s) to one or more objects and that pins every class loaded by the web application in the MetaSpace, so we might have a classloader leak, and I started searching for evidence of that.

But I was surprised to see that with JDK 1.7 there was no problem. Even with a 300MB Permgen we are not seeing the issue. It was filling up, but with a Full GC it was also getting cleaned up. Throughput was almost steady. I suppose with a larger Permgen we would get a much better result. We tried with a larger 1GB MetaSpace. Once it reached the max capacity, it started triggering a series of Full GCs as it could not clean up the area. That affected throughput badly.

GC.class_stats data is showing:
- over time some compiled JSPs are growing in size. JIT?
- A high number of classloaders of type sun/reflect/DelegatingClassLoader loaded GeneratedMethodAccessor classes, indicating frequent reflection. The inflation threshold has been increased to see the effect.

Out of curiosity, we tried IBM JDK8 and we are not seeing any leak. :)
I will keep you posted.

Thanks!

Hi,

Please send me a snippet of the GC log showing the Metaspace retention after a Full GC event.

Also, see if the heap itself is leaking slightly over time. You may be able to pinpoint the leak from a heap dump, since the metadata also has some Java object representation. Take one heap dump as a baseline, generate enough traffic to load the heap & Metaspace to a high value, then take a second heap dump snapshot.

Use the heap dump compare feature in MAT and see which objects are growing over time...

Thanks.

Here is the snippet from the GC log which shows that even after a Full GC, the MetaSpace size was not reduced:

Heap after GC invocations=597 (full 1):
garbage-first heap total 4194304K, used 1388214K [0x00000006c0000000, 0x00000006c0108000, 0x00000007c0000000)
region size 1024K, 15 young (15360K), 15 survivors (15360K)
Metaspace used 220430K, capacity 227489K, committed 227640K, reserved 1261568K
class space used 15100K, capacity 16115K, committed 16152K, reserved 1048576K
}
[Times: user=0.72 sys=0.00, real=0.75 secs]
{Heap before GC invocations=597 (full 1):
garbage-first heap total 4194304K, used 1537250K [0x00000006c0000000, 0x00000006c0108000, 0x00000007c0000000)
region size 1024K, 158 young (161792K), 15 survivors (15360K)
Metaspace used 220437K, capacity 227489K, committed 227640K, reserved 1261568K
class space used 15100K, capacity 16115K, committed 16152K, reserved 1048576K
2017-03-07T05:43:49.543-0500: 2769.765: [Full GC (Heap Inspection Initiated GC) 2017-03-07T05:43:50.122-0500: 2770.346: [SoftReference, 486 refs, 0.0001567 secs]2017-03-07T05:43:50.122-0500: 2770.347: [WeakReference, 13159 refs, 0.0022139 secs]2017-03-07T05:43:50.122-0500: 2770.349: [FinalReference, 2513 refs, 0.0024440 secs]2017-03-07T05:43:50.122-0500: 2770.351: [PhantomReference, 3 refs, 402 refs, 0.0000747 secs]2017-03-07T05:43:50.122-0500: 2770.351: [JNI Weak Reference, 3.4842794 secs] 2776.021: [G1Ergonomics (Heap Sizing) attempt heap shrinking, reason: capacity higher than max desired capacity after Full GC, capacity: 4294967296 bytes, occupancy: 422548848 bytes, max desired capacity: 1408496159 bytes (70.00 %)]
2776.227: [G1Ergonomics (Heap Sizing) shrink the heap, requested shrinking amount: 2886471137 bytes, aligned shrinking amount: 2885681152 bytes, attempted shrinking amount: 2885681152 bytes]
1502M->402M(1344M), 6.6585387 secs]
[Eden: 143.0M(189.0M)->0.0B(67.0M) Survivors: 15.0M->0.0B Heap: 1502.2M(4096.0M)->403.0M(1344.0M)], [Metaspace: 220437K->220437K(1261568K)]
Heap after GC invocations=598 (full 2):
garbage-first heap total 1376256K, used 412645K [0x00000006c0000000, 0x00000006c0102a00, 0x00000007c0000000)
region size 1024K, 0 young (0K), 0 survivors (0K)
Metaspace used 220437K, capacity 227489K, committed 227640K, reserved 1261568K
class space used 15100K, capacity 16115K, committed 16152K, reserved 1048576K
}
[Times: user=7.20 sys=0.25, real=6.66 secs]

GC.class_stats and the String table size are giving us a nearly accurate measure and the "under the hood" details of the MetaSpace.

Hi PH,
It turns out that one missing Jar was creating all these problems! Once that Jar was added back to the WAR, the problem went away. It was quite unexpected though. Now MetaSpace usage is ~150MB after several hours of load testing.

Another thing I learned during this journey: there is no publicly available tool which can give 100% of the details about the MetaSpace contents. I was excited about the GC.class_stats data, but it turned out it only shows 50%-60% of the MetaSpace contents. This somewhat proves the JVM keeps more "stuff" than just class metadata in the MetaSpace. I am going through the MetaSpace related code to see if I find anything.

Thanks for your help!

Is a Full GC only triggered in the particular case where the currently allocated Metaspace memory is not sufficient for the current demand of the application (i.e. it is considered a viable option by the JVM before dynamically resizing the memory allocated to the Metaspace), or are there other use cases that trigger a Full GC?

Hi P-H,

Excellent article, keep on publishing these kinds of interesting articles. Thank you very much!

Best Regards,
Thangavel L
