Problems with the Oracle JDBC Driver can be hard to pinpoint. Sometimes, a particular SQLException might be a symptom of another problem such as intermittent network latency, packet loss etc. The good news is that Oracle has a “debug” version of the JDBC Driver available which provides a lot of debugging information.
This article provides a step-by-step guide on how to set up and activate the “debug” version of the Oracle JDBC Driver within Weblogic 11g.
Environment specifications
·Java EE server: Weblogic 11g
·JDK: HotSpot VM 1.6
·Oracle JDBC Driver: 11.1.0.6.0-Production
·Oracle JDBC Driver Library: ojdbc6_g.jar
Library and configuration file location
The debug library is located within your Weblogic 11g home directory as per below: <WL11g HOME>/wlserver_10.3/server/ext/jdbc/oracle/11g/ojdbc6_g.jar
You will also need to edit and configure the “OracleLog.properties” file as per your desired logging level along with the output log file name and location.
In order to use the debug library, you first have to add it to the Weblogic 11g system classpath. You can add it at the beginning of the WEBLOGIC_CLASSPATH in commEnv.sh / commEnv.cmd, or to the Weblogic PRE_CLASSPATH.
The commEnv script file is located under:
<WL11g HOME>/wlserver_10.3/common/bin/
** Please note that any PRE_CLASSPATH present in setDomainEnv.sh can override entries added to the WEBLOGIC_CLASSPATH, which would have no effect in this case. In that situation, simply add ojdbc6_g.jar to the PRE_CLASSPATH of your WL11g user domain setDomainEnv.sh, or add the library in commEnv.sh by adding the following line:
rem Add ojdbc6_g.jar first in PRE_CLASSPATH (commEnv.cmd on Windows)
set PRE_CLASSPATH=%WL_HOME%\server\ext\jdbc\oracle\11g\ojdbc6_g.jar
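For the Unix commEnv.sh script, a minimal equivalent entry would be as per below (assuming WL_HOME is already defined earlier in the script):

# Add ojdbc6_g.jar first in PRE_CLASSPATH
PRE_CLASSPATH=${WL_HOME}/server/ext/jdbc/oracle/11g/ojdbc6_g.jar
export PRE_CLASSPATH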
Debug library activation
The next step is to add 2 Java System Property parameters to your JVM start-up arguments:
·The first -D parameter activates the JDBC debug trace.
·The second -D parameter specifies the location of the debug library property file of your choice (please see the example below).
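Find below a minimal example of these two start-up arguments (the property file location is illustrative only; point it to wherever you store your OracleLog.properties):

-Doracle.jdbc.Trace=true
-Djava.util.logging.config.file=/opt/weblogic/oracle/OracleLog.properties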
Once all the above steps are completed, simply shut down and restart your Weblogic server and monitor your JDBC debug output log for debugging traces.
Find below a snapshot of the OracleLog.properties as a reference.
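(The content below is an illustrative sketch rather than the exact file shipped with the driver; the file handler pattern, size limits and logging levels are assumptions to adjust for your environment.)

# OracleLog.properties - java.util.logging configuration for the Oracle JDBC debug driver
handlers = java.util.logging.FileHandler
java.util.logging.FileHandler.pattern = /opt/weblogic/oracle/jdbc_debug_%u.log
java.util.logging.FileHandler.limit = 50000000
java.util.logging.FileHandler.count = 5
java.util.logging.FileHandler.formatter = java.util.logging.SimpleFormatter
# Keep the root logger quiet; enable FINE (or FINEST) tracing for the oracle.jdbc packages only
.level = SEVERE
oracle.jdbc.level = FINE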
Finally, if you are not using Weblogic, you can still download the OracleLog.properties file from the link below; it is part of the Oracle JDBC Driver download package, within the demo directory.
I often get a lot of questions from other application support colleagues regarding the permanent generation space. This short post will provide you with both a high level summary and graphical view of this special Java HotSpot VM memory block.
PermGen space
The Java HotSpot VM permanent generation space is the JVM storage used mainly to store your Java Class objects. The Java Heap is the primary storage holding the actual short- and long-lived instances of the classes stored in the PermGen space.
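Find below a minimal Java sketch illustrating this split (the class and names used are purely hypothetical):

public class PermGenVsHeapExample {

    // The Class metadata for this class (methods, field descriptors, constant pool,
    // static variables) is loaded once into the HotSpot PermGen space.
    private static final String APP_NAME = "ordering-portal"; // hypothetical application name

    public static void main(String[] args) {
        // Each instance created here is allocated on the Java Heap (young then old
        // generation) and is reclaimed by the garbage collector once unreferenced.
        PermGenVsHeapExample instance = new PermGenVsHeapExample();
        System.out.println(APP_NAME + " loaded " + instance.getClass().getName());
    }
}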
The PermGen space is fairly static by nature unless you are using third-party tools and/or the Java Reflection API, which rely heavily on dynamic class loading.
It is important to note that this memory storage is applicable only to a Java HotSpot VM; other JVM vendors such as IBM and Oracle JRockit do not have such a fixed and configurable PermGen storage and use other techniques to manage the non-Java-Heap memory (native memory).
Find below a graphical view of a JVM HotSpot Java Heap vs. PermGen space breakdown along with its associated attributes and capacity tuning arguments.
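A minimal example of such capacity tuning arguments is shown below (the sizes are illustrative only and should be derived from your own footprint and capacity analysis):

# Java Heap capacity (young + old generations)
-Xms1024m -Xmx1024m
# HotSpot PermGen space capacity
-XX:PermSize=256m -XX:MaxPermSize=256m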
A JVM Heap Dump is a crucial collection of information that provides a full view of your Java EE application memory footprint. This article provides a step-by-step tutorial on how you can analyze an AIX IBM SDK Heap Dump in order to identify the Java Heap memory session data footprint of your Java EE Web application.
Please note that we will use a real production system as an example following a session data footprint analysis we did on an older Weblogic 8.1 Web ordering application.
Environment specifications
·Java EE server: Weblogic 8.1 SP6
·Hardware: IBM,9117-MMA - PowerPC_POWER6
·OS: AIX 5.3
·JDK: IBM AIX SDK 1.4.2 SR9
·Platform type: Ordering Portal
Monitoring and troubleshooting tools
·JVM Heap Dump (IBM AIX format)
·Memory Analyzer 0.6.0.2 (via IBM Support Assistant 4.1)
Problem overview
An increase of the Java Heap memory footprint was observed following a major release of our application. As part of our capacity planning process, it was agreed to delay the production release in order to identify the root cause along with a resolution.
Memory Analyzer background
The Memory Analyzer is one of the best tools available, allowing you to load and analyze both HotSpot (xyz.hprof format) and IBM SDK (heapdump.xyz.phd format) Java VM Heap Dump files.
This tool can be downloaded as a plug-in within the IBM Support Assistant (ISA) tool:
The first step is to download the Heap Dump file from your server to your workstation, then follow the instructions below:
1) Open ISA > Launch Activity > Analyze Problem
2) Launch Memory Analyzer > Browse > Remote Artifact > Select your file and click Next
3) Wait until parsing is completed; this may take several minutes depending on Heap Dump size and specifications of your system
4) Once parsing is completed, select Leak Suspects and click Finish
At this point you should get a pie chart with a list of potential leak suspects. In our example, the tool found one leak suspect as per below. This does not necessarily mean that you are facing a memory leak; the tool is simply presenting you with possible leak candidates or components using a larger portion of your Java Heap memory.
5) Leak suspect analysis
The next step is to scroll down and have a closer look at your leak suspects. As you can see in our example below, the primary problem is that our Weblogic Session Data (client HttpSession object) is showing a footprint between 4.5 MB and 11.5 MB; much larger than the original size before this application change (~ 1 MB).
6) Memory footprint deep dive
The next step is to do a deep dive and better understand why our session data size is so large, e.g. what the main application contributors are.
For this exercise, click on the Overview tab and left click the portion of the pie chart you are interested in. You will see a menu with a list of options; select List objects > with outgoing references. A new tab will then open, allowing you to inspect this Object data further. At this point, you just need to expand the object references and navigate through all object relationships from top to bottom.
In our example, we chose to inspect a Weblogic session data object of 11.5 MB. The idea is to navigate through the object references until we find the root cause. A Weblogic session data object is basically a Hashtable that contains a list of attribute objects; each of these instances normally represents an attribute of your application HttpSession data. The analysis found several object instances from our application using close to 1 MB of data footprint.
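As a reference, find below a minimal Java sketch of how such session attributes contribute to the per-client footprint (the class, method and attribute names are purely hypothetical):

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;

public class SessionFootprintExample {

    // Hypothetical example: every object graph stored as an HttpSession attribute
    // becomes an entry of the Weblogic internal session Hashtable and is retained
    // on the Java Heap until the session is invalidated or times out.
    public void cacheOrderData(HttpServletRequest request, byte[] orderCatalogSnapshot) {
        HttpSession session = request.getSession();
        // A large attribute (e.g. ~1 MB) is multiplied by the number of active
        // client sessions, directly inflating the overall Java Heap footprint.
        session.setAttribute("orderCatalogSnapshot", orderCatalogSnapshot);
    }
}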
Root cause and resolution
The Heap Dump analysis revealed some very clear problems: a few of our application client session attribute objects were using too much Java Heap memory. This analysis allowed us to fix the problem without any further need to profile our application.
Additional JVM Heap Dump snapshots were captured post code fix and were analysed in order to confirm problem resolution.
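For reference, find below a minimal sketch of how an IBM SDK Heap Dump can be triggered on demand on AIX (the environment variables below are those documented for the IBM 1.4.2 SDK and must be set before the Java process is started; please verify against your own SDK documentation):

## Enable Heap Dump generation on signal and select the output directory
export IBM_HEAPDUMP=true
export IBM_HEAPDUMPDIR=/tmp/heapdumps
## Send SIGQUIT to the Java process; a heapdump .phd file and a javacore file are produced
kill -3 <java process pid>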
Conclusion
This Heap Dump analysis clearly demonstrated how powerful such Heap Dump data is when combined with a tool like Memory Analyzer.
Heap Dump files allow you to perform a fast analysis of your application footprint without any need to install additional application profilers. There are some scenarios where you will need a real application profiler, but Heap Dump analysis is quite sufficient in several scenarios and is available out-of-the-box for most JVM vendors.
This post is to inform you that I provide free Java EE troubleshooting consulting services via the root cause analysis forum. I will do my very best to provide you with guidance and share with you my tips on how you should attack your problem.
Java EE related problems can be quite complex to troubleshoot, so please do not hesitate to ask me any technical question. At the end of the day, the goal is that YOU understand the problem vs. me showing you the problem.
A Java memory problem such as java.lang.OutOfMemoryError: PermGen space is one of the most frequent and complex problems a Java EE application support person can face on a production system. This article is part #1 of a series of posts focusing on a particular OOM flavour: PermGen space depletion of a Java HotSpot VM.
Part #1's primary objective is to revisit the fundamentals of the permanent generation space and to teach you how to identify a particular pattern of PermGen space problem along with the possible causes of a PermGen memory leak.
Please refer to HotSpot PermGen space for a high level overview of this special Java HotSpot VM memory storage. A PermGen Troubleshooting Guide video is also available from our YouTube channel.
java.lang.OutOfMemoryError: PermGen space patterns
Find below some of the most common patterns of OutOfMemoryError due to the depletion of the PermGen space.
Pattern #1: OOM observed during or after a migration of a Java EE server to a newer version
Symptoms:
- OOM may be observed on server start-up at deployment time
- OOM may be observed very shortly after server start-up and after 1 or 2+ hours of production traffic
Possible root cause scenarios:
- Higher PermGen capacity is often required due to increased Java EE server vendor code and libraries
Resolution:
- Increase your PermGen space capacity via -XX:MaxPermSize

Pattern #2: OOM observed after a certain period of time
Symptoms:
- OOM observed after a longer but consistent period of time (days)
- PermGen space monitoring will show hourly or daily increase during your application business hours
Possible root cause scenarios:
- There are many possible causes of PermGen space memory leak; the most common is a class loader leak: an increasing number of Class objects over time
- Improper JVM arguments such as usage of the -Xnoclassgc flag (turns OFF Class garbage collection)
Resolution:
- Review your JVM HotSpot start-up arguments for any obvious problem such as the -Xnoclassgc flag
- Analyse the JVM HotSpot Heap Dump as it can provide some hints on the source of a class loader leak
- Investigate any third party API you are using for any potential class loader leak defect
- Investigate your application code for any improper use of the Reflection API and/or dynamic class loading (see the sketch below)

Pattern #3: OOM observed following a redeploy of your application code (EAR, WAR files...)
Symptoms:
- OOM may be observed during or shortly after your application redeploy process
Possible root cause scenarios:
- Unloading and reloading of your application code can lead to a PermGen leak (class loader leak) and deplete your PermGen space fairly quickly
Resolution:
- Open a ticket with your Java EE vendor for any known class loader leak issue
- Shutdown and restart your server (JVM) post deployment to clean up any class loader leak
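Find below a minimal, purely illustrative Java sketch of the class loader leak pattern mentioned above (the class name and file path are assumptions used only for the example):

import java.net.URL;
import java.net.URLClassLoader;
import java.util.ArrayList;
import java.util.List;

public class ClassLoaderLeakSketch {

    // References held from a long-lived location (static collection, registry, ThreadLocal...)
    // pin each URLClassLoader and every Class it loaded, preventing the HotSpot VM from
    // reclaiming the associated PermGen space data.
    private static final List<Object> pinnedInstances = new ArrayList<Object>();

    public static void main(String[] args) throws Exception {
        for (int i = 0; i < 10000; i++) {
            // Each iteration simulates a "plug-in" or redeploy style dynamic class load
            URLClassLoader loader = new URLClassLoader(
                    new URL[] { new URL("file:/tmp/app/classes/") }); // hypothetical location
            Class<?> pluginClass = loader.loadClass("com.example.PluginClass"); // hypothetical class
            pinnedInstances.add(pluginClass.newInstance()); // keeps every loader and its classes alive
        }
    }
}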
Conclusion
I hope this short article has helped you understand the role of the JVM HotSpot permanent generation space and recognize some common OutOfMemoryError patterns.
Part #2 will focus on the diagnostic and deep dive analysis of your PermGen space problem. You can also refer to OutOfMemoryError PermGen space for a real case study on that problem.
This post provides a Java memory footprint comparison analysis between a Weblogic Integration 9.2 application running on an IBM SDK 32-bit VM vs. an IBM SDK 64-bit VM.
We will demonstrate how proper analysis and understanding of the impact is required before you decide to upgrade your application to a 64-bit VM.
Environment specifications
·Java EE server: Weblogic Integration 9.2 MP2 (WLI)
·Platform type: Middle tier - Order Provisioning System
Background and objectives
Following a recent capacity planning exercise and production performance metric analysis, it was decided to explore as a POC (proof of concept) the option to upgrade our application from a 32-bit to a 64-bit VM.
Objectives:
·Identify any application impact and/or code change required
·Identify any impact on existing Java Heap footprint
·Identify any impact on existing Native Heap footprint
·Identify any impact on CPU utilization or other negative side effect
Java Heap analysis
The Java Heap memory footprint was captured during the load test cycles using the JConsole monitoring tool.
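For reference, find below a minimal example of JVM start-up arguments typically used to expose the JVM to JConsole (the port is illustrative, authentication and SSL are disabled here only for simplicity, and support for these properties should be verified against your JVM vendor documentation):

-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.port=9999
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false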
32-bit Java Heap footprint
64-bit Java Heap footprint
The 64-bit Java Heap footprint is clearly showing an increase of ~ 400 MB, which represents a 45% increase vs. the 32-bit VM for the exact same application and peak load.
Native Heap analysis
The Native Heap memory footprint was captured during the load test cycles using the svmon command:
## Native Heap segment name for 32-bit svmon is: working storage
## Native Heap segment name for 64-bit svmon is: work text data BSS heap
svmon -P <java process pid> -m -r | grep 'work text data BSS heap'
129a5f 11 work text data BSS heap s 64352 0 0 64352
1f5a68 12 work text data BSS heap s 24537 0 0 24537
The differences in Native Heap memory footprint observed between the 32-bit VM and the 64-bit VM were not significant.
Comparison matrix
Analysis                | 32-bit VM             | 64-bit VM             | Delta +/- %
Application code        | n/a                   | No code change        | No code change
Java Heap capacity      | 1.75 GB               | 1.75 GB               | Equal capacity but increased capacity potential
Java Heap footprint     | 0.9 GB (~50%)         | 1.3 GB (~ 75%)        | + 45%
Native Heap capacity    | 768 MB                | Unbounded             | Increased capacity
Native Heap footprint   | 368 MB                | 347 MB                | Non significant
Full GC process         | 4-5 times per 5 min   | 8-10 times per 5 min  | +100%
CPU load                | 20%                   | 30%                   | +50%
Facts derivation:
·No application code change was required
·The Java Heap footprint with the 64-bit VM has increased by 45%
·The Native Heap footprint is mainly left unchanged
·The Full GC occurrences have almost doubled, which correlates with the increased Java Heap footprint
·The CPU utilization has increased by 50%, which correlates with the increase in GC occurrences
* Please note that the increased Java Heap footprint was expected since each Java object (larger object references and headers) occupies more space in a 64-bit VM *
Conclusion and recommendations
Our testing strategy clearly demonstrated that upgrading an application from a 32-bit to a 64-bit VM can have a real impact on the existing memory and performance figures. The increased Java Heap footprint, along with the increased GC frequency and CPU utilization, shows that proper capacity planning is a must in order to identify any vertical scaling of your application server and/or hardware upgrade requirement.
That being said, the main benefit of a 64-bit VM is a huge capacity potential for both Java Heap and Native Heap. Applications that rely heavily on memory data caching along with fast hardware will clearly benefit from it.
Our final recommendations can be summarized as per below:
·Upgrading from a 32-bit to 64-bit VM is not that straightforward and requires proper capacity planning and impact analysis
·Java EE applications that do not require huge allocation of Java Heap and/or Native Heap should not systematically upgrade to a 64-bit VM
·Before upgrading to a 64-bit VM, you should first revisit your existing application memory footprint and identify any reduction opportunity
This post is about a new critical Oracle security alert released by Oracle on February 8, 2011. We will also demonstrate how it can affect your Java EE environment, using a negative test case against one of our Internet facing Weblogic Portal 10.0 applications running on JRockit R27.5 (JDK 1.5).
Security alert detail and patches
This Security Alert addresses security issue CVE-2010-4476 (Java Runtime Environment hangs when converting "2.2250738585072012e-308" to a binary floating-point number), which is a vulnerability in the Java Runtime Environment component of the Oracle Java SE and Java for Business products. This vulnerability allows unauthenticated network attacks (i.e. it may be exploited over a network without the need for a username and password). Successful attack of this vulnerability can result in unauthorized ability to cause a hang or frequently repeatable crash (complete Denial of Service) of the Java Runtime Environment. Java based application and web servers are especially at risk from this vulnerability.
Problem reproduction and test case
Following the notification from Oracle, we decided to attempt to replicate the problem in a testing environment.
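Find below the type of JSP test case used for the replication (the opening portion of the snippet, including the "bfpn" request parameter name, is a reconstruction for illustration purposes):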
System.out.println("<br>Before the execution.<br>");
double d = Double.parseDouble(bfpn);
out.println("Result: " + d);
System.out.println("Result: " + d);
%>
</body>
</html>
The problem was easily replicated by simulating an attack via injection of 2.2250738585072012e-308 as the binary floating point number. The JSP immediately ended up in a hang state, causing a 25% surge in our CPU utilization.
PRSTAT execution
The Solaris command prstat is extremely useful to pinpoint Java thread(s) using a lot of CPU. You can combine its output with a generated Thread Dump to quickly identify the root cause and the code contributing to high CPU utilization.
The following command generated both a prstat snapshot and a JVM Thread Dump:
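(A minimal sketch of such a command sequence on Solaris is shown below, assuming <java process pid> is the JRockit process id; the exact options used at the time may have differed.)

prstat -L -p <java process pid> 5 1 > prstat_java.out; kill -3 <java process pid>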
at sun/misc/FDBigInt.sub(FloatingDecimal.java:2691)[optimized]
at sun/misc/FloatingDecimal.doubleValue(FloatingDecimal.java:1547)
at java/lang/Double.parseDouble(Double.java:482)
at general/_resources/__tools/__test._jspService(__test.java:125)
at weblogic/servlet/jsp/JspBase.service(JspBase.java:34)[optimized]
at weblogic/servlet/internal/StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:226)[optimized]
at weblogic/servlet/internal/StubSecurityHelper.invokeServlet(StubSecurityHelper.java:124)[inlined]
at weblogic/servlet/internal/ServletStubImpl.execute(ServletStubImpl.java:283)[optimized]
at weblogic/servlet/internal/TailFilter.doFilter(TailFilter.java:26)[optimized]
at weblogic/servlet/internal/FilterChainImpl.doFilter(FilterChainImpl.java:42)[optimized]
at org/apache/beehive/controls/runtime/servlet/ControlFilter.doFilter(ControlFilter.java:96)
at weblogic/servlet/internal/FilterChainImpl.doFilter(FilterChainImpl.java:42)[optimized]
at xyz/xyz/web/filters/urlrewrite/RuleChain.handleRewrite(RuleChain.java:176)
at xyz/xyz/web/filters/urlrewrite/RuleChain.doRules(RuleChain.java:145)
at xyz/xyz/web/filters/urlrewrite/UrlRewriter.processRequest(UrlRewriter.java:92)
at xyz/xyz/evolution/presentation/navigation/urlmapping/BellUrlRewriteFilter.doFilter(BellUrlRewriteFilter.java:131)
at weblogic/servlet/internal/FilterChainImpl.doFilter(FilterChainImpl.java:42)[optimized]
at xyz/xyz/portal/content/PortletContentsTrackerFilter.doFilter(PortletContentsTrackerFilter.java:189)
at weblogic/servlet/internal/FilterChainImpl.doFilter(FilterChainImpl.java:42)[optimized]
at org/apache/beehive/netui/pageflow/PageFlowPageFilter.continueChainNoWrapper(PageFlowPageFilter.java:454)[inlined]
at org/apache/beehive/netui/pageflow/PageFlowPageFilter.runPage(PageFlowPageFilter.java:431)[inlined]
at org/apache/beehive/netui/pageflow/PageFlowPageFilter.doFilter(PageFlowPageFilter.java:283)[optimized]
at weblogic/servlet/internal/FilterChainImpl.doFilter(FilterChainImpl.java:42)[optimized]
at com/bea/p13n/servlets/PortalServletFilter.doFilter(PortalServletFilter.java:315)[optimized]
at weblogic/servlet/internal/FilterChainImpl.doFilter(FilterChainImpl.java:42)[inlined]
at weblogic/servlet/internal/WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3393)[optimized]
at weblogic/security/acl/internal/AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)[optimized]
at weblogic/security/service/SecurityManager.runAs(Lweblogic/security/acl/internal/AuthenticatedSubject;Lweblogic/security/acl/internal/AuthenticatedSubject;Ljava/security/PrivilegedAction;)Ljava/lang/Object;(Unknown Source)[inlined]
at weblogic/servlet/internal/WebAppServletContext.securedExecute(WebAppServletContext.java:2140)[inlined]
at weblogic/servlet/internal/WebAppServletContext.execute(WebAppServletContext.java:2046)[optimized]
at weblogic/servlet/internal/ServletRequestImpl.run(ServletRequestImpl.java:1366)[optimized]
at weblogic/work/ExecuteThread.execute(ExecuteThread.java:200)[optimized]
at weblogic/work/ExecuteThread.run(ExecuteThread.java:172)
at jrockit/vm/RNI.c2java(JJJJJ)V(Native Method)
-- end of trace
The Thread Dump snapshot clearly confirmed the problem replication, e.g. Thread #23 was involved in an infinite loop within the following Sun classes:
at sun/misc/FDBigInt.sub(FloatingDecimal.java:2691)[optimized]
at sun/misc/FloatingDecimal.doubleValue(FloatingDecimal.java:1547)
at java/lang/Double.parseDouble(Double.java:482)
Conclusion
The problem replication exercise clearly demonstrated the risk of this Java defect: it could be exploited by hackers to generate a DoS (Denial of Service) type of attack by injecting such a binary floating point value into any Java EE application exposed over the Internet that accepts such a value from a GUI / HTML page.
Our team is currently planning to conduct a full assessment, prioritize, and proceed with the installation of the available Oracle patch across our affected client Java EE environments.
You can find more detail about the affected Java SDK versions along with available patches below: