Tuning Java Virtual Machines

Posted By Sagar Patil

The application server, being a Java process, requires a Java virtual machine (JVM) to run, and to support the Java applications running on it. As part of configuring an application server, you can fine-tune settings that enhance system use of the JVM.

A JVM provides the runtime execution environment for Java-based applications. WebSphere Application Server is a combination of a JVM runtime environment and a Java-based server runtime. It can run on JVMs from different JVM providers. To determine the JVM provider on which your application server is running, issue the java -fullversion command from within your WebSphere Application Server app_server_root/java/bin directory. You can also check the SystemOut.log from one of your servers. When an application server starts, WebSphere Application Server writes information about the JVM, including the JVM provider, into this log file.
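For example, on a typical Linux installation the check might look like the following; the installation root and profile paths shown here are illustrative and will differ on your system:

/opt/IBM/WebSphere/AppServer/java/bin/java -fullversion
grep -i "java version" /opt/IBM/WebSphere/AppServer/profiles/AppSrv01/logs/server1/SystemOut.log

The first command prints the JVM vendor and build string. The second searches the server startup banner in SystemOut.log; the exact wording of the JVM entry varies by release, so adjust the search string if nothing is returned.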

From a JVM tuning perspective, there are two main types of JVMs:

* IBM JVMs
* Sun HotSpot-based JVMs, including the Sun HotSpot JVM on Solaris and HP’s JVM for HP-UX

Even though JVM tuning is dependent on the JVM provider, general tuning concepts apply to all JVMs. These general concepts include:

* Compiler tuning. All JVMs use Just In Time (JIT) compilers to compile Java byte codes into native instructions during server runtime.
* Java memory or heap tuning. The JVM memory management function, or garbage collection, provides one of the biggest opportunities for improving JVM performance.
* Class loading tuning.

Procedure

* Optimize the startup performance and the runtime performance

In some environments, it is more important to optimize the startup performance of your WebSphere Application Server rather than the runtime performance. In other environments, it is more important to optimize the runtime performance. By default, IBM JVMs are optimized for runtime performance while HotSpot-based JVMs are optimized for startup performance.

The Java JIT compiler has a big impact on whether startup or runtime performance is optimized. The initial optimization level used by the compiler influences the length of time it takes to compile a class method and the length of time it takes to start the server. For faster startups, you can reduce the initial optimization level that the compiler uses. This means that the runtime performance of your applications may be degraded because the class methods are now compiled at a lower optimization level.

It is hard to provide a specific runtime performance impact statement because the compilers might recompile class methods during runtime execution based upon the compiler’s determination that recompiling might provide better performance. Ultimately, the duration of the application is a major influence on the amount of runtime degradation that occurs. Short running applications have a higher probability of having their methods recompiled. Long-running applications are less likely to have their methods recompiled. The default settings for IBM JVMs use a high optimization level for the initial compiles. You can use the following IBM JVM option if you need to change this behavior:

-Xquickstart This setting causes the IBM JVM to use a lower optimization level for class method compiles, which provides faster server startup at the expense of runtime performance. If this parameter is not specified, the IBM JVM defaults to starting with a high initial optimization level for compiles, which provides faster runtime performance at the expense of slower server startup.

Default: High initial compiler optimizations level
Recommended: High initial compiler optimizations level
Usage: -Xquickstart can provide faster server startup times.

JVMs based on Sun’s Hotspot technology initially compile class methods with a low optimization level. Use the following JVM option to change this behavior:

-server JVMs based on Sun’s HotSpot technology use both a simple JIT compiler and an optimizing JIT compiler. Normally the simple compiler is used. However, you can use this option to select the optimizing compiler instead. This significantly increases the performance of the server, but the server takes longer to warm up when the optimizing compiler is used.

Default: Simple compiler
Recommended: Optimizing compiler
Usage: -server enables the optimizing compiler.
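As a sketch, the compiler-tuning choice above comes down to a single generic JVM argument, and which one applies depends on the JVM provider your server runs on. In WebSphere Application Server these arguments are typically added to the server’s Generic JVM arguments setting rather than typed on a java command line:

-Xquickstart    (IBM JVM: faster startup, lower initial optimization level)
-server         (HotSpot-based JVM: optimizing compiler, faster runtime, slower warm-up)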

* Set the heap size. The following command-line parameters are useful for setting the heap size.

* -Xms This setting controls the initial size of the Java heap. Properly tuning this parameter reduces the overhead of garbage collection, improving server response time and throughput. For some applications, the default setting for this option might be too low, resulting in a high number of minor garbage collections.

Default: 256 MB
Recommended: Workload specific, but higher than the default.
Usage: -Xms256m sets the initial heap size to 256 megabytes

* -Xmx This setting controls the maximum size of the Java heap. Properly tuning this parameter can reduce the overhead of garbage collection, improving server response time and throughput. For some applications, the default setting for this option is too low, resulting in a high number of minor garbage collections.

Default: 512 MB
Recommended: Workload specific, but higher than the default.
Usage: -Xmx512m sets the maximum heap size to 512 megabytes

* -Xlp This setting can be used with the IBM JVM to allocate the heap using large pages. However, if you use this setting, your operating system must be configured to support large pages. Using large pages can reduce the CPU overhead needed to keep track of heap memory and might also allow the creation of a larger heap.

See Tuning operating systems for more information about tuning your operating system.

* The size you should specify for the heap depends on your heap usage over time. In cases where the heap size changes frequently, you might improve performance by specifying the same value for the -Xms and -Xmx parameters, as shown in the example below.
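Putting the heap options together, a server whose working heap requirement has been measured at roughly half a gigabyte might use an argument list along these lines; the sizes are illustrative, not a recommendation for your workload:

-Xms512m -Xmx512m

Setting the initial and maximum sizes to the same value avoids repeated heap expansion and shrinkage. On an IBM JVM, -Xlp can be added to the same list once the operating system has been configured for large pages.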

* Tune the IBM JVM’s garbage collector.

Use the java -X option to see the list of memory options.

* -Xgcpolicy Setting gcpolicy to optthruput disables concurrent mark. If you do not have pause time problems, denoted by erratic application response times, you should get the best throughput using this option. Setting gcpolicy to optavgpause enables concurrent mark with its default values. This setting alleviates erratic application response times caused by normal garbage collection. However, this option might decrease overall throughput.

Default: optthruput
Recommended: optthruput
Usage: -Xgcpolicy:optthruput

* -Xnoclassgc By default the JVM unloads a class from memory when there are no live instances of that class left, but this can degrade performance. Turning off class garbage collection eliminates the overhead of loading and unloading the same class multiple times.

If a class is no longer needed, the space that it occupies on the heap is normally used for the creation of new objects. However, if you have an application that handles requests by creating a new instance of a class and if requests for that application come in at random times, it is possible that when the previous requester is finished, the normal class garbage collection will clean up this class by freeing the heap space it occupied, only to have to re-instantiate the class when the next request comes along. In this situation you might want to use this option to disable the garbage collection of classes.

Avoid trouble: Use this option with caution if your application creates classes dynamically or uses reflection, because for this type of application the option can lead to native memory exhaustion and cause the JVM to throw an out-of-memory exception. When this option is used, if you have to redeploy an application, always restart the application server to clear the classes and static data from the previous version of the application.

Default: class garbage collection enabled
Recommended: class garbage collection disabled
Usage: -Xnoclassgc disables class garbage collection
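A minimal sketch of an IBM JVM argument list that applies the garbage collection guidance above, assuming throughput matters more than pause times and the class-unloading caveat does not apply to your application:

-Xms512m -Xmx512m -Xgcpolicy:optthruput -Xnoclassgc -verbose:gc

-verbose:gc is the standard option for logging garbage collection activity so that you can verify the effect of a tuning change; remove it once you have finished measuring.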

* Tune the Sun JVM’s garbage collector

On the Solaris platform, WebSphere Application Server runs on the Sun HotSpot JVM rather than the IBM JVM. It is important to use the correct tuning parameters with the Sun JVM to take advantage of its performance-optimizing features.

The Sun HotSpot JVM relies on generational garbage collection to achieve optimum performance. The following command-line parameters are useful for tuning garbage collection; a combined example follows the list.

* -XX:SurvivorRatio The Java heap is divided into a section for old (long lived) objects and a section for young objects. The section for young objects is further subdivided into the section where new objects are allocated (eden) and the section where new objects that are still in use survive their first few garbage collections before being promoted to old objects (survivor space). Survivor Ratio is the ratio of eden to survivor space in the young object section of the heap. Increasing this setting optimizes the JVM for applications with high object creation and low object preservation. Since WebSphere Application Server generates more medium and long lived objects than other applications, this setting should be lowered from the default.

Default: 32
Recommended: 16
Usage: -XX:SurvivorRatio=16

* -XX:PermSize The section of the heap reserved for the permanent generation holds all of the reflective data for the JVM. This size should be increased to optimize the performance of applications that dynamically load and unload a lot of classes. Setting this value to 128 MB eliminates the overhead of growing this part of the heap.

Recommended: 128 MB
Usage: -XX:PermSize=128m sets the permanent generation size to 128 megabytes.

* -Xmn This setting controls how much space the young generation is allowed to consume on the heap. Properly tuning this parameter can reduce the overhead of garbage collection, improving server response time and throughput. The default setting for this parameter is typically too low, resulting in a high number of minor garbage collections. Setting it too high can cause the JVM to perform only major (or full) garbage collections, which usually take several seconds and are extremely detrimental to the overall performance of your server. You must keep this setting below half of the overall heap size to avoid this situation.

Default: 2228224 bytes
Recommended: Approximately 1/4 of the total heap size
Usage: -Xmn256m sets the size to 256 megabytes.

* -Xnoclassgc By default the JVM unloads a class from memory when there are no live instances of that class left, but this can degrade performance. Turning off class garbage collection eliminates the overhead of loading and unloading the same class multiple times.

If a class is no longer needed, the space that it occupies on the heap is normally used for the creation of new objects. However, if you have an application that handles requests by creating a new instance of a class and if requests for that application come in at random times, it is possible that when the previous requester is finished, the normal class garbage collection will clean up this class by freeing the heap space it occupied, only to have to re-instantiate the class when the next request comes along. In this situation you might want to use this option to disable the garbage collection of classes.

Default: class garbage collection enabled
Recommended: class garbage collection disabled
Usage: -Xnoclassgc disables class garbage collection
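As an illustration of how the generation settings combine, a Sun HotSpot JVM given a 1 GB heap might be configured as follows; 256 MB is one quarter of the 1024 MB heap and stays below the half-heap ceiling noted for -Xmn:

-server -Xms1024m -Xmx1024m -Xmn256m -XX:SurvivorRatio=16 -XX:PermSize=128m

Treat the sizes as a starting point and confirm them against your own garbage collection statistics rather than adopting them as-is.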

* Tune the HP JVM’s garbage collector

The HP JVM relies on generational garbage collection to achieve optimum performance. The following command-line parameters are useful for tuning garbage collection; a combined example follows the list.

* -Xoptgc This setting optimizes the JVM for applications with many short-lived objects. If this parameter is not specified, the JVM usually does a major (full) garbage collection. Full garbage collections can take several seconds and can significantly degrade server performance.

Default: off
Recommended: on
Usage: -Xoptgc enables optimized garbage collection.

* -XX:SurvivorRatio The Java heap is divided into a section for old (long lived) objects and a section for young objects. The section for young objects is further subdivided into the section where new objects are allocated (eden) and the section where new objects that are still in use survive their first few garbage collections before being promoted to old objects (survivor space). Survivor Ratio is the ratio of eden to survivor space in the young object section of the heap. Increasing this setting optimizes the JVM for applications with high object creation and low object preservation. Since WebSphere Application Server generates more medium and long lived objects than other applications, this setting should be lowered from the default.

Default: 32
Recommended: 16
Usage: -XX:SurvivorRatio=16

* -XX:PermSize The section of the heap reserved for the permanent generation holds all of the reflective data for the JVM. This size should be increased to optimize the performance of applications which dynamically load and unload a lot of classes. Specifying a value of 128 megabytes eliminates the overhead of increasing this part of the heap.

Default: 0
Recommended: 128 megabytes
Usage: -XX:PermSize=128m sets PermSize to 128 megabytes

* -XX:+ForceMmapReserved By default, the Java heap is allocated with “lazy swap.” This saves swap space by allocating pages of memory as needed, but it also forces the use of 4 KB pages. This allocation of memory can spread the heap across hundreds of thousands of pages on large-heap systems. This option disables “lazy swap” and allows the operating system to use larger memory pages, thereby optimizing access to the memory making up the Java heap.

Default: off
Recommended: on
Usage: -XX:+ForceMmapReserved disables “lazy swap”.

* -Xmn This setting controls how much space the young generation is allowed to consume on the heap. Properly tuning this parameter can reduce the overhead of garbage collection, improving server response time and throughput. The default setting for this is typically too low, resulting in a high number of minor garbage collections.

Default: No default
Recommended: Approximately 1/4 of the total heap size
Usage: -Xmn256m sets the size to 256 megabytes

* Virtual Page Size Setting the Java virtual machine instruction and data page sizes to 64MB can improve performance.

Default: 4MB
Recommended: 64MB
Usage: Use the following command. The command output provides the current operating system characteristics of the process executable:

chatr +pi64M +pd64M /opt/WebSphere/AppServer/java/bin/PA_RISC2.0/native_threads/java
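To confirm the change, running chatr against the same executable with no modification options should report its current attributes:

chatr /opt/WebSphere/AppServer/java/bin/PA_RISC2.0/native_threads/java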

* -Xnoclassgc By default the JVM unloads a class from memory when there are no live instances of that class left, but this can degrade performance. Turning off class garbage collection eliminates the overhead of loading and unloading the same class multiple times.

If a class is no longer needed, the space that it occupies on the heap is normally used for the creation of new objects. However, if you have an application that handles requests by creating a new instance of a class and if requests for that application come in at random times, it is possible that when the previous requester is finished, the normal class garbage collection will clean up this class by freeing the heap space it occupied, only to have to re-instantiate the class when the next request comes along. In this situation you might want to use this option to disable the garbage collection of classes.

Default: class garbage collection enabled
Recommended: class garbage collection disabled
Usage: -Xnoclassgc disables class garbage collection
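Pulling the HP-specific options together, a sketch of a generic JVM argument list for the HP JVM might look like the following; as in the Sun example, the sizes are illustrative and one quarter of a 1 GB heap is given to the young generation:

-Xms1024m -Xmx1024m -Xmn256m -Xoptgc -XX:SurvivorRatio=16 -XX:PermSize=128m -XX:+ForceMmapReserved

The 64 MB page size change is made separately with chatr against the java executable, as shown in the Virtual Page Size entry above, because it modifies the executable itself rather than the JVM argument list.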
