Crafting Nimble Java: Strategies for Optimal Performance and Agility

Who says you cannot be Big and Nimble?

Numerous online enterprises rely on application servers, with platforms like Liberty and JBoss being prominent examples, all leveraging Java technology. Java presents users with many advantages, including a comprehensive class library, impressive throughput performance, robust debugging and tooling features, and additional benefits.

A typical JVM execution begins with class loading and initialization, followed by initial interpretation of methods until the Just-In-Time (JIT) compiler turns them into machine code. The outcome? Longer start-up times and an initial phase of increased memory usage while the system ramps up. In this blog post, we will discuss some key strategies for making Java nimble.

Where Even Your Bugs Have Their Own Little Cubicles: Modular Architecture

By breaking down applications into smaller, reusable modules, developers can achieve greater flexibility, scalability, and maintainability. Embrace Java’s modules and jlink to help with modularity, to enforce encapsulation, and to create custom runtime images.

Modules allow you to organise your code into discrete units of functionality. Each module specifies its dependencies on other modules and what packages it makes available to other modules.
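As a minimal sketch of what such a declaration looks like, the snippet below writes out a module-info.java for a hypothetical module (the module and package names com.demo.main and com.demo.main.api are illustrative):

```shell
# Create a skeleton module declaration; names are illustrative.
mkdir -p src/com.demo.main
cat > src/com.demo.main/module-info.java <<'EOF'
module com.demo.main {
    requires java.logging;      // this module's dependency on another module
    exports com.demo.main.api;  // the only package made visible to consumers
}
EOF
```

Everything not explicitly exported stays strongly encapsulated, which is what lets jlink prune the runtime down to just what the module graph needs.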

The jlink tool takes a set of modules (along with their dependencies) as input and generates a runtime image that contains only the modules necessary to run your application. This helps to reduce the size of your application distribution and improves startup time, as you’re only bundling the required modules.

jlink --module-path=$JAVA_HOME/jmods:mlib \
      --add-modules com.demo.main \
      --output testRuntimeImage \
      --strip-debug \
      --no-header-files

In the example above, jlink will create a custom runtime image with the modules provided, excluding unnecessary debug information and header files from the JDK modules.

For a fully working setup: My sample code on modules and jlink.

Because Life’s Too Short to Waste on Booting Up: InstantOn

In a world where streaming services are omnipresent, we have the freedom to pause a video at any moment and can resume from the same position where we left off, even if we switch to a different device.

Consider the prospect of applying a similar concept to our Java applications!

Enter IBM Semeru Runtime’s InstantOn feature, which is built on OpenJ9 CRIU Support technology. It makes use of a Linux feature, CRIU (Checkpoint/Restore In Userspace).

CRIU support is not enabled by default. You must enable it by specifying the -XX:+EnableCRIUSupport command-line option when you start your application.
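As a sketch, enabling it looks like this (the flag is per the OpenJ9 documentation; the application name is illustrative, and a CRIU-capable Linux environment with IBM Semeru Runtimes is assumed):

```shell
# Run with OpenJ9 CRIU support enabled; the application (or the
# application server, e.g. Liberty with InstantOn) triggers the
# checkpoint, and later runs restore from the saved process image.
java -XX:+EnableCRIUSupport -jar app.jar
```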

For a fully working setup: Getting started with OpenJ9 CRIU support

Zip and Zing with Shared Class Cache and Dynamic Ahead-of-Time Compilation

Shared classes in a cache offer a straightforward and effective method to enhance startup performance. During the initial run of an application, the VM is tasked with loading all necessary classes, a process that consumes time. However, by caching these classes, subsequent runs experience significantly reduced initialization times.

The Shared Class Cache (SCC) technology involves a shared memory area designed to cache entities like ROMClass (a read-only representation of a Java class), AOT compiled code, interpreter profiling information, and more. Implementing a shared classes cache not only streamlines startup performance but also minimizes memory usage by enabling multiple VMs to share class data. Furthermore, debugging-related class data remains on disk rather than in memory, contributing to a smaller memory footprint.

Another key technology that improves start-up time is dynamic AOT. It is called dynamic because methods are compiled to a relocatable format at runtime and stored in the shared class cache. Any subsequent JVM instance can use these methods after a cheap relocation process, which is about 100 times faster than a JIT compilation. With AOT, the JVM can transition faster from the interpreter to machine code, which improves start-up time as a result.

In Semeru, the -Xshareclasses command-line option is highly configurable, allowing you to specify where to create the cache, how much space to allocate for AOT code, and much more.
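As an illustrative sketch (the cache name, directory, and size below are assumptions, not recommendations):

```shell
# Create/reuse a named shared class cache; -Xscmx caps its size.
java -Xshareclasses:name=demoCache,cacheDir=/tmp/scc -Xscmx80m -jar app.jar

# Inspect what the cache holds (ROMClasses, AOT code, profiling data):
java -Xshareclasses:name=demoCache,cacheDir=/tmp/scc,printStats
```

The first run populates the cache; subsequent JVMs that specify the same name and directory start faster by reading classes and AOT-compiled code from it.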

Microservices Are Even Applied at the JVM Level!!

Semeru Cloud Compiler (aka JIT server) technology decouples the JIT compiler from the VM and lets the JIT compiler run remotely in its own process. This mechanism shields your Java application from the CPU and memory spikes caused by JIT compilation. Nor does the application depend on the stability of the remote compiler: if the Semeru Cloud Compiler goes down, the application continues to run, because the JVM retains the ability to compile locally using its embedded JIT compiler.
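A minimal sketch of the setup, using the OpenJ9 JITServer options (the host and port values are assumptions; 38400 is simply a commonly shown default):

```shell
# Start the remote compiler in its own process (often its own container):
jitserver &

# Point the client JVM at it; it falls back to local JIT if unreachable.
java -XX:+UseJITServer \
     -XX:JITServerAddress=localhost \
     -XX:JITServerPort=38400 \
     -jar app.jar
```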

OK, now you may be wondering: why dynamic AOT if we have the Semeru Cloud Compiler to the rescue? The quality of AOT code is about 10% lower (in terms of throughput) than the quality of JIT-compiled code, which is tailor-fitted to a particular JVM. This 10% throughput gap is typically addressed by recompiling hot AOT-compiled methods with the regular JIT compiler. Moreover, some methods cannot be subjected to AOT compilation and need to be compiled with the regular JIT compiler.

Thus, the AOT and Semeru Cloud Compiler technologies complement each other, rather than competing with one another: The AOT solution can be used to quickly bring the throughput to reasonable levels, while the Semeru Cloud Compiler can be used to reach peak throughput by performing recompilations and compilations not handled by AOT.

Trying to Fit an Elephant into a Tupperware: Container Conundrum

Containers, paired with orchestration tools like Kubernetes, are currently the talk of the town. When it comes to running Java applications within containers, both Alpine Linux and UBI Image offer compelling advantages. Here’s a comparison to help you decide which option best suits your needs:

  • Alpine Linux: Ideal for resource-constrained environments, lightweight deployments, and security-conscious applications. Well-suited for microservices architectures, DevOps workflows, and cloud-native deployments. Base image can be as small as 3 MB on disk.
  • UBI Image: Suitable for enterprise-grade deployments, mission-critical applications, and environments requiring extensive support and certification. Recommended for organizations with existing investments in the Red Hat ecosystem and a preference for standardized, certified container images. At 72 MB, this is one of the larger base images available.
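As a sketch, a UBI-based container build might look like the following (the image tag and paths are assumptions; IBM publishes Semeru Runtimes images on several bases, so check the current tags):

```dockerfile
# Illustrative Dockerfile; image tag and jar path are assumptions.
FROM ibm-semeru-runtimes:open-17-jre
COPY target/app.jar /opt/app/app.jar
ENTRYPOINT ["java", "-jar", "/opt/app/app.jar"]
```

Swapping the base image is usually a one-line change, so it is worth measuring both image size and cold-start behaviour before committing to either option.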

The Unsung Hero of Memory Optimization: Compressed Object References

On 64-bit systems, the VM can use compressed references to decrease the size of Java objects and make better use of the available space in the Java heap. By storing objects in a 32-bit representation, the object size is identical to that in a 32-bit VM, which creates a smaller memory footprint. These 4-byte (32-bit) compressed references are converted to 64-bit values at runtime with minimal overhead. Smaller objects enable larger heap sizes that result in less frequent garbage collection and improve memory cache utilization. Overall, the performance of 64-bit applications that store compressed rather than uncompressed 64-bit object references is significantly improved.

In OpenJ9, compressed references are used by default when the maximum Java heap size is in the range 0–57 GB. To turn off compressed references, use the -Xnocompressedrefs command-line option.
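In practice there is usually nothing to configure; a hedged sketch (the heap size is illustrative):

```shell
# Heaps below ~57 GB get compressed references by default on OpenJ9:
java -Xmx4g -jar app.jar

# Explicitly opt out (rarely needed):
java -Xmx4g -Xnocompressedrefs -jar app.jar
```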

Wrapping It Up: Fast Startup and Small Footprint

For latency-sensitive applications, fast start-up is a requirement if one is to use a dynamic scaling approach to provision resources. OpenJ9 features such as InstantOn, the Semeru Cloud Compiler, metadata caching via the Shared Classes Cache (SCC), and dynamic ahead-of-time (AOT) compilation, combined with a modular runtime architecture and well-chosen container images, can significantly reduce both start-up times and memory footprints.

Write once, debug anywhere!
