The “Iceberg” meme is an internet phenomenon that humorously, and sometimes unsettlingly, illustrates levels of knowledge or initiation into a given topic – from simple, widely known facts at the tip of the iceberg to the dark, esoteric depths comprehensible only to the most battle-hardened veterans. Picture an iceberg floating on water: what’s visible on the surface is just the beginning, while the real magic (or nightmare) lurks beneath, in increasingly inaccessible layers.
Personally, I love it. So I decided to create a Java one.

That’s why this article will take you on a Lovecraftian journey through the successive levels of understanding the Java ecosystem – from innocent beginnings to an abyss where code becomes something far more than mere instructions. Brace yourself, for the deeper you dive, the harder it will be to resurface!
PS: For using the word “pragmatic,” you’d be thrown overboard on this ship! Consider yourself warned.
Level 1: The Tip

At this level, you’re like a newbie who just installed Java and thinks that “Hello World” is the pinnacle of programming achievement, and whose biggest drama is a missing semicolon at the end of a line.
At the same time, you start noticing the technologies that “professionals” talk about, and each one feels like a new level of initiation into some esoteric knowledge.
Spring Boot
Spring Boot is like a streamlined version of classic Spring. With traditional Spring, you get a toolbox and have to assemble everything yourself, like building LEGO without instructions. Boot was created to simplify this – no more manual wiring, just a setup that works out of the box.
The “need for speed” led to its creation. Classic Spring was powerful but slow to configure, making it frustrating for simple tasks. In 2014, Pivotal introduced Boot to automate setup and let developers focus on coding instead of configuration. It’s still Spring, just with less hassle.
Quarkus
Quarkus was born to tackle the need for agile tools that fly on GraalVM and Kubernetes, where old Java frameworks lumbered like dinosaurs. Quarkus keeps it snappy and slim, perfect for microservices and serverless – none of that bloated nonsense. Unlike runtime-heavy frameworks, it shifts the grunt work to build time, launching quick and light, with a slick “Dev Mode” to keep coding smooth.
Started by Red Hat, it went big under their watch, then hopped over to the Commonhaus Foundation in 2024, joining a wider crew to keep rocking the Java world.
Micronaut
Like the two previous entries, Micronaut is a framework tailored for crafting nimble, high-performing applications, especially suited for microservices and serverless setups. Its creation was led by Graeme Rocher, co-creator of the Grails framework, who poured his expertise into rethinking Java’s potential. Unlike the clunky, reflection-heavy Java frameworks of yesteryear, Micronaut flips the script with ahead-of-time (AOT) compilation, slashing startup delays and trimming memory bloat. Micronaut syncs effortlessly with GraalVM and can be treated as the “official” solution for that technology.
Lombok
Project Lombok is a tool saving developers from the soul-crushing monotony of typing out getters, setters, and constructors till their fingers bleed. With annotations like @Data or @Getter, it whispers to your compiler, “Hey, let’s generate that boilerplate nonsense behind the scenes,” leaving your code looking sleek and sassy. Simply speaking, it frees Java developers from Kotlin envy.
Sure, a few purists grumble it’s like letting a wizard loose in your codebase, cranking out “magic” and potentially turning debugging into a treasure hunt, but most folks just enjoy the ride nowadays.
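To see what that magic buys you, here is roughly what a single @Data annotation expands to. The annotated one-liner sits in the comment; the hand-rolled equivalent follows (a sketch – Lombok’s generated equals/hashCode differ in minor details):

```java
// With Lombok, this whole class collapses to:
//
//   @Data
//   public class Point { private int x; private int y; }
//
// Below is roughly what the annotation processor generates behind the scenes:
public class Point {
    private int x;
    private int y;

    public int getX() { return x; }
    public int getY() { return y; }
    public void setX(int x) { this.x = x; }
    public void setY(int y) { this.y = y; }

    @Override public boolean equals(Object o) {
        return o instanceof Point p && x == p.x && y == p.y;
    }

    @Override public int hashCode() { return 31 * x + y; }

    @Override public String toString() { return "Point(x=" + x + ", y=" + y + ")"; }
}
```

Multiply this by every data class in a project, and the appeal is obvious.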
Level 2: Just Below the Surface

Here, you start to realize that Java isn’t just about writing code – it’s also a battle with classpaths, as if you were Indiana Jones searching for a lost artifact. You’ve already had your first encounter with NullPointerException and are beginning to suspect that it’s not a coincidence, but Java’s revenge for your attempt to understand what was meant to remain hidden.
Java 21 features
In an era where many are still grappling with JDK 1.8 in production, last year’s release of Java 21 introduced such a wealth of new features that its debut resonated beyond the usual JVM enthusiast circles. One of the most anticipated additions was Virtual Threads, which have revolutionized concurrent programming in Java. These lightweight threads let developers create and manage threads far more cheaply: by decoupling Java threads from OS threads, Virtual Threads enable the handling of numerous concurrent tasks without the overhead traditionally associated with thread management.
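A minimal sketch of the new model (pure JDK 21, no extra dependencies): thousands of blocking tasks, each on its own virtual thread – a workload that would exhaust a classic platform-thread pool:

```java
import java.time.Duration;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.IntStream;

public class VirtualThreadsDemo {
    static int runTasks(int n) {
        AtomicInteger done = new AtomicInteger();
        // One virtual thread per task: the blocking sleep parks the virtual
        // thread, freeing its carrier OS thread for other work
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            IntStream.range(0, n).forEach(i -> executor.submit(() -> {
                try {
                    Thread.sleep(Duration.ofMillis(10));
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                done.incrementAndGet();
            }));
        } // close() waits until every submitted task has finished
        return done.get();
    }

    public static void main(String[] args) {
        System.out.println(runTasks(10_000) + " tasks completed");
    }
}
```

The same code with a fixed platform-thread pool would either queue heavily or burn gigabytes of stack memory.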
Another significant enhancement in Java 21 is Pattern Matching for switch statements. This feature enables more elegant and readable operations on various data types, allowing developers to write concise and type-safe code while avoiding excessive casting and complex conditional structures. Additionally, the introduction of Record Patterns facilitates easier deconstruction of objects, streamlining data handling and improving code clarity.
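A compact illustration of both features working together (Java 21; the type names are invented for the example) – a sealed hierarchy, record patterns, and an exhaustive switch:

```java
public class PatternDemo {
    sealed interface Shape permits Circle, Rect {}
    record Circle(double radius) implements Shape {}
    record Rect(double w, double h) implements Shape {}

    // Record patterns deconstruct components right in the switch;
    // the sealed hierarchy makes it exhaustive – no default branch needed
    static double area(Shape s) {
        return switch (s) {
            case Circle(double radius) -> Math.PI * radius * radius;
            case Rect(double w, double h) -> w * h;
        };
    }

    public static void main(String[] args) {
        System.out.println(area(new Rect(3, 4)));
    }
}
```

Add a new Shape subtype and the compiler flags every switch that forgot to handle it.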
Jakarta EE
The first version of Jakarta EE, then known as Java EE (and originally J2EE), was released in December 1999, bringing standards for Java-based enterprise applications. In the following years, the platform evolved, gaining popularity with the introduction of technologies such as servlets, JavaServer Pages (JSP), and Enterprise JavaBeans (EJB).
Yes, this truly used to excite the community.
In 2017, Oracle announced it was open-sourcing Java EE to the Eclipse Foundation. Due to legal issues around the trademarked name “Java,” the platform was renamed Jakarta EE. The first release under the new name, Jakarta EE 8, arrived in September 2019 and was fully compatible with Java EE 8, allowing a smooth transition for developers. Subsequent versions, such as Jakarta EE 9 in 2020, introduced significant changes, including the migration of the namespace from “javax” to “jakarta.” The latest version, Jakarta EE 10, was released in September 2022 – although by the time of this publication, we should already be enjoying Jakarta EE 11.
Helidon
Like Micronaut or Quarkus, Helidon emerges as a modern Java framework, particularly suited for microservices and cloud-native environments. Created by Oracle itself, it offers two distinct styles: Helidon SE (the standalone flavor) and Helidon MP, which aligns with the MicroProfile project. Helidon integrates smoothly with tools like GraalVM and thrives in containerized setups such as Docker… yada, yada, yada, you already know the story.
However, the most interesting part of the Helidon ecosystem is Helidon Níma. Built on Java’s Project Loom and its virtual threads, Níma redefines concurrency by replacing the old thread-per-request approach with a lightweight, scalable model that handles thousands of tasks effortlessly. This makes it the most exciting piece of the Helidon puzzle, offering blazing-fast performance and a slim memory footprint, perfect for high-concurrency microservices and proving that Helidon isn’t just keeping up with the times but pushing Java into the future.
GraalVM
As the knights of old sought the Holy Grail, so doth GraalVM rise as a mighty force within the Java realm, wrought by the hands of Oracle for deeds of high performance in the lands of microservices and cloud-native contrivances. It is a universal engine, not merely toiling at Java’s behest, but enkindling it with the craft of ahead-of-time (AOT) compilation, whereby bytecode is transmuted into native executables. In fellowship with trusty tools such as Helidon and Quarkus, and flourishing within the bounds of Docker’s vessels, GraalVM doth cleave the delays of startup and lessen the weight of memory’s burden – verily, thou knowest well this tale.
Yet, the chief wonder of GraalVM’s domain lieth in its native image cunning. By forging Java works into solitary binaries, it casteth off the heavy mantle of the common JVM, granting speeds swift as the wind and a footprint scant as a miser’s trove. This maketh it the most stirring piece of GraalVM’s riddle, meet for microservices and serverless labours, shewing forth that GraalVM doth not only match the pace of present needs, but redefineth how Java doth fare in the wilds beyond.
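Translated back into modern speech, the whole ritual is two commands – a sketch, assuming a GraalVM distribution with the native-image tool installed, and app.jar standing in for thy application:

```shell
native-image -jar app.jar app   # build time: bytecode is compiled into a native binary
./app                           # runtime: starts in milliseconds, no JVM warmup
```

The trade-off: build times stretch into minutes, and closed-world assumptions mean reflection and dynamic class loading need explicit configuration.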
Modern Java Garbage Collectors
One of the most widely discussed is ZGC, introduced in JDK 11. ZGC was designed for applications needing minimal interruptions, offering pauses in the millisecond range, even with multi-gigabyte memory heaps. A key innovation in ZGC is the use of so-called “colored pointers,” which enable efficient tracking of objects in memory without requiring prolonged pauses in application threads.
Another garbage collector is Shenandoah, which also emerged around JDK 11 and is developed with a single focus – to minimize latency. Like ZGC, Shenandoah operates concurrently but stands out with its approach to heap compaction, which it performs without halting the application. This process traditionally caused significant pauses in older GCs like CMS or G1.
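Both collectors are opt-in and enabled with a single flag – a sketch, with app.jar standing in for your application:

```shell
java -XX:+UseZGC -Xmx16g -jar app.jar     # ZGC: pauses stay in the millisecond range even on big heaps
java -XX:+UseShenandoahGC -jar app.jar    # Shenandoah: compacts the heap concurrently with the application
```

Neither replaces G1 as the default; you pick them when latency matters more than raw throughput.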
Level 3: Deeper Level

In this realm, the light fades, and project names sound like the forgotten names of ancient gods (Valhalla, huh). You feel like you’re walking on the edge of madness, the code pulses with life and seems to guard hidden secrets. Something watches you from the darkness, something that knows.
Project Valhalla
Project Valhalla, kicked off in 2014, is Java’s ambitious stab at fixing its performance quirks and healing the primitive-object schism. It’s packing Value Classes – think lean, identity-free data types that ditch Object overhead, sitting inline in memory to slash heap usage and GC churn. Then there are Primitive Classes, improving int, double, etc., to play nice in generics. Add specialization to the mix, and generics stop erasing to Object, getting custom, type-specific bytecode instead. The payoff? Tighter memory layouts, fewer cache misses, and a VM that hums on modern hardware.
But here’s the rub: after a decade, Valhalla’s still a no-show. It’s tangled in technical quicksand – retrofitting the JVM, preserving bytecode compatibility, and rejigging a crusty type system without sparking a revolt. Early drafts (JEPs 401, 402) and test builds tease us, but the finish line keeps shifting. It’s a slog through backward compatibility hell and edge-case nightmares, all to make Java a leaner, meaner beast for data-crunching workloads.
Tonight, we (rather not) dine in Valhalla!
Project Lilliput
In 2021, Roman Kennke from Red Hat initiated Project Lilliput to reduce object header sizes in the HotSpot JVM from 128 bits to 64 bits, and potentially even to 32 bits. In the 64-bit HotSpot JVM, each object comprises a 64-bit “mark word” (used for synchronization and hash codes) and a 64-bit class pointer – for small objects (typically 5-6 words), this header constitutes a significant overhead. Project Lilliput introduced Compact Object Headers (JEP 450), which restructured the object header to 64 bits by relocating class metadata to a separate structure and optimizing lock management. This enhancement leads to tangible benefits, including reduced heap usage, less frequent garbage collection, and improved CPU cache utilization.
Unlike the perpetually deferred Project Valhalla, Project Lilliput has delivered on its promises. JEP 450 was integrated into JDK 24 (premiering in March 2025) as an experimental feature, with ongoing efforts to further shrink headers to 32 bits. Key hurdles include maintaining compatibility with existing JVM code, optimizing lightweight locking mechanisms, and experimenting with further compression techniques. The outcome is a leaner Java memory footprint, with prospects for even greater efficiency in the future – this time without endless delays.
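On JDK 24 the feature hides behind an experimental flag – a sketch, with app.jar as a placeholder:

```shell
java -XX:+UnlockExperimentalVMOptions -XX:+UseCompactObjectHeaders -jar app.jar
```

For heaps dominated by small objects, savings of a word per object add up fast.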
Spring AI, Langchain4j, Semantic Kernel
Alright, let’s dive into the wild world of Spring AI, LangChain4j, and Semantic Kernel – the trio of Java frameworks for AI Bros.
Spring AI is part of the Spring ecosystem, strutting in like it owns the place with its mission to sprinkle AI magic on your Java apps. It’s all about wrapping foundational models and vector stores in cozy Spring Boot autoconfiguration, complete with slick APIs for chatting up models and RAG (Retrieval-Augmented Generation) support for smarter answers.
LangChain4j, meanwhile, is the scrappy indie kid, porting the Python LangChain vibe to Java with a “we don’t need no stinkin’ Python” attitude. It’s got AI Services – think declarative interfaces that hide LLM plumbing – plus tools for memory, embeddings, and chaining prompts.
Then there’s Semantic Kernel, Microsoft’s entry into the fray, supporting Java alongside C# and Python. Also launched in 2023, it’s got big dreams of multi-agent systems, but its Java support feels like it’s still doing warm-up laps compared to the Python crew.
Spring Modulith
Spring Modulith is a clever little gem in the Spring ecosystem, helping you tidy up your Spring Boot project into neat, well-bounded modules. Armed with structural validation and a ruleset for keeping things loosely coupled, it helps turn a monolith into a maintainable project, all without forcing you to jump on the microservices bandwagon. Perfect for devs who want to keep things simple yet scalable.
JBang
JBang is a tool simplifying the process of running, managing, and creating Java scripts (pun intended), without the need for a traditional, heavyweight project setup. It allows developers to execute Java code as easily as a shell script by downloading dependencies, compiling, and running the code in one seamless step—all from a single command.
With JBang, you can write a Java file, include dependencies using simple comments (like //DEPS), and run it instantly with jbang script.java, making it ideal for prototyping, scripting, or even building small utilities.
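A complete JBang script can look like this (the //DEPS coordinate is illustrative – this particular script sticks to the JDK, so it also compiles and runs as plain Java):

```java
///usr/bin/env jbang "$0" "$@" ; exit $?
// Dependencies are declared inline as comments; JBang resolves them on first run:
//DEPS org.apache.commons:commons-lang3:3.14.0

public class hello {
    static String greeting() {
        return "Hello from JBang";
    }

    public static void main(String[] args) {
        System.out.println(greeting());
    }
}
```

Mark the file executable and the shebang-style first line lets you run it directly with ./hello.java – no build tool, no pom.xml.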
Level 4: Even Deeper

Here, you lose all sense of time, just understanding why these initiatives were even started feels like a challenge in itself. The depth crushes your mind, yet you can’t stop descending – something is calling you, something closer to the essence of knowledge (and the virtual machine) than yet another CRUD.
jSpecify
This initiative aims to standardize nullability annotations in Java, providing a consistent set of annotations that enable static code analysis to identify potential NullPointerException issues, thereby reducing such errors.
In July 2024, version 1.0 of the jSpecify project was announced. The project introduces four key annotations:
- @Nullable: Indicates that a variable, parameter, or return value can be null.
- @NonNull: Indicates that a variable, parameter, or return value cannot be null.
- @NullMarked: Applied at the package or class level, it designates all unannotated types within that scope as non-nullable by default, reducing the need for repeated use of the @NonNull annotation.
- @NullUnmarked: Allows disabling the effect of @NullMarked within a specific scope, facilitating a gradual adoption of annotations in large projects.
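In practice the annotations read like this. The sketch below declares local stand-ins so it compiles without the org.jspecify artifact – the real annotations live in org.jspecify.annotations, and the null-safety itself is enforced by a static checker, not at runtime:

```java
public class JSpecifyDemo {
    // Stand-ins for org.jspecify.annotations.* so this sketch is self-contained
    @interface Nullable {}
    @interface NullMarked {}

    @NullMarked  // inside this scope, unannotated types are non-null by default
    static class UserService {
        // Explicitly nullable return type: a checker flags any unguarded dereference
        @Nullable String findNickname(String userId) {
            return userId.isEmpty() ? null : "nick-" + userId;
        }

        int nicknameLength(String userId) {
            String nick = findNickname(userId);
            return nick == null ? 0 : nick.length();  // the null test the checker demands
        }
    }
}
```

With @NullMarked in place, only the genuinely nullable spots need annotating, which keeps the noise down in large codebases.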
CRaC, WRAP and InstantOn
CRaC, WRAP, and InstantOn are technologies aimed at improving Java application startup and performance, each with roots in the broader concept of checkpoint/restore mechanisms, notably influenced by CRIU (Checkpoint/Restore In Userspace).
CRaC, or Coordinated Restore at Checkpoint, is an OpenJDK project that enhances Java startup times by taking a snapshot of a fully warmed-up JVM – after Just-In-Time (JIT) compilation and application initialization – and saving it as a checkpoint. Using CRIU under the hood, CRaC pauses the application, ensures resources like files and sockets are closed via its Java API (with callbacks like beforeCheckpoint and afterRestore), and dumps the process state to disk – think Save States in video game emulators. Later, this snapshot can be restored rapidly, skipping the usual warmup phase, delivering near-instant startup and full performance from the get-go. Frameworks like Spring and Quarkus have integrated CRaC support, leveraging its ability to coordinate resources explicitly, a step beyond CRIU’s more generic, uncoordinated process freezing.
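The coordination contract looks roughly like this. The sketch uses local stand-in interfaces mirroring the org.crac API (in the real library, the callbacks take a Context&lt;? extends Resource&gt; and instances are registered via Core.getGlobalContext().register(...)):

```java
import java.net.ServerSocket;

public class CracSketch {
    // Stand-ins for the org.crac API so this sketch compiles without the dependency
    interface Context {}
    interface Resource {
        void beforeCheckpoint(Context c) throws Exception;
        void afterRestore(Context c) throws Exception;
    }

    // A resource that releases its socket before the snapshot and re-acquires it after restore
    static class ServerSocketResource implements Resource {
        ServerSocket socket;
        final int port;

        ServerSocketResource(int port) throws Exception {
            this.port = port;
            this.socket = new ServerSocket(port);
        }

        @Override public void beforeCheckpoint(Context c) throws Exception {
            socket.close();  // open file descriptors cannot be part of the checkpoint image
        }

        @Override public void afterRestore(Context c) throws Exception {
            socket = new ServerSocket(port);  // re-open once the process is restored
        }
    }
}
```

This explicit hand-off is exactly what makes CRaC safer than CRIU’s blind process freeze: the application gets a say in what survives the snapshot.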
WRAP (Warp Restore Acceleration Platform) and InstantOn build on similar checkpoint/restore ideas but cater to distinct use cases while sharing CRIU’s foundational influence. WRAP, part of Azul’s commercial offerings, extends CRaC-like functionality by optimizing the restoration process for large-scale deployments, focusing on reducing memory footprint and accelerating repeated startups – ideal for cloud-native or containerized environments. InstantOn, meanwhile, targets embedded systems and IoT, using CRIU to create a pre-initialized JVM image that boots in milliseconds, bypassing traditional JVM startup entirely. Unlike CRaC’s runtime coordination, InstantOn emphasizes a one-time checkpoint baked into a deployable artifact, while WRAP refines the process for scalability.
All three trace their origins to CRIU’s ability to freeze and restore Linux processes, but they diverge in scope: CRaC as an open-source JVM enhancement, WRAP as an enterprise optimization, and InstantOn as a lightweight, embedded solution, each tailoring CRIU’s raw power to Java’s unique needs.
JExtract
jextract is a tool developed as part of OpenJDK’s Project Panama to simplify calling native C libraries from Java by automatically generating Java bindings from C header files. It works by parsing the header files using the Clang C API, then producing Java classes that leverage the Foreign Function & Memory API (FFM API, finalized in JDK 22) to interact with native code.
This eliminates the need to manually create method handles or memory layouts, letting developers call native functions as if they were regular Java methods, complete with type safety and resource management via arenas. It’s particularly useful for integrating legacy C libraries or system APIs into Java applications, streamlining what was once a tedious process with JNI (Java Native Interface).
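For comparison, here is the manual plumbing jextract spares you – a hand-written FFM binding for C’s strlen (requires JDK 22+, where the FFM API is final):

```java
import java.lang.foreign.Arena;
import java.lang.foreign.FunctionDescriptor;
import java.lang.foreign.Linker;
import java.lang.foreign.MemorySegment;
import java.lang.foreign.ValueLayout;
import java.lang.invoke.MethodHandle;

public class StrlenDemo {
    // One downcall handle, built by hand – jextract generates this for every
    // function in a header file automatically
    static long strlen(String s) throws Throwable {
        Linker linker = Linker.nativeLinker();
        MethodHandle handle = linker.downcallHandle(
            linker.defaultLookup().find("strlen").orElseThrow(),
            FunctionDescriptor.of(ValueLayout.JAVA_LONG, ValueLayout.ADDRESS));
        try (Arena arena = Arena.ofConfined()) {
            // Copy the Java string into native memory as a NUL-terminated C string
            MemorySegment cString = arena.allocateFrom(s);
            return (long) handle.invokeExact(cString);
        }
    }

    public static void main(String[] args) throws Throwable {
        System.out.println(strlen("hello"));
    }
}
```

Multiply this boilerplate by a few hundred functions in a typical C library, and the case for generated bindings makes itself.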
OpenRewrite
OpenRewrite is an automated refactoring tool designed to help developers eliminate technical debt in their codebases. The project features an engine that applies pre-built, community-driven “recipes” to automate tasks like framework migrations, security vulnerability fixes, and stylistic consistency updates, reducing what might take hours or days into mere minutes. These recipes operate on a Lossless Semantic Tree (LST) representation of the source code, preserving original formatting while enabling precise, type-aware changes.
Initially focused on Java, OpenRewrite is now expanding to support other languages. While the project is maintained under an Apache 2.0 license, Moderne, the company behind OpenRewrite, is trying to scale this technology for enterprise use and build the proper business around it. In February 2025, they announced a $30 million Series B funding round.
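Recipes are activated declaratively through a rewrite.yml file in the project root – a sketch (the Spring Boot migration recipe name comes from the rewrite-spring module; check the recipe catalog for current names):

```yaml
# rewrite.yml – a custom recipe composed of existing community recipes
type: specs.openrewrite.org/v1beta/recipe
name: com.example.MyMigration
displayName: Example migration
recipeList:
  - org.openrewrite.java.spring.boot3.UpgradeSpringBoot_3_0
```

Running the OpenRewrite Maven or Gradle plugin then applies the composed recipe across the whole codebase.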
Project Leyden
Project Leyden is an OpenJDK initiative aimed at improving the startup time, peak performance, and memory footprint of Java applications by introducing a shift toward a more static, ahead-of-time (AOT) compilation model. Launched to address long-standing criticisms of Java’s slow warmup and resource demands, Leyden seeks to evolve the JVM by exploring “closed-world” assumptions – where the entire application is known at build time – allowing more aggressive optimizations than the traditional dynamic, “open-world” approach of the JVM.
It builds on existing technologies like the GraalVM Native Image and the Ahead-of-Time compilation in the JDK, but with a focus on integrating these ideas directly into OpenJDK. By pre-computing class loading, linking, and JIT compilation steps into a single executable or a pre-initialized state, Leyden aims to deliver faster startup and reduced runtime overhead, making Java more competitive with languages like Go or Rust for serverless and containerized environments.
The project operates in phases, starting with concepts like “condensers” that transform bytecode into optimized forms, potentially producing native binaries or cached runtime states, while preserving Java’s portability and security guarantees. Unlike tools like CRaC or InstantOn, which snapshot a running JVM, Leyden emphasizes build-time transformations, avoiding runtime snapshot dependencies. It’s still experimental, with no fixed release date as of early 2025, but it’s guided by a collaborative effort within the OpenJDK community to balance performance gains with Java’s flexibility.
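Leyden’s first concrete deliverable, the AOT cache from JEP 483 (shipped in JDK 24), already hints at the model – a sketch of the workflow (com.example.App and the file names are placeholders; the flags may still evolve):

```shell
# Training run: record which classes get loaded and linked
java -XX:AOTMode=record -XX:AOTConfiguration=app.aotconf -cp app.jar com.example.App
# Assemble the AOT cache from the recorded configuration
java -XX:AOTMode=create -XX:AOTConfiguration=app.aotconf -XX:AOTCache=app.aot -cp app.jar
# Production runs reuse the cache for noticeably faster startup
java -XX:AOTCache=app.aot -cp app.jar com.example.App
```

Unlike a CRaC snapshot, the cache holds pre-computed class-loading and linking work rather than a frozen process image, so it stays portable across runs.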
Level 5: The Bottom

You’ve reached a place where there is no bottom, only an eternal void, writhing with the most ephemeral of projects, ready to devour you the moment you dare to ask yourself, “But how would I use this in production?”
And then, you will become part of the abyss, not a programmer anymore, but a vessel for something that should never have been awakened. Knowledge has consumed you, and you… you will never return to write another controller.
Chicory
Chicory serves as a Java-powered interpreter for WebAssembly, allowing Wasm modules to run natively inside the JVM. Rather than converting Wasm into machine-level instructions like conventional compilers, it processes the bytecode directly, bypassing any extra compilation phases for a streamlined approach.
Built entirely on Java’s core library, Chicory avoids external dependencies altogether. This lean design boosts its portability across systems and slashes the chances of security flaws linked to third-party code, though it sacrifices some speed, as interpreted execution tends to lag behind the efficiency of pre-compiled binaries.
Manifold
On the top layer, we had Lombok, and now it’s time to focus on its more hidden companion.
Manifold is a Java compiler plugin that extends the language with a slew of advanced features such as metaprogramming, extension methods, operator overloading, properties, and templates. As an example, it allows developers to integrate various data formats (e.g., JSON, SQL, XML) directly in Java code in a type-safe way, eliminating the need for separate code generation steps.
A key advantage of Manifold is increased productivity by simplifying code and reducing boilerplate. For example, you can add your own methods to existing classes (e.g., String), use native SQL in Java code with immediate type safety, or write templates with full Java expression support.
Manifold differs from other tools like Lombok because it functions as a compiler extension rather than a separate language, although technically, it is more fragile because it relies on internal mechanisms of javac, which ties it to specific JDK versions.
TornadoVM
TornadoVM was born as a research initiative aimed at accelerating Java code execution on modern accelerators – specialized hardware, such as GPUs or FPGAs, designed to speed up specific computations, often through massive parallelism. The project started taking shape around 2018 as part of research at the University of Manchester.
The key challenge the developers set for themselves was to create a specialized just-in-time (JIT) compiler and multi-layered runtime logic that would automatically detect parts of the code suitable for parallel execution and offload them dynamically to the GPU (or other devices). Over time, TornadoVM has expanded to include additional backends (OpenCL, CUDA PTX, and SPIR-V), making it a universal environment for accelerating Java code on various types of hardware.
In simple terms: the goal of TornadoVM is to reduce the effort of developers who want to write in pure Java but still leverage computational power beyond traditional CPUs.
Project Babylon & HAT
Babylon, a new OpenJDK project, enhances Java reflection with Code Reflection, enabling deep analysis of methods, lambdas, and runtime behavior beyond simple class and field inspection. This facilitates advanced metaprogramming in pure Java, allowing dynamic code generation and modification. While distinct from TornadoVM’s hardware focus, Babylon’s reflective innovations suggest potential synergies for optimizing Java applications across diverse platforms.
Its subproject, HAT (Heterogeneous Accelerator Toolkit), targets GPU acceleration with an abstraction layer supporting CUDA, OpenCL, and Vulkan APIs. HAT lets developers define “code models” at compile time, which are translated into low-level OpenCL C or CUDA PTX at runtime. This enables parallel Java applications to harness GPU power efficiently, keeping development within the Java ecosystem, akin to TornadoVM’s goals but driven by reflection-based techniques.
The goal of Babylon and HAT is to empower developers to write high-performance, GPU-accelerated Java applications using Code Reflection’s flexibility and HAT’s hardware integration. This combination supports optimized solutions for tasks like scientific computing or real-time data processing, all in pure Java, without low-level expertise. Still evolving, Babylon could complement projects like TornadoVM, strengthening Java’s high-performance computing capabilities.