In a digital world far away from here, a Java application runs without problems. The logs are quiet, the servers are stable, and Emma and her team live in peace. But then the first attack arrives! Innocent at first: a little spike in memory use. But then another, and another. Within a couple of days, it turns into a disaster. The garbage collectors are going wild. Latency rises. The servers are shaking. The memory realm is flooded by invisible enemies: unnecessarily inflated object headers. They hide in every object, wasting valuable bytes and creating chaos in the heap.
This is the start of the memory wars!
Emma and her team are puzzled. Their code hasn’t changed. Their traffic is stable. Where does this growth come from? They dig into the heap and discover the hidden enemy: the unnecessarily inflated object headers.
What are object headers?
In Java, every object in memory carries a small amount of extra data known as an object header. This header stores important metadata that the JVM uses to manage the object. Typically, it includes things like a mark word (for garbage collection, synchronization and hash codes) and a class pointer (to identify the object’s class and its methods). While this metadata is crucial for the JVM’s functionality, it adds overhead. For example, each object can carry 12 to 16 bytes of header data, which adds up quickly in applications with millions of objects, leading to wasted memory and increased garbage-collection pressure.
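To get a feel for the scale, here is a rough back-of-the-envelope sketch (the class name and object count are illustrative, not measurements from Emma’s application):

```java
// A back-of-the-envelope estimate (not a JVM measurement): how much heap is
// spent on object headers alone when an application holds many small objects.
// The 16-byte figure matches a typical 64-bit JVM without compact headers.
public class HeaderOverheadEstimate {
    public static void main(String[] args) {
        long objectCount = 10_000_000L;   // ten million live objects
        long headerBytes = 16;            // classic header size on a 64-bit JVM
        long wasted = objectCount * headerBytes;
        System.out.printf("Header overhead: %d MB%n", wasted / (1024 * 1024));
        // → roughly 152 MB of heap spent on headers alone
    }
}
```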

When is an object header created?
An object header is created the moment an object is instantiated with the new keyword. When the JVM allocates memory for an object on the heap, it not only reserves space for the object’s fields but also for its header. This happens even before the object’s constructor finishes running. So, from the very beginning of an object’s life, the JVM is already tracking it with that metadata.
A silent war rages in their system. The enemy is invisible, but the damage is real. Emma’s team fights with everything they’ve got. They tweak the garbage collector. They analyse heap dumps. They try manual optimizations. Even object pooling isn’t working. But the object headers keep growing like an enemy army. Every object carries 16 bytes of unnecessary ballast. The servers are getting overloaded. The application is in danger of being defeated.
Emma stands before a hard choice. Does she keep fighting with old techniques and endless optimizations, or does she search for a new weapon, something that can turn this war around? The team feels the pressure of the deadlines and the increasing performance issues. There is tension in the air. The enemy seems stronger than ever, and the chance to get a grip on the situation slips away by the minute.
But deep in the Java 24 release notes, Emma discovers the secret weapon: JEP 450, compact object headers! (JEP stands for JDK Enhancement Proposal.)
Why is the compact object header slimmer?
The compact object header in Java 24 is slimmer because it trims away unused bits and optimizes how essential metadata is stored. For example, the old object header reserved bits for features like biased locking, which sped up synchronization but was disabled in Java 15. Those bits were just sitting there, useless. The compact header repurposes or discards these bits, packing necessary data, like class pointers and mark words, more efficiently. This tighter layout reduces the header size from 12 or 16 bytes to as little as 8 bytes in some cases, meaning less memory overhead per object and better cache locality for faster access.

This is no ordinary update. This is a strategic breakthrough in the war against memory waste. Emma takes her chance. She starts the application with this new weapon:
java -XX:+UnlockExperimentalVMOptions -XX:+UseCompactObjectHeaders -jar mijn-geweldige-craftsmen.jar
The -XX:+UnlockExperimentalVMOptions flag is needed because compact object headers are still an experimental feature in Java 24.
The clock ticks. Emma runs her application with the new weapon activated. She sees immediate change. Memory usage slims down. The object headers shrink, freeing up space in the heap. The system, which was about to collapse under the pressure, finally breathes a sigh of relief.
With the JEP 450 flag enabled, the JVM reorganizes how object-header metadata is stored. It compresses this metadata, so headers can shrink by up to 50%, reducing memory waste, improving cache efficiency, and lowering garbage-collection activity. Emma watches her metrics. Memory usage drops, garbage-collection pauses shorten, and response times improve. Without touching a single line of code, her application has transformed into a lean, efficient machine. All thanks to the power of compact object headers!
The memory wars teach us an important lesson: a silent war is raging in every application. It’s not just about writing good code, but also about using the opportunities the JVM gives us. Unused bytes and unnecessary overhead can grow into a big problem when they aren’t addressed.
Slow startup time
The dust had barely settled from the Memory Wars. The team felt victorious. But in software, peace is fleeting.
It was a regular Friday afternoon. The team was winding down, wrapping up their last commits, and debating what everybody was going to eat that night. Emma, always the curious one, lingered a little longer at her desk, when John, the backend engineer with an uncanny ability to break things, let out a dramatic sigh.
“Our app takes forever to start. I wish we could fix that!” he blurted, spinning slowly in his chair.
Emma chuckled. “It’s not that bad, right?”
John stopped spinning and stared at her. “I went for a coffee, came back, and it was still booting. And I make slow coffee!”
Everyone laughed, but Emma couldn’t shake the thought. The weekend arrived, but her mind kept replaying John’s words. Could they really fix the startup time? And why did she care so much? Was it just professional pride, or maybe something else?
Saturday morning, Emma sat at her desk with a fast coffee in her hand, determined to crack the mystery. She dove into research, digging through JVM documentation like an archaeologist searching for lost treasure. What she found was fascinating.
How does class loading work?
The JVM, like an overcautious librarian, loads classes on demand. Whenever the application references a class, the JVM scrambles to find and load it, resolving dependencies as it goes. In small applications, this isn’t a big deal. But in large, complex systems, especially microservices, this class-by-class loading adds serious startup lag. It’s like inviting guests to a party and making each one wait at the door while you look up their name on a list. Emma leaned back, frowning. “What if we could load the guests before the party starts?”
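A small sketch of this on-demand behaviour (class names are made up for illustration): the nested class’s static initializer only runs when the class is first actively used, not when the application starts.

```java
// Demonstrates lazy class loading: the Guest class is not initialized at
// startup, only when it is first actively used (here, the greet() call).
public class LazyLoadingDemo {
    static class Guest {
        static {
            // Runs exactly once, when the JVM loads, links and initializes Guest.
            System.out.println("Guest class loaded (the librarian just fetched it)");
        }
        static String greet() { return "hello"; }
    }

    public static void main(String[] args) {
        System.out.println("Application started; Guest not loaded yet");
        // First active use of Guest triggers loading, linking and initialization:
        System.out.println(Guest.greet());
    }
}
```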

JEP 483: Ahead-of-Time class loading & linking
By Sunday afternoon, Emma had discovered her golden ticket: JEP 483, Ahead-of-Time class loading & linking. JEP 483 introduced a way to optimize class loading by letting the JVM load and link classes ahead of time. Instead of waiting to load each class on demand, the JVM can preload and prepare them during a recorded training run, reducing the work needed at runtime. It’s like the JVM setting up the entire party room in advance, so when the application starts, it can just swing the doors open and let everyone in without delay.
Emma’s heart raced. She could see the solution taking shape. She didn’t text John. Not yet. She wanted to test it herself.
Sunday evening. Emma’s living room was dark except for the glow of her screen. She started by packaging her hobby project:
mvn clean package
Running an executable JAR in production isn’t always ideal, so she created an exploded JAR instead:
java -Djarmode=tools -jar target/this-amazing-application.jar extract
With her app ready, she measured the baseline startup time:
java -jar this-amazing-application.jar
The app started in 2.09 seconds. Not bad for a small side project. But Emma wanted more speed. Next, she recorded a training run of the application to capture the loaded classes:
java -XX:AOTMode=record -XX:AOTConfiguration=app.aotconf \
  -cp this-amazing-application.jar com.emmasworld.common.ThisAmazingApplication
Note: the -XX prefix does not mean these options are experimental. In Java 24, JEP 483 ships as a standard feature, so no -XX:+UnlockExperimentalVMOptions is needed for the AOT flags.
After using the app, clicking here and there to load the necessary classes, Emma created the ahead-of-time (AOT) cache:
java -XX:AOTMode=create -XX:AOTConfiguration=app.aotconf \
  -XX:AOTCache=app.aot -cp this-amazing-application.jar
Finally, she ran the app with the ahead-of-time cache:
java -XX:AOTCache=app.aot -cp this-amazing-application.jar com.emmasworld.common.ThisAmazingApplication
It worked! The app now started in 1.09 seconds. Almost twice as fast. Emma jumped up, startling her cat awake with her victory cheer.
The Rust temptation
The team basked in the glow of their recent success, having nearly halved their Java application’s startup time. Yet John, ever the performance enthusiast, remained restless.
One morning, he casually floated an idea during the daily stand-up:
“What if we rewrote the performance-critical parts of our application in Rust?”
The suggestion hung in the air, met with raised eyebrows and thoughtful nods. Rust, with its promise of memory safety and zero-cost abstractions, had been making waves in the systems programming world. John argued that its efficiency could push their application’s performance to new heights.
Emma listened intently but felt a pang of hesitation. Transitioning to Rust would mean a steep learning curve and potential integration challenges. She wondered if there was a way to achieve similar performance gains within the Java ecosystem they were already familiar with.
Rust is fast and handles threads well. But managing millions of threads manually? That’s like untangling a pile of yarn by hand. Meanwhile, Emma’s cat, oblivious to the elegance of threads, got tangled in real-life threads from her knitting project. As Emma carefully freed him, she laughed. “Let’s stick with Java,” she said, watching her cat bat at the remaining yarn. “It’s less of a headache for both of us.”
JEP 491: Synchronize Virtual Threads without Pinning
Determined to explore all avenues, Emma delved into recent advancements in Java. Her research led her to JEP 491: Synchronize Virtual Threads without Pinning. This enhancement aimed to improve the scalability of Java applications by allowing virtual threads to release their underlying platform threads when blocked in synchronized methods or statements. This change would enable applications to handle a massive number of threads more efficiently.
Prior to this enhancement, virtual threads were pinned to platform threads when entering synchronized blocks, limiting scalability. With JEP 491, this limitation was addressed, allowing virtual threads to unmount even within synchronized contexts, thus freeing up platform threads for other tasks.
Emma saw potential in this approach. By leveraging virtual threads, they could achieve high concurrency without the complexities of integrating a new programming language.
How threads work
In classic Java, when you create a thread, it maps directly to an operating system (OS) thread. The JVM asks the OS to create a “platform thread”, which is a wrapper around a real kernel-level thread. The OS manages this thread, switching between different threads when needed, a process called “context switching”. But these platform threads are heavy: they use a lot of memory, and the OS can only handle so many at a time. If a thread waits (for a database call, say), the platform thread is stuck doing nothing, wasting resources.
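A minimal sketch of the classic model (names are illustrative): each `new Thread` here is backed one-to-one by an OS thread, with its own sizeable stack.

```java
// A classic platform thread: the JVM asks the OS for a kernel-level thread,
// which exists for the lifetime of this Java thread.
public class PlatformThreadDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() -> System.out.println("Hello from a platform thread"));
        t.start();
        t.join();  // wait for the OS-backed thread to finish
    }
}
```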

How virtual threads work
Virtual threads change the game! Instead of each Java thread using an OS thread, the JVM creates lightweight virtual threads. These virtual threads run on top of a small pool of platform threads, like passengers taking turns on a limited number of seats. When a virtual thread needs to wait, Java pauses it, freeing up the platform thread to handle another task. When the waiting is over, Java resumes the virtual thread right where it stopped. This approach massively reduces memory use and lets Java apps run millions of threads without overwhelming the OS. It’s a simpler, more efficient way to handle lots of concurrent tasks!
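A hedged sketch of what this looks like in code (Java 21 or later; the task count and sleep duration are arbitrary): ten thousand blocking tasks run concurrently on a handful of carrier threads.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

// Launch 10,000 virtual threads that all block briefly. Each sleep unmounts
// the virtual thread from its carrier, so this does not need 10,000 OS threads.
public class VirtualThreadDemo {
    public static void main(String[] args) {
        AtomicInteger done = new AtomicInteger();
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 10_000; i++) {
                executor.submit(() -> {
                    try {
                        Thread.sleep(10);  // blocking call: carrier thread is freed
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                    done.incrementAndGet();
                });
            }
        }  // close() waits for all submitted tasks to finish
        System.out.println("Completed: " + done.get());
    }
}
```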

If a synchronized method blocks during a read operation, the JVM pins the virtual thread to its platform thread, consuming those resources until data is available. Such blocking limits scalability, as pinned virtual threads occupy platform threads that could otherwise support additional virtual threads. JEP 491 proposes allowing virtual threads to unmount even within synchronized blocks, freeing their platform carriers to execute other virtual threads.

For example, a virtual thread that calls Object.wait() in a synchronized block will unmount, and remount when signalled to continue.
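A sketch of that wait/notify pattern (class and field names are invented for illustration; virtual threads need Java 21+, and the non-pinning behaviour described here is what JEP 491 brings in Java 24):

```java
// A virtual thread waits inside a synchronized block; on Java 24 (JEP 491)
// the blocked virtual thread unmounts from its carrier instead of pinning it,
// and remounts when notified.
public class WaitNotifyDemo {
    private static final Object lock = new Object();
    private static boolean signalled = false;

    public static void main(String[] args) throws InterruptedException {
        Thread waiter = Thread.ofVirtual().start(() -> {
            synchronized (lock) {
                while (!signalled) {
                    try {
                        lock.wait();  // with JEP 491, no longer pins the carrier
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
                System.out.println("signalled, resuming inside synchronized");
            }
        });

        synchronized (lock) {
            signalled = true;  // guarded flag avoids a lost-notify race
            lock.notify();
        }
        waiter.join();
    }
}
```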
With this approach, applications can scale better without requiring developers to abandon synchronized in favour of java.util.concurrent locks.
Experimenting with JEP 491
Eager to validate her findings, Emma proposed an experiment. They would refactor a performance-critical section of their application to utilize virtual threads and observe the impact. The team agreed, intrigued by the possibility.
Over the next week, they refactored the application’s data processing module to use virtual threads. This involved replacing traditional thread pools with virtual threads, allowing each task to run independently without consuming significant system resources.
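The refactor itself might look something like this sketch (module and method names are hypothetical, not from Emma’s codebase): swapping a fixed platform-thread pool for a virtual-thread-per-task executor is often a one-line change.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Before/after sketch of the data-processing refactor: a bounded platform-thread
// pool is replaced by one cheap virtual thread per task.
public class DataProcessingModule {
    public static void main(String[] args) {
        // Before: a bounded pool — tasks queue up when all 16 threads are busy.
        // ExecutorService pool = Executors.newFixedThreadPool(16);

        // After: one virtual thread per task, no queueing behind a pool size.
        try (ExecutorService pool = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 100; i++) {
                final int task = i;
                pool.submit(() -> process(task));
            }
        }  // close() waits for all tasks to complete
        System.out.println("all tasks submitted and completed");
    }

    static void process(int id) {
        // placeholder for I/O-bound work per task
    }
}
```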
Upon deployment, the results were impressive. The application’s throughput increased by 50%, and resource utilization dropped significantly. Tasks that previously waited for thread availability now executed promptly, leading to a smoother and more responsive system.
Empowering a Team of Java Explorers
Emma’s relentless curiosity didn’t just speed up their app, it sparked something bigger. Her team, energized by the success with virtual threads and compact headers, started hosting regular “JEP Fridays.” Every week, someone would dive into a new JDK Enhancement Proposal, experimenting, sharing findings, and brainstorming ways to improve their system. Even failed experiments were valuable, turning into mini-knowledge sessions that helped everyone level up.
The Power of Shared Knowledge
What started as a quest to fix startup times became a culture shift. Emma, once just the developer who stayed late out of curiosity, became a mentor who encouraged others to explore, break things, and learn. Because that’s the magic of knowledge-sharing. It turns individual discoveries into team superpowers. And while they never rewrote the app in Rust, John did knit Emma’s cat a tiny sweater out of leftover thread.
Written by Lutske de Leeuw, software engineer at Craftsmen.