The Past, The Present and The Future
Over the last 30 years, Java has evolved from an exotic “write once, run anywhere” language to one of the dominant platforms for software development worldwide. In the early years, Java was justifiably considered slow compared to languages such as C/C++, which was mainly due to the initial interpreter approach. However, the last three decades have shown that the VM concept with HotSpot’s adaptive optimization is clearly the superior approach.
by Ingo Düppe
Earlier JVM versions executed bytecode in a purely interpreted manner, which made Java programs 10 to 20 times slower than equivalent C code. So, performance was a challenge from the beginning, but continuous optimizations have drastically improved execution speed since then.
We take this opportunity to provide a detailed look at the evolution of Java performance. We look back at the past – from the first JVMs of the 90s to the introduction of the JIT compiler and early garbage collector strategies. We then look at the present of modern JVMs: the HotSpot engine, current tiered compilers (C1, C2 and GraalVM), advanced GCs such as G1, Shenandoah and ZGC, improvements in threading (e.g. Project Loom) and memory optimizations.
A look into the future shows what developers can expect with Loom, CRaC, native compilation and new projects. Finally, a few practical tips to avoid getting lost in the Java performance universe.
Past: The beginnings of Java performance
Java 1.0 and 1.1 in the years 1996 – 1997 relied exclusively on interpreters. The Java bytecode was therefore emulated command by command at runtime instead of being translated into native machine code. This approach ensured portability but led to considerable overhead. Average Java applications initially ran 10 to 20 times slower than programs written in C. Developers in the 90s therefore often derided Java as “too slow for serious applications”.
A just-in-time (JIT) compiler was first introduced in 1997 with Java 1.1. A JIT compiler dynamically translates frequently executed bytecode sequences into native code in order to significantly speed up execution. However, early JIT compilation was itself computationally expensive, so it could only be applied sparingly at first. The real turning point came from the HotSpot team: their technology was initially made available as an option for Java 1.2 in 1999 and became the default as of Java 1.3 (2000) as the HotSpot JVM. HotSpot introduced adaptive optimization: the JVM observes the code at runtime, identifies “hot spots” (frequently used code paths) and selectively compiles them with the JIT, while less critical code can remain interpreted. This selective approach minimized the compilation effort and led to massive speed gains – benchmarks showed up to a 10-fold acceleration through HotSpot compared to purely interpreted code.
Also worth mentioning from this era is the threading model. Platform independence initially imposed restrictions on the use of operating system threads. Java 1.1 used green threads on some platforms – i.e. threads managed by the JVM runtime system in user space instead of native OS threads. Although this allowed threading even on systems without their own thread support and had low costs for thread switches, it did not scale on multi-processor systems. All green threads of a process shared one kernel thread, so one blocking operation (e.g. I/O) could freeze the entire VM.
Java 1.3 therefore initiated the switch to native threads, which drastically improved parallelism on multi-core systems – albeit at the cost of slightly higher thread creation and context switching costs. Interestingly, Java has recently returned to the concept of ultra-light runtime-managed threads with Project Loom – more on this later.
Memory management and garbage collection (GC) of the 90s
Another key performance issue in the early days was memory management. Java freed developers from manual memory allocation and deallocation, reducing programming errors but transferring the responsibility for memory cleanup to the JVM. The very first JVMs (1.0, 1.1) used a simple mark-and-sweep GC that periodically scanned the heap, marked unused objects as garbage and freed their memory. This procedure often led to heap fragmentation. Moreover, collection ran stop-the-world – the entire application paused while the GC was working, which caused noticeable lags – and the GC did not scale with growing heap sizes.
With Java 1.2 (1998), the generational GC was introduced. The weak generational hypothesis states that most objects are very short-lived and only a few objects live very long. The JVM accordingly divided the heap into a young and an old generation, cleaning the young generation (which holds the majority of the short-lived garbage) very frequently and the old generation correspondingly rarely. Copying-collector approaches additionally reduced fragmentation, since surviving objects are compacted as they are copied. This generational GC design proved to be significantly more performant: large heaps stayed better compacted and memory waste was reduced. Despite these advances, GC pauses remained a problem in large Java applications of the late 1990s – concurrency within the GC itself was still rudimentary, so applications stalled noticeably during memory cleanup.

Weak Generational Hypothesis – most objects die young
The challenge of platform independence
All in all, the late 1990s were a time when fundamental performance techniques were just maturing for Java. Many challenges arose directly from platform independence: every advantage had to be achieved at runtime and without deep integration into the operating system. Early versions of Java suffered from slow GUI performance because AWT/Swing used platform-neutral code instead of native widgets. File and network accesses were noticeably slower than C APIs due to abstract streams and security checks. The JNI overhead made native libraries inefficient, which is why performance-critical code was often written in C/C++.
Differences in thread scheduling made optimal tuning difficult, as early JVMs did not map Java threads 1:1 to OS threads. Nevertheless, these years laid the foundation for later optimizations: With JIT, generational GC and native threads, Java became significantly faster and more scalable around 2000, a trend that still characterizes the platform today.
Present: Optimized Performance in Modern JVMs
Today’s Java versions (Java 17 to 23+) feature highly optimized virtual machines that are the result of decades of research and development.
HotSpot Engine and Adaptive Optimization
For over 20 years, the HotSpot JVM has been the foundation of the Java SE platform. True to its name, it identifies hotspots in the code and aggressively optimizes them at runtime. It combines a bytecode interpreter with two JIT compilers: C1, which compiles quickly with limited optimizations, and C2, which generates highly optimized machine code for long-running server applications.
Tiered Compilation leverages both compilers in stages: methods start out interpreted, C1 takes over after a few invocations, and frequently used code is eventually optimized by C2. This approach combines fast warmup with maximum peak performance. During execution, HotSpot dynamically optimizes using techniques such as inlining, loop unrolling, and peephole optimizations, adapting based on runtime profiles. Speculative optimizations make assumptions about the code that are automatically rolled back (deoptimized) if they prove invalid.
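To see tiered compilation at work, a tiny experiment suffices. The following throwaway class (Hot and its loop are invented for illustration) can be launched with the diagnostic flag -XX:+PrintCompilation, which logs every JIT compilation together with its tier:

// Hypothetical demo: run with `java -XX:+PrintCompilation Hot.java` and watch
// square() appear first at tier 3 (C1 with profiling), later at tier 4 (C2).
public class Hot {
    static long square(long x) {
        return x * x; // becomes "hot" after a few thousand invocations
    }

    public static void main(String[] args) {
        long sum = 0;
        for (int i = 0; i < 1_000_000; i++) {
            sum += square(i); // repeated calls trigger C1, then C2 compilation
        }
        System.out.println(sum); // keeps the loop from being eliminated as dead code
    }
}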
A milestone was the introduction of escape analysis in Java 6/7. This technique identifies objects that do not escape a certain scope and optimizes their memory management: objects can be allocated on the stack instead of the heap or completely eliminated through scalar replacement, thereby reducing the GC load.
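A hedged sketch of what escape analysis enables (whether scalar replacement actually fires depends on the JIT and is best verified with allocation profiling, e.g. in JFR): in the following code the temporary Point never leaves the method, so the C2 compiler may allocate nothing at all.

// Sketch: 'p' never escapes sum(), so HotSpot can scalar-replace it –
// the fields live in registers and no heap allocation occurs.
public class EscapeDemo {
    record Point(int x, int y) {}

    static long sum(int n) {
        long total = 0;
        for (int i = 0; i < n; i++) {
            Point p = new Point(i, i + 1); // candidate for scalar replacement
            total += p.x() + p.y();        // fields consumed immediately; object dies here
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sum(10_000_000));
        // For comparison, run with -XX:-DoEscapeAnalysis and watch the
        // allocation rate rise.
    }
}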
Thanks to these optimizations, HotSpot Java achieves near-native performance in many scenarios. As early as 2008, benchmarks demonstrated that advanced JIT techniques allowed Java to lag behind C++ by only 10 – 30%, sometimes matching or even outperforming it due to runtime optimizations unavailable to static compilers.
Modern Garbage Collection: G1, ZGC, Shenandoah & Co.
JVM garbage collectors have made significant strides in recent years. Alongside the classic Serial and Parallel GCs, CMS introduced concurrent GC phases for the first time, though it suffered from shortcomings such as heap fragmentation. With Java 7/8, G1 GC emerged as a modern replacement and became the default in Java 9, while CMS was deprecated and finally removed in Java 14.
G1 GC operates on a regional basis, targeting the most memory-intensive regions for cleanup to keep configurable pause times (typically around 200 ms by default) within limits. Through concurrent marking and selective evacuation, G1 strikes a good balance between throughput, moderate pauses, and minimal tuning effort, making it especially suitable for medium to large heaps.
For even lower latencies, Shenandoah and ZGC were developed. Shenandoah reduces stop-the-world pauses by performing fully concurrent heap compaction using Brooks pointers, keeping pauses in the sub-10-ms range even for 200-GB heaps – albeit without generational behavior in its early versions. ZGC follows a similar approach using colored pointers, which embed state information directly into pointers to efficiently manage object movements. It guarantees pauses under 10 ms, regardless of heap size. With Java 21, Generational ZGC was introduced to collect short-lived objects more efficiently and further boost throughput.
Today, the JVM offers a wide range of GC algorithms that can be chosen according to the specific application needs. While Oracle uses G1 as the standard and is looking to generational ZGC for the future, Red Hat often prefers Shenandoah. Thanks to these advances, Java can perform efficiently even with terabyte-sized heaps, without long stop-the-world pauses becoming an issue.
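Selecting a collector is a single startup flag. The following launch lines are illustrative only (flag names as of JDK 21; app.jar is a placeholder):

java -XX:+UseG1GC -XX:MaxGCPauseMillis=100 -jar app.jar   # balanced default with a tunable pause goal
java -XX:+UseZGC -XX:+ZGenerational -jar app.jar          # ultra-low pauses; generational mode from JDK 21
java -XX:+UseShenandoahGC -jar app.jar                    # low pauses, e.g. in Red Hat builds
java -XX:+UseParallelGC -jar app.jar                      # maximum throughput for batch jobs
java -XX:+UseSerialGC -jar app.jar                        # small heaps and short-lived tools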
Threading and Concurrency: From Fork/Join to Loom
Since Java 5 (2004) and the java.util.concurrent package, a lot has happened in terms of parallelization: thread pools, locks, atomic variables and concurrent collections enable modern multi-core use. Java 7 added the fork/join framework (JSR 166y) for recursive parallelization, and from Java 8 onwards, parallel streams and lambdas further simplified this approach. At the same time, improvements such as lock striping and more efficient data structures (such as ConcurrentHashMap with block-wise locking) were developed. Java 9 followed with VarHandle and spin-wait hints via Thread.onSpinWait(), which facilitate high-performance concurrency. A typical spin-wait loop looks like this:
while (!lock.isAvailable()) {
    Thread.onSpinWait(); // Gives the CPU a hint that this is a busy-wait loop
}
In addition, thanks to lock elision via escape analysis, synchronization can be removed entirely when a lock object is provably used by only one thread. Nevertheless, one hurdle remained: each Java thread corresponds to an operating system thread, which leads to high memory requirements (typically 1 MB of stack per thread) and expensive context switches at very large thread counts (roughly above 100,000). Scheduling then no longer scales linearly, and the limits of the OS are reached.
This was one motivation for Project Loom, which is arguably the most significant upheaval in Java’s concurrency model in decades. Project Loom introduced Virtual Threads (also known as fibers), which have been available as a preview since Java 19 and were finalized in Java 21.
Virtual threads are ultra-lightweight threads managed by the JDK that are not permanently bound to kernel threads. They can be seen as a modern equivalent to green threads—but without their drawbacks.
The idea is that a Java program can create hundreds of thousands of concurrent threads without overloading the operating system. Virtual threads are scheduled by the JVM and mapped onto a smaller number of kernel threads (so-called carrier threads). When a virtual thread blocks (for example, during I/O or Thread.sleep()), its carrier thread is not blocked with it – the JVM decouples the carrier and assigns it to another virtual thread while the blocked one waits in the background. As a result, blocking calls effectively become asynchronous automatically, without the programmer having to write complex callback or future logic.
In practice, Loom enables the well-known thread-per-request model to scale to high levels. Until now, handling thousands of simultaneous connections (e.g., in a web server) required resorting to asynchronous I/O or reactive programming (Netty, Vert.x, etc.) to avoid blocking an equal number of OS threads. With virtual threads, each request can now be handled in its own (virtual) thread – using simple, synchronous code – while the JVM ensures that the hardware is optimally utilized. Tests show that millions of sleeping virtual threads cause virtually no issues. IO-intensive applications also benefit enormously: In one experiment with 1 million parallel HTTP requests, the new virtual threads were able to handle the load with very little overhead, whereas 1 million traditional threads would have rendered the system unusable.
An example illustrates how virtual threads are handled in Java 21:
try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
    IntStream.range(0, 100_000).forEach(i ->
        executor.submit(() -> {
            try {
                Thread.sleep(1_000); // static method; each virtual thread sleeps for 1 s
                System.out.println("Thread #" + i + " done");
            } catch (InterruptedException e) {
                throw new RuntimeException(e);
            }
        })
    );
}
In this code, an ExecutorService is created that starts a new virtual thread for each submitted task. We submit 100,000 tasks, each of which sleeps for one second and then prints a message. This would be unfeasible with traditional threads – simply starting 100,000 OS threads would demand immense resources. With virtual threads, however, it is possible: the JVM multiplexes them onto a few dozen carrier threads, according to the number of available CPU cores.
For Java developers, Loom means that the complexity of concurrent programming can be significantly reduced: Many problems that were previously solved using asynchronous callbacks or reactive streams (to save threads) can now be written as simple sequential logic – and still scale to massive parallelism. It is important to note that virtual threads are not a magic bullet: They do not speed up CPU-bound workloads (here, more cores or better parallel algorithms are still necessary). However, IO-bound workloads and systems with high concurrency demands will be considerably easier to write and maintain.
Java objects and memory consumption
A key aspect of Java performance is the memory requirements of objects and the memory organization (heap vs. stack).
- Objects & Heap: Java objects carry overhead (class pointer, synchronization state, etc.); on 64-bit JVMs, the header typically occupies 12 to 16 bytes per object. Padding may be added on top, as objects are usually aligned to multiples of 8 bytes. Java 6 introduced compressed OOPs (Ordinary Object Pointers): if the heap is smaller than ~32 GB, the JVM can use 32-bit instead of 64-bit pointers, saving 4 bytes per reference field. If more memory is needed, the alignment can be raised to 16 bytes (-XX:ObjectAlignmentInBytes=16), so that heaps of up to 64 GB still use compressed pointers. As a rule of thumb, switching from compressed 32-bit to full 64-bit pointers costs about 1.7 times the memory.
- String: Since Java 9, the JVM uses compact strings internally: if a string contains only Latin-1 characters (which covers ASCII), its characters are stored in a byte[] (8 bits per character) instead of the previous char[] (16 bits per character). This saves nearly half the memory for the many strings that are typically ASCII (e.g. JSON texts, protocol messages). In individual applications, a performance gain of ~40% was observed simply by upgrading from Java 8 to 9.
- Metaspace: Java 8 replaced PermGen with Metaspace, which is managed in native memory (off-heap). Metaspace grows dynamically and eliminates the old problem of a fixed-size permanent generation, which previously could lead to OutOfMemoryError. This brought no direct acceleration, but it increased stability and simplified tuning.
- Heap vs. off-heap: In addition to the regular Java heap (for objects), the JVM uses native memory in several places: the Metaspace (see above), DirectByteBuffer allocations for NIO (keeping byte buffers outside the heap avoids expensive copies into kernel space), and C heaps used by JNI libraries. Targeted use of off-heap memory can therefore relieve the GC – at the price of more complex memory management.
- Stack allocation: Using escape analysis (see above), the JVM tries to avoid heap allocation where it can prove it unnecessary. Effectively, this already provides a form of value types for short-lived objects: the object lives implicitly on the stack and is discarded with the method frame, entirely without GC.
In summary, a lot has been done to the memory layout to reduce the footprint of Java applications. This is not just about saving memory: less occupied memory and less fragmentation also mean less work for the GC and better use of CPU caches. More compact objects improve cache locality, for example, which leads to noticeable speedups, especially with large data structures. Developers should nevertheless make conscious decisions: choosing primitive arrays over object lists, using Strings judiciously, and knowing the cost of autoboxing. The JVM mitigates a lot, but a basic understanding of memory usage helps to avoid performance bottlenecks.
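Anyone who wants to verify such layout details can use the OpenJDK tool JOL (“Java Object Layout”), which prints the actual header, field offsets and padding of a class. A minimal sketch (assuming the org.openjdk.jol:jol-core dependency is on the classpath; LayoutDemo and Point are made-up names):

import org.openjdk.jol.info.ClassLayout;
import org.openjdk.jol.info.GraphLayout;

// Minimal JOL sketch: inspect header, field layout and padding of real objects.
public class LayoutDemo {
    static class Point { int x; int y; } // hypothetical example class

    public static void main(String[] args) {
        // Prints offsets, sizes and alignment gaps of Point instances
        System.out.println(ClassLayout.parseClass(Point.class).toPrintable());

        // Total retained size of an object graph – compare Integer vs. plain int
        System.out.println(GraphLayout.parseInstance(Integer.valueOf(42)).totalSize());
    }
}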
JVM Implementations: HotSpot, OpenJ9, Azul Prime, GraalVM
HotSpot (OpenJDK/Oracle JDK) is the most widely used JVM, but there are powerful alternatives with specific advantages:
- Eclipse OpenJ9, IBM’s open-source JVM, is optimized for fast startup and low memory usage. With IBM’s Testarossa JIT and its own GC, it’s ideal for cloud environments with limited RAM, though it slightly lags behind HotSpot in peak throughput.
- Azul Platform Prime (formerly Zing) is designed for low-latency applications. Its pauseless C4-GC minimizes pauses even with large heaps, while the LLVM-based Falcon JIT often produces more efficient code than HotSpot C2. Azul has been using these technologies in production for over a decade, making it a trusted solution for latency-critical systems.
- GraalVM is an extended JDK distribution from Oracle Labs with polyglot support for JavaScript, Python, R, Ruby, and WebAssembly. Its Native Image feature enables ahead-of-time compilation, allowing Java applications to start instantly and consume less RAM. While ideal for serverless and FaaS workloads, a well-optimized HotSpot VM often delivers better latency under sustained load.
These alternatives demonstrate that Java is no longer a monolithic system but an ecosystem of diverse VM technologies. GraalVM Native Image has gained traction in the cloud era, making Java more attractive for fast-starting CLIs and FaaS. Azul Prime remains relevant for high-frequency financial systems, while OpenJ9 is a strong option for memory-sensitive workloads. The diversity of JVMs fosters innovation and continuously drives performance improvements.
Milestones of the Java versions (performance-relevant)
Finally, in this present block, a compact overview of some important versions and their performance innovations:
| Version | Year | Key Performance Features |
| --- | --- | --- |
| Java 5 (Tiger) | 2004 | java.util.concurrent (thread pools, futures), 64-bit JVM, improved JIT, CMS GC (low-pause collector as an option) |
| Java 6 (Mustang) | 2006 | Faster JIT compilation, escape analysis (experimental), Compressed OOPs, Parallel Old GC, improvements in synchronization |
| Java 7 (Dolphin) | 2011 | Fork/join framework, G1 GC (experimental, official from 7u4), invokedynamic, NIO.2 (asynchronous channels), tiered compilation (C1+C2) |
| Java 8 (Spider) | 2014 | Lambdas & streams (easier parallelization), Metaspace instead of PermGen, tiered compilation activated by default |
| Java 9 (Jigsaw) | 2017 | G1 GC as default, Compact Strings (JEP 254), Flight Recorder (commercial in Oracle JDK, free from OpenJDK 11), AOT compilation (experimental), spin-wait hint |
| Java 11 (LTS) | 2018 | ZGC (experimental), Epsilon GC (no-op), Flight Recorder + Mission Control in OpenJDK, new HTTP client (better performance than HttpURLConnection) |
| Java 15 | 2020 | Shenandoah GC integrated into OpenJDK (JEP 379); ZGC production-ready (experimental flag dropped, JEP 377); text blocks (syntax only, no performance impact); hidden classes (slightly more efficient for frameworks such as bytecode generators) |
| Java 17 (LTS) | 2021 | Stronger encapsulation in the JDK (security vs. performance), new platform intrinsics (e.g. CRC32, GHASH), Shenandoah in Oracle JDK, consolidated updates from 11–16 |
| Java 19 | 2022 | Virtual threads (preview) for high I/O parallelism, structured concurrency (incubator), Foreign Function & Memory API (preview) for faster native/off-heap access |
| Java 21 (LTS) | 2023 | Virtual threads GA (JEP 444), Generational ZGC (opt-in, JEP 439), overall a big step through Loom and improved GC |
| Java 22 | 2024 | Region pinning for G1 (JEP 423): no more GC stalls around JNI critical regions; Foreign Function & Memory API final (JEP 454): more efficient native interop; Vector API (JEP 460, 7th incubator): SIMD speedups for data-parallel code; Stream Gatherers (JEP 461, preview): custom intermediate stream operations |
| Java 23 | 2024 | Vector API (JEP 469, 8th incubator): accelerates data-parallel tasks via SIMD; Stream Gatherers (JEP 473, second preview); Generational ZGC by default (JEP 474): more efficient handling of short-lived objects |
| Java 24 | 2025 | Generational Shenandoah and Compact Object Headers (both experimental), G1 improvements that reduce GC overhead, faster startup via ahead-of-time class loading (Project Leyden), Stream Gatherers final (JEP 485), Vector API (9th incubator) |
You can see that Java's performance has improved in almost every version, either directly (a faster JVM) or indirectly through new language and library features that enable more efficient code. The LTS versions (8, 11, 17, 21) in particular often delivered the experimental features of the intermediate releases in stable form. It is always worth using the latest Java version in order to benefit from all optimizations – the JVM-level gains apply immediately, without recompiling any bytecode.
Future: What’s Next for Java?
The Java platform continues to evolve, with several key projects shaping its future performance and scalability:
- Project Loom & Virtual Threads: Now part of Java 21, Loom will take time to be fully adopted as frameworks like Tomcat, Jetty, Spring, and Quarkus integrate it. Virtual threads could eventually displace reactive programming models by simplifying concurrency, allowing thread-per-request architectures to scale efficiently. Structured concurrency will further enhance clarity in concurrent code. Virtual threads are ideal for I/O-bound tasks; CPU-heavy workloads benefit less, as they remain limited by the available cores.
- CRaC (Coordinated Restore at Checkpoint): OpenJDK’s CRaC project aims to dramatically reduce startup time by snapshotting a “warmed-up” JVM state, which can be restored in milliseconds. This is highly relevant for cloud environments where rapid scaling is needed. Applications must adapt by releasing resources such as open files and sockets before a snapshot (see the sketch after this list). Currently, CRaC is Linux-only and still in development, but it presents an alternative to GraalVM Native Image by accelerating cold starts without losing JIT optimizations.
- Native Compilation & Project Leyden: Inspired by GraalVM Native Image, Project Leyden seeks to standardize ahead-of-time (AOT) compilation for Java. The goal is to tackle slow startup times, particularly for cloud-based applications, while addressing current limitations like restricted reflection and dynamic loading. Future Java versions may introduce separate JIT and AOT modes, making Java binaries more common in environments where fast startup is crucial.
- Project Valhalla (Value Types): Future Java releases will likely introduce value types, allowing objects to be stored flat in arrays or containers, improving cache efficiency and reducing pointer overhead. Though still in development, Valhalla will further optimize Java’s memory footprint and performance.
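To give an impression of the CRaC programming model mentioned above: with the org.crac compatibility API, an application registers resources that are notified around a checkpoint. A minimal sketch (assuming the org.crac library on the classpath; PooledConnections and its helper methods are hypothetical):

import org.crac.Context;
import org.crac.Core;
import org.crac.Resource;

// Hypothetical resource that closes its connections before a checkpoint
// and reopens them after restore.
public class PooledConnections implements Resource {

    public PooledConnections() {
        Core.getGlobalContext().register(this); // the JVM will call the hooks below
    }

    @Override
    public void beforeCheckpoint(Context<? extends Resource> context) throws Exception {
        closeAllConnections(); // open sockets must not survive the snapshot
    }

    @Override
    public void afterRestore(Context<? extends Resource> context) throws Exception {
        reopenConnections();   // re-establish state in the restored process
    }

    private void closeAllConnections() { /* ... */ }
    private void reopenConnections()   { /* ... */ }
}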
Java’s future is focused on faster startup, better concurrency, and reduced memory overhead. Virtual threads will reshape how Java handles scalability, CRaC addresses startup delays, and Valhalla modernizes object handling. The last two are still a dream of the future, but these innovations would make Java more flexible, allowing applications to be fine-tuned for deployment needs – whether through native binaries or new concurrency models.

GC-Selection: Latency vs. Throughput
Practical Performance Tips for Hitchhikers
To optimize Java application performance effectively, consider these key strategies:
- Choosing the Right Garbage Collector: The choice of GC impacts latency and throughput. G1 has been the default for most server applications since Java 9. For ultra-low pause times (real-time, UI), ZGC (production-ready since Java 15) or Shenandoah (preferred in Red Hat/SAP JDKs) is ideal. Batch workloads benefit from Parallel GC, while small tools with short runtimes may perform best with Serial GC to avoid parallelization overhead.
- Profiling with the Right Tools: Identify performance bottlenecks before optimizing. Java Flight Recorder (JFR) and Async Profiler provide deep insights. Profiling early helps establish baselines, revealing top CPU consumers and allocation hotspots. For deeper analysis (e.g., lock contention, native bottlenecks), you should use specialized profilers or tools. Start broad (high-resolution CPU profiling) and refine focus gradually. Avoid guesswork – bottlenecks often lie where least expected (e.g., I/O instead of CPU, excessive GC instead of slow loops). The quickest way to find bottlenecks is with JProfiler [10] – my faithful travel companion for decades.
- Efficient Data Structures & Memory Management: Choose the right structures to reduce overhead. Use primitives over wrapper types (e.g., int instead of Integer) to avoid unnecessary boxing. For large collections, consider specialized libraries such as Trove or FastUtil for primitive-backed lists. Avoid excessive String concatenation in loops – use StringBuilder instead. Minimize unnecessary object allocations, but avoid outdated object pools, as modern JVMs handle short-lived objects efficiently.
- Concurrency & Parallelism: Manage threads deliberately. For CPU-bound tasks, the optimal thread count is roughly the number of CPU cores. For I/O-bound workloads, virtual threads (Java 21) allow scaling to thousands of tasks without excessive overhead. Minimize synchronization (locks) and favor lock-free alternatives such as atomics, StampedLock, or the LMAX Disruptor. When sharing collections between threads, use concurrent data structures or immutable snapshots instead of global locks. Always measure scalability – adding threads rarely improves performance linearly. Use JMH benchmarks [11] to determine optimal parallelism levels (a minimal benchmark sketch follows after this list).
- Stay Up-to-Date with JDK Versions: New JDK releases offer significant performance improvements. Upgrading from Java 8 to Java 11 can reduce GC time by 20% due to G1 optimizations, while Java 17 improves string handling and GC efficiency further. Benchmarks show Java 17 increasing throughput by ~15% over Java 11 with lower latency, even without code changes. Regular updates ensure free performance gains without additional effort.
- Keep It Simple: Avoid premature optimization – focus on architecture and algorithms first. The JVM is highly optimized, but when fine-tuning is needed, leverage profiling tools and modern features. With the right approach, Java can achieve exceptional performance.
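As a concrete starting point for the measuring discipline above, here is a minimal JMH sketch (class and method names are invented; JMH is the harness from [11]) comparing naive '+' concatenation in a loop with a StringBuilder:

import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.*;

// Minimal JMH benchmark: naive '+' concatenation vs. StringBuilder.
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@State(Scope.Thread)
@Fork(1)
@Warmup(iterations = 3)
@Measurement(iterations = 5)
public class ConcatBenchmark {

    @Param({"10", "1000"})
    int size;

    @Benchmark
    public String plusInLoop() {
        String s = "";
        for (int i = 0; i < size; i++) s += i; // allocates a new String each round
        return s; // returning the value prevents dead-code elimination
    }

    @Benchmark
    public String builder() {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < size; i++) sb.append(i); // single growing buffer
        return sb.toString();
    }
}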
Conclusion
Java has come a long way in the last 30 years. Thanks to highly developed JIT compilers, modern garbage collectors and innovative concurrency concepts, the platform that once had a reputation for being slow is now one of the most powerful environments for scalable server applications. The continuous improvements – whether in the HotSpot engine, in alternative JVMs such as GraalVM or in forward-looking projects such as Loom and CRaC – show that performance in Java remains a lively and dynamic topic.
Developers should consistently use the many tools for analysis and optimization (JFR, JMC, Async Profiler, JMH) and orient themselves towards both proven and new technologies. In this way, they can get the best possible performance out of their applications – regardless of whether they are dealing with compute-intensive processes, latency-critical systems or modern cloud environments.
The future of the Java platform promises to become even more powerful through native compilation, optimized concurrency and energy-efficient approaches. Those who rely on the right optimization techniques today will not only benefit from higher speed, but also from better scalability and maintainability of the software. Java will therefore remain a central technology in the future – ready to master the challenges of modern applications.
Literature
[01] https://www.academia.edu/50669704/Escape_analysis_for_Java
[02] https://www.azul.com/blog/why-we-called-our-new-jit-compiler-falcon/
[03] https://shipilev.net/talks/javazone-Sep2018-shenandoah.pdf
[04] https://openjdk.org/projects/leyden/
[05] https://www.infoq.com/articles/java-virtual-threads-a-case-study/
[06] https://webtide.com/jetty-12-virtual-threads-support/
[07] https://www.infoq.com/news/2024/11/tomcat-11/
[08] https://citeseerx.ist.psu.edu/document?doi=93b48eef8acfd110755b44ff1a346b65422a2bf4#
[09] https://youtu.be/HCcq6VLuXe0
[10] https://www.ej-technologies.com/jprofiler
[11] https://github.com/openjdk/jmh

Interested in the Evolution of Java?
Ingo Düppe is a speaker at JCON.
This article covers Java’s performance journey – and his JCON session takes a look at 30 years of UI development in Java.
Couldn’t attend live? The session video will be available after the conference – worth checking out!