With Java’s adoption of a six-month release cycle, new features are being introduced to the language rapidly. This article will explore the latest enhancements from Java versions 21 to 24. Our focus will be on language-specific changes, excluding updates to the JVM. Additionally, we will not cover the Foreign Function & Memory API, which serves as a modern and safer alternative to JNI.
I would categorize the changes made to the language into three groups:
- Faster
- Better
- Easier
The first group would undoubtedly include the contributions from Project Loom, such as Virtual Threads, Structured Concurrency, and Scoped Values.
Virtual Threads
Virtual Threads (VTs) are lightweight alternatives to the traditional threads in Java. As illustrated below, they operate on top of the existing Platform threads.

Virtual Threads (VTs) are managed by the JVM and are designed to be lightweight in terms of memory usage and construction overhead. When a VT is ready to run, the JVM assigns it to an available Platform Thread. If a blocking operation, such as an I/O operation, occurs, the VT is removed from the Platform Thread and another VT takes its place. These context switches are inexpensive, and running a virtual thread has nearly the same overhead as a regular method call. This efficiency, combined with their low memory footprint, enables you to run millions of VTs in your application.
A VT can be created via the builder returned by a factory method in the Thread class, like:
var vt = Thread.ofVirtual().start(runnable);
There is also a new ExecutorService that creates VTs:
var es = Executors.newVirtualThreadPerTaskExecutor();
es.submit(runnable);
One final word: VTs should never be pooled; just discard them when you’re done with them.
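Putting the above together, here is a small illustrative sketch (the class and method names are mine, not from a library) that runs many blocking tasks, one virtual thread per task, on Java 21 or later:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.IntStream;

public class VirtualThreadsDemo {
    // Run 'tasks' blocking jobs, each on its own virtual thread,
    // and return how many of them completed.
    public static int runBlockingTasks(int tasks) {
        var completed = new AtomicInteger();
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            IntStream.range(0, tasks).forEach(i -> executor.submit(() -> {
                Thread.sleep(5); // blocking call: the carrier platform thread is released
                return completed.incrementAndGet();
            }));
        } // close() waits for all submitted tasks to finish
        return completed.get();
    }

    public static void main(String[] args) {
        System.out.println(runBlockingTasks(10_000));
    }
}
```

Because each task gets a fresh, cheap virtual thread, submitting ten thousand blocking tasks like this is unremarkable, whereas doing so with platform threads would exhaust memory quickly.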
Structured Concurrency
Structured concurrency in Java simplifies managing multiple threads by organizing them into a structured task scope. This ensures that tasks are started, managed, and completed in a predictable manner. It uses policies like ShutdownOnFailure to cancel all tasks if one fails, and ShutdownOnSuccess to stop all tasks once one completes successfully. This approach enhances readability, reliability, and maintainability of concurrent code.

Essentially, you create a StructuredTaskScope and apply one of two default policies. The first policy, ShutdownOnFailure, cancels all forked tasks if any task fails. The second policy, ShutdownOnSuccess, shuts down all other tasks once a single task returns a result.
For a more detailed discussion, please refer to my dedicated article on Structured Concurrency in this magazine.
Scoped Values
Scoped Values act as an alternative, rather than a replacement, to ThreadLocals. ThreadLocals have limitations such as mutability, potential exposure of sensitive information, scalability issues due to copying, and memory leaks.
In contrast, Scoped Values are immutable, passed by reference, and have a limited lifetime, which corresponds to the lifetime of the runnable they are assigned to.
public static final ScopedValue<String> USER_PRINCIPAL = ScopedValue.newInstance();
ScopedValue.where(USER_PRINCIPAL, "sysadmin").run(runnable);
During the execution of the runnable, the value for USER_PRINCIPAL is set and available. Accessing it outside of the runnable’s code will throw a NoSuchElementException. This ensures that only the specified runnable can access the value, preventing the memory leaks that ThreadLocals can cause when you forget to remove a value.
This brings us to the second category of changes, which I would like to dub ‘Better’.
In Java 16, the first form of pattern matching was introduced: the instanceof pattern. Since then, a considerable number of patterns have been added to the language.
Unnamed pattern and variables
Unnamed patterns and variables allow you to omit the name of a local or pattern variable that you do not use. You use the underscore ( _ ) as the name.
Its usage is allowed in:
- For loop variables: for (Locale _ : Locale.getAvailableLocales())
- Catching exceptions: catch (Exception _)
- Return values: var _ = obj.getClass().getName();
- Try with resources: try (var _ = new FileInputStream(file))
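As an illustrative sketch (countInvalid is a made-up helper, not from the article), the catch usage is handy when you only care that something failed, not why; this requires Java 22 or later:

```java
import java.util.List;

public class UnnamedExample {
    // Try to parse each string, counting failures without
    // naming the exception variable we never read.
    public static int countInvalid(List<String> inputs) {
        int invalid = 0;
        for (String s : inputs) {
            try {
                Integer.parseInt(s);
            } catch (NumberFormatException _) { // unnamed: we only care that it failed
                invalid++;
            }
        }
        return invalid;
    }

    public static void main(String[] args) {
        System.out.println(countInvalid(List.of("1", "x", "3", "y")));
    }
}
```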
There is one more usage, but more on that in the next section.
Record pattern
Java already supported switch pattern matching over object types, meaning that we switch over types, not over values.
Here is an example:
public String format(Object obj) {
    var result = switch (obj) {
        case null -> "null";
        case Integer i -> String.format("%d", i);
        case Long l -> String.format("%d", l);
        case String s -> s;
        case Boolean b -> b.toString();
        case Object o -> String.format("%s", o);
    };
    return result;
}
We can also use this type of pattern matching for our own object types. Assume we have the following sealed interface:
public sealed interface FrequentFlyer permits BlueWing, SilverWing, RoyalWing { }
We could then write the following code to send a different email based on the type of FrequentFlyer being passed in:
public void sendEmail(FrequentFlyer ff) {
    switch (ff) {
        case BlueWing b -> sendEncouragingEmail(b);
        case SilverWing s -> sendNearlyThereEmail(s);
        case RoyalWing r -> sendMajesticEmail(r);
    }
}
Note that a default branch is not required, because the interface is sealed: the compiler can determine that all possible types are handled in the switch.
We could take this a step further and actually deconstruct the record into local variables.
Assume that BlueWing, SilverWing, and RoyalWing all have the same structure:
public record BlueWing(Integer id, String name, String email, boolean active, int flights) { }
We could rewrite the above code as
public void sendEmail(FrequentFlyer ff) {
    switch (ff) {
        case BlueWing(var i, var n, var e, var b, var f) -> sendEncouragingEmail(n, e);
        case SilverWing(var i, var n, var e, var b, var f) -> sendNearlyThereEmail(n, e);
        case RoyalWing(var i, var n, var e, var b, var f) -> sendMajesticEmail(n, e);
    }
}
What happens here is that we ask the compiler to create five local variables called i, n, e, b, and f.
We did not need to specify their types, as the compiler can infer them from their position in the record definition. So i will get the type of id, being an Integer. n will get the value of the second element in the record, being a String. And so on.
This creates local variables that are visible within the scope of the switch case.
But we can take this even one step further and apply the unnamed pattern here. We do not use the variables i (id), b (active), and f (flights), so we might just as well not name them.
public void sendEmail(FrequentFlyer ff) {
    switch (ff) {
        case BlueWing(var _, var n, var e, var _, var _) -> sendEncouragingEmail(n, e);
        case SilverWing(var _, var n, var e, var _, var _) -> sendNearlyThereEmail(n, e);
        case RoyalWing(var _, var n, var e, var _, var _) -> sendMajesticEmail(n, e);
    }
}
Primitive patterns
A final pattern we should take a look at is the primitive pattern. So far, pattern matching could only be applied to objects. But with primitive patterns, introduced as a preview feature in Java 23, we can apply pattern matching to primitive values as well, for example case int i when i > 100 -> ... in a switch over an int.

Sequenced Collections
Not a pattern, but another improvement in the ‘Better’ category: Sequenced Collections, introduced in Java 21, give ordered collections uniform operations for their first and last elements. Before, if you wanted to get the last element of a list of unknown size, you had to do some tricky operations:
var element = someList.get(someList.size() - 1);
This code is not only hard to read, but it also holds some danger: you will get an IndexOutOfBoundsException if the size of someList is zero.
Now, you can simply write this:
var element = someList.getLast();
There are a whole bunch of new operations added:
var element = someList.getFirst();
var element = someList.getLast();
someList.addFirst(e);
someList.addLast(e);
var element = someList.removeFirst();
var element = someList.removeLast();
var reversed = someList.reversed();
For maps, similar operations are available, though with different names that use terminology more suitable for maps, such as firstEntry(), lastEntry(), putFirst(), and putLast().
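For instance, a LinkedHashMap implements the new SequencedMap interface, so the first/last operations become available on it (buildScores is just an illustrative helper of mine):

```java
import java.util.LinkedHashMap;
import java.util.SequencedMap;

public class SequencedMapDemo {
    // Build a small ordered map and exercise the SequencedMap operations.
    public static SequencedMap<String, Integer> buildScores() {
        SequencedMap<String, Integer> scores = new LinkedHashMap<>();
        scores.put("alice", 10);
        scores.put("bob", 20);
        scores.putFirst("carol", 5); // insert at the front of the encounter order
        return scores;
    }

    public static void main(String[] args) {
        var scores = buildScores();
        System.out.println(scores.firstEntry()); // the entry for carol
        System.out.println(scores.lastEntry());  // the entry for bob
        System.out.println(scores.reversed());   // a reversed view of the map
    }
}
```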
Flexible Constructor Bodies
All Java programmers know this rule: in a constructor, the first statement should always be a call to super() or this(), to initialize the parent class or call another constructor in the same class. This is either explicit in the code or implicitly added by the Java compiler. The call to super() is meant to initialize the parent class so that the child class can safely access its members.
But with Flexible Constructor Bodies (a preview feature in Java 22 through 24), this restriction is lifted, within certain limits. You can now perform certain operations before calling super() or this(). There are scenarios where this is very advantageous. One such example is validating parameters: there is no use in initializing the parent class if the parameters passed in will be rejected anyway.
public class PositiveBigInteger extends BigInteger {
    public PositiveBigInteger(long value) {
        super(value); // Potentially unnecessary work
        if (value <= 0) throw new IllegalArgumentException(..);
    }
}
Another scenario would be if you try to prepare the parameters required in the parent class:
public Sub(Certificate certificate) {
    var publicKey = certificate.getPublicKey();
    if (publicKey == null) throw ...
    byte[] certBytes = switch (publicKey) {
        case RSAKey rsaKey -> ...
        case DSAPublicKey dsaKey -> ...
        default -> ...
    };
    super(certBytes);
}
Stream Gatherers
Streams have been around since Java 8, which is almost 11 years now. During this period we as developers have become accustomed to using them, and for most of us, they are now the default way of writing code. Since their introduction, many functionalities have been added to streams by successive releases.
Breaking it down, a stream operation consists of:
- A stream source
- One or more intermediate operations
- A terminal operation
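The three parts can be spelled out on a small example (an illustrative sketch; the names are mine):

```java
import java.util.List;

public class StreamParts {
    // Filter out short names and upper-case the rest.
    public static List<String> longNamesUpper(List<String> names) {
        return names.stream()                 // 1. the stream source
                .filter(n -> n.length() > 3)  // 2. intermediate operations
                .map(String::toUpperCase)
                .toList();                    // 3. the terminal operation
    }

    public static void main(String[] args) {
        System.out.println(longNamesUpper(List.of("Duke", "Duchess", "Tux")));
    }
}
```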
We have always had many ways to create stream sources, including many factory methods.
For terminal operations, we have several standard operations available. If these are not sufficient for the task at hand, we can create custom collectors. For intermediate operations, however, we did not have the option to create our own—until now!
It turned out that there was quite some demand for new intermediate operations. So the language designers had a choice: either add a large number of new intermediate operations to the standard language or allow developers to add intermediate operations. The solution was … a bit of both!
First, a limited number of gatherers were added to the standard library:
- mapConcurrent
- fold
- scan
- windowFixed
- windowSliding
Here is an example of how to use them:
public List<List<String>> groupsOfThree(List<String> words) {
    return words.stream()
        .gather(Gatherers.windowFixed(3))
        .toList();
}
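The other built-in gatherers follow the same pattern. As a sketch (runningTotals is my name for the helper; Gatherers is final as of Java 24), scan emits a running accumulation of the elements seen so far:

```java
import java.util.List;
import java.util.stream.Gatherers;

public class ScanExample {
    // Running totals: each output element is the sum of all inputs so far.
    public static List<Integer> runningTotals(List<Integer> numbers) {
        return numbers.stream()
                .gather(Gatherers.scan(() -> 0, Integer::sum))
                .toList();
    }

    public static void main(String[] args) {
        System.out.println(runningTotals(List.of(1, 2, 3, 4)));
    }
}
```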
But the language designers did not stop there: they added a new API that allows developers to create gatherers themselves.
A gatherer consists of four parts:
- an initializer, which creates an object to hold the gatherer’s intermediate state
- an integrator, which integrates new elements and optionally pushes them downstream
- a combiner, which combines two intermediate states into one (used when the stream is parallel)
- a finisher, which allows a final action to be performed at the end of the input stream
Only the integrator is required; the rest are optional and depend on the functionality you are trying to provide. Also, when ‘downstream’ is mentioned, the next stage in the stream pipeline is meant.
This might seem daunting at first, so let’s break it down with an example.
The first example, just for training purposes, provides an alternative for the map() intermediate operation. We accept an input element and output a transformed element.
public class AltMapGatherer {
    public static <T, R> Gatherer<T, ?, R> altMap(Function<? super T, ? extends R> mapper) {
        Gatherer.Integrator<Void, T, R> integrator = (_, current, downstream) -> {
            var result = mapper.apply(current);
            downstream.push(result);
            return true;
        };
        return Gatherer.of(integrator);
    }

    void main() {
        var list = List.of("Duke", "Duchess");
        var upper = list.stream()
            .gather(altMap(String::toUpperCase))
            .toList();
        System.out.println(upper);
    }
}
As mentioned, of the four parts of a gatherer, only the integrator is required. So we create an integrator here via a lambda expression. An integrator has three parameters. The first one is the intermediate state. Our alternative map operation does not hold any state between operations, so we simply ignore this parameter by specifying the underscore. The second parameter is the current stream element that is being processed, and the third and final parameter is a reference to downstream.
In the integrator’s body, we apply the transformation operation to the stream element that was inputted. Next, we push this transformed element downstream to be processed by the next stage of the stream pipeline. Finally, we return true, signaling that we are ready to process more elements. Returning false would signal that we do not want to process more elements, allowing us to stop processing the remaining elements of the input stream.
In the main method (we will get to why it is so condensed in a bit), we can use our shiny new gatherer by adding it as an intermediate operation via the new Stream operation gather(). As the argument to altMap, we specify the transformation function that is to be applied to each stream element.
As a next exercise, let us look at a gatherer that holds state. The gatherer mimics the behavior of the limit intermediate operation. You can specify how many input elements you want to process.
public class LimitingGatherer {
    public <T> Gatherer<T, AtomicInteger, T> limiting(int maxSize) {
        return Gatherer.ofSequential(
            // Initializer
            AtomicInteger::new,
            // Integrator
            (state, element, downstream) -> {
                if (state.getAndIncrement() < maxSize) {
                    downstream.push(element);
                    return true;
                } else {
                    return false;
                }
            });
    }

    public List<String> firstThreeWords(List<String> words) {
        return words.stream()
            .gather(limiting(3))
            .toList();
    }

    public void main() {
        var words = List.of("The quick brown fox jumps over the lazy dog".split(" "));
        List<String> firstThreeWords = firstThreeWords(words);
        System.out.println(firstThreeWords);
    }
}
We create a sequential gatherer here. This is the most common type of gatherer, but there are parallel gatherers as well. This specific factory method, ofSequential, allows us to specify the initializer and the integrator. The initializer returns an object that will hold the intermediate state. In our case, we create an AtomicInteger to hold the number of elements already processed.
The integrator again receives three parameters: the state, the current element, and a reference to downstream. The first thing the integrator does is read and increment the AtomicInteger. If we have not yet exceeded the maximum number of elements, we push the element downstream and return true to indicate we are ready to process more. If we have reached the maximum, we simply return false, indicating we do not accept any more elements. No more elements from the input stream will be sent to our gatherer.
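The examples so far only needed an initializer and an integrator. To round out the picture, here is a hypothetical last() gatherer (my own example, not a built-in) that also uses a finisher: it buffers only the most recent element and pushes it downstream once the input is exhausted. Like the other gatherer code, it assumes Java 24, where the Gatherer API is final:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;
import java.util.stream.Gatherer;

public class LastGatherer {
    // Remembers the most recent element (initializer + integrator) and
    // emits it downstream only when the input ends (finisher).
    public static <T> Gatherer<T, AtomicReference<T>, T> last() {
        return Gatherer.ofSequential(
            // Initializer: an empty holder for the most recent element
            AtomicReference::new,
            // Integrator: overwrite the holder, never push yet
            (state, element, downstream) -> {
                state.set(element);
                return true;
            },
            // Finisher: runs once at the end of the input stream
            (state, downstream) -> {
                T lastElement = state.get();
                if (lastElement != null) downstream.push(lastElement);
            });
    }

    public static void main(String[] args) {
        var result = List.of("Duke", "Duchess", "Java").stream()
            .gather(last())
            .toList();
        System.out.println(result);
    }
}
```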
So now we’ve come to the part of this article where we’re going to talk about things that make Java easier to use. These changes may not seem so important to you and me, both of us being experienced Java developers. But for people new to the language, there are quite a lot of hurdles: compiling, build tools, package imports, semantics, and so on. Let us see where Java has become easier to use for novices.
Unnamed Classes And Instance Main Methods
One of the first things Java developers learned was how to run a Java application. We learned this by heart:
public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello, World!");
    }
}
We have become so accustomed to this code that we cannot even see what makes it so difficult for people new to the language. But they might have some questions, like:
- What does public mean? And what are the other possible values?
- What is static?
- What are args?
- Why do I have to define this within a class?
- And what is this System.out.println?
These are all good and fair questions. There are several changes made to the language that will allow you to turn the above code into this:
void main() {
    println("Hello, World!");
}
That is right, a three-line Java application. You can leave out public and static from the method definition. You can even leave out the arguments to the main method.
We no longer have to use the System.out.println method to write data to the console. Instead, we can use println (and print and readln) from the new class java.io.IO. Implicitly declared classes (more on that in a second) automatically import these methods, as well as all public classes and interfaces exported by the java.base module, so we do not even have to add an import.
As mentioned, if no outer class is defined around the main method, it is considered an implicitly declared class. It is unnamed, and no package statement is allowed. Also, there can be only one implicitly declared class per source file.
All-in-all a pretty neat solution, isn’t it?
Module Import Declarations
And while we are on the subject of importing packages and classes: there is something new there too. As you might recall, in Java 9 the JDK was modularized into what are now 95 different modules. Each module contains one or more packages.
The module java.base, for example, contains more than 60 packages, such as java.lang, java.text, and java.time.
Before, we had to import each class or package as we needed it, but now we can simply import all packages in a module with the module import declaration:
import module java.base;
Launch Multi-File Source-Code Programs
Did you know that you could run a Java program even without compiling it? This feature has been in Java since version 11. It allows you to code and run a simple Java program, without the need for compiling.
When you run Java and pass the name of the application, the JVM will look for a class with that name. If it cannot find it, it will look for a Java source file with the given name. If it is found, it is compiled into memory and then started by the JVM.
This again is a great example of how Java has been made more accessible to new developers.
But having only one source file has its limitations. If you want to write a small application with a few more classes, you previously still had to go through the code-compile-run cycle. Since Java 22, however, we can have multi-file source-code applications. The only requirement is that the source files reference each other directly, so the launcher can find and compile them.
Have a look at the following code.
void main() {
Messenger messenger = new Messenger();
println(messenger.getMessage());
}
HelloMultipleWorlds.java
public class Messenger {
public String getMessage() {
return "Hello, Multiple Worlds!";
}
}
Messenger.java
The first class, HelloMultipleWorlds.java, is an implicitly declared class. It references the class Messenger in its code. As such, the JVM will compile both Java source files and then run HelloMultipleWorlds when we use the following command:
java --enable-preview --source 23 HelloMultipleWorlds.java
Being able to run an application consisting of multiple source files without compiling them first means that novices do not have to get involved with tools like Maven or Gradle from the very beginning.
Conclusion
And there we are, we have gone through the most eye-catching changes in the Java language since version 21. With version 24 just out, and version 25, the next LTS, coming in September 2025, Java has a great future even after 30 years. And the language is only getting better.