
Structured concurrency: will Java Loom beat Kotlin’s coroutines?


Kotlin and Java Loom: structured concurrency for the masses

Java, as a slow adopter of new concepts, is getting structured concurrency as part of the Loom project. This addition brings native support for coroutines (termed virtual threads) to Java. What does that mean for Java & Kotlin developers? Should we now all transform our existing code bases to support virtual threads? Or even jump back to Java? We compare two approaches to structured concurrency on the JVM: Java Loom's virtual threads and Kotlin's coroutines. As it turns out, Kotlin coroutines prove to be more developer friendly, while Java Loom might have its sweet spot inside Java libraries and frameworks. Want to know why? Buckle up, as we have a lot to talk about!

Background

My colleague and Kotlin specialist Urs Peter suggested looking into Loom and comparing it to Kotlin coroutines. We did just that on one of our (awesome, I might add) innovation days.

There we realised that Java is not only a programming language, but also a platform. Actually, you could see it the other way around: the Java programming language is the reference implementation for the Java platform. This is breaking news, because this concept gives you the opportunity to hate Java in two different ways…

Haha, just joking. But all jokes aside, this split view on Java puts Loom in the position of one of the most impactful Java projects, and also one of the most silent ones. Loom extends the JVM with structured concurrency while requiring only minor changes to the programming language. So what is Loom exactly, and what does it have to do with Kotlin?

Kotlin coroutines
Loom virtual threads

Easy things first: Kotlin

Kotlin is the new kid on the block and gives access to a very promising concept: structured concurrency, realised in Kotlin as coroutines. Coroutines make it easier to deal with concurrency, which is a big plus in reactive programming and in developing high-performance (web) applications. And they improve the developer experience considerably. The concept of coroutines is not new; you can find it in other languages and libraries, e.g. as fibers (cats-effect, ZIO and even Boost?!?), goroutines in Go or nurseries in Python's Trio.

Generally speaking, a coroutine is a (very, very) lightweight thread. Coroutines are mapped M:N to the (carrier) threads of the operating system, meaning that M coroutines are multiplexed over N threads. For a developer this happens transparently: you only deal with coroutines, while the mapping and distribution is done by the language or library.
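To get a feel for how lightweight coroutines are, here is a minimal sketch (using kotlinx.coroutines; the numbers are purely illustrative) that launches ten thousand coroutines and lets a handful of carrier threads do the work:

import kotlinx.coroutines.*

fun main() = runBlocking {
    // 10_000 coroutines are multiplexed (M:N) onto the small thread pool
    // behind Dispatchers.Default.
    val jobs = List(10_000) { i ->
        launch(Dispatchers.Default) {
            delay(100)   // parks the coroutine and frees the carrier thread
            if (i % 1_000 == 0) println("coroutine $i ran on ${Thread.currentThread().name}")
        }
    }
    jobs.joinAll()       // structured: wait for all of them to finish
}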

And what about Java? Loom!

Loom introduces coroutines, termed virtual threads, as a native element of the JVM. The Loom development team chose not to deviate from existing syntax, so the thread API stays more or less the same. The big difference from Kotlin is that Loom's virtual threads are managed and scheduled by the JVM instead of the operating system; they skip the indirection via the traditional JVM thread abstraction. This direct interaction with OS threads gives Java (in principle) an edge in performance compared to Kotlin (but let's wait for Loom to be released before drawing conclusions). For developers it means that structured concurrency is available in both Kotlin and Java.

Kotlin & Loom: a developer perspective

With structured concurrency, what should I expect? It is probably better to ask: what should I expect from concurrency without structure? Nathaniel J. Smith described the current situation as a form of the goto statement: you fire off n concurrent tasks and now you have to take care of n problems. They might have side effects, and as a consequence the code is difficult to follow, abstract and encapsulate. Structured concurrency is basically a way of preventing such “fire-and-mess-things-up” situations, similar to “banning” the goto statement. This approach to concurrency actually reflects one of my basic principles:

Global organization, local chaos.
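To make the goto analogy a bit more tangible, here is a small sketch (GlobalScope is kotlinx.coroutines' unstructured escape hatch; the doWork helper is hypothetical) contrasting fire-and-forget with a structured scope:

import kotlinx.coroutines.*

// Unstructured "fire-and-forget": the coroutine outlives the caller,
// nobody waits for it and failures vanish into the void.
@OptIn(DelicateCoroutinesApi::class)
fun fireAndForget() {
    GlobalScope.launch { doWork() }
}

// Structured: coroutineScope only returns once every child has finished
// (or failed), so the chaos stays local.
suspend fun fanOutAndIn() {
    coroutineScope {
        launch { doWork() }
        launch { doWork() }
    }   // <-- fan-in happens here
}

private suspend fun doWork() = delay(100)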

You reach a point where concurrency comes into play, you fan out, do some concurrent tasks, and fan in again. To start, I created a first set of examples in Java and Kotlin that print

Hello world 2
Hello world 1

Hello World: Kotlin style

In Kotlin it is straightforward: the coroutine scope blocks until all concurrent tasks (here, the one printing “Hello world 1”) have finished.

coroutineScope { // <---- only needed on top-level
    launch {
        delay(500)
        println("Hello world 1")
    }
    println("Hello world 2")
}

The delay function allows Kotlin to park (or suspend) this coroutine. You can easily see from the function signature whether a function can suspend.

public suspend fun delay(timeMillis: kotlin.Long): kotlin.Unit { /* code */ }

In short, when using the suspend keyword, the Kotlin compiler generates a finite state machine in the bytecode. The benefit is that functions called in a coroutine block look like they are executed sequentially, though they are executed in parallel. In a way, coroutines are a purely syntactic construct. — On Project Loom, the Reactive model and coroutines by Nicolas Fränkel
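To illustrate what that means in practice, here is a small, hypothetical sketch: two suspending calls that read top-to-bottom, yet run concurrently when started with async:

import kotlinx.coroutines.*

// Hypothetical suspending calls; in a real app these would do I/O.
suspend fun fetchUser(): String { delay(300); return "user" }
suspend fun fetchOrders(): String { delay(300); return "orders" }

suspend fun loadDashboard(): String = coroutineScope {
    // Reads sequentially, yet both calls run concurrently:
    val user = async { fetchUser() }
    val orders = async { fetchOrders() }
    "${user.await()} / ${orders.await()}"   // takes ~300 ms, not 600 ms
}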

Hello World: Loom's take

Loom provides this possibility as well (I squeezed the code a bit to get a 1-1 match with the Kotlin sample). The try-with-resources block waits until everything is finished. Handy.

try (ExecutorService executor = Executors.newVirtualThreadExecutor()) {
    executor.submit(() -> {
        try { Thread.sleep(500); } catch (InterruptedException e) { }
        System.out.println("Hello world 1");
    });
    System.out.println("Hello world 2");
}

What can we conclude if we compare the Kotlin and the Java example? To my surprise, they look more similar than expected. The syntactic sugar Kotlin adds is obvious, but with Java you can achieve the desired effect in more or less the same number of lines.

The snippet can easily be extended with more (concurrent) tasks:

var deadline = Instant.now().plusSeconds(2);
try (ExecutorService executor1 = Executors
  .newVirtualThreadExecutor()
  .withDeadline(deadline)) {
    List<Callable<String>> tasks = List.of(
            () -> "task list, elem 1",
            () -> "task list, elem 2"
    );
    Stream<Future<String>> results = executor1.submit(tasks);
    results.map(Future::join).forEach(System.out::println);
}
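For comparison, a possible Kotlin counterpart of this snippet (a sketch; withTimeout plays the role of the executor deadline and awaitAll that of Future::join):

import kotlinx.coroutines.*

suspend fun runTasks() = withTimeout(2_000) {         // ~ .withDeadline(deadline)
    val tasks = listOf(
        { "task list, elem 1" },
        { "task list, elem 2" },
    )
    tasks.map { task -> async { task() } }             // ~ executor1.submit(tasks)
        .awaitAll()                                    // ~ Future::join
        .forEach(::println)
}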

Taking structured concurrency to the next level

If you do not do anything exotic, it does not matter in terms of performance whether you submit all tasks with one executor or with two. The try-with-resources construct allows you to introduce “structure into your concurrency”. If you want to get more exotic, Loom provides possibilities to restrict virtual threads to a pool of carrier threads. However, this feature can lead to unexpected consequences, as outlined in Going inside Java’s Project Loom and virtual threads.

var deadline = Instant.now().plusSeconds(2);
try (ExecutorService executor1 = Executors
        .newVirtualThreadExecutor()
        .withDeadline(deadline)) {
    try (ExecutorService executor2a = Executors.newVirtualThreadExecutor()) {
        executor2a.submit(() -> System.out.println("other async tasks"));
    }
    try (ExecutorService executor2b = Executors.newVirtualThreadExecutor()) {
        Future<String> future1 = executor2b.submit(() -> "task sub 1");
        Future<String> future2 = executor2b.submit(() -> "task sub 2");
        System.out.println(future1.get() + " - " + future2.get());
    } catch (InterruptedException | ExecutionException e) {
        throw new RuntimeException(e.getMessage());
    }
}

The Kotlin equivalent uses coroutine contexts as synchronization points.

// (coroutineScope not needed)
launch {
    println("other async tasks")
}
val op1 = async { "task sub 1" }
val op2 = async { "task sub 2" }
println("${op1.await()} - ${op2.await()}")

The structured concurrency constructs of Kotlin map in most cases “more or less” to the Java-Loom constructs:

coroutineScope ~ try (ExecutorService exs = Executors.newVirtualThreadExecutor())
launch -> exs.submit(...)
async & await -> Future<...> f = exs.submit(...) & f.join()

In Kotlin, especially when only using suspend methods, (concurrency) abstractions are not in your way at all. The coroutineScope keyword is only needed in the top-level scope and not in sub-scopes. That is why we skipped it in the Kotlin example and used ~ instead of -> in the overview above. Usually a single scope suffices for launch/async, which keeps things transparent in your code.
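To make the mapping explicit, here is the same sub-example with its top-level scope spelled out (a sketch; the inner launch/async simply inherit that scope):

import kotlinx.coroutines.*

suspend fun nextLevel() = coroutineScope {      // ~ try (ExecutorService exs = Executors.newVirtualThreadExecutor())
    launch {                                    // ~ exs.submit(...)
        println("other async tasks")
    }
    val op1 = async { "task sub 1" }            // ~ Future<String> f = exs.submit(...)
    val op2 = async { "task sub 2" }
    println("${op1.await()} - ${op2.await()}")  // ~ f.join()
}                                               // <-- waits for every child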

Developer experience & pitfalls

As you might have noticed from the examples, Kotlin's syntax offers more flexibility in terms of nesting. To reach the same level of convenience in Java, you probably have to move the nested code into separate functions to keep it readable. But once you do that, you lose the "Loom" context: you can no longer see whether a function is suited for virtual threads or not. Loom has a list of virtual-thread-friendly functions, which implies that you have to know in advance whether a function can be used. In contrast, Kotlin has the suspend keyword, which marks a function for consumption in a coroutine context.
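A tiny sketch of what that marker buys you: calling a suspending function from plain, non-coroutine code simply does not compile (fetchPrice is a hypothetical example).

import kotlinx.coroutines.*

suspend fun fetchPrice(): Double {    // the signature tells you: coroutine-friendly
    delay(200)                        // suspends instead of blocking a thread
    return 42.0
}

fun plainOldFunction() {
    // fetchPrice()                   // compile error: suspend functions may only be
                                      // called from a coroutine or another suspend function
}

suspend fun alsoSuspending() = fetchPrice()   // fine: we are already in a suspend context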

Keeping track of your concurrency

And I think this is one of the cornerstones of Kotlin: developer friendliness. At the expense of learning more syntax, Kotlin gives you safety back. It is more difficult to shoot yourself in the foot. This property is not only visible when marking functions with the suspend keyword; it also shows when you want to handle CPU-intensive tasks, such as image processing or video encoding, or blocking IO tasks. In Java you have to take care of mixing thread pools with virtual threads yourself, whereas in Kotlin you can use one of the pre-defined dispatchers. For Java programmers this means that you can easily end up with thread starvation or slow performance. Nobody will prevent you from doing stupid things.
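A sketch of those pre-defined dispatchers: Dispatchers.Default for CPU-bound work and Dispatchers.IO for blocking calls (the encode helper is hypothetical):

import java.io.File
import kotlinx.coroutines.*

suspend fun thumbnail(image: ByteArray): ByteArray =
    withContext(Dispatchers.Default) {     // CPU-bound: pool sized to the number of cores
        encode(image)
    }

suspend fun loadFile(path: String): ByteArray =
    withContext(Dispatchers.IO) {          // blocking IO: a larger, elastic pool
        File(path).readBytes()
    }

// Hypothetical stand-in for real, CPU-heavy image processing.
private fun encode(image: ByteArray): ByteArray = image.reversedArray()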

So, on the surface, Java/Loom looks similar to Kotlin, but the beauty is only skin-deep. 

  • Java requires you to deal with low-level concurrency abstractions, such as ExecutorService, Thread and Future, to realise (structured) concurrency.
  • Java re-uses the existing concurrency API, which is a burden for new developers. Simple constructs, such as .join() or .get() with normal Threads, will be disastrous.

This approach gives developers plenty of room to make mistakes or confuse existing and unrelated concurrency abstractions with the new constructs. In addition, business intent is blurred by the extra verbosity of Java. This extra burden falls away in Kotlin.

How does it relate to performance?

It is like driving one of those fast old-timers: unless you know what you are doing, you should drive carefully. Otherwise you end up driving (a) very fast, but dangerously, or (b) slowly, but safely. Kotlin is more comparable to a modern car: it comes with a seat belt, airbags and all the bells and whistles that let you drive safely, but also fast.

Depending on the benchmark, you might even conclude that Loom’s direct interface to OS threads is not faster than Kotlin. Besides, once Loom is final, Kotlin coroutines will also be able to use the same interface directly. As a consequence, the toolkits will, in the near future, rely on the same concurrency primitives.
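Wiring virtual threads under Kotlin coroutines could then look roughly like this (a sketch that reuses the newVirtualThreadExecutor() factory from the examples above and kotlinx.coroutines' asCoroutineDispatcher; the final API may differ):

import java.util.concurrent.Executors
import kotlinx.coroutines.*

fun main() = runBlocking {
    // Assumption: the released Loom API still offers a virtual-thread executor factory.
    Executors.newVirtualThreadExecutor().asCoroutineDispatcher().use { loomDispatcher ->
        launch(loomDispatcher) {
            println("running on ${Thread.currentThread()}")   // a virtual thread
        }.join()
    }
}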

Even though virtual threads are the most prominent change Loom will introduce, other features might impact the daily life of a programmer more than expected, such as scoped variables (which might look like this). This is also mentioned on HN:

Yeah, while virtual threads are the bread and butter of Loom, they are also adding a lot of QoL things. In particular, the notion of “ScopedVariables” will be a godsend to a lot of concurrent work I do. It’s the notion of “I want this bit of context to be carried through from one thread of execution to the next”. Beyond that, one thing the loom authors have suggested is that when you want to limit concurrency the better way to do that is using concurrency constructs like semaphores rather than relying on a fixed pool size.
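The semaphore suggestion from that comment translates directly to coroutines; a sketch using kotlinx.coroutines' Semaphore (the crawl/fetch functions are hypothetical):

import kotlinx.coroutines.*
import kotlinx.coroutines.sync.Semaphore
import kotlinx.coroutines.sync.withPermit

suspend fun crawl(urls: List<String>) = coroutineScope {
    val permits = Semaphore(10)                  // at most 10 requests in flight
    urls.forEach { url ->
        launch {
            permits.withPermit { fetch(url) }    // limits concurrency without a fixed pool
        }
    }
}

// Hypothetical stand-in for an actual HTTP call.
private suspend fun fetch(url: String) { delay(100); println("fetched $url") }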

Kotlin already provides scoped-variable constructs via the CoroutineContext, which can be quite handy if you need to keep track of stuff or want to improve the logging in your app.
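A sketch of such a scoped variable: a small, hypothetical CoroutineContext element carrying a request id that any suspend function downstream can read, e.g. for logging:

import kotlin.coroutines.AbstractCoroutineContextElement
import kotlin.coroutines.CoroutineContext
import kotlin.coroutines.coroutineContext
import kotlinx.coroutines.*

// Hypothetical context element that travels with the coroutine.
class RequestId(val id: String) : AbstractCoroutineContextElement(RequestId) {
    companion object Key : CoroutineContext.Key<RequestId>
}

suspend fun logWithContext(message: String) {
    val requestId = coroutineContext[RequestId]?.id ?: "unknown"
    println("[$requestId] $message")             // e.g. [req-42] handling payment
}

fun main() = runBlocking {
    launch(RequestId("req-42")) {                // the element rides along in the context
        logWithContext("handling payment")
    }
}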

Conclusions

Kotlin improves the driver’s experience (modern car interior with all bells and whistles); Loom improves the engine (car with opened hood).

So what is the verdict?

Adding Loom to Java will definitely open up a new domain of problems, bugs and best practices, which means that in the short term probably nothing will change for Java or Kotlin development. Once Loom is established, and if companies and development teams decide to rewrite their existing code base to use Loom as their concurrency approach, they might instead decide to switch to Kotlin altogether. This will be the crux of the matter, we predict. In particular in so-called “business domains”, such as e-commerce, insurance and banking, Kotlin provides additional safety where Java does not.

Under the hood, things might look different. Major frameworks and JVM-based languages might decide to switch their internals to Loom virtual threads, and there the impact will be huge. It will be the classical “iceberg” situation: big changes under the water, but above the water, for the developer, only minor changes in the API are visible. In Kotlin there will probably be no changes at all.

I think this is a good incentive to dip your toes into the waters of Kotlin.

Have fun!

Jeroen Willemsen
Typical security jack-of-all-trades. Hands-on security architect with a knack for security, automation, and risk management. Jeroen has been involved in various OWASP projects. He enjoys a pentest every now and then, while helping organizations to get secure enough. Jeroen is often engaged in knowledge sharing through talks, blogs, projects at github, and trainings.