GraalVM: not the holy grail, but definitely a useful tool that helps you write better code, deploy faster and use less memory. My first introduction to GraalVM was at the 2018 JFall Conference, where I attended a session called "A Developer's Introduction to GraalVM" presented by Oleg Selajev.
GraalVM has three key features: code optimization, native images and polyglot programming.
In this blog post I will dive deeper into those three key features. Enjoy reading!
As a developer, I like clean code. If you are also on the 'clean code team', GraalVM has a feature you will love: code optimization. The Graal compiler optimizes your bytecode while compiling it to machine code, using techniques such as dead code elimination and branch speculation.
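To get a feel for what these optimizations mean, consider the following contrived snippet (my own illustration, not GraalVM-specific): an optimizing compiler can remove the debug branch entirely, and it can speculate on the branch that profiling shows is almost always taken.

public class OptimizationExample {

    private static final boolean DEBUG = false;

    static int process(int value) {
        // Dead code elimination: DEBUG is a compile-time constant,
        // so this whole branch can be removed from the generated code.
        if (DEBUG) {
            System.out.println("processing " + value);
        }

        // Branch speculation: if profiling shows that 'value' is almost
        // never negative, the compiler can optimize for the positive path
        // and fall back (deoptimize) only in the rare opposite case.
        if (value < 0) {
            return -value;
        }
        return value * 2;
    }

    public static void main(String[] args) {
        System.out.println(process(21));
    }
}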
A typical piece of code might look like this:
public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello, World!");
    }
}
This code is then compiled to bytecode using the javac compiler:
javac HelloWorld.java
The generated bytecode for the main method looks like this:
L0
LINENUMBER 3 L0
GETSTATIC java/lang/System.out : Ljava/io/PrintStream;
LDC "Hello, World!"
INVOKEVIRTUAL java/io/PrintStream.println (Ljava/lang/String;)V
When you run your code, the Java Runtime Environment translates this bytecode into machine code. This is typically done by the HotSpot VM. The name HotSpot comes from the fact that the code is analyzed while it is running: code that is executed frequently (a 'hot spot') is compiled to machine code and cached, so subsequent invocations use the cached machine code instead of being interpreted again.
HotSpot ships with two JIT compilers: C1, also known as the client compiler, and C2, also known as the server compiler. Most of us will be familiar with the server compiler: it analyzes the code more aggressively and produces faster code, at the cost of a slower startup.
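You can observe this warmup effect with a deliberately naive sketch like the one below (my own illustration; for serious measurements use JMH, as we do later in this post). The later rounds are typically much faster, because the hot method has been compiled and cached. Running it with -XX:+PrintCompilation shows which methods HotSpot compiles.

public class WarmupDemo {

    // Some non-trivial work so the JIT has something to optimize.
    static long work(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) {
            sum += (i % 3 == 0) ? i * 2L : i;
        }
        return sum;
    }

    public static void main(String[] args) {
        for (int round = 1; round <= 10; round++) {
            long start = System.nanoTime();
            long result = work(5_000_000);
            long micros = (System.nanoTime() - start) / 1_000;
            // Later rounds are typically much faster: the interpreted
            // method has been replaced by cached machine code.
            System.out.println("round " + round + ": " + micros + " microseconds (result " + result + ")");
        }
    }
}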
The downside of the C2 compiler is that, over the years, it has become very complex and difficult to maintain. For that reason, Oracle started the GraalVM compiler project, with the intention of replacing the existing C++-based compiler with an implementation written in Java.
This is possible thanks to the JVM Compiler Interface (JVMCI), which allows a compiler written in Java to plug into the JVM. Conceptually, such a compiler implements the following interface:
interface JVMCICompiler {
    byte[] compileMethod(byte[] bytecode);
}
What is the effect of running your code with GraalVM? To test the benefits offered by GraalVM (dead code elimination, branch speculation, etc.), we will use the Java Microbenchmark Harness (JMH). A blank JMH Maven project is generated using the following command:
mvn archetype:generate \
-DinteractiveMode=false \
-DarchetypeGroupId=org.openjdk.jmh \
-DarchetypeArtifactId=jmh-java-benchmark-archetype \
-DgroupId=nl.avisi \
-DartifactId=benchmark \
-Dversion=1.0
For the purpose of this blog, we will run a short experiment: finding the 10 most frequently occurring words in the collected works of Shakespeare. As you can see from the code below, the method consists of a sequence of operations on a Java Stream.
package nl.avisi;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Warmup;
import org.openjdk.jmh.infra.Blackhole;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.concurrent.TimeUnit;
import java.util.function.Function;
import java.util.stream.Collectors;
import java.util.stream.Stream;
@Warmup(iterations = 3)
@Measurement(iterations = 3)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MILLISECONDS)
@Fork(1)
public class Ranking {

    @Benchmark
    public void rank(Blackhole sink) {
        String[] files = new String[]{ "/tmp/play.txt" };
        Arrays.stream(files)
                .flatMap(Ranking::fileLines)
                .flatMap(line -> Arrays.stream(line.split("\\b")))
                .map(word -> word.replaceAll("[^a-zA-Z]", ""))
                .filter(word -> word.length() > 0)
                .map(word -> word.toLowerCase())
                .collect(Collectors.groupingBy(Function.identity(), Collectors.counting()))
                .entrySet().stream()
                .sorted((a, b) -> -a.getValue().compareTo(b.getValue()))
                .limit(10)
                .forEach(e -> sink.consume(e));
    }

    private static Stream<String> fileLines(String path) {
        try {
            return Files.lines(Paths.get(path));
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
We use three 'warmup iterations' (needed so that VM startup and JIT warmup do not contaminate our measurements) and three 'measurement iterations'. The benchmark records the average time it takes to execute the method, in milliseconds.
Let's start by running the benchmark on the classic HotSpot JVM:
$ java -jar target/benchmarks.jar rank
# JMH version: 1.21
# VM version: JDK 1.8.0_131, Java HotSpot(TM) 64-Bit Server VM, 25.131-b11
# VM invoker: /Library/Java/JavaVirtualMachines/jdk1.8.0_131.jdk/Contents/Home/jre/bin/java
# VM options: <none>
# Warmup: 3 iterations, 10 s each
# Measurement: 3 iterations, 10 s each
# Timeout: 10 min per iteration
# Threads: 1 thread, will synchronize iterations
# Benchmark mode: Average time, time/op
# Benchmark: nl.avisi.Ranking.rank
# Run progress: 0.00% complete, ETA 00:01:00
# Fork: 1 of 1
# Warmup Iteration 1: 1037.971 ms/op
# Warmup Iteration 2: 917.268 ms/op
# Warmup Iteration 3: 911.136 ms/op
Iteration 1: 943.235 ms/op
Iteration 2: 899.658 ms/op
Iteration 3: 882.852 ms/op
Result "nl.avisi.Ranking.rank":
908.582 ±(99.9%) 568.557 ms/op [Average]
(min, avg, max) = (882.852, 908.582, 943.235), stdev = 31.165
CI (99.9%): [340.025, 1477.139] (assumes normal distribution)
Notice that the time spent on each invocation decreases once the VM has warmed up.
Now that we have a baseline on the HotSpot JVM, let's run the same benchmark on GraalVM:
# JMH version: 1.21
# VM version: JDK 1.8.0_212, OpenJDK GraalVM CE 19.0.0, 25.212-b03-jvmci-19-b01
# VM invoker: /Users/freberge/Downloads/graalvm-ce-19.0.0/Contents/Home/jre/bin/java
# VM options: <none>
# Warmup: 3 iterations, 10 s each
# Measurement: 3 iterations, 10 s each
# Timeout: 10 min per iteration
# Threads: 1 thread, will synchronize iterations
# Benchmark mode: Average time, time/op
# Benchmark: nl.avisi.Ranking.rank
# Run progress: 0.00% complete, ETA 00:01:00
# Fork: 1 of 1
# Warmup Iteration 1: 961.963 ms/op
# Warmup Iteration 2: 822.574 ms/op
# Warmup Iteration 3: 810.855 ms/op
Iteration 1: 823.962 ms/op
Iteration 2: 823.035 ms/op
Iteration 3: 806.605 ms/op
Result "nl.avisi.Ranking.rank":
817.867 ±(99.9%) 178.136 ms/op [Average]
(min, avg, max) = (806.605, 817.867, 823.962), stdev = 9.764
CI (99.9%): [639.732, 996.003] (assumes normal distribution)
As you can see, running this code on GraalVM results in a noticeably faster execution: roughly 10% in this benchmark (818 ms/op versus 909 ms/op on average).
The concept of Docker images has given us greater flexibility in achieving horizontal scaling: if the load on your system becomes too heavy, you can simply spin up more instances. However, this can cause problems if it takes a long time to actually start the container.
In our own project, we are faced with a continuous stream of events. It is essential to handle each individual event in the shortest time possible. This is especially important during a cold start (e.g. after a system reboot) as there is really no time to waste waiting for Spring Boot to configure itself. In this case, the system would already face a considerable backlog of events to process.
This is where the native images offered by GraalVM step up to the plate. A native image is a heavily optimized, ahead-of-time compiled executable (comparable to a .exe: it needs no JVM, yet still includes a garbage collector) which is able to start very quickly. A pleasant side effect is that the size of the executable is smaller than the fat jar created by Spring Boot (although the size of the jar, of course, pales in comparison with the size of the Docker image).
Of course, there is no such thing as a free lunch. To build a native image, GraalVM needs to know in advance which bytecode is going to be executed. Features that rely on runtime reflection, such as the runtime dependency injection Spring Boot relies on heavily, will not work out of the box. Instead, a framework like Micronaut (which uses compile-time dependency injection) can be leveraged. In addition, creating a native image takes quite a while: a typical compilation might take several minutes, so this is a step you would rather run during, for example, a nightly build.
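To illustrate why this matters, consider the contrived snippet below (the class name nl.avisi.SomeService is purely hypothetical): the class to instantiate is only known at runtime, so the native image builder cannot discover it on its own and you would have to declare it in a reflection configuration file (passed to native-image via -H:ReflectionConfigurationFiles).

public class ReflectionExample {

    public static void main(String[] args) throws ReflectiveOperationException {
        // The class name only becomes known at runtime, e.g. from a
        // property or from annotations scanned by a DI container.
        // "nl.avisi.SomeService" is a hypothetical example.
        String className = args.length > 0 ? args[0] : "nl.avisi.SomeService";

        // An ahead-of-time compiler cannot see this dependency in the
        // bytecode, so the class must be listed in the reflection
        // configuration for the native image build.
        Object service = Class.forName(className)
                .getDeclaredConstructor()
                .newInstance();
        System.out.println("Instantiated " + service.getClass().getName());
    }
}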
Let me showcase how you can make GraalVM's support for native images work:
# point GRAALVM_HOME at your GraalVM installation directory
export GRAALVM_HOME=
export PATH=$GRAALVM_HOME/bin:$PATH
# gu is the Graal Updater; it is used for installing additional packages; in this case the package for building native images
gu install native-image
Next, install Micronaut, for instance, by using brew:
brew update
brew install micronaut
brew upgrade micronaut
To demo the effect of native images, we generate a Micronaut skeleton application with GraalVM support using:
mn create-app techday --features=graal-native-image --build maven
I do advise investing some time in studying the file native-image.properties. It contains various options for customizing the native image (for example, the name of the executable and which resources to package, such as the Logback configuration).
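As a rough, illustrative sketch only (the exact contents are generated by Micronaut and depend on your application; the class name nl.avisi.Application below is an assumption on my part), such a file passes extra arguments to the native-image tool:

# illustrative example of a generated native-image.properties
Args = -H:Class=nl.avisi.Application \
       -H:Name=techday \
       -H:IncludeResources=logback.xml|application.yml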
Build the Micronaut application as follows:
mvn clean package
You will now have a jar containing the Micronaut application.
Build the native image using:
native-image --no-server -cp target/techday-0.1.jar
This results in an executable file called "techday". We can now compare the startup times of the Micronaut application as a jar and as a native executable.
java -jar target/techday-0.1.jar
vs.
./techday
The executable starts in a matter of milliseconds, whereas the fat jar takes a couple of seconds. Memory usage is also greatly reduced, which you can check by running:
top -pid $(lsof -ti tcp:8080)
The result? The fat jar will consume ca. 300MB, whereas the executable consumes ca. 30MB. This will obviously result in much more efficient use of scarce resources!
In many cases, a solution to the problem you are tackling already exists, but in a different language. For instance, a hardware supplier might offer an existing interface written in Visual Basic. For JVM-based languages like Scala or Kotlin, re-use is quite feasible. In other cases, for example for code written in C or C++, integration becomes much more complex.
The polyglot feature of GraalVM allows us to mix different languages in a seamless way. For example, it is possible to invoke Java code from Node.js code and vice versa. Even languages that compile to LLVM (Low-Level Virtual Machine) bitcode, like C and C++, can be integrated.
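Before we get to the Node.js example, here is the other direction: calling a guest language from Java via the org.graalvm.polyglot API. This is a minimal sketch of my own; it assumes you run it with GraalVM's java so the JavaScript engine is available.

import org.graalvm.polyglot.Context;
import org.graalvm.polyglot.Value;

public class PolyglotExample {

    public static void main(String[] args) {
        // Create a polyglot context that is allowed to evaluate JavaScript.
        try (Context context = Context.create("js")) {
            // Evaluate a JavaScript expression and use the result in Java.
            Value result = context.eval("js", "[1, 2, 3, 4].reduce((a, b) => a + b)");
            System.out.println("Sum computed by JavaScript: " + result.asInt());

            // Evaluate a JavaScript function and call it from Java.
            Value upper = context.eval("js", "(s) => s.toUpperCase()");
            System.out.println(upper.execute("graalvm").asString());
        }
    }
}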
Suppose we have a file called app.js defined as:
const http = require("http");
const span = require("ansispan");
require("colors");

http.createServer(function(request, response) {
    response.writeHead(200, {"Content-Type": "text/html"});
    // this is where the interop takes place:
    // we can reference Java types and use their methods,
    // like LocalDateTime.now()
    var time = Java.type('java.time.LocalDateTime');
    response.end(span(("Hello From GraalVM at " + time.now()).green.fontsize(30)));
}).listen(8000, function() {
    console.log("Graal.js server running".red);
});
In this example, we obtain a reference to the Java class java.time.LocalDateTime and call its now() method to obtain the current timestamp.
To start the application, type:
# required libraries need to be installed first:
npm install colors ansispan
# option --jvm is required when using the Java interoperability feature
node --jvm app.js
Opening a browser and pointing it to http://localhost:8000 will show something like this:
Out of the box, GraalVM supports JavaScript, Python, Ruby, R and LLVM-based languages. The programming language R is widely used in machine learning and data science, and it ships with facilities for generating SVG graphs. We will use this capability to draw a temperature graph:
library(lattice)

ms_to_date = function(ms, t0="1970-01-01", timezone) {
    sec = ms / 1000
    as.POSIXct(sec, origin=t0, tz=timezone)
}

function(forecast, timestamps) {
    svg()
    dates <- c()
    for (ms in timestamps$values) {
        dates <- c(dates, ms/1000)
    }
    class(dates) = c('POSIXt','POSIXct')
    plot <- xyplot(temperature ~ time,
                   data = data.frame(temperature = forecast$values, time = dates),
                   type = "l",
                   xlab = "Date",
                   ylab = "Temperature")
    print(plot)
    grDevices:::svg.off()
}
If this source code is available in a file called plot.R in src/main/resources, we can define a Resource in Spring Boot:
@Value(value = "classpath:plot.R")
private Resource rSource;
The next step is to create a BiFunction that evaluates the R source:
@Autowired
private BiFunction<DataHolder, DataHolder, String> plotFunction;

@Bean
BiFunction<DataHolder, DataHolder, String> getSource(@Autowired Context ctx)
        throws IOException {
    Source source = Source.newBuilder("R", rSource.getURL()).build();
    // we can interpret R code as a BiFunction
    return ctx.eval(source).as(BiFunction.class);
}
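The Context injected above is a GraalVM polyglot context. A minimal sketch of how such a bean could be defined (an assumption on my part: it uses org.graalvm.polyglot.Context with allowAllAccess enabled so the R code can work with our Java objects):

import org.graalvm.polyglot.Context;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class PolyglotConfiguration {

    @Bean(destroyMethod = "close")
    public Context polyglotContext() {
        // allowAllAccess is the simplest way to let the evaluated R code
        // access the Java objects (the DataHolder instances) we pass in.
        return Context.newBuilder("R")
                .allowAllAccess(true)
                .build();
    }
}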
DataHolder is a wrapper around a List of values:
public class DataHolder<T> {

    public List<T> values;

    public DataHolder(List<T> values) {
        this.values = values;
    }
}
Finally, we can pass a List of timestamps and a List of temperatures to the BiFunction and have it return the SVG markup:
@RequestMapping(value = "/{country}/{city}", produces = "image/svg+xml")
public ResponseEntity<String> forecast(@PathVariable("country") String country, @PathVariable("city") String city) {
    List<Double> forecasts = new ArrayList<>();
    List<Long> timestamps = new ArrayList<>();
    // fill these lists with values, using for example the OpenWeatherMap API
    String svg = plotFunction.apply(new DataHolder<>(forecasts), new DataHolder<>(timestamps));
    return new ResponseEntity<String>(svg, HttpStatus.OK);
}
The resulting graph can be viewed in the browser by requesting, for example, http://localhost:8080/NL/Arnhem
Aside from interacting with existing languages, we can also define our own language (using the Truffle language implementation framework) and install it into GraalVM.
When you download GraalVM, you will notice that it is based on Java 8. However, more recent JDKs (Java 10 and up) already contain the Graal compiler; you just need to unlock it with a few experimental switches. Take the benchmark introduced earlier and try it for yourself with these commands:
export JAVA_HOME=<Java_11>
export PATH=$JAVA_HOME/bin:$PATH
java -XX:+UnlockExperimentalVMOptions -XX:+EnableJVMCI -XX:+UseJVMCICompiler -jar target/benchmarks.jar rank
The code for these examples (and more) can be found on our GitHub repository.
As mentioned in this blog post, GraalVM has its downsides. However, if you ask me, its strong features regarding code optimization, native images and polyglot programming have made me a fan. I have one last piece of advice for you: try it out!