By Gerald Mücke | October 25, 2017
Optimizing Docker Images for Java
Docker is a popular technology for creating runtime environments for servers and entire systems. Docker images are easily distributed, deployed and started. But distribution in particular benefits from slim images: large images take time to transmit, and when transfers happen frequently this can have a real impact on development speed. In this article I’ll describe some best practices for reducing and optimizing image size.
Docker is a very convenient technology for realizing immutable servers. The docker image containing the service is defined in a kind of playbook - the Dockerfile. But unlike VMware or VirtualBox images, docker images are not a single opaque file; they are made of layers. Each statement in the Dockerfile that adds, deletes or modifies content on the filesystem adds another layer. This is important to know, because removing a file does not actually remove its content from the image: a new layer is added that merely marks the file as deleted. It’s similar to multi-session CDs, where a file can be removed from the disc’s directory but remains physically burned in.
On distribution, a hash value is calculated for each layer and compared to the hash value of a previously transmitted layer. Only layers that have changed are transmitted. Keep this in mind; we’ll come back to it later.
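The comparison is content-based: Docker identifies each layer by a SHA-256 digest of its content, so identical content always produces an identical digest. As a minimal illustration of the principle - using plain files and sha256sum rather than real layers:

```shell
# Two files with identical content produce the identical digest --
# the same principle lets Docker skip retransmitting unchanged layers.
printf 'unchanged layer content' > layer_a
printf 'unchanged layer content' > layer_b
printf 'modified layer content'  > layer_c

hash_a=$(sha256sum layer_a | cut -d' ' -f1)
hash_b=$(sha256sum layer_b | cut -d' ' -f1)
hash_c=$(sha256sum layer_c | cut -d' ' -f1)

echo "a: $hash_a"
echo "b: $hash_b"
echo "c: $hash_c"
```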
Small images conserve disc space, which is especially important in build environments with frequent builds. Further, as only changed layers are stored and transmitted, the size of each layer should correspond to the effective change set. This not only conserves disc space, but also speeds up the development and deployment process, as far less data has to be transferred. The more frequently the application is built and deployed, the larger the impact.
Now let’s have a look at how to create small images.
Use a slim base image
First and most important is to start with a base image as slim as possible. Of course you could start building everything
FROM scratch ...
This gives you the most options for creating a slim image but the downside is you have to add everything useful yourself.
If you don’t want to do everything yourself but would like to start with a reasonably small base, use an Alpine image. Alpine is a minimal Linux distribution intended for creating slim containers, and it comes with its own package manager (apk). The most important downside is that it uses the musl libc instead of the more widespread glibc, but that shouldn’t be a concern for most cases.
For creating an image with Alpine, use one of its flavors:
FROM alpine:latest ...
An overview of different sizes of docker base images can be found in this Docker Base Image OS Size Comparison.
If you require an Alpine-based image with Java support, Anapsix created a set of Alpine-based images with various types of Java support - with JRE, with JDK, with unlimited-strength encryption support etc. - all based on the Oracle Java distribution. In most cases the JRE versions are sufficient to run your systems.
FROM anapsix/alpine-java:8_server-jre ...
This image is around 48 MB in size.
The modularization support of Java 9 allows you to create a custom runtime containing only the parts of the JRE that you actually need, using the jlink tool. This allows an even smaller image.
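As a sketch only - assuming a JDK 9 installation with jlink on the path; the module list is an example and must match the modules your service actually uses:

```shell
jlink --add-modules java.base,java.logging \
      --strip-debug \
      --no-header-files \
      --no-man-pages \
      --compress=2 \
      --output /opt/minimal-jre
```

The resulting /opt/minimal-jre directory contains a trimmed-down java binary under bin/ and can be copied into the image instead of a full JRE.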
The base image you’re using for your service is usually only transmitted once or every time you upgrade the base image. So size is important but not as important as the size of the more volatile content.
Chain commands that cancel each other out
The second important technique is to chain commands that have opposite effects - such as creating and later removing a file - so that they produce fewer and smaller layers.
For example instead of:
RUN curl http://source/of/my/file.zip -o myfile.zip
RUN unzip myfile.zip
RUN rm myfile.zip
which will end up with a filesystem still having the same size as without removing the file - each RUN statement creates its own layer - you would chain the commands using &&, adding a backslash \ for better readability:
RUN curl http://source/of/my/file.zip -o myfile.zip && \
    unzip myfile.zip && \
    rm myfile.zip
The same applies when installing packages with a package manager. Each package manager keeps caches or temporary files which can be removed in the same step after installation:
RUN apt-get update && \
    apt-get install -y --no-install-recommends curl && \
    apt-get autoremove && \
    rm -rf /var/lib/apt/lists/*
Don’t build fat-jars!
Fat-jars are a very convenient distribution format for creating executable jars. All dependencies are merged into a single jar file. No additional jar files are required; the fat-jar contains everything it needs, so you only need to distribute a single file. For docker images this means a single ADD myFat.jar statement.
So what’s the problem with fat-jars?
Fat jars come with a set of problems.
- Fat jars may induce legal risks - which could be mitigated by proper license checks & dependency management.
- Assembling or shading might break your archives in case of same-name resources on the class path. This is especially an issue if classpath resources are named by convention and cannot be renamed - for example, service provider descriptors under META-INF/services.
- They violate separation of concerns by merging runtime and business logic into a single jar. Some argue this mix of concerns can be neglected since everything ends up in a single docker image anyway.
Especially the last point is relevant for Docker. Both sides of the pro vs. con fat-jar argument have a point, but let’s take a closer look at this issue.
The most important issue with fat-jars mixing runtime and business logic is that the runtime (platform or dependencies) changes far less frequently than the business logic, especially but not only during development. In addition, the actual business logic makes up only a tiny fraction of the whole compared to the dependencies, yet changes occur mostly in this area. An entire fat-jar can easily grow to tens or hundreds of megabytes, while the business logic might be only a couple of megabytes, sometimes even less than 1 MB.
Given the layered filesystem of docker, adding the application as a fat-jar to the image adds a layer of several megabytes. Distributing the image requires transmitting the changed layer with the whole fat-jar, although only a tiny fraction of it - the business logic - has actually changed.
Thus, separating runtime and business logic brings the advantage that as long as the runtime remains stable, only the small business logic has to be added to a layer and distributed.
To give you an impression of the effects, I have two practical examples.
I have created a Vert.x microservice. The image had the following layers with sizes:
- anapsix/alpine-java: ~50 MB
- fat-jar with vertx, dependencies and business logic: ~8MB
So every time something in the microservice changed, I had to distribute 8 MB. This doesn’t sound like much, unless you’re connected via 4G from a moving train (which I often am).
After splitting the fat-jar, I had the following layers:
- anapsix/alpine-java: ~50 MB
- dependencies: 7.2 MB
- business logic: 0.8 MB
Now I had to transmit only 0.8 MB, which is a matter of seconds even on a slow connection. But developers with fixed connections will also notice the difference when distributing such an image several times a day.
In another case, a customer project, the fat-jar was ~ 120 MB, with only 5 MB of business logic.
A Java EE application or microservice can be split into the application platform - the Java EE application server or Java EE libraries - which can be part of the image as a separate layer, and the dependencies the application requires. The layers of such a system could be:
- the operating system (base image)
- the application runtime
- application dependencies
- the business logic
More information on this topic is available in Building, packaging and distributing Java EE applications in 2017.
Now, how would you package such an application using existing tools? Typically, your projects use either Maven or Gradle. I assume Gradle has plugins similar to Maven’s, but as Maven has wider adoption, I’ll only show Maven examples.
First, it’s good to have all the runtime dependencies of your service in one place. You can achieve that using the maven-dependency-plugin:
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-dependency-plugin</artifactId>
  <version>3.0.2</version>
  <executions>
    <execution>
      <id>copy</id>
      <phase>package</phase>
      <goals>
        <goal>copy-dependencies</goal>
      </goals>
      <configuration>
        <includeScope>compile</includeScope>
        <outputDirectory>target/lib</outputDirectory>
      </configuration>
    </execution>
  </executions>
</plugin>
This will put all dependencies in the target/lib folder.
Create Skinny Jar
Next, you create a skinny jar (no fat-jar) with your application, but specify a main class as you would for a fat-jar, so the jar is executable (<mainClass>). Further, you instruct the plugin to list all dependencies in the manifest file of the jar (<addClasspath>), pointing to your lib folder (<classpathPrefix>). This will add every dependency jar to the Class-Path section of the manifest file, prefixed with lib/.
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-jar-plugin</artifactId>
  <version>3.0.2</version>
  <configuration>
    <finalName>my-skinny-service</finalName>
    <archive>
      <index>true</index>
      <manifest>
        <mainClass>com.example.Main</mainClass>
        <addClasspath>true</addClasspath>
        <classpathPrefix>lib/</classpathPrefix>
      </manifest>
    </archive>
  </configuration>
</plugin>
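The generated manifest inside the skinny jar then contains entries along these lines (the dependency jar names are placeholders):

```
Main-Class: com.example.Main
Class-Path: lib/dependency1.jar lib/dependency2.jar
```

The Class-Path entries are resolved relative to the jar’s own location, which is why the jar and the lib folder have to be deployed side by side.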
To execute your jar, the folder structure for the jar and its dependencies should be:
/
+- my-skinny-service.jar
+- lib/
   |  + dependency1.jar
   |  + dependency2.jar
   |  + ...
Create the Dockerfile
Building a docker image is now straightforward:
- use a base image (base layer)
- copy the dependencies (runtime layer)
- copy the application (business layer)
- specify the executable command
FROM anapsix/alpine-java:8_server-jre_unlimited
RUN ln -s /opt/jdk/bin/java /usr/bin/java
# copy all the dependencies
COPY ./target/lib/* /opt/service/lib/
# add your skinny jar in a separate step
ADD ./target/my-skinny-service.jar /opt/service/service.jar
EXPOSE 12345
CMD /usr/bin/java \
    -jar /opt/service/service.jar
The copy step creates a separate layer that only changes when your dependencies or their versions change. If you only have changes in your skinny jar, the changed layer is rather small, and pushing/pulling only requires transferring a few kilobytes of the skinny jar instead of a whole fat-jar.
If you have lots of 3rd-party dependencies, or project dependencies that change much more frequently than external dependencies, you can improve this further by splitting up the runtime layer using wildcard patterns.
The following statements will first copy all dependencies not starting with a specific prefix, using a wildcard pattern with negated character classes (found in this discussion). The second step then copies all dependencies with that prefix, resulting in two different layers.
# copy all except dependencies starting with abc
COPY ./target/lib/[^a][^b][^c]* /opt/service/lib/
# copy all dependencies starting with abc
COPY ./target/lib/abc* /opt/service/lib/
...
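A caveat worth knowing: this is a wildcard pattern, not a real “does not start with abc” expression - it excludes every file whose first character is a, or whose second is b, or whose third is c, even if the full prefix differs. A quick check of the same logic in a POSIX shell, where ! negates a character class instead of Docker’s Go-style ^ (the jar names are made up):

```shell
# Set up a throw-away lib folder with three example jars.
demo=$(mktemp -d)
mkdir "$demo/lib"
touch "$demo/lib/abc-core.jar" "$demo/lib/xyz-util.jar" "$demo/lib/axz-misc.jar"

# abc-core.jar is excluded as intended, but axz-misc.jar is excluded
# as well, because its first character is 'a'.
matched=$(cd "$demo" && ls lib/[!a][!b][!c]*)
echo "$matched"
```

With the two COPY statements above, a file like axz-misc.jar would match neither pattern and would be missing from the image entirely, so double-check the pattern against your actual dependency names.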
In this article I discussed why small docker images improve your development and testing process, and showed techniques for reducing image size and optimizing image layering for distribution to best match the frequency of changes.