A rapidly growing number of Axon Framework users are deploying their applications to Kubernetes. For good reason! One of the main use cases of Axon is building (evolutionary) microservices systems. Containers are ideal for providing isolation between microservice instances, without incurring the overhead of full-blown VM instances. But these containers have to be managed, and Kubernetes has emerged as the clear winner in this space, with Google Cloud (GKE), AWS (EKS) and Azure (AKS) now all offering managed Kubernetes options.

When distributing an Axon Framework application across microservices, one needs to set up distributed versions of Axon's CommandBus, EventBus/EventStore and QueryBus. While there are multiple ways of doing that, using AxonHub and AxonDB is by far the easiest. All that needs to happen at the application level (assuming Spring Boot) is to include the AxonHub Spring Boot Autoconfigure dependency and set the property axoniq.axonhub.servers to point to the AxonHub server. All discovery and routing pattern set-up will take place automatically from there, and the AxonHub console will provide you with a graphical overview of your distributed Axon application.
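
For example, with an AxonHub instance reachable on a host named axonhub.example.com (a placeholder of ours; the default AxonHub port applies when none is given), the application-side configuration boils down to one line in application.properties:

# placeholder host name for your AxonHub server
axoniq.axonhub.servers=axonhub.example.com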

In a setup like this, it would of course be desirable to run AxonHub and AxonDB themselves on Kubernetes as well (although, to be clear, this is in no way a requirement of AxonHub or AxonDB). In this blog, we'll explain how to set this up. Part 1 covers the non-clustered setup; part 2 of this series will cover the clustered setup. Except for the clustering, we'll stick to simple deployments (no SSL, no authentication, mostly default settings).

Getting the software

AxonDB is an optimized, scalable event store implementation, while AxonHub is an intelligent message routing platform for commands, events and queries. It is possible to use just AxonDB as an event store without AxonHub, but AxonHub currently requires AxonDB as its event store; so if you want to set up AxonHub, you are also going to need AxonDB. In this configuration, your application communicates only with AxonHub, and AxonHub contacts AxonDB for event storage, retrieval and tracking.

The (non-clustered) developer editions of AxonHub and AxonDB can be downloaded for free. They are distributed as zip files containing the jar binaries, documentation and other files. AxonIQ is working on publicly available Docker images and potentially Helm charts as a distribution model, but for now you will have to build your own Docker images using the downloaded binaries.

In the rest of this blog, we'll assume you're using AxonDB 1.2.3 or higher, and AxonHub 1.0.3 or higher. All the Docker and Kubernetes files we refer to below, and cheatsheets with commands to issue, are available through this GitHub repo.

Running AxonDB on Docker

We'll focus on AxonDB first, as AxonHub depends on it and is technically very similar: once we have AxonDB running on Docker and Kubernetes, expanding to AxonHub is trivial. Also, we'll get things running on Docker locally first, before deploying to Kubernetes.

AxonDB is a Spring Boot Java application, so running it on Docker in principle follows the Spring Boot with Docker getting-started guide. (But we can use the OpenJDK JRE image instead of the JDK one, as the full JDK is not needed for AxonDB.) Our jar file is axondb-server.jar. AxonDB takes advantage of Spring Boot's facilities for externalized configuration: specifically, we will have properties set in an axondb.properties file included in the container, which can be overridden by environment variables as required.

When setting up the properties and Dockerfile for AxonDB, we need to take the following into account:

Ports

There are two ports to EXPOSE: 8023 for the user interface, and 8123 for the data.

Persistent storage

AxonDB needs to store data persistently. There are two things to consider: the event data itself, and the control database that contains AxonDB's dynamic configuration (access tokens etc.). We can, for instance, store both in a VOLUME on /data by setting the following properties:

axoniq.axondb.file.storage=/data
axoniq.axondb.controldb-path=/data

Hostname

The AxonDB data connections use gRPC. On top of that, AxonDB implements a mechanism to support clustering, which we also have to take into account when running in non-clustered mode: the client connects to any AxonDB instance it knows, and that instance tells the client which instance it should connect to (the current master). This requires that the AxonDB instance knows a hostname on which the client can find it. By default, it uses the system hostname, which in Docker defaults to an arbitrary hex string like 6fd6e4b9f41e. You can of course instruct Docker to set another hostname using the -h flag, and when using networking between containers, that may be sufficient. In the common scenario where you run AxonDB in a Docker container but your application is started from an IDE on the host, not in Docker, you can override the hostname that AxonDB communicates to clients:

axoniq.axondb.hostname=localhost

As this is environment-specific, it makes sense to specify this as an environment variable rather than include it in axondb.properties in the image.

License

If you don't COPY an axoniq.license file into your container, it will run in Developer mode, which is restricted to 1 GB of event storage. Contact us for a trial or production license of the non-Developer editions.

Memory

When running a quick test with AxonDB on your local machine, you may ignore all memory settings, but in a production or Kubernetes scenario this will quickly lead to trouble. The following facts play a role:

  • Java 8 will look at host rather than container memory to set a default heap size, as explained on Carlos Sanchez' blog, unless you use specific settings.
  • AxonDB doesn't need a lot of heap. It uses memory-mapped files to keep recent event storage segments in memory for fast access, but this is off-heap memory. So if you give AxonDB more memory to make it faster, keep the heap at the same fixed size: 512MB seems to be a reasonable AxonDB heap size. Total memory for the container should be 1GB minimum; more memory may improve performance, but don't increase the heap size.

For this reason, it makes sense to set the heap size explicitly in the Dockerfile, as well as to provide an explicit memory limit when starting the container. A minimal Dockerfile sketch combining these considerations follows.
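
Here is such a sketch, assuming the zip's contents (axondb-server.jar, axondb.properties) have been copied next to the Dockerfile. The base image choice and file layout are our assumptions; the actual Dockerfiles we use are in the GitHub repo mentioned above.

FROM openjdk:8-jre

# Server binary and baseline configuration taken from the downloaded zip
COPY axondb-server.jar axondb.properties /axondb/
WORKDIR /axondb

# 8023 = GUI, 8123 = gRPC data port
EXPOSE 8023 8123

# Event store segments and control database both live under /data,
# matching axoniq.axondb.file.storage and axoniq.axondb.controldb-path
VOLUME /data

# Fixed 512MB heap: AxonDB keeps recent event segments in off-heap,
# memory-mapped files, so extra container memory should not go to the heap
ENTRYPOINT ["java", "-Xmx512m", "-jar", "axondb-server.jar"]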

Running AxonHub on Docker

This is very similar to running AxonDB, with the following differences (a small properties sketch follows the list):

  • There is no file storage, so no property axoniq.axondb.file.storage. We still do have the control database.
  • axoniq.axondb.controldb-path and axoniq.axondb.hostname become axoniq.axonhub.controldb-path and axoniq.axonhub.hostname
  • AxonHub needs access to an AxonDB instance. This can be configured through property axoniq.axondb.servers.
  • The ports are 8024 and 8124 instead of 8023 and 8123.
  • There are no memory-mapped files. Running with a 512MB heap in a 1GB container works fine - more memory won't do any good.
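
Putting the property-related differences together, the axonhub.properties baked into an 'axonhub-single' image might look like the sketch below; the AxonDB server name is a placeholder meant to be overridden per environment:

# Control database on the mounted volume
axoniq.axonhub.controldb-path=/data
# AxonDB instance(s) used for event storage; override per environment
axoniq.axondb.servers=axondb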

Once you have Docker images 'axondb-single' and 'axonhub-single' built as detailed above, you can create the following setup, which is useful for developing locally against AxonHub. To run both AxonDB and AxonHub as Docker containers on a shared user-defined network (needed so the AxonHub container can resolve the axondb name), while connecting your application running on the host to AxonHub, you could do:

docker network create axon
docker run --rm -p8023:8023 -m 1G --name axondb -h axondb --network axon axondb-single
docker run --rm -p8024:8024 -p8124:8124 -m 1G --network axon -e axoniq.axonhub.hostname=localhost -e axoniq.axondb.servers=axondb axonhub-single
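
Once both containers are up, a quick sanity check is to hit the health endpoint, assuming AxonHub exposes the same Spring Boot /health endpoint that AxonDB does (we'll rely on AxonDB's /health for the Kubernetes readiness probe later on):

curl http://localhost:8024/health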

From Docker to Kubernetes

Being able to run AxonDB and AxonHub on Docker is an important step towards running it on an actual Kubernetes cluster, but we're not quite there yet. Of course, we need to push the image to a container registry where the Kubernetes cluster can get it. I'm mainly running on Google Kubernetes Engine, and would therefore push the images to my Google Container Registry. The process for doing this consists of tagging your local build of the container image with the name of the registry you want to push it to, and then to do the actual push, like this:

docker tag axondb-single eu.gcr.io/axon-demo/axondb-single
docker push eu.gcr.io/axon-demo/axondb-single
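
If your local Docker daemon isn't authorized to push to the registry yet, you may first need to set up credentials; for Google Container Registry, for instance:

gcloud auth configure-docker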

(And similarly for Hub, of course.) So, now we have the images in our container registry, from which Kubernetes can run them. How do we do that? In case you're new to Kubernetes, some terminology may come in handy:

  • A pod in Kubernetes is a set of containers deployed together, working in tandem on the same machine. In our case, we work with pods consisting of a single container only, so the concepts of pod and container are more or less interchangeable.
  • A controller in Kubernetes is a mechanism to control the execution of one or more copies of a pod.
  • A service in Kubernetes is a configuration to expose running pods to other pods, or systems outside of the cluster.

The most naive way to start a Docker container on Kubernetes would be something like "kubectl run axondb --image=axondb-single", which runs the image using many defaults. This creates a Kubernetes workload consisting of a pod (with only the axondb-single container), controlled by a Deployment controller, which will try to keep the requested number of replicas (default 1) of the pod/container running. However, an important thing to note about Kubernetes is that pods are ephemeral: they don't have a stable network identity, and they don't have any persistent storage attached. The network identity issue by itself can be solved by putting a Service in front of the Deployment, but that doesn't solve the persistent storage issue. Also, standard Services don't play well with AxonDB and AxonHub clustering, which we'll cover in part 2 of this series.

Luckily, Kubernetes has a solution for this, called a StatefulSet, which is a controller just like the Deployment controller, but with better functionality for our requirements. As for network identity: StatefulSets have a name, and if the name of our StatefulSet is for instance axondb, the hostnames of instances will be axondb-0, axondb-1 etc. This provides us with the stable network identity we need.

To make our AxonDB and AxonHub instances available for use, we need to do two things:

  • Expose the GUIs to operators outside the Kubernetes cluster. The easiest way of doing this is to configure a Kubernetes LoadBalancer service in front of them; this is what we've done in the demo scripts on GitHub. Alternatively, you could use an Ingress to get more control.
  • Expose the data ports to your Axon-based microservices running on the Kubernetes cluster. To that end, you configure a headless service, which publishes each of the nodes (axondb-0, axondb-1) individually in DNS, but also creates an SRV record pointing to all nodes. Strictly speaking, for single-node deployments, we could do without the headless service, but using it makes our life easier now and is needed when transitioning to clustered mode. Knowing how this mechanism works, it becomes easy to set the right values for axoniq.axondb.domain (AxonDB) and for axoniq.axondb.servers and axoniq.axonhub.domain (AxonHub) through environment properties specified in the Kubernetes yaml files, as shown in the sketch after this list.
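
As a sketch, the environment section of an AxonHub StatefulSet named axonhub, behind a headless service also named axonhub in the default namespace (these names are our assumptions, mirroring the AxonDB yaml below), might look like this:

env:
- name: axoniq.axonhub.domain
  value: "axonhub.default.svc.cluster.local"
- name: axoniq.axondb.servers
  value: "axondb-0.axondb.default.svc.cluster.local"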

Finally, we need to make sure that our data gets persisted, even when containers are killed and new ones are started. To make that happen, we configure persistent storage through a PersistentVolumeClaim template in the StatefulSet, and mount it on /data.

Combining all of the above, we can describe our complete Kubernetes deployment of AxonDB in a yaml file (also available in our GitHub repo, of course), like this:

apiVersion: v1
kind: Service
metadata:
  name: axondb
  labels:
    app: axondb
spec:
  ports:
  - port: 8123
    name: grpc
    targetPort: 8123
  clusterIP: None
  selector:
    app: axondb
---
apiVersion: v1
kind: Service
metadata:
  name: axondb-gui
  labels:
    app: axondb
spec:
  ports:
  - port: 8023
    name: gui
    targetPort: 8023
  selector:
    app: axondb
  type: LoadBalancer
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: axondb
spec:
  serviceName: "axondb"
  replicas: 1
  selector:
    matchLabels:
      app: axondb
  template:
    metadata:
      labels:
        app: axondb
    spec:
      containers:
      - name: axondb
        image: $DOCKER_IMAGE_PREFIX/axondb-single
        imagePullPolicy: Always
        env:
        - name: axoniq.axondb.domain
          value: "axondb.default.svc.cluster.local"
        ports:
        - containerPort: 8123
          protocol: TCP
          name: grpc
        - containerPort: 8023
          protocol: TCP
          name: gui
        volumeMounts:
        - name: data
          mountPath: /data
        readinessProbe:
          httpGet:
            path: /health
            port: 8023
          initialDelaySeconds: 5
          periodSeconds: 5
          timeoutSeconds: 1
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
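
Assuming you save this as axondb.yaml (a file name of our choosing), deploying and inspecting it looks like:

kubectl apply -f axondb.yaml
kubectl get pods -l app=axondb
kubectl get svc axondb axondb-gui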

Please be aware that the PersistentVolumeClaim won't work as-is on Minikube, because of its limited support for persistent volumes.

The README.md files in the GitHub repo provide some useful cheatsheets with the commands needed to execute all of this. Now you should be fully set up to run your Axon apps with AxonHub and AxonDB on Kubernetes! Stay tuned for part 2 of this series, in which we'll cover the clustered scenario.
