Modular Router Design for Vert.x Microservices

By Gerald Mücke | September 15, 2017


When developing microservices with the Vert.x framework, I have stumbled more than once over the question of how to organize Verticles and achieve a modular design. Vert.x is unopinionated and allows various ways of accomplishing this. In this article I’d like to discuss two options for building modular services.

In Vert.x the endpoints of a microservice are defined as routes of a router, which is bound to an HTTP server. An HTTP server may share a TCP port, which makes it possible to run multiple instances of a Verticle listening on the same port. A router, however, is not shared: when each Verticle sets its own router as the request handler of a server listening on a shared port, every incoming request is dispatched to only one of those routers, so the endpoints defined in the other routers are not reachable for that request. Thus, deploying Verticles with different routers on the same port won’t make all endpoints available. In my research I found out that this is not an uncommon problem (see here, here or here).

There are two options to solve this problem:

  • a single Verticle that mounts several sub-routers provided by different classes
  • sharing a single router instance across multiple Verticles

Single Verticle

The basic idea is that only a single Verticle creates the HTTP server and the main router. Endpoints of the various services are added by mounting sub-routers, as described in the Vert.x documentation. The sub-routers are provided by classes that define sets of endpoints and can be maintained by different teams. To achieve a dynamic composition of services, the ServiceLoader mechanism can be used so that additional services are mounted automatically if they are present on the classpath. Note, however, that with this approach it is not possible to deploy additional functionality at runtime.

First, we define an interface that the services have to implement. The interface defines the mount point for the sub-router and the actual router for the endpoints:

public interface ServiceEndpoint {
  String mountPoint();
  Router router(Vertx vertx);
}

Next, we create an implementation of the ServiceEndpoint:

public class OneService implements ServiceEndpoint {

  @Override
  public String mountPoint() {
    return "/1";
  }

  @Override
  public Router router(Vertx vertx) {
    Router router = Router.router(vertx);
    router.get("/one").handler(ctx -> ctx.response().end("One OK"));
    return router;
  }
}

In META-INF/services we create a file named after the fully qualified name of the ServiceEndpoint interface, i.e. io.devcon5.vertx.examples.ServiceEndpoint, containing the fully qualified names of one or more implementing classes, i.e. io.devcon5.vertx.examples.OneService. These implementations can live in the same jar or in other jars; the ServiceLoader mechanism collects all of them.
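For this example, the registration file contains just the implementing class name (the src/main/resources prefix is an assumption about the project layout):

#src/main/resources/META-INF/services/io.devcon5.vertx.examples.ServiceEndpoint
io.devcon5.vertx.examples.OneService

We then load all implementations during initialization of the HTTP server and mount their routers onto the central router: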

public class ServerVerticle extends AbstractVerticle {

  @Override
  public void start(final Future<Void> startFuture) throws Exception {

    //create a service loader for the ServiceEndpoints
    ServiceLoader<ServiceEndpoint> loader = ServiceLoader.load(ServiceEndpoint.class);

    //iterate over all endpoints and mount all their endpoints to a single router
    Router main = StreamSupport.stream(loader.spliterator(), false)
                               .collect(() -> Router.router(vertx), //the main router
                                        (r, s) -> r.mountSubRouter(s.mountPoint(), s.router(vertx)),
                                        (r1, r2) -> {});

    //bind the main router to the http server
    vertx.createHttpServer().requestHandler(main::accept).listen(8080, res -> {
      if (res.succeeded()) {
        startFuture.complete();
      } else {
        startFuture.fail(res.cause());
      }
    });
  }
}

Now you can dynamically put together services using a single server. The ServerVerticle can be deployed multiple times to achieve scalability. This solution is probably more “vertxy” than the shared-router solution because it doesn’t rely on sharing an instance between threads, but the service endpoints cannot be used and deployed as separate Verticles.
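To scale out, the ServerVerticle can simply be deployed with multiple instances. A minimal sketch, assuming a standalone launcher class (the class name and instance count are not part of the example project):

import io.vertx.core.DeploymentOptions;
import io.vertx.core.Vertx;

public class Launcher {

  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();

    //deploy several instances of the ServerVerticle; they all listen on the shared port 8080
    DeploymentOptions options = new DeploymentOptions().setInstances(4);
    vertx.deployVerticle(ServerVerticle.class.getName(), options);
  }
}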

The full example can be found on GitHub.

Shared Router

The core idea is to use a single main router that is shared between Verticles, which mount their sub-routers onto it. Vert.x’s RouterImpl class is thread-safe, so it shouldn’t be a problem to share the instance between different Verticles potentially running on different threads.

We need a wrapper or extension of the Router to make it Shareable. The router can then be shared between Verticles, but not across a cluster. We also define a factory method for this shareable router which ensures that only one router is created, even if multiple Verticles try to create a new one.

public class ShareableRouter extends RouterImpl implements Shareable {
  public static Router router(Vertx vertx) {
    return (Router) vertx.sharedData()
                         .getLocalMap("router")
                         .computeIfAbsent("main", n -> new ShareableRouter(vertx));
  }

  ShareableRouter(final Vertx vertx) {
    super(vertx);
  }
}

Now each Verticle that defines a self-contained set of endpoints, including an HTTP server, can mount its router as a sub-router onto this shared router, or define routes on it directly, which is not recommended due to potential endpoint collisions.

public class HttpOneVerticle extends AbstractVerticle {

  @Override
  public void start(final Future<Void> startFuture) throws Exception {

    //create a router defining the endpoints of the service
    final Router router = Router.router(vertx);
    router.get("/one").handler(ctx -> ctx.response().end("OK one"));

    //mount the router as subrouter to the shared router
    final Router main = ShareableRouter.router(vertx).mountSubRouter("/1", router);

    vertx.createHttpServer().requestHandler(main::accept).listen(8080, res -> {
      if(res.succeeded()){
        startFuture.complete();
      } else {
        startFuture.fail(res.cause());
      }
    });
  }
}

Although this approach is a bit less “vertxy” than the other one, it is valid, as Vert.x is unopinionated, as Tim Fox pointed out. This approach has the advantage that each service can run standalone and may be deployed or undeployed dynamically at runtime.
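A minimal sketch of deploying and later undeploying such a service Verticle at runtime, assuming a hypothetical launcher class (not part of the example project):

import io.vertx.core.Vertx;

public class DynamicLauncher {

  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();

    //deploy the service Verticle at runtime
    vertx.deployVerticle(HttpOneVerticle.class.getName(), res -> {
      if (res.succeeded()) {
        String deploymentId = res.result();
        //the deployment id can later be used to remove the service again
        vertx.undeploy(deploymentId);
      }
    });
  }
}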

The full example can be found on GitHub.
