Building Reactive Microservices in Java
Asynchronous and Event-Based Application Design
Clement Escoffier
The O'Reilly logo is a registered trademark of O'Reilly Media, Inc. Building Reactive
Microservices in Java, the cover image, and related trade dress are trademarks of
O'Reilly Media, Inc.
While the publisher and the author have used good faith efforts to ensure that the
information and instructions contained in this work are accurate, the publisher and
the author disclaim all responsibility for errors or omissions, including without
limitation responsibility for damages resulting from the use of or reliance on this work.
Use of the information and instructions contained in this work is at your own risk. If
any code samples or other technology this work contains or describes is subject to
open source licenses or the intellectual property rights of others, it is your
responsibility to ensure that your use thereof complies with such licenses and/or rights.
978-1-491-98626-4
Table of Contents
1. Introduction
   Preparing Your Environment
5. Deploying Reactive Microservices in OpenShift
   What Is OpenShift?
   Installing OpenShift on Your Machine
   Deploying a Microservice in OpenShift
   Service Discovery
   Scale Up and Down
   Health Check and Failover
   Using a Circuit Breaker
   But Wait, Are We Reactive?
   Summary
6. Conclusion
   What Have We Learned?
   Microservices Aren't Easy
   The Evolution of the Microservice Paradigm
   Vert.x Versatility
CHAPTER 1
Introduction
This report goes beyond Vert.x and microservices. It looks at the
whole environment in which a microservice system runs and introduces
the many tools needed to get the desired results. On this journey,
we will learn:
JDK 1.8
Maven 3.3+
A command-line terminal (Bash, PowerShell, etc.)
CHAPTER 2
Understanding Reactive
Microservices and Vert.x
Microservices are not really a new thing. They arose from research
conducted in the 1970s and have come into the spotlight recently
because microservices are a way to move faster, to deliver value
more easily, and to improve agility. However, microservices have
roots in actor-based systems, service design, dynamic and autonomic
systems, domain-driven design, and distributed systems. The
fine-grained modular design of microservices inevitably leads
developers to create distributed systems. As I'm sure you've noticed,
distributed systems are hard. They fail, they are slow, they are bound
by the CAP and FLP theorems. In other words, they are very complicated
to build and maintain. That's where reactive comes in.
the flow of computation isn't controlled by the programmer but by
the stimuli. In this chapter, we are going to see how Vert.x helps you
be reactive by combining:
Reactive Programming
Observables are bounded or unbounded streams expected to
contain a sequence of values.
Singles are streams with a single value, generally the deferred
result of an operation, similar to futures or promises.
Completables are streams without value but with an indication
of whether an operation completed or failed.
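If RxJava's types are new to you, a rough plain-JDK analogy (not Vert.x or RxJava code, and only an approximation) is java.util.concurrent.CompletableFuture: a Single resembles a future of exactly one value, while a Completable resembles a future that conveys only completion or failure.

```java
import java.util.concurrent.CompletableFuture;

public class StreamAnalogy {
    // Akin to a Single<String>: one deferred value
    public static CompletableFuture<String> single() {
        return CompletableFuture.supplyAsync(() -> "hello");
    }

    // Akin to a Completable: completion or failure, but no value
    public static CompletableFuture<Void> completable() {
        return single().thenAccept(v -> { /* side effect only */ });
    }

    public static void main(String[] args) {
        System.out.println(single().join());
        completable().join();
    }
}
```

The analogy breaks down for Observables, which may carry many values over time; a future always carries exactly one. RxJava streams are also lazy: nothing happens until you subscribe.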
RxJava 2
While RxJava 2.x has recently been released, this report still uses the
previous version (RxJava 1.x). RxJava 2.x provides similar concepts.
RxJava 2 adds two new types of streams. Observable is used for
streams not supporting back-pressure, while Flowable is an
Observable with back-pressure. RxJava 2 also introduced the Maybe
type, which models a stream where there could be 0 or 1 item or an
error.
Reactive Streams
You may have heard of reactive streams (http://www.reactive-streams.org/).
Reactive streams is an initiative to provide a standard
for asynchronous stream processing with back-pressure. It provides
a minimal set of interfaces and protocols that describe the operations
and entities to achieve the asynchronous streams of data with
nonblocking back-pressure. It does not define operators manipulating
the streams, and is mainly used as an interoperability layer. This
initiative is supported by Netflix, Lightbend, and Red Hat, among
others.
Reactive Systems
While reactive programming is a development model, reactive systems
is an architectural style used to build distributed systems
(http://www.reactivemanifesto.org/). It's a set of principles used to
achieve responsiveness and build systems that respond to requests in
a timely fashion even with failures or under load.
To build such a system, reactive systems embrace a message-driven
approach. All the components interact using messages sent and
received asynchronously. To decouple senders and receivers,
components send messages to virtual addresses. They also register to the
virtual addresses to receive messages. An address is a destination
identifier such as an opaque string or a URL. Several receivers can
be registered on the same address; the delivery semantics depend
on the underlying technology. Senders do not block and wait for a
response. The sender may receive a response later, but in the meantime,
it can receive and send other messages. This asynchronous
aspect is particularly important and impacts how your application is
developed.
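To make the virtual-address idea concrete, here is a deliberately tiny in-memory bus in plain Java (an illustrative sketch, not the Vert.x event bus; all names are made up): consumers register on an address, send delivers a message to a single consumer, and publish delivers it to every consumer registered on the address.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

public class TinyBus {
    private final Map<String, List<Consumer<String>>> consumers = new HashMap<>();

    // A component registers on a virtual address to receive messages
    public void consumer(String address, Consumer<String> handler) {
        consumers.computeIfAbsent(address, a -> new ArrayList<>()).add(handler);
    }

    // send: point-to-point, one registered consumer gets the message
    // (here simply the first one; a real bus balances among them)
    public void send(String address, String message) {
        List<Consumer<String>> list = consumers.get(address);
        if (list != null && !list.isEmpty()) list.get(0).accept(message);
    }

    // publish: every consumer registered on the address gets the message
    public void publish(String address, String message) {
        consumers.getOrDefault(address, List.of()).forEach(h -> h.accept(message));
    }
}
```

In a real event bus, delivery is asynchronous and the consumers may live on other nodes of a cluster; the sender never blocks either way.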
Using asynchronous message-passing interactions provides reactive
systems with two critical properties:
Reactive Microservices
When building a microservice (and thus distributed) system, each
service can change, evolve, fail, exhibit slowness, or be withdrawn at
any time. Such issues must not impact the behavior of the whole
system. Your system must embrace changes and be able to handle
failures. Reactive microservices exhibit four traits:
Autonomy
Asynchronicity
Resilience
Elasticity
hello Vert.x
With very few exceptions, none of the APIs in Vert.x block the calling
thread. If a result can be provided immediately, it will be
returned; otherwise, a Handler is used to receive events at a later
time. The Handler is notified when an event is ready to be processed
or when the result of an asynchronous operation has been computed.
In the last snippet, compute does not return a result anymore, so you
don't wait until this result is computed and returned. You pass a
Handler that is called when the result is ready.
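This callback pattern can be sketched in plain Java. The Handler interface below mirrors the shape of Vert.x's io.vertx.core.Handler; the compute example itself is made up for illustration.

```java
public class AsyncCompute {
    // Same shape as Vert.x's Handler<E>: one method, called when the event is ready
    interface Handler<E> { void handle(E event); }

    // Instead of returning a result, compute hands it to the handler later
    static void compute(int input, Handler<Integer> resultHandler) {
        // the "asynchronous" work is immediate here for the sake of the example
        resultHandler.handle(input * 2);
    }

    public static void main(String[] args) {
        compute(21, result -> System.out.println("Result: " + result)); // prints "Result: 42"
    }
}
```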
Thanks to this nonblocking development model, you can handle a
highly concurrent workload using a small number of threads. In
most cases, Vert.x calls your handlers using a thread called an event
loop. This event loop is depicted in Figure 2-3. It consumes a queue
of events and dispatches each event to the interested Handlers.
The threading model proposed by the event loop has a huge benefit:
it simplifies concurrency. As there is only one thread, you are always
called by the same thread and never concurrently. However, it also
has a very important rule that you must obey: never block the event loop.
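The event loop itself can be sketched as one thread draining a queue of events (an illustrative toy, not Vert.x's actual implementation): because a single thread consumes the queue, handlers never run concurrently.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.Consumer;

public class EventLoop {
    private final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();
    private final Thread thread;

    public EventLoop() {
        thread = new Thread(() -> {
            try {
                while (true) queue.take().run(); // one thread, one event at a time
            } catch (InterruptedException e) { /* loop stopped */ }
        }, "event-loop-0");
        thread.start();
    }

    // Dispatch an event to a handler; the handler always runs on the loop thread
    public <E> void dispatch(E event, Consumer<E> handler) {
        queue.add(() -> handler.accept(event));
    }

    public void stop() { thread.interrupt(); }
}
```

This sketch also shows why the rule matters: a handler that blocks this one thread stalls every other event waiting in the queue.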
1 This code uses the lambda expressions introduced in Java 8. More details about this
notation can be found at http://bit.ly/2nsyJJv.
@Override
public void stop() throws Exception {
// Executed when the verticle is un-deployed
}
}
Worker Verticle
Unlike regular verticles, worker verticles are not executed on the
event loop, which means they can execute blocking code. However,
this limits your scalability.
Project Creation
Create a directory called my-first-vertx-app and move into this
directory:
mkdir my-first-vertx-app
cd my-first-vertx-app
Then, issue the following command:
mvn io.fabric8:vertx-maven-plugin:1.0.5:setup \
-DprojectGroupId=io.vertx.sample \
-DprojectArtifactId=my-first-vertx-app \
-Dverticle=io.vertx.sample.MyFirstVerticle
This command generates the Maven project structure, configures
the vertx-maven-plugin, and creates a verticle class
(io.vertx.sample.MyFirstVerticle), which does nothing.
import io.vertx.core.AbstractVerticle;
/**
* A verticle extends the AbstractVerticle class.
*/
public class MyFirstVerticle extends AbstractVerticle {
@Override
public void start() throws Exception {
// We create an HTTP server object
vertx.createHttpServer()
// The requestHandler is called for each incoming
// HTTP request, we print the name of the thread
.requestHandler(req -> {
req.response().end("Hello from "
+ Thread.currentThread().getName());
})
.listen(8080); // start the server on port 8080
}
}
To run this application, launch:
mvn compile vertx:run
If everything went fine, you should be able to see your application
by opening http://localhost:8080 in a browser. The vertx:run goal
launches the Vert.x application and also watches code alterations. So,
if you edit the source code, the application will be automatically
recompiled and restarted.
Lets now look at the application output:
Hello from vert.x-eventloop-thread-0
The request has been processed by the event loop 0. You can try to
emit more requests. The requests will always be processed by the
same event loop, enforcing the concurrency model of Vert.x. Hit
Ctrl+C to stop the execution.
Using RxJava
At this point, let's take a look at the RxJava support provided by
Vert.x to better understand how it works. In your pom.xml file, add
the following dependency:
@Override
public void start() {
HttpServer server = vertx.createHttpServer();
// We get the stream of request as Observable
server.requestStream().toObservable()
.subscribe(req ->
// for each HTTP request, this method is called
req.response().end("Hello from "
+ Thread.currentThread().getName())
);
// We start the server using rxListen returning a
// Single of HTTP server. We need to subscribe to
// trigger the operation
server
.rxListen(8080)
.subscribe();
}
}
The RxJava variants of the Vert.x APIs are provided in packages with
rxjava in their name. RxJava methods are prefixed with rx, such as
rxListen. In addition, the APIs are enhanced with methods providing
Observable objects on which you can subscribe to receive the
conveyed data.
Summary
In this chapter, we learned about reactive microservices and Vert.x,
and you created your first Vert.x application. This chapter is by no
means a comprehensive guide; it just provides a quick introduction
to the main concepts. If you want to go further on these topics,
check out the following resources:
CHAPTER 3
Building Reactive Microservices
First Microservices
In this chapter we are going to implement the same set of microservices
twice. The first microservice exposes a hello service that we will
call hello microservice. Another consumes this service twice (concurrently).
The consumer will be called hello consumer microservice.
This small system illustrates not only how a service is served, but
also how it is consumed. On the left side of Figure 3-1, the microservices
are using HTTP interactions. The hello consumer microservice
uses an HTTP client to invoke the hello microservice. On the
right side, the hello consumer microservice uses messages to interact
with the hello microservice. This difference impacts the reactiveness
of the system.
Figure 3-1. The microservices implemented in this chapter using HTTP
and message-based interactions
Getting Started
Create a directory called hello-microservice-http and then gen
erate the project structure:
mkdir hello-microservice-http
cd hello-microservice-http
mvn io.fabric8:vertx-maven-plugin:1.0.5:setup \
-DprojectGroupId=io.vertx.microservice \
-DprojectArtifactId=hello-microservice-http \
The Verticle
Open src/main/java/io/vertx/book/http/HelloMicroservice.java.
The generated code of the verticle does nothing very
interesting, but it's a starting point:
package io.vertx.book.http;
import io.vertx.core.AbstractVerticle;
@Override
public void start() {
}
}
Now, launch the following Maven command:
mvn compile vertx:run
You can now edit the verticle. Every time you save the file, the
application will be recompiled and restarted automatically.
HTTP Microservice
It's time to make our HelloMicroservice class do something. Let's start with
an HTTP server. As seen in the previous chapter, to create an HTTP
server with Vert.x you just use:
@Override
public void start() {
vertx.createHttpServer()
.requestHandler(req -> req.response()
.end("hello"))
.listen(8080);
}
Once added and saved, you should be able to see hello at
http://localhost:8080 in a browser. This code creates an HTTP server on port 8080.
vertx.createHttpServer()
.requestHandler(router::accept)
.listen(8080);
}
Once we have created the Router object, we register two routes. The
first one handles requests on / and just writes hello. The second
route has a path parameter (:name). The handler appends the passed
value to the greeting message. Finally, we change the
requestHandler of the HTTP server to use the accept method of
the router.
If you didn't stop the vertx:run execution, you should be able to
open a browser to:
Producing JSON
JSON is often used in microservices. Let's modify the previous class
to produce JSON payloads:
Project Creation
As usual, let's create a new project:
mkdir hello-consumer-microservice-http
cd hello-consumer-microservice-http
mvn io.fabric8:vertx-maven-plugin:1.0.5:setup \
-DprojectGroupId=io.vertx.microservice \
-DprojectArtifactId=hello-consumer-microservice-http \
-Dverticle=io.vertx.book.http.HelloConsumerMicroservice \
-Ddependencies=web,web-client,rx
The last command adds another dependency: the Vert.x web client,
an asynchronous HTTP client. We will use this client to call the first
microservice. The command has also added the Vert.x RxJava binding
we are going to use later.
Now edit the src/main/java/io/vertx/book/http/HelloConsumerMicroservice.java
file and update it to contain:
package io.vertx.book.http;
import io.vertx.core.AbstractVerticle;
import io.vertx.core.json.JsonObject;
import io.vertx.ext.web.*;
import io.vertx.ext.web.client.*;
import io.vertx.ext.web.codec.BodyCodec;
@Override
public void start() {
client = WebClient.create(vertx);
vertx.createHttpServer()
.requestHandler(router::accept)
.listen(8081);
}
request.send(ar -> {
if (ar.failed()) {
rc.fail(ar.cause());
} else {
rc.response().end(ar.result().body().encode());
}
});
}
Notice the rxSend method calls. The RxJava methods from Vert.x
are prefixed with rx to be easily recognizable. The result of rxSend is
a Single, i.e., an observable of one element representing the
deferred result of an operation. The Single.zip method takes as
input a set of Singles and, once all of them have received their value,
calls a function with the results. Single.zip produces another
Single containing the result of the function. Finally, we subscribe.
This method takes two functions as parameters:
1. The first one is called with the result of the zip function (a
JSON object). We write the received JSON payload into the
HTTP response.
2. The second one is called if something fails (timeout, exception,
etc.). In this case, we respond with an empty JSON object.
Autonomous
Asynchronous
The main issue with the current design is the tight coupling between
the two microservices. The web client is configured to target the first
microservice explicitly. If the first microservice fails, we won't be
able to recover by calling another one. If we are under load, creating
a new instance of the hello microservice won't help us. Thanks to the
Vert.x web client, the interactions are asynchronous. However, as we
don't use a virtual address (destination) to invoke the microservice,
but its direct URL, it does not provide the resilience and elasticity
we need.
It's important to note that using reactive programming as in the
second microservice does not give you the reactive systems benefits. It
provides an elegant development model to coordinate asynchronous
actions, but it does not provide the resilience and elasticity we need.
Can we use HTTP for reactive microservices? Yes. But this requires
some infrastructure to route virtual URLs to a set of services. We
also need to implement a load-balancing strategy to provide elasticity
and health-check support to improve resilience.
Don't be disappointed. In the next section we will take a big step
toward reactive microservices.
In contrast to send, you can use the publish method to deliver the
message to all consumers registered on the address. Finally, the send
method can be used with a reply handler. This request/response
mechanism allows implementing message-based asynchronous
interactions between two components:
// Consumer
vertx.eventBus().consumer("address", message -> {
message.reply("pong");
});
// Sender
vertx.eventBus().send("address", "ping", reply -> {
if (reply.succeeded()) {
System.out.println("Received: " + reply.result().body());
} else {
// No reply or failure
reply.cause().printStackTrace();
}
});
If you are using Rx-ified APIs, you can use the rxSend method,
which returns a Single. This Single receives a value when the reply
is received. We are going to see this method in action shortly.
Message-Based Microservices
Let's reimplement the hello microservice, this time using an event
bus instead of an HTTP server to receive the request. The microservice
replies to the message to provide the response.
Project Creation
Let's create a new project. This time we are going to add the Infinispan
dependency, an in-memory data grid that will be used to manage
the cluster:
mkdir hello-microservice-message
cd hello-microservice-message
mvn io.fabric8:vertx-maven-plugin:1.0.5:setup \
-DprojectGroupId=io.vertx.microservice \
-DprojectArtifactId=hello-microservice-message \
-Dverticle=io.vertx.book.message.HelloMicroservice \
-Ddependencies=infinispan
Once generated, we may need to configure Infinispan to build the
cluster. The default configuration uses multicast to discover the
nodes. If your network supports multicast, it should be fine. Otherwise,
check the resource/cluster directory of the code repository.
This code retrieves the eventBus from the vertx object and registers
a consumer on the address hello. When a message is received, it
replies to it. Depending on whether or not the incoming message
has an empty body, we compute a different response. As in the
example in the previous chapter, we send a JSON object back. You
may be wondering why we added the served-by entry in the JSON.
mvn io.fabric8:vertx-maven-plugin:1.0.5:setup \
-DprojectGroupId=io.vertx.microservice \
-DprojectArtifactId=hello-consumer-microservice-message \
-Dverticle=io.vertx.book.message.HelloConsumerMicroservice \
-Ddependencies=infinispan,rx
Here we also add the Vert.x RxJava support to benefit from the
Rx-ified APIs provided by Vert.x. If you updated the Infinispan
configuration in the previous section, you need to copy it to this new
project.
Now edit the io.vertx.book.message.HelloConsumerMicroservice.
Since we are going to use RxJava, change the import statement
to match io.vertx.rxjava.core.AbstractVerticle. Then implement
the start method with:
@Override
public void start() {
EventBus bus = vertx.eventBus();
Single<JsonObject> obs1 = bus
.<JsonObject>rxSend("hello", "Luke")
.map(Message::body);
Single<JsonObject> obs2 = bus
.<JsonObject>rxSend("hello", "Leia")
.map(Message::body);
Single
.zip(obs1, obs2, (luke, leia) ->
new JsonObject()
.put("Luke", luke.getString("message")
+ " from "
+ luke.getString("served-by"))
.put("Leia", leia.getString("message")
+ " from "
+ leia.getString("served-by"))
)
.subscribe(
x -> req.response().end(x.encodePrettily()),
t -> {
t.printStackTrace();
req.response().setStatusCode(500)
.end(t.getMessage());
});
Elasticity
Elasticity is one of the characteristics not enforced by the HTTP
version of the microservice. Because the microservice was targeting a
specific instance of the microservice (using a hard-coded URL), it
didn't provide the elasticity we need. But now that we are using
messages sent to an address, this changes the game. Let's see how this
microservice system behaves.
Remember the output of the previous execution. The returned
JSON objects display the verticle that computed the hello message.
The output always displays the same verticle, indicating the same
instance. We expected this because we had a single instance running.
Now let's see what happens with two.
Stop the vertx:run execution of the Hello microservice and run:
mvn clean package
The two instances of Hello are used. The Vert.x cluster connects the
different nodes, and the event bus is clustered. Thanks to round-robin
delivery, the Vert.x event bus dispatches messages to the
available instances and thus balances the load among the different
nodes listening to the same address.
So, by using the event bus, we have the elasticity characteristic we
need.
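The round-robin balancing mentioned above can be sketched as a simple cursor over the instances registered on an address (illustrative plain Java, not Vert.x internals):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class RoundRobin<T> {
    private final List<T> targets;
    private final AtomicInteger cursor = new AtomicInteger();

    public RoundRobin(List<T> targets) { this.targets = targets; }

    // Each call returns the next target, wrapping around: this is how
    // load spreads evenly over the instances listening on one address.
    // floorMod keeps the index valid even if the counter overflows.
    public T next() {
        return targets.get(Math.floorMod(cursor.getAndIncrement(), targets.size()));
    }
}
```

With two instances registered, successive messages alternate between them, which matches the behavior observed in the output above.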
Resilience
What about resilience? In the current code, if the hello microservice
failed, we would get a failure and execute this code:
t -> {
t.printStackTrace();
req.response().setStatusCode(500).end(t.getMessage());
}
Even though the user gets an error message, we don't crash, we don't
limit our scalability, and we can still handle requests. However, to
improve the user experience, we should always reply in a timely
fashion to the user, even if we don't receive the responses from the
service. To implement this logic, we can enhance the code with a
timeout.
To illustrate this, let's modify the Hello microservice to inject
failures and misbehaviors. This code is located in the
microservices/hello-microservice-faulty directory of the code repository.
Summary
In this section, we learned how to develop an HTTP microservice
with Vert.x and also how to consume it. As we learned, hard-coding
the URL of the consumed service in the code is not a brilliant idea as
it breaks one of the reactive characteristics. In the second part, we
replaced the HTTP interactions with messaging, which showed
how messaging and the Vert.x event bus help build reactive
microservices.
So, are we there yet? Yes and no. Yes, we know how to build reactive
microservices, but there are a couple of shortcomings we need to
look at. First, what if you only have HTTP services? How do you
avoid hard-coded locations? What about resilience? We have seen
timeouts and retries in this chapter, but what about circuit breakers,
failovers, and bulkheads? Let's continue the journey.
CHAPTER 4
Building Reactive Microservice
Systems
Service Discovery
When you have a set of microservices, the first question you have to
answer is: how will these microservices locate each other? In order
to communicate with another peer, a microservice needs to know its
address. As we did in the previous chapter, we could hard-code the
address (event bus address, URLs, location details, etc.) in the code
or have it externalized into a configuration file. However, this
solution does not enable mobility. Your application will be quite rigid
and the different pieces won't be able to move, which contradicts
what we try to achieve with microservices.
Client- and Server-Side Service Discovery
Microservices need to be mobile but addressable. A consumer needs
to be able to communicate with a microservice without knowing its
exact location in advance, especially since this location may change
over time. Location transparency provides elasticity and dynamism:
the consumer may call different instances of the microservice using
a round-robin strategy, and between two invocations the microservice
may have been moved or updated.
Location transparency can be addressed by a pattern called service
discovery. Each microservice should announce how it can be
invoked and its characteristics, including its location of course, but
also other metadata such as security policies or versions. These
announcements are stored in the service discovery infrastructure,
which is generally a service registry provided by the execution
environment. A microservice can also decide to withdraw its service
from the registry. A microservice looking for another service can
also search this service registry to find matching services, select the
best one (using any kind of criteria), and start using it. These
interactions are depicted in Figure 4-1.
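The announce/lookup/withdraw cycle can be sketched as a tiny in-memory registry (purely illustrative; the Record fields and names here are made up, and a real registry such as the Vert.x service discovery stores richer metadata):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;
import java.util.function.Predicate;

public class ServiceRegistry {
    // A record: a service name plus its current location
    public static final class Record {
        private final String name;
        private final String location;
        public Record(String name, String location) {
            this.name = name;
            this.location = location;
        }
        public String name() { return name; }
        public String location() { return location; }
    }

    private final List<Record> records = new ArrayList<>();

    public void publish(Record r)  { records.add(r); }    // a service announces itself
    public void withdraw(Record r) { records.remove(r); } // and withdraws on shutdown

    // A consumer searches the registry with its own criteria and picks a match
    public Optional<Record> lookup(Predicate<Record> filter) {
        return records.stream().filter(filter).findFirst();
    }
}
```

Because consumers resolve a name to a location at lookup time, an instance can move or be replaced between two invocations without the consumer's code changing.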
Figure 4-3. Import and export of services from and to other service discovery mechanisms
Using Timeouts
When dealing with distributed interactions, we often use timeouts.
A timeout is a simple mechanism that allows you to stop waiting for
a response once you think it will not come. Well-placed timeouts
provide failure isolation, ensuring the failure is limited to the
microservice it affects and allowing you to handle the timeout and
continue your execution in a degraded mode.
client.get(path)
.rxSend() // Invoke the service
// We need to be sure to use the Vert.x event loop
.subscribeOn(RxHelper.scheduler(vertx))
// Configure the timeout, if no response, it publishes
// a failure in the Observable
.timeout(5, TimeUnit.SECONDS)
// In case of success, extract the body
.map(HttpResponse::bodyAsJsonObject)
// Otherwise use a fallback result
.onErrorReturn(t -> {
// timeout or another exception
return new JsonObject().put("message", "D'oh! Timeout");
})
.subscribe(
json -> {
System.out.println(json.encode());
}
);
Timeouts are often used together with retries. When a timeout
occurs, we can try again. Immediately retrying an operation after a
failure has a number of effects, but only some of them are beneficial.
If the operation failed because of a significant problem in the called
microservice, it is likely to fail again if retried immediately. However,
some kinds of transient failures can be overcome with a retry.
This last case is often ignored and can be harmful. In this case,
combining the timeout with a retry can break the integrity of the system.
Retries can only be used with idempotent operations, i.e., with
operations you can invoke multiple times without changing the result
beyond the initial call. Before using a retry, always check that your
system is able to handle reattempted operations gracefully.
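A bounded retry for idempotent operations can be sketched as follows (an illustrative helper, not a Vert.x or RxJava API; RxJava provides its own retry operator):

```java
import java.util.function.Supplier;

public class Retry {
    // Attempt the operation up to maxAttempts times; rethrow the last failure.
    // Only safe when the operation is idempotent.
    public static <T> T withRetry(Supplier<T> operation, int maxAttempts) {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return operation.get();
            } catch (RuntimeException e) {
                last = e; // a real implementation would also back off between attempts
            }
        }
        throw last;
    }
}
```

Note the bound: without a cap on attempts, a retry loop keeps hammering a failing service, which is exactly the behavior discussed next.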
Retry also makes the consumer wait even longer to get a response,
which is not a good thing either. It is often better to return a fallback
than to retry an operation too many times. In addition, continually
hammering a failing service may not help it get back on track. These
concerns lead us to the circuit breaker pattern.
Circuit Breakers
A circuit breaker is a pattern used to deal with repetitive failures. It
protects a microservice from calling a failing service again and
again. A circuit breaker is a three-state automaton that manages an
interaction (Figure 4-4). It starts in a closed state in which the circuit
breaker executes operations as usual. If the interaction succeeds,
nothing happens. If it fails, however, the circuit breaker makes a
note of the failure. Once the number of failures (or frequency of
failures, in more sophisticated cases) exceeds a threshold, the circuit
breaker switches to an open state. In this state, calls to the circuit
breaker fail immediately without any attempt to execute the underlying
interaction. Instead of executing the operation, the circuit
breaker may execute a fallback, providing a default result. After a
configured amount of time, the circuit breaker decides that the
operation has a chance of succeeding, so it goes into a half-open
state. In this state, the next call to the circuit breaker executes the
underlying interaction. Depending on the outcome of this call, the
circuit breaker resets and returns to the closed state, or returns to the
open state until another timeout elapses.
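The three-state automaton just described can be sketched in plain Java (an illustrative, non-thread-safe toy; Vert.x provides a real implementation in its vertx-circuit-breaker module):

```java
import java.util.function.Supplier;

public class CircuitBreaker {
    enum State { CLOSED, OPEN, HALF_OPEN }

    private State state = State.CLOSED;
    private int failures = 0;
    private final int threshold;       // failures before the breaker trips
    private final long resetTimeoutMs; // time in OPEN before trying again
    private long openedAt;

    public CircuitBreaker(int threshold, long resetTimeoutMs) {
        this.threshold = threshold;
        this.resetTimeoutMs = resetTimeoutMs;
    }

    public <T> T call(Supplier<T> operation, Supplier<T> fallback) {
        if (state == State.OPEN) {
            if (System.currentTimeMillis() - openedAt >= resetTimeoutMs) {
                state = State.HALF_OPEN; // give the operation one chance
            } else {
                return fallback.get();   // fail fast, no attempt made
            }
        }
        try {
            T result = operation.get();
            state = State.CLOSED;        // success resets the breaker
            failures = 0;
            return result;
        } catch (RuntimeException e) {
            failures++;
            if (state == State.HALF_OPEN || failures >= threshold) {
                state = State.OPEN;      // trip (or re-trip) the breaker
                openedAt = System.currentTimeMillis();
            }
            return fallback.get();
        }
    }

    public State state() { return state; }
}
```

The key property is visible in the OPEN branch: once tripped, the breaker answers from the fallback without touching the failing service, giving it time to recover.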
Summary
This chapter has addressed several concerns you will face when your
microservice system grows. As we learned, service discovery is a
must-have in any microservice system to ensure location transparency.
Then, because failures are inevitable, we discussed a couple of
patterns to improve the resilience and stability of your system.
Vert.x includes a pluggable service discovery infrastructure that can
handle client-side service discovery and server-side service discovery
using the same API. The Vert.x service discovery is also able to
import and export services from and to different service discovery
infrastructures. Vert.x includes a set of resilience patterns such as
timeout, circuit breaker, and failover. We saw different examples of
these patterns. Dealing with failure is, unfortunately, part of the job
and we all have to do it.
In the next chapter, we will learn how to deploy Vert.x reactive
microservices on OpenShift and illustrate how service discovery,
circuit breakers, and failover can be used to make your system
almost bulletproof. While these topics are particularly important,
don't underestimate the other concerns that need to be handled.
CHAPTER 5
Deploying Reactive Microservices
in OpenShift
What Is OpenShift?
Red Hat OpenShift v3 is an open source container platform. With
OpenShift you deploy applications running in containers, which
makes their construction and administration easy. OpenShift is built
on top of Kubernetes (https://kubernetes.io/).
Kubernetes (in blue in Figure 5-1) is a project with lots of functionality
for running clusters of microservices inside Linux containers at
scale. Google has packaged over a decade of experience with
containers into Kubernetes. OpenShift is built on top of this experience
and extends it with build and deployment automation (in green in
Figure 5-1). Use cases such as rolling updates, canary deployments,
and continuous delivery pipelines are provided out of the box.
Build Configuration
The build is the process of creating container images that will be
used by OpenShift to instantiate the different containers that make
up an application. OpenShift builds can use different strategies:
Deployment Configurations
A deployment configuration defines the instantiation of the image
produced by a build. It defines which image is used to create the
containers and the number of instances we need to keep alive. It also
describes when a deployment should be triggered. A deployment
also acts as a replication controller and is responsible for keeping
containers alive. To achieve this, you pass the number of desired
instances. The number of desired instances can be adjusted over
time or based on the load fluctuation (auto-scaling). The deployment
can also specify health checks to manage rolling updates and detect
dead containers.
Pods
A pod is a group of one or more containers, though it typically
consists of a single container. Pod orchestration, scheduling,
and management are delegated to Kubernetes. Pods are fungible,
and can be replaced at any time by another instance. For example, if
a container crashes, another instance will be spawned.
import io.vertx.core.AbstractVerticle;
import io.vertx.core.http.HttpHeaders;
import io.vertx.core.json.JsonObject;
import io.vertx.ext.web.*;
@Override
public void start() {
Router router = Router.router(vertx);
router.get("/").handler(this::hello);
router.get("/:name").handler(this::hello);
vertx.createHttpServer()
.requestHandler(router::accept)
.listen(8080);
}
Service Discovery
Now that we have the hello microservice deployed, let's consume it
from another microservice. The code we are going to deploy in this
section is contained in the openshift/hello-microservice-consumer-openshift
directory of the code repository.
To consume a microservice, we first have to find it. OpenShift provides
a service discovery mechanism. Service lookup can be done
using environment variables, DNS, or the Vert.x service discovery,
which we use here. The project pom.xml is configured to import the
Vert.x service discovery, the Kubernetes service importer, and a
server-side service discovery. You don't have to explicitly register the
service on the provider side, as the Fabric8 Maven plug-in declares a
service for us. Our consumer is going to retrieve this OpenShift
service and not the pods.
@Override
public void start() {
    Router router = Router.router(vertx);
    router.get("/").handler(this::invokeHelloMicroservice);

    // Create the service discovery instance
    ServiceDiscovery.create(vertx, discovery -> {
        // Look for an HTTP endpoint named "hello-microservice"
        // you can also filter on 'label'
        Single<WebClient> single = HttpEndpoint.rxGetWebClient(discovery,
            rec -> rec.getName().equals("hello-microservice"),
            new JsonObject().put("keepAlive", false));

        single.subscribe(
            client -> {
                // the configured client to call the microservice
                this.hello = client;
                vertx.createHttpServer()
                    .requestHandler(router::accept)
                    .listen(8080);
            },
            err -> System.out.println("Oh no, no service")
        );
    });
}
In the start method, we use the service discovery to find the hello
microservice. Then, if the service is available, we start the HTTP
server.
You may see a 503 error page, since the pod has not yet started. Just
refresh until you get the right page. So far, nothing surprising. The
displayed served-by values always indicate the same pod (as we
have only one).
You can also set the number of replicas using the oc command line:
# scale up to 2 replicas
oc scale --replicas=2 dc hello-microservice
# scale down to 0
oc scale --replicas=0 dc hello-microservice
Let's create a second instance of our hello microservice. Then, wait
until the second microservice has started correctly (the wait time is
annoying, but we will fix that later), and go back to the
hello-consumer page in a browser. You should see something like:
{
"luke" : "hello Luke hello-microservice-1-h6bs6",
"leia" : "hello Leia hello-microservice-1-keq8s"
}
If you refresh several times, you will see that the OpenShift service
balances the load between the two instances. Do you remember the
keep-alive setting we disabled? When the HTTP connection uses
keep-alive, OpenShift forwards requests to the same pod, providing
connection affinity. Note that in practice keep-alive is very
desirable, as it allows connections to be reused.
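The routing behavior just described can be illustrated with a toy balancer (hypothetical names; a sketch of the idea, not OpenShift's actual implementation): new connections are balanced round-robin, while requests reusing a keep-alive connection stick to the pod already chosen for that connection.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class AffinityBalancer {
    private final List<String> pods;
    private final Map<String, String> byConnection = new HashMap<>();
    private int next = 0;

    public AffinityBalancer(List<String> pods) {
        this.pods = pods;
    }

    // Pick a pod for a request arriving on the given connection.
    public String route(String connectionId, boolean keepAlive) {
        if (keepAlive && byConnection.containsKey(connectionId)) {
            return byConnection.get(connectionId); // connection affinity
        }
        String pod = pods.get(next++ % pods.size()); // round-robin
        if (keepAlive) {
            byConnection.put(connectionId, pod);
        }
        return pod;
    }
}
```

With keep-alive disabled, as in our WebClient configuration, every request goes through the round-robin step and may land on a different pod.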
In the previous scenario there is a small issue. When we scale up,
OpenShift starts dispatching requests to the new pod without
checking whether the application is ready to serve them. So, our
consumer may call a microservice that is not ready and get a failure.
There are a couple of ways to address this; one is to expose a health
check that reports whether the application has started:
@Override
public void start() {
    Router router = Router.router(vertx);

    router.get("/health").handler(
        HealthCheckHandler.create(vertx)
            .register("http-server-running",
                future -> future.complete(
                    started ? Status.OK() : Status.KO())));

    router.get("/").handler(this::hello);
    router.get("/:name").handler(this::hello);

    vertx.createHttpServer()
        .requestHandler(router::accept)
        .listen(8080, ar -> started = ar.succeeded());
}
When the pod is ready, OpenShift routes requests to it and shuts
down the old one. When we scale up, OpenShift doesn't route
requests to a pod that is not ready.
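Outside of Vert.x, the readiness pattern itself is small. The following sketch (hypothetical names, using only the JDK's built-in HTTP server rather than the Vert.x HealthCheckHandler) shows a /health endpoint flipping from 503 to 200 once the application reports itself ready, which is exactly the signal a platform probe acts on:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.util.concurrent.atomic.AtomicBoolean;

public class ReadinessProbe {
    private final HttpServer server;
    private final AtomicBoolean started = new AtomicBoolean(false);

    public ReadinessProbe() throws IOException {
        // Bind to an ephemeral port and expose /health.
        server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/health", exchange -> {
            int status = started.get() ? 200 : 503;
            byte[] body = (status == 200 ? "OK" : "KO").getBytes();
            exchange.sendResponseHeaders(status, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
    }

    public int port() {
        return server.getAddress().getPort();
    }

    public void markReady() {
        started.set(true); // the application finished starting up
    }

    public void stop() {
        server.stop(0);
    }

    // What a platform probe does: GET /health and look at the status code.
    public static int probe(int port) throws IOException {
        HttpURLConnection c = (HttpURLConnection)
            new URL("http://localhost:" + port + "/health").openConnection();
        return c.getResponseCode();
    }

    public static void main(String[] args) throws IOException {
        ReadinessProbe app = new ReadinessProbe();
        System.out.println(probe(app.port())); // not ready yet
        app.markReady();
        System.out.println(probe(app.port())); // ready: traffic can be routed
        app.stop();
    }
}
```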
Summary
In this chapter, we deployed microservices in OpenShift and saw
how Vert.x and OpenShift features combine to build reactive
microservices. Combining asynchronous HTTP servers and clients,
OpenShift services, load balancing, failover, and consumer-side
resilience gives us the characteristics of a reactive system.
This report focuses on reactive. However, when building a
microservice system, lots of other concerns need to be managed,
such as security, configuration, and logging. Most cloud platforms,
including OpenShift, provide services to handle these concerns.
If you want to learn more about these topics, check out the
following resources:
OpenShift website
OpenShift core concepts
Kubernetes website
OpenShift health checks documentation
CHAPTER 6
Conclusion
We are at the end of our journey together, but you have many new
avenues to explore. We have covered a lot of content in this small
report but certainly didn't cover everything! We have just scratched
the surface. There are more things to consider when moving toward
reactive microservices. Vert.x is also not limited to microservices
and can handle a large set of different use cases.
Of course, one microservice does not make an application; they
come in systems. To build systems, we have to use service discovery.
Service discovery enables location transparency and mobility, two
important characteristics in microservice systems. We also covered
resilience patterns, since microservice systems are distributed
systems and you need to be prepared for failure.
In the last chapter, we deployed our microservices on top of
OpenShift, an open source container platform based on Kubernetes.
The combination of Vert.x and OpenShift simplifies the deployment
and execution of reactive microservices and keeps the whole system
on track.
So, is this the end? No! It's only the end of the first stage.
Cloud platforms also provide configuration abilities. To mitigate this
diversity, Vert.x is capable of retrieving configuration from almost
anywhere.
Once you deploy and configure your microservices, you need to
keep your system on track. Logging, metrics, and tracing are
important concerns to keep in mind when designing and developing
a microservice system. You have to retrieve the logged messages, the
measures, and the traces from your microservices and aggregate
them in a centralized way to enable correlation and visualization.
While logging and monitoring are generally well understood,
distributed tracing is often ignored. However, traces are priceless in
microservice systems because they help you identify bottlenecks and
the affinity between microservices, and give you a good idea of the
responsiveness of your system.
Vert.x Versatility
While this report has focused on reactive microservices, this is only
a single facet of Vert.x. The richness of the Vert.x ecosystem lets you
develop lots of different applications. Thanks to its execution model,
your applications will be asynchronous and will embrace the reactive
system mantra.
Modern web applications provide a real-time, interactive experience
for users. Information is pushed to the browser and displayed
seamlessly. The Vert.x event bus can be used as the backbone to
deliver such an experience. The browser connects to the event bus
and receives messages, and can also send messages to interact
with the backend or with other browsers connected to the event bus.
The Internet of Things (IoT) is a thrilling domain but also a very
heterogeneous one. Smart devices use many different protocols, and
messages often have to be translated from one protocol to another.
Vert.x provides clients for a large set of protocols to implement these
translations, and its execution model can handle the high
concurrency required to build IoT gateways.
These two examples illustrate the richness of the Vert.x ecosystem.
Vert.x offers an almost infinite set of possibilities in which you are in
charge. You can shape your system using the programming language
you prefer and the development model you like. Don't let a
framework lead; you are in charge.
About the Author
Clement Escoffier (@clementplop) is a principal software engineer
at Red Hat. He has had several professional lives, from academic
positions to management. Currently, he is working as a Vert.x core
developer. He has been involved in projects and products touching
many domains and technologies such as OSGi, mobile app
development, continuous delivery, and DevOps. Clement is an active
contributor to many open source projects such as Apache Felix,
iPOJO, Wisdom Framework, and Eclipse Vert.x.