Developing
Open Cloud
Native
Microservices
Your Java Code in Action
Graham Charters,
Sebastian Daschner,
Pratik Patel & Steve Poole
The O’Reilly logo is a registered trademark of O’Reilly Media, Inc. Developing Open
Cloud Native Microservices, the cover image, and related trade dress are trademarks
of O’Reilly Media, Inc.
The views expressed in this work are those of the authors, and do not represent the
publisher’s views. While the publisher and the authors have used good faith efforts
to ensure that the information and instructions contained in this work are accurate,
the publisher and the authors disclaim all responsibility for errors or omissions,
including without limitation responsibility for damages resulting from the use of or
reliance on this work. Use of the information and instructions contained in this
work is at your own risk. If any code samples or other technology this work contains
or describes is subject to open source licenses or the intellectual property rights of
others, it is your responsibility to ensure that your use thereof complies with such
licenses and/or rights.
This work is part of a collaboration between O’Reilly and IBM. See our statement of
editorial independence.
978-1-492-05272-2
Table of Contents
Foreword . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . v
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
   What It Means to Be Cloud Native 1
   Why Java and the Java Virtual Machine for Cloud Native Applications? 5
   Summary 6
2. Open Technology Choices . . . . . . . . . . . . . . . . . . . . . . . . . . 7
3. Foundation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
   Rapidly Developing Service Implementations 23
   Persisting Service Data 29
   Implementing REST Services 34
   Summary 42
   Handling Service Faults 52
   Publishing and Consuming APIs 55
   Summary 57
5. Running in Production . . . . . . . . . . . . . . . . . . . . . . . . . . 59
   Reporting Health 60
   Monitoring JVM, Runtime, and Application Metrics 63
   Tracing Microservice Requests 69
   Summary 73
Foreword
quickly filling possible functionality gaps by delivering new APIs or
extending existing ones on demand.
All popular runtimes are implementing Java EE/Jakarta EE and
Eclipse MicroProfile APIs at the same time. You only have to down‐
load the Jakarta EE/MicroProfile runtime of your choice (a ZIP file),
extract the archive, and define the Java EE/MicroProfile API as a
“provided” dependency in Maven/Gradle (10 lines of code). In 10
minutes or less, the “Java Cloud Native Microservice Dream Team”
is ready to deliver value to the customer with the very first iteration.
This book gives you a pragmatic introduction to cloud native Java,
from the Java development kits and the open source ecosystem to a
minimalistic coffee shop example. With MicroProfile and Jakarta
EE, minimalism is the new best practice.
— Adam Bien
http://adam-bien.com
largely the result of careful evolutions of APIs and specifications,
release-to-release compatibility that has lasted for years, and multi-
vendor support and participation. MicroProfile only started in 2016
and is a demonstration of a truly open community’s ability to inno‐
vate quickly. With a cadence of three releases per year, MicroProfile
has rapidly evolved to a complete set of specifications for building,
deploying, and managing microservices in Java.
Eclipse community projects are always driven by great developers,
and at the Eclipse Foundation the cloud native Java community has
had many important contributors. I would like to recognize (in no
particular order) the contributions of just a few: Bill Shannon, Dmi‐
try Kornilov, Ed Bratt, Ivar Grimstad, David Blevins, Richard
Monson-Haefel, Steve Millidge, Arjan Tijms, John Clingan, Scott
Stark, Mark Little, Kevin Sutter, Ian Robinson, Emily Jiang, Markus
Karg, James Roper, Mark Struberg, Wayne Beaton, and Tanja Obra‐
dović are but a few individuals who have been leaders among this
community. My apologies in advance for forgetting someone from
this list!
I have known, or known of, the authors for many years, and during
that time they have been tireless champions of Java technologies.
This book will hopefully raise the profile of Java’s important role in
cloud native technologies, and lead to broader knowledge and adop‐
tion of the APIs, frameworks, technologies, and techniques which
will keep Java relevant for this new generation of cloud-based sys‐
tems and applications.
— Mike Milinkovich
Executive Director, Eclipse Foundation
Preface
What You Will Learn
By the end of this book you will understand the unique challenges
that arise when creating, running, and supporting cloud native
microservice applications. This book will help you decide what else
you need to learn when embarking on the journey to the cloud, and
how modern techniques can help with deployment of new applica‐
tions in general.
The book will briefly explain important considerations for designing
an application for the cloud. It covers the key principles for micro‐
services of distribution, data consistency, continuous delivery, and
the like, which not only are important for a cloud application but
also support the operational and deployment needs of modern 24x7,
highly available Java-based applications in general.
Finally, in Chapter 6 we’ll wrap things up and talk about future
directions for open cloud native Java applications.
Using Code Examples
Supplemental material (code examples, exercises, etc.) is available
for download at https://github.com/IBM/ocn-java.
This book is here to help you get your job done. In general, if exam‐
ple code is offered with this book, you may use it in your programs
and documentation. You do not need to contact us for permission
unless you’re reproducing a significant portion of the code. For
example, writing a program that uses several chunks of code from
this book does not require permission. Selling or distributing a CD-
ROM of examples from O’Reilly books does require permission.
Answering a question by citing this book and quoting example code
does not require permission. Incorporating a significant amount of
example code from this book into your product’s documentation
does require permission.
We appreciate, but do not require, attribution. An attribution usu‐
ally includes the title, author, publisher, and ISBN. For example:
“Developing Open Cloud Native Microservices by Graham Charters,
Sebastian Daschner, Pratik Patel, and Steve Poole (O’Reilly). Copy‐
right 2019 Graham Charters, Sebastian Daschner, Pratik Patel, Steve
Poole, 978-1-492-05272-2.”
If you feel your use of code examples falls outside fair use or the per‐
mission given above, contact us at permissions@oreilly.com.
How to Contact Us
Please address comments and questions concerning this book to the
publisher:
Acknowledgments
First of all, we would like to thank the open source contributors who
put their time into the technologies covered in this book. Without
their experience and contributions to open source Java and cloud
technologies, it would not be possible to have this rich ecosystem on
the cutting edge of software development.
We would also like to thank our reviewers, Chris Devers, Neil Pat‐
terson, Anita Chung, and Kevin Sutter, who helped shape this book
and make it of the highest quality.
We would also like to thank Adam Bien and Mike Milinkovich for
contributing the foreword, and for their leadership in community
activities around open source Java and the open Java work at the
Eclipse Foundation.
Finally, we would like to thank the team at O’Reilly for bearing with
us as we worked to create this book. We hope you enjoy reading it!
CHAPTER 1
Introduction
Microservice Oriented
First, cloud native architectures break from the traditional design of
monoliths and rely on containers (e.g., Docker) and serverless com‐
pute platforms. This means that applications are smaller and com‐
posed at a higher level. We no longer extend an existing application’s
functionality by creating or importing a library into the application,
which makes the application binary larger, slower to start and exe‐
cute, and more memory-intensive. Instead, with cloud native we
build new microservices to create a new feature and integrate it with
the rest of the application through endpoint-style interfaces (such as
HTTP) and event-style interfaces (such as a messaging platform).
For example, say we needed to add image upload capability to our
application. In the past, we would have imported a library to imple‐
ment this functionality, or we would have written an endpoint
where we accept a binary type through a web form and then saved
the image locally to our server’s disk. In a cloud native architecture,
however, we would create a new microservice to encapsulate our
image services (upload, retrieve, etc.). We would then save and
retrieve this image, not to disk, but to an object storage service in
the cloud (either one we would create or an off-the-shelf service
provided by our cloud platform).
This microservice also exposes an HTTP endpoint, but it is isolated
from the rest of the application. This isolation allows it to be devel‐
oped and tested without having to involve the rest of the application
—giving us the ability to develop and deploy faster. As it is not
tightly coupled with the rest of the application, we can also easily
add another way to invoke the routine(s): hooking it into an event-
driven messaging system, such as Kafka.
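The image service described above might expose a storage abstraction like the following plain-Java sketch. These types are hypothetical (the book does not define them); in production the implementation would delegate to the cloud provider's object storage API rather than an in-memory map:

```java
import java.util.Map;
import java.util.Optional;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical contract for the image microservice's storage layer.
interface ImageStore {
    String save(byte[] imageBytes);          // returns the new image's ID
    Optional<byte[]> retrieve(String id);
}

// In-memory stand-in for a cloud object storage service. In production
// this would call the cloud provider's object storage API instead.
class InMemoryImageStore implements ImageStore {
    private final Map<String, byte[]> objects = new ConcurrentHashMap<>();

    @Override
    public String save(byte[] imageBytes) {
        String id = UUID.randomUUID().toString();
        objects.put(id, imageBytes.clone());
        return id;
    }

    @Override
    public Optional<byte[]> retrieve(String id) {
        return Optional.ofNullable(objects.get(id));
    }
}
```

Because the endpoint depends only on the `ImageStore` interface, swapping the in-memory version for a real object storage client does not touch the rest of the application.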
Loosely Coupled
This brings us to our second main discussion point on cloud native:
we rely more on services that are loosely coupled, rather than tightly
coupled monolith silos. For example, we use an authentication
microservice to do the initial authentication. We then use JSON
Web Tokens (JWT) to provide the necessary credentials to the rest
of our microservices suite to meet the security requirements of our
application.
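A JWT is three Base64url-encoded segments (header, payload, signature) joined by dots, which is what makes it cheap to pass between microservices. The helper below is not from the book and merely decodes the claims segment for illustration; a real service must verify the signature (for example, via a MicroProfile JWT implementation) before trusting any claim:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

class JwtPeek {
    // Decodes the claims (payload) segment of a JWT.
    // Illustration only: no signature verification is performed!
    static String payloadJson(String jwt) {
        String[] segments = jwt.split("\\.");
        if (segments.length < 2) {
            throw new IllegalArgumentException("not a JWT: " + jwt);
        }
        byte[] decoded = Base64.getUrlDecoder().decode(segments[1]);
        return new String(decoded, StandardCharsets.UTF_8);
    }
}
```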
The loose coupling of these small, independent microservices pro‐
vides immense benefits to us software developers and the businesses
that run on these platforms:
Cost
We are able to adapt our compute needs to demand (known as
elastic computing).
Maintainability
We are able to update or bug-fix one small part of our applica‐
tion without affecting the entire app.
Flexibility
We can introduce new features as new microservices and do
staged rollouts.
Speed of development
As we are not doing low-level management of servers (and
dynamic provisioning), we can focus on delivering features.
Security
As we are more nimble, we can patch parts of our application
that need urgent fixes without extensive downtime.
Twelve-Factor Methodology
Along with these high-level cloud native traits, we should also dis‐
cuss the twelve-factor application methodology, a set of guidelines for
building applications in cloud native environments. You can read
about them in detail on the twelve-factor website; briefly, the twelve
factors are: one codebase tracked in version control; explicitly
declared and isolated dependencies; config stored in the environment;
backing services treated as attached resources; strictly separated
build, release, and run stages; stateless processes; services exported
via port binding; scaling out via the process model; disposability
(fast startup and graceful shutdown); dev/prod parity; logs treated as
event streams; and admin tasks run as one-off processes.
Following these best practices will help developers succeed and will
reduce manual tasks and “hacks” that can impede the speed of
development. It will also help ensure the long-term maintainability
of your application.
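As a concrete illustration of factor III (config stored in the environment), a service can read its settings from environment variables with a sensible fallback instead of baking them into the binary. The variable name and default below are illustrative, not from the book:

```java
import java.util.Map;

class ServiceConfig {
    // Reads PORT from the given environment map, falling back to a default.
    // Passing the map in (rather than calling System.getenv() directly)
    // keeps the logic deterministic and testable.
    static int port(Map<String, String> env) {
        String raw = env.get("PORT");
        return raw == null ? 8080 : Integer.parseInt(raw);
    }
}
```

In the application itself you would call `ServiceConfig.port(System.getenv())`.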
Rapid Evolution
Cloud native development brings new challenges; for example,
developers often see the loss of direct access to the “server” on which
their application is running as overly burdensome. However, the
tools available for building and managing microservices, as well as
cloud provider tools, help developers to detect and troubleshoot
warnings and errors. In addition, technologies such as Kubernetes
enable developers to manage the additional complexity of more
instances of their microservices and containers. The combination of
microservices required to build a full, large application, often
referred to as a service mesh, can be managed with a tool such as
Istio.
Cloud native is rapidly evolving as the developer community better
understands how to build applications on cloud computing plat‐
forms. Many companies have invested heavily in cloud native and
are reaping the benefits outlined in this section: faster time to mar‐
ket, lower overall cost of ownership, and the ability to scale with
customer demand.
It’s clear that cloud native is becoming the way to create modern
business applications. As the pace of change is fast, it is important to
understand how to get the best out of the technology choices
available.
Why Java and the Java Virtual Machine for
Cloud Native Applications?
In principle any programming language can be used to create
microservices. In reality, though, there are several factors that
should influence your choice of a programming language.
Software Design and Cloud Solutions
Finally, it’s important to understand that a modern cloud native
application is more complex than traditional applications. This
complexity arises because a cloud native solution operates in a world
where scale, demand, and availability are increasingly significant
factors. Cloud native applications have to be highly available, scale
enormously, and handle wide-ranging and dynamic demand. When
creating a solution, you must look carefully at what the program‐
ming language offers in terms of reducing design issues and bugs.
The Java runtime, with its object-oriented approach and built-in
memory management, helps remove problems that are challenging
to analyze locally, let alone in a highly dynamic cloud environment.
Java and the JVM address these challenges by enabling developers to
create applications that are easier to debug, easier to share, and less
prone to failure in challenging environments like the cloud.
Summary
In this chapter we outlined the key principles of being cloud native,
including being microservice oriented, loosely coupled, and respon‐
sive to the fast pace of change. We summarized how following the
twelve-factor methodology helps you succeed in being cloud native
and why Java is the right choice for building cloud native
applications.
In the next chapter we explore the importance of an open approach
when choosing what to use for your cloud native applications. An
open approach consists of open standards to help you interoperate
and insulate your code from vendor lock-in, open source to help
you reduce costs and innovate faster, and open governance to help
grow communities and ensure technology choices remain inde‐
pendent of undue influence from any one company. We’ll also out‐
line our technology choices for the cloud native microservices
shown in the remainder of the book.
CHAPTER 2
Open Technology Choices
We’ll start by talking about the role of open source, why it’s impor‐
tant to us, and how to evaluate candidate projects. Next, we’ll talk
about the role of open standards, the benefits they provide, and their
relationship to open source. We’ll then talk about open governance,
its importance across both open source and open standards, and
how it helps in building open communities.
We’ll show you how to use your new understanding of open source,
standards, and governance to make informed open technology
choices. Finally, we’ll describe the open technologies we’ve chosen to
use in the subsequent chapters of this book.
Open Source
Most developers are aware of the concept of open source. It’s worth‐
while, however, to distinguish between free software and open source
software. The benefits of free (as in no cost) software are evident to
all of us. It’s great to be able to use something without paying for it.
It’s important, though, to understand if there are hidden costs to
using “free” software. Quite often there are usage restrictions, such
as time limits, that impact your ability to use the software as part of
a solution.
In essence, open source means the source code is available for any‐
one to see. But that’s not the only reason we care about open source
—in fact, as users of open source, we very rarely look at the source
code. So why does open source matter to us? There are a number of
reasons:
Cost
I can use it without paying, and there is both community and
paid support available.
Speed
If I use this open source project, it will help me get my job done
quicker.
Influence
If I find a problem, I can fix it or raise an issue. If I need an
enhancement, I can contribute it or request a new feature.
Community
If I have a problem, the community will hopefully help; I don’t
need to open a support ticket. I can also become part of the
community.
Opportunity
Significant, diverse open source projects grow larger markets
where there will be demand for my skills.
As you can see, many of the characteristics of open source—the rea‐
sons we’d want to use it—don’t stem from its simple availability. For
example, just because the source is available doesn’t mean there’s a
community to support it. The reality is, there’s a wide-ranging set of
attributes of open source projects that you need to consider when
choosing what is right for you, including the following:
Open Community
Open source is not just about the code: it’s also about how the code is
designed, created, tested, and supported. It’s about the people
involved and how they interact. There are many approaches to open
source, from a single individual sharing their work on GitHub all
the way to large team efforts, spread out across companies and geog‐
raphies.
So what should we look for in an open source community?
Vibrancy
Ideally, vibrant open source projects with multivendor participation
(i.e., multiple companies and/or individuals) and open governance
should be preferred, as they have been shown to offer the maximum
benefit to participants and consumers alike. However, many open
source projects are single-individual or single-company efforts.
The vibrancy of a project is measured in terms of factors such as the
number of active contributors, number of recent contributions, con‐
tributor company affiliations, and support for the user base. If a
project isn’t very active, has limited contributors, ignores outside
contribution, or neglects its user base, these are all reasons to think
twice before adopting. Here are some questions to ask yourself when
vetting a project:
Vendor neutrality
Software development is a creative process. Developers spend valua‐
ble time designing, writing, testing, fixing, and even supporting the
software. Some developers do contribute to open source for the love
of it. They enjoy giving something to the community, or it scratches
a metaphorical itch. However, for the vast majority of open source
projects, open source is a company’s business model. It might be that
it facilitates collaboration with other companies and so enables mar‐
kets to grow, similar to collaboration on open standards. Often such
A focus on collaboration
Like with open standards, open source has repeatedly demonstrated
that collaboration and sharing is in fact a business enabler and
amplifier. Examining the existing software communities, we can
easily see that the more successful ones are those that understand
that collaborating and innovating together creates a fast-paced eco‐
system whose potential audience is larger than the individual mar‐
kets of each of the participating companies.
Everyone benefits more from the creation of a large market where
no one participant owns the standards or the implementations. The
collaborative model also fuels innovation because everyone in the
community has a vested interest to keep it evolving and, since the
community is larger, the rate of innovation and quality is often
higher.
Open Standards
There’s no question that open standards are incredibly important.
Imagine a world without TCP/IP, HTTP, HTML, and the like.
Standards enable contracts for interoperability and portability. They
protect users against vendor and implementation lock-in, but also
enable ecosystems to grow and thrive by enabling collaboration and
more rapid and greater adoption. They enable vendors to collabo‐
rate or interoperate and then differentiate themselves through quali‐
ties of service, such as performance, footprint, or price. They also
enable developers to build skills applicable to more employers and
broader markets.
In the early 2000s, the collaboration between many companies and
individuals around Java EE created a vibrant ecosystem that made
Java the dominant language for enterprise applications. The APIs it
defined enabled multiple implementations (IBM WebSphere, Red Hat
JBoss, Oracle WebLogic, etc.), and to this day, those APIs continue
to thrive through implementations from IBM, Oracle, Payara, Tomi‐
tribe, and others.
Historically, the Enterprise Java standards have been created
through the Java Community Process (JCP). In recent years, Eclipse
has become the place for Enterprise Java standards, initially with the
development of the Eclipse MicroProfile APIs and more recently
with Jakarta EE (the open future of the Java EE specifications).
An open standards approach, in conjunction with structured and
impartial governance from an organization like the Eclipse Founda‐
tion, means that innovation will be maximized and all the partici‐
pants will have equal footing.
Open Governance
Having a clear, fair, and impartial community process is essential for
the long-term health and prosperity of any open community.
Choosing Application Technologies
When considering any project, you need to factor in how critical the
technology is to your solution, how complex it is, and how easy it
would be to switch out. If you’re choosing a single-vendor open
source project, picking one that exposes standard APIs greatly
reduces the associated risk.
Small, modular components in a framework (that is, ones that can
easily be replaced with alternatives) offer less risk than building on
top of a framework or runtime that is neither modular nor built on
standard APIs.
Selecting the right cloud native technologies comes down to a mix of
need, risk, complexity, and community. The right combination is
not necessarily the same for everyone. If you are seeking a Java-
based approach that will have the best chance of navigating the com‐
plexities of cloud, the Eclipse Foundation provides an excellent
platform. The Eclipse Foundation offers a vendor-neutral,
community-led, and (above all) community-focused attitude; the
Eclipse MicroProfile and Jakarta EE technologies are the best start‐
ing point in our opinion.
The Eclipse Foundation is the home of many great technologies and
has a justified reputation as a place where communities can grow
and work together to build leading-edge, best-of-breed solutions.
Originally set up as an openly governed organization for the devel‐
opment of open source tools and runtime technologies, the Eclipse
Foundation has more recently moved to support the development of
open standards. This means that, as organizations go, Eclipse is
pretty unique in being able to tick all three boxes: open standards,
open source, and open governance.
The Foundation is an exemplar of how communities can work
together to achieve something bigger than any one of the partici‐
pants could do on their own. Its reputation as a safe, fair, and active
home for open source and open standards is one of the reasons Ora‐
cle chose to contribute its Java EE codebase and specifications to the
Foundation. Let’s now take a look at Eclipse technologies in a little
more detail.
1 Java API for RESTful Web Services and JSON Binding/Processing, respectively.
Foundation
These are the foundational technologies for writing and calling
microservices. Most are from Java EE, but MicroProfile also
adds a really simple type-safe REST client.
Scale
These are not about scaling a single microservice; Kubernetes
does that perfectly well. Instead, they cover the APIs a developer
needs in order to start building large numbers of cloud native
microservices owned by independent teams—for example, the
ability to publish a service API definition for another team to
use, or gracefully handling problems with the services you
depend on, such as intermittent failures or slow responses.
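Gracefully handling an unreliable dependency is what MicroProfile Fault Tolerance provides declaratively, for example with an `@Retry` annotation. Stripped of the framework, the retry pattern itself reduces to a loop like this simplified sketch (an assumption for illustration, not the actual MicroProfile implementation):

```java
import java.util.function.Supplier;

class Retry {
    // Invokes the action up to maxAttempts times, returning the first
    // successful result; rethrows the last failure if every attempt fails.
    static <T> T withRetries(Supplier<T> action, int maxAttempts) {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return action.get();
            } catch (RuntimeException e) {
                last = e; // remember the failure and try again
            }
        }
        throw last;
    }
}
```

Declarative fault tolerance keeps this boilerplate out of the business logic, which is why the specification exists.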
Summary
In this chapter, we started by outlining the important principles of
good open technology around open source, open standards, and
open governance. We then showed how to use these principles to
evaluate open source and open standards choices. Finally, we talked
about the lead role the Eclipse Foundation has taken in producing
open cloud native technologies, detailing our choices of Eclipse
Jakarta EE and Eclipse MicroProfile.
We also talked about the role of the JVM and the runtime
characteristics required by cloud native applications.
Eclipse OpenJ9 JVM and discussed how its runtime profile of fast
startup and low memory footprint makes it a good choice for cloud
native Java applications. We also introduced AdoptOpenJDK as a
reliable source of prebuilt OpenJDK binaries.
In the next chapter, we’ll start getting our hands dirty in the code.
We’ll begin by diving further into the Jakarta EE and MicroProfile
technologies for implementing a REST service backed by a database.
In our implementation, we have chosen to use the Open Liberty
runtime. Open Liberty is a leader in implementing the Java EE and
MicroProfile specifications. It starts fast and can be customized to
include only the runtime components you need, making it great for
lightweight cloud native microservices. Because our code is using
Open Liberty through the Java EE and MicroProfile APIs, if we
change our mind, it’s relatively easy to switch to one of the many
other implementations available to us.
model and implement components that directly relate to our busi‐
ness use case.
The convenience of Enterprise Java is that the programming model
adds little weight to our individual classes and methods, thanks to
the declarative approaches of both Jakarta EE and Eclipse MicroPro‐
file. Typically, the classes of our core domain logic are simple Java
classes that are merely enhanced with a few annotations.
This is why in this book we start by only covering Java and CDI,
then gradually add more specifications as our application requires
some more cross-cutting features. With this plain approach you can
achieve a lot.
public class CoffeeShop {

    @Inject
    Orders orders;

    @Inject
    Barista barista;

    public CoffeeOrder orderCoffee(CoffeeOrder order) {
        // ...
        orders.store(order.getId(), order);
        return order;
    }

    // ...
}
The CoffeeShop class exposes the use cases for ordering a coffee,
retrieving a list of all orders or a single one, and processing unfin‐
ished orders. It defines two dependencies, Orders and Barista, to
which it delegates the further execution.
As you can see, the only Enterprise Java–specific declarations are the
injections of our dependencies via @Inject. Dependency injection,
as well as inversion of control in general, is one of the most useful
patterns for developing our applications. We developers are not
required to instantiate and wire dependent components, including
all their transitive dependencies, which means we can focus on effi‐
ciently writing the business domain logic. We define the dependen‐
cies as “we need to use this component in our class” without regard
to the instantiation. The life cycle of our instances, or beans, is man‐
aged by CDI.
The CoffeeType and OrderStatus types are Java enums that define
the available types of drinks (ESPRESSO, LATTE, POUR_OVER) and their
order statuses (PREPARING, FINISHED, COLLECTED).
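Based on the values just listed, these enums might be declared as follows (a sketch; the book does not show their source):

```java
// Drink types offered by the coffee shop
enum CoffeeType {
    ESPRESSO,
    LATTE,
    POUR_OVER
}

// Life cycle states of a coffee order
enum OrderStatus {
    PREPARING,
    FINISHED,
    COLLECTED
}
```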
The components that implement our business logic should be tested
well. Writing test cases is beyond the scope of this book; however,
with the plain-Java-first approach, we can efficiently develop tests
for the core domain logic.
Scopes
Besides dependency injection, CDI also enables us to define the
scope of the beans. The bean’s scope determines its life cycle—for
example, when it will be created and when it will be destroyed.
Instances of the CoffeeShop class are created with an implicit depen‐
dent scope; that is, the scope is dependent on the scope of whatever
uses them. If, for example, a request-scoped HTTP endpoint is
injected with our CoffeeShop bean, the CoffeeShop instance life
cycle will also exist within the same request scope.
If we need to define a different scope, say, for a class that exists only
once in our application, we annotate the class accordingly. The fol‐
lowing example shows the application-scoped Orders class:
@ApplicationScoped
public class Orders {
    // ...
}
The Orders class is responsible for storing and retrieving coffee
orders, including their status. The @ApplicationScoped annotation
declares that there is to be one instance of the Orders bean. No mat‐
ter how many injection points we have in our application—Coffee
Shop being one of them—they will always be injected with the same
instance.
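The book does not show the body of Orders; given its described responsibilities, a minimal sketch might look like the following. The map-based storage and the CoffeeOrder stand-in are assumptions, and the CDI @ApplicationScoped annotation is omitted so the sketch stays self-contained:

```java
import java.util.Collection;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal stand-in for the book's CoffeeOrder entity (assumed shape).
class CoffeeOrder {
    String id;
}

// Application-scoped in CDI, so a single instance serves every injection
// point; a thread-safe map matters because concurrent requests share it.
class Orders {
    private final Map<String, CoffeeOrder> orders = new ConcurrentHashMap<>();

    void store(String id, CoffeeOrder order) {
        orders.put(id, order);
    }

    CoffeeOrder retrieve(String id) {
        return orders.get(id);
    }

    Collection<CoffeeOrder> retrieveAll() {
        return orders.values();
    }
}
```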
The most commonly used scopes that are available in CDI are
dependent, request, application, and session. If for some reason
these capabilities are not enough, developers can write their own
scopes and extend the features of CDI. In a typical enterprise appli‐
cation, however, this is seldom required.
Configuration
For a typical application we’ll need to configure a few things, such as
how to look up and access external systems, how to connect to data‐
bases, or which credentials to use. The good news is that in a cloud
native world, we can externalize a lot of different kinds of configura‐
tion from the application level to the environment. We don’t have to
configure and change the application binaries; instead we can have
the different configuration values injected from the environment
(e.g., from Kubernetes ConfigMaps).
As developers, we want to focus on configuration that relates to the
application business logic. Depending on your business, your appli‐
cations might be required to behave differently in different
environments.
In general, we want to be able to inject configuration values with
minimal developer effort. We just covered dependency injection,
and ideally, we’d like to have a similar way to inject configured val‐
ues into our code.
With CDI we could write CDI producers that look up our config‐
ured values and make them available. But there’s an even easier
method: using MicroProfile Config.
@Inject
@ConfigProperty(name = "coffeeShop.order.defaultCoffeeType",
        defaultValue = "ESPRESSO")
private CoffeeType defaultCoffeeType;

// ...

public CoffeeOrder orderCoffee(CoffeeOrder order) {
    // ...
    orders.store(order.getId(), order);
    return order;
}

// ...
If the corresponding environment variable is not set in the running
application, the value will default to ESPRESSO.
The CoffeeType enum defines multiple values that can be resolved
by the string representations, and so we can choose the ESPRESSO
string representation as the default.
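Under the covers, MicroProfile Config uses a converter to turn the configured string into the enum constant. Stripped of the framework, the resolution logic is essentially the following simplified sketch (an assumption for illustration; the spec's built-in conversion is a case-sensitive valueOf match, whereas this sketch is more lenient):

```java
class CoffeeTypeResolver {
    enum CoffeeType { ESPRESSO, LATTE, POUR_OVER }

    // Resolves a configured string to a CoffeeType, falling back to the
    // ESPRESSO default when no value was provided.
    static CoffeeType resolve(String configuredValue) {
        if (configuredValue == null || configuredValue.isBlank()) {
            return CoffeeType.ESPRESSO;
        }
        return CoffeeType.valueOf(configuredValue.trim().toUpperCase());
    }
}
```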
@Basic(optional = false)
@Enumerated(EnumType.STRING)
@Column(name = "coffee_type")
private CoffeeType type;

@Basic(optional = false)
@Enumerated(EnumType.STRING)
private OrderStatus orderStatus;

@PersistenceContext
EntityManager entityManager;

// ...

@Transactional
public CoffeeOrder orderCoffee(CoffeeOrder order) {
    order.setId(UUID.randomUUID().toString());
    setDefaultType(order);
    OrderStatus status = barista.brewCoffee(order);
    order.setOrderStatus(status);
    return entityManager.merge(order);
}
Integrating RDBMSes
Our basic example shows the persistence configuration that is
required on the project code level. In order to integrate the database
into our application, we need to define the data source, in other
words, how to connect to the database.
Ideally, we can abstract the detailed configuration from our applica‐
tion configuration. As we saw earlier, environment-specific configu‐
ration should not be part of the application code but rather managed
by the infrastructure.
JPA manages the persistence of entities within persistence contexts.
The entity manager of a persistence context acts as a cache for the
entities it manages.
In this case, we are required to qualify the EntityManager lookups
with the corresponding persistence unit:
@PersistenceContext(unitName = "coffee-orders")
EntityManager entityManager;
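The persistence unit itself is declared in META-INF/persistence.xml. A minimal sketch might look like the following, assuming a JTA data source bound by the infrastructure (the data source name here is illustrative):

```xml
<persistence xmlns="http://xmlns.jcp.org/xml/ns/persistence"
             version="2.2">
  <persistence-unit name="coffee-orders" transaction-type="JTA">
    <jta-data-source>jdbc/CoffeeOrdersDataSource</jta-data-source>
  </persistence-unit>
</persistence>
```

The data source details (URL, credentials) then live in the server or container configuration, keeping environment-specific settings out of the application code, as discussed earlier.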
Transactions
As mentioned before, Enterprise Java makes it easy to execute busi‐
ness logic within transactions. This is required once we make use of
relational databases and when we want to ensure that our data is
stored in an all-or-nothing fashion. In one way or another, the
majority of enterprise applications require ACID (atomic, consis‐
tent, isolated, durable) transactions.
In a distributed system, a business use case might involve multiple
external systems, databases, or backend services. Traditionally, these
distributed transactions have relied on the use of a two-phase com‐
mit (2PC) protocol to coordinate updates across the external sys‐
tems. Achieving this consistency across distributed systems takes
time and resources, and thus it comes at the cost of availability. In
modern internet-scale systems, availability is often key, so other
techniques based around the goal of eventual consistency have been
employed. These include patterns such as Sagas and CQRS (com‐
mand query responsibility segregation). In moving to an eventual
consistency model, a system becomes more loosely coupled and
responsive, with the caveat that the data may be a little stale. For a
more detailed understanding of these principles, we recommend
you look at the literature on CAP theorem.1
In order to guarantee data consistency, our systems typically require
us to use transactions in which a single database participates. As
we’ve seen in the example, the @Transactional annotation enables
this functionality without requiring developers to write boilerplate
code or extensive configuration. If required, we can further refine
how multiple, nested methods are executed. For example, methods
that are executed within an active transaction can suspend the trans‐
action and start a new transaction that is active during their execu‐
tion, or they can be part of an existing transaction. For further
information, have a closer look at the semantics of the parameters of
the @Transactional annotation.

1 For more information, see this Illustrated Proof of the CAP theorem.
Boundary Classes
JAX-RS resource classes typically represent the boundaries, or the
entry points, of our business use cases. Clients make HTTP requests
and thus start a specific business process in the backend.
The following shows a JAX-RS resource class that implements the
HTTP handling for retrieving coffee orders:
import javax.ws.rs.*;
import javax.ws.rs.core.*;

@Path("/orders")
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
public class OrdersResource {

    @Inject
    CoffeeShop coffeeShop;

    @GET
    public List<CoffeeOrder> getOrders() {
        return coffeeShop.getOrders();
    }
}
@Inject
CoffeeShop coffeeShop;

@Context
UriInfo uriInfo;

@POST
public Response orderCoffee(CoffeeOrder order) {
    final CoffeeOrder storedOrder = coffeeShop.orderCoffee(order);
    return Response.created(buildUri(storedOrder)).build();
}
how objects are mapped. Let’s look at an example of how to pro‐
grammatically map our coffee order type:
@Path("/orders")
public class OrdersResource {

    @Inject
    CoffeeShop coffeeShop;

    @Context
    UriInfo uriInfo;

    @GET
    public JsonArray getOrders() {
        return coffeeShop.getOrders().stream()
                .map(this::buildOrder)
                .collect(JsonCollectors.toJsonArray());
    }

    // ...
}
@JsonbTransient
private final UUID id = UUID.randomUUID();
@JsonbTypeAdapter(CoffeeTypeDeserializer.class)
private CoffeeType type;
@JsonbProperty("status")
private OrderStatus orderStatus;
// methods omitted
}
Validating Resources
Request data that is received from clients needs to be sanitized
before it can be used further. For security reasons you should never
trust the data that has come from an external source, such as a web
form or REST request. In order to make it simple to validate input,
Enterprise Java ships with the Bean Validation API, which allows us
to declaratively configure the desired validation. The good news for
developers is that this standard integrates seamlessly with the rest of
the platform, including, for example, JAX-RS resources.
To ensure that only valid coffee orders are accepted in our applica‐
tion, we enhance our JAX-RS resource method with Bean Validation
constraints:
...
@POST
public Response orderCoffee(@Valid @NotNull CoffeeOrder order) {
    final CoffeeOrder storedOrder = coffeeShop.orderCoffee(order);
    return Response.created(buildUri(storedOrder)).build();
}
@JsonbTransient
private final UUID id = UUID.randomUUID();
@NotNull
@JsonbTypeAdapter(CoffeeTypeDeserializer.class)
private CoffeeType type;
The type of a coffee order must not be null either; that is, clients
must provide a valid enumeration value. The value is automatically
mapped by the provided JSON-B type adapter, which returns a null
if an invalid value is transmitted. Consequently, validation will fail
for any invalid values.
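The adapter itself is not shown in this excerpt; a sketch of the conversion logic it would apply could look like the following (the nested enum values are illustrative, and in the real class this method would implement JsonbAdapter's adaptFromJson):

```java
public class CoffeeTypeDeserializer {

    enum CoffeeType { ESPRESSO, LATTE }

    // Unknown or missing strings map to null, which subsequently fails
    // the @NotNull constraint during Bean Validation.
    static CoffeeType adaptFromJson(String value) {
        if (value == null) {
            return null;
        }
        try {
            return CoffeeType.valueOf(value.trim().toUpperCase());
        } catch (IllegalArgumentException e) {
            return null;
        }
    }
}
```
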
JAX-RS integrates with Bean Validation such that if any constraint
validations fail, an HTTP status code of 400 Bad Request is
automatically returned. Therefore, the presented example is already
sufficient to ensure that only valid orders can be sent to our applica‐
tion.
The following demonstrates an example Hypermedia response that
uses the concept of actions. There are a few Hypermedia-aware con‐
tent types that support these approaches, such as the Siren content
type on which this example is based:
{
  "class": [ "coffee-order" ],
  "properties": {
    "type": "ESPRESSO",
    "status": "PREPARING"
  },
  "actions": [
    {
      "name": "cancel-order",
      "method": "POST",
      "href": "https://api.coffee.example.com/cancellations",
      "type": "application/json",
      "fields": [
        { "name": "reason", "type": "text" },
        { "name": "order", "type": "number", "value": 123 }
      ]
    }
  ],
  "links": [
    { "rel": [ "self" ], "href": "https://api.coffee.example.com/orders/123" },
    { "rel": [ "customer" ], "href": "https://api.coffee.example.com/customers/234" }
  ]
}
In this example, the server enables the client to cancel a coffee order
and describes its usage in the cancel-order action. A new cancella‐
tion means the client would POST a JSON representation of a cancel‐
lation containing the order number and the reason to the provided
URL. In this way, the client requires knowledge only of the cancel-
order action and the origin of the provided information (i.e., the
order number, which is given, and the cancellation reason, which is
known only by the client and may be entered in a text field in the
UI).
This is one example of a content type that enables the use of Hyper‐
media controls. There is no real standard format that the industry
has agreed upon. However, this Siren-based example nicely demon‐
strates the concepts of links and actions. Whatever content type and
representation structure is being used, the projects need to agree
upon and document their usage. But as you can see, this way of
structuring the web services requires far less documentation, since
the usage of the API is baked into the resource representations.
Summary
As you’ve seen in this chapter, we can already implement the vast
majority of our enterprise application using plain Java and CDI. At
its core, our business logic is written in plain Java with some
dependency injection added to simplify defining dependent compo‐
nents. MicroProfile Config enables us to inject required configura‐
tion with minimal impact in the code. What’s left is mainly
integration into our overall enterprise system, as well as nonfunc‐
tional requirements such as resiliency and observability.
We saw how to integrate persistence into our applications using JPA
and how to map domain entities to relational databases with
minimal developer effort. Thanks to the previous specification work
being done in the JTA standard, we can define transactional behav‐
ior without obscuring the business code.
We can implement REST endpoints using the JAX-RS standard with
JAX-RS resources. The declarative programming model allows us to
efficiently define the endpoints with default HTTP bindings. It also
allows us to further customize the HTTP request and response map‐
pings, if required.
Enterprise Java supports binding our entities to and from JSON,
either declaratively using JSON-B or programmatically using JSON-
P. Which approach makes more sense depends on the complexity of
the entity representations. The requests can be validated using Bean
Validation, which allows developers to specify the validation pro‐
grammatically or declaratively as well. Enterprise developers might
want to explore the concepts behind Hypermedia that allow further
decoupling from the server, make the server resources discoverable,
and make communication more flexible and adaptive.
CHAPTER 4
Cloud Native Development
HTTP requests are stateless:
they are simply requests to retrieve or modify state on the server.
There is no capability within the protocol to define any sort of rela‐
tionship between these calls. This design approach means that
HTTP services can balance workload effectively across multiple
servers (and the like) because any call can be routed to any available
responder.
This stateless design is effective for public data where the caller can
remain anonymous, but at some point it becomes essential to differ‐
entiate one client from another.
As mentioned before, prior to a client authenticating themselves, a
service does not need to be able to differentiate between callers.
They can remain anonymous and undifferentiated. Once a client is
authenticated to the server, however, then they are no longer anony‐
mous. The client may have particular powers to modify the state of
the server; hence, the server must ensure there are appropriate con‐
trols in place to prevent hijacking of the communications between
the user and the server.
Application architectures therefore face a continuous challenge in
determining how to communicate securely and statefully with an
authenticated client when the underlying protocol is stateless.
Header
{
"alg": "HS256",
"typ": "JWT"
}
The JSON that makes up the header typically has two properties.
The "alg" field defines the algorithm used to sign the token. The
"typ" field specifies the type of the token, which by definition is
"JWT".
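To make the structure concrete, the following plain-Java sketch builds an unsigned, purely illustrative token from a header and payload and decodes the header part again; real tokens are produced and signed by a JWT library:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class JwtStructureDemo {

    // A JWT is three base64url-encoded segments joined by dots:
    // header.payload.signature. The signature here is a placeholder.
    public static String buildToken(String headerJson, String payloadJson) {
        Base64.Encoder enc = Base64.getUrlEncoder().withoutPadding();
        return enc.encodeToString(headerJson.getBytes(StandardCharsets.UTF_8))
                + "." + enc.encodeToString(payloadJson.getBytes(StandardCharsets.UTF_8))
                + ".signature-placeholder";
    }

    // Recover the header JSON by decoding the first segment.
    public static String decodeHeader(String token) {
        String headerPart = token.split("\\.")[0];
        return new String(Base64.getUrlDecoder().decode(headerPart),
                StandardCharsets.UTF_8);
    }
}
```
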
Payload
This section contains claims, which are optional. There are multiple
predefined claims, some of which, although technically optional, are
generally essential.
Claims are logically grouped into three types:
Registered claims
Claims that are the most obviously useful or essential. The list
includes the token expiration time, the token’s issuer, and the
subject or principal of the token.
• "sub" or subject, the value that will be returned via the Micro‐
Profile JsonWebToken.getCallerPrincipal() method.
• "email", a private claim that can be accessed by using Json
WebToken.getClaim("email").
• "exp" or expiration date, the date and time after which this
token is considered to be invalid.
• "groups", the list of groups or roles the subject is a member of.
This can be automatically checked with the @RolesAllowed
annotation.
Signature
The signature is computed over the header and payload joined
together: the two parts are base64url encoded, joined with a dot,
and the result is signed.
The token can be signed either with a shared secret (for example,
HMAC with the HS256 algorithm) or with a private key (for exam‐
ple, RS256). In either case, the token recipient can assert that the
data has not been modified. With a shared secret, however, anyone
who knows the secret can create a fraudulent token, so the secret
must be protected by every party that verifies tokens. Signing with a
private key avoids this and provides additional proof that the token
issuer is who they claim to be, as only the issuer holds the private
key; recipients verify the signature using the corresponding public
key.
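The shared-secret case can be sketched in plain Java with the JDK's HMAC support; the signing input and secret below are illustrative, and a JWT library would normally do this for you:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class JwtSignatureSketch {

    // HS256-style signature: HMAC-SHA256 over
    // "<base64url(header)>.<base64url(payload)>", base64url encoded.
    // Anyone holding the same secret can recompute and verify it.
    public static String sign(String signingInput, String secret) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(
                    secret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
            byte[] sig = mac.doFinal(signingInput.getBytes(StandardCharsets.UTF_8));
            return Base64.getUrlEncoder().withoutPadding().encodeToString(sig);
        } catch (Exception e) {
            throw new IllegalStateException("HMAC signing failed", e);
        }
    }
}
```

The same secret always yields the same signature for the same input, which is exactly what lets a recipient detect tampering.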
Consuming JWT
On each endpoint class, use CDI to inject the current JWT instance:
@Path("/orders")
public class OrdersResource {

    @Inject
    private JsonWebToken jwtPrincipal;
Each endpoint using JWT support looks similar to the following
example:
@GET
@RolesAllowed({"member"})
@Path("coffeeTypes")
public Response listSpecialistCoffeeTypes() {
    // ...
    JsonValue claim = jwtPrincipal.getClaim("adult");
    if (claim == null || claim != JsonValue.TRUE) {
        return Response.status(Response.Status.FORBIDDEN).build();
    }
    // normal processing of order
}
Encrypting claims
Since JWT contents are essentially public, if the claim information is
sensitive, then it can be worthwhile to encrypt the contents of the
claim and even obscure the claim name itself. In this example, the
actual age of the user is needed:
"age" : "25"
0.99999^10 ≈ 0.9999 = 99.99%
0.99999^100 ≈ 0.999 = 99.9%
This code makes a remote call to the Barista service to retrieve the
status of an order. The client may throw an exception, for example,
if it is unable to connect to the Barista service. If this occurs, the
@Retry annotation will cause the request to be retried, and if none
of the retries is successful, the @Fallback annotation
causes the unknownBrewStatus method to be called, which returns
OrderStatus.UNKNOWN.
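Conceptually, the two annotations behave like the following plain-Java sketch; the real Fault Tolerance interceptors also honor delays, jitter, and which exception types abort retrying:

```java
import java.util.function.Supplier;

public class RetrySketch {

    // Try the action up to maxRetries additional times after the first
    // failure; if every attempt throws, return the fallback result instead.
    public static <T> T callWithRetry(Supplier<T> action, int maxRetries,
                                      Supplier<T> fallback) {
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            try {
                return action.get();
            } catch (RuntimeException e) {
                // swallow and retry
            }
        }
        return fallback.get();
    }
}
```
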
Summary
In this chapter we’ve discussed a number of areas you need to focus
on when developing cloud native microservices: end-to-end security
through your microservices flow, graceful handling of network and
service availability issues to prevent cascading failures, and simple
sharing and use of microservices APIs between teams. While these
areas aren’t unique to the microservices world, they’re essential to
success within it. Without these approaches, your microservice
teams will struggle to share and collaborate while remaining
autonomous.
We’ve shown how using the open standards of JWT and OpenAPI
and their integration into Enterprise Java through MicroProfile,
along with MicroProfile’s easy-to-use Fault Tolerance strategies,
makes it relatively easy to address these requirements. For
additional step-by-step instructions on how to build a cloud native
microservices application in Java, please visit
ibm.biz/oreilly-cloud-native-start.
In the next chapter we’ll move on to cloud native microservice
deployment and how to make your services observable so you can
detect, analyze, and resolve problems encountered in production.
CHAPTER 5
Running in Production
alerting tools to enable proactive detection and management of
issues.
Reporting Health
It’s common practice to report the health of a service through a
REST endpoint. Kubernetes liveness and readiness probes can be
configured to call these endpoints to get the health status of a service
and take appropriate action. For example, if a service reports itself as
not being ready, then Kubernetes will not deliver work to it. If a
check for liveness fails, then Kubernetes will kill and restart the
container.
Because of these different liveness and readiness remediation strate‐
gies, the types of health checks you perform will likewise differ. For
example, readiness should be based on transient events that are out‐
side your container’s control, such as a required service or database
being unavailable (presumably temporarily). Liveness, however,
should check for things that are unlikely to go away without a con‐
tainer restart—for example, running low on memory.
The MicroProfile Health API takes away the need to understand
which HTTP responses to provide for the different health states,
allowing you to focus on just the code required to determine
whether or not the service is healthy.
The next example is a readiness health check for the coffee-shop
service, denoted by the @Readiness annotation.
@Readiness
@ApplicationScoped
public class HealthResource implements HealthCheck {

    private boolean isHealthy() {
        try {
            Client client = ClientBuilder.newClient();
            WebTarget target = client.target(
                "http://barista:9080/barista/resources/brews");
            Response response = target.request().get();
            return response.getStatus() == 200;
        } catch (Exception e) {
            // treat any failure to reach the barista service as not ready
            return false;
        }
    }

    @Override
    public HealthCheckResponse call() {
        boolean up = isHealthy();
        return HealthCheckResponse.named("coffee-shop")
                .withData("barista", String.valueOf(up))
                .state(up)
                .build();
    }
}
If the health check fails, it returns 503 SERVICE UNAVAILABLE with
the following JSON:
{
  "checks": [
    {
      "data": {
        "barista": "false"
      },
      "name": "coffee-shop",
      "state": "DOWN"
    }
  ],
  "outcome": "DOWN"
}
A service can implement multiple health checks. The overall health
response is an aggregation (logical AND) of all the checks. If any one
of the checks is DOWN, then the overall outcome is reported as DOWN.
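The aggregation rule itself is simple enough to sketch in a few lines of plain Java (the check names here are illustrative):

```java
import java.util.Map;

public class HealthAggregation {

    // Overall outcome is the logical AND of all individual checks:
    // a single DOWN check makes the whole service DOWN.
    public static String outcome(Map<String, Boolean> checks) {
        boolean allUp = checks.values().stream().allMatch(Boolean::booleanValue);
        return allUp ? "UP" : "DOWN";
    }
}
```
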
Kubernetes Integration
As mentioned earlier, MicroProfile Health is designed to work
seamlessly with Kubernetes. Kubernetes allows you to configure two
types of health probe when you deploy your microservice: readiness
and liveness. The extracts of YAML in the next example show the
configuration of the readiness probe for the coffee-shop service.
An initial delay is set to give the service sufficient time to start and
report its true status. After this delay, Kubernetes will check the
readiness every five seconds, and if it returns a 503 SERVICE
UNAVAILABLE HTTP response code, then Kubernetes will stop deliv‐
ering requests to it:
spec:
  containers:
  - name: coffee-shop-container
    image: example.com/coffee-shop:1
    ports:
    - containerPort: 9080
    # system probe
    readinessProbe:
      httpGet:
        path: /health/ready
        port: 9080
      initialDelaySeconds: 15
      periodSeconds: 5
      failureThreshold: 1
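A liveness probe is configured the same way. The following sketch assumes the /health/live endpoint introduced in MicroProfile Health 2.0 and allows a few consecutive failures before Kubernetes restarts the container; the exact timings are illustrative:

```yaml
    livenessProbe:
      httpGet:
        path: /health/live
        port: 9080
      initialDelaySeconds: 60
      periodSeconds: 10
      failureThreshold: 3
```

The longer initial delay and higher failure threshold reflect the more drastic remediation: a restart should only be triggered by persistent problems, not a transient blip.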
Figure 5-2 shows a dashboard for some of the default JVM metrics.
These include heap size, system load, CPU load, and threads. These
are all interesting and important metrics to track in order to detect
when a service may encounter difficulties.
It’s possible to drill down on the spans to see more details. Figure 5-5
shows the details of the updatecoffeebrew request. It shows the
timings for the requests, the HTTP request information (URL,
methods, status), and the OpenTracing IDs for the request.
Figure 5-7 shows the details of the failed request where we can see
the exception and the HTTP methods and status.
Not all useful trace points correspond to REST API calls. For exam‐
ple, in our applications, an EJB Timer is used to periodically check
the status of orders. If you want to control which methods are
traced, MicroProfile provides the ability to explicitly trace spans
using the @Traced annotation. In the absence of an explicit trace
span, all we see in OpenTracing is the request coming into the
barista service.
The following code shows the explicit trace span added to
OrderProcessor.java:
@Traced
public void processOrder(CoffeeOrder order) {
    OrderStatus status = barista.retrieveBrewStatus(order);
    order.setOrderStatus(status);
}
Summary
Often when you are developing a new application, be it a traditional
monolithic application or new cloud native application, observabil‐
ity is considered secondary to getting the functionality written.
However, we’ve seen that when deploying microservices, the decom‐
position and distribution of application code makes it essential to
consider observability from day one.
We’ve seen the three key aspects to observability: metrics, health
checks, and request tracing.
OpenTelemetry
At the time of this writing, a new specification for distributed trac‐
ing is being developed, called OpenTelemetry. It is being created by
the Cloud Native Computing Foundation (CNCF) and is the con‐
vergence of OpenTracing and a Google project called OpenCensus.
Discussions are underway in the MicroProfile community to adopt
OpenTelemetry, and the expectation is that this will become the
preferred approach for distributed tracing. Given the heritage of
OpenTelemetry and the fact that MicroProfile OpenTracing enables
the most useful tracing by default, any migration is likely to be
simple.
In the previous three chapters, we’ve seen how to get started with
developing cloud native Java applications using open technologies.
As industry understanding of cloud native has evolved and matured,
a number of use cases have emerged requiring more sophisticated
patterns, such as Reactive, Saga, and CQRS, as well as a general
increase in asynchronous execution. These are beyond the scope of
this introductory book, but we’ll briefly discuss them here before we
finish with our conclusions.
CDI events are another example where the decoupling of business
code can happen in an asynchronous way. Since Java EE 8, CDI
events can be fired and handled asynchronously, without blocking
the code that emits the event. Another technology that supports
asynchronous communications is the WebSockets protocol, which is
also natively supported in Enterprise Java.
The Eclipse MicroProfile community recently released two reactive
specifications: Reactive Streams Operators and Reactive Messaging.
Reactive Streams Operators defines reactive APIs and types. Reac‐
tive Messaging enhances CDI to add @Incoming and @Outgoing
annotations to mark methods that process and produce messages
reactively, which could, for example, be integrated with Kafka.
Over time we will likely see the integration of these APIs in various
enterprise specifications, especially to streamline the use of multiple
technologies with regard to reactive programming. The proper
plumbing of streams, handling of backpressure, and dealing with
potential errors usually results in quite a lot of boilerplate code that
could be reduced if different specifications supported the same APIs.
Threading
The application server is traditionally responsible for defining and
starting threads; in other words, the application code neither is sup‐
posed to manage its own threads nor start them. This is important,
as information essential to the correct execution of the business
logic (e.g., transaction and security contexts) is often associated with
the threads. The container defines one or more thread pools that are
used to execute business logic. Developers have multiple ways to
configure these pools and to further define and restrict the execu‐
tions—for example, by making use of the bulkheads pattern. We saw
an example for this in Chapter 4, using MicroProfile Fault
Tolerance.
Another aspect of asynchronous processing is the ability to have
timed executions (i.e., code that is executed in a scheduled or
delayed manner). For this, Enterprise Java provides EJB timers,
which are restricted to EJBs, or the managed scheduled executor
services, which are part of the concurrency utilities.
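In plain Java SE the same scheduling pattern looks like the following sketch; in the container you would inject a ManagedScheduledExecutorService rather than creating your own executor, so that the container-managed contexts stay attached to the thread:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ScheduledExecutionSketch {

    // Schedule a task after a delay and wait for it to finish, mirroring
    // the delayed-execution pattern the managed scheduled executor provides.
    public static void runAfterDelay(Runnable task, long delayMillis)
            throws Exception {
        ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();
        try {
            scheduler.schedule(task, delayMillis, TimeUnit.MILLISECONDS).get();
        } finally {
            scheduler.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        runAfterDelay(() -> System.out.println("checking order status"), 50);
    }
}
```
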
Conclusions
Our primary motivation for writing this book was to enable you to
be successful in cloud native development. As strong believers in
using open technologies wherever possible, we discussed how being
open not only helps avoid vendor lock-in, but also improves quality
and longevity. Using the open criteria of open source, open stand‐
ards, and open governance, as well as their interrelationships, we
explained our rationale for selecting particular implementations and
approaches for cloud native Java development.
Enterprise Java, with its open specifications and open source imple‐
mentations, is already very well suited for the majority of today’s
applications, and we’ve shown you how to get started with using
those capabilities.
To introduce the concepts for cloud native development, we walked
you through a complete example of developing a cloud native appli‐
cation using only open source Java technologies based purely on
open standards. We showed how many preexisting APIs can help
you develop cloud native microservices and how new APIs are now
available to handle important cloud native functionality. For exam‐
ple, we showed how to secure your services, build resiliency into
your application code, and make your code observable with metrics,
health checks, and request tracing.
This book has focused on the foundational technologies of cloud
native Java application development. Cloud native goes far beyond
the development of Java code, though, and we’ve intentionally only
touched on other aspects such as containers, Kubernetes, Istio, and
continuous integration/continuous delivery (CI/CD). These areas
warrant books of their own. We’ve also only briefly touched on how
cloud native is causing the industry to reimagine how applications
and solutions are architected and how new architecture patterns
(e.g., Sagas, CQRS) and technologies (gRPC, RSocket, GraphQL,
Reactive, asynchronous execution) are emerging to support them.
Again, these topics could take up many books all by themselves.
Looking forward, the future appears bright for open Enterprise Java
technologies. Jakarta EE has just made its first release to create a
foundation for future specifications. MicroProfile continues to grow,
in terms of both the valuable technologies it provides and commu‐
nity and vendor implementations. With more Reactive APIs becom‐
ing available, and GraphQL, Long Running Actions (a framework
for weaving together microservices to achieve eventual consistency),
and other specifications in the pipeline, MicroProfile will soon also
have the foundational capabilities for building emerging cloud
native architecture patterns.
Finally, innovation isn’t happening just in the APIs and architectures
we use, but also in the developer tools and how they’re used. We’re
seeing open tools emerging that are designed specifically for cloud
native development.
About the Authors
Graham Charters is an IBM senior technical staff member and
WebSphere Applications Server developer advocacy lead based at
IBM’s R&D Laboratory in Hursley, UK. He has a keen interest in
emerging technologies and practices and in particular programming
models. His past exploits include establishing and contributing to
open source projects at PHP and Apache, and participating in, and
leading industry standards at OASIS and the OSGi Alliance.
Sebastian Daschner is a lead Java developer advocate for IBM. His
role is to share knowledge and educate developers about Java, enter‐
prise software, and IT in general. He enjoys speaking at conferences;
writing articles and blog posts; and producing videos, newsletters,
and other content.
Pratik Patel is a lead developer advocate at IBM. He co-wrote the
first book on Enterprise Java in 1996, Java Database Programming
with JDBC (Coriolis Group). He has also spoken at various confer‐
ences and participates in several local tech groups and startup
groups. He hacks Java, iOS, Android, HTML5, CSS3, JavaScript,
Clojure, Rails, and—well, everything except Perl.
Steve Poole is a long-time Java developer, leader, and evangelist. He
is a DevOps practitioner (whatever that means). He has been work‐
ing on IBM Java SDKs and JVMs since Java was less than 1. He is a
seasoned speaker and regular presenter at international conferences
on technical and software engineering topics.