Docker Java App With MariaDB - Deployment in Less Than a Minute
Background
Java developers and DevOps professionals have long struggled to automate the deployment of
enterprise Java applications. The complex nature of these applications usually meant that
application dependencies and external integrations had to be re-configured each time an
application was deployed in DEV/TEST environments.
Many solutions advertised the "model once, deploy anywhere" message for application
deployments. In reality, however, there were always intricacies that made it very difficult to reuse an application template across both an on-premise vSphere virtual environment and an AWS
environment, for example.
More recently, however, Docker containers popularized the idea of packaging application
components into Linux Containers that can be deployed exactly the same on any Linux host as
long as Docker Engine is installed.
Unfortunately, containerizing enterprise Java applications is still a challenge, mostly because
existing application composition frameworks do not address complex dependencies, external
integrations, or auto-scaling workflows post-provision.
In this blog, we will demonstrate the deployment of the multi-tier Java-based Pizza Shop application on two different stacks:
Nginx (for load balancing), clustered Tomcat and MariaDB (as the database)
Nginx (for load balancing), clustered Jetty and MariaDB (as the database)
The same Java WAR file will be deployed on two different application servers. DCHQ not only
automates the application deployments, but also integrates with 12 different clouds to
automate the provisioning and auto-scaling of clusters with software-defined networking. We
will cover:
Building the application templates that can be re-used on any Linux host running anywhere
Provisioning & auto-scaling the underlying infrastructure on any cloud (with Rackspace
being the example in this blog)
Deploying the multi-tier Java-based Pizza Shop applications on the Rackspace cluster
Monitoring the CPU, Memory & I/O of the Running Containers
Enabling the Continuous Delivery Workflow with Jenkins to update the WAR file of the
running applications when a build is triggered
Scaling out the Application Server Cluster for Scalability Tests
We have created two application templates using the official images from Docker Hub for the
same Pizza Shop application, one for each application server.
Across both templates, you will notice that Nginx is invoking a BASH script plug-in to add the
container IPs of the application servers to the default.conf file dynamically (or at request time).
The application servers (Tomcat and Jetty) are also invoking a BASH script plug-in to deploy the
Pizza Shop Java WAR file from an external URL. Both application servers invoke the exact same
plug-in, except the WAR file is deployed to a different directory on each:
Tomcat -- dir=/usr/local/tomcat/webapps/ROOT.war
Jetty -- dir=/var/lib/jetty/webapps/ROOT.war
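A minimal sketch of what such a WAR-deployment plug-in could look like is shown below. This is purely illustrative and is not DCHQ's actual script; the WAR_URL value and variable names are placeholders, and the dir value is the per-server directory listed above.

    #!/bin/bash
    # Sketch of a WAR-deployment plug-in (placeholder names, not DCHQ's script).
    set -euo pipefail

    # External URL hosting the Pizza Shop WAR file (placeholder):
    WAR_URL="${WAR_URL:-https://example.com/pizzashop/ROOT.war}"

    # Target path passed in by the template, e.g. the Tomcat or Jetty path above:
    TARGET_WAR="${dir:-/usr/local/tomcat/webapps/ROOT.war}"

    # Fetch the WAR and drop it where the application server auto-deploys it:
    curl -fsSL "$WAR_URL" -o "$TARGET_WAR"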
You will notice that the cluster_size parameter allows you to specify the number of containers to
launch (with the same application dependencies).
The host parameter allows you to specify the host you would like to use for container
deployments. That way you can ensure high availability for your application server clusters
across different hosts (or regions), and you can comply with affinity rules to ensure that, for
example, the database runs on a separate host. Here are the values supported for the host
parameter:
host1, host2, host3, etc. -- selects a host randomly within a data-center (or cluster) for
container deployments
<IP Address 1, IP Address 2, etc.> -- allows a user to specify the actual IP addresses to
use for container deployments
<Hostname 1, Hostname 2, etc.> -- allows a user to specify the actual hostnames to use
for container deployments
Wildcards (e.g. db-*, or app-srv-*) -- allows a user to specify wildcard patterns to match
within a hostname
Additionally, a user can create cross-image environment variable bindings by making a reference
to another image's environment variables. In this case, we have made several bindings, including
database.url=jdbc:mysql://{{MariaDB|container_ip}}:3306/{{MariaDB|MYSQL_DATABASE}},
in which the database container IP is resolved dynamically at request time and is used to
ensure that the application servers can establish a connection with the database.
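Conceptually, the cluster_size, host, and cross-image binding parameters boil down to something like the following docker run commands. This is a rough sketch for illustration only; DCHQ drives all of this for you, and the IP address, database name, image tag, and Docker API port below are placeholder assumptions.

    # Values resolved at request time from the MariaDB image (placeholders here):
    DB_IP=10.0.1.5          # {{MariaDB|container_ip}}
    DB_NAME=pizzashop       # {{MariaDB|MYSQL_DATABASE}}

    # cluster_size=2 with host=host1, host2 is roughly equivalent to launching
    # one Tomcat container per host, each pointed at the same database:
    for host in host1 host2; do
      docker -H "tcp://$host:2375" run -d \
        -e database.url="jdbc:mysql://$DB_IP:3306/$DB_NAME" \
        tomcat:8.0
    done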
Here is a list of supported environment variable values:
To provision the underlying infrastructure on Rackspace, a Rackspace API Key first needs to be
provided; it can be retrieved from the Account Settings section of the Rackspace Cloud Control Panel.
A user can then create a cluster with an auto-scale policy to automatically spin up new Cloud
Servers. This can be done by navigating to the Manage > Clusters page and then clicking on the +
button. You can select a capacity-based placement policy and then Weave as the networking
layer in order to facilitate secure, password-protected cross-container communication across
multiple hosts within a cluster. The Auto-Scale Policy in this example sets the maximum
number of VMs (or Cloud Servers) to 10.
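For reference, the password-protected Weave networking that gets set up on each Cloud Server is roughly equivalent to running the commands below by hand. This is a sketch only, not what DCHQ executes verbatim, and the password value is a placeholder.

    # Start the Weave router with an encryption password so cross-host
    # container traffic is authenticated and encrypted:
    weave launch --password "s3cretWeavePassword"

    # Confirm the peers are connected and encryption is enabled:
    weave status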
A user can now provision a number of Cloud Servers on the newly created cluster by navigating
to the Manage > Bare-Metal Server & VM page and then clicking on the + button to select Rackspace.
Once the Cloud Provider is selected, a user can select the region, size and image needed. Ports
can be opened on the new Cloud Servers (e.g. 32000-59000 for Docker, 6783 for Weave, and
5672 for RabbitMQ). A Data Center (or Cluster) is then selected and the number of Cloud
Servers can be specified.
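If you were opening the same ports by hand on an Ubuntu Cloud Server with ufw, the rules would look roughly like the following; DCHQ applies the equivalent configuration for you during provisioning.

    sudo ufw allow 32000:59000/tcp   # Docker container port range
    sudo ufw allow 6783/tcp          # Weave control traffic
    sudo ufw allow 6783/udp          # Weave data plane
    sudo ufw allow 5672/tcp          # RabbitMQ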
Once the application is deployed, many developers may wish to update the running application server
containers with the latest Java WAR file rather than re-deploying the entire application. For that, DCHQ allows developers to enable a continuous
delivery workflow with Jenkins. This can be done by clicking on the Actions menu of the
running application and then selecting Continuous Delivery. A user can select a Jenkins
instance that has already been registered with DCHQ, the actual Job on Jenkins that will produce
the latest WAR file, and then a BASH script plug-in to grab this build and deploy it on a running
application server. Once this policy is saved, DCHQ will grab the latest WAR file from Jenkins
any time a build is triggered and deploy it on the running application server.
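A minimal sketch of what such a "grab the latest build" plug-in could do is shown below. The Jenkins URL, job name, artifact path, and target directory are placeholders, and authentication is omitted; this is not DCHQ's actual script.

    #!/bin/bash
    # Sketch of a plug-in that pulls the newest WAR from Jenkins (placeholders).
    set -euo pipefail

    JENKINS_URL="https://jenkins.example.com"   # placeholder Jenkins instance
    JOB="pizzashop-build"                       # placeholder job name
    ARTIFACT="target/ROOT.war"                  # placeholder artifact path

    # Jenkins exposes the most recent successful artifact at a stable URL:
    curl -fsSL "$JENKINS_URL/job/$JOB/lastSuccessfulBuild/artifact/$ARTIFACT" \
         -o /usr/local/tomcat/webapps/ROOT.war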
As a result, developers will always have the latest Java WAR file deployed on their running
containers in DEV/TEST environments.
We then used the BASH script plug-in to update Nginx's default.conf file so that it is aware of the
newly added application server. The BASH script plug-ins can also be scheduled to accommodate use
cases like cleaning up logs or updating configurations at defined frequencies.
To execute a plug-in on a running container, a user can click on the Actions menu of the running
application and then select Plug-ins. A user can then select the load balancer (Nginx) container,
search for the plug-in that needs to be executed, and enable container restart using the toggle button.
The default argument for this plug-in will dynamically resolve all the container IPs of the running
Tomcat servers and add them to the default.conf file.
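A sketch of what such a default.conf update plug-in could look like is below. The variable name, container IP values, and backend port are assumptions for illustration, not DCHQ's actual script.

    #!/bin/bash
    # Sketch of an Nginx default.conf update plug-in (placeholder values).
    set -euo pipefail

    # Space-separated Tomcat container IPs, resolved at execution time:
    TOMCAT_IPS="${TOMCAT_IPS:-10.0.1.11 10.0.1.12}"

    # Rewrite default.conf with one upstream entry per application server:
    {
      echo "upstream app_servers {"
      for ip in $TOMCAT_IPS; do
        echo "    server $ip:8080;"
      done
      echo "}"
      echo "server {"
      echo "    listen 80;"
      echo "    location / { proxy_pass http://app_servers; }"
      echo "}"
    } > /etc/nginx/conf.d/default.conf

    # Reload Nginx so the new upstream list takes effect without downtime:
    nginx -s reload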
An application time-line is available to track every change made to the application for auditing
and diagnostics. This can be accessed from the expandable menu at the bottom of the page of a
running application.
Alerts and notifications are available for when containers or hosts are down or when
the CPU & Memory utilization of either hosts or containers exceeds a defined
threshold.
Conclusion
Containerizing enterprise Java applications is still a challenge, mostly because existing
application composition frameworks do not address complex dependencies, external integrations,
or auto-scaling workflows post-provision. Moreover, the ephemeral design of containers meant
that developers had to spin up new containers and re-create the complex dependencies & external
integrations with every version update.
DCHQ, available in hosted and on-premise versions, addresses all of these challenges and
simplifies the containerization of enterprise Java applications through an advanced application
composition framework that facilitates cross-image environment variable bindings, extensible
BASH script plug-ins that can be invoked at request time or post-provision, and application
clustering for high availability across multiple hosts or regions with support for auto-scaling.
Contact Us:
DCHQ Inc.,
845 Market St #450
San Francisco, CA 94103
650-307-4783