
Web Architecture 101 PDF


About This Course

WE'LL COVER THE FOLLOWING

• Who Is This Course For?
• Why Take This Course? What To Expect From It?
• Course Author

Who Is This Course For? #


There are no prerequisites for taking this course. It is meant for anyone looking to build a solid understanding of web application & software architecture & for anyone who wants to strengthen their fundamentals in it.

If you are a beginner just starting your career in software development, this
course will help you a lot. Designing software is like fitting Lego blocks
together. With this course, you’ll develop an insight into how to fit them
together and build cool stuff.

It will also help you with software engineering interviews, especially for
full-stack developer positions.

This course does not contain any code; instead, it offers a thorough discussion of the
architectural concepts. It also contains a lot of illustrations to help you grasp
the concepts better.

Why Take This Course? What To Expect From It? #
This course is a 101 on Web Application & Software Architecture. It walks you
step by step through the different components & concepts involved in
designing the architecture of a web application. We’ll learn about various
architectural styles such as the client-server, peer to peer decentralized
architecture, microservices, the fundamentals of data flow in a web
application, the different layers involved, concepts like scalability, high
availability & much more.

In this course, I also go through the techniques of picking the right
architecture and the technology stack to implement our use case. I walk you
through different use cases which will help you gain an insight into what
technology & architecture fits best for a certain use case when writing a web
application. You’ll come to understand the technology trade-offs involved.

By the end of the course, you’ll have a comprehensive insight into web
application architecture. If you have a startup idea & you are asking yourself:
how do I implement my app? What technologies do I use? Where do I start?
This course will help you kickstart your entrepreneurial journey.

Also, this course will be continually updated & new content will be
added from time to time.

Course Author #
I am Shivang. I’ve been writing code for the past 8 years professionally & 14
years personally. In my career, I’ve had the opportunity to work on large-scale
internet services for some of the industry giants in several different
domains such as E-commerce, Fintech, Telecom and others.

I’ve written applications from the bare bones, right from the idea to
production. I’ve maintained code, as well as worked in the production support
for systems receiving millions of hits every single day.

My last job was at Hewlett Packard Enterprise as a Full-Stack developer in their
Technical Solutions – R&D team.

Via this course, I’ve tried my best to share the knowledge, insights and
experience gained in my years of software development with all of you!

Here is my LinkedIn profile, in case you want to say Hi!!


Cheers!!
Significance Of Software Architecture

In this lesson, we'll cover the significance of web application & software architecture & the reasoning behind
learning it.

WE'LL COVER THE FOLLOWING

• Importance Of Getting The Application Architecture Right
• An Overview Of The Software Development Process
• Proof Of Concept
• Course Design

Importance Of Getting The Application Architecture Right #
The key element in successfully creating anything is getting the base right,
whether it is constructing a building or making a pizza. If we don’t get
the base right, we have to start over. Yeah!! I know, but there is no other way
around it.

Building a web application is no different. The architecture is its base & has to
be carefully thought out to avoid any major design changes & code refactoring at
a later point in time.

Speaking from experience, you don’t want to delve into re-designing stuff. It
eats up your time like a black hole. It has the potential to push your shipping
date further down the calendar by months, if not longer. And I won’t even
bring up the waste of engineering & financial resources this causes. No, I
won’t!!

It also depends on at what stage of the development process we hit an impasse
due to the hasty decisions taken during the initial design phases. So, before we
even touch the code & get our hands dirty, we have to get the underlying
architecture right.
A look at the architecture of our app should bring a smile to everyone’s face.

Though software development is an iterative and evolutionary process, we
don’t always get things perfect on the first go. Still, this can’t be an excuse for
not doing our homework.

An Overview Of The Software Development Process #
In the industry, architects, developers and product owners spend a lot of time
studying & discussing business requirements. In software engineering jargon,
this is known as the Requirement Gathering & Analysis.

Once we are done with the business requirements, we sit down & brainstorm
the use cases which we have to implement. This involves figuring out the
corner cases as early as possible & fitting the Lego blocks together.

If you’re a fan of documentation, you might also want to write a high-level


design document.

Now, we have an understanding of the business requirements, use cases,
corner cases and all. It’s time to start the research on picking the right
technology stack to implement the use cases.

Proof Of Concept #
After we pick the fitting tech stack, we start writing a POC (Proof of Concept).

Why a POC?

A POC helps us get a closer, more hands-on view of the technology & the basic
use case implementation. We get an insight into the pros and cons of the tech,
performance or other technical limitations if any.

It helps with the learning curve if we’re working with completely new tech,
also the non-technical people like the product owners, stakeholders have
something concrete to play with & to base their further decisions on.

Now, this is only for an industry scale product. If you are a solo indie
developer or a small group, you can always skip the POC part and start with
the main code.
So, we showcase the POC to the stakeholders & if everyone is satisfied, we
finally get down to creating the main repo & our first dev branch on GitHub,
or any other similar code hosting service which the business prefers.

Phew!!

So, by now you would have realized how important it is for developers to get the
architecture right the first time & to have knowledge of web architecture.

Course Design #
Alright, with that said, let’s now speak of this course. It is divided
into two parts. In the first, we will discuss the concepts & the architectural
components involved in designing web applications.

We will get insights into different tiers of software applications, monolithic
repos, microservices, peer to peer architecture & a lot more.

In the second part, we will go through some of the use cases of designing the
architecture for applications which we use in our day to day lives & are well
familiar with.

We will also understand how applications are designed from the bare bones:
what the thought process is for picking the right technology stack for our use
case & so forth.

So, without further ado, let’s get started.


Introduction

This lesson gives an overview of the different topics we will cover in this chapter. Also, we will learn what a tier
is & its components.

WE'LL COVER THE FOLLOWING

• What Is A Tier?

I’ll begin the course by discussing the different tiers involved in the software
architectural landscape. This is like a bird’s eye view of the realm of software
architecture & is important to understand well.

This chapter will help us understand:

What is a Tier?
Why do software applications have different tiers? What is the need for
them?
How do I decide how many tiers my application should have?

What Is A Tier? #
Think of a tier as a logical separation of components in an application or a
service. And when I say separation, I mean physical separation at the
component level, not the code level.

What do I mean by components?

Database
Backend application server
User interface
Messaging
Caching
These are the different components that make up a web service.

Now let’s have a look at the different types of tiers & their real-life examples.
Single Tier Applications

In this lesson, we will learn about the Single Tier applications.

WE'LL COVER THE FOLLOWING

• Single Tier Applications
• Advantages Of Single Tier Applications
• Disadvantages Of Single Tier Applications

Single Tier Applications #

A single-tier application is an application where the user interface,
backend business logic & the database all reside on the same machine.

Typical examples of single-tier applications are desktop applications like MS
Office, PC games or image editing software like Gimp.

Advantages Of Single Tier Applications #

The main upside of single-tier applications is that they have no network latency
since every component is located on the same machine. This adds to the
performance of the software.

There are no data requests to the backend server every now and then, which
would make the user experience slow. In single-tier apps, the data is easily &
quickly available since it is located on the same machine.

That said, the real performance of a single-tier app largely depends on how
powerful the machine is & on the hardware requirements of the software.

Also, the user’s data stays on their machine & doesn’t need to be transmitted
over a network. This ensures data safety at the highest level.

Disadvantages Of Single Tier Applications #

One big downside of single-tier apps is that the business has no control over the
application. Once the software is shipped, no code or feature changes can
be made until the customer manually updates it by connecting to the
remote server or by downloading & installing a patch.

In the 90s, due to this, if a game was shipped with buggy code, there was
nothing the studios could do. They would eventually have to face quite some
heat due to the buggy nature of the software. The testing of the product has to
be thorough; there is no room for any mistakes.

The code in single-tier applications is also vulnerable to being tweaked &
reverse engineered. The security, for the business, is minimal.

Also, the applications’ performance & the look and feel can get inconsistent as
it largely depends on the configuration of the user’s machine.
Two Tier Applications

In this lesson, we will learn about the Two Tier applications.

WE'LL COVER THE FOLLOWING

• Two Tier Application
• The Need For Two Tier Application

Two Tier Application #

A two-tier application involves a client and a server. The client contains
the user interface & the business logic on one machine, while the
backend server is the database running on a different machine.
The database server is hosted by the business, which has control over it.

Why the need for two-tier applications? Why not host the business logic on a
different machine & have control over it?

Also, again isn’t the application code vulnerable to being accessed by a third
person?

The Need For Two Tier Application #

Well, yes!! But there are use cases where two-tier applications come in handy,
for instance, a to-do list app or a similar planner or productivity app.

In these scenarios, it won’t cause the business significant harm, even if the
code is accessed by a third person. On the contrary, the upside is since the
code & the user interface reside in the same machine, there are fewer
network calls to the backend server which keeps the latency of the application
low.

The application makes a call to the database server only when the user has
finished creating their to-do list & wants to persist the changes.

Another good example of this is online browser & app-based games. The
game files are pretty heavy, and they get downloaded to the client just once when
the user uses the application for the first time. Moreover, they make
network calls only to keep the game state persistent.

Also, fewer server calls mean less money to be spent on the servers which is
naturally economical.

Though it largely depends on our business requirements & the use case
whether we want to pick this type of tier when writing our service.

We can either keep the user interface and the business logic on the client or
move the business logic to a dedicated backend server, which would make it a
three-tier application. I am going to discuss this up next.
Three Tier Applications

In this lesson, we will learn about the Three Tier applications.

Three-tier applications are pretty popular & largely used in the industry.
Almost all of the simple websites like blogs, news websites etc. are part of this
category.

In a three-tier application, the user interface, application logic & the
database all lie on different machines & thus have different tiers. They
are physically separated.

So, if we take the example of a simple blog, the user interface would be
written using HTML, JavaScript & CSS, the backend application logic would run
on a server like Apache & the database would be MySQL. A three-tier
architecture works best for simple use cases.

Alright! Now, let’s move on to learn about the N-tier applications.


N Tier Applications

In this lesson, we will go over N-tier applications and their components.

WE'LL COVER THE FOLLOWING

• N-Tier Application
• Why The Need For So Many Tiers?
• Single Responsibility Principle
• Separation Of Concerns
• Difference Between Layers & Tiers

N-Tier Application #

An N-tier application is an application which has more than three
components involved.

What are those components?

Cache
Message queues for asynchronous behaviour
Load balancers
Search servers for searching through massive amounts of data
Components involved in processing massive amounts of data
Components running heterogeneous tech, commonly known as web services,
etc.

All the social applications like Instagram, Facebook, large scale industry
services like Uber, Airbnb, online massive multiplayer games like Pokemon Go,
applications with fancy features are n-tier applications.
Note: There is another name for n-tier apps, the “distributed
applications”. But, I think it’s not safe to use the word “distributed” yet,
as the term distributed brings along a lot of complex stuff with it. It
would rather confuse us than help. Though I will discuss the distributed
architecture in this course, for now, we will just stick with the term N-
tier applications.

So, why the need for so many tiers?

Why The Need For So Many Tiers? #

Two software design principles that are key to explaining this are the Single
Responsibility Principle & the Separation of Concerns.

Single Responsibility Principle #

Single Responsibility Principle simply means giving one, just one,
responsibility to a component & letting it execute it with perfection. Be it
saving data, running the application logic or ensuring the delivery of
messages throughout the system.

This approach gives us a lot of flexibility & makes management easier.

For instance, when upgrading a database server, say installing a new
OS or a patch, it wouldn’t impact the other components of the service.
Even if something goes amiss during the OS installation process, just the
database component would go down. The application as a whole would still be
up & it would only impact the features requiring the database.

We can also have dedicated teams & code repositories for every component,
thus keeping things cleaner.

The single responsibility principle is a reason why I was never a fan of stored
procedures.

Stored procedures enable us to add business logic to the database, which is a
big no for me. What if in the future we want to plug in a different database?
Where do we take the business logic? To the new database? Or do we try to
refactor the application code & squeeze in the stored procedure logic
somewhere?

A database should not hold business logic, it should only take care of
persisting the data. This is what the single responsibility principle is. And this
is why we have separate tiers for separate components.
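As a sketch of what this separation looks like in code, here is a minimal Python example. The class and method names are invented for illustration; the dictionary stands in for a real database. The point is only that persistence & business rules live in separate components:

```python
# Single Responsibility Principle sketch: the repository only persists data,
# while the business rules live in a separate service class.
# All names here are illustrative, not from the course.

class UserRepository:
    """Only responsibility: storing & fetching user records."""

    def __init__(self):
        self._store = {}  # stands in for a real database

    def save(self, user_id, record):
        self._store[user_id] = record

    def find(self, user_id):
        return self._store.get(user_id)


class UserService:
    """Only responsibility: business rules. Knows nothing about storage."""

    def __init__(self, repository):
        self.repository = repository  # any repository can be plugged in

    def register(self, user_id, name):
        # Business rule lives here, not in the database layer.
        if self.repository.find(user_id) is not None:
            raise ValueError("user already exists")
        self.repository.save(user_id, {"name": name})
        return True


repo = UserRepository()
service = UserService(repo)
service.register(1, "Alice")
print(repo.find(1))  # {'name': 'Alice'}
```

Because the business logic never leaks into the repository, swapping the database behind `UserRepository` would not touch `UserService` at all, which is exactly the flexibility the principle is after.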

Separation Of Concerns #
Separation of concerns kind of means the same thing, be concerned about
your work only & stop worrying about the rest of the stuff.

These principles act at all the levels of the service, be it at the tier level or the
code level.

Keeping the components separate makes them reusable. Different services
can use the same database, messaging server or any other component, as long as
they are not tightly coupled with each other.

Having loosely coupled components is the way to go. This approach makes
scaling the service easy in the future when things grow beyond a certain level.

Difference Between Layers & Tiers #

Note: Don’t confuse tiers with the layers of the application. Some prefer
to use them interchangeably. But in the industry layers of an application
typically means the user interface layer, business layer, service layer, or
the data access layer.
The layers mentioned in the illustration are at the code level. The difference
between layers and tiers is that layers represent the organization of the
code, breaking it into components, whereas tiers involve the physical
separation of components.

All these layers together can be used in any tiered application. Be it single,
two, three or N-tier. I’ll discuss these layers in detail in the course ahead.

Alright, now we have an understanding of tiers. Let’s zoom in one notch &
focus on web architecture.
Different Tiers In Software Architecture Quiz

This lesson contains a quiz to test your understanding of tiers in software architecture.

Let’s Test Your Understanding Of Different Tiers In Software Architecture

1. What is a tier?

What Is Web Architecture?

In this lesson, we will have a brief introduction to web architecture.

Web architecture involves multiple components like a database, message queue,
cache & user interface, all running in conjunction with each other to form an
online service.

In the introduction of this course, I already talked about why the knowledge
of web architecture is important to software engineers. Now we will explore it
further.

This is a typical architecture of a web application, used in the majority of the
applications running online.

If we have an understanding of the components involved in this diagram,
then we can always build upon this architecture for more complex
requirements.

I’ll go step by step through every component, starting with the client-server
architecture.
Client Server Architecture

This lesson is an introduction to the Client-Server Architecture.

We’ve already learned a bit about the client-server architecture when
discussing the two-tier, three-tier & N-tier architectures. Now we will look at it
in detail.

Client-Server architecture is the fundamental building block of the web.

The architecture works on a request-response model. The client sends a
request to the server for information & the server responds with it.

Every website you browse, be it a WordPress blog or a web application like
Facebook, Twitter or your banking app, is built on the client-server
architecture.

A very small percentage of business websites and applications use the peer to
peer architecture, which is different from client-server.

I will discuss that ahead in the course. For now, let’s focus on the client.
Client

In this lesson, we will explore the Client component of the Client-Server Architecture.

WE'LL COVER THE FOLLOWING

• Client
• Technologies Used To Implement Clients In Web Applications

Client #
The client holds our user interface. The user interface is the presentation part
of the application. It’s written in HTML, JavaScript & CSS and is responsible for
the look & feel of the application.

The user interface runs on the client. The client can be a mobile app, a
desktop or a tablet like an iPad. It can also be a web-based console, running
commands to interact with the backend server.
Technologies Used To Implement Clients In Web
Applications #
In very simple terms, a client is the window to our application. In the industry,
the open-source technologies popular for writing the web-based user interface
are ReactJS, AngularJS, VueJS, jQuery etc. All these libraries use JavaScript.

There are a plethora of other technologies for writing the front-end too, I have
just listed the popular ones for now.

Different platforms require different frameworks & libraries to write the front-
end. For instance, mobile phones running Android would need one set
of tools, while those running Apple or Windows OS would need a different set.

If you are intrigued about the technologies popular in the industry, have a look
at the developer survey run by StackOverflow for this year.
Types of Client

In this lesson, we will learn about the two types of client: the Thin Client and the Thick Client (sometimes also
called the Fat Client).

WE'LL COVER THE FOLLOWING

• Thin Client
• Thick Client

There are primarily two types of clients:

1. Thin Client
2. Thick Client (sometimes also called the Fat client)

Thin Client #
A thin client is a client which holds just the user interface of the application.
It has no business logic of any sort. For every action, the client sends a request
to the backend server, just like in a three-tier application.
Thick Client #
On the contrary, the thick client holds all or some part of the business logic.
These are the two-tier applications. We’ve already gone through this if you
remember.

The typical examples of Fat clients are utility apps, online games etc.
Server

In this lesson, we will explore the Server component of the Client-Server Architecture.

WE'LL COVER THE FOLLOWING

• What is A Web Server?
• Server-Side Rendering

What is A Web Server? #

The primary task of a web server is to receive the requests from the
client & provide the response after executing the business logic based on
the request parameters received from the client.

Every online service needs a server to run. Servers running web
applications are commonly known as application servers.
Besides the application servers, there are other kinds of servers too with
specific tasks assigned to them such as the:

Proxy server
Mail server
File server
Virtual server

The server configuration & the type can differ depending on the use case.

For instance, if we run backend application code written in Java, we
would pick Apache Tomcat or Jetty.

For simple use cases such as hosting websites, we would pick the Apache
HTTP Server.

In this lesson, we will stick to the application server.

All the components of a web application need a server to run, be it a database,
a message queue, a cache or any other component. In modern application
development, even the user interface is hosted separately on a dedicated
server.

Server-Side Rendering #
Often, developers use a server to render the user interface on the backend
& then send the rendered data to the client. This technique is known as server-
side rendering. I will discuss the pros & cons of client-side vs server-side
rendering further down the course.

Now we have a fundamental understanding of both the client & the server.
Let’s delve into some of the concepts involved in the communication between
them.
Communication Between the Client & the Server

In this lesson, we will learn how communication takes place between the Client and the Server.

WE'LL COVER THE FOLLOWING

• Request-Response Model
• HTTP Protocol
• REST API & API Endpoints
• Real World Example Of Using A REST API

Request-Response Model #
The client & the server have a request-response model. The client sends a
request & the server responds with the data.

If there is no request, there is no response. Pretty simple, right?
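The request-response model can be sketched with Python’s standard library alone. This is only a minimal illustration, not production code; the `/hello` route & its JSON payload are made up for the example:

```python
# Request-response sketch: a tiny HTTP server & a client that pulls from it.
# Uses only the Python standard library; the /hello endpoint is invented.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer


class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # No request, no response: this code runs only when a client asks.
        body = json.dumps({"message": "hello"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence the default per-request logging


server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0: pick any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/hello"
with urllib.request.urlopen(url) as resp:  # the client sends the request
    data = json.loads(resp.read())        # ...and gets the response data

server.shutdown()
print(data)  # {'message': 'hello'}
```

Every exchange here follows the same shape as any real website: the client opens a connection, asks, and the server answers; nothing flows from the server unprompted.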

HTTP Protocol #
The entire communication happens over the HTTP protocol. It is the protocol
for data exchange over the World Wide Web. HTTP protocol is a request-
response protocol that defines how information is transmitted across the web.

It’s a stateless protocol: every process over HTTP is executed independently &
has no knowledge of previous processes.

If you want to read more about the protocol, this is a good resource on it.

Alright, moving on…

REST API & API Endpoints #

Speaking from the context of modern N-tier web applications, every client has
to hit a REST endpoint to fetch the data from the backend.

Note: If you aren’t aware of the REST API & the API Endpoints, I have
discussed it in the next lesson in detail. I’ve brought up the terms in this
lesson, just to give you a heads up on how modern distributed web
applications communicate.

The backend application code has a REST API implemented which acts as an
interface to the outside world. Every request, be it from a client
written by the business or from third-party developers who consume our data,
has to hit the REST endpoints to fetch the data.

Real World Example Of Using A REST API #

For instance, let’s say we want to write an application which would keep track
of the birthdays of all our Facebook friends & send us a reminder a couple of
days before the event date.

To implement this, the first step would be to get the data on the birthdays of
all our Facebook friends.

We would write a client which would hit the Facebook Social Graph API, which
is a REST API, to get the data & then run our business logic on the data.

Implementing a REST-based API has several advantages. Let’s delve into it in
detail to have a deeper understanding.
Web Architecture Quiz - Part 1

This lesson contains a quiz to test your understanding of the client, the server & the communication between
them.

Let’s Test Your Understanding Of the Client Server Communication

1. Where does the user interface component of a web application run?

What Is A REST API?

In this lesson, we will have an insight into the REST API.

WE'LL COVER THE FOLLOWING

• WHAT IS REST?
• REST API
• REST Endpoint
• Decoupling Clients & the Backend Service
• Application Development Before the REST API
• API Gateway

WHAT IS REST? #

REST stands for Representational State Transfer. It’s a software
architectural style for implementing web services. Web services
implemented using the REST architectural style are known as
RESTful web services.

REST API #
A REST API is an API implementation that adheres to the REST architectural
constraints. It acts as an interface. The communication between the client &
the server happens over HTTP. A REST API takes advantage of the HTTP
methodologies to establish communication between the client and the server.
REST also enables servers to cache the response, which improves the
performance of the application.
The communication between the client and the server is a stateless process.
And by that, I mean every communication between the client and the server is
like a new one.

There is no information or memory carried over from previous
communications. So, every time a client interacts with the backend, it has to
send the authentication information to it as well. This enables the backend to
figure out whether the client is authorized to access the data or not.
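The statelessness requirement can be sketched in a few lines of Python. Assume a simple token-based scheme; the token value, route names & handler shape here are all invented for illustration:

```python
# Statelessness sketch: the server remembers nothing between requests, so
# the client must send its auth token with every single call.
# The token & routes below are made-up example values.
VALID_TOKENS = {"secret-token-123"}

def handle_request(path, auth_token):
    """Each call stands entirely alone; no session is carried over."""
    if auth_token not in VALID_TOKENS:
        return {"status": 401, "body": "unauthorized"}
    return {"status": 200, "body": f"data for {path}"}

# Even right after a successful request, the next one must authenticate again.
print(handle_request("/profile", "secret-token-123"))  # allowed, status 200
print(handle_request("/profile", None))                # rejected, status 401
```

In a real service, the token would typically travel in an HTTP header with every request; the principle is the same: the backend authorizes each request from scratch.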

With the implementation of a REST API, the client gets the backend endpoints to
communicate with. This entirely decouples the backend & the client code.

Let’s understand what this means.

REST Endpoint #
An API/REST/Backend endpoint means the url of a service. For example,
https://myservice.com/getuserdetails/{username} is a backend endpoint for
fetching the user details of a particular user from the service.

The REST-based service will expose this URL to all its clients so they can fetch the
user details using the above-stated URL.
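Here is a rough Python sketch of how such an endpoint maps a URL to a handler. The route pattern mirrors the example URL above, but the sample user data & the matching logic are illustrative assumptions, not how any particular framework works internally:

```python
# REST endpoint sketch: a URL template mapped to a handler function.
# The user data & routing below are invented for illustration.
USERS = {"alice": {"name": "Alice", "city": "Paris"}}

# One route template -> one handler; real frameworks keep a table like this.
ROUTES = {"/getuserdetails/{username}": lambda username: USERS.get(username)}

def call_endpoint(path):
    """Match the path against /getuserdetails/{username} & dispatch."""
    prefix = "/getuserdetails/"
    if path.startswith(prefix):
        username = path[len(prefix):]  # the {username} part of the template
        return ROUTES["/getuserdetails/{username}"](username)
    return None  # unknown endpoint

print(call_endpoint("/getuserdetails/alice"))  # {'name': 'Alice', 'city': 'Paris'}
```

Any client that knows the URL template can call the endpoint the same way, which is what lets the backend stay ignorant of who its clients are.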
Decoupling Clients & the Backend Service #
With the availability of the endpoints, the backend service does not have to
worry about the client implementation. It just calls out to its multiple clients &
says “Hey everyone, here is the url address of the resource/information you
need. Hit it when you need it. Any client with the required authorization to
access a resource can access it”.

Developers can have different implementations with separate codebases for
different clients, be it a mobile browser, a desktop browser, a tablet or an API
testing tool. Introducing new types of clients or modifying the client code has
no effect on the functionality of the backend service.

This means the clients and the backend service are decoupled.

Application Development Before the REST API #

Before REST-based API interfaces went mainstream in the industry, we often
tightly coupled the backend code with the client. JSP (Java Server Pages) is one
example of this.

We would always put business logic in the JSP tags, which made code
refactoring & adding new features really difficult as the logic got spread
across different layers.

Also, in the same codebase, we had to write separate code/classes for handling
requests from different types of clients. A different servlet for a mobile client
and a different one for a web-based client.

After the REST APIs became widely used, there was no need to worry about
the type of the client. Just provide the endpoints & the response would contain
data generally in the JSON or any other standard data transport format. And
the client would handle the data in whatever way they would want.

This cut down a lot of unnecessary work for us. Also, adding new clients
became a lot easier. We could introduce multiple types of new clients without
considering the backend implementation.

In today’s industry landscape, there is hardly any online service without a
REST API. Want to access the public data of any social network? Use their REST
API.

API Gateway #

The REST API acts as a gateway, a single entry point into the system. It
encapsulates the business logic & handles all the client requests, taking care of
authorization, authentication, sanitizing the input data & other necessary
tasks before providing access to the application resources.

So, now we are aware of the client-server architecture, we know what a REST
API is. It acts as the interface & the communication between the client and the
server happens over HTTP.

Let’s look into the HTTP Pull & Push-based communication mechanism.
HTTP Push & Pull - Introduction

In this lesson, we will have an introduction to the HTTP Push & Pull mechanism.

WE'LL COVER THE FOLLOWING

• HTTP PULL
• HTTP PUSH

In this lesson, we will get an insight into the HTTP Push & Pull mechanism. We
know that the majority of the communication on the web happens over HTTP,
especially wherever the client-server architecture is involved.

There are two modes of data transfer between the client and the server. HTTP
PUSH & HTTP PULL. Let’s find out what they are & what they do.

HTTP PULL #
As I stated earlier, for every response, there has to be a request first. The
client sends the request & the server responds with the data. This is the
default mode of HTTP communication, called the HTTP PULL mechanism.

The client pulls the data from the server whenever it requires. And it keeps
doing it over and over to fetch the updated data.

An important thing to note here is that every request to the server and the
response to it consumes bandwidth. Every hit on the server costs the business
money & adds more load on the server.

What if there is no updated data available on the server, every time the client
sends a request?

The client doesn’t know that, so naturally, it would keep sending the requests
to the server over and over. This is not ideal & a waste of resources. Excessive
pulls by the clients have the potential to bring down the server.
HTTP PUSH #
To tackle this, we have the HTTP PUSH based mechanism. In this mechanism,
the client sends the request for particular information to the server, just for
the first time, & after that the server keeps pushing the new updates to the
client whenever they are available.

The client doesn’t have to worry about sending requests to the server, for
data, every now & then. This saves a lot of network bandwidth & cuts down
the load on the server by notches.

This is also known as a Callback. The client phones the server for information. The server responds: Hey!! I don’t have the information right now, but I’ll call you back whenever it is available.

A very common example of this is user notifications. We have them in almost every web application today. We get notified whenever an event happens on the backend.

Clients use AJAX (Asynchronous JavaScript & XML) to send requests to the
server in the HTTP Pull based mechanism.

There are multiple technologies involved in the HTTP Push based mechanism
such as:

Ajax Long polling
Web Sockets
HTML5 Event Source
Message Queues
Streaming over HTTP

We’ll go over all of them in detail up-next.


HTTP Pull - Polling with Ajax

In this lesson, we will understand HTTP Pull, AJAX and how polling is done using AJAX.

WE'LL COVER THE FOLLOWING

• AJAX – Asynchronous JavaScript & XML

There are two ways of pulling/fetching data from the server.

The first is sending an HTTP GET request to the server manually by triggering
an event, like by clicking a button or any other element on the web page.

The other is fetching data dynamically at regular intervals by using AJAX without any human intervention.

AJAX – Asynchronous JavaScript & XML #

AJAX stands for asynchronous JavaScript & XML. The name says it all, it
is used for adding asynchronous behaviour to the web page.
As we can see in the illustration above, instead of requesting the data manually every time with the click of a button, AJAX enables us to fetch the updated data from the server by automatically sending the requests over and over at stipulated intervals.

Upon receiving the updates, a particular section of the web page is updated
dynamically by the callback method. We see this behaviour all the time on
news & sports websites, where the updated event information is dynamically
displayed on the page without the need to reload it.

AJAX uses an XMLHttpRequest object, which is built into the browser, for sending the requests to the server, and uses JavaScript to update the HTML DOM.

AJAX is commonly used with the jQuery framework to implement the asynchronous behaviour on the UI.

This dynamic technique of requesting information from the server at regular intervals is known as Polling.
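To make the idea concrete, here is a tiny Python sketch of a polling loop. The `fetch_updates` function is a hypothetical stand-in for the AJAX request the browser would fire; the point is that the request fires on every tick whether or not the data changed.

```python
import time

# Hypothetical stand-in for the AJAX GET request the browser would send.
def fetch_updates(server_state):
    return server_state["latest"]

def poll(server_state, interval_s, max_polls):
    """Pull the latest data every `interval_s` seconds, whether it changed or not."""
    seen = []
    for _ in range(max_polls):
        seen.append(fetch_updates(server_state))  # the request fires regardless of updates
        time.sleep(interval_s)
    return seen

# Even though the data never changes, every poll still costs a request.
state = {"latest": "v1"}
print(poll(state, 0.01, 3))  # → ['v1', 'v1', 'v1']
```

Every entry in the result represents a full request-response cycle that consumed bandwidth, even though nothing new came back.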
HTTP Push

In this lesson, we will learn about the HTTP Push mechanism.

WE'LL COVER THE FOLLOWING

• Time To Live (TTL)


• Persistent Connection
• Heartbeat Interceptors
• Resource Intensive

Time To Live (TTL) #

In regular client-server communication, which is HTTP PULL, there is a Time to Live (TTL) for every request. It could be anywhere between 30 to 60 seconds, varying from browser to browser.

If the client doesn’t receive a response from the server within the TTL, the
browser kills the connection & the client has to re-send the request hoping it
would receive the data from the server before the TTL ends this time.

Open connections consume resources & there is a limit to the number of open
connections a server can handle at one point in time. If the connections don’t
close & new ones are being introduced, over time, the server will run out of
memory. Hence, the TTL is used in client-server communication.

But what if we are certain that the response will take more time than the TTL
set by the browser?

Persistent Connection #
In this case, we need a Persistent Connection between the client and the
server.
A persistent connection is a network connection between the client & the
server that remains open for further requests & the responses, as
opposed to being closed after a single communication.

It facilitates HTTP Push-based communication between the client and the server.

Heartbeat Interceptors #
Now you might be wondering how a persistent connection is possible if the browser kills the open connections to the server every X seconds?

The connection between the client and the server stays open with the help of
Heartbeat Interceptors.

These are just blank request responses between the client and the server
to prevent the browser from killing the connection.
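A rough simulation of the idea in Python: the `Connection` class below is a made-up stand-in for a connection guarded by a TTL watchdog, not a real network object. The heartbeat is just a blank exchange that counts as activity.

```python
import time

class Connection:
    """Toy stand-in for a client-server connection guarded by a TTL watchdog."""
    def __init__(self, ttl_s):
        self.ttl_s = ttl_s
        self.last_activity = time.monotonic()

    def send_blank(self):
        # The heartbeat: a blank request-response that counts as activity.
        self.last_activity = time.monotonic()

    def is_alive(self):
        # The watchdog check: the connection dies once the TTL lapses.
        return time.monotonic() - self.last_activity < self.ttl_s

conn = Connection(ttl_s=0.05)
for _ in range(5):          # heartbeats fire well inside the TTL window
    time.sleep(0.02)
    conn.send_blank()
print(conn.is_alive())      # → True: the blank exchanges kept the connection open
```

Without the `send_blank` calls inside the loop, the watchdog would have declared the connection dead after 50 ms of silence.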

Isn’t this resource-intensive?

Resource Intensive #

Yes, it is. Persistent connections consume a lot of resources in comparison to the HTTP Pull behaviour. But there are use cases where establishing a persistent connection is vital to the functionality of an application.

For instance, a browser-based multiplayer game has a pretty large amount of request-response activity within a certain time in comparison to a regular web application.

It would be apt to establish a persistent connection between the client and the
server from a user experience standpoint.

Long-open connections can be implemented with multiple techniques such as Ajax Long Polling, Web Sockets, Server-Sent Events etc.

Let’s have a look into each of them.


HTTP Push-Based Technologies

In this lesson, we will discuss some HTTP Push based technologies.

WE'LL COVER THE FOLLOWING

• Web Sockets
• AJAX – Long Polling
• HTML5 Event Source API & Server Sent Events
• Streaming Over HTTP
• Summary

Web Sockets #
A Web Socket connection is preferred when we need a persistent, bi-directional, low-latency data flow from the client to the server & back.

Typical use-cases of these are messaging, chat applications, real-time social streams & browser-based massive multiplayer games which have a large number of read-writes in comparison to a regular web app.

With Web Sockets, we can keep the client-server connection open as long as
we want.

Have bi-directional data? Go ahead with Web Sockets. One more thing: Web Sockets don’t run over plain HTTP. The connection starts with an HTTP handshake & is then upgraded to run over TCP. The server & the client should both support web sockets or else it won’t work.

The WebSocket API & Introducing WebSockets – Bringing Sockets to the Web
are good resources for further reading on web sockets
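As a small aside, the upgrade handshake itself is compact enough to sketch. Per RFC 6455, the server proves it understood the upgrade request by hashing the client’s Sec-WebSocket-Key with a fixed GUID; a minimal Python version using only the standard library:

```python
import base64
import hashlib

# Fixed GUID defined by RFC 6455.
WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

def accept_key(sec_websocket_key: str) -> str:
    """Derive the Sec-WebSocket-Accept header value the server returns
    to confirm the upgrade from HTTP to a WebSocket connection."""
    digest = hashlib.sha1((sec_websocket_key + WS_GUID).encode()).digest()
    return base64.b64encode(digest).decode()

# Sample key/accept pair taken from RFC 6455.
print(accept_key("dGhlIHNhbXBsZSBub25jZQ=="))  # → s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

After this exchange, the HTTP request-response model is left behind & both sides exchange frames over the open TCP connection.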

AJAX – Long Polling #

Long Polling lies somewhere between Ajax & Web Sockets. In this technique, instead of immediately returning the response, the server holds the response until it finds an update to be sent to the client.

The connection in long polling stays open a bit longer in comparison to polling. The server doesn’t return an empty response. If the connection breaks, the client has to re-establish the connection to the server.

The upside of using this technique is that far fewer requests are sent from the client to the server, in comparison to the regular polling mechanism. This cuts down a lot of network bandwidth consumption.

Long polling can be used in simple asynchronous data fetch use cases when
you do not want to poll the server every now & then.
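A minimal sketch of the server side of long polling in Python: the queue stands in for whatever update source a real server would have, and the timeout models the client giving up & reconnecting. These names are invented for illustration.

```python
import queue
import threading

class LongPollServer:
    """Holds a client's request open until an update arrives or a timeout fires."""
    def __init__(self):
        self.updates = queue.Queue()  # stand-in for the server's update source

    def publish(self, data):
        self.updates.put(data)

    def poll(self, timeout_s):
        try:
            # Blocks here: this is the response being "held" by the server.
            return self.updates.get(timeout=timeout_s)
        except queue.Empty:
            return None  # the client re-establishes the connection after this

server = LongPollServer()
# An update arrives 50 ms after the client's request is already waiting.
threading.Timer(0.05, server.publish, args=("new comment",)).start()
print(server.poll(timeout_s=1.0))  # → new comment
```

Contrast this with regular polling: here a single held request delivered the update the moment it appeared, instead of dozens of empty round trips.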

HTML5 Event Source API & Server Sent Events #

The Server-Sent Events implementation takes a bit of a different approach.
Instead of the client polling for data, the server automatically pushes the data
to the client whenever the updates are available. The incoming messages from
the server are treated as Events.

Via this approach, the servers can initiate data transmission towards the
client once the client has established the connection with an initial request.

This helps in getting rid of a huge number of blank request-response cycles, cutting down the bandwidth consumption by notches.

To implement server-sent events, the backend language should support the technology; on the UI, the HTML5 Event Source API is used to receive the data incoming from the backend.

An important thing to note here is that once the client establishes a connection with the server, the data flow is in one direction only, that is, from the server to the client.

SSE is ideal for scenarios such as a real-time feed like that of Twitter,
displaying stock quotes on the UI, real-time notifications etc.

This is a good resource for further reading on SSE
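The wire format behind SSE is plain text & easy to sketch. Below is a hypothetical Python helper that frames one event in the `text/event-stream` format the browser’s EventSource API parses; in a real app the server would write these frames to an open HTTP response.

```python
def sse_event(data, event=None, event_id=None):
    """Frame one message in the text/event-stream format that the browser's
    EventSource API parses; a blank line terminates each event."""
    lines = []
    if event_id is not None:
        lines.append(f"id: {event_id}")
    if event is not None:
        lines.append(f"event: {event}")
    lines.extend(f"data: {chunk}" for chunk in data.splitlines())
    return "\n".join(lines) + "\n\n"

print(sse_event("AAPL 182.31", event="quote", event_id="7"))
# id: 7
# event: quote
# data: AAPL 182.31
```

On the UI, `new EventSource(url)` would receive each such frame as an event, which fits the stock-quote & notification use cases mentioned above.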

Streaming Over HTTP #

Streaming Over HTTP is ideal for cases where we need to stream large data over HTTP by breaking it into smaller chunks. This is possible with HTML5 & the JavaScript Streams API.

The technique is primarily used for streaming multimedia content, like large
images, videos etc, over HTTP.

Due to this, we can watch a partially downloaded video as it continues to download, by playing the downloaded chunks on the client.

To stream data, both the client & the server agree to conform to some streaming settings. This helps them figure out when the stream begins & ends over an HTTP request-response model.

You can go through this resource for further reading on Stream API
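To see how a payload is broken up on the wire, here is a small Python sketch of HTTP/1.1 chunked transfer coding, one of the mechanisms underneath streaming over HTTP. In a real response these bytes would follow a `Transfer-Encoding: chunked` header.

```python
def chunk_stream(payload: bytes, chunk_size: int) -> bytes:
    """Encode a payload with HTTP/1.1 chunked transfer coding: each chunk is
    prefixed with its size in hex & a zero-length chunk ends the stream."""
    out = b""
    for i in range(0, len(payload), chunk_size):
        chunk = payload[i:i + chunk_size]
        out += f"{len(chunk):x}\r\n".encode() + chunk + b"\r\n"
    return out + b"0\r\n\r\n"

print(chunk_stream(b"hello world", 5))
# → b'5\r\nhello\r\n5\r\n worl\r\n1\r\nd\r\n0\r\n\r\n'
```

The size prefixes are how the client figures out where each chunk ends, & the final `0` chunk is the agreed-upon signal that the stream is over.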

Summary #
So, now we have an understanding of what HTTP Pull & Push is. We went
through different technologies which help us establish a persistent connection
between the client and the server.

Every tech has a specific use case. Ajax is used to dynamically update the web page by polling the server at regular intervals.

Long polling has a connection open time slightly longer than the polling
mechanism.
Web Sockets have bi-directional data flow, whereas Server sent events
facilitate data flow from the server to the client.

Streaming over HTTP facilitates streaming of large objects like multi-media files.

What tech would fit best for our use cases depends on the kind of application
we intend to build.

Alright, let’s quickly gain an insight into the pros & cons of the client and the
server-side rendering.
Client-Side Vs Server-Side Rendering

In this lesson, we will learn about the client side and the server-side rendering & the use cases for both the
approaches.

WE'LL COVER THE FOLLOWING

• Client-Side Rendering - How Does A Browser Render A Web Page?
• Server-Side Rendering
• Use Cases For Server-Side & Client-Side Rendering

Client-Side Rendering - How Does A Browser Render A Web Page? #
When a user requests a web page from the server & the browser receives the response, it has to render the response on the window in the form of an HTML page.

For this, the browser has several components, such as the:

Browser engine
Rendering engine
JavaScript interpreter
Networking & the UI backend
Data storage etc.

I won’t go into much detail but the browser has to do a lot of work to convert
the response from the server into an HTML page.

The rendering engine constructs the DOM tree & then renders and paints it. Naturally, all this activity needs a bit of time.

Server-Side Rendering #
To avoid all this rendering time on the client, developers often render the UI
on the server, generate HTML there & directly send the HTML page to the UI.
This technique is known as the Server-side rendering. It ensures faster
rendering of the UI, averting the UI loading time in the browser window since
the page is already created & the browser doesn’t have to do much assembling
& rendering work.
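A toy Python sketch of the idea; the template syntax & the `render_page` helper are made up for illustration (real stacks use template engines like Jinja2 or JSP):

```python
def render_page(template, context):
    """Minimal server-side render: fill the template with data on the server,
    so the browser receives ready-to-paint HTML."""
    html = template
    for key, value in context.items():
        html = html.replace("{" + key + "}", str(value))
    return html

TEMPLATE = "<html><body><h1>{title}</h1><p>{body}</p></body></html>"
page = render_page(TEMPLATE, {"title": "Hello", "body": "Rendered on the server"})
print(page)  # the client just paints this string; no assembly work in the browser
```

With client-side rendering, the browser would instead receive the raw data & do this assembly itself with JavaScript.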

Use Cases For Server-Side & Client-Side Rendering #
The server-side rendering approach is perfect for delivering static content,
such as WordPress blogs. It’s also good for SEO as the crawlers can easily read
the generated content.

However, modern websites are highly dependent on Ajax. In such websites, content for a particular module or a section of a page has to be fetched & rendered on the fly.

Therefore, server-side rendering doesn’t help much. For every Ajax-request, instead of sending just the required content to the client, the approach generates the entire page on the server. This process consumes unnecessary bandwidth & also fails to provide a smooth user experience.

A big downside to this is once the number of concurrent users on the website
rises, it puts an unnecessary load on the server.

Client-side rendering works best for modern dynamic Ajax-based websites.

Though, we can leverage a hybrid approach to get the most out of both techniques. We can use server-side rendering for the home page & for the other static content on our website, & use client-side rendering for the dynamic pages.

Alright, before moving down to the database, message queue & the caching components, it’s important for us to understand a few concepts such as:

Monolithic architecture
Micro-services
Scalability
High availability
Distributed systems

What are nodes in distributed systems? Why are they important to software design?

The clarity on these concepts will help us understand the rest of the web
components better. Let’s have a look one by one.
Web Architecture Quiz - Part 2

This lesson contains a quiz to test your understanding of the REST API & the HTTP mechanisms.

Let’s Test Your Understanding Of the REST API & the HTTP mechanisms

What Is Scalability?

This lesson is an introduction to scalability.

WE'LL COVER THE FOLLOWING

• What is Scalability?
• What Is Latency?
• Measuring Latency
• Network Latency
• Application Latency
• Why Is Low Latency So Important For Online Services?

I am pretty sure, being in the software development universe, you’ve come across this word many times: Scalability. What is it? Why is it so important? Why is everyone talking about it? Is it important to scale systems? What are your plans or contingencies to scale when your app or the platform experiences significant traffic growth?

This chapter is a deep dive into scalability. It covers all the frequently asked
questions on it such as: what does scalability mean in the context of web
applications, distributed systems or cloud computing? Etc.

So, without further ado. Let’s get started.

What is Scalability? #
Scalability means the ability of the application to handle & withstand
increased workload without sacrificing the latency.

For instance, if your app takes x seconds to respond to a user request, it should take the same x seconds to respond to each of the million concurrent user requests on your app.
The backend infrastructure of the app should not crumble under a load of a
million concurrent requests. It should scale well when subjected to a heavy
traffic load & should maintain the latency of the system.

What Is Latency? #
Latency is the amount of time a system takes to respond to a user request.
Let’s say you send a request to an app to fetch an image & the system takes 2
seconds to respond to your request. The latency of the system is 2 seconds.

Minimum latency is what efficient software systems strive for. No matter how
much the traffic load on a system builds up, the latency should not go up. This
is what scalability is.

If the latency remains the same, we can say yeah, the application scaled well
with the increased load & is highly scalable.

Let’s think of scalability in terms of Big-O notation. Ideally, the complexity of a system or an algorithm should be O(1), which is constant time, like in a key-value database.

A program with the complexity of O(n^2) where n is the size of the data set is
not scalable. As the size of the data set increases the system will need more
computational power to process the tasks.
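A quick way to feel this difference is to count operations instead of timing them. This hypothetical Python sketch contrasts an O(n^2) pairwise scan with an O(1) keyed lookup:

```python
def pairs_scan(items):
    """O(n^2): inspects every pair, so the work explodes as the data set grows."""
    items = list(items)
    ops = 0
    for a in items:
        for b in items:
            ops += 1  # count the work instead of timing it
    return ops

def keyed_lookup(index, key):
    """O(1): a key-value lookup costs the same regardless of data set size."""
    return index.get(key)

print(pairs_scan(range(10)))   # → 100
print(pairs_scan(range(100)))  # → 10000: 10x the data means 100x the work
print(keyed_lookup({"user:1": "Alice"}, "user:1"))  # → Alice
```

The keyed lookup does the same amount of work whether the index holds ten entries or ten million, which is exactly the behaviour a scalable system strives for.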
So, how do we measure latency?

Measuring Latency #
Latency is measured as the time difference between the action that the user takes on the website (it can be an event like the click of a button) & the system’s response in reaction to that event.

This latency is generally divided into two parts:

1. Network Latency
2. Application Latency

Network Latency #
Network Latency is the amount of time that the network takes for sending a
data packet from point A to point B. The network should be efficient enough
to handle the increased traffic load on the website. To cut down the network
latency, businesses use CDN & try to deploy their servers across the globe as
close to the end-user as possible.

Application Latency #
Application Latency is the amount of time the application takes to process a
user request. There are more than a few ways to cut down the application
latency. The first step is to run stress & load tests on the application & scan for
the bottlenecks that slow down the system as a whole. I’ve talked more about
it in the upcoming lesson.

Why Is Low Latency So Important For Online Services? #
Latency plays a major role in determining if an online business wins or loses a customer. Nobody likes to wait for a response on a website. There is a well-known saying: if you want to test a person’s patience, give them a slow internet connection 😊

If the visitor gets the response within a stipulated time, great; otherwise, they bounce off to another website.

Numerous market studies show that high latency in applications is a big factor in customers bouncing off a website. If there is money involved, zero latency is what businesses want, if only that were possible.

Think of massively multiplayer online (MMO) games: a slight lag in an in-game event ruins the whole experience. A gamer with a high-latency internet connection will have a slow response time despite having the best reaction time of all the players in an arena.

Algorithmic trading services need to process events within milliseconds. Fintech companies have dedicated networks to run low latency trading. The regular network just won’t cut it.

We can realize the importance of low latency by the fact that, in the year 2011, Huawei & Hibernia Atlantic started laying a fibre-optic link cable across the Atlantic Ocean between London & New York, estimated to cost approx. $300M, just to save traders 6 milliseconds of latency.
Types Of Scalability

In this lesson, we will explore the two types of scaling: Vertical and Horizontal Scaling.

WE'LL COVER THE FOLLOWING

• What is Vertical Scaling?
• What is Horizontal Scaling?
• Cloud Elasticity

For an application to scale well, it needs solid computing power. The servers should be powerful enough to handle increased traffic loads.

There are two ways to scale an application:

1. Vertical Scaling
2. Horizontal Scaling

What is Vertical Scaling? #

Vertical scaling means adding more power to your server. Let’s say your app is hosted by a server with 16 Gigs of RAM. To handle the increased load you increase the RAM to 32 Gigs. You have vertically scaled the server.
Ideally, when the traffic starts to build up on your app, the first step should be to scale vertically. Vertical scaling is also called scaling up.

In this type of scaling we increase the power of the hardware running the app. This is the simplest way to scale since it doesn’t require any code refactoring or any complex configurations. I’ll discuss further down the lesson why code refactoring is required when we horizontally scale the app.

But there is only so much we can do when scaling vertically. There is a limit to
the capacity we can augment for a single server.

A good analogy would be to think of a multi-storey building: we can keep adding floors to it, but only up to a certain point. What if the number of people in need of a flat keeps rising? We can’t scale up the building to the moon, for obvious reasons.

Now is the time to build more buildings. This is where Horizontal Scalability
comes in.

When the traffic is just too much to be handled by a single machine, we bring in more servers to work together.
What is Horizontal Scaling? #
Horizontal scaling, also known as scaling out, means adding more hardware
to the existing hardware resource pool. This increases the computational
power of the system as a whole.

Now the increased traffic influx can be easily dealt with using the increased computational capacity, & there is literally no limit to how much we can scale horizontally, assuming we have infinite resources. We can keep adding servers after servers, setting up data centres after data centres.

Horizontal scaling also provides us with the ability to dynamically scale in real-time as the traffic on our website increases & decreases over a period of time, as opposed to vertical scaling which requires pre-planning & a stipulated time to be pulled off.

Cloud Elasticity #
The biggest reason why cloud computing got so popular in the industry is the
ability to scale up & down dynamically. The ability to use & pay only for the
resources required by the website became a trend for obvious reasons.

If the site has a heavy traffic influx more server nodes get added & when it
doesn’t the dynamically added nodes are removed.

This approach saves businesses bags of money every single day. The approach
is also known as cloud elasticity. It indicates the stretching & returning to the
original infrastructural computational capacity.

Having multiple server nodes on the backend also helps with the website
staying alive online all the time even if a few server nodes crash. This is
known as High Availability. We’ll get to that in the upcoming lessons.
Which Scalability Approach Is Right For Your App?

In this lesson, we will learn about which type of scaling is better for a given scenario.

WE'LL COVER THE FOLLOWING

• Pros & Cons of Vertical & Horizontal Scaling
• What about the code? Why does the code need to change when it has to run on multiple machines?
• Which Scalability Approach Is Right for Your App?

Pros & Cons of Vertical & Horizontal Scaling #

This is the part where I talk about the pluses & minuses of both approaches.

Vertical scaling, for obvious reasons, is simpler in comparison to scaling horizontally as we do not have to touch the code or make any complex distributed system configurations. It takes much less administrative, monitoring & management effort as opposed to managing a distributed environment.

A major downside of vertical scaling is the availability risk. The servers are powerful but few in number; there is always a risk of them going down & the entire website going offline, which doesn’t happen when the system is scaled horizontally. A horizontally scaled system is more highly available.

What about the code? Why does the code need to change when it has to run on multiple machines? #
If you need to run the code in a distributed environment, it needs to be
stateless. There should be no state in the code. What do I mean by that?

No static instances in the class. Static instances hold application data & if a
particular server goes down all the static data/state is lost. The app is left in an
inconsistent state.
Rather, use persistent memory like a key-value store to hold the data & remove all the state/static variables from the class. This is why functional programming got so popular with distributed systems. The functions don’t retain any state.
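A sketch of the difference in Python; the dict stands in for an external key-value store like Redis, & the cart classes are invented for illustration:

```python
# Anti-pattern: state held inside the process. If this node dies, the data is
# lost, & other nodes behind the load balancer can't see it anyway.
class StatefulCart:
    items = []  # static/class-level state

# Stateless version: every node reads & writes a shared key-value store.
class StatelessCart:
    def __init__(self, store):
        self.store = store  # e.g. Redis in production; a dict stands in here

    def add(self, user_id, item):
        self.store.setdefault(user_id, []).append(item)

    def items_for(self, user_id):
        return self.store.get(user_id, [])

shared_store = {}                     # stand-in for the external key-value DB
node_a = StatelessCart(shared_store)  # two "server nodes" sharing the store
node_b = StatelessCart(shared_store)
node_a.add("u1", "book")
print(node_b.items_for("u1"))  # → ['book']: any node can serve the request
```

Because the state lives outside the process, a node can crash or be added at any time without leaving the app in an inconsistent state.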

Always have a ballpark estimate in mind when designing your app: how much traffic will it have to deal with?

Development teams today are adopting a distributed micro-services architecture right from the start & the workloads are meant to be deployed on the cloud. So, inherently the workloads are horizontally scaled out on the fly.

The upsides of horizontal scaling include no limit to augmenting the hardware capacity. Data is replicated across different geographical regions as nodes & data centres are set up across the globe.

I’ll discuss cloud, serverless and microservices in the upcoming lessons. So,
stay tuned.

Which Scalability Approach Is Right for Your App? #
If your app is a utility or a tool which is expected to receive minimal
consistent traffic, it may not be mission-critical. For instance, an internal tool
of an organization or something similar.

Why bother hosting it in a distributed environment? A single server is enough to manage the traffic. Go ahead with vertical scaling when you know that the traffic load will not increase significantly.

If your app is a public-facing app like a social network, a fitness app or something similar, then the traffic is expected to spike exponentially in the near future. Both high availability & horizontal scalability are important to you.

Build to deploy it on the cloud & always have horizontal scalability in mind
right from the start.
Primary Bottlenecks that Hurt the Scalability Of Our
Application

WE'LL COVER THE FOLLOWING

• Database
• Application Architecture
• Not Using Caching In the Application Wisely
• Inefficient Configuration & Setup of Load Balancers
• Adding Business Logic to the Database
• Not Picking the Right Database
• At the Code Level

There are several points in a web application which can become a bottleneck
& can hurt the scalability of our application. Let’s have a look at them.

Database #
Consider that, we have an application that appears to be well architected.
Everything looks good. The workload runs on multiple nodes & has the ability
to horizontally scale.

But the database is a poor single monolith, just one server being given the onus of handling the data requests from all the server nodes of the workload.

This scenario is a bottleneck. The server nodes work well & handle millions of requests at a point in time efficiently, yet the response time of these requests & the latency of the application are very high due to the presence of a single database. There is only so much it can handle.

Just like the workload scalability, the database needs to be scaled well.
Make wise use of database partitioning, sharding, use multiple database
servers to make the module efficient.

Application Architecture #
A poorly designed application’s architecture can become a major bottleneck as
a whole.

A common architectural mistake is not using asynchronous processes & modules wherever required; rather, all the processes are scheduled sequentially.

For instance, if a user uploads a document on the portal, tasks such as sending
a confirmation email to the user, sending a notification to all of the
subscribers/listeners to the upload event should be done asynchronously.

These tasks should be forwarded to a messaging server as opposed to doing it all sequentially & making the user wait for everything.
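The shape of this in code, with Python’s in-process `queue.Queue` standing in for a real messaging server like RabbitMQ (the handler & task names here are made up for illustration):

```python
import queue
import threading

tasks = queue.Queue()  # stand-in for a messaging server like RabbitMQ
log = []

def worker():
    """Background consumer: does the slow work off the request path."""
    while True:
        task = tasks.get()
        if task is None:
            break
        log.append(f"done: {task}")
        tasks.task_done()

threading.Thread(target=worker, daemon=True).start()

def handle_upload(doc):
    """The request handler returns immediately; slow tasks are queued, not awaited."""
    tasks.put(f"email confirmation for {doc}")
    tasks.put(f"notify subscribers of {doc}")
    return "upload accepted"  # the user is not kept waiting

print(handle_upload("report.pdf"))  # → upload accepted
tasks.join()  # only this demo waits for the worker; the user never does
print(log)
```

The user gets their acknowledgement right away, while the email & notification work drains from the queue in the background.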

Not Using Caching In the Application Wisely #

Caching can be deployed at several layers of the application & it speeds up the response time by notches. It intercepts all the requests going to the database, reducing the overall load on it.

Use caching exhaustively throughout the application to speed up things significantly.
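A bare-bones cache-aside sketch in Python; the dicts are stand-ins for a real cache (such as Redis) & a real database, and the counter just makes the saved load visible:

```python
db_reads = {"count": 0}

def read_from_db(key):
    db_reads["count"] += 1  # every hit here is load on the database
    return f"value-of-{key}"

cache = {}  # stand-in for a real cache like Redis or Memcached

def get(key):
    """Cache-aside: serve from the cache, fall back to the DB only on a miss."""
    if key in cache:
        return cache[key]
    value = read_from_db(key)
    cache[key] = value
    return value

get("user:1"); get("user:1"); get("user:1")
print(db_reads["count"])  # → 1: two of the three reads never touched the database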

Inefficient Configuration & Setup of Load Balancers #
Load balancers are the gateway to our application. Using too many or too few
of them impacts the latency of our application.

Adding Business Logic to the Database #

No matter what justification anyone provides, I’ve never been a fan of adding business logic to the database. The database is just not the place to put business logic. Not only does it make the whole application tightly coupled, it also puts unnecessary load on the database.

Imagine, when migrating to a different database, how much code refactoring it would require.

Not Picking the Right Database #

Picking the right database technology is vital for businesses. Need transactions & strong consistency? Pick a relational database. If you can do without strong consistency & rather need horizontal scalability on the fly, pick a NoSQL database.

Trying to pull things off with unsuitable tech always has a profound negative impact on the latency of the entire application.

At the Code Level #

This shouldn’t come as a surprise, but inefficient & badly written code has the potential to take down the entire service in production. This includes:

Using unnecessary loops & nested loops.
Writing tightly coupled code.
Not paying attention to the Big-O complexity while writing the code. Be ready to do a lot of firefighting in production.

In this lesson, if a few of the things are not clear to you, such as strong consistency, how a message queue provides asynchronous behaviour, or how to pick the right database, don’t worry. I’ll discuss all that in the upcoming lessons, stay tuned.
How To Improve & Test the Scalability Of Our
Application?

In this lesson, we will learn how we can improve & test the scalability of our application.

WE'LL COVER THE FOLLOWING

• Tuning The Performance Of The Application – Enabling It To Scale Better
• Profiling
• Caching
• CDN (Content Delivery Network)
• Data Compression
• Avoid Unnecessary Client Server Requests
• Testing the Scalability Of Our Application

Here are some of the best & most common strategies to fine-tune the performance of our web application. If the application is performance-optimized, it can withstand more traffic load with less resource consumption as opposed to an application that is not optimized for performance.

Now you might be thinking why am I talking about performance when I should be talking about scalability?

Well, the application’s performance is directly proportional to its scalability. If an application is not performant, it will certainly not scale well. These best practices can be implemented even before the real pre-production testing is done on the application.

So, here we go.

Tuning The Performance Of The Application – Enabling It To Scale Better #
Profiling #

Profile the hell out of your application. Run an application profiler & a code profiler. See which processes are taking too long & eating up too many resources. Find out the bottlenecks & get rid of them.

Profiling is the dynamic analysis of our code. It helps us measure the space and the time complexity of our code & enables us to figure out issues like concurrency errors & memory errors, as well as assess the robustness & safety of the program.
This Wikipedia resource contains a good list of performance analysis tools
used in the industry
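As a minimal illustration, Python’s built-in cProfile can surface a hotspot like this; the `slow_concat` function is a deliberately wasteful example invented for the sketch, not from any real codebase:

```python
import cProfile
import io
import pstats

def slow_concat(n):
    """Deliberately wasteful: repeated string concatenation in a loop."""
    s = ""
    for i in range(n):
        s += str(i)
    return s

profiler = cProfile.Profile()
profiler.enable()
slow_concat(5000)
profiler.disable()

stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
report = stream.getvalue()
print("slow_concat" in report)  # → True: the hotspot shows up in the report
```

The generated report lists call counts & cumulative time per function, which is exactly the data you need to decide which bottleneck to attack first.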

Caching #
Cache wisely. Cache everywhere. Cache all the static content. Hit the database
only when it is really required. Try to serve all the read requests from the
cache. Use a write-through cache.

CDN (Content Delivery Network) #

Use a CDN. Using a CDN further reduces the latency of the application due to the proximity of the data to the requesting user.

Data Compression #
Compress data. Use apt compression algorithms to compress data. Store data
in the compressed form. As compressed data consumes less bandwidth,
consequently, the download speed of the data on the client will be faster.
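For instance, with Python’s standard-library gzip (the payload here is a made-up repetitive blob, chosen because repetitive data compresses well):

```python
import gzip

payload = b'{"scores": [' + b"0, " * 5000 + b"0]}"  # made-up repetitive data
compressed = gzip.compress(payload)

print(len(payload), len(compressed))  # the compressed form is a fraction of the size
assert gzip.decompress(compressed) == payload  # lossless: the client restores it
```

On the web, the same trade happens via the `Content-Encoding: gzip` response header: the server spends a little CPU to compress, & the client downloads far fewer bytes.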

Avoid Unnecessary Client Server Requests #

Avoid unnecessary round trips between the client & server. Try to club multiple requests into one.

These are a few of the things we should keep in mind in context to the
performance of the application.

Testing the Scalability Of Our Application #

Once we are done with the basic performance testing of the application, it is time for capacity planning, provisioning the right amount of hardware & computing power.
The right approach for testing the application for scalability largely depends
on the design of our system. There is no definite formula for that. Testing can
be performed at both the hardware and the software level. Different services
& components need to be tested both individually and collectively.

During the scalability testing, different system parameters are taken into
account such as the CPU usage, network bandwidth consumption, throughput,
the number of requests processed within a stipulated time, latency, memory
usage of the program, end-user experience when the system is under heavy load
etc.

In this testing phase, simulated traffic is routed to the system, to study how the
system behaves under the heavy load, how the application scales under the
heavy load. Contingencies are planned for unforeseen situations.

As per the anticipated traffic, appropriate hardware & computational power is provisioned to handle the traffic smoothly, with some buffer.

Several load & stress tests are run on the application. Tools like JMeter are pretty popular for running concurrent user tests on the application if you are working in the Java ecosystem. There are a lot of cloud-based testing tools available that help us simulate test scenarios with just a few mouse clicks.
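To make the idea concrete, here is a bare-bones sketch of a concurrent user test; real tools like JMeter do this at far larger scale with ramp-up profiles & reporting. The 10 ms sleep is a stand-in for an actual network call.

```python
# Fire N simulated requests in parallel threads and measure latency.
import time
from concurrent.futures import ThreadPoolExecutor

def simulated_request(i):
    start = time.perf_counter()
    time.sleep(0.01)               # stand-in for a real network call
    return time.perf_counter() - start

# 100 requests, at most 20 in flight at once
with ThreadPoolExecutor(max_workers=20) as pool:
    latencies = list(pool.map(simulated_request, range(100)))

print(f"requests: {len(latencies)}")
print(f"avg latency: {sum(latencies) / len(latencies) * 1000:.1f} ms")
```

Swapping the sleep for a real HTTP call against a staging environment turns this into the simplest possible load generator.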

Businesses test for scalability all the time to get their systems ready to handle
the traffic surge. If it’s a sports website it would prepare itself for the sports
event day, if it’s an e-commerce website it would make itself ready for the
festival season.

Read how production engineers support global events on Facebook.

Also, read how Hotstar, a video streaming service, scaled with over 10 million concurrent users.

In the industry, tools like cAdvisor, Prometheus and Grafana are pretty popular for tracking systems via web-based dashboards.
I’ve written an article on it in case you want to read more about the pre-
production monitoring.
Scalability Quiz

This lesson contains a quiz to test your understanding of scalability.

Let’s Test Your Understanding Of Scalability

1. Which of the following statements is true in context to latency & scalability?

What Is High Availability?

In this lesson, we will learn about high availability and its importance in online services.

WE'LL COVER THE FOLLOWING

• What Is High Availability?


• How Important Is High Availability To Online Services?

Highly available computing infrastructure is the norm in the computing industry today. More so, when it comes to cloud platforms, it's the key feature which enables the workloads running on them to be highly available.

This lesson is an insight into high availability. It covers all the frequently asked
questions about it such as:

What is it?
Why is it so important to businesses?
What is a highly available cluster?
How do cloud platforms ensure high availability of the services running
on them?
What is fault tolerance & redundancy? How are they related to high
availability?

So, without any further ado, let's get on with it.

What Is High Availability? #

High availability, also known as HA, is the ability of the system to stay online despite failures at the infrastructure level, in real-time.

High availability ensures the uptime of the service stays well above the norm. It improves the reliability of the system & ensures minimum downtime.

The sole mission of highly available systems is to stay online & stay connected.
A very basic example of this is having back-up generators to ensure
continuous power supply in case of any power outages.

In the industry, HA is often expressed as a percentage. For instance, when the system is 99.99999% highly available, it simply means 99.99999% of the total hosting time the service will be up. You might often see this in the SLA (Service Level Agreements) of cloud platforms.
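Those availability percentages translate directly into a downtime budget. A quick sketch of the arithmetic, assuming a 365-day year:

```python
# Convert an availability percentage into the maximum allowed downtime
# per year, the figure SLAs are usually read in.

def downtime_per_year(availability_pct):
    minutes_per_year = 365 * 24 * 60        # 525,600 minutes
    return minutes_per_year * (1 - availability_pct / 100)

for pct in (99.0, 99.9, 99.99, 99.999):
    print(f"{pct}% -> {downtime_per_year(pct):.1f} minutes/year")
```

Each extra "nine" cuts the allowed downtime by a factor of ten: 99.9% permits roughly 8.8 hours of downtime a year, while 99.999% permits only about 5 minutes.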

How Important Is High Availability To Online Services? #
It might not impact businesses that much if social applications go down for a bit & then bounce back. However, there are mission-critical systems like aircraft systems, spacecraft, mining machines, hospital servers & finance stock market systems that just cannot afford to go down at any time. After all, lives depend on them.

The smooth functioning of the mission-critical systems relies on continual connectivity with their network/servers. These are the instances when we just cannot do without super highly available infrastructures.

Besides, no service likes to go down, critical or not.

To meet the high availability requirements, systems are designed to be fault-tolerant & their components are made redundant.

What are fault tolerance & redundancy in system design? I'll discuss them next.


Reasons For System Failures

In this lesson, we will discuss the common reasons for system failure.

WE'LL COVER THE FOLLOWING

• Software Crashes
• Hardware Failures
• Human Errors
• Planned Downtime

Before delving into HA system design, fault tolerance and redundancy, I'll first talk about the common reasons why systems fail.

Software Crashes #
I am sure you are pretty familiar with software crashes. Applications crash all
the time, be it on a mobile phone or a desktop.

Corrupt software files. Remember the BSOD (Blue Screen of Death) in Windows? OS crashes, memory-hogging unresponsive processes. Likewise, software running on cloud nodes crashes unpredictably & takes down the entire node with it.

Hardware Failures #
Another reason for system failure is hardware crashes. Overloaded CPU, RAM,
hard disk failures, nodes going down. Network outages.

Human Errors #
This is the biggest reason for system failures: flawed configurations & the like.

Google made a tiny network configuration error & it took down almost half of
the internet in Japan. This is an interesting read.

Planned Downtime #
Besides the unplanned crashes, there are planned down times which involve
routine maintenance operations, patching of software, hardware upgrades
etc.

These are the primary reasons for system failures, now let’s talk about how
HA systems are designed to overcome these scenarios of system downtime.
Achieving High Availability - Fault Tolerance

In this lesson, we will learn about fault tolerance & designing a HA fault tolerant service.

WE'LL COVER THE FOLLOWING

• What is Fault Tolerance?


• Designing A Highly Available Fault-Tolerant Service – Architecture

There are several approaches to achieve HA. The most important of them is to
make the system fault-tolerant.

What is Fault Tolerance? #

Fault tolerance is the ability of the system to stay up despite taking hits.

A fault-tolerant system is equipped to handle faults. Being fault-tolerant is an essential element in designing life-critical systems.

A few of the instances/nodes, out of several, running the service go offline &
bounce back all the time. In case of these internal failures, the system could
work at a reduced level but it will not go down entirely.

A very basic example of a system being fault-tolerant is a social networking application. In the case of backend node failures, a few services of the app such as image upload, post likes etc. may stop working. But the application as a whole will still be up. This approach is also technically known as Fail Soft.

Designing A Highly Available Fault-Tolerant Service – Architecture #
To achieve high availability at the application level, the entire massive service is architecturally broken down into smaller, loosely coupled services called microservices.

There are many upsides of splitting a big monolith into several microservices,
as it provides:

Easier management
Easier development
Ease of adding new features
Ease of maintenance
High availability

Every microservice takes the onus of running different features of an application such as image upload, comment, instant messaging etc.

So, even if a few services go down the application as a whole is still up.
Redundancy

In this lesson, we will learn about Redundancy as a High Availability mechanism.

WE'LL COVER THE FOLLOWING

• Redundancy – Active-Passive HA Mode


• Getting Rid Of Single Points Of Failure
• Monitoring & Automation

Redundancy – Active-Passive HA Mode #

Redundancy is duplicating the components or instances & keeping them on standby to take over in case the active instances go down. It's the fail-safe, backup mechanism.
In the above diagram, you can see the instances active & on standby. The
standby instances take over in case any of the active instances goes down.

This approach is also known as Active-Passive HA mode. An initial set of nodes is active & a set of redundant nodes is passive, on standby. Active nodes get replaced by passive nodes in case of failures.

There are systems like GPS, aircraft & communication satellites which have zero downtime. The availability of these systems is ensured by making their components redundant.
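The Active-Passive idea can be sketched as a toy failover routine; real clusters detect failures via a heartbeat network & a coordination service, not a simple in-memory list, and all names here are made up.

```python
# A toy active-passive failover: a health check on the active node
# fails, and a standby node is promoted to take its place.

nodes = [
    {"name": "node-a", "role": "active",  "healthy": True},
    {"name": "node-b", "role": "standby", "healthy": True},
]

def failover(nodes):
    active = next(n for n in nodes if n["role"] == "active")
    if not active["healthy"]:
        # promote the standby, mark the old active as failed
        standby = next(n for n in nodes if n["role"] == "standby")
        active["role"], standby["role"] = "failed", "active"
        return standby["name"]
    return active["name"]

nodes[0]["healthy"] = False      # the active instance goes down
print(failover(nodes))           # the standby has taken over
```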

Getting Rid Of Single Points Of Failure #


Distributed systems became popular largely because, with them, we could get rid of the single points of failure present in a monolithic architecture.

A large number of distributed nodes work in conjunction with each other to achieve a single synchronous application state.

When so many redundant nodes are deployed, there are no single points of
failure in the system. In case a node goes down redundant nodes take its
place. Thus, the system as a whole remains unimpacted.

Single points of failure at the application level mean bottlenecks. We should detect bottlenecks in performance testing & get rid of them as soon as we can.

Monitoring & Automation #


Systems should be well monitored in real-time to detect any bottlenecks or single points of failure. Automation enables the instances to self-recover without any human intervention. It gives the instances the power of self-healing.

Also, the systems become intelligent enough to add or remove instances on the fly as per the requirements.

Since the most common cause of failures is human error, automation helps cut down failures to a large extent.
Replication

In this lesson, we will learn about Replication as a High Availability mechanism.

WE'LL COVER THE FOLLOWING

• Replication – Active-Active HA Mode


• Geographical Distribution of Workload

Replication – Active-Active HA Mode #


Replication means having a number of similar nodes running the workload
together. There are no standby or passive instances. When a single or a few
nodes go down, the remaining nodes bear the load of the service. Think of this
as load balancing.
This approach is also known as the Active-Active High Availability mode. In this approach, all the components of the system are active at any point in time.

Geographical Distribution of Workload #


As a contingency for natural disasters, regional data centre power outages & other big-scale failures, workloads are spread across different data centres in different geographical zones across the world.

This avoids having a data centre as a single point of failure. Also, latency is reduced by quite an extent due to the proximity of data to the user.

All the highly available fault-tolerant design decisions depend on how critical the system is, what the odds are that the components will fail, etc.

Businesses often use multi-cloud platforms to deploy their workloads, which ensures further availability. If things go south with one cloud provider, they have another to fail over to.
High Availability Clustering

In this lesson, we will learn about High Availability Clustering.

Now, that we have a clear understanding of high availability, let’s talk a bit
about the high availability cluster.

A High Availability cluster, also known as a Fail-Over cluster, contains a set of nodes running in conjunction with each other that ensures high availability of the service.

The nodes in the cluster are connected by a private network called the
Heartbeat network that continuously monitors the health and the status of
each node in the cluster.

A single state across all the nodes in a cluster is achieved with the help of shared distributed memory & a distributed co-ordination service like Zookeeper.
To ensure availability, HA clusters use several techniques such as disk mirroring/RAID (Redundant Array of Independent Disks), redundant network connections, redundant electrical power etc. The network connections are made redundant so that if the primary network goes down, the backup network takes over.

Multiple HA clusters run together in one geographical zone, ensuring minimum downtime & continual service.

Alright, so now we have a pretty good understanding of scalability and high availability. These two concepts are crucial to software system design.

Moving on to the next chapter where we discuss monolithic & microservices architecture.
High Availability Quiz

This lesson contains a quiz to test your understanding of high availability.

Let’s Test Your Understanding Of High Availability

1. Which of the following statements is true in context to scalability & high availability?

Understanding DNS – Part 1
In this lesson, we will understand what DNS (Domain Name System) is & how it works.

We'll cover the following

• Domain Name System


• How Does Domain Name System Work?

Every machine that is online & is a part of the World Wide Web (www) has a unique IP address that enables it to be contacted by other machines on the web using that particular IP address.

IP stands for Internet Protocol. It’s a protocol that facilitates delivery of data
packets from one machine to the other using their IP addresses.

2001:db8:0:1234:0:567:8:1 - this is an example of an IP address of a machine. The server that hosts our website will have a similar IP address. To fetch content from that server, a user has to type in the unique IP address of the server in their browser's URL tab & hit enter to interact with the content of the website.

Now, naturally, it's not viable to type in the IP address of a website, from memory, every time we visit it. Even if we tried to, how many IP addresses do you think we could remember?

Typing in domain names, for instance amazon.com, is a lot easier than working directly with IP addresses. I think we can all agree on this.

Domain Name System #


The Domain Name System, commonly known as DNS, is a system that averts the need to remember long IP addresses to visit a website by mapping easy-to-remember domain names to IP addresses.

amazon.com is a domain name that is mapped to its unique IP address by the DNS
so that we are not expected to type in the IP address of amazon.com into our
browsers every time we visit that website.
If you are intrigued and want to read more on IP addresses, you can visit this
Wikipedia resource on it.

Okay! Now let's understand how DNS works.

How Does Domain Name System Work? #


When a user types the URL of a website into their browser and hits enter, this event is known as DNS querying.

There are four key components, i.e. a group of servers, that make up the DNS
infrastructure. Those are -

DNS Recursive Nameserver aka DNS Resolver
Root Nameserver
Top-Level Domain Nameserver
Authoritative Nameserver

In the next lesson, let's understand how the DNS query lookup process works and what the role of these servers is in the lookup process.
Understanding DNS – Part 2
This lesson continues the discussion on the domain name system

In this lesson, we will have an insight into the complete DNS query lookup process
& we will also understand the role of different servers in the DNS infrastructure.

Let's begin…

So, when the user hits enter after typing in the domain name into their browser,
the browser sends a request to the DNS Recursive nameserver which is also known
as the DNS Resolver.

The role of DNS Resolver is to receive the client request and forward it to the Root
nameserver to get the address of the Top-Level domain nameserver.

The DNS Recursive nameserver is generally managed by our ISP (Internet Service Provider). The whole DNS system is a distributed system set up in large data centers managed by internet service providers.

These data centers contain clusters of servers that are optimized to process DNS queries in minimal time, i.e. in milliseconds.

So, once the DNS Resolver forwards the request to the Root nameserver, the Root
nameserver returns the address of the Top-Level domain nameserver in response.
As an example, the top-level domain for amazon.com is .com.

Once the DNS Resolver receives the address of the top-level domain nameserver, it
sends a request to it to fetch the details of the domain name. Top level domain
nameservers hold the data for domains using their top-level domains.

For instance, the .com top-level domain nameserver will contain information on domains using .com. Similarly, a .edu top-level domain nameserver will hold information on domains using .edu.

Since our domain is amazon.com the DNS Resolver will route the request to the
.com top-level domain name server.

Once the top-level domain name server receives the request from the Resolver, it
returns the IP address of amazon.com domain name server.

amazon.com domain name server is the last server in the DNS query lookup
process. It is the nameserver responsible for the amazon.com domain & is also
known as the Authoritative nameserver. This nameserver is owned by the owner of
the domain name.

DNS Resolver then fires a query to the authoritative nameserver & it then returns
the IP address of amazon.com website to the DNS Resolver. DNS Resolver caches the
data and forwards it to the client.

On receiving the response, the browser sends a request to the IP address of the
amazon.com website to fetch data from their servers.

Often all this DNS information is cached and the DNS servers don’t have to do so
much rerouting every time a client requests an IP of a certain website.

DNS information of the websites that we visit also gets cached in our local machines, that is, our browsing devices, with a TTL (Time To Live).

All the modern browsers do this automatically to cut down the DNS query lookup
time when revisiting a website.

This is how the entire DNS query lookup process works.
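The whole lookup chain above happens behind a single library call. The sketch below uses Python's socket module, which hands the query to the OS resolver; "localhost" is used so it resolves locally without a network round trip, whereas a public name like amazon.com would traverse the resolver chain described above.

```python
# Resolving a hostname to an IP address. The OS resolver consults its
# cache or the configured DNS Resolver on our behalf.
import socket

ip = socket.gethostbyname("localhost")
print(ip)   # 127.0.0.1
```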


In the next lesson, let’s have an insight into DNS load balancing.
What Is A Microservice Architecture?

In this lesson, we will learn about the Microservice Architecture.

WE'LL COVER THE FOLLOWING

• What Is A Microservices Architecture?

What Is A Microservices Architecture? #

In a microservices architecture, different features/tasks are split into separate respective modules/codebases which work in conjunction with each other, forming a large service as a whole.

Remember the Single Responsibility & the Separation of Concerns principles? Both principles are applied in a microservices architecture.

This particular architecture facilitates easier & cleaner app maintenance, feature development, testing & deployment in comparison to a monolithic architecture.

Imagine accommodating every feature in a single repository. How complex would things be? It would be a maintenance nightmare.

Also, since the project is large, it is expected to be managed by several different teams. When modules are separate, they can be assigned to respective teams with minimum fuss, smoothing out the development process.

And did I bring up scalability? To scale, we need to split things up. We need
to scale out when we can’t scale up further. Microservices architecture is
inherently designed to scale.
The diagram below represents a microservices architecture:

Every service ideally has a separate database, there are no single points of
failure & system bottlenecks.

Let's go through some of the pros and cons of using a microservices architecture.
DNS Load Balancing
In this lesson, we will understand DNS Load balancing

We'll cover the following

• DNS Load Balancing


• Limitations Of DNS Load Balancing

DNS Load Balancing #


In the previous lesson, we understood how the DNS query lookup process works
and the role of different servers in the domain name system. The final end server,
in the lookup chain, is the authoritative server that returns the IP address of the
domain.

When a large-scale service such as amazon.com runs, it needs way more than a
single machine to run its services. A service as big as amazon.com is deployed
across multiple data centers in different geographical locations across the globe.

To spread the user traffic across different clusters in different data centers, there are different ways to set up load balancing. In this lesson, we will discuss DNS load balancing, which is set up at the DNS level on the authoritative server.
DNS load balancing enables the authoritative server to return different IP addresses
of a certain domain to the clients. Every time it receives a query for an IP, it
returns a list of IP addresses of a domain to the client.

With every request, the authoritative server changes the order of the IP addresses
in the list in a round-robin fashion.

As the client receives the list, it sends out a request to the first IP address on the list
to fetch the data from the website. The reason for returning a list of IP addresses to
the client is to enable it to use other IP addresses in the list in case the first doesn’t
return a response within a stipulated time.

When another client sends out a request for an IP address to the authoritative
server, it re-orders the list and puts another IP address on the top of the list
following the round-robin algorithm.
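The rotation can be sketched as follows; the IP addresses are from the documentation range (203.0.113.0/24) and purely illustrate how an authoritative server might re-order its answer per query.

```python
# Round-robin rotation as an authoritative server might perform it:
# each query gets the same list of IPs, rotated by one position, so
# successive clients start with different servers.
from collections import deque

ips = deque(["203.0.113.1", "203.0.113.2", "203.0.113.3"])

def answer_query():
    response = list(ips)
    ips.rotate(-1)           # next client sees a different first IP
    return response

print(answer_query()[0])     # 203.0.113.1
print(answer_query()[0])     # 203.0.113.2
```

The client uses the first address in the list and falls back to the rest if it gets no response, which is exactly the behavior described above.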

Also, when the client hits an IP it may not necessarily hit an application server, it
may hit another load balancer implemented at the data center level that manages
the clusters of application servers.
Limitations Of DNS Load Balancing #

DNS load balancing is largely used by companies to distribute traffic across the multiple data centers that the application runs in. This approach, though, has several limitations; for instance, it doesn't take into account the existing load on the servers, the content they hold, their request processing time, their in-service status and so on.

Also, since these IP addresses are cached by the client’s machine and the DNS
Resolver, there is always a possibility of a request being routed to a machine that is
out of service.

DNS load balancing, despite its limitations, is preferred by companies because it's an easy and less expensive way of setting up load balancing for their service.

Recommended Read – Round Robin DNS


When Should You Pick A Microservices Architecture?

In this lesson, we will learn about the pros and cons of the Microservice Architecture & when should we pick it for
our project.

WE'LL COVER THE FOLLOWING

• Pros of Microservice Architecture


• No Single Points Of Failure
• Leverage the Heterogeneous Technologies
• Independent & Continuous Deployments
• Cons Of Microservices Architecture
• Complexities In Management
• No Strong Consistency
• When Should You Pick A Microservices Architecture?

Pros of Microservice Architecture #


No Single Points Of Failure #
Since microservices is a loosely coupled architecture, there is no single point
of failure. Even if a few of the services go down, the application as a whole is
still up.

Leverage the Heterogeneous Technologies #


Every component interacts with the others via a REST API gateway interface. The components can leverage a polyglot persistence architecture & other heterogeneous technologies together, like Java, Python, Ruby, NodeJS etc.

Polyglot persistence means using multiple database types, like SQL and NoSQL, together in an architecture. I'll discuss it in detail in the database lesson.

Independent & Continuous Deployments #


The deployments can be independent and continuous. We can have dedicated teams for every microservice, and each can be scaled independently without impacting other services.

Cons Of Microservices Architecture #


Complexities In Management #
Microservices is a distributed environment, where there are so many nodes
running together. Managing & monitoring them gets complex.

We need to set up additional components to manage microservices, such as a node manager like Apache Zookeeper, a distributed tracing service for monitoring the nodes etc.

We need more skilled resources, maybe a dedicated team, to manage these services.

No Strong Consistency #
Strong consistency is hard to guarantee in a distributed environment. Things
are Eventually consistent across the nodes. And this limitation is due to the
distributed design.

I’ll discuss both Strong and eventual consistency in the database chapter.

When Should You Pick A Microservices Architecture? #
The microservice architecture fits best for complex use cases and for apps
which expect traffic to increase exponentially in future like a fancy social
network application.

A typical social networking application has various components such as messaging, real-time chat, LIVE video streaming, image uploads, the Like & Share features etc.

In this scenario, I would suggest developing each component separately, keeping the Single Responsibility and the Separation of Concerns principles in mind.
Writing every feature in a single codebase would become a mess in no time.

So, by now, in the context of monolithic and microservices, we have gone through three approaches:

1. Picking a monolithic architecture
2. Picking a microservice architecture
3. Starting with a monolithic architecture and then later scaling out into a microservice architecture

Picking a monolithic or a microservice architecture largely depends on our use case.

I'll suggest, keep things simple, have a thorough understanding of the requirements. Get the lay of the land, build something only when you need it & keep evolving the code iteratively. This is the right way to go.
Load Balancing Methods
In this lesson, we will have an insight into hardware and software load balancing.

We'll cover the following

• Hardware Load Balancers


• Software Load Balancers
• Algorithms/Traffic Routing Approaches Leveraged By Load Balancers
• Round Robin & Weighted Round Robin
• Least Connections
• Random
• Hash

There are primarily three modes of load balancing -

1. DNS Load Balancing
2. Hardware-based Load Balancing
3. Software-based Load Balancing

We’ve already discussed DNS load balancing in the former lesson. In this one, we
will discuss hardware and software load balancing.

So, without further ado. Let’s get on with it.

Hardware-based & Software-based, both are pretty common ways of balancing traffic load on large-scale services. Let's begin with hardware-based load balancing.

Hardware Load Balancers #


Hardware load balancers are highly performant physical hardware, they sit in
front of the application servers and distribute the load based on the number of
existing open connections to a server, compute utilization and several other
parameters.
Since these load balancers are physical hardware, they need maintenance & regular updates, just like any other server hardware would. They are expensive to set up in comparison to software load balancers and their upkeep may require a certain skill set.

If the business has an IT team & network specialists in house, they can take care of these load balancers; else, the developers are expected to wrap their heads around how to set up these hardware load balancers, with some assistance from the vendor. This is the reason developers prefer working with software load balancers.

When using hardware load balancers, we may also have to overprovision them to deal with peak traffic, which is not the case with software load balancers.

Hardware load balancers are primarily picked because of their top-notch performance.

Now let’s have an insight into software-based load balancing.

Software Load Balancers #


Software load balancers can be installed on commodity hardware and VMs. They
are more cost-effective and offer more flexibility to the developers. Software load
balancers can be upgraded and provisioned easily in comparison to hardware load
balancers.

You will also find several LBaaS (Load Balancer as a Service) offerings online that enable you to directly plug load balancers into your application without having to do any sort of setup.

Software load balancers are pretty advanced when compared to DNS load
balancing as they consider many parameters such as content that the servers host,
cookies, HTTP headers, CPU & memory utilization, load on the network & so on to
route traffic across the servers.

They also continually perform health checks on the servers to keep an updated list
of in-service machines.

Development teams prefer to work with software load balancers as hardware load
balancers require specialists to manage them.
HAProxy is one example of a software load balancer that is widely used by the big guns in the industry to scale their systems, such as GitHub, Reddit, Instagram, AWS, Tumblr, StackOverflow & many more.

Besides the Round Robin algorithm that the DNS Load balancers use, software load
balancers leverage several other algorithms to efficiently route traffic across the
machines. Let’s have an insight.

Algorithms/Traffic Routing Approaches Leveraged By Load Balancers #
Round Robin & Weighted Round Robin #
We know that the Round Robin algorithm sends the IP addresses of machines to the clients sequentially. Parameters such as the load on the servers, their CPU consumption and so on are not taken into account when sending the IP addresses to the clients.

We have another approach known as Weighted Round Robin, where weights are assigned to the servers based on their compute & traffic handling capacity. Then, based on the server weights, traffic is routed to them using the Round Robin algorithm.

With this approach, more traffic is directed to machines that can handle a higher traffic load, thus making efficient use of the resources.

This approach is pretty useful when the service is deployed in different data
centers having different compute capacities. More traffic can be directed to the
larger data centers containing more machines.

Least Connections #
When using this algorithm, the traffic is routed to the machine that has the least
open connections of all the machines in the cluster. There are two approaches to
implement this –

In the first, it is assumed that all the requests will consume an equal amount of
server resources & the traffic is routed to the machine, having the least open
connections, based on this assumption.

Now in this scenario, there is a possibility that the machine having the least open connections might already be processing requests demanding most of its CPU power. Routing more traffic to this machine wouldn't be such a good idea.

In the other approach, the CPU utilization & the request processing time of the
chosen machine is also taken into account before routing the traffic to it. Machines
with less request processing time, CPU utilization & simultaneously having the
least open connections are the right candidates to process the future client
requests.

The least connections approach comes in handy when the server has long-open connections, for instance, persistent connections in a gaming application.

Random #
Following this approach, the traffic is randomly routed to the servers. The load
balancer may also find similar servers in terms of existing load, request processing
time and so on and then it randomly routes the traffic to these machines.

Hash #
In this approach, the source IP where the request is coming from and the request
URL are hashed to route the traffic to the backend servers.

Hashing the source IP ensures that the request of a client with a certain IP will
always be routed to the same server.

This facilitates better user experience as the server has already processed the
initial client requests and holds the client’s data in its local memory. There is no
need for it to fetch the client session data from the session memory of the cluster &
then process the request. This reduces latency.

Hashing the client IP also enables the client to re-establish the connection with the
same server, that was processing its request, in case the connection drops.

Hashing a URL ensures that a request with that URL always hits a certain cache that already has data on it. This is to ensure that there is no cache miss.

This also averts the need for duplicating data in every cache and is thus a more
efficient way to implement caching.
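A minimal sketch of IP-hash routing, assuming a fixed set of three backends; note that a plain modulo mapping reshuffles most clients when servers are added or removed, which is why real load balancers often use consistent hashing instead. Server names & IPs below are made up.

```python
# Hash the client IP to pick a backend: the same IP always maps to the
# same server, so session data stays local to that machine.
import hashlib

servers = ["server-a", "server-b", "server-c"]

def route(client_ip):
    # md5 gives a stable, well-distributed digest of the IP string
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

# the same client always lands on the same server
assert route("198.51.100.7") == route("198.51.100.7")
print(route("198.51.100.7"))
```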

Well, this is pretty much it on the fundamentals of load balancing. In the next
chapter, let’s understand monoliths and microservices.
Introduction & Types of Data

In this lesson, we will have an introduction to databases and the different types of data.

WE'LL COVER THE FOLLOWING

• What Is A Database?
• Structured Data
• Unstructured Data
• Semi-structured Data
• User state

What Is A Database? #
A database is a component required to persist data. Data can be of many
forms: structured, unstructured, semi-structured and user state data.

Let’s quickly have an insight into the classification of data before delving into
the databases.
Structured Data #
Structured data is the type of data which conforms to a certain structure,
typically stored in a database in a normalized fashion.

There is no need to run any sort of data preparation logic on structured data
before processing it. Direct interaction can be done with this kind of data.

An example of structured data would be the personal details of a customer
stored in a database row. The customer id would be of integer type, the name
would be of String type with a certain character limit, etc.

So, with structured data, we know what we are dealing with. Since the
customer name is of String type, without much worry of errors or exceptions,
we can run String operations on it.

Structured data is generally managed with a query language such as SQL
(Structured Query Language).

Unstructured Data #
Unstructured data has no definite structure. It is generally heterogeneous
data comprising text, image files, video, multimedia files, PDFs, Blob
objects, Word documents, machine-generated data, etc.

This kind of data is often encountered in data analytics. Here the data streams
in from multiple sources such as IoT devices, social networks, web portals,
industry sensors etc. into the analytics systems.

We cannot just directly process unstructured data. The initial data is pretty
raw; we have to pass it through a data preparation stage which segregates it
based on some business logic and then runs the analytics algorithms on it.

Semi-structured Data #
Semi-structured data is a mix of structured & unstructured data. Semi-
structured data is often stored in data transport formats such as XML or JSON
and is handled as per the business requirements.
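A short sketch of what semi-structured data looks like in practice: a JSON document has structure (keys and values) but no database-enforced schema. The record below is a hypothetical example.

```python
import json

# A made-up product record in JSON, a common semi-structured
# transport format.
raw = '{"id": 42, "name": "Clean Code", "tags": ["programming", "craft"]}'

product = json.loads(raw)            # parse the transport format
assert product["name"] == "Clean Code"

# Fields can be added freely; nothing enforces a fixed set of columns.
product["discount"] = 0.1
serialized = json.dumps(product)     # ready to send or store again
```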
User state #

The data containing the user state is the information of activity which the user
performs on the website.

For instance, when browsing through an e-commerce website, the user would
browse through several product categories, change the preferences, add a few
products to the reminder list for the price drops.

All this activity is the user state. So the next time the user logs in, they can
continue from where they left off. It does not feel like they are starting fresh
with all their previous activity lost.

Storing user state improves the browsing experience & the conversion rate for
the business.
So, now we are clear on the different types of data. Let’s have a look into
different types of databases.

There are multiple different types of databases with specific use cases. We’ll
quickly go through each of them in order to have a bird’s eye view of the
database realm.
Relational Database

In this lesson, we will discuss the relational databases.

WE'LL COVER THE FOLLOWING

• What Is A Relational Database?


• What Are Relationships?
• Data Consistency
• ACID Transactions

What Is A Relational Database? #


This is the most common and widely used type of database in the industry. A
relational database stores data along with the relationships between entities:
one-to-one, one-to-many, many-to-many, etc. It has a relational data model, and
SQL is the primary query language used to interact with relational databases.

MySQL is the most popular example of a relational database. Alright!! I get it,
but what are relationships?

What Are Relationships? #


Let’s say you, as a customer, buy five different books from an online book
store. When you created an account on the book store, you would have been
assigned a customer id, say C1. Now C1 [you] is linked to five different
books: B1, B2, B3, B4, B5.

This is a one to many relationship. In the simplest of forms, one table will
contain the details of all the customers & another table will contain all the
products in the inventory.

One row in the customer table will correspond to multiple rows in the product
inventory table.
On pulling the user object with id C1 from the database we can easily find
what books C1 purchased via the relationship model.
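The one-to-many relationship described above can be sketched with Python’s built-in sqlite3 module. The table and column names here are invented for illustration.

```python
import sqlite3

# An in-memory relational database for the book-store example.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customers (id TEXT PRIMARY KEY, name TEXT)")
db.execute("""CREATE TABLE purchases (
    book_id TEXT, customer_id TEXT REFERENCES customers(id))""")

db.execute("INSERT INTO customers VALUES ('C1', 'Alice')")
db.executemany("INSERT INTO purchases VALUES (?, 'C1')",
               [("B1",), ("B2",), ("B3",), ("B4",), ("B5",)])

# One row in customers corresponds to many rows in purchases:
# pulling C1's purchases follows the relationship.
books = [row[0] for row in db.execute(
    "SELECT book_id FROM purchases WHERE customer_id = 'C1' "
    "ORDER BY book_id")]
assert books == ["B1", "B2", "B3", "B4", "B5"]
```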

Data Consistency #
Besides, the relationships, relational databases also ensure saving data in a
normalized fashion. In very simple terms, normalized data means a unique
entity occurs in only one place/table, in its simplest and atomic form and is
not spread throughout the database.

This helps in maintaining the consistency of the data. In the future, if we want
to update the data, we update it in just that one place, and every fetch
operation gets the updated data.

Had the data been spread throughout the database in different tables, we
would have to update the new value of an entity everywhere. This is
troublesome, and things can get inconsistent.

ACID Transactions #
Besides normalization & consistency, relational databases also ensure ACID
transactions.

ACID – Atomicity, Consistency, Isolation, Durability.

An ACID transaction means that if a transaction occurs in a system, say a
financial transaction, either it will be executed to completion without
affecting any other processes or transactions, and the system will have a new
state after the transaction which is durable and consistent; or, if anything
amiss happens during the transaction, say a minor system failure, the entire
operation is rolled back.

When a transaction happens, there is an initial state of the system State A &
then there is a final state of the system State B after the transaction. Both the
states are consistent and durable.

A relational database ensures that the system is either in State A or in State B
at all times; there is no middle state. If anything fails, the system goes back
to State A.

If the transaction is executed smoothly, the system transitions from State A to
State B.
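The “State A or State B, never in between” guarantee can be sketched with SQLite transactions. The account names and amounts are made up; a minimal sketch, not a production pattern.

```python
import sqlite3

# State A: alice has 100, bob has 0.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE accounts (id TEXT PRIMARY KEY, balance INTEGER)")
db.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 0)")
db.commit()

def transfer(amount, fail_midway=False):
    # The connection context manager commits on success and rolls the
    # whole transaction back on any exception.
    with db:
        db.execute("UPDATE accounts SET balance = balance - ? "
                   "WHERE id = 'alice'", (amount,))
        if fail_midway:
            raise RuntimeError("simulated failure mid-transaction")
        db.execute("UPDATE accounts SET balance = balance + ? "
                   "WHERE id = 'bob'", (amount,))

def balance(account):
    return db.execute("SELECT balance FROM accounts WHERE id = ?",
                      (account,)).fetchone()[0]

try:
    transfer(30, fail_midway=True)   # the debit is rolled back entirely
except RuntimeError:
    pass
assert balance("alice") == 100       # still State A, no middle state

transfer(30)                         # smooth run: State A -> State B
assert balance("alice") == 70 and balance("bob") == 30
```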
What Is A Monolithic Architecture?
In this lesson, we will discuss the Monolithic Architecture.

We'll cover the following

• What Is A Monolithic Architecture?

What Is A Monolithic Architecture? #

An application has a monolithic architecture if it contains the entire
application code in a single codebase.

A monolithic application is a self-contained, single-tiered software application,
unlike the microservices architecture, where different modules are responsible for
running respective tasks and features of an app.

The diagram below represents a monolithic architecture:


In a monolithic web app, all the different layers of the app (UI, business, data
access, etc.) are in the same codebase.

We have the Controller, then the Service layer interface and the class
implementations of that interface. The business logic goes in the Object Domain
model, and a bit in the Service, Business and Repository/DAO [Data Access Object]
classes.

Monolithic apps are simple to build, test & deploy in comparison to a microservices
architecture.

There are times during the initial stages of the business when teams choose to
move forward with a monolithic architecture and intend to branch out into the
distributed, microservices architecture later.

Well, this decision has several trade-offs. And there is no standard solution to this.

In the present computing landscape, the applications are being built & deployed on
the cloud. A wise decision would be to pick the loosely coupled stateless
microservices architecture right from the start if you expect things to grow at quite
a pace in the future.

Because re-writing stuff has its costs. Stripping down things in a tightly coupled
architecture and re-writing them demands a lot of resources and time.

On the flip side, if your requirements are simple why bother writing a
microservices architecture? Running different modules in conjunction with each
other isn’t a walk in the park.

Let’s go through some of the pros and cons of monolithic architecture.


When Should You Pick a Monolithic Architecture?
In this lesson, we will learn about the pros and cons of a Monolithic Architecture & when to
choose it for our project.

We'll cover the following

• Pros Of Monolithic Architecture


• Simplicity
• Cons Of Monolithic Architecture
• Continuous Deployment
• Regression Testing
• Single Points Of Failure
• Scalability Issues
• Cannot Leverage Heterogeneous Technologies
• Not Cloud-Ready, Hold State
• When Should You Pick A Monolithic Architecture?

Pros Of Monolithic Architecture #


Simplicity #
Monolithic applications are simple to develop, test, deploy, monitor and manage
since everything resides in one repository.

There is no complexity of handling different components, making them work in
conjunction with each other, monitoring several different components and so on.
Things are simple.

Cons Of Monolithic Architecture #


Continuous Deployment #
Continuous deployment is a pain in the case of monolithic applications, as even a
minor code change in a layer needs a re-deployment of the entire application.
Regression Testing #

The downside of this is that we need thorough regression testing of the entire
application after the deployment is done, as the layers are tightly coupled with
each other. A change in one layer impacts other layers significantly.

Single Points Of Failure #


Monolithic applications have a single point of failure. In case any of the layers has a
bug, it has the potential to take down the entire application.

Scalability Issues #
Flexibility and scalability are a challenge in monolith apps as a change in one layer
often needs a change and testing in all the layers. As the code size increases, things
might get a bit tricky to manage.

Cannot Leverage Heterogeneous Technologies #


Building complex applications with a monolithic architecture is tricky as using
heterogeneous technologies is difficult in a single codebase due to the
compatibility issues.

It’s tricky to use Java & NodeJS together in a single codebase, & when I say tricky, I
am being generous. I am not sure if it’s even possible to do that.

Not Cloud-Ready, Hold State #


Generally, monolithic applications are not cloud-ready, as they hold state in
static variables. To be cloud-native, and to work smoothly and consistently on
the cloud, an application has to be distributed and stateless.
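Why held state blocks horizontal scaling can be sketched in a few lines. The names below are invented for illustration; the “shared store” is a stand-in for something like Redis or a database.

```python
# Stateful: a module-level variable ties correctness to one process.
visit_count = 0

def handle_request_stateful():
    global visit_count
    visit_count += 1        # lost if this instance dies; wrong if cloned
    return visit_count

# Stateless: the same state lives in an external store, so any number of
# identical instances can serve requests interchangeably.
shared_store = {"visit_count": 0}   # stand-in for Redis / a database

def handle_request_stateless(store):
    store["visit_count"] += 1
    return store["visit_count"]

instance_a = handle_request_stateless(shared_store)
instance_b = handle_request_stateless(shared_store)  # a "different" instance
assert instance_b == 2    # both instances see one consistent count
```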

When Should You Pick A Monolithic Architecture? #


Monolithic applications fit best for use cases where the requirements are pretty
simple and the app is expected to handle a limited amount of traffic. One example
of this is an internal tax calculation app of an organization, or a similar open
public tool.

These are the use cases where the business is certain that there won’t be an
exponential growth in the user base and the traffic over time.
There are also instances where the dev teams decide to start with a monolithic
architecture and later scale out to a distributed microservices architecture.

This helps them deal with the complexity of the application step by step as and
when required. This is exactly what LinkedIn did.

In the next lesson, we will learn about the Microservice architecture.


When Should You Pick A Relational Database?

In this lesson, we will discuss when to choose a relational database for our project.

WE'LL COVER THE FOLLOWING

• Transactions & Data Consistency


• Large Community
• Storing Relationships
• Popular Relational Databases

If you are writing a stock trading, banking or finance-based app, or you need
to store a lot of relationships, for instance when writing a social network
like Facebook, then you should pick a relational database. Here is why:

Transactions & Data Consistency #


If you are writing software that has anything to do with money or numbers,
then transactions, ACID compliance and data consistency are super important
to you.

Relational DBs shine when it comes to transactions & data consistency. They
comply with the ACID rule, have been around for ages & are battle-tested.

Large Community #
Also, they have a larger community. Seasoned engineers on the tech are easily
available, you don’t have to go too far looking for them.

Storing Relationships #
If your data has a lot of relationships, like which friends of yours live in a
particular city, or which of your friends already ate at the restaurant you
plan to visit today, there is nothing better than a relational database for
storing this kind of data.

Relational databases are built to store relationships. They have been tried &
tested & are used by big guns in the industry like Facebook as the main user-
facing database.

Popular Relational Databases #


Some of the popular relational databases used in the industry are MySQL, an
open-source relational database written in C and C++ that has been around
since 1995; Microsoft SQL Server, a proprietary RDBMS written by Microsoft in
C and C++; and PostgreSQL, an open-source RDBMS written in C. Others include
MariaDB, Amazon Aurora, Google Cloud SQL, etc.

Well, that’s all on the relational databases. Moving on to non-relational
databases.
NoSQL Databases - Introduction

In this lesson, we will get an insight into NoSQL databases and how they are different from Relational databases.

WE'LL COVER THE FOLLOWING

• What Is A NoSQL Database?


• How Is A NoSQL Database Different From A Relational Database?
• Scalability
• Clustering

What Is A NoSQL Database? #


As the name implies, NoSQL databases have no SQL; they are more like
JSON-based databases built for Web 2.0.

They are built for high-frequency reads and writes, typically required in
social applications like Twitter, LIVE real-time sports apps, online massive
multiplayer games, etc.

How Is A NoSQL Database Different From A Relational Database? #
Now, one obvious question that would pop up in our minds is:

Why the need for NoSQL databases when relational databases were doing
fine, were battle-tested, well adopted by the industry & had no major
persistence issues?

Scalability #
Well, one big limitation of SQL-based relational databases is scalability.
Scaling relational databases is not trivial; they have to be sharded or
replicated to make them run smoothly on a cluster. In short, this requires
careful thought and human intervention.

On the contrary, NoSQL databases have the ability to add new server nodes on
the fly & continue the work, without any human intervention, just with a snap
of the fingers.

Today’s websites need fast reads and writes. There are millions, if not
billions, of users connected with each other on social networks.

A massive amount of data is generated every microsecond; we need an
infrastructure designed to manage this exponential growth.

Clustering #
NoSQL databases are designed to run intelligently on clusters. And when I say
intelligently, I mean with minimal human intervention.

Today, the server nodes even have self-healing capabilities. That’s pretty
smooth. The infrastructure is intelligent enough to self-recover from faults.

Though all this innovation does not mean old school relational databases
aren’t good enough & we don’t need them anymore.

Relational databases still work like a charm & are still in demand. They have a
specific use-case. We have already gone through this in When to pick a
relational database lesson. Remember? 😊

Also, NoSQL databases had to sacrifice Strong consistency, ACID transactions
and much more to scale horizontally over a cluster and across the data centres.

The data with NoSQL databases is more Eventually Consistent as opposed to
being Strongly Consistent.

So, this obviously means NoSQL databases aren’t the silver bullet. And it’s
completely alright, we don’t need silver bullets. We aren’t hunting
werewolves; we are up to a much harder task: connecting the world online.

I’ll talk about the underlying design of NoSQL databases in much detail and
why they have to sacrifice Strong consistency and Transactions in the
upcoming lessons.

For now, let’s focus on some of the features of NoSQL databases.


What Is A Microservice Architecture?
In this lesson, we will learn about the Microservice Architecture.

We'll cover the following

• What Is A Microservices Architecture?

What Is A Microservices Architecture? #

In a microservices architecture, different features/tasks are split into separate
respective modules/codebases which work in conjunction with each other, forming a
large service as a whole.

Remember the Single Responsibility & the Separation of Concerns principles? Both
the principles are applied in a microservices architecture.

This particular architecture facilitates easier & cleaner app maintenance, feature
development, testing & deployment in comparison to a monolithic architecture.

Imagine accommodating every feature in a single repository. How complex would
things be? It would be a maintenance nightmare.

Also, since the project is large, it is expected to be managed by several different
teams. When modules are separate, they can be assigned to respective teams with
minimum fuss, smoothening out the development process.

And did I bring up scalability? To scale, we need to split things up. We need to
scale out when we can’t scale up further. Microservices architecture is inherently
designed to scale.

The diagram below represents a microservices architecture:


Every service ideally has a separate database, and there are no single points of
failure or system bottlenecks.

Let’s go through some of the pros and cons of using a microservices architecture.
Features Of NoSQL Databases

In this lesson, we will discuss the features of NoSQL databases.

WE'LL COVER THE FOLLOWING

• Pros Of NoSQL Databases


• Gentle Learning Curve
• Schemaless
• Cons Of NoSQL Databases
• Inconsistency
• No Support For ACID Transactions
• Conclusion
• Popular NoSQL Databases

In the introduction we learned that the NoSQL databases are built to run on
clusters in a distributed environment, powering Web 2.0 websites.

Now, let’s go over some features of NoSQL databases.

Pros Of NoSQL Databases #


Besides the design part, NoSQL databases are also developer-friendly. What do
I mean by that?

Gentle Learning Curve #


First, the learning curve is gentler than that of relational databases. When
working with relational databases, a big chunk of our time goes into learning
how to design well-normalized tables, setting up relationships, trying to
minimize joins and so on.

Schemaless #
One needs to be pretty focused when designing the schema of a relational
database to avoid running into any issues in the future.

Think of relational databases as a strict headmaster. Everything has to be in
place, neat and tidy, and things need to be consistent. But NoSQL databases are
a bit chilled out and relaxed.

There are no strictly enforced schemas; work with the data as you want. You
can always change stuff and spread things around. Entities have no relationships.
Thus, things are flexible and you can do stuff your way.

Wonderful Right?

Not always!! This flexibility is good and bad at the same time. Being so
flexible, developer-friendly, having no joins and relationships etc. makes it
good.

Cons Of NoSQL Databases #


Inconsistency #
But it also introduces the risk of entities becoming inconsistent. Since
an entity is spread throughout the database, one has to update the new values
of the entity at all those places.

Failing to do so makes the entity inconsistent. This is not a problem with
relational databases since they keep the data normalized: an entity resides in
one place only.
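Both the schemaless flexibility and its inconsistency risk can be sketched with plain dicts standing in for documents in a document store. Collection and field names here are invented.

```python
# Documents in the same "collection" need not share a schema.
users = [
    {"id": 1, "name": "Alice", "city": "Paris"},
    {"id": 2, "name": "Bob", "hobbies": ["chess"]},   # different fields: fine
]

# Denormalized: the customer's name is duplicated inside the order document
# so it can be read in one fetch, without a join.
orders = [
    {"order_id": "O1", "customer_name": "Alice"},
]

# Updating the name in only one place leaves the entity inconsistent;
# the application itself must update every copy.
users[0]["name"] = "Alicia"
inconsistent = users[0]["name"] != orders[0]["customer_name"]
assert inconsistent
```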

No Support For ACID Transactions #


Also, NoSQL distributed databases don’t provide ACID transactions. The few
that claim to don’t support them globally; they are limited to a certain
entity hierarchy or a small region where they can lock down nodes to update
them.

Note: Transactions in distributed systems come with terms and conditions
applied.
Conclusion #
My first experience with a NoSQL datastore was with the Google Cloud
Datastore.

An upside I felt was that we don’t have to be a pro in database design to write
an application. Things were comparatively simpler, as there was no stress of
managing joins, relationships, n+1 query issues etc.

Just fetch the data with its key. You can also call it the id of the entity.
This is a constant-time O(1) operation, which makes NoSQL DBs really fast.

I have designed a lot of MySQL DB schemas in the past with complex
relationships. And I would say working with a NoSQL database is a lot easier
than working with relationships.

It’s alright if we need to make a few extra calls to the backend to fetch data
in separate calls; that doesn’t make much of a difference. We can always cache
the frequently accessed data to overcome that.
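Caching those frequently accessed, key-based fetches can be sketched with Python’s `functools.lru_cache`. The store and keys below are invented; a real app would call a database or a cache like Redis instead of a dict.

```python
from functools import lru_cache

CALLS = {"count": 0}   # counts trips to the pretend backend

# Stand-in for a key-value datastore: lookup by entity key.
FAKE_STORE = {"user:1": {"name": "Alice"}, "user:2": {"name": "Bob"}}

@lru_cache(maxsize=128)
def fetch_by_key(key):
    CALLS["count"] += 1                      # one more backend trip
    return tuple(FAKE_STORE[key].items())    # hashable snapshot of the entity

fetch_by_key("user:1")
fetch_by_key("user:1")    # served from the cache, no second backend trip
assert CALLS["count"] == 1
```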

Popular NoSQL Databases #


Some of the popular NoSQL databases used in the industry are MongoDB,
Redis, Neo4J, Cassandra.

So, I guess, by now we have a pretty good idea of what NoSQL databases are.
Let’s have a look at some of the use cases which fit best with them.
When Should You Pick A Microservices Architecture?
In this lesson, we will learn about the pros and cons of the Microservice Architecture & when
should we pick it for our project.

We'll cover the following

• Pros of Microservice Architecture


• No Single Points Of Failure
• Leverage the Heterogeneous Technologies
• Independent & Continuous Deployments
• Cons Of Microservices Architecture
• Complexities In Management
• No Strong Consistency
• When Should You Pick A Microservices Architecture?

Pros of Microservice Architecture #


No Single Points Of Failure #
Since microservices form a loosely coupled architecture, there is no single point
of failure. Even if a few of the services go down, the application as a whole is
still up.

Leverage the Heterogeneous Technologies #


Every component interacts with each other via a REST API Gateway interface. The
components can leverage the polyglot persistence architecture & other
heterogeneous technologies together like Java, Python, Ruby, NodeJS etc.

Polyglot persistence means using multiple database types, like SQL and NoSQL,
together in an architecture. I’ll discuss it in detail in the database lesson.

Independent & Continuous Deployments #


The deployments can be independent and continuous. We can have dedicated
teams for every microservice, and each service can be scaled independently
without impacting the others.

Cons Of Microservices Architecture #


Complexities In Management #
Microservices is a distributed environment, where there are so many nodes
running together. Managing & monitoring them gets complex.

We need to set up additional components to manage microservices, such as a node
manager like Apache ZooKeeper, a distributed tracing service for monitoring the
nodes, etc.

We need more skilled resources, maybe a dedicated team, to manage these
services.

No Strong Consistency #
Strong consistency is hard to guarantee in a distributed environment. Things are
Eventually consistent across the nodes. And this limitation is due to the distributed
design.

I’ll discuss both Strong and eventual consistency in the database chapter.

When Should You Pick A Microservices Architecture? #
The microservice architecture fits best for complex use cases and for apps which
expect traffic to increase exponentially in future like a fancy social network
application.

A typical social networking application has various components such as
messaging, real-time chat, LIVE video streaming, image uploads, the Like and
Share features, etc.

In this scenario, I would suggest developing each component separately, keeping
the Single Responsibility and the Separation of Concerns principles in mind.

Writing every feature in a single codebase would become a mess in no time.

So, by now, in the context of monolithic and microservices, we have gone through
three approaches:

1. Picking a monolithic architecture
2. Picking a microservice architecture
3. Starting with a monolithic architecture and then later scaling out into a
microservice architecture

Picking a monolithic or a microservice architecture largely depends on our use
case.

I’ll suggest: keep things simple and have a thorough understanding of the
requirements. Get the lay of the land, build something only when you need it,
and keep evolving the code iteratively. This is the right way to go.
Monolith & Microservices – Understanding The Trade-
Offs – Part 1
This lesson discusses the trade-offs involved when choosing between the monolith and the
microservices architecture

We'll cover the following

• Fault Isolation
• Development Team Autonomy
• Segment – From Monolith To Microservices And Back Again To Monolith

So, in this chapter, by now, we have understood what a monolith is, what a
microservice is, their pros and cons, and when to pick which. In this lesson, we
will continue our discussion a bit further on the trade-offs that are involved
when choosing between the monolith and the microservices architecture to design
our application.

Fault Isolation #
When we have a microservices architecture in place it’s easy for us to isolate faults
and debug them. When a glitch occurs in a particular service, we just have to fix
the issue in that service without the need to scan the entire codebase in order to
locate and fix that issue. This is also known as fault isolation.

Even if the service goes down due to the fault, the other services are up & running.
This has a minimal impact on the system.

Development Team Autonomy #


In the case of a monolith architecture, if the number of developers and teams
working on a single codebase grows beyond a certain point, it may impede the
productivity and the velocity of the teams.

In this scenario, things become a little tricky to manage. First off, as the size
of the codebase increases, the compile time and the time required to run the tests
increase too, since in a monolith architecture the entire codebase has to be
compiled, as opposed to just compiling the module we work on.
A code change made in the codebase by any other team has a direct impact on the
features we develop. It may even break the functionality of our feature. Due to
this, thorough regression testing is required every time anyone pushes new code or
an update to production.

Also, while the code is being pushed to production, we need all the teams to stop
working on the codebase until the change is live.

The code pushed by a certain team may also require approval from other teams in
the organization working on the same codebase. This process is a bottleneck in the
system.

On the contrary, in the case of microservices, separate teams have complete
ownership of their codebases. They have complete development and deployment
autonomy over their modules, with separate deployment pipelines. Code management
becomes easier, and it becomes easier to scale individual services based on their
traffic load patterns.

So, if you need to move fast, launch a lot of features quickly to the market and
scale, moving forward with the microservices architecture is a good bet.

Having a microservices architecture sounds delightful, but we cannot ignore the
increase in the complexity of the architecture that comes with it. Adopting
microservices has its costs.

With the microservices architecture comes the need to set up distributed
logging, monitoring, inter-service communication, service discovery, alerts, tracing,
build & release pipelines, health checks & so on. You may even have to write a lot of
custom tooling from scratch for yourself.

So, I think you get the idea. There are always trade-offs involved; there is no
perfect or best solution. We need to be crystal clear on our use case and see
what architecture suits our needs best.

Let’s understand this further with the help of a real-world example of a company
called Segment that started with a monolith architecture, moved to microservices
and then moved back again to the monolith architecture.
Segment – From Monolith To Microservices And Back Again To Monolith #
Segment is a customer data platform that originally started with a monolith and
then later split it into microservices. As their business gained traction, they again
decided to revert to the monolith architecture.

Why did they do that?

Let’s have an insight…

Segment engineering team split their monolith into microservices for fault
isolation & easy debugging of issues in the system.

Fault isolation with microservices helped them minimize the damage a fault
caused in the system. It was confined to a certain service as opposed to impacting,
even bringing down the entire system as a whole.

The original monolith architecture had low management overhead but had a
single point of failure. A glitch in a certain functionality had the potential to impact
the entire system.

Segment integrates data from many different data providers into their system. As
the business gained traction, they integrated more data providers into their system
creating a separate microservice for every data provider. The increase in the
number of microservices led to an increase in the complexity of their architecture
significantly, subsequently taking a toll on their productivity.

The defects with regard to microservices started increasing significantly. They
had three engineers dedicated just to getting rid of these defects to keep the
system online. This operational overhead became resource-intensive and slowed
down the organization immensely.

To tackle the issue, they made the decision to move back to monolith giving up on
fault isolation and other nice things that the microservices architecture brought
along.

They ended up with an architecture having a single code repository that they
called Centrifuge that handled billions of messages per day delivered to multiple
APIs.

Let’s have a further insight into their architecture in the lesson up next.
When To Pick A NoSQL Database?

In this lesson, we will discover when to choose a NoSQL Database over any other kind of database.

WE'LL COVER THE FOLLOWING

• Handling A Large Number Of Read Write Operations


• Flexibility With Data Modeling
• Eventual Consistency Over Strong Consistency
• Running Data Analytics

Handling A Large Number Of Read Write Operations #
Look towards NoSQL databases when you need to scale fast. And when do you
generally need to scale fast?

When there are a large number of read-write operations on your website, and
when dealing with a large amount of data, NoSQL databases fit best in these
scenarios. Since they have the ability to add nodes on the fly, they can handle
more concurrent traffic and a large amount of data with minimal latency.

Flexibility With Data Modeling #


The second cue is during the initial phases of development, when you are not
sure about the data model or the database design, and things are expected to
change at a rapid pace. NoSQL databases offer us more flexibility.

Eventual Consistency Over Strong Consistency #


It’s preferable to pick NoSQL databases when it’s OK for us to give up on
Strong consistency and when we do not require transactions.

A good example of this is a social networking website like Twitter, when a
tweet of a celebrity blows up and everyone around the world is liking and
re-tweeting it. Does it matter if the count of likes goes up or down a bit for
a short while?

The celebrity would definitely not care if, instead of the actual 5,000,500
likes, the system shows the like count as 5,000,250 for a short while.

When a large application is deployed on hundreds of servers spread across


the globe, the geographically distributed nodes take some time to reach a
global consensus.

Until they reach a consensus, the value of the entity is inconsistent. The value
of the entity eventually gets consistent after a short while. This is what
Eventual Consistency is.

This inconsistency does not mean that there is any sort of data loss. It
just means that the data takes a short while to travel across the globe, via
the internet cables under the ocean, to reach a global consensus and become
consistent.

We experience this behaviour all the time. Especially on YouTube. Often you
would see a video with 10 views and 15 likes. How is this even possible?

It’s not. The actual views are already more than the likes. It’s just that the
count of views is inconsistent and takes a short while to get updated. I will
discuss Eventual Consistency in more detail further down the course.

Running Data Analytics #


NoSQL databases also fit best for data analytics use cases, where we have to
deal with an influx of massive amounts of data.

There are dedicated databases for use cases like this such as Time-Series
databases, Wide-Column, Document Oriented etc. I’ll talk about each of them
further down the course.

Right now, let’s have an insight into the performance comparison of SQL and
NoSQL tech.
Is NoSQL More Performant than SQL?

In this lesson, we will learn if the NoSQL database is more performant than the SQL databases.

WE'LL COVER THE FOLLOWING

• Why Do Popular Tech Stacks Always Pick NoSQL Databases?


• Real World Case Studies
• Using Both SQL & NoSQL Database In An Application

Is NoSQL more performant than SQL? This question is asked all the time. And I
have a one-word answer for this.

No!!

From a technology benchmarking standpoint, both relational and non-relational
databases are equally performant.

More than the technology, it’s how we design our systems using the
technology that affects the performance.

Both SQL & NoSQL tech have their use cases. We have already gone through
them in the lessons When to pick a relational database? & When to pick a
NoSQL database?

So, don’t get confused with all the hype. Understand your use case and then
pick the technology accordingly.

Why Do Popular Tech Stacks Always Pick NoSQL Databases? #
But why do the popular tech stacks always pick NoSQL databases? For instance,
the MEAN (MongoDB, ExpressJS, AngularJS, NodeJS) stack, or its MERN variant
with ReactJS.

Well, most of the applications online have common use cases. And these tech
stacks have them covered. There are also commercial reasons behind this.

Now, there are a plethora of tutorials available online & a mass promotion of
popular tech stacks. With these resources, it’s easy for beginners to pick them
up and write their applications as opposed to running solo research on other
technologies.

Though, we don’t always need to stick with the popular stacks. We should pick
what fits best with our use case. There are no ground rules, pick what works
for you.

We have a separate lesson on how to pick the right tech stack for our app
further down the course. We will continue this discussion there.

Coming back to the performance, it entirely depends on the application & the
database design. If we use more joins with SQL, the response will inevitably
take more time.

If we remove all the relationships and joins, SQL becomes just like NoSQL.
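To make the joins point concrete, here is a toy sketch using Python’s built-in sqlite3; the tables and names are invented for illustration. It contrasts a normalized read, which needs a join, with a denormalized read of the same data, which is essentially how a document store would hold it:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Normalized schema: author data lives in exactly one place.
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE books (id INTEGER PRIMARY KEY, title TEXT,
                        author_id INTEGER REFERENCES authors(id));
    INSERT INTO authors VALUES (1, 'Jane');
    INSERT INTO books VALUES (10, 'Scaling 101', 1);

    -- The same data denormalized into one wide table, roughly the
    -- way a NoSQL document would hold it.
    CREATE TABLE books_denormalized (id INTEGER, title TEXT, author_name TEXT);
    INSERT INTO books_denormalized VALUES (10, 'Scaling 101', 'Jane');
""")

# Normalized read: needs a join across two tables.
joined = conn.execute("""
    SELECT b.title, a.name
    FROM books b JOIN authors a ON a.id = b.author_id
""").fetchone()

# Denormalized read: a single-table lookup, no join at all.
flat = conn.execute(
    "SELECT title, author_name FROM books_denormalized").fetchone()

assert joined == flat == ('Scaling 101', 'Jane')
```

The denormalized copy reads faster but duplicates the author name; updating it now means touching every copy, which is exactly the consistency trade-off discussed above.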

Real World Case Studies #


Facebook uses MySQL for storing its social graph of millions of users. Though
it did have to change the DB engine and make some tweaks, MySQL fits best for
its use case.

Quora uses MySQL pretty efficiently by partitioning the data at the application
level. This is an interesting read on it.

Note: A well-designed SQL data store will always be more performant
than a not-so-well-designed NoSQL store.

Hmmm…. Ok!! Alright

Using Both SQL & NoSQL Database In An Application #
Can’t I use both in my application? Both SQL & a NoSQL datastore. What if I
have a requirement fitting both?
You can!! As a matter of fact, all the large-scale online services use a mix of
both to implement their systems and achieve the desired behaviour.

Leveraging the power of multiple databases in a single application is called
Polyglot Persistence. Let’s learn more about it in the next lesson.
Monolith & Microservices – Understanding The Trade-
Offs – Part 2
This lesson continues the discussion on the trade-offs involved when choosing between the
monolith and the microservices architecture

We'll cover the following

• Segment High-Level Architecture


• Istio – The Move From Microservices To A Monolith

Segment High-Level Architecture #


Segment’s data infrastructure ingests hundreds of thousands of events per second.
These events are then directed to different APIs & webhooks via a message queue.
These APIs are also called server-side destinations & there are over a hundred
of these destinations at Segment.

They started with a monolith architecture: an API ingested events from
different sources, and those events were then forwarded to a distributed
message queue. Based on configuration and settings, the queue moved the event
payload further to different destination APIs.
If you aren’t aware of what a message queue, webhook or data ingestion is, no
worries; I’ve discussed these in detail in the latter part of this course. The
example I am discussing right now is pretty straightforward, nothing
complicated, so we can focus on it now & delve into the rest of the concepts
in detail later.

In the monolithic architecture, as all the events were moved into a single queue,
some of the events often failed to deliver to the destinations and were retried by
the queue after stipulated time intervals.

This made the queue contain both the new as well as the failed events waiting to
be retried. This eventually flooded the queue resulting in a delay of the delivery of
events to the destinations.
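As a rough illustration of this queue flooding (a toy simulation, not Segment’s actual system; the destination names and timings are invented), a shared queue mixes retried failures with new events, delaying the healthy ones, while a per-destination queue does not:

```python
from collections import deque

def drain(queue, down_until, max_ticks=1000):
    """Pop one event per tick. Events whose destination is still down
    are re-enqueued at the back (a retry). Returns delivery ticks."""
    delivered_at = {}
    tick = 0
    while queue and tick < max_ticks:
        dest, event = queue.popleft()
        if tick < down_until.get(dest, 0):
            queue.append((dest, event))        # delivery failed: retry later
        else:
            delivered_at[event] = tick
        tick += 1
    return delivered_at

# Shared queue: five events for a destination that is down until tick 20,
# with one healthy event stuck behind them.
flooded = deque([("slow-api", f"s{i}") for i in range(5)])
flooded.append(("fast-api", "new-event"))
shared_times = drain(flooded, down_until={"slow-api": 20})

# Per-destination queue: the healthy event has a queue to itself.
isolated_times = drain(deque([("fast-api", "new-event")]),
                       down_until={"slow-api": 20})

# The retries of the failing destination delayed the healthy event.
assert shared_times["new-event"] > isolated_times["new-event"]
```

With these parameters the healthy event waits five ticks in the shared queue but none in its own queue, which is the fault isolation argument for splitting the queues.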

To tackle the queue flooding issue, the engineering team at Segment split the
monolith into microservices and created a separate microservice for every
destination.

Every service contained its own individual distributed message queue. This
helped cut down the load on a single queue & enabled the system to scale, also
increasing the throughput.
In this scenario, even if a certain queue got flooded it didn’t impact the event
delivery of other services. This is how Segment leveraged fault isolation with the
microservices architecture.

Over time as the business gained traction additional destinations were added.
Every destination had a separate microservice and a queue. The increase in the
number of services led to an increase in the complexity of the architecture.

Separate services had separate event throughput & traffic load patterns. A
single scaling policy couldn’t be applied to all the queues in common. Every
service and queue needed to be scaled differently based on its traffic load
pattern, and this process had to be done manually.

Auto-scaling was implemented in the infrastructure, but every service had
distinct CPU & memory requirements, which needed manual tuning of the
infrastructure. This meant that more queues needed more resources for
maintenance.

To tackle this, Segment eventually reverted to a monolith architecture, which
they called Centrifuge, combining all the individual queues for different
destinations into a single monolith service.

The info that I have provided on Segment’s architecture in this lesson is very
high-level. If you wish to go into more detail and have a look at the
Centrifuge architecture, do go through these resources -

Goodbye Microservices: From 100s Of Problem Children To 1 Superstar

Centrifuge: A Reliable System For Delivering Billions Of Events Per Day

Below is another instance of a popular service that transitioned from
microservices to a monolith architecture.

Istio – The Move From Microservices To A Monolith #
Istio is an open-source service mesh that enables us to connect, secure, control and
observe microservices. It enables us to have control over how microservices share
data with each other.
It recently transitioned from a microservices to a monolith architecture.
According to the Istio team, having a monolith architecture enabled them to
deliver value and achieve the goals they intended to.

Recommended Read – Istio Is An Example Of When Not To Do Microservices.

Recommended Watch - Microservices - Are They Still Worth It?


Database Quiz - Part 1

This lesson contains a quiz to test your understanding of databases.

Let’s Test Your Understanding Of Databases

1
What is the use of a database in web applications?

Introduction To Micro Frontends
In this lesson, we will understand what are micro frontends

We'll cover the following

• What Are Micro Frontends?


• Micro Frontends E-Commerce Application Example

What Are Micro Frontends? #


Micro frontends are separate loosely coupled components of the user interface of
an application that are developed applying the principles of microservices on the
front end.

Writing micro frontends is more of an architectural design decision & a
development approach, as opposed to being a technology.

What does applying the principles of microservices to the front end mean?

Microservices provide complete autonomy to the teams developing them. They are
loosely coupled, provide fault isolation & also offer individual teams the
freedom to pick the desired technology stack to develop a certain service.

Micro frontends offer the same upsides to the front-end development.

Generally, in application development, despite having a microservices
architecture on the backend, our front end is a monolith that is developed by
a dedicated front-end development team.
But with the micro frontends approach, we split our application into vertical
slices, where a single slice goes end to end, right from the user interface to
the database.

Every slice is owned by a dedicated team. Besides the backend devs, the team
also includes front-end developers who are responsible for developing the user
interface component of that particular service alone.

Every team builds its own user interface component choosing their desired
technology and later all these components are integrated together forming the
complete user interface of the application. This micro frontend approach averts
the need for a dedicated centralized user interface team.
Let’s understand this further with the help of an example.

Micro Frontends E-Commerce Application Example #
I’ve taken the example of an e-commerce application because the micro frontends
approach is pretty popular with e-commerce websites.

Alright, let’s imagine an online game store that home-delivers the DVDs of all
kinds of video games for both desktops and consoles such as Xbox, Nintendo
Switch & PlayStation, along with the related hardware.

Our online gaming store will have several different UI components. A few key
components out of those are –

The Search Component – This is a search bar on the top centre of the website that
enables the users to search games based on the keywords they enter.

Once the user runs a search the component then enables the user to filter their
search results based on several options such as the price range, type of console,
game genre and so on.
The Game Category Component – This component displays the popular and
widely searched games for different categories on the home page of the website.

Add To Cart & Checkout Component – This user interface component enables the
users to add the games of their liking to the cart and proceed with the checkout
filling in their address & other required information to make the final payment.

During the checkout, the website may recommend related games to the user as
upsells. Also, a user can apply coupons & gift cards if they have any.

The Payment Component – The payment UI component offers different payment
options to the user & facilitates the order payment once the user enters their
card details on the page.

Every UI component has a dedicated microservice running on the backend,
powering that particular user interface component. And all these different
components are developed and managed by dedicated full-stack teams.

The complete user interface of the application is rendered combining all these
different individual UI components, also called Micro Frontends.
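One common way such independently owned fragments get integrated is server-side composition. Below is a deliberately simplified Python sketch; the component names are invented, and real setups typically use edge-side includes, iframes or JavaScript-based composition instead:

```python
# Each "team" ships a function that renders its own UI fragment.
def search_component():
    return "<div id='search'><input placeholder='Search games...'></div>"

def category_component():
    return "<div id='categories'>Popular: RPG, Racing, Sports</div>"

def cart_component():
    return "<div id='cart'>Cart (0 items)</div>"

def render_page(*fragments):
    """Stitch independently developed fragments into one page."""
    body = "\n".join(render() for render in fragments)
    return f"<html><body>\n{body}\n</body></html>"

page = render_page(search_component, category_component, cart_component)
assert "id='search'" in page and "id='cart'" in page
```

The point is only the shape of the integration: each fragment is built and owned separately, and the final page is the composition of all of them.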

Let’s continue this discussion in the next lesson.


Polyglot Persistence

In this lesson, we will understand what is meant by Polyglot Persistence.

WE'LL COVER THE FOLLOWING

• What Is Polyglot Persistence?


• Real World Use Case
• Relational Database
• Key Value Store
• Wide Column Database
• ACID Transactions & Strong Consistency
• Graph Database
• Document Oriented Store
• Downside Of This Approach

What Is Polyglot Persistence? #

Polyglot persistence means using several different persistence
technologies to fulfil different persistence requirements in an
application.

We will understand this concept with the help of an example.

Real World Use Case #


Let’s say we are writing a social network like Facebook.

Relational Database #
To store relationships like persisting friends of a user, friends of friends, what
rock band they like, what food preferences they have in common etc. we
would pick a relational database like MySQL.

Key Value Store #


For low latency access of all the frequently accessed data, we will implement a
cache using a Key-value store like Redis or Memcache.

We can use the same Key-value data store to store user sessions.

Now our app is already a big hit, it has got pretty popular and we have
millions of active users.

Wide Column Database #


To understand user behaviour, we need to set up an analytics system to run
analytics on the data generated by the users. We can do this using a wide-
column database like Cassandra or HBase.

ACID Transactions & Strong Consistency #


The popularity of our application just doesn’t seem to stop, it’s soaring. Now
businesses want to run ads on our portal. For this, we need to set up a
payments system.

Again, we would pick a relational database to implement ACID transactions &
ensure Strong consistency.

Graph Database #
Now to enhance the user experience of our application we have to start
recommending content to the users to keep them engaged. A Graph database
would fit best to implement a recommendation system.

Alright, by now, our application has multiple features & everyone loves it.
How cool would it be if a user could run a search for other users, business
pages and other content on our portal & connect with them?

Document Oriented Store #


To implement this, we can use an open-source document-oriented datastore
like ElasticSearch. The product is pretty popular in the industry for
implementing a scalable search feature on websites. We can persist all the
search-related data in the elastic store.
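A minimal sketch of polyglot persistence, using stdlib stand-ins: sqlite3 in place of MySQL for the relational data, and a plain dict in place of Redis for the key-value cache. All table and key names are invented for the example:

```python
import sqlite3

# Relational store (stand-in for MySQL): relationships need joins & consistency.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE friendships (user_id INTEGER, friend_id INTEGER);
    INSERT INTO users VALUES (1, 'Asha'), (2, 'Ben');
    INSERT INTO friendships VALUES (1, 2);
""")

# Key-value store (stand-in for Redis): sessions & hot data, no joins needed.
kv_cache = {"session:1": {"user_id": 1, "logged_in": True}}

def friends_of(user_id):
    cache_key = f"friends:{user_id}"
    if cache_key in kv_cache:              # fast path: key-value cache hit
        return kv_cache[cache_key]
    rows = db.execute("""
        SELECT u.name FROM friendships f
        JOIN users u ON u.id = f.friend_id
        WHERE f.user_id = ?
    """, (user_id,)).fetchall()
    names = [name for (name,) in rows]
    kv_cache[cache_key] = names            # populate the cache for next time
    return names

assert friends_of(1) == ["Ben"]   # first call hits the relational store
assert friends_of(1) == ["Ben"]   # second call is served from the cache
```

Each store does what it is good at: the relational store answers the join, the key-value store serves the hot reads.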

Downside Of This Approach #


So, this is how we use multiple databases to fulfil different persistence
requirements. One downside of this approach, though, is the increased
complexity in making all these different technologies work together.

A lot of effort goes into building, managing and monitoring polyglot
persistence systems. What if there was something simpler that would save us
the pain of putting everything together ourselves? Well, there is.

What?

Let’s find out in the next lesson.


Multi-Model Databases

In this lesson, we will talk about the multi-model databases.

WE'LL COVER THE FOLLOWING

• What Are Multi-Model Databases?


• Popular Multi-Model Databases

What Are Multi-Model Databases? #


Until now, databases supported only one data model; a database could be
relational, graph-based or anything else with a certain specific data model.

But with the advent of multi-model databases, we have the ability to use
different data models in a single database system.

Multi-model databases support multiple data models like Graph, Document-


Oriented, Relational etc. as opposed to supporting only one data model.

They also avert the need for managing multiple persistence technologies in a
single service. They reduce the operational complexity by notches. With multi-
model databases, we can leverage different data models via a single API.
Popular Multi-Model Databases #
Some of the popular multi-model databases are ArangoDB, Cosmos DB, OrientDB,
Couchbase etc.

So, by now we are clear on what NoSQL databases are & when to pick them
and stuff. Now let’s understand concepts like Eventual Consistency, Strong
Consistency which are key to understanding distributed systems.
Eventual Consistency

In this lesson, we will discuss Eventual Consistency.

WE'LL COVER THE FOLLOWING

• What Is Eventual Consistency?


• Real World Use Case

What Is Eventual Consistency? #


Eventual consistency is a consistency model which enables the data store to be
highly available. It is also known as optimistic replication & is key to
distributed systems.

So, how exactly does it work?

We’ll understand this with the help of a use case.

Real World Use Case #


Think of a popular microblogging site deployed across the world in different
geographical regions like Asia, America, Europe. Moreover, each geographical
region has multiple data centre zones: North, East, West, South. Furthermore,
each of the zones has multiple clusters which have multiple server nodes
running.

So, we have many datastore nodes spread across the world which the micro-
blogging site uses for persisting data.

Since there are so many nodes running, there is no single point of failure. The
data store service is highly available. Even if a few nodes go down the
persistence service as a whole is still up.

Alright, now let’s say a celebrity makes a post on the website which everybody
starts liking around the world.

At a point in time, a user in Japan likes the post which increases the “Like”
count of the post from say 100 to 101. At the same point in time, a user in
America, in a different geographical zone clicks on the post and he sees the
“Like” count as 100, not 101.

Why did this happen?

Simply, because the new updated value of the Post “Like” counter needs some
time to move from Japan to America and update the server nodes running
there.

Though the value of the counter at that point in time was 101, the user in
America sees the old inconsistent value.

But when he refreshes his web page after a few seconds the “Like” counter
value shows as 101. So, the data was initially inconsistent but eventually got
consistent across the server nodes deployed around the world. This is what
eventual consistency is.

Let’s take it one step further, what if at the same point in time both the users
in Japan and America Like the post, and a user in another geographic zone say
Europe accesses the post.

All the nodes in different geographic zones have different post values. And
they will take some time to reach a consensus.
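One way to model how such geographically distributed counters converge is a grow-only counter, a simple CRDT. This is an illustrative sketch only, not necessarily how any particular site implements its like counts:

```python
class GCounter:
    """Grow-only counter: each replica increments only its own slot;
    a merge takes the element-wise max, so replicas converge to the
    same total no matter the order in which merges arrive."""
    def __init__(self, replica_id, replicas):
        self.replica_id = replica_id
        self.counts = {r: 0 for r in replicas}

    def increment(self):
        self.counts[self.replica_id] += 1

    def merge(self, other):
        for replica, count in other.counts.items():
            self.counts[replica] = max(self.counts[replica], count)

    def value(self):
        return sum(self.counts.values())

regions = ["asia", "america", "europe"]
asia, america, europe = (GCounter(r, regions) for r in regions)

asia.increment()      # a user in Japan likes the post
america.increment()   # a user in America likes it at the same moment

# Before replication catches up, each region sees a different value.
assert asia.value() == 1 and europe.value() == 0

# Once the replicas exchange state, they all agree on the total.
for node in (asia, america, europe):
    node.merge(asia)
    node.merge(america)
    node.merge(europe)
assert asia.value() == america.value() == europe.value() == 2
```

Notice that every replica accepts writes at all times, and no write is lost; the values are merely stale until the merges propagate, which is exactly the eventual consistency behaviour described above.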

The upside of eventual consistency is that the system can add new nodes on
the fly without the need to block any of them, the nodes are available to the
end-users to make an update at all times.

Millions of users across the world can update the values at the same time
without having to wait for the system to reach a common final value across all
nodes before they make an update. This feature enables the system to be
highly available.

Eventual consistency is suitable for use cases where the accuracy of values
doesn’t matter much like in the above-discussed use case.

Other use cases of eventual consistency are keeping the count of users
watching a live video stream online, or dealing with massive amounts of
analytics data, where a couple of counts up or down won’t matter much.

But there are use cases where the data has to be laser-accurate, like in
banking & stock markets. We just cannot have our systems be Eventually
Consistent; we need Strong Consistency.

Let’s discuss it in the next lesson.


Introduction & Types of Data
In this lesson, we will have an introduction to databases and the different types of data.

We'll cover the following

• What Is A Database?
• Structured Data
• Unstructured Data
• Semi-structured Data
• User state

What Is A Database? #
A database is a component required to persist data. Data can be of many forms:
structured, unstructured, semi-structured and user state data.

Let’s quickly have an insight into the classification of data before delving into the
databases.
Structured Data #

Structured data is the type of data which conforms to a certain structure, typically
stored in a database in a normalized fashion.

There is no need to run any sort of data preparation logic on structured data
before processing it. Direct interaction can be done with this kind of data.

An example of structured data would be the personal details of a customer stored


in a database row. The customer id would be of integer type, the name would be of
String type with a certain character limit etc.

So, with structured data, we know what we are dealing with. Since the customer
name is of String type, without much worry of errors or exceptions, we can run
String operations on it.

Structured data is generally managed with a query language such as SQL
(Structured Query Language).

Unstructured Data #
Unstructured data has no definite structure. It is generally the heterogeneous
type of data comprising text, image files, video, multimedia files, PDFs, Blob
objects, Word documents, machine-generated data etc.

This kind of data is often encountered in data analytics. Here the data streams in
from multiple sources such as IoT devices, social networks, web portals, industry
sensors etc. into the analytics systems.

We cannot just directly process unstructured data. The initial data is pretty
raw; we have to make it flow through a data preparation stage that segregates
it based on some business logic & then run the analytics algorithms on it.

Semi-structured Data #
Semi-structured data is a mix of structured & unstructured data. Semi-structured
data is often stored in data transport formats such as XML or JSON and is handled
as per the business requirements.

User state #
The data containing the user state is the information of activity which the user
performs on the website.

For instance, when browsing through an e-commerce website, the user would
browse through several product categories, change the preferences, add a few
products to the reminder list for the price drops.

All this activity is the user state. So the next time the user logs in, they
can continue from where they left off. It will not feel like they are starting
fresh & that all the previous activity is lost.

Storing user state improves the browsing experience & the conversion rate for the
business.
So, now we are clear on the different types of data. Let’s have a look into different
types of databases.

There are multiple different types of databases with specific use cases. We’ll
quickly go through each of them in order to have a bird’s eye view of the database
realm.
Relational Database
In this lesson, we will discuss the relational databases.

We'll cover the following

• What Is A Relational Database?


• What Are Relationships?
• Data Consistency
• ACID Transactions

What Is A Relational Database? #


This is the most common & widely used type of database in the industry. A
relational database saves data containing relationships. One to One, One to Many,
Many to Many, Many to One etc. It has a relational data model. SQL is the primary
data query language used to interact with relational databases.

MySQL is the most popular example of a relational database. Alright!! I get it but
what are relationships?

What Are Relationships? #


Let’s say you, as a customer, buy five different books from an online book
store. When you created an account on the book store, you would have been
assigned a customer id, say C1. Now C1 [you] is linked to five different
books: B1, B2, B3, B4, B5.

This is a one to many relationship. In the simplest of forms, one table will contain
the details of all the customers & another table will contain all the products in the
inventory.

One row in the customer table will correspond to multiple rows in the product
inventory table.

On pulling the user object with id C1 from the database we can easily find what
books C1 purchased via the relationship model.
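The one-to-many example above can be sketched with Python’s built-in sqlite3 standing in for MySQL; the schema and ids are invented for illustration:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE customers (id TEXT PRIMARY KEY, name TEXT);
    CREATE TABLE books (id TEXT PRIMARY KEY, title TEXT,
                        customer_id TEXT REFERENCES customers(id));
    INSERT INTO customers VALUES ('C1', 'You');
    INSERT INTO books VALUES
        ('B1', 'Book One',   'C1'),
        ('B2', 'Book Two',   'C1'),
        ('B3', 'Book Three', 'C1'),
        ('B4', 'Book Four',  'C1'),
        ('B5', 'Book Five',  'C1');
""")

# One row in customers corresponds to many rows in books.
purchases = db.execute(
    "SELECT id FROM books WHERE customer_id = ? ORDER BY id", ("C1",)
).fetchall()
assert [book_id for (book_id,) in purchases] == ["B1", "B2", "B3", "B4", "B5"]
```

The `customer_id` column in `books` is the foreign key that encodes the one-to-many relationship.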

Data Consistency #
Besides, the relationships, relational databases also ensure saving data in a
normalized fashion. In very simple terms, normalized data means a unique entity
occurs in only one place/table, in its simplest and atomic form and is not spread
throughout the database.

This helps in maintaining the consistency of the data. In future, if we want to


update the data, we just update at that one place and every fetch operation gets the
updated data.

Had the data been spread throughout the database in different tables. We would
have to update the new value of an entity everywhere. This is troublesome and
things can get inconsistent.

ACID Transactions #
Besides normalization & consistency, relational databases also ensure ACID
transactions.

ACID – Atomicity, Consistency, Isolation, Durability.

An ACID transaction means that if a transaction occurs in a system, say a
financial transaction, either it will be executed to perfection without
affecting any other processes or transactions, leaving the system in a new
state that is durable & consistent; or, if anything goes amiss during the
transaction, say a minor system failure, the entire operation is rolled back.

When a transaction happens, there is an initial state of the system State A & then
there is a final state of the system State B after the transaction. Both the states are
consistent and durable.

A relational database ensures that either the system is in State A or State B at all
times. There is no middle state. If anything fails, the system goes back to State A.

If the transaction is executed smoothly the system transitions from State A to State
B.
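This all-or-nothing behaviour can be demonstrated with Python’s built-in sqlite3, which supports ACID transactions. The account names and the simulated failure are invented for the example:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
db.executemany("INSERT INTO accounts VALUES (?, ?)",
               [("alice", 100), ("bob", 0)])
db.commit()   # State A: alice has 100, bob has 0

try:
    with db:  # the `with` block is one transaction: commit all, or roll back all
        db.execute(
            "UPDATE accounts SET balance = balance - 60 WHERE name = 'alice'")
        raise RuntimeError("simulated system failure mid-transfer")
        # The matching credit to bob would have gone here (never reached).
except RuntimeError:
    pass  # the connection rolled the half-done debit back

# No middle state: we are still in State A.
balances = dict(db.execute("SELECT name, balance FROM accounts"))
assert balances == {"alice": 100, "bob": 0}
```

Had the failure not occurred, both updates would have committed together, moving the system cleanly from State A to State B.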
Database Quiz - Part 2

This lesson contains a quiz to test your understanding of different models of databases, eventual, strong
consistency & CAP theorem.

Let’s Test Your Understanding Of A Few Database Concepts

1
What does polyglot persistence mean?

NoSQL Databases - Introduction
In this lesson, we will get an insight into NoSQL databases and how they are different from
Relational databases.

We'll cover the following

• What Is A NoSQL Database?


• How Is A NoSQL Database Different From A Relational Database?
• Scalability
• Clustering

What Is A NoSQL Database? #


As the name implies, NoSQL databases have no SQL; they are more like
JSON-based databases built for Web 2.0.

They are built for high-frequency reads & writes, typically required in social
applications like Twitter, live real-time sports apps, massively multiplayer
online games etc.

How Is A NoSQL Database Different From A


Relational Database? #
Now, one obvious question that would pop up in our minds is:

Why the need for NoSQL databases when relational databases were doing
fine, were battle-tested, well adopted by the industry & had no major
persistence issues?

Scalability #
Well, one big limitation of SQL-based relational databases is Scalability.
Scaling relational databases is not trivial: they have to be Sharded or
Replicated to make them run smoothly on a cluster. In short, this requires
careful thought and human intervention.

On the contrary, NoSQL databases have the ability to add new server nodes on the
fly & continue the work, without any human intervention, just with a snap of the
fingers.
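One common technique behind adding nodes on the fly with minimal data reshuffling is consistent hashing. Below is a minimal sketch (no virtual nodes, which real systems usually add); it only illustrates that a newly added node takes over a slice of the keys while the rest stay where they were:

```python
import bisect
import hashlib

def stable_hash(key):
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class HashRing:
    """Minimal consistent-hash ring: a key belongs to the first node
    clockwise from its hash. Adding a node only claims the keys that
    fall between the new node and its predecessor on the ring."""
    def __init__(self, nodes=()):
        self.ring = sorted((stable_hash(n), n) for n in nodes)

    def add_node(self, node):
        bisect.insort(self.ring, (stable_hash(node), node))

    def node_for(self, key):
        idx = bisect.bisect(self.ring, (stable_hash(key), "")) % len(self.ring)
        return self.ring[idx][1]

ring = HashRing(["node-a", "node-b", "node-c"])
keys = [f"user:{i}" for i in range(1000)]
before = {k: ring.node_for(k) for k in keys}

ring.add_node("node-d")   # scale out: a new node joins the ring
after = {k: ring.node_for(k) for k in keys}

moved = sum(before[k] != after[k] for k in keys)
# Every key that moved went to the new node; the rest stayed put.
assert all(after[k] == "node-d" for k in keys if before[k] != after[k])
assert moved < len(keys)  # typically only around 1/N of the keys move
```

Contrast this with naive `hash(key) % node_count` placement, where adding one node would remap almost every key and force a massive data shuffle.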

Today’s websites need fast reads & writes. There are millions, if not
billions, of users connected with each other on social networks.

A massive amount of data is generated every microsecond; we needed an
infrastructure designed to manage this exponential growth.

Clustering #
NoSQL databases are designed to run intelligently on clusters. And when I say
intelligently, I mean with minimal human intervention.

Today, the server nodes even have self-healing capabilities. That’s pretty smooth.
The infrastructure is intelligent enough to self-recover from faults.

Though all this innovation does not mean old school relational databases aren’t
good enough & we don’t need them anymore.

Relational databases still work like a charm & are still in demand. They have a
specific use-case. We have already gone through this in When to pick a relational
database lesson. Remember? 😊

Also, NoSQL databases had to sacrifice Strong consistency, ACID Transactions &
much more to scale horizontally over a cluster & across the data centres.

The data with NoSQL databases is more Eventually Consistent as opposed to being
Strongly Consistent.

So, this obviously means NoSQL databases aren’t the silver bullet. And it’s
completely alright; we don’t need silver bullets. We aren’t hunting
werewolves, we are up to a much harder task: connecting the world online.

I’ll talk about the underlying design of NoSQL databases in much detail and why
they have to sacrifice Strong consistency and Transactions in the upcoming lessons.

For now, let’s focus on some of the features of NoSQL databases.


Types of Databases

In this lesson, we will briefly recall the different types of databases.

WE'LL COVER THE FOLLOWING

• Different Types Of Databases

Different Types Of Databases #


There are quite a number of different types of databases available to the
application developers, catering to specific use cases.

Such as the:

Document-Oriented database
Key-value datastore
Wide-column database
Relational database
Graph database
Time-Series database
Databases dedicated to mobile apps

In the polyglot persistence lesson, we went through the need for different
types of databases. We have also covered relational databases in depth,
including when to pick one, in the previous lessons.

Now we will have insights into the other remaining types of databases and the
use cases which fit them.

So, without any further ado. Let’s get on with it.


Document Oriented Database

In this lesson, we will get to know about the Document Oriented database and when to choose it for our projects.

WE'LL COVER THE FOLLOWING

• What Is A Document Oriented Database?


• Popular Document Oriented Databases
• When Do I Pick A Document Oriented Data Store for My Project?
• Real Life Implementations

What Is A Document Oriented Database? #

Document Oriented databases are among the main types of NoSQL databases.
They store data in a document-oriented model, in independent documents.
The data is generally semi-structured & stored in a JSON-like format.

The data model is similar to the data model of our application code, so it’s
easier to store and query data for developers.

Document-oriented stores are suitable for the Agile software development
methodology, as it’s easier to change things with evolving demands when
working with them.

Popular Document Oriented Databases #


Some of the popular document-oriented stores used in the industry are MongoDB,
CouchDB, OrientDB, Google Cloud Datastore & Amazon DocumentDB.

When Do I Pick A Document Oriented Data Store for My Project? #
If you are working with semi-structured data, need a flexible schema which
would change often. You ain’t sure about the database schema when you start
writing the app. There is a possibility that things might change over time. You
are in need of something flexible which you could change over time with
minimum fuss. Pick a Document-Oriented data store.

Typical use cases of Document oriented databases are the following:

Real-time feeds
Live sports apps
Writing product catalogues
Inventory management
Storing user comments
Web-based multiplayer games

Being part of the NoSQL family, these databases provide horizontal scalability
and performant read-writes, as they cater to CRUD (Create, Read, Update,
Delete) use cases where there isn’t much relational logic involved and all we
need is quick persistence and retrieval of data.

Real Life Implementations #


Here are some good real-life implementations of this technology:

SEGA uses MongoDB to improve the experience for millions of mobile gamers

Coinbase scaled from 15k requests per minute to 1.2 million requests per
minute with MongoDB
Features Of NoSQL Databases
In this lesson, we will discuss the features of NoSQL databases.

We'll cover the following

• Pros Of NoSQL Databases


• Gentle Learning Curve
• Schemaless
• Cons Of NoSQL Databases
• Inconsistency
• No Support For ACID Transactions
• Conclusion
• Popular NoSQL Databases

In the introduction we learned that the NoSQL databases are built to run on
clusters in a distributed environment, powering Web 2.0 websites.

Now, let’s go over some features of NoSQL databases.

Pros Of NoSQL Databases #


Besides the design part, NoSQL databases are also developer-friendly. What do I
mean by that?

Gentle Learning Curve #


First, the learning curve is gentler than that of relational databases. When working
with relational databases, a big chunk of our time goes into learning how to design
well-normalized tables, set up relationships, minimize joins and so on.

Schemaless #
One needs to be pretty focused when designing the schema of a relational database
to avoid running into any issues in the future.
Think of relational databases as a strict headmaster. Everything has to be in place,
neat and tidy, and things need to be consistent. NoSQL databases, in contrast, are a
bit more chilled out & relaxed.

There are no strictly enforced schemas; work with the data as you want. You can
always change stuff and spread things around. Entities have no relationships. Thus,
things are flexible & you can do stuff your way.

Wonderful Right?

Not always!! This flexibility is good and bad at the same time. Being so flexible,
developer-friendly, having no joins and relationships etc. makes it good.

Cons Of NoSQL Databases #


Inconsistency #
But this flexibility introduces the risk of entities becoming inconsistent. Since an
entity is spread throughout the database, one has to update the new values of the
entity in all the places it occurs.

Failing to do so, makes the entity inconsistent. This is not a problem with relational
databases since they keep the data normalized. An entity resides at one place only.

No Support For ACID Transactions #


Also, NoSQL distributed databases don’t provide ACID transactions. A few that
claim to do that, don’t support them globally. They are just limited to a certain
entity hierarchy or a small region where they can lock down nodes to update
them.

Note: Transactions in distributed systems come with terms and conditions


applied.

Conclusion #
My first experience with a NoSQL datastore was with the Google Cloud Datastore.

An upside I felt was that we don’t have to be a pro in database design to write an
application. Things were comparatively simpler, as there was no stress of
managing joins, relationships, n+1 query issues etc.

Just fetch the data with its key; you can also call it the id of the entity. This is a
constant-time O(1) operation, which makes NoSQL databases really fast.

I have designed a lot of MySQL DB schemas in the past with complex relationships.
And I would say working with a NoSQL database is a lot easier than working with
relationships.

It’s alright if we need to make a few extra calls to the backend to fetch data;
that doesn’t make much of a difference. We can always cache the frequently
accessed data to overcome that.

Popular NoSQL Databases #


Some of the popular NoSQL databases used in the industry are MongoDB, Redis,
Neo4J, Cassandra.

So, I guess, by now we have a pretty good idea of what NoSQL databases are. Let’s
have a look at some of the use cases which fit best with them.
Graph Database

In this lesson, we will get to know about the Graph database and when to choose it for our projects.

WE'LL COVER THE FOLLOWING

• What Is A Graph Database?


• Features Of A Graph Database
• When Do I Pick A Graph Database?
• Real Life Implementations

What Is A Graph Database? #


Graph databases are also a part of the NoSQL database family. They store data
in nodes/vertices and edges in the form of relationships.

Each Node in a graph database represents an entity. It can be a person, a


place, a business etc. And the Edge represents the relationship between the
entities.

But, why use a graph database to store relationships when we already


have SQL based relational databases available?

Features Of A Graph Database #


Hmmm… primarily, two reasons. The first is visualization. Think of the pin
board in thriller detective movies, where pins on several images are
connected via threads on a board. It does help in visualizing how the entities
are related & how things fit together. Right?

The second reason is the low latency. In graph databases, the relationships are
stored a bit differently from how the relational databases store relationships.

Graph databases are faster as the relationships in them are not calculated at
the query time, as it happens with the help of joins in the relational databases.
Rather the relationships here are persisted in the data store in the form of
edges and we just have to fetch them. No need to run any sort of computation
at the query time.

A good real-life example of an application which would fit a graph database is


Google Maps. Nodes represent the cities and the Edges represent the
connection between them.

Now, if I have to look for roads between different cities, I don’t need joins to
figure out the relationship between the cities when I run the query. I just need
to fetch the edges which are already stored in the database.
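The "relationships are pre-stored as edges" idea can be sketched with a plain adjacency list: finding a city's neighbours is then a direct lookup on stored edges, not a join computed at query time. The city names here are just illustrative:

```python
# A tiny graph: nodes are cities, edges are pre-stored road connections.
# Fetching a node's relationships is a direct lookup on its edge list --
# nothing is computed at query time.
edges = {
    "Tokyo": ["Osaka", "Nagoya"],
    "Osaka": ["Tokyo", "Kyoto"],
    "Kyoto": ["Osaka"],
    "Nagoya": ["Tokyo"],
}

def neighbours(city):
    """Return cities directly connected to `city` -- just fetch the edges."""
    return edges.get(city, [])
```

A relational model would recover the same answer by joining a cities table with a roads table on every query; here the answer is already materialized.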

When Do I Pick A Graph Database? #


Ideal use cases for graph databases include building social, knowledge and
network graphs; writing AI-based apps, recommendation engines and fraud
analysis apps; storing genetic data, etc.

Graph databases help us visualize our data with minimum latency. A popular
graph database used in the industry is Neo4J.

Real Life Implementations #


Here are some real-life implementations of this technology:

Walmart shows product recommendations to its customers in real time using
the Neo4J graph database

NASA uses Neo4J to store “lessons learned” data from their previous
missions to educate their scientists & engineers.
When To Pick A NoSQL Database?
In this lesson, we will discover when to choose a NoSQL Database over any other kind of
database.

We'll cover the following

• Handling A Large Number Of Read Write Operations


• Flexibility With Data Modeling
• Eventual Consistency Over Strong Consistency
• Running Data Analytics

Handling A Large Number Of Read Write


Operations #
Look towards NoSQL databases when you need to scale fast. And when do you
generally need to scale fast?

When there are a large number of read-write operations on your website & when
dealing with large amounts of data, NoSQL databases fit best in these scenarios.
Since they have the ability to add nodes on the fly, they can handle more
concurrent traffic & larger amounts of data with minimal latency.

Flexibility With Data Modeling #


The second cue is during the initial phases of development, when you are not sure
about the data model or the database design, and things are expected to change at
a rapid pace. NoSQL databases offer us more flexibility.

Eventual Consistency Over Strong Consistency #


It’s preferable to pick NoSQL databases when it’s OK for us to give up on Strong
consistency and when we do not require transactions.

A good example of this is a social networking website like Twitter. When a tweet of
a celebrity blows up and everyone is liking and re-tweeting it from around the
world. Does it matter if the count of likes goes up or down a bit for a short while?

The celebrity would definitely not care if, instead of the actual 5,000,500 likes,
the system showed the like count as 5,000,250 for a short while.

When a large application is deployed on hundreds of servers spread across the
globe, the geographically distributed nodes take some time to reach a global
consensus.

Until they reach a consensus, the value of the entity is inconsistent. The value of
the entity eventually gets consistent after a short while. This is what Eventual
Consistency is.

Though the inconsistency does not mean that there is any sort of data loss. It just
means that the data takes a short while to travel across the globe via the internet
cables under the ocean to reach a global consensus and become consistent.

We experience this behaviour all the time. Especially on YouTube. Often you would
see a video with 10 views and 15 likes. How is this even possible?

It’s not. The actual views are already more than the likes. It’s just the count of
views is inconsistent and takes a short while to get updated. I will discuss Eventual
consistency in more detail further down the course.

Running Data Analytics #


NoSQL databases also fit best for data analytics use cases, where we have to deal
with an influx of massive amounts of data.

There are dedicated databases for use cases like this such as Time-Series databases,
Wide-Column, Document Oriented etc. I’ll talk about each of them further down the
course.

Right now, let’s have an insight into the performance comparison of SQL and
NoSQL tech.
Is NoSQL More Performant than SQL?
In this lesson, we will learn if the NoSQL database is more performant than the SQL databases.

We'll cover the following

• Why Do Popular Tech Stacks Always Pick NoSQL Databases?


• Real World Case Studies
• Using Both SQL & NoSQL Database In An Application

Is NoSQL more performant than SQL? This question is asked all the time. And I
have a one-word answer for this.

No!!

From a technology benchmarking standpoint, both relational and non-relational


databases are equally performant.

More than the technology, it’s how we design our systems using the technology that
affects the performance.

Both SQL & NoSQL tech have their use cases. We have already gone through them
in the lessons When to pick a relational database? & When to pick a NoSQL
database?

So, don’t get confused with all the hype. Understand your use case and then pick
the technology accordingly.

Why Do Popular Tech Stacks Always Pick NoSQL


Databases? #
But why do the popular tech stacks always pick NoSQL databases? For instance, the
MEAN (MongoDB, ExpressJS, AngularJS/ReactJS, NodeJs) stack.

Well, most of the applications online have common use cases. And these tech
stacks have them covered. There are also commercial reasons behind this.
Now, there are a plethora of tutorials available online & mass promotion of
popular tech stacks. With these resources, it’s easy for beginners to pick them up
and write their applications, as opposed to running solo research on other
technologies.

Though, we don’t always need to stick with the popular stacks. We should pick
what fits best with our use case. There are no ground rules, pick what works for
you.

We have a separate lesson on how to pick the right tech stack for our app further
down the course. We will continue this discussion there.

Coming back to performance, it entirely depends on the application & the
database design. If we use more joins in SQL, the response will inevitably
take more time.

If we remove all the relationships and joins, SQL becomes just like NoSQL.

Real World Case Studies #


Facebook uses MySQL for storing its social graph of millions of users. Though it did
have to change the DB engine and make some tweaks, MySQL fits its use case best.

Quora uses MySQL pretty efficiently by partitioning the data at the application
level. This is an interesting read on it.

Note: A well-designed SQL data store will always be more performant than a
not so well-designed NoSQL store.

Using Both SQL & NoSQL Database In An


Application #
Can’t I use both in my application? Both SQL & a NoSQL datastore. What if I have a
requirement fitting both?

You can!! As a matter of fact, all the large-scale online services use a mix of both to
implement their systems and achieve the desired behaviour.

Leveraging the power of multiple databases this way is called Polyglot
Persistence. Let’s learn more about it in the next lesson.
Key Value Database

In this lesson, we will get to know about the Key-Value database and when to choose it for our projects.

WE'LL COVER THE FOLLOWING

• What Is A Key Value Database?


• Features Of A Key Value Database
• Popular Key Value Databases
• When Do I Pick A Key Value Database?
• Real Life Implementations

What Is A Key Value Database? #


Key-value databases also are a part of the NoSQL family. These databases use a
simple key-value method to store and quickly fetch the data with minimum
latency.

Features Of A Key Value Database #


A primary use case of a Key-value database is to implement caching in
applications due to the minimum latency they ensure.

The Key serves as a unique identifier and has a value associated with it. The
value can be as simple as a block of text & can be as complex as an object
graph.

The data in Key-value databases can be fetched in constant time O(1), there is
no query language required to fetch the data. It’s just a simple no-brainer
fetch operation. This ensures minimum latency.
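Conceptually, a key-value store behaves like a dictionary offering constant-time put/get by key. Here is a minimal in-memory sketch to make that concrete; it is an illustration, not the API of any real product:

```python
class KeyValueStore:
    """Minimal in-memory key-value sketch: O(1) put/get by key,
    with no query language involved."""

    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key, default=None):
        return self._data.get(key, default)

# The value can be as simple as a string or as complex as an object graph.
store = KeyValueStore()
store.put("session:42", {"user": "alice", "cart": ["book", "pen"]})
```

Real products such as Redis offer the same put/get core plus richer value types, expiry and replication on top of it.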

Popular Key Value Databases #


Some of the popular key-value data stores used in the industry are Redis,
Hazelcast, Riak, Voldemort & Memcache.

When Do I Pick A Key Value Database? #


If you have a use case where you need to fetch data real fast with minimum
fuss & backend processing then you should pick a key-value data store.

Key-value stores are pretty efficient in pulling off scenarios where super-fast
data fetch is the order of the day.

Typical use cases of a key value database are the following:

Caching
Persisting user state
Persisting user sessions
Managing real-time data
Implementing queues
Creating leaderboards in online games & web apps
Implementing a pub-sub system

Real Life Implementations #


Some real-life implementations of this technology are:

Inovonics uses Redis to drive real-time analytics on millions of sensor
data points

Microsoft uses Redis to handle the traffic spikes on its platforms

Google Cloud uses Memcache to implement caching on their cloud platform
Database Quiz - Part 1
This lesson contains a quiz to test your understanding of databases.

Let’s Test Your Understanding Of Databases

1
What is the use of a database in web applications?

2
What is unstructured data?

3
Why do web applications store user state?

4
What are the key features of relational databases? Which of the following
option(s) are correct?
5
Why are database ACID transactions so important for financial systems?

6
When should we choose a relational database for our application? Which of
the following option(s) are correct?

7
Why do we need NoSQL databases when we already have relational databases?
8
When should we pick a NoSQL database for our application?
Polyglot Persistence
In this lesson, we will understand what is meant by Polyglot Persistence.

We'll cover the following

• What Is Polyglot Persistence?


• Real World Use Case
• Relational Database
• Key Value Store
• Wide Column Database
• ACID Transactions & Strong Consistency
• Graph Database
• Document Oriented Store
• Downside Of This Approach

What Is Polyglot Persistence? #

Polyglot persistence means using several different persistence technologies to


fulfil different persistence requirements in an application.

We will understand this concept with the help of an example.

Real World Use Case #


Let’s say we are writing a social network like Facebook.

Relational Database #
To store relationships like persisting friends of a user, friends of friends, what rock
band they like, what food preferences they have in common etc. we would pick a
relational database like MySQL.
Key Value Store #

For low latency access of all the frequently accessed data, we will implement a
cache using a Key-value store like Redis or Memcache.

We can use the same Key-value data store to store user sessions.

Now our app is a big hit; it has become pretty popular, and we have millions of
active users.

Wide Column Database #


To understand user behaviour, we need to set up an analytics system to run
analytics on the data generated by the users. We can do this using a wide-column
database like Cassandra or HBase.

ACID Transactions & Strong Consistency #


The popularity of our application just doesn’t seem to stop, it’s soaring. Now
businesses want to run ads on our portal. For this, we need to set up a payments
system.

Again, we would pick a relational database to implement ACID transactions &


ensure Strong consistency.

Graph Database #
Now to enhance the user experience of our application we have to start
recommending content to the users to keep them engaged. A Graph database
would fit best to implement a recommendation system.

Alright, by now, our application has multiple features, & everyone loves it. How
cool would it be if a user could run a search for other users, business pages and
stuff on our portal & connect with them?

Document Oriented Store #


To implement this, we can use an open-source document-oriented datastore like
ElasticSearch. The product is pretty popular in the industry for implementing a
scalable search feature on websites. We can persist all the search-related data in
the elastic store.
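The choices above can be summarized as a simple routing table mapping each persistence concern to the store picked for it; in a real codebase this mapping would typically sit behind repository interfaces. The store names are just the examples from this lesson:

```python
# One way to keep the polyglot mapping explicit: route each persistence
# concern to the store chosen for it (examples taken from this lesson).
PERSISTENCE = {
    "relationships":   "MySQL",          # relational
    "cache":           "Redis",          # key-value
    "sessions":        "Redis",          # key-value
    "analytics":       "Cassandra",      # wide-column
    "payments":        "MySQL",          # ACID transactions
    "recommendations": "Neo4J",          # graph
    "search":          "ElasticSearch",  # document-oriented
}

def store_for(concern):
    """Look up which datastore serves a given persistence concern."""
    return PERSISTENCE[concern]
```

Keeping the mapping in one place also makes the operational cost visible: every distinct value in this table is another technology to build, manage and monitor.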
Downside Of This Approach #
So, this is how we use multiple databases to fulfil different persistence
requirements. Though, one downside of this approach is increased complexity in
making all these different technologies work together.

A lot of effort goes into building, managing and monitoring polyglot persistence
systems. What if there was something simpler? That would save us the pain of
putting together everything ourselves. Well, there is.

What?

Let’s find out in the next lesson.


Database Quiz - Part 3

This lesson contains a quiz to test your understanding of different types of databases.

Let’s Test Your Understanding Of Different Types Of Databases

1
What are the use cases for a document-oriented database? Which of the
following option(s) are correct?

Multi-Model Databases
In this lesson, we will talk about the multi-model databases.

We'll cover the following

• What Are Multi-Model Databases?


• Popular Multi-Model Databases

What Are Multi-Model Databases? #


Until now, the databases we discussed supported only one data model: a database
could be relational, graph, or of any other type with one specific data model.

But with the advent of multi-model databases, we have the ability to use different
data models in a single database system.

Multi-model databases support multiple data models like Graph, Document-


Oriented, Relational etc. as opposed to supporting only one data model.

They also avert the need for managing multiple persistence technologies in a single
service. They reduce the operational complexity by notches. With multi-model
databases, we can leverage different data models via a single API.
Popular Multi-Model Databases #
Some of the popular multi-model databases are ArangoDB, Cosmos DB, OrientDB,
Couchbase etc.

So, by now we are clear on what NoSQL databases are & when to pick them and
stuff. Now let’s understand concepts like Eventual Consistency, Strong Consistency
which are key to understanding distributed systems.
Eventual Consistency
In this lesson, we will discuss Eventual Consistency.

We'll cover the following

• What Is Eventual Consistency?


• Real World Use Case

What Is Eventual Consistency? #


Eventual consistency is a consistency model which enables the data store to be
highly available. It is also known as optimistic replication & is key to distributed
systems.

So, how exactly does it work?

We’ll understand this with the help of a use case.

Real World Use Case #


Think of a popular microblogging site deployed across the world in different
geographical regions like Asia, America, Europe. Moreover, each geographical
region has multiple data centre zones: North, East, West, South. Furthermore, each
of the zones has multiple clusters which have multiple server nodes running.

So, we have many datastore nodes spread across the world which the micro-
blogging site uses for persisting data.

Since there are so many nodes running, there is no single point of failure. The data
store service is highly available. Even if a few nodes go down the persistence
service as a whole is still up.

Alright, now let’s say a celebrity makes a post on the website which everybody
starts liking around the world.

At a point in time, a user in Japan likes the post, which increases the “Like” count of
the post from, say, 100 to 101. At the same point in time, a user in America, in a
different geographical zone, clicks on the post and sees the “Like” count as 100,
not 101.

Why did this happen?

Simply, because the new updated value of the Post “Like” counter needs some time
to move from Japan to America and update the server nodes running there.

Though the value of the counter at that point in time was 101, the user in America
sees the old inconsistent value.

But when he refreshes his web page after a few seconds the “Like” counter value
shows as 101. So, the data was initially inconsistent but eventually got consistent
across the server nodes deployed around the world. This is what eventual
consistency is.
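This behaviour can be modelled with a toy simulation: a write lands on the local replica first, and a later replication step brings the other replicas up to date. This is a deliberately simplified model for illustration, not a real replication protocol:

```python
# Toy model of eventual consistency: a write lands on one replica first,
# and a later replication step brings the other replicas up to date.
replicas = {"japan": 100, "america": 100}
pending = []  # replication log of updates not yet applied everywhere

def like(region):
    """A user likes the post; only the local replica sees it immediately."""
    replicas[region] += 1
    pending.append(("set", replicas[region]))

def replicate():
    """Eventually, all replicas converge on the latest value."""
    if pending:
        _, value = pending[-1]
        for region in replicas:
            replicas[region] = max(replicas[region], value)
        pending.clear()

like("japan")                 # Japan now reads 101 ...
stale = replicas["america"]   # ... while America still reads the old 100
replicate()                   # after a short while, the replicas converge
```

Between the write and the replication step the American reader sees a stale but valid value; no data is lost, it just takes a while to travel.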

Let’s take it one step further, what if at the same point in time both the users in
Japan and America Like the post, and a user in another geographic zone say
Europe accesses the post.

All the nodes in different geographic zones have different post values. And they
will take some time to reach a consensus.
The upside of eventual consistency is that the system can add new nodes on the fly
without the need to block any of them; the nodes are available to the end-users to
make updates at all times.

Millions of users across the world can update the values at the same time without
having to wait for the system to reach a common final value across all nodes
before they make an update. This feature enables the system to be highly available.

Eventual consistency is suitable for use cases where the accuracy of values doesn’t
matter much like in the above-discussed use case.

Other use cases for eventual consistency include keeping the count of users
watching a live video stream online, or dealing with massive amounts of analytics
data, where a couple of counts up or down won’t matter much.

But there are use cases where the data has to be laser accurate, like in banking and
stock markets. We just cannot have our systems be eventually consistent; we need
Strong Consistency.

Let’s discuss it in the next lesson.


Introduction

In this lesson, we will get introduced to the concept of caching and why it is important for performance.

WE'LL COVER THE FOLLOWING

• What Is Caching?
• Caching Dynamic Data
• Caching Static Data

Hmmm… before beginning with this lesson, I want to ask you a question.
When you visit a website and request certain data from the server. How long
do you wait for the response?

5 seconds, 10 seconds, 15 seconds, 30 seconds? I know, I know, I am pushing


it… 45? What? Still no response…

And then you finally bounce off & visit another website for your answer. We
are impatient creatures; we need our answers quick. This makes caching vital
to applications to prevent users from bouncing off to other websites, all the
time.

What Is Caching? #

Caching is key to the performance of any kind of application. It ensures


low latency and high throughput. An application with caching will
certainly do better than an application without caching, simply because
it returns the response in less time as opposed to the application without
a cache implemented.

Implementing caching in a web application simply means copying frequently
accessed data from the database, which runs on disk-based hardware, and storing
it in RAM (Random Access Memory) hardware.

RAM-based hardware provides faster access than disk-based hardware. As I said
earlier, it ensures low latency and high throughput. Throughput is the number of
network calls, i.e. request-response cycles between the client and the server,
within a stipulated time.

RAM-based hardware is capable of handling more requests than the disk-based
hardware on which the databases run.

Caching Dynamic Data #


With caching we can cache both the static data and the dynamic data.
Dynamic data is the data which changes more often, it has an expiry time or a
TTL “Time To Live”. After the TTL ends, the data is purged from the cache and
the newly updated data is stored in it. This process is known as Cache
Invalidation.
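The TTL mechanism can be sketched in a few lines: each cache entry stores its expiry time, and a read past that time treats the entry as purged. This is a toy in-memory cache for illustration, not a real product:

```python
import time

class TTLCache:
    """Toy cache: entries expire after `ttl` seconds (cache invalidation)."""

    def __init__(self, ttl):
        self.ttl = ttl
        self._data = {}  # key -> (value, expiry timestamp)

    def set(self, key, value):
        # Stamp the entry with the time at which it stops being valid.
        self._data[key] = (value, time.time() + self.ttl)

    def get(self, key):
        entry = self._data.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.time() >= expires_at:   # TTL ended: purge the stale entry
            del self._data[key]
            return None
        return value

cache = TTLCache(ttl=0.05)
cache.set("profile:1", {"name": "alice"})
```

Real caches such as Redis implement the same idea with per-key expiry, so the application re-fetches fresh data from the database once the TTL lapses.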

Caching Static Data #


Static data consists of images, font files, CSS & other similar files. This is the
kind of data which doesn’t change often & can easily be cached on the client
side in the browser or its local memory, and also on CDNs, the Content
Delivery Networks.
Caching also helps applications maintain their expected behaviour during
network interruptions.

In the next lesson, let’s understand how do we figure if we really need a cache
in our applications?
How Do I Figure If I Need A Cache In My Application?

In this lesson, we will discuss how to tell if we need caching in our application.

WE'LL COVER THE FOLLOWING

• Different Components In the Application Architecture Where the Cache Can Be Used

First up, it’s always a good idea to use a cache as opposed to not using it. It
doesn’t do any harm. It can be used at any layer of the application & there are
no ground rules as to where it can and cannot be applied.

The most common usage of caching is database caching. Caching helps


alleviate the stress on the database by intercepting the requests being routed
to the database for data.

The cache then returns all the frequently accessed data. Thus, cutting down
the load on the database by notches.

Different Components In the Application Architecture Where the Cache Can Be Used #
Across the architecture of our application, we can use caching at multiple
places. Caching is used in the client browser to cache static data. It is used
with the database to intercept all the data requests, in the REST API
implementation etc.

Besides these places, I would suggest you to look for patterns. We can always
cache the frequently accessed content on our website, be it from any
component. There is no need to compute stuff over and over when it can be
cached.

Think of joins in relational databases. They are notorious for making the
response slow; more joins mean more latency. A cache can avert the need to
run joins every time just by storing the in-demand data. Now imagine how
much this mechanism would speed up our application.

Also, even if the database goes down for a while, the users won’t notice it, as
the cache will continue to serve the data requests.

Caching is also the core of the HTTP protocol. This is a good resource to read
more about it.

We can store user sessions in a cache. Caching can be implemented at any layer
of an application, be it at the OS level, at the network level, on a CDN or in the
database. You might remember, we talked about Key-value data stores in the
database lesson. They are primarily used to implement caching in web
applications.

They can be used for cross-module communication in a microservices
architecture by saving the shared data which is commonly accessed by all the
services. The cache acts as a backbone for microservice communication.

Key-value data stores via caching are also widely used in in-memory data
stream processing and running analytics.
Strong Consistency
In this lesson, we will discuss Strong Consistency.

We'll cover the following

• What Is Strong Consistency?


• Real World Use Case
• ACID Transaction Support

What Is Strong Consistency? #


Strong Consistency simply means the data has to be strongly consistent at all times.
All the server nodes across the world should contain the same value of an entity at
any point in time. And the only way to implement this behaviour is by locking
down the nodes when being updated.

Real World Use Case #


Let’s continue the same Eventual Consistency example from the previous lesson.
To ensure Strong Consistency in the system, when the user in Japan likes the post,
all the nodes across different geographical zones have to be locked down to
prevent any concurrent updates.

This means at one point in time, only one user can update the post “Like” counter
value.
So, once the user in Japan updates the “Like” counter from 100 to 101. The value
gets replicated globally across all nodes. Once all the nodes reach a consensus, the
locks get lifted.
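A single-process analogue of "locking down the nodes" is a mutex around the counter: writers are serialized, so no increment is lost and every reader sees one agreed value. This is a sketch with Python threads, not a distributed protocol:

```python
import threading

# Sketch: a lock plays the role of "locking down the nodes" -- updates
# are serialized, so the counter never loses an increment.
lock = threading.Lock()
likes = 0

def like_post():
    global likes
    with lock:        # only one writer at a time, like a global lock
        likes += 1

threads = [threading.Thread(target=like_post) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# With the lock held around each update, the final count is exactly 100.
```

The price of this guarantee is the same one the lesson describes: while one writer holds the lock, every other writer has to wait, which is what erodes availability in a distributed setting.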

Now, other users can Like the post. If the nodes take a while to reach a consensus,
they have to wait until then.

Well, this is surely not desired in case of social applications. But think of a stock
market application where the users are seeing different prices of the same stock at
one point in time and updating it concurrently. This would create chaos.

Therefore, to avoid this confusion we need our systems to be Strongly Consistent.


The nodes have to be locked down for updates.

Queuing all the requests is one good way of making a system Strongly Consistent.
Well, the implementation is beyond the scope of this course. Though we will
discuss a theorem called the CAP theorem which is key to implementing these
consistency models.

So, by now I am sure you would have realized that picking the Strong Consistency
model hits the capability of the system to be Highly Available.
The system, while being updated by one user, does not allow other users to
perform concurrent updates. This is how strongly consistent ACID transactions
are implemented.

ACID Transaction Support #


Distributed systems like NoSQL databases which scale horizontally on the fly don’t
support ACID transactions globally & this is due to their design. The whole reason
for the development of NoSQL tech is the ability to be Highly Available and
Scalable. If we have to lock down the nodes every time, it becomes just like SQL.

So, NoSQL databases don’t support ACID transactions and those that claim to, have
terms and conditions applied to them.

Generally, the transaction support is limited to a geographic zone or an entity
hierarchy. Developers of the tech make sure that all the strongly consistent
entity nodes reside in the same geographic zone to make ACID transactions
possible.

Well, this is pretty much it about Strong Consistency. Now let’s take a look at the
CAP theorem.
