Emerging Technologies in
Computing
Theory, Practice, and Advances
Edited by
Pramod Kumar, Anuradha Tomar, and
R. Sharmila
MATLAB® is a trademark of The MathWorks, Inc. and is used with permission. The MathWorks does not warrant the accuracy of the text or exercises in this book. This book's use or discussion of MATLAB® software or related products does not constitute endorsement or sponsorship by The MathWorks of a particular pedagogical approach or particular use of the MATLAB® software.
© 2022 selection and editorial matter, Pramod Kumar, Anuradha Tomar, R. Sharmila; individual
chapters, the contributors
Reasonable efforts have been made to publish reliable data and information, but the author and
publisher cannot assume responsibility for the validity of all materials or the consequences of
their use. The authors and publishers have attempted to trace the copyright holders of all material
reproduced in this publication and apologize to copyright holders if permission to publish in this
form has not been obtained. If any copyright material has not been acknowledged, please write
and let us know so we may rectify in any future reprint.
Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced,
transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or
hereafter invented, including photocopying, microfilming, and recording, or in any information
storage or retrieval system, without written permission from the publishers.
For permission to photocopy or use material electronically from this work, access www.copyright.com or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. For works that are not available on CCC please contact mpkbookspermissions@tandf.co.uk
Trademark notice: Product or corporate names may be trademarks or registered trademarks and
are used only for identification and explanation without intent to infringe.
DOI: 10.1201/9781003121466
Typeset in Minion
by codeMantra
Contents
Editors, ix
Contributors, xi
INDEX, 269
Editors
Contributors
Aatif Jamshed, Department of IT, ABES Engineering College, Ghaziabad, India

Jasmeet Kalra, Department of Mechanical Engineering, Graphic Era Hill University, Dehradun, India

Umang Kant, Department of Computer Science and Engineering, Delhi Technological University, Delhi, India, and Department of Computer Science and Engineering, Krishna Engineering College, Ghaziabad, India

Ashish Kumar, Department of CSE, Krishna Engineering College, Ghaziabad, India

Manish Kumar, Department of CSE, Krishna Engineering College, Ghaziabad, India

Pramod Kumar, Department of CSE, Krishna Engineering College, Ghaziabad, India

Raja Kumar, School of Computer Science & Engineering, Head of Research, Faculty of Innovation and Technology, Taylor's University, Malaysia

Vinod Kumar, Department of Computer Science and Engineering, Delhi Technological University, Delhi, India
Introduction to
Emerging Technologies
in Computer Science
and Its Applications
Umang Kant and Vinod Kumar
Delhi Technological University
CONTENTS
1.1 Introduction 2
1.1.1 Computer Vision 2
1.1.2 Deep Learning 6
1.1.3 Internet of Things (IoT) 9
1.1.4 Quantum Computing 11
1.1.5 Edge Computing 14
1.1.6 Fog Computing 16
1.1.7 Serverless Computing 17
1.1.8 Implanted Technology 19
1.1.9 Virtual, Augmented, and Mixed Reality 23
1.1.10 Digital Twin 25
1.2 Conclusions 28
References 29
DOI: 10.1201/9781003121466-1
1.1 INTRODUCTION
The extensive and exhaustive research carried out in the field of Artificial Intelligence (AI) confirms that it now finds applications in every field of life. Researchers and scientists are making every possible effort to put AI to use, in turn building machines that think, and perhaps act, like humans. AI is an umbrella that shelters numerous technologies under it, and it is hence perceived as an interdisciplinary field with several approaches. It is an eclectic branch of Computer Science that aims to answer Turing's question in the affirmative: it is responsible for developing smart machines capable of executing tasks that normally require human intelligence, and for a visible paradigm shift in every sector of the technological world, giving birth to new concepts and technologies along the way. Machine Learning is an application of AI that aims to give machines or systems the capability to learn on their own and improve from experience without human intervention. To make machines learn, we need to provide them with an ample amount of data so that they can detect patterns in it (if any) and make better decisions: observing the data, working on its patterns, and then training algorithms using that data [1,2]. The learning process is initiated by observing the data, as mentioned above, and the learning techniques can be supervised, unsupervised, semi-supervised, or reinforcement-based, depending on the data to be trained on and the application to be addressed. Hence, the main aim of Machine Learning is to allow machines to learn on their own, in the absence of human assistance, and to adjust their outputs or actions accordingly [3]. AI has given birth to many new technologies, and Machine Learning is one of the ways to achieve AI. We will now discuss some recent technologies that are being researched the most these days and are finding their way into business, education, health, commerce, and other fields of life.
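As a concrete illustration of the observe-pattern-train loop described above, the sketch below fits a straight line to labelled (x, y) pairs with ordinary least squares, one of the simplest forms of supervised learning. The data and the underlying rule (y = 2x + 1 plus noise) are invented for this example and are not from the chapter.

```python
import numpy as np

# Hypothetical labelled data: inputs x with noisy outputs following y = 2x + 1.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=0.1, size=x.size)

# "Training" here is ordinary least squares: find the slope and intercept
# that minimize the squared error over the observed data.
A = np.column_stack([x, np.ones_like(x)])
(slope, intercept), *_ = np.linalg.lstsq(A, y, rcond=None)

# The fitted model can now make decisions (predictions) on unseen inputs.
print(f"slope ~ {slope:.2f}, intercept ~ {intercept:.2f}")
```

Despite its simplicity, this captures the essence of supervised learning: the machine is never told the rule, only shown labelled examples, and recovers the pattern from the data alone.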
1.1.1 Computer Vision
Computer vision is a type of AI that we have all experienced naturally in our lives, in multiple ways, without even realizing it. Such is the power of the human brain and senses, and computer vision aims to replicate that power using machines. Humans can (i) describe the content of an image or video they have seen, (ii) summarize an image or video they have seen, and (iii) recognize a face or an object that they have seen [4]. Hence, machines can be made to mimic the human capability of remembering things and people, using algorithms dedicated to this process. We are all aware that taking and uploading a picture or video to the internet has become extremely easy; all we need is a smartphone and a social media platform. According to recent articles, hundreds of hours of video are uploaded to YouTube every minute, and the same is the case with other social platforms such as Facebook, Instagram, and Twitter. Around 3 billion images alone are shared online each day, maybe more. These images and videos can easily be recognized and summarized by humans, but to train a machine for the same capability, we first need the machine to be able to index the content and then to search the content in a video or photograph. The Machine Learning algorithms will need to (i) find what the image or video contains and (ii) utilize the metadata descriptions provided by the person who uploaded that image or video.
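To give a flavour of how an algorithm "sees" pixel data, the sketch below applies a hand-written 3x3 Sobel-style convolution to a tiny synthetic image to locate a vertical edge, the kind of low-level operation on which image analysis is built. The image and kernel are illustrative only and do not come from the chapter.

```python
import numpy as np

# Tiny synthetic 6x6 grayscale image: dark left half, bright right half,
# so there is a single vertical edge down the middle.
img = np.zeros((6, 6))
img[:, 3:] = 1.0

# Sobel kernel that responds to horizontal intensity changes (vertical edges).
kx = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]], dtype=float)

def convolve2d(image, kernel):
    """Valid-mode 2-D sliding-window filter (cross-correlation, as in most CV libraries)."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

edges = convolve2d(img, kx)
# The response is strongest in the output columns straddling the edge.
print(edges)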
In simple terms, computer vision can be defined as a field of study focused on the problem of helping computers to see [5]. The goal of computer vision is to use observed image data to infer something about the world [6]. Computer vision is an interdisciplinary technological field which deals with replicating and observing the human vision and brain processing system, facilitating machines to identify and process items in images and videos in a similar manner as humans can. Due to advancements in AI, Neural Networks (NNs), and Deep Learning, computer vision has taken great leaps in recent years and is still a hot field among researchers [7]. Computer vision is also clearly a subfield of AI, Machine Learning, and Deep Learning, as it deals with highly complex data identification and interpretation. Due to recent advancements, computer vision has been able to successfully outdo humans in tasks of identifying, indexing, and labeling objects in images or videos. Many users will have experienced this while tagging people in images on social media platforms such as Facebook; the algorithms are trained so extensively that they can now perform better than humans in identifying and tagging items or people [8]. Another factor responsible for the better working of machines in achieving computer vision is that, over the past few years, an ample amount of data has been generated, and this large amount of data is being used
FIGURE 1.1 Object detection and classification. (From Jarvis, R.A., IEEE Transactions on Pattern Analysis and Machine Intelligence, 122–139, 1983. With permission.) [5].
1.1.2 Deep Learning
Deep Learning is a subset of Machine Learning that aims to further automate the functions of a human being. It is a branch built on algorithms that learn and re-learn by mimicking the functions of a human brain. Just as the biological neural network helps humans learn from their experiences, artificial neural networks (ANNs) help an algorithm learn and execute a task. ANNs, also generally called NNs, are computing systems vaguely inspired by the biological neural networks that constitute animal brains. An ANN comprises multiple artificial neurons (or nodes) arranged in a network of multiple layers, which loosely models the neural network of the biological brain (Figure 1.2). Each connection, like a synapse in a biological brain, can transmit a signal to other neurons. The "signal" comprises input data (real numbers), and every neuron processes the data before transmitting it further down the chain.

Different layers of neurons perform different transformations on their inputs. Signals travel from the first layer (the input layer) to the last layer (the output layer), possibly after traversing the layers multiple times. Once the data is prepared for processing, it is fed to the network at the first layer, also known as the input layer. The neurons process this data and send it down to the next layer; each layer is designed to perform a specific task, and the middle layers are also called the hidden layers. Once the data has been sequentially processed by each hidden layer, the model transmits it to the output layer (the last layer), where it is processed as designed to produce the final output. One full pass of the data through the network is called an epoch. This output is then tested for accuracy; more often than not, the output of the first epoch is far from correct. Therefore, this information is passed back through the network in reverse order so that the network can learn from its mistakes. This is called back-propagation. The network then tweaks its parameters and processes the data again in a similar fashion. This process is continued for several epochs, until the model starts producing acceptably accurate results.
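The forward pass, back-propagation, and epoch loop described above can be sketched in a few lines of NumPy. This toy two-layer network learns the XOR function, a classic problem that requires a hidden layer; the layer sizes, learning rate, and epoch count are arbitrary choices for illustration, not values from the chapter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: XOR inputs and labels.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0.0], [1.0], [1.0], [0.0]])

# Input layer (2 units) -> hidden layer (8 units) -> output layer (1 unit).
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.1
for epoch in range(20000):               # one epoch = one full pass over the data
    # Forward pass through the layers.
    h = np.tanh(X @ W1 + b1)             # hidden layer
    out = sigmoid(h @ W2 + b2)           # output layer

    # Back-propagation: pass the error back through the network in reverse order.
    d_out = out - y                      # output-layer error (cross-entropy gradient)
    d_h = (d_out @ W2.T) * (1 - h ** 2)  # error propagated to the hidden layer

    # Tweak the parameters against the gradients, then repeat for the next epoch.
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(out.round().ravel())  # after training, should be close to [0, 1, 1, 0]
```

Frameworks such as TensorFlow and PyTorch automate exactly this gradient bookkeeping, which is why deep networks with millions of parameters can be trained with a few lines of user code.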
The theory, the models, and the data existed earlier as well; it is only the advancements in technology that have turned this vision into reality. Today we have access to sophisticated data management architectures and the computational power to process this massive data, which has made access to these technologies fairly simple. The most prominent frameworks on this front are TensorFlow, Keras, and PyTorch, which have enabled everyone to access state-of-the-art Deep Learning.

There are not many differences between Machine Learning and Deep Learning; here are a few. While Machine Learning is based on pre-defined models or algorithms, Deep Learning is built on an NN architecture. It further removes the need for human intervention in feature selection from the data.
Prominent applications of Deep Learning include:
• Self-driving cars
• News aggregation and fraud news detection
• Natural language processing
• Virtual assistants
• Recommender systems
• Visual recognition
• Credit fraud detection
• Healthcare and diagnostics
There are many more. The possibilities are endless (Figure 1.3).
FIGURE 1.3 Artificial Intelligence, Machine Learning, and Deep Learning as nested fields.
the time even when the internet is disabled. Such security threats put all users at risk when using small household devices as well as business or industrial devices. As discussed above, IoT connects physical devices with digital devices; hence, the interface between the two worlds is always at risk. The current area of research in IoT these days therefore revolves mostly around security concerns.

As security is a major concern in IoT, so is privacy. By always being connected over the internet through embedded devices, privacy takes a back seat in the entire process, and hence the security lapse. IoT models hold users' data, which can be manipulated to achieve some cause or, worse, sold to companies or made available over the dark web. It therefore becomes equally important for users to be aware of the bargain they make while using these smart devices. Apart from security and privacy, other IoT concerns include cost, connectivity, user acceptance, and device standards.
IoT has given rise to big data analytics, as IoT generates vast amounts of data on a daily basis and has hence given companies vast data sets to analyze and work upon. This data can come in many different forms, such as images, videos, audio, pressure or temperature readings, heartbeats, or other sensor readings. This vast amount of data has also given rise to metadata, i.e. data about data. Such huge data cannot be stored in a company's data warehouse or other on-premises resources, but only on the cloud. Hence, IoT in turn has given rise to the need for cloud services, with every company aiming to achieve the IoT business model. These days, from small organizations and institutes to large multinational companies, all are making use of cloud services to better manage their data. With better connectivity (3G, 4G, 5G, CDMA, GSM, and LTE networks) and new technologies, the IoT market will continue to evolve even with security and privacy issues.
1.1.4 Quantum Computing
Current-generation computers are based on classical physics and therefore on classical computing. In theory (Turing machines) and in practice (PCs, laptops, smartphones, tablets), all current-generation computers work on the principles of classical computing and are called classical computers. These classical computers can only be in one state at a particular time, and their operations can only have local effects, i.e. they are limited by locality [13]. As we are aware, the fundamental principle behind computer systems is the ability to store, fetch, and manipulate
would be the combination of two or more music notes, and the final music that we hear would be the superpositioned note; (ii) Entanglement: a counter-intuitive quantum behavior that is not visible in the classical environment. Here, the particles are entangled with each other and form a new model or environment, which behaves in entirely new ways that are not possible in the classical world and cannot be explained using classical computing logic; and (iii) Interference: interference of quantum states occurs due to phase. This logic of state interference is similar to the logic of wave interference: wave amplitudes add when the waves are in phase, and cancel each other otherwise. To develop a fault-tolerant quantum system and to enhance the computational capabilities of a quantum computer, researchers are working toward improving (i) the qubit count: the aim is to create more qubit states; the more qubit states, the more options for manipulating and processing states; and (ii) low error rates: the aim is to eliminate the noise and errors encountered while working on multiple qubit states. Low error rates are required to manage qubit states efficiently and to perform all sequential or parallel operations. Quantum volume is considered a useful metric for analyzing quantum computer capability [14]: it measures the correlation between the quality and number of qubits, the error rates of qubit processing, and the circuit connectivity of the quantum computer. Hence, the aim is to develop quantum computers with large quantum volume for solving computationally hard problems [22,23].
The basic motivation for research in the field of quantum computing is that classical computers have become so powerful and cheap through miniaturization that they have almost reached the micro scale at which quantum effects are said to occur. Chip makers have reached such a level of miniaturization that, instead of suppressing quantum effects in classical computers, researchers might try to work with them, leading to further miniaturization and hence more quantum effects. It is too soon to answer the pressing question, "to what extent will quantum computers be built?" The first 2-qubit quantum computer was developed in 1997, and a 5-qubit quantum computer was built in 2001 to factor the number 15 [21]. The largest quantum computer up until now has only a few dozen qubits. Hence, the work on quantum computers has been slow but steady.
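The superposition and interference ideas above can be illustrated with the standard textbook state-vector picture of a single qubit, sketched below with NumPy; this generic construction (the Hadamard gate acting on basis states) is not taken from the chapter.

```python
import numpy as np

# A qubit is a 2-component state vector; |0> and |1> are the basis states.
ket0 = np.array([1.0, 0.0])

# The Hadamard gate puts a basis state into an equal superposition.
H = np.array([[1.0, 1.0],
              [1.0, -1.0]]) / np.sqrt(2.0)

plus = H @ ket0          # (|0> + |1>)/sqrt(2): both amplitudes equal 1/sqrt(2)

# Measurement probabilities are the squared amplitudes: 50/50 for |0> and |1>.
probs = plus ** 2

# Interference: applying H again recombines the amplitudes. The |1> paths are
# out of phase and cancel; the |0> paths add, so the state returns to |0>.
back = H @ plus
print(probs, back)
```

The cancellation in the second application of H is exactly the wave-like interference described in the text: amplitudes in phase add, amplitudes out of phase cancel.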
1.1.5 Edge Computing
With the rise of IoT-connected devices, edge computing has also come into the picture and is transforming the manner in which data is processed, managed, stored, and distributed to users by the millions of connected devices around the globe. As discussed in Section 1.1.3, the connected devices in IoT generate a tremendous amount of data, which is stored in and retrieved from either a centralized storage location or cloud-based storage. This data is expected to keep growing at an unprecedented rate; hence, more and more time is spent storing and retrieving the generated data. IoT, real-time computing power, and faster networking media and technologies such as 5G (wireless) have provided edge computing with a large number of opportunities in business and industry.

As per the definition given by the research firm Gartner, edge computing is part of a distributed computing topology in which information processing is located closer to the edge, at so-called edge nodes (Figure 1.5), where things and people produce or consume that information [24]. In other words, edge computing brings data storage and processing closer to where the data is being generated. Hence, edge computing is a decentralized, distributed computing framework bringing enterprise data closer to its sources [25]. Edge computing, though different from fog computing, shares many similarities with it; this is discussed in the following section.
1.1.6 Fog Computing
Fog computing, as defined by researchers, is a decentralized, distributed computing technology where the data, storage resources, computation technology, and respective applications are located somewhere between the data source (IoT-connected devices) and the cloud [27]. Fog computing shares many similarities with edge computing, as both bring the data source and computing near to the location where the data is created (see Section 1.1.5). Fog computing also has advantages similar to those of edge computing: it reduces data latency and provides better efficiency, among others. Hence, many scholars use the two terms interchangeably, as the basic aim of both technologies is the same. Although the main motive and working are similar, minor differences still exist between the two. While edge computing means bringing data storage and processing closer to the data sources, i.e. the edge nodes, fog computing refers to placing data storage and processing in between the data source and the cloud, at fog nodes [28]. As the name suggests, fog in nature concentrates between the ground and the cloud; to be precise, it stays in between but still closer to the ground. The term fog computing was coined at Cisco in 2015 by the company's product line manager, Ginny Nichols [29]. Fog and edge computing can be viewed as two sides of a coin, as both complement each other and have similar advantages and limitations; the same can be understood by referring to Figure 1.6.

Fog computing, along with edge computing, is viewed as an alternative to cloud computing. Both retain a few properties of cloud computing while maintaining a few distinctions [30]. Smart electrical grids, smart transportation networks, and smart cities are all applications of fog computing.

Implementing fog computing requires bringing IoT applications to the fog nodes at the network edge using fog computing models and tools. Here, the fog nodes, which are closest to the edge nodes, receive data from edge devices (modems, routers, sensors, etc.). After receiving the data, they forward it to the optimum location for further analysis. In fog computing, the data analysis depends on the type of data; the data can be divided into different categories based on the
FIGURE 1.6 Representation of cloud, fog, and edge computing. (From Gupta, B.B., & Agrawal, D.P. (Eds.), Handbook of Research on Cloud Computing and Big Data Applications in IoT. IGI Global, 2019. With permission.) [31].
time sensitivity. Data which needs immediate analysis and processing falls into the most time-sensitive category, i.e. the user needs this type of data immediately; the categories hence range from most time-sensitive through medium time-sensitive to least time-sensitive data. From the collected data set, the most time-sensitive data is selected and analyzed as near to its source as possible to avoid latency issues. Data which is not time-sensitive is sent to aggregation nodes for analysis, which can be carried out at an appropriate later time, as the user does not require it immediately. The benefits of fog computing, which are similar to those of edge computing, are (i) reduced latency, (ii) better network bandwidth, (iii) improved reliability, (iv) reduced costs, and (v) better insights, among others. The differences are listed in Table 1.1.
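The routing-by-time-sensitivity rule described above can be sketched as a small dispatch function; the function name, latency thresholds, and sample readings below are all invented for this illustration and are not part of the chapter.

```python
# Illustrative sketch of the fog routing rule: the most time-sensitive
# readings are analyzed at the nearest fog node, less urgent data goes to
# aggregation nodes, and the least urgent data is forwarded to the cloud.
def route_reading(latency_budget_ms: float) -> str:
    if latency_budget_ms < 10:        # most time-sensitive: analyze near the source
        return "fog node (local analysis)"
    elif latency_budget_ms < 1000:    # medium time-sensitive: aggregation node
        return "aggregation node"
    else:                             # least time-sensitive: long-term cloud analytics
        return "cloud"

readings = {"heartbeat alarm": 5, "room temperature": 500, "monthly usage log": 60000}
for name, budget in readings.items():
    print(name, "->", route_reading(budget))
```

Real fog deployments make this decision per message at the fog node itself, but the principle is the same: the tighter the latency budget, the closer to the data source the analysis must run.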
1.1.7 Serverless Computing
Serverless computing is an auto-scaling form of computing in which a company or organization obtains backend services from a serverless vendor [32]. In serverless computing, the backend services are provided on the basis of the user's requirements (event-driven); hence, users do not need to worry about the underlying architecture and infrastructure, and just need to implement their code and further processing without any conundrum. An organization taking services from a serverless vendor is charged only for the extent of the processing carried out, and need not pay
TABLE 1.1 Differences between Fog Computing and Edge Computing

1. Fog: The fog nodes are responsible for deciding whether the data generated by the various smart IoT devices is to be processed using their own resources or sent to the cloud for storing and computing. Edge: Each edge node manages, stores, and processes the data right at the edge of the source, i.e. locally, instead of transferring it to the cloud.
2. Fog: Fog computing distributes the storage, communication, and further processing of data close to the cloud, keeping it in the control of the end user, rather than exactly close to the source of data. Edge: Edge computing locates the storage and other processing close to the source of data; here also, the control is in the hands of the end users.
3. Fog: Fog computing manages intelligence down to the level of the local area network of the network architecture. Edge: Edge computing moves the intelligence of the edge gateways into the devices generating the data, thereby keeping the entire computing close to the devices.
4. Fog: Fog nodes work with the cloud. Edge: Cloud involvement is eliminated.
5. Fog: Fog computing forms a hierarchical, layered network. Edge: Edge computing is limited to individual edge nodes, which do not form a network.
a fixed fee for the server architecture, the number of servers, or the bandwidth [33]. The price paid scales with the services provided by the serverless vendor to the organization; hence, serverless computing is also called auto-scaling computing. Serverless vendors provide storage and database services to the users [34].

As discussed, the users are not aware of the server architecture or the number of servers used. This does not mean that the environment is free of servers; physical servers are present and are used, but the users are not aware of their existence. Since the underlying architecture need not be considered by the user, this serverless environment becomes cost-effective and more user-friendly (the cost-effectiveness is illustrated in Figure 1.7). Apart from being cost-effective, serverless computing has other benefits as well: (i) fully managed services, (ii) dynamic scalability and flexibility, (iii) paying only for the resources or services used (as discussed above), (iv) enhanced productivity, (v) flawless connections, (vi) better turnaround time, (vii) no infrastructure management, and (viii) efficient use of resources, among others.
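The event-driven model above can be sketched as a single function that the platform invokes once per event. The handler signature below mimics the common function-as-a-service pattern (an event dictionary in, a response dictionary out); the event fields and function name are invented for illustration, not tied to any particular vendor.

```python
import json

# A serverless function: the vendor's platform calls this once per event.
# The organization writes only this code; servers, scaling, and billing
# are handled by the platform. Event fields here are hypothetical.
def handler(event, context=None):
    name = event.get("name", "world")
    body = {"message": f"Hello, {name}!"}
    return {"statusCode": 200, "body": json.dumps(body)}

# Simulating two platform invocations (in production, events would arrive
# as HTTP requests, queue messages, file uploads, and so on).
print(handler({"name": "IoT sensor"}))
print(handler({}))
```

Billing in this model attaches to invocations of `handler`, not to any machine: if no events arrive, nothing runs and nothing is charged, which is exactly the pay-per-use property described above.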
1.1.8 Implanted Technology
Implanted technology is a type of AI through which we can enhance the capability of human body organs with the help of computer vision [35]. As discussed in Section 1.1.1, computer vision is a technology that replicates the power of the human brain and senses in a machine, and with the help of that machine we can enhance the power of our own senses. The main question that arises is: why the need for implanted technology? To answer it, we need to look at its uses first. The need can be better analyzed through a few examples: (i) humans do not have a memory that stores data forever, the way a computer hard disk does, so brain implants could make our memory permanent, letting us remember every moment and experience of our life from birth until death; (ii) in today's environment we are surrounded by many diseases, and some, such as cancer, are not easily recognized and hence become fatal if not detected early. Here, implanted technology can help us recognize the disease at its onset so that it can be cured sooner; and (iii) in the future, we could use implanted technology in day-to-day life by turning our organs into smart devices, which, in turn, would remove the need for physical electronic devices such as smartphones and laptops, because we could use AR and computer vision to achieve the same. Implanted technology uses brain-computer interface (BCI) technology, which, in turn, is also a heavily researched interdisciplinary area. The layout of a BCI is shown in Figure 1.8.
BCI is a technology with which we can provide instructions to a machine through the input signals of the brain. We all know that our brain is made up of neurons, as shown in Figure 1.9, and these neurons carry electrical signals (impulses) which are generated when a person thinks or feels anything. BCI technology senses those signals and converts them to a computer-readable (binary) format. Machine Learning, the subset of AI that focuses on training a machine by feeding it data to produce useful outputs, and Deep Learning, the subset of Machine Learning that focuses on replicating neurons in a machine through an NN, as shown in Figure 1.10, are both relevant here (both are discussed in Sections 1.1 and 1.1.2 of this chapter). An NN is a mathematical model of the neurons of our brain, and with the help of this network we try to replicate the functionality of the brain in our machine.
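As a toy illustration of turning brain signals into a computer-readable binary format, the sketch below thresholds a synthetic voltage trace into a spike train. Real BCI pipelines involve far more signal processing; both the signal and the detection threshold here are invented for the example.

```python
import numpy as np

# Synthetic "electrode" trace: baseline noise with three action-potential-like peaks.
rng = np.random.default_rng(1)
signal = rng.normal(scale=0.05, size=100)
for t in (20, 50, 80):            # spike times, arbitrary for the demo
    signal[t] += 1.0

# Convert to a computer-readable binary format: 1 where the voltage
# crosses a detection threshold, 0 elsewhere.
threshold = 0.5
spikes = (signal > threshold).astype(int)

print("spike count:", spikes.sum())
print("spike times:", np.flatnonzero(spikes).tolist())
```

The resulting 0/1 sequence is the kind of digitized representation the text refers to: once the action potentials are in binary form, ordinary Machine Learning and Deep Learning methods can be applied to interpret them.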
FIGURE 1.10 A neural network. (From Abdi, H., Journal of Biological Systems, 2(03), 247–281, 1994. With permission.) [36].
Our neurons work on action potentials, and it is the propagation of the action potential (Figure 1.11) that implanted technology tries to sense, generating from it machine-level code through which the machine can understand the action that needs to be performed. Notwithstanding the shortcomings and hurdles, with the action potential mechanism now reasonably well understood, further advances in microelectronics have enabled the development of the neural probe, shown in Figure 1.12. Recent studies have improved and advanced the possibility of the neural probe by introducing the idea of "neural dust": a large number of wireless electrodes that can be attached directly to various nerves, thereby creating numerous wireless sensing nodes inside the body.

Anticipating what technology is to come in the future is an assignment that engineers generally leave to futurists, sci-fi authors, and movie makers. Nevertheless, as scientists and researchers, the whole science community takes motivation from such futurists and works toward a common goal by trying to benefit from these ideas, which invigorates scholarly discussion and research so that all potential situations may unfold later on. At the
FIGURE 1.11 Action potential and its flow. (From Sobot, R., IEEE Technology and Society Magazine, 37, 35–45, 2018. With permission.) [37].
FIGURE 1.12 A neural probe. (From Sobot, R., IEEE Technology and Society Magazine, 37, 35–45, 2018. With permission.) [37].
the real world and hence take decisions or actions based on the virtual information being presented. The VR environment can manipulate the user with virtual information. Although this sounds simple, our brain is so complex and evolved that, even though confused for some time, it can ultimately tell the difference between the virtual world and the real world. During that moment of illusion, however, the user can be manipulated by the system. To achieve this, the VR environment provides a set of systems and devices which users employ to experience the virtual world. These devices can be gloves, headsets, glasses, etc.; they emulate our sensory organs, i.e. our senses, and generate an impression of reality [39]. The purpose of VR is simply to engage and attract users and customers to a particular field; VR is also used wherever there is a prospect of large expense, danger, or impracticality. VR changes the perception of users simply by presenting a particular object in 3D format. VR is currently being used in various applications, including (i) entertainment, (ii) sports, (iii) medicine, (iv) architecture, and (v) the arts [40]. Figure 1.13 depicts a few examples of VR environments.
One of the latest and biggest technology trends is AR, which is a variation of virtual environments (VE) or VR [44]. Although the concept is old, its widespread use is quite recent. As discussed above, VE technologies immerse users completely inside the virtual environment: the user becomes so engrossed that he or she becomes part of that environment and starts interpreting the virtual world as the real world. Vehicle-testing simulation environments are classic examples of VR or VE technologies. The variation comes in the form of AR [45,46], where the user can still tell the difference between the virtual world and the real world, i.e. virtual objects are superimposed on real-world objects. Hence, AR complements reality rather than entirely replacing it. An example of AR is the game Pokémon Go, where players can locate, capture, and play with Pokémon characters that turn up in the real world on real objects, such as parks, subways, bathrooms, lawns, and roofs. The game became such a craze that it had to be restricted or banned in many parts of the world. Apart from games, AR is being used in many other fields as well: (i) news broadcasting, where presenters or anchors can draw lines and other shapes on the screen; (ii) navigation systems, where routes are superimposed on the actual roads; (iii) defense, where military personnel can view their status and positioning on their helmets; (iv) medicine, where neurosurgeons sometimes use AR projections of a 3D brain to assist in surgeries; (v) airports, where the ground crew wears AR glasses to guide aircraft in navigation as well as in landings and take-offs; (vi) historical sites, where AR projections bring the sites to life and let tourists re-live a past era; (vii) robot path planning; and (viii) retail, where the furniture company IKEA uses an AR application called IKEA Place that lets customers see how furniture and other household items would fit in their houses or offices. Figure 1.14 will help the readers understand the concept of AR in a better way.
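The superimposition at the heart of AR can be illustrated with a standard pinhole-camera projection: given the camera's pose, a virtual 3D point is mapped to the 2D pixel where it should be drawn over the live camera image. This is a minimal sketch of the idea; the intrinsic values below are illustrative, not taken from any particular device or from the systems described above.

```python
import numpy as np

def project_point(point_world, R, t, K):
    """Project a virtual 3D point (world frame) onto the image plane.

    R, t: camera extrinsics (world-to-camera rotation and translation)
    K:    3x3 camera intrinsic matrix
    Returns pixel coordinates (u, v).
    """
    p_cam = R @ point_world + t      # world frame -> camera frame
    u, v, w = K @ p_cam              # perspective projection (homogeneous)
    return u / w, v / w              # normalize to pixel coordinates

# Illustrative intrinsics: 800-pixel focal length, principal point at the
# centre of a 640x480 image.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                        # camera aligned with the world axes
t = np.zeros(3)                      # camera at the world origin

# A virtual object 2 m straight ahead projects to the image centre,
# i.e. pixel (320, 240); the renderer draws it there over the camera feed.
print(project_point(np.array([0.0, 0.0, 2.0]), R, t, K))
```

Real AR systems estimate R and t continuously by tracking the device's pose, but the per-frame drawing step reduces to this projection for every virtual vertex.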
Another variation of VR is mixed reality, where the virtual environment is combined with the real world [47]; hence, in mixed reality, the user interacts with both the real world and the virtual world. Although similar, AR and mixed reality differ: mixed reality is an extension of AR that brings together the best of both the real and virtual worlds.
1.1.10 Digital Twin
A digital twin is a replica or virtual representation of physical assets or
products. The term digital twin was introduced by Dr. Michael Grieves
FIGURE 1.14 (a) AR in the Pokémon Go game. (With permission from Marc Bruxelle/Shutterstock.com.) (b) AR being used in a furniture app. (From https://techcrunch.com/.) [See Ref. 48]. (c) AR being used in phones for suggestions and games. (From https://www.diva-portal.org/.) [See Ref. 49]. (d) AR used by GMaps. (From https://www.techannouncement.in/.)
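As a rough illustration of the replica idea, a digital twin can be modelled as an object that mirrors the telemetry reported by its physical counterpart and can then be queried or analysed without touching the real asset. This is a minimal sketch under assumed names: the class, fields, and threshold below are invented for illustration, not drawn from any product or standard.

```python
from dataclasses import dataclass, field

@dataclass
class PumpTwin:
    """Minimal digital twin of a hypothetical pump: mirrors its sensor state."""
    asset_id: str
    rpm: float = 0.0
    temperature_c: float = 0.0
    history: list = field(default_factory=list)

    def ingest(self, telemetry: dict) -> None:
        # Update the virtual replica from the physical asset's latest readings.
        self.rpm = telemetry.get("rpm", self.rpm)
        self.temperature_c = telemetry.get("temperature_c", self.temperature_c)
        self.history.append(telemetry)

    def overheating(self, limit_c: float = 80.0) -> bool:
        # Analysis runs on the twin, not on the physical pump itself.
        return self.temperature_c > limit_c

twin = PumpTwin("pump-07")
twin.ingest({"rpm": 1450.0, "temperature_c": 85.2})
print(twin.overheating())  # True
```

Production digital-twin platforms add simulation, prediction, and lifecycle data on top, but the core pattern is the same: a continuously synchronized virtual representation of a physical asset.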