
Cloud Computing Syllabus


R18 B.Tech.

CSE Syllabus JNTU HYDERABAD

CS714PE: CLOUD COMPUTING (Professional Elective - IV)

IV Year B.Tech. CSE I-Sem                         L T P C
                                                  3 0 0 3
Pre-requisites:
1. A course on “Computer Networks”
2. A course on “Operating Systems”
3. A course on “Distributed Systems”

Course Objectives:
 This course provides an insight into cloud computing
 Topics covered include: distributed system models, different cloud service models, service-oriented
architectures, cloud programming and software environments, and resource management.

Course Outcomes:
 Ability to understand various service delivery models of a cloud computing architecture.
 Ability to understand the ways in which the cloud can be programmed and deployed.
 Understanding cloud service providers.

UNIT - I
Computing Paradigms: High-Performance Computing, Parallel Computing, Distributed Computing,
Cluster Computing, Grid Computing, Cloud Computing, Bio computing, Mobile Computing, Quantum
Computing, Optical Computing, Nano computing.

UNIT - II
Cloud Computing Fundamentals: Motivation for Cloud Computing, The Need for Cloud Computing,
Defining Cloud Computing, Definition of Cloud computing, Cloud Computing Is a Service, Cloud
Computing Is a Platform, Principles of Cloud computing, Five Essential Characteristics, Four Cloud
Deployment Models

UNIT - III
Cloud Computing Architecture and Management: Cloud architecture, Layer, Anatomy of the Cloud,
Network Connectivity in Cloud Computing, Applications on the Cloud, Managing the Cloud, Managing
the Cloud Infrastructure, Managing the Cloud Application, Migrating Application to Cloud, Phases of
Cloud Migration, Approaches for Cloud Migration.

UNIT - IV
Cloud Service Models: Infrastructure as a Service, Characteristics of IaaS. Suitability of IaaS, Pros
and Cons of IaaS, Summary of IaaS Providers, Platform as a Service, Characteristics of PaaS,
Suitability of PaaS, Pros and Cons of PaaS, Summary of PaaS Providers, Software as a Service,
Characteristics of SaaS, Suitability of SaaS, Pros and Cons of SaaS, Summary of SaaS Providers,
Other Cloud Service Models.

UNIT - V
Cloud Service Providers: EMC, EMC IT, Captiva Cloud Toolkit, Google, Cloud Platform, Cloud
Storage, Google Cloud Connect, Google Cloud Print, Google App Engine, Amazon Web Services,
Amazon Elastic Compute Cloud, Amazon Simple Storage Service, Amazon Simple Queue Service,
Microsoft, Windows Azure, Microsoft Assessment and Planning Toolkit, SharePoint, IBM, Cloud
Models, IBM SmartCloud, SAP Labs, SAP HANA Cloud Platform, Virtualization Services Provided by
SAP, Salesforce, Sales Cloud, Service Cloud: Knowledge as a Service, Rackspace, VMware,
Manjrasoft, Aneka Platform



TEXT BOOK:
1. Essentials of Cloud Computing, K. Chandrasekaran, CRC Press, 2014.
REFERENCE BOOKS:
1. Cloud Computing: Principles and Paradigms, Rajkumar Buyya, James Broberg and Andrzej
M. Goscinski, Wiley, 2011.
2. Distributed and Cloud Computing, Kai Hwang, Geoffrey C. Fox, Jack J. Dongarra, Elsevier,
2012.
3. Cloud Security and Privacy: An Enterprise Perspective on Risks and Compliance, Tim Mather,
Subra Kumaraswamy, Shahed Latif, O’Reilly, SPD, rp 2011.

UNIT – I Notes
Computing Paradigms: High-Performance Computing, Parallel Computing, Distributed Computing,
Cluster Computing, Grid Computing, Cloud Computing, Bio computing, Mobile Computing, Quantum
Computing, Optical Computing, Nano computing.

1. High-Performance Computing

It is the use of parallel processing for running advanced application programs efficiently,
reliably, and quickly. The term applies especially to systems that function above a teraflop
(10^12 floating-point operations per second). High-performance computing is occasionally used
as a synonym for supercomputing, although technically a supercomputer is a system that
performs at or near the highest operational rate currently achieved by computers. Some
supercomputers work at more than a petaflop (10^15 floating-point operations per second). The
most common users of HPC systems are scientists, engineers, and academic institutions. Some
government agencies, particularly the military, also rely on HPC for complex applications.

High-performance Computers:
High Performance Computing (HPC) generally refers to the practice of combining computing
power to deliver far greater performance than a typical desktop or workstation, in order to
solve complex problems in science, engineering, and business.
The high-performance computers of interest to small and medium-size businesses today are
really clusters of computers, whose key elements are processors, memory, disks, and the
operating system. Each individual computer in a commonly configured small cluster has
between one and four processors, and today's processors typically have from 2 to 4 cores.
HPC people often refer to the individual computers in a cluster as nodes. A cluster of
interest to a small business could have as few as 4 nodes, or 16 cores. A common cluster
size in many businesses is between 16 and 64 nodes, or from 64 to 256 cores. The main
reason to use a cluster is that its individual nodes can work together to solve a problem
larger than any one computer can easily solve. These nodes are connected so that they can
communicate with each other in order to produce some meaningful work. There are two
popular HPC operating systems, i.e., Linux and Windows. Most installations run Linux
because of Linux's legacy in supercomputing and large-scale machines, but one can choose
according to his/her requirements.
Importance of High-Performance Computing:
1. It is used for scientific discoveries, game-changing innovations, and improving quality of
life.
2. It is a foundation for scientific and industrial advancements.
3. As technologies like IoT, AI, and 3D imaging evolve, the amount of data used by
organizations grows exponentially; to keep up with this demand, we use high-performance
computers.
4. HPC is used to solve complex modeling problems in a spectrum of disciplines, including
AI, nuclear physics, climate modeling, etc.
5. HPC is applied to business uses such as data warehouses and transaction processing.

Need for High-Performance Computing:


1. It completes time-consuming operations in less time.
2. It completes operations under tight deadlines and performs a high number of
operations per second.
3. It is fast computing: we can compute in parallel over many computing elements (CPUs,
GPUs, etc.), connected by a very fast network.

Need for Ever-Increasing Performance:


1. Climate modeling
2. Drug discovery
3. Data Analysis
4. Protein folding
5. Energy research

How Does HPC Work?


User/Scheduler → Compute cluster → Data storage
To create a high-performance computing architecture, multiple computer servers are networked
together to form a compute cluster. Algorithms and software programs are executed
simultaneously on the servers, and the cluster is networked to data storage to retrieve the
results. All of these components work together to complete a diverse set of tasks.
To achieve maximum efficiency, each module must keep pace with the others; otherwise, the
performance of the entire HPC infrastructure would suffer.
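The scheduler → compute cluster → storage pipeline above can be sketched in miniature with Python's standard multiprocessing module standing in for the cluster nodes. All names here (node_work, run_cluster) are illustrative, not part of any real HPC stack:

```python
from multiprocessing import Pool

def node_work(chunk):
    """Work done by one 'node': a partial sum over its slice of the data."""
    return sum(x * x for x in chunk)

def run_cluster(data, nodes=4):
    """'Scheduler': split the data, farm chunks out to nodes, gather results."""
    size = (len(data) + nodes - 1) // nodes
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with Pool(processes=nodes) as pool:          # the 'compute cluster'
        partials = pool.map(node_work, chunks)   # chunks run simultaneously
    return sum(partials)                         # combine and 'store' the result

if __name__ == "__main__":
    data = list(range(1_000))
    print(run_cluster(data))  # same answer as the serial sum(x*x for x in data)
```

Note how efficiency depends on balanced chunks: if one node receives a much larger slice, the others sit idle waiting for it, which is exactly the "each module must keep pace" point above.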

Challenges with HPC

1. Cost: The cost of the hardware, software, and energy consumption is enormous, making
HPC systems exceedingly expensive to create and operate. Additionally, the setup and
management of HPC systems require qualified workers, which raises the overall cost.
2. Scalability: HPC systems must be made scalable so they may be modified or expanded as
necessary to meet shifting demands. But creating a scalable system is a difficult endeavour
that necessitates thorough planning and optimization.
3. Data Management: Data management can be difficult when using HPC systems since they
produce and process enormous volumes of data. These data must be stored and accessed
using sophisticated networking and storage infrastructure, as well as tools for data analysis
and visualization.
4. Programming: Parallel programming techniques, which can be more difficult than
conventional programming approaches, are frequently used in HPC systems. It might be
challenging for developers to learn how to create and optimise algorithms for parallel
processing.
5. Support for software and tools: To function effectively, HPC systems need specific
software and tools. The options available to users may be constrained by the fact that not
all software and tools are created to function with HPC equipment.
6. Power consumption and cooling: To maintain the hardware functioning at its best,
specialised cooling technologies are needed for HPC systems’ high heat production.
Furthermore, HPC systems consume a lot of electricity, which can be expensive and
difficult to maintain.

Applications of HPC
High Performance Computing (HPC) is a term used to describe the use of supercomputers and
parallel processing strategies to carry out difficult calculations and data analysis activities.
From scientific research to engineering and industrial design, HPC is employed in a wide range
of disciplines and applications. Here are a few of the most significant HPC use cases and
applications:
1. Scientific research: HPC is widely utilized in this sector, especially in areas like physics,
chemistry, and astronomy. With standard computer techniques, it would be hard to model
complex physical events, examine massive data sets, or carry out sophisticated
calculations.
2. Weather forecasting: The task of forecasting the weather is difficult and data-intensive,
requiring sophisticated algorithms and a lot of computational power. Simulated weather
models are executed on HPC computers to predict weather patterns.
3. Healthcare: HPC is being used more and more in the medical field for activities like
medication discovery, genome sequencing, and image analysis. Large volumes of medical
data can be processed by HPC systems rapidly and accurately, improving patient diagnosis
and care.
4. Energy and environmental studies: HPC is employed to simulate and model complex
systems, such as climate change and renewable energy sources, in the energy and
environmental sciences. Researchers can use HPC systems to streamline energy systems,
cut carbon emissions, and increase the resilience of our energy infrastructure.
5. Engineering and Design: HPC is used in engineering and design to model and evaluate
complex systems, like those found in vehicles, buildings, and aeroplanes. Virtual
simulations performed by HPC systems can assist engineers in identifying potential
problems and improving designs before they are built.

2. Parallel Computing :

Before diving into parallel computing, let's first take a look at how computer software was
traditionally computed and why that model fails in the modern era.
Computer software was written conventionally for serial computing. This meant that to solve a
problem, an algorithm divides the problem into smaller instructions. These discrete instructions
are then executed on the Central Processing Unit of a computer one by one. Only after one
instruction is finished does the next one start.
A real-life example of this would be people standing in a queue waiting for a movie ticket
when there is only one cashier. The cashier gives tickets to the people one by one. The
complexity of this situation increases when there are 2 queues and only one cashier.
So, in short, Serial Computing is the following:
1. In this, a problem statement is broken into discrete instructions.
2. Then the instructions are executed one by one.
3. Only one instruction is executed at any moment of time.

Look at point 3. This caused a huge problem in the computing industry, as only one
instruction was executed at any moment of time. It was a huge waste of hardware resources,
since only one part of the hardware runs for a particular instruction at a time. As problem
statements grew heavier and bulkier, so did the time taken to execute them. Examples of such
processors are the Pentium 3 and Pentium 4.
Now let’s come back to our real-life problem. We could definitely say that complexity will
decrease when there are 2 queues and 2 cashiers giving tickets to 2 persons simultaneously.
This is an example of Parallel Computing.
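The cashier analogy above can be sketched with Python's standard concurrent.futures: one "cashier" serves the queue serially, two serve it in parallel. The function and timing values are made up for illustration; the sleep stands in for service time:

```python
from concurrent.futures import ThreadPoolExecutor
import time

def serve(customer):
    """One 'cashier' issues one ticket; the sleep stands for service time."""
    time.sleep(0.01)
    return f"ticket for {customer}"

customers = [f"person-{i}" for i in range(8)]

# One cashier: customers are served strictly one by one (serial computing).
start = time.perf_counter()
serial = [serve(c) for c in customers]
serial_time = time.perf_counter() - start

# Two cashiers: two customers are served simultaneously (parallel computing).
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=2) as cashiers:
    parallel = list(cashiers.map(serve, customers))
parallel_time = time.perf_counter() - start

assert serial == parallel   # same tickets either way; only the time differs
print(f"1 cashier: {serial_time:.2f}s, 2 cashiers: {parallel_time:.2f}s")
```

The two-cashier run takes roughly half the time because two customers are being served at once, while the results themselves are identical to the serial run.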
Parallel Computing :
It is the use of multiple processing elements simultaneously for solving any problem. Problems
are broken down into instructions and are solved concurrently as each resource that has been
applied to work is working at the same time.
Advantages of Parallel Computing over Serial Computing are as follows:
1. It saves time and money as many resources working together will reduce the time and cut
potential costs.
2. It can be impractical to solve larger problems on Serial Computing.
3. It can take advantage of non-local resources when the local resources are finite.
4. Serial Computing ‘wastes’ the potential computing power, thus Parallel Computing makes
better work of the hardware.

Types of Parallelism:
1. Bit-level parallelism –
It is the form of parallel computing which is based on the increasing processor’s size. It
reduces the number of instructions that the system must execute in order to perform a task
on large-sized data.
Example: Consider a scenario where an 8-bit processor must compute the sum of two 16-
bit integers. It must first sum up the 8 lower-order bits, then add the 8 higher-order bits,
thus requiring two instructions to perform the operation. A 16-bit processor can perform
the operation with just one instruction.
2. Instruction-level parallelism –
Without it, a processor can issue at most one instruction in each clock cycle. Instructions
in a stream can be re-ordered and grouped, and then executed concurrently without
affecting the result of the program. This is called instruction-level parallelism.
3. Task Parallelism –
Task parallelism decomposes a task into subtasks and then allocates each subtask for
execution. The processors execute the sub-tasks concurrently.
4. Data-level parallelism (DLP) –
Instructions from a single stream operate concurrently on several data items. It is limited
by non-regular data-manipulation patterns and by memory bandwidth.
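The bit-level parallelism example above can be made concrete by simulating how an 8-bit processor must add two 16-bit integers in two instructions, low byte first and then high byte plus carry. This is a toy model of the arithmetic, not real hardware:

```python
def add16_on_8bit(a, b):
    """Add two 16-bit integers using only 8-bit additions plus a carry,
    the way an 8-bit processor must do it in two steps."""
    lo = (a & 0xFF) + (b & 0xFF)                         # step 1: lower 8 bits
    carry = lo >> 8                                      # carry into the high byte
    hi = ((a >> 8) & 0xFF) + ((b >> 8) & 0xFF) + carry   # step 2: upper 8 bits
    return ((hi & 0xFF) << 8) | (lo & 0xFF)              # result modulo 2**16

# A 16-bit processor performs this whole addition in a single instruction;
# the 8-bit simulation needs the two explicit steps above.
print(add16_on_8bit(40_000, 30_000))  # 4464, i.e. (40000 + 30000) % 65536
```

Widening the processor word thus halves the instruction count for this operation, which is exactly what bit-level parallelism buys.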
Why parallel computing?
 The real world runs in a dynamic nature: many things happen at the same time but in
different places concurrently. This data is extremely large to manage.
 Real-world data needs dynamic simulation and modeling, and parallel computing is the key
to achieving this.
 Parallel computing provides concurrency and saves time and money.
 Complex, large datasets and their management can be organized only by using parallel
computing's approach.
 It ensures the effective utilization of resources. The hardware is guaranteed to be used
effectively, whereas in serial computation only some part of the hardware was used and the
rest rendered idle.
 Also, it is impractical to implement real-time systems using serial computing.
Applications of Parallel Computing:
 Databases and Data mining.
 Real-time simulation of systems.
 Science and Engineering.
 Advanced graphics, augmented reality, and virtual reality.
Limitations of Parallel Computing:
 It involves issues such as communication and synchronization between multiple sub-tasks
and processes, which are difficult to achieve.
 The algorithms must be designed in such a way that they can be handled in a parallel
mechanism.
 The algorithms or programs must have low coupling and high cohesion, but it is difficult to
create such programs.
 Only more technically skilled and expert programmers can code a parallelism-based program
well.
Future of Parallel Computing: The computational landscape has undergone a great transition
from serial computing to parallel computing. Tech giants such as Intel have already taken a
step towards parallel computing by employing multicore processors. Parallel computation will
revolutionize the way computers work in the future, for the better. With the whole world
connecting to each other even more than before, parallel computing plays a key role in
keeping us connected. With faster networks, distributed systems, and multi-processor
computers, it becomes even more necessary.

3. Distributed Computing :

A distributed computer system consists of multiple software components that are on multiple
computers, but run as a single system. The computers that are in a distributed system can be
physically close together and connected by a local network, or they can be geographically distant
and connected by a wide area network. A distributed system can consist of any number of
possible configurations, such as mainframes, personal computers, workstations, minicomputers,
and so on. The goal of distributed computing is to make such a network work as a single
computer.

Distributed systems offer many benefits over centralized systems, including the following:
Scalability
The system can easily be expanded by adding more machines as needed.
Redundancy
Several machines can provide the same services, so if one is unavailable, work does not
stop. Additionally, because many smaller machines can be used, this redundancy does
not need to be prohibitively expensive.

Distributed computing systems can run on hardware that is provided by many vendors, and can
use a variety of standards-based software components. Such systems are independent of the
underlying software. They can run on various operating systems, and can use various
communications protocols. Some hardware might use UNIX or Linux as the operating system,
while other hardware might use Windows operating systems. For intermachine communications,
this hardware can use SNA or TCP/IP on Ethernet or Token Ring.
You can organize software to run on distributed systems by separating functions into two parts:
clients and servers. This is described in The client/server model. A common design of
client/server systems uses three tiers, as described in Three-tiered client/server architecture.
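As a minimal sketch of the client/server separation just described, the two roles can be demonstrated in one process with Python's standard socket module over TCP on localhost. The port choice, message, and function names are all arbitrary illustrations:

```python
import socket
import threading

def handle_one(srv):
    """Server role: accept one connection and echo the request back."""
    conn, _addr = srv.accept()
    with conn:
        request = conn.recv(1024).decode()
        conn.sendall(f"echo: {request}".encode())

def client(port):
    """Client role: send a request and read the server's reply."""
    with socket.create_connection(("127.0.0.1", port)) as sock:
        sock.sendall(b"hello")
        return sock.recv(1024).decode()

# Bind first (port 0 lets the OS pick a free port), then serve in a thread,
# so the client cannot connect before the server is listening.
srv = socket.create_server(("127.0.0.1", 0))
port = srv.getsockname()[1]
t = threading.Thread(target=handle_one, args=(srv,), daemon=True)
t.start()
reply = client(port)
t.join()
srv.close()
print(reply)  # echo: hello
```

In a real distributed system the two roles would run on different machines; only the hostname passed to the client changes, which is the point of the model.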

4. Cluster Computing

Cluster computing defines several computers linked on a network and implemented like an
individual entity. Each computer that is linked to the network is known as a node.
Cluster computing provides solutions to solve difficult problems by providing faster
computational speed, and enhanced data integrity. The connected computers implement
operations all together thus generating the impression like a single system (virtual device). This
procedure is defined as the transparency of the system.

Advantages of Cluster Computing


The advantages of cluster computing are as follows −
 Cost-Effectiveness − Cluster computing is considered much more cost-effective. These
computing systems provide better performance than mainframe computer devices at a
lower cost.
 Processing Speed − The processing speed of a computing cluster is comparable to that of
mainframe systems and other supercomputers around the globe.
 Increased Resource Availability − Availability plays an important role in cluster
computing systems. The work of failed active nodes can simply be transferred to other
active nodes on the server, providing high availability.
 Improved Flexibility − In cluster computing, the configuration can be updated and
improved by inserting new nodes into the existing server.

Types of Cluster Computing


The types of cluster computing are as follows −
High Availability (HA) and Failover Clusters
These cluster models provide availability of services and resources in an uninterrupted
manner by using the system's implicit redundancy. The basic idea is that if a node fails,
its applications and services can be made available on other nodes. These clusters serve as
the platform for mission-critical applications, mail, file, and application servers.
Load Balancing Clusters
This cluster distributes all the incoming traffic/requests for resources among nodes that
run the same programs on the same kind of machines. In this cluster model, some nodes are
responsible for tracking requests, and if a node fails, the requests are redistributed
amongst the available nodes. Such a solution is generally used on web server farms.
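The load-balancing behaviour described here, even distribution plus redistribution when a node fails, can be sketched as a round-robin dispatcher. This is purely illustrative; real balancers also weigh current load and health-check their nodes:

```python
from itertools import cycle

class RoundRobinBalancer:
    """Distribute incoming requests evenly across identical nodes,
    dropping any node that has been marked as failed."""
    def __init__(self, nodes):
        self.alive = list(nodes)
        self._ring = cycle(self.alive)

    def mark_failed(self, node):
        self.alive.remove(node)
        self._ring = cycle(self.alive)   # redistribute among the survivors

    def dispatch(self, request):
        node = next(self._ring)
        return f"{node} handles {request}"

lb = RoundRobinBalancer(["node-1", "node-2", "node-3"])
print([lb.dispatch(r) for r in ("req-A", "req-B", "req-C", "req-D")])
lb.mark_failed("node-2")        # failover: node-2 goes down
print(lb.dispatch("req-E"))     # the request still lands on a surviving node
```

The key property is that a node failure is invisible to clients: requests keep flowing, just over fewer nodes.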

HA & Load Balancing Clusters


This cluster model combines both cluster features, resulting in boosted availability and
scalability of services and resources. This kind of cluster is generally used for email, web,
news, and FTP servers.
Distributed & Parallel Processing Clusters
This cluster model boosts availability and performance for applications that have large
computational tasks. A large computational task is divided into smaller tasks and
distributed across the stations. Such clusters are generally used for numerical computing or
financial analysis that needs high processing power.
5. Grid Computing
Grid Computing can be defined as a network of computers working together to perform a task
that would rather be difficult for a single machine. All machines on that network work under
the same protocol to act as a virtual supercomputer. The task that they work on may include
analyzing huge datasets or simulating situations that require high computing power. Computers
on the network contribute resources like processing power and storage capacity to the
network.
Grid Computing is a subset of distributed computing, where a virtual supercomputer comprises
machines on a network connected by some bus, mostly Ethernet or sometimes the Internet. It
can also be seen as a form of Parallel Computing where instead of many CPU cores on a single
machine, it contains multiple cores spread across various locations. The concept of grid
computing isn’t new, but it is not yet perfected as there are no standard rules and protocols
established and accepted by people.
Working:
A Grid computing network mainly consists of these three types of machines
1. Control Node: A computer, usually a server or a group of servers which administrates the
whole network and keeps the account of the resources in the network pool.
2. Provider: The computer contributes its resources to the network resource pool.
3. User: The computer that uses the resources on the network.
When a computer makes a request for resources to the control node, the control node gives the
user access to the resources available on the network. When it is not in use it should ideally
contribute its resources to the network. Hence a normal computer on the node can swing in
between being a user or a provider based on its needs. The nodes may consist of machines with
similar platforms using the same OS called homogeneous networks, else machines with
different platforms running on various different OSs called heterogeneous networks. This is the
distinguishing part of grid computing from other distributed computing architectures.
For controlling the network and its resources, a software/networking protocol generally
known as middleware is used. It is responsible for administrating the network, and the
control nodes are merely its executors. Since a grid computing system should use only the
unused resources of a computer, it is the job of the control node to ensure that no provider
is overloaded with tasks.
Another job of the middleware is to authorize any process that is executed on the network.
In a grid computing system, a provider gives the user permission to run anything on its
computer, which is a huge security threat to the network; hence the middleware must ensure
that no unwanted task is executed on the network.
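The two control-node duties just described, keeping providers from being overloaded and refusing unauthorized tasks, can be sketched as a toy scheduler. All names (ControlNode, submit, the task labels) are hypothetical:

```python
class ControlNode:
    """Toy grid middleware: tracks provider load and authorizes tasks."""
    def __init__(self, providers, allowed_tasks):
        self.load = {p: 0 for p in providers}   # tasks currently on each provider
        self.allowed = set(allowed_tasks)       # middleware's authorization list

    def submit(self, task):
        if task not in self.allowed:
            raise PermissionError(f"task {task!r} is not authorized")
        # Pick the least-loaded provider so no single node is overloaded.
        provider = min(self.load, key=self.load.get)
        self.load[provider] += 1
        return provider

grid = ControlNode(["prov-1", "prov-2"], allowed_tasks={"simulate", "analyze"})
print(grid.submit("simulate"))   # prov-1
print(grid.submit("analyze"))    # prov-2 (prov-1 is now busier)
try:
    grid.submit("mine-coins")    # rejected by the middleware
except PermissionError as e:
    print(e)
```

Real middleware such as the systems descended from the Foster/Kesselman blueprint adds discovery, accounting, and security layers, but the core loop is this: authorize, then place work on the least-loaded provider.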
The meaning of the term Grid Computing has changed over the years, according to “The Grid:
Blueprint for a new computing infrastructure” by Ian Foster and Carl Kesselman published in
1999, the idea was to consume computing power like electricity is consumed from a power
grid. This idea is similar to the current concept of cloud computing, whereas now grid
computing is viewed as a distributed collaborative network. Currently, grid computing is being
used in various institutions to solve a lot of mathematical, analytical, and physics problems.
Advantages of Grid Computing:
1. It is not centralized, as there are no servers required, except the control node which is just
used for controlling and not for processing.
2. Multiple heterogeneous machines i.e. machines with different Operating Systems can use a
single grid computing network.
3. Tasks can be performed parallelly across various physical locations and the users don’t
have to pay for them (with money).

Disadvantages of Grid Computing:


1. The grid software is still in the evolution stage.
2. A super-fast interconnect between computer resources is the need of the hour.
3. Licensing across many servers may make it prohibitive for some applications.
4. Many groups are reluctant to share resources.
5. Trouble in the control node can bring the whole network to a halt.

6. Cloud Computing


Cloud computing is a virtualization-based technology that allows us to create, configure, and
customize applications via an internet connection. The cloud technology includes a development
platform, hard disk, software application, and database.

What is Cloud Computing

The term cloud refers to a network or the internet. It is a technology that uses remote servers on
the internet to store, manage, and access data online rather than local drives. The data can be
anything such as files, images, documents, audio, video, and more.

The following operations can be performed using cloud computing:

o Developing new applications and services
o Storage, back up, and recovery of data
o Hosting blogs and websites
o Delivery of software on demand
o Analysis of data
o Streaming videos and audios
Why Cloud Computing?

Both small and large IT companies traditionally provide their own IT infrastructure. That
means every IT company needs a server room; it is the basic need of an IT company.

In that server room, there should be a database server, mail server, networking, firewalls,
routers, modems, switches, sufficient QPS (queries per second, i.e., how much load the server
can handle), a configurable system, high network speed, and maintenance engineers.

To establish such IT infrastructure, we need to spend lots of money. To overcome all these
problems and to reduce the IT infrastructure cost, Cloud Computing comes into existence.

Characteristics of Cloud Computing

The characteristics of cloud computing are given below:

1) Agility : The cloud works in a distributed computing environment. It shares resources
among users and works very fast.

2) High availability and reliability : The availability of servers is high and more reliable
because the chances of infrastructure failure are minimal.

3) High Scalability : Cloud offers "on-demand" provisioning of resources on a large scale,
without requiring engineers to provision for peak loads.

4) Multi-Sharing : With the help of cloud computing, multiple users and applications can
work more efficiently with cost reductions by sharing common infrastructure.

5) Device and Location Independence

Cloud computing enables the users to access systems using a web browser regardless of their
location or what device they use e.g. PC, mobile phone, etc. As infrastructure is off-
site (typically provided by a third-party) and accessed via the Internet, users can connect
from anywhere.

6) Maintenance : Maintenance of cloud computing applications is easier, since they do not
need to be installed on each user's computer and can be accessed from different places. This
also reduces the cost.

7) Low Cost : Using cloud computing reduces cost, because an IT company need not set up its
own infrastructure and pays only as per its usage of resources.

8) Services in the pay-per-use mode : Application Programming Interfaces (APIs) are provided
to the users so that they can access services on the cloud by using these APIs and pay the
charges as per the usage of services.

7. Mobile Computing


Mobile Computing refers to a technology that allows transmission of data, voice, and video
via a computer or any other wireless-enabled device. It is free from having a connection with
a fixed physical link. It facilitates users to move from one physical location to another
during communication.


Introduction of Mobile Computing

Mobile Computing is a technology that provides an environment that enables users to transmit
data from one device to another device without the use of any physical link or cables.

In other words, you can say that mobile computing allows transmission of data, voice and video
via a computer or any other wireless-enabled device without being connected to a fixed physical
link. In this technology, data transmission is done wirelessly with the help of wireless devices
such as mobiles, laptops etc.

It is only because of mobile computing technology that you can access and transmit data
from any remote location without being physically present there. Mobile computing technology
provides a vast coverage diameter for communication. It is one of the fastest and most
reliable sectors of the computing technology field.

The concept of Mobile Computing can be divided into three parts:

o Mobile Communication
o Mobile Hardware
o Mobile Software
Mobile Communication

Mobile Communication specifies a framework that is responsible for the working of mobile
computing technology. In this case, mobile communication refers to an infrastructure that
ensures seamless and reliable communication among wireless devices. This framework ensures
the consistency and reliability of communication between wireless devices. The mobile
communication framework consists of communication devices such as protocols, services,
bandwidth, and portals necessary to facilitate and support the stated services. These devices are
responsible for delivering a smooth communication process.

Mobile communication can be divided into the following four types:

1. Fixed and Wired
2. Fixed and Wireless
3. Mobile and Wired
4. Mobile and Wireless

Fixed and Wired: In Fixed and Wired configuration, the devices are fixed at a position, and
they are connected through a physical link to communicate with other devices.

For Example, Desktop Computer.

Fixed and Wireless: In the Fixed and Wireless configuration, the devices are fixed at a
position, and they are connected through a wireless link to communicate with other devices.

For Example, Communication Towers, WiFi router

Mobile and Wired: In the Mobile and Wired configuration, some devices are wired and some are
mobile. Together they communicate with other devices.

For Example, Laptops.

Mobile and Wireless: In the Mobile and Wireless configuration, the devices can
communicate with each other irrespective of their position. They can also connect to
any network without using any wired device.

For Example, WiFi Dongle.

Mobile Hardware

Mobile hardware consists of mobile devices or device components that can be used to receive or
access the service of mobility. Examples of mobile hardware can be smartphones, laptops,
portable PCs, tablet PCs, Personal Digital Assistants, etc.
These devices have a built-in transceiver that can send and receive signals. They
are capable of operating in full duplex, meaning they can send and receive signals
at the same time: one device does not have to wait until the other has finished
communicating before initiating its own transmission.
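As an illustrative sketch, full-duplex exchange can be simulated in plain Python with a connected socket pair, where each endpoint sends before it receives and neither side waits for the other. The socket pair merely stands in for a real wireless link, and all names here are illustrative:

```python
# Sketch: full-duplex communication over a connected socket pair.
# Each endpoint transmits first and receives second, showing that
# neither side must wait for the other to finish before sending.
import socket
import threading

def endpoint(sock, name, message, received):
    sock.sendall(message)              # transmit without waiting for the peer
    received[name] = sock.recv(1024)   # then collect the peer's message

a, b = socket.socketpair()             # stand-in for a wireless link
received = {}
t1 = threading.Thread(target=endpoint, args=(a, "A", b"hello from A", received))
t2 = threading.Thread(target=endpoint, args=(b, "B", b"hello from B", received))
t1.start(); t2.start()
t1.join(); t2.join()
print(received["A"])  # b'hello from B'
print(received["B"])  # b'hello from A'
a.close(); b.close()
```

Because both endpoints transmit simultaneously, each one's message arrives while its own is still in flight, which is exactly the full-duplex behaviour described above.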

Mobile Software

Mobile software is the program that runs on mobile hardware. It is designed to deal
capably with the characteristics and requirements of mobile applications. It is the
operating system of the mobile device, in other words the heart of the mobile
system, and is the essential component that operates the device.

It provides portability to mobile devices and supports wireless communication.

Applications of Mobile Computing

Following is a list of some significant fields in which mobile computing is generally applied:

o Web or Internet access.
o Global Positioning System (GPS).
o Emergency services.
o Entertainment services.
o Educational services.


8. Quantum Computing

Quantum Computing is the use of quantum-mechanical phenomena to solve complex and
massive computations quickly and efficiently. Just as classical computers perform
classical computations, a quantum computer performs quantum computations. Some
computations are so complex that solving them on classical computers is practically
impossible. The word 'Quantum' comes from quantum mechanics, the branch of physics
that describes the physical behaviour of particles such as electrons and photons.
Quantum mechanics is the fundamental framework for describing and understanding
nature at this level, which is why quantum computation is suited to dealing with
such complexity.

Quantum Computing is a subfield of Quantum Information Science and describes how
best to deal with complicated computations. Quantum mechanics is based on the
phenomena of superposition and entanglement, which are used to perform quantum
computations.

Quantum calculations are performed on a quantum computer, which is quite different
from a classical computer. Although the concept of quantum computing was proposed
decades ago, it did not gain much popularity at first.

Superposition and Entanglement

Quantum mechanics deals with the smallest particles found in nature, such as
electrons and photons, known as quantum particles. Superposition is the ability of a
quantum system to be in multiple states at the same time.

For example, imagine a person who could be present in more than one place at the
same time, or something that is up, down, here and there simultaneously. This is
what superposition means.
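In quantum mechanics this is written as a state vector: a qubit in superposition holds two complex amplitudes whose squared magnitudes give the measurement probabilities. A minimal sketch in plain Python (the variable names are illustrative):

```python
# Sketch: a qubit as two complex amplitudes (alpha for |0>, beta for |1>)
# with |alpha|^2 + |beta|^2 = 1. An equal superposition gives each
# measurement outcome a probability of one half.
import math

alpha = 1 / math.sqrt(2)   # amplitude of |0>
beta = 1 / math.sqrt(2)    # amplitude of |1>

p0 = abs(alpha) ** 2       # probability of measuring 0
p1 = abs(beta) ** 2        # probability of measuring 1
print(p0, p1)              # 0.5 and 0.5, up to floating-point rounding
assert abs(p0 + p1 - 1.0) < 1e-9   # probabilities always sum to 1
```

Until the qubit is measured, both amplitudes are present at once; measurement picks one outcome with the probabilities computed above.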

Entanglement is a very strong correlation between quantum particles. Entangled
particles are so strongly linked that even if one is placed at one end of the
universe and the other at the opposite end, they respond to each other
instantaneously.

Einstein described entanglement as 'spooky action at a distance'. Entanglement is
thus a strong bond between particles for which distance does not matter.

Quantum Computer

A Quantum Computer is a device used for performing quantum calculations, which are
highly complex in nature. It stores data in the form of qubits, also known as
quantum bits. A quantum computer can simulate problems that a classical computer
(the kind we currently use) cannot, and it can solve certain computational problems
far faster than a classical machine.

For example, simple arithmetic such as the product 500 * 187625 is trivial for a
classical computer, so a quantum computer offers no advantage there. The speed-up
appears on problems such as factoring very large integers or searching huge solution
spaces, where a quantum algorithm can be dramatically faster than the best known
classical one.

Currently, researchers are applying quantum computers in the field of cybersecurity,
both to break codes and to encrypt electronic communications, in order to achieve
better security and data protection.
What are Quantum Bits

Quantum bits, or qubits, are the storage unit of quantum computers: all information
in a quantum computer is stored in the form of qubits. Qubits are realized using
subatomic particles such as electrons or photons. Generating and managing qubits is
difficult and remains a challenging task for the scientists working in this field.
It is qubits that carry the properties of superposition and entanglement: a qubit
can represent combinations of 1 and 0 at the same time, which is superposition.
Researchers use microwave beams or lasers to manipulate qubits. When measured, a
qubit collapses from its superposition to a definite state of 1 or 0. Entanglement
means that two members of a pair share a single quantum state: when two qubits of an
entangled pair are placed far apart and the state of one changes, the state of the
other changes instantaneously in a predictable manner. A connected group of qubits
has far more computational power than the same number of classical bits.
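The collapse and the correlated behaviour of an entangled pair can be simulated classically for illustration. The sketch below samples measurements of the Bell state (|00> + |11>)/sqrt(2); the outcome labels and function names are illustrative, not any library's API:

```python
# Sketch: sampling measurements of the Bell state (|00> + |11>)/sqrt(2).
# Only "00" and "11" ever occur, each with probability 1/2, so the two
# qubits always agree -- the correlation described as entanglement.
import math
import random

amplitudes = {"00": 1 / math.sqrt(2), "01": 0.0,
              "10": 0.0, "11": 1 / math.sqrt(2)}

def measure(amps, rng):
    r = rng.random()
    total = 0.0
    for outcome, amp in amps.items():
        total += abs(amp) ** 2     # Born rule: probability = |amplitude|^2
        if r < total:
            return outcome
    return outcome                 # guard against rounding at the boundary

rng = random.Random(0)
samples = [measure(amplitudes, rng) for _ in range(1000)]
print(set(samples))                # only the agreeing outcomes appear
```

Every sampled outcome has both qubits equal, so knowing one qubit's result immediately tells you the other's, which is the predictable correlation described above.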

History of Quantum Computing

In the early 1980s, the physicist Paul Benioff proposed a quantum mechanical model
of the Turing machine; the concept of quantum computing dates from this work. Later,
Richard Feynman and Yuri Manin suggested that a quantum computer could simulate
things a classical computer cannot. In 1994, Peter Shor developed a quantum
algorithm for factoring integers that, on a sufficiently large quantum computer,
would be strong enough to break RSA-encrypted communications. Research in quantum
computing continues. On 23 October 2019, Google AI, in partnership with NASA (the
National Aeronautics and Space Administration), US, published a paper claiming to
have achieved Quantum Supremacy. Although some researchers have disputed this claim,
it remains a significant milestone.
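For contrast with Shor's quantum algorithm, the classical baseline is easy to sketch: trial division, whose running time grows roughly with the square root of the number being factored, which is what keeps large RSA moduli safe from classical attack. A toy illustration, with a helper name of my own choosing:

```python
# Sketch: classical trial-division factoring. Its cost grows roughly with
# sqrt(n), which is why factoring the enormous integers used in RSA is
# infeasible classically -- and why Shor's polynomial-time quantum
# algorithm would break RSA on a large enough quantum computer.
def trial_factor(n):
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d       # found the smallest prime factor
        d += 1
    return n, 1                    # n itself is prime

print(trial_factor(15))            # (3, 5)
print(trial_factor(3233))          # (53, 61): a toy "RSA modulus"
```

Real RSA moduli have hundreds of digits, so the loop above would run for longer than the age of the universe; that gap is exactly what Shor's algorithm closes.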

Applications of Quantum Computing

There are the following applications of Quantum Computing:

o Cybersecurity: In the current era of digitization, personal information is stored
on computers, so we need very strong cybersecurity to protect data from theft.
Classical computers are adequate for cybersecurity, but evolving threats and attacks
weaken them. Scientists are therefore exploring quantum computers in this field, and
it appears possible to develop several techniques for dealing with such
cybersecurity threats via machine learning.
o Cryptography: Cryptography is another field of security where quantum computers
are helping to develop encryption methods for delivering packets over the network
safely. This approach to creating encryption methods is known as Quantum
Cryptography.
o Weather Forecasting: With classical computers, the analysis needed to forecast the
weather can take too long. Quantum computers have enhanced power to analyze data,
recognize patterns, and forecast the weather quickly and with better accuracy.
Quantum systems may even support more detailed climate models delivered in time to
be useful.
o AI and Machine Learning: AI has become an emerging area of digitization, and many
tools, apps, and features have been developed using AI and ML. As the days pass,
numerous new applications are being developed, challenging classical systems to
match the required accuracy and speed. Quantum computers can help process complex
problems in far less time, problems that a classical computer might take hundreds of
years to solve.
o Drug Design and Development: Designing and developing drugs is a difficult job
because it relies on trial and error, which is expensive as well as risky. It is a
challenging task for quantum computers too, but researchers hope that quantum
computing can become an effective way of understanding drugs and their reactions in
human beings. If quantum computing becomes capable of supporting drug development,
it will save the drug industry a great deal of time and money, and more drug
discoveries could be made, with better results for pharmaceutical companies.
o Finance Marketing: A finance company can survive in the market only if it delivers
fruitful results to its customers, so such companies need unique and effective
strategies for growth. On conventional computers the technique of Monte Carlo
simulation is used, but it consumes a great deal of computing time. If such complex
calculations were performed by a quantum system instead, solution quality would
improve and development time would fall.
o Computational Chemistry: The superposition and entanglement properties of a
quantum computer may give machines the power to map molecules successfully, opening
several opportunities in pharmaceutical research. Larger problems a quantum computer
might handle include creating a room-temperature superconductor, ammonia-based
fertilizers, and solid-state batteries, and removing CO2 (carbon dioxide) for a
better climate. Quantum computing is expected to be most prominent in the field of
computational chemistry.
o Logistics Optimization: Conventional computing is used to improve data analysis
and modelling, enabling industries to optimize the logistics and scheduling
workflows of their supply-chain management. Such operating models continuously
calculate and recalculate optimal routes for fleet operations, air traffic control,
and traffic management. Some of these problems become too complex for classical
computers to solve, so quantum computing can be an ideal solution. Two quantum
computing approaches are used:
1. Quantum Annealing: an advanced optimization technique that may surpass
classical computers.
2. Universal Quantum Computers: capable, in principle, of finding solutions for
all types of computational problems, though such systems will take time to become
commercially available. Researchers are working hopefully to enhance them.
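Quantum annealing hardware is not needed to see the idea: its classical ancestor, simulated annealing, explores candidate solutions while a "temperature" parameter cools, occasionally accepting worse moves to escape local minima. The sketch below is an illustrative toy, not any vendor's API:

```python
# Sketch: classical simulated annealing on a toy objective, a classical
# analogue of the quantum annealing approach mentioned above.
import math
import random

def anneal(cost, state, neighbor, steps=5000, t0=1.0, rng=None):
    rng = rng or random.Random(0)
    best = cur = state
    for k in range(steps):
        t = t0 * (1 - k / steps) + 1e-9          # linear cooling schedule
        cand = neighbor(cur, rng)
        delta = cost(cand) - cost(cur)
        # Always accept improvements; accept worse moves with a
        # probability that shrinks as the temperature cools.
        if delta < 0 or rng.random() < math.exp(-delta / t):
            cur = cand
            if cost(cur) < cost(best):
                best = cur
    return best

# Toy objective: minimize f(x) = (x - 3)^2 over real x, starting from 0.
result = anneal(lambda x: (x - 3) ** 2, 0.0,
                lambda x, rng: x + rng.uniform(-0.5, 0.5))
print(round(result, 1))  # close to 3.0, the true minimum
```

Real logistics problems replace the toy objective with route lengths and the neighbor function with route swaps, but the accept/reject loop is the same.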
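Similarly, the Monte Carlo simulation mentioned under Finance Marketing estimates an expected value by random sampling, and its time cost comes from the large number of samples needed for accuracy. A minimal sketch, with an illustrative payoff and names of my own choosing:

```python
# Sketch: Monte Carlo estimation of an expected payoff by random sampling.
# Accuracy improves only with the square root of the sample count, which
# is the computing-time cost the text refers to.
import random

def monte_carlo_mean(payoff, n_samples, rng):
    return sum(payoff(rng.random()) for _ in range(n_samples)) / n_samples

rng = random.Random(42)
# Toy payoff: max(x - 0.5, 0) for x uniform on [0, 1); exact mean is 0.125.
estimate = monte_carlo_mean(lambda x: max(x - 0.5, 0.0), 100_000, rng)
print(round(estimate, 3))  # close to the exact value 0.125
```

Pricing a real financial instrument swaps the toy payoff for a model of the asset's future value, but the sampling loop, and its cost, are the same.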

Classical Computing Vs. Quantum Computing

The differences between classical computing and quantum computing are described in the below
table:

Classical Computing | Quantum Computing

Classical computers are used for classical computing. | Quantum computers make use of the quantum computing approach.

Data is stored in bits. | Data is stored in qubits.

It performs calculations in the form of binary digits. | It performs calculations based on the probability of an object's state.

It can only process a limited amount of data. | It can process exponentially more data.

Logical operations are carried out using the physical state, usually binary. | Logical operations are performed using the quantum state, i.e., qubits.

It fails to solve very complex and massive problems. | Quantum computers deal with complex and massive problems.

It has standardized programming languages such as Java, C, and C++. | It does not rely on any specific programming language.

Classical systems are used for daily purposes. | These systems cannot be used for daily purposes; they are complex, and only scientists and engineers use them.

It is built with a CPU and other processors. | It has a simple architecture and runs on a set of qubits.

It provides data security, but limited. | It provides highly secured data and data encryption.

Low speed; systems take more time. | Improved speed; saves much time.

Future of Quantum Computing

The future of quantum computing looks bright and productive for world trade. The
points discussed above show that the concept is only beginning and will surely
become part of our lives, though it is not yet mainstream. In the future, quantum
systems will enable industries to tackle problems they had always thought impossible
to solve. According to reports, the quantum computing market will grow strongly in
the coming decades. Google is showing great focus and interest in quantum computing
and has recently launched a new version of TensorFlow, TensorFlow Quantum (TFQ), an
open-source library used to prototype quantum machine learning models. As it
matures, TFQ will enable developers to easily create hybrid AI algorithms that
integrate techniques from quantum and classical computers. The main motive of TFQ is
to bring quantum computing and machine learning techniques together to build and
control both natural and artificial quantum systems. Scientists still face new and
known challenges in quantum computing, but it will surely shape software development
in the coming years.
