Cloud Computing Syllabus
Course Objectives:
This course provides an insight into cloud computing. Topics covered include distributed system models, the different cloud service models, service-oriented architectures, cloud programming and software environments, and resource management.
Course Outcomes:
Ability to understand various service delivery models of a cloud computing architecture.
Ability to understand the ways in which the cloud can be programmed and deployed.
Understanding cloud service providers.
UNIT - I
Computing Paradigms: High-Performance Computing, Parallel Computing, Distributed Computing,
Cluster Computing, Grid Computing, Cloud Computing, Bio computing, Mobile Computing, Quantum
Computing, Optical Computing, Nano computing.
UNIT - II
Cloud Computing Fundamentals: Motivation for Cloud Computing, The Need for Cloud Computing,
Defining Cloud Computing, Definition of Cloud computing, Cloud Computing Is a Service, Cloud
Computing Is a Platform, Principles of Cloud computing, Five Essential Characteristics, Four Cloud
Deployment Models
UNIT - III
Cloud Computing Architecture and Management: Cloud Architecture, Layers and Anatomy of the Cloud, Network Connectivity in Cloud Computing, Applications on the Cloud, Managing the Cloud, Managing the Cloud Infrastructure, Managing the Cloud Application, Migrating Applications to the Cloud, Phases of Cloud Migration, Approaches for Cloud Migration.
UNIT - IV
Cloud Service Models: Infrastructure as a Service, Characteristics of IaaS. Suitability of IaaS, Pros
and Cons of IaaS, Summary of IaaS Providers, Platform as a Service, Characteristics of PaaS,
Suitability of PaaS, Pros and Cons of PaaS, Summary of PaaS Providers, Software as a Service,
Characteristics of SaaS, Suitability of SaaS, Pros and Cons of SaaS, Summary of SaaS Providers,
Other Cloud Service Models.
UNIT - V
Cloud Service Providers: EMC, EMC IT, Captiva Cloud Toolkit; Google, Google Cloud Platform, Cloud Storage, Google Cloud Connect, Google Cloud Print, Google App Engine; Amazon Web Services, Amazon Elastic Compute Cloud, Amazon Simple Storage Service, Amazon Simple Queue Service; Microsoft, Windows Azure, Microsoft Assessment and Planning Toolkit, SharePoint; IBM, Cloud Models, IBM SmartCloud; SAP Labs, SAP HANA Cloud Platform, Virtualization Services Provided by SAP; Salesforce, Sales Cloud, Service Cloud: Knowledge as a Service; Rackspace; VMware; Manjrasoft, Aneka Platform.
UNIT - I Notes
Computing Paradigms: High-Performance Computing, Parallel Computing, Distributed Computing,
Cluster Computing, Grid Computing, Cloud Computing, Bio computing, Mobile Computing, Quantum
Computing, Optical Computing, Nano computing.
1. High-Performance Computing
High-Performance Computing (HPC) is the use of parallel processing for running advanced application programs efficiently, reliably, and quickly. The term applies especially to systems that function above a teraflop, i.e., 10^12 floating-point operations per second. High-performance computing is occasionally used as a synonym for supercomputing, although technically a supercomputer is a system that performs at or near the highest operational rate currently achievable; some supercomputers work at more than a petaflop (10^15 floating-point operations per second). The most common users of HPC systems are scientists, engineers, and academic institutions. Some government agencies, particularly the military, also rely on HPC for complex applications.
High-performance Computers:
High Performance Computing (HPC) generally refers to the practice of combining computing
power to deliver far greater performance than a typical desktop or workstation, in order to
solve complex problems in science, engineering, and business.
Processors, memory, disks, and the operating system are the basic elements of high-performance computers. The HPC systems of interest to small and medium-sized businesses today are really clusters of computers. Each individual computer in a commonly configured small cluster has between one and four processors, and today's processors typically have from 2 to 4 cores. HPC people often refer to the individual computers in a cluster as nodes. A cluster of interest to a small business could have as few as 4 nodes, or 16 cores; a common cluster size in many businesses is between 16 and 64 nodes, or from 64 to 256 cores. The main reason to use a cluster is that its individual nodes can work together to solve a problem larger than any one computer can easily solve. The nodes are connected so that they can communicate with each other in order to produce meaningful work. The two popular operating systems for HPC are Linux and Windows; most installations run Linux because of its legacy in supercomputers and large-scale machines, but one can choose according to one's requirements.
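To make the node idea concrete, here is a minimal, hedged sketch using mpi4py, a Python binding for the MPI message-passing standard commonly used on clusters (it assumes mpi4py and an MPI runtime are installed and that the script is launched with a tool such as mpirun; the file name hpc_sum.py is hypothetical). Each node computes a partial sum, and node 0 combines the results:

    # Minimal MPI sketch (assumes mpi4py is installed; run with, e.g.:
    #   mpirun -n 4 python hpc_sum.py)
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()   # this node's id (0 .. size-1)
    size = comm.Get_size()   # number of cooperating nodes

    # Each node sums its own slice of the range 0..9999.
    chunk = 10000 // size
    local = sum(range(rank * chunk, (rank + 1) * chunk))

    # Node 0 gathers and combines the partial sums.
    total = comm.reduce(local, op=MPI.SUM, root=0)
    if rank == 0:
        print("total =", total)

The pattern mirrors the prose above: no single node solves the whole problem, but the nodes communicate to produce one combined result.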
Importance of High-Performance Computing:
1. It is used for scientific discoveries, game-changing innovations, and improving quality of life.
2. It is a foundation for scientific and industrial advancements.
3. As technologies like IoT, AI, and 3D imaging evolve, the amount of data that organizations use grows exponentially; high-performance computers provide the increased computing ability needed to keep up.
4. HPC is used to solve complex modeling problems in a spectrum of disciplines, including AI, nuclear physics, and climate modelling.
5. HPC is also applied to business uses such as data warehouses and transaction processing.
Challenges of High-Performance Computing:
1. Cost: The cost of the hardware, software, and energy consumption is enormous, making HPC systems exceedingly expensive to build and operate. Additionally, the setup and management of HPC systems require qualified staff, which raises the overall cost.
2. Scalability: HPC systems must be made scalable so they may be modified or expanded as
necessary to meet shifting demands. But creating a scalable system is a difficult endeavour
that necessitates thorough planning and optimization.
3. Data Management: Data management can be difficult when using HPC systems since they
produce and process enormous volumes of data. These data must be stored and accessed
using sophisticated networking and storage infrastructure, as well as tools for data analysis
and visualization.
4. Programming: Parallel programming techniques, which can be more difficult than
conventional programming approaches, are frequently used in HPC systems. It might be
challenging for developers to learn how to create and optimise algorithms for parallel
processing.
5. Support for software and tools: To function effectively, HPC systems need specific
software and tools. The options available to users may be constrained by the fact that not
all software and tools are created to function with HPC equipment.
6. Power consumption and cooling: To maintain the hardware functioning at its best,
specialised cooling technologies are needed for HPC systems’ high heat production.
Furthermore, HPC systems consume a lot of electricity, which can be expensive and
difficult to maintain.
Applications of HPC
High Performance Computing (HPC) is a term used to describe the use of supercomputers and
parallel processing strategies to carry out difficult calculations and data analysis activities.
From scientific research to engineering and industrial design, HPC is employed in a wide range
of disciplines and applications. Here are a few of the most significant HPC use cases and
applications:
1. Scientific research: HPC is widely utilized in scientific research, especially in areas like physics, chemistry, and astronomy, where modeling complex physical events, examining massive data sets, or carrying out sophisticated calculations would be impractical with standard computing techniques.
2. Weather forecasting: The task of forecasting the weather is difficult and data-intensive,
requiring sophisticated algorithms and a lot of computational power. Simulated weather
models are executed on HPC computers to predict weather patterns.
3. Healthcare: HPC is being used more and more in the medical field for activities like
medication discovery, genome sequencing, and image analysis. Large volumes of medical
data can be processed by HPC systems rapidly and accurately, improving patient diagnosis
and care.
4. Energy and environmental studies: HPC is employed to simulate and model complex
systems, such as climate change and renewable energy sources, in the energy and
environmental sciences. Researchers can use HPC systems to streamline energy systems,
cut carbon emissions, and increase the resilience of our energy infrastructure.
5. Engineering and Design: HPC is used in engineering and design to model and evaluate
complex systems, like those found in vehicles, buildings, and aeroplanes. Virtual
simulations performed by HPC systems can assist engineers in identifying potential
problems and improving designs before they are built.
2. Parallel Computing
Before diving into parallel computing, let's first take a look at how computer software traditionally performed computations and why that approach failed to keep up with the modern era.
Computer software was conventionally written for serial computing. To solve a problem, an algorithm divides it into smaller instructions, and these discrete instructions are executed on the computer's Central Processing Unit one by one; only after one instruction finishes does the next one start.
A real-life example of this would be people standing in a queue waiting for a movie ticket when there is only one cashier. The cashier serves the people one by one. The complexity of this situation increases when there are two queues but still only one cashier.
So, in short, serial computing works as follows:
1. In this, a problem statement is broken into discrete instructions.
2. Then the instructions are executed one by one.
3. Only one instruction is executed at any moment of time.
Look at point 3: only one instruction is executed at any moment of time. This was a huge problem for the computing industry and a huge waste of hardware resources, since only one part of the hardware was active for a particular instruction at any given time. As problem statements grew heavier and bulkier, so did the time required to execute them. Examples of such processors are the Pentium 3 and Pentium 4.
Now let’s come back to our real-life problem. We could definitely say that complexity will
decrease when there are 2 queues and 2 cashiers giving tickets to 2 persons simultaneously.
This is an example of Parallel Computing.
Parallel Computing:
Parallel computing is the use of multiple processing elements simultaneously to solve a problem. Problems are broken down into instructions and solved concurrently, with each processing resource applied to the work operating at the same time.
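A minimal sketch of this idea in Python, using only the standard library's multiprocessing module: the same independent tasks are computed serially and then split across four worker processes (the worker count is an illustrative assumption; real speed-up depends on the number of cores available):

    # Serial vs. parallel execution of the same independent tasks.
    from multiprocessing import Pool

    def square(n):
        return n * n

    if __name__ == "__main__":
        data = list(range(10))

        # Serial: one result at a time, as in the single-cashier queue.
        serial = [square(n) for n in data]

        # Parallel: four workers handle items concurrently.
        with Pool(processes=4) as pool:
            parallel = pool.map(square, data)

        print(serial == parallel)  # True: same answer, computed concurrently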
Advantages of Parallel Computing over Serial Computing are as follows:
1. It saves time and money as many resources working together will reduce the time and cut
potential costs.
2. It can be impractical to solve larger problems on Serial Computing.
3. It can take advantage of non-local resources when the local resources are finite.
4. Serial computing 'wastes' potential computing power; parallel computing makes better use of the hardware.
Types of Parallelism:
1. Bit-level parallelism –
It is the form of parallel computing based on increasing the processor's word size. It reduces the number of instructions that the system must execute in order to perform a task on large-sized data.
Example: Consider a scenario where an 8-bit processor must compute the sum of two 16-bit integers. It must first add the 8 lower-order bits and then the 8 higher-order bits, requiring two instructions to perform the operation; a 16-bit processor can perform the operation with just one instruction (see the sketch after this list).
2. Instruction-level parallelism –
A processor can issue more than one instruction during a single clock cycle: instructions in a stream can be re-ordered and grouped, then executed concurrently without affecting the result of the program. This is called instruction-level parallelism.
3. Task Parallelism –
Task parallelism employs the decomposition of a task into subtasks and then allocating
each of the subtasks for execution. The processors perform the execution of sub-tasks
concurrently.
4. Data-level parallelism (DLP) –
Instructions from a single stream operate concurrently on several data elements. It is limited by non-regular data manipulation patterns and by memory bandwidth.
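To make the bit-level example from item 1 concrete, here is a small illustrative Python sketch: it emulates how a hypothetical 8-bit machine would add two 16-bit integers in two steps, low bytes first and then high bytes plus the carry, whereas a 16-bit machine needs only one native add:

    # 16-bit addition on a hypothetical 8-bit machine:
    # two 8-bit adds plus a carry, versus one native 16-bit add.
    def add16_on_8bit(a, b):
        lo = (a & 0xFF) + (b & 0xFF)        # instruction 1: low bytes
        carry = lo >> 8
        hi = (a >> 8) + (b >> 8) + carry    # instruction 2: high bytes + carry
        return ((hi & 0xFF) << 8) | (lo & 0xFF)

    a, b = 0x1234, 0x0FCD
    assert add16_on_8bit(a, b) == (a + b) & 0xFFFF  # matches a native 16-bit add
    print(hex(add16_on_8bit(a, b)))                  # -> 0x2201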
Why parallel computing?
The real world is dynamic in nature: many things happen at the same time in different places concurrently, and the resulting data is extremely large and hard to manage.
Real-world data needs more dynamic simulation and modeling, and for achieving the same,
parallel computing is the key.
Parallel computing provides concurrency and saves time and money.
Large, complex datasets and their management can be handled only through parallel computing's approach.
It ensures effective utilization of resources: the hardware is used effectively, whereas in serial computation only part of the hardware is used while the rest sits idle.
Also, it is impractical to implement real-time systems using serial computing.
Applications of Parallel Computing:
Databases and Data mining.
Real-time simulation of systems.
Science and Engineering.
Advanced graphics, augmented reality, and virtual reality.
Limitations of Parallel Computing:
It introduces challenges such as communication and synchronization between multiple sub-tasks and processes, which are difficult to achieve.
The algorithms must be managed in such a way that they can be handled in a parallel
mechanism.
The algorithms or programs must have low coupling and high cohesion, but it is difficult to create such programs.
Only more technically skilled and expert programmers can code parallelism-based programs well.
Future of Parallel Computing: The computational landscape has undergone a great transition from serial to parallel computing. Tech giants such as Intel have already taken a step towards parallel computing by employing multicore processors. Parallel computation will revolutionize the way computers work in the future, for the better. With all the world connecting to each other even more than before, parallel computing plays an ever bigger role in helping us stay connected. With faster networks, distributed systems, and multi-processor computers, it becomes even more necessary.
3. Distributed Computing
A distributed computer system consists of multiple software components that are on multiple
computers, but run as a single system. The computers that are in a distributed system can be
physically close together and connected by a local network, or they can be geographically distant
and connected by a wide area network. A distributed system can consist of any number of
possible configurations, such as mainframes, personal computers, workstations, minicomputers,
and so on. The goal of distributed computing is to make such a network work as a single
computer.
Distributed systems offer many benefits over centralized systems, including the following:
Scalability
The system can easily be expanded by adding more machines as needed.
Redundancy
Several machines can provide the same services, so if one is unavailable, work does not
stop. Additionally, because many smaller machines can be used, this redundancy does
not need to be prohibitively expensive.
Distributed computing systems can run on hardware that is provided by many vendors, and can
use a variety of standards-based software components. Such systems are independent of the
underlying software. They can run on various operating systems, and can use various
communications protocols. Some hardware might use UNIX or Linux as the operating system,
while other hardware might use Windows operating systems. For intermachine communications,
this hardware can use SNA or TCP/IP on Ethernet or Token Ring.
You can organize software to run on distributed systems by separating functions into two parts:
clients and servers. This is described in The client/server model. A common design of
client/server systems uses three tiers, as described in Three-tiered client/server architecture.
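A minimal, hedged sketch of that client/server split, using only Python's standard socket module (the port number and the trivial echo protocol are illustrative assumptions; real distributed systems layer protocols, retries, and service discovery on top of this basic pattern):

    # Toy client/server split on one machine (standard library only).
    import socket, threading, time

    def server():
        with socket.socket() as s:
            s.bind(("127.0.0.1", 5000))
            s.listen()
            conn, _ = s.accept()
            with conn:
                request = conn.recv(1024)          # receive the client's request
                conn.sendall(b"echo: " + request)  # serve a response

    threading.Thread(target=server, daemon=True).start()
    time.sleep(0.2)  # give the server a moment to start listening

    with socket.socket() as c:                     # the client side
        c.connect(("127.0.0.1", 5000))
        c.sendall(b"hello")
        print(c.recv(1024).decode())               # prints: echo: hello

In a real distributed system the server would run on a different node, reachable over a local or wide area network, but the division of labor is the same.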
4. Cluster Computing
Cluster computing refers to several computers linked on a network that operate as a single entity. Each computer linked to the network is known as a node.
Cluster computing offers solutions to difficult problems by providing faster computational speed and enhanced data integrity. The connected computers carry out operations together, giving the impression of a single system (a virtual device). This property is referred to as the transparency of the system.
5. Cloud Computing
The term cloud refers to a network or the internet. Cloud computing is a technology that uses remote servers on the internet to store, manage, and access data online rather than on local drives. The data can be anything, such as files, images, documents, audio, video, and more.
Why Cloud Computing?
Small as well as large IT companies traditionally provide their own IT infrastructure. That means any IT company needs a server room, which is the basic need of an IT company.
In that server room, there should be a database server, mail server, networking, firewalls, routers, modems, switches, QPS (Queries Per Second, i.e., how many queries or how much load the server can handle), a configurable system, high network speed, and maintenance engineers.
Establishing such IT infrastructure requires spending a lot of money. To overcome these problems and reduce the infrastructure cost, cloud computing came into existence.
Key characteristics of cloud computing include the following:
1) High availability and reliability: The availability of servers is high and more reliable because the chances of infrastructure failure are minimal.
2) Multi-sharing: With the help of cloud computing, multiple users and applications can work more efficiently, with cost reductions, by sharing common infrastructure.
3) Device and location independence: Cloud computing enables users to access systems using a web browser regardless of their location or the device they use, e.g., a PC or mobile phone. As the infrastructure is off-site (typically provided by a third party) and accessed via the Internet, users can connect from anywhere.
4) Maintenance: Maintenance of cloud computing applications is easier, since they do not need to be installed on each user's computer and can be accessed from different places. This also reduces cost.
5) Low cost: Cloud computing reduces cost because an IT company need not set up its own infrastructure and pays only as per its usage of resources.
6) Services in the pay-per-use mode: Application Programming Interfaces (APIs) are provided to users so that they can access services on the cloud and pay the charges as per their usage of those services (see the sketch below).
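As an illustration of item 6, here is a hedged sketch using boto3, the AWS SDK for Python, against the Amazon S3 storage service covered in Unit V (it assumes boto3 is installed and AWS credentials are configured; the bucket name my-example-bucket is hypothetical):

    # Pay-per-use cloud storage via an API (assumes boto3 is installed and
    # AWS credentials are configured; "my-example-bucket" is hypothetical).
    import boto3

    s3 = boto3.client("s3")

    # Store an object in the cloud rather than on a local drive...
    s3.put_object(Bucket="my-example-bucket", Key="notes.txt",
                  Body=b"hello cloud")

    # ...and read it back from anywhere with an internet connection.
    obj = s3.get_object(Bucket="my-example-bucket", Key="notes.txt")
    print(obj["Body"].read().decode())  # -> hello cloud

Under the pay-per-use model, each such API call maps to a small metered charge rather than an up-front infrastructure cost.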
6. Mobile Computing
Mobile Computing refers to a technology that allows the transmission of data, voice, and video via a computer or any other wireless-enabled device. It is free from any connection to a fixed physical link and allows users to move from one physical location to another during communication.
In this technology, data transmission is done wirelessly with the help of wireless devices such as mobiles and laptops.
It is because of mobile computing technology that you can access and transmit data from remote locations without being physically present there. Mobile computing provides a vast coverage diameter for communication and is one of the fastest and most reliable sectors of the computing field.
Mobile computing can be divided into three parts:
o Mobile Communication
o Mobile Hardware
o Mobile Software
Mobile Communication
Mobile communication specifies a framework that is responsible for the working of mobile computing technology. Here, mobile communication refers to an infrastructure that ensures seamless and reliable communication among wireless devices. The framework consists of communication devices along with the protocols, services, bandwidth, and portals necessary to facilitate and support the stated services, and it is responsible for delivering a smooth communication process. The devices used for communication can be classified into the following configurations:
Fixed and Wired: The devices are fixed at a position and are connected through a physical link to communicate with other devices.
Fixed and Wireless: The devices are fixed at a position and are connected through a wireless link to communicate with other devices.
Mobile and Wired: Some devices are wired and some are mobile; together they communicate with other devices.
Mobile and Wireless: The devices can communicate with each other irrespective of their position, and can connect to any network without the use of any wired device.
Mobile Hardware
Mobile hardware consists of mobile devices or device components that can be used to receive or
access the service of mobility. Examples of mobile hardware can be smartphones, laptops,
portable PCs, tablet PCs, Personal Digital Assistants, etc.
These devices are inbuilt with a receptor medium that can send and receive signals. These
devices are capable of operating in full-duplex. It means they can send and receive signals at the
same time. They don't have to wait until one device has finished communicating for the other
device to initiate communications.
Mobile Software
Mobile software is a program that runs on mobile hardware. This is designed to deal capably
with the characteristics and requirements of mobile applications. This is the operating system for
the appliance of mobile devices. In other words, you can say it the heart of the mobile systems.
This is an essential component that operates the mobile device.
7. Quantum Computing
Quantum computing deals with the smallest particles found in nature, such as electrons and photons, which are known as quantum particles. Here, superposition defines the ability of a quantum system to be present in multiple states at the same time.
For example, imagine something that could be present in more than one place at the same time, up, down, here, and there simultaneously. That is superposition.
Entanglement defines a very strong correlation between quantum particles. Entangled particles are so strongly linked that even if one is placed at one end of the universe and the other at the opposite end, a change in the state of one is instantaneously reflected in the other.
Quantum Computer
A quantum computer is a device used for performing quantum computations, which are highly complex in nature. It stores data in the form of qubits, also known as quantum bits. A quantum computer can simulate problems and operations that a classical computer (the kind we currently use) cannot, and it can solve certain computational problems far faster than a normal computer.
For example, multiplying two numbers such as 500 and 187625 is trivial for a classical computer, but factoring a very large number back into its prime factors is practically impossible for one; a quantum computer running a suitable algorithm (such as Shor's algorithm, discussed below) could in principle do it dramatically faster.
Currently, researchers are applying quantum computers in the field of cybersecurity, both to break codes and to encrypt electronic communications, in order to explore better cybersecurity and data protection.
What are Quantum Bits?
Quantum bits, or qubits, are the storage units of quantum computers; all information in a quantum computer is stored in the form of qubits. Qubits are typically realized with subatomic particles such as electrons or photons. Generating and managing qubits is difficult and remains a challenging task for the scientists working in this field. Qubits carry the properties of superposition and entanglement. Superposition means that a qubit can represent various combinations of 1 and 0 at the same time; researchers manipulate qubits using microwave beams or lasers, and on measurement the state of a qubit collapses to a classical 1 or 0. Entanglement means that two members of a pair exist in a single quantum state: when two entangled qubits are placed far apart and the state of one changes, the state of the other changes instantaneously in a predictable manner. A connected group of qubits therefore provides far more computing power than the same number of binary digits.
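A hedged numerical sketch of superposition and measurement, simulated with plain NumPy (this reproduces the linear algebra of a single qubit on a classical machine; it is not a real quantum device, and NumPy is an assumed dependency):

    # Single-qubit superposition simulated with NumPy (not real hardware).
    import numpy as np

    ket0 = np.array([1.0, 0.0])                    # definite state |0>
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

    psi = H @ ket0                 # equal superposition of |0> and |1>
    probs = np.abs(psi) ** 2       # measurement probabilities: [0.5, 0.5]

    # Measuring collapses the state to 0 or 1 with those probabilities.
    outcome = np.random.choice([0, 1], p=probs)
    print(probs, "->", outcome)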
In the early 1980s, the physicist Paul Benioff proposed a quantum mechanical model of the Turing machine, and the concept of quantum computing has existed ever since. Later, Richard Feynman and Yuri Manin suggested that a quantum computer could simulate things that a classical computer cannot. In 1994, Peter Shor developed a quantum algorithm for factoring integers that is, in principle, strong enough to decrypt RSA-encrypted communications. Research in the field continues: on 23 October 2019, Google AI, in partnership with NASA (the National Aeronautics and Space Administration), published a paper claiming to have achieved quantum supremacy. Although some have disputed this claim, it remains a significant milestone.
The differences between classical computing and quantum computing are summarized in the table below:

Classical Computing                                        | Quantum Computing
-----------------------------------------------------------|------------------------------------------------------------
Classical computers are used for classical computing.      | Quantum computers use the quantum computing approach.
Performs calculations in the form of binary digits.        | Performs calculations on the basis of an object's probability (quantum states).
Can process only a limited amount of data.                 | Can process exponentially more data.
Logical operations use the physical state, usually binary. | Logical operations use the quantum state, i.e., qubits.
Fails to solve very complex and massive problems.          | Deals with complex and massive problems.
Has standardized languages such as Java, C, and C++.       | Does not rely on any specific programming language.
Classical systems are used for daily purposes.             | Too complex for daily purposes; used by scientists and engineers.
Built with a CPU and other processors.                     | Has a simpler architecture and runs on a set of qubits.
Provides security to data, but limited.                    | Provides highly secured data and data encryption.
Lower speed; takes more time.                              | Improved speed; saves much time.
The future of quantum computing looks promising and productive for world trade. The points discussed above show that the field is only at its beginning and will surely become a part of our lives, although it is not mainstream yet. In the future, quantum systems will enable industries to tackle problems they always thought impossible to solve. According to reports, the quantum computing market will grow strongly in the coming decades. Google is showing great focus and interest in quantum computing: it recently launched a new addition to TensorFlow called TensorFlow Quantum (TFQ), an open-source library used to prototype quantum machine learning models. As it matures, TFQ will enable developers to easily create hybrid AI algorithms that integrate the techniques of quantum and classical computers. The main motive of TFQ is to bring quantum computing and machine learning together to build and control both natural and artificial quantum systems. Scientists still face new and known challenges in quantum computing, but it will surely shape software development in the coming years.