
CC MidTerm (AP-21)


Ch. 1:
1. What is the innovative characteristic of cloud computing?
Ans.
i. Resources pooling:
a) The customer has no control over or knowledge of the exact location of the
provided resources, but may specify location at a higher level of
abstraction (e.g., country or datacenter).
ii. On-demand self-service:
a) The user can track server uptime, capability, and network storage on an
ongoing basis, and can monitor computing functionality as well.
iii. Easy maintenance:
a) The servers are easy to manage and downtime is small.
b) Updates are applied regularly, continually enhancing the service.
iv. Large network access:
a) The user may use a device and an internet connection to access the
cloud data or upload it to the cloud from anywhere.
v. Availability:
a) The cloud capabilities can be changed and expanded according to the
usage.
vi. Automatic system:
a) The cloud automatically analyzes the resources required, and usage can
be tracked, managed, and reported.
vii. Economical:
a) The amount spent on the basic maintenance and additional costs are
much smaller.
viii. Security:
a) It keeps snapshots of the stored data, so that even if one of the
servers is damaged, the data is not lost.
b) The information is stored on storage devices that other people cannot
hack or misuse.
ix. Pay as you go:
a) Users only have to pay for the service or the space in cloud computing.
b) There are no hidden or additional charges to pay.
x. Measured service:
a) Resource usage can be measured and reported by the service provider
through charge-per-use capabilities, for example per virtual server
instance running in the cloud.
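The pay-as-you-go and measured-service characteristics above amount to metering usage and computing a charge per use. A minimal sketch in Python (the resource names and rates are hypothetical, for illustration only):

```python
# Hypothetical per-hour rates for two metered resources.
HOURLY_RATES = {
    "vm_small": 0.02,     # $ per instance-hour
    "storage_gb": 0.001,  # $ per GB-hour
}

def bill(usage_hours: dict) -> float:
    """Compute a pay-per-use charge from metered usage (resource -> hours)."""
    return round(sum(HOURLY_RATES[r] * h for r, h in usage_hours.items()), 4)

charge = bill({"vm_small": 100, "storage_gb": 500})
print(charge)  # 2.5  (100 * 0.02 + 500 * 0.001)
```

The user pays only for what was metered; there is no charge for idle, unprovisioned capacity.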
2. Which are the technologies on which cloud computing relies?
Ans.
1. Distributed:
Collection of independent computers that appears to its users as a single
coherent system.
Distributed systems often exhibit other properties such as heterogeneity,
openness, scalability, transparency, concurrency, continuous availability, and
independent failures.
Three major milestones have led to cloud computing: mainframe computing,
cluster computing, and grid computing.
Mainframes were the first examples of large computational facilities, leveraging
multiple processing units; they were powerful, highly reliable computers
specialized for large I/O operations.
Cluster computing is a type of parallel or distributed computer system
consisting of a collection of interconnected independent computers that work
together as a single, highly integrated computing resource, combining the
machines into one system through software and networking.
Grid computing is a computing architecture that combines computer resources
from different domains to achieve a common goal, solving problems that are
too large for a single supercomputer while retaining the ability to handle many
small problems.
2. Virtualization:
Uses software to create a layer of abstraction over computer hardware, enabling
multiple virtual computers, usually referred to as VMs, to split the hardware
elements from a single computer — processors, memory, storage and more.
3. Web 2.0:
It is the term used to represent a range of websites and applications that permit
anyone to create or share information or material created online and it
represents the evolution of the World Wide Web; the web apps, which enable
interactive data sharing, user-centred design and worldwide collaboration.
4. Service oriented computing:
The computing paradigm that uses services as a fundamental component in the
creation of applications.
The software-as-a-service concept advocated by service-oriented computing
(SOC) was pioneering; it first appeared in the ASP (Application Service
Provider) software model, with QoS requirements agreed between clients and providers.
5. Utility oriented computing:
The term utility computing refers to utility-like business models in which a
service provider supplies computing resources to customers and charges for
consumption (charge-per-use).
3. Provide a brief characterization of a distributed system
Ans.
Distributed system:
i. Collection of independent computers that appears to its users as a single
coherent system.
ii. Distributed systems often exhibit other properties such as heterogeneity,
openness, scalability, transparency, concurrency, continuous availability,
and independent failures.
iii. Three major milestones have led to cloud computing: mainframe
computing, cluster computing, and grid computing.
a) Mainframe computing:
-Mainframes were the first examples of large computational facilities, leveraging
multiple processing units; they were powerful, highly reliable computers
specialized for large I/O operations.
b) Cluster computing:
-It is a type of parallel or distributed computer system consisting of a
collection of interconnected independent computers that work together as a
single, highly integrated computing resource, combining the machines into
one system through software and networking.
c) Grid computing:
-It is a computing architecture that combines computer resources from different
domains to achieve a common goal, solving problems that are too large for a
single supercomputer while retaining the ability to handle many small problems.

4. What is virtualization?
Ans.
Uses software to create a layer of abstraction over computer hardware, enabling
multiple virtual computers, usually referred to as VMs, to split the hardware
elements from a single computer — processors, memory, storage and more.
(Refer Q.1 from Chapter 3)
5. What is the major revolution introduced by Web 2.0? Give examples.
Ans.
i. Web 2.0 is the second stage of development in World Wide Web.
ii. Emphasis on dynamic and user generated content rather than static
content.
iii. Examples of Web 2.0 applications are Google Documents, Google Maps,
Flickr, Facebook, Twitter, YouTube, Delicious, Blogger, and Wikipedia.
iv. In particular, social networking websites take the biggest advantage of
Web 2.0.
v. The level of interaction in Websites such as Facebook or Flickr would not
have been possible without the support of AJAX, Really Simple
Syndication (RSS), and other tools that make the user experience
incredibly interactive.
vi. Moreover, community Websites harness the collective intelligence of the
community, which provides content to the applications themselves: Flickr
provides advanced services for storing digital pictures and videos,
Facebook is a social networking site that leverages user activity to
provide content, and Blogger, like any other blogging site, provides an
online diary that is fed by users.
6. What is utility computing?
Ans.
i. Utility computing is a vision of computing that defines a service-
provisioning model for compute services, in which resources such as
storage, compute power, applications, and infrastructure are packaged
and offered on a pay-per-use basis.
ii. The concept of utility parallels traditional utility services offered by a
provider, such as electricity, telephone, water, and gas.

7. Briefly summarize the Cloud Computing Reference Model.


Ans.
i. The cloud reference model is a model that characterizes and standardizes
the functions of a cloud computing environment.
ii. It serves as a basic benchmark for cloud computing development.
iii. A standard cloud reference model is required by architects, software
engineers, security experts, and businesses to realize the potential of
cloud computing.

iv. Services are classified under three main categories:
Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a
Service (SaaS).
v. The model structures the broad variety of cloud computing services in a
layered view from the base to the top of the computing stack.
vi. Infrastructure as a Service:
a) It is the most common cloud computing service model, offering the
basic infrastructure of virtual servers, networks, operating systems, and storage.
b) IaaS is a completely outsourced pay-for-use service that can be run in
public, private or hybrid infrastructure.
vii. Platform as a Service:
a) Web applications can easily and quickly be created via PaaS with the
flexibility and robustness of the service to support it.
b) PaaS solutions are scalable and suitable if multiple developers work on
a single project.
viii. Software as a Service:
a) This cloud computing solution includes deploying Internet-based
software to different companies paying via subscription or a paid-per-
use model.
b) SaaS is managed from a centralized location and it is ideal for short-
term projects.

8. What is the major advantage of cloud computing?


Ans.
The major advantage of cloud computing is its cost-effectiveness: there is no
large upfront investment in hardware or maintenance, users pay only for the
resources they actually use (pay-as-you-go), and capacity can be expanded or
reduced on demand (see Q.1: Economical, Pay as you go, Availability).

9. Briefly summarize the challenges still open in cloud computing.


Ans.
The challenges still open in cloud computing are as follows:
i. Security and Privacy of cloud:
As clients depend heavily on the cloud provider, stored data must be kept
secure and confidential.
The cloud provider must take the security measures necessary to protect
customers' data.
Hacking and malware are also one of the biggest problems because they can
affect many customers.
ii. Interoperability and Portability:
Services for migration into and out of the cloud should be provided to the customer.
No lock-in period should be imposed, as this can hamper customers.
iii. Reliability and flexibility:
To overcome this challenge, third-party services should be monitored, and the
performance, robustness, and dependability of providers supervised.
iv. Cost:
Cloud computing is affordable, but adapting the cloud to customer demand can
sometimes be expensive, and transferring data from the cloud back on-premises
can be costly.
v. Downtime:
Downtime is the most widely cited cloud computing challenge, as no cloud
provider guarantees a platform free from downtime.
vi. Lack of resources:
The cloud industry also faces a lack of resources and expertise, with many
businesses hoping to overcome it by hiring new, more experienced employees.
vii. Dealing with the multi-cloud environments:
Today, hardly any business runs on a single cloud, and operating consistently
across several cloud environments is difficult.
viii. Cloud migration:
While it is very simple to release a new app in the cloud, transferring an existing
app to a cloud computing environment is harder.
ix. Vendor lock-in:
Clients become reliant on a single cloud provider's implementation and cannot
easily switch to another vendor.
For example, workloads built on Amazon EC2 or Microsoft Azure are not easily
transferred to another cloud platform.
x. Privacy and legal issues:
The main problem regarding cloud privacy and data security is the data breach.
A breach of information can lead to a multitude of losses for both the provider
and the customer.

Ch. 2:
1. What is the difference between parallel and distributed computing?
Definition:
- Parallel computing: a computation type in which multiple processors execute
multiple tasks simultaneously.
- Distributed computing: a computation type in which networked computers
communicate and coordinate to achieve a common goal.
No. of computers required:
- Parallel: one computer.
- Distributed: multiple computers.
Processing mechanism:
- Parallel: multiple processors perform the processing.
- Distributed: computers rely on message passing.
Synchronization:
- Parallel: all processors share a single master clock for synchronization.
- Distributed: there is no global clock; synchronization algorithms are used.
Memory:
- Parallel: computers can have shared memory or distributed memory.
- Distributed: each computer has its own memory.
Usage:
- Parallel: to increase performance, e.g., for scientific computing.
- Distributed: to share resources and to increase scalability.

2. Describe the different levels of parallelism that can be obtained in a
computing system.
Parallelism is classified by the size of the chunks of code (grain size) that can
run in parallel, with the goal of boosting processor efficiency by hiding latency.
Parallelism within an application can be detected at several levels:
•Large grain (or task level): parallelism among separate tasks or processes of
an application.
•Medium grain (or control level): parallelism among functions or procedures
within a task.
•Fine grain (data level): parallelism among loop iterations or blocks of
instructions operating on data.
•Very fine grain (multiple-instruction issue): parallelism among individual
instructions issued in the same cycle.

Or
i. Bit-level Parallelism:
This form of parallelism focuses on doubling the word size of the
processor. Increased bit-level parallelism means that arithmetic
operations on large numbers execute more quickly.
ii. Instruction-level parallelism (ILP):
This form of parallelism aims to exploit the potential overlap between
instructions in a computer program.
Most ILP techniques are implemented and applied in the processor
hardware:
iii. Instruction Pipelining:
Execute various stages in the same cycle of various independent instructions
and use all idle resources.
iv. Task Parallelism:
Task parallelism involves breaking down a task into subtasks and then
assigning each of the subtasks for execution.
Subtasks are carried out concurrently by the processors.
v. Out-of-order execution:
Instructions may be executed out of program order whenever an execution
unit is available, as long as data dependencies are not violated, even
though earlier instructions are still executing.
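The task parallelism described above can be sketched with Python's standard thread pool: a job is broken into independent subtasks that a pool of workers executes concurrently.

```python
from concurrent.futures import ThreadPoolExecutor

def subtask(chunk):
    # Each subtask processes its own slice of the data independently.
    return sum(chunk)

data = list(range(100))
# Break the task into 4 independent subtasks.
chunks = [data[i:i + 25] for i in range(0, 100, 25)]

# The subtasks are carried out concurrently by the worker pool.
with ThreadPoolExecutor(max_workers=4) as pool:
    partial_sums = list(pool.map(subtask, chunks))

total = sum(partial_sums)
print(total)  # 4950, same as sum(range(100))
```

The decomposition (split, run concurrently, combine results) is the essence of task-level parallelism, whatever the actual workload is.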

3. Explain hardware architecture of parallel systems?


Flynn's taxonomy classifies the architecture of multiprocessor computers according to
two distinct aspects, the instruction stream and the data stream, which yields four
different classes of systems:

Single-instruction, single-data (SISD) systems:


-An SISD computing system is a uniprocessor machine that executes a single
instruction on a single data stream.
-SISD machines process instructions sequentially; computers that adopt
this model are commonly referred to as sequential computers.
Multiple-instruction, single-data (MISD) systems
-An MISD system is a multiprocessor system that executes different instructions on
different processing elements (PEs), all operating on the same data set.
-For most applications, machines designed using MISD are not useful.

Single-instruction, multiple-data (SIMD) systems


-A SIMD system is a multiprocessor system in which all CPUs execute the same
instruction but operate on different data streams.
-SIMD-based machines are ideal for scientific computing because they involve
many vector and matrix operations.

Multiple-instruction, multiple-data (MIMD) systems


-MIMD is a multiprocessor system that can carry out multiple instructions on
multiple sets of data.
-The most common type of parallel computer - most modern supercomputers fall
into this category.
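The SIMD programming model (one instruction applied to many data elements) can be illustrated in Python. Note this is only an analogy for the model: `map` here runs sequentially, whereas real SIMD hardware applies the operation to all elements in one step.

```python
# One "instruction" (double the value) applied across multiple data elements.
values = [1, 2, 3, 4]                         # multiple data streams
doubled = list(map(lambda x: x * 2, values))  # single operation, many data
print(doubled)  # [2, 4, 6, 8]
```

This is why SIMD machines suit scientific computing: vector and matrix operations apply the same arithmetic uniformly across large arrays of data.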
4. Discuss the message-based communication model.
i. The abstraction of a message is fundamental to the models and
technologies that enable distributed computing.
ii. A message includes any form of data representation that is limited in size
and time, whether it is the invocation of a remote procedure, a serialized
object instance, or a generic message.
iii. The term 'message-based communication model' can therefore refer to
several inter-process communication models, all based on the abstraction
of exchanging messages rather than on data streaming.
iv. There are three message-based communication models:
a) Point-to-point message model:
-This model organizes the communication among single components.
-Each message is sent from one component to another, and there is a direct
addressing to identify the message receiver.
b) Publish-and-subscribe message model:
-The model is based on the one-to-many communication model and simplifies the
implementation of indirect communication patterns.
-There are two major roles: the publisher and the subscriber.
c) Request-reply message model:
-The request-reply message model identifies all communication models in which,
for each message sent by a process, there is a reply.
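The publish-and-subscribe model above can be sketched in a few lines of Python. The `Broker` class and its method names are illustrative, not from any specific messaging library:

```python
class Broker:
    """Mediates between publishers and subscribers (indirect communication)."""

    def __init__(self):
        self.subscribers = {}  # topic -> list of subscriber callbacks

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        # The publisher never addresses receivers directly: the broker
        # forwards the message to every subscriber of the topic.
        for callback in self.subscribers.get(topic, []):
            callback(message)

received = []
broker = Broker()
broker.subscribe("alerts", received.append)                 # subscriber 1
broker.subscribe("alerts", lambda m: received.append(m.upper()))  # subscriber 2
broker.publish("alerts", "disk full")                       # one-to-many
print(received)  # ['disk full', 'DISK FULL']
```

Contrast this with the point-to-point model, where each message carries a direct address identifying a single receiver.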
5. What is SOA?
i. SOA is an architectural style supporting service orientation.
ii. Service-Oriented Architecture (SOA) is a software style in which
application components provide services to other components via a
communication protocol over a network.
iii. There are two major roles within SOA: the service provider and the
service consumer.
iv. The service provider is the maintainer of the service and the organization
that makes available one or more services for others to use.
v. The service consumer is the user of the service: the person or
organization that discovers and invokes the services a provider makes available.
vi. Example of SOA:
CORBA has been a suitable platform for realizing SOA systems.
6. Discuss RPC and how it enables inter-process communication.
i. Remote Procedure Call (RPC) is a protocol that one program can use to
request a service from a program located in another computer on a
network without having to understand the network's details.
ii. The system is based on a client/server model.
iii. The calling process thread remains blocked until the procedure on the
server process has completed its execution and the result (if any) is
returned to the client.
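A runnable sketch of RPC using Python's standard xmlrpc modules: the client invokes `add()` as if it were a local call, and, as described above, the calling thread blocks until the server returns the result.

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# Server side: expose a procedure. Port 0 lets the OS pick a free port.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(lambda a, b: a + b, "add")
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: the proxy hides all network details behind a local call.
client = ServerProxy(f"http://127.0.0.1:{port}")
result = client.add(2, 3)   # blocks until the server replies
print(result)  # 5
server.shutdown()
```

In practice client and server run on different machines; the proxy object is what lets the program "not understand the network's details".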
7. Explain architectural style of a distributed system?
i. Architectural styles determine the components and connectors that are
used as instances of the style together with a set of constraints on how
they can be combined.
ii. Software architectural styles:
-Software architectural styles are based on the logical arrangement of software
components.
-Styles and patterns in software architecture define how to organize the system
components to build a complete system that satisfies the customer's
requirements.

a) Data centred architectures:


-These architectures identify the data as the fundamental element of the
software system, and access to shared data is the core characteristic of the data-
centred architectures.
-Examples are the batch-sequential style and the pipe-and-filter style.
b) Virtual machine architectures:
-The virtual machine class of architectural styles is characterized by the presence
of an abstract execution environment (generally referred as a virtual machine)
that simulates features that are not available in the hardware or in the software.
iii. System architectural styles:
System architectural styles cover the physical organization of components and
processes over a distributed infrastructure.
Client-server and peer-to-peer (P2P) are the two key system-level
architectures in use today.
a) Client Server Architecture:
-The client-server architecture has two major components: the server and
the client.
-The server is where all data storage and processing takes place, while the
client accesses the remote server's services and resources.
b) Peer to Peer (P2P):
-At any given time, each node can act as a client or a server.
-If a node requests something, it acts as a client, and if it serves
something to another node, it acts as a server.
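The client-server interaction can be sketched with stdlib sockets; the uppercasing "service" here is an illustrative stand-in for the server's data processing.

```python
import socket
import threading

# Server: bind to an OS-chosen free port and wait for one client.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

def serve_once():
    conn, _ = srv.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data.upper())  # the "service": process and reply

threading.Thread(target=serve_once, daemon=True).start()

# Client: connect to the remote server and request the service.
cli = socket.create_connection(("127.0.0.1", port))
cli.sendall(b"hello server")
reply = cli.recv(1024)
cli.close()
srv.close()
print(reply)  # b'HELLO SERVER'
```

In a P2P system the same program would contain both roles, switching between them depending on whether the node is requesting or serving.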
8. What is a distributed system? What are the components that characterize
it?
A distributed system is a collection of independent computers that appears to
its users as a single coherent system.
Hardware:
•the physical computers, storage, and network infrastructure
Operating system:
•provides services for inter-process communication (IPC), process scheduling
and management
•IPC services are implemented on top of Transmission Control Protocol/
Internet Protocol (TCP/IP)
Middleware:
Develops its own protocols, data formats, and programming languages or
frameworks for the development of distributed applications
Application:
GUI

Ch. 3:
1. What is virtualization and what are its benefits?
Ans.
i. Virtualized environments are the foundation on which cloud-based
services and applications are built.
ii. Virtualization uses software to create a layer of abstraction over
computer hardware, enabling multiple virtual computers, usually referred
to as VMs, to split the hardware elements from a single computer —
processors, memory, storage and more.
iii. Virtualization involves the creation of a virtual version of something,
including virtual computer hardware, virtual storage devices, and virtual
computer networks.
iv. (Refer Q. 6 part A)

2. What are the characteristics of virtualized environments?


Ans.
i. Increased security:
a) The ability to fully transparently govern the execution of a guest program
creates new opportunities for providing a safe, controlled execution
environment.
b) A virtual machine manager can govern and filter guest programs' activity
so as to prevent harmful operations from being carried out.
c) Examples: the Cuckoo sandbox and the Java Virtual Machine.
ii. Managed execution:
a) Its main features are sharing, aggregation, emulation, and isolation.
A. Sharing:
a) Virtualization makes it possible to create a separate computing
environment in the same host.
b) Sharing reduces the number of active servers and limits energy
consumption.
B. Aggregation:
a) A group of individual hosts can be linked and represented as a single
virtual host.
C. Emulation:
a) An entirely different environment can also be emulated with regard to
the host, so that guest programs that require certain features not present
in the physical host can be carried out.
D. Isolation:
a) Virtualization allows guests to provide an entirely separate environment
in that they are executed.
b) The virtual machine is able to filter the guest’s activities and prevent
dangerous operations against the host.
E. Portability:
a) In the case of a hardware virtualization, the guest is packed in a virtual
image which can be moved and executed safely on various virtual
machines in many instances.
3. Discuss classification or taxonomy of virtualization at different levels.
Ans.
i. The first classification discriminates among the services or entities that
are being emulated.
ii. In particular we can divide these execution virtualization techniques into
two major categories by considering the type of host they require.
iii. Process-level techniques are implemented on top of an existing operating
system, which has full control of the hardware.
iv. System-level techniques are implemented directly on hardware and
require little or no support from an existing operating
system.
v. Within these two categories we can list various techniques that offer the
guest a different type of virtual computation environment: bare
hardware, operating system resources, low-level programming language,
and application libraries.
4. Discuss the machine reference model of execution virtualization.
Ans.
Execution virtualization includes all techniques that aim to emulate an execution
environment that is separate from the one hosting the virtualization layer.
All these techniques concentrate their interest on providing support for the
execution of programs, whether these are the operating system, a binary
specification of a program compiled against an abstract machine model, or an
application.
5. What are hardware virtualization techniques?
Ans.
Hardware-assisted virtualization
• Hardware provides architectural support for building a virtual machine
manager.
• Not allowing potentially harmful instructions to be executed directly on
the host.
• With hardware-assisted virtualization, the operating system is given
direct access to resources without emulation or modification, which
improves overall performance.
Full virtualization
• Refers to the ability to run a program directly on top of a virtual
machine, without any modification.
• Full virtualization is obtained with a combination of hardware and
software
• Benefits include complete isolation, enhanced security, ease of emulating
different architectures, and coexistence of different systems on the same platform.
(Refer Q.8)
Paravirtualization
• Provides the capability to demand the execution of performance-critical
operations directly on the host.
• Paravirtualization does not confine you to the device drivers included in
the virtualization software; instead, it uses the device drivers of a
privileged guest operating system.
6. What are the advantages and disadvantages of virtualization?
Ans.
A. Advantages:
i. Scalability:
a) A virtual machine is as scalable as any other solution.
b) Virtualization allows data migration, upgrade, and instant performance
improvement into new VMs in a short time.
ii. Consolidation of servers:
a) Eliminates the need for physical computers while providing efficient
operation of systems and specifications.
b) Such consolidation will minimize costs and the requisite physical space
for computer systems.
iii. Improved system reliability:
a) One of the reasons for virtualization is its ability to help avoid failures in
the system: memory corruption caused by faulty device drivers and the
like, one of the most common causes of crashes, is contained within a VM.
iv. Virtual workstations:
a) Virtualization provides the versatility to run multiple systems on a
single computer and to operate them remotely.
b) VMs also reduce the hardware and desktop footprint.
B. Disadvantages:
i. Programs that require physical hardware:
Virtualization does not work well for any applications that require physical
hardware.
An example is using a dongle or other hardware attached
ii. Performance quality can decrease:
If you run an application with heavy RAM or CPU usage, virtualization can
introduce a performance penalty.
A VM operates in layers on top of its host system, so performance-intensive
operations may run slower than they would on a dedicated application
or server.
iii. Testing is critical:
This is particularly true for virtualization, which is not something you simply
switch on and off. A system that already works smoothly may, once
virtualized, exhibit errors, leading to wasted time and expense.
iv. Unexpected expenses:
Accounting for this required attention to time and detail, customers may
spend more than initially planned.
v. Data can be at risk:
Data is hosted on a third-party network while operating on virtual instances
over shared hardware resources. This can expose the data to threats or
unauthorized access.
vi. Quick scalability is a challenge:
Ensuring that all the necessary software, protection, storage, and resources
are available for rapid scaling can be a tedious task, and it may take longer
than planned when a third-party vendor is involved.
7. What is Xen? Discuss its elements for virtualization.
Ans.
i. Xen is a paravirtualized open-source hypervisor.
ii. Xen has been extended with hardware-assisted virtualization so that it
can also support full virtualization.
iii. It allows the guest operating system to run with high efficiency.
iv. In a Xen-based system, the Xen hypervisor runs in the most privileged
mode and controls the guest operating systems' access to the underlying
hardware.
v. Guest operating systems run inside domains, which represent
instances of virtual machines.
vi. Ring 0 is the most privileged level and Ring 3 is the least privileged level.
vii. Nearly every OS, except OS/2, uses only two of these levels: Ring 0 for
kernel code and Ring 3 for user programs and non-privileged OS code.
viii. This leaves Rings 1 and 2 unused, which gives Xen the opportunity to
implement paravirtualization.
8. Discuss the reference model of full virtualization.
Ans.
I. Full virtualization refers to the ability to run a program, most likely an
operating system, directly on top of a virtual machine and without any
modification.
II. A successful and efficient implementation of full virtualization is obtained
with a combination of hardware and software, not allowing potentially
harmful instructions to be executed directly on the host.

III. The hypervisor provides each VM with all the services of the physical
system, including a virtual BIOS, virtual devices, and virtualized memory
management.
IV. The guest OS is completely decoupled from the underlying hardware by
the virtualization layer.
V. Full virtualization is achieved through a combination of binary
translation and direct execution.
9. Explain type-1 and type-2 hypervisors.

Ans.
Type 1 Hypervisor (also called bare metal or native)
i. A bare-metal hypervisor (type 1) is a software layer which is installed
directly above a physical server and its underlying hardware.
ii. Type I hypervisors run directly on top of the hardware. Therefore, they take
the place of the operating systems and interact directly with the ISA interface
exposed by the underlying hardware.
iii. Because there is no intermediate software or operating system, it is
called a bare-metal hypervisor.
iv. A Type 1 hypervisor does not run inside Windows or any other
operating system, so it provides excellent performance and
stability.
v. Examples of Type 1 hypervisors include VMware ESXi, Citrix XenServer and
Microsoft Hyper-V hypervisor.

Type 2 Hypervisor (also known as hosted hypervisors)


i. Type II hypervisors require the support of an operating system to provide
virtualization services.
ii. This type of hypervisor is also called a hosted virtual machine since it is
hosted within an operating system.
iii. Examples of Type 2 hypervisors include VMware Player and Parallels Desktop.
iv. Here we have the following:
• A physical machine.
• An operating system installed on that hardware (Windows, Linux, macOS).
• The Type 2 hypervisor software running within this operating system.
• Instances of virtual guest machines running on top of the hypervisor.
Extra:
Cloud computing:
The vision of computing as a utility was anticipated by Leonard Kleinrock in 1969.
Cloud computing refers to both the applications delivered as services over the internet
and the hardware and system software in the datacenters that provide those
services.
It is a model for enabling ubiquitous, convenient, on-demand network access to a
shared pool of configurable computing resources that can be rapidly provisioned and
released with minimal management effort or service provider interaction.

Aditi Polekar
