
CC Unit1


1. Briefly summarize the challenges still open in cloud computing


• Security issues related to confidentiality, integrity and availability
of data stored in the cloud. Ensuring security against both external
and internal threats is difficult in a shared multi-tenant
environment.
• Legal and regulatory implications of data being stored across
geographical boundaries. There are concerns around data privacy,
residency, seizure by foreign governments etc.
• Performance issues for large databases accessed over a WAN.
Latency and bandwidth limitations over a WAN impact
performance compared to a LAN.
• Lack of standards and interoperability between different cloud
platforms. There is a need for open standards for smooth data and
application migration between cloud platforms.
• Management of extremely large-scale distributed systems.
Automated provisioning, monitoring, scaling of resources across
data centers spanning different geographic regions.
• Energy efficiency and environmental impact of large data centers.
Optimizing power usage effectiveness is important to reduce
carbon footprint.
• New algorithms and techniques for elastic resource provisioning
and scaling. To deal with workload variability and provide optimal
resource allocation.
• Business continuity and disaster recovery. Providing resilience
against outages through redundancy and geo-replication of data.

2. What are the major advantages of cloud computing?


• Reduced upfront capital costs for hardware and software. No
need for upfront infrastructure investment.
• Lower ongoing operating costs for maintenance, electricity,
cooling etc. Cloud providers benefit from economies of scale.
• High scalability to accommodate spikes in demand. Resources can
be rapidly provisioned to handle sudden workload increases.
• Usage-based pricing model - pay only for what you use. No need
to pay for idle capacity. Promotes efficient use of resources.
• Easy deployment and provisioning of resources on-demand. Self-
service model allows resources to be acquired instantly.
• High availability and reliability built into cloud platforms.
Redundancy and resilient infrastructure prevents downtime.
• Device and location independence - access services from
anywhere. Enables mobility and BYOD.
• Ability to rapidly develop and deploy applications using cloud
platforms. Reduces time-to-market for delivering solutions.

3. Explain the Cloud Computing Reference Model


The cloud computing reference model consists of three layers:

• Infrastructure as a Service (IaaS): Provides access to fundamental
computing resources such as virtual machines, storage and networks
on a pay-as-you-go basis. Eg: AWS EC2, Rackspace
• Platform as a Service (PaaS): Provides a development platform
with preconfigured components like databases, middleware etc.
for building cloud applications. Eg: AWS Elastic Beanstalk, Heroku
• Software as a Service (SaaS): Provides access to complete end-
user applications hosted in the cloud. Eg: Salesforce CRM,
Dropbox, Gmail
• The reference model creates a standard taxonomy for
differentiating cloud computing offerings and provides a base for
the cloud ecosystem comprising infrastructure providers, platform
providers, ISVs, SIs etc.
Benefits:

• Portability across different cloud providers adhering to the same
service model.
• Interoperability between services belonging to different service
models.
• Comparison between competing offerings within the same service
model.
• Aids in making adoption, migration and integration decisions
based on business needs.
• The model brings order to the complex cloud landscape and
enables stakeholders to communicate using a common
terminology.
4. Explain hardware architecture of distributed systems.
• A distributed system consists of multiple autonomous computers
linked by a network and coordinated by middleware. The
computers have processing capabilities, storage, and are able to
communicate with each other to achieve a common goal.
Some common distributed system architectures:

• Client-server architecture - Clients request services from servers.
Servers wait for requests, process them and send responses. Can
be 2-tier (client-server) or 3-tier (client, application server,
database server). Suitable for centralized data and computation.
• Peer-to-peer architecture - Nodes act as both clients and servers.
Nodes interact directly for sharing resources like storage, compute
power etc. Suitable for distributed coordination and decentralized
applications.
• Cluster computing - Tightly coupled homogeneous systems that
appear as a unified computing resource. Used for high
performance computing.
• Grid computing - Loosely coupled heterogeneous systems that
appear as a single virtual computer. Used for large scale resource
sharing.
• Nodes in a distributed system can have different hardware
architectures. They can be homogeneous like in clusters or
heterogeneous like in grids. The nodes communicate with each
other via message passing over the network using protocols like
TCP/IP, RPC, sockets etc.

• Middleware provides transparency and coordination between the
nodes. It hides low-level implementation details and provides
uniform abstractions like distributed file systems, programming
models, naming services etc. Examples include MPI, Hadoop,
CORBA etc.
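The client-server interaction above can be sketched with plain TCP sockets. This is a minimal, self-contained illustration (one echo server handling one request on a loopback port chosen by the OS), not a production server; all names are ours.

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 0  # port 0: let the OS pick a free port

def serve(sock):
    """Server role: wait for a request, process it, send a response."""
    conn, _ = sock.accept()
    with conn:
        request = conn.recv(1024).decode()
        conn.sendall(f"echo:{request}".upper().encode())

# Server side: bind, listen, and handle one request in a background thread.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind((HOST, PORT))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=serve, args=(server,), daemon=True).start()

# Client side: connect, send a request, read the response.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"ping")
reply = client.recv(1024).decode()
client.close()
server.close()
print(reply)  # ECHO:PING
```

A 3-tier system repeats this same request-response pattern between the application server and the database server.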
5. Differences between parallel and distributed computing:
• Parallel computing systems have multiple processors accessing
shared memory. Distributed systems have multiple autonomous
computers communicating over a network.
• Parallel computing aims to achieve speedup in program execution
using tight coupling between processors. Distributed computing
aims to provide resource sharing using loose coupling.
• Parallel computing uses homogeneous architectures like SMP
servers. Distributed systems can use heterogeneous commodity
systems.
• Parallel programming emphasizes data parallelism by distributing
data across processors. Distributed programming emphasizes
process coordination between networked computers.
• Parallel computing focuses on low latency shared memory access.
Distributed computing must deal with high latency network
communication.
• Parallel computing is suitable for compute intensive scientific
workloads. Distributed computing is suitable for I/O intensive
commercial workloads.
6. Define cloud computing. State characteristics of cloud computing.
• Cloud computing provides on-demand access to a shared pool of
configurable computing resources over the internet. The resources can
be rapidly provisioned and released with minimal management effort.
• Key characteristics of cloud computing:
• On-demand self-service - Resources can be provisioned without human
interaction with the service provider.
• Broad network access - Services are available over the network and
accessed via standard mechanisms.
• Resource pooling - Computing resources are pooled to serve multiple
consumers using a multi-tenant model.
• Rapid elasticity - Capabilities can be elastically provisioned to scale
rapidly outward and inward.
• Measured service - Resource usage is monitored, controlled, reported
for transparency and usage-based billing.
• Deployment models: Public, private, hybrid, community cloud.
• Service models: IaaS, PaaS, SaaS.
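Rapid elasticity amounts to sizing the instance count to measured demand. A minimal sketch of threshold-based scaling, assuming a hypothetical per-instance capacity in requests per second (all numbers illustrative):

```python
import math

def scale(current_instances, load, capacity_per_instance, min_instances=1):
    """Return the instance count needed so pooled capacity covers the load."""
    return max(min_instances, math.ceil(load / capacity_per_instance))

# Demand spike: load jumps from 150 to 900 requests/s; each instance serves 200.
print(scale(1, 150, 200))  # 1 -> scale inward when demand is low
print(scale(1, 900, 200))  # 5 -> scale outward to absorb the spike
```

Measured service supplies the `load` input here: usage is monitored, and the provisioning decision follows from it automatically.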

7. Discuss the machine reference model of execution virtualization

The machine reference model consists of three interfaces:

• Instruction Set Architecture (ISA) - The interface between hardware and
low-level system software like the OS or hypervisor. Defines the machine's
processor, registers, memory access etc.
• Application Binary Interface (ABI) - Allows application portability across
different operating systems. Consists of syscalls, data types, byte
ordering, calling conventions etc.
• Application Programming Interface (API) - Allows applications to access
system libraries, operating system services etc.
• In virtualized execution, the guest system interacts with a virtualized
version of the ISA, ABI or API instead of the underlying physical host system:
• Full virtualization emulates the complete ISA. Guest OS runs unmodified.
• Para-virtualization provides a modified ISA requiring OS customization.
• Process VMs emulate the ABI. Guest apps run unmodified.
• Language VMs emulate an API. Guest code runs on the VM.
• The virtualization layer interposes between guest and host system and
transparently maps virtual interfaces to underlying physical host
interfaces.
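The language-VM case can be made concrete with a toy stack-based interpreter: the guest "bytecode" never touches the host ISA directly; every operation goes through the VM's own dispatch, which is what lets the same program run unchanged on any host the VM supports. This is an illustrative sketch, not any real VM's instruction set.

```python
def run(bytecode):
    """Toy stack-based VM: executes portable guest bytecode via dispatch."""
    stack = []
    for op, arg in bytecode:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack[-1]

# Guest program computing (2 + 3) * 4, expressed as host-independent bytecode.
program = [("PUSH", 2), ("PUSH", 3), ("ADD", None), ("PUSH", 4), ("MUL", None)]
print(run(program))  # 20
```

Full virtualization does the analogous interposition one level down, at the ISA instead of an API.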

8. What are the benefits of cloud computing?


1. Reduced Capital Expenditure
• Cloud computing eliminates the large upfront costs associated
with setting up an organizational IT infrastructure. There is no
need to purchase servers, networks, data centers etc.
• Computing resources can be leased from cloud providers on an
on-demand basis. This converts fixed capital expenditure to
flexible and scalable operating expenditure.
2. Lower Operating Costs
• Cloud providers benefit from economies of scale which lowers the
cost per unit of resources. These cost savings are passed down to
consumers.
• IT maintenance costs are reduced since the responsibility lies with
the cloud provider.
• Costs for power, cooling, real estate, upgrades are also reduced
since cloud leverages shared infrastructure.
3. Improved Scalability
• Cloud provides almost unlimited scale to handle spikes in usage,
high availability and failover support.
• Resources like storage, bandwidth can be provisioned instantly to
meet demand peaks.
• Reduces the need for costly overprovisioning to handle peak
loads.
4. Usage-based billing
• Pay-as-you-go model allows usage-based billing, e.g. per CPU-hour
consumed, per GB stored, per GB transferred.
• Organizations pay only for what they use instead of investing in
idle capacity. Drives efficiency.
5. Easy Deployment
• Self-service model allows fast provisioning and releasing of
resources without human interaction.
• Reduces time-to-market for delivering applications and services by
simplifying deployment.
6. Reliability
• Cloud platforms provide built-in redundancy, automatic failover
and disaster recovery mechanisms.
• Achieves higher levels of reliability and availability compared to
on-premise solutions.
7. Universal Access
• Cloud services are accessed conveniently over the network via
standard protocols and clients like web browsers.
• Supports mobility and BYOD since services can be accessed from
anywhere on any device.
8. Composition
• Services from different models (IaaS, PaaS, SaaS) can be
composed to build sophisticated applications.
• Enables innovation by remixing building blocks.
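The usage-based billing model above reduces to metering consumption and multiplying by rates. A minimal sketch with hypothetical rates (real provider pricing varies widely):

```python
# Hypothetical per-unit rates, for illustration only.
RATES = {"cpu_hours": 0.05, "gb_stored": 0.02, "gb_transferred": 0.09}

def monthly_bill(usage):
    """Metered billing: charge only for measured consumption, never idle capacity."""
    return sum(RATES[meter] * amount for meter, amount in usage.items())

bill = monthly_bill({"cpu_hours": 720, "gb_stored": 50, "gb_transferred": 100})
print(f"${bill:.2f}")  # $46.00
```

The contrast with capital expenditure is that this number falls to zero when the resources are released.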
9. List and explain the types of parallelism

• Task Parallelism:
1. Application is decomposed into large-grained independent units of
work called tasks or processes.
2. Tasks are allocated to different processors and executed
simultaneously.
3. Used for non-numerically intensive independent workloads.
• Data Parallelism:
1. The same operation is performed concurrently on different chunks of data
distributed across processors.
2. Used in HPC workloads like scientific computing, which have large data sets
and computations.
• Pipelining:
1. Execution of a serial program is broken into overlapping phases.
2. Phases of multiple instructions are executed concurrently.
3. Instruction-level parallelism for increasing throughput.
• Concurrency:
1. Executing multiple single-threaded applications simultaneously on a
multiprocessor system.
2. Maximizes utilization by switching between applications to hide
latency.
3. Thread-level parallelism from multitasking systems.
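The contrast between task and data parallelism can be sketched with Python's `concurrent.futures` (the functions and data here are our own toy examples):

```python
from concurrent.futures import ThreadPoolExecutor

def word_count(text):   # independent operation A
    return len(text.split())

def char_count(text):   # independent operation B
    return len(text)

text = "the quick brown fox"

# Task parallelism: different operations run concurrently on the same input.
with ThreadPoolExecutor() as pool:
    words = pool.submit(word_count, text)
    chars = pool.submit(char_count, text)
    task_results = (words.result(), chars.result())

# Data parallelism: the same operation runs concurrently on chunks of the data.
chunks = ["the quick", "brown fox", "jumps over"]
with ThreadPoolExecutor() as pool:
    data_results = list(pool.map(word_count, chunks))

print(task_results)  # (4, 19)
print(data_results)  # [2, 2, 2]
```

Pipelining and instruction-level concurrency happen below this level, inside the processor, and are not visible in application code.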
10. How does virtualization work in cloud computing?
• Hypervisors hosted on physical hardware create isolated virtual machine
(VM) instances
• A VM emulates virtual hardware resources – CPU, memory, storage,
network adapters
• VMs encapsulate entire runtime environment: OS, middleware,
applications
• Guest OS and software are executed on VMs transparently as if running
on actual hardware
• Provides secure, isolated environments for multi-tenancy on shared
infrastructure
• CPU, memory allocated dynamically to VMs from pooled physical
resources
• VM live migration enables dynamic scaling, automated failover,
hardware abstraction
• Widely used in IaaS for on-demand provisioning of compute, storage,
network resources
• Achieves rapid elasticity, usage-based billing, resource pooling attributes
• Higher-level PaaS and SaaS leverage underlying IaaS virtualization
capabilities
• Cloud consumer self-service fulfilled by on-demand creation of VM
instances
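The resource-pooling and self-service points above can be sketched as a toy host that carves VM allocations out of a shared CPU/memory pool; class and VM names are illustrative, not any real IaaS API.

```python
class Host:
    """Toy model of an IaaS host pooling CPU and memory for VM instances."""
    def __init__(self, cpus, mem_gb):
        self.free_cpus, self.free_mem = cpus, mem_gb
        self.vms = {}

    def provision(self, name, cpus, mem_gb):
        """Self-service: allocate a VM from pooled resources, or refuse."""
        if cpus > self.free_cpus or mem_gb > self.free_mem:
            return False  # pool exhausted; a real cloud would try another host
        self.free_cpus -= cpus
        self.free_mem -= mem_gb
        self.vms[name] = (cpus, mem_gb)
        return True

    def release(self, name):
        """Elasticity: freed resources return to the shared pool."""
        cpus, mem_gb = self.vms.pop(name)
        self.free_cpus += cpus
        self.free_mem += mem_gb

host = Host(cpus=8, mem_gb=32)
print(host.provision("web-1", 2, 4))   # True
print(host.provision("db-1", 4, 16))   # True
print(host.provision("big-1", 4, 16))  # False: only 2 CPUs / 12 GB left
host.release("db-1")
print(host.provision("big-1", 4, 16))  # True once resources were released
```

Real hypervisors add isolation and emulation on top of this bookkeeping, but the pooled-allocation logic is the core of on-demand provisioning.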
11. Define hypervisor. Explain its functionality.
• A hypervisor creates virtual machines on underlying host hardware, each
running its own guest operating system:
• Emulates virtualized physical hardware resources - processor, memory,
storage, networking.
• Allocates resources dynamically to VMs based on changing workload
requirements.
• Isolates VMs for multi-tenancy support without interference.
• Manages concurrency controls between guest OS access to virtualized
hardware.
• Live migrates VMs across hosts for load balancing, scaling, failover
purposes.
• Some hypervisors provide advanced management capabilities like VM
snapshots, dynamic resource scaling of VMs.
Two main types:
1. Native (Type 1) Hypervisor
• Runs directly on the host's hardware rather than within host OS
• Has direct access to physical resources for tighter control and efficiency
• Eg: VMware ESXi, Microsoft Hyper-V, Citrix XenServer
2. Hosted (Type 2) Hypervisor
• Runs as an application on an existing host OS which manages hardware
• Additional abstraction layer results in reduced efficiency
• Eg: Oracle VirtualBox, VMware Workstation
Functionality:
• Emulates virtualized physical hardware components
• Dynamically allocates host resources to VMs
• Isolates VMs from each other
• Manages concurrency control between guest OS access
• Live migrates VMs across hosts
• Provides advanced capabilities like snapshots, dynamic scaling of VMs
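The live-migration functionality above can be sketched as moving a VM's resource reservation from a loaded host to a spare one; in a real hypervisor the guest's memory and device state are copied while it keeps running, which this toy model (with made-up host and VM names) omits.

```python
class Hypervisor:
    """Toy hypervisor tracking per-host CPU capacity and running VMs."""
    def __init__(self, name, cpus):
        self.name, self.free_cpus, self.vms = name, cpus, {}

    def start_vm(self, vm, cpus):
        assert cpus <= self.free_cpus, "insufficient capacity"
        self.free_cpus -= cpus
        self.vms[vm] = cpus

    def migrate(self, vm, target):
        """Live migration: the allocation moves; the guest keeps running."""
        cpus = self.vms.pop(vm)
        self.free_cpus += cpus
        target.start_vm(vm, cpus)

h1, h2 = Hypervisor("host-1", cpus=4), Hypervisor("host-2", cpus=8)
h1.start_vm("app-vm", 4)           # host-1 is now fully loaded
h1.migrate("app-vm", h2)           # rebalance onto host-2
print(h1.free_cpus, h2.free_cpus)  # 4 4
print("app-vm" in h2.vms)          # True
```

This is the mechanism behind the load-balancing, scaling and failover uses listed above: the scheduler decides, the hypervisor moves the VM.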
