CCD Notes


Vendor Lock-in/ Data Lock-in in Cloud Computing

In cloud computing, vendor lock-in refers to a situation in which a client becomes dependent on a particular cloud provider to meet its computing needs and cannot readily switch to another provider without incurring large costs or business disruption. This can happen when a customer has committed considerable resources to a particular cloud platform, for example by building applications or migrating data there, and cannot move to another platform without paying a hefty price. Vendor lock-in can also arise when a customer relies on proprietary technologies or services offered by only one cloud provider. This restricts the customer's ability to change providers and gives the provider a strong position when negotiating prices and other terms.
The Problem with Cloud-Computing Standardization

A lack of standards makes cloud computing harder to adopt. It limits interoperability among cloud platforms and causes inconsistency in areas such as security and portability. For example, without standardization, a customer trying to switch from a private to a public cloud cannot do so as seamlessly as switching browsers or e-mail systems. It also keeps users from knowing the basic capabilities they can expect from any cloud service.
Many of today's in-progress standards are based in part on the US National Institute of Standards and Technology's Special Publication 800-145, "The NIST Definition of Cloud Computing (Draft)."
A key standardization issue involves virtualization, which plays a critical role in most cloud-computing approaches.
A key standardization issue involves virtualization, which plays a critical role in most cloud-
computing approaches.
Factors Affecting Availability:
A cloud service's availability, and its ability to recover from an outage, depend on several factors, including the cloud service provider's data center architecture, application architecture, hosting-location redundancy, diversity of Internet service providers (ISPs), and data storage architecture.
Following is a list of the major factors:
 The redundant design of Software as a Service and Platform as a Service applications.
 The architecture of the cloud service data center should be fault-tolerant.
 Good network connectivity and geographic diversity of hosting locations can withstand disasters in most cases.
 Cloud service customers should respond to outages quickly, working with the Cloud Service Provider's support team.
 Sometimes an outage affects only a specific region or area of the cloud service, which makes the situation harder to troubleshoot.
 The software and hardware used to deliver the cloud service should be reliable.
 The network infrastructure should be efficient and able to cope with DDoS (distributed denial of service) attacks on the cloud service.
 Proper security against internal and external threats, e.g., privileged users abusing their privileges.
 Regular testing and maintenance of the cloud infrastructure and applications can help
identify and fix issues before they cause downtime.
 Proper capacity planning is essential to ensure that the cloud service can handle peak
traffic and usage without becoming overloaded.
 Adequate backups and disaster recovery plans can help minimize the impact of outages
or data loss incidents.
 Monitoring tools and alerts can help detect and respond to issues quickly, reducing
downtime and improving overall availability.
 Ensuring compliance with industry standards and regulations can help minimize the
risk of security breaches and downtime due to compliance issues.
 Continuous updates and patches to the cloud infrastructure and applications can help
address vulnerabilities and improve overall security and availability.
 Transparency and communication with customers during outages can help manage
expectations and maintain trust in the cloud service provider.
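The "monitoring tools and alerts" factor above can be sketched in a few lines of Python. This is a minimal illustration, not any provider's monitoring API; the service names and the alert function are hypothetical stand-ins for a real probe and notification channel.

```python
def check_health(status_by_service):
    """Return the list of services whose last probe failed."""
    return [name for name, ok in status_by_service.items() if not ok]

def alert(failed_services):
    """Stand-in for a real notification channel (e-mail, pager, dashboard)."""
    return [f"ALERT: {name} is unavailable" for name in failed_services]

# Simulated probe results for one monitoring cycle.
probes = {"web-frontend": True, "auth-service": False, "database": True}
messages = alert(check_health(probes))  # one alert, for auth-service
```

In a real deployment the probe results would come from periodic HTTP or TCP health checks, but the detect-then-alert loop is the same idea.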

Fault Tolerance in Cloud Computing

Fault tolerance in cloud computing means designing the system so that work continues even when some components are down or unavailable. It helps enterprises evaluate their infrastructure needs and requirements, and it keeps services available when a particular device becomes unavailable for some reason.

This does not mean the alternative system can provide 100% of the full service. Rather, the idea is to keep the system usable and, most importantly, operating at a reasonable level. This is important for enterprises that continue to grow and want to raise their productivity levels.

Main Concepts behind Fault Tolerance


o Replication: Fault-tolerant systems run multiple replicas of each service, so if one part of the system fails, another instance can keep it running. For example, consider a database cluster of three servers holding the same information: every action, such as data entry, update, and deletion, is written to each of them. The redundant servers remain idle until the fault-tolerance system demands their availability.
o Redundancy: When a part of the system fails or goes down, a backup system must be available. The server works with standby databases that provide this redundancy. For example, a website backed by an MS SQL database may fail midway due to a hardware fault; the redundancy mechanism then switches over to a standby database while the original is offline.
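The replication-with-failover idea above can be sketched as follows. This is an illustrative model, not a real database driver: the class, server names, and up/down flags are assumptions standing in for actual replica health checks.

```python
class ReplicaSetClient:
    """Minimal sketch of failover: try each replica in turn,
    skipping any that is down, so one failure does not stop reads."""

    def __init__(self, replicas):
        # Each replica is a (name, is_up) pair for illustration only.
        self.replicas = replicas

    def read(self):
        for name, is_up in self.replicas:
            if is_up:
                return f"served by {name}"
        raise RuntimeError("all replicas are down")

# Three servers hold the same data; the primary has failed,
# so the client transparently falls over to the next replica.
cluster = ReplicaSetClient([("server-1", False),
                            ("server-2", True),
                            ("server-3", True)])
result = cluster.read()
```

A real replica set would also replicate writes to every member, as described above; this sketch shows only the read-side failover.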

Techniques for Fault Tolerance in Cloud Computing


o While designing a fault-tolerance system, priorities should be assigned to all services. Special preference should be given to the database, as it powers many other components.
o After setting the priorities, the enterprise should run mock tests. For example, suppose an enterprise has a forums website that lets users log in and post comments; a mock test would simulate a failure of the authentication service, since when it fails, users cannot log in.
Existence of Fault Tolerance in Cloud Computing
o System Failure: This can be either a software or a hardware issue. A software failure results in a system crash or hang, which may be caused by a stack overflow or other reasons. Improper maintenance of physical hardware machines results in hardware system failure.
o Incidents of Security Breach: There are many security-related reasons why fault tolerance is needed. Hacking of a server harms the service and can result in a data breach. Other security incidents that call for fault tolerance include ransomware, phishing, and virus attacks.
Cloud Disaster Recovery

Cloud-based backup and recovery capabilities help you back up and restore business-critical data if it is lost or compromised. Thanks to their high adaptability, cloud technologies enable efficient disaster recovery regardless of the nature or severity of the incident. Data is kept in a virtual storage environment designed for high accessibility. The service is available on demand, enabling companies of all sizes to tailor Disaster Recovery (DR) solutions to their existing requirements.

Cloud disaster recovery (CDR) is simple to configure and maintain, unlike conventional alternatives. Companies no longer have to spend a lot of time transferring data backups from their in-house databases or hard drives in order to recover after a disaster. The cloud streamlines these procedures and speeds up data retrieval.

Cloud Disaster Recovery (CDR) is based on a resilient service that lets you recover fully from a catastrophe and offers remote access to your systems in a protected virtual environment.

Cloud disaster recovery methodologies

Choose a cloud disaster recovery offering that provides scalability. It must protect the selected data, applications, and other assets while also accommodating additional resources as required, and it must deliver sufficient performance even as other customers worldwide use the same facilities. Understand the security needs of the disaster recovery content and ensure the vendor can offer authentication, VPNs (virtual private networks), encryption, and the other tools required to protect these vital resources.

Finally, decide how the DR system should be designed. There are three basic DR strategies: cold, warm, and hot. These terms loosely describe how quickly a system can be recovered.
o Warm disaster recovery

Warm disaster recovery is a standby strategy in which copies of data and systems are stored with a cloud DR vendor and regularly updated from the primary data center. The redundant assets, however, do no active work. When a disaster happens, the warm DR capacity at the DR vendor can be brought online, which is usually as simple as starting a virtual machine and rerouting DNS entries and traffic to the DR assets. Although recovery times can be quite short, the protected workloads still experience some downtime.

o Cold disaster recovery

Cold disaster recovery usually entails storing data or virtual machine (VM) images. These resources are generally unusable until additional work is performed, such as retrieving the stored data or loading an image into a virtual machine. Cold DR is typically the simplest approach (often just storage) and the absolute cheapest, but it takes the longest time to recover, leaving the organization with the most downtime in the event of a disaster.

o Hot disaster recovery

Hot disaster recovery is traditionally described as a real-time parallel deployment of data and workloads that run concurrently. Both the primary and backup data centers run the same workloads and data in sync, with both sites sharing a portion of the overall traffic. When a disaster happens, the remaining site continues to handle the work without interruption, and users should be unaware of the disturbance. Although hot DR involves no downtime, it is the most complex and expensive methodology.
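The cold/warm/hot trade-off described above can be summarized in a small sketch. The cost figures are deliberately relative and illustrative only; real costs and recovery times depend on the provider and workload.

```python
# Rough, illustrative trade-offs: higher relative cost buys faster recovery.
DR_STRATEGIES = {
    "cold": {"relative_cost": 1, "typical_recovery": "hours to days"},
    "warm": {"relative_cost": 2, "typical_recovery": "minutes to hours"},
    "hot":  {"relative_cost": 3, "typical_recovery": "near zero (continuous)"},
}

def best_within_budget(max_cost):
    """Pick the fastest-recovering strategy whose relative cost fits the budget."""
    affordable = [s for s, p in DR_STRATEGIES.items()
                  if p["relative_cost"] <= max_cost]
    # In this simplified model, higher relative cost = faster recovery.
    return max(affordable, key=lambda s: DR_STRATEGIES[s]["relative_cost"])

choice = best_within_budget(max_cost=2)  # budget rules out hot DR
```

This mirrors the decision the text describes: an organization weighs acceptable downtime against what it is willing to pay for standby capacity.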
Resource Management in Cloud Computing

The term resource management refers to the operations used to control how capabilities
provided by Cloud resources and services are made available to other entities, whether users,
applications, or services.
Types of Resources
Physical Resources: computers, disks, databases, networks, etc.
Logical Resources: execution environments, monitoring, and communication between applications.

Resource Management in Cloud Computing Environment


From the Cloud Vendor's view:
 Provision resources on an on-demand basis.
 Maintain energy conservation and proper utilization in cloud data centers.
From the Cloud Service Provider's view:
 Make the best-performing resources available at the lowest cost.
 Deliver QoS (Quality of Service) to cloud users.
From the Cloud User's view:
 Rent resources at a low price without compromising performance.
 Rely on the cloud provider's guarantee of a minimum level of service.

Resource Management Models


Compute Model
Cloud resources are shared by all users at the same time. The model allows a user to reserve a VM's memory, ensuring that the memory size requested by the VM is always available so the VM can operate in the cloud with an adequate level of QoS (Quality of Service) delivered to the end user.
By contrast, a grid strictly manages the workload of its computing nodes: a local resource manager such as Portable Batch System, Condor, or Sun Grid Engine manages the compute resources of a grid site and identifies which user may run each job.
Data Model
This model covers mapping, partitioning, querying, transferring, caching, and replicating data.
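Partitioning, one of the data-model concerns above, is commonly done by hashing a record's key. The sketch below is a generic illustration (the key names are made up), not a specific cloud product's scheme; it shows why every node that computes the partition for the same key routes to the same place.

```python
import hashlib

def partition_for(key, num_partitions):
    """Map a record key to a partition using a stable hash.
    md5 is used here only for its stable, well-distributed output."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % num_partitions

# Every node computes the same partition for a key, so queries,
# caching, and replication can all be routed consistently.
p1 = partition_for("customer-42", 4)
p2 = partition_for("customer-42", 4)  # same node, always
```

Production systems often use consistent hashing instead, so that adding or removing a partition moves only a fraction of the keys, but the basic key-to-partition mapping is the same idea.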

Energy Efficiency in Cloud Computing


The data center is the most prominent element of cloud computing: it contains the collection of servers on which business information is stored and applications run. A data center, which includes servers, cables, air conditioning, networking, and so on, consumes a great deal of power and releases a huge amount of carbon dioxide (CO2) into the environment. One of the most important challenges in cloud computing is optimizing energy utilization; hence the concept of green cloud computing came into existence.
There are multiple techniques and algorithms used to minimize energy consumption in the cloud.
Techniques include:
1. Dynamic Voltage and Frequency Scaling (DVFS)
2. Virtual Machine (VM) Migration
3. VM Consolidation
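VM consolidation, the third technique above, is essentially a bin-packing problem: place VMs onto as few hosts as possible so idle hosts can be powered down. The sketch below uses the classic first-fit-decreasing heuristic; the capacities and demands are made-up numbers for illustration.

```python
def consolidate(vm_demands, host_capacity):
    """First-fit-decreasing placement: pack VMs onto as few hosts as
    possible so that the remaining hosts can be powered off to save energy."""
    hosts = []  # each entry is the remaining capacity of one active host
    for demand in sorted(vm_demands, reverse=True):
        for i, free in enumerate(hosts):
            if demand <= free:
                hosts[i] -= demand  # fits on an already-active host
                break
        else:
            hosts.append(host_capacity - demand)  # power on a new host
    return len(hosts)

# Six VMs with these CPU demands fit on 3 hosts of capacity 10
# instead of one host per VM -- the other hosts can be shut down.
active_hosts = consolidate([6, 5, 4, 3, 2, 2], host_capacity=10)
```

Real consolidation managers also weigh migration cost and performance interference before moving a running VM, which this sketch ignores.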

Advantages of Machine Learning with Cloud Computing

Although machine learning and cloud computing each have their own advantages, together they offer three core advantages:

1. The cloud works on the principle of 'pay for what you need'. Its pay-per-use model suits companies that wish to leverage ML capabilities for their business without heavy expenditure.
2. It provides the flexibility to work with machine learning functionality without requiring advanced data science skills.
3. It makes it easy to experiment with various ML technologies and to scale up as projects go into production and demand increases.

Benefits of Machine Learning in the Cloud

The cloud is the future of data science. It’s where machine learning will be done, and it’s where a
lot of big data analysis will happen. And that means a lot of benefits for companies.
1) It enables experimenting and testing multiple models

For one thing, the cloud allows you to scale your machine learning projects up and down as
needed. You can start with a small set of data points and add more as you get more confident in
your predictions.

Variable usage makes it easy for enterprises to experiment with machine learning capabilities
and scale up as projects go into production and demand increases.

You can also use machine learning to run experiments on different sets of data to see what works
best. This is something that’s difficult or impossible to do on your own server at home or in your
office building. And it’s something that requires a lot of time and effort if you want to do it
yourself. In short, the cloud drastically speeds up the machine learning lifecycle.

2) It’s inexpensive

Traditional machine learning isn't just complex and hard to set up: it's pricey. If you want to train and deploy large machine learning models, such as deep learning models, on your own servers, you'll need expensive GPU cards. This is particularly true with today's state-of-the-art models, such as China's natural-language model Wu Dao 2.0, with nearly 2 trillion parameters. With such models, the cloud is a must-have, not just a nice-to-have.

In order to scale your models to accommodate large-scale needs, you'll need high-end GPU units, which will remain largely unused during periods of low demand. In other words, you'll have expensive servers sitting around collecting dust while still requiring extensive maintenance.

On the other hand, when using machine learning in the cloud, you’re only paying for your
consumption, which works wonders for scalability. Whether you’re just personally
experimenting or servicing millions of customers, you can scale to any needs, and only pay for
what you use.
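The pay-for-consumption argument above is just arithmetic, and a short sketch makes it concrete. All prices here are hypothetical placeholders chosen for illustration; real GPU-hour and hardware prices vary widely by provider and vendor.

```python
def cloud_training_cost(gpu_hours, price_per_gpu_hour):
    """Pay only for the hours actually used."""
    return gpu_hours * price_per_gpu_hour

def on_prem_cost(hardware_price, monthly_upkeep, months):
    """Up-front hardware plus ongoing maintenance, regardless of utilization."""
    return hardware_price + monthly_upkeep * months

# Hypothetical figures: an occasional experimenter using 120 GPU-hours a year
# versus buying and maintaining a GPU server for that same year.
cloud = cloud_training_cost(gpu_hours=120, price_per_gpu_hour=3.0)
on_prem = on_prem_cost(hardware_price=15000, monthly_upkeep=200, months=12)
```

With these (made-up) numbers the cloud bill is a small fraction of the on-premises cost; the comparison flips only when the hardware is kept busy nearly around the clock, which is exactly the utilization point the text makes.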
3) It needs less technical knowledge

Building, managing, and maintaining powerful servers oneself is a complex task. With the cloud,
much of the complexity behind these tasks is handled by the cloud provider.

Popular cloud services like AWS, Microsoft Azure, and Google Cloud Platform offer machine learning options that don't require deep knowledge of AI, machine learning theory, or a large team of data scientists.

With the cloud, AI can be deployed in a matter of minutes. It also scales automatically, so you
don’t have to worry about the technical complexity of provisioning resources or managing
infrastructure.

4) Easy integration

Most popular cloud services also provide SDKs (software development kits) and APIs. These allow you to embed machine learning functionality directly into applications. They also support most programming languages.

With the cloud, you can integrate machine learning into your workflows quickly and easily. In
the past, machine learning models were difficult to integrate into existing applications. In today’s
cloud-native AI world, this is no longer the case. The Akkio AI platform provides end-to-end
AutoML automation, including model training, real-time inference, high-performance DevOps,
and more. Akkio’s no-code web application provides AI toolkits without the complexities of
traditional cloud environments, which require technical expertise in tools like Python,
TensorFlow, PyTorch, and Kubernetes.

5) It reduces time-to-value

Another important aspect of the cloud is that it reduces the time-to-value. Time-to-value is the
amount of time it takes from when you start a project to when you see results from it.
In traditional machine learning deployments, this process can take months or even years. With
the cloud, you can start seeing results in hours or days. That’s because you don’t have to
provision resources, manage infrastructure, or write code. You can simply upload your data and
start building models.

6) Access to more data

Data is the lifeblood of machine learning. The more data you have, the better your models will
be. And the cloud provides access to more data than ever before.

For example, if you’re building a predictive model for customer churn, you can access historical
customer data that’s stored in the cloud. This data can be used to train your machine learning
model so that it can make better predictions.

7) Security and privacy

When done right, machine learning in the cloud is secure and private. That’s because the data is
stored in the cloud provider’s secure data center.

The cloud provider is responsible for the security of the data center and the data that’s stored
there. This means that you don’t have to worry about building your own security infrastructure.

In addition, most cloud providers offer additional security features, such as encryption, to further
protect your data.

8) It frees up resources

Machine learning in the cloud frees up resources so that you can focus on other things. For
example, if you’re building a machine learning model to predict demand for a new product, you
can use the cloud to train and deploy the model. This frees up your time so that you can focus on
other things, such as marketing the product.
UNIT 2

Cloud Computing Architecture and Components


Cloud Computing Architecture is a combination of components required for a Cloud Computing
service. A Cloud computing architecture consists of several components like a frontend platform,
a backend platform or servers, a network or Internet service, and a cloud-based delivery service.

Cloud Computing Architecture


The Architecture of Cloud computing contains many different components. It includes Client
infrastructure, applications, services, runtime clouds, storage spaces, management, and security.
These are all the parts of a Cloud computing architecture.

Front End:

The client uses the front end, which contains the client-side interface and applications. Both of these components are needed to access the Cloud computing platform. The front end includes web browsers (Chrome, Firefox, Opera, etc.), thin clients, and mobile devices.

Back End:

The backend part helps you manage all the resources needed to provide Cloud computing
services. This Cloud architecture part includes a security mechanism, a large amount of data
storage, servers, virtual machines, traffic control mechanisms, etc.

Important Components of Cloud Computing Architecture
Here are some important components of Cloud computing architecture:

1. Client Infrastructure:
Client Infrastructure is a front-end component that provides a GUI. It helps users to interact with
the Cloud.

2. Application:
The application can be any software or platform which a client wants to access.

3. Service:
The service component manages which type of service you can access according to the client’s
requirements.

Three Cloud computing services are:

 Software as a Service (SaaS)


 Platform as a Service (PaaS)
 Infrastructure as a Service (IaaS)

4. Runtime Cloud:
Runtime cloud offers the execution and runtime environment to the virtual machines.

5. Storage:
Storage is another important Cloud computing architecture component. It provides a large
amount of storage capacity in the Cloud to store and manage data.

6. Infrastructure:
It offers services on the host level, network level, and application level. Cloud infrastructure
includes hardware and software components like servers, storage, network devices, virtualization
software, and various other storage resources that are needed to support the cloud computing
model.

7. Management:
This component manages components like application, service, runtime cloud, storage,
infrastructure, and other security matters in the backend. It also establishes coordination between
them.
8. Security:
Security in the backend refers to implementing the security mechanisms that keep Cloud systems, resources, files, and infrastructure secure for the end user.

9. Internet:
Internet connection acts as the bridge or medium between frontend and backend. It allows you to
establish the interaction and communication between the frontend and backend.

Benefits of Cloud Computing Architecture


Following are the cloud computing architecture benefits:

 Makes the overall Cloud computing system simpler.


 Helps to enhance your data processing.
 Provides high security.
 It has better disaster recovery.
 Offers good user accessibility.
 Significantly reduces IT operating costs.

Infrastructure as a Service | IaaS


 IaaS is also known as Hardware as a Service (HaaS). It is one of the layers of the cloud
computing platform. It allows customers to outsource their IT infrastructure, such as
servers, networking, processing, storage, virtual machines, and other resources.
Customers access these resources over the Internet using a pay-per-use model.
 In traditional hosting services, IT infrastructure was rented out for a specific period of
time, with pre-determined hardware configuration. The client paid for the configuration
and time, regardless of the actual use. With the help of the IaaS cloud computing
platform layer, clients can dynamically scale the configuration to meet changing
requirements and are billed only for the services actually used.
 IaaS cloud computing platform layer eliminates the need for every organization to
maintain the IT infrastructure.

IaaS is offered in three models: public, private, and hybrid cloud. In a private cloud, the infrastructure resides on the customer's premises. In a public cloud, it is located in the cloud vendor's data center. A hybrid cloud is a combination of the two, in which the customer selects the best of both public and private clouds.

IaaS provider provides the following services -

1. Compute: Computing as a Service includes virtual central processing units (CPUs) and virtual main memory for the VMs provisioned to end users.
2. Storage: IaaS provider provides back-end storage for storing files.
3. Network: Network as a Service (NaaS) provides networking components such as routers,
switches, and bridges for the VMs.
4. Load balancers: It provides load balancing capability at the infrastructure layer.
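The load-balancer service in item 4 can be illustrated with the simplest distribution policy, round robin. This is a toy model of what the infrastructure-layer service does, not any provider's API; the backend VM names are made up.

```python
import itertools

class RoundRobinBalancer:
    """Toy sketch of infrastructure-level load balancing:
    spread incoming requests evenly across a pool of backend VMs."""

    def __init__(self, backends):
        self._cycle = itertools.cycle(backends)  # endless round-robin iterator

    def route(self, request):
        """Return (backend, request): the VM chosen to serve this request."""
        return (next(self._cycle), request)

lb = RoundRobinBalancer(["vm-a", "vm-b", "vm-c"])
assignments = [lb.route(f"req-{i}")[0] for i in range(6)]
```

Real cloud load balancers add health checks (unhealthy VMs are skipped) and other policies such as least-connections, but round robin shows the core job: no single VM absorbs all the traffic.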

Characteristics of IaaS

There are the following characteristics of IaaS -

o Resources are available as a service


o Services are highly scalable
o Dynamic and flexible
o GUI and API-based access
o Automated administrative tasks

Example: DigitalOcean, Linode, Amazon Web Services (AWS), Microsoft Azure, Google
Compute Engine (GCE), Rackspace, and Cisco Metacloud.

Advantages of IaaS cloud computing layer

There are the following advantages of IaaS computing layer -

1. Shared infrastructure

IaaS allows multiple users to share the same physical infrastructure.

2. Web access to the resources

IaaS allows IT users to access resources over the internet.

3. Pay-as-per-use model

IaaS providers offer services on a pay-per-use basis; users pay only for what they have used.
4. Focus on the core business

IaaS lets organizations focus on their core business rather than on IT infrastructure.

5. On-demand scalability

On-demand scalability is one of the biggest advantages of IaaS. With IaaS, users do not need to worry about upgrading software or troubleshooting issues related to hardware components.

Disadvantages of IaaS cloud computing layer

1. Security

Security is one of the biggest issues in IaaS. Most IaaS providers cannot guarantee 100% security.

2. Maintenance & Upgrade

Although IaaS service providers maintain the software, they do not upgrade the software for some organizations.

3. Interoperability issues

It is difficult to migrate a VM from one IaaS provider to another, so customers might face problems related to vendor lock-in.

Platform as a Service | PaaS

Platform as a Service (PaaS) provides a runtime environment. It allows programmers to easily create, test, run, and deploy web applications. You can purchase these applications from a cloud service provider on a pay-per-use basis and access them over an Internet connection. In PaaS, back-end scalability is managed by the cloud service provider, so end users do not need to worry about managing the infrastructure.

PaaS includes infrastructure (servers, storage, and networking) and platform (middleware,
development tools, database management systems, business intelligence, and more) to support
the web application life cycle.

Characteristics of PaaS

There are the following characteristics of PaaS -

o Accessible to various users via the same development application.


o Integrates with web services and databases.
o Built on virtualization technology, so resources can easily be scaled up or down as the organization's needs change.
o Supports multiple languages and frameworks.
o Provides an ability to "Auto-scale".

Example: AWS Elastic Beanstalk, Windows Azure, Heroku, Force.com, Google App Engine,
Apache Stratos, Magento Commerce Cloud, and OpenShift.
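The "Auto-scale" characteristic above usually amounts to a simple control rule: size the instance pool so that average utilization lands near a target. The sketch below is a generic target-tracking rule, not any specific PaaS vendor's algorithm; the target and limits are assumed values.

```python
import math

def desired_instances(current, cpu_utilization, target=0.6, min_n=1, max_n=20):
    """Proportional auto-scaling rule: if the pool is running hotter than
    the target, add instances; if it is running cooler, remove them."""
    wanted = math.ceil(current * cpu_utilization / target)
    return max(min_n, min(max_n, wanted))  # clamp to the allowed pool size

# 4 instances at 90% average CPU, targeting 60% -> scale out to 6.
n = desired_instances(current=4, cpu_utilization=0.9)
```

The min/max clamp is what keeps a traffic spike (or an idle period) from scaling the application beyond what the organization is willing to pay for.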

PaaS provider provides the following services –

1. Programming languages

PaaS providers provide various programming languages for the developers to develop the
applications. Some popular programming languages provided by PaaS providers are Java, PHP,
Ruby, Perl, and Go.

2. Application frameworks

PaaS providers provide application frameworks to simplify application development.
Some popular application frameworks provided by PaaS providers are Node.js, Drupal, Joomla,
WordPress, Spring, Play, Rack, and Zend.

3. Databases

PaaS providers provide various databases such as ClearDB, PostgreSQL, MongoDB, and Redis
to communicate with the applications.

4. Other tools

PaaS providers provide various other tools that are required to develop, test, and deploy the
applications.

Advantages of PaaS

There are the following advantages of PaaS -

1) Simplified Development

PaaS allows developers to focus on development and innovation without worrying about
infrastructure management.

2) Lower risk
No need for up-front investment in hardware and software. Developers only need a PC and an
internet connection to start building applications.

3) Prebuilt business functionality

Some PaaS vendors also provide predefined business functionality, so users can avoid building everything from scratch and can start their projects directly.

4) Instant community

PaaS vendors frequently provide online communities where developers can get ideas, share experiences, and seek advice from others.

5) Scalability

Applications deployed can scale from one to thousands of users without any changes to the
applications.

Disadvantages of PaaS cloud computing layer

1) Vendor lock-in

One has to write the applications according to the platform provided by the PaaS vendor, so the
migration of an application to another PaaS vendor would be a problem.

2) Data Privacy

Corporate data, whether critical or not, must remain private; if it is not located within the walls of the company, there can be a risk in terms of data privacy.

3) Integration with the rest of the systems applications

It may happen that some applications are local and some are in the cloud, so there will be increased complexity when we want to combine data in the cloud with local data.

Software as a Service | SaaS

SaaS is also known as "On-Demand Software". It is a software distribution model in which services are hosted by a cloud service provider. These services are available to end users over the internet, so end users do not need to install any software on their devices to access them.

There are the following services provided by SaaS providers -


Business Services - SaaS providers offer various business services to help start up a business. These SaaS business services include ERP (Enterprise Resource Planning), CRM (Customer Relationship Management), billing, and sales.

Document Management - SaaS document management is a software application offered by a third party (the SaaS provider) to create, manage, and track electronic documents.

Example: Slack, Samepage, Box, and Zoho Forms.

Social Networks - Social networking sites are used by the general public, so social networking service providers use SaaS for convenience and to handle the general public's information.

Mail Services - To handle an unpredictable number of users and the load on e-mail services, many e-mail providers offer their services using SaaS.

Advantages of SaaS cloud computing layer

1) SaaS is easy to buy

SaaS pricing is based on a monthly or annual subscription fee, so it allows organizations to access business functionality at a low cost, less than that of licensed applications.

Unlike traditional software, which is sold as a license with an up-front cost (and often an optional ongoing support fee), SaaS providers generally price their applications with a subscription fee, most commonly monthly or annual.

2. One to Many

SaaS services are offered as a one-to-many model means a single instance of the application is
shared by multiple users.

3. Less hardware required for SaaS

The software is hosted remotely, so organizations do not need to invest in additional hardware.

4. Low maintenance required for SaaS

Software as a service removes the need for installation, set-up, and daily maintenance on the organization's side. The initial set-up cost for SaaS is typically lower than for enterprise software. SaaS vendors price their applications based on usage parameters, such as the number of users of the application, so SaaS is easy to monitor and updates are applied automatically.
5. No special software or hardware versions required

All users have the same version of the software and typically access it through a web browser. SaaS reduces IT support costs by outsourcing hardware and software maintenance and support to the SaaS provider.

6. Multidevice support

SaaS services can be accessed from any device such as desktops, laptops, tablets, phones, and
thin clients.

7. API Integration

SaaS services easily integrate with other software or services through standard APIs.

8. No client-side installation

SaaS services are accessed directly from the service provider over an internet connection, so no client-side software installation is required.

Disadvantages of SaaS cloud computing layer

1) Security

Since data is stored in the cloud, security may be a concern for some users. However, cloud deployments are not necessarily less secure than in-house deployments.

2) Latency issue

Since data and applications are stored in the cloud at a variable distance from the end-user, there
is a possibility that there may be greater latency when interacting with the application compared
to local deployment. Therefore, the SaaS model is not suitable for applications whose demand
response time is in milliseconds.

3) Total Dependency on Internet

Without an internet connection, most SaaS applications are not usable.

4) Switching between SaaS vendors is difficult

Switching SaaS vendors involves the difficult and slow task of transferring very large data
files over the internet and then converting and importing them into the new SaaS application.

The table below shows some popular SaaS providers and the services they offer.

Provider             | Services
Salesforce.com       | On-demand CRM solutions
Microsoft Office 365 | Online office suite
Google Apps          | Gmail, Google Calendar, Docs, and Sites
NetSuite             | ERP, accounting, order management, CRM, Professional Services Automation (PSA), and e-commerce applications
GoToMeeting          | Online meeting and video-conferencing software
Constant Contact     | E-mail marketing, online survey, and event marketing
Oracle CRM           | CRM applications
Workday, Inc.        | Human capital management, payroll, and financial management

What is a Cloud Deployment Model?


Cloud Deployment Model functions as a virtual computing environment with a deployment
architecture that varies depending on the amount of data you want to store and who has access
to the infrastructure.
Types of Cloud Computing Deployment Models
The cloud deployment model identifies the specific type of cloud environment based on
ownership, scale, and access, as well as the cloud’s nature and purpose. The location of the
servers you’re utilizing and who controls them are defined by a cloud deployment model. It
specifies how your cloud infrastructure will look, what you can change, and whether you will
be given services or will have to create everything yourself. Relationships between the
infrastructure and your users are also defined by cloud deployment types.
Different types of cloud computing deployment models are described below.
 Public Cloud
 Private Cloud
 Hybrid Cloud
 Community Cloud
 Multi-Cloud
Public Cloud

The public cloud makes systems and services accessible to anybody, and it may be less secure
because it is open to everyone. In this model, cloud infrastructure services are provided over
the internet to the general public or major industry groups, and the infrastructure is owned by
the entity that delivers the cloud services, not by the consumer. It is a type of cloud hosting
that allows customers and users to easily access systems and services, with service providers
supplying services to a variety of customers. In this arrangement, storage, backup, and
retrieval services are provided for free, as a subscription, or on a per-user basis. Google App
Engine is one example.

Public Cloud
Advantages of the Public Cloud Model
 Minimal Investment: Because it is a pay-per-use service, there is no substantial
upfront fee, making it excellent for enterprises that require immediate access to resources.
 No setup cost: The entire infrastructure is fully subsidized by the cloud service
providers, thus there is no need to set up any hardware.
 Infrastructure Management is not required: Using the public cloud does not
necessitate infrastructure management.
 No maintenance: The maintenance work is done by the service provider (not users).
 Dynamic Scalability: To fulfill your company’s needs, on-demand resources are
accessible.
Disadvantages of the Public Cloud Model
 Less secure: Public cloud is less secure as resources are public so there is no guarantee
of high-level security.
 Low customization: It is accessed by the general public, so it cannot be customized to
individual requirements.
Private Cloud

The private cloud deployment model is the exact opposite of the public cloud deployment
model. It’s a one-on-one environment for a single user (customer). There is no need to share
your hardware with anyone else. The distinction between private and public clouds is in how
you handle all of the hardware. It is also called the “internal cloud” & it refers to the ability to
access systems and services within a given border or organization. The cloud platform is
implemented in a cloud-based secure environment that is protected by powerful firewalls and
under the supervision of an organization’s IT department. The private cloud gives greater
flexibility of control over cloud resources.

Private Cloud
Advantages of the Private Cloud Model
 Better Control: You are the sole owner of the property. You gain complete command
over service integration, IT operations, policies, and user behavior.
 Data Security and Privacy: It’s suitable for storing corporate information to which
only authorized staff have access. By segmenting resources within the same infrastructure,
improved access and security can be achieved.
 Supports Legacy Systems: This approach is designed to work with legacy systems that
are unable to access the public cloud.
 Customization: Unlike a public cloud deployment, a private cloud allows a company
to tailor its solution to meet its specific needs.
Disadvantages of the Private Cloud Model
 Less scalable: Private clouds scale only within a certain range, as they serve fewer
clients.
 Costly: Private clouds are more costly as they provide personalized facilities.
Hybrid Cloud

By bridging the public and private worlds with a layer of proprietary software, hybrid cloud
computing gives the best of both worlds. With a hybrid solution, you may host the app in a
safe environment while taking advantage of the public cloud’s cost savings. Organizations can
move data and applications between different clouds using a combination of two or more cloud
deployment methods, depending on their needs.

Hybrid Cloud
Advantages of the Hybrid Cloud Model
 Flexibility and control: Businesses with more flexibility can design personalized
solutions that meet their particular needs.
 Cost: Because public clouds provide scalability, you’ll only be responsible for paying
for the extra capacity if you require it.
 Security: Because data is properly separated, the chances of data theft by attackers are
considerably reduced.
Disadvantages of the Hybrid Cloud Model
 Difficult to manage: Hybrid clouds are difficult to manage because they combine both
public and private clouds, which makes the setup complex.
 Slow data transmission: Data transmission in the hybrid cloud takes place through the
public cloud, so latency can occur.
Community Cloud

It allows systems and services to be accessible by a group of organizations. It is a distributed
system created by integrating the services of different clouds to address the specific needs of
a community, industry, or business. The community's infrastructure can be shared between
organizations that have shared concerns or tasks. It is generally managed by a third party or by
one or more organizations in the community.

Community Cloud
Advantages of the Community Cloud Model
 Cost Effective: It is cost-effective because the cloud is shared by multiple
organizations or communities.
 Security: Community cloud provides better security.
 Shared resources: It allows you to share resources, infrastructure, etc. with multiple
organizations.
 Collaboration and data sharing: It is suitable for both collaboration and data sharing.
Disadvantages of the Community Cloud Model
 Limited Scalability: Community cloud is relatively less scalable as many
organizations share the same resources according to their collaborative interests.
 Rigid in customization: Because data and resources are shared among different
organizations according to their mutual interests, an organization that wants changes to
suit its own needs cannot make them, as doing so would impact the other organizations.
Factors                     | Public Cloud   | Private Cloud                                   | Community Cloud                                 | Hybrid Cloud
Initial Setup               | Easy           | Complex, requires a professional team to set up | Complex, requires a professional team to set up | Complex, requires a professional team to set up
Scalability and Flexibility | High           | High                                            | Fixed                                           | High
Cost-Comparison             | Cost-effective | Costly                                          | Distributed cost among members                  | Between public and private cloud
Reliability                 | Low            | Low                                             | High                                            | High
Data Security               | Low            | High                                            | High                                            | High
Data Privacy                | Low            | High                                            | High                                            | High

Service level agreements in Cloud computing


A Service Level Agreement (SLA) is the performance contract negotiated between the cloud
services provider and the client. In the early days of cloud computing, all Service Level
Agreements were negotiated individually between the client and the service provider. Nowadays,
with the rise of large utility-like cloud computing providers, most Service Level Agreements are
standardized until a client becomes a large consumer of cloud services. Service level agreements
are also defined at different levels, which are mentioned below:
 Customer-based SLA
 Service-based SLA
 Multilevel SLA
Few Service Level Agreements are enforceable as legal contracts; most are agreements more along
the lines of an Operating Level Agreement (OLA) and may not have the force of law. It is wise to
have an attorney review the documents before making a major commitment to a cloud service
provider. Service Level Agreements usually specify the following parameters:
1. Availability of the Service (uptime)
2. Latency or the response time
3. Reliability of service components
4. Accountability of each party
5. Warranties
If a cloud service provider fails to meet the stated minimum targets, the provider has to pay a
penalty to the cloud service consumer as per the agreement. In this way, Service Level
Agreements are like insurance policies, in which the corporation has to pay as per the agreement
if any casualty occurs. Microsoft publishes the Service Level Agreements linked with the Windows
Azure Platform components, which is representative of industry practice for cloud service
vendors. Each individual component has its own Service Level Agreement.
Below are two major Service Level Agreements (SLA) described:
1. Windows Azure SLA – Windows Azure has separate SLAs for compute and storage. For
compute, there is a guarantee that when a client deploys two or more role instances in
separate fault and upgrade domains, the client's internet-facing roles will have external
connectivity at least 99.95% of the time. Moreover, all of the client's role instances are
monitored, and there is a guarantee that, 99.9% of the time, it will be detected when a role
instance's process is not running, so that corrective action can be initiated.
2. SQL Azure SLA – SQL Azure clients will have connectivity between the database and the
internet gateway of SQL Azure. SQL Azure guarantees a “Monthly Availability” of 99.9%. The
Monthly Availability Proportion for a particular tenant database is the ratio of the time
the database was available to customers to the total time in a month, measured in intervals
of minutes over a 30-day monthly cycle. Availability is always calculated over a complete
month. A portion of time is marked as unavailable if the customer's attempts to connect to a
database are denied by the SQL Azure gateway.
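The "Monthly Availability" calculation described above can be sketched as a simple ratio of available minutes to the total minutes in a 30-day cycle. The function names and boundary handling are illustrative, not Microsoft's actual billing logic.

```python
# Sketch of the "Monthly Availability" ratio described above: the time the
# database was available divided by the total time in a 30-day month,
# measured in minutes. Names and thresholds are illustrative only.

MINUTES_PER_MONTH = 30 * 24 * 60  # 30-day monthly cycle = 43,200 minutes

def monthly_availability(unavailable_minutes):
    """Return availability as a fraction of the 30-day month."""
    available = MINUTES_PER_MONTH - unavailable_minutes
    return available / MINUTES_PER_MONTH

def meets_sla(unavailable_minutes, target=0.999):
    """True if availability meets or exceeds the 99.9% target."""
    return monthly_availability(unavailable_minutes) >= target

# 43.2 minutes of downtime per month sits exactly at the 99.9% boundary.
print(round(monthly_availability(43.2), 4))  # 0.999
print(meets_sla(40))   # True
print(meets_sla(60))   # False
```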
Service Level Agreements are based on the usage model. Frequently, cloud providers charge
pay-per-use resources at a premium and deploy standard Service Level Agreements only for that
purpose. Clients can also subscribe at different levels that guarantee access to a particular
amount of purchased resources. The Service Level Agreements attached to a subscription often
offer different terms and conditions. If a client requires guaranteed access to a particular
level of resources, the client needs to subscribe to a service; a pure usage model may not
deliver that level of access under peak-load conditions.
SLA Lifecycle

Steps in SLA Lifecycle

1. Discover service provider: This step involves identifying a service provider that can
meet the needs of the organization and has the capability to provide the required service.
This can be done through research, requesting proposals, or reaching out to vendors.
2. Define SLA: In this step, the service level requirements are defined and agreed upon
between the service provider and the organization. This includes defining the service level
objectives, metrics, and targets that will be used to measure the performance of the service
provider.
3. Establish Agreement: After the service level requirements have been defined, an
agreement is established between the organization and the service provider outlining the
terms and conditions of the service. This agreement should include the SLA, any penalties
for non-compliance, and the process for monitoring and reporting on the service level
objectives.
4. Monitor SLA violation: This step involves regularly monitoring the service level
objectives to ensure that the service provider is meeting their commitments. If any
violations are identified, they should be reported and addressed in a timely manner.
5. Terminate SLA: If the service provider is unable to meet the service level objectives,
or if the organization is not satisfied with the service provided, the SLA can be terminated.
This can be done through mutual agreement or through the enforcement of penalties for
non-compliance.
6. Enforce penalties for SLA Violation: If the service provider is found to be in
violation of the SLA, penalties can be imposed as outlined in the agreement. These
penalties can include financial penalties, reduced service level objectives, or termination of
the agreement.
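Steps 4 and 6 of the lifecycle above, monitoring for SLA violations and enforcing penalties, can be sketched as follows. The metric names, targets, and penalty amounts are invented for the example.

```python
# Illustrative sketch of SLA-lifecycle steps 4 and 6: compare measured
# service levels against the agreed objectives, and accumulate penalties
# for any violations. Metrics, targets, and amounts are made up.

sla = {
    "uptime_pct":  {"target": 99.95, "higher_is_better": True,  "penalty": 1000},
    "response_ms": {"target": 200,   "higher_is_better": False, "penalty": 500},
}

def check_sla(measurements):
    """Return (violations, total_penalty) for one monitoring period."""
    violations, penalty = [], 0
    for metric, terms in sla.items():
        value = measurements[metric]
        if terms["higher_is_better"]:
            ok = value >= terms["target"]
        else:
            ok = value <= terms["target"]
        if not ok:
            violations.append(metric)
            penalty += terms["penalty"]
    return violations, penalty

# One period where uptime met the target but response time did not:
v, p = check_sla({"uptime_pct": 99.97, "response_ms": 350})
print(v, p)  # ['response_ms'] 500
```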

Advantages of SLA

1. Improved communication: A better framework for communication between the service
provider and the client is established through SLAs, which explicitly outline the degree of
service that a customer may expect. This helps ensure that both parties share the same
understanding of service expectations.
2. Increased accountability: SLAs give customers a way to hold service providers
accountable if their services fall short of the agreed-upon standard. They also hold service
providers responsible for delivering a specific level of service.
3. Better alignment with business goals: SLAs make sure that the service being given is
in line with the goals of the client by laying down the performance goals and service level
requirements that the service provider must satisfy.
4. Reduced downtime: SLAs can help to limit the effects of service disruptions by
creating explicit protocols for issue management and resolution.
5. Better cost management: By specifying the level of service that the customer can
anticipate and providing a way to track and evaluate performance, SLAs can help to limit
costs. Making sure the consumer is getting the best value for their money can be made
easier by doing this.

Disadvantages of SLA

1. Complexity: SLAs can be complex to create and maintain, and may require significant
resources to implement and enforce.
2. Rigidity: SLAs can be rigid and may not be flexible enough to accommodate changing
business needs or service requirements.
3. Limited service options: SLAs can limit the service options available to the customer,
as the service provider may only be able to offer the specific services outlined in the
agreement.
4. Misaligned incentives: SLAs may misalign incentives between the service provider
and the customer, as the provider may focus on meeting the agreed-upon service levels
rather than on providing the best service possible.
5. Limited liability: SLAs are often not legally binding contracts and frequently limit the
liability of the service provider in case of service failure.
What are Service Level Objectives (SLOs)?
A Service Level Objective (SLO) serves as a benchmark for indicators, parameters, or metrics
defined with specific service level targets. The objectives may be an optimal range or a specific
value for each service function or process that constitutes a cloud service.

The SLOs can also be referred to as measurable characteristics of an SLA, such as Quality of
Service (QoS) aspects that are achievable, measurable, meaningful, and acceptable for both
service providers and customers. An SLO is agreed within the SLA as an obligation, with
validity under specific conditions (such as time period) and is expressed within an SLA
document.

The measure of responsiveness and app performance may be defined through numeric indicators
such as:

 Request latency
 Batch throughput
 Failures per second
 Other metrics

These indicators describe the service level at any moment in time.

To understand the overall performance in context of the agreed SLA contract or availability
requirements, you’ll have to analyze these numbers over a longer time period. Mathematically,
the SLO analysis involves:

1. Aggregating the service level indicator performance over a long time period.
2. Comparing the result with a numerical target for system availability.
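A minimal sketch of these two steps, assuming per-request success/failure samples as the service level indicator and an illustrative 99.9% availability target:

```python
# Sketch of the two SLO-analysis steps above: (1) aggregate a service
# level indicator (here, per-request success) over a longer window, and
# (2) compare the aggregate with the numerical availability target.
# The sample data and the 99.9% target are illustrative.

def slo_attainment(request_outcomes):
    """Aggregate per-request SLI samples into one availability number."""
    successes = sum(1 for ok in request_outcomes if ok)
    return successes / len(request_outcomes)

def slo_met(request_outcomes, target=0.999):
    """Compare the aggregated indicator with the numerical target."""
    return slo_attainment(request_outcomes) >= target

# 10,000 requests with 5 failures -> 99.95% availability, above target.
outcomes = [True] * 9995 + [False] * 5
print(slo_attainment(outcomes))  # 0.9995
print(slo_met(outcomes))         # True
```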

Admission control is a validation process in communication systems where a check is performed
before a connection is established to see if current resources are sufficient for the proposed
connection.
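A minimal sketch of such a check, using a single illustrative bandwidth dimension: a new connection is admitted only if the remaining capacity covers its proposed demand.

```python
# Minimal sketch of the admission-control check described above: before a
# new connection is established, verify that the remaining capacity can
# support its proposed resource demand. Units and limits are illustrative.

class AdmissionController:
    def __init__(self, capacity_mbps):
        self.capacity = capacity_mbps
        self.allocated = 0

    def admit(self, demand_mbps):
        """Admit the connection only if current resources are sufficient."""
        if self.allocated + demand_mbps <= self.capacity:
            self.allocated += demand_mbps
            return True
        return False  # reject: would overload existing connections

ac = AdmissionController(capacity_mbps=100)
print(ac.admit(60))  # True  - fits within capacity
print(ac.admit(30))  # True  - 90 of 100 Mbps now in use
print(ac.admit(20))  # False - would exceed capacity, so it is rejected
```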

Load balancing in Cloud Computing


Load balancing is an essential technique used in cloud computing to optimize resource
utilization and ensure that no single resource is overburdened with traffic. It is a process of
distributing workloads across multiple computing resources, such as servers, virtual machines,
or containers, to achieve better performance, availability, and scalability.
In cloud computing, load balancing can be implemented at various levels, including the
network layer, application layer, and database layer. The most common load balancing
techniques used in cloud computing are:
1. Network Load Balancing: This technique is used to balance the network traffic across
multiple servers or instances. It is implemented at the network layer and ensures that the
incoming traffic is distributed evenly across the available servers.
2. Application Load Balancing: This technique is used to balance the workload across
multiple instances of an application. It is implemented at the application layer and ensures
that each instance receives an equal share of the incoming requests.
3. Database Load Balancing: This technique is used to balance the workload across
multiple database servers. It is implemented at the database layer and ensures that the
incoming queries are distributed evenly across the available database servers.
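The even distribution these techniques aim for can be illustrated with a toy round-robin balancer, which hands each incoming request to the next server in rotation. The server names are made up for the example.

```python
# Toy sketch of the even distribution the techniques above describe: a
# round-robin balancer hands each incoming request to the next server in
# turn, so no single server is overburdened. Server names are made up.

import itertools

class RoundRobinBalancer:
    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)

    def route(self, request):
        """Pick the next server in rotation for this request."""
        return next(self._cycle)

lb = RoundRobinBalancer(["server-a", "server-b", "server-c"])
assignments = [lb.route(f"req-{i}") for i in range(6)]
print(assignments)
# ['server-a', 'server-b', 'server-c', 'server-a', 'server-b', 'server-c']
```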

Advantages:

1. Improved Performance: Load balancing helps to distribute the workload across multiple
resources, which reduces the load on each resource and improves the overall performance
of the system.
2. High Availability: Load balancing ensures that there is no single point of failure in the
system, which provides high availability and fault tolerance to handle server failures.
3. Scalability: Load balancing makes it easier to scale resources up or down as needed,
which helps to handle spikes in traffic or changes in demand.
4. Efficient Resource Utilization: Load balancing ensures that resources are used
efficiently, which reduces wastage and helps to optimize costs.

Disadvantages:

1. Complexity: Implementing load balancing in cloud computing can be complex,
especially when dealing with large-scale systems. It requires careful planning and
configuration to ensure that it works effectively.
2. Cost: Implementing load balancing can add to the overall cost of cloud computing,
especially when using specialized hardware or software.
3. Single Point of Failure: While load balancing helps to reduce the risk of a single point
of failure, it can also become a single point of failure if not implemented correctly.
4. Security: Load balancing can introduce security risks if not implemented correctly, such
as allowing unauthorized access or exposing sensitive data.
There are two elementary solutions to overcome the problem of server overloading:
 First is a single-server solution, in which the server is upgraded to a higher-performance
server. However, the new server may also become overloaded soon, demanding another upgrade.
Moreover, the upgrading process is arduous and expensive.
 Second is a multiple-server solution, in which a scalable service system is built on a
cluster of servers. It is more cost-effective, as well as more scalable, to build a server
cluster system for network services.
Load balancing solutions can be categorized into two types –
1. Software-based load balancers: Software-based load balancers run on standard
hardware (desktop, PCs) and standard operating systems.
2. Hardware-based load balancers: Hardware-based load balancers are dedicated boxes
which include Application Specific Integrated Circuits (ASICs) adapted for a particular use.
ASICs allow high-speed processing of network traffic and are frequently used for
transport-level load balancing, because hardware-based load balancing is faster than a
software solution.
Load Balancers –
1. Direct Routing Request Dispatching Technique
2. Dispatcher-Based Load Balancing Cluster
3. Linux Virtual Load Balancer
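As an illustration of the dispatcher-based approach (item 2 above), a least-connections policy sends each new request to the server with the fewest active connections. This is one common dispatching strategy; the sketch below is illustrative, not the Linux Virtual Server implementation.

```python
# Hedged sketch of a dispatcher-based load balancer using the
# "least connections" policy: the dispatcher tracks active connections
# per server and sends each new request to the least-loaded server.
# Server names and bookkeeping details are illustrative.

class LeastConnectionsDispatcher:
    def __init__(self, servers):
        self.active = {s: 0 for s in servers}  # server -> open connections

    def dispatch(self):
        """Choose the server with the fewest active connections."""
        server = min(self.active, key=self.active.get)
        self.active[server] += 1
        return server

    def finish(self, server):
        """Record that a connection on this server has completed."""
        self.active[server] -= 1

d = LeastConnectionsDispatcher(["s1", "s2"])
print(d.dispatch())  # s1 (both idle; ties go to the first server)
print(d.dispatch())  # s2
d.finish("s1")       # s1 completes its connection
print(d.dispatch())  # s1 again - now the least loaded
```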

Capacity allocation means allocating resources to individual instances, where an instance is an
activation of a service on behalf of a cloud user. Locating resources subject to multiple global
optimization constraints requires a search in a very large search space. Capacity allocation is
even more challenging when the state of individual servers changes rapidly.
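As a toy illustration of the allocation problem, a first-fit heuristic places each new instance on the first server with enough free capacity; real allocators must search a far larger space with multiple global constraints. The single CPU dimension and server names are assumptions for the example.

```python
# Illustrative first-fit sketch of capacity allocation: place each new
# instance on the first server with enough free capacity. Real allocators
# search a much larger space under multiple global constraints; this toy
# uses a single CPU dimension with made-up server names.

def first_fit(servers, demand):
    """servers: {name: free_cpu}. Returns the chosen server, or None."""
    for name, free in servers.items():
        if free >= demand:
            servers[name] = free - demand  # reserve capacity
            return name
    return None  # no server can host the instance

servers = {"s1": 4, "s2": 8}
print(first_fit(servers, 6))  # s2 (s1 lacks the capacity)
print(first_fit(servers, 4))  # s1
print(first_fit(servers, 8))  # None - nothing that large remains
```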

Energy optimization is correlated with load balancing and affects the cost of providing the
services; both can be done locally, but global load balancing and energy optimization policies
encounter the same difficulties as capacity allocation.

Quality of service (QoS) is probably the most challenging aspect of resource management and,
at the same time, possibly the most critical for the future of cloud computing.
