CCD Notes
In the context of cloud computing, vendor lock-in refers to a situation in which a client becomes
dependent on a particular cloud provider to meet its computing needs. The client is not able to
readily switch to another provider without incurring large costs or experiencing business
disruptions. This might occur when a customer has committed considerable resources to a
particular cloud platform, such as building applications or migrating data there, and is unable to
switch platforms without paying a hefty price. Vendor lock-in can also happen when a customer
relies on proprietary technology or services that are offered by only one particular cloud
provider. This may restrict the customer's ability to change providers and may give the provider
a powerful negotiating position on pricing and other issues.
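A common way to limit lock-in is to keep provider-specific code behind an interface that the rest of the application depends on. Below is a minimal Python sketch of that idea; the `ObjectStore` interface and the in-memory adapter are hypothetical names used for illustration, not any real provider's SDK.

```python
# Sketch: isolating provider-specific code behind a neutral interface
# so the application does not depend on one vendor's proprietary SDK.
from abc import ABC, abstractmethod


class ObjectStore(ABC):
    """Provider-neutral interface the application codes against."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...


class LocalStore(ObjectStore):
    """In-memory stand-in; a real adapter would wrap a vendor SDK."""

    def __init__(self) -> None:
        self._blobs: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = data

    def get(self, key: str) -> bytes:
        return self._blobs[key]


def save_report(store: ObjectStore) -> None:
    # Application logic sees only the interface, so changing providers
    # means writing one new adapter rather than rewriting the app.
    store.put("reports/q1.txt", b"quarterly figures")


save_report(LocalStore())
```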
The Problem with Cloud-Computing Standardization
Lack of standards could make cloud computing trickier to use. It could also restrict
implementation by limiting interoperability among cloud platforms and causing inconsistency in
areas such as security. For example, the lack of standardization could keep a customer trying to
switch from a private to a public cloud from doing so as seamlessly as switching browsers or
e-mail systems. In addition, it would keep users from knowing the basic capabilities they could
expect from any cloud service.
Many of today's in-progress standards are based in part on the US National Institute of
Standards and Technology's Special Publication 800-145, a document called "The NIST
Definition of Cloud Computing (Draft)."
A key standardization issue involves virtualization, which plays a critical role in most cloud-
computing approaches.
Factors Affecting Availability:
The cloud service's availability, and its ability to recover from an outage, depend on several
factors, including the cloud service provider's data center architecture, application
architecture, hosting location redundancy, diversity of Internet service providers (ISPs), and
data storage architecture.
Following is a list of the major factors:
o Redundant design of the SaaS and PaaS applications.
o A fault-tolerant cloud service data center architecture.
o Good network connectivity and geographic redundancy, which can withstand disasters in
most cases.
o Customers of the cloud service should respond to outages quickly, working with the
support team of the cloud service provider.
o Sometimes an outage affects only a specific region or area of the cloud service, which
makes those situations harder to troubleshoot.
o Reliability of the software and hardware used in delivering the cloud services.
o An efficient network infrastructure that can cope with DDoS (distributed denial-of-service)
attacks on the cloud service.
o Proper security against internal and external threats, e.g., privileged users abusing their
privileges.
o Regular testing and maintenance of the cloud infrastructure and applications, to help
identify and fix issues before they cause downtime.
o Proper capacity planning, to ensure that the cloud service can handle peak traffic and
usage without becoming overloaded.
o Adequate backups and disaster recovery plans, to help minimize the impact of outages
or data loss incidents.
o Monitoring tools and alerts, to help detect and respond to issues quickly, reducing
downtime and improving overall availability (a minimal probe is sketched after this list).
o Compliance with industry standards and regulations, to help minimize the risk of
security breaches and downtime due to compliance issues.
o Continuous updates and patches to the cloud infrastructure and applications, to help
address vulnerabilities and improve overall security and availability.
o Transparency and communication with customers during outages, to help manage
expectations and maintain trust in the cloud service provider.
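As referenced in the monitoring item above, here is a minimal sketch of an availability probe. The health-check URL is a hypothetical placeholder; a real deployment would feed such checks into an alerting system rather than printing.

```python
# Minimal availability probe: retry a health endpoint and raise an alert.
import urllib.error
import urllib.request

HEALTH_URL = "https://example.com/health"  # hypothetical endpoint


def is_available(url: str, retries: int = 3, timeout: float = 2.0) -> bool:
    """Return True if the service answers any attempt with HTTP 200."""
    for _ in range(retries):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, TimeoutError):
            continue  # transient failure; try again
    return False


if not is_available(HEALTH_URL):
    print("ALERT: service unreachable; open an incident with the provider")
```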
Fault tolerance in cloud computing means creating a blueprint for work to continue whenever
some parts are down or unavailable. It helps enterprises evaluate their infrastructure needs and
requirements, and provides services in case the respective device becomes unavailable for some
reason.
It does not mean that the alternative system can provide 100% of the full service. Still, the
concept is to keep the system usable and, most importantly, operating at a reasonable level. This
is important if enterprises are to keep growing continuously and increase their productivity
levels.
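A minimal sketch of the fault-tolerance idea described above: keep the system usable at a reduced level by trying the primary endpoint and falling back to a replica. The endpoint URLs are hypothetical.

```python
# Failover sketch: try the primary, fall back to a replica. Real systems
# would add health checks and backoff rather than a single ordered list.
import urllib.error
import urllib.request

ENDPOINTS = [
    "https://primary.example.com/api",  # preferred instance
    "https://replica.example.com/api",  # degraded but usable fallback
]


def fetch_with_failover(urls: list[str]) -> bytes:
    last_error: Exception | None = None
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=2.0) as resp:
                return resp.read()
        except urllib.error.URLError as exc:
            last_error = exc  # remember the failure, try the next replica
    raise RuntimeError("all replicas unavailable") from last_error
```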
Cloud-based backup and recovery capabilities help you back up and restore business-critical
data if it is ever compromised. Thanks to its high adaptability, cloud technology enables
efficient disaster recovery, irrespective of the workload's nature or severity. Data is kept in a
virtual storage environment designed for high availability. The service is available on demand,
enabling companies of various sizes to tailor Disaster Recovery (DR) solutions to their existing
requirements.
Cloud disaster recovery (CDR) is simple to configure and maintain, as opposed to conventional
alternatives. Companies no longer have to spend a lot of time transferring data backups from
their in-house databases or hard drives in order to recover after a disaster. The cloud streamlines
these procedures and speeds up data recovery.
Cloud Disaster Recovery (CDR) is based on a resilient service that lets you recover full
operations after a catastrophe and offers remote access to your systems in a protected virtual
environment.
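As one concrete flavor of the cloud backup described above, here is a short sketch using AWS S3 via boto3. It assumes boto3 is installed and AWS credentials are configured; the bucket name is a hypothetical placeholder.

```python
# Backup sketch: upload a local file to cloud object storage under a
# timestamped key, so any point-in-time copy can be restored later.
import datetime

import boto3

s3 = boto3.client("s3")


def backup_file(path: str, bucket: str = "example-dr-backups") -> str:
    """Upload a local file under a timestamped key and return the key."""
    stamp = datetime.datetime.utcnow().strftime("%Y%m%dT%H%M%SZ")
    key = f"backups/{stamp}/{path}"
    s3.upload_file(path, bucket, key)
    return key


# Restoring after a disaster is the mirror image:
# s3.download_file("example-dr-backups", key, "restored-" + path)
```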
Recognize that cloud disaster recovery must provide scalability. It must protect the selected
data, applications, and other assets, accommodate additional resources as required, and provide
sufficient performance even as other customers worldwide use the same facilities. Understand
the disaster recovery content's security needs and ensure that the vendor can offer authentication,
VPNs (virtual private networks), encryption, and the other tools required to protect your vital
resources.
Ultimately, decide how the DR system should be designed. There are three basic DR strategies:
warm, cold, and hot. These terms loosely describe how quickly a system can be recovered.
o Warm disaster recovery
Warm disaster recovery is a standby strategy in which copies of data and systems are stored with
a cloud DR vendor and regularly updated with the services and information from the primary
data center. However, the redundant assets are not doing any work. When a disaster happens, the
warm DR capacity can be brought online at the DR vendor, which is usually as simple as
starting a virtual machine and rerouting domain names and traffic to the DR assets. Although
recovery times can be quite short, the protected workloads must still experience some downtime.
o Cold disaster recovery
Cold disaster recovery usually entails storing data or virtual machine (VM) images. These
resources are generally not usable unless additional work is performed, such as retrieving the
stored data or loading the image into a virtual machine. Cold DR is typically the simplest
approach (often just storage) and the absolute cheapest method. Still, it takes the longest to
recover, leaving the organization with the most downtime in the event of a disaster.
The term resource management refers to the operations used to control how capabilities
provided by Cloud resources and services are made available to other entities, whether users,
applications, or services.
Types of Resources
Physical Resource: Computer, disk, database, network, etc.
Logical Resource: Execution, monitoring, and communication between applications.
Although machine learning and cloud computing have their advantages individually, together,
they have 3 core advantages as follows:
1. Cloud works on the principle of 'pay for what you need'. The cloud's pay-per-use model
is good for companies that wish to leverage ML capabilities for their business without
heavy upfront expenditure.
2. It provides the flexibility to work with machine learning functionalities without having
advanced data science skills.
3. It makes it easy to experiment with various ML technologies and to scale up as projects
go into production and demand increases.
The cloud is the future of data science. It’s where machine learning will be done, and it’s where a
lot of big data analysis will happen. And that means a lot of benefits for companies.
1) It enables experimenting and testing multiple models
For one thing, the cloud allows you to scale your machine learning projects up and down as
needed. You can start with a small set of data points and add more as you get more confident in
your predictions.
Variable usage makes it easy for enterprises to experiment with machine learning capabilities
and scale up as projects go into production and demand increases.
You can also use machine learning to run experiments on different sets of data to see what works
best. This is something that’s difficult or impossible to do on your own server at home or in your
office building. And it’s something that requires a lot of time and effort if you want to do it
yourself. In short, the cloud drastically speeds up the machine learning lifecycle.
2) It’s inexpensive
Traditional machine learning isn’t just complex and hard to set up: It’s pricey. If you want to
train and deploy large machine learning models, such as deep learning, on your own servers,
you’ll need expensive GPU cards. This is particularly true with today’s state-of-the-art models,
such as China’s natural-language model Wu Dao 2.0, which has nearly 2 trillion parameters.
With such models, the cloud is a must-have, not just a nice-to-have.
In order to scale your models to accommodate large-scale needs, you’ll need high-end GPU units,
which means that they’ll remain largely unused during periods of low use. In other words, you’ll
have expensive servers sitting around collecting dust, while still requiring extensive maintenance.
On the other hand, when using machine learning in the cloud, you’re only paying for your
consumption, which works wonders for scalability. Whether you’re just personally
experimenting or servicing millions of customers, you can scale to any needs, and only pay for
what you use.
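A back-of-the-envelope sketch of this pay-for-consumption argument; all prices below are assumed, illustrative figures, not real quotes.

```python
# Toy cost comparison: owning a GPU server vs. renting per hour.
OWNED_SERVER_COST = 30_000.0   # upfront hardware, amortized over 3 years
OWNED_YEARLY_UPKEEP = 3_000.0  # power, space, maintenance (assumed)
CLOUD_RATE_PER_HOUR = 3.0      # assumed GPU instance price


def yearly_owned_cost() -> float:
    return OWNED_SERVER_COST / 3 + OWNED_YEARLY_UPKEEP


def yearly_cloud_cost(hours_used: float) -> float:
    return CLOUD_RATE_PER_HOUR * hours_used


for hours in (200, 1_000, 4_000):
    print(hours, yearly_cloud_cost(hours), round(yearly_owned_cost()))
# At low utilization the pay-per-use cloud is far cheaper ($600 vs.
# ~$13,000/year at 200 hours); only near-continuous use favors owning.
```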
3) It needs less technical knowledge
Building, managing, and maintaining powerful servers oneself is a complex task. With the cloud,
much of the complexity behind these tasks is handled by the cloud provider.
Popular cloud services like AWS, Microsoft Azure, and Google Cloud Platform in fact offer
machine learning options that don’t require deep knowledge of AI, machine learning theory, or a
large team of data scientists.
With the cloud, AI can be deployed in a matter of minutes. It also scales automatically, so you
don’t have to worry about the technical complexity of provisioning resources or managing
infrastructure.
4) Easy integration
Most popular cloud services also provide SDKs (software development kits) and APIs. This allows
you to embed machine learning functionality directly into applications. They also support most
programming languages.
With the cloud, you can integrate machine learning into your workflows quickly and easily. In
the past, machine learning models were difficult to integrate into existing applications. In today’s
cloud-native AI world, this is no longer the case. The Akkio AI platform provides end-to-end
AutoML automation, including model training, real-time inference, high-performance DevOps,
and more. Akkio’s no-code web application provides AI toolkits without the complexities of
traditional cloud environments, which require technical expertise in tools like Python,
TensorFlow, PyTorch, and Kubernetes.
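To make the "ML without deep expertise" point concrete, here is a short sketch that calls a managed ML API, using AWS Comprehend via boto3 as one example, instead of training anything. It assumes boto3 is installed, AWS credentials are configured, and the chosen region offers the service.

```python
# Managed ML API sketch: sentiment analysis with no model training.
import boto3

comprehend = boto3.client("comprehend", region_name="us-east-1")

resp = comprehend.detect_sentiment(
    Text="The new dashboard is fast and easy to use.",
    LanguageCode="en",
)
print(resp["Sentiment"])       # e.g. POSITIVE
print(resp["SentimentScore"])  # per-class confidence scores
```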
5) It reduces time-to-value
Another important aspect of the cloud is that it reduces the time-to-value. Time-to-value is the
amount of time it takes from when you start a project to when you see results from it.
In traditional machine learning deployments, this process can take months or even years. With
the cloud, you can start seeing results in hours or days. That’s because you don’t have to
provision resources, manage infrastructure, or write code. You can simply upload your data and
start building models.
6) It provides access to more data
Data is the lifeblood of machine learning. The more data you have, the better your models will
be. And the cloud provides access to more data than ever before.
For example, if you’re building a predictive model for customer churn, you can access historical
customer data that’s stored in the cloud. This data can be used to train your machine learning
model so that it can make better predictions.
7) It’s secure and private
When done right, machine learning in the cloud is secure and private. That’s because the data is
stored in the cloud provider’s secure data center.
The cloud provider is responsible for the security of the data center and the data that’s stored
there. This means that you don’t have to worry about building your own security infrastructure.
In addition, most cloud providers offer additional security features, such as encryption, to further
protect your data.
8) It frees up resources
Machine learning in the cloud frees up resources so that you can focus on other things. For
example, if you’re building a machine learning model to predict demand for a new product, you
can use the cloud to train and deploy the model. This frees up your time so that you can focus on
other things, such as marketing the product.
UNIT 2
Front End:
The client uses the front end, which contains the client-side interface and applications. Both of
these components are needed to access the Cloud computing platform. The front end includes
web browsers (Chrome, Firefox, Opera, etc.), client applications, and mobile devices.
Back End:
The backend part helps you manage all the resources needed to provide Cloud computing
services. This Cloud architecture part includes a security mechanism, a large amount of data
storage, servers, virtual machines, traffic control mechanisms, etc.
Important Components of Cloud Computing Architecture
Here are some important components of Cloud computing architecture:
1. Client Infrastructure:
Client Infrastructure is a front-end component that provides a GUI. It helps users to interact with
the Cloud.
2. Application:
The application can be any software or platform which a client wants to access.
3. Service:
The service component manages which type of service you can access according to the client’s
requirements.
4. Runtime Cloud:
Runtime cloud offers the execution and runtime environment to the virtual machines.
5. Storage:
Storage is another important Cloud computing architecture component. It provides a large
amount of storage capacity in the Cloud to store and manage data.
6. Infrastructure:
It offers services on the host level, network level, and application level. Cloud infrastructure
includes hardware and software components like servers, storage, network devices, virtualization
software, and various other storage resources that are needed to support the cloud computing
model.
7. Management:
This component manages components like application, service, runtime cloud, storage,
infrastructure, and other security matters in the backend. It also establishes coordination between
them.
8. Security:
Security in the backend refers to implementing different security mechanisms to secure Cloud
systems, resources, files, and infrastructure for the end user.
9. Internet:
Internet connection acts as the bridge or medium between frontend and backend. It allows you to
establish the interaction and communication between the frontend and backend.
IaaS is offered in three models: public, private, and hybrid cloud. The private cloud implies that
the infrastructure resides on the customer's premises. In the case of the public cloud, it is located
at the cloud computing platform vendor's data center, and the hybrid cloud is a combination of
the two in which the customer selects the best of both public cloud and private cloud.
1. Compute: Computing as a Service includes virtual central processing units and virtual
main memory for the VMs that are provisioned to end users (a provisioning sketch
follows this list).
2. Storage: IaaS provider provides back-end storage for storing files.
3. Network: Network as a Service (NaaS) provides networking components such as routers,
switches, and bridges for the VMs.
4. Load balancers: It provides load balancing capability at the infrastructure layer.
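As referenced in the compute item above, here is a sketch of provisioning a VM programmatically, using AWS EC2 via boto3 as one example. The AMI ID is a hypothetical placeholder, and credentials and region are assumed to be configured.

```python
# IaaS compute sketch: launch one small virtual machine on demand.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder machine image
    InstanceType="t3.micro",          # vCPU + memory size, billed per use
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```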
Characteristics of IaaS
Example: DigitalOcean, Linode, Amazon Web Services (AWS), Microsoft Azure, Google
Compute Engine (GCE), Rackspace, and Cisco Metacloud.
1. Shared infrastructure
3. Pay-as-per-use model
IaaS providers offer services on a pay-per-use basis: users are required to pay only for what
they have used.
4. Focus on the core business
IaaS allows organizations to focus on their core business rather than on IT infrastructure.
5. On-demand scalability
On-demand scalability is one of the biggest advantages of IaaS. Using IaaS, users do not need to
worry about upgrading software or troubleshooting issues related to hardware components.
Disadvantages of IaaS
1. Security
Security is one of the biggest issues in IaaS. Most of the IaaS providers are not able to provide
100% security.
2. Maintenance and upgrade
Although IaaS service providers maintain the software, they do not upgrade the software for
some organizations.
3. Interoperability issues
It is difficult to migrate a VM from one IaaS provider to another, so customers might face
problems related to vendor lock-in.
PaaS includes infrastructure (servers, storage, and networking) and platform (middleware,
development tools, database management systems, business intelligence, and more) to support
the web application life cycle.
Characteristics of PaaS
Example: AWS Elastic Beanstalk, Windows Azure, Heroku, Force.com, Google App Engine,
Apache Stratos, Magento Commerce Cloud, and OpenShift.
1. Programming languages
PaaS providers provide various programming languages for the developers to develop the
applications. Some popular programming languages provided by PaaS providers are Java, PHP,
Ruby, Perl, and Go.
2. Application frameworks
PaaS providers supply application frameworks that simplify application development. Some
popular application frameworks provided by PaaS providers are Node.js, Drupal, Joomla,
WordPress, Spring, Play, Rack, and Zend.
3. Databases
PaaS providers provide various databases such as ClearDB, PostgreSQL, MongoDB, and Redis
to communicate with the applications.
4. Other tools
PaaS providers provide various other tools that are required to develop, test, and deploy
applications; a minimal deployable app is sketched below.
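As referenced above, here is a minimal sketch of the kind of application a PaaS can run: the developer writes only the app, and the platform supplies the servers, runtime, and scaling. It assumes Flask is installed; the Procfile line follows the Heroku-style convention several PaaS products use.

```python
# Minimal web app a PaaS can deploy and scale without the developer
# managing any infrastructure.
from flask import Flask

app = Flask(__name__)


@app.route("/")
def hello() -> str:
    return "Hello from the platform!"


if __name__ == "__main__":
    app.run()

# A typical one-line process declaration (e.g. in a Procfile):
#   web: gunicorn app:app
```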
Advantages of PaaS
1) Simplified Development
PaaS allows developers to focus on development and innovation without worrying about
infrastructure management.
2) Lower risk
No need for up-front investment in hardware and software. Developers only need a PC and an
internet connection to start building applications.
3) Prebuilt business functionality
Some PaaS vendors also provide predefined business functionality so that users can avoid
building everything from scratch and can start their projects directly.
4) Instant community
PaaS vendors frequently provide online communities where developers can get ideas, share
experiences, and seek advice from others.
5) Scalability
Applications deployed can scale from one to thousands of users without any changes to the
applications.
Disadvantages of PaaS
1) Vendor lock-in
One has to write the applications according to the platform provided by the PaaS vendor, so the
migration of an application to another PaaS vendor would be a problem.
2) Data Privacy
Corporate data, whether critical or not, will be private, so if it is not located within the walls of
the company, there can be a risk in terms of privacy of data.
3) Integration with the rest of the system's applications
It may happen that some applications are local and some are in the cloud, so there will be
increased complexity when we want to use data in the cloud together with local data.
Applications of SaaS
Social Networks - As we all know, social networking sites are used by the general public, so
social networking service providers use SaaS for their convenience and to handle the general
public's information.
Mail Services - To handle the unpredictable number of users and the load on e-mail services,
many e-mail providers offer their services using SaaS.
Advantages of SaaS
1. SaaS is easy to buy
SaaS pricing is based on a monthly or annual subscription fee, so it allows organizations to
access business functionality at a low cost, which is less than that of licensed applications.
Unlike traditional software, which is sold under a license with an up-front cost (and often an
optional ongoing support fee), SaaS providers generally price their applications on a
subscription basis, most commonly a monthly or annual fee.
2. One to Many
SaaS services are offered in a one-to-many model, meaning a single instance of the application
is shared by multiple users.
3. Less hardware required
The software is hosted remotely, so organizations do not need to invest in additional hardware.
4. Low maintenance required
Software as a service removes the need for installation, set-up, and daily maintenance for
organizations. The initial set-up cost for SaaS is typically lower than for enterprise software.
SaaS vendors price their applications based on usage parameters, such as the number of users
of the application, which makes SaaS easy to monitor and enables automatic updates.
5. No special software or hardware versions required
All users will have the same version of the software and typically access it through a web
browser. SaaS reduces IT support costs by outsourcing hardware and software maintenance and
support to the SaaS provider.
6. Multidevice support
SaaS services can be accessed from any device such as desktops, laptops, tablets, phones, and
thin clients.
7. API Integration
SaaS services easily integrate with other software or services through standard APIs.
8. No client-side installation
SaaS services are accessed directly from the service provider over an internet connection, so no
client-side software installation is required (an integration sketch follows this list).
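As referenced in the item above, here is a minimal sketch of SaaS API integration over plain HTTP; the endpoint and token below are hypothetical placeholders.

```python
# SaaS integration sketch: create a record via a hosted HTTP API.
import json
import urllib.request

API_URL = "https://api.example-saas.com/v1/contacts"  # hypothetical
API_TOKEN = "replace-with-your-token"                 # hypothetical

req = urllib.request.Request(
    API_URL,
    data=json.dumps({"name": "Ada Lovelace"}).encode(),
    headers={
        "Authorization": f"Bearer {API_TOKEN}",
        "Content-Type": "application/json",
    },
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp))  # the SaaS returns the created record
```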
Disadvantages of SaaS
1) Security
Since data is stored in the cloud, security may be an issue for some users. However, cloud
computing is not necessarily less secure than in-house deployment.
2) Latency issue
Since data and applications are stored in the cloud at a variable distance from the end-user, there
is a possibility that there may be greater latency when interacting with the application compared
to local deployment. Therefore, the SaaS model is not suitable for applications whose required
response time is in milliseconds.
3) Switching between SaaS vendors is difficult
Switching SaaS vendors involves the difficult and slow task of transferring very large data
files over the internet and then converting and importing them into the other SaaS application.
Popular SaaS providers include Salesforce (on-demand CRM), Microsoft Office 365 (online
office suite), and Google Workspace (collaboration and productivity applications).
Public Cloud
The public cloud makes it possible for anybody to access systems and services, although it may
be less secure because it is open to everyone. The public cloud is one in which cloud
infrastructure services are provided over the internet to the general public or major industry
groups. The infrastructure in this cloud model is owned by the entity that delivers the cloud
services, not by the consumer. It is a type of cloud hosting that allows customers and users to
easily access systems and services, and it is an excellent example of cloud hosting in which
service providers supply services to a variety of customers. In this arrangement, storage, backup,
and retrieval services are given for free, as a subscription, or on a per-user basis. Google App
Engine is one example.
Advantages of the Public Cloud Model
Minimal Investment: Because it is a pay-per-use service, there is no substantial
upfront fee, making it excellent for enterprises that require immediate access to resources.
No setup cost: The entire infrastructure is fully subsidized by the cloud service
providers, thus there is no need to set up any hardware.
Infrastructure Management is not required: Using the public cloud does not
necessitate infrastructure management.
No maintenance: The maintenance work is done by the service provider (not users).
Dynamic Scalability: To fulfill your company’s needs, on-demand resources are
accessible.
Disadvantages of the Public Cloud Model
Less secure: Public cloud is less secure as resources are public so there is no guarantee
of high-level security.
Low customization: It is accessed by the general public, so it cannot be customized
according to personal requirements.
Private Cloud
The private cloud deployment model is the exact opposite of the public cloud deployment
model. It’s a one-on-one environment for a single user (customer). There is no need to share
your hardware with anyone else. The distinction between private and public clouds is in how
you handle all of the hardware. It is also called the “internal cloud” & it refers to the ability to
access systems and services within a given border or organization. The cloud platform is
implemented in a cloud-based secure environment that is protected by powerful firewalls and
under the supervision of an organization’s IT department. The private cloud gives greater
flexibility of control over cloud resources.
Advantages of the Private Cloud Model
Better Control: You are the sole owner of the property. You gain complete command
over service integration, IT operations, policies, and user behavior.
Data Security and Privacy: It’s suitable for storing corporate information to which
only authorized staff have access. By segmenting resources within the same infrastructure,
improved access and security can be achieved.
Supports Legacy Systems: This approach is designed to work with legacy systems that
are unable to access the public cloud.
Customization: Unlike a public cloud deployment, a private cloud allows a company
to tailor its solution to meet its specific needs.
Disadvantages of the Private Cloud Model
Less scalable: Private clouds can be scaled only within a certain range, as they serve a
smaller number of clients.
Costly: Private clouds are more costly as they provide personalized facilities.
Hybrid Cloud
By bridging the public and private worlds with a layer of proprietary software, hybrid cloud
computing gives the best of both worlds. With a hybrid solution, you may host the app in a
safe environment while taking advantage of the public cloud’s cost savings. Organizations can
move data and applications between different clouds using a combination of two or more cloud
deployment methods, depending on their needs.
Advantages of the Hybrid Cloud Model
Flexibility and control: Businesses with more flexibility can design personalized
solutions that meet their particular needs.
Cost: Because public clouds provide scalability, you’ll only be responsible for paying
for the extra capacity if you require it.
Security: Because data is properly separated, the chances of data theft by attackers are
considerably reduced.
Disadvantages of the Hybrid Cloud Model
Difficult to manage: Hybrid clouds are difficult to manage because they combine both
public and private clouds, which makes them complex.
Slow data transmission: Data transmission in the hybrid cloud takes place through the
public cloud so latency occurs.
Community Cloud
The community cloud allows systems and services to be accessible by a group of organizations
that share common concerns or collaborative interests, with the infrastructure shared among
them.
Advantages of the Community Cloud Model
Cost Effective: It is cost-effective because the cloud is shared by multiple
organizations or communities.
Security: Community cloud provides better security than the public cloud.
Shared resources: It allows you to share resources, infrastructure, etc. with multiple
organizations.
Collaboration and data sharing: It is suitable for both collaboration and data sharing.
Disadvantages of the Community Cloud Model
Limited Scalability: Community cloud is relatively less scalable as many
organizations share the same resources according to their collaborative interests.
Rigid in customization: As the data and resources are shared among different
organizations according to their mutual interests, an organization that wants changes to
suit its own needs cannot make them, because doing so would have an impact on the
other organizations.
Factors                     | Public Cloud   | Private Cloud | Community Cloud                | Hybrid Cloud
Scalability and Flexibility | High           | High          | Fixed                          | High
Cost-Comparison             | Cost-effective | Costly        | Distributed cost among members | Between public and private cloud
SLA Life Cycle
1. Discover service provider: This step involves identifying a service provider that can
meet the needs of the organization and has the capability to provide the required service.
This can be done through research, requesting proposals, or reaching out to vendors.
2. Define SLA: In this step, the service level requirements are defined and agreed upon
between the service provider and the organization. This includes defining the service level
objectives, metrics, and targets that will be used to measure the performance of the service
provider.
3. Establish Agreement: After the service level requirements have been defined, an
agreement is established between the organization and the service provider outlining the
terms and conditions of the service. This agreement should include the SLA, any penalties
for non-compliance, and the process for monitoring and reporting on the service level
objectives.
4. Monitor SLA violation: This step involves regularly monitoring the service level
objectives to ensure that the service provider is meeting its commitments. If any
violations are identified, they should be reported and addressed in a timely manner (a toy
monitoring-and-penalty sketch follows this list).
5. Terminate SLA: If the service provider is unable to meet the service level objectives,
or if the organization is not satisfied with the service provided, the SLA can be terminated.
This can be done through mutual agreement or through the enforcement of penalties for
non-compliance.
6. Enforce penalties for SLA Violation: If the service provider is found to be in
violation of the SLA, penalties can be imposed as outlined in the agreement. These
penalties can include financial penalties, reduced service level objectives, or termination of
the agreement.
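As referenced in step 4, here is a toy sketch of steps 4-6: computing monthly uptime, checking it against the SLA target, and applying an assumed penalty (service-credit) schedule. The target and credit tiers are hypothetical figures, not from any real agreement.

```python
# SLA monitoring and penalty sketch with assumed, illustrative numbers.
UPTIME_TARGET = 99.9  # percent, as defined in the (assumed) SLA


def monthly_uptime(total_minutes: int, downtime_minutes: int) -> float:
    return 100.0 * (total_minutes - downtime_minutes) / total_minutes


def service_credit(uptime: float) -> float:
    """Percent of the monthly fee refunded, per the assumed agreement."""
    if uptime >= UPTIME_TARGET:
        return 0.0
    if uptime >= 99.0:
        return 10.0
    return 25.0


up = monthly_uptime(total_minutes=43_200, downtime_minutes=90)
print(f"uptime {up:.3f}% -> credit {service_credit(up)}%")
# 90 minutes of downtime in a 30-day month gives ~99.792% uptime,
# which breaches the 99.9% target and triggers the 10% credit tier.
```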
Advantages of SLA
Disadvantages of SLA
1. Complexity: SLAs can be complex to create and maintain, and may require significant
resources to implement and enforce.
2. Rigidity: SLAs can be rigid and may not be flexible enough to accommodate changing
business needs or service requirements.
3. Limited service options: SLAs can limit the service options available to the customer,
as the service provider may only be able to offer the specific services outlined in the
agreement.
4. Misaligned incentives: SLAs may misalign incentives between the service provider
and the customer, as the provider may focus on meeting the agreed-upon service levels
rather than on providing the best service possible.
5. Limited liability: SLAs are not legally binding contracts and often limit the liability of
the service provider in case of service failure.
What are Service Level Objectives (SLO)?
A Service Level Objective (SLO) serves as a benchmark for indicators, parameters, or metrics
defined with specific service level targets. The objectives may be an optimal range or a specific
value for each service function or process that constitutes a cloud service.
The SLOs can also be referred to as measurable characteristics of an SLA, such as Quality of
Service (QoS) aspects that are achievable, measurable, meaningful, and acceptable for both
service providers and customers. An SLO is agreed within the SLA as an obligation, with
validity under specific conditions (such as time period) and is expressed within an SLA
document.
The measure of responsiveness and app performance may be defined through numeric indicators
such as:
Request latency
Batch throughput
Failures per second
Other metrics
To understand the overall performance in the context of the agreed SLA contract or availability
requirements, you'll have to analyze these numbers over a longer time period. Mathematically,
the SLO analysis amounts to computing a service level indicator (SLI) over the evaluation
window and comparing it with the agreed target, as sketched below.
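A minimal sketch of that comparison, using the standard SRE-style definition (an assumption, since the notes do not fix a formula): SLI = good events / valid events x 100%, with the SLO met when the SLI reaches the target over the window. The request counts are illustrative.

```python
# SLO check sketch: compare a measured SLI against the agreed target.
def sli(good_events: int, valid_events: int) -> float:
    """Service level indicator as a percentage of good events."""
    return 100.0 * good_events / valid_events


REQUESTS = 1_000_000   # valid requests in the window (illustrative)
FAST_ENOUGH = 999_250  # requests under the latency threshold

availability = sli(FAST_ENOUGH, REQUESTS)
print(f"SLI = {availability:.3f}%, SLO 99.9% met: {availability >= 99.9}")
# 999,250 / 1,000,000 = 99.925%, so a 99.9% SLO is met for this window.
```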
Load Balancing
Load balancing distributes incoming workload across multiple computing resources, such as
servers or virtual machines (a minimal balancer is sketched after the advantages list below).
Advantages:
1. Improved Performance: Load balancing helps to distribute the workload across multiple
resources, which reduces the load on each resource and improves the overall performance
of the system.
2. High Availability: Load balancing ensures that there is no single point of failure in the
system, which provides high availability and fault tolerance to handle server failures.
3. Scalability: Load balancing makes it easier to scale resources up or down as needed,
which helps to handle spikes in traffic or changes in demand.
4. Efficient Resource Utilization: Load balancing ensures that resources are used
efficiently, which reduces wastage and helps to optimize costs.
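As referenced above, here is a minimal round-robin balancer sketch; real cloud load balancers also track backend health and active connections. The backend addresses are made up.

```python
# Round-robin load balancing: spread requests evenly across backends
# so no single server becomes a point of overload.
import itertools


class RoundRobinBalancer:
    def __init__(self, backends: list[str]) -> None:
        self._cycle = itertools.cycle(backends)

    def pick(self) -> str:
        """Return the next backend in rotation."""
        return next(self._cycle)


lb = RoundRobinBalancer(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
for _ in range(6):
    print(lb.pick())  # cycles 1, 2, 3, 1, 2, 3
```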
Disadvantages:
Energy optimization is correlated with, and affects, the cost of providing the services; it can be
done locally, but global load-balancing and energy-optimization policies encounter the same
difficulties as capacity allocation.
Quality of service (QoS) is probably the most challenging aspect of resource management and,
at the same time, possibly the most critical for the future of cloud computing.