Microsoft Azure
Cloud computing is the delivery of computing services over the internet. Computing services include common
IT infrastructure such as virtual machines, storage, databases, and networking. Cloud services also expand the
traditional IT offerings to include things like Internet of Things (IoT), machine learning (ML), and artificial
intelligence (AI).
With the shared responsibility model, responsibilities are shared between the cloud provider and the
consumer. Physical security, power, cooling, and network connectivity are the responsibility of the cloud
provider. The consumer isn't colocated with the datacenter, so it wouldn't make sense for the consumer to have
any of those responsibilities.
At the same time, the consumer is responsible for the data and information stored in the cloud. (You wouldn’t
want the cloud provider to be able to read your information.) The consumer is also responsible for access
security, meaning you only give access to those who need it.
When using a cloud provider, you’ll always be responsible for:
The information and data stored in the cloud
Devices that are allowed to connect to your cloud (cell phones, computers, and so on)
The accounts and identities of the people, services, and devices within your organization
The cloud provider is always responsible for:
The physical datacenter
The physical network
The physical hosts
Your service model will determine responsibility for things like:
Operating systems
Network controls
Applications
Identity and infrastructure
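The division of responsibility above can be sketched as a simple lookup table. This is an illustrative mapping, not an official Microsoft matrix; the rows and owners chosen here are assumptions for the sake of the example.

```python
# Hypothetical sketch of the shared responsibility model.
# Rows and owner assignments are illustrative, not an official matrix.

RESPONSIBILITIES = {
    "information and data": {"IaaS": "consumer", "PaaS": "consumer", "SaaS": "consumer"},
    "physical datacenter":  {"IaaS": "provider", "PaaS": "provider", "SaaS": "provider"},
    "operating system":     {"IaaS": "consumer", "PaaS": "provider", "SaaS": "provider"},
    "applications":         {"IaaS": "consumer", "PaaS": "shared",   "SaaS": "provider"},
}

def owner(responsibility: str, model: str) -> str:
    """Return who is responsible for an item under a given service model."""
    return RESPONSIBILITIES[responsibility][model]
```

For example, `owner("operating system", "IaaS")` returns `"consumer"`, while under SaaS the provider owns it — the consumer's responsibilities shrink as the service model becomes more complete.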
Private cloud
Let’s start with a private cloud. A private cloud is, in some ways, the natural
evolution from a corporate datacenter. It’s a cloud (delivering IT services over the
internet) that’s used by a single entity. Private cloud provides much greater control
for the company and its IT department. However, it also comes with greater cost
and fewer of the benefits of a public cloud deployment. Finally, a private cloud may
be hosted in your on-site datacenter, or in a dedicated offsite datacenter,
potentially even by a third party that has dedicated that datacenter to your
company.
Public cloud
A public cloud is built, controlled, and maintained by a third-party cloud provider.
With a public cloud, anyone who wants to purchase cloud services can access and
use resources. The general public availability is a key difference between public
and private clouds.
Hybrid cloud
A hybrid cloud is a computing environment that uses both public and private
clouds in an interconnected environment. A hybrid cloud environment can be used
to allow a private cloud to surge for increased, temporary demand by deploying
public cloud resources. Hybrid cloud can be used to provide an extra layer of
security. For example, users can flexibly choose which services to keep in public
cloud and which to deploy to their private cloud infrastructure.
Multi-cloud
A fourth, and increasingly common, scenario is multi-cloud. In a multi-cloud
scenario, you use multiple public cloud providers. Maybe you use different features from
different cloud providers. Or maybe you started your cloud journey with one provider and
are in the process of migrating to a different provider. Regardless, in a multi-cloud
environment you deal with two (or more) public cloud providers and manage resources
and security in both environments.
Azure Arc
Azure Arc is a set of technologies that helps you manage your cloud environment,
whether it's a public cloud solely on Azure, a private cloud in your datacenter, a
hybrid configuration, or even a multi-cloud environment running on multiple cloud
providers at once.
High availability
When you’re deploying an application, a service, or any IT resources, it’s important the
resources are available when needed. High availability focuses on ensuring maximum
availability, regardless of disruptions or events that may occur.
When you’re architecting your solution, you’ll need to account for service availability
guarantees. Azure is a highly available cloud environment with uptime guarantees
depending on the service. These guarantees are part of the service-level agreements
(SLAs).
Scalability
Another major benefit of cloud computing is the scalability of cloud resources. Scalability refers to the ability to
adjust resources to meet demand. If you suddenly experience peak traffic and your systems are overwhelmed,
the ability to scale means you can add more resources to better handle the increased demand.
The other benefit of scalability is that you aren't overpaying for services. Because the cloud is a consumption-
based model, you only pay for what you use. If demand drops off, you can reduce your resources and thereby
reduce your costs.
Scaling generally comes in two varieties: vertical and horizontal. Vertical scaling is focused on increasing or
decreasing the capabilities of resources. Horizontal scaling is adding or subtracting the number of resources.
Vertical scaling
With vertical scaling, if you were developing an app and you needed more processing power, you could
vertically scale up to add more CPUs or RAM to the virtual machine. Conversely, if you realized you had
over-specified the needs, you could vertically scale down by lowering the CPU or RAM specifications.
Horizontal scaling
With horizontal scaling, if you suddenly experienced a steep jump in demand, your deployed resources could be
scaled out, either automatically or manually. For example, you could add additional virtual machines or
containers. In the same manner, if there was a significant drop in demand, deployed resources could be scaled
in, either automatically or manually.
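A horizontal-scaling rule of the kind described above can be sketched in a few lines. This is a toy model, not Azure's autoscale engine; the CPU thresholds and instance limits are made-up assumptions.

```python
def desired_instance_count(current: int, cpu_percent: float,
                           scale_out_at: float = 75.0,
                           scale_in_at: float = 25.0,
                           min_instances: int = 1,
                           max_instances: int = 10) -> int:
    """Toy horizontal-scaling rule: add an instance when average CPU is
    high, remove one when it is low, clamped to the allowed range.
    Thresholds and limits are illustrative placeholders."""
    if cpu_percent > scale_out_at:
        current += 1   # scale out: add a resource
    elif cpu_percent < scale_in_at:
        current -= 1   # scale in: remove a resource
    return max(min_instances, min(max_instances, current))
```

With three instances running at 90% CPU, the rule scales out to four; at 10% CPU it scales in to two, but never below the configured minimum.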
Reliability
Reliability is the ability of a system to recover from failures and continue to function. It's also one of the pillars
of the Microsoft Azure Well-Architected Framework.
The cloud, by virtue of its decentralized design, naturally supports a reliable and resilient infrastructure. With a
decentralized design, the cloud enables you to have resources deployed in regions around the world. With this
global scale, even if one region has a catastrophic event, other regions are still up and running. You can design
your applications to automatically take advantage of this increased reliability. In some cases, your cloud
environment itself will automatically shift to a different region for you, with no action needed on your part.
You’ll learn more about how Azure leverages global scale to provide reliability later in this series.
Predictability
Predictability in the cloud lets you move forward with confidence. Predictability can be focused on performance
predictability or cost predictability. Both performance and cost predictability are heavily influenced by the
Microsoft Azure Well-Architected Framework. Deploy a solution built around this framework and you have a
solution whose cost and performance are predictable.
Performance
Performance predictability focuses on predicting the resources needed to deliver a positive experience for your
customers. Autoscaling, load balancing, and high availability are just some of the cloud concepts that support
performance predictability. If you suddenly need more resources, autoscaling can deploy additional resources to
meet the demand, and then scale back when the demand drops. Or if the traffic is heavily focused on one area,
load balancing will help redirect some of the overload to less stressed areas.
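The load-balancing idea mentioned above can be illustrated with a minimal round-robin distributor. This is a conceptual sketch, not Azure Load Balancer's actual algorithm; the backend names are hypothetical.

```python
from itertools import cycle

class RoundRobinBalancer:
    """Minimal round-robin load balancer sketch: each request is routed
    to the next backend in turn, spreading load evenly across instances."""

    def __init__(self, backends):
        self._next = cycle(backends)  # endless rotation over the backends

    def route(self, request):
        """Return the backend that should handle this request."""
        return next(self._next)
```

With backends `["vm-1", "vm-2"]`, successive requests alternate between the two, so no single instance absorbs all of the traffic.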
Cost
Cost predictability is focused on predicting or forecasting the cost of the cloud spend. With the cloud, you can
track your resource use in real time, monitor resources to ensure that you’re using them in the most efficient
way, and apply data analytics to find patterns and trends that help better plan resource deployments. By
operating in the cloud and using cloud analytics and information, you can predict future costs and adjust your
resources as needed. You can even use tools like the Total Cost of Ownership (TCO) or Pricing Calculator to get
an estimate of potential cloud spend.
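The consumption-based model behind cost predictability amounts to simple arithmetic: usage multiplied by rate, summed per meter. The sketch below is illustrative only; the rates are made-up placeholders, not real Azure prices.

```python
def monthly_cost(hours_used: float, rate_per_hour: float,
                 gb_stored: float, rate_per_gb: float) -> float:
    """Consumption-based pricing sketch: you pay only for what you use.
    Rates here are hypothetical placeholders, not real Azure prices."""
    compute = hours_used * rate_per_hour   # e.g. VM hours
    storage = gb_stored * rate_per_gb      # e.g. GB-months of storage
    return compute + storage
```

For example, 720 VM-hours at a hypothetical $0.10/hour plus 100 GB at $0.02/GB comes to $74 for the month; halving the VM hours when demand drops halves that portion of the bill.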
Whether you’re deploying infrastructure as a service or software as a service, cloud features support governance
and compliance. Things like set templates help ensure that all your deployed resources meet corporate standards
and government regulatory requirements. Plus, you can update all your deployed resources to new standards as
standards change. Cloud-based auditing helps flag any resource that’s out of compliance with your corporate
standards and provides mitigation strategies. Depending on your operating model, software patches and updates
may also automatically be applied, which helps with both governance and security.
On the security side, you can find a cloud solution that matches your security needs. If you want maximum
control of security, infrastructure as a service provides you with physical resources but lets you manage the
operating systems and installed software, including patches and maintenance. If you want patches and
maintenance taken care of automatically, platform as a service or software as a service deployments may be the
best cloud strategies for you.
And because the cloud is intended as an over-the-internet delivery of IT resources, cloud providers are typically
well suited to handle things like distributed denial of service (DDoS) attacks, making your network more robust
and secure.
By establishing a good governance footprint early, you can keep your cloud footprint updated, secure, and well
managed.
A major benefit of cloud computing is the manageability options. There are two types of manageability for
cloud computing that you’ll learn about in this series, and both are excellent benefits.
Management of the cloud
Management of the cloud speaks to managing your cloud resources. In the cloud, you can:
Automatically scale resource deployment based on need.
Deploy resources based on a preconfigured template, removing the need for manual configuration.
Monitor the health of resources and automatically replace failing resources.
Receive automatic alerts based on configured metrics, so you’re aware of performance in real time.
Management in the cloud
Management in the cloud speaks to how you’re able to manage your cloud environment and resources. You can
manage these:
Through a web portal.
Using a command line interface.
Using APIs.
Using PowerShell.
Scenarios
Some common scenarios where IaaS might make sense include lift-and-shift migration (standing up cloud resources similar to your on-premises datacenter) and testing and development environments.
Depending on the configuration, you or the cloud provider may be responsible for
networking settings and connectivity within your cloud environment, network and
application security, and the directory infrastructure.
Software as a service (SaaS) is the most complete cloud service model from a
product perspective. With SaaS, you’re essentially renting or using a fully
developed application. Email, financial software, messaging applications, and
connectivity software are all common examples of a SaaS implementation.
While the SaaS model may be the least flexible, it’s also the easiest to get up and
running. It requires the least amount of technical knowledge or expertise to fully
employ.
Many of the Learn exercises use a technology called the sandbox, which creates a
temporary subscription that's added to your Azure account. This temporary subscription
allows you to create Azure resources during a Learn module. Learn automatically cleans
up the temporary resources for you after you've completed the module.
Important
To ensure resiliency, a minimum of three separate availability zones are present in all
availability zone-enabled regions. However, not all Azure regions currently support
availability zones.
Use availability zones in your apps
You want to ensure your services and data are redundant so you can protect your
information in case of failure. When you host your infrastructure, setting up your own
redundancy requires that you create duplicate hardware environments. Azure can help
make your app highly available through availability zones.
You can use availability zones to run mission-critical applications and build high-
availability into your application architecture by co-locating your compute, storage,
networking, and data resources within an availability zone and replicating in other
availability zones. Keep in mind that there could be a cost to duplicating your services
and transferring data between availability zones.
Availability zones are primarily for VMs, managed disks, load balancers, and SQL
databases. Azure services that support availability zones fall into three categories:
Zonal services: You pin the resource to a specific zone (for example, VMs,
managed disks, IP addresses).
Zone-redundant services: The platform replicates automatically across zones (for
example, zone-redundant storage, SQL Database).
Non-regional services: Services are always available from Azure geographies and
are resilient to zone-wide outages as well as region-wide outages.
Even with the additional resiliency that availability zones provide, it’s possible that an
event could be so large that it impacts multiple availability zones in a single region. To
provide even further resilience, Azure has Region Pairs.
Region pairs
Most Azure regions are paired with another region within the same geography (such as
US, Europe, or Asia) at least 300 miles away. This approach allows for the replication of
resources across a geography that helps reduce the likelihood of interruptions because
of events such as natural disasters, civil unrest, power outages, or physical network
outages that affect an entire region. For example, if a region in a pair was affected by a
natural disaster, services would automatically fail over to the other region in its region
pair.
Important
Not all Azure services automatically replicate data or automatically fall back from a failed
region to cross-replicate to another enabled region. In these scenarios, recovery and
replication must be configured by the customer.
Examples of region pairs in Azure are West US paired with East US and Southeast Asia
paired with East Asia. Because the pair of regions are directly connected and far enough
apart to be isolated from regional disasters, you can use them to provide reliable
services and data redundancy.
ExpressRoute connections offer the following features and benefits:
Connectivity to Microsoft cloud services across all regions in the geopolitical region.
Global connectivity to Microsoft services across all regions with the ExpressRoute Global
Reach.
Dynamic routing between your network and Microsoft via Border Gateway Protocol
(BGP).
Built-in redundancy in every peering location for higher reliability.
Connectivity to Microsoft cloud services
ExpressRoute enables direct access to the following services in all regions:
Dynamic routing
ExpressRoute uses BGP to exchange routes between on-premises networks and
resources running in Azure. This protocol enables dynamic routing between your
on-premises network and services running in the Microsoft cloud.
Built-in redundancy
Each connectivity provider uses redundant devices to ensure that connections
established with Microsoft are highly available. You can configure multiple circuits to
complement this feature.
ExpressRoute supports four models that you can use to connect your on-premises
network to the Microsoft cloud:
CloudExchange colocation
Point-to-point Ethernet connection
Any-to-any connection
Directly from ExpressRoute sites
Colocation at a cloud exchange
Colocation refers to your datacenter, office, or other facility being physically colocated at
a cloud exchange, such as an ISP. If your facility is colocated at a cloud exchange, you
can request a virtual cross-connect to the Microsoft cloud.
Any-to-any networks
With any-to-any connectivity, you can integrate your wide area network (WAN) with
Azure by providing connections to your offices and datacenters. Azure integrates with
your WAN connection to provide a connection like you would have between your
datacenter and any branch offices.
Security considerations
With ExpressRoute, your data doesn't travel over the public internet, reducing the risks
associated with internet communications. ExpressRoute is a private connection from
your on-premises infrastructure to your Azure infrastructure. Even if you have an
ExpressRoute connection, DNS queries, certificate revocation list checking, and Azure
Content Delivery Network requests are still sent over the public internet.
Describe Azure DNS
Azure DNS is a hosting service for DNS domains that provides name resolution by using
Microsoft Azure infrastructure. By hosting your domains in Azure, you can manage your
DNS records using the same credentials, APIs, tools, and billing as your other Azure
services.
Benefits of Azure DNS
Azure DNS uses the scope and scale of Microsoft Azure to provide numerous benefits,
including:
Reliability and performance
Security
Ease of use
Customizable virtual networks
Alias records
Reliability and performance
DNS domains in Azure DNS are hosted on Azure's global network of DNS name servers,
providing resiliency and high availability. Azure DNS uses anycast networking, so the
closest available DNS server answers each DNS query, providing fast performance and
high availability for your domain.
Security
Azure DNS is based on Azure Resource Manager, which provides features such as:
Azure role-based access control (Azure RBAC) to control who has access to specific
actions for your organization.
Activity logs to monitor how a user in your organization modified a resource or to
find an error when troubleshooting.
Resource locking to lock a subscription, resource group, or resource. Locking
prevents other users in your organization from accidentally deleting or modifying
critical resources.
Ease of use
Azure DNS can manage DNS records for your Azure services and provide DNS for your
external resources as well. Azure DNS is integrated in the Azure portal and uses the
same credentials, support contract, and billing as your other Azure services.
Because Azure DNS runs on Azure, you can manage your domains and
records with the Azure portal, Azure PowerShell cmdlets, and the cross-platform Azure
CLI. Applications that require automated DNS management can integrate with the
service by using the REST API and SDKs.
Customizable virtual networks with private domains
Azure DNS also supports private DNS domains. This feature allows you to use your own
custom domain names in your private virtual networks, rather than being stuck with the
Azure-provided names.
Alias records
Azure DNS also supports alias record sets. You can use an alias record set to refer to an
Azure resource, such as an Azure public IP address, an Azure Traffic Manager profile, or
an Azure Content Delivery Network (CDN) endpoint. If the IP address of the underlying
resource changes, the alias record set seamlessly updates itself during DNS resolution.
The alias record set points to the service instance, and the service instance is associated
with an IP address.
Important
You can't use Azure DNS to buy a domain name. For an annual fee, you can buy a
domain name by using App Service domains or a third-party domain name registrar.
Once purchased, your domains can be hosted in Azure DNS for record management.
Locally redundant storage (LRS) is the lowest-cost redundancy option and offers the least durability compared to
other options. LRS protects your data against server rack and drive failures. However, if
a disaster such as fire or flooding occurs within the data center, all replicas of a storage
account using LRS may be lost or unrecoverable. To mitigate this risk, Microsoft
recommends using zone-redundant storage (ZRS), geo-redundant storage (GRS), or geo-
zone-redundant storage (GZRS).
Zone-redundant storage
For availability zone-enabled regions, zone-redundant storage (ZRS) replicates your
Azure Storage data synchronously across three Azure availability zones in the primary
region. ZRS offers durability for Azure Storage data objects of at least 12 nines
(99.9999999999%) over a given year.
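The "nines" shorthand used in these durability figures can be computed directly: count how many leading nines the percentage has, which is the negative base-10 logarithm of the annual loss probability. A small sketch (illustrative only):

```python
import math

def nines(durability: float) -> int:
    """Count the leading nines in a durability figure expressed as a
    fraction, e.g. 0.999999999999 -> 12. Illustrates what '12 nines'
    in the ZRS durability guarantee means: an annual object-loss
    probability of roughly 10**-12."""
    return round(-math.log10(1 - durability))
```

So ZRS's 99.9999999999% works out to 12 nines, i.e. an expected annual loss probability on the order of one in a trillion per object.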
With ZRS, your data is still accessible for both read and write operations even if a zone
becomes unavailable. No remounting of Azure file shares from the connected clients is
required. If a zone becomes unavailable, Azure undertakes networking updates, such as
DNS repointing. These updates may affect your application if you access data before the
updates have completed.
Microsoft recommends using ZRS in the primary region for scenarios that require high
availability. ZRS is also recommended for restricting replication of data within a country
or region to meet data governance requirements.
Redundancy in a secondary region
For applications requiring high durability, you can choose to additionally copy the data in
your storage account to a secondary region that is hundreds of miles away from the
primary region. If the data in your storage account is copied to a secondary region, then
your data is durable even in the event of a catastrophic failure that prevents the data in
the primary region from being recovered.
When you create a storage account, you select the primary region for the account. The
paired secondary region is based on Azure Region Pairs, and can't be changed.
Azure Storage offers two options for copying your data to a secondary region: geo-
redundant storage (GRS) and geo-zone-redundant storage (GZRS). GRS is similar to
running LRS in two regions, and GZRS is similar to running ZRS in the primary region and
LRS in the secondary region.
By default, data in the secondary region isn't available for read or write access unless
there's a failover to the secondary region. If the primary region becomes unavailable,
you can choose to fail over to the secondary region. After the failover has completed, the
secondary region becomes the primary region, and you can again read and write data.
Important
Because data is replicated to the secondary region asynchronously, a failure that affects
the primary region may result in data loss if the primary region can't be recovered. The
interval between the most recent writes to the primary region and the last write to the
secondary region is known as the recovery point objective (RPO). The RPO indicates the
point in time to which data can be recovered. Azure Storage typically has an RPO of less
than 15 minutes, although there's currently no SLA on how long it takes to replicate data
to the secondary region.
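The RPO defined above is just the gap between the newest write on the primary and the newest write that reached the secondary. A minimal sketch, using hypothetical timestamps:

```python
from datetime import datetime, timedelta

def recovery_point_objective(last_primary_write: datetime,
                             last_replicated_write: datetime) -> timedelta:
    """RPO sketch: the window of writes that could be lost if the primary
    region fails before asynchronous replication catches up."""
    return last_primary_write - last_replicated_write
```

If the primary's latest write is at 12:10 and the secondary has only replicated through 12:00, the RPO at that moment is 10 minutes: a primary-region failure then could lose up to 10 minutes of writes.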
Geo-redundant storage
GRS copies your data synchronously three times within a single physical location in the
primary region using LRS. It then copies your data asynchronously to a single physical
location in the secondary region (the region pair) using LRS. GRS offers durability for
Azure Storage data objects of at least 16 nines (99.99999999999999%) over a given
year.
Geo-zone-redundant storage
GZRS combines the high availability provided by redundancy across availability zones
with protection from regional outages provided by geo-replication. Data in a GZRS
storage account is copied across three Azure availability zones in the primary region
(similar to ZRS) and is also replicated to a secondary geographic region, using LRS, for
protection from regional disasters. Microsoft recommends using GZRS for applications
requiring maximum consistency, durability, and availability, excellent performance, and
resilience for disaster recovery.
GZRS is designed to provide at least 16 nines (99.99999999999999%) of durability of
objects over a given year.
Read access to data in the secondary region
Geo-redundant storage (with GRS or GZRS) replicates your data to another physical
location in the secondary region to protect against regional outages. By default, that data
is available to be read only if the customer or Microsoft initiates a failover from the
primary to secondary region. However, if you enable read access to the secondary
region, your data is always available, even when the primary region is running optimally.
For read access to the secondary region, enable read-access geo-redundant storage (RA-
GRS) or read-access geo-zone-redundant storage (RA-GZRS).
Describe Azure storage services
The Azure Storage platform includes the following data services:
Azure Blobs: A massively scalable object store for text and binary data. Also
includes support for big data analytics through Data Lake Storage Gen2.
Azure Files: Managed file shares for cloud or on-premises deployments.
Azure Queues: A messaging store for reliable messaging between application
components.
Azure Disks: Block-level storage volumes for Azure VMs.
Azure Tables: NoSQL table option for structured, non-relational data.
Benefits of Azure Storage
Azure Storage services offer the following benefits for application developers and IT
professionals:
Durable and highly available. Redundancy ensures that your data is safe if
transient hardware failures occur. You can also opt to replicate data across data
centers or geographical regions for additional protection from local catastrophes or
natural disasters. Data replicated in this way remains highly available if an
unexpected outage occurs.
Secure. All data written to an Azure storage account is encrypted by the service.
Azure Storage provides you with fine-grained control over who has access to your
data.
Scalable. Azure Storage is designed to be massively scalable to meet the data
storage and performance needs of today's applications.
Managed. Azure handles hardware maintenance, updates, and critical issues for
you.
Accessible. Data in Azure Storage is accessible from anywhere in the world over
HTTP or HTTPS. Microsoft provides client libraries for Azure Storage in a variety of
languages, including .NET, Java, Node.js, Python, PHP, Ruby, Go, and others, as
well as a mature REST API. Azure Storage supports scripting in Azure PowerShell or
Azure CLI. And the Azure portal and Azure Storage Explorer offer easy visual
solutions for working with your data.
Azure Blobs
Azure Blob storage is an object storage solution for the cloud. It can store massive
amounts of data, such as text or binary data. Azure Blob storage is unstructured,
meaning that there are no restrictions on the kinds of data it can hold. Blob storage can
manage thousands of simultaneous uploads, massive amounts of video data, constantly
growing log files, and can be reached from anywhere with an internet connection.
Blobs aren't limited to common file formats. A blob could contain gigabytes of binary
data streamed from a scientific instrument, an encrypted message for another
application, or data in a custom format for an app you're developing. One advantage of
blob storage over disk storage is that it doesn't require developers to think about or
manage disks. Data is uploaded as blobs, and Azure takes care of the physical storage
needs.
Blob storage is ideal for:
Serving images or documents directly to a browser.
Storing files for distributed access.
Streaming video and audio.
Storing data for backup and restore, disaster recovery, and archiving.
Storing data for analysis by an on-premises or Azure-hosted service.
Accessing blob storage
Objects in blob storage can be accessed from anywhere in the world via HTTP or HTTPS.
Users or client applications can access blobs via URLs, the Azure Storage REST API, Azure
PowerShell, Azure CLI, or an Azure Storage client library. The storage client libraries are
available for multiple languages, including .NET, Java, Node.js, Python, PHP, and Ruby.
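A blob's HTTPS URL follows a predictable pattern, which the sketch below assembles. The endpoint suffix shown (`blob.core.windows.net`) is the standard Azure public-cloud endpoint; sovereign clouds use different suffixes, and the account, container, and blob names here are hypothetical.

```python
def blob_url(account: str, container: str, blob: str) -> str:
    """Build the public HTTPS URL for a blob. Assumes the standard
    Azure public-cloud endpoint suffix (blob.core.windows.net);
    sovereign clouds use different suffixes."""
    return f"https://{account}.blob.core.windows.net/{container}/{blob}"
```

For a storage account named `contoso`, container `images`, and blob `logo.png`, this yields `https://contoso.blob.core.windows.net/images/logo.png`, which any HTTP client can fetch (subject to the container's access settings).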
Blob storage tiers
Data stored in the cloud can grow at an exponential pace. To manage costs for your
expanding storage needs, it's helpful to organize your data based on attributes like
frequency of access and planned retention period. Data stored in the cloud can be
handled differently based on how it's generated, processed, and accessed over its
lifetime. Some data is actively accessed and modified throughout its lifetime. Some data
is accessed frequently early in its lifetime, with access dropping drastically as the data
ages. Some data remains idle in the cloud and is rarely, if ever, accessed after it's
stored. To accommodate these different access needs, Azure provides several access
tiers, which you can use to balance your storage costs with your access needs.
Azure Storage offers different access tiers for your blob storage, helping you store object
data in the most cost-effective manner. The available access tiers include:
Hot access tier: Optimized for storing data that is accessed frequently (for
example, images for your website).
Cool access tier: Optimized for data that is infrequently accessed and stored for
at least 30 days (for example, invoices for your customers).
Cold access tier: Optimized for storing data that is infrequently accessed and
stored for at least 90 days.
Archive access tier: Appropriate for data that is rarely accessed and stored for at
least 180 days, with flexible latency requirements (for example, long-term
backups).
The following considerations apply to the different access tiers:
Hot, cool, and cold access tiers can be set at the account level. The archive access
tier isn't available at the account level.
Hot, cool, cold, and archive tiers can be set at the blob level, during or after
upload.
Data in the cool and cold access tiers can tolerate slightly lower availability, but
still requires high durability, retrieval latency, and throughput characteristics
similar to hot data. For cool and cold data, a lower availability service-level
agreement (SLA) and higher access costs compared to hot data are acceptable
trade-offs for lower storage costs.
Archive storage stores data offline and offers the lowest storage costs, but also the
highest costs to rehydrate and access data.
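The retention minimums above (30, 90, and 180 days) suggest a first-cut tier picker. This is a toy heuristic, not Azure's lifecycle-management engine; real decisions also weigh per-access costs and archive rehydration latency.

```python
def suggest_tier(days_retained: int, accessed_frequently: bool) -> str:
    """Toy access-tier picker based on the minimum retention periods
    above (cool >= 30 days, cold >= 90, archive >= 180). A heuristic
    sketch only; real tiering also weighs access costs and latency."""
    if accessed_frequently:
        return "hot"          # frequent access favors the hot tier
    if days_retained >= 180:
        return "archive"      # rarely accessed, long retention
    if days_retained >= 90:
        return "cold"
    if days_retained >= 30:
        return "cool"
    return "hot"              # too short-lived for the cooler tiers
```

So a year-old backup that is rarely read lands in archive, a 45-day invoice in cool, and a frequently served website image stays hot regardless of age.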
Azure Files
Azure File storage offers fully managed file shares in the cloud that are accessible via the
industry standard Server Message Block (SMB) or Network File System (NFS) protocols.
Azure Files file shares can be mounted concurrently by cloud or on-premises
deployments. SMB Azure file shares are accessible from Windows, Linux, and macOS
clients. NFS Azure Files shares are accessible from Linux or macOS clients. Additionally,
SMB Azure file shares can be cached on Windows Servers with Azure File Sync for fast
access near where the data is being used.
Azure Files key benefits:
Shared access: Azure file shares support the industry standard SMB and NFS
protocols, meaning you can seamlessly replace your on-premises file shares with
Azure file shares without worrying about application compatibility.
Fully managed: Azure file shares can be created without the need to manage
hardware or an OS. This means you don't have to deal with patching the server OS
with critical security upgrades or replacing faulty hard disks.
Scripting and tooling: PowerShell cmdlets and Azure CLI can be used to create,
mount, and manage Azure file shares as part of the administration of Azure
applications. You can create and manage Azure file shares using Azure portal and
Azure Storage Explorer.
Resiliency: Azure Files has been built from the ground up to always be available.
Replacing on-premises file shares with Azure Files means you don't have to wake
up in the middle of the night to deal with local power outages or network issues.
Familiar programmability: Applications running in Azure can access data in the
share via file system I/O APIs. Developers can therefore use their existing code and
skills to migrate existing applications. In addition to System IO APIs, you can use
Azure Storage Client Libraries or the Azure Storage REST API.
Azure Queues
Azure Queue storage is a service for storing large numbers of messages. Once stored,
you can access the messages from anywhere in the world via authenticated calls using
HTTP or HTTPS. A queue can contain as many messages as your storage account has
room for (potentially millions). Each individual message can be up to 64 KB in size.
Queues are commonly used to create a backlog of work to process asynchronously.
Queue storage can be combined with compute functions like Azure Functions to take an
action when a message is received. For example, suppose you want to perform an action
after a customer uploads a form to your website. You could have the submit button on the
website trigger a message to the Queue storage. Then, you could use Azure Functions to
trigger an action once the message was received.
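The form-upload scenario above can be sketched in a few lines of plain Python. This is a minimal, illustrative simulation of the queue-plus-trigger pattern, not Azure SDK code; the `FormQueue` class and `process_form` handler are invented for the example, and only the 64 KB per-message limit comes from the text.

```python
from collections import deque

MAX_MESSAGE_BYTES = 64 * 1024  # Queue storage's per-message size limit

class FormQueue:
    """Minimal in-memory stand-in for a Queue storage backlog."""
    def __init__(self):
        self._messages = deque()

    def enqueue(self, payload: str) -> None:
        if len(payload.encode("utf-8")) > MAX_MESSAGE_BYTES:
            raise ValueError("message exceeds the 64 KB limit")
        self._messages.append(payload)

    def dequeue(self):
        return self._messages.popleft() if self._messages else None

def process_form(message: str) -> str:
    # Stand-in for the work an Azure Function would do when triggered.
    return f"processed: {message}"

queue = FormQueue()
queue.enqueue("customer-form-123")       # the website's submit button
result = process_form(queue.dequeue())   # the function trigger fires
print(result)  # processed: customer-form-123
```

The key property the sketch shows is decoupling: the website only enqueues and returns immediately, while the processing happens asynchronously whenever a consumer picks the message up.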
Azure Disks
Azure Disk storage, or Azure managed disks, are block-level storage volumes managed
by Azure for use with Azure VMs. Conceptually, they’re the same as a physical disk, but
they’re virtualized – offering greater resiliency and availability than a physical disk. With
managed disks, all you have to do is provision the disk, and Azure will take care of the
rest.
Azure Tables
Azure Table storage stores large amounts of structured data. Azure tables are a NoSQL
datastore that accepts authenticated calls from inside and outside the Azure cloud. This
enables you to use Azure tables to build your hybrid or multicloud solution and have your
data always available. Azure tables are ideal for storing structured, non-relational data.
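The "structured, non-relational" data model can be pictured as entities addressed by a partition key and a row key, with no fixed schema. The sketch below is an illustrative stand-in using a plain dictionary; the function names are invented and don't mirror any Azure SDK.

```python
# Illustrative sketch of the Table storage data model: each entity is
# addressed by a (PartitionKey, RowKey) pair and has no fixed schema.
table = {}

def upsert_entity(partition_key, row_key, **properties):
    table[(partition_key, row_key)] = dict(properties)

def get_entity(partition_key, row_key):
    return table.get((partition_key, row_key))

# Entities in the same table can carry different properties (non-relational).
upsert_entity("customers", "c-001", name="Contoso", region="EU")
upsert_entity("devices", "d-042", model="sensor", firmware="1.2")

print(get_entity("customers", "c-001"))  # {'name': 'Contoso', 'region': 'EU'}
```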
Identify Azure data migration options
Now that you understand the different storage options within Azure, it’s important to also
understand how to get your data and information into Azure. Azure supports both real-
time migration of infrastructure, applications, and data using Azure Migrate as well as
asynchronous migration of data using Azure Data Box.
Azure Migrate
Azure Migrate is a service that helps you migrate from an on-premises environment to
the cloud. Azure Migrate functions as a hub to help you manage the assessment and
migration of your on-premises datacenter to Azure. It provides the following:
Unified migration platform: A single portal to start, run, and track your
migration to Azure.
Range of tools: A range of tools for assessment and migration. Azure Migrate
tools include Azure Migrate: Discovery and assessment and Azure Migrate: Server
Migration. Azure Migrate also integrates with other Azure services and tools, and
with independent software vendor (ISV) offerings.
Assessment and migration: In the Azure Migrate hub, you can assess and
migrate your on-premises infrastructure to Azure.
Integrated tools
In addition to working with tools from ISVs, the Azure Migrate hub also includes the
following tools to help with migration:
Azure Migrate: Discovery and assessment. Discover and assess on-premises
servers running on VMware, Hyper-V, and physical servers in preparation for
migration to Azure.
Azure Migrate: Server Migration. Migrate VMware VMs, Hyper-V VMs, physical
servers, other virtualized servers, and public cloud VMs to Azure.
Data Migration Assistant. Data Migration Assistant is a stand-alone tool to
assess SQL Servers. It helps pinpoint potential problems blocking migration. It
identifies unsupported features, new features that can benefit you after migration,
and the right path for database migration.
Azure Database Migration Service. Migrate on-premises databases to Azure
VMs running SQL Server, Azure SQL Database, or SQL Managed Instances.
Azure App Service migration assistant. Azure App Service migration assistant
is a standalone tool to assess on-premises websites for migration to Azure App
Service. Use Migration Assistant to migrate .NET and PHP web apps to Azure.
Azure Data Box. Use Azure Data Box products to move large amounts of offline
data to Azure.
Azure Data Box
Azure Data Box is a physical migration service that helps transfer large amounts of data
in a quick, inexpensive, and reliable way. The secure data transfer is accelerated by
shipping you a proprietary Data Box storage device that has a maximum usable storage
capacity of 80 terabytes. The Data Box is transported to and from your datacenter via a
regional carrier. A rugged case protects and secures the Data Box from damage during
transit.
You can order the Data Box device via the Azure portal to import or export data from
Azure. Once the device is received, you can quickly set it up using the local web UI and
connect it to your network. Once you’re finished transferring the data (either into or out
of Azure), simply return the Data Box. If you’re transferring data into Azure, the data is
automatically uploaded once Microsoft receives the Data Box back. The entire process is
tracked end-to-end by the Data Box service in the Azure portal.
Use cases
Data Box is ideally suited to transferring data sizes larger than 40 TB in scenarios with
limited to no network connectivity. The data movement can be one-time, periodic, or an initial
bulk data transfer followed by periodic transfers.
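Some rough arithmetic shows why shipping a device beats a slow link at this scale. The 100 Mbps link speed below is an assumed example figure, not an Azure number, and the calculation ignores protocol overhead and interruptions.

```python
# How long would a full 80 TB Data Box take to upload over an assumed,
# perfectly sustained 100 Mbps link? (Illustrative arithmetic only.)
data_tb = 80
link_mbps = 100

data_bits = data_tb * 1e12 * 8           # decimal terabytes -> bits
seconds = data_bits / (link_mbps * 1e6)  # ideal sustained throughput
days = seconds / 86400
print(f"{days:.0f} days")  # ~74 days
```

At over two months for a single device's worth of data, courier transit time of a few days is the faster path, which is the core trade-off Data Box targets.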
Here are the various scenarios where Data Box can be used to import data to Azure.
One-time migration - when a large amount of on-premises data is moved to Azure.
Moving a media library from offline tapes into Azure to create an online media
library.
Migrating your VM farm, SQL server, and applications to Azure.
Moving historical data to Azure for in-depth analysis and reporting using HDInsight.
Initial bulk transfer - when an initial bulk transfer is done using Data Box (seed)
followed by incremental transfers over the network.
Periodic uploads - when a large amount of data is generated periodically and needs
to be moved to Azure.
Here are the various scenarios where Data Box can be used to export data from Azure.
Disaster recovery - when a copy of the data from Azure is restored to an on-
premises network. In a typical disaster recovery scenario, a large amount of Azure
data is exported to a Data Box. Microsoft then ships this Data Box, and the data is
restored on your premises in a short time.
Security requirements - when you need to be able to export data out of Azure due
to government or security requirements.
Migrate back to on-premises or to another cloud service provider - when you want
to move all the data back to on-premises, or to another cloud service provider,
export data via Data Box to migrate the workloads.
Once the data from your import order is uploaded to Azure, the disks on the device are
wiped clean in accordance with NIST 800-88r1 standards. For an export order, the disks
are erased once the device reaches the Azure datacenter.
Identify Azure file movement options
In addition to large scale migration using services like Azure Migrate and Azure Data Box,
Azure also has tools designed to help you move or interact with individual files or small
file groups. Among those tools are AzCopy, Azure Storage Explorer, and Azure File Sync.
AzCopy
AzCopy is a command-line utility that you can use to copy blobs or files to or from your
storage account. With AzCopy, you can upload files, download files, copy files between
storage accounts, and even synchronize files. AzCopy can even be configured to work
with other cloud providers to help move files back and forth between clouds.
Important
Synchronizing blobs or files with AzCopy is one-direction synchronization. When you
synchronize, you designate the source and destination, and AzCopy will copy files or
blobs in that direction. It doesn't synchronize bi-directionally based on timestamps or
other metadata.
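One-direction synchronization can be sketched as follows. This is an illustrative model of the behavior described above, not AzCopy's actual implementation; it compares file contents directly, whereas real tools compare metadata such as timestamps or hashes.

```python
# Illustrative sketch of one-direction sync: everything flows from the
# designated source to the destination; nothing is ever copied back.
def sync_one_way(source: dict, destination: dict) -> list:
    copied = []
    for name, content in source.items():
        if destination.get(name) != content:
            destination[name] = content   # create or overwrite at destination
            copied.append(name)
    # Files that exist only at the destination are left untouched:
    # the sync never flows in the reverse direction.
    return copied

src = {"a.txt": "v2", "b.txt": "v1"}
dst = {"a.txt": "v1", "c.txt": "v1"}
print(sync_one_way(src, dst))  # ['a.txt', 'b.txt']
print(sorted(dst))             # ['a.txt', 'b.txt', 'c.txt']
```

Note that `c.txt`, which exists only at the destination, survives and is never pushed back to the source, which is exactly the one-way property the note above warns about.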
Azure Storage Explorer
Azure Storage Explorer is a standalone app that provides a graphical interface to
manage files and blobs in your Azure Storage Account. It works on Windows, macOS, and
Linux operating systems and uses AzCopy on the backend to perform all of the file and
blob management tasks. With Storage Explorer, you can upload to Azure, download from
Azure, or move between storage accounts.
Azure File Sync
Azure File Sync is a tool that lets you centralize your file shares in Azure Files and keep
the flexibility, performance, and compatibility of a Windows file server. It’s almost like
turning your Windows file server into a miniature content delivery network. Once you
install Azure File Sync on your local Windows server, it will automatically stay bi-
directionally synced with your files in Azure.
With Azure File Sync, you can:
Use any protocol that's available on Windows Server to access your data locally,
including SMB, NFS, and FTPS.
Have as many caches as you need across the world.
Replace a failed local server by installing Azure File Sync on a new server in the
same datacenter.
Configure cloud tiering so the most frequently accessed files are replicated locally,
while infrequently accessed files are kept in the cloud until requested.
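The cloud-tiering idea in the last bullet can be sketched as a simple recency policy. This is an illustrative model only; real Azure File Sync tiering is driven by free-space and date policies on the server volume, and the function and slot count below are invented for the example.

```python
# Illustrative sketch of cloud tiering: the most recently accessed files
# stay on the local server; the rest live only in the cloud until requested.
def tier_files(last_access: dict, local_slots: int):
    by_recency = sorted(last_access, key=last_access.get, reverse=True)
    local = set(by_recency[:local_slots])
    tiered = set(by_recency[local_slots:])
    return local, tiered

access_times = {"report.docx": 500, "old-scan.pdf": 10, "notes.txt": 450}
local, tiered = tier_files(access_times, local_slots=2)
print(sorted(local))   # ['notes.txt', 'report.docx']
print(sorted(tiered))  # ['old-scan.pdf']
```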
Describe Azure directory services
Microsoft Entra ID is a directory service that enables you to sign in and access both
Microsoft cloud applications and cloud applications that you develop. Microsoft Entra ID
can also help you maintain your on-premises Active Directory deployment.
For on-premises environments, Active Directory running on Windows Server provides an
identity and access management service that's managed by your organization. Microsoft
Entra ID is Microsoft's cloud-based identity and access management service. With
Microsoft Entra ID, you control the identity accounts, but Microsoft ensures that the
service is available globally. If you've worked with Active Directory, Microsoft Entra ID
will be familiar to you.
When you secure identities on-premises with Active Directory, Microsoft doesn't monitor
sign-in attempts. When you connect Active Directory with Microsoft Entra ID, Microsoft
can help protect you by detecting suspicious sign-in attempts at no extra cost. For
example, Microsoft Entra ID can detect sign-in attempts from unexpected locations or
unknown devices.
Who uses Microsoft Entra ID?
Microsoft Entra ID is for:
IT administrators. Administrators can use Microsoft Entra ID to control access to
applications and resources based on their business requirements.
App developers. Developers can use Microsoft Entra ID to provide a standards-
based approach for adding functionality to applications that they build, such as
adding SSO functionality to an app or enabling an app to work with a user's
existing credentials.
Users. Users can manage their identities and take maintenance actions like self-
service password reset.
Online service subscribers. Microsoft 365, Microsoft Office 365, Azure, and
Microsoft Dynamics CRM Online subscribers are already using Microsoft Entra ID to
authenticate into their account.
What does Microsoft Entra ID do?
Microsoft Entra ID provides services such as:
Authentication: This includes verifying identity to access applications and
resources. It also includes providing functionality such as self-service password
reset, multifactor authentication, a custom list of banned passwords, and smart
lockout services.
Single sign-on: Single sign-on (SSO) enables you to remember only one
username and one password to access multiple applications. A single identity is
tied to a user, which simplifies the security model. As users change roles or leave
an organization, access modifications are tied to that identity, which greatly
reduces the effort needed to change or disable accounts.
Application management: You can manage your cloud and on-premises apps by
using Microsoft Entra ID. Features like Application Proxy, SaaS apps, the My Apps
portal, and single sign-on provide a better user experience.
Device management: Along with accounts for individual people, Microsoft Entra
ID supports the registration of devices. Registration enables devices to be
managed through tools like Microsoft Intune. It also allows for device-based
Conditional Access policies to restrict access attempts to only those coming from
known devices, regardless of the requesting user account.
Can I connect my on-premises AD with Microsoft Entra ID?
If you had an on-premises environment running Active Directory and a cloud deployment
using Microsoft Entra ID, you would need to maintain two identity sets. However, you can
connect Active Directory with Microsoft Entra ID, enabling a consistent identity
experience between cloud and on-premises.
One method of connecting Microsoft Entra ID with your on-premises AD is using Microsoft
Entra Connect. Microsoft Entra Connect synchronizes user identities between on-
premises Active Directory and Microsoft Entra ID. Microsoft Entra Connect synchronizes
changes between both identity systems, so you can use features like SSO, multifactor
authentication, and self-service password reset under both systems.
What is Microsoft Entra Domain Services?
Microsoft Entra Domain Services is a service that provides managed domain services
such as domain join, group policy, lightweight directory access protocol (LDAP), and
Kerberos/NTLM authentication. Just like Microsoft Entra ID lets you use directory services
without having to maintain the infrastructure supporting it, with Microsoft Entra Domain
Services, you get the benefit of domain services without the need to deploy, manage,
and patch domain controllers (DCs) in the cloud.
A Microsoft Entra Domain Services managed domain lets you run legacy applications in
the cloud that can't use modern authentication methods, or where you don't want
directory lookups to always go back to an on-premises AD DS environment. You can lift
and shift those legacy applications from your on-premises environment into a managed
domain, without needing to manage the AD DS environment in the cloud.
Microsoft Entra Domain Services integrates with your existing Microsoft Entra tenant.
This integration lets users sign into services and applications connected to the managed
domain using their existing credentials. You can also use existing groups and user
accounts to secure access to resources. These features provide a smoother lift-and-shift
of on-premises resources to Azure.
How does Microsoft Entra Domain Services work?
When you create a Microsoft Entra Domain Services managed domain, you define a
unique namespace. This namespace is the domain name. Two Windows Server domain
controllers are then deployed into your selected Azure region. This deployment of DCs is
known as a replica set.
You don't need to manage, configure, or update these DCs. The Azure platform handles
the DCs as part of the managed domain, including backups and encryption at rest using
Azure Disk Encryption.
Is information synchronized?
A managed domain is configured to perform a one-way synchronization from Microsoft
Entra ID to Microsoft Entra Domain Services. You can create resources directly in the
managed domain, but they aren't synchronized back to Microsoft Entra ID. In a hybrid
environment with an on-premises AD DS environment, Microsoft Entra Connect
synchronizes identity information with Microsoft Entra ID, which is then synchronized to
the managed domain.
Applications, services, and VMs in Azure that connect to the managed domain can then
use common Microsoft Entra Domain Services features such as domain join, group policy,
LDAP, and Kerberos/NTLM authentication.
Describe Azure authentication methods
Authentication is the process of establishing the identity of a person, service, or device.
It requires the person, service, or device to provide some type of credential to prove who
they are. Authentication is like presenting ID when you’re traveling. It doesn’t confirm
that you’re ticketed, it just proves that you're who you say you are. Azure supports
multiple authentication methods, including standard passwords, single sign-on (SSO),
multifactor authentication (MFA), and passwordless.
For the longest time, security and convenience seemed to be at odds with each other.
Thankfully, new authentication solutions provide both security and convenience.
The following diagram compares the security level of each method with its convenience.
Notice that passwordless authentication is high security and high convenience, while
passwords on their own are low security but high convenience.
What's single sign-on?
Single sign-on (SSO) enables a user to sign in one time and use that credential to access
multiple resources and applications from different providers. For SSO to work, the
different applications and providers must trust the initial authenticator.
More identities mean more passwords to remember and change. Password policies can
vary among applications. As complexity requirements increase, it becomes increasingly
difficult for users to remember them. The more passwords a user has to manage, the
greater the risk of a credential-related security incident.
Consider the process of managing all those identities. More strain is placed on help desks
as they deal with account lockouts and password reset requests. If a user leaves an
organization, tracking down all those identities and ensuring they're disabled can be
challenging. If an identity is overlooked, this might allow access when it should have
been eliminated.
With SSO, you need to remember only one ID and one password. Access across
applications is granted to a single identity that's tied to the user, which simplifies the
security model. As users change roles or leave an organization, access is tied to a single
identity. This change greatly reduces the effort needed to change or disable accounts.
Using SSO for accounts makes it easier for users to manage their identities and for IT to
manage users.
Important
Single sign-on is only as secure as the initial authenticator because the subsequent
connections are all based on the security of the initial authenticator.
What’s multifactor authentication?
Multifactor authentication is the process of prompting a user for an extra form (or factor)
of identification during the sign-in process. MFA helps protect accounts in situations
where the password was compromised but the second factor wasn't.
Think about how you sign into websites, email, or online services. After entering your
username and password, have you ever needed to enter a code that was sent to your
phone? If so, you've used multifactor authentication to sign in.
Multifactor authentication provides additional security for your identities by requiring two
or more elements to fully authenticate. These elements fall into three categories:
Something the user knows – this might be a challenge question.
Something the user has – this might be a code that's sent to the user's mobile
phone.
Something the user is – this is typically some sort of biometric property, such as a
fingerprint or face scan.
Multifactor authentication increases identity security by limiting the impact of credential
exposure (for example, stolen usernames and passwords). With multifactor
authentication enabled, an attacker who has a user's password would also need to have
possession of their phone or their fingerprint to fully authenticate.
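The two-or-more-categories rule can be expressed as a short check. This is an illustrative sketch of the concept, not a real authentication routine; the factor names and mapping below are invented for the example.

```python
# Illustrative check that a sign-in presents factors from at least two
# of the three categories: knows / has / is.
FACTOR_CATEGORY = {
    "password": "knows", "challenge_question": "knows",
    "sms_code": "has", "authenticator_app": "has",
    "fingerprint": "is", "face_scan": "is",
}

def is_multifactor(presented_factors: list) -> bool:
    categories = {FACTOR_CATEGORY[f] for f in presented_factors}
    return len(categories) >= 2

print(is_multifactor(["password", "sms_code"]))           # True
print(is_multifactor(["password", "challenge_question"])) # False
```

The second call shows why a password plus a challenge question isn't true multifactor authentication: both are things the user knows, so they count as a single category.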
Compare multifactor authentication with single-factor authentication. Under single-factor
authentication, an attacker would need only a username and password to authenticate.
Multifactor authentication should be enabled wherever possible because it adds
enormous benefits to security.
What's Microsoft Entra multifactor authentication?
Microsoft Entra multifactor authentication is a Microsoft service that provides multifactor
authentication capabilities. Microsoft Entra multifactor authentication enables users to
choose an additional form of authentication during sign-in, such as a phone call or
mobile app notification.
What’s passwordless authentication?
Features like MFA are a great way to secure your organization, but users often get
frustrated with the additional security layer on top of having to remember their
passwords. People are more likely to comply when it's easy and convenient to do so.
Passwordless authentication methods are more convenient because the password is
removed and replaced with something you have, plus something you are, or something
you know.
Passwordless authentication needs to be set up on a device before it can work. For
example, your computer is something you have. Once it’s been registered or enrolled,
Azure now knows that it’s associated with you. Now that the computer is known, once
you provide something you know or are (such as a PIN or fingerprint), you can be
authenticated without using a password.
Each organization has different needs when it comes to authentication. Microsoft global
Azure and Azure Government offer the following three passwordless authentication
options that integrate with Microsoft Entra ID:
Windows Hello for Business
Microsoft Authenticator app
FIDO2 security keys
Windows Hello for Business
Windows Hello for Business is ideal for information workers that have their own
designated Windows PC. The biometric and PIN credentials are directly tied to the user's
PC, which prevents access from anyone other than the owner. With public key
infrastructure (PKI) integration and built-in support for single sign-on (SSO), Windows
Hello for Business provides a convenient method for seamlessly accessing corporate
resources on-premises and in the cloud.
Microsoft Authenticator App
You can also allow an employee's phone to become a passwordless authentication
method. You may already be using the Microsoft Authenticator App as a convenient
multifactor authentication option in addition to a password. You can also use the
Authenticator App as a passwordless option.
The Authenticator App turns any iOS or Android phone into a strong, passwordless
credential. Users can sign-in to any platform or browser by getting a notification to their
phone, matching a number displayed on the screen to the one on their phone, and then
using their biometric (touch or face) or PIN to confirm. Refer to Download and install the
Microsoft Authenticator app for installation details.
FIDO2 security keys
The FIDO (Fast IDentity Online) Alliance helps to promote open authentication standards
and reduce the use of passwords as a form of authentication. FIDO2 is the latest
standard that incorporates the web authentication (WebAuthn) standard.
FIDO2 security keys are an unphishable standards-based passwordless authentication
method that can come in any form factor. Fast Identity Online (FIDO) is an open standard
for passwordless authentication. FIDO allows users and organizations to leverage the
standard to sign-in to their resources without a username or password by using an
external security key or a platform key built into a device.
Users can register and then select a FIDO2 security key at the sign-in interface as their
main means of authentication. These FIDO2 security keys are typically USB devices, but
could also use Bluetooth or NFC. With a hardware device that handles the authentication,
the security of an account is increased as there's no password that could be exposed or
guessed.
Describe Azure external identities
An external identity is a person, device, service, or other entity outside your organization.
Microsoft Entra External ID refers to all the ways you can securely interact with users
outside of your organization. If you want to collaborate with partners, distributors,
suppliers, or vendors, you can share your resources and define how your internal users
can access external organizations. If you're a developer creating consumer-facing apps,
you can manage your customers' identity experiences.
External identities may sound similar to single sign-on. With External Identities, external
users can "bring their own identities." Whether they have a corporate or government-
issued digital identity, or an unmanaged social identity like Google or Facebook, they can
use their own credentials to sign in. The external user’s identity provider manages their
identity, and you manage access to your apps with Microsoft Entra ID or Azure AD B2C to
keep your resources protected.
The following capabilities make up External Identities:
Business to business (B2B) collaboration - Collaborate with external users by
letting them use their preferred identity to sign-in to your Microsoft applications or
other enterprise applications (SaaS apps, custom-developed apps, etc.). B2B
collaboration users are represented in your directory, typically as guest users.
B2B direct connect - Establish a mutual, two-way trust with another Microsoft
Entra organization for seamless collaboration. B2B direct connect currently
supports Teams shared channels, enabling external users to access your resources
from within their home instances of Teams. B2B direct connect users aren't
represented in your directory, but they're visible from within the Teams shared
channel and can be monitored in Teams admin center reports.
Microsoft Azure Active Directory business to customer (B2C) - Publish
modern SaaS apps or custom-developed apps (excluding Microsoft apps) to
consumers and customers, while using Azure AD B2C for identity and access
management.
Depending on how you want to interact with external organizations and the types of
resources you need to share, you can use a combination of these capabilities.
With Microsoft Entra ID, you can easily enable collaboration across organizational
boundaries by using the Microsoft Entra B2B feature. Guest users from other tenants can
be invited by administrators or by other users. This capability also applies to social
identities such as Microsoft accounts.
You also can easily ensure that guest users have appropriate access. You can ask the
guests themselves or a decision maker to participate in an access review and recertify
(or attest) to the guests' access. The reviewers can give their input on each user's need
for continued access, based on suggestions from Microsoft Entra ID. When an access
review is finished, you can then make changes and remove access for guests who no
longer need it.
Describe Azure conditional access
Conditional Access is a tool that Microsoft Entra ID uses to allow (or deny) access to
resources based on identity signals. These signals include who the user is, where the
user is, and what device the user is requesting access from.
Conditional Access helps IT administrators:
Empower users to be productive wherever and whenever.
Protect the organization's assets.
Conditional Access also provides a more granular multifactor authentication experience
for users. For example, a user might not be challenged for a second authentication factor if
they're at a known location. However, they might be challenged for a second
authentication factor if their sign-in signals are unusual or they're at an unexpected
location.
During sign-in, Conditional Access collects signals from the user, makes decisions based
on those signals, and then enforces that decision by allowing or denying the access
request or challenging for a multifactor authentication response.
The following diagram illustrates this flow:
Here, the signal might be the user's location, the user's device, or the application that
the user is trying to access.
Based on these signals, the decision might be to allow full access if the user is signing in
from their usual location. If the user is signing in from an unusual location or a location
that's marked as high risk, then access might be blocked entirely or possibly granted
after the user provides a second form of authentication.
Enforcement is the action that carries out the decision. For example, the action is to
allow access or require the user to provide a second form of authentication.
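The signal, decision, and enforcement steps above can be sketched as a small policy function. This is an illustrative model only; the location lists and thresholds are invented for the example, and real Conditional Access policies evaluate many more signals.

```python
# Illustrative sketch of the signal -> decision -> enforcement flow.
# The location sets below are invented example policy data.
KNOWN_LOCATIONS = {"office", "home"}
HIGH_RISK_LOCATIONS = {"tor-exit"}

def decide(signals: dict) -> str:
    location = signals["location"]
    if location in HIGH_RISK_LOCATIONS:
        return "block"                 # enforcement: deny the request
    if location in KNOWN_LOCATIONS and signals.get("managed_device"):
        return "allow"                 # enforcement: grant full access
    return "require_mfa"               # enforcement: challenge for a factor

print(decide({"location": "office", "managed_device": True}))  # allow
print(decide({"location": "hotel", "managed_device": False}))  # require_mfa
print(decide({"location": "tor-exit"}))                        # block
```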
When can I use Conditional Access?
Conditional Access is useful when you need to:
Require multifactor authentication (MFA) to access an application depending on
the requester’s role, location, or network. For example, you could require MFA for
administrators but not regular users or for people connecting from outside your
corporate network.
Require access to services only through approved client applications. For example,
you could limit which email applications are able to connect to your email service.
Require users to access your application only from managed devices. A managed
device is a device that meets your standards for security and compliance.
Block access from untrusted sources, such as access from unknown or unexpected
locations.
Describe Azure role-based access control
When you have multiple IT and engineering teams, how can you control what access
they have to the resources in your cloud environment? The principle of least privilege
says you should only grant access up to the level needed to complete a task. If you only
need read access to a storage blob, then you should only be granted read access to that
storage blob. Write access to that blob shouldn’t be granted, nor should read access to
other storage blobs. It’s a good security practice to follow.
However, managing that level of permissions for an entire team would become tedious.
Instead of defining the detailed access requirements for each individual, and then
updating access requirements when new resources are created or new people join the
team, Azure enables you to control access through Azure role-based access control
(Azure RBAC).
Azure provides built-in roles that describe common access rules for cloud resources. You
can also define your own roles. Each role has an associated set of access permissions
that relate to that role. When you assign individuals or groups to one or more roles, they
receive all the associated access permissions.
So, if you hire a new engineer and add them to the Azure RBAC group for engineers, they
automatically get the same access as the other engineers in the same Azure RBAC
group. Similarly, if you add additional resources and point Azure RBAC at them, everyone
in that Azure RBAC group will now have those permissions on the new resources as well
as the existing resources.
How is role-based access control applied to resources?
Role-based access control is applied to a scope, which is a resource or set of resources
that this access applies to.
The following diagram shows the relationship between roles and scopes. A management
group, subscription, or resource group might be given the role of owner, so they have
increased control and authority. An observer, who isn't expected to make any updates,
might be given a role of Reader for the same scope, enabling them to review or observe
the management group, subscription, or resource group.
Scopes include:
A management group (a collection of multiple subscriptions).
A single subscription.
A resource group.
A single resource.
Observers, users managing resources, admins, and automated processes illustrate the
kinds of users or accounts that would typically be assigned each of the various roles.
Azure RBAC is hierarchical, in that when you grant access at a parent scope, those
permissions are inherited by all child scopes. For example:
When you assign the Owner role to a user at the management group scope, that
user can manage everything in all subscriptions within the management group.
When you assign the Reader role to a group at the subscription scope, the
members of that group can view every resource group and resource within the
subscription.
How is Azure RBAC enforced?
Azure RBAC is enforced on any action that's initiated against an Azure resource that
passes through Azure Resource Manager. Resource Manager is a management service
that provides a way to organize and secure your cloud resources.
You typically access Resource Manager from the Azure portal, Azure Cloud Shell, Azure
PowerShell, and the Azure CLI. Azure RBAC doesn't enforce access permissions at the
application or data level. Application security must be handled by your application.
Azure RBAC uses an allow model. When you're assigned a role, Azure RBAC allows you to
perform actions within the scope of that role. If one role assignment grants you read
permissions to a resource group and a different role assignment grants you write
permissions to the same resource group, you have both read and write permissions on
that resource group.
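Both the scope inheritance and the allow model can be sketched together: an assignment applies to a resource when its scope is the resource's scope or any parent of it, and the permissions from every applicable assignment are unioned. This is an illustrative model only; the role definitions and path-style scope names below are simplified inventions.

```python
# Illustrative sketch of Azure RBAC's allow model with scope inheritance.
# Roles and scope paths are simplified, invented example data.
ROLES = {"Reader": {"read"}, "Contributor": {"read", "write"}}

def effective_permissions(assignments, resource_scope):
    allowed = set()
    for scope, role in assignments:
        # An assignment applies if its scope is the resource's scope
        # or a parent of it (a path prefix) -- inheritance downward.
        if resource_scope == scope or resource_scope.startswith(scope + "/"):
            allowed |= ROLES[role]   # allow model: permissions accumulate
    return allowed

assignments = [
    ("/sub-1", "Reader"),              # subscription-level assignment
    ("/sub-1/rg-app", "Contributor"),  # resource-group-level assignment
]
print(sorted(effective_permissions(assignments, "/sub-1/rg-app/vm-1")))
# ['read', 'write'] -- both assignments apply, and their permissions combine
```

There is no deny in this model: adding an assignment can only widen what a principal may do within its scope.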
Describe Zero Trust model
Zero Trust is a security model that assumes the worst case scenario and protects
resources with that expectation. Zero Trust assumes breach at the outset, and then
verifies each request as though it originated from an uncontrolled network.
Today, organizations need a new security model that effectively adapts to the
complexity of the modern environment; embraces the mobile workforce; and protects
people, devices, applications, and data wherever they're located.
To address this new world of computing, Microsoft highly recommends the Zero Trust
security model, which is based on these guiding principles:
Verify explicitly - Always authenticate and authorize based on all available data
points.
Use least privilege access - Limit user access with Just-In-Time and Just-Enough-
Access (JIT/JEA), risk-based adaptive policies, and data protection.
Assume breach - Minimize blast radius and segment access. Verify end-to-end
encryption. Use analytics to get visibility, drive threat detection, and improve
defenses.
Adjusting to Zero Trust
Traditionally, corporate networks were restricted, protected, and generally assumed
safe. Only managed computers could join the network, VPN access was tightly
controlled, and personal devices were frequently restricted or blocked.
The Zero Trust model flips that scenario. Instead of assuming that a device is safe
because it's within the corporate network, it requires everyone to authenticate, and it
grants access based on authentication rather than location.
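The "verify explicitly" principle can be sketched as a per-request check. The signal names below are invented for illustration; real conditional-access engines evaluate far richer data. The point is that network location is not among the checks.

```python
# Toy sketch of Zero Trust "verify explicitly": every request is evaluated on
# its own signals, never trusted because it comes from the corporate network.

def evaluate_request(signals):
    checks = (
        signals.get("user_authenticated"),
        signals.get("mfa_passed"),
        signals.get("device_compliant"),
        signals.get("risk_level") == "low",
    )
    return all(checks)  # any failing signal denies the request

request = {"user_authenticated": True, "mfa_passed": True,
           "device_compliant": True, "risk_level": "low",
           "on_corporate_network": False}  # location plays no role in the decision
print(evaluate_request(request))  # True
```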
Describe defense-in-depth
The objective of defense-in-depth is to protect information and prevent it from being
stolen by those who aren't authorized to access it.
A defense-in-depth strategy uses a series of mechanisms to slow the advance of an
attack that aims at acquiring unauthorized access to data.
Layers of defense-in-depth
You can visualize defense-in-depth as a set of layers, with the data to be secured at the
center and all the other layers functioning to protect that central data layer.
Each layer provides protection so that if one layer is breached, a subsequent layer is
already in place to prevent further exposure. This approach removes reliance on any
single layer of protection. It slows down an attack and provides alert information that
security teams can act upon, either automatically or manually.
Here's a brief overview of the role of each layer:
The physical security layer is the first line of defense to protect computing
hardware in the datacenter.
The identity and access layer controls access to infrastructure and change control.
The perimeter layer uses distributed denial of service (DDoS) protection to filter
large-scale attacks before they can cause a denial of service for users.
The network layer limits communication between resources through segmentation
and access controls.
The compute layer secures access to virtual machines.
The application layer helps ensure that applications are secure and free of security
vulnerabilities.
The data layer controls access to business and customer data that you need to
protect.
These layers provide a guideline to help you make security configuration decisions at
every layer of your applications.
Azure provides security tools and features at every level of the defense-in-depth
concept. Let's take a closer look at each layer:
Physical security
Physically securing access to buildings and controlling access to computing hardware
within the datacenter are the first line of defense.
With physical security, the intent is to provide physical safeguards against access to
assets. These safeguards ensure that other layers can't be bypassed, and loss or theft is
handled appropriately. Microsoft uses various physical security mechanisms in its cloud
datacenters.
Identity and access
The identity and access layer is all about ensuring that identities are secure, that access
is granted only to what's needed, and that sign-in events and changes are logged.
At this layer, it's important to:
Control access to infrastructure and change control.
Use single sign-on (SSO) and multifactor authentication.
Audit events and changes.
Perimeter
The network perimeter protects from network-based attacks against your resources.
Identifying these attacks, eliminating their impact, and alerting you when they happen
are important ways to keep your network secure.
At this layer, it's important to:
Use DDoS protection to filter large-scale attacks before they can affect the
availability of a system for users.
Use perimeter firewalls to identify and alert on malicious attacks against your
network.
Network
At this layer, the focus is on limiting the network connectivity across all your resources to
allow only what's required. By limiting this communication, you reduce the risk of an
attack spreading to other systems in your network.
At this layer, it's important to:
Limit communication between resources.
Deny by default.
Restrict inbound internet access and limit outbound access where appropriate.
Implement secure connectivity to on-premises networks.
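The "deny by default" idea from the list above can be sketched as ordered rule evaluation, loosely modeled on network security group behavior (fields are simplified and all values are illustrative): traffic that matches no rule is blocked.

```python
# Toy sketch of deny-by-default network filtering. Rules are checked in order;
# the first match decides, and no match means the traffic is denied.

def allow_traffic(rules, source, port):
    for action, rule_source, rule_port in rules:
        if rule_source in ("*", source) and rule_port in ("*", port):
            return action == "Allow"
    return False  # deny by default: nothing matched, so the traffic is blocked

rules = [
    ("Allow", "10.0.1.5", 443),  # a single app server may reach this port
    ("Deny", "*", 22),           # explicitly block SSH from everywhere
]
print(allow_traffic(rules, "10.0.1.5", 443))      # True
print(allow_traffic(rules, "203.0.113.7", 8080))  # False: no matching rule
```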
Compute
Malware, unpatched systems, and improperly secured systems open your environment
to attacks. The focus in this layer is on making sure that your compute resources are
secure and that you have the proper controls in place to minimize security issues.
At this layer, it's important to:
Secure access to virtual machines.
Implement endpoint protection on devices and keep systems patched and current.
Application
Integrating security into the application development lifecycle helps reduce the number
of vulnerabilities introduced in code. Every development team should ensure that its
applications are secure by default.
At this layer, it's important to:
Ensure that applications are secure and free of vulnerabilities.
Store sensitive application secrets in a secure storage medium.
Make security a design requirement for all application development.
Data
Those who store and control access to data are responsible for ensuring that it's properly
secured. Often, regulatory requirements dictate the controls and processes that must be
in place to ensure the confidentiality, integrity, and availability of the data.
In almost all cases, attackers are after data:
Stored in a database.
Stored on disk inside virtual machines.
Stored in software as a service (SaaS) applications, such as Office 365.
Managed through cloud storage.
Describe Microsoft Defender for Cloud
Defender for Cloud is a monitoring tool for security posture management and threat
protection. It monitors your cloud, on-premises, hybrid, and multicloud environments to
provide guidance and notifications aimed at strengthening your security posture.
Defender for Cloud provides the tools needed to harden your resources, track your
security posture, protect against cyber attacks, and streamline security management.
Deployment of Defender for Cloud is easy because it's natively integrated with Azure.
Protection everywhere you’re deployed
Because Defender for Cloud is an Azure-native service, many Azure services are
monitored and protected without needing any deployment. However, if you also have an
on-premises datacenter or are also operating in another cloud environment, monitoring
of Azure services may not give you a complete picture of your security situation.
When necessary, Defender for Cloud can automatically deploy a Log Analytics agent to
gather security-related data. For Azure machines, deployment is handled directly. For
hybrid and multicloud environments, Microsoft Defender plans are extended to non-
Azure machines with the help of Azure Arc. Cloud security posture management (CSPM)
features are extended to multicloud machines without the need for any agents.
Azure-native protections
Defender for Cloud helps you detect threats across:
Azure PaaS services – Detect threats targeting Azure services including Azure App
Service, Azure SQL, Azure Storage Account, and more data services. You can also
perform anomaly detection on your Azure activity logs using the native integration
with Microsoft Defender for Cloud Apps (formerly known as Microsoft Cloud App
Security).
Azure data services – Defender for Cloud includes capabilities that help you
automatically classify your data in Azure SQL. You can also get assessments for
potential vulnerabilities across Azure SQL and Storage services, and
recommendations for how to mitigate them.
Networks – Defender for Cloud helps you limit exposure to brute force attacks. By
reducing access to virtual machine ports with just-in-time VM access, you can
harden your network by preventing unnecessary access. You can set secure access
policies on selected ports, for only authorized users, allowed source IP address
ranges or IP addresses, and for a limited amount of time.
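A just-in-time access decision combines exactly the conditions listed above: authorized user, allowed source IP, selected port, limited time window. The sketch below is illustrative only (the policy fields are invented, not the Defender for Cloud API):

```python
# Toy sketch of a just-in-time (JIT) VM access check: a port is reachable only
# for an authorized user, from an allowed source IP, inside an approved window.
from datetime import datetime, timedelta

def jit_access_allowed(policy, user, source_ip, port, now):
    return (
        port == policy["port"]
        and user in policy["authorized_users"]
        and source_ip in policy["allowed_ips"]
        and policy["start"] <= now <= policy["start"] + policy["duration"]
    )

start = datetime(2024, 1, 1, 9, 0)
policy = {
    "port": 3389,                          # RDP management port
    "authorized_users": {"alice"},
    "allowed_ips": {"203.0.113.10"},
    "start": start,
    "duration": timedelta(hours=3),        # access expires after the window
}

print(jit_access_allowed(policy, "alice", "203.0.113.10", 3389,
                         start + timedelta(hours=1)))  # True: inside the window
print(jit_access_allowed(policy, "alice", "203.0.113.10", 3389,
                         start + timedelta(hours=5)))  # False: window expired
```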
Defend your hybrid resources
In addition to defending your Azure environment, you can add Defender for Cloud
capabilities to your hybrid cloud environment to protect your non-Azure servers. To help
you focus on what matters the most, you'll get customized threat intelligence and
prioritized alerts according to your specific environment.
To extend protection to on-premises machines, deploy Azure Arc and enable Defender
for Cloud's enhanced security features.
Defend resources running on other clouds
Defender for Cloud can also protect resources in other clouds (such as AWS and GCP).
For example, if you've connected an Amazon Web Services (AWS) account to an Azure
subscription, you can enable any of these protections:
Defender for Cloud's CSPM features extend to your AWS resources. This agentless
plan assesses your AWS resources according to AWS-specific security
recommendations, and includes the results in the secure score. The resources will
also be assessed for compliance with built-in standards specific to AWS (AWS CIS,
AWS PCI DSS, and AWS Foundational Security Best Practices). Defender for Cloud's
asset inventory page is a multicloud enabled feature helping you manage your
AWS resources alongside your Azure resources.
Microsoft Defender for Containers extends its container threat detection and
advanced defenses to your Amazon EKS Linux clusters.
Microsoft Defender for Servers brings threat detection and advanced defenses to
your Windows and Linux EC2 instances.
Assess, Secure, and Defend
Defender for Cloud fills three vital needs as you manage the security of your resources
and workloads in the cloud and on-premises:
Continuously assess – Know your security posture. Identify and track
vulnerabilities.
Secure – Harden resources and services with Azure Security Benchmark.
Defend – Detect and resolve threats to resources, workloads, and services.
Continuously assess
Defender for Cloud helps you continuously assess your environment. Defender for Cloud
includes vulnerability assessment solutions for your virtual machines, container
registries, and SQL servers.
Microsoft Defender for servers includes automatic, native integration with Microsoft
Defender for Endpoint. With this integration enabled, you'll have access to the
vulnerability findings from Microsoft threat and vulnerability management.
With these assessment tools, you'll have regular, detailed vulnerability scans that
cover your compute, data, and infrastructure. You can review and respond to the results
of these scans all from within Defender for Cloud.
Secure
From authentication methods to access control to the concept of Zero Trust, security in
the cloud is an essential basic that must be done right. In order to be secure in the cloud,
you have to ensure your workloads are secure. To secure your workloads, you need
security policies in place that are tailored to your environment and situation. Because
policies in Defender for Cloud are built on top of Azure Policy controls, you're getting the
full range and flexibility of a world-class policy solution. In Defender for Cloud, you can
set your policies to run on management groups, across subscriptions, and even for a
whole tenant.
One of the benefits of moving to the cloud is the ability to grow and scale as you need,
adding new services and resources as necessary. Defender for Cloud is constantly
monitoring for new resources being deployed across your workloads. Defender for Cloud
assesses if new resources are configured according to security best practices. If not,
they're flagged and you get a prioritized list of recommendations for what you need to
fix. Recommendations help you reduce the attack surface across each of your resources.
The list of recommendations is enabled and supported by the Azure Security Benchmark.
This Microsoft-authored, Azure-specific benchmark provides a set of guidelines for
security and compliance best practices based on common compliance frameworks.
In this way, Defender for Cloud enables you not just to set security policies, but to apply
secure configuration standards across your resources.
To help you understand how important each recommendation is to your overall security
posture, Defender for Cloud groups the recommendations into security controls and adds
a secure score value to each control. The secure score gives you an at-a-glance indicator
of the health of your security posture, while the controls give you a working list of things
to consider to improve your security score and your overall security posture.
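The grouping of recommendations into point-bearing controls can be sketched as a simple aggregation. The control names and point values below are invented, and the exact Defender for Cloud formula may differ; the idea is that each control contributes points in proportion to its healthy resources.

```python
# Hedged sketch of secure score aggregation: each control has a maximum point
# value, earned in proportion to the share of resources that pass the control.

def secure_score(controls):
    earned = sum(c["max_points"] * c["healthy"] / c["total"] for c in controls)
    possible = sum(c["max_points"] for c in controls)
    return round(100 * earned / possible)  # expressed as a percentage

controls = [
    # hypothetical controls for illustration
    {"name": "Enable MFA",               "max_points": 10, "healthy": 5, "total": 10},
    {"name": "Secure management ports",  "max_points": 8,  "healthy": 8, "total": 8},
]
print(secure_score(controls))  # 72
```

Fixing the half-unhealthy "Enable MFA" control would lift the score to 100, which is why the controls act as a prioritized to-do list.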
Defend
The first two areas were focused on assessing, monitoring, and maintaining your
environment. Defender for Cloud also helps you defend your environment by providing
security alerts and advanced threat protection features.
Security alerts
When Defender for Cloud detects a threat in any area of your environment, it generates
a security alert. Security alerts:
Describe details of the affected resources
Suggest remediation steps
Provide, in some cases, an option to trigger a logic app in response
Whether an alert is generated by Defender for Cloud or received by Defender for Cloud
from an integrated security product, you can export it. Defender for Cloud's threat
protection includes fusion kill-chain analysis, which automatically correlates alerts in
your environment based on cyber kill-chain analysis, to help you better understand the
full story of an attack campaign, where it started, and what kind of impact it had on your
resources.
Advanced threat protection
Defender for Cloud provides advanced threat protection features for many of your
deployed resources, including virtual machines, SQL databases, containers, web
applications, and your network. Protections include securing the management ports of
your VMs with just-in-time access, and adaptive application controls to create allowlists
for what apps should and shouldn't run on your machines.
Describe factors that can affect costs in Azure
Azure shifts development costs from the capital expense (CapEx) of building out and
maintaining infrastructure and facilities to an operational expense (OpEx) of renting
infrastructure as you need it, whether it’s compute, storage, networking, and so on.
That OpEx cost can be impacted by many factors. Some of the impacting factors are:
Resource type
Consumption
Maintenance
Geography
Subscription type
Azure Marketplace
Resource type
A number of factors influence the cost of Azure resources. The type of resources, the
settings for the resource, and the Azure region will all have an impact on how much a
resource costs. When you provision an Azure resource, Azure creates metered instances
for that resource. The meters track the resources' usage and generate a usage record
that is used to calculate your bill.
Examples
With a storage account, you specify a type such as blob, a performance tier, an access
tier, redundancy settings, and a region. Creating the same storage account in different
regions may show different costs and changing any of the settings may also impact the
price.
With a virtual machine (VM), you may have to consider licensing for the operating
system or other software, the processor and number of cores for the VM, the attached
storage, and the network interface. Just like with storage, provisioning the same virtual
machine in different regions may result in different costs.
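The metering described above can be pictured as usage records multiplied by per-meter rates. All meter names and prices below are made up for illustration; they are not actual Azure rates.

```python
# Toy sketch of metered billing: each resource emits usage against meters,
# and the bill is the sum of usage x rate across all records.

RATES = {
    "vm_hours": 0.10,          # hypothetical price per VM-hour
    "storage_gb_month": 0.02,  # hypothetical price per GB-month
}

def monthly_bill(usage_records):
    return sum(quantity * RATES[meter] for meter, quantity in usage_records)

usage = [
    ("vm_hours", 730),          # one VM running all month
    ("storage_gb_month", 100),  # 100 GB of attached storage
]
print(round(monthly_bill(usage), 2))  # 75.0
```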
Consumption
Pay-as-you-go has been a consistent theme throughout, and that’s the cloud payment
model where you pay for the resources that you use during a billing cycle. If you use
more compute this cycle, you pay more. If you use less in the current cycle, you pay less.
It’s a straightforward pricing mechanism that allows for maximum flexibility.
However, Azure also offers the ability to commit to using a set amount of cloud
resources in advance and receive discounts on those “reserved” resources. Many
services, including databases, compute, and storage, provide the option to commit to
a level of use and receive a discount, in some cases up to 72 percent.
When you reserve capacity, you’re committing to using and paying for a certain amount
of Azure resources during a given period (typically one or three years). With
pay-as-you-go as a backup, if you see a sudden surge in demand that exceeds what
you’ve reserved, you just pay for the additional resources in excess of your reservation. This
model allows you to recognize significant savings on reliable, consistent workloads while
also having the flexibility to rapidly increase your cloud footprint as the need arises.
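The reserved-plus-overflow model above can be sketched with a small cost function. The hourly rate and discount below are illustrative only, not actual Azure prices.

```python
# Sketch of pay-as-you-go vs. reserved capacity with pay-as-you-go overflow:
# usage up to the reservation gets the discounted rate, the surge beyond it
# is billed at the full pay-as-you-go rate.

def monthly_cost(usage_hours, rate, reserved_hours=0, discount=0.0):
    reserved = min(usage_hours, reserved_hours) * rate * (1 - discount)
    overflow = max(usage_hours - reserved_hours, 0) * rate  # pay-as-you-go
    return reserved + overflow

rate = 0.20  # hypothetical price per hour
print(round(monthly_cost(1000, rate), 2))                                    # 200.0
print(round(monthly_cost(1000, rate, reserved_hours=800, discount=0.4), 2))  # 136.0
```

With a steady 800-hour baseline reserved at a 40 percent discount, the 200-hour surge still gets served; it's simply billed at the full rate.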
Maintenance
The flexibility of the cloud makes it possible to rapidly adjust resources based on
demand. Using resource groups can help keep all of your resources organized. In order
to control costs, it’s important to maintain your cloud environment. For example, every
time you provision a VM, additional resources such as storage and networking are also
provisioned. If you deprovision the VM, those additional resources may not deprovision
at the same time, either intentionally or unintentionally. By keeping an eye on your
resources and making sure you’re not keeping around resources that are no longer
needed, you can help control cloud costs.
Geography
When you provision most resources in Azure, you need to define a region where the
resource deploys. Azure infrastructure is distributed globally, which enables you to
deploy your services centrally or closest to your customers, or something in between.
With this global deployment comes global pricing differences. The cost of power, labor,
taxes, and fees vary depending on the location. Due to these variations, Azure resources
can differ in costs to deploy depending on the region.
Network traffic costs also vary by geography. For example, it’s less expensive to
move information within Europe than to move information from Europe to Asia or South
America.
Network Traffic
Billing zones are a factor in determining the cost of some Azure services.
Bandwidth refers to data moving in and out of Azure datacenters. Some inbound data
transfers (data going into Azure datacenters) are free. For outbound data transfers (data
leaving Azure datacenters), data transfer pricing is based on zones.
A zone is a geographical grouping of Azure regions for billing purposes. The bandwidth
pricing page has additional information on pricing for data ingress, egress, and transfer.
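Zone-based bandwidth billing can be sketched as a lookup on direction and zone. The zone names and per-GB prices below are hypothetical, and this sketch models all inbound transfer as free, which is a simplification of the actual pricing.

```python
# Toy sketch of zone-based bandwidth billing: inbound data transfer is free
# in this model, outbound is priced per billing zone.

EGRESS_PER_GB = {"zone1": 0.08, "zone2": 0.12}  # illustrative prices only

def transfer_cost(direction, zone, gb):
    if direction == "inbound":
        return 0.0                 # inbound modeled as free in this sketch
    return EGRESS_PER_GB[zone] * gb

print(round(transfer_cost("inbound", "zone1", 500), 2))   # 0.0
print(round(transfer_cost("outbound", "zone2", 500), 2))  # 60.0
```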
Subscription type
Some Azure subscription types also include usage allowances, which affect costs.
For example, an Azure free trial subscription provides access to a number of Azure
products that are free for 12 months. It also includes credit to spend within your first 30
days of sign-up. You'll get access to more than 25 products that are always free (based
on resource and region availability).
Azure Marketplace
Azure Marketplace lets you purchase Azure-based solutions and services from third-party
vendors. This could be a server with software preinstalled and configured, or managed
network firewall appliances, or connectors to third-party backup services. When you
purchase products through Azure Marketplace, you may pay for not only the Azure
services that you’re using, but also the services or expertise of the third-party vendor.
Billing structures are set by the vendor.
All solutions available in Azure Marketplace are certified and compliant with Azure
policies and standards. The certification policies may vary based on the service or
solution type and Azure service involved. The commercial marketplace certification
policies page has additional information on Azure Marketplace certifications.
Compare the Pricing and Total Cost of Ownership calculators
The pricing calculator and the total cost of ownership (TCO) calculator are two
calculators that help you understand potential Azure expenses. Both calculators are
accessible from the internet, and both calculators allow you to build out a configuration.
However, the two calculators have very different purposes.
Pricing calculator
The pricing calculator is designed to give you an estimated cost for provisioning
resources in Azure. You can get an estimate for individual resources, build out a solution,
or use an example scenario to see an estimate of the Azure spend. The pricing
calculator’s focus is on the cost of provisioned resources in Azure.
Note
The Pricing calculator is for information purposes only. The prices are only an estimate.
Nothing is provisioned when you add resources to the pricing calculator, and you won't
be charged for any services you select.
With the pricing calculator, you can estimate the cost of any provisioned resources,
including compute, storage, and associated network costs. You can even account for
different storage options like storage type, access tier, and redundancy.
TCO calculator
The TCO calculator is designed to help you compare the costs for running an on-
premises infrastructure compared to an Azure Cloud infrastructure. With the TCO
calculator, you enter your current infrastructure configuration, including servers,
databases, storage, and outbound network traffic. The TCO calculator then compares the
anticipated costs for your current environment with an Azure environment supporting
the same infrastructure requirements.
With the TCO calculator, you enter your configuration, add in assumptions like power and
IT labor costs, and are presented with an estimation of the cost difference to run the
same environment in your current datacenter or in Azure.
What is the Microsoft Cloud Adoption Framework for Azure?
The Cloud Adoption Framework for Azure is a collection of documentation, technical
guidance, best practices, and tools that aid in aligning business, organizational
readiness, and technology strategies. This alignment enables a clear and actionable
journey to the cloud that rapidly delivers on the desired business outcomes.
The cloud fundamentally changes how organizations procure and use technology
resources. With the cloud, they can provision and consume resources only when needed.
While the cloud offers tremendous flexibility in design choices, organizations need a
proven and consistent methodology for adopting cloud technologies. The Cloud Adoption
Framework meets that need. It can help guide your decisions throughout cloud adoption
to accelerate a specific business objective.
How is it structured?
The Cloud Adoption Framework helps customers undertake a simplified cloud journey in
three main stages:
Plan
Ready
Adopt
These three main stages are preceded by a business strategy phase and surrounded by
an operations phase that expands through the cloud adoption journey.
The Cloud Adoption Framework contains detailed information to cover an end-to-end
cloud adoption journey:
It begins with setting the business strategy, which should align to actionable
technology projects that deliver on the desired business outcomes.
It then describes how the organization must:
Prepare its people with technical readiness.
Adjust processes to drive business and technology changes.
Enable business outcomes through implementation of the defined technology plan.
Finally, it covers cloud operations, such as governance, resources, and people and
change management.
The cloud offers nearly unlimited potential, but successful adoption requires careful
planning and strategy. The adoption strategy depends on where you are in your cloud
journey. When you think about your use of the cloud, what is your motivation?
Next, let's define the strategy that might trigger an organization to move to the cloud.
Define strategy
Organizations adopt the cloud to help drive business transformation, such as processes
and product improvement, market growth, and increased profitability. Let's look at the
most common motivation triggers for cloud adoption.
Across organizations of all types, sizes, and industries, the decision to invest in cloud
technologies is often tightly connected to a critical business event. This connection
exists because the cloud might enable the appropriate solution for the event.
Proper cloud technology implementation might turn a reactive response into an
innovation opportunity to drive growth for the organization.
Motivations
Organizations find different triggers to adopt new technologies like Azure. Some triggers
drive the organization to migrate current applications. Other triggers require creation of
new capabilities, products, and experiences.
Some common migration and innovation triggers include:
Preparation for new technical capabilities
Gaining scale to meet market or geographic demands
Cost savings
Reduction in vendor or technical complexity
Optimization of internal operations
Increased business agility
Improvements to customer experiences or engagements
Transformation of products or services
Disruption of the market from new products or services
There are many reasons or triggers for cloud adoption. Which triggers are most relevant
to your business? Where do you see the most opportunity to take advantage of the
benefits of cloud technology? Identifying these opportunities will help you develop your
cloud adoption plan.
Strategy
When you define your cloud business strategy, you should consider business impact,
turnaround time, global reach, performance, and more. Here are key areas you need to
focus on:
Establish clear business outcomes: Drive transparency and engagement for
your journey across the organization.
Define business justification: Identify business value opportunities to then
select the right technology.
As your cloud adoption journey starts, implementing the first application is key to
learning and testing with confidence. Use a two-pronged approach to select it:
Business criteria: Identify an application currently in operation where the owner
has a strong motivation to move to the cloud.
Technical criteria: Select an application that has minimum dependencies and
can be moved as a small group of assets.
Rehost
Also known as a lift-and-shift migration, a rehost effort moves a current-state asset to
the chosen cloud provider, with minimal change to overall architecture.
Reduce capital expense.
Free up datacenter space.
Achieve rapid return on investment in the cloud.
Refactor
Refactor also refers to the application development process of refactoring code to allow
an application to deliver on new business opportunities.
Experience faster and shorter updates.
Benefit from code portability.
Achieve greater cloud efficiency in the areas of resources, speed, and cost.
Rearchitect
When aging applications aren't compatible with the cloud, they might need to be
rearchitected to produce cost and operational efficiencies in the cloud.
Gain application scale and agility.
Adopt new cloud capabilities more easily.
Use a mix of technology stacks.
Rebuild/New
Unsupported, misaligned, or out-of-date on-premises applications might be too
expensive to carry forward. A new code base with a cloud-native design might be the
most appropriate and efficient path.
Accelerate innovation.
Build applications faster.
Reduce operational cost.
Replace
Sometimes the best approach is to replace the current application with a hosted
application that meets all functionality required in the cloud.
Standardize around industry best practices.
Accelerate adoption of business process-driven approaches.
Reallocate development investments into applications that create competitive
differentiation or advantages.
Create your cloud adoption plan
As you develop a business justification model for your organization's cloud journey,
identify business outcomes that can be mapped to specific cloud capabilities and
business strategies to reach the desired state of transformation. Documenting all these
outcomes and business strategies serves as the foundation for your organization's cloud
adoption plan.
Key steps to build this plan are to:
Review sample business outcomes.
Identify the leading metrics that best represent progress toward the identified
business outcomes.
Establish a financial model that aligns with the outcomes and learning metrics.
Tip
Links to sample business outcomes, the business outcome template, learning metrics,
the financial model, and the digital estate document are available in the Summary and
resources unit at the end of this module.
The Azure readiness guide introduces features that help you organize resources,
control costs, and secure and manage your organization. Links to sample skills-readiness
learning paths on Microsoft Learn and Azure Support are available in the Summary and
resources unit at the end of this module.
Create your landing zone
Before you begin to build and deploy solutions with Azure services, make sure your
environment is ready. The term landing zone is used to describe an environment that's
provisioned and prepared to host workloads in a cloud environment, such as Azure. A
fully functioning landing zone is the final deliverable of any iteration of the Cloud
Adoption Framework for Azure methodology.
Each landing zone is part of a broader solution for organizing resources across a cloud
environment. These resources include management groups, resource groups, and
subscriptions. Azure offers many services that help you organize resources, control
costs, and secure and manage your organization's Azure subscription. Microsoft Cost
Management + Billing also provides a few ways to help you predict, analyze, and
manage costs.
Note
For an interactive experience, view the environment-readiness content in the Azure
portal. Go to the Azure Quickstart Center in the Azure portal, and select introduction to
Azure setup. Then follow the step-by-step instructions.
Tip
Standards-based Azure Blueprints samples are available and ready to use. Visit the list of
available samples that are ready to use or modify for your needs, linked in the Summary
and resources unit at the end of this module.
Cloud migration is one way to effectively balance a cloud portfolio. This is often the
fastest and most agile approach in the short term. Conversely, some benefits of the
cloud might not be realized without additional future modification. Enterprises and mid-
market customers use this approach to accelerate the pace of change, avoid planned
capital expenditures, and reduce ongoing operational costs.
The strategy and tools you use to migrate an application to Azure largely depend on your
business motivations, technology strategies, and timelines. Your decisions are also based
on a deep understanding of the application and the assets to be migrated. These assets
include infrastructure, apps, and data. This decision tree serves as high-level guidance to
help you select the best tools to use based on migration decisions.
Migration preparation: Establish a rough migration backlog, based largely on the
current state and desired outcomes.
Business outcomes: The key business objectives that drive this migration.
They're defined in the Plan phase.
Digital estate estimate: A rough estimate of the number and condition of
workloads to be migrated. It's defined in the Plan phase.
Roles and responsibilities: A clear definition of the team structure, separation
of responsibilities, and access requirements. They're defined in the Ready phase.
Change management requirements: The cadence, processes, and
documentation required to review and approve changes. They're defined in the
Ready phase.
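The migration backlog described above can be pictured as a list of workloads ordered by business value and migration effort. This is a minimal sketch; the field names and scoring are assumptions for the example, not a Microsoft schema.

```python
# Illustrative sketch: a rough migration backlog, ordered so that
# high-value, low-effort workloads are migrated first.
backlog = [
    {"workload": "hr-portal", "business_value": 3, "migration_effort": 2},
    {"workload": "erp",       "business_value": 5, "migration_effort": 5},
    {"workload": "intranet",  "business_value": 2, "migration_effort": 1},
]

# Sort by descending business value, then ascending effort.
ordered = sorted(backlog, key=lambda w: (-w["business_value"], w["migration_effort"]))
print([w["workload"] for w in ordered])
```

In practice, the ordering would also reflect the business outcomes, digital estate estimate, and change management requirements listed above.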
Cloud innovation
Cloud-native applications and data accelerate development and experimentation cycles.
Older applications can take advantage of many of the same cloud-native benefits by
modernizing the solution or its components. Modern DevOps and software
development lifecycle (SDLC) approaches that use cloud technology shorten the time
from idea to product. Combined, these tools invite the customer into the process,
creating shorter feedback loops and better customer experiences.
Modern approaches to infrastructure deployment, operations, and governance are
rapidly bridging the gaps between development and operations. Modernization and
innovation in the IT portfolio create tighter alignment with DevOps and accelerate
innovations across the digital estate and application portfolio.
Cloud governance
Cloud governance creates guardrails that keep the organization on a safe path
throughout the journey. The Cloud Adoption Framework for Azure governance model
identifies key areas of importance. Each area relates to different types of risks the
organization must address as it adopts more cloud services.
Because governance requirements will evolve throughout the cloud adoption journey, a
flexible approach to governance is required. IT governance must move quickly and keep
pace with business demands to stay relevant during cloud adoption.
Incremental governance relies on a small set of corporate policies, processes, and tools
to establish a foundation for adoption and governance. That foundation is called
a minimum viable product (MVP). An MVP allows the governance team to quickly
incorporate governance into implementations throughout the adoption lifecycle. After
this MVP is deployed, additional layers of governance can be quickly incorporated into
the environment.
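A governance MVP can start as small as a single guardrail. The sketch below models one such check locally: requiring certain tags before a resource is created. In real Azure this would be an Azure Policy assignment; the tag names here are assumptions for the example.

```python
# Minimal sketch of an MVP governance guardrail: one policy check (required
# tags) applied before a resource is created. A local stand-in for Azure
# Policy, used only to show the idea of starting small.
REQUIRED_TAGS = {"cost-center", "owner"}

def policy_check(resource: dict) -> list:
    """Return a list of violations; an empty list means the resource is compliant."""
    missing = REQUIRED_TAGS - set(resource.get("tags", {}))
    return [f"missing tag: {t}" for t in sorted(missing)]

print(policy_check({"name": "vm-01", "tags": {"owner": "alice"}}))
```

Additional layers of governance (region restrictions, SKU allow-lists, and so on) can then be added incrementally on top of this foundation.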
Tip
To determine where you should start to implement your own cloud governance, use the
Microsoft assessment tools linked in the Summary and resources unit at the end of this
module.
Cloud management
The goal of the Manage methodology is to maximize ongoing business returns by
balancing stability and operational costs. Stable business operations lead to stable
revenue streams. Controlled operational costs reduce overhead, driving more profit
from the business processes.
Tip
Links to the scalability, availability, and resiliency resources are available in
the Summary and resources unit at the end of this module.
Cloud operations follows a maturity model that helps the team fulfill its commitments
to the business. In the early stages of maturity, customers focus on basic needs such as
inventory and visibility into cloud assets and performance. As operations in the cloud
mature, the team can use cloud-native or hybrid approaches to maintain operational
compliance, which reduces the likelihood of interruptions through configuration and
state management. After compliance is achieved, protection and recovery services provide
low-impact ways to reduce the duration and effect of business process interruptions.
During platform operations, aspects of various platforms (like containers or data
platforms) are adjusted and automated to improve performance.
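The configuration and state management mentioned above often comes down to detecting drift: comparing a declared desired state against the observed state of a resource. The following is an illustrative sketch; the setting names and values are assumptions, not an Azure resource schema.

```python
# Illustrative sketch of state management: detect drift between a declared
# desired state and the observed state of a resource, so it can be remediated.
desired  = {"size": "Standard_D2s_v3", "tls_min_version": "1.2", "backups": True}
observed = {"size": "Standard_D2s_v3", "tls_min_version": "1.0", "backups": True}

# Settings whose observed value differs from the desired value.
drift = {k: (desired[k], observed[k]) for k in desired if observed.get(k) != desired[k]}
print(drift)
```

Automating this kind of comparison and remediation is one way platform operations reduce the likelihood and duration of business process interruptions.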