
Microsoft Azure


Cloud computing is the delivery of computing services over the internet. Computing services include common
IT infrastructure such as virtual machines, storage, databases, and networking. Cloud services also expand the
traditional IT offerings to include things like Internet of Things (IoT), machine learning (ML), and artificial
intelligence (AI).
With the shared responsibility model, responsibilities are shared between the cloud provider and the
consumer. Physical security, power, cooling, and network connectivity are the responsibility of the cloud
provider. The consumer isn't colocated with the datacenter, so it wouldn't make sense for the consumer to have
any of those responsibilities.
At the same time, the consumer is responsible for the data and information stored in the cloud. (You wouldn’t
want the cloud provider to be able to read your information.) The consumer is also responsible for access
security, meaning you only give access to those who need it.
When using a cloud provider, you’ll always be responsible for:
 The information and data stored in the cloud
 Devices that are allowed to connect to your cloud (cell phones, computers, and so on)
 The accounts and identities of the people, services, and devices within your organization
The cloud provider is always responsible for:
 The physical datacenter
 The physical network
 The physical hosts
Your service model will determine responsibility for things like:
 Operating systems
 Network controls
 Applications
 Identity and infrastructure

Private cloud
Let's start with a private cloud. A private cloud is, in some ways, the natural
evolution from a corporate datacenter. It's a cloud (delivering IT services over the
internet) that's used by a single entity. A private cloud provides much greater control
for the company and its IT department. However, it also comes with greater cost
and fewer of the benefits of a public cloud deployment. Finally, a private cloud may
be hosted from your on-site datacenter. It may also be hosted in a dedicated
datacenter offsite, potentially even by a third party that has dedicated that
datacenter to your company.
Public cloud
A public cloud is built, controlled, and maintained by a third-party cloud provider.
With a public cloud, anyone who wants to purchase cloud services can access and
use resources. The general public availability is a key difference between public
and private clouds.
Hybrid cloud
A hybrid cloud is a computing environment that uses both public and private
clouds in an interconnected environment. A hybrid cloud environment can be used
to allow a private cloud to surge for increased, temporary demand by deploying
public cloud resources. Hybrid cloud can also be used to provide an extra layer of
security. For example, users can flexibly choose which services to keep in the public
cloud and which to deploy to their private cloud infrastructure.

Multi-cloud
A fourth, and increasingly likely, scenario is a multi-cloud scenario. In a multi-cloud
scenario, you use multiple public cloud providers. Maybe you use different features from
different cloud providers. Or maybe you started your cloud journey with one provider and
are in the process of migrating to a different provider. Regardless, in a multi-cloud
environment you deal with two (or more) public cloud providers and manage resources
and security in both environments.

Azure Arc
Azure Arc is a set of technologies that helps you manage your cloud environment,
whether it's a public cloud solely on Azure, a private cloud in your datacenter, a hybrid
configuration, or even a multi-cloud environment running on multiple cloud providers at
once.

Azure VMware Solution


What if you’re already established with VMware in a private cloud environment but want
to migrate to a public or hybrid cloud? Azure VMware Solution lets you run your VMware
workloads in Azure with seamless integration and scalability.

Describe the consumption-based model


When comparing IT infrastructure models, there are two types of expenses to consider: capital expenditure
(CapEx) and operational expenditure (OpEx).
CapEx is typically a one-time, up-front expenditure to purchase or secure tangible resources. A new building,
repaving the parking lot, building a datacenter, or buying a company vehicle are examples of CapEx.
In contrast, OpEx is spending money on services or products over time. Renting a convention center, leasing a
company vehicle, or signing up for cloud services are all examples of OpEx.
Cloud computing falls under OpEx because cloud computing operates on a consumption-based model. With
cloud computing, you don’t pay for the physical infrastructure, the electricity, the security, or anything else
associated with maintaining a datacenter. Instead, you pay for the IT resources you use. If you don’t use any IT
resources this month, you don’t pay for any IT resources.
This consumption-based model has many benefits, including:
 No upfront costs.
 No need to purchase and manage costly infrastructure that users might not use to its fullest potential.
 The ability to pay for more resources when they're needed.
 The ability to stop paying for resources that are no longer needed.
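To make the consumption-based model concrete, here is a tiny, purely illustrative Python calculation. The hourly rate and usage figures are hypothetical placeholders, not real Azure prices.

# Hypothetical numbers for illustration only -- not actual Azure pricing.
hourly_rate = 0.10            # assumed cost per hour for a small VM
hours_used_this_month = 150   # the VM was deallocated for the rest of the month

consumption_charge = hourly_rate * hours_used_this_month
print(f"Pay-for-what-you-use charge: ${consumption_charge:.2f}")  # $15.00, with no upfront cost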

High availability
When you’re deploying an application, a service, or any IT resources, it’s important the
resources are available when needed. High availability focuses on ensuring maximum
availability, regardless of disruptions or events that may occur.

When you’re architecting your solution, you’ll need to account for service availability
guarantees. Azure is a highly available cloud environment with uptime guarantees
depending on the service. These guarantees are part of the service-level agreements
(SLAs).

Scalability
Another major benefit of cloud computing is the scalability of cloud resources. Scalability refers to the ability to
adjust resources to meet demand. If you suddenly experience peak traffic and your systems are overwhelmed,
the ability to scale means you can add more resources to better handle the increased demand.
The other benefit of scalability is that you aren't overpaying for services. Because the cloud is a consumption-
based model, you only pay for what you use. If demand drops off, you can reduce your resources and thereby
reduce your costs.
Scaling generally comes in two varieties: vertical and horizontal. Vertical scaling is focused on increasing or
decreasing the capabilities of resources. Horizontal scaling is adding or subtracting the number of resources.
Vertical scaling
With vertical scaling, if you were developing an app and you needed more processing power, you could
vertically scale up to add more CPUs or RAM to the virtual machine. Conversely, if you realized you had over-
specified the needs, you could vertically scale down by lowering the CPU or RAM specifications.
Horizontal scaling
With horizontal scaling, if you suddenly experienced a steep jump in demand, your deployed resources could be
scaled out, either automatically or manually. For example, you could add additional virtual machines or
containers. In the same manner, if there was a significant drop in demand, deployed resources could be
scaled in, either automatically or manually.
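As a rough illustration of the two scaling styles, the sketch below uses the Azure SDK for Python (the azure-identity and azure-mgmt-compute packages). The subscription ID, resource group, VM and scale set names, and the target size are placeholders; treat this as a sketch rather than a production pattern.

from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

subscription_id = "<subscription-id>"  # placeholder
compute = ComputeManagementClient(DefaultAzureCredential(), subscription_id)

# Vertical scaling: give an existing VM more CPU and RAM by moving it to a larger size.
compute.virtual_machines.begin_update(
    "my-resource-group", "demo-vm",
    {"hardware_profile": {"vm_size": "Standard_D4s_v5"}},
).result()

# Horizontal scaling: add two instances to a virtual machine scale set.
scale_set = compute.virtual_machine_scale_sets.get("my-resource-group", "demo-vmss")
scale_set.sku.capacity += 2
compute.virtual_machine_scale_sets.begin_create_or_update(
    "my-resource-group", "demo-vmss", scale_set,
).result()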
Reliability
Reliability is the ability of a system to recover from failures and continue to function. It's also one of the pillars
of the Microsoft Azure Well-Architected Framework.
The cloud, by virtue of its decentralized design, naturally supports a reliable and resilient infrastructure. With a
decentralized design, the cloud enables you to have resources deployed in regions around the world. With this
global scale, even if one region has a catastrophic event, other regions are still up and running. You can design
your applications to automatically take advantage of this increased reliability. In some cases, your cloud
environment itself will automatically shift to a different region for you, with no action needed on your part.
You’ll learn more about how Azure leverages global scale to provide reliability later in this series.
Predictability
Predictability in the cloud lets you move forward with confidence. Predictability can be focused on performance
predictability or cost predictability. Both performance and cost predictability are heavily influenced by the
Microsoft Azure Well-Architected Framework. Deploy a solution built around this framework and you have a
solution whose cost and performance are predictable.
Performance
Performance predictability focuses on predicting the resources needed to deliver a positive experience for your
customers. Autoscaling, load balancing, and high availability are just some of the cloud concepts that support
performance predictability. If you suddenly need more resources, autoscaling can deploy additional resources to
meet the demand, and then scale back when the demand drops. Or if the traffic is heavily focused on one area,
load balancing will help redirect some of the overload to less stressed areas.
Cost
Cost predictability is focused on predicting or forecasting the cost of the cloud spend. With the cloud, you can
track your resource use in real time, monitor resources to ensure that you’re using them in the most efficient
way, and apply data analytics to find patterns and trends that help better plan resource deployments. By
operating in the cloud and using cloud analytics and information, you can predict future costs and adjust your
resources as needed. You can even use tools like the Total Cost of Ownership (TCO) calculator or the Pricing
Calculator to get an estimate of potential cloud spend.
Security and governance
Whether you’re deploying infrastructure as a service or software as a service, cloud features support governance
and compliance. Things like set templates help ensure that all your deployed resources meet corporate standards
and government regulatory requirements. Plus, you can update all your deployed resources to new standards as
standards change. Cloud-based auditing helps flag any resource that’s out of compliance with your corporate
standards and provides mitigation strategies. Depending on your operating model, software patches and updates
may also automatically be applied, which helps with both governance and security.
On the security side, you can find a cloud solution that matches your security needs. If you want maximum
control of security, infrastructure as a service provides you with physical resources but lets you manage the
operating systems and installed software, including patches and maintenance. If you want patches and
maintenance taken care of automatically, platform as a service or software as a service deployments may be the
best cloud strategies for you.
And because the cloud is intended as an over-the-internet delivery of IT resources, cloud providers are typically
well suited to handle things like distributed denial of service (DDoS) attacks, making your network more robust
and secure.
By establishing a good governance footprint early, you can keep your cloud footprint updated, secure, and well
managed.
Manageability
A major benefit of cloud computing is the manageability options. There are two types of manageability for
cloud computing that you’ll learn about in this series, and both are excellent benefits.
Management of the cloud
Management of the cloud speaks to managing your cloud resources. In the cloud, you can:
 Automatically scale resource deployment based on need.
 Deploy resources based on a preconfigured template, removing the need for manual configuration.
 Monitor the health of resources and automatically replace failing resources.
 Receive automatic alerts based on configured metrics, so you’re aware of performance in real time.
Management in the cloud
Management in the cloud speaks to how you’re able to manage your cloud environment and resources. You can
manage these:
 Through a web portal.
 Using a command line interface.
 Using APIs.
 Using PowerShell.
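For instance, the same operations available through the portal, CLI, and PowerShell are exposed as REST APIs, which the Azure SDKs wrap. A minimal sketch with the Azure SDK for Python (azure-identity and azure-mgmt-resource; the subscription ID is a placeholder) that lists resource groups:

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

subscription_id = "<subscription-id>"  # placeholder
resources = ResourceManagementClient(DefaultAzureCredential(), subscription_id)

# List every resource group in the subscription -- the same data the portal shows.
for group in resources.resource_groups.list():
    print(group.name, group.location)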

Infrastructure as a service (IaaS) is the most flexible category of cloud services, as it
provides you the maximum amount of control for your cloud resources. In an IaaS model,
the cloud provider is responsible for maintaining the hardware, network connectivity (to
the internet), and physical security. You’re responsible for everything else: operating
system installation, configuration, and maintenance; network configuration; database
and storage configuration; and so on. With IaaS, you’re essentially renting the hardware
in a cloud datacenter, but what you do with that hardware is up to you.

Shared responsibility model
The shared responsibility model applies to all the cloud service types. IaaS places
the largest share of responsibility with you. The cloud provider is responsible for
maintaining the physical infrastructure and its access to the internet. You’re
responsible for installation and configuration, patching and updates, and security.

Scenarios
Some common scenarios where IaaS might make sense include:

 Lift-and-shift migration: You’re setting up cloud resources similar to your on-prem
datacenter, and then simply moving the things running on-prem to running on the
IaaS infrastructure.
 Testing and development: You have established configurations for development
and test environments that you need to rapidly replicate. You can start up or shut
down the different environments rapidly with an IaaS structure, while maintaining
complete control.
Platform as a service (PaaS) is a middle ground between renting space in a
datacenter (infrastructure as a service) and paying for a complete and deployed
solution (software as a service). In a PaaS environment, the cloud provider
maintains the physical infrastructure, physical security, and connection to the
internet. They also maintain the operating systems, middleware, development
tools, and business intelligence services that make up a cloud solution. In a PaaS
scenario, you don't have to worry about the licensing or patching for operating
systems and databases.

PaaS is well suited to provide a complete development environment without the
headache of maintaining all the development infrastructure.

Shared responsibility model
The shared responsibility model applies to all the cloud service types. PaaS splits
the responsibility between you and the cloud provider. The cloud provider is
responsible for maintaining the physical infrastructure and its access to the
internet, just like in IaaS. In the PaaS model, the cloud provider will also maintain
the operating systems, databases, and development tools. Think of PaaS like using
a domain-joined machine: IT maintains the device with regular updates, patches,
and refreshes.

Depending on the configuration, you or the cloud provider may be responsible for
networking settings and connectivity within your cloud environment, network and
application security, and the directory infrastructure.

Some common scenarios where PaaS might make sense include:

 Development framework: PaaS provides a framework that developers can build
upon to develop or customize cloud-based applications. Similar to the way you
create an Excel macro, PaaS lets developers create applications using built-in
software components. Cloud features such as scalability, high availability, and
multi-tenant capability are included, reducing the amount of coding that
developers must do.
 Analytics or business intelligence: Tools provided as a service with PaaS allow
organizations to analyze and mine their data, finding insights and patterns and
predicting outcomes to improve forecasting, product design decisions, investment
returns, and other business decisions.

Software as a service (SaaS) is the most complete cloud service model from a
product perspective. With SaaS, you’re essentially renting or using a fully
developed application. Email, financial software, messaging applications, and
connectivity software are all common examples of a SaaS implementation.

While the SaaS model may be the least flexible, it’s also the easiest to get up and
running. It requires the least amount of technical knowledge or expertise to fully
employ.

Shared responsibility model
The shared responsibility model applies to all the cloud service types. SaaS is the
model that places the most responsibility with the cloud provider and the least
responsibility with the user. In a SaaS environment you’re responsible for the data
that you put into the system, the devices that you allow to connect to the system,
and the users that have access. Nearly everything else falls to the cloud provider.
The cloud provider is responsible for physical security of the datacenters, power,
network connectivity, and application development and patching.

Some common scenarios for SaaS are:

 Email and messaging.
 Business productivity applications.
 Finance and expense tracking.

What is the Microsoft Learn sandbox?

Many of the Learn exercises use a technology called the sandbox, which creates a
temporary subscription that's added to your Azure account. This temporary subscription
allows you to create Azure resources during a Learn module. Learn automatically cleans
up the temporary resources for you after you've completed the module.

Describe Azure physical infrastructure


Throughout your journey with Microsoft Azure, you’ll hear and use terms like Regions,
Availability Zones, Resources, Subscriptions, and more. This module focuses on the core
architectural components of Azure. The core architectural components of Azure may be
broken down into two main groupings: the physical infrastructure, and the management
infrastructure.
Physical infrastructure
The physical infrastructure for Azure starts with datacenters. Conceptually, the
datacenters are the same as large corporate datacenters. They’re facilities with
resources arranged in racks, with dedicated power, cooling, and networking
infrastructure.
As a global cloud provider, Azure has datacenters around the world. However, these
individual datacenters aren’t directly accessible. Datacenters are grouped into Azure
Regions or Azure Availability Zones that are designed to help you achieve resiliency and
reliability for your business-critical workloads.
The Global infrastructure site gives you a chance to interactively explore the underlying
Azure infrastructure.
Regions
A region is a geographical area on the planet that contains at least one, but potentially
multiple, datacenters that are nearby and networked together with a low-latency
network. Azure intelligently assigns and controls the resources within each region to
ensure workloads are appropriately balanced.
When you deploy a resource in Azure, you'll often need to choose the region where you
want your resource deployed.
Note
Some services or virtual machine (VM) features are only available in certain regions,
such as specific VM sizes or storage types. There are also some global Azure services
that don't require you to select a particular region, such as Microsoft Entra ID, Azure
Traffic Manager, and Azure DNS.
Availability Zones
Availability zones are physically separate datacenters within an Azure region. Each
availability zone is made up of one or more datacenters equipped with independent
power, cooling, and networking. An availability zone is set up to be an isolation
boundary. If one zone goes down, the other continues working. Availability zones are
connected through high-speed, private fiber-optic networks.

Important
To ensure resiliency, a minimum of three separate availability zones are present in all
availability zone-enabled regions. However, not all Azure Regions currently support
availability zones.
Use availability zones in your apps
You want to ensure your services and data are redundant so you can protect your
information in case of failure. When you host your infrastructure, setting up your own
redundancy requires that you create duplicate hardware environments. Azure can help
make your app highly available through availability zones.
You can use availability zones to run mission-critical applications and build high-
availability into your application architecture by co-locating your compute, storage,
networking, and data resources within an availability zone and replicating in other
availability zones. Keep in mind that there could be a cost to duplicating your services
and transferring data between availability zones.
Availability zones are primarily for VMs, managed disks, load balancers, and SQL
databases. Azure services that support availability zones fall into three categories:
 Zonal services: You pin the resource to a specific zone (for example, VMs,
managed disks, IP addresses).
 Zone-redundant services: The platform replicates automatically across zones (for
example, zone-redundant storage, SQL Database).
 Non-regional services: Services are always available from Azure geographies and
are resilient to zone-wide outages as well as region-wide outages.
Even with the additional resiliency that availability zones provide, it’s possible that an
event could be so large that it impacts multiple availability zones in a single region. To
provide even further resilience, Azure has Region Pairs.
Region pairs
Most Azure regions are paired with another region within the same geography (such as
US, Europe, or Asia) at least 300 miles away. This approach allows for the replication of
resources across a geography that helps reduce the likelihood of interruptions because
of events such as natural disasters, civil unrest, power outages, or physical network
outages that affect an entire region. For example, if a region in a pair was affected by a
natural disaster, services would automatically fail over to the other region in its region
pair.
Important
Not all Azure services automatically replicate data or automatically fall back from a failed
region to cross-replicate to another enabled region. In these scenarios, recovery and
replication must be configured by the customer.
Examples of region pairs in Azure are West US paired with East US, and Southeast Asia
paired with East Asia. Because the regions in a pair are directly connected and far enough
apart to be isolated from regional disasters, you can use them to provide reliable
services and data redundancy.

Additional advantages of region pairs:
 If an extensive Azure outage occurs, one region out of every pair is prioritized to
make sure at least one is restored as quickly as possible for applications hosted in
that region pair.
 Planned Azure updates are rolled out to paired regions one region at a time to
minimize downtime and risk of application outage.
 Data continues to reside within the same geography as its pair (except for Brazil
South) for tax- and law-enforcement jurisdiction purposes.
Important
Most regions are paired in two directions, meaning they are the backup for the region
that provides a backup for them (West US and East US back each other up). However,
some regions, such as West India and Brazil South, are paired in only one direction. In a
one-direction pairing, the primary region doesn't provide backup for its secondary
region. For example, West India's secondary region is South India, but South India's
secondary region is Central India, not West India. Brazil South is unique because it's
paired with a region outside of its geography: its secondary region is South Central US,
but the secondary region of South Central US isn't Brazil South.
Sovereign Regions
In addition to regular regions, Azure also has sovereign regions. Sovereign regions are
instances of Azure that are isolated from the main instance of Azure. You may need to
use a sovereign region for compliance or legal purposes.
Azure sovereign regions include:
 US DoD Central, US Gov Virginia, US Gov Iowa and more: These regions are
physical and logical network-isolated instances of Azure for U.S. government
agencies and partners. These datacenters are operated by screened U.S.
personnel and include additional compliance certifications.
 China East, China North, and more: These regions are available through a unique
partnership between Microsoft and 21Vianet, whereby Microsoft doesn't directly
maintain the datacenters.
Describe Azure management infrastructure
The management infrastructure includes Azure resources and resource groups,
subscriptions, and accounts. Understanding the hierarchical organization will help you
plan your projects and products within Azure.
Azure resources and resource groups
A resource is the basic building block of Azure. Anything you create, provision, deploy,
etc. is a resource. Virtual Machines (VMs), virtual networks, databases, cognitive
services, etc. are all considered resources within Azure.
Resource groups are simply groupings of resources. When you create a resource, you’re
required to place it into a resource group. While a resource group can contain many
resources, a single resource can only be in one resource group at a time. Some
resources may be moved between resource groups, but when you move a resource to a
new group, it will no longer be associated with the former group. Additionally, resource
groups can't be nested, meaning you can’t put resource group B inside of resource group
A.
Resource groups provide a convenient way to group resources together. When you apply
an action to a resource group, that action will apply to all the resources within the
resource group. If you delete a resource group, all the resources will be deleted. If you
grant or deny access to a resource group, you’ve granted or denied access to all the
resources within the resource group.
When you’re provisioning resources, it’s good to think about the resource group
structure that best suits your needs.
For example, if you’re setting up a temporary dev environment, grouping all the
resources together means you can deprovision all of the associated resources at once by
deleting the resource group. If you’re provisioning compute resources that will need
three different access schemas, it may be best to group resources based on the access
schema, and then assign access at the resource group level.
There aren’t hard rules about how you use resource groups, so consider how to set up
your resource groups to maximize their usefulness for you.
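As a hedged sketch of the temporary dev environment example above, using the Azure SDK for Python (azure-identity and azure-mgmt-resource; the subscription ID, group name, and region are placeholders): create the group, deploy into it, then delete the group to deprovision everything at once.

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

subscription_id = "<subscription-id>"  # placeholder
resources = ResourceManagementClient(DefaultAzureCredential(), subscription_id)

# Create (or update) a resource group to hold the temporary dev environment.
resources.resource_groups.create_or_update("dev-sandbox-rg", {"location": "eastus"})

# ...deploy VMs, databases, and other resources into dev-sandbox-rg here...

# Deleting the resource group deprovisions every resource inside it in one step.
resources.resource_groups.begin_delete("dev-sandbox-rg").result()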
Azure subscriptions
In Azure, subscriptions are a unit of management, billing, and scale. Similar to how
resource groups are a way to logically organize resources, subscriptions allow you to
logically organize your resource groups and facilitate billing.

Using Azure requires an Azure subscription. A subscription provides you with
authenticated and authorized access to Azure products and services. It also allows you to
provision resources. An Azure subscription links to an Azure account, which is an identity
in Microsoft Entra ID or in a directory that Microsoft Entra ID trusts.
An account can have multiple subscriptions, but it’s only required to have one. In a multi-
subscription account, you can use the subscriptions to configure different billing models
and apply different access-management policies. You can use Azure subscriptions to
define boundaries around Azure products, services, and resources. There are two types
of subscription boundaries that you can use:
 Billing boundary: This subscription type determines how an Azure account is
billed for using Azure. You can create multiple subscriptions for different types of
billing requirements. Azure generates separate billing reports and invoices for each
subscription so that you can organize and manage costs.
 Access control boundary: Azure applies access-management policies at the
subscription level, and you can create separate subscriptions to reflect different
organizational structures. An example is that within a business, you have different
departments to which you apply distinct Azure subscription policies. This model
allows you to manage and control access to the resources that users provision with
specific subscriptions.
Create additional Azure subscriptions
Similar to using resource groups to separate resources by function or access, you might
want to create additional subscriptions for resource or billing management purposes. For
example, you might choose to create additional subscriptions to separate:
 Environments: You can choose to create subscriptions to set up separate
environments for development and testing, security, or to isolate data for
compliance reasons. This design is particularly useful because resource access
control occurs at the subscription level.
 Organizational structures: You can create subscriptions to reflect different
organizational structures. For example, you could limit one team to lower-cost
resources, while allowing the IT department a full range. This design allows you to
manage and control access to the resources that users provision within each
subscription.
 Billing: You can create additional subscriptions for billing purposes. Because costs
are first aggregated at the subscription level, you might want to create
subscriptions to manage and track costs based on your needs. For instance, you
might want to create one subscription for your production workloads and another
subscription for your development and testing workloads.
Azure management groups
The final piece is the management group. Resources are gathered into resource groups,
and resource groups are gathered into subscriptions. If you’re just starting in Azure, that
might seem like enough hierarchy to keep things organized. But imagine you’re
dealing with multiple applications and multiple development teams in multiple geographies.
If you have many subscriptions, you might need a way to efficiently manage access,
policies, and compliance for those subscriptions. Azure management groups provide a
level of scope above subscriptions. You organize subscriptions into containers called
management groups and apply governance conditions to the management groups. All
subscriptions within a management group automatically inherit the conditions applied to
the management group, the same way that resource groups inherit settings from
subscriptions and resources inherit from resource groups. Management groups give you
enterprise-grade management at a large scale, no matter what type of subscriptions you
might have. Management groups can be nested.
Management group, subscriptions, and resource group hierarchy
You can build a flexible structure of management groups and subscriptions to organize
your resources into a hierarchy for unified policy and access management. The following
diagram shows an example of creating a hierarchy for governance by using
management groups.
Some examples of how you could use management groups might be:
 Create a hierarchy that applies a policy. You could limit VM locations to the US
West Region in a group called Production. This policy will inherit onto all the
subscriptions that are descendants of that management group and will apply to all
VMs under those subscriptions. This security policy can't be altered by the
resource or subscription owner, which allows for improved governance.
 Provide user access to multiple subscriptions. By moving multiple
subscriptions under a management group, you can create one Azure role-based
access control (Azure RBAC) assignment on the management group. Assigning
Azure RBAC at the management group level means that all sub-management
groups, subscriptions, resource groups, and resources underneath that
management group would also inherit those permissions. One assignment on the
management group can enable users to have access to everything they need
instead of scripting Azure RBAC over different subscriptions.
Important facts about management groups:
 10,000 management groups can be supported in a single directory.
 A management group tree can support up to six levels of depth. This limit doesn't
include the root level or the subscription level.
 Each management group and subscription can support only one parent.
Describe Azure virtual machines
With Azure Virtual Machines (VMs), you can create and use VMs in the cloud. VMs
provide infrastructure as a service (IaaS) in the form of a virtualized server and can be
used in many ways. Just like a physical computer, you can customize all of the
software running on your VM. VMs are an ideal choice when you need:
 Total control over the operating system (OS).
 The ability to run custom software.
 To use custom hosting configurations.
An Azure VM gives you the flexibility of virtualization without having to buy and
maintain the physical hardware that runs the VM. However, as an IaaS offering, you
still need to configure, update, and maintain the software that runs on the VM.
You can even create or use an already created image to rapidly provision VMs. You
can create and provision a VM in minutes when you select a preconfigured VM image.
An image is a template used to create a VM and may already include an OS and other
software, like development tools or web hosting environments.
Scale VMs in Azure
You can run single VMs for testing, development, or minor tasks. Or you can group
VMs together to provide high availability, scalability, and redundancy. Azure can also
manage the grouping of VMs for you with features such as scale sets and availability
sets.
Virtual machine scale sets
Virtual machine scale sets let you create and manage a group of identical, load-
balanced VMs. If you simply created multiple VMs with the same purpose, you’d need
to ensure they were all configured identically and then set up network routing
parameters to ensure efficiency. You’d also have to monitor the utilization to
determine if you need to increase or decrease the number of VMs.
Instead, with virtual machine scale sets, Azure automates most of that work. Scale
sets allow you to centrally manage, configure, and update a large number of VMs in
minutes. The number of VM instances can automatically increase or decrease in
response to demand, or you can set it to scale based on a defined schedule. Virtual
machine scale sets also automatically deploy a load balancer to make sure that your
resources are being used efficiently. With virtual machine scale sets, you can build
large-scale services for areas such as compute, big data, and container workloads.
Virtual machine availability sets
Virtual machine availability sets are another tool to help you build a more resilient,
highly available environment. Availability sets are designed to ensure that VMs
stagger updates and have varied power and network connectivity, preventing you
from losing all your VMs with a single network or power failure.
Availability sets accomplish these objectives by grouping VMs in two ways: update domain
and fault domain.
 Update domain: The update domain groups VMs that can be rebooted at the
same time. This setup allows you to apply updates while knowing that only one
update domain grouping is offline at a time. All of the machines in one update
domain are updated together. An update domain going through the update process is given a 30-
minute window to recover before maintenance on the next update domain starts.
 Fault domain: The fault domain groups your VMs by common power source and
network switch. By default, an availability set splits your VMs across up to three
fault domains. This helps protect against a physical power or networking failure by
having VMs in different fault domains (thus being connected to different power and
networking resources).
Best of all, there’s no additional cost for configuring an availability set. You only pay
for the VM instances you create.
Examples of when to use VMs
Some common examples or use cases for virtual machines include:
 During testing and development. VMs provide a quick and easy way to create
different OS and application configurations. Test and development personnel can
then easily delete the VMs when they no longer need them.
 When running applications in the cloud. The ability to run certain applications
in the public cloud as opposed to creating a traditional infrastructure to run them
can provide substantial economic benefits. For example, an application might need
to handle fluctuations in demand. Shutting down VMs when you don't need them or
quickly starting them up to meet a sudden increase in demand means you pay
only for the resources you use.
 When extending your datacenter to the cloud: An organization can extend
the capabilities of its own on-premises network by creating a virtual network in
Azure and adding VMs to that virtual network. Applications like SharePoint can
then run on an Azure VM instead of running locally. This arrangement makes it
easier or less expensive to deploy than in an on-premises environment.
 During disaster recovery: As with running certain types of applications in the
cloud and extending an on-premises network to the cloud, you can get significant
cost savings by using an IaaS-based approach to disaster recovery. If a primary
datacenter fails, you can create VMs running on Azure to run your critical
applications and then shut them down when the primary datacenter becomes
operational again.
Move to the cloud with VMs
VMs are also an excellent choice when you move from a physical server to the cloud
(also known as lift and shift). You can create an image of the physical server and host
it within a VM with little or no changes. Just like a physical on-premises server, you
must maintain the VM: you’re responsible for maintaining the installed OS and
software.
VM Resources
When you provision a VM, you’ll also have the chance to pick the resources that are
associated with that VM, including:
 Size (purpose, number of processor cores, and amount of RAM)
 Storage disks (hard disk drives, solid state drives, etc.)
 Networking (virtual network, public IP address, and port configuration)
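The hedged sketch below shows where those three choices (size, storage, networking) appear when you create a VM with the Azure SDK for Python (azure-mgmt-compute). The subscription ID, resource group, password, and the pre-created network interface ID are placeholders, and the image and size are just examples.

from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

subscription_id = "<subscription-id>"          # placeholder
compute = ComputeManagementClient(DefaultAzureCredential(), subscription_id)

vm_poller = compute.virtual_machines.begin_create_or_update(
    "my-resource-group",                       # existing resource group (placeholder)
    "demo-vm",
    {
        "location": "eastus",
        # Size: purpose, processor cores, and RAM are bundled into the VM size name.
        "hardware_profile": {"vm_size": "Standard_B2s"},
        # Storage: the OS disk comes from an image and uses a chosen disk type.
        "storage_profile": {
            "image_reference": {
                "publisher": "Canonical",
                "offer": "0001-com-ubuntu-server-jammy",
                "sku": "22_04-lts",
                "version": "latest",
            },
            "os_disk": {
                "create_option": "FromImage",
                "managed_disk": {"storage_account_type": "StandardSSD_LRS"},
            },
        },
        "os_profile": {
            "computer_name": "demo-vm",
            "admin_username": "azureuser",
            "admin_password": "<a-strong-password>",   # placeholder
        },
        # Networking: the VM attaches to a pre-created network interface (NIC).
        "network_profile": {
            "network_interfaces": [{"id": "<existing-nic-resource-id>"}]
        },
    },
)
vm = vm_poller.result()   # blocks until provisioning completes
print(vm.name, vm.hardware_profile.vm_size)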

Describe Azure virtual desktop


Another type of virtual machine is the Azure Virtual Desktop. Azure Virtual Desktop is a
desktop and application virtualization service that runs on the cloud. It enables you to
use a cloud-hosted version of Windows from any location. Azure Virtual Desktop works
across devices and operating systems, and works with apps that you can use to access
remote desktops or most modern browsers.
Enhance security
Azure Virtual Desktop provides centralized security management for users' desktops with
Microsoft Entra ID. You can enable multifactor authentication to secure user sign-ins. You
can also secure access to data by assigning granular role-based access controls (RBACs)
to users.
With Azure Virtual Desktop, the data and apps are separated from the local hardware.
The actual desktop and apps are running in the cloud, meaning the risk of confidential
data being left on a personal device is reduced. Additionally, user sessions are isolated in
both single and multi-session environments.
Multi-session Windows 10 or Windows 11 deployment
Azure Virtual Desktop lets you use Windows 10 or Windows 11 Enterprise multi-session,
the only Windows client-based operating system that enables multiple concurrent users
on a single VM. Azure Virtual Desktop also provides a more consistent experience with
broader application support compared to Windows Server-based operating systems.
Describe Azure containers
While virtual machines are an excellent way to reduce costs versus the investments that
are necessary for physical hardware, they're still limited to a single operating system per
virtual machine. If you want to run multiple instances of an application on a single host
machine, containers are an excellent choice.
What are containers?
Containers are a virtualization environment. Much like running multiple virtual machines
on a single physical host, you can run multiple containers on a single physical or virtual
host. Unlike virtual machines, you don't manage the operating system for a container.
Virtual machines appear to be an instance of an operating system that you can connect
to and manage. Containers are lightweight and designed to be created, scaled out, and
stopped dynamically. It's possible to create and deploy virtual machines as application
demand increases, but containers are a lighter weight, more agile method. Containers
are designed to allow you to respond to changes on demand. With containers, you can
quickly restart if there's a crash or hardware interruption. One of the most popular
container engines is Docker, and Azure supports Docker.
Azure Container Instances
Azure Container Instances offer the fastest and simplest way to run a container in Azure,
without having to manage any virtual machines or adopt any additional services. Azure
Container Instances are a platform as a service (PaaS) offering. Azure Container
Instances allow you to upload your containers and then the service runs the containers
for you.
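A hedged sketch of that flow with the Azure SDK for Python (azure-mgmt-containerinstance), running Microsoft's public aci-helloworld sample image; the subscription ID, resource group, and names are placeholders.

from azure.identity import DefaultAzureCredential
from azure.mgmt.containerinstance import ContainerInstanceManagementClient

subscription_id = "<subscription-id>"  # placeholder
aci = ContainerInstanceManagementClient(DefaultAzureCredential(), subscription_id)

# Run one container with a public IP; Azure provisions and manages the host for you.
poller = aci.container_groups.begin_create_or_update(
    "my-resource-group",
    "hello-aci",
    {
        "location": "eastus",
        "os_type": "Linux",
        "containers": [{
            "name": "hello",
            "image": "mcr.microsoft.com/azuredocs/aci-helloworld",
            "resources": {"requests": {"cpu": 1.0, "memory_in_gb": 1.5}},
            "ports": [{"port": 80}],
        }],
        "ip_address": {"type": "Public", "ports": [{"protocol": "TCP", "port": 80}]},
    },
)
group = poller.result()
print(group.ip_address.ip)   # public IP serving the sample web page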
Azure Container Apps
Azure Container Apps are similar in many ways to a container instance. They allow you
to get up and running right away, they remove the container management piece, and
they're a PaaS offering. Container Apps have extra benefits such as the ability to
incorporate load balancing and scaling. These other functions allow you to be more
elastic in your design.
Azure Kubernetes Service
Azure Kubernetes Service (AKS) is a container orchestration service. An orchestration
service manages the lifecycle of containers. When you're deploying a fleet of containers,
AKS can make fleet management simpler and more efficient.
Use containers in your solutions
Containers are often used to create solutions by using a microservice architecture. This
architecture is where you break solutions into smaller, independent pieces. For example,
you might split a website into a container hosting your front end, another hosting your
back end, and a third for storage. This split allows you to separate portions of your app
into logical sections that can be maintained, scaled, or updated independently.
Imagine your website back-end reaches capacity, but the front end and storage aren't
stressed. With containers, you could scale the back-end separately to improve
performance. If something necessitated such a change, you could also choose to change
the storage service or modify the front end without impacting any of the other
components.
Describe Azure functions
Azure Functions is an event-driven, serverless compute option that doesn’t require
maintaining virtual machines or containers. If you build an app using VMs or containers,
those resources have to be “running” in order for your app to function. With Azure
Functions, an event wakes the function, alleviating the need to keep resources
provisioned when there are no events.
Benefits of Azure Functions
Using Azure Functions is ideal when you're only concerned about the code running your
service and not about the underlying platform or infrastructure. Functions are commonly
used when you need to perform work in response to an event (often via a REST request),
timer, or message from another Azure service, and when that work can be completed
quickly, within seconds or less.
Functions scale automatically based on demand, so they may be a good choice when
demand is variable.
Azure Functions runs your code when it's triggered and automatically deallocates resources
when the function is finished. In this model, Azure only charges you for the CPU time
used while your function runs.
Functions can be either stateless or stateful. When they're stateless (the default), they
behave as if they restart every time they respond to an event. When they're stateful
(called Durable Functions), a context is passed through the function to track prior
activity.
Functions are a key component of serverless computing. They're also a general compute
platform for running any type of code. If the needs of the developer's app change, you
can deploy the project in an environment that isn't serverless. This flexibility allows you
to manage scaling, run on virtual networks, and even completely isolate the functions.
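For a sense of the programming model, here is a minimal HTTP-triggered function using the Python v2 model (azure-functions package); the route name is arbitrary. The platform invokes the function only when a request (the event) arrives.

import azure.functions as func

app = func.FunctionApp(http_auth_level=func.AuthLevel.ANONYMOUS)

@app.route(route="hello")                        # trigger: an HTTP request wakes the function
def hello(req: func.HttpRequest) -> func.HttpResponse:
    name = req.params.get("name", "world")
    return func.HttpResponse(f"Hello, {name}!")  # resources are released when this returns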
Describe application hosting options
If you need to host your application on Azure, you might initially turn to a virtual machine
(VM) or containers. Both VMs and containers provide excellent hosting solutions. VMs
give you maximum control of the hosting environment and allow you to configure it
exactly how you want. VMs also may be the most familiar hosting method if you’re new
to the cloud. Containers, with the ability to isolate and individually manage different
aspects of the hosting solution, can also be a robust and compelling option.
There are other hosting options that you can use with Azure, including Azure App
Service.
Azure App Service
App Service enables you to build and host web apps, background jobs, mobile back-ends,
and RESTful APIs in the programming language of your choice without managing
infrastructure. It offers automatic scaling and high availability. App Service supports
Windows and Linux. It enables automated deployments from GitHub, Azure DevOps, or
any Git repo to support a continuous deployment model.
Azure App Service is a robust hosting option that you can use to host your apps in Azure.
Azure App Service lets you focus on building and maintaining your app, and Azure
focuses on keeping the environment up and running.
Azure App Service is an HTTP-based service for hosting web applications, REST APIs, and
mobile back ends. It supports multiple languages, including .NET, .NET Core, Java, Ruby,
Node.js, PHP, and Python. It also supports both Windows and Linux environments.
Types of app services
With App Service, you can host most common app service styles like:
 Web apps
 API apps
 WebJobs
 Mobile apps
App Service handles most of the infrastructure decisions you deal with in hosting web-
accessible apps:
 Deployment and management are integrated into the platform.
 Endpoints can be secured.
 Sites can be scaled quickly to handle high traffic loads.
 The built-in load balancing and traffic manager provide high availability.
All of these app styles are hosted in the same infrastructure and share these benefits.
This flexibility makes App Service the ideal choice to host web-oriented applications.
Web apps
App Service includes full support for hosting web apps by using ASP.NET, ASP.NET Core,
Java, Ruby, Node.js, PHP, or Python. You can choose either Windows or Linux as the host
operating system.
API apps
Much like hosting a website, you can build REST-based web APIs by using your choice of
language and framework. You get full Swagger support and the ability to package and
publish your API in Azure Marketplace. The produced apps can be consumed from any
HTTP- or HTTPS-based client.
WebJobs
You can use the WebJobs feature to run a program (.exe, Java, PHP, Python, or Node.js)
or script (.cmd, .bat, PowerShell, or Bash) in the same context as a web app, API app, or
mobile app. They can be scheduled or run by a trigger. WebJobs are often used to run
background tasks as part of your application logic.
Mobile apps
Use the Mobile Apps feature of App Service to quickly build a back end for iOS and
Android apps. With just a few actions in the Azure portal, you can:
 Store mobile app data in a cloud-based SQL database.
 Authenticate customers against common social providers, such as MSA, Google, X,
and Facebook.
 Send push notifications.
 Execute custom back-end logic in C# or Node.js.
On the mobile app side, there's SDK support for native iOS and Android, Xamarin, and
React Native apps.
Describe Azure virtual networking
Azure virtual networks and virtual subnets enable Azure resources, such as VMs, web
apps, and databases, to communicate with each other, with users on the internet, and
with your on-premises client computers. You can think of an Azure network as an
extension of your on-premises network with resources that link other Azure resources.
Azure virtual networks provide the following key networking capabilities:
 Isolation and segmentation
 Internet communications
 Communicate between Azure resources
 Communicate with on-premises resources
 Route network traffic
 Filter network traffic
 Connect virtual networks
Azure virtual networking supports both public and private endpoints to enable
communication between external or internal resources with other internal resources.
 Public endpoints have a public IP address and can be accessed from anywhere in
the world.
 Private endpoints exist within a virtual network and have a private IP address from
within the address space of that virtual network.
Isolation and segmentation
Azure virtual network allows you to create multiple isolated virtual networks. When you
set up a virtual network, you define a private IP address space by using either public or
private IP address ranges. The IP range only exists within the virtual network and isn't
internet routable. You can divide that IP address space into subnets and allocate part of
the defined address space to each named subnet.
For name resolution, you can use the name resolution service built into Azure. You also
can configure the virtual network to use either an internal or an external DNS server.
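A hedged sketch of defining that isolation and segmentation with the Azure SDK for Python (azure-mgmt-network); the subscription ID, resource group, names, and address ranges are placeholders.

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

subscription_id = "<subscription-id>"  # placeholder
network = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

poller = network.virtual_networks.begin_create_or_update(
    "my-resource-group",
    "demo-vnet",
    {
        "location": "eastus",
        # Private address space for the whole virtual network.
        "address_space": {"address_prefixes": ["10.0.0.0/16"]},
        # Divide that space into named subnets.
        "subnets": [
            {"name": "frontend", "address_prefix": "10.0.1.0/24"},
            {"name": "backend", "address_prefix": "10.0.2.0/24"},
        ],
    },
)
vnet = poller.result()
print([subnet.name for subnet in vnet.subnets])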
Internet communications
You can enable incoming connections from the internet by assigning a public IP address
to an Azure resource, or putting the resource behind a public load balancer.
Communicate between Azure resources
You want to enable Azure resources to communicate securely with each other. You can
do that in one of two ways:
 Virtual networks can connect not only VMs but other Azure resources, such as the
App Service Environment for Power Apps, Azure Kubernetes Service, and Azure
virtual machine scale sets.
 Service endpoints can connect to other Azure resource types, such as Azure SQL
databases and storage accounts. This approach enables you to link multiple Azure
resources to virtual networks to improve security and provide optimal routing
between resources.
Communicate with on-premises resources
Azure virtual networks enable you to link resources together in your on-premises
environment and within your Azure subscription. In effect, you can create a network that
spans both your local and cloud environments. There are three mechanisms for you to
achieve this connectivity:
 Point-to-site virtual private network connections are from a computer outside your
organization back into your corporate network. In this case, the client computer
initiates an encrypted VPN connection to connect to the Azure virtual network.
 Site-to-site virtual private networks link your on-premises VPN device or gateway
to the Azure VPN gateway in a virtual network. In effect, the devices in Azure can
appear as being on the local network. The connection is encrypted and works over
the internet.
 Azure ExpressRoute provides dedicated private connectivity to Azure that
doesn't travel over the internet. ExpressRoute is useful for environments where
you need greater bandwidth and even higher levels of security.
Route network traffic
By default, Azure routes traffic between subnets on any connected virtual networks, on-
premises networks, and the internet. You also can control routing and override those
settings, as follows:
 Route tables allow you to define rules about how traffic should be directed. You
can create custom route tables that control how packets are routed between
subnets.
 Border Gateway Protocol (BGP) works with Azure VPN gateways, Azure Route
Server, or Azure ExpressRoute to propagate on-premises BGP routes to Azure
virtual networks.
Filter network traffic
Azure virtual networks enable you to filter traffic between subnets by using the following
approaches:
 Network security groups are Azure resources that can contain multiple inbound
and outbound security rules. You can define these rules to allow or block traffic,
based on factors such as source and destination IP address, port, and protocol.
 Network virtual appliances are specialized VMs that can be compared to a
hardened network appliance. A network virtual appliance carries out a particular
network function, such as running a firewall or performing wide area network
(WAN) optimization.
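As one concrete example of traffic filtering, the sketch below creates a network security group with a single inbound rule using the Azure SDK for Python (azure-mgmt-network); the subscription ID, names, priority, and allowed port are placeholders you'd adapt.

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

subscription_id = "<subscription-id>"  # placeholder
network = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

# NSG with one rule: allow inbound HTTPS (TCP 443) from anywhere; default rules still apply.
network.network_security_groups.begin_create_or_update(
    "my-resource-group", "web-nsg",
    {
        "location": "eastus",
        "security_rules": [{
            "name": "allow-https-inbound",
            "priority": 100,              # lower number = evaluated first
            "direction": "Inbound",
            "access": "Allow",
            "protocol": "Tcp",
            "source_address_prefix": "*",
            "source_port_range": "*",
            "destination_address_prefix": "*",
            "destination_port_range": "443",
        }],
    },
).result()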
Connect virtual networks
You can link virtual networks together by using virtual network peering. Peering allows
two virtual networks to connect directly to each other. Network traffic between peered
networks is private, and travels on the Microsoft backbone network, never entering the
public internet. Peering enables resources in each virtual network to communicate with
each other. These virtual networks can be in separate regions. This feature allows you to
create a global interconnected network through Azure.
User-defined routes (UDR) allow you to control the routing tables between subnets within
a virtual network or between virtual networks. This allows for greater control over
network traffic flow.
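A hedged sketch of creating one side of a virtual network peering with the Azure SDK for Python (azure-mgmt-network); the subscription ID and the remote virtual network's resource ID are placeholders, and a matching peering is normally created from the other network as well.

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

subscription_id = "<subscription-id>"  # placeholder
network = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

# Peer "demo-vnet" with a remote virtual network (which can be in another region).
network.virtual_network_peerings.begin_create_or_update(
    "my-resource-group",
    "demo-vnet",                         # the local virtual network
    "demo-vnet-to-remote",               # name of this peering
    {
        "remote_virtual_network": {"id": "<resource-id-of-the-remote-vnet>"},
        "allow_virtual_network_access": True,
        "allow_forwarded_traffic": False,
    },
).result()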
Describe Azure virtual private networks
A virtual private network (VPN) uses an encrypted tunnel within another network. VPNs
are typically deployed to connect two or more trusted private networks to one another
over an untrusted network (typically the public internet). Traffic is encrypted while
traveling over the untrusted network to prevent eavesdropping or other attacks. VPNs
can enable networks to safely and securely share sensitive information.
VPN gateways
A VPN gateway is a type of virtual network gateway. Azure VPN Gateway instances are
deployed in a dedicated subnet of the virtual network and enable the following
connectivity:
 Connect on-premises datacenters to virtual networks through a site-to-site
connection.
 Connect individual devices to virtual networks through a point-to-site connection.
 Connect virtual networks to other virtual networks through a network-to-network
connection.
All data transfer is encrypted inside a private tunnel as it crosses the internet. You can
deploy only one VPN gateway in each virtual network. However, you can use one
gateway to connect to multiple locations, which includes other virtual networks or on-
premises datacenters.
When setting up a VPN gateway, you must specify the type of VPN - either policy-based
or route-based. The primary distinction between these two types is how they determine
which traffic needs encryption. In Azure, regardless of the VPN type, the method of
authentication employed is a preshared key.
 Policy-based VPN gateways specify statically the IP addresses of packets that should
be encrypted through each tunnel. This type of device evaluates every data packet
against those sets of IP addresses to choose the tunnel through which that packet will
be sent.
 In route-based gateways, IPSec tunnels are modeled as a network interface or
virtual tunnel interface. IP routing (either static routes or dynamic routing
protocols) decides which one of these tunnel interfaces to use when sending each
packet. Route-based VPNs are the preferred connection method for on-premises
devices. They're more resilient to topology changes such as the creation of new
subnets.
Use a route-based VPN gateway if you need any of the following types of connectivity:
 Connections between virtual networks
 Point-to-site connections
 Multisite connections
 Coexistence with an Azure ExpressRoute gateway
High-availability scenarios
If you’re configuring a VPN to keep your information safe, you also want to be sure that
it’s a highly available and fault tolerant VPN configuration. There are a few ways to
maximize the resiliency of your VPN gateway.
Active/standby
By default, VPN gateways are deployed as two instances in an active/standby
configuration, even if you only see one VPN gateway resource in Azure. When planned
maintenance or unplanned disruption affects the active instance, the standby instance
automatically assumes responsibility for connections without any user intervention.
Connections are interrupted during this failover, but they typically restore within a few
seconds for planned maintenance and within 90 seconds for unplanned disruptions.
Active/active
With the introduction of support for the BGP routing protocol, you can also deploy VPN
gateways in an active/active configuration. In this configuration, you assign a unique
public IP address to each instance. You then create separate tunnels from the on-
premises device to each IP address. You can extend the high availability by deploying an
additional VPN device on-premises.
ExpressRoute failover
Another high-availability option is to configure a VPN gateway as a secure failover path
for ExpressRoute connections. ExpressRoute circuits have resiliency built in. However,
they aren't immune to physical problems that affect the cables delivering connectivity or
outages that affect the complete ExpressRoute location. In high-availability scenarios,
where there's risk associated with an outage of an ExpressRoute circuit, you can also
provision a VPN gateway that uses the internet as an alternative method of connectivity.
In this way, you can ensure there's always a connection to the virtual networks.
Zone-redundant gateways
In regions that support availability zones, VPN gateways and ExpressRoute gateways can
be deployed in a zone-redundant configuration. This configuration brings resiliency,
scalability, and higher availability to virtual network gateways. Deploying gateways in
Azure availability zones physically and logically separates gateways within a region while
protecting your on-premises network connectivity to Azure from zone-level failures.
These gateways require different gateway stock keeping units (SKUs) and use Standard
public IP addresses instead of Basic public IP addresses.
Describe Azure ExpressRoute
Azure ExpressRoute lets you extend your on-premises networks into the Microsoft cloud
over a private connection, with the help of a connectivity provider. This connection is
called an ExpressRoute Circuit. With ExpressRoute, you can establish connections to
Microsoft cloud services, such as Microsoft Azure and Microsoft 365. This feature allows
you to connect offices, datacenters, or other facilities to the Microsoft cloud. Each
location would have its own ExpressRoute circuit.
Connectivity can be from an any-to-any (IP VPN) network, a point-to-point Ethernet network, or a virtual cross-connection through a connectivity provider at a colocation facility. ExpressRoute connections don't go over the public Internet. This setup allows ExpressRoute connections to offer more reliability, faster speeds, consistent latencies, and higher security than typical connections over the Internet.
Features and benefits of ExpressRoute
There are several benefits to using ExpressRoute as the connection service between
Azure and on-premises networks.
 Connectivity to Microsoft cloud services across all regions in the geopolitical region.
 Global connectivity to Microsoft services across all regions with ExpressRoute Global Reach.
 Dynamic routing between your network and Microsoft via Border Gateway Protocol (BGP).
 Built-in redundancy in every peering location for higher reliability.
Connectivity to Microsoft cloud services
ExpressRoute enables direct access to the following services in all regions:
 Microsoft Office 365
 Microsoft Dynamics 365
 Azure compute services, such as Azure Virtual Machines
 Azure cloud services, such as Azure Cosmos DB and Azure Storage
Global connectivity
You can enable ExpressRoute Global Reach to exchange data across your on-premises
sites by connecting your ExpressRoute circuits. For example, say you had an office in
Asia and a datacenter in Europe, both with ExpressRoute circuits connecting them to the
Microsoft network. You could use ExpressRoute Global Reach to connect those two
facilities, allowing them to communicate without transferring data over the public
internet.
Dynamic routing
ExpressRoute uses BGP. BGP is used to exchange routes between on-premises
networks and resources running in Azure. This protocol enables dynamic routing
between your on-premises network and services running in the Microsoft cloud.
Built-in redundancy
Each connectivity provider uses redundant devices to ensure that connections
established with Microsoft are highly available. You can configure multiple circuits to
complement this feature.
ExpressRoute connectivity models
ExpressRoute supports four models that you can use to connect your on-premises
network to the Microsoft cloud:

 CloudExchange colocation
 Point-to-point Ethernet connection
 Any-to-any connection
 Directly from ExpressRoute sites
Colocation at a cloud exchange
Colocation refers to your datacenter, office, or other facility being physically colocated at
a cloud exchange, such as an ISP. If your facility is colocated at a cloud exchange, you
can request a virtual cross-connect to the Microsoft cloud.
Point-to-point Ethernet connection
Point-to-point Ethernet connection refers to using a point-to-point connection to connect your facility to the Microsoft cloud.
Any-to-any networks
With any-to-any connectivity, you can integrate your wide area network (WAN) with
Azure by providing connections to your offices and datacenters. Azure integrates with
your WAN connection to provide a connection like you would have between your
datacenter and any branch offices.
Directly from ExpressRoute sites
You can connect directly into Microsoft's global network at a peering location strategically distributed across the world. ExpressRoute Direct provides dual 100-Gbps or 10-Gbps connectivity, which supports Active/Active connectivity at scale.
Security considerations
With ExpressRoute, your data doesn't travel over the public internet, reducing the risks
associated with internet communications. ExpressRoute is a private connection from
your on-premises infrastructure to your Azure infrastructure. Even if you have an
ExpressRoute connection, DNS queries, certificate revocation list checking, and Azure
Content Delivery Network requests are still sent over the public internet.
Describe Azure DNS
Azure DNS is a hosting service for DNS domains that provides name resolution by using
Microsoft Azure infrastructure. By hosting your domains in Azure, you can manage your
DNS records using the same credentials, APIs, tools, and billing as your other Azure
services.
Benefits of Azure DNS
Azure DNS uses the scope and scale of Microsoft Azure to provide numerous benefits,
including:
 Reliability and performance
 Security
 Ease of Use
 Customizable virtual networks
 Alias records
Reliability and performance
DNS domains in Azure DNS are hosted on Azure's global network of DNS name servers,
providing resiliency and high availability. Azure DNS uses anycast networking, so the
closest available DNS server answers each DNS query, providing fast performance and
high availability for your domain.
Security
Azure DNS is based on Azure Resource Manager, which provides features such as:
 Azure role-based access control (Azure RBAC) to control who has access to specific
actions for your organization.
 Activity logs to monitor how a user in your organization modified a resource or to
find an error when troubleshooting.
 Resource locking to lock a subscription, resource group, or resource. Locking
prevents other users in your organization from accidentally deleting or modifying
critical resources.
Ease of use
Azure DNS can manage DNS records for your Azure services and provide DNS for your
external resources as well. Azure DNS is integrated in the Azure portal and uses the
same credentials, support contract, and billing as your other Azure services.
Because Azure DNS is running on Azure, it means you can manage your domains and
records with the Azure portal, Azure PowerShell cmdlets, and the cross-platform Azure
CLI. Applications that require automated DNS management can integrate with the
service by using the REST API and SDKs.
Customizable virtual networks with private domains
Azure DNS also supports private DNS domains. This feature allows you to use your own
custom domain names in your private virtual networks, rather than being stuck with the
Azure-provided names.
Alias records
Azure DNS also supports alias record sets. You can use an alias record set to refer to an
Azure resource, such as an Azure public IP address, an Azure Traffic Manager profile, or
an Azure Content Delivery Network (CDN) endpoint. If the IP address of the underlying
resource changes, the alias record set seamlessly updates itself during DNS resolution.
The alias record set points to the service instance, and the service instance is associated
with an IP address.
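As an illustration of record management, the following Python sketch uses azure-mgmt-dns to create an A record set; the zone, resource group, and IP address are placeholders. For an alias record set, you would instead point the record set at a target resource ID (for example, a public IP address) rather than literal A records.

from azure.identity import DefaultAzureCredential
from azure.mgmt.dns import DnsManagementClient

dns_client = DnsManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Create (or update) www.contoso.com pointing at an IPv4 address.
record_set = dns_client.record_sets.create_or_update(
    resource_group_name="rg-dns",
    zone_name="contoso.com",
    relative_record_set_name="www",
    record_type="A",
    parameters={
        "ttl": 3600,
        "arecords": [{"ipv4_address": "203.0.113.10"}],
    },
)
print(record_set.fqdn)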
Important
You can't use Azure DNS to buy a domain name. For an annual fee, you can buy a
domain name by using App Service domains or a third-party domain name registrar.
Once purchased, your domains can be hosted in Azure DNS for record management.
Describe Azure storage accounts
A storage account provides a unique namespace for your Azure Storage data that's
accessible from anywhere in the world over HTTP or HTTPS. Data in this account is
secure, highly available, durable, and massively scalable.
When you create your storage account, you’ll start by picking the storage account type.
The type of account determines the storage services and redundancy options and has an
impact on the use cases. Below is a list of redundancy options that will be covered later
in this module:
 Locally redundant storage (LRS)
 Geo-redundant storage (GRS)
 Read-access geo-redundant storage (RA-GRS)
 Zone-redundant storage (ZRS)
 Geo-zone-redundant storage (GZRS)
 Read-access geo-zone-redundant storage (RA-GZRS)
The storage account types, the services they support, their redundancy options, and typical usage are:
 Standard general-purpose v2: supports Blob Storage (including Data Lake Storage), Queue Storage, Table Storage, and Azure Files. Redundancy options: LRS, GRS, RA-GRS, ZRS, GZRS, RA-GZRS. This is the standard storage account type for blobs, file shares, queues, and tables, and is recommended for most scenarios using Azure Storage. If you want support for network file system (NFS) in Azure Files, use the premium file shares account type.
 Premium block blobs: supports Blob Storage (including Data Lake Storage). Redundancy options: LRS, ZRS. Premium storage account type for block blobs and append blobs. Recommended for scenarios with high transaction rates, or that use smaller objects or require consistently low storage latency.
 Premium file shares: supports Azure Files only. Redundancy options: LRS, ZRS. Premium storage account type for file shares only. Recommended for enterprise or high-performance scale applications. Use this account type if you want a storage account that supports both Server Message Block (SMB) and NFS file shares.
 Premium page blobs: supports page blobs only. Redundancy options: LRS. Premium storage account type for page blobs only.
Storage account endpoints
One of the benefits of using an Azure Storage Account is having a unique namespace in
Azure for your data. In order to do this, every storage account in Azure must have a
unique-in-Azure account name. The combination of the account name and the Azure
Storage service endpoint forms the endpoints for your storage account.
When naming your storage account, keep these rules in mind:
 Storage account names must be between 3 and 24 characters in length and may
contain numbers and lowercase letters only.
 Your storage account name must be unique within Azure. No two storage accounts
can have the same name. This supports the ability to have a unique, accessible
namespace in Azure.
The following table shows the endpoint format for Azure Storage services.
Storage service Endpoint
Blob Storage https://<storage-account-name>.blob.core.windows.net
Data Lake Storage Gen2 https://<storage-account-name>.dfs.core.windows.net
Azure Files https://<storage-account-name>.file.core.windows.net
Queue Storage https://<storage-account-name>.queue.core.windows.net
Table Storage https://<storage-account-name>.table.core.windows.net
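As a small example of how the account name maps to an endpoint, the following Python sketch (using azure-identity and azure-storage-blob) builds the Blob Storage endpoint for a hypothetical account name and lists its containers.

from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

account_name = "mystorageacct"   # placeholder: 3-24 lowercase letters and numbers, unique in Azure
account_url = f"https://{account_name}.blob.core.windows.net"   # Blob Storage endpoint format

blob_service = BlobServiceClient(account_url=account_url, credential=DefaultAzureCredential())

# List the containers in the account to confirm the endpoint and credentials work.
for container in blob_service.list_containers():
    print(container.name)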
Describe Azure storage redundancy
Azure Storage always stores multiple copies of your data so that it's protected from
planned and unplanned events such as transient hardware failures, network or power
outages, and natural disasters. Redundancy ensures that your storage account meets its
availability and durability targets even in the face of failures.
When deciding which redundancy option is best for your scenario, consider the tradeoffs
between lower costs and higher availability. The factors that help determine which
redundancy option you should choose include:
 How your data is replicated in the primary region.
 Whether your data is replicated to a second region that is geographically distant to
the primary region, to protect against regional disasters.
 Whether your application requires read access to the replicated data in the
secondary region if the primary region becomes unavailable.
Redundancy in the primary region
Data in an Azure Storage account is always replicated three times in the primary region.
Azure Storage offers two options for how your data is replicated in the primary region,
locally redundant storage (LRS) and zone-redundant storage (ZRS).
Locally redundant storage
Locally redundant storage (LRS) replicates your data three times within a single data
center in the primary region. LRS provides at least 11 nines of durability
(99.999999999%) of objects over a given year.
LRS is the lowest-cost redundancy option and offers the least durability compared to
other options. LRS protects your data against server rack and drive failures. However, if
a disaster such as fire or flooding occurs within the data center, all replicas of a storage
account using LRS may be lost or unrecoverable. To mitigate this risk, Microsoft
recommends using zone-redundant storage (ZRS), geo-redundant storage (GRS), or geo-
zone-redundant storage (GZRS).
Zone-redundant storage
For Availability Zone-enabled Regions, zone-redundant storage (ZRS) replicates your
Azure Storage data synchronously across three Azure availability zones in the primary
region. ZRS offers durability for Azure Storage data objects of at least 12 nines
(99.9999999999%) over a given year.
With ZRS, your data is still accessible for both read and write operations even if a zone
becomes unavailable. No remounting of Azure file shares from the connected clients is
required. If a zone becomes unavailable, Azure undertakes networking updates, such as
DNS repointing. These updates may affect your application if you access data before the
updates have completed.
Microsoft recommends using ZRS in the primary region for scenarios that require high
availability. ZRS is also recommended for restricting replication of data within a country
or region to meet data governance requirements.
Redundancy in a secondary region
For applications requiring high durability, you can choose to additionally copy the data in
your storage account to a secondary region that is hundreds of miles away from the
primary region. If the data in your storage account is copied to a secondary region, then
your data is durable even in the event of a catastrophic failure that prevents the data in
the primary region from being recovered.
When you create a storage account, you select the primary region for the account. The
paired secondary region is based on Azure Region Pairs, and can't be changed.
Azure Storage offers two options for copying your data to a secondary region: geo-
redundant storage (GRS) and geo-zone-redundant storage (GZRS). GRS is similar to
running LRS in two regions, and GZRS is similar to running ZRS in the primary region and
LRS in the secondary region.
By default, data in the secondary region isn't available for read or write access unless
there's a failover to the secondary region. If the primary region becomes unavailable,
you can choose to fail over to the secondary region. After the failover has completed, the
secondary region becomes the primary region, and you can again read and write data.
Important
Because data is replicated to the secondary region asynchronously, a failure that affects
the primary region may result in data loss if the primary region can't be recovered. The
interval between the most recent writes to the primary region and the last write to the
secondary region is known as the recovery point objective (RPO). The RPO indicates the
point in time to which data can be recovered. Azure Storage typically has an RPO of less
than 15 minutes, although there's currently no SLA on how long it takes to replicate data
to the secondary region.
Geo-redundant storage
GRS copies your data synchronously three times within a single physical location in the
primary region using LRS. It then copies your data asynchronously to a single physical
location in the secondary region (the region pair) using LRS. GRS offers durability for
Azure Storage data objects of at least 16 nines (99.99999999999999%) over a given
year.
Geo-zone-redundant storage
GZRS combines the high availability provided by redundancy across availability zones
with protection from regional outages provided by geo-replication. Data in a GZRS
storage account is copied across three Azure availability zones in the primary region
(similar to ZRS) and is also replicated to a secondary geographic region, using LRS, for
protection from regional disasters. Microsoft recommends using GZRS for applications
requiring maximum consistency, durability, and availability, excellent performance, and
resilience for disaster recovery.
GZRS is designed to provide at least 16 nines (99.99999999999999%) of durability of
objects over a given year.
Read access to data in the secondary region
Geo-redundant storage (with GRS or GZRS) replicates your data to another physical location in the secondary region to protect against regional outages. However, that data is available to be read only if the customer or Microsoft initiates a failover from the primary to the secondary region. If you enable read access to the secondary region, your data is always available, even when the primary region is running optimally. For read access to the secondary region, enable read-access geo-redundant storage (RA-GRS) or read-access geo-zone-redundant storage (RA-GZRS).
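To show where the redundancy option is chosen, here is a minimal Python sketch that creates a general-purpose v2 account with the Standard_GZRS SKU using azure-mgmt-storage; the subscription, resource group, account name, and region are placeholders, and the region must support the SKU you pick.

from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

storage_client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

poller = storage_client.storage_accounts.begin_create(
    resource_group_name="rg-storage",
    account_name="mystorageacct",          # must be globally unique in Azure
    parameters={
        "location": "eastus2",
        "kind": "StorageV2",               # standard general-purpose v2
        "sku": {"name": "Standard_GZRS"},  # ZRS in the primary region plus a geo-replicated copy
    },
)
account = poller.result()
print(account.primary_endpoints.blob)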
Describe Azure storage services
The Azure Storage platform includes the following data services:
 Azure Blobs: A massively scalable object store for text and binary data. Also
includes support for big data analytics through Data Lake Storage Gen2.
 Azure Files: Managed file shares for cloud or on-premises deployments.
 Azure Queues: A messaging store for reliable messaging between application
components.
 Azure Disks: Block-level storage volumes for Azure VMs.
 Azure Tables: NoSQL table option for structured, non-relational data.
Benefits of Azure Storage
Azure Storage services offer the following benefits for application developers and IT
professionals:
 Durable and highly available. Redundancy ensures that your data is safe if
transient hardware failures occur. You can also opt to replicate data across data
centers or geographical regions for additional protection from local catastrophes or
natural disasters. Data replicated in this way remains highly available if an
unexpected outage occurs.
 Secure. All data written to an Azure storage account is encrypted by the service.
Azure Storage provides you with fine-grained control over who has access to your
data.
 Scalable. Azure Storage is designed to be massively scalable to meet the data
storage and performance needs of today's applications.
 Managed. Azure handles hardware maintenance, updates, and critical issues for
you.
 Accessible. Data in Azure Storage is accessible from anywhere in the world over
HTTP or HTTPS. Microsoft provides client libraries for Azure Storage in a variety of
languages, including .NET, Java, Node.js, Python, PHP, Ruby, Go, and others, as
well as a mature REST API. Azure Storage supports scripting in Azure PowerShell or
Azure CLI. And the Azure portal and Azure Storage Explorer offer easy visual
solutions for working with your data.
Azure Blobs
Azure Blob storage is an object storage solution for the cloud. It can store massive
amounts of data, such as text or binary data. Azure Blob storage is unstructured,
meaning that there are no restrictions on the kinds of data it can hold. Blob storage can
manage thousands of simultaneous uploads, massive amounts of video data, constantly
growing log files, and can be reached from anywhere with an internet connection.
Blobs aren't limited to common file formats. A blob could contain gigabytes of binary
data streamed from a scientific instrument, an encrypted message for another
application, or data in a custom format for an app you're developing. One advantage of
blob storage over disk storage is that it doesn't require developers to think about or
manage disks. Data is uploaded as blobs, and Azure takes care of the physical storage
needs.
Blob storage is ideal for:
 Serving images or documents directly to a browser.
 Storing files for distributed access.
 Streaming video and audio.
 Storing data for backup and restore, disaster recovery, and archiving.
 Storing data for analysis by an on-premises or Azure-hosted service.
Accessing blob storage
Objects in blob storage can be accessed from anywhere in the world via HTTP or HTTPS.
Users or client applications can access blobs via URLs, the Azure Storage REST API, Azure
PowerShell, Azure CLI, or an Azure Storage client library. The storage client libraries are
available for multiple languages, including .NET, Java, Node.js, Python, PHP, and Ruby.
Blob storage tiers
Data stored in the cloud can grow at an exponential pace. To manage costs for your
expanding storage needs, it's helpful to organize your data based on attributes like
frequency of access and planned retention period. Data stored in the cloud can be
handled differently based on how it's generated, processed, and accessed over its
lifetime. Some data is actively accessed and modified throughout its lifetime. Some data
is accessed frequently early in its lifetime, with access dropping drastically as the data
ages. Some data remains idle in the cloud and is rarely, if ever, accessed after it's
stored. To accommodate these different access needs, Azure provides several access
tiers, which you can use to balance your storage costs with your access needs.
Azure Storage offers different access tiers for your blob storage, helping you store object
data in the most cost-effective manner. The available access tiers include:
 Hot access tier: Optimized for storing data that is accessed frequently (for
example, images for your website).
 Cool access tier: Optimized for data that is infrequently accessed and stored for
at least 30 days (for example, invoices for your customers).
 Cold access tier: Optimized for storing data that is infrequently accessed and
stored for at least 90 days.
 Archive access tier: Appropriate for data that is rarely accessed and stored for at
least 180 days, with flexible latency requirements (for example, long-term
backups).
The following considerations apply to the different access tiers:
 Hot, cool, and cold access tiers can be set at the account level. The archive access
tier isn't available at the account level.
 Hot, cool, cold, and archive tiers can be set at the blob level, during or after
upload.
 Data in the cool and cold access tiers can tolerate slightly lower availability, but
still requires high durability, retrieval latency, and throughput characteristics
similar to hot data. For cool and cold data, a lower availability service-level
agreement (SLA) and higher access costs compared to hot data are acceptable
trade-offs for lower storage costs.
 Archive storage stores data offline and offers the lowest storage costs, but also the
highest costs to rehydrate and access data.
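As an example of working with tiers, the following Python sketch (azure-storage-blob) moves an existing blob to the cool tier; the account, container, and blob names are placeholders.

from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobClient

blob = BlobClient(
    account_url="https://mystorageacct.blob.core.windows.net",
    container_name="invoices",
    blob_name="2024/inv-0001.pdf",
    credential=DefaultAzureCredential(),
)

# Move an infrequently accessed blob to the cool tier to reduce storage cost.
blob.set_standard_blob_tier("Cool")
print(blob.get_blob_properties().blob_tier)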
Azure Files
Azure File storage offers fully managed file shares in the cloud that are accessible via the
industry standard Server Message Block (SMB) or Network File System (NFS) protocols.
Azure Files file shares can be mounted concurrently by cloud or on-premises
deployments. SMB Azure file shares are accessible from Windows, Linux, and macOS
clients. NFS Azure Files shares are accessible from Linux or macOS clients. Additionally,
SMB Azure file shares can be cached on Windows Servers with Azure File Sync for fast
access near where the data is being used.
Azure Files key benefits:
 Shared access: Azure file shares support the industry standard SMB and NFS
protocols, meaning you can seamlessly replace your on-premises file shares with
Azure file shares without worrying about application compatibility.
 Fully managed: Azure file shares can be created without the need to manage
hardware or an OS. This means you don't have to deal with patching the server OS
with critical security upgrades or replacing faulty hard disks.
 Scripting and tooling: PowerShell cmdlets and Azure CLI can be used to create,
mount, and manage Azure file shares as part of the administration of Azure
applications. You can create and manage Azure file shares using Azure portal and
Azure Storage Explorer.
 Resiliency: Azure Files has been built from the ground up to always be available.
Replacing on-premises file shares with Azure Files means you don't have to wake
up in the middle of the night to deal with local power outages or network issues.
 Familiar programmability: Applications running in Azure can access data in the
share via file system I/O APIs. Developers can therefore use their existing code and
skills to migrate existing applications. In addition to System IO APIs, you can use
Azure Storage Client Libraries or the Azure Storage REST API.
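As a small illustration, the following Python sketch uses the azure-storage-file-share library to upload a local file into an existing Azure file share; the connection string, share name, and paths are placeholders.

from azure.storage.fileshare import ShareFileClient

file_client = ShareFileClient.from_connection_string(
    conn_str="<storage-account-connection-string>",
    share_name="reports",
    file_path="quarterly/q1.csv",
)

# Upload a local file to the file share (the parent directory must already exist).
with open("q1.csv", "rb") as data:
    file_client.upload_file(data)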
Azure Queues
Azure Queue storage is a service for storing large numbers of messages. Once stored,
you can access the messages from anywhere in the world via authenticated calls using
HTTP or HTTPS. A queue can contain as many messages as your storage account has
room for (potentially millions). Each individual message can be up to 64 KB in size.
Queues are commonly used to create a backlog of work to process asynchronously.
Queue storage can be combined with compute functions like Azure Functions to take an action when a message is received. For example, say you want to perform an action after a customer uploads a form to your website. You could have the submit button on the website send a message to Queue storage, and then use Azure Functions to trigger an action once the message is received.
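The form-submission scenario above could look roughly like the following Python sketch using azure-storage-queue; the queue name and connection string are placeholders, and in practice the consumer side would typically run inside an Azure Function with a queue trigger.

from azure.storage.queue import QueueClient

queue = QueueClient.from_connection_string(
    conn_str="<storage-account-connection-string>",
    queue_name="form-submissions",
)

# Producer: enqueue a message (up to 64 KB) when a customer submits the form.
queue.send_message('{"form_id": 12345}')

# Consumer: process and then delete each message.
for message in queue.receive_messages():
    print(message.content)
    queue.delete_message(message)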
Azure Disks
Azure Disk storage, or Azure managed disks, are block-level storage volumes managed
by Azure for use with Azure VMs. Conceptually, they’re the same as a physical disk, but
they’re virtualized – offering greater resiliency and availability than a physical disk. With
managed disks, all you have to do is provision the disk, and Azure will take care of the
rest.
Azure Tables
Azure Table storage stores large amounts of structured data. Azure tables are a NoSQL
datastore that accepts authenticated calls from inside and outside the Azure cloud. This
enables you to use Azure tables to build your hybrid or multicloud solution and have your
data always available. Azure tables are ideal for storing structured, non-relational data.
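As an illustration, the following Python sketch uses the azure-data-tables library to insert and read an entity; the connection string and table name are placeholders, and the table is assumed to already exist.

from azure.data.tables import TableClient

table = TableClient.from_connection_string(
    conn_str="<storage-account-connection-string>",
    table_name="Devices",
)

# Entities are schemaless apart from the required PartitionKey and RowKey.
table.create_entity({
    "PartitionKey": "sensors",
    "RowKey": "device-001",
    "Location": "Building 4",
    "LastReading": 21.7,
})

entity = table.get_entity(partition_key="sensors", row_key="device-001")
print(entity["LastReading"])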
Identify Azure data migration options
Now that you understand the different storage options within Azure, it’s important to also
understand how to get your data and information into Azure. Azure supports both real-
time migration of infrastructure, applications, and data using Azure Migrate as well as
asynchronous migration of data using Azure Data Box.
Azure Migrate
Azure Migrate is a service that helps you migrate from an on-premises environment to
the cloud. Azure Migrate functions as a hub to help you manage the assessment and
migration of your on-premises datacenter to Azure. It provides the following:
 Unified migration platform: A single portal to start, run, and track your
migration to Azure.
 Range of tools: A range of tools for assessment and migration. Azure Migrate
tools include Azure Migrate: Discovery and assessment and Azure Migrate: Server
Migration. Azure Migrate also integrates with other Azure services and tools, and
with independent software vendor (ISV) offerings.
 Assessment and migration: In the Azure Migrate hub, you can assess and
migrate your on-premises infrastructure to Azure.
Integrated tools
In addition to working with tools from ISVs, the Azure Migrate hub also includes the
following tools to help with migration:
 Azure Migrate: Discovery and assessment. Discover and assess on-premises
servers running on VMware, Hyper-V, and physical servers in preparation for
migration to Azure.
 Azure Migrate: Server Migration. Migrate VMware VMs, Hyper-V VMs, physical
servers, other virtualized servers, and public cloud VMs to Azure.
 Data Migration Assistant. Data Migration Assistant is a stand-alone tool to
assess SQL Servers. It helps pinpoint potential problems blocking migration. It
identifies unsupported features, new features that can benefit you after migration,
and the right path for database migration.
 Azure Database Migration Service. Migrate on-premises databases to Azure
VMs running SQL Server, Azure SQL Database, or SQL Managed Instances.
 Azure App Service migration assistant. Azure App Service migration assistant
is a standalone tool to assess on-premises websites for migration to Azure App
Service. Use Migration Assistant to migrate .NET and PHP web apps to Azure.
 Azure Data Box. Use Azure Data Box products to move large amounts of offline
data to Azure.
Azure Data Box
Azure Data Box is a physical migration service that helps transfer large amounts of data
in a quick, inexpensive, and reliable way. The secure data transfer is accelerated by
shipping you a proprietary Data Box storage device that has a maximum usable storage
capacity of 80 terabytes. The Data Box is transported to and from your datacenter via a
regional carrier. A rugged case protects and secures the Data Box from damage during
transit.
You can order the Data Box device via the Azure portal to import or export data from
Azure. Once the device is received, you can quickly set it up using the local web UI and
connect it to your network. Once you’re finished transferring the data (either into or out
of Azure), simply return the Data Box. If you’re transferring data into Azure, the data is
automatically uploaded once Microsoft receives the Data Box back. The entire process is
tracked end-to-end by the Data Box service in the Azure portal.
Use cases
Data Box is ideally suited to transfer data sizes larger than 40 TB in scenarios with no to limited network connectivity. The data movement can be one-time, periodic, or an initial bulk data transfer followed by periodic transfers.
Here are the various scenarios where Data Box can be used to import data to Azure.
 Onetime migration - when a large amount of on-premises data is moved to Azure.
 Moving a media library from offline tapes into Azure to create an online media
library.
 Migrating your VM farm, SQL server, and applications to Azure.
 Moving historical data to Azure for in-depth analysis and reporting using HDInsight.
 Initial bulk transfer - when an initial bulk transfer is done using Data Box (seed)
followed by incremental transfers over the network.
 Periodic uploads - when a large amount of data is generated periodically and needs to be moved to Azure.
Here are the various scenarios where Data Box can be used to export data from Azure.
 Disaster recovery - when a copy of the data from Azure is restored to an on-
premises network. In a typical disaster recovery scenario, a large amount of Azure
data is exported to a Data Box. Microsoft then ships this Data Box, and the data is
restored on your premises in a short time.
 Security requirements - when you need to be able to export data out of Azure due
to government or security requirements.
 Migrate back to on-premises or to another cloud service provider - when you want
to move all the data back to on-premises, or to another cloud service provider,
export data via Data Box to migrate the workloads.
Once the data from your import order is uploaded to Azure, the disks on the device are
wiped clean in accordance with NIST 800-88r1 standards. For an export order, the disks
are erased once the device reaches the Azure datacenter.
Identify Azure file movement options
In addition to large scale migration using services like Azure Migrate and Azure Data Box,
Azure also has tools designed to help you move or interact with individual files or small
file groups. Among those tools are AzCopy, Azure Storage Explorer, and Azure File Sync.
AzCopy
AzCopy is a command-line utility that you can use to copy blobs or files to or from your
storage account. With AzCopy, you can upload files, download files, copy files between
storage accounts, and even synchronize files. AzCopy can even be configured to work
with other cloud providers to help move files back and forth between clouds.
Important
Synchronizing blobs or files with AzCopy is one-direction synchronization. When you synchronize, you designate the source and destination, and AzCopy copies files or blobs in that direction. It doesn't synchronize bi-directionally based on timestamps or other metadata.
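For example, a one-direction sync from a local folder to a blob container might be scripted like the following Python sketch, which simply shells out to the AzCopy CLI; it assumes AzCopy is installed and already authenticated (for example via azcopy login or a SAS URL), and the paths are placeholders.

import subprocess

source = "/data/reports"                                             # local folder (placeholder)
destination = "https://mystorageacct.blob.core.windows.net/reports"  # blob container URL (placeholder)

# "azcopy sync" copies changes in one direction only: from source to destination.
subprocess.run(["azcopy", "sync", source, destination], check=True)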
Azure Storage Explorer
Azure Storage Explorer is a standalone app that provides a graphical interface to
manage files and blobs in your Azure Storage Account. It works on Windows, macOS, and
Linux operating systems and uses AzCopy on the backend to perform all of the file and
blob management tasks. With Storage Explorer, you can upload to Azure, download from
Azure, or move between storage accounts.
Azure File Sync
Azure File Sync is a tool that lets you centralize your file shares in Azure Files and keep
the flexibility, performance, and compatibility of a Windows file server. It’s almost like
turning your Windows file server into a miniature content delivery network. Once you
install Azure File Sync on your local Windows server, it will automatically stay bi-
directionally synced with your files in Azure.
With Azure File Sync, you can:
 Use any protocol that's available on Windows Server to access your data locally,
including SMB, NFS, and FTPS.
 Have as many caches as you need across the world.
 Replace a failed local server by installing Azure File Sync on a new server in the
same datacenter.
 Configure cloud tiering so the most frequently accessed files are replicated locally,
while infrequently accessed files are kept in the cloud until requested.
Describe Azure directory services
Microsoft Entra ID is a directory service that enables you to sign in and access both
Microsoft cloud applications and cloud applications that you develop. Microsoft Entra ID
can also help you maintain your on-premises Active Directory deployment.
For on-premises environments, Active Directory running on Windows Server provides an
identity and access management service that's managed by your organization. Microsoft
Entra ID is Microsoft's cloud-based identity and access management service. With
Microsoft Entra ID, you control the identity accounts, but Microsoft ensures that the
service is available globally. If you've worked with Active Directory, Microsoft Entra ID
will be familiar to you.
When you secure identities on-premises with Active Directory, Microsoft doesn't monitor
sign-in attempts. When you connect Active Directory with Microsoft Entra ID, Microsoft
can help protect you by detecting suspicious sign-in attempts at no extra cost. For
example, Microsoft Entra ID can detect sign-in attempts from unexpected locations or
unknown devices.
Who uses Microsoft Entra ID?
Microsoft Entra ID is for:
 IT administrators. Administrators can use Microsoft Entra ID to control access to
applications and resources based on their business requirements.
 App developers. Developers can use Microsoft Entra ID to provide a standards-
based approach for adding functionality to applications that they build, such as
adding SSO functionality to an app or enabling an app to work with a user's
existing credentials.
 Users. Users can manage their identities and take maintenance actions like self-
service password reset.
 Online service subscribers. Microsoft 365, Microsoft Office 365, Azure, and
Microsoft Dynamics CRM Online subscribers are already using Microsoft Entra ID to
authenticate into their account.
What does Microsoft Entra ID do?
Microsoft Entra ID provides services such as:
 Authentication: This includes verifying identity to access applications and
resources. It also includes providing functionality such as self-service password
reset, multifactor authentication, a custom list of banned passwords, and smart
lockout services.
 Single sign-on: Single sign-on (SSO) enables you to remember only one
username and one password to access multiple applications. A single identity is
tied to a user, which simplifies the security model. As users change roles or leave
an organization, access modifications are tied to that identity, which greatly
reduces the effort needed to change or disable accounts.
 Application management: You can manage your cloud and on-premises apps by
using Microsoft Entra ID. Features like Application Proxy, SaaS apps, the My Apps
portal, and single sign-on provide a better user experience.
 Device management: Along with accounts for individual people, Microsoft Entra
ID supports the registration of devices. Registration enables devices to be
managed through tools like Microsoft Intune. It also allows for device-based
Conditional Access policies to restrict access attempts to only those coming from
known devices, regardless of the requesting user account.
Can I connect my on-premises AD with Microsoft Entra ID?
If you had an on-premises environment running Active Directory and a cloud deployment
using Microsoft Entra ID, you would need to maintain two identity sets. However, you can
connect Active Directory with Microsoft Entra ID, enabling a consistent identity
experience between cloud and on-premises.
One method of connecting Microsoft Entra ID with your on-premises AD is using Microsoft
Entra Connect. Microsoft Entra Connect synchronizes user identities between on-
premises Active Directory and Microsoft Entra ID. Microsoft Entra Connect synchronizes
changes between both identity systems, so you can use features like SSO, multifactor
authentication, and self-service password reset under both systems.
What is Microsoft Entra Domain Services?
Microsoft Entra Domain Services is a service that provides managed domain services
such as domain join, group policy, lightweight directory access protocol (LDAP), and
Kerberos/NTLM authentication. Just like Microsoft Entra ID lets you use directory services
without having to maintain the infrastructure supporting it, with Microsoft Entra Domain
Services, you get the benefit of domain services without the need to deploy, manage,
and patch domain controllers (DCs) in the cloud.
A Microsoft Entra Domain Services managed domain lets you run legacy applications in
the cloud that can't use modern authentication methods, or where you don't want
directory lookups to always go back to an on-premises AD DS environment. You can lift
and shift those legacy applications from your on-premises environment into a managed
domain, without needing to manage the AD DS environment in the cloud.
Microsoft Entra Domain Services integrates with your existing Microsoft Entra tenant.
This integration lets users sign into services and applications connected to the managed
domain using their existing credentials. You can also use existing groups and user
accounts to secure access to resources. These features provide a smoother lift-and-shift
of on-premises resources to Azure.
How does Microsoft Entra Domain Services work?
When you create a Microsoft Entra Domain Services managed domain, you define a
unique namespace. This namespace is the domain name. Two Windows Server domain
controllers are then deployed into your selected Azure region. This deployment of DCs is
known as a replica set.
You don't need to manage, configure, or update these DCs. The Azure platform handles
the DCs as part of the managed domain, including backups and encryption at rest using
Azure Disk Encryption.
Is information synchronized?
A managed domain is configured to perform a one-way synchronization from Microsoft
Entra ID to Microsoft Entra Domain Services. You can create resources directly in the
managed domain, but they aren't synchronized back to Microsoft Entra ID. In a hybrid
environment with an on-premises AD DS environment, Microsoft Entra Connect
synchronizes identity information with Microsoft Entra ID, which is then synchronized to
the managed domain.
Applications, services, and VMs in Azure that connect to the managed domain can then
use common Microsoft Entra Domain Services features such as domain join, group policy,
LDAP, and Kerberos/NTLM authentication.
Describe Azure authentication methods
Authentication is the process of establishing the identity of a person, service, or device.
It requires the person, service, or device to provide some type of credential to prove who
they are. Authentication is like presenting ID when you’re traveling. It doesn’t confirm
that you’re ticketed, it just proves that you're who you say you are. Azure supports
multiple authentication methods, including standard passwords, single sign-on (SSO),
multifactor authentication (MFA), and passwordless.
For the longest time, security and convenience seemed to be at odds with each other.
Thankfully, new authentication solutions provide both security and convenience.
When you compare security level with convenience, passwordless authentication is both high security and high convenience, while passwords on their own are low security but high convenience.
What's single sign-on?
Single sign-on (SSO) enables a user to sign in one time and use that credential to access
multiple resources and applications from different providers. For SSO to work, the
different applications and providers must trust the initial authenticator.
More identities mean more passwords to remember and change. Password policies can
vary among applications. As complexity requirements increase, it becomes increasingly
difficult for users to remember them. The more passwords a user has to manage, the
greater the risk of a credential-related security incident.
Consider the process of managing all those identities. More strain is placed on help desks
as they deal with account lockouts and password reset requests. If a user leaves an
organization, tracking down all those identities and ensuring they're disabled can be
challenging. If an identity is overlooked, this might allow access when it should have
been eliminated.
With SSO, you need to remember only one ID and one password. Access across
applications is granted to a single identity that's tied to the user, which simplifies the
security model. As users change roles or leave an organization, access is tied to a single
identity. This change greatly reduces the effort needed to change or disable accounts.
Using SSO for accounts makes it easier for users to manage their identities and for IT to
manage users.
Important
Single sign-on is only as secure as the initial authenticator because the subsequent
connections are all based on the security of the initial authenticator.
What’s multifactor authentication?
Multifactor authentication is the process of prompting a user for an extra form (or factor)
of identification during the sign-in process. MFA helps protect against a password
compromise in situations where the password was compromised but the second factor
wasn't.
Think about how you sign into websites, email, or online services. After entering your
username and password, have you ever needed to enter a code that was sent to your
phone? If so, you've used multifactor authentication to sign in.
Multifactor authentication provides additional security for your identities by requiring two
or more elements to fully authenticate. These elements fall into three categories:
 Something the user knows – this might be a challenge question.
 Something the user has – this might be a code that's sent to the user's mobile
phone.
 Something the user is – this is typically some sort of biometric property, such as a
fingerprint or face scan.
Multifactor authentication increases identity security by limiting the impact of credential
exposure (for example, stolen usernames and passwords). With multifactor
authentication enabled, an attacker who has a user's password would also need to have
possession of their phone or their fingerprint to fully authenticate.
Compare multifactor authentication with single-factor authentication. Under single-factor
authentication, an attacker would need only a username and password to authenticate.
Multifactor authentication should be enabled wherever possible because it adds
enormous benefits to security.
What's Microsoft Entra multifactor authentication?
Microsoft Entra multifactor authentication is a Microsoft service that provides multifactor
authentication capabilities. Microsoft Entra multifactor authentication enables users to
choose an additional form of authentication during sign-in, such as a phone call or
mobile app notification.
What’s passwordless authentication?
Features like MFA are a great way to secure your organization, but users often get
frustrated with the additional security layer on top of having to remember their
passwords. People are more likely to comply when it's easy and convenient to do so.
Passwordless authentication methods are more convenient because the password is
removed and replaced with something you have, plus something you are, or something
you know.
Passwordless authentication needs to be set up on a device before it can work. For
example, your computer is something you have. Once it’s been registered or enrolled,
Azure now knows that it’s associated with you. Now that the computer is known, once
you provide something you know or are (such as a PIN or fingerprint), you can be
authenticated without using a password.
Each organization has different needs when it comes to authentication. Microsoft global
Azure and Azure Government offer the following three passwordless authentication
options that integrate with Microsoft Entra ID:
 Windows Hello for Business
 Microsoft Authenticator app
 FIDO2 security keys
Windows Hello for Business
Windows Hello for Business is ideal for information workers that have their own
designated Windows PC. The biometric and PIN credentials are directly tied to the user's
PC, which prevents access from anyone other than the owner. With public key
infrastructure (PKI) integration and built-in support for single sign-on (SSO), Windows
Hello for Business provides a convenient method for seamlessly accessing corporate
resources on-premises and in the cloud.
Microsoft Authenticator App
You can also allow your employee's phone to become a passwordless authentication
method. You may already be using the Microsoft Authenticator App as a convenient
multifactor authentication option in addition to a password. You can also use the
Authenticator App as a passwordless option.
The Authenticator App turns any iOS or Android phone into a strong, passwordless
credential. Users can sign in to any platform or browser by getting a notification to their
phone, matching a number displayed on the screen to the one on their phone, and then
using their biometric (touch or face) or PIN to confirm. Refer to Download and install the
Microsoft Authenticator app for installation details.
FIDO2 security keys
The FIDO (Fast IDentity Online) Alliance helps to promote open authentication standards
and reduce the use of passwords as a form of authentication. FIDO2 is the latest
standard that incorporates the web authentication (WebAuthn) standard.
FIDO2 security keys are an unphishable standards-based passwordless authentication
method that can come in any form factor. Fast Identity Online (FIDO) is an open standard
for passwordless authentication. FIDO allows users and organizations to leverage the
standard to sign in to their resources without a username or password by using an
external security key or a platform key built into a device.
Users can register and then select a FIDO2 security key at the sign-in interface as their
main means of authentication. These FIDO2 security keys are typically USB devices, but
could also use Bluetooth or NFC. With a hardware device that handles the authentication,
the security of an account is increased as there's no password that could be exposed or
guessed.
Describe Azure external identities
An external identity is a person, device, service, etc. that is outside your organization.
Microsoft Entra External ID refers to all the ways you can securely interact with users
outside of your organization. If you want to collaborate with partners, distributors,
suppliers, or vendors, you can share your resources and define how your internal users
can access external organizations. If you're a developer creating consumer-facing apps,
you can manage your customers' identity experiences.
External identities may sound similar to single sign-on. With External Identities, external
users can "bring their own identities." Whether they have a corporate or government-
issued digital identity, or an unmanaged social identity like Google or Facebook, they can
use their own credentials to sign in. The external user’s identity provider manages their
identity, and you manage access to your apps with Microsoft Entra ID or Azure AD B2C to
keep your resources protected.
The following capabilities make up External Identities:
 Business to business (B2B) collaboration - Collaborate with external users by
letting them use their preferred identity to sign in to your Microsoft applications or
other enterprise applications (SaaS apps, custom-developed apps, etc.). B2B
collaboration users are represented in your directory, typically as guest users.
 B2B direct connect - Establish a mutual, two-way trust with another Microsoft
Entra organization for seamless collaboration. B2B direct connect currently
supports Teams shared channels, enabling external users to access your resources
from within their home instances of Teams. B2B direct connect users aren't
represented in your directory, but they're visible from within the Teams shared
channel and can be monitored in Teams admin center reports.
 Microsoft Azure Active Directory business to customer (B2C) - Publish
modern SaaS apps or custom-developed apps (excluding Microsoft apps) to
consumers and customers, while using Azure AD B2C for identity and access
management.
Depending on how you want to interact with external organizations and the types of
resources you need to share, you can use a combination of these capabilities.
With Microsoft Entra ID, you can easily enable collaboration across organizational
boundaries by using the Microsoft Entra B2B feature. Guest users from other tenants can
be invited by administrators or by other users. This capability also applies to social
identities such as Microsoft accounts.
You also can easily ensure that guest users have appropriate access. You can ask the
guests themselves or a decision maker to participate in an access review and recertify
(or attest) to the guests' access. The reviewers can give their input on each user's need
for continued access, based on suggestions from Microsoft Entra ID. When an access
review is finished, you can then make changes and remove access for guests who no
longer need it.
Describe Azure conditional access
Conditional Access is a tool that Microsoft Entra ID uses to allow (or deny) access to
resources based on identity signals. These signals include who the user is, where the
user is, and what device the user is requesting access from.
Conditional Access helps IT administrators:
 Empower users to be productive wherever and whenever.
 Protect the organization's assets.
Conditional Access also provides a more granular multifactor authentication experience
for users. For example, a user might not be challenged for a second authentication factor if
they're at a known location. However, they might be challenged for a second
authentication factor if their sign-in signals are unusual or they're at an unexpected
location.
During sign-in, Conditional Access collects signals from the user, makes decisions based
on those signals, and then enforces that decision by allowing or denying the access
request or challenging for a multifactor authentication response.
The following diagram illustrates this flow:
Here, the signal might be the user's location, the user's device, or the application that
the user is trying to access.
Based on these signals, the decision might be to allow full access if the user is signing in
from their usual location. If the user is signing in from an unusual location or a location
that's marked as high risk, then access might be blocked entirely or possibly granted
after the user provides a second form of authentication.
Enforcement is the action that carries out the decision. For example, the action is to
allow access or require the user to provide a second form of authentication.
When can I use Conditional Access?
Conditional Access is useful when you need to:
 Require multifactor authentication (MFA) to access an application depending on
the requester’s role, location, or network. For example, you could require MFA for
administrators but not regular users or for people connecting from outside your
corporate network.
 Require access to services only through approved client applications. For example,
you could limit which email applications are able to connect to your email service.
 Require users to access your application only from managed devices. A managed
device is a device that meets your standards for security and compliance.
 Block access from untrusted sources, such as access from unknown or unexpected
locations.
Describe Azure role-based access control
When you have multiple IT and engineering teams, how can you control what access
they have to the resources in your cloud environment? The principle of least privilege
says you should only grant access up to the level needed to complete a task. If you only
need read access to a storage blob, then you should only be granted read access to that
storage blob. Write access to that blob shouldn’t be granted, nor should read access to
other storage blobs. It’s a good security practice to follow.
However, managing that level of permissions for an entire team would become tedious.
Instead of defining the detailed access requirements for each individual, and then
updating access requirements when new resources are created or new people join the
team, Azure enables you to control access through Azure role-based access control
(Azure RBAC).
Azure provides built-in roles that describe common access rules for cloud resources. You
can also define your own roles. Each role has an associated set of access permissions
that relate to that role. When you assign individuals or groups to one or more roles, they
receive all the associated access permissions.
So, if you hire a new engineer and add them to the Azure RBAC group for engineers, they
automatically get the same access as the other engineers in the same Azure RBAC
group. Similarly, if you add additional resources and point Azure RBAC at them, everyone
in that Azure RBAC group will now have those permissions on the new resources as well
as the existing resources.
How is role-based access control applied to resources?
Role-based access control is applied to a scope, which is a resource or set of resources
that this access applies to.
The following diagram shows the relationship between roles and scopes. A management
group, subscription, or resource group might be given the role of owner, so they have
increased control and authority. An observer, who isn't expected to make any updates,
might be given a role of Reader for the same scope, enabling them to review or observe
the management group, subscription, or resource group.
Scopes include:
 A management group (a collection of multiple subscriptions).
 A single subscription.
 A resource group.
 A single resource.
Observers, users managing resources, admins, and automated processes illustrate the
kinds of users or accounts that would typically be assigned each of the various roles.
Azure RBAC is hierarchical, in that when you grant access at a parent scope, those
permissions are inherited by all child scopes. For example:
 When you assign the Owner role to a user at the management group scope, that
user can manage everything in all subscriptions within the management group.
 When you assign the Reader role to a group at the subscription scope, the
members of that group can view every resource group and resource within the
subscription.
How is Azure RBAC enforced?
Azure RBAC is enforced on any action that's initiated against an Azure resource that
passes through Azure Resource Manager. Resource Manager is a management service
that provides a way to organize and secure your cloud resources.
You typically access Resource Manager from the Azure portal, Azure Cloud Shell, Azure
PowerShell, and the Azure CLI. Azure RBAC doesn't enforce access permissions at the
application or data level. Application security must be handled by your application.
Azure RBAC uses an allow model. When you're assigned a role, Azure RBAC allows you to
perform actions within the scope of that role. If one role assignment grants you read
permissions to a resource group and a different role assignment grants you write
permissions to the same resource group, you have both read and write permissions on
that resource group.
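As an illustration of how a role assignment comes together in practice, here's a minimal Azure CLI sketch that assigns the built-in Reader role at a resource group scope and then lists the assignments at that scope. The subscription ID, resource group name, and user are hypothetical placeholders.

# Assign the built-in Reader role at a resource group scope (placeholder names).
az role assignment create \
  --assignee "jane@contoso.com" \
  --role "Reader" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/rg-storage-demo"

# Review the role assignments that apply at that scope.
az role assignment list \
  --scope "/subscriptions/<subscription-id>/resourceGroups/rg-storage-demo"

Because Azure RBAC is hierarchical, assigning the same role at the subscription scope instead would grant read access to every resource group and resource in that subscription.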
Describe Zero Trust model
Zero Trust is a security model that assumes the worst case scenario and protects
resources with that expectation. Zero Trust assumes breach at the outset, and then
verifies each request as though it originated from an uncontrolled network.
Today, organizations need a new security model that effectively adapts to the
complexity of the modern environment; embraces the mobile workforce; and protects
people, devices, applications, and data wherever they're located.
To address this new world of computing, Microsoft highly recommends the Zero Trust
security model, which is based on these guiding principles:
 Verify explicitly - Always authenticate and authorize based on all available data
points.
 Use least privilege access - Limit user access with Just-In-Time and Just-Enough-
Access (JIT/JEA), risk-based adaptive policies, and data protection.
 Assume breach - Minimize blast radius and segment access. Verify end-to-end
encryption. Use analytics to get visibility, drive threat detection, and improve
defenses.
Adjusting to Zero Trust
Traditionally, corporate networks were restricted, protected, and generally assumed
safe. Only managed computers could join the network, VPN access was tightly
controlled, and personal devices were frequently restricted or blocked.
The Zero Trust model flips that scenario. Instead of assuming that a device is safe because it’s within the corporate network, it requires everyone to authenticate, and then grants access based on that authentication rather than on location.
Describe defense-in-depth
The objective of defense-in-depth is to protect information and prevent it from being
stolen by those who aren't authorized to access it.
A defense-in-depth strategy uses a series of mechanisms to slow the advance of an
attack that aims at acquiring unauthorized access to data.
Layers of defense-in-depth
You can visualize defense-in-depth as a set of layers, with the data to be secured at the
center and all the other layers functioning to protect that central data layer.
Each layer provides protection so that if one layer is breached, a subsequent layer is
already in place to prevent further exposure. This approach removes reliance on any
single layer of protection. It slows down an attack and provides alert information that
security teams can act upon, either automatically or manually.
Here's a brief overview of the role of each layer:
 The physical security layer is the first line of defense to protect computing
hardware in the datacenter.
 The identity and access layer controls access to infrastructure and change control.
 The perimeter layer uses distributed denial of service (DDoS) protection to filter
large-scale attacks before they can cause a denial of service for users.
 The network layer limits communication between resources through segmentation
and access controls.
 The compute layer secures access to virtual machines.
 The application layer helps ensure that applications are secure and free of security
vulnerabilities.
 The data layer controls access to business and customer data that you need to
protect.
Together, these layers provide a guideline to help you make security configuration decisions at every level of your applications.
Azure provides security tools and features at every level of the defense-in-depth
concept. Let's take a closer look at each layer:
Physical security
Physically securing access to buildings and controlling access to computing hardware
within the datacenter are the first line of defense.
With physical security, the intent is to provide physical safeguards against access to
assets. These safeguards ensure that other layers can't be bypassed, and loss or theft is
handled appropriately. Microsoft uses various physical security mechanisms in its cloud
datacenters.
Identity and access
The identity and access layer is all about ensuring that identities are secure, that access
is granted only to what's needed, and that sign-in events and changes are logged.
At this layer, it's important to:
 Control access to infrastructure and change control.
 Use single sign-on (SSO) and multifactor authentication.
 Audit events and changes.
Perimeter
The network perimeter protects from network-based attacks against your resources.
Identifying these attacks, eliminating their impact, and alerting you when they happen
are important ways to keep your network secure.
At this layer, it's important to:
 Use DDoS protection to filter large-scale attacks before they can affect the
availability of a system for users.
 Use perimeter firewalls to identify and alert on malicious attacks against your
network.
Network
At this layer, the focus is on limiting the network connectivity across all your resources to
allow only what's required. By limiting this communication, you reduce the risk of an
attack spreading to other systems in your network.
At this layer, it's important to:
 Limit communication between resources.
 Deny by default.
 Restrict inbound internet access and limit outbound access where appropriate.
 Implement secure connectivity to on-premises networks.
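As a minimal sketch of the deny-by-default approach, the following Azure CLI commands create a network security group and add a single inbound allow rule; traffic that doesn't match an explicit allow rule is blocked by the NSG's built-in deny rules. The resource group, NSG name, region, and address range are hypothetical placeholders.

# Create a network security group (placeholder names and region).
az network nsg create \
  --resource-group rg-network-demo \
  --name nsg-app-subnet \
  --location eastus

# Allow HTTPS only from a trusted address range; anything not explicitly
# allowed inbound is denied by the NSG's default rules.
az network nsg rule create \
  --resource-group rg-network-demo \
  --nsg-name nsg-app-subnet \
  --name AllowHttpsInbound \
  --priority 100 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes 203.0.113.0/24 \
  --destination-port-ranges 443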
Compute
Malware, unpatched systems, and improperly secured systems open your environment
to attacks. The focus in this layer is on making sure that your compute resources are
secure and that you have the proper controls in place to minimize security issues.
At this layer, it's important to:
 Secure access to virtual machines.
 Implement endpoint protection on devices and keep systems patched and current.
Application
Integrating security into the application development lifecycle helps reduce the number
of vulnerabilities introduced in code. Every development team should ensure that its
applications are secure by default.
At this layer, it's important to:
 Ensure that applications are secure and free of vulnerabilities.
 Store sensitive application secrets in a secure storage medium.
 Make security a design requirement for all application development.
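One common way to keep application secrets out of code is to store them in Azure Key Vault and let the application read them at runtime with an identity that's been granted access. The following Azure CLI sketch creates a vault and stores a secret; the resource group, vault name, region, secret name, and value are hypothetical placeholders.

# Create a key vault (placeholder names and region).
az keyvault create \
  --resource-group rg-app-demo \
  --name kv-contoso-demo-001 \
  --location eastus

# Store an application secret in the vault instead of in code or config files.
az keyvault secret set \
  --vault-name kv-contoso-demo-001 \
  --name "DbConnectionString" \
  --value "<connection-string>"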
Data
Those who store and control access to data are responsible for ensuring that it's properly
secured. Often, regulatory requirements dictate the controls and processes that must be
in place to ensure the confidentiality, integrity, and availability of the data.
In almost all cases, attackers are after data:
 Stored in a database.
 Stored on disk inside virtual machines.
 Stored in software as a service (SaaS) applications, such as Office 365.
 Managed through cloud storage.
Describe Microsoft Defender for Cloud
Defender for Cloud is a monitoring tool for security posture management and threat
protection. It monitors your cloud, on-premises, hybrid, and multicloud environments to
provide guidance and notifications aimed at strengthening your security posture.
Defender for Cloud provides the tools needed to harden your resources, track your security posture, protect against cyberattacks, and streamline security management. Deployment of Defender for Cloud is easy because it's natively integrated with Azure.
Protection everywhere you’re deployed
Because Defender for Cloud is an Azure-native service, many Azure services are
monitored and protected without needing any deployment. However, if you also have an
on-premises datacenter or are also operating in another cloud environment, monitoring
of Azure services may not give you a complete picture of your security situation.
When necessary, Defender for Cloud can automatically deploy a Log Analytics agent to
gather security-related data. For Azure machines, deployment is handled directly. For
hybrid and multicloud environments, Microsoft Defender plans are extended to non-
Azure machines with the help of Azure Arc. Cloud security posture management (CSPM)
features are extended to multicloud machines without the need for any agents.
Azure-native protections
Defender for Cloud helps you detect threats across:
 Azure PaaS services – Detect threats targeting Azure services including Azure App
Service, Azure SQL, Azure Storage Account, and more data services. You can also
perform anomaly detection on your Azure activity logs using the native integration
with Microsoft Defender for Cloud Apps (formerly known as Microsoft Cloud App
Security).
 Azure data services – Defender for Cloud includes capabilities that help you
automatically classify your data in Azure SQL. You can also get assessments for
potential vulnerabilities across Azure SQL and Storage services, and
recommendations for how to mitigate them.
 Networks – Defender for Cloud helps you limit exposure to brute force attacks. By
reducing access to virtual machine ports, using the just-in-time VM access, you can
harden your network by preventing unnecessary access. You can set secure access
policies on selected ports, for only authorized users, allowed source IP address
ranges or IP addresses, and for a limited amount of time.
Defend your hybrid resources
In addition to defending your Azure environment, you can add Defender for Cloud
capabilities to your hybrid cloud environment to protect your non-Azure servers. To help
you focus on what matters the most, you'll get customized threat intelligence and
prioritized alerts according to your specific environment.
To extend protection to on-premises machines, deploy Azure Arc and enable Defender
for Cloud's enhanced security features.
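Defender plans are enabled per subscription. As a rough sketch (plan names and tiers can vary over time, so treat this as illustrative), the following Azure CLI commands turn on the Defender for Servers plan and then list which plans are enabled:

# Enable the Defender for Servers plan on the current subscription.
az security pricing create --name VirtualMachines --tier Standard

# Review which Defender plans are enabled and at what tier.
az security pricing list --output table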
Defend resources running on other clouds
Defender for Cloud can also protect resources in other clouds (such as AWS and GCP).
For example, if you've connected an Amazon Web Services (AWS) account to an Azure
subscription, you can enable any of these protections:
 Defender for Cloud's CSPM features extend to your AWS resources. This agentless
plan assesses your AWS resources according to AWS-specific security
recommendations, and includes the results in the secure score. The resources will
also be assessed for compliance with built-in standards specific to AWS (AWS CIS,
AWS PCI DSS, and AWS Foundational Security Best Practices). Defender for Cloud's
asset inventory page is a multicloud enabled feature helping you manage your
AWS resources alongside your Azure resources.
 Microsoft Defender for Containers extends its container threat detection and
advanced defenses to your Amazon EKS Linux clusters.
 Microsoft Defender for Servers brings threat detection and advanced defenses to
your Windows and Linux EC2 instances.
Assess, Secure, and Defend
Defender for Cloud fills three vital needs as you manage the security of your resources
and workloads in the cloud and on-premises:
 Continuously assess – Know your security posture. Identify and track
vulnerabilities.
 Secure – Harden resources and services with Azure Security Benchmark.
 Defend – Detect and resolve threats to resources, workloads, and services.
Continuously assess
Defender for Cloud helps you continuously assess your environment. Defender for Cloud
includes vulnerability assessment solutions for your virtual machines, container
registries, and SQL servers.
Microsoft Defender for Servers includes automatic, native integration with Microsoft Defender for Endpoint. With this integration enabled, you'll have access to the vulnerability findings from Microsoft threat and vulnerability management.
Together, these assessment tools give you regular, detailed vulnerability scans that cover your compute, data, and infrastructure. You can review and respond to the results of these scans from within Defender for Cloud.
Secure
From authentication methods to access control to the concept of Zero Trust, security in
the cloud is an essential basic that must be done right. In order to be secure in the cloud,
you have to ensure your workloads are secure. To secure your workloads, you need
security policies in place that are tailored to your environment and situation. Because
policies in Defender for Cloud are built on top of Azure Policy controls, you're getting the
full range and flexibility of a world-class policy solution. In Defender for Cloud, you can
set your policies to run on management groups, across subscriptions, and even for a
whole tenant.
One of the benefits of moving to the cloud is the ability to grow and scale as you need,
adding new services and resources as necessary. Defender for Cloud is constantly
monitoring for new resources being deployed across your workloads. Defender for Cloud
assesses if new resources are configured according to security best practices. If not,
they're flagged and you get a prioritized list of recommendations for what you need to
fix. Recommendations help you reduce the attack surface across each of your resources.
The list of recommendations is enabled and supported by the Azure Security Benchmark. This Microsoft-authored, Azure-specific benchmark provides a set of guidelines for security and compliance best practices based on common compliance frameworks.
In this way, Defender for Cloud enables you not just to set security policies, but to apply
secure configuration standards across your resources.
To help you understand how important each recommendation is to your overall security
posture, Defender for Cloud groups the recommendations into security controls and adds
a secure score value to each control. The secure score gives you an at-a-glance indicator
of the health of your security posture, while the controls give you a working list of things
to consider to improve your security score and your overall security posture.
Defend
The first two areas were focused on assessing, monitoring, and maintaining your
environment. Defender for Cloud also helps you defend your environment by providing
security alerts and advanced threat protection features.
Security alerts
When Defender for Cloud detects a threat in any area of your environment, it generates
a security alert. Security alerts:
 Describe details of the affected resources
 Suggest remediation steps
 Provide, in some cases, an option to trigger a logic app in response
Whether an alert is generated by Defender for Cloud or received by Defender for Cloud
from an integrated security product, you can export it. Defender for Cloud's threat
protection includes fusion kill-chain analysis, which automatically correlates alerts in
your environment based on cyber kill-chain analysis, to help you better understand the
full story of an attack campaign, where it started, and what kind of impact it had on your
resources.
Advanced threat protection
Defender for Cloud provides advanced threat protection features for many of your
deployed resources, including virtual machines, SQL databases, containers, web
applications, and your network. Protections include securing the management ports of
your VMs with just-in-time access, and adaptive application controls to create allowlists
for what apps should and shouldn't run on your machines.
Describe factors that can affect costs in Azure
Azure shifts development costs from the capital expense (CapEx) of building out and maintaining infrastructure and facilities to an operational expense (OpEx) of renting infrastructure as you need it, whether that's compute, storage, networking, or something else.
That OpEx cost can be impacted by many factors. Some of the impacting factors are:
 Resource type
 Consumption
 Maintenance
 Geography
 Subscription type
 Azure Marketplace
Resource type
A number of factors influence the cost of Azure resources. The type of resources, the
settings for the resource, and the Azure region will all have an impact on how much a
resource costs. When you provision an Azure resource, Azure creates metered instances
for that resource. The meters track the resources' usage and generate a usage record
that is used to calculate your bill.
Examples
With a storage account, you specify a type such as blob, a performance tier, an access
tier, redundancy settings, and a region. Creating the same storage account in different
regions may show different costs and changing any of the settings may also impact the
price.
With a virtual machine (VM), you may have to consider licensing for the operating
system or other software, the processor and number of cores for the VM, the attached
storage, and the network interface. Just like with storage, provisioning the same virtual
machine in different regions may result in different costs.
Consumption
Pay-as-you-go has been a consistent theme throughout, and that's the cloud payment model where you pay for the resources that you use during a billing cycle. If you use more compute this cycle, you pay more. If you use less in the current cycle, you pay less. It's a straightforward pricing mechanism that allows for maximum flexibility.
However, Azure also offers the ability to commit to using a set amount of cloud
resources in advance and receiving discounts on those “reserved” resources. Many
services, including databases, compute, and storage all provide the option to commit to
a level of use and receive a discount, in some cases up to 72 percent.
When you reserve capacity, you're committing to using and paying for a certain amount of Azure resources during a given period (typically one or three years). Pay-as-you-go still acts as a backstop: if you see a sudden surge in demand that exceeds what you've reserved, you just pay for the additional resources beyond your reservation. This model allows you to realize significant savings on reliable, consistent workloads while also having the flexibility to rapidly increase your cloud footprint as the need arises.
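As a purely hypothetical illustration of how the two payment models combine (the figures are invented for the example and aren't Azure prices): suppose a VM costs $100 per month at pay-as-you-go rates and a one-year reservation brings that to $60 per month.

Reserved baseline: 1 VM x $60/month x 12 months = $720
Two-month surge: 1 extra VM x $100/month x 2 months = $200 (billed pay-as-you-go)
Total for the year: $720 + $200 = $920
All pay-as-you-go, for comparison: (12 x $100) + (2 x $100) = $1,400

The steady workload benefits from the reservation discount, while the short-lived surge simply rides on pay-as-you-go, which is exactly the combination described above.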
Maintenance
The flexibility of the cloud makes it possible to rapidly adjust resources based on
demand. Using resource groups can help keep all of your resources organized. In order
to control costs, it’s important to maintain your cloud environment. For example, every
time you provision a VM, additional resources such as storage and networking are also
provisioned. If you deprovision the VM, those additional resources may not deprovision
at the same time, either intentionally or unintentionally. By keeping an eye on your
resources and making sure you’re not keeping around resources that are no longer
needed, you can help control cloud costs.
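A simple way to spot leftover resources is to query for assets that are no longer attached to anything. For example, the following Azure CLI query (a sketch; the output columns are illustrative) lists managed disks that aren't attached to any VM, which are often left behind after a VM is deleted:

# List managed disks that aren't attached to a VM and may be safe to remove.
az disk list \
  --query "[?diskState=='Unattached'].{name:name, resourceGroup:resourceGroup, sizeGb:diskSizeGb}" \
  --output table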
Geography
When you provision most resources in Azure, you need to define a region where the
resource deploys. Azure infrastructure is distributed globally, which enables you to
deploy your services centrally or closest to your customers, or something in between.
With this global deployment comes global pricing differences. The cost of power, labor,
taxes, and fees vary depending on the location. Due to these variations, Azure resources
can differ in costs to deploy depending on the region.
Network traffic is also impacted based on geography. For example, it’s less expensive to
move information within Europe than to move information from Europe to Asia or South
America.
Network traffic
Billing zones are a factor in determining the cost of some Azure services.
Bandwidth refers to data moving in and out of Azure datacenters. Some inbound data
transfers (data going into Azure datacenters) are free. For outbound data transfers (data
leaving Azure datacenters), data transfer pricing is based on zones.
A zone is a geographical grouping of Azure regions for billing purposes. The bandwidth
pricing page has additional information on pricing for data ingress, egress, and transfer.
Subscription type
Some Azure subscription types also include usage allowances, which affect costs.
For example, an Azure free trial subscription provides access to a number of Azure
products that are free for 12 months. It also includes credit to spend within your first 30
days of sign-up. You'll get access to more than 25 products that are always free (based
on resource and region availability).
Azure Marketplace
Azure Marketplace lets you purchase Azure-based solutions and services from third-party
vendors. This could be a server with software preinstalled and configured, or managed
network firewall appliances, or connectors to third-party backup services. When you
purchase products through Azure Marketplace, you may pay for not only the Azure
services that you’re using, but also the services or expertise of the third-party vendor.
Billing structures are set by the vendor.
All solutions available in Azure Marketplace are certified and compliant with Azure policies and standards. The certification policies may vary based on the service or solution type and the Azure services involved. The commercial marketplace certification policies documentation has additional information on Azure Marketplace certifications.
Compare the Pricing and Total Cost of Ownership calculators
The pricing calculator and the total cost of ownership (TCO) calculator are two
calculators that help you understand potential Azure expenses. Both calculators are
accessible from the internet, and both calculators allow you to build out a configuration.
However, the two calculators have very different purposes.
Pricing calculator
The pricing calculator is designed to give you an estimated cost for provisioning
resources in Azure. You can get an estimate for individual resources, build out a solution,
or use an example scenario to see an estimate of the Azure spend. The pricing
calculator’s focus is on the cost of provisioned resources in Azure.
Note
The Pricing calculator is for information purposes only. The prices are only an estimate.
Nothing is provisioned when you add resources to the pricing calculator, and you won't
be charged for any services you select.
With the pricing calculator, you can estimate the cost of any provisioned resources,
including compute, storage, and associated network costs. You can even account for
different storage options like storage type, access tier, and redundancy.
TCO calculator
The TCO calculator is designed to help you compare the cost of running an on-premises infrastructure with the cost of running the same infrastructure in Azure. With the TCO calculator, you enter your current infrastructure configuration, including servers, databases, storage, and outbound network traffic. The TCO calculator then compares the anticipated costs for your current environment with an Azure environment supporting the same infrastructure requirements.
With the TCO calculator, you enter your configuration, add in assumptions like power and
IT labor costs, and are presented with an estimation of the cost difference to run the
same environment in your current datacenter or in Azure.
What is the Microsoft Cloud Adoption Framework for Azure?
The Cloud Adoption Framework for Azure is a collection of documentation, technical
guidance, best practices, and tools that aid in aligning business, organizational
readiness, and technology strategies. This alignment enables a clear and actionable
journey to the cloud that rapidly delivers on the desired business outcomes.
The cloud fundamentally changes how organizations procure and use technology
resources. With the cloud, they can provision and consume resources only when needed.
While the cloud offers tremendous flexibility in design choices, organizations need a
proven and consistent methodology for adopting cloud technologies. The Cloud Adoption
Framework meets that need. It can help guide your decisions throughout cloud adoption
to accelerate a specific business objective.
How is it structured?
The Cloud Adoption Framework helps customers undertake a simplified cloud journey in
three main stages:
 Plan
 Ready
 Adopt
These three main stages are preceded by a business strategy phase and surrounded by
an operations phase that expands through the cloud adoption journey.
The Cloud Adoption Framework contains detailed information to cover an end-to-end
cloud adoption journey:
 It begins with setting the business strategy, which should align to actionable
technology projects that deliver on the desired business outcomes.
 It then describes how the organization must:
o Prepare its people with technical readiness.
o Adjust processes to drive business and technology changes.
o Enable business outcomes through implementation of the defined
technology plan.
 Finally, it covers cloud operations, such as governance, resources, and people and
change management.
The cloud offers nearly unlimited potential, but successful adoption requires careful
planning and strategy. The adoption strategy depends on where you are in your cloud
journey. When you think about your use of the cloud, what is your motivation?
Next, let's define the strategy that might trigger an organization to move to the cloud.
Define strategy
Organizations adopt the cloud to help drive business transformation, such as processes
and product improvement, market growth, and increased profitability. Let's look at the
most common motivation triggers for cloud adoption.
Across organizations of all types, sizes, and industries, the decision to invest in cloud technologies is often tightly connected to a critical business event. The reason for this connection is that the cloud might enable the appropriate solution for the event. Proper cloud technology implementation might turn a reactive response into an innovation opportunity that drives growth for the organization.
Motivations
Organizations find different triggers to adopt new technologies like Azure. Some triggers
drive the organization to migrate current applications. Other triggers require creation of
new capabilities, products, and experiences.
Some common migration and innovation triggers include:
 Preparation for new technical capabilities
 Gaining scale to meet market or geographic demands
 Cost savings
 Reduction in vendor or technical complexity
 Optimization of internal operations
 Increased business agility
 Improvements to customer experiences or engagements
 Transformation of products or services
 Disruption of the market from new products or services
There are many reasons or triggers for cloud adoption. Which triggers are most relevant
to your business? Where do you see the most opportunity to take advantage of the
benefits of cloud technology? Identifying these opportunities will help you develop your
cloud adoption plan.
Strategy
When you define your cloud business strategy, you should consider business impact,
turnaround time, global reach, performance, and more. Here are key areas you need to
focus on:
 Establish clear business outcomes: Drive transparency and engagement for
your journey across the organization.
 Define business justification: Identify business value opportunities to then
select the right technology.
Implementing the first application is key to learning and testing with confidence, as your
cloud adoption journey starts. Use a two-pronged approach to select it:
 Business criteria: Identify an application currently in operation where the owner
has a strong motivation to move to the cloud.
 Technical criteria: Select an application that has minimum dependencies and
can be moved as a small group of assets.
The first application an organization deploys to the cloud is often deployed in an experimental environment with no operational or governance capacity. It's important to select an application that doesn't interact with secure data. Carefully consider which application is a good candidate. As you plan subsequent releases and deploy additional applications to the cloud, you create the first prioritized migration application and the first prioritized release backlog. Over time, you create and continue to shape the optimal environment for future deployments.
Establish clear business outcomes
The most successful cloud adoption journeys start with a business outcome in mind,
backed up by financial reasoning and support. A business outcome is a concise, defined,
and observable result or change in business performance that's captured by a specific
measure. The cloud strategy team consists of business leaders from finance, IT
infrastructure, and application groups. The team leads the cloud analysis and planning
phase. In this phase, the cloud strategy team is responsible for:
 Reviewing business outcomes and creating the business justification plan for
possible use cases for cloud adoption.
 Building or facilitating the cloud rationalization process, selecting the first
application, and managing subsequent prioritized backlogs.
 Managing communications with key stakeholders and promoting the cloud
adoption journey success and learnings.
Remember that the chief financial officer (CFO) can be a key player in creating and
landing a cloud adoption plan and can drive the value of migration and innovation,
including creating a financial plan for adoption.
Here are some tools to support you in your financial planning:
 Azure Total Cost of Ownership (TCO) Calculator: Use the TCO calculator to
estimate the cost savings you can realize by migrating your application workloads
to Azure.
 Azure pricing calculator: Estimate your expected monthly bill by using the
pricing calculator.
 Microsoft Cost Management + Billing: Use and manage Azure and other cloud
resources through a multiple-cloud cost management solution.
Tip
Links to the TCO calculator, Azure pricing calculator, and Microsoft Cost Management +
Billing tools are available in the Summary and resources unit at the end of this module.
Define business justification
Developing a clear business justification for cloud adoption with tangible, relevant costs
and returns can be a complex process. First, review some common cloud computing
business value areas to help justify the cloud adoption journey:
 Cost: Eliminates capital expense.
 Scale: Ability to scale elastically, delivering the right amount of IT resources.
 Productivity: Removes the need for many IT management chores.
 Reliability: Eases the burden of data backup, disaster recovery, and business continuity.
Here are the key points from this unit:
 Motivations for cloud adoption include:
o Migration triggers, such as cost saving and operations optimization.
o Innovation triggers, such as scaling to meet market or geographical
demands.
 The Cloud Adoption Framework for Azure enables an actionable cloud journey that
rapidly delivers on the desired business outcomes.
 The key areas to focus on when you develop your cloud business strategy are to:
o Define your business justification by identifying business value opportunities.
o Establish clear business outcomes to drive transparency and engagement.
 Microsoft provides tools to support you in your financial planning:
o Azure total cost of ownership calculator
o Azure pricing calculator
o Microsoft Cost Management + Billing
 Your first adoption project should align with your motivations for adoption.
Now that you've learned about how to define your business outcomes and overall
strategy for cloud adoption, let's get started and create a plan for this journey.
Plan
How the cloud can advance your business strategy depends on your situation. The cloud
delivers fundamental technology benefits that can aid in executing multiple business
strategies. Using cloud-based approaches can improve business agility, reduce costs,
accelerate time to market, and even allow businesses to quickly expand into new
markets.
As your organization moves forward in your cloud adoption journey, proper planning is
key to your success. Your organization already has technology investments, so you must
understand your current state and then develop a prioritization plan for your cloud
journey.
In this stage, you focus on two main actions:
 Rationalize your digital estate: Understand the organization's current digital
estate to maximize return and minimize risks by running a workload assessment.
 Create your cloud adoption plan: Develop a plan where prioritized workloads
are defined and aligned with business outcomes.
Rationalize your digital estate
A digital estate is the collection of IT assets that power business processes and
supporting operations. To begin cloud rationalization of the digital estate, inventory all
the digital assets the organization owns today. Then, evaluate each asset to determine
the best way to migrate or modernize each component to the cloud.
During this process, we recommend that you proceed incrementally, application by
application. Don't make decisions too broadly or too early across the entire application
portfolio.
There are five options for cloud rationalization, sometimes referred to as the Five Rs. Each option is described below along with its expected business outcomes.
Rehost
Also known as a lift-and-shift migration, a rehost effort moves a current state asset to the chosen cloud provider, with minimal change to overall architecture.
Expected business outcomes:
 Reduce capital expense.
 Free up datacenter space.
 Achieve rapid return on investment in the cloud.
Refactor
Refactor also refers to the application development process of refactoring code to allow an application to deliver on new business opportunities.
Expected business outcomes:
 Experience faster and shorter updates.
 Benefit from code portability.
 Achieve greater cloud efficiency in the areas of resources, speed, cost.
Rearchitect
When aging applications aren't compatible with the cloud, they might need to be rearchitected to produce cost and operational efficiencies in the cloud.
Expected business outcomes:
 Gain application scale and agility.
 Adopt new cloud capabilities more easily.
 Use a mix of technology stacks.
Rebuild/New
Unsupported, misaligned, or out-of-date on-premises applications might be too expensive to carry forward. A new code base with a cloud-native design might be the most appropriate and efficient path.
Expected business outcomes:
 Accelerate innovation.
 Build applications faster.
 Reduce operational cost.
Replace
Sometimes the best approach is to replace the current application with a hosted application that meets all functionality required in the cloud.
Expected business outcomes:
 Standardize around industry best practices.
 Accelerate adoption of business process-driven approaches.
 Reallocate development investments into applications that create competitive differentiation or advantages.
Create your cloud adoption plan
As you develop a business justification model for your organization's cloud journey,
identify business outcomes that can be mapped to specific cloud capabilities and
business strategies to reach the desired state of transformation. Documenting all these
outcomes and business strategies serves as the foundation for your organization's cloud
adoption plan.
Key steps to build this plan are to:
 Review sample business outcomes.
 Identify the leading metrics that best represent progress toward the identified
business outcomes.
 Establish a financial model that aligns with the outcomes and learning metrics.
Tip
Links to sample business outcomes, the business outcome template, learning metrics,
the financial model, and the digital estate document are available in the Summary and
resources unit at the end of this module.
Here are the key points from this unit:
 In the plan stage, there are two major actions: rationalizing your digital estate and
creating your cloud adoption plan.
 In the Plan phase, there are five options for cloud rationalization: rehost, refactor,
rearchitect, rebuild/new, and replace. During this process, we recommend that you
proceed incrementally.
Let's talk next about how to prepare your organization, your business processes, and
your environment for your cloud adoption journey.
Ready
We just looked at how a business plan aligned to a digital estate rationalization can
ensure you know why you'll benefit from moving to the cloud. Cloud adoption is a
strategic change that requires involvement from both business decision makers and end
users. Now, let's talk about how to get your organization ready for this journey:
 Define skills and support readiness: Create and implement a skills-readiness
plan to:
o Address current gaps.
o Ensure that IT and business people are ready for the change and the new
technologies.
o Define support needs.
 Create your landing zone: Set up a migration target in the cloud to handle
prioritized applications.
The Azure readiness guide introduces features that help you organize resources,
control costs, and secure and manage your organization. Links to sample skills-readiness
learning paths on Microsoft Learn and Azure Support are available in the Summary and
resources unit at the end of this module.
Create your landing zone
Before you begin to build and deploy solutions with Azure services, make sure your
environment is ready. The term landing zone is used to describe an environment that's
provisioned and prepared to host workloads in a cloud environment, such as Azure. A
fully functioning landing zone is the final deliverable of any iteration of the Cloud
Adoption Framework for Azure methodology.
Each landing zone is part of a broader solution for organizing resources across a cloud
environment. These resources include management groups, resource groups, and
subscriptions. Azure offers many services that help you organize resources, control
costs, and secure and manage your organization's Azure subscription. Microsoft Cost
Management + Billing also provides a few ways to help you predict, analyze, and
manage costs.
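As a minimal sketch of that kind of organization, the following Azure CLI commands create a management group for grouping subscriptions and a tagged resource group for a prioritized workload. The names, region, and tag values are hypothetical placeholders.

# Create a management group to organize the subscriptions in your landing zones.
az account management-group create \
  --name "mg-landing-zones" \
  --display-name "Landing Zones"

# Create a resource group for the first prioritized workload, tagged for cost tracking.
az group create \
  --name rg-workload-01 \
  --location eastus \
  --tags workload=pilot costCenter=CC1234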
Note
For an interactive experience, view the environment-readiness content in the Azure
portal. Go to the Azure Quickstart Center in the Azure portal, and select introduction to
Azure setup. Then follow the step-by-step instructions.
Tip
Standards-based Azure Blueprints samples are available for you to use as-is or modify for your needs. A link to the list of available samples is in the Summary and resources unit at the end of this module.
Here are the key points from this unit:
 Cloud adoption is a strategic change that requires involvement from both business
decision makers and end users.
 When you define skills and support readiness, create and implement a skills-
readiness plan to address current gaps, ensure that people are ready for the
change, and define support needs.
 The process of creating your landing zone sets up a migration target in the cloud
to handle prioritized applications.
Next, you'll learn about how to implement your cloud adoption plan based on either a
migration or innovation path.
Adopt
At this point, you've established your business justification and defined your business
outcomes. You've prepared your organization. Your people and your Azure environment
are ready to deploy your prioritized applications. You're ready to adopt cloud
technologies following the selected digital estate rationalization path.
As discussed, your organization has unique motivations to adopt the cloud. They all converge on one of two paths: migration to the cloud or innovation in the cloud.
Cloud migration
Cloud migration is the process of moving existing digital assets to a cloud platform.
Existing assets are replicated to the cloud with minimal modifications. After an
application or workload becomes operational in the cloud, users are transitioned from
the existing solution to the cloud solution.
Cloud migration is one way to effectively balance a cloud portfolio. It's often the fastest and most agile approach in the short term. However, some benefits of the cloud might not be realized without additional future modification. Enterprises and mid-market customers use this approach to accelerate the pace of change, avoid planned capital expenditures, and reduce ongoing operational costs.
The strategy and tools you use to migrate an application to Azure largely depend on your
business motivations, technology strategies, and timelines. Your decisions are also based
on a deep understanding of the application and the assets to be migrated. These assets
include infrastructure, apps, and data. This decision tree serves as high-level guidance to
help you select the best tools to use based on migration decisions.
Migration preparation: Establish a rough migration backlog, based largely on the
current state and desired outcomes.
 Business outcomes: The key business objectives that drive this migration.
They're defined in the Plan phase.
 Digital estate estimate: A rough estimate of the number and condition of
workloads to be migrated. It's defined in the Plan phase.
 Roles and responsibilities: A clear definition of the team structure, separation
of responsibilities, and access requirements. They're defined in the Ready phase.
 Change management requirements: The cadence, processes, and
documentation required to review and approve changes. They're defined in the
Ready phase.
Cloud innovation
Cloud-native applications and data accelerate development and experimentation cycles.
Older applications can take advantage of many of the same cloud-native benefits by
modernizing the solution or components of the solution. Modern DevOps and software
development lifecycle (SDLC) approaches that use cloud technology shorten the time
from idea to product transformation. Combined, these tools invite the customer into the
process to create shorter feedback loops and better customer experiences.
Modern approaches to infrastructure deployment, operations, and governance are
rapidly bridging the gaps between development and operations. Modernization and
innovation in the IT portfolio create tighter alignment with DevOps and accelerate
innovations across the digital estate and application portfolio.
Here are the key points from this unit:
 Cloud migration is the process of moving existing digital assets to a cloud platform. The Adopt stage is divided into two options: migrate and innovate.
 Each cloud migration activity falls within one of the following processes, as it relates to the migration backlog: assess, migrate, optimize, and secure. Then, you manage each backlog asset.
 Modernization and innovation in the IT portfolio create tighter alignment with
DevOps and accelerate innovations across the digital estate and application
portfolio.
You've learned how to plan, get ready, and start to deploy your first applications to the
cloud. Now let's talk about governance and management in the cloud.
Govern and manage
The process of adopting the cloud is a journey, not a destination. Along the way, there
are clear milestones and tangible business benefits. The final state of cloud adoption is
unknown when an organization begins the journey. As your organization moves or
deploys new applications to the cloud, this final state starts to form. It's important to
consider the following aspects of managing and operating a cloud platform:
 Define governance solutions for your cloud environment that meet your
organization's business needs, provide agility, and control risks.
 Manage your cloud environment based on the governance solutions to
allow it to evolve, grow, and adapt to your organization's changing business needs.
Cloud governance
Cloud governance creates guardrails that keep the organization on a safe path
throughout the journey. The Cloud Adoption Framework for Azure governance model
identifies key areas of importance. Each area relates to different types of risks the
organization must address as it adopts more cloud services.
Because governance requirements will evolve throughout the cloud adoption journey, a
flexible approach to governance is required. IT governance must move quickly and keep
pace with business demands to stay relevant during cloud adoption.
Incremental governance relies on a small set of corporate policies, processes, and tools
to establish a foundation for adoption and governance. That foundation is called
a minimum viable product (MVP). An MVP allows the governance team to quickly
incorporate governance into implementations throughout the adoption lifecycle. After
this MVP is deployed, additional layers of governance can be quickly incorporated into
the environment.
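Azure Policy is a common building block for that governance MVP. As a sketch, the commands below look up the built-in "Allowed locations" policy definition and assign it at the subscription scope so new resources can only be created in approved regions. The subscription ID and region list are placeholders, and the parameter name is assumed from the built-in definition.

# Find the name (a GUID) of the built-in "Allowed locations" policy definition.
az policy definition list \
  --query "[?displayName=='Allowed locations'].name" \
  --output tsv

# Assign it at the subscription scope to restrict where resources can be created.
az policy assignment create \
  --name "allowed-locations" \
  --policy "<policy-definition-name-from-previous-step>" \
  --scope "/subscriptions/<subscription-id>" \
  --params '{"listOfAllowedLocations": {"value": ["eastus", "westeurope"]}}'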
Tip
To determine where you should start to implement your own cloud governance, use the
Microsoft assessment tools linked in the Summary and resources unit at the end of this
module.
Cloud management
The goal of the Manage methodology is to maximize ongoing business returns by
creating balance between stability and operational costs. Stable business operations
lead to stable revenue streams. Controlled operational costs reduce the overhead to
drive more profit from the business processes.
Tip
Links to the scalability, availability, and resiliency resources are available in
the Summary and resources unit at the end of this module.
Cloud operations creates a maturity model that helps the team fulfill commitments to
the business. In the early stages of maturity, customers focus on basic needs such as
inventory and visibility into cloud assets and performance. As operations in the cloud
mature, the team can use cloud native or hybrid approaches to maintaining operational
compliance, which reduces the likelihood of interruptions through configuration and state
management. After compliance is achieved, protection and recovery services provide
low-impact ways to reduce the duration and effect of business process interruptions.
During platform operations, aspects of various platforms (like containers or data
platforms) are adjusted and automated to improve performance.
Here are the key points from this unit:
 As your organization moves or deploys new applications to the cloud, it's important
to consider these aspects of operating a cloud platform:
o Define governance solutions for your cloud environment.
o Manage your cloud environment.
 The Cloud Adoption Framework governance model identifies key areas of
importance. Each area relates to different types of risks the organization must
address as it adopts more cloud services. The Five Disciplines of Cloud Governance
are Cost Management, Security Baseline, Resource Consistency, Identity Baseline,
and Deployment Acceleration.
Next, let's see what you've learned about the Cloud Adoption Framework with a
knowledge check.