
Microsoft Workloads in AWS


In this course, you will learn the technical fundamentals of running Microsoft
workloads on Amazon Web Services, or AWS. The course is designed for pre-sales
engineers at APN Consulting partner organizations to learn how to discuss the
technical advantages of AWS for Windows. You will learn about the various tools
available to migrate, develop, build, deploy, manage, and operate Microsoft
applications and Windows Servers on AWS. You will see case studies and reference
architectures to showcase how some AWS customer architectures have been
designed for common Microsoft workloads including SQL and Active Directory. This
course is available in both instructor-led and web-based delivery formats.
In this course, you will learn how to:
- Provide a technical overview of Microsoft workloads on AWS;
- Discuss the technical advantages and positioning for Microsoft workloads on AWS;
- Provide guidance to customers who are architecting common Microsoft workloads
for AWS; and
- Explain the various tools to develop, deploy, and manage Microsoft workloads on
AWS.
This course is organized into seven topics:
• Module one covers how to position AWS for managing and hosting Microsoft
workloads.
• Module two covers how to architect foundational AWS services to support running
Microsoft workloads.
• Module three covers how to run Microsoft Windows Server instances in Amazon
EC2, and create custom Amazon Machine Images (AMI) for running Microsoft
workloads.
• Module four covers how to deploy and run Directory services in AWS.
• Module five covers running SQL Server databases on AWS.
• Module six covers how to automate operations with AWS services.
• Module seven covers the tools Amazon provides to help you build and run .NET
applications on AWS.
In the first module, you learn how to position AWS for managing and hosting
Microsoft workloads.
In this module, you will learn how to position AWS for managing and hosting
Microsoft workloads.
You will learn which drivers and challenges lead to using AWS for Microsoft
workloads, and the benefits AWS provides.
You also learn how to assess current workloads to find cost savings when moving
to AWS.
AWS focuses on helping customers optimize their investments in enterprise
applications. We recognize that Microsoft software is widely used by customers of all
sizes.

A large portion of enterprise customers have a Microsoft Enterprise Agreement, or
EA, which is a 3-year licensing contract. These EA licenses are primarily for
on-premises workloads. Customers want to optimize their EA investments by reducing
costs, improving performance, and increasing agility. Often, customers overspend
here, which gives AWS an opportunity to help optimize their licensing.

Microsoft, VMware, SAP, IBM, and Oracle still represent major contributors to IT
budgets. Customers want to reduce their technical debt with these legacy enterprise
software investments. APN Partners can help customers do this by migrating their
workloads to AWS.

Additionally, a significant portion of all on-premises enterprise applications are
Windows-based. These points illustrate the tremendous market opportunity for
partners to assist customers in upgrading, migrating, and optimizing Microsoft
workloads on AWS.
As a result of these market conditions, AWS is experiencing a high demand from
customers to support Microsoft workloads.
Amazon Elastic Compute Cloud, or Amazon EC2, for Windows is now among the top
five AWS services. EC2 for Windows is currently growing at 63 percent globally.
The numbers show that it’s a large and fast-growing business.
Next, you will learn how AWS supports Microsoft workloads in many ways.
AWS continually expands its services to support many cloud workloads, and it now
has more than 100 services. Microsoft workloads and .NET applications are fully
supported on AWS, which offers both a broad set of services and deep functionality
within those services.

AWS offers security services that provide fine-grained control. AWS has many
security certifications and supports highly available applications. AWS also has
experience building reliable, secure, scalable, and cost-effective infrastructure
that serves active customers every month.

Visit the AWS web site for the most up-to-date list of services.

AWS has more than 10 years of innovation for Microsoft workloads that run on AWS.
AWS offers over 150 Amazon Elastic Compute Cloud, or Amazon EC2, instance types.
AWS also offers more than 60 different Amazon Machine Images, or AMIs, for
Microsoft workloads. Recently, AWS announced the availability of Windows Server
2019 AMIs for Amazon EC2. Windows Server 2019 offers a variety of new features,
including smaller and more efficient Windows containers, support for Linux
containers for application modernization, and the App Compatibility Feature on
Demand. Windows Server 2019 AMIs are available in all public AWS Regions and in
AWS GovCloud (US).
Microsoft Premier Support helps AWS assist end customers. AWS and Microsoft have
new areas of support integration to help customers.

In addition, AWS support engineers can escalate issues directly to Microsoft Support
on behalf of AWS business or enterprise tier customers who run Microsoft
workloads. AWS does not share any customer information or specific details without
the customer’s permission.
Secure: The AWS Cloud uses a security-in-layers approach to provide the protection
that organizations require without sacrificing scale, control, speed, or performance. In
addition to several options for network security, AWS protects data and applications
with 256-bit encryption and provides fine-grained access controls to resources via
AWS Identity and Access Management (IAM).

Reliable: AWS for Microsoft workloads offers a highly reliable environment where
replacement instances can be rapidly and predictably provisioned. The AWS Service
Level Agreement commitment is designed for 99.95% availability for each Region.
Each Region comprises at least two physically isolated facilities that are known as
Availability Zones, which helps keep instances highly available. AWS currently features
61 Availability Zones in 20 Regions. These Regions provide organizations the
reassurance that their mission-critical data and applications will be available, even in
the face of natural disasters and other rare events that might cause system failures.

High-performance: AWS for Microsoft workloads provides a high-performance
environment. It allows organizations to automate complex tasks without sacrificing
the visibility and monitoring capabilities they need so that their workloads are secure
and functioning as expected. AWS also provides flexible scaling capabilities:
resources can be scaled manually, on a schedule, by policy, or through automatic
rebalancing.

Familiar: AWS for Microsoft workloads is compatible with widely used management
applications, such as Microsoft System Center and VMware vCenter. Add-ins were
developed to provide seamless integration between these traditional applications
and the AWS Cloud. This enables organizations to use existing tools from a single,
familiar console to manage both on-premises virtual machines and Microsoft-based
cloud workloads.

Cost-effective: AWS for Microsoft workloads is the solution for organizations that
must access enterprise-grade computing resources in an affordable way. A global
cloud-computing infrastructure enables organizations to benefit from economies of
scale, which reduces the total costs of enterprise IT. AWS is designed to offer value by
enabling elastic consumption that scales with customers’ needs, pay-as-you-go
pricing models, and no long-term service commitments.

Flexible: With AWS for Microsoft workloads, organizations have the flexibility to
choose the computing, storage, and networking capacity they need, which services to
use, and how they want to use them. Elastic service capabilities allow scaling of
resources up or down in real-time as needs change, enabling a lean, adaptable
infrastructure. Automation capabilities can be enabled to leverage this elasticity
instantly based on easily customizable rules and volume thresholds.

Extensive: AWS for Microsoft workloads offers an extensive line of features and
services. AWS has been continually expanding its services to support virtually any
cloud workload, and it now has more than 100 services that span compute, storage,
networking, database, analytics, application services, deployment, management, and
mobile. Designed to work together, these services are highly customizable and
accessible through a variety of programming interfaces, including .NET, Visual
Studio, and Windows PowerShell. AWS expands and improves these services
continually.

Innovative: AWS's rapid pace of innovation helps enterprises focus on what's most
important to them when navigating through the many services available.
Here’s another reason companies choose AWS: global reach.

AWS has the largest global footprint of any cloud provider in the market today. Each
AWS Region has multiple Availability Zones and data centers. AWS has been running
high quality cloud infrastructure technology products and services since 2006. We
know our customers care about the availability and performance of their applications.
With AWS, customers can deploy applications across multiple Availability Zones in the
same Region for increased fault tolerance and low latency. Availability Zones are
connected to each other with fast, private fiber-optic networks. Customers can easily
architect applications that automatically fail over between Availability Zones without
interruption.

Understanding AWS Regions, Availability Zones, and edge locations is important.
Other cloud providers may define their terms differently, which can result in
inaccurate comparisons.

For the most up-to-date details on AWS Regions, Availability Zones, edge locations,
and data centers, visit our website.
In this section, you will learn about some specific use cases for Microsoft workloads.
With AWS, you can run the full array of Microsoft workloads on AWS. Several
examples are shown here.
We continually expand our services to support virtually any cloud workload. We now
have over 100 services to offer. Among those offerings, AWS provides numerous
Windows and .NET services and functionality.

AWS also offers a broad selection of services along with much deeper functionality
within most of these services, including deeper functionality for Windows, such as
the AWS Deep Learning AMI for Microsoft Windows Server and Amazon FSx for
Windows File Server, the first fully managed native Windows file system available in
the cloud.
Shown here are some use cases for Microsoft workloads on AWS.

Fast move to the cloud/Lift and shift

AWS offers breadth, depth, and global reach. Customers like Hess are seeing better
performance with their Microsoft applications on AWS than on-premises. Click the
Hess case study link to learn more.

Optimize SQL Server and Active Directory

AWS offers you flexibility, choice, and a number of options for SQL Server databases.
AWS also offers options to manage your Active Directory services.

Modernize .NET applications

It’s easy to get started using the tools you know. After you are on AWS, you have
access to new technologies like AWS Lambda and containers. You can move towards a
DevOps model at your own speed.

Innovate

There are also many integration points between Microsoft workloads and the broad
set of AWS services, which can enable you to innovate and drive business agility.
Amazon EC2 makes it easy to start and manage your Windows instances.
You can run Microsoft Windows Server 2008 and 2008 R2, 2012 and 2012 R2, 2016
and 2019 on EC2 instances. Amazon EC2 instances that run Microsoft Windows
Server provide a secure, reliable, and high-performance environment for deploying
applications and Windows workloads. You can also use preconfigured Amazon
Machine Images, or AMIs, with different combinations of Windows and SQL Server to
help you migrate your Windows workloads quickly.
You can use AWS options to maintain legacy applications in the AWS Cloud, or rewrite
legacy applications while you migrate to more modern operating systems.

Instructor note regarding 32-bit applications, Windows 2003: AWS no longer provides
AMIs that support these operating systems.
Microsoft workloads can use AWS services in multiple ways. This example shows
Active Directory, which is a fundamental Windows workload.

Here, you can see three deployment options:

• You can extend your on-premises AD into AWS;
• You can re-host AD on AWS by using EC2 instances; and
• You can use AWS Managed Microsoft AD, which provides you with a set of highly
available domain controllers, monitoring and recovery, data replication, snapshots,
and software updates. These resources are automatically configured and managed
for you.

For the first two options, you must still administer your deployments:
• You install and manage domain controllers, and
• You manually join EC2 instances to your self-managed AD.

Customers can extend their Active Directory domain to AWS and use the identities
they manage in Active Directory to access Office online.
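
To make the third option concrete, here is a minimal sketch using the AWS Tools for PowerShell. The cmdlet is New-DSMicrosoftAD; the domain name, password, VPC ID, and subnet IDs shown are placeholder values, not values from this course.

# Sketch: create an AWS Managed Microsoft AD directory (placeholder values).
# The service deploys and manages a pair of domain controllers across two subnets.
New-DSMicrosoftAD `
    -Name "corp.example.com" `
    -ShortName "CORP" `
    -Password "Placeh0lder-Passw0rd!" `
    -Edition Standard `
    -VpcSettings_VpcId "vpc-0123456789abcdef0" `
    -VpcSettings_SubnetId @("subnet-aaaa1111", "subnet-bbbb2222")

Because the service manages the domain controllers, there are no servers for you to patch, and instances can be joined to this directory when they are launched.
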
SQL Server is another foundational workload, and you can choose from multiple
options for deployment.

Windows on Amazon EC2

For corporate or third-party legacy and custom applications, including
line-of-business applications, you can launch a database to support these
applications by using Amazon EC2 and Amazon Elastic Block Store, or Amazon EBS.

Amazon RDS
Amazon Relational Database Service, or Amazon RDS, is a managed service that
makes it easy to deploy a relational database to support line-of-business applications
that run on AWS:
• It automates database administration tasks, such as provisioning, patching,
backup, recovery, failure detection, and repair;
• Multi-AZ deployments provide automatic failover; and
• It integrates with IAM for granular control over resource permissions.
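
As an illustration of how little setup a managed deployment requires, the following sketch uses the AWS Tools for PowerShell to create a Multi-AZ SQL Server Standard Edition instance; the identifier, instance class, storage size, and credentials are placeholder values.

# Sketch: launch a Multi-AZ Amazon RDS for SQL Server instance (placeholders).
New-RDSDBInstance `
    -DBInstanceIdentifier "lob-sqlserver" `
    -Engine "sqlserver-se" `
    -LicenseModel "license-included" `
    -DBInstanceClass "db.m5.xlarge" `
    -AllocatedStorage 200 `
    -MasterUsername "dbadmin" `
    -MasterUserPassword "Placeh0lder-Passw0rd!" `
    -MultiAZ $true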

Re-factor
Additional savings and flexibility can be realized with a move to a variety of open
source database solutions on AWS. Customers can save significant cost by moving off
of the proprietary SQL Server engine and onto a fully managed relational database
service like Amazon Aurora, which is compatible with the open source engines MySQL
and PostgreSQL. AWS offers refactoring tools and services to help customers move to
cloud-native solutions such as Aurora.

Whether you decide to self-manage your customer’s environment with Amazon EC2
or deploy to a managed service with Amazon RDS, you will have:
• A cost-effective option for hosting SQL Server;
• Complete control for managing software, compute, and storage resources; and
• Rapid provisioning through relational database AMIs that enable you to store
database machine images.
The key to continued cost savings is the efficient management of ongoing operations.
AWS provides a set of management tools that enables you to programmatically
provision, monitor, and automate all the components of your cloud environment.
Using these tools, you can maintain consistent controls without restricting the speed
of development. AWS provides the following management tools, which work together
and are integrated with every part of the AWS Cloud:
• AWS CloudFormation – Model and provision all your cloud infrastructure
resources;
• AWS Systems Manager – Gain operational insights and take action on AWS
resources;
• Amazon CloudWatch – Gain visibility of your cloud resources and applications;
• AWS License Manager – Set rules to manage, discover, and report software license
usage; and
• AWS OpsWorks – Automate operations with Chef and Puppet.
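
As one hedged example of these tools in action, AWS Systems Manager Run Command can execute a PowerShell command across Windows instances without a remote desktop session; this sketch assumes the AWS Tools for PowerShell, and the instance ID is a placeholder.

# Sketch: run an ad hoc PowerShell command on a Windows instance through
# Systems Manager Run Command (no RDP session required).
Send-SSMCommand `
    -InstanceId "i-0123456789abcdef0" `
    -DocumentName "AWS-RunPowerShellScript" `
    -Parameter @{ commands = "Get-HotFix | Sort-Object InstalledOn -Descending | Select-Object -First 5" }
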
AWS provides full support for .NET applications and Windows workloads.
Additionally, AWS supports various features in the .NET Framework and .NET Core,
including .NET Core 2.1. AWS services such as AWS Lambda, AWS X-Ray, and AWS
CodeStar can help build modern serverless and DevOps solutions. These services also
provide deep integration with tools that developers already use to build .NET
applications, like Visual Studio and Visual Studio Team Services. This means that
developers can use familiar tools and
also benefit from using the breadth of AWS products and services. To help developers
learn about various AWS services and get started quickly, AWS provides a range of
resources and tools, and AWS also offers a GitHub community.
You can access many case studies on the AWS website that discuss customer
success stories.

One relevant case study is related specifically to Windows.

Dole Food Company is an American-based agricultural multinational corporation that
distributes its products in 90 countries. Searching for a solution to host its
Microsoft SharePoint sites, the company chose AWS to reduce costs and improve
operational efficiency. By running on AWS,
Dole can launch a new SharePoint website in minutes, and they estimate
savings of $350,000 in operating expenses.
In this module, you learned how to position AWS for managing and hosting Microsoft
workloads.
You learned which drivers and challenges lead to using AWS for Microsoft workloads,
and the benefits AWS provides.
You also learned how to assess current workloads to find cost savings when
moving to AWS.
In this module, you will learn how foundational Amazon Web Services, or AWS,
services pertain to running Microsoft workloads. These foundational services include
compute services, such as Amazon Elastic Compute Cloud, or Amazon EC2; storage
services; networking services; and domain services.
You will learn how to discuss the shared responsibility model, and how to use Virtual
Private Cloud (VPC), including Security Groups, Network Access Control Lists, and
firewalls.
You will also learn how to choose storage options for Microsoft workloads.
When you educate customers about running Microsoft workloads on AWS,
show how the components in an on-premises deployment connect to AWS.
One of the best ways to show how to architect a Windows-based solution in
the cloud is to show the mapping of components from on-premises resources
to AWS Cloud services.
1. Servers, such as web and application servers, are replaced with Amazon
EC2 instances that run the same software. Because Amazon EC2
instances can run a variety of Windows Server operating systems, most
Windows-based applications can be run on Amazon EC2 instances. AWS
provides Amazon Machine Images, or AMIs, for both 64-bit and 32-bit
Microsoft Windows Server operating systems.
2. Cloud storage is typically more reliable, scalable, and secure than
traditional storage systems. AWS offers a complete range of cloud storage
services to support application and archival requirements that replace on-
premises system-attached and network-attached storage.
3. The LDAP server can be replaced with AWS Directory Service. AWS
Directory Service supports LDAP authentication and it allows you to set up
and run Microsoft Active Directory in the cloud. You can also use it to
connect your AWS resources with an existing on-premises Microsoft Active
Directory.
4. Software-based load balancers are replaced with Elastic Load Balancing
load balancers. Elastic Load Balancing is a fully managed load-balancing
solution that scales automatically as needed and can perform health
checks on attached resources. Elastic Load Balancing can redistribute load
away from unhealthy resources as necessary.
5. Databases can be replaced with Amazon Relational Database Service, or
Amazon RDS. By using Amazon RDS, you can run Amazon Aurora,
PostgreSQL, MySQL, MariaDB, Oracle, and Microsoft SQL Server on a
managed AWS solution. Amazon RDS offers master, read replica, and
standby instances.
6. Finally, Amazon RDS instances can be automatically backed up to Amazon
Simple Storage Service, or Amazon S3, thus replacing the need for on-
premises hardware for database backups.

This diagram illustrates how the shared responsibility model works and which
elements are part of each type of responsibility.

The discussion about cloud security starts by introducing the shared
responsibility model. Though AWS provisions and maintains the underlying
cloud infrastructure, customers must perform the security configuration tasks
that ensure safety in the cloud. The scope of responsibility for AWS goes from
the ground up to the hypervisor. AWS secures the hardware, software,
facilities, and networks that run AWS products and services. Customers are
responsible for securely configuring the services they deploy and use.

AWS also:
• Obtains industry certifications and independent third-party attestations
• Publishes information about AWS security and control practices in
whitepapers and website content
• Provides certificates, reports, and other documentation directly to AWS
customers as required under non-disclosure agreements

The security configuration you must complete varies, depending on how
sensitive your data is and which services you select. For example, AWS
services, such as Amazon EC2 and Amazon S3, are completely under your
control. They require you to perform all of the necessary security configuration
and management tasks. With Amazon EC2, you are responsible for managing
the guest operating system, or OS. For each EC2 instance, you must apply
updates and security patches, secure application software or utilities you
install on the instances, and configure the AWS-provided firewalls, which are
called security groups.

When you use services that are managed by AWS, such as Amazon RDS,
Amazon Redshift, or Amazon WorkDocs, you don’t have to worry about
launching and maintaining instances or patching the guest OS or applications.
AWS handles these tasks for you. For these managed services, basic security
configuration tasks happen automatically, such as data backups, database
replication, and firewall configuration.

However, there are certain security features that you should configure, no
matter which AWS service you use. These features include user accounts and
credentials for AWS Identity and Access Management, or IAM; SSL for data
transmissions; and user activity logging.

AWS Support provides a highly personalized level of service for customers
who seek technical help.

The shared responsibility model for infrastructure services like Amazon EC2
specifies that AWS manages the security of the following assets:
• Facilities, including Regions, Availability Zones, and edge locations;
• Physical security of hardware;
• Network infrastructure; and
• Virtualization infrastructure.

Customers are responsible for the security of their cloud computing assets,
including:
• Amazon Machine Images, or AMIs;
• Operating systems;
• Applications;
• Data in transit;
• Data at rest;
• Data stores;
• Credentials; and
• Policies and configuration.
Compliance requirements go deeper than data sovereignty regulations and
geographic location. To implement the necessary security controls across the
operating environment, AWS recommends a layered security approach. AWS offers
complementary features and services to implement the necessary controls. Many of
these control measures apply to layers that AWS controls, which means that AWS
handles the security of the cloud, specifically the physical infrastructures that host
your resources.

AWS maintains a secured infrastructure that protects the hardware, software,
facilities, and networks that run AWS products and services. Secured infrastructure
includes the following resources:
• AWS data centers, which are nondescript facilities with 24/7 security guards, two-
factor authentication, access logging and review, video surveillance, and disk
degaussing and destruction.
• Hardware infrastructure that includes servers, storage devices, and other
appliances that AWS services use.
• Software infrastructure that includes host operating systems, service applications,
and virtualization software.
• Network infrastructure that includes routers, switches, load balancers, firewalls,
and cabling. It also includes continuous network monitoring at external
boundaries, secure access points, and redundant infrastructure.
• Certifications and attestations that AWS has earned through multiple
third-party audits.
• Compliance alignments and frameworks that include published security or
compliance requirements for a specific purpose, such as a specific industry or
function. For these types of programs, AWS provides various capabilities such as
security features, and enabling materials, which include compliance playbooks,
mapping documents, and whitepapers.

In the next slides, you learn about additional security controls you can implement,
including:
• Virtual private clouds, or VPCs, which also include subnets;
• Security groups;
• Network access control lists, or network ACLs; and
• Firewalls.
Amazon Virtual Private Cloud, or Amazon VPC, allows you to add another
layer of network security to your instances. You can use Amazon VPC to
create private subnets, and you can even add an IPsec virtual private network,
or VPN, tunnel between your network and your VPC. Amazon VPC enables
you to define your own network topology, including definitions for subnets,
network access control lists, internet gateways, routing tables, and virtual
private gateways. The subnets that you create can be defined as either private
or public.

A virtual private cloud, or VPC, is a virtual network that is dedicated to your
AWS account. Each VPC is a distinct, isolated network within the cloud;
network traffic within each VPC is isolated from all other VPCs. You can launch
your AWS resources, such as Amazon EC2 instances, into your VPC. VPCs
can now operate in a dual-stack mode, with the ability to assign both IPv4 and
IPv6 addresses on Amazon EC2 instances.
When you launch a Windows-based Amazon EC2 instance, you must select the VPC
and subnet where the Amazon EC2 instance is deployed. The VPC and subnet
determine the networking characteristics, such as the IP address and routing tables
for the virtual machine. A VPC acts as a network isolation boundary that blocks
traffic entering the VPC unless there is a defined network path. Each VPC is
contained within a Region. Resources within that VPC cannot exist outside of that
Region. There are
ways to connect VPCs in different Regions to each other without going through the
public internet. However, resources in different Availability Zones within that Region
can exist in the same VPC.
Part of the security landscape is ensuring your application’s reliability. By using
multiple Availability Zones, you can design Windows workloads on AWS for high
availability. In this case, you can deploy Windows-based Amazon EC2 instances inside
the same VPC, but in different Availability Zones and subnets. You can also use an
Elastic Load Balancing load balancer to route traffic between the destination
endpoints.
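
A minimal sketch of that layout with the AWS Tools for PowerShell follows; the CIDR ranges and Availability Zone names are placeholder choices.

# Sketch: a VPC with one subnet in each of two Availability Zones, so that
# Windows instances behind a load balancer can survive an AZ failure.
$vpc = New-EC2Vpc -CidrBlock "10.0.0.0/16"
$subnetA = New-EC2Subnet -VpcId $vpc.VpcId -CidrBlock "10.0.1.0/24" -AvailabilityZone "us-east-1a"
$subnetB = New-EC2Subnet -VpcId $vpc.VpcId -CidrBlock "10.0.2.0/24" -AvailabilityZone "us-east-1b"
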
A security group acts as a virtual firewall to control inbound and outbound
traffic for your instance. When you launch an instance in a VPC, you must
specify a security group for the instance. If you don't specify a particular group
at launch time, the instance is automatically assigned to the default security
group for the VPC. You can assign up to five security groups to an instance.
Security groups act at the instance level, not the subnet level. Therefore, each
instance in a subnet in your VPC could be assigned to a different set of
security groups.

Security groups are stateful: responses to allowed inbound traffic are allowed
to flow outbound regardless of outbound rules, and vice versa. Traffic can be
restricted by IP protocol, by service port, and by source or destination IP
address. These IP addresses can be individual IP addresses or IP addresses
that are in a Classless Inter-Domain Routing, or CIDR, block. You can also
restrict traffic sources to those that come from other security groups. If you
add and remove rules from the security group, those changes are
automatically applied to the instances that are associated with the security
group.
Note: These virtual firewalls cannot be controlled through the guest OS;
instead, they can be modified only through the invocation of Amazon VPC
application programming interfaces, or APIs.

The level of security provided by the firewall is a function of the ports that you
open, and for what duration and purpose. Well-informed traffic management
and security design are still required on a per-instance basis. AWS further
encourages you to apply additional per-instance filters with host-based
firewalls, such as iptables or the Windows Firewall, so they can be state-
sensitive, dynamic, and respond automatically.
In this example, a security group allows inbound requests to port 443 to the remote
desktop gateway only if the request comes from one of the corporate data center IP
addresses. This allows the data center staff to connect to the remote desktop
gateway, and it blocks all connection requests from other IP address spaces.

A second security group for the application server allows connections to port 3389
only if they come from instances in the remote desktop gateway security group. This
allows the instance to remain in a private subnet, while allowing server
administrators to manage the application server by connecting through the remote
desktop gateway.
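
The following sketch builds those two security groups with the AWS Tools for PowerShell; the VPC ID and the corporate CIDR block are placeholder values.

# Sketch: RD Gateway group allowing HTTPS (443) only from the corporate
# data center address range (placeholder CIDR).
$rdgwSg = New-EC2SecurityGroup -VpcId "vpc-0123456789abcdef0" `
    -GroupName "rdgw-sg" -Description "RD Gateway: HTTPS from corporate only"
Grant-EC2SecurityGroupIngress -GroupId $rdgwSg -IpPermission @(
    @{ IpProtocol = "tcp"; FromPort = 443; ToPort = 443; IpRanges = "203.0.113.0/24" })

# App server group: allow RDP (3389) only from the RD Gateway security group,
# so the instance can stay in a private subnet.
$appSg = New-EC2SecurityGroup -VpcId "vpc-0123456789abcdef0" `
    -GroupName "app-sg" -Description "App server: RDP from RD Gateway only"
$pair = New-Object Amazon.EC2.Model.UserIdGroupPair
$pair.GroupId = $rdgwSg
$perm = New-Object Amazon.EC2.Model.IpPermission
$perm.IpProtocol = "tcp"; $perm.FromPort = 3389; $perm.ToPort = 3389
$perm.UserIdGroupPairs.Add($pair)
Grant-EC2SecurityGroupIngress -GroupId $appSg -IpPermission $perm
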
A network access control list, or network ACL, is an optional layer of security
that acts as a firewall for controlling traffic in and out of a subnet. You can set
up network ACLs with rules similar to your security groups, which adds an
additional layer of security to your VPC.

Network ACLs are stateless; responses to allowed inbound traffic are subject
to the rules for outbound traffic, and vice versa. A network ACL is a numbered
list of rules that are evaluated in order, starting with the lowest numbered rule.
The rules determine whether traffic is allowed in or out of any subnet
associated with the network ACL. A network ACL has separate inbound and
outbound rules, and each rule can either allow or deny traffic.

Your VPC automatically comes with a modifiable default network ACL. By
default, it allows all inbound and outbound traffic. You can create custom
network ACLs. Each custom network ACL starts out closed, which means that
it permits no traffic, until you add a rule.

Each subnet must be associated with a network ACL. If you don't explicitly
associate a subnet with a network ACL, the subnet is automatically associated
with the default network ACL. The default network ACL allows all traffic to flow
in and out of each subnet.

Like security groups, network ACLs are managed through Amazon VPC APIs.
They add an additional layer of protection and enable additional security
through the separation of duties.
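
Here is a short sketch of creating a custom network ACL and opening a single inbound rule, using the AWS Tools for PowerShell; the VPC ID is a placeholder, and protocol number 6 is TCP.

# Sketch: a custom network ACL starts closed; this adds one inbound rule
# allowing HTTPS from anywhere (placeholder VPC ID).
$acl = New-EC2NetworkAcl -VpcId "vpc-0123456789abcdef0"
New-EC2NetworkAclEntry -NetworkAclId $acl.NetworkAclId `
    -RuleNumber 100 -Protocol 6 -RuleAction allow -Egress $false `
    -CidrBlock "0.0.0.0/0" -PortRange_From 443 -PortRange_To 443
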
Most companies usually don’t migrate to the cloud and abandon their physical data
centers immediately. In many situations, you must connect the Amazon EC2 instance
to an on-premises Windows domain. A company with an existing data center might
still use that data center for critical tasks, while also extending their capabilities by
hosting specific applications and services in AWS. Companies in such situations can
choose to use a virtual private network, or VPN, solution. A VPN enables users to
establish secure connections into your VPC via an Amazon EC2 instance. Alternatively,
you can use AWS Direct Connect, or DX, to integrate the VPNs that the company
created with their existing data centers. Using DX enables interaction between
computers in the data center and the resources that run in AWS.

DX is a unique solution that helps companies give their important applications
access to the AWS network with scale, speed, and consistency. DX does not involve the
internet. Instead, it uses dedicated, private network connections between your on-
premises solutions and AWS.

Service benefits
DX is useful in several scenarios, and some common scenarios are described in the
following sections.
Transferring large datasets
Consider a high performance computing, or HPC, application that operates on large
datasets that must be transferred between your data center and AWS. For such
applications, you can connect to the cloud using DX.

Network transfers will not compete for internet bandwidth at your data center or
office location.

The high-bandwidth link reduces the potential for network congestion and degraded
application performance.

Reduced network transfer costs

By using DX to transfer large datasets, you can limit the internet bandwidth used by
your application. By doing so, you can reduce network fees that you pay to your
internet service provider, or ISP. You can also reduce paying for increased internet
bandwidth commitments or new contracts. In addition, all data transferred over DX is
charged at the reduced DX data transfer rate instead of at internet data transfer rates,
which can reduce your network costs.

Improved application performance

Applications that require predictable network performance can also benefit from DX.
Examples include applications that operate on real-time data feeds, such as audio or
video streams. In such cases, a dedicated network connection can provide more
consistent network performance than standard internet connectivity.

Security and compliance

Enterprise security or regulatory policies sometimes require applications that are
hosted on AWS to be accessed only through private network circuits. DX is a solution
to this requirement because traffic between your data center and your application
flows through the dedicated private network connection.

Hybrid cloud architectures

Applications that require access to existing data center equipment that you own can
also benefit from DX. The next section discusses this use case and illustrates different
scenarios that can be supported by DX.
If you have multiple VPCs running inside AWS, such as a VPC for Engineering and a
VPC for Finance, you can use VPC peering within AWS to connect these virtual
networks to one another. You can also establish VPC peering relationships with VPCs
that reside in other AWS accounts.
This diagram illustrates a possible architecture for a Microsoft workload, which could
easily be a custom line-of-business web application:
• In the first build, you see traffic from external sources; and
• In the second build, you see traffic from internal sources.

This type of architecture can be built by using an AWS Quick Start. AWS
CloudFormation templates accelerate the deployment.

You can run .NET applications in EC2 instances that run Windows Server,
and you can run fully managed databases with Amazon RDS for SQL
Server.

Multiple Availability Zones, Elastic Load Balancing, and automatic scaling
add resiliency and robust availability.
AWS provides you with flexible, cost-effective, and easy-to-use data storage options
for your instances. Each option has a unique combination of performance and
durability. These storage options can be used independently, or in combination, to
suit your requirements:

• Amazon EC2 instance store – Many instances can access storage from disks that
are physically attached to the host computer. This storage, which is also known
as ephemeral storage, provides temporary block-level storage for use with an
instance. Instance store volumes are usable only from a single instance during
its lifetime. They can't be detached and then attached to another instance. The
data in an instance store persists through a reboot, but it is lost when the
instance is stopped or terminated. Unlike Amazon EBS volumes, you cannot take
snapshots of an instance store. The instance store is often used to temporarily
store things such as swap files or caches.

• Amazon EBS – Amazon EBS volumes provide durable, detachable, block-level
storage volumes for your Amazon EC2 instances. You can use Amazon EBS as a
primary storage device for data that requires frequent and granular updates.
Because they are directly attached to the instances, they can provide extremely
low latency between where the data is stored and where it might be used on the
instance. For this reason, they can be used to run a database with an Amazon EC2
instance. Amazon EBS volumes can also be used to back up your instances into
Amazon Machine Images, or AMIs. AMIs are stored in Amazon S3, and they can be
reused to create new Amazon EC2 instances.

• Amazon S3 – Amazon S3 is a repository for internet data. Amazon S3 provides
access to a reliable and inexpensive data storage infrastructure. It is designed to
enable web-scale computing. You can store and retrieve any amount of data at any
time, from Amazon EC2 or anywhere on the web. For example, you can use
Amazon S3 to store backup copies of your data and applications.

• Amazon FSx – Amazon FSx is a fully managed, native Microsoft Windows file
system built on Windows Server. With Amazon FSx, you can move your Windows-
based applications that require file storage to AWS. Amazon FSx supports the
Server Message Block, or SMB, protocol; the Windows NT file system, or NTFS;
Active Directory integration; and the Distributed File System, or DFS.

• Amazon EFS – Amazon EFS is a fully managed, cloud-native file system for a broad
range of Linux-based business applications. Accessible via the NFS protocol, it
provides simple, scalable elastic file storage and is easily shared among multiple
applications, instances, and on-premises servers simultaneously.
When you use Amazon EBS, consider these key points:
• Choose storage types that optimize cost and performance, and
• Provision enough IOPS for your workload.

By applying the flexible storage options to a workload, you can architect a performant
and cost-effective solution. For example, say that you are using an Amazon EC2
instance to run Microsoft SQL Server. You would need multiple types of storage with
different requirements for I/O performance, durability, latency sensitivity, and
persistence. For standard database reads and writes, you could use an Amazon EBS
Provisioned IOPS volume. This type of EBS volume could help ensure that the
read/write speed remains consistent during utilization, while also remaining
persistent in the event of disk failure. You could also use a General Purpose SSD
volume for the boot volume of the Amazon EC2 instance, because the boot volume
does not affect read/write performance after the instance is booted. For the
TempDB data files, it is critical that these files have the fastest possible
read/write speed, so you would use instance store, or ephemeral storage, volumes.
Because these volumes are not persistent, you could archive the TempDB data files
to Amazon S3 on a schedule. In Amazon S3, the files could be held in a durable state.
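
A sketch of provisioning and attaching such a volume with the AWS Tools for PowerShell; the Availability Zone, size, IOPS figure, and instance ID are placeholders.

# Sketch: create a Provisioned IOPS (io1) volume for SQL Server data files and
# attach it to a running instance as device xvdf.
$vol = New-EC2Volume -AvailabilityZone "us-east-1a" -Size 500 -VolumeType io1 -Iops 10000
Add-EC2Volume -InstanceId "i-0123456789abcdef0" -VolumeId $vol.VolumeId -Device "xvdf"
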
With Amazon EBS, you can use any of the standard RAID configurations that you
would use with a traditional bare-metal server, as long as that particular RAID
configuration is supported by the operating system for your instance. This is because
all RAID is accomplished at the software level. For greater I/O performance than you
can achieve with a single volume, RAID 0 can stripe multiple volumes together. For
on-instance redundancy, RAID 1 can mirror two volumes together. However, the
maximum performance of Amazon EBS depends on the instance type.

An Amazon EBS-optimized instance uses an optimized configuration stack and
provides additional, dedicated capacity for Amazon EBS I/O. This optimization
provides the best performance for your Amazon EBS volumes by minimizing
contention between Amazon EBS I/O and other traffic from your instance.
EBS-optimized instances deliver dedicated bandwidth to Amazon EBS, with options
between 500 Mbps and 12,000 Mbps, depending on the instance type you use.
Amazon S3 can be used to offload storage requirements from the Amazon EC2
instance. Windows workloads that require large BLOB storage, like SharePoint and
Exchange, can be hosted on Amazon EC2 while also using Amazon S3 to store the
objects in a durable manner. Third-party tools in the AWS Marketplace enable the
user to configure applications to connect to Amazon S3 instead of using a storage
volume on the Amazon EC2 instance. This can result in better performance for the
end user and a lower cost of operation for the workload owner.
Amazon FSx provides a fully managed Windows-native file system. It delivers the
compatibility, features, and performance needed to run Windows enterprise
applications in the cloud:

1. It’s Windows-native – Amazon FSx for Windows File Server is built on Windows
Server. It provides file storage that supports the Windows file system features that
you use. It also provides file access via the SMB protocol, and integrates with
Active Directory. Amazon FSx for Windows File Server supports the following
features:
• DFS Namespaces and DFS Replication;
• Access Control Lists, or ACLs;
• NT File System, or NTFS; and
• VSS, or Volume Shadow Copy Service

2. It’s fully managed – Amazon FSx for Windows File Server sets up and provisions
file servers and storage volumes, reducing the need for administrative overhead.
The service automatically updates Windows Server software, detects and corrects
hardware failures, and regularly performs backups.
3. It delivers fast performance – Amazon FSx for Windows File Server is built on SSD
storage and provides per-file-system throughput of up to 2 GB per second. You
can tune the throughput level independent of your file system size. You can group
multiple Amazon FSx file systems together for up to 10 GB per second of
throughput across petabytes of data.

4. It’s accessible in the AWS cloud and on-premises – Use AWS Direct Connect and
Virtual Private Networks to connect FSx file shares to services that reside on-
premises. By using VPC peering, you can access FSx across Virtual Private Clouds.

5. It’s secure and compliant – Amazon FSx for Windows File Server automatically
encrypts all your data at rest and your data in transit. Amazon FSx is compliant
with the Payment Card Industry Data Security Standard, or PCI-DSS. For sensitive
workloads that are regulated by the Health Insurance Portability and
Accountability Act, or HIPAA, Amazon FSx is a HIPAA Eligible Service. To control
user access, Amazon FSx supports Windows access control lists, or ACLs. It also
protects your data with automatic daily backups of your file systems. Access is
also controlled by using AWS Identity and Access Management, or IAM, and
VPC security groups. Amazon FSx integrates with AWS CloudTrail to
monitor and log your API calls, letting you see actions that users on your
Amazon FSx resources take.

6. It fully supports the SMB protocol – Clients include Microsoft Windows Server
2008 and later, Amazon WorkSpaces and Amazon AppStream 2.0, VMware Cloud
on AWS, and Linux distributions that run smbclient.
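
As a hedged sketch, a file system with these properties can be created with the AWS Tools for PowerShell; the subnet ID, directory ID, and capacity values below are placeholders.

# Sketch: create an Amazon FSx for Windows File Server file system joined to
# an AWS Managed Microsoft AD directory (placeholder values).
New-FSXFileSystem `
    -FileSystemType WINDOWS `
    -StorageCapacity 300 `
    -SubnetId "subnet-aaaa1111" `
    -WindowsConfiguration_ThroughputCapacity 16 `
    -WindowsConfiguration_ActiveDirectoryId "d-1234567890"
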
The following list offers links to more information about topics you learned in this module:
• AWS maintains certifications and attestations that you can reference by visiting the
linked page.
• For an introduction to the AWS security model, please read the linked whitepaper.
• For more information about VPCs, see the linked page.
• For more information about EC2 instance types, see the linked page.
In this module, you learned how foundational Amazon Web Services, or AWS,
services pertain to running Microsoft workloads. These foundational services include
compute services, such as Amazon Elastic Compute Cloud, or Amazon EC2; storage
services; networking services; and domain services.
You learned how to discuss the shared responsibility model, and how to use Virtual
Private Cloud (VPC), including Security Groups, Network Access Control Lists, and
firewalls.
You also learned how to choose storage options for your Microsoft workloads.
Welcome to the Running Microsoft Windows Server on AWS module.
In this module, you will learn how to run Microsoft Windows Server instances in
Amazon Elastic Compute Cloud, or EC2. You will also learn how to create custom
Amazon Machine Images, known as AMIs, for running Microsoft workloads.
Amazon EC2 offers virtual machines, or instances, that customers can launch and
manage with a few clicks or a few lines of code. EC2 supports Windows Server 2008
through 2019.

With EC2, customers can create, save, and reuse their own server images as Amazon
Machine Images. They can launch one instance at a time, or launch a whole fleet of
instances. Following the pay-as-you-go model, customers can add and terminate
instances as needed.

EC2 offers many types of instances, with various levels of CPU, memory, storage,
networking, graphics, and general-purpose performance.
Customers have full control over their virtual instances, including full root access
and/or administrative control over accounts, services, and applications. AWS does not
have any access rights to a customer’s instances or guest operating system, or OS.

AWS Identity and Access Management, or IAM, is used for authentication and
authorization of access to each customer’s AWS resources, but not for OS-level
access. To access the operating system on a customer’s Amazon EC2 instances, the
customer needs a different set of credentials. In the AWS shared responsibility model,
each customer owns the OS credentials, although AWS helps bootstrap the initial
access to the OS.

Customers can connect remotely to their Windows instances by using Remote
Desktop Protocol, or RDP, with an RDP certificate generated for their instance.

They also control the updating and patching of their guest OS, including security
updates.
To provision an Amazon EC2 instance running Windows Server, multiple pieces of
information are required. The list shown here isn’t an exhaustive list; it’s a survey of
some of the most important items needed to provision a running, secure instance
that is a member of a Microsoft Active Directory Domain.

• Starting with item 1, customers must select an Amazon Machine Image to create
a new instance. AMIs provide the base virtual machine image for the
instance. Customers can select one from AWS Marketplace, create one
from an existing Amazon EC2 instance, or use one provided by AWS. You
will learn more about AMIs in the next section.
• Next, customers need network placement and addressing. All Amazon EC2
instances exist in a network. To determine where an instance is placed and
what type of IP addressing is assigned to it by default, customers can check
the settings of the Amazon Virtual Private Cloud, or Amazon VPC, in which the
instance is launched.
• Third, the instance types and sizes needed to support a customer’s
operating system, application, and Windows Server usage requirements
depend on the workload. Amazon EC2 allows customers to choose from
multiple instance types and sizes to select the proper infrastructure
resources for each instance.
• For domain membership, many enterprise customers use Microsoft Active
Directory Domain Services to manage objects across their corporate environment.
Customers can configure an instance to be treated as a domain object by
provisioning Amazon EC2 instances as members of the Active Directory Domain
when the instances are created.
• By configuring user data, customers can supply a batch file or PowerShell script for
the Windows instance to run when it starts. Customers can completely set up a
new instance without logging in directly to the instance. You will learn more about
user data in the next section.
• An Amazon EC2 instance can use two basic types of block storage –
ephemeral storage or Amazon Elastic Block Store, or Amazon EBS,
volumes. Ephemeral storage exists only for the life of the instance; Amazon
EBS volumes persist even after the instance has been stopped or
terminated.
• Tags help customers manage their instances, images, and other EC2 resources by
assigning categories, such as by owner, purpose, billing entity, or environment.
Customers can assign up to 50 tags to an EC2 instance.
• Finally, security groups are stateful firewalls that surround individual
Amazon EC2 instances and allow customers to control the traffic allowed to
pass to the instance. Security groups are applied to specific instances,
rather than network entry points. This increases security and gives
administrators finer granularity control when they grant access to the
instance.
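
Pulling these items together, the sketch below launches a single Windows instance with the AWS Tools for PowerShell and tags it; every ID and name shown is a placeholder.

# Sketch: launch one Windows Server instance into a specific subnet and
# security group, then tag it for cost tracking (placeholder values).
$reservation = New-EC2Instance `
    -ImageId "ami-0123456789abcdef0" `
    -InstanceType "m5.large" `
    -SubnetId "subnet-aaaa1111" `
    -SecurityGroupId "sg-0123456789abcdef0" `
    -KeyName "my-key-pair" `
    -MinCount 1 -MaxCount 1
New-EC2Tag -Resource $reservation.Instances[0].InstanceId `
    -Tag @{ Key = "Owner"; Value = "AppTeam" }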

Every enterprise application deployment requires proper planning for server
capacity and sizing. As such, customers must select the appropriate Amazon
EC2 instance type for each server role in a Windows-based deployment.

Amazon EC2 has a wide selection of instance types, including combinations of
CPU, memory, storage, and networking capacity. This gives customers the
flexibility to choose the appropriate capacity and mix of resources they need
for their applications. When choosing an instance type, customers should
consider each family’s attributes, including:

• Number of cores
• Amount of memory
• Amount and type of storage
• Network performance
• Intel processor technologies

Each type and family includes multiple sizes – small, medium, large, extra
large, double-extra large, and so forth.
Each deployment is different, so customers should follow Microsoft's detailed
guidance on how to properly size their environments based on the number of
users and workloads involved. As a starting point, customers can consider the
minimum requirements for each server role, add additional capacity over the
absolute minimum requirements to allow for growth, and map the requirements
to an Amazon EC2 instance type and size.

For example, the numbers shown here come from the Microsoft SharePoint
deployment guide’s system requirements.
Launching new instances and running tests in parallel is a simple process on AWS.
AWS recommends measuring the performance of applications to identify appropriate
instance types and validate application architecture. Customers should also conduct
rigorous load/scale testing to ensure that their applications can scale as intended.
Customers can avoid overprovisioning and underprovisioning by changing instance
sizes and types as their needs change.

Also, customers should analyze whether their applications can scale across multiple
Amazon EC2 instances by design. They should design applications that are resilient to
reboot and relaunch, to allow for scaling horizontally instead of vertically, where
possible. Tools such as Amazon CloudWatch and AWS Cost Explorer help customers
collect data to track, analyze, and improve expenditures.

In some architectures, using Reserved and Spot Instances to perform workloads can
result in significant savings.
An AMI is a template that contains a software configuration, such as an operating
system, application server, and applications. A customer can use an AMI to launch an
instance, which is the copy of the AMI running as a virtual server on a host computer
in an AWS data center. Customers can launch as many instances as they want from an
AMI, and they can also launch instances from as many AMIs as needed.

An AMI includes the following three components:

• A read-only file system image that includes the operating system and any
additional software required to deliver a service or a portion of it
• Launch permissions that control which AWS accounts can use the AMI to launch
instances, and
• A block device mapping that specifies the volumes to attach to the instance when
it is launched
AWS provides a set of publicly available AMIs that contain software configurations
specific to Windows. Using these AMIs, customers can quickly start building and
deploying applications using Amazon EC2. To begin, customers choose the AMI that
meets their specific requirements. Then, they launch an instance using the AMI. AWS
currently provides AMIs based on the following Windows versions:

• Windows Server 2019
• Windows Server 2016 (64-bit)
• Windows Server 2012 R2 (64-bit)
• Windows Server 2012 (64-bit)
• Windows Server 2008 R2 (64-bit)
• Windows Server 2008 (64-bit)
• Windows Server 2008 (32-bit)

Some of these AMIs also include an edition of Microsoft SQL Server, which can be
Enterprise, Standard, Express, or Web.

Customers can launch an instance from an AWS Windows AMI with Microsoft SQL
Server to run the instance as a database server. Alternatively, customers can launch
an instance from any Windows AMI and then install the database software that they
need on the instance.

Windows Server 2003 is no longer provided, but customers can deploy their own
Windows Server 2003 – 32-bit or 64-bit – in Amazon EC2 to give them time in a
secure and stable environment while they migrate to a more modern OS.

For more information about currently available Windows AMIs, use the link shown
here.
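
One way to find the current AMI IDs programmatically is through the public AWS Systems Manager parameters; this sketch assumes the AWS Tools for PowerShell and shows one published parameter path.

# Sketch: look up the latest Windows Server 2019 base AMI ID in the current
# Region from the public SSM parameter store.
$param = Get-SSMParameter -Name "/aws/service/ami-windows-latest/Windows_Server-2019-English-Full-Base"
$param.Value
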
After customers successfully launch and log in to an instance, they can configure the
instance for a specific application’s requirements. EC2Launch is a set of Windows
PowerShell scripts that runs on Windows Server 2016 and later AMIs. The EC2Launch
scripts replace the EC2Config service that is included on Windows Server 2012 R2 and
earlier AMIs. Both scripts provide similar functions.

EC2Launch performs the following tasks by default during the initial instance boot:
• Sets up new wallpaper that renders information about the instance;
• Sets the computer name;
• Sends instance information to the Amazon EC2 console;
• Sends the RDP certificate thumbprint to the EC2 console;
• Sets a random password for the administrator account;
• Adds DNS suffixes;
• Dynamically extends the operating system partition to include any unpartitioned
space;
• Executes user data, if specified (you will learn more about user data next); and
• Sets persistent static routes to reach the metadata service and AWS Key
Management Service.
Customers can also use EC2Launch to forward messages to the AWS console, initialize
secondary EBS volumes, and configure and schedule Sysprep to run on reboot.
By specifying user data, customers can supply a script to a Windows instance that
executes a series of commands. Scripts can take the form of batch or PowerShell
scripts on Windows instances. By using user data, customers can completely set up a
new instance without ever logging in directly to the instance.

The scripts customers supply do not have to do all of the work themselves. A user
data script could, for example, download and execute a longer script that is stored in
an Amazon S3 bucket. Customers can also download and install a Configuration
Management system, such as Chef or Puppet, and kick off an initialization task from a
reusable Chef Cookbook or Puppet Module.

For EC2Config or EC2Launch to execute user data scripts, customers must
enclose the lines of the specified script in script tags.

Customers can run any command that can be run in a command prompt
window or a Windows PowerShell command prompt.

If customers use an AMI that includes the AWS Tools for Windows PowerShell, they
can also use those cmdlets. If an IAM role is associated with the instance, the
customer does not need to specify credentials to the cmdlets. Applications that run
on the instance can use the role's credentials to access AWS resources such as
Amazon S3 buckets, as shown in the PowerShell with AWS tools example.
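
The on-screen example is not reproduced here, but a minimal sketch of that pattern follows. The bucket and key names are hypothetical, and the call assumes the instance has an IAM role granting S3 read access, so no credentials appear in the script:

# Download a bootstrap script from Amazon S3 using the instance role's temporary credentials
Read-S3Object -BucketName 'example-deployment-bucket' -Key 'bootstrap/setup.ps1' -File 'C:\Temp\setup.ps1'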

Here, you can see two different ways to pass user data to a Windows instance, by
using:
• A set of Windows batch commands, or
• A PowerShell script

The batch example script uses a few simple calls to the winrm utility to configure the
instance to allow remote administration via the Windows Remote Management
service.

The PowerShell example script uses built-in PowerShell commands to configure the
Windows instance as a web server running Internet Information Services, or IIS. As
you will see in this module’s lab, this script could be extended to install a full, working
ASP.NET application as well—all without the customer logging in to the instance
directly.
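
The on-screen script is not reproduced here, but a user data script along these lines would achieve the same result. This is a hedged sketch; the placeholder page content is an assumption:

<powershell>
# Install the IIS web server role with its management tools
Install-WindowsFeature -Name Web-Server -IncludeManagementTools
# Publish a simple placeholder page to the default IIS site
Set-Content -Path 'C:\inetpub\wwwroot\index.html' -Value '<h1>Deployed via EC2 user data</h1>'
</powershell>
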
Instance metadata is data about the instance that you can use to configure the
instance from a script or command. Customers can use instance metadata to make user data and other scripts self-describing.
Instance metadata is divided into categories. Some examples are listed here.
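
For example, a script can query the instance metadata endpoint directly. This minimal sketch uses the IMDSv1-style call; instances configured to require IMDSv2 need a session token first:

# Retrieve this instance's ID and Availability Zone from the metadata service
Invoke-RestMethod -Uri 'http://169.254.169.254/latest/meta-data/instance-id'
Invoke-RestMethod -Uri 'http://169.254.169.254/latest/meta-data/placement/availability-zone'
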
Customers can create their own AMIs that contain customized settings, installed
applications, and configurations.

Customers can launch single or multiple instances from an AMI when they need the
same configuration.

AMIs:
• Contain all customizations,
• Are anchored to the current Region,
• Reboot the instance by default to ensure consistency, and
• Create the instance with all attached volumes

Some key points about AMIs include the following:


• Customers must run Sysprep to strip instance-specific networking information.
• EC2Config and EC2Launch support a shutdown with sysprep option, which
customers can use in their customizations.
• Building an AMI creates a snapshot. Storage and data retrieval costs are incurred
for snapshots of EBS volumes, which are stored in Amazon S3.
• Creating images directly from snapshots does not work with Windows volumes.
Customers should create an AMI from an existing instance.
Here’s how to create a custom AMI from a running EBS-backed instance:
1. First, select an appropriate EBS backed AMI to serve as a starting point for your
new AMI.
2. Next, choose Launch to launch an instance of the EBS-backed AMI that you
selected.
3. While the instance is running, connect to it and customize it for your needs.
4. In the navigation pane, choose Instances and select your instance. Then, from the Actions menu, choose Image, and then Create Image. You give the AMI a name and description, and add instance volumes. During the AMI creation process, Amazon EC2 creates snapshots of your instance’s root volume and any other EBS volumes attached to your instance.
5. While your AMI is being created, you can choose AMIs in the navigation pane to
view its status.
6. Finally, launch an instance from your new AMI and verify that it works properly.

Alternatively, use the AWS command-line interface to create an image that is based
on an existing instance as shown on the screen.
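
The on-screen command is not reproduced here; the equivalent AWS Tools for Windows PowerShell call, which mirrors the aws ec2 create-image CLI command, is sketched below with a placeholder instance ID:

# Create an AMI from an existing instance (the instance reboots by default for consistency)
New-EC2Image -InstanceId 'i-0123456789abcdef0' -Name 'MyCustomWindowsAMI' -Description 'Base image with corporate configuration'
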
For more details, use the link shown.
Your customers must understand the differences among licensing models.

Under the License Included model, AWS manages the license. Customers pay as they
go. AWS provides the images and supports legacy versions.

Under the License Mobility model, customers must have active Microsoft software
assurance. Microsoft does require a verification process. Customers can import their
images and software. Eligible software includes Microsoft SQL Server, Remote Desktop Services (RDS), Exchange, and SharePoint.

The final model is Bring Your Own License. Most customers choose Dedicated Hosts. Windows Server can be deployed on Dedicated Hosts. Customers are responsible for compliance with Microsoft, and customers can import and use their own software on these servers.

Software Assurance and License Mobility are not required for licenses purchased
prior to 10/1/2019 and not upgraded to versions released after 10/1/2019.

Understanding what Microsoft licensing customers own is a critical element in helping them build the financial business case to migrate Microsoft workloads to AWS.

For more details about licensing models, refer to the training catalog for the extended
licensing training course.
Certifications and attestations include the following:
• AWS publishes Service Organization Controls reports: a SOC 1 report, published under both the SSAE 16 and ISAE 3402 professional standards, as well as SOC 2 (Security) and SOC 3 reports.
• AWS achieved ISO 9001, ISO 27001, ISO 27017, and ISO 27018 certifications, was
successfully validated as a Level 1 service provider under the Payment Card
Industry (PCI) Data Security Standard (DSS), and currently offers HIPAA Business
Associate Agreements to covered entities and their business associates subject to
HIPAA.
• AWS achieved FedRAMP compliance, received authorization from the United
States General Services Administration to operate at the FISMA Moderate level,
and is also the platform for applications with Authorities to Operate, or ATOs, under
the Defense Information Assurance Certification and Accreditation Program, or
DIACAP. NIST, FIPS 140-2, CJIS, and DoD SRG Levels 2 and 4 are some of the
other certifications AWS has received.
• For more information, see: http://aws.amazon.com/compliance/

For more information about AWS security, see:


• https://aws.amazon.com/premiumsupport/
• http://d0.awsstatic.com/whitepapers/Security/Intro_to_AWS_Security.pdf
• https://d0.awsstatic.com/whitepapers/aws-security-whitepaper.pdf

For more information about Amazon VPCs, see: http://aws.amazon.com/vpc/

For more information about EC2 instance types, see:


• http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/instance-types.html
• http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html
• http://aws.amazon.com/ec2/instance-types/

For more information on Amazon EC2 instance sizing for Microsoft SharePoint, see:
http://docs.aws.amazon.com/quickstart/latest/sharepoint/ec2.html.

For more information on Windows AMIs, see:
http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/AMIs.html.

For a full description of metadata and the available options, see:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html
In this module, you learned how to run Microsoft Windows Server instances in Amazon Elastic Compute Cloud, or EC2. You also learned how to create custom Amazon Machine Images, known as AMIs, for running Microsoft workloads.
In this module, you will learn how to run Active Directory services on AWS. You will
learn about three options available for deploying Active Directory on AWS and how to
position each option for acceptance. You will also learn how to join domains, and
provide authentication and network naming services that apply to running Active
Directory on AWS.
To begin, consider these three areas—single sign-on, group access, and
central policy management. Windows-oriented organizations take advantage
of Active Directory in these three areas.

Windows environments provide a seamless single sign-on experience. This adds security and convenience for applications when users sign on to
authenticated Active Directory accounts. Active Directory is required for
authentication and authorization. The AWS Directory Service allows customers
to assign AWS Identity and Access Management, or IAM, roles to Active Directory users and groups in the AWS Cloud and to existing, on-premises Microsoft
Active Directory users and groups. This is done by using the Active Directory
Connector.

Once customers have more than a few users, they often centralize application
and resource access, so they can manage access control policies for
applications and resources, such as printers and file shares. Customers use
Active Directory-integrated group policies to centralize access.

For the third area, Active Directory provides a way for computers to join an
Active Directory domain. This makes it possible to centrally manage
computers by using Active Directory group policies.

Active Directory must be available in an infrastructure to support these three capabilities.
Active Directory, which is a fundamental Microsoft workload, can be deployed in
three ways:

• Customers can connect. They can extend their on-premises Active Directory into
AWS by joining cloud-based workloads to their existing directory domain.
• Or, customers can re-host. They can host Active Directory on AWS by installing
Active Directory instances on Amazon Elastic Compute Cloud, or Amazon EC2.
• Or, customers can re-platform. They can use AWS Managed Microsoft Active
Directory, which provides a set of highly available domain controllers, monitoring
and recovery, data replication, snapshots, and software updates that are
automatically configured and managed.

Furthermore, customers can use on-premises Active Directory credentials by setting up an Active Directory trust, which allows customers to grant resource access to users, groups, and computers across entities.
<CONNECT>
Your customers can join cloud-based workloads to their existing on-premises Active
Directory infrastructures. Active Directory is deployed in the customer data center,
and Windows Servers are deployed into Amazon Virtual Private Cloud, or Amazon VPC,
subnets in the AWS Cloud. Windows Servers can be promoted to domain controllers
to make AD DS highly available in the AWS Cloud. Ports to support cloud to on-
premises Active Directory must be accessible, preferably through Direct Connect or
VPN connections.

<RE-HOST>
Amazon EC2 Active Directory is Active Directory that customers can manage and run
in the cloud. It can be standalone or replicated with your customer’s on-premises
network. Customers are responsible for all management and availability. If your
customer is replicating to an on-premises network, they must open all ports required
for replication, which is more ports than a trust requires. Using a cloud-based, self-managed Active Directory reduces the Active Directory traffic that Amazon EC2 workloads in the cloud send to on-premises networks.

<RE-PLATFORM>
Using Active Directory in the cloud results in lower latency for workload authorization.
It also reduces chances for failure due to a network outage on an on-premises
network, particularly in virtual desktop infrastructure, or VDI, scenarios. If running
standalone, customers do not have to open Active Directory replication or trust ports.
If using a trust, your customer only needs to open the trust ports, which are fewer
ports than required for replication. While trusts require some communications to an
on-premises infrastructure, the traffic is limited to the data centers.

AWS Microsoft Active Directory is a managed solution that eliminates the need for
customers to handle availability, monitoring, patches, and backups. It can be a fully
contained Active Directory in the cloud, where your customers manage users, groups,
and computers. It can also support cross-forest trusts to an on-premises Active
Directory.
CONNECT
Customers who run minimal EC2 instances that require access to Active Directory, and are willing to accept some latency to Active Directory over on-premises links, may choose to extend on-premises Active Directory to the AWS Cloud. To do so, customers must adopt security policies that allow Active Directory ports to be exposed to the internet, and architect highly available connectivity to on-premises Active Directory services.

For customers who are considering Active Directory on-premises only, make sure
they understand how the link latency might affect application performance. This is
because any Kerberos-authorized traffic from services in the cloud must communicate
through the link to on-premises. This increases the round-trip delays in processing
application requests. Customers also need to understand the security implications of
opening their corporate network for access by cloud apps for authentication and
authorization.

EC2
Customers who run applications that are not yet supported by AWS Managed Active
Directory, such as Exchange or SharePoint, and need a replicated, multi-Region Active
Directory solution can choose to host Active Directory on EC2.

The decision to use Amazon EC2 Active Directory instances is primarily driven by two
issues:
• AWS Managed Microsoft Active Directory does not currently delegate key permissions that are
needed to support some applications. The classes of applications involved typically
require schema extensions, special service accounts, or access to containers
outside of the delegated OU. Examples of applications affected by this include
Exchange and SharePoint. Before making a decision, customers should have a
complete list of applications they want to run in the cloud, and they should conduct
a review of permissions required versus permissions granted in Microsoft Active
Directory.
• While trusts can be effective, customers might need to replicate an Active Directory
solution across multiple Regions. Because AWS Managed Microsoft Active Directory cannot be part of an on-premises forest, Amazon EC2 Active Directory instances are required
for a cloud-based Active Directory solution.

MANAGED ACTIVE DIRECTORY
Customers who want to minimize the cost and effort of running Active Directory, and who run cloud-based applications, such as:
• Amazon Relational Database Service (Amazon RDS) for SQL Server,
• AWS enterprise applications, or
• Windows workloads on EC2
may choose to use the AWS Managed Active Directory service. In order to use Windows authentication for Amazon RDS, customers must use AWS Managed Active Directory.
In the next section, you will learn about the Active Directory Connector service.
Active Directory Connector is a proxy service that provides an easy way to connect
compatible AWS applications, such as Amazon WorkSpaces, Amazon QuickSight, and
Amazon EC2 for Windows Server instances, to an existing on-premises Microsoft
Active Directory.

With Active Directory Connector, customers can connect AWS applications to existing
on-premises Microsoft Active Directory domains. Active Directory Connector does
not require directory synchronization or federation infrastructure. With Active
Directory Connector, customers can forward AWS sign-in requests to on-premises
Active Directory domain controllers for authentication. The AWS services shown here
can connect to Active Directory Connector.
With Active Directory Connector, customers simply add one service account to their
Active Directory. Active Directory Connector eliminates the need for directory
synchronization, or the cost and complexity of hosting a federation infrastructure.
When customers add users to AWS applications, such as Amazon QuickSight, Active
Directory Connector reads the existing Active Directory to create lists of users and
groups from which to select. When users log in to the AWS applications, Active
Directory Connector forwards sign-in requests to the customer’s on-premises Active
Directory domain controllers for authentication. Active Directory Connector redirects
directory requests in the AWS environment to an on-premises Microsoft Active
Directory without caching information in the cloud.

Customers can manage AWS resources, such as EC2 instances or Amazon Simple
Storage Service, or Amazon S3, buckets, through IAM role-based access to the AWS
Management Console, and join EC2 Windows instances to an on-premises Active
Directory domain through Active Directory Connector. Active Directory Connector
also allows users to access the AWS Management Console and manage AWS
resources by logging in with their existing Active Directory credentials.

With Active Directory Connector, your customers continue to manage Active Directory as they do now. For example, they add new users and groups, and update
passwords using standard Active Directory administration tools in their on-premises
Active Directory. This helps customers consistently enforce their security policies,
such as password expiration, password history, and account lockouts, whether users
are accessing resources on premises or in the AWS Cloud.

Customers can also use Active Directory Connector to enable multi-factor authentication, or MFA, for AWS application users by connecting it to an existing
RADIUS-based MFA infrastructure. This provides an additional layer of security when
users access AWS applications.

Active Directory Connector comes in two sizes: small and large. For organizations that
have up to 500 users, customers use small Active Directory Connector. Customers use
large Active Directory Connector when they have from 500 to 5,000 users.
This illustration shows the authentication flow and network path that is used when
customers enable AWS Management Console access via the Active Directory
Connector:

1. First, a user opens the secure custom sign-in page and supplies their Active
Directory user name and password.
2. Next, the authentication request is sent over Secure Sockets Layer, or SSL, to
Active Directory Connector.
3. Third, Active Directory Connector performs LDAP authentication to Active
Directory. The Active Directory Connector locates the nearest domain controllers
by querying the SRV DNS records for the domain.
4. After the user is authenticated, Active Directory Connector calls the Security
Token Service, or STS, AssumeRole method to get temporary security credentials
for that user. Using those temporary security credentials, Active Directory
Connector constructs a sign-in URL that users use to access the console.

When customers expose their on-premises Active Directory through Active Directory Connector via the Amazon VPC, all Amazon EC2 instances that
need Active Directory must have access to on-premises Active Directory. This
might present security concerns for companies that do not want external traffic
coming into the on-premises network. Latency for Active Directory-dependent
cloud workloads is higher, and Amazon EC2 instances may have issues if the
network link to the on-premises network fails.
In this example, the customer deploys and manages their Active Directory Domain
Services installation on Amazon EC2 instances. In this way, they can set up Active
Directory on EC2 instances in the same way they manage on-premises directory
services. Customers will have full end-to-end control over their directory, and they
can use all the features a self-managed service provides.
Amazon provides Quick Starts that you can use to accelerate Microsoft services
deployments. The example shown here helps customers manage and deploy their
own Active Directory Domain Services on Amazon EC2 instances in an Amazon Virtual
Private Cloud (Amazon VPC) that includes the following:
• Amazon VPCs with subnets
• NAT gateways
• Private and public routes
• Systems Manager Automation documents that set up and configure AD DS and
Active Directory-integrated DNS
• Windows Server instances
• Security groups and rules for traffic between instances
• Active Directory sites and subnets
• Sync/replication or trust to corporate domain controllers
The example is built on the AWS Cloud infrastructure, and sets up and configures AD
DS and Active Directory-integrated DNS on the AWS Cloud with trust or replication to
corporate Active Directory domain controllers. Because it doesn’t include AWS
Directory Service, the customer handles the AD DS maintenance and monitoring
tasks. In this scenario:
1. Active Directory is deployed in the customer data center, and Windows servers are
deployed into two different VPC subnets
2. Communications with on-premises networks are established through a VPN tunnel
or AWS Direct Connect
3. The Windows servers can be promoted to Domain Controllers in the on-premises
Active Directory forest, making AD DS highly available in the AWS cloud.
4. Additional instances that are deployed in the VPC will have access to cloud-based
domain controllers for secure, low-latency services and DNS.
AWS Directory Service for Microsoft Active Directory, also known as AWS Managed
Microsoft Active Directory, is powered by an actual Microsoft Windows Server Active
Directory that’s managed by AWS, in the AWS Cloud. It enables customers to migrate
a broad range of Active Directory–aware applications to the AWS Cloud.

With AWS Managed Microsoft Active Directory, customers can run directory-aware
workloads in the AWS Cloud, including Microsoft SharePoint and custom .NET and
SQL Server-based applications. It also supports AWS managed applications and
services, including Amazon WorkSpaces, Amazon WorkDocs, Amazon QuickSight,
Amazon Chime, Amazon Connect, and Amazon Relational Database Service for
Microsoft SQL Server, or Amazon RDS for SQL Server.

AWS Directory Service for Microsoft Active Directory is powered by Windows Server
2012 R2. When customers select and launch this directory type, it is created as a
highly available pair of domain controllers connected to the customer’s Amazon VPC.
The domain controllers run in different Availability Zones in a Region of the
customer’s choice. Host monitoring and recovery, data replication, snapshots, and
software updates are automatically configured and managed for customers.
AWS provides monitoring, daily snapshots, and recovery as part of the service—your
customers add users and groups to AWS Managed Microsoft Active Directory, and
administer Group Policy by using familiar Active Directory tools that run on a
Windows computer joined to the AWS Managed Microsoft Active Directory domain.
Customers can also scale the directory by deploying additional domain controllers,
and they can improve application performance by distributing requests across a
larger number of domain controllers.

AWS Managed Microsoft Active Directory is approved for applications in the AWS
Cloud that are subject to the United States Health Insurance Portability and
Accountability Act, or HIPAA, or the Payment Card Industry Data Security Standard,
known as PCI DSS. Customers enable compliance for their directories.
All compatible applications work with user credentials that customers store in AWS
Managed Microsoft Active Directory, or customers can connect to their existing
Active Directory infrastructure with a trust, and use credentials from an Active
Directory running on-premises or on EC2 Windows. If a customer joins EC2 instances
to an AWS Managed Microsoft Active Directory, their users can access Windows
workloads in the AWS Cloud with the same Windows single sign-on (SSO) experience
as when they access workloads in the on-premises network. In this scenario:
1. AWS Managed Microsoft Active Directory is deployed in two Availability Zones
2. Communications with on-premises networks are established through a VPN tunnel
or AWS Direct Connect
3. Customers connect to their existing on-premises Active Directory infrastructure to
the AWS Managed AD with a trust.
4. Users can access Windows workloads in the AWS Cloud with the same Windows
single sign-on (SSO) experience as when they access workloads in the on-premises
network.
AWS Managed Microsoft Active Directory is available in two editions: Standard and
Enterprise.

AWS Managed Microsoft Active Directory Standard Edition is optimized to be a primary directory for small and midsize businesses with up to 5,000 employees. It
provides enough storage capacity to support up to 30,000 directory objects, such as
users, groups, and computers.

AWS Managed Microsoft Active Directory Enterprise Edition is designed to support enterprise organizations with up to 500,000 directory objects.

Pricing varies by edition, Region, number of domain controllers, and directory sharing.

Switching between editions is not supported, so your customers must be sure that
they choose the correct edition.

For information about pricing, check online to get the most up-to-date numbers.
AWS Quick Starts provide AWS CloudFormation templates to support three
deployment scenarios for Active Directory implementation. For each scenario,
customers also have the option to create a new Amazon VPC or use an existing
Amazon VPC infrastructure. Customers can choose the scenario that best fits their
needs.

With scenario 1, shown here, customers deploy and manage their own AD DS
installation on the Amazon EC2 instances. The AWS CloudFormation template for this
scenario builds the AWS Cloud infrastructure, and sets up and configures AD DS and
Active Directory-integrated DNS in the AWS Cloud. It doesn’t include AWS Directory
Service, so customers must handle all AD DS maintenance and monitoring tasks.
Customers can also choose to deploy the Quick Start into an existing Amazon VPC
infrastructure.

In scenario 2, customers extend on-premises AD DS to AWS on Amazon EC2 instances. The AWS CloudFormation template for this scenario builds the base AWS
Cloud infrastructure for AD DS, and customers perform several manual steps to
extend their existing network to AWS and to promote their domain controllers. As in
scenario 1, customers manage all AD DS tasks. They can also deploy the Quick Start
into an existing Amazon VPC infrastructure.

With scenario 3, customers deploy AD DS with AWS Directory Service in the AWS
Cloud. The AWS CloudFormation template for this scenario builds the base AWS
Cloud infrastructure and deploys AWS Directory Service for Microsoft Active
Directory, which offers managed AD DS functionality in the AWS Cloud. AWS Directory
Service takes care of AD DS tasks, such as building a highly available directory
topology, monitoring domain controllers, and configuring backups and snapshots. As
with the first two scenarios, customers can deploy the Quick Start into an existing
Amazon VPC infrastructure.
Customers can enable their users to access Microsoft Office 365 with
credentials managed in AWS Directory Service for Microsoft Active Directory,
also known as AWS Microsoft Active Directory. To do this, customers deploy
Microsoft Azure AD Connect and Active Directory Federation Services for
Windows Server 2016, known as AD FS 2016, with AWS Microsoft Active
Directory. AWS Microsoft Active Directory enables customers to build a
Windows environment in the AWS Cloud, synchronize AWS Microsoft Active
Directory users into Microsoft Azure Active Directory, and use Office 365, all
without needing to create and manage Active Directory domain controllers.
Customers can benefit from the broad set of AWS Cloud services for compute,
storage, database, and Internet of Things, or IoT, while continuing to use
Office 365 business productivity apps—all with a single Active Directory
domain.

Office 365 provides different options to support user authentication with identities that come from Active Directory. One common way to do this is to use Azure AD Connect and AD FS together with Active Directory. In
this model, customers use Azure AD Connect to synchronize user names from
Active Directory into Azure Active Directory so Office 365 can use the
identities. To complete this solution, customers use AD FS to enable Office
365 to authenticate the identities against Active Directory.

Read the associated blog post to learn how to use Azure AD Connect and AD
FS with AWS Microsoft Active Directory, so your customers’ employees can
access Office 365 with their Active Directory credentials.
In this section, you will learn about joining EC2 instances to an Active Directory
domain.
Customers can seamlessly join an EC2 instance to a directory domain when the
instance is launched using the Amazon EC2 Systems Manager. If customers need to
manually join an EC2 instance to their domain, they must launch the instance in the
proper Region and security group or subnet, and then join the instance to the
domain.

To connect remotely to these instances, customers must have IP connectivity to the instances from the network they are connecting from. In most cases, this requires
that an internet or private gateway is attached to the Amazon VPC and that the
instance has a reachable IP address.

When customers launch an instance using the Amazon EC2 console, they can
join the instance to a domain. If they don't already have a Systems Manager
document, the wizard creates one and associates it with the instance.
Customers can also join the instance to the domain by associating the
Systems Manager document to the instance by using the AWS Tools for
PowerShell or the AWS Command Line Interface, called the AWS CLI.
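
A hedged sketch of the PowerShell approach follows. AWS-JoinDirectoryServiceDomain is the AWS-managed document for domain join; the directory ID, domain name, DNS addresses, and instance ID below are illustrative placeholders:

# Associate the domain-join Systems Manager document with a running instance
New-SSMAssociation -InstanceId 'i-0123456789abcdef0' -Name 'AWS-JoinDirectoryServiceDomain' `
    -Parameter @{ directoryId = 'd-1234567890'; directoryName = 'corp.example.com'; dnsIpAddresses = '10.0.0.10','10.0.1.10' }
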
In this module, you learned how to run Active Directory services on AWS. You learned
about three options available for deploying Active Directory on AWS and how to
position each option for acceptance. You also learned how to join domains, and
provide authentication and network naming services that apply to running Active
Directory on AWS.
In this module, you will learn how to run SQL Server databases on AWS. You will also
learn how to choose the most suitable deployment options, and select compute and
storage resources. Finally, you will learn how to migrate databases from existing
platforms to AWS.
A foundational workload is the database service. With AWS, customers have multiple
deployment options. Whether they decide to manage the environment with Amazon
Elastic Compute Cloud, or Amazon EC2; deploy to a managed service with the
Amazon Relational Database Service, or Amazon RDS; or migrate to native, open
databases, they will have:
• A cost-effective option for hosting databases,
• Complete control for managing software, compute, and storage resources, and
• Rapid provisioning through relational database Amazon Machine Images, or AMIs, that enable them to provision servers with the database service already installed

For customers who re-host SQL Server on Amazon EC2:


• EC2 is the AWS self-managed solution. AWS manages the hardware and
infrastructure, but the customer retains administrative rights to take care of the
rest.
• This solution is great for customers looking for a more familiar cloud experience
and want to retain a higher control level, customization, and administrative access
to their workloads.
• Once migrated, AWS helps customers upgrade to a newer SQL version. For EC2,
AWS offers a streamlined SQL and Windows 2008 Upgrade Tool.

For customers who re-platform SQL Server on Amazon RDS:


• Amazon RDS is the AWS managed DB solution. Amazon RDS for SQL Server makes
it easy to set up, operate, and scale SQL Server deployments in the cloud.
• Amazon RDS frees the customer to focus on application development by managing
time-consuming database administration tasks, including provisioning, backups,
software patching, monitoring, and hardware scaling.
• Once migrated, AWS helps customers upgrade to a newer SQL version. On RDS,
this is an easy four-click process.
• Customers use RDS automation to shift resources and focus on their business
value-making tasks.

For customers who re-factor SQL Server and adopt cloud-native services on their
own timetable:
Additional savings and flexibility can be realized with a move to a variety of open
source database solutions on AWS. Customers can save significant cost by moving off
the proprietary SQL Server engine and onto a fully managed relational database
service, like Amazon Aurora, which is compatible with the open source engines MySQL and PostgreSQL. AWS has refactoring tooling and services available to help customers
move to cloud-native solutions, such as Aurora.

Note that Microsoft ended their support for SQL Server 2008 on July 9, 2019. To learn
more about migrating legacy applications to AWS, visit the AWS website.
Customers have two options to run SQL Server on AWS.
The first option is to re-host SQL Server on EC2 Windows.
For corporate or third-party legacy and custom applications, including line-of-
business applications, customers can launch a database to support these apps by
using Amazon EC2 and Amazon EBS.

The second option to run SQL Server on AWS is to re-platform to Amazon RDS.
Amazon Relational Database Service is a managed service that makes it easy to
deploy a relational database to support line-of-business applications that run on
AWS:
• Amazon RDS automates database administration tasks, such as provisioning,
patching, backup, recovery, failure detection, and repair.
• It runs in Multi-AZ deployments to provide automatic failover, and
• It integrates with AWS Identity and Access Management for granular resource
permission controls.

Whether customers decide to self-manage their environment with EC2 or deploy to a managed service with RDS, they will have:
• A cost-effective option for hosting SQL Server,
• Complete control for managing software, compute, and storage resources, and
• Rapid provisioning through relational database AMIs that enable customers to store database machine images.
Amazon EC2 is supported with either bring-your-own software or preconfigured
AMIs. Customers can choose one of the preconfigured options, or create custom-built
solutions that use versions or editions that the preconfigured options do not support.
With SQL Server on EC2, the customer manages virtual machine security, storage,
network ports, and so forth. The customer also maintains full SQL Server sysadmin
privileges to do so.

At times, some customers struggle to set up a multi-site, high availability option for
their SQL Server instance, either because of expense or technical challenges. With
Amazon RDS for SQL Server, customers can select an option when they launch an
Amazon RDS instance to set up a Multi-AZ SQL Server cluster that uses synchronous
replication between two Availability Zones, by using database mirroring.

Both EC2 and RDS support storage encryption for all editions using KMS, and
customers running Enterprise Edition can use Transparent Data Encryption with both
services.

Both services support Windows and SQL Server authentication, and AWS manages
the Operating System installation.
If customers want to take advantage of automated software patching, they should
choose Amazon RDS for SQL Server; otherwise, they will need to manage the
maintenance tasks with SQL Server on Amazon EC2.

If customers want to install third-party tools or run specific database maintenance plans, they should run SQL Server on Amazon EC2; otherwise, AWS provides all
necessary tools and maintenance.
Customers can save significant cost by adopting cloud-based database services or
moving off the proprietary SQL Server engine and onto a fully managed relational
database service. AWS provides refactoring tools and services to help customers
move to cloud-native solutions, such as Aurora or Redshift.

Amazon RDS
Amazon RDS is a managed service that helps customers set up, operate, and scale a
relational database in the cloud. There’s no need for hardware or software
installation. RDS provides cost-efficient and resizable capacity while automating
administration tasks, such as hardware provisioning, database setup, patching, and
backups.
• Customers can use RDS to replace most user-managed databases, and it can be
instantiated in minutes. Customers can also control when patching takes place.
• As with many AWS services, it’s pay-as-you-go. In addition, customers can bring
their own licenses for databases, such as Microsoft SQL Server, if they want.
• RDS frees database administrators, or DBAs, from 70% of the typical database
maintenance work. This service is like moving an on-premises database to the
cloud.
Amazon Aurora
Amazon Aurora is a MySQL and PostgreSQL-compatible relational database built for
the cloud that combines the performance and availability of traditional enterprise
databases with the simplicity and cost-effectiveness of open source databases.
Amazon Aurora has an architecture that decouples the storage and compute
components.

Amazon Aurora is fully managed by Amazon RDS.

Aurora is faster than other standard databases and provides the security, availability,
and reliability of commercial databases at much lower cost.

Amazon Aurora features a distributed, fault-tolerant, self-healing storage system that automatically scales up to 64 TB per database instance. It delivers high performance
and availability with up to 15 low-latency read replicas, point-in-time recovery,
continuous backup to Amazon Simple Storage Service, or Amazon S3, and replication
across three Availability Zones, or AZs. Amazon Aurora is designed to offer greater
than 99.99% availability, replicating six copies of data across three Availability Zones,
and backing up data continuously to Amazon S3.

Amazon Aurora provides multiple levels of security for databases, which includes
network isolation, encryption at rest by using AWS Key Management Service, or KMS,
and encryption of data in transit using Secure Sockets Layer, or SSL.

Migrating data from Microsoft SQL Server databases to Amazon Aurora can be done
using the AWS Database Migration Service. Customers can begin a data migration
with a few clicks, and the source database remains fully operational during the
migration, minimizing downtime to applications using that database.

Amazon Redshift
Amazon Redshift is a fast, scalable data warehouse that customers can use to analyze
the data across their data warehouse and data lake. Amazon Redshift delivers 10
times faster performance than other data warehouses by using machine learning,
massively parallel query execution, and columnar storage on high-performance disk.
Customers can set up and deploy a new data warehouse in minutes, and run queries
across petabytes of data in their Amazon Redshift data warehouse, and exabytes of
data in their data lake built on Amazon S3.

Amazon Redshift uses machine learning, a massively parallel architecture, compute-optimized hardware, and result set caching to deliver high throughput and subsecond
response times, even with thousands of concurrent queries. With Amazon Redshift,
customers spend less time waiting, and more time gaining insights from their data.
Amazon Redshift is streamlined, enabling customers to deploy a new data warehouse
in minutes. Amazon Redshift automates most of the common administrative tasks to
manage, monitor, and scale a data warehouse. This helps customers break free from
the complexities of managing on-premises data warehouses.

Amazon Redshift is less than 1/10th the cost of traditional, on-premises data
warehouses. Amazon Redshift requires no upfront costs, and customers only pay for
what they use.

Amazon Redshift enables customers to scale from querying gigabytes to exabytes of data across their Amazon Redshift data warehouse and Amazon S3 data lake. They
can quickly analyze any size of data in S3 with no extract, transform, and load, known
as ETL, required. Customers can resize their Amazon Redshift cluster with a few clicks
on the console or an API call. With Amazon Redshift, customers can scale up or down
as their needs change.

Amazon Redshift extends customers’ data warehouse to their data lake to help them
gain unique insights that they could not get by querying independent data silos. They
can directly query open data formats stored in Amazon S3 with Redshift Spectrum, a
feature of Amazon Redshift, without the need for unnecessary data movement. This
enables customers to analyze data across their data warehouse and data lake,
together, with a single service.

Amazon Redshift runs mission-critical workloads for large financial services, healthcare, retail, and government organizations. The database can be encrypted
using AWS KMS or hardware security module, or HSM. Customers can also isolate
their clusters by using Amazon Virtual Private Cloud, or Amazon VPC.
In this section, you learn how to choose an instance type for SQL Server or RDS.
Customers can scale the compute and memory resources that power their database
deployments up or down, typically in minutes.

As storage requirements grow, customers can also provision additional storage. SQL
Server supports up to 16 TB, and storage scaling is on-the-fly, with zero or near-zero
downtime.

Storage and compute instance types are decoupled. When a customer scales a
database instance up or down, the storage size and type remain the same.
From a planning perspective, many SQL Server workloads benefit from large amounts
of memory, in relation to CPU, for caching purposes. Customers should consider
memory-optimized instances, unless their particular workload is processing heavy,
such as running stored procedures, complex reporting queries, or computations.
Licensing and edition also determine availability, with Express Edition unavailable on
the largest classes, and Enterprise Edition only available on the largest classes if the
customer chooses the License Included model. The edition can also determine
available storage options.

Storage can also be modified, which, in most cases, does not involve downtime. Older
instance types might require a short period of downtime during the first scale storage
operation performed on the instance. On SQL Server, the new storage size is made
available to the database within minutes of the operation starting. It can also be
scheduled to occur during the next maintenance window. Storage performance will
be degraded for a period, usually for several hours, but it can be several days, after
storage is modified, while the new storage configuration is optimized. Ongoing
storage optimization is indicated through the DB instance status, by using the console
or API. Once storage has been modified, it cannot be modified again for 6 hours, or as
long as the instance is undergoing storage optimization, whichever is longer. Storage
optimization time is roughly proportional to the pre-modification storage size of the
instance.

Highlighted on this slide are the instance types that are available for Database
Services.
SQL Server instance performance depends on many factors, but if customers focus on
the infrastructure level, it broadly depends on CPU resources, amount of memory,
network throughput, storage performance, and size.

At AWS, practically all these depend on the DB instance class a customer selects to
run the instance. Because the storage is network attached, the overall networking
capabilities of the instance class affect input-output throughput. Customers also have
an array of storage subsystem choices, with different performance levels and prices.

Amazon RDS supports three types of instance classes – standard, burstable performance, and memory-optimized:
• Standard provides a balance of compute, memory, and network resources,
• Burstable performance provides a baseline performance level, with the ability to
burst to full CPU usage, and
• Memory-optimized includes X and R types. X offers one of the lowest price per
gibibyte of RAM among the database instance classes, and R offers improved
networking and Amazon Elastic Block Store, or Amazon EBS, performance.
In each instance class, customers can choose an instance type that provides the
needed balance of virtual CPUs, or VCPUs, memory, bandwidth, and network
performance.

Amazon RDS DB instances use Amazon EBS volumes for database and log storage.
Customers can choose from general purpose and Provisioned IOPS storage types,
depending on their storage performance and size requirements.
Similar to Amazon EC2, the RDS storage subsystem offers an array of performance
levels and price. Here, you can see the SSD-based options, and their size and
performance attributes.

The first option, GP2, General Purpose SSDs with predictable performance and burst
capabilities, is the most popular. This is especially good for workloads that have some
variability.

The second option, IO1, Provisioned IOPS, is a good choice when input-output needs
are high and consistent.

Every workload is different, so customers should test their workloads. With the
License Included model in RDS, customers only pay for licensing costs while an
instance is operational, which makes testing and benchmarking cost-effective.

Note that AWS is phasing out magnetic storage for RDS, but the Throughput
Optimized HDD offering, called ST1, on EC2, is performant for sequential writes,
which makes it well-suited for database backups.
Database mirroring is a feature that provides a complete or almost complete
mirror of a database, depending on the operating mode, on a separate
database instance. This feature increases the availability and protection of
mirrored databases, and provides a mechanism to keep mirrored databases
available during upgrades.

Always On Availability Groups is an enterprise-level feature that provides high availability and disaster recovery to SQL Server databases. Always On
Availability Groups uses advanced features of Windows Failover Cluster and
SQL Server Enterprise Edition. These availability groups support the failover of
a set of user databases as one distinct unit or group. User databases defined
in an availability group consist of primary read-write databases along with
multiple sets of related secondary databases. These secondary databases can
be made available to the application tier as read-only copies of the primary
databases, thus providing a scale-out architecture for read workloads.
Customers can also use the secondary databases for backup operations.

Log shipping provides a mechanism to automatically send transaction log backups from a primary database on one database instance to one or more
secondary databases on separate database instances. Although log shipping
is typically considered a disaster recovery feature, it can also provide high
availability by allowing secondary database instances to be promoted as the
primary if the primary database instance fails.

Log shipping offers many benefits to increase the availability of log-shipped databases. Besides the benefits of disaster recovery and high availability
already mentioned, log shipping also provides access to secondary databases
to use as read-only copies of the database. This feature is available between
restore jobs. It also enables customers to configure a lag delay, or a longer
delay time, which helps customers recover accidentally changed data on the
primary database before the changes are shipped to the secondary database.
AWS recommends running the primary and secondary DB instances in
separate Availability Zones, and optionally deploying a monitor
instance to track all the details of log shipping. Backup, copy, restore, and
failure events for a log shipping group are available from the monitor instance.
AWS provides a Quick Start that implements a high availability solution built with
Microsoft Windows Server and SQL Server, and uses the Always On Availability
Groups feature of SQL Server Enterprise Edition.

Implementing a Windows Server Failover Cluster, or WSFC, in the AWS Cloud, which is a prerequisite for deploying an Always On Availability
Group, is similar to deploying it in an on-premises setting, as long as the
customer meets two key requirements:

• The customer deploys the cluster nodes inside an Amazon VPC, and
• Deploys WSFC cluster nodes in separate subnets.

At a high level, the architecture includes SQL Server instances deployed with
replication across two Availability Zones, as well as a domain controller
instance in each Availability Zone to handle Active Directory and DNS
requests. It also keeps the SQL Server instances from being exposed publicly
by placing them in private subnets, with NATs for outbound traffic and Remote
Desktop Gateway instances for remote management by administrators.
Customers can deploy this architecture by using the AWS-provided instructions in
the Quick Start on the AWS website.
Here are some notes about working with Multi-AZ deployments for Microsoft SQL
Server database instances:

• To use SQL Server Multi-AZ with Mirroring with a SQL Server database instance in
an Amazon VPC, customers first create a database subnet group that has subnets
in at least two distinct Availability Zones. They then assign the database subnet
group to the SQL Server database instance being mirrored.

• When a database instance is modified to be a Multi-AZ deployment, during the modification, it has a status of Modifying. Amazon RDS creates the standby mirror
and makes a backup of the primary database instance. Once the process is
complete, the status of the primary database instance becomes available.

• Multi-AZ deployments maintain all databases on the same node. If a database on the primary host fails over, all SQL Server databases fail over as one atomic unit to
the standby host. Amazon RDS provisions a new healthy host and replaces the
unhealthy host.

• Multi-AZ with mirroring supports one standby mirror.


• Users, logins, and permissions are automatically replicated on the standby mirror.
Customers don’t need to re-create them. User-defined server roles, which are a SQL Server 2012 feature, are not replicated in Multi-AZ instances.

• If customers have SQL Server Agent jobs, they need to re-create them in the
secondary, as the jobs are stored in the MSDB database, and the database can't be
replicated via mirroring. Customers should create the jobs first in the original
primary, then fail over, and create the same jobs in the new primary.

• Customers might observe elevated latencies compared to a standard database instance deployment, in a single Availability Zone, as a result of the synchronous
data replication performed on their behalf.

• Failover times are affected by the time it takes to complete the recovery process.
Large transactions increase the failover time.

• When customers restore a backup file to a Multi-AZ database instance, mirroring is terminated and then reestablished. Mirroring is terminated and reestablished for
all databases on the database instance, not just the one being restored. While
Amazon RDS reestablishes mirroring, the database instance can't fail over. It can
take 30 minutes or more to reestablish mirroring, depending on the size of the
restore. For more information, see the AWS website.
Amazon RDS automatically performs a failover when the following events occur:
• Loss of availability in the primary Availability Zone
• Loss of network connectivity to the primary database node
• Compute unit failure on the primary database node
• Storage failure on the primary database node
Amazon RDS Multi-AZ deployments do not fail over automatically in response
to database operations such as long running queries, deadlocks, or database
corruption errors. When operations such as DB instance scaling or system
upgrades like OS patching are initiated for Multi-AZ deployments, they are
generally applied first on the secondary instance, before the automatic fail over
of the primary instance, for enhanced availability.
For more information about Multi-AZ SQL Server deployments with database
mirroring and Always On Availability Groups, see the AWS website.
Customers can deploy Microsoft SQL Server from many starting points, including:
• AWS Management Console
• AWS Command Line Interface, or AWS CLI
• AWS software development kits, or SDKs
• AWS CloudFormation
• AWS Toolkit for Eclipse
• AWS Toolkit for Visual Studio, and
• AWS Tools for Windows PowerShell

Customers can use the following SDKs to perform Amazon RDS functions from within
their applications:
• Android
• iOS
• Java
• JavaScript
• .NET
• Node.js
• PHP
• Python (Boto),
• Ruby, and
• Xamarin
A key to managing SQL Server deployments at scale is automation – the ability to
programmatically provision the entire cluster, without manual intervention.

A customer can do this in a number of ways. Here’s a snippet of PowerShell code using the AWS Tools for Windows PowerShell module to provision a new SQL Server database instance from scratch. This snippet contains a bit more than the minimum required parameters, for illustration purposes, but they generally fall into these categories:

• General properties and performance,
• Reliability and tuning characteristics,
• Whether the instance is joined to a domain provided by the Directory Service, and
• Networking and security constructs to apply to the DB instance – AWS handles the rest of the work.
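
The on-screen snippet is not reproduced here, but a hedged sketch of such a call follows; every parameter value is an illustrative assumption, and the engine string 'sqlserver-se' selects Standard Edition:

# Prompt for the master password rather than hard-coding it
$masterPassword = Read-Host -Prompt 'Master user password'

# Provision a Multi-AZ SQL Server Standard Edition DB instance joined to a managed directory
New-RDSDBInstance -DBInstanceIdentifier 'sql-prod-01' `
    -Engine 'sqlserver-se' `
    -DBInstanceClass 'db.r5.xlarge' `
    -AllocatedStorage 500 -StorageType 'gp2' `
    -MasterUsername 'sqladmin' -MasterUserPassword $masterPassword `
    -MultiAZ $true `
    -LicenseModel 'license-included' `
    -Domain 'd-1234567890' -DomainIAMRoleName 'rds-directoryservice-access' `
    -DBSubnetGroupName 'sql-subnet-group' -VpcSecurityGroupId 'sg-0123456789abcdef0'
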
From a monitoring perspective, Amazon CloudWatch provides a comprehensive set of
metrics that allows customers to keep tabs on how their SQL Server workloads
operate. Customers can set alarms and notifications for dangerous conditions.
Metrics such as CPU use, read-write IOPS, memory use, and connections show
customers how their workloads relate to the selected instance type and storage type.

Customers also have access to SQL Server ecosystem tools to analyze performance.

The aws command shown on the screen lists the types of metrics that are available from RDS
via CloudWatch.
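
The on-screen command is not reproduced here; an equivalent AWS Tools for PowerShell call is sketched below (Get-CWMetricList is the cmdlet for the CloudWatch ListMetrics API):

# List the metric names that Amazon RDS publishes to CloudWatch
Get-CWMetricList -Namespace 'AWS/RDS' | Select-Object -ExpandProperty MetricName -Unique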

Additionally, RDS Enhanced Monitoring is available for SQL Server. This monitoring
solution provides detailed OS-level metrics with up to 1-second granularity. Unlike
traditional CloudWatch metrics available at the infrastructure level, Enhanced
Monitoring collects metrics using an agent running on the instance itself. It has access
to more granular data, but might contribute to the load of the DB instance and report
slightly different numbers. The standard CloudWatch metrics resolution is 5 minutes, but custom metrics can be published at a much finer resolution.

Enhanced Monitoring reports on granular CPU use, disks, processes, threads, network, and process load.
With Amazon RDS, customers can use common ecosystem tools and features to
access and manage their DB instances. However, because Amazon RDS is a fully managed environment, it does not provide customers with OS-level access. Customers won’t be able to RDP into the environment or use OS administrator credentials, and they will not have access to the underlying file system. As such, some SQL Server functions and features that rely on that level of access will not operate correctly. Customers can use their DB instance as a data source for SQL Server Analysis Services (SSAS), SQL Server Reporting Services (SSRS), and SQL Server Integration Services (SSIS), but they cannot run those services on the DB instance itself. In addition, each Amazon RDS instance has a 30-database limit.

Additionally, while certain SQL Server features, such as maintenance plans, database
mail, linked servers, and Microsoft Distributed Transaction Coordinator, or MSDTC,
are not supported, some AWS services or Amazon RDS features fill the same roles,
often in more robust ways, such as using automatic backups instead of maintenance
plans, or Amazon Simple Email Service, or Amazon SES, for sending email with high
deliverability.

For more information on limited linked server support, visit the AWS website.
AWS provides automated backup and recovery, with point-in-time restore capability
for up to 35 days in the past. Customers can always instruct the service to take
manual snapshots that aren’t subject to the 35-day window, or copy automated
snapshots to convert them to manual snapshots. Both of these features require a
designated window of 30 minutes or more in which AWS can perform the activities:
the maintenance window occurs weekly, and the backup window occurs daily.

RDS also allows customers to back up and restore using .BAK files, providing access to
SQL Server’s native backup functionality. This is commonly used to restore on-
premises, or EC2, SQL Server backups to an RDS instance. It also allows customers to:

• Use the native SQL Server backup and restore functionality,
• Save .BAK files to Amazon S3 buckets,
• Restore on-premises SQL Server backups to an RDS instance, and
• Conduct database-level operations.

For more information, see the AWS website.


To enable native backup and restore on an RDS for SQL Server instance, customers:
1. Create a new Amazon S3 bucket or use an existing one.
2. Create an AWS IAM role to grant RDS access to an S3 bucket or a folder in it.
3. Attach the IAM role to an RDS for SQL Server instance, using Option Groups, and
finally,
4. Use SQL Server Management Studio to call the stored procedures that expose the
.BAK backup and restore functionality.
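
As a sketch of step 4, the rds_backup_database stored procedure can be called from
SQL Server Management Studio or, as below, through Invoke-Sqlcmd from the
SqlServer PowerShell module. The endpoint, credentials, database name, and bucket
ARN are placeholders.

# Start a native backup of one database to S3 (placeholder values throughout)
$endpoint = "sqlprod01.abcdefghijkl.us-east-1.rds.amazonaws.com"
Invoke-Sqlcmd -ServerInstance $endpoint -Username "sqladmin" -Password "PLACEHOLDER" -Query @"
exec msdb.dbo.rds_backup_database
    @source_db_name = 'Sales',
    @s3_arn_to_backup_to = 'arn:aws:s3:::my-sql-backups/Sales.bak';
"@

# Poll the task until it reports success
Invoke-Sqlcmd -ServerInstance $endpoint -Username "sqladmin" -Password "PLACEHOLDER" -Query "exec msdb.dbo.rds_task_status @db_name = 'Sales';"
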
AWS offers a centralized and auditable approach to configuring server parameters
and features. Two service features enable this: Parameter Groups and Option
Groups.

• Parameter Groups are used to change the tuning parameters of the DB engine.
• Option Groups are currently used to enable Transparent Data Encryption in
Enterprise Edition, and enable the SQL Server native backup and restore
functionality.

Both groups have a set of predefined default configurations with sensible default
settings matching vendor recommendations. These are suitable for most workloads.

Customers can customize the groups by creating derivative groups with their own
settings, and then apply the groups to the DB instances they operate. At any
point, customers know exactly what configuration each of their DB instances is
running.
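
As a brief sketch of that workflow with the AWS Tools for Windows PowerShell –
creating a derivative parameter group and setting rds.force_ssl, the parameter behind
the forced-SSL recommendation discussed later in this module – the group name,
family, and description below are illustrative.

# Create a derivative group for a SQL Server 2017 Standard Edition instance
New-RDSDBParameterGroup -DBParameterGroupName "sql-se-custom" `
    -DBParameterGroupFamily "sqlserver-se-14.0" `
    -Description "Custom SQL Server settings"

# rds.force_ssl is a static parameter, so it takes effect at the next reboot
$p = New-Object Amazon.RDS.Model.Parameter
$p.ParameterName  = "rds.force_ssl"
$p.ParameterValue = "1"
$p.ApplyMethod    = "pending-reboot"
Edit-RDSDBParameterGroup -DBParameterGroupName "sql-se-custom" -Parameter $p
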
How can customers secure SQL Server on premises at the network layer? Should they
place it behind a firewall? Limit access to it using route tables and network access
control lists? Customers can deploy the same design they use on premises when they
run SQL Server on AWS.

Secure the network

Customers deploy their SQL Server instances in an Amazon Virtual Private Cloud and
define multiple subnets, each specific to an Availability Zone. These subnets can be
specific to the database tier. Then, with routing rules and network ACLs, customers
control the flow of traffic between the subnets and external networks.

Restrict traffic to the instance with network access control lists and security groups.
You learned about network ACLs and security groups earlier in this course.

Avoid or limit public access to all of your instances by placing instances in private
subnets.

Turn on forced SSL to ensure that all database connections are encrypted.
Secure the data
Does your customer encrypt their SQL Server data today? If they are required to
encrypt their data today, AWS is a highly secure option for their SQL Server workload.
Having a data protection strategy in place is key for every business.

At AWS, customers are responsible for encrypting their data. They should encrypt
their data at rest by using AWS tools, such as AWS KMS and encrypted EBS volumes
to store their data. They should use application-layer encryption, such as TDE or
column-level encryption.

Also, customers can encrypt data in transit using SSL. For Amazon RDS, the SSL
certificate includes the DB instance endpoint as the common name (CN) for the
SSL certificate to guard against spoofing attacks.

Customers can continue to use the same encryption mechanisms they rely on today
when running SQL Server on AWS.
From an access perspective, you can help your customers address these concerns:

Control access to the infrastructure and the ability to make modifications to the DB
instances. In a traditional environment, these would be the people with physical
access to the servers – the people who can shut them down, start them up, wipe
them, and so forth. At AWS, this is called instance access. AWS Identity and Access
Management, or IAM, controls permissions to create, modify, and delete AWS
resources, such as Amazon VPC, Amazon EC2, and Amazon RDS.

Customers can enhance security by enabling Multi-Factor Authentication, or MFA, for
this level of access. They can:
• Lock away AWS account root user credentials,
• Grant least privilege to IAM users, groups, and roles,
• Use strong password policies,
• Rotate credentials, and
• Use federated access from Active Directory.

Customers also have mechanisms to log and audit access to their
infrastructure, recording who did what and when.
VPC Flow Logs enables users to capture information about the IP traffic that goes into
or out of the network interfaces in a VPC.
AWS CloudTrail is a web service that records AWS API calls for an account and delivers
log files. The recorded information includes the identity of the API caller, the time of
the API call, the source IP address of the API caller, the request, and the response.
Logs are stored in an Amazon S3 bucket.
SQL Server 2017 is available for Amazon EC2 and Amazon RDS, including Multi-AZ
deployments. SQL Server 2017 on Linux is supported on Amazon EC2, with RHEL
licenses included. Clustered and clusterless availability groups – using WSFC,
Pacemaker, or no cluster manager (None) – are available.

Customers can install SQL Server on Linux in minutes; deploying it as a container
requires running just a few Docker commands, as shown here.
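
The exact commands were shown on the slide; a representative example, with an
illustrative container name and password (the image path reflects Microsoft’s public
registry), is:

# Start SQL Server 2017 on Linux as a container, listening on the default port
docker run -d --name sql2017 `
    -e "ACCEPT_EULA=Y" `
    -e "SA_PASSWORD=PLACEHOLDER-Str0ngPass!" `
    -p 1433:1433 `
    mcr.microsoft.com/mssql/server:2017-latest
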
Customers should consider how much IOPS and throughput their workload
needs, and employ techniques from the following list to find the right
combination of throughput and performance – they should:
• Enable EBS optimization on an instance,
• Create a single volume for data and logs,
• Format with a 64 K allocation unit size,
• Match the total EBS IOPS and throughput to instance type, and
• Stripe EBS Provisioned IOPS (PIOPS) volumes for more than 20,000 IOPS.

An example volume layout is shown here. Each drive mapping represents an attached
volume and can be a different storage type.
By using the commands on this page, customers can configure their SQL Server’s
tempdb to reside on instance storage.

First, they use the ALTER SQL commands to move tempdb files to instance-storage-
backed drives.

Then, customers modify the storage drive’s discretionary access control list, or DACL,
to grant the SQL Server service account access to the drive.
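
A minimal sketch of those two steps, assuming an instance store volume mounted as
Z: and the default SQL Server service account; the paths are illustrative.

# 1. Point tempdb at the instance store volume (takes effect after a service restart)
Invoke-Sqlcmd -ServerInstance "localhost" -Query @"
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'Z:\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'Z:\templog.ldf');
"@

# 2. Grant the service account full control of the drive, then restart SQL Server
icacls Z:\ /grant "NT SERVICE\MSSQLSERVER:(OI)(CI)F"
Restart-Service -Name "MSSQLSERVER" -Force
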
To optimize tempdb use, customers should consider the following techniques – they
might:
• Use multiple tempdb files, creating a 1:1 mapping with up to eight CPUs,
• Stripe multiple instance storage disks for higher input-output,
• Change SQL Server service startup to Automatic (Delayed Start) to allow instance
storage to provision,
• Script and automate configuration on instance boot, or
• Use a striping solution offered by their AWS consulting partner.
Another optimization option customers can use is to enable instant database file
initialization. What is database file initialization?

Normally, database files are initialized by zeroing out leftover disk data, and this
initialization causes some DB operations to take longer. Instant database file
initialization claims disk space without zeroing it out, so those operations complete
faster.

On this slide, you can see how customers can enable instant database file
initialization.
This slide illustrates the various components to be migrated as part of a database
migration. Many tools are available that can accomplish some or all of the migration
tasks.
Customer data is not locked in RDS SQL Server. Customers can move data to and from
Amazon RDS in many ways.

You have already seen how customers can use .BAK files to save and restore
databases.

Customers can also use the Publishing Wizard to export flat T-SQL files and import
them using sqlcmd.

For more advanced use cases, customers can use the AWS Database Migration
Service. This tool is especially useful if customers want to achieve zero or near-zero
downtime migrations, or deploy read replicas of the master databases in a separate
Region. It handles the initial load of data and performs change data capture, so
customers can keep up with changes asynchronously. It’s also highly available, so
customer replication jobs can run on an ongoing basis. And, it supports
heterogeneous migrations, between different DB engines, from MySQL, Oracle, or
PostgreSQL to SQL Server, and with databases in different locations, such as EC2, RDS,
and on premises.
Customers can use the AWS Marketplace where independent software vendors, or
ISVs, offer third-party data movement solutions and tools.

Finally, customers can use push replication, as documented on the AWS website.
The database migration strategy customers choose depends on several factors,
including:
• The size of the database,
• Network connectivity between the source server and AWS,
• The version and edition of the database,
• The amount of time available for migration, and
• The available database options, tools, and utilities.

Your customer’s strategy will also depend on whether the migration and cutover to
AWS will be done in one step or a sequence of steps over time.

A one-step migration is a good option for small databases that can be shut down for
24 to 72 hours. During this downtime, all the data from the source database is
extracted and migrated to the destination database in AWS. The destination database
in AWS is tested and validated for data consistency with the source. After all
validations are completed successfully, the database traffic is cut over to AWS.

However, a two-step migration process is more commonly used because it requires
only minimal downtime and can be used for databases of any size. In this method,
the data is extracted from the source database at a specific time, preferably during
non-peak use, and migrated while the database is still up and running. Because there
is no downtime at this stage, the migration window can be sufficiently large. After the
data migration task is completed, customers validate the data in the destination
database for consistency and can also do functionality and performance tests for
connectivity to external applications or any other criteria as needed. During this time,
because the source database is still up and running, changes need to be propagated
(or replicated) before final cutover. At this point, the customer would schedule a
downtime for the database, usually a few hours, and synchronize the source and
destination databases. After all the changed data is migrated, the customer validates
the data in the destination database, performs necessary tests, and finally, does a
cutover of the database traffic to AWS.

Customers might have mission-critical databases that cannot have any downtime.
Performing such zero or near-zero downtime migrations requires detailed planning
and appropriate data replication tools. Customers will need to use continuous data
replication tools for such scenarios. Synchronous replication could affect the
performance of the source database while the replication is happening. So if a few
minutes of database downtime are acceptable, customers might want asynchronous
replication instead. With the zero or near-zero downtime migration, customers have
more flexibility on when to perform the cutover, because the source and destination
databases are always in sync.
Depending on whether your customer runs their database on Amazon EC2 or uses
Amazon RDS, the process for data migration can differ. For example, users don’t have
OS-level access in Amazon RDS instances. Customers must understand the different
strategies, so they can choose the one that best fits their needs. They can simply “lift-
and-shift” a database to run on an Amazon EC2 instance. This might be the easiest
and quickest method to migrate their database. However, they will need to consider
various factors, like licensing, compatibility, and support. Often, customers re-
platform or re-factor the database tier to take advantage of AWS Cloud benefits.

Customers can also use DMS to migrate their on-premises database to a database
running on an Amazon EC2 instance. DMS can migrate databases with zero or near-
zero downtime.

AWS Database Migration Service also provides a schema conversion tool to help
convert SQL Server T-SQL code to equivalent code in the Amazon Aurora MySQL
dialect of SQL. When a code fragment cannot be automatically converted to the
target language, the AWS Database Migration Service clearly documents all locations
that require manual input from the application developer.
Customers can use AWS Database Migration Service for both one-time data migration
into RDS and EC2-based databases, as well as for continuous data replication. The
AWS Database Migration Service captures changes on the source database and
applies them in a transactional-consistent way to the target. Continuous replication
can be done from the data center to the databases in AWS or in the reverse,
replicating to a database in the data center from a database in AWS. Ongoing
continuous replication can also be done between homogenous or heterogeneous
databases.

Customers can use AWS DMS for multiple migration scenarios, as shown here. To use
AWS DMS, one endpoint must always be located on an AWS service. Migration from
an on-premises database to another on-premises database is not supported.

Customers can use the Database Migration Service to enable migrations with
little downtime.

Customers start by launching a DMS replication instance in their AWS account.

Next, they provide database connection information to connect to the on-
premises database from AWS.

Then, they select which tables, schemas, or databases to migrate, and a DMS
replication task loads the data and synchronizes it on an ongoing basis. When
using the continuous data replication mode, customers do not have to perform the
switchover to production. Instead, the data replication task runs until the customer
changes or terminates it.

Finally, at any time, customers can change the application’s configuration to
connect to the AWS database, instead of the on-premises database.
In this module, you learned how to run SQL Server databases on AWS. You also
learned how to choose which deployment option is most suitable, and how to select
which compute and storage resources to use. Finally, you learned how to migrate
databases from existing platforms to AWS.
In this module, you will learn how to automate Microsoft workloads operations with
AWS services.

You will also learn how to migrate virtual machines, or VMs, and server applications
to AWS.

Finally, you will learn how to use AWS services to provision workload environments,
automate change and configuration, and provide ongoing maintenance.
Customers can use VM Import/Export to import VM images from existing
virtualization environments to Amazon Elastic Compute Cloud, or Amazon EC2, as
Amazon Machine Images, or AMIs. Customers use the AMIs to launch instances. They
can then export the VM images from an instance, and import them to virtualization
environments. Customers can import Microsoft Windows and Linux VMs that use
VMware ESX, VMware Workstation, Microsoft Hyper-V, or Citrix Xen virtualization
formats. They can also export previously imported Amazon EC2 instances to VMware
ESX, Microsoft Hyper-V, or Citrix Xen formats.

VM images exported from these environments can be imported in the following
formats:
• Open Virtualization Archive, or OVA,
• Virtual Machine Disk, or VMDK,
• Virtual Hard Disk, or VHD, or
• Raw.

With some virtualization environments, customers can export to Open Virtualization
Format, or OVF, which typically includes one or more VMDK or VHD files.

Customers can import machine images by using the AWS Command Line Interface, or
AWS CLI, or the AWS Management Portal for vCenter Server.

To import a VM using the AWS CLI, customers must complete the following steps:
1. First, the customer must download and install the AWS Command Line Interface.
2. Then, they upload the VM image to Amazon Simple Storage Service, or Amazon
S3, using the CLI. Multipart uploads provide improved performance. As an
alternative, customers can send the VM image to AWS using the AWS Snowball
service.
3. Once the VM image is uploaded, the customer imports the VM using the ec2
import-image command. As part of this command, the customer specifies the
licensing model and other parameters for the imported image.
4. Next, the customer uses the ec2 describe-import-image-tasks command to
monitor the import progress.
5. Finally, once the import task is completed, the customer uses the ec2 run-
instances command to create an Amazon EC2 instance from the AMI generated
during the import process.
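
Sketched end to end with placeholder bucket, file, and task and AMI IDs, the CLI
steps look like this:

# 2. Upload the image; the CLI performs multipart uploads automatically
aws s3 cp .\webserver.vmdk s3://my-import-bucket/webserver.vmdk

# 3. Start the import, specifying the licensing model
aws ec2 import-image --description "On-prem web server" --license-type BYOL --disk-containers "Format=VMDK,UserBucket={S3Bucket=my-import-bucket,S3Key=webserver.vmdk}"

# 4. Monitor progress with the task ID returned by the previous command
aws ec2 describe-import-image-tasks --import-task-ids import-ami-0123456789abcdef0

# 5. Launch an instance from the AMI the import produces
aws ec2 run-instances --image-id ami-0123456789abcdef0 --instance-type m5.large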

To learn about importing VMs using the VMware vSphere virtualization platform,
refer to the AWS website.
AWS Server Migration Service, or AWS SMS, automates the migration of
Hyper-V and VMware virtual machines to the AWS Cloud. AWS SMS
incrementally replicates server VMs as cloud-hosted AMIs that are ready for
deployment on Amazon EC2.

Customers can begin migrating a group of servers with just a few clicks in the
AWS Management Console. After the migration starts, AWS SMS manages
the complexities of the migration process, including automatically replicating
volumes of live servers to AWS and creating new AMIs periodically. Customers
can quickly launch EC2 instances from AMIs in the console. Working with
AMIs, they can easily test and update cloud-based images before deploying
them in production.

AWS SMS orchestrates server migrations by allowing customers to schedule
replications and track progress for a group of servers. They can schedule
initial replications, configure replication intervals, and track progress for each
server using the console.

To perform migrations faster while minimizing network bandwidth and reducing
server downtime, customers can migrate only the on-premises servers’
incremental changes. Incremental AWS SMS replication minimizes the
business impact often associated with application downtime during the final
cutover.

AWS Server Migration Service is free to use; customers pay only for the
storage resources that the migration uses during the migration process.
This slide shows the general steps for using the Server Migration Service.

First, customers must prepare their on-premises VMs to meet the general
Server Migration Service requirements. This preparation includes disabling
antivirus or intrusion detection software, and allowing remote access from the
connector through SSH, on Linux VMs, or Remote Desktop, on Windows VMs.

Second, customers manage the migration by deploying a connector appliance
to their on-premises environment. The connector appliance is a preconfigured
VM downloaded from AWS. The customer deploys the AWS Server Migration
Connector virtual appliance on their on-premises VMware vCenter or Hyper-V
environment. The connector provides a catalog of servers that it later converts
to AMI images that can run in Amazon EC2.

Next, customers replicate VMs by using the AWS SMS console or the
command line interface. They import a server catalog from the connector, and
create one or more replication jobs to automate the replications. Replication
jobs can start immediately or at a later date, up to 30 days in the future.
Customers can stop and delete replication jobs after the replication is
complete.

As shown in step 4, when the scheduled replication job starts, the SMS
connector takes a snapshot of the selected VM, converts the snapshot to an
OVF format, and uploads the VMDK disk file to an S3 bucket.

Finally, in step 5, SMS automatically converts the VMDK into an Amazon Elastic
Block Store, or Amazon EBS, snapshot, makes the proper changes to the boot
partition, and injects EC2 drivers into the image. The result is an AWS AMI,
which can be used to launch EC2 servers.
Previously, you learned about AWS Migration tools and services, such as:
• AWS Server Migration Service,
• AWS Database Migration Service, or AWS DMS, and
• AWS Schema Conversion Tool.

AWS also provides data tools that customers can use to accelerate
transferring data to the cloud. AWS offers the data transfer services shown
here:
• AWS Snowball uses secure appliances to transfer large amounts of data
into and out of AWS. AWS Snowmobile is a transport that uses a secure
40-foot shipping container to transfer data.
• AWS Storage Gateway is an on-premises storage gateway that links a
customer’s environment directly to AWS.
• The AWS DataSync service makes it easy to automate moving data between
on-premises storage and Amazon S3 or Amazon Elastic File System, or
Amazon EFS, faster than open-source tools.
• Amazon S3 Transfer Acceleration uses Amazon CloudFront edge
locations to enable fast, easy, and secure transfers of files over long
distances between the customer’s client and Amazon S3 bucket.

• AWS Direct Connect lets customers establish a dedicated physical
connection between a network and one of the AWS Direct Connect
locations.
• And Amazon Kinesis Data Firehose loads streaming data into Amazon S3
or Amazon Redshift.

For more information about AWS migration and data transfer services, visit the
Cloud Data Migration section on the AWS website.
By using configuration management tools, customers use code to represent the state
of infrastructure – such as the software running inside EC2 instances. Compared to
AWS CloudFormation, which automates the creation of resources like EC2 instances,
S3 buckets, and Amazon Relational Database Service, or Amazon RDS, instances,
configuration management represents the configuration of the software that runs on
the network’s compute servers. With configuration management, customers can:

• Use code to represent the running infrastructure state,
• Operate at scale and automate by using code,
• Assure configuration is compliant and repeatable,
• Ensure that hosts are compliant and at the desired state,
• Align resources with specific policies – and report on their status,
• Automatically enforce security policy and remediate unwanted changes by
automatically detecting drift and reapplying the desired state, or removing the
node from the cluster, and
• Automate the manual steps needed to complete configuration tasks.
AWS OpsWorks is a configuration management service that helps customers
configure and operate applications. OpsWorks comes in three offerings – OpsWorks
Stacks, OpsWorks for Chef Automate, and OpsWorks for Puppet Enterprise.
Customers use:
• Chef cookbooks and solutions with OpsWorks Stacks,
• OpsWorks for Chef Automate for configuration management, and
• OpsWorks for Puppet Enterprise to offer tools that enforce desired-state
configuration.

By using Chef or Puppet as a managed service, AWS performs the necessary
automatic backups, updates, and upgrades. AWS also performs the operating system
updates on the Chef or Puppet servers that OpsWorks controls.

OpsWorks includes the Chef or Puppet management dashboards that customers can
use to quickly view change management operations status and host compliance.

OpsWorks brings programmable infrastructure to the AWS Cloud and on premises.
Customers can manage configuration changes in the same way for hybrid
environments. By sharing the same configuration across cloud and on premises,
customers can make their EC2 instances look exactly like their on-premises instances,
and vice versa.

Using a programmable infrastructure, customers define configuration using the
configuration management server’s library of desired state configurations to ensure
compliance and consistent configuration.

With OpsWorks, scaling the properly configured cloud environment is easier because
customers can avoid performing manual tasks. OpsWorks includes AWS CLI
commands that automatically register instances in EC2 Auto Scaling Groups with the
Chef or Puppet configuration management server as part of the EC2 user data
settings.

Finally, both Chef and Puppet offer support from active user communities. Customers
can use community-developed libraries, which abstract the configuration from OS
details to deploy software and configuration in an operating system-independent
way. By using the Chef Supermarket or Puppet Forge, customers can take advantage
of a variety of supported configuration modules for most popular software
installations that are already developed and tested. Community-created configuration
exists for a wide variety of server types.
Here, you see some typical use cases for adopting AWS OpsWorks for configuration
management.

One example is bootstrapping new instances, in which customers:
• Create new Windows Servers and apply the desired state from code in GitHub.
• Then, they bootstrap from Chef Supermarket or Puppet Forge, and change the
templates, where needed, to customize them for their needs.

Another use case is to update configurations on instances or servers that are
running. In this case, customers:
• Apply policy changes or new software versions, and then
• Use a single commit to apply pretested changes to single servers or fleets.

Defining policies is another use case. Customers:
• Define configurations that enforce a policy on all the instances or servers. For
example, to prevent file systems from filling, all nodes must run a new log rotation
policy.
• The Configuration Manager server then uses vetted configurations to bring a
server into spec.

The final use case shown here is to use continuous integration and continuous
delivery, or CI/CD, pipelines to promote changes and adopt a software metaphor.
Customers can:
• Use promotion to drive changes from the developer’s desktop to production, and
• Revert changes quickly, which reduces the risks and fears associated with making
changes.
To get started with AWS OpsWorks, customers should perform the following steps:
1. Create a configuration management server, using Puppet or Chef.
2. Download a starter-kit. The kit includes the software and configuration to get the
first instance up and running.
3. Download the UI credentials package, which includes certificates and
configuration to customize and configure the developer desktop.
4. Upload and run change management code, such as Chef recipes or Puppet
manifests.
5. Use deployment and management tools with AWS OpsWorks, such as:
o AWS console
o AWS CLI
o AWS software development kits (SDKs)
Enterprises often bring their traditional on-premises toolset to manage their
cloud and hybrid environments. When customers change how they manage their
servers – from treating each server like a house plant to treating their servers like a
field of plants – that’s when their traditional toolset falls short.

AWS Systems Manager is an AWS service that customers can use to view and control
their infrastructure on AWS. By using the Systems Manager console, customers can
view operational data from multiple AWS services and automate operational tasks
across their AWS resources. Systems Manager helps customers maintain security and
compliance by scanning managed instances and reporting on (or taking corrective
action on) any policy violations it detects.

Systems Manager also helps customers configure and maintain their managed
instances. Supported machine types include Amazon EC2 instances, on-premises
servers, and VMs, including VMs in other cloud environments. Supported operating
system types include Windows Server, multiple distributions of Linux, and Raspbian.

Using Systems Manager, customers can associate AWS resources by applying the
same identifying resource tag to each of them. They can then view operational data
for the resources as a resource group.

AWS Systems Manager helps improve customers’ security posture through
integration with AWS Identity and Access Management, or IAM. With Systems
Manager, customers can apply granular permissions to control the actions users
perform.
AWS Systems Manager includes a set of capabilities that:
• Enables role-based server management,
• Audits every management action,
• Manages Windows and Linux instances running anywhere, including Amazon
EC2, other clouds, or on premises, and
• Scales to manage 1 to 10,000 servers or more.
AWS Systems Manager is composed of individual capabilities, which are grouped into
categories. The capabilities included in this course are shown here:
• Run Command provides remote execution across instances without using SSH or
PowerShell.
• Session Manager enables customers to access EC2 instances through an
interactive browser-based shell.
• State Manager maintains consistent configuration for instances and applications.
• Inventory collects a software catalog and configuration for instances.
• Compliance enables customers to see which resources are out of compliance and
take action.
• Patch Manager simplifies operating system patching for Linux or Windows.
• Documents enables customers to author configuration changes and automation
workflows in documents that they can run across a fleet.
• Parameter Store provides a way to manage secrets or plain-text data.
• Automation runs tasks on group resources by using built-in or custom
automations.
• And finally, Maintenance Windows define windows of time to perform tasks.

This section reviews each of these capabilities in more detail.


The AWS Systems Manager Agent, or SSM Agent, is Amazon software that customers
install and configure on an Amazon EC2 instance, on-premises server, or virtual
machine. SSM Agent makes it possible for Systems Manager to update, manage, and
configure these resources by processing requests using Run Command and State
Manager capabilities. The agent processes requests from the Systems Manager
service in the AWS Cloud, and then runs them. SSM Agent logs activity and then
sends status and execution information to the Systems Manager service.

SSM Agent is preinstalled in the following Amazon AMIs:
• Windows Server 2016 and newer – SSM Agent only,
• Windows Server 2012 R2 and older – SSM Agent and EC2Config, and
• Amazon Linux, Amazon Linux 2, and Ubuntu AMIs.

SSM Agent is easy to download and install on other platforms, such as Red Hat
Enterprise Linux.

The SSM Agent logs activity for Run Command, State Manager, joining domains, and
Amazon CloudWatch. It’s open source, and its code and release notes are available on
GitHub.
Customers can run the SSM Agent on corporate servers and VMs on premises. To run
SSM Agent in hybrid environments, customers must complete the following steps:
1. Install a Transport Layer Security, or TLS, certificate on the computer that runs the
SSM Agent.
2. Create a managed-instance activation code from the AWS console or API. In the
activation, provide a description and a count of how many instances will activate.
Also, select an IAM role that SSM Agent uses to retrieve parameter objects,
commands, and so forth, from Systems Manager.
3. Download, install, and start the agent, using the activation code.

Activation codes have an expiration date. When customers create activation codes,
they should record and store them separately; the codes are only available once.
Computers that run SSM Agent require outbound internet access or a VPC endpoint,
but not inbound internet access.
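
A sketch of step 2 with the AWS Tools for Windows PowerShell; the instance name,
role name, and registration count are illustrative.

# Create an activation for up to ten on-premises machines
$activation = New-SSMActivation -DefaultInstanceName "on-prem-web" `
    -IamRole "SSMServiceRole" `
    -RegistrationLimit 10 `
    -Description "Datacenter web tier"

# The code is returned only once – record both values before closing the session
$activation.ActivationId
$activation.ActivationCode
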
Systems Manager documents contain configuration changes and automation
workflows that customers can use to execute changes across the fleet using Systems
Manager capabilities. With SSM documents, customers can use code to remotely
manage instances, ensure desired states for resources, and automate IT operations.

Amazon provides documents for many common tasks, or customers can author their
own in JSON or YAML. They can also store and execute documents from remote
locations, like GitHub or Amazon S3. Systems Manager supports creating and running
different document versions, sharing documents across AWS accounts, and tagging
documents.

Systems Manager uses command, policy, and automation documents:
• Customers use command documents to perform specific administrative tasks
using Run Command or apply a policy using State Manager.
• Customers use policy documents to enforce a policy on targets using State
Manager.
• And, finally, customers use automation documents to perform common
maintenance and deployment tasks using Automation.
With AWS Systems Manager Run Command, customers can remotely manage
instance configurations securely. Using Run Command, they can delegate access to
their instances so that they eliminate the need for Remote Desktop Protocol, or RDP,
and secure shell access.

Customers can retrieve the status and output of commands they run with Run
Command, and receive notifications about them.

Before customers can use Run Command to manage instances, they must perform
the following tasks:
• Install the SSM Agent on the instances.
• Configure an IAM user policy for any user who will run commands, and an IAM
instance profile role for any instance that will process commands, or activate the
instance with an activation code. And,
• Configure the network so instances have network connection to Systems
Manager.

When customers use Run Command, they choose a command document that
specifies the type of command they want to run. Customers specify the command to
run and its parameters. Then, they specify which instances run the command, either
by specifying a tag or selecting specific instances.

Customers can store the commands’ output in an Amazon S3 bucket and send
notifications.
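
For example, a hypothetical invocation with the AWS Tools for Windows PowerShell
might target instances by tag and run an inline command; the tag key and command
are illustrative.

# Run a PowerShell command on every instance tagged Role=WebServer
$cmd = Send-SSMCommand -DocumentName "AWS-RunPowerShellScript" `
    -Target @{ Key = "tag:Role"; Values = @("WebServer") } `
    -Parameter @{ commands = "Get-Service W3SVC" }

# Retrieve per-instance status and output
Get-SSMCommandInvocation -CommandId $cmd.CommandId -Detail $true
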
AWS Systems Manager State Manager automates the process of keeping Amazon EC2
and hybrid infrastructure in a customer-defined state. With State Manager, customers
can perform the following types of tasks:
• Bootstrap instances with software at startup,
• Download and update agents, even the SSM Agent,
• Configure network settings,
• Join instances to a Windows domain,
• Patch instances with software updates throughout their lifecycle, and
• Run scripts.

To use State Manager, customers first determine the desired state to apply. For
example, they can automate the process of installing Windows updates. Customers
must create an association to assign instances to the intended state.

An SSM document describes the intended state of the instance or service. Amazon
provides many preconfigured documents that customers can use to create the
association, or customers can create their own.

Next, customers create an association, which binds the instances to the document
and schedules frequency for updating the state. Scheduling includes unplanned and
periodic rates.

After customers create the association, State Manager applies the configuration
according to the defined schedule. Customers can view the status and history from
the State Manager page.
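
A minimal association sketch, using the Amazon-provided AWS-UpdateSSMAgent
document against instances selected by tag; the tag values and schedule are
illustrative.

# Keep the SSM Agent current on tagged instances, re-checking every 14 days
New-SSMAssociation -AssociationName "keep-agent-current" `
    -Name "AWS-UpdateSSMAgent" `
    -Target @{ Key = "tag:Env"; Values = @("Prod") } `
    -ScheduleExpression "rate(14 days)"
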
Systems Manager Automation simplifies common maintenance and deployment
tasks of Amazon EC2 instances and other AWS resources. Automation enables
customers to do the following.
• Build automation workflows to configure and manage instances and AWS
resources,
• Create custom workflows or use predefined workflows maintained by AWS,
• Receive notifications about automation tasks and workflows by using Amazon
CloudWatch Events, and
• Monitor automation progress and execution details by using the Amazon EC2 or
the AWS Systems Manager console.

With Automation, customers control the workflows with repeatable steps. Steps can
include manual interaction, for example, to provide approval steps. Automation uses
Amazon Simple Notification Service, or SNS, notifications to approve steps.

With Automation, customers can delegate specific tasks to users who use Systems
Manager. For example, a user who can’t launch an EC2 instance directly can still start
an automation task that creates an EC2 instance from a specific AMI.
Automation integrates with several AWS services to manage complex tasks. For
example, plugins are included to be able to perform the following tasks:
• Create or delete AWS CloudFormation stacks,
• Invoke AWS Lambda functions,
• Create an AMI from an instance,
• Launch an instance from an AMI,
• Start a Run Command, and
• Run other automations.
With AWS Systems Manager Patch Manager, customers can manage patching for
their Windows or Linux servers in Amazon EC2 or on premises. To use Patch Manager,
customers must complete a number of tasks:
• They must create a patch baseline, or use one of the many Amazon-provided
baselines. A baseline defines which patches are approved for installation on the
instances. Customers can approve or reject specific patches, or create automatic
approval rules for certain types of updates, such as critical security updates.
• Customers must organize instances into patch groups. Patch groups tie instances to
a patch baseline. For example, customers can create patch groups for
Development, Test, and Production, and apply new patches to the Test group. They
can also use patch groups for reporting purposes, for example, to show which
patches are applied on production Windows Servers.
• Customers must also schedule patches to be applied by assigning them to a
Maintenance Window.
• Finally, customers must monitor patch completion and compliance status
information.

Patch Groups organize and associate instances with a specific patch baseline. They
help ensure that customers deploy appropriate patches to the correct set of
instances. Customers can view status and compliance by patch group. There can be
many patch groups, but each instance can be a member of only one patch group.
AWS Systems Manager Maintenance Windows let customers schedule and control
running potentially disruptive administrative tasks. Each Maintenance Window has a
schedule. Customers use cron or rate expressions to schedule when maintenance
starts.

They set a 1- to 24-hour maximum duration. Setting a duration does not stop
running tasks; it only stops scheduling remaining tasks.

Customers can perform Run Command, Automation, Lambda, and Step Functions
tasks in a Maintenance Window on instances selected by ID or tag. Maintenance
Windows retain execution history for 30 days.
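
A sketch of creating such a window with the AWS Tools for Windows PowerShell; the
name and the cron expression (Sundays at 04:00 UTC) are illustrative.

# A four-hour window; stop starting new tasks one hour before it closes
New-SSMMaintenanceWindow -Name "prod-patching" `
    -Schedule "cron(0 4 ? * SUN *)" `
    -Duration 4 `
    -Cutoff 1 `
    -AllowUnassociatedTarget $false
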
AWS Systems Manager Inventory provides visibility into customers’ Amazon EC2 and
on-premises computing environments.

To use Inventory, customers select targets by instance ID or tag, or use State Manager
to create associations. Next, they schedule when to collect inventory metadata, and
choose the types of metadata to collect.

Customers can also create their own custom inventory types, such as rack location, to
add to the metadata collection.

With Inventory, customers create an end-to-end collection of the operating system-
specific information they want to collect. They can query the collection using
attributes as filters, or view inventory types in the AWS console. Inventory can
integrate with AWS Config to record changes over time and detect when an instance’s
configuration drifts from the predefined standards.
Customers use AWS Systems Manager Compliance to scan instances for patch
compliance and configuration inconsistencies.

They can collect and aggregate data from multiple AWS accounts and Regions, and
identify specific resources that aren’t compliant.

By default, Compliance displays current compliance data about Systems Manager
Patch Manager patching and Systems Manager State Manager associations. Systems
Manager Compliance offers these additional benefits and features:

• Customers can view compliance history and change tracking for Patch Manager
patching data and State Manager associations by using AWS Config.
• They can customize Systems Manager Compliance to create their own compliance
types based on their IT or business requirements.
• And, customers can remediate issues by using Systems Manager Run Command,
State Manager, or Amazon CloudWatch Events.
Parameter Store provides a centralized, encrypted store for sensitive information
customers use in administrative tasks to manage instances and operating systems.

Customers use Parameter Store to eliminate manually managing configuration files,
and to store values to use in Run Command, State Manager, and Automation
capabilities, as well as other services, such as AWS Lambda, AWS CloudFormation,
and Amazon EC2 Container Service.

AWS Key Management Service, or KMS, integration helps customers encrypt their
sensitive information and protect their keys’ security.

Customers can track changes to parameters by using version control.

Access to parameters is managed with IAM, so customers can limit access to the
users who need it and to the resources where the parameters can be used.
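
As a short illustration, a hypothetical secret can be stored encrypted and read back
with the AWS Tools for Windows PowerShell; the parameter name and value are
placeholders.

# Store a value encrypted with the account's default AWS KMS key
Write-SSMParameter -Name "/prod/sql/sa-password" `
    -Type "SecureString" `
    -Value "PLACEHOLDER-secret"

# Read it back, decrypted, for use in a script or automation step
(Get-SSMParameter -Name "/prod/sql/sa-password" -WithDecryption $true).Value
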
With AWS Systems Manager, customers can view and control their infrastructure both
in the AWS Cloud and on premises. Customers can automate operational tasks, and
maintain inventory and compliance from a single service.
AWS CloudFormation enables customers to provision and manage their infrastructure
as code. With AWS CloudFormation, customers can plan and design their architecture
to be secure, reliable, performant, and efficient. Customers tell AWS CloudFormation
what must be created, not how to create it.

AWS CloudFormation uses text-based templates to describe infrastructures to build
with AWS services and resources. Customers can manage the templates as source
code, with change control, validation, and versioning.

AWS CloudFormation allows for reusable component design strategies and
understands dependencies, so it supports rollbacks and versioning of the
infrastructure stacks it provisions.
To set up self-managed Active Directory Domain Services, or AD DS, across two
Availability Zones, customers must manually perform the steps shown here, as an
overview:

1. First, they sign in to the AWS console.
2. Then, they choose a Region and key pair to use.
3. Next, they set up the Amazon VPC, including private and public subnets in two
Availability Zones.
4. They configure two NAT gateways in the public subnets,
5. Configure private and public routes, and
6. Enable inbound traffic into the Amazon VPC for administrative access to Remote
Desktop Gateway, or RD Gateway.
7. Customers then create Systems Manager Automation documents that set up and
configure AD DS and AD-integrated DNS.
8. They store the alternate domain administrator credentials in Secrets Manager,
and
9. Use Secrets Manager to generate and store Restore Mode and Domain
Administrator passwords.
10. At this point, customers launch instances using the Windows Server 2016 AMI,
and
11. configure security groups and rules for traffic between instances.
12. Finally, they set up and configure Active Directory sites and subnets.
Quick Starts are built to help customers deploy popular technologies on AWS. Quick
Starts include AWS CloudFormation templates to automate the deployment.

Here are the steps a customer would follow for an AWS CloudFormation Quick Start
to set up self-managed Active Directory Domain Services across two Availability
Zones. The customer would:

1. Sign in to the AWS console.
2. Choose a Region and key pair to use.
3. Launch the AWS Quick Start, which specifies the template to use, and
4. Specify details for the parameters.
The AWS CloudFormation service helps model and set up Amazon Web Services
resources. A JSON or YAML text template describes the AWS resources in the
architecture, and AWS CloudFormation provisions and configures those resources,
handling the dependencies among them.

Instead of manually using individual services to provision resources for an application,
and configuring them to work together, customers can create or modify an existing
AWS CloudFormation template that they use to create an AWS CloudFormation stack.

A stack is a collection of resources a customer manages as a unit, which simplifies
resource deployment. When a customer creates an AWS CloudFormation stack, AWS
CloudFormation provisions the resources, configures their properties, and starts the
resources. When the stack is deleted, AWS CloudFormation terminates and deletes
the resources for the customer.

Because the AWS CloudFormation template describes all the resources customers
need for an application, they can replicate an application by reusing the template. If
they need additional availability, for example, they can use the same template to
create stacks consistently and repeatedly in multiple Regions. Or, they can start a
disaster recovery site, which is always in sync and provisioned the same way as the
production architecture but in a different Region.

Because the AWS CloudFormation template is a text file, customers can manage
infrastructure revisions and change control as they would manage source code. When
they need to change resources, such as upgrading resources in the stack, they can
compare the changed template with the original, and create a change set. By using
change sets, customers can preview how implementing the changes to the stack
might impact resources that are running. Customers can decide whether to
implement the changes or explore other changes instead.

In addition, customers can use previous versions of the template to revert the
infrastructure to a previous version.
To use AWS CloudFormation, customers must complete the following steps:
1. To begin, a customer must create or use an existing template. Customers can
create a YAML or JSON file, or use the AWS CloudFormation Designer to build the
template graphically. They can start from an example template from the AWS
CloudFormation Sample Template Library to learn the basics of creating a
template. They can also use Parameters in the template to declare values to use
when they create the stack.
2. Next, the customer saves the template locally or in an S3 bucket.
3. Then, the customer uses AWS CloudFormation to create a stack based on the
saved template, by using the AWS Management Console’s AWS CloudFormation
console or the command line interface.
4. Finally, while AWS CloudFormation configures and constructs the resources
specified in the stack, the customer monitors the resource creation process in the
AWS CloudFormation console. When the stack reaches the status
CREATE_COMPLETE, the customer can start using the resources.
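
Steps 3 and 4 might look like the following with the AWS Tools for Windows
PowerShell; the stack name, template file, and parameter are placeholders.

# Create the stack from a local template, passing one parameter
New-CFNStack -StackName "ad-lab" `
    -TemplateBody (Get-Content -Raw .\template.yaml) `
    -Parameter @{ ParameterKey = "KeyPairName"; ParameterValue = "my-keypair" }

# Block until the stack reports CREATE_COMPLETE (or fails)
Wait-CFNStack -StackName "ad-lab" -Status "CREATE_COMPLETE"
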
The stack sets feature extends stacks by enabling customers to create, update, and
delete stacks across multiple accounts and Regions with a single operation. After
customers set up a trust relationship among the accounts where they create stacks,
the AWS CloudFormation template uses stack sets to allow the customer to create,
update, and delete stacks in specified target accounts and Regions.
In this module, you learned how to automate Microsoft workloads operations with
AWS services.

You also learned how to migrate virtual machines and server applications to AWS.

Finally, you learned how to use AWS services to provision workload environments,
automate change and configuration, and provide ongoing maintenance.
In this module, you will learn how to use AWS to build and run .NET applications.

You will also learn what tools to use to build architectures that support .NET, and
what code management services and code build architectures are available.

Finally, you will learn how to use AWS PowerShell to automate functions from
scripted solutions.
AWS provides full support for .NET applications and Windows workloads.
Additionally, AWS supports .NET and .NET Core, including .NET Core 2.1, in services
such as AWS Lambda, AWS X-Ray, and AWS CodeStar for building modern serverless
and DevOps-centric solutions. These services provide deep integration with tools
developers already use to build .NET apps, like Visual Studio and Visual Studio Team
Services. This means developers can work with familiar tools while they benefit from
the broad variety of AWS products and services.
The AWS Tools for PowerShell download is a Microsoft Software Installer, or MSI,
package that installs the following components:
• Microsoft .NET Framework Features
• AWS SDK for .NET
• AWS Tools for Windows PowerShell
• AWS Command Line Interface

The AWS Tools for Windows PowerShell provides PowerShell modules that are built
on the functionality exposed by the AWS SDK for .NET. The AWS PowerShell tools
enable customers to script operations on AWS resources from the PowerShell
command line. Although the cmdlets are implemented using the service clients and
methods from the SDK, the cmdlets provide an idiomatic PowerShell experience for
specifying parameters and handling results.

The Tools for Windows PowerShell and Tools for PowerShell Core are flexible in how
they enable customers to handle credentials, including support for the AWS Identity
and Access Management, or IAM, infrastructure. Customers can use the tools with
IAM user credentials, temporary security tokens, and IAM roles.
The Tools for PowerShell supports the same set of services and Regions that are
supported by the SDK. Customers can install the Tools for PowerShell on computers
running Windows, Linux, or macOS operating systems.
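
As a quick illustration of that credential handling, a profile can be stored once and
made the session default; the keys below are the documented AWS example
placeholders, never real credentials.

# Save a credential profile, then make it the default for this and future sessions
Set-AWSCredential -AccessKey "AKIAIOSFODNN7EXAMPLE" `
    -SecretKey "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" `
    -StoreAs "default"

Initialize-AWSDefaultConfiguration -ProfileName "default" -Region "us-east-1"
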
PowerShell Desired State Configuration, or DSC, is built on open standards. It provides
a configuration management platform that is built into Windows operating systems
beginning with Windows Server 2012 R2 and Windows 8.1, and it’s also provided for
Linux. Some AWS Quick Starts that run Windows Server instances use DSC.

DSC is flexible enough to function reliably and consistently in each stage of the
deployment lifecycle of development, test, pre-production, and production, as well as
during scale-out.

DSC is a declarative platform used for configuration, deployment, and management
of systems. It consists of three primary components – configurations, resources, and
the Local Configuration Manager, or LCM:
• Configurations are declarative PowerShell scripts that define and configure
instances of resources. When running the configuration, DSC and the resources
being called by the configuration will perform the functions that ensure that the
system exists in the state laid out by the configuration.
• Resources contain the code that puts and keeps the target of a configuration in the
specified state. Resources reside in PowerShell modules and can be written to
model an element as generic as a file or a Windows process, or as specific as an IIS
server.
• The LCM is the engine by which DSC facilitates the interaction between resources
and configurations. The LCM regularly polls the system using the control flow
implemented by resources to ensure that the state defined by a configuration is
maintained. If the system is out of state, the LCM makes calls to the code in
resources to bring the system into compliance according to the configuration.

DSC uses lightweight commands called cmdlets to express a desired state. DSC
provides a similar framework to Chef and Puppet.
When using DSC to apply a desired configuration for a system, the customer creates a
configuration script with PowerShell that explains what the system should look like.
Customers use the configuration script to generate a Management Object Format, or
MOF, file, which is then pushed or pulled by a node to apply the desired state.
PowerShell DSC uses vendor-neutral MOF files to enable cross-platform
management, so the node can be either a Windows or a Linux system.
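
A minimal sketch tying the three components together – a configuration that declares
one resource, the MOF it compiles to, and the LCM applying it; the feature and output
path are illustrative.

# A configuration declaring that IIS must be present on the local node
Configuration WebServerBaseline {
    Import-DscResource -ModuleName PSDesiredStateConfiguration
    Node "localhost" {
        WindowsFeature IIS {
            Name   = "Web-Server"
            Ensure = "Present"
        }
    }
}

# Compiling the configuration emits a MOF file into the output path
WebServerBaseline -OutputPath "C:\DSC\WebServerBaseline"

# The LCM applies the MOF and keeps the node in the declared state
Start-DscConfiguration -Path "C:\DSC\WebServerBaseline" -Wait -Verbose
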
AWS provides a variety of tools to help .NET developers work with AWS products and
services.

The AWS SDK for .NET is an open-source toolkit that helps Windows developers build
.NET applications that tap into the cost-effective, scalable, and reliable AWS
infrastructure services, such as Amazon S3, Amazon EC2, AWS Lambda, and more.
The AWS SDK for .NET supports development on any platform that supports the .NET
Framework 3.5 or later. The AWS SDK for .NET also targets .NET Standard 1.3.
Customers can use it with .NET Core 1.x or .NET Core 2.0.

The Toolkit for Visual Studio is a plugin for the Visual Studio integrated development
environment, or IDE, that makes it easier to develop, debug, and deploy .NET
applications that use AWS. The Toolkit for Visual Studio provides Visual Studio
templates for AWS services, and deployment wizards for web applications and
serverless applications.

The AWS Tools for Microsoft Visual Studio Team Services, or VSTS, adds tasks to
enable build and release pipelines in VSTS and Team Foundation Server, or TFS, to
work with AWS services. Customers can work with Amazon S3, AWS Elastic Beanstalk,
AWS CodeDeploy, AWS Lambda, AWS CloudFormation, Amazon SQS, and Amazon
SNS. Customers can also run commands using the Windows PowerShell module and
the AWS CLI.

Finally, the AWS Cloud Development Kit, or CDK, supports .NET. The AWS CDK is a
software development framework that defines cloud infrastructure code and
provisions it through AWS CloudFormation.
The AWS Toolkit for Visual Studio provides Visual Studio project templates that
customers can use as starting points for AWS console and web applications. As a
customer’s application runs, they can use the AWS Explorer to view the AWS
resources used by the application. For example, if an application creates buckets in
Amazon S3, the customer can use AWS Explorer to view the buckets and their
contents. If a customer needs to provision AWS resources for an application, the
customer can create them manually using the AWS Explorer or use the AWS
CloudFormation templates included with this toolkit to provision web application
environments hosted on Amazon EC2.

In this example, you can see how to:
• Browse files stored in an S3 bucket,
• Upload and download files, and
• Create pre-signed URLs to objects to pass around and change the permissions of
files.

If the bucket was used with Amazon CloudFront, you could also perform invalidation
requests in the bucket browser.
The AWS Tools for Microsoft Visual Studio Team Services adds tasks to easily enable
build and release pipelines in VSTS and Team Foundation Server to work with AWS
services, including Amazon S3, AWS Elastic Beanstalk, AWS CodeDeploy, AWS
Lambda, AWS CloudFormation, Amazon SQS, and Amazon SNS, and run commands
using the AWS Tools for Windows PowerShell module and the AWS CLI.

• Using VSTS, customers can transfer files to and from Amazon S3 buckets. Customers
can upload files to an S3 bucket with the Amazon S3 upload task or download from
a bucket with the Amazon S3 download task.

• Customers can also deploy .NET Core serverless applications or standalone
functions to AWS Lambda. Lambda functions can then be invoked in the build or
release pipeline.

• Customers can create and update AWS CloudFormation stacks.

• They can also deploy applications to AWS Elastic Beanstalk, including ASP.NET or
ASP.NET Core applications.
• Customers can deploy to Amazon EC2 with AWS CodeDeploy.

• They can send a message to an SNS Topic or SQS Queue by running AWS Tools for
Windows PowerShell scripts.
• Customers can use cmdlets from the AWS Tools for Windows PowerShell module,
optionally installing the module before use.

• And, customers can run AWS CLI commands against an AWS connection.

For more information about VSTS tools, see the AWS VSTS tools repository on GitHub.
AWS Tools for Microsoft Visual Studio enables customers to quickly deploy and
manage applications in the AWS Cloud without worrying about the infrastructure.

With Visual Studio 2013, 2015, and 2017, customers can directly deploy applications
to Elastic Beanstalk.

Customers can deploy .NET Core 1.0, 1.1, 2.0, and 2.1 web applications, and .NET
Framework web applications.
Follow the link on the screen to watch Jill from AWS demonstrate how to deploy
applications faster by using the AWS Visual Studio Toolkit and AWS Elastic Beanstalk.

https://www.youtube.com/watch?v=B190tcu1ERk (4:23)
The AWS Cloud Development Kit, or AWS CDK, is an open-source software
development framework that customers use to define cloud infrastructures in code,
and provision them in AWS CloudFormation. The CDK integrates with AWS services
and provides a higher-level object-level abstraction to define AWS resources.

Customers can use C# and other common language runtime-based programming
languages with AWS CDK for .NET to define AWS infrastructure.

With CDK, customers can use simple constructs to build infrastructure rather than
complex AWS resource configuration code.
They can write CDK code by using familiar development environments such as Visual
Studio, Visual Studio Code, or JetBrains Rider.

Best practices are built into the CDK, so customers’ code follows sensible, safe
defaults while still allowing infrastructures to fit the use case.

Customers can create and share their own CDK constructs, which are packaged in the
.NET NuGet format.

AWS CodePipeline is a fully managed continuous delivery service that helps
customers automate their release pipelines for fast, reliable application and
infrastructure updates. CodePipeline automates the build, test, and deploy phases of
a release process every time there is a code change, based on the release model
defined. This enables customers to rapidly and reliably deliver features and updates.

Customers can easily integrate AWS CodePipeline with third-party services, such as
GitHub and others shown here, or with custom plugins. With AWS CodePipeline,
customers only pay for what they use. There are no upfront fees or long-term
commitments.
AWS CodeStar is a cloud-based service for creating, managing, and working with
software development projects on AWS. Customers can quickly develop, build, and
deploy applications on AWS with an AWS CodeStar project. An AWS CodeStar project
creates and integrates AWS services for a project development toolchain. Depending
on the choice of AWS CodeStar project template, the toolchain might include source
control, build, deployment, virtual servers, serverless resources, and more. AWS
CodeStar also manages the permissions required for project users, who are called
team members. By adding users as team members to an AWS CodeStar project,
project owners can efficiently grant each team member role-appropriate access to a
project and its resources.
This slide depicts how AWS services for DevOps align to steps in application lifecycle
management.
In this module, you learned how to use AWS to build and run .NET applications.

You also learned what tools to use to build architectures that support .NET, and what
code management services and code build architectures are available.

Finally, you learned how to use AWS PowerShell to automate functions from scripted
solutions.
In this course, you learned the technical fundamentals of running Microsoft
workloads on Amazon Web Services, or AWS. You learned about the various tools
available to migrate, develop, build, deploy, manage, and operate Microsoft
applications and Windows Servers on AWS. You saw case studies and reference
architectures to showcase how some AWS customer architectures have been
designed for common Microsoft workloads including SQL and Active Directory. This
course is available in both instructor-led and web-based delivery formats.
In this course, you learned how to:
- Provide a technical overview of Microsoft workloads on AWS
- Discuss the technical advantages and positioning for Microsoft workloads on AWS,
- Provide guidance to customers who are architecting common Microsoft workloads
for AWS, and
- Explain the various tools to develop, deploy, and manage Microsoft workloads on
AWS
We discussed seven topics:
• Module one covered how to position AWS for managing and hosting Microsoft
workloads.
• Module two covered how to architect foundational AWS services to support
running Microsoft workloads.
• Module three covered how to run Microsoft Windows Server instances in Amazon
EC2, and create custom Amazon Machine Images (AMI) for running Microsoft
workloads.
• Module four covered how to deploy and run Directory services in AWS.
• Module five covered running SQL Server databases on AWS.
• Module six covered how to automate operations with AWS services.
• Module seven covered how tools Amazon provides help you to build and run .NET
applications on AWS.
