Microsoft Workloads in AWS
This course covers how to run Microsoft
workloads on Amazon Web Services, or AWS. The course is designed for pre-sales
engineers at APN Consulting partner organizations to learn how to discuss the
technical advantages of AWS for Windows. You will learn about the various tools
available to migrate, develop, build, deploy, manage, and operate Microsoft
applications and Windows Servers on AWS. You will see case studies and reference
architectures to showcase how some AWS customer architectures have been
designed for common Microsoft workloads including SQL and Active Directory. This
course is available in both instructor-led and web-based delivery formats.
In this course, you will learn how to:
- Provide a technical overview of Microsoft workloads on AWS
- Discuss the technical advantages and positioning for Microsoft workloads on AWS
- Provide guidance to customers who are architecting common Microsoft workloads
for AWS
- Explain the various tools to develop, deploy, and manage Microsoft workloads on
AWS
This course is organized into seven topics:
• Module one covers how to position AWS for managing and hosting Microsoft
workloads.
• Module two covers how to architect foundational AWS services to support running
Microsoft workloads.
• Module three covers how to run Microsoft Windows Server instances in Amazon
EC2, and create custom Amazon Machine Images (AMI) for running Microsoft
workloads.
• Module four covers how to deploy and run Directory services in AWS.
• Module five covers running SQL Server databases on AWS.
• Module six covers how to automate operations with AWS services.
• Module seven covers how AWS tools help you build and run .NET
applications on AWS.
In the first module, you learn how to position AWS for managing and hosting
Microsoft workloads.
You will learn which drivers and challenges lead to using AWS for Microsoft
workloads, and the benefits AWS provides.
You also learn how to assess current workloads to find cost savings when moving
to AWS.
AWS focuses on helping customers optimize their investments in enterprise
applications. We recognize that Microsoft software is widely used by customers of all
sizes.
Microsoft, VMware, SAP, IBM, and Oracle still represent major contributors to IT
budgets. Customers want to reduce their technical debt with these legacy enterprise
software investments. APN Partners can help customers do this by migrating their
workloads to AWS.
AWS offers security services that provide fine-grained control. AWS holds many
security certifications and supports highly available applications.
AWS also has experience building reliable, secure, scalable, and cost-effective
infrastructure that serves active customers every month.
Visit the AWS web site for the most up-to-date list of services.
AWS has more than 10 years of innovation for Microsoft workloads that run on AWS.
AWS offers over 150 Amazon Elastic Compute Cloud, or Amazon EC2, instance types.
AWS also offers more than 60 different Amazon Machine Images, or AMIs, for
Microsoft workloads. Recently, AWS announced the availability of Windows Server
2019 AMIs for Amazon EC2. Windows Server 2019 offers a variety of new features,
including smaller and more efficient Windows containers, support for Linux
containers for application modernization, and the App Compatibility Feature on
Demand. Windows Server 2019 AMIs are available in all public AWS Regions and in
AWS GovCloud (US).
Microsoft Premier Support helps AWS assist end customers. AWS and Microsoft have
new areas of support integration to help customers.
In addition, AWS support engineers can escalate issues directly to Microsoft Support
on behalf of AWS business or enterprise tier customers who run Microsoft
workloads. AWS does not share any customer information or specific details without
the customer’s permission.
Secure: The AWS Cloud uses a security-in-layers approach to provide the protection
that organizations require without sacrificing scale, control, speed, or performance. In
addition to several options for network security, AWS protects data and applications
with 256-bit encryption and provides fine-grained access controls to resources via
AWS Identity and Access Management (IAM).
Reliable: AWS for Microsoft workloads offers a highly reliable environment where
replacement instances can be rapidly and predictably provisioned. The AWS Service
Level Agreement commitment is designed for 99.95% availability for each Region.
Each Region comprises at least two physically isolated facilities that are known as
Availability Zones, which helps keep instances highly available. AWS currently features
61 Availability Zones in 20 Regions. These Regions provide organizations the
reassurance that their mission-critical data and applications will be available, even in
the face of natural disasters and other rare events that might cause systems failures.
Familiar: AWS for Microsoft workloads is compatible with widely used management
applications, such as Microsoft System Center and VMware vCenter. Add-ins were
developed to provide seamless integration between these traditional applications
and the AWS Cloud. This enables organizations to use existing tools from a single,
familiar console to manage both on-premises virtual machines and Microsoft-based
cloud workloads.
Cost-effective: AWS for Microsoft workloads is the solution for organizations that
must access enterprise-grade computing resources in an affordable way. A global
cloud-computing infrastructure enables organizations to benefit from economies of
scale, which reduces the total costs of enterprise IT. AWS is designed to offer value by
enabling elastic consumption that scales with customers’ needs, pay-as-you-go
pricing models, and no long-term service commitments.
Flexible: With AWS for Microsoft workloads, organizations have the flexibility to
choose the computing, storage, and networking capacity they need, which services to
use, and how they want to use them. Elastic service capabilities allow scaling of
resources up or down in real-time as needs change, enabling a lean, adaptable
infrastructure. Automation capabilities can be enabled to leverage this elasticity
instantly based on easily customizable rules and volume thresholds.
Extensive: AWS for Microsoft workloads offers an extensive line of features and
services. AWS has been continually expanding its services to support virtually any
cloud workload, and it now has more than 40 services that span compute,
storage, networking, database, analytics, application services, deployment,
management, and mobile. Designed to work together, these services are highly
customizable and accessible through a variety of programming interfaces, including
.NET, Visual Studio, and Windows PowerShell. AWS expands and improves these services
continually.
Innovative: AWS's rapid pace of innovation helps enterprises focus on what's most
important to them when navigating through the many services available.
Here’s another reason companies choose AWS: global reach.
AWS has the largest global footprint of any cloud provider in the market today. Each
AWS Region has multiple Availability Zones and data centers. AWS has been running
high quality cloud infrastructure technology products and services since 2006. We
know our customers care about the availability and performance of their applications.
With AWS, customers can deploy applications across multiple Availability Zones in the
same Region for increased fault tolerance and low latency. Availability Zones are
connected to each other with fast, private fiber-optic networks. Customers can easily
architect applications that automatically fail over between Availability Zones without
interruption.
For the most up-to-date details on AWS Regions, Availability Zones, edge locations,
and data centers, visit our website.
In this section, you will learn about some specific use cases for Microsoft workloads.
You can run the full array of Microsoft workloads on AWS. Several
examples are shown here.
We continually expand our services to support virtually any cloud workload. We now
have over 100 services to offer. Among those offerings, AWS provides numerous
Windows and .NET services and functionality.
AWS also offers a broad selection of services along with much deeper functionality
within most of these services, including deeper functionality for Windows such as the
AWS Deep Learning AMI for Microsoft Windows Server and the first fully managed
native-Windows file system available in the cloud with FSx for Windows File Server.
Shown here are some use cases for Microsoft workloads on AWS.
Innovate
There are also many integration points between Microsoft workloads and the broad
set of AWS services, which can enable you to innovate and drive business agility.
Amazon EC2 makes it easy to start and manage your Windows instances.
You can run Microsoft Windows Server 2008 and 2008 R2, 2012 and 2012 R2, 2016
and 2019 on EC2 instances. Amazon EC2 instances that run Microsoft Windows
Server provide a secure, reliable, and high-performance environment for deploying
applications and Windows workloads. You can also use preconfigured Amazon
Machine Images, or AMIs, with different combinations of Windows and SQL Server to
help you migrate your Windows workloads quickly.
You can use AWS options to maintain legacy applications in the AWS Cloud, or rewrite
legacy applications while you migrate to more modern operating systems.
Instructor note regarding 32-bit applications, Windows 2003: AWS no longer provides
AMIs that support these operating systems.
Microsoft workloads can use AWS services in multiple ways. This example shows
Active Directory, which is a fundamental Windows workload.
For the first two options, you must still administer your deployments:
• You install and manage domain controllers, and
• You manually join EC2 instances to your self-managed AD.
Customers can extend their Active Directory domain to AWS and use the identities
they manage in Active Directory Domain Services to access Office online.
SQL Server is another foundational workload, and you can choose from multiple
options for deployment.
Amazon RDS
Amazon Relational Database Service, or Amazon RDS, is a managed service that
makes it easy to deploy a relational database to support line-of-business applications
that run on AWS:
• It automates database administration tasks, such as provisioning, patching,
backup, recovery, failure detection, and repair;
• Multi-AZ deployments provide automatic failover; and
• It integrates with IAM for granular control over resource permissions.
Re-factor
Additional savings and flexibility can be realized with a move to a variety of open
source database solutions on AWS. Customers can save significant cost by moving off
of the proprietary SQL Server engine and onto a fully managed relational database
service like Amazon Aurora, which is compatible with the open source MySQL and
PostgreSQL engines. AWS offers refactoring tools and services to help customers
move to cloud-native solutions such as Aurora.
Whether you decide to self-manage your customer’s environment with Amazon EC2
or deploy to a managed service with Amazon RDS, you will have:
A cost-effective option for hosting SQL Server;
Complete control for managing software, compute, and storage resources; and
Rapid provisioning through relational database AMIs that enable you to store
database machine images.
The key to continued cost savings is the efficient management of ongoing operations.
AWS provides a set of management tools that enables you to programmatically
provision, monitor, and automate all the components of your cloud environment.
Using these tools, you can maintain consistent controls without restricting the speed
of development. AWS provides five management tools that work together
and are integrated with every part of the AWS Cloud:
• AWS CloudFormation – Model and provision all your cloud infrastructure
resources;
• AWS Systems Manager – Gain operational insights and take action on AWS
resources;
• Amazon CloudWatch – Gain visibility of your cloud resources and applications;
• AWS License Manager – Set rules to manage, discover, and report software license
usage; and
• AWS OpsWorks – Automate operations with Chef and Puppet.
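As a hedged sketch of the first tool, an AWS CloudFormation template is just structured JSON or YAML; the fragment below builds a minimal template as a Python dictionary. The logical resource name, instance type, and AMI ID placeholder are illustrative assumptions, not values taken from this course.

```python
import json

# Illustrative only: a minimal CloudFormation template that declares one
# Windows Server instance. Replace the AMI ID placeholder with a real
# Windows AMI for your Region before deploying.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Example: one Windows Server instance",
    "Resources": {
        "WindowsInstance": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "InstanceType": "t3.large",        # assumed size
                "ImageId": "ami-EXAMPLE",          # placeholder AMI ID
                "Tags": [{"Key": "Name", "Value": "win-example"}],
            },
        }
    },
}

print(json.dumps(template, indent=2))
```

Because the template is plain data, it can be generated, versioned, and reviewed like any other code before CloudFormation provisions the resources it describes.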
AWS provides full support for .NET applications and Windows Workloads.
Additionally, AWS supports various features in .NET, .NET Core, and .NET Core 2.1. AWS
services such as AWS Lambda, AWS X-Ray, and AWS CodeStar can help build modern
serverless and DevOps solutions. These services also provide deep integration with
tools that developers already use to build .NET applications, like Visual Studio and
Visual Studio Team Services. This means that developers can use familiar tools and
also benefit from using the breadth of AWS products and services. To help developers
learn about various AWS services and get started quickly, AWS provides a range of
resources and tools, and AWS also offers a GitHub community.
You can access many case studies on the AWS website, which discuss customer
success stories.
This diagram illustrates how the shared responsibility model works and which
elements are part of each type of responsibility.
AWS also:
• Obtains industry certifications and independent third-party attestations
• Publishes information about AWS security and control practices in
whitepapers and website content
• Provides certificates, reports, and other documentation directly to AWS
customers as required under non-disclosure agreements
When you use services that are managed by AWS, such as Amazon RDS,
Amazon Redshift, or Amazon WorkDocs, you don’t have to worry about
launching and maintaining instances or patching the guest OS or applications.
AWS handles these tasks for you. For these managed services, basic security
configuration tasks happen automatically, such as data backups, database
replication, and firewall configuration.
However, there are certain security features that you should configure, no
matter which AWS service you use. These include user accounts and
credentials for AWS Identity and Access Management, or IAM; SSL for data
transmissions; and user activity logging.
The shared responsibility model for infrastructure services like Amazon EC2
specifies that AWS manages the security of the following assets:
• Facilities, including Regions, Availability Zones, and edge locations;
• Physical security of hardware;
• Network infrastructure; and
• Virtualization infrastructure.
Customers are responsible for the security of their cloud computing assets,
including:
• Amazon Machine Images, or AMIs;
• Operating systems;
• Applications;
• Data in transit;
• Data at rest;
• Data stores;
• Credentials; and
• Policies and configuration.
Compliance requirements are deeper than data sovereignty regulations and
geographic location. To implement the necessary security controls across the
operating environment, AWS recommends a layered security approach. AWS offers
complementary features and services to implement the necessary controls. Many of
these control measures apply to layers that AWS controls, which means that AWS
handles the security of the cloud, specifically the physical infrastructures that host
your resources.
In the next slides, you learn about additional security controls you can implement,
including:
• Virtual private clouds, or VPCs, which also include subnets;
• Security groups;
• Network access control lists, or network ACLs; and
• Firewalls.
Amazon Virtual Private Cloud, or Amazon VPC, allows you to add another
layer of network security to your instances. You can use Amazon VPC to
create private subnets, and you can even add an IPsec virtual private network,
or VPN, tunnel between your network and your VPC. Amazon VPC enables
you to define your own network topology, including definitions for subnets,
network access control lists, internet gateways, routing tables, and virtual
private gateways. The subnets that you create can be defined as either private
or public.
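As an illustration of defining your own network topology, the sketch below carves an example VPC CIDR block into four equal subnets using only the Python standard library. The CIDR range and the public/private split are assumptions for the example.

```python
import ipaddress

# Illustrative only: split an example VPC CIDR into four /26 subnets,
# e.g. one public and one private subnet in each of two Availability Zones.
vpc = ipaddress.ip_network("10.0.0.0/24")
subnets = list(vpc.subnets(new_prefix=26))

public_a, public_b, private_a, private_b = subnets
print(public_a)   # 10.0.0.0/26
print(private_b)  # 10.0.0.192/26
```

Planning subnet ranges up front this way helps avoid overlaps when you later connect the VPC to an on-premises network over a VPN.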
Security groups are stateful: responses to allowed inbound traffic are allowed
to flow outbound regardless of outbound rules, and vice versa. Traffic can be
restricted by IP protocol, by service port, and by source or destination IP
address. These IP addresses can be individual IP addresses or IP addresses
that are in a Classless Inter-Domain Routing, or CIDR, block. You can also
restrict traffic sources to those that come from other security groups. If you
add and remove rules from the security group, those changes are
automatically applied to the instances that are associated with the security
group.
Note: These virtual firewalls cannot be controlled through the guest OS;
instead, they can be modified only through the invocation of Amazon VPC
application programming interfaces, or APIs.
The level of security provided by the firewall is a function of the ports that you
open, and for what duration and purpose. Well-informed traffic management
and security design are still required on a per-instance basis. AWS further
encourages you to apply additional per-instance filters with host-based
firewalls, such as iptables or the Windows Firewall, so they can be state-
sensitive, dynamic, and respond automatically.
In this example, a security group allows inbound requests to port 443 to the remote
desktop gateway only if the request comes from one of the corporate data center IP
addresses. This allows the data center staff to connect to the remote desktop
gateway, and it blocks all connection requests from other IP address spaces.
A second security group for the application server allows connections to port 3389
only if they come from instances in the remote desktop gateway security group. This
allows the instance to remain in a private subnet, while allowing server
administrators to manage the application server by connecting through the remote
desktop gateway.
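The two-tier example above can be sketched as a toy rule check. This is illustrative only; the corporate CIDR block and security group names are assumptions, and real security groups are enforced by AWS, not by application code.

```python
import ipaddress

# Assumed corporate data center range for the example (documentation CIDR).
CORPORATE_CIDR = ipaddress.ip_network("203.0.113.0/24")

def gateway_allows(src_ip: str, port: int) -> bool:
    """Remote desktop gateway SG: port 443 only from the corporate CIDR."""
    return port == 443 and ipaddress.ip_address(src_ip) in CORPORATE_CIDR

def app_server_allows(src_group: str, port: int) -> bool:
    """App server SG: port 3389 only from the gateway security group."""
    return port == 3389 and src_group == "rdg-sg"

print(gateway_allows("203.0.113.10", 443))  # True: corporate data center
print(gateway_allows("198.51.100.7", 443))  # False: other address space
print(app_server_allows("rdg-sg", 3389))    # True: via the gateway
```

Referencing the gateway security group as the source, rather than an IP range, is what lets the application server stay in a private subnet.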
A network access control list, or network ACL, is an optional layer of security
that acts as a firewall for controlling traffic in and out of a subnet. You can set
up network ACLs with rules similar to your security groups, which adds an
additional layer of security to your VPC.
Network ACLs are stateless; responses to allowed inbound traffic are subject
to the rules for outbound traffic, and vice versa. A network ACL is a numbered
list of rules that are evaluated in order, starting with the lowest numbered rule.
The rules determine whether traffic is allowed in or out of any subnet
associated with the network ACL. A network ACL has separate inbound and
outbound rules, and each rule can either allow or deny traffic.
Each subnet must be associated with a network ACL. If you don't explicitly
associate a subnet with a network ACL, the subnet is automatically associated
with the default network ACL. The default network ACL allows all traffic to flow
in and out of each subnet.
Like security groups, network ACLs are managed through Amazon VPC APIs.
They add an additional layer of protection and enable additional security
through the separation of duties.
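The ordered, first-match evaluation described above can be sketched as follows. The rule numbers, CIDR blocks, and port are illustrative.

```python
import ipaddress

# Toy network ACL: rules are evaluated in ascending rule-number order and
# the first match wins. Anything unmatched hits the implicit default deny.
rules = [
    (100, "allow", ipaddress.ip_network("10.0.0.0/16"), 443),
    (200, "deny",  ipaddress.ip_network("0.0.0.0/0"),   443),
]

def acl_decision(src_ip: str, port: int) -> str:
    src = ipaddress.ip_address(src_ip)
    for _num, action, cidr, rule_port in sorted(rules):
        if src in cidr and port == rule_port:
            return action
    return "deny"  # implicit default deny

print(acl_decision("10.0.1.5", 443))   # allow: matches rule 100 first
print(acl_decision("192.0.2.9", 443))  # deny: falls through to rule 200
```

Leaving numeric gaps between rules (100, 200, ...) is a common convention so that new rules can be inserted later without renumbering.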
Most companies don’t migrate to the cloud and abandon their physical data
centers immediately. In many situations, you must connect the Amazon EC2 instance
to an on-premises Windows domain. A company with an existing data center might
still use that data center for critical tasks, while also extending their capabilities by
hosting specific applications and services in AWS. Companies in such situations can
choose to use a virtual private network, or VPN, solution. A VPN enables users to
establish secure connections into your VPC via an Amazon EC2 instance. Alternatively,
you can use AWS Direct Connect, or DX, to integrate the VPNs that the company
created with their existing data centers. Using DX enables interaction between
computers in the data center and the resources that run in AWS.
DX is a unique solution that helps companies get their important applications access
to the AWS network with scale, speed, and consistency. DX does not involve the
internet. Instead, it uses dedicated, private network connections between your on-
premises solutions and AWS.
Service benefits
DX is useful in several scenarios, and some common scenarios are described in the
following sections.
Transferring large datasets
Consider a high performance computing, or HPC, application that operates on large
datasets that must be transferred between your data center and AWS. For such
applications, you can connect to the cloud using DX.
Network transfers will not compete for internet bandwidth at your data center or
office location.
The high-bandwidth link reduces the potential for network congestion and degraded
application performance.
This type of architecture can be built by using an AWS Quick Start. AWS
CloudFormation templates accelerate the deployment.
You can run .NET applications in EC2 instances that run Windows Server,
and you can run fully managed databases with Amazon RDS for SQL
Server.
• Amazon EC2 instance store – Many instances can access storage from disks that
are physically attached to the host computer. The instance store, which is also
known as ephemeral storage, provides temporary block-level storage for use with
an instance. Instance store volumes are usable only from a single instance during
its lifetime. They can't be detached and then attached to another instance. The
data in an instance store persists only during the lifetime of its associated
instance. If an instance stops or terminates, either intentionally or
unintentionally, the data in the instance store does not persist. Unlike Amazon
EBS volumes, you cannot take snapshots of an instance store. The instance store is
often used to temporarily store things such as swap files or caches.
• Amazon FSx – Amazon FSx is a fully managed, native Microsoft Windows file
system built on Windows Server. With Amazon FSx, you can move your Windows-
based applications that require file storage to AWS. Amazon FSx supports the
Server Message Block, or SMB, protocol; the Windows NT file system, or NTFS;
Active Directory integration; and the Distributed File System, or DFS.
• Amazon EFS – Amazon EFS is a fully managed, cloud-native file system for a broad
range of Linux-based business applications. Accessible via the NFS protocol, it
provides simple, scalable elastic file storage and is easily shared among multiple
applications, instances, and on-premises servers simultaneously.
When you use Amazon EBS, consider these key points:
• Choose storage types that optimize cost and performance, and
• Provision enough IOPS for your workload.
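As a hedged illustration of the second point, the baseline IOPS of a General Purpose SSD (gp2) volume scales with its size. The figures below reflect the commonly documented 3 IOPS per GiB, with a 100 IOPS floor and a 16,000 IOPS ceiling; check the current Amazon EBS documentation before sizing a real workload.

```python
# Back-of-the-envelope gp2 sizing; the constants are the commonly
# documented gp2 figures and should be verified against current EBS docs.
def gp2_baseline_iops(size_gib: int) -> int:
    return max(100, min(3 * size_gib, 16_000))

print(gp2_baseline_iops(20))    # 100   (floor applies)
print(gp2_baseline_iops(500))   # 1500
print(gp2_baseline_iops(8000))  # 16000 (ceiling applies)
```

If a workload needs more IOPS than the baseline of a reasonably sized volume, a Provisioned IOPS volume type is usually the better fit than simply growing a gp2 volume.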
By applying the flexible storage options to a workload, you can architect a performant
and cost-effective solution. For example, say that you are using an Amazon EC2
instance to run Microsoft SQL Server. You would need multiple types of storage with
different requirements for I/O performance, durability, latency sensitivity, and
persistence. For standard database reads and writes, you could use an Amazon EBS
Provisioned IOPS volume. This type of EBS volume could help ensure that the
read/write speed remains consistent during utilization, while also remaining
persistent in the event of disk failure. You could also use a General Purpose SSD
volume for the boot volume of the Amazon EC2 instance because this will not impact
the read/write performance after it is booted. For the TempDB data files, it is critical
that these files have the fastest possible read/write speed, which would use instance
store, or ephemeral storage, volumes. Because these volumes are not persistent, you
could archive the TempDB data files to Amazon S3 on a schedule. In Amazon S3, the
files could be held in a durable state.
With Amazon EBS, you can use any of the standard RAID configurations that you
would use with a traditional bare-metal server, as long as that particular RAID
configuration is supported by the operating system for your instance. This is because
all RAID is accomplished at the software level. For greater I/O performance than you
can achieve with a single volume, RAID 0 can stripe multiple volumes together. For
on-instance redundancy, RAID 1 can mirror two volumes together. However, the
maximum performance of Amazon EBS depends on the instance type.
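The RAID 0 arithmetic can be sketched quickly; the per-volume figures and the instance-level cap below are illustrative, not quoted limits.

```python
# RAID 0 aggregates the performance of its member volumes (software RAID
# in the guest OS), but the instance type caps the achievable total.
def raid0_iops(per_volume_iops: int, volumes: int, instance_cap: int) -> int:
    return min(per_volume_iops * volumes, instance_cap)

print(raid0_iops(4000, 4, 80_000))   # 16000: the sum of the stripes
print(raid0_iops(16000, 6, 80_000))  # 80000: the instance is the bottleneck
```

The second case is why adding more striped volumes stops helping once the instance's EBS bandwidth limit is reached.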
1. It’s Windows-native – Amazon FSx for Windows File Server is built on Windows
Server. It provides file storage that supports the Windows file system features that
you use. It also provides file access via the SMB protocol, and integrates with
Active Directory. Amazon FSx for Windows File Server supports the following
features:
• DFS Namespaces and DFS Replication;
• Access Control Lists, or ACLs;
• NT File System, or NTFS; and
• VSS, or Volume Shadow Copy Service
2. It’s fully managed – Amazon FSx for Windows File Server sets up and provisions
file servers and storage volumes, reducing administrative overhead.
The service automatically updates Windows Server software, detects and corrects
hardware failures, and regularly performs backups.
3. It delivers fast performance – Amazon FSx for Windows File Server is built on SSD
storage and provides per-file-system-throughput of up to 2 GB per second. You
can tune the throughput level independent of your file system size. You can group
multiple Amazon FSx file systems together for up to 10 GB per second of
throughput across petabytes of data.
4. It’s accessible in the AWS Cloud and on-premises – Use AWS Direct Connect and
Virtual Private Networks to connect FSx file shares to services that reside on-
premises. By using VPC peering, you can access FSx across Virtual Private Clouds.
5. It’s secure and compliant – Amazon FSx for Windows File Server automatically
encrypts all your data at rest and your data in transit. Amazon FSx is compliant
with the Payment Card Industry Data Security Standard, or PCI-DSS. For sensitive
workloads that are regulated by the Health Insurance Portability and
Accountability Act, or HIPAA, Amazon FSx is a HIPAA Eligible Service. To control
user access, Amazon FSx supports Windows access control lists, or ACLs. It also
protects your data with automatic daily backups of your file systems. Access is
also controlled by using AWS Identity and Access Management, or IAM, and
VPC security groups. Amazon FSx integrates with AWS CloudTrail to
monitor and log your API calls, letting you see the actions that users take
on your Amazon FSx resources.
6. It fully supports the SMB protocol – Clients include Microsoft Windows Server
2008 and later, Amazon WorkSpaces and Amazon AppStream 2.0, VMware Cloud
on AWS, and Linux distributions that run smbclient.
The following list offers links to more information about topics you learned in this module:
• AWS maintains certifications and attestations that you can reference by visiting the
linked page.
• For an introduction to the AWS security model, please read the linked white paper.
• For more information about VPCs, see the linked page.
• For more information about EC2 instance types, see the linked page.
In this module, you learned how foundational Amazon Web Services, or AWS,
services pertain to running Microsoft workloads. These foundational services include
compute services, such as Amazon Elastic Compute Cloud, or Amazon EC2; storage
services; networking services; and domain services.
You learned how to discuss the shared responsibility model, and how to use Virtual
Private Cloud (VPC), including Security Groups, Network Access Control Lists, and
firewalls.
You also learned how to choose storage options for your Microsoft workloads.
Welcome to the Running Microsoft Windows Server on AWS module.
In this module, you will learn how to run Microsoft Windows Server instances in
Amazon Elastic Compute Cloud, or EC2. You will also learn how to create custom
Amazon Machine Images, known as AMIs, for running Microsoft workloads.
Amazon EC2 offers virtual machines, or instances, that customers can launch and
manage with a few clicks or a few lines of code. EC2 supports Windows Server 2008
through 2019.
With EC2, customers can create, save, and reuse their own server images as Amazon
Machine Images. They can launch one instance at a time, or launch a whole fleet of
instances. Following the pay-as-you-go model, customers can add and terminate
instances as needed.
EC2 offers many types of instances, with various levels of CPU, memory, storage,
networking, graphics, and general-purpose performance.
Customers have full control over virtual instances. Customers have full root access
and/or administrative control over accounts, services, and applications. AWS does not
have any access rights to a customer’s instances or guest operating system, or OS.
AWS Identity and Access Management, or IAM, is used for authentication and
authorization of access to each customer’s AWS resources, but not for OS-level
access. To access the operating system on a customer’s Amazon EC2 instances, the
customer needs a different set of credentials. In the AWS shared responsibility model,
each customer owns the OS credentials, although AWS helps bootstrap the initial
access to the OS.
Customers can connect remotely to their Windows instances by using Remote
Desktop Protocol, or RDP, with an RDP certificate generated for their instance.
They also control the updating and patching of their guest OS, including security
updates.
To provision an Amazon EC2 instance running Windows Server, multiple pieces of
information are required. The list shown here isn’t an exhaustive list; it’s a survey of
some of the most important items needed to provision a running, secure instance
that is a member of a Microsoft Active Directory Domain.
• Starting with item 1, customers must select an Amazon Machine Image to create
a new instance. AMIs provide the base virtual machine image for the
instance. Customers can select one from AWS Marketplace, create one
from an existing Amazon EC2 instance, or use one provided by AWS. You
will learn more about AMIs in the next section.
• Next, customers need network placement and addressing. All Amazon EC2
instances exist in a network. To determine where an instance is placed and
what type of IP addressing is assigned to it by default, customers can check
the Amazon Virtual Private Cloud, or Amazon VPC, settings in which the
instance is launched.
• Third, the instance types and sizes needed to support a customer’s
operating system, application, and Windows Server usage requirements
depend on the workload. Amazon EC2 allows customers to choose from
multiple instance types and sizes to select the proper infrastructure
resources for each instance.
• For domain membership, many enterprise customers use Microsoft Active
Directory Domain Services to manage objects across their corporate environment.
Customers can configure an instance to be treated as a domain object by
provisioning Amazon EC2 instances as members of the Active Directory Domain
when the instances are created.
• By configuring user data, customers can supply a batch file or PowerShell script for
the Windows instance to run when it starts. Customers can completely set up a
new instance without logging in directly to the instance. You will learn more about
user data in the next section.
• An Amazon EC2 instance can use two basic types of block storage –
ephemeral storage or Amazon Elastic Block Store, or Amazon EBS,
volumes. Ephemeral storage exists only for the life of the instance; Amazon
EBS volumes persist even after the instance has been stopped or
terminated.
• Tags help customers manage their instances, images, and other EC2 resources by
assigning categories, such as by owner, purpose, billing entity, or environment.
Customers can assign up to 50 tags to an EC2 instance.
• Finally, security groups are stateful firewalls that surround individual
Amazon EC2 instances and allow customers to control the traffic allowed to
pass to the instance. Security groups are applied to specific instances,
rather than network entry points. This increases security and gives
administrators finer-grained control when they grant access to the
instance.
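The provisioning inputs above map naturally onto the parameters of the EC2 RunInstances API. The sketch below assembles them using boto3-style parameter names; every resource ID is a placeholder, and no API call is made.

```python
# Sketch: mapping the provisioning inputs above onto the parameters of the
# EC2 RunInstances API (boto3 naming). All resource IDs are placeholders,
# and no API call is made here.
launch_params = {
    "ImageId": "ami-0123456789abcdef0",      # item 1: the AMI to launch from
    "SubnetId": "subnet-0123456789abcdef0",  # item 2: network placement in a VPC
    "InstanceType": "m5.xlarge",             # item 3: type and size for the workload
    # item 4 (domain membership) is handled at first boot, via user data or a
    # Systems Manager domain-join document
    "UserData": "<powershell>...</powershell>",  # item 5: first-boot script
    "BlockDeviceMappings": [                 # item 6: persistent Amazon EBS storage
        {"DeviceName": "/dev/sda1",
         "Ebs": {"VolumeSize": 100, "VolumeType": "gp3"}},
    ],
    "TagSpecifications": [                   # item 7: tags for ownership and billing
        {"ResourceType": "instance",
         "Tags": [{"Key": "Owner", "Value": "app-team"}]},
    ],
    "SecurityGroupIds": ["sg-0123456789abcdef0"],  # item 8: stateful firewall
    "MinCount": 1,
    "MaxCount": 1,
}
# With boto3, this dict would be passed as:
#   boto3.client("ec2").run_instances(**launch_params)
```

The same structure carries over to the AWS CLI, where each key becomes a `--` option on `aws ec2 run-instances`.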
Each Amazon EC2 instance type and family offers a different combination of:
• Number of cores
• Amount of memory
• Amount and type of storage
• Network performance
• Intel processor technologies
Each type and family includes multiple sizes – small, medium, large, extra
large, double-extra large, and so forth.
Each deployment is different, so customers should follow Microsoft's detailed
guidance on how to properly size their environments based on the number of
users and workloads involved. As a starting point, customers can consider the
minimum requirements for each server role, add capacity beyond the absolute
minimum to allow for growth, and map the requirements to an Amazon EC2
instance type and size.
For example, the numbers shown here come from the Microsoft SharePoint
deployment guide’s system requirements.
Launching new instances and running tests in parallel is a simple process on AWS.
AWS recommends measuring the performance of applications to identify appropriate
instance types and validate application architecture. Customers should also conduct
rigorous load/scale testing to ensure that their applications can scale as intended.
Customers can avoid overprovisioning and underprovisioning by changing instance
sizes and types as their needs change.
Also, customers should analyze whether their applications can scale across multiple
Amazon EC2 instances by design. They should design applications that are resilient to
reboot and relaunch, to allow for scaling horizontally instead of vertically, where
possible. Tools such as Amazon CloudWatch and AWS Cost Explorer help customers
collect data to track, analyze, and improve expenditures.
In some architectures, using Reserved and Spot Instances to run workloads can
result in significant savings.
An AMI is a template that contains a software configuration, such as an operating
system, application server, and applications. A customer can use an AMI to launch an
instance, which is a copy of the AMI running as a virtual server on a host computer
in an AWS data center. Customers can launch as many instances as they want from an
AMI, and they can also launch instances from as many AMIs as needed.
Some of these AMIs also include an edition of Microsoft SQL Server, which can be
the Enterprise, Standard, Express, or Web edition.
Customers can launch an instance from an AWS Windows AMI with Microsoft SQL
Server to run the instance as a database server. Alternatively, customers can launch
an instance from any Windows AMI and then install the database software that they
need on the instance.
Windows Server 2003 is no longer provided, but customers can deploy their own
Windows Server 2003 – 32 or 64 bit – in Amazon EC2 to give them time in a secure
and stable environment while they migrate to a more modern OS.
For more information about currently available Windows AMIs, use the link shown
here.
After customers successfully launch and log in to an instance, they can configure the
instance for a specific application’s requirements. EC2Launch is a set of Windows
PowerShell scripts that runs on Windows Server 2016 and later AMIs. The EC2Launch
scripts replace the EC2Config service that is included on Windows Server 2012 R2 and
earlier AMIs. Both scripts provide similar functions.
EC2Launch performs the following tasks by default during the initial instance boot:
• Sets up new wallpaper that renders information about the instance
• Sets the computer name
• Sends instance information to the Amazon EC2 console
• Sends the RDP certificate thumbprint to the EC2 console
• Sets a random password for the administrator account
• Adds DNS suffixes
• Dynamically extends the operating system partition to include any unpartitioned
space
• Executes user data (if specified); you will learn more about user data next
• Sets persistent static routes to reach the metadata service and AWS Key
Management Service
Customers can also use EC2Launch to forward messages to the AWS console, initialize
secondary EBS volumes, and configure and schedule Sysprep to run on reboot.
By specifying user data, customers can supply a script to a Windows instance that
executes a series of commands. Scripts can take the form of batch or PowerShell
scripts on Windows instances. By using user data, customers can completely set up a
new instance without ever logging in directly to the instance.
The scripts customers supply do not have to do all of the work themselves. A user
data script could, for example, download and execute a longer script that is stored in
an Amazon S3 bucket. Customers can also download and install a Configuration
Management system, such as Chef or Puppet, and kick off an initialization task from a
reusable Chef Cookbook or Puppet Module.
Customers can run any command that can be run in a command prompt
window or a Windows PowerShell command prompt.
If customers use an AMI that includes the AWS Tools for Windows PowerShell, they
can also use those cmdlets. If an IAM role is associated with the instance, the
customer does not need to specify credentials to the cmdlets. Applications that run
on the instance can use the role's credentials to access AWS resources such as
Amazon S3 buckets, as shown in the PowerShell with AWS tools example.
Here, you can see two different ways to pass user data to a Windows instance, by
using:
• A set of Windows batch commands, or
• A PowerShell script
The batch example script uses a few simple calls to the winrm utility to configure the
instance to allow remote administration via the Windows Remote Management
service.
The PowerShell example script uses built-in PowerShell commands to configure the
Windows instance as a web server running Internet Information Services, or IIS. As
you will see in this module’s lab, this script could be extended to install a full, working
ASP.NET application as well—all without the customer logging in to the instance
directly.
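As a rough sketch, the PowerShell variant described above could be assembled and encoded like this. EC2 Windows instances run the contents of the `<powershell>` tag at first boot, and the RunInstances API expects the payload base64-encoded (the console and CLI do the encoding for you); the IIS commands here are illustrative.

```python
# Sketch: assembling and encoding a PowerShell user-data payload like the
# IIS example described above. EC2 Windows instances run the contents of
# the <powershell> tag at first boot; the API expects the payload
# base64-encoded.
import base64

powershell_script = """<powershell>
Install-WindowsFeature -Name Web-Server -IncludeManagementTools
Set-Content -Path C:\\inetpub\\wwwroot\\index.html -Value "Hello from EC2"
</powershell>"""

# Encode the script the way the RunInstances API expects it.
encoded = base64.b64encode(powershell_script.encode("utf-8")).decode("ascii")
print(encoded[:24])  # the start of the payload as sent in the API request
```

A batch payload works the same way, with `<script>` tags in place of `<powershell>`.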
Instance metadata is data about the instance that you can use to configure the
instance from a script or command. Customers can use instance metadata to have
the user data and other scripts become self-describing.
Instance metadata is divided into categories. Some examples are listed here.
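A user-data script reads these categories from the instance metadata endpoint. The sketch below only builds the documented URLs: the address 169.254.169.254 is the fixed metadata endpoint and answers only from inside an EC2 instance, so the actual fetch is left as a comment.

```python
# Sketch: building paths for the instance metadata service. The address
# 169.254.169.254 is the fixed, documented endpoint and answers only from
# inside an EC2 instance, so this sketch stops at URL construction.
METADATA_BASE = "http://169.254.169.254/latest/meta-data"

def metadata_url(category: str) -> str:
    """Return the metadata URL for a category such as 'instance-id'."""
    return f"{METADATA_BASE}/{category}"

# A few categories a user-data script might read to self-describe:
for category in ("instance-id", "instance-type", "placement/availability-zone"):
    print(metadata_url(category))

# On an instance, urllib.request.urlopen(metadata_url("instance-id")) would
# return the instance's own ID; IMDSv2 additionally requires a session-token
# header.
```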
Customers can create their own AMIs that contain customized settings, installed
applications, and configurations.
Customers can launch single or multiple instances from an AMI when they need the
same configuration.
AMIs:
• Contain all customizations,
• Are anchored to the current Region,
• Reboot the instance by default to ensure consistency, and
• Create the instance with all attached volumes
Alternatively, use the AWS command-line interface to create an image that is based
on an existing instance as shown on the screen.
For more details, use the link shown.
Your customers must understand the differences among licensing models.
Under the License Included model, AWS manages the license. Customers pay as they
go. AWS provides the images and supports legacy versions.
Under the License Mobility model, customers must have active Microsoft Software
Assurance. Microsoft does require a verification process. Customers can import their
images and software. Eligible software includes Microsoft SQL Server, Remote
Desktop Services, Exchange, and SharePoint.
The final model is Bring Your Own License. Most customers choose Dedicated Host.
Windows Server can be deployed on Dedicated Hosts. Customers are responsible for
compliance with Microsoft, and customers can import and use their own software on
these servers.
Software Assurance and License Mobility are not required for licenses purchased
prior to October 1, 2019, and not upgraded to versions released after that date.
For more details about licensing models, refer to the training catalog for the extended
licensing training course.
Certifications and attestations include the following:
• AWS publishes a Service Organization Controls, or SOC, 1 report under both the
SSAE 16 and ISAE 3402 professional standards, as well as SOC 2 Security and
SOC 3 reports.
• AWS achieved ISO 9001, ISO 27001, ISO 27017, and ISO 27018 certifications, was
successfully validated as a Level 1 service provider under the Payment Card
Industry (PCI) Data Security Standard (DSS), and currently offers HIPAA Business
Associate Agreements to covered entities and their business associates subject to
HIPAA.
• AWS achieved FedRAMP compliance, received authorization from the United
States General Services Administration to operate at the FISMA Moderate level,
and is also the platform for applications with Authorities to Operate, or ATOs, under
the Defense Information Assurance Certification and Accreditation Program, or
DIACAP. NIST, FIPS 140-2, CJIS, and DoD SRG Levels 2 and 4 are some of the
other certifications AWS has received.
• For more information, see: http://aws.amazon.com/compliance/
Once customers have more than a few users, they often centralize application
and resource access, so they can manage access control policies for
applications and resources, such as printers and file shares. Customers use
Active Directory-integrated group policies to centralize access.
For the third area, Active Directory provides a way for computers to join an
Active Directory domain. This makes it possible to centrally manage
computers by using Active Directory group policies.
• Customers can connect. They can extend their on-premises Active Directory into
AWS by joining cloud-based workloads to their existing directory domain.
• Or, customers can re-host. They can host Active Directory on AWS by installing
Active Directory instances on Amazon Elastic Compute Cloud, or Amazon EC2.
• Or, customers can re-platform. They can use AWS Managed Microsoft Active
Directory, which provides a set of highly available domain controllers, monitoring
and recovery, data replication, snapshots, and software updates that are
automatically configured and managed.
RE-HOST
Amazon EC2 Active Directory is Active Directory that customers can manage and run
in the cloud. It can be standalone or replicated with your customer’s on-premises
network. Customers are responsible for all management and availability. If your
customer is replicating to an on-premises network, they must open all ports required
for replication, which is more ports than a trust requires. Using a cloud-based self-
managed Active Directory reduces Active Directory traffic from Amazon EC2
workloads in the cloud to on-premises networks.
RE-PLATFORM
Using Active Directory in the cloud results in lower latency for workload authorization.
It also reduces chances for failure due to a network outage on an on-premises
network, particularly in virtual desktop infrastructure, or VDI, scenarios. If running
standalone, customers do not have to open Active Directory replication or trust ports.
If using a trust, your customer only needs to open the trust ports, which are fewer
ports than required for replication. While trusts require some communications to an
on-premises infrastructure, the traffic is limited to the data centers.
AWS Microsoft Active Directory is a managed solution that eliminates the need for
customers to handle availability, monitoring, patches, and backups. It can be a fully
contained Active Directory in the cloud, where your customers manage users, groups,
and computers. It can also support cross-forest trusts to an on-premises Active
Directory.
CONNECT
Customers who run minimal EC2 instances that require access to Active Directory, and
are willing to accept some latency to Active Directory over on-premises links, might
choose to extend on-premises Active Directory to the AWS Cloud. To do so, customers
must adopt security policies that allow Active Directory ports to be exposed to the
internet, and architect highly available connectivity to on-premises Active Directory
services.
For customers who are considering Active Directory on-premises only, make sure
they understand how the link latency might affect application performance. This is
because any Kerberos-authorized traffic from services in the cloud must communicate
through the link to on-premises. This increases the round-trip delays in processing
application requests. Customers also need to understand the security implications of
opening their corporate network for access by cloud apps for authentication and
authorization.
EC2
Customers who run applications that are not yet supported by AWS Managed Active
Directory, such as Exchange or SharePoint, and need a replicated, multi-Region Active
Directory solution can choose to host Active Directory on EC2.
The decision to use Amazon EC2 Active Directory instances is primarily driven by two
issues:
• AWS Managed Microsoft Active Directory does not currently delegate key
permissions that are needed to support some applications. The classes of
applications involved typically
require schema extensions, special service accounts, or access to containers
outside of the delegated OU. Examples of applications affected by this include
Exchange and SharePoint. Before making a decision, customers should have a
complete list of applications they want to run in the cloud, and they should conduct
a review of permissions required versus permissions granted in AWS Managed
Microsoft Active Directory.
• While trusts can be effective, customers might need to replicate an Active Directory
solution across multiple Regions. Because AWS Managed Microsoft Active Directory
cannot be part of an on-premises forest, Amazon EC2 Active Directory instances are
required for a cloud-based Active Directory solution.
With Active Directory Connector, customers can connect AWS applications to existing
on-premises Microsoft Active Directory domains. Active Directory Connector does
not require directory synchronization or federation infrastructure. With Active
Directory Connector, customers can forward AWS sign-in requests to on-premises
Active Directory domain controllers for authentication. The AWS services shown here
can connect to Active Directory Connector.
With Active Directory Connector, customers simply add one service account to their
Active Directory. Active Directory Connector eliminates the need for directory
synchronization, or the cost and complexity of hosting a federation infrastructure.
When customers add users to AWS applications, such as Amazon QuickSight, Active
Directory Connector reads the existing Active Directory to create lists of users and
groups from which to select. When users log in to the AWS applications, Active
Directory Connector forwards sign-in requests to the customer’s on-premises Active
Directory domain controllers for authentication. Active Directory Connector redirects
directory requests in the AWS environment to an on-premises Microsoft Active
Directory without caching information in the cloud.
Customers can manage AWS resources, such as EC2 instances or Amazon Simple
Storage Service, or Amazon S3, buckets, through IAM role-based access to the AWS
Management Console, and join EC2 Windows instances to an on-premises Active
Directory domain through Active Directory Connector. Active Directory Connector
also allows users to access the AWS Management Console and manage AWS
resources by logging in with their existing Active Directory credentials.
Active Directory Connector comes in two sizes: small and large. Customers use the
small Active Directory Connector for organizations that have up to 500 users, and the
large Active Directory Connector when they have from 500 to 5,000 users.
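The sizing rule above is simple enough to state as code. This is purely an illustration; the function name is ours, not part of any AWS API.

```python
# Illustration of the Active Directory Connector sizing rule described
# above. The function name is ours, not an AWS API.
def connector_size(user_count: int) -> str:
    """Pick an AD Connector size from the directory's user count."""
    if user_count <= 500:
        return "small"
    if user_count <= 5000:
        return "large"
    raise ValueError("beyond the large connector's 5,000-user range")

print(connector_size(350))   # small
print(connector_size(4200))  # large
```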
This illustration shows the authentication flow and network path that is used when
customers enable AWS Management Console access via the Active Directory
Connector:
1. First, a user opens the secure custom sign-in page and supplies their Active
Directory user name and password.
2. Next, the authentication request is sent over Secure Sockets Layer, or SSL, to
Active Directory Connector.
3. Third, Active Directory Connector performs LDAP authentication to Active
Directory. The Active Directory Connector locates the nearest domain controllers
by querying the SRV DNS records for the domain.
4. After the user is authenticated, Active Directory Connector calls the Security
Token Service, or STS, AssumeRole method to get temporary security credentials
for that user. Using those temporary security credentials, Active Directory
Connector constructs a sign-in URL that users use to access the console.
With AWS Managed Microsoft Active Directory, customers can run directory-aware
workloads in the AWS Cloud, including Microsoft SharePoint and custom .NET and
SQL Server-based applications. It also supports AWS managed applications and
services, including Amazon WorkSpaces, Amazon WorkDocs, Amazon QuickSight,
Amazon Chime, Amazon Connect, and Amazon Relational Database Service for
Microsoft SQL Server, or Amazon RDS for SQL Server.
AWS Directory Service for Microsoft Active Directory is powered by Windows Server
2012 R2. When customers select and launch this directory type, it is created as a
highly available pair of domain controllers connected to the customer’s Amazon VPC.
The domain controllers run in different Availability Zones in a Region of the
customer’s choice. Host monitoring and recovery, data replication, snapshots, and
software updates are automatically configured and managed for customers.
AWS provides monitoring, daily snapshots, and recovery as part of the service—your
customers add users and groups to AWS Managed Microsoft Active Directory, and
administer Group Policy by using familiar Active Directory tools that run on a
Windows computer joined to the AWS Managed Microsoft Active Directory domain.
Customers can also scale the directory by deploying additional domain controllers,
and they can improve application performance by distributing requests across a
larger number of domain controllers.
AWS Managed Microsoft Active Directory is approved for applications in the AWS
Cloud that are subject to the United States Health Insurance Portability and
Accountability Act, or HIPAA, or the Payment Card Industry Data Security Standard,
known as PCI DSS. Customers enable compliance for their directories.
AWS Directory Service for Microsoft Active Directory, also known as AWS Managed
Microsoft Active Directory, is powered by an actual Microsoft Windows Server Active
Directory that’s managed by AWS, in the AWS Cloud. It enables customers to migrate
a broad range of Active Directory–aware applications to the AWS Cloud.
All compatible applications work with user credentials that customers store in AWS
Managed Microsoft Active Directory, or customers can connect to their existing
Active Directory infrastructure with a trust, and use credentials from an Active
Directory running on-premises or on EC2 Windows. If a customer joins EC2 instances
to an AWS Managed Microsoft Active Directory, their users can access Windows
workloads in the AWS Cloud with the same Windows single sign-on (SSO) experience
as when they access workloads in the on-premises network. In this scenario:
1. AWS Managed Microsoft Active Directory is deployed in two Availability Zones.
2. Communications with on-premises networks are established through a VPN tunnel
or AWS Direct Connect.
3. Customers connect their existing on-premises Active Directory infrastructure to
the AWS Managed Microsoft Active Directory with a trust.
4. Users can access Windows workloads in the AWS Cloud with the same Windows
single sign-on (SSO) experience as when they access workloads in the on-premises
network.
AWS Managed Microsoft Active Directory is available in two editions: Standard and
Enterprise.
Switching between editions is not supported, so your customers must be sure that
they choose the correct edition.
For information about pricing, check online to get the most up-to-date numbers.
AWS Quick Starts provide AWS CloudFormation templates to support three
deployment scenarios for Active Directory implementation. For each scenario,
customers also have the option to create a new Amazon VPC or use an existing
Amazon VPC infrastructure. Customers can choose the scenario that best fits their
needs.
With scenario 1, shown here, customers deploy and manage their own AD DS
installation on the Amazon EC2 instances. The AWS CloudFormation template for this
scenario builds the AWS Cloud infrastructure, and sets up and configures AD DS and
Active Directory-integrated DNS in the AWS Cloud. It doesn’t include AWS Directory
Service, so customers must handle all AD DS maintenance and monitoring tasks.
Customers can also choose to deploy the Quick Start into an existing Amazon VPC
infrastructure.
With scenario 3, customers deploy AD DS with AWS Directory Service in the AWS
Cloud. The AWS CloudFormation template for this scenario builds the base AWS
Cloud infrastructure and deploys AWS Directory Service for Microsoft Active
Directory, which offers managed AD DS functionality in the AWS Cloud. AWS Directory
Service takes care of AD DS tasks, such as building a highly available directory
topology, monitoring domain controllers, and configuring backups and snapshots. As
with the first two scenarios, customers can deploy the Quick Start into an existing
Amazon VPC infrastructure.
Customers can enable their users to access Microsoft Office 365 with
credentials managed in AWS Directory Service for Microsoft Active Directory,
also known as AWS Microsoft Active Directory. To do this, customers deploy
Microsoft Azure AD Connect and Active Directory Federation Services for
Windows Server 2016, known as AD FS 2016, with AWS Microsoft Active
Directory. AWS Microsoft Active Directory enables customers to build a
Windows environment in the AWS Cloud, synchronize AWS Microsoft Active
Directory users into Microsoft Azure Active Directory, and use Office 365, all
without needing to create and manage Active Directory domain controllers.
Customers can benefit from the broad set of AWS Cloud services for compute,
storage, database, and Internet of Things, or IoT, while continuing to use
Office 365 business productivity apps—all with a single Active Directory
domain.
Read the associated blog post to learn how to use Azure AD Connect and AD
FS with AWS Microsoft Active Directory, so your customers’ employees can
access Office 365 with their Active Directory credentials.
In this section, you will learn about joining EC2 instances to an Active Directory
domain.
Customers can seamlessly join an EC2 instance to a directory domain when the
instance is launched using the Amazon EC2 Systems Manager. If customers need to
manually join an EC2 instance to their domain, they must launch the instance in the
proper Region and security group or subnet, and then join the instance to the
domain.
When customers launch an instance using the Amazon EC2 console, they can
join the instance to a domain. If they don't already have a Systems Manager
document, the wizard creates one and associates it with the instance.
Customers can also join the instance to the domain by associating the
Systems Manager document to the instance by using the AWS Tools for
PowerShell or the AWS Command Line Interface, called the AWS CLI.
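The Systems Manager document behind a domain join uses the `aws:domainJoin` plugin. A minimal sketch of its shape follows; the plugin and property names come from the AWS documentation, while the directory ID, domain name, and DNS addresses are placeholders.

```python
# Sketch of the Systems Manager document behind a domain join. The
# aws:domainJoin plugin and these property names come from the AWS docs;
# the directory ID, domain name, and DNS addresses are placeholders.
import json

domain_join_document = {
    "schemaVersion": "1.2",
    "description": "Join an EC2 Windows instance to a directory domain.",
    "runtimeConfig": {
        "aws:domainJoin": {
            "properties": {
                "directoryId": "d-0123456789",
                "directoryName": "corp.example.com",
                "dnsIpAddresses": ["10.0.0.10", "10.0.1.10"],
            }
        }
    },
}

print(json.dumps(domain_join_document, indent=2))
```

Associating a document like this with an instance, via the console wizard, the AWS Tools for PowerShell, or the AWS CLI, is what joins the instance to the domain at launch.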
In this module, you learned how to run Active Directory services on AWS. You learned
about three options available for deploying Active Directory on AWS and how to
position each option for acceptance. You also learned how to join domains, and
provide authentication and network naming services that apply to running Active
Directory on AWS.
In this module, you will learn how to run SQL Server databases on AWS. You will also
learn how to choose the most suitable deployment options, and select compute and
storage resources. Finally, you will learn how to migrate databases from existing
platforms to AWS.
A foundational workload is the database service. With AWS, customers have multiple
deployment options. Whether they decide to manage the environment with Amazon
Elastic Compute Cloud, or Amazon EC2; deploy to a managed service with the
Amazon Relational Database Service, or Amazon RDS; or migrate to native, open
databases, they will have:
• A cost-effective option for hosting databases,
• Complete control for managing software, compute, and storage resources, and
• Rapid provisioning through relational database Amazon Machine Images, or AMIs,
that enable them to provision servers with the database service already installed
For customers who re-factor SQL Server and adopt cloud-native services on their
own timetable:
Additional savings and flexibility can be realized with a move to a variety of open
source database solutions on AWS. Customers can save significant cost by moving off
the proprietary SQL Server engine and onto a fully managed relational database
service, like Amazon Aurora, which is based on open source standards MySQL and
PostgreSQL. AWS has available refactoring tooling and services to help customers
move to cloud-native solutions, such as Aurora.
Note that Microsoft ended their support for SQL Server 2008 on July 9, 2019. To learn
more about migrating legacy applications to AWS, visit the AWS website.
Customers have two options to run SQL Server on AWS.
The first option is to re-host SQL Server on EC2 Windows.
For corporate or third-party legacy and custom applications, including line-of-
business applications, customers can launch a database to support these apps by
using Amazon EC2 and Amazon EBS.
The second option to run SQL Server on AWS is to re-platform to Amazon RDS.
Amazon Relational Database Service is a managed service that makes it easy to
deploy a relational database to support line-of-business applications that run on
AWS:
• Amazon RDS automates database administration tasks, such as provisioning,
patching, backup, recovery, failure detection, and repair.
• It runs in Multi-AZ deployments to provide automatic failover, and
• It integrates with AWS Identity and Access Management for granular resource
permission controls.
At times, some customers struggle to set up a multi-site, high availability option for
their SQL Server instance, either because of expense or technical challenges. With
Amazon RDS for SQL Server, customers can select an option when they launch an
Amazon RDS instance to set up a Multi-AZ SQL Server cluster that uses synchronous
replication between two Availability Zones, by using database mirroring.
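That Multi-AZ option is a single parameter at launch. The sketch below shows boto3-style parameters for an Amazon RDS for SQL Server instance; the identifiers and sizes are placeholders, and no API call is made.

```python
# Sketch: boto3-style parameters for launching a Multi-AZ Amazon RDS for
# SQL Server instance. Identifiers, sizes, and credentials are
# placeholders; no API call is made here.
db_params = {
    "DBInstanceIdentifier": "lob-sqlserver",
    "Engine": "sqlserver-se",          # SQL Server Standard Edition
    "LicenseModel": "license-included",
    "DBInstanceClass": "db.m5.xlarge",
    "AllocatedStorage": 200,           # GiB
    "MultiAZ": True,                   # synchronous standby in a second AZ
    "MasterUsername": "admin",
    "MasterUserPassword": "REPLACE_ME",
}
# With boto3: boto3.client("rds").create_db_instance(**db_params)
```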
Both EC2 and RDS support storage encryption for all editions using KMS, and
customers running Enterprise Edition can use Transparent Data Encryption with both
services.
Both services support Windows and SQL Server authentication, and AWS manages
the operating system installation.
If customers want to take advantage of automated software patching, they should
choose Amazon RDS for SQL Server; otherwise, they will need to manage the
maintenance tasks with SQL Server on Amazon EC2.
Amazon RDS
Amazon RDS is a managed service that helps customers set up, operate, and scale a
relational database in the cloud. There’s no need for hardware or software
installation. RDS provides cost-efficient and resizable capacity while automating
administration tasks, such as hardware provisioning, database setup, patching, and
backups.
• Customers can use RDS to replace most user-managed databases, and it can be
instantiated in minutes. Customers can also control when patching takes place.
• As with many AWS services, it’s pay-as-you-go. In addition, customers can bring
their own licenses for databases, such as Microsoft SQL Server, if they want.
• RDS frees database administrators, or DBAs, from 70% of the typical database
maintenance work. This service is like moving an on-premises database to the
cloud.
Amazon Aurora
Amazon Aurora is a MySQL- and PostgreSQL-compatible relational database built for
the cloud that combines the performance and availability of traditional enterprise
databases with the simplicity and cost-effectiveness of open source databases.
Amazon Aurora has an architecture that decouples the storage and compute
components.
Aurora is faster than other standard databases and provides the security, availability,
and reliability of commercial databases at much lower cost.
Amazon Aurora provides multiple levels of security for databases, which includes
network isolation, encryption at rest by using AWS Key Management Service, or KMS,
and encryption of data in transit using Secure Sockets Layer, or SSL.
Migrating data from Microsoft SQL Server databases to Amazon Aurora can be done
using the AWS Database Migration Service. Customers can begin a data migration
with a few clicks, and the source database remains fully operational during the
migration, minimizing downtime to applications using that database.
Amazon Redshift
Amazon Redshift is a fast, scalable data warehouse that customers can use to analyze
the data across their data warehouse and data lake. Amazon Redshift delivers 10
times faster performance than other data warehouses by using machine learning,
massively parallel query execution, and columnar storage on high-performance disk.
Customers can set up and deploy a new data warehouse in minutes, and run queries
across petabytes of data in their Amazon Redshift data warehouse, and exabytes of
data in their data lake built on Amazon S3.
Amazon Redshift is less than 1/10th the cost of traditional, on-premises data
warehouses. Amazon Redshift requires no upfront costs, and customers only pay for
what they use.
Amazon Redshift extends customers’ data warehouse to their data lake to help them
gain unique insights that they could not get by querying independent data silos. They
can directly query open data formats stored in Amazon S3 with Redshift Spectrum, a
feature of Amazon Redshift, without the need for unnecessary data movement. This
enables customers to analyze data across their data warehouse and data lake,
together, with a single service.
As storage requirements grow, customers can also provision additional storage.
Amazon RDS for SQL Server supports up to 16 TB, and storage scaling happens on the
fly, with zero or near-zero downtime.
Storage and compute instance types are decoupled. When a customer scales a
database instance up or down, the storage size and type remain the same.
From a planning perspective, many SQL Server workloads benefit from large amounts
of memory, in relation to CPU, for caching purposes. Customers should consider
memory-optimized instances, unless their particular workload is processing heavy,
such as running stored procedures, complex reporting queries, or computations.
Licensing and edition also determine instance class availability: Express Edition is
unavailable on the largest classes, and Enterprise Edition is available on the largest
classes only if the customer chooses the License Included model. The edition can also
determine available storage options.
Storage can also be modified, which, in most cases, does not involve downtime. Older
instance types might require a short period of downtime during the first scale storage
operation performed on the instance. On SQL Server, the new storage size is made
available to the database within minutes of the operation starting. It can also be
scheduled to occur during the next maintenance window. Storage performance will
be degraded for a period, usually for several hours, but it can be several days, after
storage is modified, while the new storage configuration is optimized. Ongoing
storage optimization is indicated through the DB instance status, by using the console
or API. Once storage has been modified, it cannot be modified again for 6 hours, or as
long as the instance is undergoing storage optimization, whichever is longer. Storage
optimization time is roughly proportional to the pre-modification storage size of the
instance.
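The throttling rule above can be expressed as a small helper. This is an illustrative sketch, not an AWS API: the six-hour cooldown and the optimization-status check come directly from the behavior described above.

```python
from datetime import datetime, timedelta

COOLDOWN = timedelta(hours=6)

def can_modify_storage(last_modified: datetime, now: datetime,
                       still_optimizing: bool) -> bool:
    """A new storage modification is allowed only after the 6-hour
    cooldown has elapsed AND storage optimization has finished,
    whichever takes longer."""
    return (now - last_modified >= COOLDOWN) and not still_optimizing
```

For example, an instance modified at midnight that finished optimizing can be modified again at 06:00, but not at 05:00, and not at 08:00 if optimization is still running.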
Highlighted on this slide are the instance types that are available for Database
Services.
SQL Server instance performance depends on many factors, but if customers focus on
the infrastructure level, it broadly depends on CPU resources, amount of memory,
network throughput, storage performance, and size.
At AWS, practically all of these depend on the DB instance class a customer selects to
run the instance. Because the storage is network attached, the overall networking
capabilities of the instance class affect input-output throughput. Customers also have
an array of storage subsystem choices, with different performance levels and prices.
Amazon RDS DB instances use Amazon EBS volumes for database and log storage.
Customers can choose from general purpose and Provisioned IOPS storage types,
depending on their storage performance and size requirements.
Similar to Amazon EC2, the RDS storage subsystem offers an array of performance
levels and price. Here, you can see the SSD-based options, and their size and
performance attributes.
The first option, GP2, General Purpose SSDs with predictable performance and burst
capabilities, is the most popular. This is especially good for workloads that have some
variability.
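The relationship between GP2 volume size and baseline performance can be sketched with a short function. This uses the published gp2 scaling of 3 IOPS per GiB, with a 100 IOPS floor and a 16,000 IOPS ceiling; it is a simplification that ignores burst credits.

```python
def gp2_baseline_iops(size_gib: int) -> int:
    """Baseline IOPS for a gp2 volume: 3 IOPS per GiB,
    floored at 100 IOPS and capped at 16,000 IOPS."""
    return min(max(3 * size_gib, 100), 16_000)
```

A small 20 GiB volume gets the 100 IOPS floor, a 1,000 GiB volume gets 3,000 IOPS, and very large volumes hit the 16,000 IOPS cap, which is one reason consistently high-IOPS workloads move to Provisioned IOPS.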
The second option, IO1, Provisioned IOPS, is a good choice when input-output needs
are high and consistent.
Every workload is different, so customers should test their workloads. With the
License Included model in RDS, customers only pay for licensing costs while an
instance is operational, which makes testing and benchmarking cost-effective.
Note that AWS is phasing out magnetic storage for RDS, but the Throughput
Optimized HDD offering, called ST1, on EC2, is performant for sequential writes,
which makes it well-suited for database backups.
Database mirroring is a feature that provides a complete or almost complete
mirror of a database, depending on the operating mode, on a separate
database instance. This feature increases the availability and protection of
mirrored databases, and provides a mechanism to keep mirrored databases
available during upgrades.
• The customer deploys the cluster nodes inside an Amazon VPC, and
• Deploys WSFC cluster nodes in separate subnets.
At a high level, the architecture includes SQL Server instances deployed with
replication across two Availability Zones, as well as a domain controller
instance in each Availability Zone to handle Active Directory and DNS
requests. It also keeps the SQL Server instances from being exposed publicly
by placing them in private subnets, with NATs for outbound traffic and Remote
Desktop Gateway instances for remote management by administrators.
Customers can deploy this architecture by using the AWS-provided instructions in
the Quick Start on the AWS website.
Here are some notes about working with Multi-AZ deployments for Microsoft SQL
Server database instances:
• To use SQL Server Multi-AZ with Mirroring with a SQL Server database instance in
an Amazon VPC, customers first create a database subnet group that has subnets
in at least two distinct Availability Zones. They then assign the database subnet
group to the SQL Server database instance being mirrored.
• If customers have SQL Server Agent jobs, they need to re-create them in the
secondary, as the jobs are stored in the MSDB database, and the database can't be
replicated via mirroring. Customers should create the jobs first in the original
primary, then fail over, and create the same jobs in the new primary.
• Failover times are affected by the time it takes to complete the recovery process.
Large transactions increase the failover time.
Customers can use the following SDKs to perform Amazon RDS functions from within
their applications:
• Android
• iOS
• Java
• JavaScript
• .NET
• Node.js
• PHP
• Python (Boto),
• Ruby, and
• Xamarin
A key to managing SQL Server deployments at scale is automation – the ability to
programmatically provision the entire cluster, without manual intervention.
Customers also have access to SQL Server ecosystem tools to analyze performance.
The aws command shown on the screen lists the types of metrics that are available from RDS
via CloudWatch.
Additionally, RDS Enhanced Monitoring is available for SQL Server. This monitoring
solution provides detailed OS-level metrics with up to 1-second granularity. Unlike
traditional CloudWatch metrics available at the infrastructure level, Enhanced
Monitoring collects metrics using an agent running on the instance itself. It has access
to more granular data, but might contribute to the load of the DB instance and report
slightly different numbers. The standard CloudWatch metrics resolution is 5 minutes,
but custom metrics can have a much finer resolution.
Additionally, while certain SQL Server features, such as maintenance plans, database
mail, linked servers, and Microsoft Distributed Transaction Coordinator, or MSDTC,
are not supported, some AWS services or Amazon RDS features fill the same roles,
often in more robust ways, such as using automatic backups instead of maintenance
plans, or Amazon Simple Email Service, or Amazon SES, for sending email with high
deliverability.
For more information on limited linked server support, visit the AWS website.
AWS provides automated backup and recovery, with point-in-time restore capability
for up to 35 days in the past. Customers can always instruct the service to take
manual snapshots that aren’t subject to the 35-day window, or copy automated
snapshots to convert them to manual snapshots. Both of these features require a
designated window of 30 minutes or more, where AWS can perform the activities.
The maintenance window is once a week. The backup window is daily.
RDS also allows customers to back up and restore using .BAK files, providing access to
SQL Server’s native backup functionality. This is commonly used to restore on-
premises, or EC2, SQL Server backups to an RDS instance.
Amazon RDS provides two configuration mechanisms:
• Parameter Groups are used to change the tuning parameters of the DB engine.
• Option Groups are currently used to enable Transparent Data Encryption in
Enterprise Edition, and to enable the SQL Server native backup and restore
functionality.
Both groups have a set of predefined default configurations with sensible default
settings matching vendor recommendations. These are suitable for most workloads.
Customers can customize the groups, by creating derivative groups with their own
settings, and then they can apply the groups to the DB instances they operate. At any
point, customers know exactly what configuration each of their DB instances is
running.
How can customers secure SQL Server on premises at the network layer? Should they
place it behind a firewall? Limit access to it using route tables and network access
control lists? Customers can deploy the same design they use on premises when they
run SQL Server on AWS.
Restrict traffic to the instance with network access control lists and security groups.
You learned about network ACLs and security groups earlier in this course.
Avoid or limit public access to all of your instances by placing instances in private
subnets.
Turn on forced SSL to ensure that all database connections are encrypted.
Secure the data
Does your customer encrypt their SQL Server data today? If they are required to
encrypt their data today, AWS is a highly secure option for their SQL Server workload.
Having a data protection strategy in place is key for every business.
At AWS, customers are responsible for encrypting their data. They should encrypt
their data at rest by using AWS tools, such as AWS KMS and encrypted EBS volumes
to store their data. They should use application layer encryption like TDE or column-
level encryption.
Also, customers can encrypt data in transit using SSL. For Amazon RDS, the SSL
certificate includes the DB instance endpoint as the common name (CN) for the
SSL certificate to guard against spoofing attacks.
Customers can continue to use the encryption mechanisms they use today when
running SQL Server on AWS.
From an access perspective, you can help your customers address these concerns.
Customers can install SQL Server on Linux in minutes with just a few commands.
Deploying SQL Server requires running just a few Docker commands, as shown here.
Customers should consider how many IOPS and how much throughput their
workload needs, and employ techniques from the following list to find the right
combination of throughput and performance – they should
• Enable EBS optimization on an instance,
• Create a single volume for data and logs,
• Format with a 64 K allocation unit size,
• Match the total EBS IOPS and throughput to instance type, and
• Stripe EBS PIOPS volumes for more than 20,000 IOPS.
An example volume layout is shown here. Each drive mapping represents an attached
volume and can be a different storage type.
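The striping recommendation above works because IOPS and throughput add up across the volumes in a stripe set. A minimal sketch, where each volume is represented as a (IOPS, MB/s) pair:

```python
def striped_capacity(volumes):
    """Aggregate IOPS and throughput (MB/s) of EBS volumes striped
    together (for example, with Windows Storage Spaces).
    `volumes` is a list of (iops, throughput_mbps) tuples."""
    total_iops = sum(v[0] for v in volumes)
    total_tput = sum(v[1] for v in volumes)
    return total_iops, total_tput
```

Striping two 20,000-IOPS Provisioned IOPS volumes yields a 40,000 IOPS logical disk, which is how customers exceed the per-volume figure mentioned above, provided the instance type itself can sustain that aggregate.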
By using the commands on this page, customers can configure their SQL Server’s
tempdb to reside on instance storage.
First, they use the ALTER SQL commands to move tempdb files to instance-storage-
backed drives.
Then, customers modify the system drive’s discretionary access control list, or DACL,
to grant the SQL service account access to the storage drive.
To optimize tempdb use, customers should consider the following techniques – they
might:
• Use multiple tempdb files, creating a 1:1 mapping with up to eight CPUs,
• Stripe multiple instance storage disks for higher input-output,
• Change SQL Server service startup to Automatic (Delayed Start) to allow instance
storage to provision,
• Script and automate configuration on instance boot, or
• Use a striping solution offered by their AWS consulting partner.
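The first bullet's 1:1-up-to-eight-CPUs guidance reduces to a one-line rule, sketched here for clarity; this mirrors common SQL Server tempdb sizing advice rather than any AWS API:

```python
def tempdb_file_count(vcpus: int) -> int:
    """Recommended number of tempdb data files: one per vCPU,
    capped at eight (the 1:1-up-to-eight-CPUs guidance)."""
    return min(vcpus, 8)
```

So a 4-vCPU instance would get four tempdb files, while a 16-vCPU instance would still start with eight; beyond that, more files are added only if contention is observed.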
Another optimization option you can use is to enable instant database file
initialization. What is database file initialization?
Normally, database files are zeroed out when they are created or grown, to overwrite
leftover disk data. This file initialization causes some DB operations to take longer.
Instant database file initialization claims disk space without zeroing it out, so those
operations complete faster.
On this slide, you can see how customers can enable instant database file
initialization.
This slide illustrates the various components to be migrated as part of a database
migration. Many tools are available that can accomplish some or all of the migration
tasks.
Customer data is not locked into RDS SQL Server. Customers can move data to and
from Amazon RDS in many ways.
You have already seen how customers can use .BAK files to save and restore
databases.
Customers can also use the Publishing Wizard to export flat T-SQL files and import
them using sqlcmd.
For more advanced use cases, customers can use the AWS Database Migration
Service. This tool is especially useful if customers want to achieve zero or near-zero
downtime migrations, or deploy read replicas of the master databases in a separate
Region. It handles the initial load of data and performs change data capture, so
customers can keep up with changes asynchronously. It’s also highly available, so
customer replication jobs can run on an ongoing basis. And, it supports
heterogeneous migrations, between different DB engines, from MySQL, Oracle, or
PostgreSQL to SQL Server, and with databases in different locations, such as EC2, RDS,
and on premises.
Customers can use the AWS Marketplace where independent software vendors, or
ISVs, offer third-party data movement solutions and tools.
Finally, customers can use push replication, as documented on the AWS website.
The database migration strategy customers choose depends on several factors,
including:
• The size of the database,
• Network connectivity between the source server and AWS,
• The version and edition of the database,
• The amount of time available for migration, and
• Available database options, tools, and utilities.
Your customer’s strategy will also depend on whether the migration and cutover to
AWS will be done in one step or a sequence of steps over time.
A one-step migration is a good option for small databases that can be shut down for
24 to 72 hours. During this downtime, all the data from the source database is
extracted and migrated to the destination database in AWS. The destination database
in AWS is tested and validated for data consistency with the source. After all
validations are completed successfully, the database traffic is cut over to AWS.
Customers might have mission-critical databases that cannot have any downtime.
Performing such zero or near-zero downtime migrations requires detailed planning
and appropriate data replication tools. Customers will need to use continuous data
replication tools for such scenarios. Synchronous replication could affect the
performance of the source database while the replication is happening. So if a few
minutes of database downtime are acceptable, customers might want asynchronous
replication instead. With the zero or near-zero downtime migration, customers have
more flexibility on when to perform the cutover, because the source and destination
databases are always in sync.
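Whether a one-step migration fits the downtime window is largely a function of database size and network bandwidth. The back-of-the-envelope sketch below is a lower bound only: it ignores extraction, validation, and cutover time, and the 72-hour default window comes from the range stated above.

```python
def transfer_hours(db_size_gb: float, bandwidth_mbps: float) -> float:
    """Lower-bound transfer time in hours: db_size_gb (decimal GB)
    over a link of bandwidth_mbps (megabits per second)."""
    size_megabits = db_size_gb * 1000 * 8   # GB -> megabits
    return size_megabits / bandwidth_mbps / 3600

def fits_one_step(db_size_gb: float, bandwidth_mbps: float,
                  window_hours: float = 72) -> bool:
    """A one-step migration is plausible only if the raw transfer
    fits inside the downtime window (24-72 hours in the text)."""
    return transfer_hours(db_size_gb, bandwidth_mbps) <= window_hours
```

A 1 TB database over a 100 Mbps link needs roughly 22 hours of raw transfer, so it could fit a one-step migration; a 20 TB database over the same link clearly could not, pointing toward continuous replication instead.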
Depending on whether your customer runs their database on Amazon EC2 or uses
Amazon RDS, the process for data migration can differ. For example, users don’t have
OS-level access in Amazon RDS instances. Customers must understand the different
strategies, so they can choose the one that best fits their needs. They can simply “lift-
and-shift” a database to run on an Amazon EC2 instance. This might be the easiest
and quickest method to migrate their database. However, they will need to consider
various factors, like licensing, compatibility, and support. Often, customers re-
platform and/or re-factor the database tier so they can access AWS Cloud benefits.
Customers can also use DMS to migrate their on-premises database to a database
running on an Amazon EC2 instance. DMS can migrate databases with zero or near-
zero downtime.
AWS Database Migration Service also provides a schema conversion tool to help
convert SQL Server T-SQL code to equivalent code in the Amazon Aurora MySQL
dialect of SQL. When a code fragment cannot be automatically converted to the
target language, the AWS Database Migration Service clearly documents all locations
that require manual input from the application developer.
Customers can use AWS Database Migration Service for both one-time data migration
into RDS and EC2-based databases, as well as for continuous data replication. The
AWS Database Migration Service captures changes on the source database and
applies them in a transactional-consistent way to the target. Continuous replication
can be done from the data center to the databases in AWS or in the reverse,
replicating to a database in the data center from a database in AWS. Ongoing
continuous replication can also be done between homogenous or heterogeneous
databases.
Customers can use AWS DMS for multiple migration scenarios, as shown here. To use
AWS DMS, one endpoint must always be located on an AWS service. Migration from
an on-premises database to another on-premises database is not supported.
Customers can use the Database Migration Service to enable migrations with
little downtime.
Then, they select which tables, schemas, or databases to migrate, and a DMS
replication task loads the data and synchronizes it on an ongoing basis. When
using the continuous data replication mode, customers do not have to perform the
switchover to production. Instead, the data replication task runs until the customer
changes or terminates it.
You will also learn how to migrate virtual machines, or VMs, and server applications
to AWS.
Finally, you will learn how to use AWS services to provision workload environments,
automate change and configuration, and provide ongoing maintenance.
Customers can use VM Import/Export to import VM images from existing
virtualization environments to Amazon Elastic Compute Cloud, or Amazon EC2, as
Amazon Machine Images, or AMIs. Customers use the AMIs to launch instances. They
can then export the VM images from an instance, and import them to virtualization
environments. Customers can import Microsoft Windows and Linux VMs that use
VMware ESX, VMware Workstation, Microsoft Hyper-V, or Citrix Xen virtualization
formats. They can also export previously imported Amazon EC2 instances to VMware
ESX, Microsoft Hyper-V, or Citrix Xen formats.
Customers can import machine images by using the AWS Command Line Interface, or
AWS CLI, or the AWS Management Portal for vCenter Server.
To import a VM using the AWS CLI, customers must complete the following steps:
1. First, the customer must download and install the AWS Command Line Interface.
2. Then, they upload the VM image to Amazon Simple Storage Service, or Amazon
S3, using the CLI. Multipart uploads provide improved performance. As an
alternative, customers can send the VM image to AWS using the AWS Snowball
service.
3. Once the VM image is uploaded, the customer imports the VM using the ec2
import-image command. As part of this command, the customer specifies the
licensing model and other parameters for the imported image.
4. Next, the customer uses the ec2 describe-import-image-tasks command to
monitor the import progress.
5. Finally, once the import task is completed, the customer uses the ec2 run-
instances command to create an Amazon EC2 instance from the AMI generated
during the import process.
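In step 3, the ec2 import-image command typically reads its disk settings from a JSON file passed with the --disk-containers option. A hypothetical containers.json is sketched below; the bucket name, key, and description are placeholders, not values from this course:

```json
[
  {
    "Description": "Windows Server VM (example)",
    "Format": "vmdk",
    "UserBucket": {
      "S3Bucket": "my-import-bucket",
      "S3Key": "vms/server1.vmdk"
    }
  }
]
```

The customer would then reference it as, for example, `aws ec2 import-image --license-type BYOL --disk-containers file://containers.json`, choosing the licensing model appropriate to their situation.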
To learn about importing VMs using the VMware vSphere virtualization platform,
refer to the AWS website.
AWS Server Migration Service, or AWS SMS, automates the migration of
Hyper-V and VMware virtual machines to the AWS Cloud. AWS SMS
incrementally replicates server VMs as cloud-hosted AMIs that are ready for
deployment on Amazon EC2.
Customers can begin migrating a group of servers with just a few clicks in the
AWS Management Console. After the migration starts, AWS SMS manages
the complexities of the migration process, including automatically replicating
volumes of live servers to AWS and creating new AMIs periodically. Customers
can quickly launch EC2 instances from AMIs in the console. Working with
AMIs, they can easily test and update cloud-based images before deploying
them in production.
AWS Server Migration Service is free to use; customers pay only for the
storage resources that the migration uses during the migration process.
This slide shows the general steps for using the Server Migration Service.
First, customers must prepare their on-premises VMs to meet the general
Server Migration Service requirements. This preparation includes disabling
antivirus or intrusion detection software, and allowing remote access from the
connector through SSH, on Linux VMs, or Remote Desktop, on Windows VMs.
Next, customers replicate VMs by using the AWS SMS console or the
command line interface. They import a server catalog from the connector, and
create one or more replication jobs to automate the replications. Replication
jobs can start immediately or at a later date, up to 30 days in the future.
Customers can stop and delete replication jobs after the replication is
complete.
As shown in Step 4, when the scheduled replication job starts, the SMS
connector takes a snapshot of the selected VM, converts the snapshot to an
OVF format, and uploads the VMDK disk file to an S3 bucket.
Finally, in step 5, SMS automatically converts the VMDK into an Amazon Elastic
Block Store, or Amazon EBS, snapshot, makes the proper changes to the boot
partition, and injects EC2 drivers into the image. The result is an AWS AMI,
which can be used to launch EC2 servers.
Previously, you learned about AWS Migration tools and services, such as:
• AWS Server Migration Service,
• AWS Database Migration Service, or AWS DMS, and
• AWS Schema Conversion Tool.
AWS also provides data tools that customers can use to accelerate
transferring data to the cloud. AWS offers the data transfer services shown
here:
• AWS Snowball uses secure appliances to transfer large amounts of data
into and out of AWS. AWS Snowmobile is a data transport that uses a secure
45-foot shipping container to transfer data.
• AWS Storage Gateway is an on-premises storage gateway that links a
customer’s environment directly to AWS.
• The AWS DataSync service makes it easy to automate moving data
between on-premises storage and Amazon S3 or Amazon Elastic File System, or
Amazon EFS, faster than open-source tools.
• Amazon S3 Transfer Acceleration uses Amazon CloudFront edge
locations to enable fast, easy, and secure transfers of files over long
distances between the customer’s client and Amazon S3 bucket.
• AWS Direct Connect lets customers establish a dedicated physical
connection between a network and one of the AWS Direct Connect
locations.
• And Amazon Kinesis Data Firehose loads streaming data into Amazon S3
or Amazon Redshift.
For more information about AWS migration and data transfer services, visit the
Cloud Data Migration section on the AWS website.
By using configuration management tools, customers use code to represent the state
of infrastructure – such as what’s running inside EC2 instances. AWS CloudFormation
automates the creation of resources like EC2 instances, S3 buckets, and Amazon
Relational Database Service, or Amazon RDS, instances. In comparison, configuration
management represents the configuration of the software that runs on those
compute servers.
OpsWorks includes the Chef or Puppet management dashboards that customers can
use to quickly view change management operations status and host compliance.
With OpsWorks, scaling the properly configured cloud environment is easier because
customers can avoid performing manual tasks. OpsWorks includes AWS CLI
commands that automatically register instances in EC2 Auto Scaling Groups with the
Chef or Puppet configuration management server as part of the EC2 user data
settings.
Finally, both Chef and Puppet offer support from active user communities. Customers
can use community-developed libraries, which abstract the configuration from OS
details to deploy software and configuration in an operating system-independent
way. By using the Chef Supermarket or Puppet Forge, customers can take advantage
of a variety of supported configuration modules for most popular software
installations that are already developed and tested. Community-created configuration
exists for a wide variety of server types.
Here, you see some typical use cases for adopting AWS OpsWorks for configuration
management.
AWS Systems Manager is an AWS service that customers can use to view and control
their infrastructure on AWS. By using the Systems Manager console, customers can
view operational data from multiple AWS services and automate operational tasks
across their AWS resources. Systems Manager helps customers maintain security and
compliance by scanning managed instances and reporting on (or taking corrective
action on) any policy violations it detects.
Systems Manager also helps customers configure and maintain their managed
instances. Supported machine types include Amazon EC2 instances, on-premises
servers, and VMs, including VMs in other cloud environments. Supported operating
system types include Windows Server, multiple distributions of Linux, and Raspbian.
Using Systems Manager, customers can associate AWS resources by applying the
same identifying resource tag to each of them. They can then view operational data
for the resources as a resource group.
SSM Agent is easy to download and install on other platforms, such as Red Hat
Enterprise Linux.
The SSM Agent logs activity for Run Command, State Manager, joining domains, and
Amazon CloudWatch. It’s open source, and its code and release notes are available on
GitHub.
Customers can run the SSM Agent on corporate servers and VMs on premises. To run
SSM Agent in hybrid environments, customers must complete the following steps:
1. Install a Transport Layer Security, or TLS, certificate on the computer that runs the
SSM Agent.
2. Create a managed-instance activation code from the AWS console or API. In the
activation, provide a description and a count of how many instances will activate.
Also, select an IAM role that SSM Agent uses to retrieve parameter objects,
commands, and so forth, from Systems Manager.
3. Download, install, and start the agent, using the activation code.
Activation codes have an expiration date. When customers create activation codes,
they should record and store them separately; the codes are only available once.
Computers that run SSM Agent require outbound internet access or a VPC endpoint,
but not inbound internet access.
Systems Manager documents contain configuration changes and automation
workflows that customers can use to execute changes across the fleet using Systems
Manager capabilities. With SSM documents, customers can use code to remotely
manage instances, ensure desired states for resources, and automate IT operations.
Amazon provides documents for many common tasks, or customers can author their
own in JSON or YAML. They can also store and execute documents from remote
locations, like GitHub or Amazon S3. Systems Manager supports creating and running
different document versions, sharing documents across AWS accounts, and tagging
documents.
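A minimal custom command document, written in JSON as described above, might look like the following. This is a hypothetical example: the document name, parameter, and PowerShell command are illustrative, though the schemaVersion 2.2 structure and the aws:runPowerShellScript action follow the Systems Manager document schema:

```json
{
  "schemaVersion": "2.2",
  "description": "Report the status of a Windows service (example)",
  "parameters": {
    "serviceName": {
      "type": "String",
      "default": "MSSQLSERVER"
    }
  },
  "mainSteps": [
    {
      "action": "aws:runPowerShellScript",
      "name": "checkService",
      "inputs": {
        "runCommand": [
          "Get-Service -Name {{ serviceName }}"
        ]
      }
    }
  ]
}
```

A document like this could be stored in Systems Manager, versioned, shared across accounts, and executed through Run Command or State Manager.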
Customers can retrieve the status and output of commands they run with Run
Command, and receive notifications about them.
Before customers can use Run Command to manage instances, they must perform
the following tasks:
• Install the SSM Agent on the instances.
• Configure an IAM user policy for any user who will run commands, and an IAM
instance profile role for any instance that will process commands, or activate the
instance with an activation code. And,
• Configure the network so instances have network connection to Systems
Manager.
When customers use Run Command, they choose a command document that
specifies the type of command they want to run. Customers specify the command to
run and its parameters. Then, they specify which instances run the command, either
by specifying a tag or selecting specific instances.
Customers can store the commands’ output in an Amazon S3 bucket and send
notifications.
AWS Systems Manager State Manager automates the process of keeping Amazon EC2
and hybrid infrastructure in a customer-defined state. With State Manager, customers
can perform the following types of tasks:
• Bootstrap instances with software at startup,
• Download and update agents, even the SSM Agent,
• Configure network settings,
• Join instances to a Windows domain,
• Patch instances with software updates throughout their lifecycle, and
• Run scripts.
To use State Manager, customers first determine the desired state to apply. For
example, they can automate the process of installing Windows updates. Customers
must create an association to assign instances to the intended state.
An SSM document describes the intended state of the instance or service. Amazon
provides many preconfigured documents that customers can use to create the
association, or customers can create their own.
Next, customers create an association, which binds the instances to the document
and schedules frequency for updating the state. Scheduling includes unplanned and
periodic rates.
After customers create the association, State Manager applies the configuration
according to the defined schedule. Customers can view the status and history from
the State Manager page.
Systems Manager Automation simplifies common maintenance and deployment
tasks of Amazon EC2 instances and other AWS resources. Automation enables
customers to do the following.
• Build automation workflows to configure and manage instances and AWS
resources,
• Create custom workflows or use predefined workflows maintained by AWS,
• Receive notifications about automation tasks and workflows by using Amazon
CloudWatch Events, and
• Monitor automation progress and execution details by using the Amazon EC2 or
the AWS Systems Manager console.
With Automation, customers control the workflows with repeatable steps. Steps can
include manual interaction, for example, to provide approval steps. Automation uses
Amazon Simple Notification Service, or SNS, notifications to approve steps.
With Automation, customers can delegate specific tasks to users who use Systems
Manager. For example, a user who cannot launch an EC2 instance directly can still
start an automation task that creates an EC2 instance from a specific AMI.
Automation integrates with several AWS services to manage complex tasks. For
example, plugins are included to be able to perform the following tasks:
• Create or delete AWS CloudFormation stacks,
• Invoke AWS Lambda functions,
• Create an AMI from an instance,
• Launch an instance from an AMI,
• Start a Run Command, and
• Run other automations.
With AWS Systems Manager Patch Manager, customers can manage patching for
their Windows or Linux servers in Amazon EC2 or on premises. To use Patch Manager,
customers must complete a number of tasks:
• They must create a patch baseline, or use one of the many Amazon-provided
baselines. A baseline defines which patches are approved for installation on the
instances. Customers can approve or reject specific patches, or create automatic
approval rules for certain types of updates, such as critical security updates.
• Customers must organize instances into patch groups. Patch groups tie instances to
a patch baseline. For example, customers can create patch groups for
Development, Test, and Production, and apply new patches to the Test group. They
can also use patch groups for reporting purposes, for example, to show which
patches are applied on production Windows Servers.
• Customers must also schedule patches to be applied by assigning them to a
Maintenance Window.
• Finally, customers must monitor patch completion and compliance status
information.
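For example, an automatic approval rule in a Windows patch baseline might approve critical and important security updates seven days after Microsoft releases them. The following JSON is an illustrative sketch of such a rule (the classifications, severities, and waiting period are example values):

```json
{
  "PatchRules": [
    {
      "PatchFilterGroup": {
        "PatchFilters": [
          { "Key": "CLASSIFICATION", "Values": ["CriticalUpdates", "SecurityUpdates"] },
          { "Key": "MSRC_SEVERITY", "Values": ["Critical", "Important"] }
        ]
      },
      "ApproveAfterDays": 7
    }
  ]
}
```

The seven-day delay gives customers time to validate new patches in a Development or Test patch group before the rule approves them for Production.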
Patch Groups organize and associate instances with a specific patch baseline. They
help ensure that customers deploy appropriate patches to the correct set of
instances. Customers can view status and compliance by patch group. There can be
many patch groups, but each instance can be a member of only one patch group.
AWS Systems Manager Maintenance Windows let customers schedule and control
running potentially disruptive administrative tasks. Each Maintenance Window has a
schedule. Customers use cron or rate expressions to schedule when maintenance
starts.
They set a maximum duration of 1 to 24 hours. Setting a duration does not stop
tasks that are already running; it only stops remaining tasks from being scheduled.
Customers can run Run Command tasks, Automation workflows, Lambda functions,
and Step Functions state machines in a Maintenance Window on instances selected by
ID or tag. Maintenance Windows retain execution history for 30 days.
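For reference, Maintenance Window schedules use standard cron or rate expressions; the two examples below are illustrative:

```text
cron(0 2 ? * SUN *)   -- start every Sunday at 02:00 UTC
rate(7 days)          -- start every 7 days from the window's creation
```

A cron expression pins maintenance to a fixed calendar slot, while a rate expression simply repeats at a fixed interval, so the choice depends on whether the customer's change policy requires a specific weekly window.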
AWS Systems Manager Inventory provides visibility into customers’ Amazon EC2 and
on-premises computing environments.
To use Inventory, customers select targets by instance ID or tag, or use State Manager
to create associations. Next, they schedule when to collect inventory metadata, and
choose the types of metadata to collect.
Customers can also create their own custom inventory types, such as rack location, to
add to the metadata collection.
They can collect and aggregate data from multiple AWS accounts and Regions, and
identify specific resources that aren’t compliant.
• Customers can view compliance history and change tracking for Patch Manager
patching data and State Manager associations by using AWS Config.
• They can customize Systems Manager Compliance to create their own compliance
types based on their IT or business requirements.
• And, customers can remediate issues by using Systems Manager Run Command,
State Manager, or Amazon CloudWatch Events.
Parameter Store provides a centralized, encrypted store for sensitive information
customers use in administrative tasks to manage instances and operating systems.
AWS Key Management Service, or KMS, integration helps customers encrypt their
sensitive information and protect their keys’ security.
Access to parameters is managed with IAM, so customers can limit access to the
users who need it, and to the resources those users are allowed to use.
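A minimal sketch of this workflow, using the AWS Tools for PowerShell (the parameter name and value are placeholders), might look like the following; the calling principal needs both IAM permission on the parameter and access to the KMS key:

```powershell
# Store a secret as a SecureString, encrypted with the default AWS managed key
Write-SSMParameter -Name '/prod/db/password' -Type SecureString -Value 'p@ssw0rd'

# Retrieve and decrypt the value; fails for principals without IAM/KMS access
(Get-SSMParameter -Name '/prod/db/password' -WithDecryption $true).Value
```

Because decryption is an explicit, auditable API call, customers can see in AWS CloudTrail exactly who read each secret.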
With AWS Systems Manager, customers can view and control their infrastructure both
in the AWS cloud, and on-premises. Customers can automate operational tasks, and
maintain inventory and compliance from a single resource.
AWS CloudFormation enables customers to provision and manage their infrastructure
as code. With AWS CloudFormation, customers can plan and design their architecture
to be secure, reliable, performant, and efficient. Customers tell AWS CloudFormation
what must be created, not how to create it.
Here are the steps a customer would follow for an AWS CloudFormation Quick Start
to set up self-managed Active Directory Domain Services across two Availability
Zones.
Because the AWS CloudFormation template describes all the resources customers
need for an application, they can replicate an application by reusing the template. If
they need additional availability, for example, they can use the same template to
create stacks consistently and repeatedly in multiple Regions. Or, they can start a
disaster recovery site, which is always in sync and provisioned the same way as the
production architecture but in a different Region.
Because the AWS CloudFormation template is a text file, customers can manage
infrastructure revisions and control as they would manage source code. When they
need to change resources, such as upgrade some resources in the stack, they can
compare the changed template with the original, and create a change set. By using
change sets, customers can preview how implementing the changes to the stack
might impact resources that are running. Customers can decide whether to
implement the changes or explore other changes instead.
In addition, customers can use previous versions of the template to revert the
infrastructure to a previous version.
To use AWS CloudFormation, customers must complete the following steps:
1. To begin, a customer must create or use an existing template. Customers can
create a YAML or JSON file, or use the AWS CloudFormation Designer to build the
template graphically. They can start from an example template from the AWS
CloudFormation Sample Template Library to learn the basics of creating a
template. They can also use Parameters in the template to declare values to use
when they create the stack.
2. Next, the customer saves the template locally or in an S3 bucket.
3. Then, the customer uses AWS CloudFormation to create a stack based on the
saved template, by using the AWS Management Console’s AWS CloudFormation
console or the command line interface.
4. Finally, while AWS CloudFormation configures and constructs the resources
specified in the stack, the customer monitors the resource creation process in the
AWS CloudFormation console. When the stack reaches the status
CREATE_COMPLETE, the customer can start using the resources.
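To make the steps concrete, the following is a minimal illustrative template (the logical names, default instance type, and AMI ID are placeholders) that declares one Parameter and launches a single instance from it:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Parameters:
  InstanceTypeParam:
    Type: String
    Default: t3.medium
    Description: EC2 instance type to launch.
Resources:
  WindowsInstance:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: !Ref InstanceTypeParam
      ImageId: ami-0123456789abcdef0   # replace with a current Windows Server AMI
```

The template states only what must exist; AWS CloudFormation determines the order of operations and creates the resources when the customer launches a stack from it.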
The stack sets feature extends stacks by enabling customers to create, update, and
delete stacks across multiple accounts and Regions with a single operation. After
customers set up a trust relationship among the accounts where they create stacks,
they can use a single AWS CloudFormation template with stack sets to create, update,
and delete stacks in the specified target accounts and Regions.
In this module, you learned how to automate Microsoft workloads operations with
AWS services.
You also learned how to migrate virtual machines and server applications to AWS.
Finally, you learned how to use AWS services to provision workload environments,
automate change and configuration, and provide ongoing maintenance.
In this module, you will learn how to use AWS to build and run .NET applications.
You will also learn what tools to use to build architectures that support .NET, and
what code management services and code build architectures are available.
Finally, you will learn how to use AWS PowerShell to automate functions from
scripted solutions.
AWS provides full support for .NET applications and Windows workloads.
Additionally, AWS supports .NET and .NET Core, including .NET Core 2.1, in services
such as AWS Lambda, AWS X-Ray, and AWS CodeStar for building modern serverless
and DevOps-centric solutions. These solutions can provide deep integration with
tools developers already use to build .NET apps, like Visual Studio and Visual Studio
Team Services. This means developers can work with familiar tools while they benefit
from the broad variety of AWS products and services.
The AWS Tools for PowerShell download is a Microsoft Software Installer, or MSI,
package that installs the following components:
• Microsoft .NET Framework Features
• AWS SDK for .NET
• AWS Tools for Windows PowerShell
• AWS Command Line Interface
The AWS Tools for Windows PowerShell provides PowerShell modules that are built
on the functionality exposed by the AWS SDK for .NET. The AWS PowerShell tools
enable customers to script operations on AWS resources from the PowerShell
command line. Although the cmdlets are implemented using the service clients and
methods from the SDK, the cmdlets provide an idiomatic PowerShell experience for
specifying parameters and handling results.
The Tools for Windows PowerShell and Tools for PowerShell Core are flexible in how
they enable customers to handle credentials, including support for the AWS Identity
and Access Management, or IAM, infrastructure. Customers can use the tools with
IAM user credentials, temporary security tokens, and IAM roles.
The Tools for PowerShell supports the same set of services and Regions that are
supported by the SDK. Customers can install the Tools for PowerShell on computers
running Windows, Linux, or macOS operating systems.
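As a sketch of that idiomatic experience (the profile name and Region are placeholders), a customer might configure credentials once and then work with AWS resources as ordinary PowerShell objects:

```powershell
# Use a stored IAM user credential profile and set a default Region
Set-AWSCredential -ProfileName my-profile
Set-DefaultAWSRegion -Region us-east-1

# List S3 buckets and EC2 instances; results are .NET objects, so they
# can be piped, filtered, and formatted like any other PowerShell output
Get-S3Bucket | Select-Object BucketName
(Get-EC2Instance).Instances | Select-Object InstanceId, State
```

Because the cmdlets return objects rather than raw JSON, customers can compose them with the rest of their existing PowerShell tooling.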
PowerShell Desired State Configuration, or DSC, is built on open standards. It provides
a configuration management platform that is built into Windows Server 2012 R2,
Windows 8.1, and later operating systems, and it is also available for Linux.
Some AWS Quick Starts that run Windows Server instances use DSC.
DSC is flexible enough to function reliably and consistently in each stage of the
deployment lifecycle of development, test, pre-production, and production, as well as
during scale-out.
DSC uses lightweight commands called cmdlets to express a desired state, and it
provides a framework similar to Chef and Puppet.
When using DSC to apply a desired configuration for a system, the customer creates a
configuration script with PowerShell that explains what the system should look like.
Customers use the configuration script to generate a Management Object Format, or
MOF, file, which is then pushed or pulled by a node to apply the desired state.
PowerShell DSC uses vendor-neutral MOF files to enable cross-platform
management, so the node can be either a Windows or a Linux system.
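A minimal DSC sketch (the configuration name and feature are chosen for illustration) shows the declarative style; compiling the configuration emits a `localhost.mof` file in the output path, which a node then pulls or is pushed to converge to the desired state:

```powershell
Configuration WebServerBaseline {
    Node 'localhost' {
        # Declare WHAT should be true, not HOW to make it so
        WindowsFeature IIS {
            Name   = 'Web-Server'
            Ensure = 'Present'
        }
    }
}

# Compiling generates C:\Dsc\WebServerBaseline\localhost.mof
WebServerBaseline -OutputPath 'C:\Dsc\WebServerBaseline'
```

Because the MOF file, not the script, is what the node consumes, the same configuration can be applied consistently across development, test, and production.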
AWS provides a variety of tools to help .NET developers work with AWS products and
services.
The AWS SDK for .NET is an open-source toolkit that helps Windows developers build
.NET applications that tap into the cost-effective, scalable, and reliable AWS
infrastructure services, such as Amazon S3, Amazon EC2, AWS Lambda, and more.
The AWS SDK for .NET supports development on any platform that supports the .NET
Framework 3.5 or later. The AWS SDK for .NET also targets .NET Standard 1.3.
Customers can use it with .NET Core 1.x or .NET Core 2.0.
The Toolkit for Visual Studio is a plugin for the Visual Studio integrated development
environment, or IDE, that makes it easier to develop, debug, and deploy .NET
applications that use AWS. The Toolkit for Visual Studio provides Visual Studio
templates for AWS services, and deployment wizards for web applications and
serverless applications.
The AWS Tools for Microsoft Visual Studio Team Services, or VSTS, adds tasks to
enable build and release pipelines in VSTS and Team Foundation Server, or TFS, to
work with AWS services. Customers can work with Amazon S3, AWS Elastic Beanstalk,
AWS CodeDeploy, AWS Lambda, AWS CloudFormation, Amazon SQS, and Amazon
SNS. Customers can also run commands using the Windows PowerShell module and
the AWS CLI.
Finally, the AWS Cloud Development Kit, or CDK, supports .NET. The AWS CDK is a
software development framework that defines cloud infrastructure in code and
provisions it through AWS CloudFormation.
The AWS Toolkit for Visual Studio provides Visual Studio project templates that
customers can use as starting points for AWS console and web applications. As a
customer’s application runs, they can use the AWS Explorer to view the AWS
resources used by the application. For example, if an application creates buckets in
Amazon S3, the customer can use AWS Explorer to view the buckets and their
contents. If a customer needs to provision AWS resources for an application, the
customer can create them manually using the AWS Explorer or use the AWS
CloudFormation templates included with this toolkit to provision web application
environments hosted on Amazon EC2.
If a bucket is used with Amazon CloudFront, customers can also perform invalidation
requests from the bucket browser.
The AWS Tools for Microsoft Visual Studio Team Services adds tasks to easily enable
build and release pipelines in VSTS and Team Foundation Server to work with AWS
services, including Amazon S3, AWS Elastic Beanstalk, AWS CodeDeploy, AWS
Lambda, AWS CloudFormation, Amazon SQS, and Amazon SNS, and run commands
using the AWS Tools for Windows PowerShell module and the AWS CLI.
• Using VSTS, customers can transfer files to and from Amazon S3 buckets. Customers
can upload files to an S3 bucket with the Amazon S3 upload task or download from
a bucket with the Amazon S3 download task.
• They can also deploy applications to AWS Elastic Beanstalk, including ASP.NET or
ASP.NET Core applications.
• Customers can deploy to Amazon EC2 with AWS CodeDeploy.
• They can send a message to an SNS Topic or SQS Queue by running AWS Tools for
Windows PowerShell scripts.
• Customers can use cmdlets from the AWS Tools for Windows PowerShell module,
optionally installing the module before use.
• And, customers can run AWS CLI commands against an AWS connection.
For more information about VSTS tools, see the aws-vsts-tools repository on GitHub.
AWS Tools for Microsoft Visual Studio enables customers to quickly deploy and
manage applications in the AWS Cloud without worrying about the infrastructure.
With Visual Studio 2013, 2015, and 2017, customers can directly deploy applications
to Elastic Beanstalk.
Customers can deploy .NET Core 1.0, 1.1, 2.0, and 2.1 web applications, and .NET
Framework web applications.
Follow the link on the screen to watch Jill from AWS demonstrate how to deploy
applications faster by using the AWS Visual Studio Toolkit and AWS Elastic Beanstalk.
https://www.youtube.com/watch?v=B190tcu1ERk (4:23)
The AWS Cloud Development Kit, or AWS CDK, is an open-source software
development framework that customers use to define cloud infrastructure in code,
and provision it through AWS CloudFormation. The CDK integrates with AWS services
and provides a higher-level, object-oriented abstraction to define AWS resources.
With CDK, customers can use simple constructs to build infrastructure rather than
complex AWS resource configuration code.
They can write CDK code by using familiar development environments such as Visual
Studio, Visual Studio Code, or JetBrains Rider.
Best practices are built into the CDK, so customers’ code follows sensible, safe
defaults while still allowing infrastructures to fit the use case.
Customers can create and share their own CDK constructs, which are packaged in the
.NET NuGet format.
Customers can easily integrate AWS CodePipeline with third-party services, such as
GitHub and others shown here, or with custom plugins. With AWS CodePipeline,
customers only pay for what they use. There are no upfront fees or long-term
commitments.
AWS CodeStar is a cloud-based service for creating, managing, and working with
software development projects on AWS. Customers can quickly develop, build, and
deploy applications on AWS with an AWS CodeStar project. An AWS CodeStar project
creates and integrates AWS services for a project development toolchain. Depending
on the choice of AWS CodeStar project template, the toolchain might include source
control, build, deployment, virtual servers, serverless resources, and more. AWS
CodeStar also manages the permissions required for project users, who are called
team members. By adding users as team members to an AWS CodeStar project,
project owners can efficiently grant each team member role-appropriate access to a
project and its resources.
This slide depicts how AWS services for DevOps align to steps in application lifecycle
management.
In this module, you learned how to use AWS to build and run .NET applications.
You also learned what tools to use to build architectures that support .NET, and what
code management services and code build architectures are available.
Finally, you learned how to use AWS PowerShell to automate functions from scripted
solutions.
In this course, you learned the technical fundamentals of running Microsoft
workloads on Amazon Web Services, or AWS. You learned about the various tools
available to migrate, develop, build, deploy, manage, and operate Microsoft
applications and Windows Servers on AWS. You saw case studies and reference
architectures to showcase how some AWS customer architectures have been
designed for common Microsoft workloads including SQL and Active Directory. This
course is available in both instructor-led and web-based delivery formats.
In this course, you learned how to:
- Provide a technical overview of Microsoft workloads on AWS
- Discuss the technical advantages and positioning for Microsoft workloads on AWS,
- Provide guidance to customers who are architecting common Microsoft workloads
for AWS, and
- Explain the various tools to develop, deploy, and manage Microsoft workloads on
AWS
We discussed seven topics:
• Module one covered how to position AWS for managing and hosting Microsoft
workloads.
• Module two covered how to architect foundational AWS services to support
running Microsoft workloads.
• Module three covered how to run Microsoft Windows Server instances in Amazon
EC2, and create custom Amazon Machine Images (AMI) for running Microsoft
workloads.
• Module four covered how to deploy and run Directory services in AWS.
• Module five covered running SQL Server databases on AWS.
• Module six covered how to automate operations with AWS services.
• Module seven covered how tools Amazon provides help you to build and run .NET
applications on AWS.