AWS Security Architecture
• AWS Organizations helps you centrally govern your environment as you grow and scale your
workloads on AWS. Whether you are a growing startup or a large enterprise, Organizations helps you to
centrally manage billing; control access, compliance, and security; and share resources across your AWS
accounts. • Using AWS Organizations, you can automate account creation, create groups of accounts to
reflect your business needs, and apply policies to those groups for governance. You can also simplify
billing by setting up a single payment method for all of your AWS accounts.
Benefits of AWS Organizations • Centrally manage policies across multiple AWS accounts • Govern access
to AWS services, resources, and Regions • Automate AWS account creation and management • Configure
AWS services across multiple accounts • Consolidate billing across multiple AWS accounts
Organizational Units (OU) • You can use organizational units (OUs) to group accounts together to
administer as a single unit. This greatly simplifies the management of your accounts. • For example, you
can attach a policy-based control to an OU, and all accounts within the OU automatically inherit the
policy. • You can create multiple OUs within a single organization, and you can create OUs within other
OUs. • Each OU can contain multiple accounts, and you can move accounts from one OU to another.
However, OU names must be unique within a parent OU or root.
Consolidated Billing • You can use the consolidated billing feature in AWS Organizations to consolidate
billing and payment for multiple AWS accounts. Every organization in AWS Organizations has a management
(payer) account that pays the charges of all the member (linked) accounts. • One bill – You get one bill
for multiple accounts. • Easy tracking – You can track the charges across multiple accounts and
download the combined cost and usage data. • Combined usage – You can combine the usage across all
accounts in the organization to share the volume pricing discounts, Reserved Instance discounts, and
Savings Plans discounts. This can result in a lower charge for your project, department, or company than with
individual standalone accounts. • No extra fee – Consolidated billing is offered at no additional cost.
Service Control Policies • Service control policies (SCPs) are one type of policy that you can use to
manage your organization. SCPs offer central control over the maximum available permissions for all
accounts in your organization, allowing you to ensure your accounts stay within your organization’s
access control guidelines. SCPs are available only in an organization that has all features enabled. •
Attaching an SCP to an AWS Organizations entity (root, OU, or account) defines a guardrail for what
actions the principals can perform. You still need to attach identity-based or resource-based policies to
principals or resources in your organization's accounts to actually grant permissions to them.
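The guardrail idea can be made concrete with a sketch of an SCP document. A common pattern is to deny actions outside a set of approved Regions; the Regions and exempted global services below are illustrative assumptions, not recommendations:

```python
import json

# Sketch of a guardrail SCP: deny all actions outside two approved Regions,
# exempting a few global services that are not Region-scoped. The Region
# list and the NotAction list are illustrative; adapt them to your needs.
region_guardrail_scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyOutsideApprovedRegions",
            "Effect": "Deny",
            "NotAction": [
                "iam:*",            # global services exempted from the guardrail
                "organizations:*",
                "support:*",
            ],
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {
                    "aws:RequestedRegion": ["us-east-1", "eu-west-1"]
                }
            },
        }
    ],
}

print(json.dumps(region_guardrail_scp, indent=2))
```

Note that attaching this SCP to an OU constrains every account in that OU, but (as the text above says) it grants nothing by itself; identity-based or resource-based policies are still required.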
With AWS Organizations, you can centrally manage policies across multiple AWS
accounts. For example, you can apply service control policies (SCPs) across multiple
AWS accounts that are members of an organization. SCPs allow you to define which
AWS service APIs can and cannot be run by AWS Identity and Access
Management (IAM) entities (such as IAM users and roles) in your organization’s
member AWS accounts. SCPs are created and applied from the Org Management
account, which is the AWS account that you used when you created your
organization.
If you use AWS Control Tower to manage your AWS organization, it will deploy a set
of SCPs as preventive guardrails (categorized as mandatory, strongly
recommended, or elective). These guardrails help you govern your resources by
enforcing organization-wide security controls. These SCPs automatically use an aws-
control-tower tag that has a value of managed-by-control-tower.
IAM Identity Center
AWS IAM Identity Center (successor to AWS Single Sign-On) is an identity
federation service that helps you centrally manage SSO access to all your
AWS accounts, principals, and cloud workloads. IAM Identity Center also
helps you manage access and permissions to commonly used third-party
software as a service (SaaS) applications. Identity providers integrate with
IAM Identity Center by using SAML 2.0. Bulk and just-in-time provisioning can
be done by using the System for Cross-Domain Identity Management (SCIM).
IAM Identity Center can also integrate with on-premises or AWS-managed
Microsoft Active Directory (AD) domains as an identity provider through the
use of AWS Directory Service. IAM Identity Center includes a user portal
where your end users can find and access their assigned AWS accounts,
roles, cloud applications, and custom applications in one place.
IAM Identity Center natively integrates with AWS Organizations and runs in
the Org Management account by default. However, to exercise least privilege
and tightly control access to the management account, IAM Identity Center
administration can be delegated to a specific member account. In the AWS
SRA, the Shared Services account is the delegated administrator account for
IAM Identity Center. Before you enable delegated administration for IAM
Identity Center, review these considerations. You will find more information
about delegation in the Shared Services account section. Even after you
enable delegation, IAM Identity Center still needs to run in the Org
Management account to perform certain IAM Identity Center related tasks,
which include managing permission sets that are provisioned in the Org
Management account.
Within the IAM Identity Center console, accounts are displayed by their
encapsulating OU. This enables you to quickly discover your AWS accounts,
apply common sets of permissions, and manage access from a central
location.
IAM Identity Center includes an identity store where specific user information
must be stored. However, IAM Identity Center does not have to be the
authoritative source for workforce information. In cases where your
enterprise already has an authoritative source, IAM Identity Center supports
the following types of identity providers (IdPs).
You can rely on an existing IdP that is already in place within your enterprise.
This makes it easier to manage access across multiple applications and
services, because you are creating, managing, and revoking access from a
single location. For example, if someone leaves your team, you can revoke
their access to all applications and services (including AWS accounts) from
one location. This reduces the need for multiple credentials and provides you
with an opportunity to integrate with your human resources (HR) processes.
IAM Access Advisor
IAM access advisor provides traceability data in the form of service last
accessed information for your AWS accounts and OUs. Use this detective
control to contribute to a least privilege strategy. For IAM entities, you can
view two types of last accessed information: allowed AWS service
information and allowed action information. The information includes the
date and time when the attempt was made.
IAM access advisor within the Org Management account lets you view service last
accessed data for the Org Management account, OU, member account, or
IAM policy in your AWS organization. This information is available in the IAM
console within the management account and can also be obtained
programmatically by using IAM access advisor APIs in AWS Command Line
Interface (AWS CLI) or a programmatic client. The information indicates
which principals in an organization or account last attempted to access the
service and when. Last accessed information provides insight for actual
service usage (see example scenarios), so you can reduce IAM permissions
to only those services that are actually used.
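The pruning step described above can be sketched in code. In practice the data comes from the IAM APIs (GenerateServiceLastAccessedDetails followed by GetServiceLastAccessedDetails); the helper function and sample response below are illustrative:

```python
from datetime import datetime, timedelta, timezone

def stale_services(services_last_accessed, max_age_days=90, now=None):
    """Return namespaces of allowed services not used within max_age_days.

    `services_last_accessed` follows the shape of the ServicesLastAccessed
    list in the IAM GetServiceLastAccessedDetails response: entries with a
    ServiceNamespace and an optional LastAuthenticated timestamp.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_age_days)
    stale = []
    for svc in services_last_accessed:
        last = svc.get("LastAuthenticated")  # absent if never accessed
        if last is None or last < cutoff:
            stale.append(svc["ServiceNamespace"])
    return stale

# Illustrative data; a real run would fetch this with
# iam.generate_service_last_accessed_details(Arn=...) and then
# iam.get_service_last_accessed_details(JobId=...).
sample = [
    {"ServiceNamespace": "s3",
     "LastAuthenticated": datetime(2024, 1, 10, tzinfo=timezone.utc)},
    {"ServiceNamespace": "ec2"},  # never accessed
]
print(stale_services(sample, now=datetime(2024, 6, 1, tzinfo=timezone.utc)))
# → ['s3', 'ec2']
```

Services that appear in this list are candidates for removal from the principal's allowed permissions.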
AWS Systems Manager
Features
AWS Systems Manager is the operations hub for your AWS applications and resources, and is
broken into four core feature groups.
Install the SSM Agent on the systems that you need to manage.
AWS Control Tower has a broad and flexible set of features. A key feature is
its ability to orchestrate the capabilities of several other AWS services,
including AWS Organizations, AWS Service Catalog, and IAM Identity Center,
to build a landing zone. For example, by default AWS Control Tower uses
AWS CloudFormation to establish a baseline, AWS Organizations service
control policies (SCPs) to prevent configuration changes, and AWS Config
rules to continuously detect non-conformance. AWS Control Tower employs
blueprints that help you quickly align your multi-account AWS environment
with AWS Well-Architected security foundation design principles. Among
governance features, AWS Control Tower offers guardrails that prevent
deployment of resources that don’t conform to selected policies.
You can get started implementing AWS SRA guidance with AWS Control
Tower. For example, AWS Control Tower establishes an AWS organization
with the recommended multi-account architecture. It provides blueprints to
set up identity management, provide federated access to accounts,
centralize logging, establish cross-account security audits, define a workflow
for provisioning new accounts, and implement account baselines with
network configurations.
In the AWS SRA, AWS Control Tower is within the Org Management account
because AWS Control Tower uses this account to set up an AWS organization
automatically and designates that account as the management account. This
account is used for billing across your AWS organization. It's also used for
Account Factory provisioning of accounts, to manage OUs, and to manage
guardrails. If you are launching AWS Control Tower in an existing AWS
organization, you can use the existing management account. AWS Control
Tower will use that account as the designated management account.
Security foundations
The security pillar describes how to take advantage of cloud technologies to protect
data, systems, and assets in a way that can improve your security posture. This
paper provides in-depth, best-practice guidance for architecting secure workloads
on AWS.
Design principles
In the cloud, a number of design principles can help you strengthen
your workload security.
AWS Artifact
AWS Artifact provides on-demand access to AWS security and compliance
reports and select online agreements. Reports available in AWS Artifact
include System and Organization Controls (SOC) reports, Payment Card
Industry (PCI) reports, and certifications from accreditation bodies across
geographies and compliance verticals that validate the implementation and
operating effectiveness of AWS security controls. AWS Artifact helps you
perform your due diligence of AWS with enhanced transparency into our
security control environment. It also lets you continuously monitor the
security and compliance of AWS with immediate access to new reports.
AWS Artifact Agreements enable you to review, accept, and track the status
of AWS agreements such as the Business Associate Addendum (BAA) for an
individual account and for the accounts that are part of your organization in
AWS Organizations.
You can provide the AWS audit artifacts to your auditors or regulators as
evidence of AWS security controls. You can also use the responsibility
guidance provided by some of the AWS audit artifacts to design your cloud
architecture. This guidance helps determine the additional security controls
you can put in place to support the specific use cases of your system.
Design considerations
AWS Control Tower names the account under the Security OU
the Audit Account by default. You can rename the account during the
AWS Control Tower setup.
It might be appropriate to have more than one Security Tooling
account. For example, monitoring and responding to security events
are often assigned to a dedicated team. Network security might
warrant its own account and roles in collaboration with the cloud
infrastructure or network team. Such splits retain the objective of
separating centralized security enclaves and further emphasize the
separation of duties, least privilege, and potential simplicity of team
assignments. If you are using AWS Control Tower, it restricts the
creation of additional AWS accounts under the Security OU.
AWS CloudTrail
AWS CloudTrail is a service that supports the governance, compliance, and
auditing of activity in your AWS account. With CloudTrail, you can log,
continuously monitor, and retain account activity related to actions across
your AWS infrastructure. CloudTrail is integrated with AWS Organizations,
and that integration can be used to create a single trail that logs all events
for all accounts in the AWS organization. This is referred to as
an organization trail. You can create and manage an organization trail only
from within the management account for the organization or from a
delegated administrator account. When you create an organization trail, a
trail with the name that you specify is created in every AWS account that
belongs to your AWS organization. The trail logs activity for all accounts,
including the management account, in the AWS organization and stores the
logs in a single S3 bucket. Because of the sensitivity of this S3 bucket, you
should secure it by following the best practices outlined in the Amazon S3 as
central log store section later in this guide. All accounts in the AWS
organization can see the organization trail in their list of trails. However,
member AWS accounts have view-only access to this trail. By default, when
you create an organization trail in the CloudTrail console, the trail is a multi-
Region trail.
You can create and manage organization trails from both management and
delegated administrator accounts. However, as a best practice, you should limit
access to the management account and use the delegated administrator
functionality where it is available.
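The sensitivity of the central log bucket mentioned above comes from the fact that CloudTrail must be allowed to write to it. A sketch of the required bucket policy follows; the bucket name and organization ID are illustrative placeholders:

```python
import json

BUCKET = "example-org-cloudtrail-logs"   # illustrative bucket name
ORG_ID = "o-exampleorgid"                # illustrative organization ID

# Sketch of the bucket policy CloudTrail needs on the central log bucket:
# the service may check the bucket ACL and write log objects, but only
# with bucket-owner-full-control so the bucket owner owns every object.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AWSCloudTrailAclCheck",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:GetBucketAcl",
            "Resource": f"arn:aws:s3:::{BUCKET}",
        },
        {
            "Sid": "AWSCloudTrailWrite",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/AWSLogs/{ORG_ID}/*",
            "Condition": {
                "StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}
            },
        },
    ],
}

print(json.dumps(bucket_policy, indent=2))
```

Beyond this policy, the best practices referenced above (blocking public access, encryption, restricted read access) should also be applied to the bucket.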
AWS Security Hub
AWS Security Hub provides you with a comprehensive view of your security
posture in AWS and helps you check your environment against security
industry standards and best practices. Security Hub collects security data
from across AWS integrated services, supported third-party products, and
other custom security products that you might use. It helps you continuously
monitor and analyze your security trends and identify the highest priority
security issues. In addition to the ingested sources, Security Hub generates
its own findings represented by security controls that map to one or more
security standards. These standards include AWS Foundational Security Best
Practices (FSBP), Center for Internet Security (CIS) AWS Foundations
Benchmark v1.2.0 and v1.4.0, National Institute of Standards and Technology
(NIST) SP 800-53 Rev. 5, Payment Card Industry Data Security Standard (PCI
DSS), and service-managed standards. For a list of current security
standards and details on specific security controls, see the Security Hub
standards reference in the Security Hub documentation.
You can use Security Hub with the Network Access Analyzer feature of
Amazon VPC to help continuously monitor the compliance of your AWS
network configuration. This will help you block unwanted network access and
help prevent your critical resources from external access. For further
architecture and implementation details, see the AWS blog post Continuous
verification of network compliance using Amazon VPC Network Access
Analyzer and AWS Security Hub.
Security Hub uses service-linked AWS Config rules to perform most of its
security checks for controls. To support these controls, AWS Config must be
enabled on all accounts—including the administrator (or delegated
administrator) account and member accounts—in each AWS Region where
Security Hub is enabled.
Security Hub does not manage AWS Config for you. If you already have AWS
Config enabled, you can continue to configure its settings through the AWS
Config console or APIs.
If you enable AWS Config after you enable a standard, Security Hub still
creates the AWS Config rules, but only if you enable AWS Config within 31
days after you enable the standard. If you do not enable AWS Config within
31 days, then you must disable and re-enable the standard after you enable
AWS Config.
Features
When you set up AWS Config, you can do the following:
Resource administration
To exercise better governance over your resource configurations and to
detect resource misconfigurations, you need fine-grained visibility into what
resources exist and how these resources are configured at any time. You can
use AWS Config to notify you whenever resources are created, modified, or
deleted without having to monitor these changes by polling the calls made to
each resource.
You can use AWS Config rules to evaluate the configuration settings of your
AWS resources. When AWS Config detects that a resource violates the
conditions in one of your rules, AWS Config flags the resource as
noncompliant and sends a notification. AWS Config continuously evaluates
your resources as they are created, changed, or deleted.
You can also use the historical configurations of your resources provided by
AWS Config to troubleshoot issues and to access the last known good
configuration of a problem resource.
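The rule evaluation described above can be sketched as the kind of check a Lambda-backed custom AWS Config rule might perform, here flagging security groups that open SSH to the world. The field names follow the EC2 security group configuration shape, but treat the whole function as illustrative:

```python
# Sketch of custom-rule evaluation logic: return the compliance value that
# a Config rule would report for a security group's inbound permissions.
def evaluate_security_group(ip_permissions):
    for perm in ip_permissions:
        from_port = perm.get("FromPort")
        to_port = perm.get("ToPort")
        # FromPort absent typically means all ports (protocol "-1")
        covers_ssh = from_port is None or (from_port <= 22 <= (to_port or from_port))
        open_to_world = any(
            r.get("CidrIp") == "0.0.0.0/0" for r in perm.get("IpRanges", [])
        )
        if covers_ssh and open_to_world:
            return "NON_COMPLIANT"
    return "COMPLIANT"

bad = [{"FromPort": 22, "ToPort": 22,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}]
good = [{"FromPort": 443, "ToPort": 443,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}]
print(evaluate_security_group(bad))   # NON_COMPLIANT
print(evaluate_security_group(good))  # COMPLIANT
```

In a real custom rule, this result would be reported back through the Config PutEvaluations API, which then drives the notification flow described above.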
Security Analysis
To analyze potential security weaknesses, you need detailed historical
information about your AWS resource configurations, such as the AWS
Identity and Access Management (IAM) permissions that are granted to your
users, or the Amazon EC2 security group rules that control access to your
resources.
You can use AWS Config to view the IAM policy that was assigned to a user,
group, or role at any time when AWS Config was recording. This
information can help you determine the permissions that belonged to a user
at a specific time: for example, you can view whether the user John Doe had
permission to modify Amazon VPC settings on Jan 1, 2015.
You can also use AWS Config to view the configuration of your EC2 security
groups, including the port rules that were open at a specific time. This
information can help you determine whether a security group blocked
incoming TCP traffic to a specific port.
AWS Config
AWS Config is a service that enables you to assess, audit, and evaluate the
configurations of supported AWS resources in your AWS accounts. AWS
Config continuously monitors and records AWS resource configurations, and
automatically evaluates recorded configurations against desired
configurations. You can also integrate AWS Config with other services to do
the heavy lifting in automated audit and monitoring pipelines. For example,
AWS Config can monitor for changes in individual secrets in AWS Secrets
Manager.
AWS Config integrates with AWS Security Hub to send the results of AWS
Config managed and custom rule evaluations as findings into Security Hub.
AWS Config rules can be used in conjunction with AWS Systems Manager to
effectively remediate noncompliant resources. You use AWS Systems
Manager Explorer to gather the compliance status of AWS Config rules in
your AWS accounts across AWS Regions and then use Systems Manager
Automation documents (runbooks) to resolve your noncompliant AWS Config
rules. For implementation details, see the blog post Remediate
noncompliant AWS Config rules with AWS Systems Manager Automation
runbooks.
If you use AWS Control Tower to manage your AWS organization, it will
deploy a set of AWS Config rules as detective guardrails (categorized as
mandatory, strongly recommended, or elective). These guardrails help you
govern your resources and monitor compliance across accounts in your AWS
organization. These AWS Config rules will automatically use an aws-control-
tower tag that has a value of managed-by-control-tower.
AWS Config must be enabled for each member account in the AWS
organization and AWS Region that contains the resources that you want to
protect. You can centrally manage (for example, create, update, and delete)
AWS Config rules across all accounts within your AWS organization. From the
AWS Config delegated administrator account, you can deploy a common set
of AWS Config rules across all accounts and specify accounts where AWS
Config rules should not be created. The AWS Config delegated administrator
account can also aggregate resource configuration and compliance data
from all member accounts to provide a single view. Use the APIs from the
delegated administrator account to enforce governance by ensuring that the
underlying AWS Config rules cannot be modified by the member accounts in
your AWS organization.
Design considerations
AWS Config streams configuration and compliance change notifications
to Amazon EventBridge. This means that you can use the native
filtering capabilities in EventBridge to filter AWS Config events so that
you can route specific types of notifications to specific targets. For
example, you can send compliance notifications for specific rules or
resource types to specific email addresses, or route configuration
change notifications to an external IT service management (ITSM) or
configuration management database (CMDB) tool. For more
information, see the blog post AWS Config best practices.
In addition to using AWS Config proactive rule evaluation, you can
use AWS CloudFormation Guard, which is a policy-as-code evaluation
tool that proactively checks for resource configuration compliance. The
AWS CloudFormation Guard command line interface (CLI) provides you
with a declarative, domain-specific language (DSL) that you can use to
express policy as code. In addition, you can use CLI commands to
validate JSON-formatted or YAML-formatted structured data such as
CloudFormation change sets, JSON-based Terraform configuration files,
or Kubernetes configurations. You can run the evaluations locally by
using the AWS CloudFormation Guard CLI as part of your authoring
process or run it within your deployment pipeline. If you have AWS
Cloud Development Kit (AWS CDK) applications, you can use cdk-
nag for proactive checking of best practices.
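The EventBridge filtering of AWS Config notifications described in the first consideration above can be sketched as an event pattern. The `source` and `detail-type` values follow the documented Config compliance change event; the naive matcher is purely illustrative (real EventBridge matching supports many more operators):

```python
# Sketch of an EventBridge event pattern that routes only NON_COMPLIANT
# results from AWS Config compliance change events to a target.
pattern = {
    "source": ["aws.config"],
    "detail-type": ["Config Rules Compliance Change"],
    "detail": {
        "newEvaluationResult": {"complianceType": ["NON_COMPLIANT"]}
    },
}

def matches(event, pat):
    """Naive subset matcher for the simple exact-value patterns above."""
    for key, expected in pat.items():
        value = event.get(key)
        if isinstance(expected, dict):
            if not isinstance(value, dict) or not matches(value, expected):
                return False
        elif value not in expected:
            return False
    return True

event = {
    "source": "aws.config",
    "detail-type": "Config Rules Compliance Change",
    "detail": {"newEvaluationResult": {"complianceType": "NON_COMPLIANT"}},
}
print(matches(event, pattern))  # True
```

A pattern like this, attached to an EventBridge rule, is what lets you send only the notifications you care about to email, an ITSM tool, or a CMDB.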
Design consideration
To get account-scoped IAM Access Analyzer findings (where the account
serves as the trusted boundary), you create an account-scoped analyzer in each
member account. This can be done as part of the account pipeline.
Account-scoped findings flow into Security Hub at the member account
level. From there, they flow to the Security Hub delegated
administrator account (Security Tooling).
AWS Firewall Manager
AWS Firewall Manager helps protect your network by simplifying your
administration and maintenance tasks for AWS WAF, AWS Shield Advanced,
Amazon VPC security groups, AWS Network Firewall, and Route 53 Resolver
DNS Firewall across multiple accounts and resources. With Firewall Manager,
you set up your AWS WAF firewall rules, Shield Advanced protections,
Amazon VPC security groups, AWS Network Firewall firewalls, and DNS
Firewall rule group associations only once. The service automatically applies
the rules and protections across your accounts and resources, even as you
add new resources.
Firewall Manager is particularly useful when you want to protect your entire
AWS organization instead of a small number of specific accounts and
resources, or if you frequently add new resources that you want to protect.
Firewall Manager uses security policies to let you define a set of
configurations, including relevant rules, protections, and actions that must
be deployed and the accounts and resources (indicated by tags) to include or
exclude. You can create granular and flexible configurations while still being
able to scale control out to large numbers of accounts and VPCs. These
policies automatically and consistently enforce the rules you configure even
when new accounts and resources are created. Firewall Manager is enabled
in all accounts through AWS Organizations, and configuration and
management are performed by the appropriate security teams in the
Firewall Manager delegated administrator account (in this case, the Security
Tooling account).
You must enable AWS Config for each AWS Region that contains the
resources that you want to protect. If you don't want to enable AWS Config
for all resources, you must enable it for resources that are associated
with the type of Firewall Manager policies that you use. When you use both
AWS Security Hub and Firewall Manager, Firewall Manager automatically
sends your findings to Security Hub. Firewall Manager creates findings for
resources that are out of compliance and for attacks that it detects, and
sends the findings to Security Hub. When you set up a Firewall Manager
policy for AWS WAF, you can centrally enable logging on web access control
lists (web ACLs) for all in-scope accounts and centralize the logs under a
single account.
Design consideration
Account managers of individual member accounts in the AWS
organization can configure additional controls (such as AWS WAF rules
and Amazon VPC security groups) in the Firewall Manager managed
services according to their particular needs.
Amazon EventBridge
Amazon EventBridge is a serverless event bus service that makes it
straightforward to connect your applications with data from a variety of
sources. It is frequently used in security automation. You can set up routing
rules to determine where to send your data to build application architectures
that react in real time to all your data sources. You can create a custom
event bus to receive events from your custom applications, in addition to
using the default event bus in each account. You can create an event bus in
the Security Tooling account that can receive security-specific events from
other accounts in the AWS organization. For example, by linking AWS Config
rules, GuardDuty, and Security Hub with EventBridge, you create a flexible,
automated pipeline for routing security data, raising alerts, and managing
actions to resolve issues.
Design considerations
EventBridge is capable of routing events to a number of different
targets. One valuable pattern for automating security actions is to
connect particular events to individual AWS Lambda responders, which
take appropriate actions. For example, in certain circumstances you
might want to use EventBridge to route a public S3 bucket finding to a
Lambda responder that corrects the bucket policy and removes the
public permissions. These responders can be integrated into your
investigative playbooks and runbooks to coordinate response activities.
A best practice for a successful security operations team is to integrate
the flow of security events and findings into a notification and workflow
system such as a ticketing system, a bug/issue system, or another
security information and event management (SIEM) system. This takes
the workflow out of email and static reports, and helps you route,
escalate, and manage events or findings. The flexible routing abilities
in EventBridge are a powerful enabler for this integration.
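The Lambda responder pattern from the first consideration above can be sketched as follows. The event shape mirrors a Security Hub finding delivered through EventBridge, but both the shape and the helper names are illustrative assumptions; `put_public_access_block` is a real S3 API:

```python
# Sketch of a Lambda responder for a public S3 bucket finding routed from
# Security Hub via EventBridge. Verify the finding format you actually
# receive before relying on this parsing.
def bucket_from_finding(event):
    """Extract the bucket name from the first resource ARN in the finding."""
    arn = event["detail"]["findings"][0]["Resources"][0]["Id"]
    return arn.rpartition(":::")[2]

def handler(event, context=None, s3=None):
    bucket = bucket_from_finding(event)
    if s3 is not None:  # injected boto3 S3 client; None lets us test parsing only
        s3.put_public_access_block(
            Bucket=bucket,
            PublicAccessBlockConfiguration={
                "BlockPublicAcls": True,
                "IgnorePublicAcls": True,
                "BlockPublicPolicy": True,
                "RestrictPublicBuckets": True,
            },
        )
    return bucket

sample_event = {
    "detail": {"findings": [{"Resources": [{"Id": "arn:aws:s3:::demo-bucket"}]}]}
}
print(handler(sample_event))  # demo-bucket
```

Injecting the client as a parameter keeps the remediation call testable and makes it explicit that the function has a side effect on the target bucket.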
Amazon Detective
Amazon Detective supports your responsive security control strategy by
making it straightforward to analyze, investigate, and quickly identify the
root cause of security findings or suspicious activities for your security
analysts. Detective automatically extracts time-based events such as login
attempts, API calls, and network traffic from AWS CloudTrail logs and
Amazon VPC flow logs. You can use Detective to access up to a year of
historical event data. Detective consumes these events by using
independent streams of CloudTrail logs and Amazon VPC flow logs. Detective
uses machine learning and visualization to create a unified, interactive view
of the behavior of your resources and the interactions among them over time
—this is called a behavior graph. You can explore the behavior graph to
examine disparate actions such as failed logon attempts or suspicious API
calls.
Design consideration
You can navigate to Detective finding profiles from the GuardDuty and
AWS Security Hub consoles. These links can help streamline the
investigation process. Your account must be the administrative
account for both Detective and the service you are pivoting from
(GuardDuty or Security Hub). If the primary accounts are the same for
the services, the integration links work seamlessly.
AWS Audit Manager
With Audit Manager you can audit against prebuilt frameworks such as the
Center for Internet Security (CIS) benchmark, the CIS AWS Foundations
Benchmark, System and Organization Controls 2 (SOC 2), and the Payment
Card Industry Data Security Standard (PCI DSS). It also gives you the ability to
create your own frameworks with standard or custom controls based on your
specific requirements for internal audits.
Audit Manager collects four types of evidence. Three types of evidence are
automated: compliance check evidence from AWS Config and AWS Security
Hub, management events evidence from AWS CloudTrail, and configuration
evidence from AWS service-to-service API calls. For evidence that cannot be
automated, Audit Manager lets you upload manual evidence.
Note
Audit Manager assists in collecting evidence that's relevant for verifying
compliance with specific compliance standards and regulations. However, it
doesn't assess your compliance. Therefore, the evidence that's collected
through Audit Manager might not include details of your operational
processes that are needed for audits. Audit Manager isn't a substitute for
legal counsel or compliance experts. We recommend that you engage the
services of a third-party assessor who is certified for the compliance
framework(s) that you are evaluated against.
Audit Manager assessments can run over multiple accounts in your AWS
organization. Audit Manager collects and consolidates evidence into a
delegated administrator account in AWS Organizations. This audit
functionality is primarily used by compliance and internal audit teams, and
requires only read access to your AWS accounts.
Design considerations
Audit Manager complements other AWS security services such as
Security Hub and AWS Config to help implement a risk management
framework. Audit Manager provides independent risk assurance
functionality, whereas Security Hub helps you oversee your risk and
AWS Config conformance packs assist in managing your risks. Audit
professionals who are familiar with the Three Lines Model developed by
the Institute of Internal Auditors (IIA) should note that this combination
of AWS services helps you cover the three lines of defense. For more
information, see the two-part blog series on the AWS Cloud Operations
& Migrations blog.
In order for Audit Manager to collect Security Hub evidence, the
delegated administrator account for both services has to be the same
AWS account. For this reason, in the AWS SRA, the Security Tooling
account is the delegated administrator for Audit Manager.
AWS Artifact
AWS Artifact is hosted within the Security Tooling account to delegate the
compliance artifact management functionality from the AWS Org
Management account. This delegation is important because we recommend
that you avoid using the AWS Org Management account for deployments
unless absolutely necessary. Instead, delegate deployments to member
accounts. Because audit artifact management can be done from a member
account and the function closely aligns with security and compliance teams,
the Security Tooling account is designated as the delegated administrator
account for AWS Artifact. You can use AWS Artifact reports to download AWS
security and compliance documents, such as AWS ISO certifications,
Payment Card Industry (PCI), and System and Organization Controls (SOC)
reports. You can restrict this capability to only AWS Identity and Access
Management (IAM) roles pertaining to your audit and compliance teams, so
they can download, review, and provide those reports to external auditors as
needed. You can additionally restrict specific IAM roles to have access to only
specific AWS Artifact reports through IAM policies. For sample IAM policies,
see the AWS Artifact documentation.
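As a hedged illustration of the kind of restriction described above, the following sketch builds an IAM policy document that grants an audit role read-only access to AWS Artifact reports. The action names follow the AWS Artifact API, but you should verify them against the current service authorization reference before use:

```python
import json

# Illustrative IAM policy for an audit/compliance role: list report
# metadata and download reports, nothing else. Verify action names
# against the AWS Artifact service authorization reference.
artifact_read_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowAuditTeamArtifactAccess",
            "Effect": "Allow",
            "Action": [
                "artifact:ListReports",
                "artifact:GetReportMetadata",
                "artifact:GetReport",
                "artifact:GetTermForReport",
            ],
            "Resource": "*",
        }
    ],
}

print(json.dumps(artifact_read_policy, indent=2))
```

Attach a policy like this only to the IAM roles used by your audit and compliance teams, and add a `Resource` or condition scope if you want to limit access to specific reports.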
Design consideration
If you choose to have a dedicated AWS account for audit and
compliance teams, you can host AWS Artifact in a security audit
account, which is separate from the Security Tooling account. AWS
Artifact reports provide evidence that demonstrates that an
organization is following a documented process or meeting a specific
requirement. Audit artifacts are gathered and archived throughout the
system development lifecycle and can be used as evidence in internal
or external audits and assessments.
AWS KMS
AWS Key Management Service (AWS KMS) helps you create and manage
cryptographic keys and control their use across a wide range of AWS
services and in your applications. AWS KMS is a secure and resilient service
that uses hardware security modules to protect cryptographic keys. It follows
industry standard lifecycle processes for key material, such as storage,
rotation, and access control of keys. AWS KMS can help protect your data
with encryption and signing keys, and can be used for both server-side
encryption and client-side encryption through the AWS Encryption SDK. For
protection and flexibility, AWS KMS supports three types of keys: customer
managed keys, AWS managed keys, and AWS owned keys. Customer
managed keys are AWS KMS keys in your AWS account that you create, own,
and manage. AWS managed keys are AWS KMS keys in your account that are
created, managed, and used on your behalf by an AWS service that is
integrated with AWS KMS. AWS owned keys are a collection of AWS KMS keys
that an AWS service owns and manages for use in multiple AWS accounts.
For more information about using KMS keys, see the AWS KMS
documentation and AWS KMS Cryptographic Details.
In the Security Tooling account, AWS KMS is used to manage the encryption
of centralized security services such as the AWS CloudTrail organization trail
that is managed by the AWS organization.
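To make the CloudTrail use case concrete, here is a hedged sketch of one statement you might include in the key policy of a customer managed key used for an organization trail. It allows the CloudTrail service principal to generate data keys, scoped by encryption context; the account ID and ARN pattern are placeholders:

```python
import json

# Hypothetical key policy statement letting CloudTrail use this
# customer managed key to encrypt organization trail log files.
# The account ID (111122223333) is a placeholder.
cloudtrail_kms_statement = {
    "Sid": "AllowCloudTrailToEncryptLogs",
    "Effect": "Allow",
    "Principal": {"Service": "cloudtrail.amazonaws.com"},
    "Action": "kms:GenerateDataKey*",
    "Resource": "*",
    "Condition": {
        "StringLike": {
            # Restrict use of the key to trails in the expected account
            "kms:EncryptionContext:aws:cloudtrail:arn": "arn:aws:cloudtrail:*:111122223333:trail/*"
        }
    },
}

print(json.dumps(cloudtrail_kms_statement, indent=2))
```

A complete key policy would also include administrative statements for key managers and decrypt statements for the roles that read the logs; see the AWS KMS and CloudTrail documentation for the full recommended policy.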
AWS Private CA
AWS Private Certificate Authority (AWS Private CA) is a managed private CA
service that helps you securely manage the lifecycle of your private end-
entity TLS certificates for EC2 instances, containers, IoT devices, and on-
premises resources, and it enables encrypted TLS communication for your
running applications. With AWS Private CA, you can create your own CA hierarchy (a
root CA, through subordinate CAs, to end-entity certificates) and issue
certificates with it to authenticate internal users, computers, applications,
services, servers, and other devices, and to sign computer code. Certificates
issued by a private CA are trusted only within your AWS organization, not on
the internet.
Note
AWS Certificate Manager (ACM) also helps you provision, manage, and deploy public TLS certificates for
use with AWS services. To support this functionality, ACM has to reside in the
AWS account that would use the public certificate. This is discussed later in
this guide, in the Application account section.
Design considerations
With AWS Private CA, you can create a hierarchy of certificate
authorities with up to five levels. You can also create multiple
hierarchies, each with its own root. The AWS Private CA hierarchy
should adhere to your organization’s PKI design. However, keep in
mind that increasing the CA hierarchy increases the number of
certificates in the certification path, which, in turn, increases the
validation time of an end-entity certificate. A well-defined CA hierarchy
provides benefits that include granular security controls appropriate to
each CA, delegation of subordinate CAs to different applications (which
divides administrative tasks), the use of CAs with limited, revocable
trust, the ability to define different validity periods, and the
ability to enforce path length limits. Ideally, your root and subordinate CAs are
in separate AWS accounts. For more information about planning a CA
hierarchy by using AWS Private CA, see the AWS Private CA
documentation and the blog post How to secure an enterprise scale
AWS Private CA hierarchy for automotive and manufacturing.
AWS Private CA can integrate with your existing CA hierarchy, which
allows you to use the automation and native AWS integration capability
of ACM in conjunction with the existing root of trust that you use today.
You can create a subordinate CA in AWS Private CA backed by a parent
CA on premises. For more information about implementation,
see Installing a subordinate CA certificate signed by an external parent
CA in the AWS Private CA documentation.
Amazon Inspector
Amazon Inspector is an automated vulnerability management service that
automatically discovers and scans Amazon EC2 instances, container images
in Amazon Elastic Container Registry (Amazon ECR), and AWS Lambda functions for
known software vulnerabilities and unintended network exposure.
Design considerations
Amazon Inspector integrates with AWS Security Hub automatically
when both services are enabled. You can use this integration to send
all findings from Amazon Inspector to Security Hub, which will then
include those findings in its analysis of your security posture.
Amazon Inspector automatically exports events for findings, resource
coverage changes, and initial scans of individual resources to Amazon
EventBridge, and, optionally, to an Amazon Simple Storage Service
(Amazon S3) bucket. To export active findings to an S3 bucket, you
need an AWS KMS key that Amazon Inspector can use to encrypt
findings and an S3 bucket with permissions that allow Amazon
Inspector to upload objects. EventBridge integration enables you to
monitor and process findings in near real time as part of your existing
security and compliance workflows. EventBridge events are published
to the Amazon Inspector delegated administrator account in addition to
the member account from which they originated.
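The EventBridge integration described above can be sketched with an illustrative event pattern that matches high-severity Amazon Inspector findings. The `source` and `detail-type` values follow the documented Inspector event format, but confirm them against the current Amazon Inspector documentation:

```python
import json

# Illustrative EventBridge event pattern for routing critical and
# high-severity Amazon Inspector findings to an existing security
# workflow (for example, an SNS topic or a ticketing function).
inspector_finding_pattern = {
    "source": ["aws.inspector2"],
    "detail-type": ["Inspector2 Finding"],
    "detail": {
        # Filter at the rule level so downstream targets only see
        # findings above your chosen severity threshold.
        "severity": ["CRITICAL", "HIGH"]
    },
}

print(json.dumps(inspector_finding_pattern, indent=2))
```

You would attach this pattern to an EventBridge rule in the delegated administrator account, where findings from all member accounts are also published.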
Implementation example
The AWS SRA code library provides a sample implementation of Amazon
Inspector. It demonstrates delegated administration (Security Tooling) and
configures Amazon Inspector for all existing and future accounts in the AWS
organization.
Deploying common security services
within all AWS accounts
The Apply security services across your AWS organization section earlier in
this reference highlighted security services that protect an AWS account, and
noted that many of these services can also be configured and managed
within AWS Organizations. Some of these services should be deployed in all
accounts, and you will see them in the AWS SRA. This enables a consistent
set of guardrails and provides centralized monitoring, management, and
governance across your AWS organization.
Security Hub, GuardDuty, AWS Config, Access Analyzer, and AWS CloudTrail
organization trails appear in all accounts. The first three support the
delegated administrator feature discussed previously in the Management
account, trusted access, and delegated administrators section. CloudTrail
currently uses a different aggregation mechanism.
Design considerations
Specific account configurations might necessitate additional security
services. For example, accounts that manage S3 buckets (the
Application and Log Archive accounts) should also include Amazon
Macie, and you should consider turning on CloudTrail S3 data event
logging in addition to these common security services. (Macie supports delegated
administration with centralized configuration and monitoring.) Another
example is Amazon Inspector, which is applicable only for accounts
that host either EC2 instances or Amazon ECR images.
In addition to the services described previously in this section, the AWS
SRA includes two security-focused services, Amazon Detective and
AWS Audit Manager, which support AWS Organizations integration and
the delegated administrator functionality. However, those are not
included as part of the recommended services for account baselining,
because we have seen that these services are best used in the
following scenarios:
o You have a dedicated team or group of resources that perform
these functions. Detective is best utilized by security analyst
teams and Audit Manager is helpful to your internal audit or
compliance teams.
o You want to focus on a core set of tools such as GuardDuty and
Security Hub at the start of your project, and then build on these
by using services that provide additional capabilities.
Design consideration
Operational log data used by your infrastructure, operations, and
workload teams often overlaps with the log data used by security,
audit, and compliance teams. We recommend that you consolidate
your operational log data into the Log Archive account. Based on your
specific security and governance requirements, you might need to
filter operational log data saved to this account. You might also need
to specify who has access to the operational log data in the Log
Archive account.
Types of logs
The primary logs shown in the AWS SRA include CloudTrail (organization
trail), Amazon VPC flow logs, access logs from Amazon CloudFront and AWS
WAF, and DNS logs from Amazon Route 53. These logs provide an audit of
actions taken (or attempted) by a user, role, AWS service, or network entity
(identified, for example, by an IP address). Other log types (for example,
application logs or database logs) can be captured and archived as well. For
more information about log sources and logging best practices, see
the security documentation for each service.
In the AWS SRA, the primary logs stored in Amazon S3 come from CloudTrail,
so this section describes how to protect those objects. This guidance also
applies to any other S3 objects created either by your own applications or by
other AWS services. Apply these patterns whenever you have data in
Amazon S3 that needs high integrity, strong access control, and automated
retention or destruction.
All new objects (including CloudTrail logs) that are uploaded to S3 buckets
are encrypted by default by using Amazon server-side encryption with
Amazon S3-managed encryption keys (SSE-S3). This helps protect the data
at rest, but access is then controlled exclusively by IAM policies. To
provide an additional managed security layer, you can use server-side
encryption with AWS KMS keys that you manage (SSE-KMS) on all security S3
buckets. This adds a second level of access control. To read log files, a user
must have both Amazon S3 read permissions for the S3 object and decrypt
permission for the associated KMS key, granted through an IAM policy or
role in combination with the key policy.
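One way to enforce the SSE-KMS layer is a bucket policy statement that rejects uploads that don't request KMS encryption. The sketch below is illustrative, and the bucket name is a placeholder:

```python
import json

# Hypothetical bucket policy statement: deny any PutObject request
# that does not specify SSE-KMS, so every stored log object carries
# the second (KMS key policy) layer of access control.
# "example-log-archive-bucket" is a placeholder name.
deny_unencrypted_uploads = {
    "Sid": "DenyNonSseKmsUploads",
    "Effect": "Deny",
    "Principal": "*",
    "Action": "s3:PutObject",
    "Resource": "arn:aws:s3:::example-log-archive-bucket/*",
    "Condition": {
        "StringNotEquals": {
            "s3:x-amz-server-side-encryption": "aws:kms"
        }
    },
}

print(json.dumps(deny_unencrypted_uploads, indent=2))
```

Because the statement is a `Deny`, it applies even to principals that otherwise have broad S3 permissions in the account.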
Two options help you protect or verify the integrity of CloudTrail log objects
that are stored in Amazon S3. CloudTrail provides log file integrity
validation to determine whether a log file was modified or deleted after
CloudTrail delivered it. The other option is S3 Object Lock.
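For the S3 Object Lock option, the following sketch shows an illustrative default retention configuration (expressed in the shape of the S3 `PutObjectLockConfiguration` API) that keeps delivered log objects immutable for a fixed period. The retention period is an example value, not a recommendation:

```python
# Illustrative Object Lock configuration: compliance mode prevents any
# user, including the root user, from overwriting or deleting protected
# object versions until the retention period expires. The 365-day value
# is a placeholder; choose a period that matches your retention policy.
object_lock_configuration = {
    "ObjectLockEnabled": "Enabled",
    "Rule": {
        "DefaultRetention": {
            "Mode": "COMPLIANCE",
            "Days": 365,
        }
    },
}
```

Note that Object Lock must be enabled when the bucket is created (or via AWS Support for existing buckets), and compliance-mode retention cannot be shortened after objects are locked.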
In addition to protecting the S3 bucket itself, you can adhere to the principle
of least privilege for the logging services (for example, CloudTrail) and the
Log Archive account. For example, users with permissions granted by the
AWS managed IAM policy AWSCloudTrail_FullAccess can disable or
reconfigure the most sensitive and important auditing functions in their AWS
accounts. Limit the application of this IAM policy to as few individuals as
possible.
Use detective controls, such as those delivered by AWS Config and AWS IAM
Access Analyzer, to monitor (and alert and remediate) this broader collective
of preventive controls for unexpected changes.
Implementation example
The AWS SRA code library provides a sample implementation of Amazon S3
block account public access. This module blocks Amazon S3 public access for
all existing and future accounts in the AWS organization.
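The account-level setting that such a module applies can be sketched as the parameter shape of the S3 Control `PutPublicAccessBlock` API. The API call itself is omitted here because it requires credentials in each target account:

```python
# Sketch of the account-wide public access block settings the SRA
# module effectively applies; all four controls are enabled.
public_access_block_configuration = {
    "BlockPublicAcls": True,        # reject new public ACLs on buckets/objects
    "IgnorePublicAcls": True,       # ignore any existing public ACLs
    "BlockPublicPolicy": True,      # reject new public bucket policies
    "RestrictPublicBuckets": True,  # restrict access to buckets with public policies
}
```

In practice, these settings are passed to `s3control:PutPublicAccessBlock` for each account ID, typically from a pipeline running in the management or delegated administrator account.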
Infrastructure OU – Network
account
The Network account manages the gateway between your application and
the broader internet. It is important to protect that two-way interface. The
Network account isolates the networking services, configuration, and
operation from the individual application workloads, security, and other
infrastructure. This arrangement not only limits connectivity, permissions,
and data flow, but also supports separation of duties and least privilege for
the teams that need to operate in these accounts. By splitting network flow
into separate inbound and outbound virtual private clouds (VPCs), you can
protect sensitive infrastructure and traffic from undesired access. The
inbound network is generally considered higher risk and deserves
appropriate routing, monitoring, and potential issue mitigations. These
infrastructure accounts will inherit permission guardrails from the Org
Management account and the Infrastructure OU. Networking (and security)
teams manage the majority of the infrastructure in this account.
Network architecture
Although network design and specifics are beyond the scope of this
document, we recommend these three options for network connectivity
between the various accounts: VPC peering, AWS PrivateLink, and AWS
Transit Gateway. Important considerations in choosing among these are
operational norms, budgets, and specific bandwidth needs.
VPC peering ‒ The simplest way to connect two VPCs is to use VPC
peering. A connection enables full bidirectional connectivity between
the VPCs. VPCs that are in separate accounts and AWS Regions can
also be peered together. At scale, when you have tens to hundreds of
VPCs, interconnecting them with peering results in a mesh of hundreds
to thousands of peering connections, which can be challenging to
manage and scale. VPC peering is best used when resources in one
VPC must communicate with resources in another VPC, the
environment of both VPCs is controlled and secured, and the number
of VPCs to be connected is fewer than 10 (to allow for the individual
management of each connection).
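To illustrate the mechanics, the sketch below builds the parameters for the EC2 `CreateVpcPeeringConnection` API when peering across accounts. The IDs are placeholders; the accepter account must still accept the request, and both sides need route table entries for traffic to flow:

```python
# Hedged sketch: request parameters for peering a requester VPC with
# an accepter VPC owned by another AWS account. All IDs shown in the
# test below are placeholders.
def peering_request_params(requester_vpc_id: str,
                           accepter_vpc_id: str,
                           accepter_account_id: str) -> dict:
    """Build the EC2 CreateVpcPeeringConnection parameter dict."""
    return {
        "VpcId": requester_vpc_id,        # VPC in the requesting account
        "PeerVpcId": accepter_vpc_id,     # VPC in the accepter account
        "PeerOwnerId": accepter_account_id,
    }
```

These parameters would be passed to `ec2.create_vpc_peering_connection(**params)` with boto3; the accepter then calls `accept_vpc_peering_connection`, after which routes (and security group rules) must be added on both sides.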
AWS PrivateLink ‒ PrivateLink provides private connectivity between
VPCs, services, and applications. You can create your own application
in your VPC and configure it as a PrivateLink-powered service (referred
to as an endpoint service). Other AWS principals can create a
connection from their VPC to your endpoint service by using
an interface VPC endpoint or a Gateway Load Balancer endpoint,
depending on the type of service. When you use PrivateLink, service
traffic doesn’t pass across a publicly routable network. Use PrivateLink
when you have a client-server setup where you want to give one or
more consumer VPCs unidirectional access to a specific service or set
of instances in the service provider VPC. This is also a good option
when clients and servers in the two VPCs have overlapping IP
addresses, because PrivateLink uses elastic network interfaces within
the client VPC so that there are no IP conflicts with the service
provider.
AWS Transit Gateway ‒ Transit Gateway provides a hub-and-spoke
design for connecting VPCs and on-premises networks as a fully
managed service without requiring you to provision virtual appliances.
AWS manages high availability and scalability. A transit gateway is a
regional resource and can connect thousands of VPCs within the same
AWS Region. You can attach your hybrid connectivity (VPN and AWS
Direct Connect connections) to a single transit gateway, thereby
consolidating and controlling your AWS organization's entire routing
configuration in one place. A transit gateway solves the complexity
involved with creating and managing multiple VPC peering connections
at scale. It is the default for most network architectures, but specific
needs around cost, bandwidth, and latency might make VPC peering a
better fit for your needs.
Inspection VPC
A dedicated inspection VPC provides a simplified and central approach for
managing inspections between VPCs (in the same or in different AWS
Regions), the internet, and on-premises networks. For the AWS SRA, ensure
that all traffic between VPCs passes through the inspection VPC, and avoid
using the inspection VPC for any other workload.
You deploy AWS Network Firewall on a per-Availability Zone basis in your VPC. For each
Availability Zone, you choose a subnet to host the firewall endpoint that
filters your traffic. The firewall endpoint in an Availability Zone can protect all
the subnets inside the zone except for the subnet where it’s located.
Depending on the use case and deployment model, the firewall subnet could
be either public or private. The firewall is completely transparent to the
traffic flow and does not perform network address translation (NAT). It
preserves the source and destination address. In this reference architecture,
the firewall endpoints are hosted in an inspection VPC. All traffic from the
inbound VPC and to the outbound VPC is routed through this firewall subnet
for inspection.
Network Firewall makes firewall activity visible in real time through Amazon
CloudWatch metrics, and offers increased visibility of network traffic by
sending logs to Amazon Simple Storage Service (Amazon S3), CloudWatch,
and Amazon Kinesis Data Firehose. Network Firewall is interoperable with
your existing security approach, including technologies from AWS Partners.
You can also import existing Suricata rulesets, which might have been
written internally or sourced externally from third-party vendors or open-
source platforms.
In the AWS SRA, Network Firewall is used within the Network account
because the network control-focused functionality of the service aligns with
the intent of the account.
The Network account defines the critical network infrastructure that controls
the traffic in and out of your AWS environment. This traffic needs to be
tightly monitored. In the AWS SRA, Network Access Analyzer is used within
the Network account to help identify unintended network access, identify
internet-accessible resources through internet gateways, and verify that
appropriate network controls such as network firewalls and NAT gateways
are present on all network paths between resources and internet gateways.
Design consideration
Network Access Analyzer is a feature of Amazon VPC, and it can be
used in any AWS account that has a VPC. Network administrators can
get tightly scoped, cross-account IAM roles to validate that approved
network paths are enforced within each AWS account.
AWS RAM
AWS Resource Access Manager (AWS RAM) helps you securely share the
AWS resources that you create in one AWS account with other AWS
accounts. AWS RAM provides a central place to manage the sharing of
resources and to standardize this experience across accounts. This makes it
simpler to manage resources while preserving the administrative and billing
isolation, and the reduced scope of impact, that a multi-account strategy
provides. If your account is managed by AWS
Organizations, AWS RAM lets you share resources with all accounts in the
organization, or only with the accounts within one or more specified
organizational units (OUs). You can also share with specific AWS accounts by
account ID, regardless of whether the account is part of an organization. You
can also share some supported resource types with specified IAM roles and
users.
AWS RAM enables you to share resources that do not support IAM resource-
based policies, such as VPC subnets and Route 53 Resolver rules. Furthermore, with
AWS RAM, the owners of a resource can see which principals have access to
individual resources that they have shared. IAM entities can retrieve the list
of resources shared with them directly, which they can’t do with resources
shared by IAM resource policies. If AWS RAM is used to share resources
outside your AWS organization, an invitation process is initiated. The
recipient must accept the invitation before access to the resources is
granted. This provides additional checks and balances.
AWS RAM is invoked and managed by the resource owner, in the account
where the shared resource is deployed. One common use case for AWS RAM
illustrated in the AWS SRA is for network administrators to share VPC subnets
and transit gateways with the entire AWS organization. This provides the
ability to decouple AWS account and network management functions and
helps achieve separation of duties. For more information about VPC sharing,
see the AWS blog post VPC sharing: A new approach to multiple accounts
and VPC management and the AWS network infrastructure whitepaper.
Design consideration
Although AWS RAM as a service is deployed only within the Network
account in the AWS SRA, it would typically be deployed in more than
one account. For example, you can centralize your data lake
management to a single data lake account, and then share the AWS
Lake Formation data catalog resources (databases and tables) with
other accounts in your AWS organization. For more information, see
the AWS Lake Formation documentation and the AWS blog
post Securely share your data across AWS accounts using AWS Lake
Formation. Additionally, security administrators can use AWS RAM to
follow best practices when they build an AWS Private CA hierarchy. CAs
can be shared with external third parties, who can issue certificates
without having access to the CA hierarchy. This allows the originating
organization to limit and revoke third-party access.
Edge security
Edge security generally entails three types of protections: secure content
delivery, network and application-layer protection, and distributed denial of
service (DDoS) mitigation. Content such as data, videos, applications, and
APIs has to be delivered quickly and securely, using the recommended
version of TLS to encrypt communications between endpoints. The content
should also have access restrictions through signed URLs, signed cookies,
and token authentication. Application-level security should be designed to
control bot traffic, block common attack patterns such as SQL injection or
cross-site scripting (XSS), and provide web traffic visibility. At the edge,
DDoS mitigation provides an important defense layer that ensures continued
availability of mission-critical business operations and services. Applications
and APIs should be protected from SYN floods, UDP floods, or other reflection
attacks, and have inline mitigation to stop basic network-layer attacks.
AWS offers several services to help provide a secure environment, from the
core cloud to the edge of the AWS network. Amazon CloudFront, AWS
Certificate Manager (ACM), AWS Shield, AWS WAF, and Amazon Route 53
work together to help create a flexible, layered security perimeter. With
Amazon CloudFront, content, APIs, or applications can be delivered over
HTTPS by using TLSv1.3 to encrypt and secure communication between
viewer clients and CloudFront. You can use ACM to create a custom SSL
certificate and deploy it to a CloudFront distribution for free. ACM
automatically handles certificate renewal. AWS Shield is a managed DDoS
protection service that helps safeguard applications that run on AWS. It
provides dynamic detection and automatic inline mitigations that minimize
application downtime and latency. AWS WAF lets you create rules to filter
web traffic based on specific conditions (IP addresses, HTTP headers and
body, or custom URIs), common web attacks, and pervasive bots. Route 53 is
a highly available and scalable DNS web service. Route 53 connects user
requests to internet applications that run on AWS or on premises. The AWS
SRA adopts a centralized network ingress architecture by using AWS Transit
Gateway, hosted within the Network account, so the edge security
infrastructure is also centralized in this account.
Amazon CloudFront
Amazon CloudFront is a secure content delivery network (CDN) that provides
inherent protection against common network layer and transport DDoS
attempts. You can deliver your content, APIs, or applications by using TLS
certificates, and advanced TLS features are enabled automatically. You can
use ACM to create a custom TLS certificate and enforce HTTPS
communications between viewers and CloudFront, as described later in
the ACM section. You can additionally require that the communications
between CloudFront and your custom origin implement end-to-end
encryption in transit. For this scenario, you must install a TLS certificate on
your origin server. If your origin is an elastic load balancer, you can use a
certificate that is generated by ACM or a certificate that is validated by a
third-party certificate authority (CA) and imported into ACM. If S3 bucket
website endpoints serve as the origin for CloudFront, you can’t configure
CloudFront to use HTTPS with your origin, because Amazon S3 doesn’t
support HTTPS for website endpoints. (However, you can still require HTTPS
between viewers and CloudFront.) For all other origins that support installing
HTTPS certificates, you must use a certificate that is signed by a trusted
third-party CA.
Design considerations
Alternatively, you can deploy CloudFront as part of the application in
the Application account. In this scenario, the application team makes
decisions such as how the CloudFront distributions are deployed,
determines the appropriate cache policies, and takes responsibility for
governance, auditing, and monitoring of the CloudFront distributions.
By spreading CloudFront distributions across multiple accounts, you
can benefit from additional service quotas. As another benefit, you can
use CloudFront’s inherent and automated origin access identity (OAI)
and origin access control (OAC) configuration to restrict access to
Amazon S3 origins.
When you deliver web content through a CDN such as CloudFront, you
have to prevent viewers from bypassing the CDN and accessing your
origin content directly. To achieve this origin access restriction, you
can use CloudFront and AWS WAF to add custom headers and verify
the headers before you forward requests to your custom origin. For a
detailed explanation of this solution, see the AWS security blog
post How to enhance Amazon CloudFront origin security with AWS WAF
and AWS Secrets Manager. An alternative method is to allow only the
CloudFront managed prefix list in the security group that's associated with
the Application Load Balancer. This helps ensure that only CloudFront
distributions can access the load balancer.
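The custom-header pattern can be sketched as an AWS WAF (WAFv2) rule attached to the origin's web ACL that blocks any request missing the header CloudFront injects. The header name and secret value below are placeholders and should be stored and rotated through something like AWS Secrets Manager:

```python
import json

# Illustrative WAFv2 rule for the origin-side web ACL: block requests
# that do NOT carry the exact secret header added by CloudFront.
# "x-origin-verify" and "example-secret-value" are placeholders.
verify_origin_header_rule = {
    "Name": "BlockRequestsWithoutCloudFrontHeader",
    "Priority": 0,
    "Action": {"Block": {}},
    "Statement": {
        "NotStatement": {
            "Statement": {
                "ByteMatchStatement": {
                    "FieldToMatch": {"SingleHeader": {"Name": "x-origin-verify"}},
                    "PositionalConstraint": "EXACTLY",
                    "SearchString": "example-secret-value",
                    "TextTransformations": [{"Priority": 0, "Type": "NONE"}],
                }
            }
        }
    },
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "OriginVerifyHeader",
    },
}

print(json.dumps(verify_origin_header_rule, indent=2))
```

Rotating the secret value periodically (and accepting both old and new values during rotation) is the main operational concern with this pattern, which is what the referenced blog post automates.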
AWS WAF
AWS WAF is a web application firewall that helps protect your web
applications from web exploits such as common vulnerabilities and bots that
could affect application availability, compromise security, or consume
excessive resources. It can be integrated with an Amazon CloudFront
distribution, an Amazon API Gateway REST API, an Application Load Balancer,
an AWS AppSync GraphQL API, an Amazon Cognito user pool, and the AWS
App Runner service.
AWS WAF uses web access control lists (ACLs) to protect a set of AWS
resources. A web ACL is a set of rules that defines the inspection criteria, and
an associated action to take (block, allow, count, or run bot control) if a web
request meets the criteria. AWS WAF provides a set of managed rules that
provides protection against common application vulnerabilities. These rules
are curated and managed by AWS and AWS Partners. AWS WAF also offers a
powerful rule language for authoring custom rules. You can use custom rules
to write inspection criteria that fit your particular needs. Examples include IP
restrictions, geographical restrictions, and customized versions of managed
rules that better fit your specific application behavior.
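As a small example of the custom rule language, the sketch below expresses a rate-based rule in the WAFv2 rule JSON shape. The limit is a placeholder value, not a recommendation:

```python
import json

# Illustrative custom rule: block source IPs that exceed a request
# rate threshold. The limit of 2000 requests per 5-minute window is
# a placeholder; tune it to your application's traffic profile.
rate_limit_rule = {
    "Name": "RateLimitPerIp",
    "Priority": 1,
    "Action": {"Block": {}},
    "Statement": {
        "RateBasedStatement": {
            "Limit": 2000,
            "AggregateKeyType": "IP",
        }
    },
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "RateLimitPerIp",
    },
}

print(json.dumps(rate_limit_rule, indent=2))
```

Rules like this are combined with AWS managed rule groups in a single web ACL, with priorities controlling evaluation order.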
AWS WAF provides a set of intelligent-tier managed rules for common and
targeted bots and account takeover protection (ATP). You are charged a
subscription fee and a traffic inspection fee when you use the bot control and
ATP rule groups. Therefore, we recommend that you monitor your traffic first
and then decide what to use. You can use the bot management and account
takeover dashboards that are available for free on the AWS WAF console to
monitor these activities and then decide whether you need an intelligent tier
AWS WAF rule group.
In the AWS SRA, AWS WAF is integrated with CloudFront in the Network
account. In this configuration, WAF rule processing happens at the edge
locations instead of within the VPC. This enables filtering of malicious traffic
closer to the end user who requested the content, and helps restrict
malicious traffic from entering your core network.
You can send full AWS WAF logs to an S3 bucket in the Log Archive account
by configuring cross-account access to the S3 bucket. For more information,
see the AWS re:Post article on this topic.
Design considerations
As an alternative to deploying AWS WAF centrally in the Network
account, some use cases are better met by deploying AWS WAF in the
Application account. For example, you might choose this option when
you deploy your CloudFront distributions in your Application account or
have public-facing Application Load Balancers, or if you’re using
Amazon API Gateway in front of your web applications. If you decide to
deploy AWS WAF in each Application account, use AWS Firewall
Manager to manage the AWS WAF rules in these accounts from the
centralized Security Tooling account.
You can also add general AWS WAF rules at the CloudFront layer and
additional application-specific AWS WAF rules at a Regional resource
such as the Application Load Balancer or the API gateway.
AWS Shield
AWS Shield is a managed DDoS protection service that safeguards
applications that run on AWS. There are two tiers of Shield: Shield Standard
and Shield Advanced. Shield Standard provides all AWS customers with
protection against the most common infrastructure (layers 3 and 4) events at
no additional charge. Shield Advanced provides more sophisticated
automatic mitigations for unauthorized events that target applications on
protected Amazon Elastic Compute Cloud (Amazon EC2), Elastic Load
Balancing (ELB), Amazon CloudFront, AWS Global Accelerator, and Route 53
hosted zones. If you own high-visibility websites or are prone to frequent
DDoS attacks, you can consider the additional features that Shield Advanced
provides.
You can use the Shield Advanced automatic application layer DDoS
mitigation feature to configure Shield Advanced to respond automatically to
mitigate application layer (layer 7) attacks against your protected CloudFront
distributions and Application Load Balancers. When you enable this feature,
Shield Advanced automatically generates custom AWS WAF rules to mitigate
DDoS attacks. Shield Advanced also gives you access to the AWS Shield
Response Team (SRT). You can contact SRT at any time to create and
manage custom mitigations for your application or during an active DDoS
attack. If you want SRT to proactively monitor your protected resources and
contact you during a DDoS attempt, consider enabling the proactive
engagement feature.
Design considerations
If you have any workloads that are fronted by internet-facing
resources in the Application account, such as Amazon CloudFront, an
Application Load Balancer, or a Network Load Balancer, configure
Shield Advanced in the Application account and add those resources
to Shield protection. You can use AWS Firewall Manager to configure
these options at scale.
If you have multiple resources in the data flow, such as a CloudFront
distribution in front of an Application Load Balancer, only use the entry-
point resource as the protected resource. This will ensure that you are
not paying Shield Data Transfer Out (DTO) fees twice for two
resources.
Shield Advanced records metrics that you can monitor in Amazon
CloudWatch. (For more information, see AWS Shield Advanced metrics
and alarms in the AWS documentation.) Set up CloudWatch alarms to
receive SNS notifications to your security center when a DDoS event is
detected. In a suspected DDoS event, contact the AWS Enterprise
Support team by filing a support ticket and assigning it the highest
priority. The Enterprise Support team will include the Shield Response
Team (SRT) when handling the event. In addition, you can preconfigure
the AWS Shield engagement Lambda function to create a support
ticket and send an email to the SRT.
Amazon Route 53
Amazon Route 53 is a highly available and scalable DNS web service. You
can use Route 53 to perform three main functions: domain registration, DNS
routing, and health checking.
You can use Route 53 as a DNS service to map domain names to your EC2
instances, S3 buckets, CloudFront distributions, and other AWS resources.
The distributed nature of the AWS DNS servers helps ensure that your end
users are routed to your application consistently. Features such as Route 53
traffic flow and routing control help you improve reliability. If your primary
application endpoint becomes unavailable, you can configure your failover to
reroute your users to an alternate location. Route 53 Resolver provides
recursive DNS for your VPC and on-premises networks over AWS Direct
Connect or AWS managed VPN.
By using the AWS Identity and Access Management (IAM) service with Route
53, you get fine-grained control over who can update your DNS data. You can
enable DNS Security Extensions (DNSSEC) signing to let DNS resolvers
validate that a DNS response came from Route 53 and has not been
tampered with.
Route 53 resolvers are created by default as part of every VPC. In the AWS
SRA, Route 53 is used in the Network account primarily for the DNS firewall
capability.
Design consideration
DNS Firewall and AWS Network Firewall both offer domain name
filtering, but for different types of traffic. You can use DNS Firewall and
Network Firewall together to configure domain-based filtering for
application-layer traffic over two different network paths.
o DNS Firewall provides filtering for outbound DNS queries that
pass through the Route 53 Resolver from applications within your
VPCs. You can also configure DNS Firewall to send custom
responses for queries to blocked domain names.
o Network Firewall provides filtering for both network-layer and
application-layer traffic, but does not have visibility into queries
made by Route 53 Resolver.
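The DNS Firewall behavior described above, including custom responses for blocked domains, can be sketched as Route 53 Resolver API parameters. The rule group and domain list IDs, names, and request IDs below are hypothetical; the calls themselves are commented out.

```python
# Sketch: parameters for a Route 53 Resolver DNS Firewall rule that blocks a
# domain list and returns a custom NXDOMAIN response. IDs are placeholders.
domain_list_params = {
    "CreatorRequestId": "sra-dns-firewall-list-1",
    "Name": "blocked-domains",
}

rule_params = {
    "CreatorRequestId": "sra-dns-firewall-rule-1",
    "FirewallRuleGroupId": "rslvr-frg-EXAMPLE",   # from create_firewall_rule_group
    "FirewallDomainListId": "rslvr-fdl-EXAMPLE",  # from create_firewall_domain_list
    "Priority": 100,
    "Action": "BLOCK",
    "BlockResponse": "NXDOMAIN",  # custom response sent for blocked queries
    "Name": "block-known-bad-domains",
}

# Applied with, for example:
#   import boto3
#   r53r = boto3.client("route53resolver")
#   r53r.create_firewall_domain_list(**domain_list_params)
#   r53r.create_firewall_rule(**rule_params)
```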
Infrastructure OU – Shared
Services account
The Shared Services account is part of the Infrastructure OU, and its purpose
is to support the services that multiple applications and teams use to deliver
their outcomes. For example, directory services (Active Directory),
messaging services, and metadata services are in this category. The AWS
SRA highlights the shared services that support security controls. Although
the Network account is also part of the Infrastructure OU, it is kept
separate from the Shared Services account to support the separation of
duties: the teams that manage these shared services don't need permissions
or access to the Network account.
AWS Managed Microsoft AD helps you extend your existing Active Directory
to AWS and use your existing on-premises user credentials to access cloud
resources. You can also administer your on-premises users, groups,
applications, and systems without the complexity of running and maintaining
an on-premises, highly available Active Directory. You can join your existing
computers, laptops, and printers to an AWS Managed Microsoft AD domain.
In the AWS SRA, AWS Directory Service is used within the Shared Services
account to provide domain services for Microsoft-aware workloads across
multiple AWS member accounts.
Design consideration
You can grant your on-premises Active Directory users access to sign
in to the AWS Management Console and AWS Command Line Interface
(AWS CLI) with their existing Active Directory credentials by using IAM
Identity Center and selecting AWS Managed Microsoft AD as the
identity source. This enables your users to assume one of their
assigned roles at sign-in, and to access and take action on the
resources according to the permissions defined for the role. An
alternative option is to use AWS Managed Microsoft AD to enable your
users to assume an AWS Identity and Access Management (IAM) role.
The primary reason for using the Shared Services account as the delegated
administrator for IAM Identity Center is the Active Directory location. If you
plan to use Active Directory as your IAM Identity Center identity source, you
will need to locate the directory in the member account that you have
designated as your IAM Identity Center delegated administrator account. In
the AWS SRA, the Shared Services account hosts AWS Managed Microsoft
AD, so that account is made the delegated administrator for IAM Identity
Center.
Design considerations
• If you decide to change the IAM Identity Center identity source from
any other source to Active Directory, or change it from Active Directory
to any other source, the directory must reside in (be owned by) the
IAM Identity Center delegated administrator member account, if one
exists; otherwise, it must be in the management account.
• You can host your AWS Managed Microsoft AD within a dedicated VPC
in a different account and then use AWS Resource Access Manager
(AWS RAM) to share subnets from this other account to the delegated
administrator account. That way, the AWS Managed Microsoft AD
instance is controlled in the delegated administrator account, but from
the network perspective it acts as if it is deployed in the VPC of
another account. This is helpful when you have multiple AWS Managed
Microsoft AD instances and you want to deploy them locally to where
your workload is running but manage them centrally through one
account.
• If you have a dedicated identity team that performs regular identity
and access management activities or have strict security requirements
to separate identity management functions from other shared services
functions, you can host a dedicated AWS account for identity
management. In this scenario, you designate this account as your
delegated administrator for IAM Identity Center, and it also hosts your
AWS Managed Microsoft AD directory. Alternatively, you can achieve a
degree of logical isolation between your identity management workloads
and other shared services workloads by using fine-grained IAM permissions
within a single Shared Services account.
• IAM Identity Center currently doesn't provide multi-Region support. (To
enable IAM Identity Center in a different Region, you must first delete
your current IAM Identity Center configuration.) Furthermore, it doesn’t
support the use of different identity sources for different sets of
accounts or let you delegate permissions management to different
parts of your organization (that is, multiple delegated administrators)
or to different groups of administrators. If you require any of these
features, you can use IAM federation to manage your user identities
within an identity provider (IdP) outside of AWS and give these external
user identities permission to use AWS resources in your account. IAM
supports IdPs that are compatible with OpenID Connect (OIDC) or SAML
2.0. As a best practice, use SAML 2.0 federation with third-party
identity providers such as Active Directory Federation Service (AD FS),
Okta, Azure Active Directory (Azure AD), or Ping Identity to provide
single sign-on capability for users to log into the AWS Management
Console or to call AWS API operations. For more information about IAM
federation and identity providers, see About SAML 2.0-based
federation in the IAM documentation and the AWS Identity Federation
workshops.
Workloads OU – Application
account
The Application account hosts the primary infrastructure and services to run
and maintain an enterprise application. The Application account and
Workloads OU serve a few primary security objectives. First, you create a
separate account for each application to provide boundaries and controls
between workloads so that you can avoid issues of comingling roles,
permissions, data, and encryption keys. You want to provide a separate
account container where the application team can be given broad rights to
manage their own infrastructure without affecting others. Next, you add a
layer of protection by providing a mechanism for the security operations
team to monitor and collect security data. Employ an organization trail and
local deployments of account security services (Amazon GuardDuty, AWS
Config, AWS Security Hub, Amazon EventBridge, AWS IAM Access Analyzer),
which are configured and monitored by the security team. Finally, you
enable your enterprise to set controls centrally. You align the application
account to the broader security structure by making it a member of the
Workloads OU through which it inherits appropriate service permissions,
constraints, and guardrails.
Design consideration
In your organization you are likely to have more than one business
application. The Workloads OU is intended to house most of your
business-specific workloads, including both production and non-
production environments. These workloads can be a mix of commercial
off-the-shelf (COTS) applications and your own internally developed
custom applications and data services. There are a few patterns for
organizing different business applications along with their development
environments. One pattern is to have multiple child OUs based on your
development environment, such as production, staging, test, and
development, and to use separate child AWS accounts under those
OUs that pertain to different applications. Another common pattern is
to have separate child OUs per application and then use separate child
AWS accounts for individual development environments. The exact OU
and account structure depends on your application design and the
teams that manage those applications. Consider the security controls
that you want to enforce, whether they are environment-specific or
application-specific, because it is easier to implement those controls as
SCPs on OUs. For further considerations on organizing workload-
oriented OUs, see the Organizing workload-oriented OUs section of the
AWS whitepaper Organizing Your AWS Environment Using Multiple
Accounts.
Application VPC
The virtual private cloud (VPC) in the Application account needs both
inbound access (for the simple web services that you are modeling) and
outbound access (for application needs or AWS service needs). By default,
resources inside a VPC are routable to one another. There are two private
subnets: one to host the EC2 instances (application layer) and the other for
Amazon Aurora (database layer). Network segmentation between different
tiers, such as the application tier and database tier, is accomplished through
VPC security groups, which restrict traffic at the instance level. For resiliency,
the workload spans two or more Availability Zones and utilizes two subnets
per zone.
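The security-group segmentation between the application and database tiers can be sketched as `authorize_security_group_ingress` parameters. This is a minimal sketch: the security group IDs are hypothetical, and port 3306 assumes an Aurora MySQL-Compatible endpoint.

```python
# Sketch: a database-tier security group that accepts traffic only from the
# application-tier security group. Group IDs are placeholders.
APP_TIER_SG = "sg-0app0example"
DB_TIER_SG = "sg-0db0example"

db_ingress_params = {
    "GroupId": DB_TIER_SG,
    "IpPermissions": [
        {
            "IpProtocol": "tcp",
            "FromPort": 3306,  # Aurora MySQL-Compatible port
            "ToPort": 3306,
            # Reference the app-tier security group instead of a CIDR range,
            # so only instances in that group can reach the database tier.
            "UserIdGroupPairs": [{"GroupId": APP_TIER_SG}],
        }
    ],
}

# boto3.client("ec2").authorize_security_group_ingress(**db_ingress_params)
```

Referencing a security group rather than an IP range keeps the rule valid as application instances are replaced or scaled across Availability Zones.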
Design consideration
You can use Traffic Mirroring to copy network traffic from an elastic
network interface of EC2 instances. You can then send the traffic to
out-of-band security and monitoring appliances for content inspection,
threat monitoring, or troubleshooting. For example, you might want to
monitor the traffic that is leaving your VPC or the traffic whose source
is outside your VPC. In this case, you will mirror all traffic except for the
traffic passing within your VPC and send it to a single monitoring
appliance. Amazon VPC flow logs do not capture mirrored traffic; they
generally capture information from packet headers only. Traffic
Mirroring provides deeper insight into the network traffic by allowing
you to analyze actual traffic content, including payload. Enable Traffic
Mirroring only for the elastic network interface of EC2 instances that
might be operating as part of sensitive workloads or for which you
expect to need detailed diagnostics in the event of an issue.
VPC endpoints
VPC endpoints provide another layer of security control as well as scalability
and reliability. Use these to connect your application VPC to other AWS
services. (In the Application account, the AWS SRA employs VPC endpoints
for AWS KMS, AWS Systems Manager, and Amazon S3.) Endpoints are virtual
devices. They are horizontally scaled, redundant, and highly available VPC
components. They allow communication between instances in your VPC and
services without imposing availability risks or bandwidth constraints on your
network traffic. You can use a VPC endpoint to privately connect your VPC to
supported AWS services and VPC endpoint services powered by AWS
PrivateLink without requiring an internet gateway, NAT device, VPN
connection, or AWS Direct Connect connection. Instances in your VPC do not
require public IP addresses to communicate with other AWS services. Traffic
between your VPC and the other AWS service does not leave the Amazon
network.
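The endpoint setup described above can be sketched as `create_vpc_endpoint` parameters, one interface endpoint (AWS KMS) and one gateway endpoint (Amazon S3). The VPC, subnet, security group, and route table IDs are hypothetical, and the Region in the service names is illustrative.

```python
# Sketch: an interface endpoint for AWS KMS and a gateway endpoint for
# Amazon S3 in the application VPC. All resource IDs are placeholders.
kms_endpoint_params = {
    "VpcEndpointType": "Interface",
    "VpcId": "vpc-0example",
    "ServiceName": "com.amazonaws.us-east-1.kms",
    "SubnetIds": ["subnet-0appA", "subnet-0appB"],
    "SecurityGroupIds": ["sg-0endpoint"],
    "PrivateDnsEnabled": True,  # resolve the standard KMS DNS name to the endpoint
}

s3_endpoint_params = {
    "VpcEndpointType": "Gateway",
    "VpcId": "vpc-0example",
    "ServiceName": "com.amazonaws.us-east-1.s3",
    "RouteTableIds": ["rtb-0private"],  # gateway endpoints attach via route tables
}

# ec2 = boto3.client("ec2")
# ec2.create_vpc_endpoint(**kms_endpoint_params)
# ec2.create_vpc_endpoint(**s3_endpoint_params)
```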
Amazon EC2
The Amazon EC2 instances that compose our application make use of
version 2 of the Instance Metadata Service (IMDSv2). IMDSv2 adds
protections for four types of vulnerabilities that could be used to try to
access the IMDS: open website application firewalls, open reverse proxies,
server-side request forgery (SSRF) vulnerabilities, and open layer 3
firewalls and NATs.
For more information, see the blog post Add defense in depth against open
firewalls, reverse proxies, and SSRF vulnerabilities with enhancements to the
EC2 Instance Metadata Service.
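Requiring IMDSv2 on an existing instance can be sketched as `modify_instance_metadata_options` parameters. The instance ID is a hypothetical placeholder, and the call itself is commented out.

```python
# Sketch: enforce IMDSv2 by requiring session tokens on an instance.
# The instance ID is a placeholder.
imds_params = {
    "InstanceId": "i-0example",
    "HttpTokens": "required",       # reject tokenless IMDSv1 requests
    "HttpEndpoint": "enabled",      # keep the metadata service reachable
    "HttpPutResponseHopLimit": 1,   # keep tokens from crossing a NAT or proxy
}

# boto3.client("ec2").modify_instance_metadata_options(**imds_params)
```

Setting the hop limit to 1 is what defeats the open-proxy and NAT scenarios: the PUT response that carries the session token cannot travel beyond the instance itself.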
Implementation example
The AWS SRA code library provides a sample implementation of default
Amazon EBS encryption in Amazon EC2. It demonstrates how you can enable
the account-level default Amazon EBS encryption within each AWS account
and AWS Region in the AWS organization.
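A per-Region helper in the spirit of that sample can be sketched as follows. The function name is hypothetical; the three EC2 API calls it wraps are real, and the test below exercises it with a stub client rather than a live account.

```python
# Sketch: enable account-level default EBS encryption in one Region, optionally
# pinning the default to a customer managed KMS key. An orchestration layer
# (as in the SRA code library) would call this once per account and Region.
def enable_default_ebs_encryption(ec2_client, kms_key_arn=None):
    """Turn on default EBS encryption for the client's Region.

    ec2_client  -- a boto3 EC2 client (or compatible stub)
    kms_key_arn -- optional customer managed key to use as the default
    Returns the resulting EbsEncryptionByDefault flag.
    """
    ec2_client.enable_ebs_encryption_by_default()
    if kms_key_arn:
        ec2_client.modify_ebs_default_kms_key_id(KmsKeyId=kms_key_arn)
    return ec2_client.get_ebs_encryption_by_default()["EbsEncryptionByDefault"]
```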
Design considerations
• For common scenarios such as strictly internal applications that require
a private TLS certificate on the Application Load Balancer, you can use
ACM within this account to generate a private certificate from AWS
Private CA. In the AWS SRA, the ACM root Private CA is hosted in the
Security Tooling account and can be shared with the whole AWS
organization or with specific AWS accounts to issue end-entity
certificates, as described earlier in the Security Tooling
account section.
• For public certificates, you can use ACM to generate those certificates
and manage them, including automated rotation. Alternatively, you
can generate your own certificates by using SSL/TLS tools to create a
certificate signing request (CSR), get the CSR signed by a certificate
authority (CA) to produce a certificate, and then import the certificate
into ACM or upload the certificate to IAM for use with the Application
Load Balancer. If you import a certificate into ACM, you must monitor
the expiration date of the certificate and renew it before it expires.
• For additional layers of defense, you can deploy AWS WAF policies to
protect the Application Load Balancer. Having edge policies,
application policies, and even private or internal policy enforcement
layers adds to the visibility of communication requests and provides
unified policy enforcement. For more information, see the blog
post Deploying defense in depth using AWS Managed Rules for AWS
WAF.
AWS Private CA
AWS Private Certificate Authority (AWS Private CA) is used in the Application
account to generate private certificates to be used with an Application Load
Balancer. It is a common scenario for Application Load Balancers to serve
secure content over TLS. This requires TLS certificates to be installed on the
Application Load Balancer. For applications that are strictly internal, private
TLS certificates can provide the secure channel.
In the AWS SRA, AWS Private CA is hosted in the Security Tooling account
and is shared out to the Application account by using AWS RAM. This allows
developers in an Application account to request a certificate from a shared
private CA. Sharing CAs across your organization or across AWS accounts
helps reduce the cost and complexity of creating and managing duplicate
CAs in all your AWS accounts. When you use ACM to issue private certificates
from a shared CA, the certificate is generated locally in the requesting
account, and ACM provides full lifecycle management and renewal.
Amazon Inspector
The AWS SRA uses Amazon Inspector to automatically discover and scan EC2
instances and container images that reside in the Amazon Elastic Container
Registry (Amazon ECR) for software vulnerabilities and unintended network
exposure.
Design consideration
You can use Patch Manager, a capability of AWS Systems Manager, to
trigger on-demand patching to remediate Amazon Inspector zero-day
or other critical security vulnerabilities. Patch Manager helps you patch
those vulnerabilities without having to wait for your normal patching
schedule. The remediation is carried out by using the Systems
Manager Automation runbook. For more information, see the two-part
blog series Automate vulnerability management and remediation in
AWS using Amazon Inspector and AWS Systems Manager.
Amazon Aurora
In the AWS SRA, Amazon Aurora and Amazon S3 make up the logical data
tier. Aurora is a fully managed relational database engine that's compatible
with MySQL and PostgreSQL. An application that is running on the EC2
instances communicates with Aurora and Amazon S3 as needed. Aurora is
configured with a database cluster inside a DB subnet group.
Design consideration
As in many database services, security for Aurora is managed at three
levels. To control who can perform Amazon Relational Database
Service (Amazon RDS) management actions on Aurora DB clusters and
DB instances, you use IAM. To control which devices and EC2 instances
can open connections to the cluster endpoint and port of the DB
instance for Aurora DB clusters in a VPC, you use a VPC security group.
To authenticate logins and permissions for an Aurora DB cluster, you
can take the same approach as with a stand-alone DB instance of
MySQL or PostgreSQL, or you can use IAM database authentication for
Aurora MySQL-Compatible Edition. With this latter approach, you
authenticate to your Aurora MySQL-Compatible DB cluster by using an
IAM role and an authentication token.
Amazon S3
Amazon S3 is an object storage service that offers industry-leading
scalability, data availability, security, and performance. It is the data
backbone of many applications built on AWS, and appropriate permissions
and security controls are critical for protecting sensitive data. For
recommended security best practices for Amazon S3, see
the documentation, online tech talks, and deeper dives in blog posts. The
most important best practice is to block overly permissive access (especially
public access) to S3 buckets.
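That best practice can be sketched as `put_public_access_block` parameters for a bucket. The bucket name is a hypothetical placeholder; together, the four flags block both existing and future public ACLs and bucket policies.

```python
# Sketch: block all forms of public access on an application bucket.
# The bucket name is a placeholder.
public_access_params = {
    "Bucket": "example-application-data",
    "PublicAccessBlockConfiguration": {
        "BlockPublicAcls": True,        # reject new public ACLs
        "IgnorePublicAcls": True,       # neutralize any existing public ACLs
        "BlockPublicPolicy": True,      # reject new public bucket policies
        "RestrictPublicBuckets": True,  # limit access under any public policy
    },
}

# boto3.client("s3").put_public_access_block(**public_access_params)
```

The same configuration can also be applied account-wide through the S3 Control API, which is the stronger guardrail when no bucket in the account should ever be public.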
AWS KMS
The AWS SRA illustrates the recommended distribution model for key
management, where the KMS key resides within the same AWS account as
the resource to be encrypted. For this reason, AWS KMS is used in the
Application account in addition to being included in the Security Tooling
account. In the Application account, AWS KMS is used to manage keys that
are specific to the application resources. You can implement a separation of
duties by using key policies to grant key usage permissions to local
application roles and to restrict management and monitoring permissions to
your key custodians.
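The separation of duties described above can be sketched as a key policy. This is a minimal sketch: the account ID and the two role names are hypothetical, and a real policy typically also needs the standard root-account statement (included here) to avoid locking the key out of management.

```python
import json

# Sketch: a KMS key policy that grants usage to a local application role and
# restricts management to a key-custodian role. Account ID and role names
# are placeholders.
ACCOUNT = "111122223333"

key_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Standard guard: keep the account root able to manage the key.
            "Sid": "EnableRootAccountAccess",
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{ACCOUNT}:root"},
            "Action": "kms:*",
            "Resource": "*",
        },
        {   # The application role may use the key, but not manage it.
            "Sid": "AllowUseOfTheKey",
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{ACCOUNT}:role/app-runtime-role"},
            "Action": ["kms:Encrypt", "kms:Decrypt", "kms:GenerateDataKey*"],
            "Resource": "*",
        },
        {   # Key custodians manage and monitor, but do not use, the key.
            "Sid": "AllowKeyAdministration",
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{ACCOUNT}:role/key-custodian-role"},
            "Action": ["kms:Create*", "kms:Describe*", "kms:Enable*", "kms:Put*",
                       "kms:Disable*", "kms:ScheduleKeyDeletion",
                       "kms:CancelKeyDeletion"],
            "Resource": "*",
        },
    ],
}

# boto3.client("kms").create_key(Policy=json.dumps(key_policy))
```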
Design consideration
In a distributed model, the AWS KMS key management responsibility
resides with the application team. However, your central security team
can be responsible for the governance and monitoring of important
cryptographic events such as the following:
o The imported key material in a KMS key is nearing its expiration
date.
o The key material in a KMS key was automatically rotated.
o A KMS key was deleted.
o There is a high rate of decryption failure.
AWS CloudHSM
AWS CloudHSM provides managed hardware security modules (HSMs) in the
AWS Cloud. It enables you to generate and use your own encryption keys on
AWS by using FIPS 140-2 level 3 validated HSMs that you control access to.
You can use CloudHSM to offload SSL/TLS processing for your web servers.
This reduces the burden on the web server and provides extra security by
storing the web server's private key in CloudHSM. You could similarly deploy
an HSM from CloudHSM in the inbound VPC in the Network account to store
your private keys and sign certificate requests if you need to act as an
issuing certificate authority.
Design consideration
If you have a hard requirement for FIPS 140-2 level 3, you can also
choose to configure AWS KMS to use the CloudHSM cluster as a custom
key store rather than using the native KMS key store. By doing this,
you benefit from the integration between AWS KMS and AWS services
that encrypt your data, while being responsible for the HSMs that
protect your KMS keys. This combines single-tenant HSMs under your
control with the ease of use and integration of AWS KMS. To manage
your CloudHSM infrastructure, you have to employ a public key
infrastructure (PKI) and have a team that has experience managing
HSMs.
AWS Secrets Manager
AWS Secrets Manager helps you protect the credentials (secrets) that you
need to access your applications, services, and IT resources. The service
enables you to efficiently rotate, manage, and retrieve database credentials,
API keys, and other secrets throughout their lifecycle. You can replace
hardcoded credentials in your code with an API call to Secrets Manager to
retrieve the secret programmatically. This helps ensure that the secret can't
be compromised by someone who is examining your code, because the
secret no longer exists in the code. Additionally, Secrets Manager helps you
move your applications between environments (development, pre-
production, production). Instead of changing the code, you can ensure that
an appropriately named and referenced secret is available in the
environment. This promotes the consistency and reusability of application
code across different environments, while requiring fewer changes and
human interactions after the code has been tested.
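The hardcoded-credential replacement described above can be sketched as a small helper. The secret name and its JSON shape are hypothetical assumptions; the test below uses a stub client instead of a live Secrets Manager call.

```python
import json

# Sketch: fetch database credentials from Secrets Manager at runtime instead
# of embedding them in code. The secret ID and its JSON fields are placeholders.
def get_db_credentials(secrets_client, secret_id="app/aurora/credentials"):
    """Return the parsed JSON secret; no credential ever lives in the code."""
    response = secrets_client.get_secret_value(SecretId=secret_id)
    return json.loads(response["SecretString"])

# With credentials in place:
#   creds = get_db_credentials(boto3.client("secretsmanager"))
#   connect(user=creds["username"], password=creds["password"], ...)
```

Because the secret ID, not the secret value, is what the code references, promoting the application between environments only requires that an appropriately named secret exist in each one.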
With Secrets Manager, you can manage access to secrets by using fine-
grained IAM policies and resource-based policies. You can help secure
secrets by encrypting them with encryption keys that you manage by using
AWS KMS. Secrets Manager also integrates with AWS logging and monitoring
services for centralized auditing.
Secrets Manager uses envelope encryption with AWS KMS keys and data
keys to protect each secret value. When you create a secret, you can choose
any symmetric customer managed key in the AWS account and Region, or
you can use the AWS managed key for Secrets Manager.
As a best practice, you can monitor your secrets to log any changes to them.
This helps you ensure that any unexpected usage or change can be
investigated. Unwanted changes can be rolled back. Secrets Manager
currently supports two AWS services that enable you to monitor your
organization and activity: AWS CloudTrail and AWS Config. CloudTrail
captures all API calls for Secrets Manager as events, including calls from the
Secrets Manager console and from code calls to the Secrets Manager APIs. In
addition, CloudTrail captures other related (non-API) events that might have
a security or compliance impact on your AWS account or might help you
troubleshoot operational problems. These include certain secrets rotation
events and deletion of secret versions. AWS Config can provide detective
controls by tracking and monitoring changes to secrets in Secrets Manager.
These changes include a secret’s description, rotation configuration, tags,
and relationship to other AWS sources such as the KMS encryption key or the
AWS Lambda functions used for secret rotation. You can also configure
Amazon EventBridge, which receives configuration and compliance change
notifications from AWS Config, to route particular secrets events for
notification or remediation actions.
Design consideration
In general, configure and manage Secrets Manager in the account that
is closest to where the secrets will be used. This approach takes
advantage of the local knowledge of the use case and provides speed
and flexibility to application development teams. For tightly controlled
information where an additional layer of control might be appropriate,
secrets can be centrally managed by Secrets Manager in the Security
Tooling account.
Amazon Cognito
Amazon Cognito lets you add user sign-up, sign-in, and access control to
your web and mobile apps quickly and efficiently. Amazon Cognito scales to
millions of users and supports sign-in with social identity providers, such as
Apple, Facebook, Google, and Amazon, and enterprise identity providers
through SAML 2.0 and OpenID Connect. The two main components of
Amazon Cognito are user pools and identity pools. User pools are user
directories that provide sign-up and sign-in options for your application
users. Identity pools enable you to grant your users access to other AWS
services. You can use identity pools and user pools separately or together.
For common usage scenarios, see the Amazon Cognito documentation.
Amazon Cognito provides a built-in and customizable UI for user sign-up and
sign-in. You can use Android, iOS, and JavaScript SDKs for Amazon Cognito to
add user sign-up and sign-in pages to your apps. Amazon Cognito Sync is an
AWS service and client library that enables cross-device syncing of
application-related user data.
Design considerations
• You can create an AWS Lambda function and then trigger that function
during user pool operations such as user sign-up, confirmation, and
sign-in (authentication) with an AWS Lambda trigger. You can add
authentication challenges, migrate users, and customize verification
messages. For common operations and user flow, see the Amazon
Cognito documentation. Amazon Cognito calls Lambda functions
synchronously.
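A pre sign-up trigger of the kind described above can be sketched as a Lambda handler. The allowed-domain check is purely illustrative; the event fields (`request.userAttributes`, `response.autoConfirmUser`) follow the Cognito pre sign-up trigger shape.

```python
# Sketch of a Cognito user pool pre sign-up Lambda trigger. Cognito invokes
# the function synchronously and expects the (possibly modified) event back.
ALLOWED_DOMAIN = "example.com"  # hypothetical corporate domain

def lambda_handler(event, context):
    email = event["request"]["userAttributes"].get("email", "")
    if not email.endswith("@" + ALLOWED_DOMAIN):
        # Raising an error rejects the sign-up; Cognito surfaces the message.
        raise Exception("Sign-up is restricted to corporate email addresses.")
    # Auto-confirm users from the allowed domain to skip manual confirmation.
    event["response"]["autoConfirmUser"] = True
    return event
```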
• You can use Amazon Cognito user pools to secure small, multi-tenant
applications. A common use case of multi-tenant design is to run
workloads to support testing multiple versions of an application. Multi-
tenant design is also useful for testing a single application with
different datasets, which allows full use of your cluster resources.
However, make sure that the number of tenants and expected volume
align with the related Amazon Cognito service quotas. These quotas
are shared across all tenants in your application.
Layered defense
The Application account provides an opportunity to illustrate the layered
defense principles that AWS enables. Consider the security of the EC2
instances that make up the core of the simple example application
represented in the AWS SRA, and you can see how AWS services work
together in a layered defense. This approach aligns to the structural view
of AWS security services,
as described in the section Apply security services across your AWS
organization earlier in this guide.