A company must deploy multiple independent instances of an application. The front-end application is internet accessible. However, corporate
policy stipulates that the backends are to be isolated from each other and the internet, yet accessible from a centralized administration server.
The application setup should be automated to minimize the opportunity for mistakes as new instances are deployed.
Which option meets the requirements and MINIMIZES costs?
A. Use an AWS CloudFormation template to create identical IAM roles for each region. Use AWS CloudFormation StackSets to deploy each
application instance by using parameters to customize for each instance, and use security groups to isolate each instance while permitting
access to the central server.
B. Create each instance of the application IAM roles and resources in separate accounts by using AWS CloudFormation StackSets. Include a
VPN connection to the VPN gateway of the central administration server.
C. Duplicate the application IAM roles and resources in separate accounts by using a single AWS CloudFormation template. Include VPC
peering to connect the VPC of each application instance to a central VPC.
D. Use the parameters of the AWS CloudFormation template to customize the deployment into separate accounts. Include a NAT gateway to
allow communication back to the central administration server.
Correct Answer: A
B - Can work correctly, however a VPN connection is paid for by the hour per connection plus egress data
D - Can't be used, as no IGW is allowed so a NAT gateway can't be created
So the answer is:
C - Can work if the administration instance is in AWS - peering is free, only data out is paid
If the question stated the admin server must be on premises, then B would be the answer regardless of the cost
upvoted 3 times
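For reference, a minimal boto3 sketch of how a StackSets deployment can be customized per application instance with parameter overrides; the stack set name, account ID, region, and parameter keys are hypothetical, not taken from the question.

import boto3

cfn = boto3.client('cloudformation')

# Deploy one stack instance into a target account/region, overriding parameters
# so each application instance gets its own settings (all names are examples).
cfn.create_stack_instances(
    StackSetName='app-instance-stackset',
    Accounts=['111111111111'],
    Regions=['us-east-1'],
    ParameterOverrides=[
        {'ParameterKey': 'InstanceName', 'ParameterValue': 'app-01'},
        {'ParameterKey': 'AdminServerSgId', 'ParameterValue': 'sg-0123456789abcdef0'},
    ],
)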
A group of Amazon EC2 instances have been configured as a high performance computing (HPC) cluster. The instances are running in a
placement group, and are able to communicate with each other at network speeds of up to 20 Gbps.
The cluster needs to communicate with a control EC2 instance outside of the placement group. The control instance has the same instance type
and AMI as the other instances, and is configured with a public IP address.
How can the Solutions Architect improve the network speeds between the control instance and the instances in the placement group?
B. Ensure that the instances are communicating using their private IP addresses.
Correct Answer: B
Reference:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html
https://acloud.guru/forums/aws-certified-solutions-architect-associate/discussion/-LBVlYuS1HKudeoD52ur/You%20can%20move%20an%20existing%20instance%20to%20a%20placement%20group
upvoted 6 times
Why NOT A:
“
Before you move or remove the instance, the instance must be in the stopped state.
“
Termination in AWS has a different result than stopping.
“
The key difference between stopping and terminating an instance is that the attached bootable EBS volume will not be deleted. The data on your
EBS volume will remain after stopping while all information on the local (ephemeral) hard drive will be lost as usual.
“
upvoted 3 times
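As the quote above notes, an instance must be stopped before it can be moved into a placement group. A hedged boto3 sketch of that flow; the instance ID and group name are placeholders.

import boto3

ec2 = boto3.client('ec2')
instance_id = 'i-0123456789abcdef0'  # placeholder

# Stop the instance, wait until it is fully stopped, move it into the
# placement group, then start it again.
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter('instance_stopped').wait(InstanceIds=[instance_id])
ec2.modify_instance_placement(InstanceId=instance_id, GroupName='hpc-cluster-pg')
ec2.start_instances(InstanceIds=[instance_id])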
A Solutions Architect has created an AWS CloudFormation template for a three-tier application that contains an Auto Scaling group of Amazon
EC2 instances running a custom AMI.
The Solutions Architect wants to ensure that future updates to the custom AMI can be deployed to a running stack by first updating the template
to refer to the new
AMI, and then invoking UpdateStack to replace the EC2 instances with instances launched from the new AMI.
How can updates to the AMI be deployed to meet these requirements?
A. Create a change set for a new version of the template, view the changes to the running EC2 instances to ensure that the AMI is correctly
updated, and then execute the change set.
B. Edit the AWS::AutoScaling::LaunchConfiguration resource in the template, changing its DeletionPolicy to Replace.
D. Create a new stack from the updated template. Once it is successfully deployed, modify the DNS records to point to the new stack and
delete the old stack.
Correct Answer: C
Reference:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-updatepolicy.html
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-as-launchconfig.html
Quoting
"If you want to update existing instances when you update the LaunchConfiguration resource, you must specify an UpdatePolicy attribute for the
Auto Scaling group. "
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-as-launchconfig.html
upvoted 28 times
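As an illustration only, here is how the UpdatePolicy attribute might sit on the Auto Scaling group resource, expressed as a Python dict that serializes to a CloudFormation template fragment; the logical IDs and values are examples, not the question's actual template.

import json

# UpdatePolicy lives on the AWS::AutoScaling::AutoScalingGroup resource, not on
# the launch configuration; a rolling update replaces instances when the
# LaunchConfiguration (and therefore the AMI) changes.
asg_fragment = {
    "WebServerGroup": {
        "Type": "AWS::AutoScaling::AutoScalingGroup",
        "Properties": {
            "LaunchConfigurationName": {"Ref": "LaunchConfig"},
            "MinSize": "2",
            "MaxSize": "4",
        },
        "UpdatePolicy": {
            "AutoScalingRollingUpdate": {
                "MinInstancesInService": 1,
                "MaxBatchSize": 1,
                "PauseTime": "PT5M",
            }
        },
    }
}

print(json.dumps(asg_fragment, indent=2))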
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-as-group.html
upvoted 2 times
You can add an UpdatePolicy attribute to your stack to perform rolling updates (or replace the group) when a change has been made to the
group.
upvoted 2 times
"When you use CloudFormation, you manage related resources as a single unit called a stack. You create, update, and delete a collection
of resources by creating, updating, and deleting stacks. All the resources in a stack are defined by the stack's CloudFormation template" -
to change a stack, you change its template
upvoted 1 times
" # viet1991 1 year ago
The AWS answer is C.
Without adding an UpdatePolicy attribute to the Auto Scaling group resource, executing the change set will only create a new
LaunchConfiguration and existing instances are not affected.
upvoted 2 times
"Resources": {
"LaunchConfig": {
"Type": "AWS::AutoScaling::LaunchConfiguration",
"Properties": {
"KeyName": {
"Ref": "KeyName"
},
"ImageId": {
"Fn::FindInMap": [
"AWSRegionArch2AMI",
{
"Ref": "AWS::Region"
},
{
"Fn::FindInMap": [
"AWSInstanceType2Arch",
upvoted 1 times
" # ss160700 1 year ago
A - it shows us the changes
upvoted 1 times
There is more to this question than first appears. Don't rush to the first answer that makes sense; sometimes there is a better answer. B and C
are not wrong, but they are not the best answer. D could be done, but really, why would you do that?
upvoted 1 times
A Solutions Architect is designing a multi-account structure that has 10 existing accounts. The design must meet the following requirements:
✑ Consolidate all accounts into one organization.
✑ Allow full access to the Amazon EC2 service from the master account and the secondary accounts.
✑ Minimize the effort required to add additional secondary accounts.
Which combination of steps should be included in the solution? (Choose two.)
A. Create an organization from the master account. Send invitations to the secondary accounts from the master account. Accept the
invitations and create an OU.
B. Create an organization from the master account. Send a join request to the master account from each secondary account. Accept the
requests and create an OU.
C. Create a VPC peering connection between the master account and the secondary accounts. Accept the request for the VPC peering
connection.
D. Create a service control policy (SCP) that enables full EC2 access, and attach the policy to the OU.
E. Create a full EC2 access policy and map the policy to a role in each account. Trust every other account to assume the role.
Correct Answer: AD
There is a concept of Permission Boundary vs Actual IAM Policies. That is, we have a concept of "Allow" vs "Grant". In terms of
boundaries, we have the following three boundaries:
1. SCP
2. User/Role boundaries
3. Session boundaries (ex. AssumeRole ... )
In terms of actual permission granting, we have the following:
1. Identity Policies
2. Resource Policies
upvoted 10 times
" # LCC92 1 year ago
you misunderstand the question.
-> Allow full access to [the Amazon EC2 service] from [the master account and the secondary accounts] => means to allow all accounts to
access their own EC2 service, which an SCP can do.
upvoted 8 times
As the suggested answer says, there is a concept of Permission Boundary vs Actual IAM Policies. That is, we have a concept of "Allow" vs
"Grant". In terms of boundaries, we have the following three boundaries:
1. SCP
2. User/Role boundaries
3. Session boundaries (ex. AssumeRole ... )
D is allowing permissions while E is granting permissions. In addition, E doesn't meet the requirement "Minimize the effort required to add
additional secondary accounts", because the trust relationships of the role in all existing accounts have to be changed when a new account needs to
be added, which is quite a lot of work. All things considered, D is preferable to E.
upvoted 1 times
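A hedged boto3 sketch of steps A and D: inviting a secondary account and attaching a full-EC2-access SCP to an OU. The account ID, OU ID, and policy name are placeholders.

import boto3, json

org = boto3.client('organizations')

# Invite an existing secondary account into the organization (step A).
org.invite_account_to_organization(Target={'Id': '222222222222', 'Type': 'ACCOUNT'})

# Create an SCP that allows full EC2 access and attach it to the OU (step D).
scp = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": "ec2:*", "Resource": "*"}],
}
policy = org.create_policy(
    Content=json.dumps(scp),
    Description='Allow full EC2 access',
    Name='AllowEC2FullAccess',
    Type='SERVICE_CONTROL_POLICY',
)
org.attach_policy(PolicyId=policy['Policy']['PolicySummary']['Id'],
                  TargetId='ou-abcd-11111111')  # placeholder OU ID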
AnyCompany has acquired numerous companies over the past few years. The CIO for AnyCompany would like to keep the resources for each
acquired company separate. The CIO also would like to enforce a chargeback model where each company pays for the AWS services it uses.
The Solutions Architect is tasked with designing an AWS architecture that allows AnyCompany to achieve the following:
✑ Implementing a detailed chargeback mechanism to ensure that each company pays for the resources it uses.
✑ AnyCompany can pay for AWS services for all its companies through a single invoice.
✑ Developers in each acquired company have access to resources in their company only.
✑ Developers in an acquired company should not be able to affect resources in any company other than their own.
✑ A single identity store is used to authenticate Developers across all companies.
Which of the following approaches would meet these requirements? (Choose two.)
A. Create a multi-account strategy with an account per company. Use consolidated billing to ensure that AnyCompany needs to pay a single
bill only.
B. Create a multi-account strategy with a virtual private cloud (VPC) for each company. Reduce impact across companies by not creating any
VPC peering links. As everything is in a single account, there will be a single invoice. Use tagging to create a detailed bill for each company.
C. Create IAM users for each Developer in the account to which they require access. Create policies that allow the users access to all
resources in that account. Attach the policies to the IAM user.
D. Create a federated identity store against the company's Active Directory. Create IAM roles with appropriate permissions and set the trust
relationships with AWS and the identity store. Use AWS STS to grant users access based on the groups they belong to in the identity store.
E. Create a multi-account strategy with an account per company. For billing purposes, use a tagging solution that uses a tag to identify the
company that creates each resource.
Correct Answer: AD
1. Create a multi-account strategy with a virtual private cloud (VPC) for each company - this is a multi-account strategy: different accounts, each with
associated VPCs.
It meets the requirement of “The CIO of AnyCompany wishes to maintain a separation of resources for each acquired company”.
2. Reduce impact across companies by not creating any VPC peering links - this requirement of separating resources is met by not peering VPCs.
3. As everything is in a single account - as this is one organisation, it's best practice to implement AWS Organizations for consolidated billing, so
assume AWS Organizations is implemented here.
4. Use tagging to create a detailed bill for each company - tagging will help create a detailed bill for each company. The key word is detailed. AWS
Control Tower will give you the bill per company, but you will still need tagging to ensure the costs are detailed for each company.
upvoted 1 times
https://aws.amazon.com/organizations/faqs/
Q: Which central governance and management capabilities does AWS Organizations enable?
AWS Organizations enables the following capabilities:
Automate AWS account creation and management, and provision resources with AWS CloudFormation Stacksets
Maintain a secure environment with policies and management of AWS security services
Govern access to AWS services, resources, and regions
Centrally manage policies across multiple AWS accounts
Audit your environment for compliance
View and manage costs with consolidated billing
Configure AWS services across multiple accounts
upvoted 7 times
A company deployed a three-tier web application in two regions: us-east-1 and eu-west-1. The application must be active in both regions at the
same time. The database tier of the application uses a single Amazon RDS Aurora database globally, with a master in us-east-1 and a read replica
in eu-west-1. Both regions are connected by a VPN.
The company wants to ensure that the application remains available even in the event of a region-level failure of all of the application's
components. It is acceptable for the application to be in read-only mode for up to 1 hour. The company plans to configure two Amazon Route 53
record sets, one for each of the regions.
How should the company complete the configuration to meet its requirements while providing the lowest latency for the application end-users?
(Choose two.)
A. Use failover routing and configure the us-east-1 record set as primary and the eu-west-1 record set as secondary. Configure an HTTP health
check for the web application in us-east-1, and associate it to the us-east-1 record set.
B. Use weighted routing and configure each record set with a weight of 50. Configure an HTTP health check for each region, and attach it to
the record set for that region.
C. Use latency-based routing for both record sets. Configure a health check for each region and attach it to the record set for that region.
D. Configure an Amazon CloudWatch alarm for the health checks in us-east-1, and have it invoke an AWS Lambda function that promotes the
read replica in eu-west-1.
E. Configure Amazon RDS event notifications to react to the failure of the database in us-east-1 by invoking an AWS Lambda function that
promotes the read replica in eu-west-1.
Correct Answer: CE
D: Configure an Amazon CloudWatch alarm for the health checks in us-east-1, and have it invoke an AWS Lambda function that promotes the
read replica in eu-west-1.
How can an alarm configured in one region invoke a function in another region when the region itself is in a failure state?
upvoted 5 times
I can see a few RDS events regarding failure, which I don't see in CloudWatch.
Example:
failure
RDS-EVENT-0031
The DB instance has failed due to an incompatible configuration or an underlying storage issue. Begin a point-in-time-restore for the DB
instance.
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Events.html
upvoted 1 times
https://aws.amazon.com/blogs/developer/send-real-time-amazon-cloudwatch-alarm-notifications-to-amazon-chime/
upvoted 1 times
https://docs.aws.amazon.com/lambda/latest/dg/services-rds.html
"Amazon RDS sends notifications to an Amazon Simple Notification Service (Amazon SNS) topic, which you can configure to invoke
a Lambda function. Amazon SNS wraps the message from Amazon RDS in its own event document and sends it to your function."
upvoted 1 times
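A minimal sketch of the Lambda function that options D/E would invoke, assuming a cross-Region Aurora replica cluster in eu-west-1; the cluster identifier is a placeholder.

import boto3

# Runs against (or in) eu-west-1 so it still works if us-east-1 is impaired.
rds = boto3.client('rds', region_name='eu-west-1')

def lambda_handler(event, context):
    # Promote the cross-Region Aurora replica cluster to a standalone, writable
    # cluster. For a non-Aurora RDS replica the equivalent call would be
    # promote_read_replica().
    rds.promote_read_replica_db_cluster(DBClusterIdentifier='app-replica-eu')
    return {'status': 'promotion started'}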
A company runs a Windows Server host in a public subnet that is configured to allow a team of administrators to connect over RDP to
troubleshoot issues with hosts in a private subnet. The host must be available at all times outside of a scheduled maintenance window, and needs
to receive the latest operating system updates within 3 days of release.
What should be done to manage the host with the LEAST amount of administrative effort?
A. Run the host in a single-instance AWS Elastic Beanstalk environment. Configure the environment with a custom AMI to use a hardened
machine image from AWS Marketplace. Apply system updates with AWS Systems Manager Patch Manager.
B. Run the host on AWS WorkSpaces. Use Amazon WorkSpaces Application Manager (WAM) to harden the host. Configure Windows automatic
updates to occur every 3 days.
C. Run the host in an Auto Scaling group with a minimum and maximum instance count of 1. Use a hardened machine image from AWS
Marketplace. Apply system updates with AWS Systems Manager Patch Manager.
D. Run the host in AWS OpsWorks Stacks. Use a Chef recipe to harden the AMI during instance launch. Use an AWS Lambda scheduled event
to run the Upgrade Operating System stack command to apply system updates.
Correct Answer: B
Reference:
https://docs.aws.amazon.com/workspaces/latest/adminguide/workspace-maintenance.html
I like C, but I'm not sure how an ASG would serve any purpose in this scenario. Plus, WorkSpaces makes even less sense. Firstly, it's not cheap,
especially just to be used as a host server.
upvoted 1 times
D
AWS OpsWorks Stacks does not provide a way to apply updates to online Windows instances.
https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-os-windows.html
upvoted 1 times
A company has a large on-premises Apache Hadoop cluster with a 20 PB HDFS database. The cluster is growing every quarter by roughly 200
instances and 1
PB. The company's goals are to enable resiliency for its Hadoop data, limit the impact of losing cluster nodes, and significantly reduce costs. The
current cluster runs 24/7 and supports a variety of analysis workloads, including interactive queries and batch processing.
Which solution would meet these requirements with the LEAST expense and down time?
A. Use AWS Snowmobile to migrate the existing cluster data to Amazon S3. Create a persistent Amazon EMR cluster initially sized to handle
the interactive workload based on historical data from the on-premises cluster. Store the data on EMRFS. Minimize costs using Reserved
Instances for master and core nodes and Spot Instances for task nodes, and auto scale task nodes based on Amazon CloudWatch metrics.
Create job-specific, optimized clusters for batch workloads that are similarly optimized.
B. Use AWS Snowmobile to migrate the existing cluster data to Amazon S3. Create a persistent Amazon EMR cluster of a similar size and
configuration to the current cluster. Store the data on EMRFS. Minimize costs by using Reserved Instances. As the workload grows each
quarter, purchase additional Reserved Instances and add to the cluster.
C. Use AWS Snowball to migrate the existing cluster data to Amazon S3. Create a persistent Amazon EMR cluster initially sized to handle the
interactive workloads based on historical data from the on-premises cluster. Store the data on EMRFS. Minimize costs using Reserved
Instances for master and core nodes and Spot Instances for task nodes, and auto scale task nodes based on Amazon CloudWatch metrics.
Create job-specific, optimized clusters for batch workloads that are similarly optimized.
D. Use AWS Direct Connect to migrate the existing cluster data to Amazon S3. Create a persistent Amazon EMR cluster initially sized to handle
the interactive workload based on historical data from the on-premises cluster. Store the data on EMRFS. Minimize costs using Reserved
Instances for master and core nodes and Spot Instances for task nodes, and auto scale task nodes based on Amazon CloudWatch metrics.
Create job-specific, optimized clusters for batch workloads that are similarly optimized.
Correct Answer: A
To migrate large datasets of 10 PB or more in a single location, you should use Snowmobile. For datasets less than 10 PB or distributed in
multiple locations, you should use Snowball. In addition, you should evaluate the amount of available bandwidth in your network backbone. If
you have a high speed backbone with hundreds of Gb/s of spare throughput, then you can use Snowmobile to migrate the large datasets all at
once. If you have limited bandwidth on your backbone, you should consider using multiple Snowballs to migrate the data incrementally.
A company is running a large application on premises. Its technology stack consists of Microsoft .NET for the web server platform and Apache
Cassandra for the database. The company wants to migrate this application to AWS to improve service reliability. The IT team also wants to
reduce the time it spends on capacity management and maintenance of this infrastructure. The Development team is willing and available to make
code changes to support the migration.
Which design is the LEAST complex to manage after the migration?
A. Migrate the web servers to Amazon EC2 instances in an Auto Scaling group that is running .NET. Migrate the existing Cassandra database
to Amazon Aurora with multiple read replicas, and run both in a Multi-AZ mode.
B. Migrate the web servers to an AWS Elastic Beanstalk environment that is running the .NET platform in a Multi-AZ Auto Scaling
configuration. Migrate the Cassandra database to Amazon EC2 instances that are running in a Multi-AZ configuration.
C. Migrate the web servers to an AWS Elastic Beanstalk environment that is running the .NET platform in a Multi-AZ Auto Scaling
configuration. Migrate the existing Cassandra database to Amazon DynamoDB.
D. Migrate the web servers to Amazon EC2 instances in an Auto Scaling group that is running .NET. Migrate the existing Cassandra database
to Amazon DynamoDB.
Correct Answer: D
A company has a requirement that only allows specially hardened AMIs to be launched into public subnets in a VPC, and for the AMIs to be
associated with a specific security group. Allowing non-compliant instances to launch into the public subnet could present a significant security
risk if they are allowed to operate.
A mapping of approved AMIs to subnets to security groups exists in an Amazon DynamoDB table in the same AWS account. The company created
an AWS
Lambda function that, when invoked, will terminate a given Amazon EC2 instance if the combination of AMI, subnet, and security group are not
approved in the
DynamoDB table.
What should the Solutions Architect do to MOST quickly mitigate the risk of compliance deviations?
A. Create an Amazon CloudWatch Events rule that matches each time an EC2 instance is launched using one of the allowed AMIs, and
associate it with the Lambda function as the target.
B. For the Amazon S3 bucket receiving the AWS CloudTrail logs, create an S3 event notification configuration with a filter to match when logs
contain the ec2:RunInstances action, and associate it with the Lambda function as the target.
C. Enable AWS CloudTrail and configure it to stream to an Amazon CloudWatch Logs group. Create a metric filter in CloudWatch to match
when the ec2:RunInstances action occurs, and trigger the Lambda function when the metric is greater than 0.
D. Create an Amazon CloudWatch Events rule that matches each time an EC2 instance is launched, and associate it with the Lambda function
as the target.
Correct Answer: D
Answer is D.
upvoted 1 times
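A hedged sketch of the Lambda target behind option D, assuming the CloudWatch Events rule delivers the standard EC2 instance state-change event and that the DynamoDB table name, key schema, and attribute names look roughly like this (all of these are assumptions, not from the question).

import boto3

ec2 = boto3.client('ec2')
table = boto3.resource('dynamodb').Table('ApprovedLaunchCombinations')  # assumed name

def lambda_handler(event, context):
    # EC2 instance state-change events carry the instance ID in event['detail'].
    instance_id = event['detail']['instance-id']
    instance = ec2.describe_instances(InstanceIds=[instance_id])['Reservations'][0]['Instances'][0]

    key = {
        'AmiId': instance['ImageId'],
        'SubnetId': instance.get('SubnetId', ''),
    }
    item = table.get_item(Key=key).get('Item')

    sg_ids = {sg['GroupId'] for sg in instance.get('SecurityGroups', [])}
    approved = item is not None and sg_ids == set(item.get('SecurityGroupIds', []))

    if not approved:
        # Non-compliant combination of AMI, subnet, and security group: terminate.
        ec2.terminate_instances(InstanceIds=[instance_id])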
A Solutions Architect must migrate an existing on-premises web application with 70 TB of static files supporting a public open-data initiative. The
Architect wants to upgrade to the latest version of the host operating system as part of the migration effort.
Which is the FASTEST and MOST cost-effective way to perform the migration?
A. Run a physical-to-virtual conversion on the application server. Transfer the server image over the internet, and transfer the static data to
Amazon S3.
B. Run a physical-to-virtual conversion on the application server. Transfer the server image over AWS Direct Connect, and transfer the static
data to Amazon S3.
C. Re-platform the server to Amazon EC2, and use AWS Snowball to transfer the static data to Amazon S3.
D. Re-platform the server by using the AWS Server Migration Service to move the code and data to a new Amazon EC2 instance.
Correct Answer: C
A company has an application that generates a weather forecast that is updated every 15 minutes with an output resolution of 1 billion unique
positions, each approximately 20 bytes in size (20 Gigabytes per forecast). Every hour, the forecast data is globally accessed approximately 5
million times (1,400 requests per second), and up to 10 times more during weather events. The forecast data is overwritten every update. Users of
the current weather forecast application expect responses to queries to be returned in less than two seconds for each request.
Which design meets the required request rate and response time?
A. Store forecast locations in an Amazon ES cluster. Use an Amazon CloudFront distribution targeting an Amazon API Gateway endpoint with
AWS Lambda functions responding to queries as the origin. Enable API caching on the API Gateway stage with a cache-control timeout set for
15 minutes.
B. Store forecast locations in an Amazon EFS volume. Create an Amazon CloudFront distribution that targets an Elastic Load Balancing group
of an Auto Scaling fleet of Amazon EC2 instances that have mounted the Amazon EFS volume. Set the cache-control timeout for 15 minutes in
the CloudFront distribution.
C. Store forecast locations in an Amazon ES cluster. Use an Amazon CloudFront distribution targeting an API Gateway endpoint with AWS
Lambda functions responding to queries as the origin. Create an Amazon Lambda@Edge function that caches the data locally at edge
locations for 15 minutes.
D. Store forecast locations in Amazon S3 as individual objects. Create an Amazon CloudFront distribution targeting an Elastic Load Balancing
group of an Auto Scaling fleet of EC2 instances, querying the origin of the S3 object. Set the cache-control timeout for 15 minutes in the
CloudFront distribution.
Correct Answer: C
Reference:
https://aws.amazon.com/blogs/networking-and-content-delivery/lambdaedge-design-best-practices/
EFS also has lower limits than S3, which makes it less suitable for this case, which may see 14k requests per second.
You can control how long your files stay in a CloudFront cache before CloudFront forwards another request to your origin. Reducing the duration
allows you to serve dynamic content. Increasing the duration means your users get better performance because your files are more likely to be
served directly from the edge cache. A longer duration also reduces the load on your origin.
To change the cache duration for an individual file, you can configure your origin to add a Cache-Control max-age or Cache-Control s-maxage
directive, or an Expires header field to the file.
upvoted 13 times
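For example, the 15-minute cache duration could be set per object when the forecast is written to S3; CloudFront then honors the Cache-Control header. The bucket and key below are placeholders.

import boto3

s3 = boto3.client('s3')

# Each forecast refresh overwrites the object and sets a 15-minute max-age,
# so CloudFront edge caches expire in step with the forecast updates.
s3.put_object(
    Bucket='forecast-data-bucket',           # placeholder
    Key='forecasts/latest/grid-000001.bin',  # placeholder
    Body=b'...forecast bytes...',
    CacheControl='max-age=900',
)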
A (wrong): Cache-Control is not available for API Gateway caching, which uses a TTL instead.
upvoted 2 times
C (wrong): Maximum RPS for API Gateway is 10,000 requests/s; for Lambda it is 1,000 requests/s. They can't meet the requirement of up to
14,000+ requests/s during weather events. In addition, Lambda@Edge is not used to cache data at edge locations for a specific
time.
https://aws.amazon.com/blogs/networking-and-content-delivery/lambdaedge-design-best-practices/
upvoted 1 times
A company is using AWS CloudFormation to deploy its infrastructure. The company is concerned that, if a production CloudFormation stack is
deleted, important data stored in Amazon RDS databases or Amazon EBS volumes might also be deleted.
How can the company prevent users from accidentally deleting data in this way?
A. Modify the CloudFormation templates to add a DeletionPolicy attribute to RDS and EBS resources.
B. Configure a stack policy that disallows the deletion of RDS and EBS resources.
C. Modify IAM policies to deny deleting RDS and EBS resources that are tagged with an "aws:cloudformation:stack-name" tag.
D. Use AWS Config rules to prevent deleting RDS and EBS resources.
Correct Answer: A
With the DeletionPolicy attribute you can preserve or (in some cases) backup a resource when its stack is deleted. You specify a DeletionPolicy
attribute for each resource that you want to control. If a resource has no DeletionPolicy attribute, AWS CloudFormation deletes the resource by
default. To keep a resource when its stack is deleted, specify Retain for that resource. You can use retain for any resource. For example, you
can retain a nested stack, Amazon S3 bucket, or EC2 instance so that you can continue to use or modify those resources after you delete their
stacks.
Reference:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-deletionpolicy.html
The correct answer is A: CloudFormation's DeletionPolicy.
upvoted 1 times
" # jackdryan 1 year ago
I'll go with A
upvoted 2 times
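For illustration, a template fragment (expressed here as a Python dict with made-up logical IDs and minimal properties) showing DeletionPolicy on RDS and EBS resources.

import json

# DeletionPolicy sits at the resource level, next to Type and Properties.
# "Snapshot" keeps a final snapshot on stack deletion; "Retain" keeps the
# resource itself.
resources = {
    "AppDatabase": {
        "Type": "AWS::RDS::DBInstance",
        "DeletionPolicy": "Snapshot",
        "Properties": {"DBInstanceClass": "db.m5.large", "Engine": "mysql",
                       "AllocatedStorage": "100"},
    },
    "DataVolume": {
        "Type": "AWS::EC2::Volume",
        "DeletionPolicy": "Retain",
        "Properties": {"Size": 100, "AvailabilityZone": "us-east-1a"},
    },
}

print(json.dumps({"Resources": resources}, indent=2))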
A company is planning to migrate an application from on-premises to AWS. The application currently uses an Oracle database and the company
can tolerate a brief downtime of 1 hour when performing the switch to the new infrastructure. As part of the migration, the database engine will be
changed to MySQL. A
Solutions Architect needs to determine which AWS services can be used to perform the migration while minimizing the amount of work and time
required.
Which of the following will meet the requirements?
A. Use AWS SCT to generate the schema scripts and apply them on the target prior to migration. Use AWS DMS to analyze the current schema
and provide a recommendation for the optimal database engine. Then, use AWS DMS to migrate to the recommended engine. Use AWS SCT to
identify what embedded SQL code in the application can be converted and what has to be done manually.
B. Use AWS SCT to generate the schema scripts and apply them on the target prior to migration. Use AWS DMS to begin moving data from the
on-premises database to AWS. After the initial copy, continue to use AWS DMS to keep the databases in sync until cutting over to the new
database. Use AWS SCT to identify what embedded SQL code in the application can be converted and what has to be done manually.
C. Use AWS DMS to help identify the best target deployment between installing the database engine on Amazon EC2 directly or moving to
Amazon RDS. Then, use AWS DMS to migrate to the platform. Use AWS Application Discovery Service to identify what embedded SQL code in
the application can be converted and what has to be done manually.
D. Use AWS DMS to begin moving data from the on-premises database to AWS. After the initial copy, continue to use AWS DMS to keep the
databases in sync until cutting over to the new database. Use AWS Application Discovery Service to identify what embedded SQL code in the
application can be converted and what has to be done manually.
Correct Answer: B
Use the modify-instance-placement command and specify the name of the placement group to which to move the instance.
upvoted 2 times
" # Waiweng 1 year ago
It's B
upvoted 2 times
A company is using AWS to run an internet-facing production application written in Node.js. The Development team is responsible for pushing new
versions of their software directly to production. The application software is updated multiple times a day. The team needs guidance from a
Solutions Architect to help them deploy the software to the production fleet quickly and with the least amount of disruption to the service.
Which option meets these requirements?
A. Prepackage the software into an AMI and then use Auto Scaling to deploy the production fleet. For software changes, update the AMI and
allow Auto Scaling to automatically push the new AMI to production.
B. Use AWS CodeDeploy to push the prepackaged AMI to production. For software changes, reconfigure CodeDeploy with new AMI
identification to push the new AMI to the production fleet.
C. Use AWS Elastic Beanstalk to host the production application. For software changes, upload the new application version to Elastic
Beanstalk to push this to the production fleet using a blue/green deployment method.
D. Deploy the base AMI through Auto Scaling and bootstrap the software using user data. For software changes, SSH to each of the instances
and replace the software with the new version.
Correct Answer: A
A company used Amazon EC2 instances to deploy a web fleet to host a blog site. The EC2 instances are behind an Application Load Balancer
(ALB) and are configured in an Auto Scaling group. The web application stores all blog content on an Amazon EFS volume.
The company recently added a feature for bloggers to add video to their posts, attracting 10 times the previous user traffic. At peak times of day,
users report buffering and timeout issues while attempting to reach the site or watch videos.
Which is the MOST cost-efficient and scalable deployment that will resolve the issues for users?
B. Update the blog site to use instance store volumes for storage. Copy the site contents to the volumes at launch and to Amazon S3 at
shutdown.
C. Configure an Amazon CloudFront distribution. Point the distribution to an S3 bucket, and migrate the videos from EFS to Amazon S3.
D. Set up an Amazon CloudFront distribution for all site contents, and point the distribution at the ALB.
Correct Answer: C
Reference:
https://aws.amazon.com/premiumsupport/knowledge-center/cloudfront-https-connection-fails/
upvoted 9 times
" # cldy Most Recent % 11 months ago
C. Configure an Amazon CloudFront distribution. Point the distribution to an S3 bucket, and migrate the videos from EFS to Amazon S3.
upvoted 2 times
I'll go with C
upvoted 4 times
A company runs its containerized batch jobs on Amazon ECS. The jobs are scheduled by submitting a container image, a task definition, and the
relevant data to an Amazon S3 bucket. Container images may be unique per job. Running the jobs as quickly as possible is of utmost importance,
so submitting job artifacts to the
S3 bucket triggers the job to run immediately. Sometimes there may be no jobs running at all. However, jobs of any size can be submitted with no
prior warning to the IT Operations team. Job definitions include CPU and memory resource requirements.
What solution will allow the batch jobs to complete as quickly as possible after being scheduled?
A. Schedule the jobs on an Amazon ECS cluster using the Amazon EC2 launch type. Use Service Auto Scaling to increase or decrease the
number of running tasks to suit the number of running jobs.
B. Schedule the jobs directly on EC2 instances. Use Reserved Instances for the baseline minimum load, and use On-Demand Instances in an
Auto Scaling group to scale up the platform based on demand.
C. Schedule the jobs on an Amazon ECS cluster using the Fargate launch type. Use Service Auto Scaling to increase or decrease the number of
running tasks to suit the number of running jobs.
D. Schedule the jobs on an Amazon ECS cluster using the Fargate launch type. Use Spot Instances in an Auto Scaling group to scale the
platform based on demand. Use Service Auto Scaling to increase or decrease the number of running tasks to suit the number of running jobs.
Correct Answer: C
A company receives clickstream data files to Amazon S3 every five minutes. A Python script runs as a cron job once a day on an Amazon EC2
instance to process each file and load it into a database hosted on Amazon RDS. The cron job takes 15 to 30 minutes to process 24 hours of data.
The data consumers ask for the data be available as soon as possible.
Which solution would accomplish the desired outcome?
A. Increase the size of the instance to speed up processing and update the schedule to run once an hour.
B. Convert the cron job to an AWS Lambda function and trigger this new function using a cron job on an EC2 instance.
C. Convert the cron job to an AWS Lambda function and schedule it to run once an hour using Amazon CloudWatch Events.
D. Create an AWS Lambda function that runs when a file is delivered to Amazon S3 using S3 event notifications.
Correct Answer: D
Reference:
https://docs.aws.amazon.com/lambda/latest/dg/with-s3.html
D is correct
upvoted 1 times
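A minimal sketch of the Lambda handler for option D, which processes each clickstream file as soon as the S3 event notification arrives; the actual processing/load step is only stubbed out here.

import boto3
from urllib.parse import unquote_plus

s3 = boto3.client('s3')

def lambda_handler(event, context):
    # One invocation per S3 event notification; each record points at the
    # newly delivered clickstream file.
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        key = unquote_plus(record['s3']['object']['key'])
        body = s3.get_object(Bucket=bucket, Key=key)['Body'].read()
        load_into_rds(body)  # placeholder for the existing per-file load logic

def load_into_rds(data):
    pass  # the original cron job's processing and RDS load would go here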
A company that is new to AWS reports it has exhausted its service limits across several accounts that are on the Basic Support plan. The
company would like to prevent this from happening in the future.
What is the MOST efficient way of monitoring and managing all service limits in the company's accounts?
A. Use Amazon CloudWatch and AWS Lambda to periodically calculate the limits across all linked accounts using AWS Trusted Advisor,
provide notifications using Amazon SNS if the limits are close to exceeding the threshold.
B. Reach out to AWS Support to proactively increase the limits across all accounts. That way, the customer avoids creating and managing
infrastructure just to raise the service limits.
C. Use Amazon CloudWatch and AWS Lambda to periodically calculate the limits across all linked accounts using AWS Trusted Advisor,
programmatically increase the limits that are close to exceeding the threshold.
D. Use Amazon CloudWatch and AWS Lambda to periodically calculate the limits across all linked accounts using AWS Trusted Advisor, and
use Amazon SNS for notifications if a limit is close to exceeding the threshold. Ensure that the accounts are using the AWS Business Support
plan at a minimum.
Correct Answer: A
https://aws.amazon.com/solutions/implementations/quota-monitor/
upvoted 2 times
If you have a Business, Enterprise On-Ramp, or Enterprise Support plan, you can use the Trusted Advisor console and the AWS Support API to
access all Trusted Advisor checks.
https://docs.aws.amazon.com/awssupport/latest/user/trustedadvisor.html
There are no mentions of any restrictions
upvoted 1 times
" If you have a Basic or Developer Support plan, you can use the Trusted Advisor console to access all checks in the Service Limits category and
six checks in the Security category."
upvoted 1 times
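A hedged sketch of the Trusted Advisor polling that option D implies; note the AWS Support API is only available in us-east-1 and requires at least the Business Support plan. Check names and statuses come from the API, everything else is a placeholder.

import boto3

support = boto3.client('support', region_name='us-east-1')

# List the checks in the Service Limits category, then pull each result and
# collect resources flagged as approaching their limit.
checks = support.describe_trusted_advisor_checks(language='en')['checks']
for check in checks:
    if check['category'] != 'service_limits':
        continue
    result = support.describe_trusted_advisor_check_result(
        checkId=check['id'], language='en')['result']
    for resource in result.get('flaggedResources', []):
        if resource.get('status') in ('warning', 'error'):
            print(check['name'], resource.get('metadata'))  # or publish to SNS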
A company runs an IoT platform on AWS. IoT sensors in various locations send data to the company's Node.js API servers on Amazon EC2
instances running behind an Application Load Balancer. The data is stored in an Amazon RDS MySQL DB instance that uses a 4 TB General
Purpose SSD volume.
The number of sensors the company has deployed in the field has increased over time, and is expected to grow significantly. The API servers are
consistently overloaded and RDS metrics show high write latency.
Which of the following steps together will resolve the issues permanently and enable growth as new sensors are provisioned, while keeping this
platform cost-efficient? (Choose two.)
A. Resize the MySQL General Purpose SSD storage to 6 TB to improve the volume's IOPS
B. Re-architect the database tier to use Amazon Aurora instead of an RDS MySQL DB instance and add read replicas
C. Leverage Amazon Kinesis Data Streams and AWS Lambda to ingest and process the raw data
D. Use AWS X-Ray to analyze and debug application issues and add more API servers to match the load
E. Re-architect the database tier to use Amazon DynamoDB instead of an RDS MySQL DB instance
Correct Answer: CE
" # AzureDP900 11 months ago
C & E is the right answer
upvoted 1 times
A Solutions Architect is designing a system that will collect and store data from 2,000 internet-connected sensors. Each sensor produces 1 KB of
data every second. The data must be available for analysis within a few seconds of it being sent to the system and stored for analysis indefinitely.
Which is the MOST cost-effective solution for collecting and storing the data?
A. Put each record in Amazon Kinesis Data Streams. Use an AWS Lambda function to write each record to an object in Amazon S3 with a
prefix that organizes the records by hour and hashes the record's key. Analyze recent data from Kinesis Data Streams and historical data from
Amazon S3.
B. Put each record in Amazon Kinesis Data Streams. Set up Amazon Kinesis Data Firehose to read records from the stream and group them
into objects in Amazon S3. Analyze recent data from Kinesis Data Streams and historical data from Amazon S3.
C. Put each record into an Amazon DynamoDB table. Analyze the recent data by querying the table. Use an AWS Lambda function connected
to a DynamoDB stream to group records together, write them into objects in Amazon S3, and then delete the record from the DynamoDB table.
Analyze recent data from the DynamoDB table and historical data from Amazon S3
D. Put each record into an object in Amazon S3 with a prefix that organizes the records by hour and hashes the record's key. Use S3 lifecycle
management to transition objects to S3 infrequent access storage to reduce storage costs. Analyze recent and historical data by accessing
the data in Amazon S3
Correct Answer: C
B is more practical. I can buffer, group, and write data to S3 every 60 seconds. I do not want to write a file to S3 every second using Lambda.
upvoted 1 times
B is the right answer. Kinesis Data Streams with Kinesis Data Firehose reading and buffering from it to write to S3 is a standard ingestion pattern
for IoT data.
upvoted 2 times
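For reference, a minimal sketch of a sensor-side write into Kinesis Data Streams, with Firehose then batching the stream into S3 objects; the stream name and record fields are placeholders.

import boto3, json, time

kinesis = boto3.client('kinesis')

def send_reading(sensor_id, payload):
    # ~1 KB record per sensor per second; the partition key spreads sensors
    # across shards.
    kinesis.put_record(
        StreamName='sensor-readings',  # placeholder
        Data=json.dumps({'sensor_id': sensor_id, 'ts': time.time(), 'data': payload}).encode(),
        PartitionKey=sensor_id,
    )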
An auction website enables users to bid on collectible items. The auction rules require that each bid is processed only once and in the order it was
received. The current implementation is based on a fleet of Amazon EC2 web servers that write bid records into Amazon Kinesis Data Streams. A
single t2.large instance has a cron job that runs the bid processor, which reads incoming bids from Kinesis Data Streams and processes each bid.
The auction site is growing in popularity, but users are complaining that some bids are not registering.
Troubleshooting indicates that the bid processor is too slow during peak demand hours, sometimes crashes while processing, and occasionally
loses track of which record is being processed.
What changes should make the bid processing more reliable?
A. Refactor the web application to use the Amazon Kinesis Producer Library (KPL) when posting bids to Kinesis Data Streams. Refactor the
bid processor to flag each record in Kinesis Data Streams as being unread, processing, and processed. At the start of each bid processing run,
scan Kinesis Data Streams for unprocessed records.
B. Refactor the web application to post each incoming bid to an Amazon SNS topic in place of Kinesis Data Streams. Configure the SNS topic
to trigger an AWS Lambda function that processes each bid as soon as a user submits it.
C. Refactor the web application to post each incoming bid to an Amazon SQS FIFO queue in place of Kinesis Data Streams. Refactor the bid
processor to continuously poll the SQS queue. Place the bid processing EC2 instance in an Auto Scaling group with a minimum and a maximum
size of 1.
D. Switch the EC2 instance type from t2.large to a larger general compute instance type. Put the bid processor EC2 instances in an Auto
Scaling group that scales out the number of EC2 instances running the bid processor, based on the IncomingRecords metric in Kinesis Data
Streams.
Correct Answer: D
Reference:
https://d0.awsstatic.com/whitepapers/Building_a_Real_Time_Bidding_Platform_on_AWS_v1_Final.pdf
Because the auction website already uses Kinesis Data Streams, yet its bid processor still "sometimes crashes while processing, and occasionally
loses track of which record is being processed", the question is asking us to make the bid processing more reliable, rather than faster.
As for option D, neither "switch to a larger instance type" nor "adding more EC2 instances within an Auto Scaling group" is able to solve the
aforementioned reliability issue.
upvoted 1 times
" # tartarus23 6 months ago
Selected Answer: C
C. SQS then Kinesis decouples the architecture and business flow to ensure that all bids are getting sent almost real time.
upvoted 1 times
Crashes while processing = needs to be replaced ASAP to continue processing the bids
Occasionally loses track = only happens sometimes, not ALL the time
Then, what changes make it more RELIABLE = continued service despite crashes and slow processing
Answer is D.
upvoted 2 times
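A hedged sketch of how the web tier would post bids to an SQS FIFO queue in option C; grouping by auction item preserves per-item ordering, and the deduplication ID ensures each bid is delivered only once within the deduplication window. The queue URL and field names are assumptions.

import boto3, json

sqs = boto3.client('sqs')
QUEUE_URL = 'https://sqs.us-east-1.amazonaws.com/111111111111/bids.fifo'  # placeholder

def post_bid(bid):
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps(bid),
        MessageGroupId=bid['item_id'],         # FIFO ordering per auction item
        MessageDeduplicationId=bid['bid_id'],  # exactly-once within the dedup window
    )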
A bank is re-architecting its mainframe-based credit card approval processing application to a cloud-native application on the AWS cloud.
The new application will receive up to 1,000 requests per second at peak load. There are multiple steps to each transaction, and each step must
receive the result of the previous step. The entire request must return an authorization response within less than 2 seconds with zero data loss.
Every request must receive a response. The solution must be Payment Card Industry Data Security Standard (PCI DSS)-compliant.
Which option will meet all of the bank's objectives with the LEAST complexity and LOWEST cost while also meeting compliance requirements?
A. Create an Amazon API Gateway to process inbound requests using a single AWS Lambda task that performs multiple steps and returns a
JSON object with the approval status. Open a support case to increase the limit for the number of concurrent Lambdas to allow room for
bursts of activity due to the new application.
B. Create an Application Load Balancer with an Amazon ECS cluster on Amazon EC2 Dedicated Instances in a target group to process
incoming requests. Use Auto Scaling to scale the cluster out/in based on average CPU utilization. Deploy a web service that processes all of
the approval steps and returns a JSON object with the approval status.
C. Deploy the application on Amazon EC2 on Dedicated Instances. Use an Elastic Load Balancer in front of a farm of application servers in an
Auto Scaling group to handle incoming requests. Scale out/in based on a custom Amazon CloudWatch metric for the number of inbound
requests per second after measuring the capacity of a single instance.
D. Create an Amazon API Gateway to process inbound requests using a series of AWS Lambda processes, each with an Amazon SQS input
queue. As each step completes, it writes its result to the next step's queue. The final step returns a JSON object with the approval status.
Open a support case to increase the limit for the number of concurrent Lambdas to allow room for bursts of activity due to the new
application.
Correct Answer: C
My simple understanding:
Multiple Lambda functions for each step can add up to 300ms/step
upvoted 3 times
Why not D? The question is asking for the least complex working solution.
upvoted 1 times
A Solutions Architect is migrating a 10 TB PostgreSQL database to Amazon RDS for PostgreSQL. The company's internet link is 50 Mbps with a VPN
in the
Amazon VPC, and the Solutions Architect needs to migrate the data and synchronize the changes before the cutover. The cutover must take place
within an 8-day period.
What is the LEAST complex method of migrating the database securely and reliably?
A. Order an AWS Snowball device and copy the database using the AWS DMS. When the database is available in Amazon S3, use AWS DMS to
load it to Amazon RDS, and configure a job to synchronize changes before the cutover.
B. Create an AWS DMS job to continuously replicate the data from on premises to AWS. Cutover to Amazon RDS after the data is
synchronized.
C. Order an AWS Snowball device and copy a database dump to the device. After the data has been copied to Amazon S3, import it to the
Amazon RDS instance. Set up log shipping over a VPN to synchronize changes before the cutover.
D. Order an AWS Snowball device and copy the database by using the AWS Schema Conversion Tool. When the data is available in Amazon S3,
use AWS DMS to load it to Amazon RDS, and configure a job to synchronize changes before the cutover.
Correct Answer: B
Scary how people are testing for a cert like this, and don't even know the difference
upvoted 3 times
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_LargeDBs.html
upvoted 1 times
You use the AWS Schema Conversion Tool (AWS SCT) to extract the data locally and move it to an Edge device.
You ship the Edge device or devices back to AWS.
After AWS receives your shipment, the Edge device automatically loads its data into an Amazon S3 bucket.
AWS DMS takes the files and migrates the data to the target data store. If you are using change data capture (CDC), those updates are written to
the Amazon S3 bucket and then applied to the target data store.
upvoted 6 times
A Solutions Architect must update an application environment within AWS Elastic Beanstalk using a blue/green deployment methodology. The
Solutions Architect creates an environment that is identical to the existing application environment and deploys the application to the new
environment.
What should be done next to complete the update?
Correct Answer: B
Reference:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.CNAMESwap.html
https://docs.aws.amazon.com/whitepapers/latest/blue-green-deployments/swap-the-environment-of-an-elastic-beanstalk-application.html
upvoted 2 times
" # Bulti 1 year ago
Answer is B. You need to swap Environment URLs
upvoted 1 times
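The swap itself is a single API call once the new environment is healthy; a minimal boto3 sketch, with placeholder environment names.

import boto3

eb = boto3.client('elasticbeanstalk')

# Swap the CNAMEs so traffic moves from the old (blue) environment to the
# newly deployed (green) environment; swapping again rolls the change back.
eb.swap_environment_cnames(
    SourceEnvironmentName='myapp-blue',        # placeholder
    DestinationEnvironmentName='myapp-green',  # placeholder
)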
A company has a legacy application running on servers on premises. To increase the application's reliability, the company wants to gain actionable
insights using application logs. A Solutions Architect has been given the following requirements for the solution:
✑ Aggregate logs using AWS.
✑ Automate log analysis for errors.
✑ Notify the Operations team when errors go beyond a specified threshold.
What solution meets the requirements?
A. Install Amazon Kinesis Agent on servers, send logs to Amazon Kinesis Data Streams and use Amazon Kinesis Data Analytics to identify
errors, create an Amazon CloudWatch alarm to notify the Operations team of errors
B. Install an AWS X-Ray agent on servers, send logs to AWS Lambda and analyze them to identify errors, use Amazon CloudWatch Events to
notify the Operations team of errors.
C. Install Logstash on servers, send logs to Amazon S3 and use Amazon Athena to identify errors, use sendmail to notify the Operations team
of errors.
D. Install the Amazon CloudWatch agent on servers, send logs to Amazon CloudWatch Logs and use metric filters to identify errors, create a
CloudWatch alarm to notify the Operations team of errors.
Correct Answer: D
Reference:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html
https://docs.aws.amazon.com/kinesis-agent-windows/latest/userguide/what-is-kinesis-agent-windows.html
I don't see any reason why A would not work but it seems like overkill for just error counting.
upvoted 2 times
" # Moon Highly Voted $ 1 year, 1 month ago
I would for with A.
https://docs.aws.amazon.com/kinesis-agent-windows/latest/userguide/what-is-kinesis-agent-windows.html
https://medium.com/@khandelwal12nidhi/build-log-analytic-solution-on-aws-cc62a70057b2
upvoted 14 times
for Kinesis agent: "Your operating system must be either Amazon Linux AMI with version 2015.09 or later, or Red Hat Enterprise Linux version
7 or later."
https://docs.aws.amazon.com/streams/latest/dev/writing-with-agents.html#download-install
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Install-CloudWatch-Agent.html
upvoted 8 times
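A hedged sketch of the metric filter and alarm from option D, assuming the CloudWatch agent already ships the application logs to a log group; the log group name, filter pattern, threshold, and SNS topic ARN are placeholders.

import boto3

logs = boto3.client('logs')
cloudwatch = boto3.client('cloudwatch')

# Count log lines containing "ERROR" as a custom metric...
logs.put_metric_filter(
    logGroupName='/legacy-app/application',  # placeholder
    filterName='app-errors',
    filterPattern='ERROR',
    metricTransformations=[{'metricName': 'AppErrorCount',
                            'metricNamespace': 'LegacyApp',
                            'metricValue': '1'}],
)

# ...and alarm the Operations team via SNS when errors exceed the threshold.
cloudwatch.put_metric_alarm(
    AlarmName='legacy-app-error-rate',
    Namespace='LegacyApp', MetricName='AppErrorCount',
    Statistic='Sum', Period=300, EvaluationPeriods=1,
    Threshold=10, ComparisonOperator='GreaterThanThreshold',
    AlarmActions=['arn:aws:sns:us-east-1:111111111111:ops-alerts'],  # placeholder
)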
What combination of steps could a Solutions Architect take to protect a web workload running on Amazon EC2 from DDoS and application layer
attacks? (Choose two.)
A. Put the EC2 instances behind a Network Load Balancer and configure AWS WAF on it.
C. Put the EC2 instances in an Auto Scaling group and configure AWS WAF on it.
D. Create and use an Amazon CloudFront distribution and configure AWS WAF on it.
E. Create and use an internet gateway in the VPC and use AWS Shield.
Correct Answer: DE
Reference:
https://aws.amazon.com/answers/networking/aws-ddos-attack-mitigation/
"AWS Shield Standard automatically protects your Amazon Route 53 Hosted Zones from infrastructure layer DDoS attacks"
https://aws.amazon.com/shield/?nc1=h_ls&whats-new-cards.sort-by=item.additionalFields.postDateTime&whats-new-cards.sort-order=desc
"AWS WAF can be deployed on Amazon CloudFront, the Application Load Balancer (ALB), Amazon API Gateway, and AWS AppSync."
https://aws.amazon.com/waf/faqs/
upvoted 7 times
- AWS Shield
Amazon CloudFront distributions
Amazon Route 53 hosted zones
AWS Global Accelerator accelerators
Application load balancers
Elastic Load Balancing (ELB) load balancers
Amazon Elastic Compute Cloud (Amazon EC2) Elastic IP addresses
- AWS WAF
Amazon CloudFront
Amazon API Gateway REST API
Application Load Balancer
AWS AppSync GraphQL API
A photo-sharing and publishing company receives 10,000 to 150,000 images daily. The company receives the images from multiple suppliers and
users registered with the service. The company is moving to AWS and wants to enrich the existing metadata by adding data using Amazon
Rekognition.
The following is an example of the additional data:
As part of the cloud migration program, the company uploaded existing image data to Amazon S3 and told users to upload images directly to
Amazon S3.
What should the Solutions Architect do to support these requirements?
A. Trigger AWS Lambda based on an S3 event notification to create additional metadata using Amazon Rekognition. Use Amazon DynamoDB
to store the metadata and Amazon ES to create an index. Use a web front-end to provide search capabilities backed by Amazon ES.
B. Use Amazon Kinesis to stream data based on an S3 event. Use an application running in Amazon EC2 to extract metadata from the images.
Then store the data on Amazon DynamoDB and Amazon CloudSearch and create an index. Use a web front-end with search capabilities
backed by CloudSearch.
C. Start an Amazon SQS queue based on S3 event notifications. Then have Amazon SQS send the metadata information to Amazon
DynamoDB. An application running on Amazon EC2 extracts data from Amazon Rekognition using the API and adds data to DynamoDB and
Amazon ES. Use a web front-end to provide search capabilities backed by Amazon ES.
D. Trigger AWS Lambda based on an S3 event notification to create additional metadata using Amazon Rekognition. Use Amazon RDS MySQL
Multi-AZ to store the metadata information and use Lambda to create an index. Use a web front-end with search capabilities backed by
Lambda.
Correct Answer: D
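Both options A and D trigger a Lambda function from the S3 event notification to call Amazon Rekognition; a minimal sketch of that step, storing the labels in DynamoDB as option A describes. The table name and attribute layout are assumptions, and the Amazon ES indexing step is omitted.

import boto3
from urllib.parse import unquote_plus

rekognition = boto3.client('rekognition')
table = boto3.resource('dynamodb').Table('ImageMetadata')  # assumed table name

def lambda_handler(event, context):
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        key = unquote_plus(record['s3']['object']['key'])

        # Detect labels directly on the uploaded S3 object.
        labels = rekognition.detect_labels(
            Image={'S3Object': {'Bucket': bucket, 'Name': key}},
            MaxLabels=10, MinConfidence=80,
        )['Labels']

        # Enrich the existing metadata with the detected labels.
        table.put_item(Item={
            'ImageKey': key,
            'Labels': [l['Name'] for l in labels],
        })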
A Solutions Architect is redesigning an image-viewing and messaging platform to be delivered as SaaS. Currently, there is a farm of virtual
desktop infrastructure
(VDI) that runs a desktop image-viewing application and a desktop messaging application. Both applications use a shared database to manage
user accounts and sharing. Users log in from a web portal that launches the applications and streams the view of the application on the user's
machine. The Development Operations team wants to move away from using VDI and wants to rewrite the application.
What is the MOST cost-effective architecture that offers both security and ease of management?
A. Run a website from an Amazon S3 bucket with a separate S3 bucket for images and messaging data. Call AWS Lambda functions from
embedded JavaScript to manage the dynamic content, and use Amazon Cognito for user and sharing management.
B. Run a website from Amazon EC2 Linux servers, storing the images in Amazon S3, and use Amazon Cognito for user accounts and sharing.
Create AWS CloudFormation templates to launch the application by using EC2 user data to install and configure the application.
C. Run a website as an AWS Elastic Beanstalk application, storing the images in Amazon S3, and using an Amazon RDS database for user
accounts and sharing. Create AWS CloudFormation templates to launch the application and perform blue/green deployments.
D. Run a website from an Amazon S3 bucket that authorizes Amazon AppStream to stream applications for a combined image viewer and
messenger that stores images in Amazon S3. Have the website use an Amazon RDS database for user accounts and sharing.
Correct Answer: C
https://aws.amazon.com/blogs/architecture/create-dynamic-contact-forms-for-s3-static-websites-using-aws-lambda-amazon-api-gateway-and-amazon-ses/
https://aws.amazon.com/getting-started/projects/build-serverless-web-app-lambda-apigateway-s3-dynamodb-cognito/
I feel the words "wants to rewrite the application" are key. They aren't looking to move the same code to AppStream, which is app streaming,
similar to VDI but scoped at the app level.
B - EC2 will be more expensive and "EC2 user data" is just silly and wrong
C - RDS isn't the best choice for a user store and there is no blue/green requirement
D - Don't believe AppStream can be launched from S3. Too Dynamic. Might be possible with Lambda.
upvoted 18 times
Light reading
https://stackoverflow.com/questions/49782492/cognito-user-authorization-to-access-an-s3-object
https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_examples_s3_cognito-bucket.html
upvoted 8 times
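To make the light reading above concrete, a minimal sketch of an Amazon Cognito identity pool handing out scoped S3 credentials (the pool ID, bucket name, and unauthenticated-guest setup are all hypothetical):

import boto3

cognito = boto3.client("cognito-identity", region_name="us-east-1")
identity_id = cognito.get_id(IdentityPoolId="us-east-1:EXAMPLE-POOL-ID")["IdentityId"]
creds = cognito.get_credentials_for_identity(IdentityId=identity_id)["Credentials"]

# The role attached to the identity pool decides what this client may read,
# e.g. only keys under a prefix that matches the Cognito identity ID
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretKey"],
    aws_session_token=creds["SessionToken"],
)
print(s3.list_objects_v2(Bucket="example-user-content", Prefix=identity_id))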
Cognito as a keyword narrows it down to A or B. CloudFormation is not needed here, so not B.
upvoted 1 times
" # tgv 1 year ago
AAA
---
upvoted 1 times
A company would like to implement a serverless application by using Amazon API Gateway, AWS Lambda, and Amazon DynamoDB. They deployed
a proof of concept and stated that the average response time is greater than what their upstream services can accept. Amazon CloudWatch
metrics did not indicate any issues with DynamoDB but showed that some Lambda functions were hitting their timeout.
Which of the following actions should the Solutions Architect consider to improve performance? (Choose two.)
A. Configure the AWS Lambda function to reuse containers to avoid unnecessary startup time.
B. Increase the amount of memory and adjust the timeout on the Lambda function. Complete performance testing to identify the ideal memory
and timeout configuration for the Lambda function.
C. Create an Amazon ElastiCache cluster running Memcached, and configure the Lambda function for VPC integration with access to the
Amazon ElastiCache cluster.
D. Enable API cache on the appropriate stage in Amazon API Gateway, and override the TTL for individual methods that require a lower TTL
than the entire stage.
E. Increase the amount of CPU, and adjust the timeout on the Lambda function. Complete performance testing to identify the ideal CPU and
timeout configuration for the Lambda function.
Correct Answer: BD
Reference:
https://lumigo.io/blog/aws-lambda-timeout-best-practices/
C. Not needed - CloudWatch showed no issues with DynamoDB, so putting a cache in front of it won't fix the Lambda timeouts.
D. Sounds good to have less load on Lambda. Caching serves responses faster and means less computation for Lambda. (https://lumigo.io/learn/aws-lambda-timeout-best-practices/)
upvoted 1 times
" # AzureDP900 11 months, 1 week ago
Before even looking answers I decided to go with B,D . It is most appropriate.
upvoted 1 times
A AND D FOR ME
upvoted 2 times
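For option B, the memory and timeout tuning is a single configuration call; a sketch with a hypothetical function name and values that would still need load testing:

import boto3

lam = boto3.client("lambda")
lam.update_function_configuration(
    FunctionName="order-api-handler",  # hypothetical
    MemorySize=1024,                   # more memory also allocates proportionally more CPU
    Timeout=30,                        # seconds
)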
A company is migrating an application to AWS. It wants to use fully managed services as much as possible during the migration. The company
needs to store large, important documents within the application with the following requirements:
✑ The data must be highly durable and available.
✑ The data must always be encrypted at rest and in transit.
✑ The encryption key must be managed by the company and rotated periodically.
Which of the following solutions should the Solutions Architect recommend?
A. Deploy the storage gateway to AWS in file gateway mode. Use Amazon EBS volume encryption using an AWS KMS key to encrypt the
storage gateway volumes.
B. Use Amazon S3 with a bucket policy to enforce HTTPS for connections to the bucket and to enforce server-side encryption and AWS KMS
for object encryption.
C. Use Amazon DynamoDB with SSL to connect to DynamoDB. Use an AWS KMS key to encrypt DynamoDB objects at rest.
D. Deploy instances with Amazon EBS volumes attached to store this data. Use EBS volume encryption using an AWS KMS key to encrypt the
data.
Correct Answer: A
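If the Amazon S3 approach in option B were chosen, a bucket policy could enforce both HTTPS and SSE-KMS; a sketch with a hypothetical bucket name:

import json
import boto3

bucket = "important-documents-example"  # hypothetical
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Reject any request that is not sent over TLS
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        },
        {   # Reject uploads that do not request SSE-KMS encryption
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
            "Condition": {"StringNotEquals": {"s3:x-amz-server-side-encryption": "aws:kms"}},
        },
    ],
}
boto3.client("s3").put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))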
A Solutions Architect is designing a highly available and reliable solution for a cluster of Amazon EC2 instances.
The Solutions Architect must ensure that any EC2 instance within the cluster recovers automatically after a system failure. The solution must
ensure that the recovered instance maintains the same IP address.
How can these requirements be met?
A. Create an AWS Lambda script to restart any EC2 instances that shut down unexpectedly.
B. Create an Auto Scaling group for each EC2 instance that has a minimum and maximum size of 1.
C. Create a new t2.micro instance to monitor the cluster instances. Configure the t2.micro instance to issue an aws ec2 reboot-instances
command upon failure.
D. Create an Amazon CloudWatch alarm for the StatusCheckFailed_System metric, and then configure an EC2 action to recover the instance.
Correct Answer: D
Reference:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-recover.html
upvoted 2 times
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-recover.html
upvoted 4 times
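Option D maps to a single CloudWatch alarm per instance with the EC2 recover action; a sketch with a hypothetical instance ID:

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
cloudwatch.put_metric_alarm(
    AlarmName="recover-i-0123456789abcdef0",
    Namespace="AWS/EC2",
    MetricName="StatusCheckFailed_System",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=2,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    # The recover action moves the instance to new hardware while keeping its
    # instance ID, private IP addresses, Elastic IPs, and instance metadata
    AlarmActions=["arn:aws:automate:us-east-1:ec2:recover"],
)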
A public retail web application uses an Application Load Balancer (ALB) in front of Amazon EC2 instances running across multiple Availability
Zones (AZs) in a
Region backed by an Amazon RDS MySQL Multi-AZ deployment. Target group health checks are configured to use HTTP and pointed at the
product catalog page. Auto Scaling is configured to maintain the web fleet size based on the ALB health check.
Recently, the application experienced an outage. Auto Scaling continuously replaced the instances during the outage. A subsequent investigation
determined that the web server metrics were within the normal range, but the database tier was experiencing high load, resulting in severely
elevated query response times.
Which of the following changes together would remediate these issues while improving monitoring capabilities for the availability and
functionality of the entire application stack for future growth? (Choose two.)
A. Configure read replicas for Amazon RDS MySQL and use the single reader endpoint in the web application to reduce the load on the
backend database tier.
B. Configure the target group health check to point at a simple HTML page instead of a product catalog page and the Amazon Route 53 health
check against the product page to evaluate full application functionality. Configure Amazon CloudWatch alarms to notify administrators when
the site fails.
C. Configure the target group health check to use a TCP check of the Amazon EC2 web server and the Amazon Route 53 health check against
the product page to evaluate full application functionality. Configure Amazon CloudWatch alarms to notify administrators when the site fails.
D. Configure an Amazon CloudWatch alarm for Amazon RDS with an action to recover a high-load, impaired RDS instance in the database tier.
E. Configure an Amazon ElastiCache cluster and place it between the web application and RDS MySQL instances to reduce the load on the
backend database tier.
Correct Answer: CE
Problem is that "single reader endpoint" is a feature of Aurora, not RDS MySQL.
So probably A is incorrect.
upvoted 1 times
A simple health check like a TCP check will be enough, because Route 53 also performs the full health check against the product page.
upvoted 1 times
A: A single reader endpoint would allow for easy future growth by simply adding more replicas. Costs aren't mentioned, so I would prefer A to D.
B: Monitoring should be as cheap as possible. Compared to C, HTTP checks are more reliable.
D: Does not work directly.
upvoted 2 times
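On the read-replica point in option A: RDS MySQL replicas are easy to add, but each one gets its own endpoint (the single reader endpoint mentioned in the option is an Aurora feature), so the application has to spread reads itself. A sketch with hypothetical identifiers:

import boto3

rds = boto3.client("rds")
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="catalog-replica-1",      # hypothetical
    SourceDBInstanceIdentifier="catalog-primary",  # hypothetical
)
# The replica's endpoint then has to be wired into the web tier (or a proxy)
# explicitly, since RDS MySQL has no shared reader endpoint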
A company is running an email application across multiple AWS Regions. The company uses Ohio (us-east-2) as the primary Region and Northern
Virginia (us- east-1) as the Disaster Recovery (DR) Region. The data is continuously replicated from the primary Region to the DR Region by a
single instance on the public subnet in both Regions. The replication messages between the Regions have a significant backlog during certain
times of the day. The backlog clears on its own after a short time, but it affects the application's RPO.
Which of the following solutions should help remediate this performance problem? (Choose two.)
B. Have the instance in the primary Region write the data to an Amazon SQS queue in the primary Region instead, and have the instance in the
DR Region poll from this queue.
C. Use multiple instances on the primary and DR Regions to send and receive the replication data.
E. Attach an additional elastic network interface to each of the instances in both Regions and set up load balancing between the network
interfaces.
Correct Answer: CE
https://aws.amazon.com/about-aws/whats-new/2017/09/elastic-load-balancing-network-load-balancer-now-supports-load-balancing-to-ip-
addresses-as-targets-for-aws-and-on-premises-resources/
upvoted 1 times
Why not C? The SQS queue in the source region would not improve RPO.
upvoted 1 times
A company has implemented AWS Organizations. It has recently set up a number of new accounts and wants to deny access to a specific set of
AWS services in these new accounts.
How can this be controlled MOST efficiently?
A. Create an IAM policy in each account that denies access to the services. Associate the policy with an IAM group, and add all IAM users to
the group.
B. Create a service control policy that denies access to the services. Add all of the new accounts to a single organizational unit (OU), and
apply the policy to that OU.
C. Create an IAM policy in each account that denies access to the services. Associate the policy with an IAM role, and instruct users to log in
using their corporate credentials and assume the IAM role.
D. Create a service control policy that denies access to the services, and apply the policy to the root of the organization.
Correct Answer: B
Reference:
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scp.html
upvoted 2 times
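For option B, the deny-list SCP and the OU attachment look roughly like this (the blocked services and OU ID are hypothetical; FullAWSAccess stays attached so everything not denied keeps working):

import json
import boto3

org = boto3.client("organizations")
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": ["redshift:*", "sagemaker:*"],  # example services to block
        "Resource": "*",
    }],
}
policy = org.create_policy(
    Name="deny-restricted-services",
    Description="Deny selected services in the new-accounts OU",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-exampl-12345678",  # hypothetical OU ID
)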
A company has deployed an application to multiple environments in AWS, including production and testing. The company has separate accounts
for production and testing, and users are allowed to create additional application users for team members or services, as needed. The Security
team has asked the Operations team for better isolation between production and testing with centralized controls on security credentials and
improved management of permissions between environments.
Which of the following options would MOST securely accomplish this goal?
A. Create a new AWS account to hold user and service accounts, such as an identity account. Create users and groups in the identity account.
Create roles with appropriate permissions in the production and testing accounts. Add the identity account to the trust policies for the roles.
B. Modify permissions in the production and testing accounts to limit creating new IAM users to members of the Operations team. Set a
strong IAM password policy on each account. Create new IAM users and groups in each account to limit developer access to just the services
required to complete their job function.
C. Create a script that runs on each account that checks user accounts for adherence to a security policy. Disable any user or service
accounts that do not comply.
D. Create all user accounts in the production account. Create roles for access in the production account and testing accounts. Grant cross-
account access from the production account to the testing account.
Correct Answer: A
Reference:
https://aws.amazon.com/ru/blogs/security/how-to-centralize-and-automate-iam-policy-creation-in-sandbox-development-and-test-
environments/
It's so commonly used, AWS even provided a way to color-code the console when you assume a role, so it shows up red when you're working in
a prod role, green in dev/test role, etc: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-console.html
upvoted 2 times
" # denccc 1 year ago
will go with A
upvoted 1 times
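The identity-account pattern in option A comes down to a role in each workload account whose trust policy names the identity account; a sketch run in the production account, with a hypothetical identity account ID and example permission set:

import json
import boto3

iam = boto3.client("iam")
trust = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},  # hypothetical identity account
        "Action": "sts:AssumeRole",
        "Condition": {"Bool": {"aws:MultiFactorAuthPresent": "true"}},
    }],
}
iam.create_role(RoleName="prod-operations", AssumeRolePolicyDocument=json.dumps(trust))
iam.attach_role_policy(
    RoleName="prod-operations",
    PolicyArn="arn:aws:iam::aws:policy/ReadOnlyAccess",  # example permission set
)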
The CISO of a large enterprise with multiple IT departments, each with its own AWS account, wants one central place where AWS permissions for
users can be managed and users authentication credentials can be synchronized with the company's existing on-premises solution.
Which solution will meet the CISO's requirements?
A. Define AWS IAM roles based on the functional responsibilities of the users in a central account. Create a SAML-based identity management
provider. Map users in the on-premises groups to IAM roles. Establish trust relationships between the other accounts and the central account.
B. Deploy a common set of AWS IAM users, groups, roles, and policies in all of the AWS accounts using AWS Organizations. Implement
federation between the on-premises identity provider and the AWS accounts.
C. Use AWS Organizations in a centralized account to define service control policies (SCPs). Create a SAML-based identity management
provider in each account and map users in the on-premises groups to AWS IAM roles.
D. Perform a thorough analysis of the user base and create AWS IAM user accounts that have the necessary permissions. Set up a process to
provision and deprovision accounts based on data in the on-premises solution.
Correct Answer: C
Also, the question asks about a place where "AWS permissions for users can be managed"; SCPs won't help much with that. It's more like IAM's job.
upvoted 15 times
" # nsvijay04b1 Most Recent % 1 week, 1 day ago
Selected Answer: C
In each account, an IAM identity provider and a role for SAML access are created, and the external IdP must be configured as a trusted provider.
upvoted 1 times
C is wrong.
upvoted 1 times
Also, logging into a central account and then assuming a role does not seem like a good option for human users. This would need to be done at each and
every account level... and what kind of services would be needed that way...
upvoted 3 times
" # 01037 1 year ago
Either A or C needs to create roles for all accounts, so neither can really control permissions centrally.
But SCPs define boundaries, so they can provide central permission control to some extent, and more simply.
So I'm inclined to C.
upvoted 1 times
A large company has increased its utilization of AWS over time in an unmanaged way. As such, they have a large number of independent AWS
accounts across different business units, projects, and environments. The company has created a Cloud Center of Excellence team, which is
responsible for managing all aspects of the AWS Cloud, including their AWS accounts.
Which of the following should the Cloud Center of Excellence team do to BEST address their requirements in a centralized way? (Choose two.)
A. Control all AWS account root user credentials. Assign AWS IAM users in the account of each user who needs to access AWS resources.
Follow the policy of least privilege in assigning permissions to each user.
B. Tag all AWS resources with details about the business unit, project, and environment. Send all AWS Cost and Usage reports to a central
Amazon S3 bucket, and use tools such as Amazon Athena and Amazon QuickSight to collect billing details by business unit.
C. Use the AWS Marketplace to choose and deploy a Cost Management tool. Tag all AWS resources with details about the business unit,
project, and environment. Send all AWS Cost and Usage reports for the AWS accounts to this tool for analysis.
D. Set up AWS Organizations. Enable consolidated billing, and link all existing AWS accounts to a master billing account. Tag all AWS
resources with details about the business unit, project and environment. Analyze Cost and Usage reports using tools such as Amazon Athena
and Amazon QuickSight, to collect billing details by business unit.
E. Using a master AWS account, create IAM users within the master account. Define IAM roles in the other AWS accounts, which cover each of
the required functions in the account. Follow the policy of least privilege in assigning permissions to each role, then enable the IAM users to
assume the roles that they need to use.
Correct Answer: AD
E is wrong, it is a bad practice to use the master account for creating users.
upvoted 1 times
To abide by industry regulations, a Solutions Architect must design a solution that will store a company's critical data in multiple public AWS
Regions, including in the United States, where the company's headquarters is located. The Solutions Architect is required to provide access to the
data stored in AWS to the company's global WAN network. The Security team mandates that no traffic accessing this data should traverse the
public internet.
How should the Solutions Architect design a highly available solution that meets the requirements and is cost-effective?
A. Establish AWS Direct Connect connections from the company headquarters to all AWS Regions in use. Use the company WAN to send
traffic over to the headquarters and then to the respective DX connection to access the data.
B. Establish two AWS Direct Connect connections from the company headquarters to an AWS Region. Use the company WAN to send traffic
over a DX connection. Use inter-region VPC peering to access the data in other AWS Regions.
C. Establish two AWS Direct Connect connections from the company headquarters to an AWS Region. Use the company WAN to send traffic
over a DX connection. Use an AWS transit VPC solution to access data in other AWS Regions.
D. Establish two AWS Direct Connect connections from the company headquarters to an AWS Region. Use the company WAN to send traffic
over a DX connection. Use Direct Connect Gateway to access data in other AWS Regions.
Correct Answer: D
Reference:
https://aws.amazon.com/blogs/aws/new-aws-direct-connect-gateway-inter-region-vpc-access/
With the launch of multi-account support for Direct Connect gateway, you can associate up to 10 Amazon VPCs from multiple accounts with a
Direct Connect gateway. The Amazon VPCs and the Direct Connect gateway must be owned by AWS Accounts that belong to the same AWS
payer account ID.
upvoted 3 times
A company wants to host its website on AWS using serverless architecture design patterns for global customers. How can the design requirements be met?
A. Use Amazon CloudFront with Amazon ECS for hosting the website. Use AWS Secrets Manager to provide user management and authentication functions. Use ECS Docker containers to build an API.
B. Use Amazon Route 53 latency routing with an Application Load Balancer and AWS Fargate in different
regions for hosting the website. use Amazon Cognito to provide user management and authentication
functions. Use Amazon EKS containers.
C. Use Amazon CloudFront with Amazon S3 for hosting static web resources. Use Amazon Cognito to provide
user management and authentication functions. Use Amazon API Gateway with AWS Lambda to build an API.
D. Use AWS Direct Connect with Amazon CloudFront and Amazon S3 for hosting static web resources. Use
Amazon Cognito to provide user management and authentication functions. Use AWS Lambda to build an API.
Correct Answer: C
upvoted 15 times
" # HellGate Most Recent % 7 months, 2 weeks ago
Selected Answer: D
B, C, D are all right way... D > C > B
D is the best answer.
upvoted 1 times
A company wants to manage the costs associated with a group of 20 applications that are infrequently used, but are still business-critical, by
migrating to AWS.
The applications are a mix of Java and Node.js spread across different instance clusters. The company wants to minimize costs while
standardizing by using a single deployment methodology. Most of the applications are part of month-end processing routines with a small number
of concurrent users, but they are occasionally run at other times. Average application memory consumption is less than 1 GB, though some
applications use as much as 2.5 GB of memory during peak processing. The most important application in the group is a billing report written in
Java that accesses multiple data sources and often runs for several hours.
Which is the MOST cost-effective solution?
A. Deploy a separate AWS Lambda function for each application. Use AWS CloudTrail logs and Amazon CloudWatch alarms to verify
completion of critical jobs.
B. Deploy Amazon ECS containers on Amazon EC2 with Auto Scaling configured for memory utilization of 75%. Deploy an ECS task for each
application being migrated with ECS task scaling. Monitor services and hosts by using Amazon CloudWatch.
C. Deploy AWS Elastic Beanstalk for each application with Auto Scaling to ensure that all requests have sufficient resources. Monitor each
AWS Elastic Beanstalk deployment by using CloudWatch alarms.
D. Deploy a new Amazon EC2 instance cluster that co-hosts all applications by using EC2 Auto Scaling and Application Load Balancers. Scale
cluster size based on a custom metric set on instance memory utilization. Purchase 3-year Reserved Instance reservations equal to the
GroupMaxSize parameter of the Auto Scaling group.
Correct Answer: C
Side note: It's not an available choice, but I'd argue that since these apps are only sporadically used, Fargate would likely be even more cost
effective than EC2-based ECS: https://aws.amazon.com/blogs/containers/theoretical-cost-optimization-by-amazon-ecs-launch-type-fargate-
vs-ec2/
upvoted 2 times
A Solutions Architect must build a highly available infrastructure for a popular global video game that runs on a mobile phone platform. The
application runs on
Amazon EC2 instances behind an Application Load Balancer. The instances run in an Auto Scaling group across multiple Availability Zones. The
database tier is an Amazon RDS MySQL Multi-AZ instance. The entire application stack is deployed in both us-east-1 and eu-central-1. Amazon
Route 53 is used to route traffic to the two installations using a latency-based routing policy. A weighted routing policy is configured in Route 53 as
a failover to another region in case the installation in a region becomes unresponsive.
During the testing of disaster recovery scenarios, after blocking access to the Amazon RDS MySQL instance in eu-central-1 from all the application
instances running in that region, Route 53 does not automatically fail over all traffic to us-east-1.
Based on this situation, which changes would allow the infrastructure to fail over to us-east-1? (Choose two.)
A. Specify a weight of 100 for the record pointing to the primary Application Load Balancer in us-east-1 and a weight of 60 for the record pointing to
the primary Application Load Balancer in eu-central-1.
B. Specify a weight of 100 for the record pointing to the primary Application Load Balancer in us-east-1 and a weight of 0 for the record
pointing to the primary Application Load Balancer in eu-central-1.
C. Set the value of Evaluate Target Health to Yes on the latency alias resources for both eu-central-1 and us-east-1.
D. Write a URL in the application that performs a health check on the database layer. Add it as a health check within the weighted routing
policy in both regions.
E. Disable any existing health checks for the resources in the policies and set a weight of 0 for the records pointing to primary in both eu-
central-1 and us-east-1, and set a weight of 100 for the primary Application Load Balancer only in the region that has healthy resources.
Correct Answer: BC
If all the records that have a weight greater than 0 are unhealthy, then Route 53 considers the zero-weighted records.
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/health-checks-how-route-53-chooses-records.html
Answer is BC
upvoted 5 times
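Option C is a per-record flag on the latency alias records; a sketch with hypothetical hosted zone and ALB values:

import boto3

route53 = boto3.client("route53")
route53.change_resource_record_sets(
    HostedZoneId="Z0EXAMPLE12345",  # hypothetical public hosted zone
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "game.example.com",
            "Type": "A",
            "SetIdentifier": "eu-central-1",
            "Region": "eu-central-1",          # latency-based routing
            "AliasTarget": {
                "HostedZoneId": "Z0EXAMPLEALBZONE",  # the ALB's own hosted zone ID
                "DNSName": "game-alb-123456.eu-central-1.elb.amazonaws.com",
                "EvaluateTargetHealth": True,   # the setting option C asks for
            },
        },
    }]},
)
# The same change is repeated for the us-east-1 record; option D's custom health
# check URL is what actually surfaces a database failure to Route 53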
in both regions.
Did we see the same question?
upvoted 1 times
An online e-commerce business is running a workload on AWS. The application architecture includes a web tier, an application tier for business
logic, and a database tier for user and transactional data management. The database server has a 100 GB memory requirement. The business
requires cost-efficient disaster recovery for the application with an RTO of 5 minutes and an RPO of 1 hour. The business also has a regulatory requirement for
out-of-region disaster recovery with a minimum distance between the primary and alternate sites of 250 miles.
Which of the following options can the Solutions Architect design to create a comprehensive solution for this customer that meets the disaster
recovery requirements?
A. Back up the application and database data frequently and copy them to Amazon S3. Replicate the backups using S3 cross-region
replication, and use AWS CloudFormation to instantiate infrastructure for disaster recovery and restore data from Amazon S3.
B. Employ a pilot light environment in which the primary database is configured with mirroring to build a standby database on m4.large in the
alternate region. Use AWS CloudFormation to instantiate the web servers, application servers and load balancers in case of a disaster to bring
the application up in the alternate region. Vertically resize the database to meet the full production demands, and use Amazon Route 53 to
switch traffic to the alternate region.
C. Use a scaled-down version of the fully functional production environment in the alternate region that includes one instance of the web
server, one instance of the application server, and a replicated instance of the database server in standby mode. Place the web and the
application tiers in an Auto Scaling group behind a load balancer, which can automatically scale when load arrives at the application. Use
Amazon Route 53 to switch traffic to the alternate region.
D. Employ a multi-region solution with fully functional web, application, and database tiers in both regions with equivalent capacity. Activate
the primary database in one region only and the standby database in the other region. Use Amazon Route 53 to automatically switch traffic
from one region to another using health check routing policies.
Correct Answer: D
Warm standby (RPO in seconds, RTO in minutes): Maintain a scaled-down but fully functional version of your workload always running in the
DR Region. Business-critical systems are fully duplicated and are always on, but with a scaled down fleet. When the time comes for recovery,
the system is scaled up quickly to handle the production load. The more scaled-up the Warm Standby is, the lower RTO and control plane
reliance will be. When scaled up to full scale this is known as a Hot Standby.
upvoted 2 times
If I have no clue whatsoever, I go read the documentation on AWS and/or try it myself in my AWS account(s) before even looking at the
comments. Only when I think I know the answer, I check.
This is somewhat time-consuming but that way I really learn the stuff, not cram for the exam alone.
upvoted 3 times
A company runs a memory-intensive analytics application using on-demand Amazon EC2 C5 compute optimized instance. The application is used
continuously and application demand doubles during working hours. The application currently scales based on CPU usage. When scaling in
occurs, a lifecycle hook is used because the instance requires 4 minutes to clean the application state before terminating.
Because users reported poor performance during working hours, scheduled scaling actions were implemented so additional instances would be
added during working hours. The Solutions Architect has been asked to reduce the cost of the application.
Which solution is MOST cost-effective?
A. Use the existing launch configuration that uses C5 instances, and update the application AMI to include the Amazon CloudWatch agent.
Change the Auto Scaling policies to scale based on memory utilization. Use Reserved Instances for the number of instances required after
working hours, and use Spot Instances to cover the increased demand during working hours.
B. Update the existing launch configuration to use R5 instances, and update the application AMI to include SSM Agent. Change the Auto
Scaling policies to scale based on memory utilization. Use Reserved Instances for the number of instances required after working hours, and
use Spot Instances with on-Demand instances to cover the increased demand during working hours.
C. Use the existing launch configuration that uses C5 instances, and update the application AMI to include SSM Agent. Leave the Auto Scaling
policies to scale based on CPU utilization. Use scheduled Reserved Instances for the number of instances required after working hours, and
use Spot Instances to cover the increased demand during working hours.
D. Create a new launch configuration using R5 instances, and update the application AMI to include the Amazon CloudWatch agent. Change
the Auto Scaling policies to scale based on memory utilization. Use Reserved Instances for the number of instances required after working
hours, and use Standard Reserved Instances with On-Demand Instances to cover the increased demand during working hours.
Correct Answer: D
Reference:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitoring_ec2.html
https://aws.amazon.com/ec2/pricing/reserved-instances/
Standard RIs: These provide the most significant discount (up to 72% off On-Demand) and are best suited for steady-state usage.
Scheduled RIs: These are available to launch within the time windows you reserve. This option allows you to match your capacity reservation to a
predictable recurring schedule that only requires a fraction of a day, a week, or a month.
upvoted 1 times
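For option D, scaling on memory requires the CloudWatch agent to publish a memory metric; a sketch of the agent configuration written as a Python dict (the file location, dimension choice, and interval are assumptions):

import json

agent_config = {
    "metrics": {
        "append_dimensions": {"AutoScalingGroupName": "${aws:AutoScalingGroupName}"},
        "metrics_collected": {
            "mem": {
                "measurement": ["mem_used_percent"],
                "metrics_collection_interval": 60,
            }
        },
    }
}
with open("cloudwatch-agent-config.json", "w") as f:
    json.dump(agent_config, f, indent=2)
# The file is then loaded with the agent's fetch-config command, and the Auto Scaling
# policy targets the resulting mem_used_percent metric instead of CPU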
A company has a data center that must be migrated to AWS as quickly as possible. The data center has a 500 Mbps AWS Direct Connect link and
a separate, fully available 1 Gbps ISP connection. A Solutions Architect must transfer 20 TB of data from the data center to an Amazon S3 bucket.
What is the FASTEST way to transfer the data?
Correct Answer: B
Import/Export supports importing and exporting data into and out of Amazon S3 buckets. For significant data sets, AWS Import/Export is often
faster than Internet transfer and more cost effective than upgrading your connectivity.
Reference:
https://stackshare.io/stackups/aws-direct-connect-vs-aws-import-export
Along with Transfer Acceleration, which provides a consistent experience, the entire data set can be moved in about 2 days. However, AWS Import/Export
(now Snowball) takes around a week to make the data available on AWS. The answer is D.
upvoted 23 times
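A quick back-of-the-envelope check of the transfer-time claim, assuming the links could be fully utilized:

data_bits = 20 * 10**12 * 8  # 20 TB expressed in bits
for name, rate_bps in [("500 Mbps Direct Connect", 500e6), ("1 Gbps ISP link", 1e9)]:
    days = data_bits / rate_bps / 86400
    print(f"{name}: ~{days:.1f} days")  # ~3.7 days and ~1.9 days respectively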
The question doesn't give any location information, so it's not easy to estimate the transfer rate through S3 Transfer Acceleration. Checking the AWS
documentation, I could find a pretty similar case of migrating 25 TB of data in example 2 at the link below.
https://aws.amazon.com/snowball/pricing/
A company wants to host its website on AWS using serverless architecture design patterns for global customers. The company has outlined its
requirements as follows:
✑ The website should be responsive.
✑ The website should offer minimal latency.
✑ The website should be highly available.
✑ Users should be able to authenticate through social identity providers such as Google, Facebook, and Amazon.
✑ There should be baseline DDoS protections for spikes in traffic.
How can the design requirements be met?
A. Use Amazon CloudFront with Amazon ECS for hosting the website. Use AWS Secrets Manager to provide user management and
authentication functions. Use ECS Docker containers to build an API.
B. Use Amazon Route 53 latency routing with an Application Load Balancer and AWS Fargate in different regions for hosting the website. Use
Amazon Cognito to provide user management and authentication functions. Use Amazon EKS containers to build an API.
C. Use Amazon CloudFront with Amazon S3 for hosting static web resources. Use Amazon Cognito to provide user management and
authentication functions. Use Amazon API Gateway with AWS Lambda to build an API.
D. Use AWS Direct Connect with Amazon CloudFront and Amazon S3 for hosting static web resources. Use Amazon Cognito to provide user
management and authentication functions. Use AWS Lambda to build an API.
Correct Answer: C
That's why you'll use Amazon API Gateway with AWS Lambda to build an API.
And recall that:
A company wants to host its website on AWS using serverless architecture design patterns
SAM is not compatible with EKS but it is with Lambda and API Gateway
upvoted 1 times
it's C
upvoted 2 times
" # Ebi 1 year, 1 month ago
C is the answer
upvoted 4 times
A company is currently using AWS CodeCommit for its source control and AWS CodePipeline for continuous integration. The pipeline has a build
stage for building the artifacts, which is then staged in an Amazon S3 bucket.
The company has identified various improvement opportunities in the existing process, and a Solutions Architect has been given the following
requirements:
✑ Create a new pipeline to support feature development
✑ Support feature development without impacting production applications
✑ Incorporate continuous testing with unit tests
✑ Isolate development and production artifacts
✑ Support the capability to merge tested code into production code.
How should the Solutions Architect achieve these requirements?
A. Trigger a separate pipeline from CodeCommit feature branches. Use AWS CodeBuild for running unit tests. Use CodeBuild to stage the
artifacts within an S3 bucket in a separate testing account.
B. Trigger a separate pipeline from CodeCommit feature branches. Use AWS Lambda for running unit tests. Use AWS CodeDeploy to stage the
artifacts within an S3 bucket in a separate testing account.
C. Trigger a separate pipeline from CodeCommit tags. Use Jenkins for running unit tests. Create a stage in the pipeline with S3 as the target
for staging the artifacts with an S3 bucket in a separate testing account.
D. Create a separate CodeCommit repository for feature development and use it to trigger the pipeline. Use AWS Lambda for running unit
tests. Use AWS CodeBuild to stage the artifacts within different S3 buckets in the same production account.
Correct Answer: A
Reference:
https://docs.aws.amazon.com/codebuild/latest/userguide/how-to-create-pipeline.html
A company runs an ordering system on AWS using Amazon SQS and AWS Lambda, with each order received as a JSON message. Recently the
company had a marketing event that led to a tenfold increase in orders. With this increase, the following undesired behaviors started in the
ordering system:
✑ Lambda failures while processing orders lead to queue backlogs.
✑ The same orders have been processed multiple times.
A Solutions Architect has been asked to solve the existing issues with the ordering system and add the following resiliency features:
✑ Retain problematic orders for analysis.
✑ Send notification if errors go beyond a threshold value.
How should the Solutions Architect meet these requirements?
A. Receive multiple messages with each Lambda invocation, add error handling to message processing code and delete messages after
processing, increase the visibility timeout for the messages, create a dead letter queue for messages that could not be processed, create an
Amazon CloudWatch alarm on Lambda errors for notification.
B. Receive single messages with each Lambda invocation, put additional Lambda workers to poll the queue, delete messages after
processing, increase the message timer for the messages, use Amazon CloudWatch Logs for messages that could not be processed, create a
CloudWatch alarm on Lambda errors for notification.
C. Receive multiple messages with each Lambda invocation, use long polling when receiving the messages, log the errors from the message
processing code using Amazon CloudWatch Logs, create a dead letter queue with AWS Lambda to capture failed invocations, create
CloudWatch events on Lambda errors for notification.
D. Receive multiple messages with each Lambda invocation, add error handling to message processing code and delete messages after
processing, increase the visibility timeout for the messages, create a delay queue for messages that could not be processed, create an
Amazon CloudWatch metric on Lambda errors for notification.
Correct Answer: D
B - A single message per Lambda invocation will increase concurrency requirements and increase failure rates. There are no "Lambda workers", just an increased
concurrency limit.
C - There is no long polling in Lambda
D is incorrect, the delay queue is used to throttle incoming messages and not handle messages that could not be processed.
upvoted 27 times
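The mechanics behind option A are a redrive policy on the orders queue plus an alarm on the Lambda Errors metric; a sketch with hypothetical queue, function, and topic names:

import json
import boto3

sqs = boto3.client("sqs")
cloudwatch = boto3.client("cloudwatch")

# After maxReceiveCount failed receives, SQS moves the message to the dead letter
# queue, so problematic orders are retained for analysis instead of looping forever
sqs.set_queue_attributes(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/111122223333/orders",
    Attributes={
        "RedrivePolicy": json.dumps({
            "deadLetterTargetArn": "arn:aws:sqs:us-east-1:111122223333:orders-dlq",
            "maxReceiveCount": "3",
        }),
        # Keep the visibility timeout comfortably above the Lambda timeout so
        # in-flight orders are not delivered twice while still being processed
        "VisibilityTimeout": "180",
    },
)

# Notify operators once Lambda errors pass a threshold
cloudwatch.put_metric_alarm(
    AlarmName="order-processor-errors",
    Namespace="AWS/Lambda",
    MetricName="Errors",
    Dimensions=[{"Name": "FunctionName", "Value": "order-processor"}],
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=10,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:ops-alerts"],
)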
An organization has recently grown through acquisitions. Two of the purchased companies use the same IP CIDR range. There is a new short-term
requirement to allow AnyCompany A (VPC-A) to communicate with a server that has the IP address 10.0.0.77 in AnyCompany B (VPC-B).
AnyCompany A must also communicate with all resources in AnyCompany C (VPC-C). The Network team has created the VPC peer links, but it is
having issues with communications between VPC-A and VPC-B. After an investigation, the team believes that the routing tables in the VPCs are
incorrect.
What configuration will allow AnyCompany A to communicate with AnyCompany C in addition to the database in AnyCompany B?
A. On VPC-A, create a static route for the VPC-B CIDR range (10.0.0.0/24) across VPC peer pcx-AB. Create a static route of 10.0.0.0/16 across
VPC peer pcx-AC. On VPC-B, create a static route for VPC-A CIDR (172.16.0.0/24) on peer pcx-AB. On VPC-C, create a static route for VPC-A
CIDR (172.16.0.0/24) across peer pcx-AC.
B. On VPC-A, enable dynamic route propagation on pcx-AB and pcx-AC. On VPC-B, enable dynamic route propagation and use security groups
to allow only the IP address 10.0.0.77/32 on VPC peer pcx-AB. On VPC-C, enable dynamic route propagation with VPC-A on peer pcx-AC.
C. On VPC-A, create network access control lists that block the IP address 10.0.0.77/32 on VPC peer pcx-AC. On VPC-A, create a static route
for VPC-B CIDR (10.0.0.0/24) on pcx-AB and a static route for VPC-C CIDR (10.0.0.0/24) on pcx-AC. On VPC-B, create a static route for VPC-A
CIDR (172.16.0.0/24) on peer pcx-AB. On VPC-C, create a static route for VPC-A CIDR (172.16.0.0/24) across peer pcx-AC.
D. On VPC-A, create a static route for the VPC-B (10.0.0.77/32) database across VPC peer pcx-AB. Create a static route for the VPC-C CIDR on
VPC peer pcx-AC. On VPC-B, create a static route for VPC-A CIDR (172.16.0.0/24) on peer pcx-AB. On VPC-C, create a static route for VPC-A
CIDR (172.16.0.0/24) across peer pcx-AC.
Correct Answer: C
Go with D
upvoted 1 times
" # AzureDP900 11 months ago
D works fine
upvoted 1 times
A company is designing a new highly available web application on AWS. The application requires consistent and reliable connectivity from the
application servers in AWS to a backend REST API hosted in the company's on-premises environment. The backend connection between AWS and
on-premises will be routed over an AWS Direct Connect connection through a private virtual interface. Amazon Route 53 will be used to manage
private DNS records for the application to resolve the IP address on the backend REST API.
Which design would provide a reliable connection to the backend API?
A. Implement at least two backend endpoints for the backend REST API, and use Route 53 health checks to monitor the availability of each
backend endpoint and perform DNS-level failover.
B. Install a second Direct Connect connection from a different network carrier and attach it to the same virtual private gateway as the first
Direct Connect connection.
C. Install a second cross connect for the same Direct Connect connection from the same network carrier, and join both connections to the
same link aggregation group (LAG) on the same private virtual interface.
D. Create an IPSec VPN connection routed over the public internet from the on-premises data center to AWS and attach it to the same virtual
private gateway as the Direct Connect connection.
Correct Answer: B
A - The ask is, Which design would provide a "reliable connection" to the backend API? not to re-design the backend implementation for High
Availability.
C - 2 DX connections from the same provider create a single point of failure
D - VPN over the public internet is generally less reliable than a dedicated DX connection.
upvoted 22 times
People are suggesting Direct Connect gateway; I agree, but it isn't mentioned in any answer, so it's not in question, and a normal DX setup will terminate two connections on a VGW
as below:
https://aws.amazon.com/directconnect/resiliency-recommendation/?nc=sn&loc=4&dn=2
If anyone wants to see a Direct Connect gateway, please see the URL below:
https://www.stax.io/changelog/2020-10-06-new-direct-connect-functionality-for-stax-networks/
upvoted 2 times
A retail company is running an application that stores invoice files in an Amazon S3 bucket and metadata about the files in an Amazon DynamoDB
table. The application software runs in both us-east-1 and eu-west-1. The S3 bucket and DynamoDB table are in us-east-1. The company wants to
protect itself from data corruption and loss of connectivity to either Region.
Which option meets these requirements?
A. Create a DynamoDB global table to replicate data between us-east-1 and eu-west-1. Enable continuous backup on the DynamoDB table in
us-east-1. Enable versioning on the S3 bucket.
B. Create an AWS Lambda function triggered by Amazon CloudWatch Events to make regular backups of the DynamoDB table. Set up S3 cross-
region replication from us-east-1 to eu-west-1. Set up MFA delete on the S3 bucket in us-east-1.
C. Create a DynamoDB global table to replicate data between us-east-1 and eu-west-1. Enable versioning on the S3 bucket. Implement strict
ACLs on the S3 bucket.
D. Create a DynamoDB global table to replicate data between us-east-1 and eu-west-1. Enable continuous backup on the DynamoDB table in
us-east-1. Set up S3 cross-region replication from us-east-1 to eu-west-1.
Correct Answer: D
A company wants to launch an online shopping website in multiple countries and must ensure that customers are protected against potential
`man-in-the-middle` attacks.
Which architecture will provide the MOST secure site access?
A. Use Amazon Route 53 for domain registration and DNS services. Enable DNSSEC for all Route 53 requests. Use AWS Certificate Manager
(ACM) to register TLS/SSL certificates for the shopping website, and use Application Load Balancers configured with those TLS/SSL
certificates for the site. Use the Server Name Identification extension in all client requests to the site.
B. Register 2048-bit encryption keys from a third-party certificate service. Use a third-party DNS provider that uses the customer managed
keys for DNSSEC. Upload the keys to ACM, and use ACM to automatically deploy the certificates for secure web services to an EC2 front-end
web server fleet by using NGINX. Use the Server Name Identification extension in all client requests to the site.
C. Use Route 53 for domain registration. Register 2048-bit encryption keys from a third-party certificate service. Use a third-party DNS service
that supports DNSSEC for DNS requests that use the customer managed keys. Import the customer managed keys to ACM to deploy the
certificates to Classic Load Balancers configured with those TLS/SSL certificates for the site. Use the Server Name Identification extension in
all client requests to the site.
D. Use Route 53 for domain registration, and host the company DNS root servers on Amazon EC2 instances running Bind. Enable DNSSEC for
DNS requests. Use ACM to register TLS/SSL certificates for the shopping website, and use Application Load Balancers configured with those
TLS/SSL certificates for the site. Use the Server Name Identification extension in all client requests to the site.
Correct Answer: B
A company is creating an account strategy so that they can begin using AWS. The Security team will provide each team with the permissions they
need to follow the principle of least privileged access. Teams would like to keep their resources isolated from other groups, and the Finance team
would like each team's resource usage separated for billing purposes.
Which account creation process meets these requirements and allows for changes?
A. Create a new AWS Organizations account. Create groups in Active Directory and assign them to roles in AWS to grant federated access.
Require each team to tag their resources, and separate bills based on tags. Control access to resources through IAM granting the minimally
required privilege.
B. Create individual accounts for each team. Assign the security account as the master account, and enable consolidated billing for all other
accounts. Create a cross-account role for security to manage accounts, and send logs to a bucket in the security account.
C. Create a new AWS account, and use AWS Service Catalog to provide teams with the required resources. Implement a third-party billing
solution to provide the Finance team with the resource use for each team based on tagging. Isolate resources using IAM to avoid account
sprawl. Security will control and monitor logs and permissions.
D. Create a master account for billing using Organizations, and create each team's account from that master account. Create a security
account for logs and cross-account access. Apply service control policies on each account, and grant the Security team cross-account access
to all accounts. Security will create IAM policies for each account to maintain least privilege access.
Correct Answer: B
By creating individual IAM users for people accessing your account, you can give each IAM user a unique set of security credentials. You can
also grant different permissions to each IAM user. If necessary, you can change or revoke an IAM user's permissions anytime. (If you give out
your root user credentials, it can be difficult to revoke them, and it is impossible to restrict their permissions.)
Reference:
https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html
A company has a 24 TB MySQL database in its on-premises data center that grows at the rate of 10 GB per day. The data center is connected to
the company's
AWS infrastructure with a 50 Mbps VPN connection.
The company is migrating the application and workload to AWS. The application code is already installed and tested on Amazon EC2. The
company now needs to migrate the database and wants to go live on AWS within 3 weeks.
Which of the following approaches meets the schedule with LEAST downtime?
A. 1. Use the VM Import/Export service to import a snapshot of the on-premises database into AWS. 2. Launch a new EC2 instance from the
snapshot. 3. Set up ongoing database replication from on premises to the EC2 database over the VPN. 4. Change the DNS entry to point to the
EC2 database. 5. Stop the replication.
B. 1. Launch an AWS DMS instance. 2. Launch an Amazon RDS Aurora MySQL DB instance. 3. Configure the AWS DMS instance with on-
premises and Amazon RDS database information. 4. Start the replication task within AWS DMS over the VPN. 5. Change the DNS entry to point
to the Amazon RDS MySQL database. 6. Stop the replication.
C. 1. Create a database export locally using database-native tools. 2. Import that into AWS using AWS Snowball. 3. Launch an Amazon RDS
Aurora DB instance. 4. Load the data in the RDS Aurora DB instance from the export. 5. Set up database replication from the on-premises
database to the RDS Aurora DB instance over the VPN. 6. Change the DNS entry to point to the RDS Aurora DB instance. 7. Stop the
replication.
D. 1. Take the on-premises application offline. 2. Create a database export locally using database-native tools. 3. Import that into AWS using
AWS Snowball. 4. Launch an Amazon RDS Aurora DB instance. 5. Load the data in the RDS Aurora DB instance from the export. 6. Change the
DNS entry to point to the Amazon RDS Aurora DB instance. 7. Put the Amazon EC2 hosted application online.
Correct Answer: C
A company wants to allow its Marketing team to perform SQL queries on customer records to identify market segments. The data is spread
across hundreds of files. The records must be encrypted in transit and at rest. The Team Manager must have the ability to manage users and
groups, but no team members should have access to services or resources not required for the SQL queries. Additionally, Administrators need to
audit the queries made and receive notifications when a query violates rules defined by the Security team.
AWS Organizations has been used to create a new account and an AWS IAM user with administrator permissions for the Team Manager.
Which design meets these requirements?
A. Apply a service control policy (SCP) that allows access to IAM, Amazon RDS, and AWS CloudTrail. Load customer records in Amazon RDS
MySQL and train users to execute queries using the AWS CLI. Stream the query logs to Amazon CloudWatch Logs from the RDS database
instance. Use a subscription filter with AWS Lambda functions to audit and alarm on queries against personal data.
B. Apply a service control policy (SCP) that denies access to all services except IAM, Amazon Athena, Amazon S3, and AWS CloudTrail. Store
customer record files in Amazon S3 and train users to execute queries using the CLI via Athena. Analyze CloudTrail events to audit and alarm
on queries against personal data.
C. Apply a service control policy (SCP) that denies access to all services except IAM, Amazon DynamoDB, and AWS CloudTrail. Store customer
records in DynamoDB and train users to execute queries using the AWS CLI. Enable DynamoDB streams to track the queries that are issued
and use an AWS Lambda function for real-time monitoring and alerting.
D. Apply a service control policy (SCP) that allows access to IAM, Amazon Athena, Amazon S3, and AWS CloudTrail. Store customer records
as files in Amazon S3 and train users to leverage the Amazon S3 Select feature and execute queries using the AWS CLI. Enable S3 object-level
logging and analyze CloudTrail events to audit and alarm on queries against personal data.
Correct Answer: D
This is an easy one as solution-type questions go; I hope I get it in my exam.
upvoted 1 times
" # Smartphone 1 year ago
Answer is B.
Each of the following policies is an example of a deny list policy strategy. Deny list policies must be attached along with other policies that allow
the approved actions in the affected accounts.
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps_examples.html
upvoted 1 times
A Solutions Architect is responsible for redesigning a legacy Java application to improve its availability, data durability, and scalability. Currently,
the application runs on a single high-memory Amazon EC2 instance. It accepts HTTP requests from upstream clients, adds them to an in-memory
queue, and responds with a
200 status. A separate application thread reads items from the queue, processes them, and persists the results to an Amazon RDS MySQL
instance. The processing time for each item takes 90 seconds on average, most of which is spent waiting on external service calls, but the
application is written to process multiple items in parallel.
Traffic to this service is unpredictable. During periods of high load, items may sit in the internal queue for over an hour while the application
processes the backlog.
In addition, the current system has issues with availability and data loss if the single application node fails.
Clients that access this service cannot be modified. They expect to receive a response to each HTTP request they send within 10 seconds before
they will time out and retry the request.
Which approach would improve the availability and durability of the system while decreasing the processing latency and minimizing costs?
A. Create an Amazon API Gateway REST API that uses Lambda proxy integration to pass requests to an AWS Lambda function. Migrate the
core processing code to a Lambda function and write a wrapper class that provides a handler method that converts the proxy events to the
internal application data model and invokes the processing module.
B. Create an Amazon API Gateway REST API that uses a service proxy to put items in an Amazon SQS queue. Extract the core processing code
from the existing application and update it to pull items from Amazon SQS instead of an in-memory queue. Deploy the new processing
application to smaller EC2 instances within an Auto Scaling group that scales dynamically based on the approximate number of messages in
the Amazon SQS queue.
C. Modify the application to use Amazon DynamoDB instead of Amazon RDS. Configure Auto Scaling for the DynamoDB table. Deploy the
application within an Auto Scaling group with a scaling policy based on CPU utilization. Back the in-memory queue with a memory-mapped file
to an instance store volume and periodically write that file to Amazon S3.
D. Update the application to use a Redis task queue instead of the in-memory queue. Build a Docker container image for the application.
Create an Amazon ECS task definition that includes the application container and a separate container to host Redis. Deploy the new task
definition as an ECS service using AWS Fargate, and enable Auto Scaling.
Correct Answer: B
Reference:
https://aws.amazon.com/blogs/database/introducing-amazon-elasticsearch-service-as-a-target-in-aws-database-migration-service/
My answer is B
upvoted 26 times
A Solutions Architect needs to migrate a legacy application from on premises to AWS. On premises, the application runs on two Linux servers
behind a load balancer and accesses a database that is master-master on two servers. Each application server requires a license file that is tied to
the MAC address of the server's network adapter. It takes the software vendor 12 hours to send new license files through email. The application
requires configuration files to use static
IPv4 addresses to access the database servers, not DNS.
Given these requirements, which steps should be taken together to enable a scalable architecture for the application servers? (Choose two.)
A. Create a pool of ENIs, request license files from the vendor for the pool, and store the license files within Amazon S3. Create automation to
download an unused license, and attach the corresponding ENI at boot time.
B. Create a pool of ENIs, request license files from the vendor for the pool, store the license files on an Amazon EC2 instance, modify the
configuration files, and create an AMI from the instance. Use this AMI for all instances.
C. Create a bootstrap automation to request a new license file from the vendor with a unique return email. Have the server configure itself with
the received license file.
D. Create bootstrap automation to attach an ENI from the pool, read the database IP addresses from AWS Systems Manager Parameter Store,
and inject those parameters into the local configuration files. Keep SSM up to date using a Lambda function.
E. Install the application on an EC2 instance, configure the application, and configure the IP address information. Create an AMI from this
instance and use it for all instances.
Correct Answer: CD
Having the database IP addresses on Parameter Store ensures that all the EC2 instances will have a central location to retrieve the IP addresses.
This also reduces the need to constantly update any script from inside the EC2 instance even if you add/remove more databases in the future.
upvoted 4 times
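A boot-time sketch of the ENI-pool plus Parameter Store idea from options A and D (the tag name, parameter name, and hard-coded instance ID are all hypothetical; in practice the ID comes from instance metadata):

import boto3

ec2 = boto3.client("ec2")
ssm = boto3.client("ssm")
INSTANCE_ID = "i-0123456789abcdef0"  # placeholder for the metadata lookup

# Pick any unattached ENI tagged as part of the pre-licensed pool
eni = ec2.describe_network_interfaces(Filters=[
    {"Name": "tag:pool", "Values": ["license"]},
    {"Name": "status", "Values": ["available"]},
])["NetworkInterfaces"][0]

ec2.attach_network_interface(
    NetworkInterfaceId=eni["NetworkInterfaceId"],
    InstanceId=INSTANCE_ID,
    DeviceIndex=1,  # eth0 stays for normal traffic; the licensed MAC rides on eth1
)

# Static database IPs kept centrally in Parameter Store
db_ips = ssm.get_parameter(Name="/legacy-app/db-ips")["Parameter"]["Value"]
print(db_ips)  # e.g. "10.0.1.10,10.0.1.11", to be injected into the config files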
A company has an Amazon VPC that is divided into a public subnet and a private subnet. A web application runs in Amazon VPC, and each subnet
has its own
NACL. The public subnet has a CIDR of 10.0.0.0/24. An Application Load Balancer is deployed to the public subnet. The private subnet has a CIDR
of 10.0.1.0/24.
Amazon EC2 instances that run a web server on port 80 are launched into the private subnet.
Only network traffic that is required for the Application Load Balancer to access the web application can be allowed to travel between the public
and private subnets.
What collection of rules should be written to ensure that the private subnet's NACL meets the requirement? (Choose two.)
Correct Answer: BC
Reference:
https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Scenario3.html
A company has an internal AWS Elastic Beanstalk worker environment inside a VPC that must access an external payment gateway API available
on an HTTPS endpoint on the public internet. Because of security policies, the payment gateway's Application team can grant access to only one
public IP address.
Which architecture will set up an Elastic Beanstalk environment to access the company's application without making multiple changes on the
company's end?
A. Configure the Elastic Beanstalk application to place Amazon EC2 instances in a private subnet with an outbound route to a NAT gateway in
a public subnet. Associate an Elastic IP address to the NAT gateway that can be whitelisted on the payment gateway application side.
B. Configure the Elastic Beanstalk application to place Amazon EC2 instances in a public subnet with an internet gateway. Associate an
Elastic IP address to the internet gateway that can be whitelisted on the payment gateway application side.
C. Configure the Elastic Beanstalk application to place Amazon EC2 instances in a private subnet. Set an HTTPS_PROXY application
parameter to send outbound HTTPS connections to an EC2 proxy server deployed in a public subnet. Associate an Elastic IP address to the
EC2 proxy host that can be whitelisted on the payment gateway application side.
D. Configure the Elastic Beanstalk application to place Amazon EC2 instances in a public subnet. Set the HTTPS_PROXY and NO_PROXY
application parameters to send non-VPC outbound HTTPS connections to an EC2 proxy server deployed in a public subnet. Associate an
Elastic IP address to the EC2 proxy host that can be whitelisted on the payment gateway application side.
Correct Answer: A
Reference:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/vpc.html
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/vpc.html
upvoted 2 times
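Option A boils down to a NAT gateway with an Elastic IP that the payment provider can whitelist; a sketch with hypothetical subnet and route table IDs:

import boto3

ec2 = boto3.client("ec2")

# Allocate the fixed public IP the payment gateway team will whitelist
eip = ec2.allocate_address(Domain="vpc")

# Create the NAT gateway in a public subnet and wait for it to become available
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0publicEXAMPLE",        # hypothetical public subnet
    AllocationId=eip["AllocationId"],
)["NatGateway"]
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat["NatGatewayId"]])

# Route the private subnet's internet-bound traffic through the NAT gateway
ec2.create_route(
    RouteTableId="rtb-0privateEXAMPLE",      # hypothetical private route table
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat["NatGatewayId"],
)
print("Whitelist this IP:", eip["PublicIp"])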
A company has a website that enables users to upload videos. Company policy states the uploaded videos must be analyzed for restricted
content. An uploaded video is placed in Amazon S3, and a message is pushed to an Amazon SQS queue with the video's location. A backend
application pulls this location from
Amazon SQS and analyzes the video.
The video analysis is compute-intensive and occurs sporadically during the day. The website scales with demand. The video analysis application
runs on a fixed number of instances. Peak demand occurs during the holidays, so the company must add instances to the application during this
time. All instances used are currently on-demand Amazon EC2 T2 instances. The company wants to reduce the cost of the current solution.
Which of the following solutions is MOST cost-effective?
A. Keep the website on T2 instances. Determine the minimum number of website instances required during off-peak times and use Spot
Instances to cover them while using Reserved Instances to cover peak demand. Use Amazon EC2 R4 and Amazon EC2 R5 Reserved Instances
in an Auto Scaling group for the video analysis application.
B. Keep the website on T2 instances. Determine the minimum number of website instances required during off-peak times and use Reserved
Instances to cover them while using On-Demand Instances to cover peak demand. Use Spot Fleet for the video analysis application comprised
of Amazon EC2 C4 and Amazon EC2 C5 Spot Instances.
C. Migrate the website to AWS Elastic Beanstalk and Amazon EC2 C4 instances. Determine the minimum number of website instances
required during off-peak times and use On-Demand Instances to cover them while using Spot capacity to cover peak demand. Use Spot Fleet
for the video analysis application comprised of C4 and Amazon EC2 C5 instances.
D. Migrate the website to AWS Elastic Beanstalk and Amazon EC2 R4 instances. Determine the minimum number of website instances
required during off-peak times and use Reserved Instances to cover them while using On-Demand Instances to cover peak demand. Use Spot
Fleet for the video analysis application comprised of R4 and Amazon EC2 R5 instances.
Correct Answer: B
upvoted 2 times
" # Ebi 1 year ago
I go with B
upvoted 3 times
A company has an application that uses Amazon EC2 instances in an Auto Scaling group. The Quality Assurance (QA) department needs to launch
a large number of short-lived environments to test the application. The application environments are currently launched by the Manager of the
department using an AWS
CloudFormation template. To launch the stack, the Manager uses a role with permission to use CloudFormation, EC2, and Auto Scaling APIs. The
Manager wants to allow testers to launch their own environments, but does not want to grant broad permissions to each user.
Which set up would achieve these goals?
A. Upload the AWS CloudFormation template to Amazon S3. Give users in the QA department permission to assume the Manager's role and
add a policy that restricts the permissions to the template and the resources it creates. Train users to launch the template from the
CloudFormation console.
B. Create an AWS Service Catalog product from the environment template. Add a launch constraint to the product with the existing role. Give
users in the QA department permission to use AWS Service Catalog APIs only. Train users to launch the templates from the AWS Service
Catalog console.
C. Upload the AWS CloudFormation template to Amazon S3. Give users in the QA department permission to use CloudFormation and S3 APIs,
with conditions that restrict the permission to the template and the resources it creates. Train users to launch the template from the
CloudFormation console.
D. Create an AWS Elastic Beanstalk application from the environment template. Give users in the QA department permission to use Elastic
Beanstalk permissions only. Train users to launch Elastic Beanstalk environment with the Elastic Beanstalk CLI, passing the existing role to
the environment as a service role.
Correct Answer: B
Reference:
https://aws.amazon.com/ru/blogs/mt/how-to-launch-secure-and-governed-aws-resources-with-aws-cloudformation-and-aws-service-catalog/
B makes more sense to me as it restricts users to create services through the catalog.
upvoted 26 times
A company has several teams, and each team has their own Amazon RDS database that totals 100 TB. The company is building a data query
platform for
Business Intelligence Analysts to generate a weekly business report. The new system must run ad-hoc SQL queries.
What is the MOST cost-effective solution?
A. Create a new Amazon Redshift cluster. Create an AWS Glue ETL job to copy data from the RDS databases to the Amazon Redshift cluster.
Use Amazon Redshift to run the query.
B. Create an Amazon EMR cluster with enough core nodes. Run an Apache Spark job to copy data from the RDS databases to a Hadoop
Distributed File System (HDFS). Use a local Apache Hive metastore to maintain the table definition. Use Spark SQL to run the query.
C. Use an AWS Glue ETL job to copy all the RDS databases to a single Amazon Aurora PostgreSQL database. Run SQL queries on the Aurora
PostgreSQL database.
D. Use an AWS Glue crawler to crawl all the databases and create tables in the AWS Glue Data Catalog. Use an AWS Glue ETL job to load data
from the RDS databases to Amazon S3, and use Amazon Athena to run the queries.
Correct Answer: A
A company provides AWS solutions to its users with AWS CloudFormation templates. Users launch the templates in their accounts to have
different solutions provisioned for them. The users want to improve the deployment strategy for solutions while retaining the ability to do the
following:
✑ Add their own features to a solution for their specific deployments.
✑ Run unit tests on their changes.
✑ Turn features on and off for their deployments.
✑ Automatically update with code changes.
✑ Run security scanning tools for their deployments.
Which strategies should the Solutions Architect use to meet the requirements?
A. Allow users to download solution code as Docker images. Use AWS CodeBuild and AWS CodePipeline for the CI/CD pipeline. Use Docker
images for different solution features and the AWS CLI to turn features on and off. Use AWS CodeDeploy to run unit tests and security scans,
and for deploying and updating a solution with changes.
B. Allow users to download solution code artifacts. Use AWS CodeCommit and AWS CodePipeline for the CI/CD pipeline. Use AWS Amplify
plugins for different solution features and user prompts to turn features on and off. Use AWS Lambda to run unit tests and security scans, and
AWS CodeBuild for deploying and updating a solution with changes.
C. Allow users to download solution code artifacts in their Amazon S3 buckets. Use Amazon S3 and AWS CodePipeline for the CI/CD
pipelines. Use CloudFormation StackSets for different solution features and to turn features on and off. Use AWS Lambda to run unit tests and
security scans, and CloudFormation for deploying and updating a solution with changes.
D. Allow users to download solution code artifacts. Use AWS CodeCommit and AWS CodePipeline for the CI/CD pipeline. Use the AWS Cloud
Development Kit constructs for different solution features, and use the manifest file to turn features on and off. Use AWS CodeBuild to run unit
tests and security scans, and for deploying and updating a solution with changes.
Correct Answer: A
Reference:
https://www.slideshare.net/AmazonWebServices/cicd-for-containers-a-way-forward-for-your-devops-pipeline
A company uses Amazon S3 to host a web application. Currently, the company uses a continuous integration tool running on an Amazon EC2
instance that builds and deploys the application by uploading it to an S3 bucket. A Solutions Architect needs to enhance the security of the
company's platform with the following requirements:
✑ A build process should be run in a separate account from the account hosting the web application.
✑ A build process should have minimal access in the account it operates in.
✑ Long-lived credentials should not be used.
As a start, the Development team created two AWS accounts: one for the application, named the web account; the other for the build process,
named the build account.
Which solution should the Solutions Architect use to meet the security requirements?
A. In the build account, create a new IAM role, which can be assumed by Amazon EC2 only. Attach the role to the EC2 instance running the
continuous integration process. Create an IAM policy to allow s3:PutObject calls on the S3 bucket in the web account. In the web account,
create an S3 bucket policy attached to the S3 bucket that allows the build account to use s3:PutObject calls.
B. In the build account, create a new IAM role, which can be assumed by Amazon EC2 only. Attach the role to the EC2 instance running the
continuous integration process. Create an IAM policy to allow s3:PutObject calls on the S3 bucket in the web account. In the web account,
create an S3 bucket policy attached to the S3 bucket that allows the newly created IAM role to use s3:PutObject calls.
C. In the build account, create a new IAM user. Store the access key and secret access key in AWS Secrets Manager. Modify the continuous
integration process to perform a lookup of the IAM user credentials from Secrets Manager. Create an IAM policy to allow s3:PutObject calls
on the S3 bucket in the web account, and attach it to the user. In the web account, create an S3 bucket policy attached to the S3 bucket that
allows the newly created IAM user to use s3:PutObject calls.
D. In the build account, modify the continuous integration process to perform a lookup of the IAM user credentials from AWS Secrets Manager.
In the web account, create a new IAM user. Store the access key and secret access key in Secrets Manager. Attach the PowerUserAccess IAM
policy to the IAM user.
Correct Answer: A
BBB
upvoted 1 times
" # shotty1 9 months, 2 weeks ago
I am pretty sure it is A. Using a role as a trusted Principal for cross account access has never worked for me, even though the documentation is
sometimes a bit vague on that topic.
upvoted 2 times
A fleet of Amazon ECS instances is used to poll an Amazon SQS queue and update items in an Amazon DynamoDB database. Items in the table
are not being updated, and the SQS queue is filling up. Amazon CloudWatch Logs are showing consistent 400 errors when attempting to update
the table. The provisioned write capacity units are appropriately configured, and no throttling is occurring.
What is the LIKELY cause of the failure?
Correct Answer: C
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html
Status 400 with DynamoDB. Here, probably an authentication failure due to someone messing up the role.
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Programming.Errors.html#Programming.Errors.MessagesAndCodes
upvoted 32 times
" # blackgamer 1 year ago
D is the answer.
upvoted 2 times
A mobile gaming application publishes data continuously to Amazon Kinesis Data Streams. An AWS Lambda function processes records from the
data stream and writes to an Amazon DynamoDB table. The DynamoDB table has an auto scaling policy enabled with the target utilization set to
70%.
For several minutes at the start and end of each day, there is a spike in traffic that often exceeds five times the normal load. The company notices
the
GetRecords.IteratorAgeMilliseconds metric of the Kinesis data stream temporarily spikes to over a minute for several minutes. The AWS Lambda
function writes
ProvisionedThroughputExceededException messages to Amazon CloudWatch Logs during these times, and some records are redirected to the
dead letter queue.
No exceptions are thrown by the Kinesis producer on the gaming application.
What change should the company make to resolve this issue?
A. Use Application Auto Scaling to set a scaling schedule to scale out write capacity on the DynamoDB table during predictable load spikes.
B. Use Amazon CloudWatch Events to monitor the dead letter queue and invoke a Lambda function to automatically retry failed records.
C. Reduce the DynamoDB table auto scaling policy's target utilization to 20% to more quickly respond to load spikes.
D. Increase the number of shards in the Kinesis data stream to increase throughput capacity.
Correct Answer: D
Since the spikes were huge and it hit the provisioned WCU during that time before auto-scaling could kick in. It resulted in
ProvisionedThroughputExceededException from Dynamodb. As a result, it took a few rounds (a few mins) to scale to the desired utilisation
target.
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/AutoScaling.html
So, the solution is to lower the utilisation target and let it scale ASAP.
upvoted 9 times
Sudden, short-duration spikes of activity are accommodated by the table's built-in burst capacity.
upvoted 2 times
"However, if the processing time cannot be reduced, then consider upscaling the Kinesis stream by increasing the number of shards."
upvoted 2 times
https://aws.amazon.com/premiumsupport/knowledge-center/kinesis-data-streams-iteratorage-metric/
https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/dynamodbv2/model/ProvisionedThroughputExceededException.html
https://docs.aws.amazon.com/autoscaling/application/userguide/application-auto-scaling-scheduled-scaling.html
upvoted 5 times
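For reference, if the stream itself needs more throughput (option D), resharding is a single UpdateShardCount call. A minimal boto3 sketch with an illustrative stream name and shard count:

import boto3

kinesis = boto3.client("kinesis")

# Increase the stream's shard count to raise read/write throughput (values are illustrative).
kinesis.update_shard_count(
    StreamName="game-events",
    TargetShardCount=4,
    ScalingType="UNIFORM_SCALING",
)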
A company has a web application that securely uploads pictures and videos to an Amazon S3 bucket. The company requires that only
authenticated users are allowed to post content. The application generates a presigned URL that is used to upload objects through a browser
interface. Most users are reporting slow upload times for objects larger than 100 MB.
What can a Solutions Architect do to improve the performance of these uploads while ensuring only authenticated users are allowed to post
content?
A. Set up an Amazon API Gateway with an edge-optimized API endpoint that has a resource as an S3 service proxy. Configure the PUT method
for this resource to expose the S3 PutObject operation. Secure the API Gateway using a COGNITO_USER_POOLS authorizer. Have the browser
interface use API Gateway instead of the presigned URL to upload objects.
B. Set up an Amazon API Gateway with a regional API endpoint that has a resource as an S3 service proxy. Configure the PUT method for this
resource to expose the S3 PutObject operation. Secure the API Gateway using an AWS Lambda authorizer. Have the browser interface use API
Gateway instead of the presigned URL to upload API objects.
C. Enable an S3 Transfer Acceleration endpoint on the S3 bucket. Use the endpoint when generating the presigned URL. Have the browser
interface upload the objects to this URL using the S3 multipart upload API.
D. Configure an Amazon CloudFront distribution for the destination S3 bucket. Enable PUT and POST methods for the CloudFront cache
behavior. Update the CloudFront origin to use an origin access identity (OAI). Give the OAI user s3:PutObject permissions in the bucket policy.
Have the browser interface upload objects using the CloudFront distribution.
Correct Answer: C
The question is about uploading the object faster not about retrieving uploaded objects faster and hence the answer is C. When using
CloudFront to upload objects with S3 as origin the request goes through the Edge servers but doesn't use the S3 Transfer acceleration feature to
accelerate the upload. Uploading speeds from slow to fast - direct S3-> Cloudfront to S3-> S3 transfer acceleration
upvoted 3 times
" # Britts 1 year ago
No brainer. C
upvoted 1 times
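A minimal sketch of option C, assuming a hypothetical bucket: enable Transfer Acceleration once, then configure the client to use the accelerate endpoint when presigning.

import boto3
from botocore.config import Config

# One-time setup: enable Transfer Acceleration on the bucket (hypothetical bucket name).
boto3.client("s3").put_bucket_accelerate_configuration(
    Bucket="video-uploads-example",
    AccelerateConfiguration={"Status": "Enabled"},
)

# Generate a presigned PUT URL that points at the accelerate endpoint.
s3 = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
url = s3.generate_presigned_url(
    "put_object",
    Params={"Bucket": "video-uploads-example", "Key": "uploads/video.mp4"},
    ExpiresIn=3600,
)
print(url)  # https://video-uploads-example.s3-accelerate.amazonaws.com/...

For objects over 100 MB the browser would typically use the multipart upload APIs, presigning each part the same way, rather than a single put_object.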
A company's CISO has asked a Solutions Architect to re-engineer the company's current CI/CD practices to make sure patch deployments to its
applications can happen as quickly as possible with minimal downtime if vulnerabilities are discovered. The company must also be able to quickly
roll back a change in case of errors.
The web application is deployed in a fleet of Amazon EC2 instances behind an Application Load Balancer. The company is currently using GitHub
to host the application source code, and has configured an AWS CodeBuild project to build the application. The company also intends to use AWS
CodePipeline to trigger builds from GitHub commits using the existing CodeBuild project.
What CI/CD configuration meets all of the requirements?
A. Configure CodePipeline with a deploy stage using AWS CodeDeploy configured for in-place deployment. Monitor the newly deployed code,
and, if there are any issues, push another code update.
B. Configure CodePipeline with a deploy stage using AWS CodeDeploy configured for blue/green deployments. Monitor the newly deployed
code, and, if there are any issues, trigger a manual rollback using CodeDeploy.
C. Configure CodePipeline with a deploy stage using AWS CloudFormation to create a pipeline for test and production stacks. Monitor the
newly deployed code, and, if there are any issues, push another code update.
D. Configure the CodePipeline with a deploy stage using AWS OpsWorks and in-place deployments. Monitor the newly deployed code, and, if
there are any issues, push another code update.
Correct Answer: B
A company wants to analyze log data using date ranges with a custom application running on AWS. The application generates about 10 GB of data
every day, which is expected to grow. A Solutions Architect is tasked with storing the data in Amazon S3 and using Amazon Athena to analyze the
data.
Which combination of steps will ensure optimal performance as the data grows? (Choose two.)
A. Store each object in Amazon S3 with a random string at the front of each key.
C. Store the data in Amazon S3 in a columnar format, such as Apache Parquet or Apache ORC.
D. Store the data in Amazon S3 in objects that are smaller than 10 MB.
E. Store the data using Apache Hive partitioning in Amazon S3 using a key that includes a date, such as dt=2019-02.
Correct Answer: BC
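To illustrate options C and E: writing the data under Hive-style dt= prefixes lets Athena prune partitions so a date-range query scans only the relevant objects. A minimal boto3 sketch with hypothetical bucket, database, and table names (and assuming the table is already defined as partitioned by dt with partitions loaded):

import boto3

s3 = boto3.client("s3")
athena = boto3.client("athena")

# Write a (pre-converted) Parquet file under a Hive-style date partition.
s3.upload_file(
    "logs-2019-02-01.parquet",
    "analytics-logs-example",                        # hypothetical bucket
    "app_logs/dt=2019-02-01/part-0000.parquet",
)

# Query only one partition; Athena skips the rest of the data.
athena.start_query_execution(
    QueryString="SELECT status, count(*) FROM app_logs WHERE dt = '2019-02-01' GROUP BY status",
    QueryExecutionContext={"Database": "analytics"},
    ResultConfiguration={"OutputLocation": "s3://analytics-logs-example/athena-results/"},
)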
An advisory firm is creating a secure data analytics solution for its regulated financial services users. Users will upload their raw data to an
Amazon S3 bucket, where they have PutObject permissions only. Data will be analyzed by applications running on an Amazon EMR cluster
launched in a VPC. The firm requires that the environment be isolated from the internet. All data at rest must be encrypted using keys controlled
by the firm.
Which combination of actions should the Solutions Architect take to meet the user's security requirements? (Choose two.)
A. Launch the Amazon EMR cluster in a private subnet configured to use an AWS KMS CMK for at-rest encryption. Configure a gateway VPC
endpoint for Amazon S3 and an interface VPC endpoint for AWS KMS.
B. Launch the Amazon EMR cluster in a private subnet configured to use an AWS KMS CMK for at-rest encryption. Configure a gateway VPC
endpoint for Amazon S3 and a NAT gateway to access AWS KMS.
C. Launch the Amazon EMR cluster in a private subnet configured to use an AWS CloudHSM appliance for at-rest encryption. Configure a
gateway VPC endpoint for Amazon S3 and an interface VPC endpoint for CloudHSM.
D. Configure the S3 endpoint policies to permit access to the necessary data buckets only.
E. Configure the S3 bucket policies to permit access using an aws:sourceVpce condition to match the S3 endpoint ID.
Correct Answer: AE
Then CE
upvoted 27 times
Reason for E:
upvoted 1 times
SSE-KMS: AWS manages data key and you manage master key
A ) Server-Side Encryption
SSE-S3 (AWS-Managed Keys) => When the requirement is to keep the encryption work simple and minimise the maintenance overhead then use
SSE-S3.
SSE-KMS (AWS KMS Keys) => When the requirement is to maintain a security audit trail then use SSE-KMS Keys.
SSE-C (Customer-Provided Keys) => When end-to-end encryption is not required and the client wants full control of his/her security keys, then
use SSE-C.
B) Client-Side Encryption
AWS KMS-managed, customer master key => When the requirement is to maintain end-to-end encryption plus a security audit trail, then use
AWS KMS Keys.
Client Managed Master Key => When the requirement is to maintain end-to-end encryption but the client wants full control of his/her security
keys, then use Client Managed Master Key.
upvoted 3 times
https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-data-encryption-options.html
upvoted 3 times
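A common way to implement the aws:sourceVpce condition in option E is a bucket policy that denies any request not arriving through the expected gateway endpoint. A minimal sketch; the bucket name and endpoint ID are illustrative:

import json
import boto3

bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowAccessOnlyThroughVpcEndpoint",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::firm-raw-data-example",
                "arn:aws:s3:::firm-raw-data-example/*",
            ],
            # Deny any request that does not come through the expected S3 gateway endpoint.
            "Condition": {"StringNotEquals": {"aws:sourceVpce": "vpce-0123456789abcdef0"}},
        }
    ],
}

boto3.client("s3").put_bucket_policy(
    Bucket="firm-raw-data-example", Policy=json.dumps(bucket_policy)
)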
While debugging a backend application for an IoT system that supports globally distributed devices, a Solutions Architect notices that stale data
is occasionally being sent to user devices. Devices often share data, and stale data does not cause issues in most cases. However, device
operations are disrupted when a device reads the stale data after an update.
The global system has multiple identical application stacks deployed in different AWS Regions. If a user device travels out of its home geographic
region, it will always connect to the geographically closest AWS Region to write or read data. The same data is available in all supported AWS
Regions using an Amazon
DynamoDB global table.
What change should be made to avoid causing disruptions in device operations?
A. Update the backend to use strongly consistent reads. Update the devices to always write to and read from their home AWS Region.
B. Enable strong consistency globally on a DynamoDB global table. Update the backend to use strongly consistent reads.
C. Switch the backend data store to Amazon Aurora MySQL with cross-region replicas. Update the backend to always write to the master
endpoint.
D. Select one AWS Region as a master and perform all writes in that AWS Region only. Update the backend to use strongly consistent reads.
Correct Answer: A
If applications update the same item in different Regions at about the same time, conflicts can arise. To help ensure eventual consistency,
DynamoDB global tables use a last writer wins reconciliation between concurrent updates, in which DynamoDB makes a best effort to determine
the last writer. With this conflict resolution mechanism, all the replicas will agree on the latest update and converge toward a state in which they
all have identical data. “
upvoted 2 times
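A strongly consistent read is a per-request flag and only applies within a single Region's replica, which is why answer A also pins the device to its home Region. A minimal sketch with a hypothetical table name and Region:

import boto3

# Read and write in the device's home Region (eu-west-1 is assumed here).
dynamodb = boto3.resource("dynamodb", region_name="eu-west-1")
table = dynamodb.Table("DeviceState")            # hypothetical global table replica

table.put_item(Item={"device_id": "dev-123", "firmware": "2.4.1"})

# ConsistentRead guarantees read-after-write only against this Region's replica.
item = table.get_item(
    Key={"device_id": "dev-123"},
    ConsistentRead=True,
)["Item"]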
A software as a service (SaaS) company offers a cloud solution for document management to private law firms and the public sector. A local
government client recently mandated that highly confidential documents cannot be stored outside the country. The company CIO asks a Solutions
Architect to ensure the application can adapt to this new requirement. The CIO also wants to have a proper backup plan for these documents, as
backups are not currently performed.
What solution meets these requirements?
A. Tag documents that are not highly confidential as regular in Amazon S3. Create individual S3 buckets for each user. Upload objects to each
user's bucket. Set S3 bucket replication from these buckets to a central S3 bucket in a different AWS account and AWS Region. Configure an
AWS Lambda function triggered by scheduled events in Amazon CloudWatch to delete objects that are tagged as secret in the S3 backup
bucket.
B. Tag documents as either regular or secret in Amazon S3. Create an individual S3 backup bucket in the same AWS account and AWS Region.
Create a cross-region S3 bucket in a separate AWS account. Set proper IAM roles to allow cross-region permissions to the S3 buckets.
Configure an AWS Lambda function triggered by Amazon CloudWatch scheduled events to copy objects that are tagged as secret to the S3
backup bucket and objects tagged as normal to the cross-region S3 bucket.
C. Tag documents as either regular or secret in Amazon S3. Create an individual S3 backup bucket in the same AWS account and AWS Region.
Use S3 selective cross-region replication based on object tags to move regular documents to an S3 bucket in a different AWS Region.
Configure an AWS Lambda function that triggers when new S3 objects are created in the main bucket to replicate only documents tagged as
secret into the S3 bucket in the same AWS Region.
D. Tag highly confidential documents as secret in Amazon S3. Create an individual S3 backup bucket in the same AWS account and AWS
Region. Use S3 selective cross-region replication based on object tags to move regular documents to a different AWS Region. Create an
Amazon CloudWatch Events rule for new S3 objects tagged as secret to trigger an AWS Lambda function to replicate them into a separate
bucket in the same AWS Region.
Correct Answer: D
Hence C is correct.
upvoted 1 times
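The "selective cross-region replication based on object tags" wording maps to an S3 Replication rule with a tag filter. A minimal sketch with hypothetical bucket names, role, and tag values (both buckets need versioning enabled):

import boto3

boto3.client("s3").put_bucket_replication(
    Bucket="documents-main-example",                  # source bucket
    ReplicationConfiguration={
        "Role": "arn:aws:iam::111122223333:role/s3-replication-role",  # hypothetical role
        "Rules": [
            {
                "ID": "replicate-regular-docs",
                "Status": "Enabled",
                "Priority": 1,
                # Only objects tagged classification=regular leave the home Region.
                "Filter": {"Tag": {"Key": "classification", "Value": "regular"}},
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": "arn:aws:s3:::documents-backup-other-region"},
            }
        ],
    },
)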
A company has an application that runs on a fleet of Amazon EC2 instances and stores 70 GB of device data for each instance in Amazon S3.
Recently, some of the S3 uploads have been failing. At the same time, the company is seeing an unexpected increase in storage data costs. The
application code cannot be modified.
What is the MOST efficient way to upload the device data to Amazon S3 while managing storage costs?
A. Upload device data using a multipart upload. Use the AWS CLI to list incomplete parts to address the failed S3 uploads. Enable the lifecycle
policy for the incomplete multipart uploads on the S3 bucket to delete the old uploads and prevent new failed uploads from accumulating.
B. Upload device data using S3 Transfer Acceleration. Use the AWS Management Console to address the failed S3 uploads. Use the Multi-
Object Delete operation nightly to delete the old uploads.
C. Upload device data using a multipart upload. Use the AWS Management Console to list incomplete parts to address the failed S3 uploads.
Configure a lifecycle policy to archive continuously to Amazon S3 Glacier.
D. Upload device data using S3 Transfer Acceleration. Use the AWS Management Console to list incomplete parts to address the failed S3
uploads. Enable the lifecycle policy for the incomplete multipart uploads on the S3 bucket to delete the old uploads and prevent new failed
uploads from accumulating.
Correct Answer: C
Reference:
https://docs.aws.amazon.com/amazonglacier/latest/dev/uploading-an-archive.html
https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage-lens-optimize-storage.html#locate-incomplete-mpu
https://aws.amazon.com/blogs/aws-cloud-financial-management/discovering-and-deleting-incomplete-multipart-uploads-to-lower-
amazon-s3-costs/
upvoted 1 times
" # TechX Most Recent % 4 months, 1 week ago
Selected Answer: A
Agree with A, best solution here
upvoted 1 times
Additionally, TA is best practice for transferring large files to S3 buckets. As data arrives at the closest edge location, the data is routed to
Amazon S3 over an optimized network path. This helps ensure that fewer device uploads end up in a failed state.
upvoted 2 times
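The lifecycle rule in option A can be applied as follows; the bucket name and the 7-day window are illustrative:

import boto3

boto3.client("s3").put_bucket_lifecycle_configuration(
    Bucket="device-data-example",            # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "abort-stale-multipart-uploads",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},    # apply to the whole bucket
                # Stop paying for parts of uploads that never completed.
                "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
            }
        ]
    },
)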
A company is in the process of implementing AWS Organizations to constrain its developers to use only Amazon EC2, Amazon S3, and Amazon
DynamoDB. The
Developers account resides in a dedicated organizational unit (OU). The Solutions Architect has implemented the following SCP on the Developers
account:
When this policy is deployed, IAM users in the Developers account are still able to use AWS services that are not listed in the policy.
What should the Solutions Architect do to eliminate the Developers' ability to use services outside the scope of this policy?
A. Create an explicit deny statement for each AWS service that should be constrained.
D. Add an explicit deny statement using a wildcard to the end of the SCP.
Correct Answer: B
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:*",
        "cloudwatch:*"
      ],
      "Resource": "*"
    }
  ]
}
An allow list policy might look like the following example, which enables account users to perform operations for Amazon Elastic Compute Cloud
(Amazon EC2) and Amazon CloudWatch, ****but no other service****.
+ The FullAWSAccess SCP doesn't need to be deleted; defining a new SCP is enough.
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps_strategies.html#orgs_policies_allowlist
upvoted 1 times
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_inheritance_auth.html
upvoted 2 times
" # M_Asep 1 year, 1 month ago
I Support D
upvoted 1 times
A request results in an explicit deny if an applicable policy includes a Deny statement. If policies that apply to a request include an Allow
statement and a Deny statement, the Deny statement trumps the Allow statement. The request is explicitly denied.
https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_evaluation-logic.html
upvoted 4 times
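For reference, an explicit-deny SCP (the approach described in option D) can be created and attached with the Organizations API. The allowed service list mirrors the question's EC2/S3/DynamoDB scope; the OU ID and names are illustrative:

import json
import boto3

org = boto3.client("organizations")

deny_everything_else = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            # Deny every action that is not for one of the approved services.
            "NotAction": ["ec2:*", "s3:*", "dynamodb:*"],
            "Resource": "*",
        }
    ],
}

policy = org.create_policy(
    Name="DenyOutsideApprovedServices",
    Description="Explicit deny for services outside EC2, S3, and DynamoDB",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(deny_everything_else),
)

org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-examp-12345678",            # hypothetical Developers OU ID
)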
A company developed a Java application and deployed it to an Apache Tomcat server that runs on Amazon EC2 instances. The company's
Engineering team has implemented AWS CloudFormation and Chef Automate to automate the provisioning of and updates to the infrastructure
and configuration of the application in the development, test, and production environments. These implementations have led to significantly
improved reliability in releasing changes. The Engineering team reports there are frequent service disruptions due to unexpected errors when
updating the application on the Apache Tomcat server.
Which solution will increase the reliability of all releases?
C. Configure Amazon CloudFront to serve all requests from the cache while deploying the updates.
Correct Answer: A
Reference:
https://medium.com/@tom.tikkle/blue-green-deployments-increasing-safety-reliability-speed-98a5c6b222b0
For the exam, default to OpsWorks if you see those keywords.
upvoted 3 times
During a security audit of a Service team's application, a Solutions Architect discovers that a username and password for an Amazon RDS
database and a set of
AWS IAM user credentials can be viewed in the AWS Lambda function code. The Lambda function uses the username and password to run queries
on the database, and it uses the IAM credentials to call AWS services in a separate management account.
The Solutions Architect is concerned that the credentials could grant inappropriate access to anyone who can view the Lambda code. The
management account and the Service team's account are in separate AWS Organizations organizational units (OUs).
Which combination of changes should the Solutions Architect make to improve the solution's security? (Choose two.)
A. Configure Lambda to assume a role in the management account with appropriate access to AWS.
B. Configure Lambda to use the stored database credentials in AWS Secrets Manager and enable automatic rotation.
C. Create a Lambda function to rotate the credentials every hour by deploying a new Lambda version with the updated credentials.
D. Use an SCP on the management account's OU to prevent IAM users from accessing resources in the Service team's account.
E. Enable AWS Shield Advanced on the management account to shield sensitive resources from unauthorized IAM access.
Correct Answer: BD
In the question it is mentioned that “The Solutions Architect is afraid that the credentials might be misused by anybody who can examine the Lambda
code”, so proper access control is needed here. We need D for this.
upvoted 2 times
Kanavpeer is right.
upvoted 1 times
D is wrong as users from one account cannot access resources from another account if not allowed through cross-account access using
assumed roles. There's no need to use SCP for deny
E is wrong as shield is used for ddos protection
C does not make sense with hourly redeploying of lambda
upvoted 2 times
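A minimal sketch of what options A and B look like inside the Lambda function, with hypothetical secret and role names:

import json
import boto3

def handler(event, context):
    # B: fetch the rotated database credentials at runtime instead of hard-coding them.
    secret = boto3.client("secretsmanager").get_secret_value(SecretId="service/rds-credentials")
    creds = json.loads(secret["SecretString"])
    db_user, db_password = creds["username"], creds["password"]

    # A: obtain short-lived credentials for the management account via STS.
    assumed = boto3.client("sts").assume_role(
        RoleArn="arn:aws:iam::999988887777:role/management-access",  # hypothetical role
        RoleSessionName="service-team-lambda",
    )["Credentials"]
    mgmt_session = boto3.session.Session(
        aws_access_key_id=assumed["AccessKeyId"],
        aws_secret_access_key=assumed["SecretAccessKey"],
        aws_session_token=assumed["SessionToken"],
    )
    # mgmt_session.client(...) can now call services in the management account.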
A company is having issues with a newly deployed serverless infrastructure that uses Amazon API Gateway, Amazon Lambda, and Amazon
DynamoDB.
In a steady state, the application performs as expected. However, during peak load, tens of thousands of simultaneous invocations are needed
and user requests fail multiple times before succeeding. The company has checked the logs for each component, focusing specifically on Amazon
CloudWatch Logs for Lambda.
There are no errors logged by the services or applications.
What might cause this problem?
A. Lambda has very low memory assigned, which causes the function to fail at peak load.
B. Lambda is in a subnet that uses a NAT gateway to reach out to the internet, and the function instance does not have sufficient Amazon EC2
resources in the VPC to scale with the load.
C. The throttle limit set on API Gateway is very low. During peak load, the additional requests are not making their way through to Lambda.
D. DynamoDB is set up in an auto scaling mode. During peak load, DynamoDB adjusts capacity and throughput behind the scenes, which is
causing the temporary downtime. Once the scaling completes, the retries go through successfully.
Correct Answer: C
Reference:
https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-request-throttling.html
The company has checked the logs for each component, focusing specifically on Amazon CloudWatch Logs for Lambda,
which means there are no error logs from Lambda. The company did not actually check API Gateway's CloudWatch logs; if Lambda were failing,
the CloudWatch logs would show it as a Lambda problem. Furthermore, A is completely wrong because Lambda runs invocations in parallel with
concurrency, and the question says the problem only occurs during maximum load. If Lambda memory were the cause of the problem, the function
would fail whether or not the load was at its maximum.
upvoted 18 times
A large company with hundreds of AWS accounts has a newly established centralized internal process for purchasing new or modifying existing
Reserved
Instances. This process requires all business units that want to purchase or modify Reserved Instances to submit requests to a dedicated team
for procurement or execution. Previously, business units would directly purchase or modify Reserved Instances in their own respective AWS
accounts autonomously.
Which combination of steps should be taken to proactively enforce the new process in the MOST secure way possible? (Choose two.)
A. Ensure all AWS accounts are part of an AWS Organizations structure operating in all features mode.
B. Use AWS Config to report on the attachment of an IAM policy that denies access to the ec2:PurchaseReservedInstancesOffering and
ec2:ModifyReservedInstances actions.
C. In each AWS account, create an IAM policy with a DENY rule to the ec2:PurchaseReservedInstancesOffering and
ec2:ModifyReservedInstances actions.
D. Create an SCP that contains a deny rule to the ec2:PurchaseReservedInstancesOffering and ec2:ModifyReservedInstances actions. Attach
the SCP to each organizational unit (OU) of the AWS Organizations structure.
E. Ensure that all AWS accounts are part of an AWS Organizations structure operating in consolidated billing features mode.
Correct Answer: CE
A Solutions Architect wants to make sure that only AWS users or roles with suitable permissions can access a new Amazon API Gateway
endpoint. The Solutions
Architect wants an end-to-end view of each request to analyze the latency of the request and create service maps.
How can the Solutions Architect design the API Gateway access control and perform request inspections?
A. For the API Gateway method, set the authorization to AWS_IAM. Then, give the IAM user or role execute-api:Invoke permission on the REST
API resource. Enable the API caller to sign requests with AWS Signature when accessing the endpoint. Use AWS X-Ray to trace and analyze
user requests to API Gateway.
B. For the API Gateway resource, set CORS to enabled and only return the company's domain in Access-Control-Allow-Origin headers. Then,
give the IAM user or role execute-api:Invoke permission on the REST API resource. Use Amazon CloudWatch to trace and analyze user
requests to API Gateway.
C. Create an AWS Lambda function as the custom authorizer, ask the API client to pass the key and secret when making the call, and then use
Lambda to validate the key/secret pair against the IAM system. Use AWS X-Ray to trace and analyze user requests to API Gateway.
D. Create a client certificate for API Gateway. Distribute the certificate to the AWS users and roles that need to access the endpoint. Enable the
API caller to pass the client certificate when accessing the endpoint. Use Amazon CloudWatch to trace and analyze user requests to API
Gateway.
Correct Answer: D
Reference:
https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-cors.html
upvoted 2 times
" # Waiweng 1 year ago
it's A
upvoted 5 times
A Solutions Architect needs to design a highly available application that will allow authenticated users to stay connected to the application even
when there are underlying failures.
Which solution will meet these requirements?
A. Deploy the application on Amazon EC2 instances. Use Amazon Route 53 to forward requests to the EC2 instances. Use Amazon DynamoDB
to save the authenticated connection details.
B. Deploy the application on Amazon EC2 instances in an Auto Scaling group. Use an internet-facing Application Load Balancer to handle
requests. Use Amazon DynamoDB to save the authenticated connection details.
C. Deploy the application on Amazon EC2 instances in an Auto Scaling group. Use an internet-facing Application Load Balancer on the front
end. Use EC2 instances to save the authenticated connection details.
D. Deploy the application on Amazon EC2 instances in an Auto Scaling group. Use an internet-facing Application Load Balancer on the front
end. Use EC2 instances hosting a MySQL database to save the authenticated connection details.
Correct Answer: C
Answer is B
upvoted 1 times
My take: B
upvoted 3 times
A company experienced a breach of highly confidential personal information due to permission issues on an Amazon S3 bucket. The Information
Security team has tightened the bucket policy to restrict access. Additionally, to be better prepared for future attacks, these requirements must be
met:
✑ Identify remote IP addresses that are accessing the bucket objects.
✑ Receive alerts when the security policy on the bucket is changed.
✑ Remediate the policy changes automatically.
Which strategies should the Solutions Architect use?
A. Use Amazon CloudWatch Logs with CloudWatch filters to identify remote IP addresses. Use CloudWatch Events rules with AWS Lambda to
automatically remediate S3 bucket policy changes. Use Amazon SES with CloudWatch Events rules for alerts.
B. Use Amazon Athena with S3 access logs to identify remote IP addresses. Use AWS Config rules with AWS Systems Manager Automation to
automatically remediate S3 bucket policy changes. Use Amazon SNS with AWS Config rules for alerts.
C. Use S3 access logs with Amazon Elasticsearch Service and Kibana to identify remote IP addresses. Use an Amazon Inspector assessment
template to automatically remediate S3 bucket policy changes. Use Amazon SNS for alerts.
D. Use Amazon Macie with an S3 bucket to identify access patterns and remote IP addresses. Use AWS Lambda with Macie to automatically
remediate S3 bucket policy changes. Use Macie automatic alerting capabilities for alerts.
Correct Answer: B
https://docs.aws.amazon.com/de_de/macie/latest/user/findings-filter-fields.html
upvoted 1 times
https://aws.amazon.com/blogs/mt/using-aws-systems-manager-opscenter-and-aws-config-for-compliance-monitoring/
upvoted 1 times
A Solutions Architect is designing a deployment strategy for an application tier and has the following requirements:
✑ The application code will need a 500 GB static dataset to be present before application startup.
✑ The application tier must be able to scale up and down based on demand with as little startup time as possible.
✑ The Development team should be able to update the code multiple times each day.
✑ Critical operating system (OS) patches must be installed within 48 hours of being released.
Which deployment strategy meets these requirements?
A. Use AWS Systems Manager to create a new AMI with the updated OS patches. Update the Auto Scaling group to use the patched AMI and
replace existing unpatched instances. Use AWS CodeDeploy to push the application code to the instances. Store the static data in Amazon
EFS.
B. Use AWS Systems Manager to create a new AMI with updated OS patches. Update the Auto Scaling group to use the patched AMI and
replace existing unpatched instances. Update the OS patches and the application code as batch job every night. Store the static data in
Amazon EFS.
C. Use an Amazon-provided AMI for the OS. Configure an Auto Scaling group set to a static instance count. Configure an Amazon EC2 user
data script to download the data from Amazon S3. Install OS patches with AWS Systems Manager when they are released. Use AWS
CodeDeploy to push the application code to the instances.
D. Use an Amazon-provided AMI for the OS. Configure an Auto Scaling group. Configure an Amazon EC2 user data script to download the data
from Amazon S3. Replace existing instances after each updated Amazon-provided AMI release. Use AWS CodeDeploy to push the application
code to the instances.
Correct Answer: B
A company is operating a large customer service call center, and stores and processes call recordings with a custom application. Approximately
2% of the call recordings are transcribed by an offshore team for quality assurance purposes. These recordings take up to 72 hours to be
transcribed. The recordings are stored on an NFS share before they are archived to an offsite location after 90 days. The company uses Linux
servers for processing the call recordings and managing the transcription queue. There is also a web application for the quality assurance staff to
review and score call recordings.
The company plans to migrate the system to AWS to reduce storage costs and the time required to transcribe calls.
Which set of actions should be taken to meet the company's objectives?
A. Upload the call recordings to Amazon S3 from the call center. Set up an S3 lifecycle policy to move the call recordings to Amazon S3 Glacier
after 90 days. Use an AWS Lambda trigger to transcribe the call recordings with Amazon Transcribe. Use Amazon S3, Amazon API Gateway,
and Lambda to host the review and scoring application.
B. Upload the call recordings to Amazon S3 from the call center. Set up an S3 lifecycle policy to move the call recordings to Amazon S3 Glacier
after 90 days. Use an AWS Lambda trigger to transcribe the call recordings with Amazon Mechanical Turk. Use Amazon EC2 instances in an
Auto Scaling group behind an Application Load Balancer to host the review and scoring application.
C. Use Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer to host the review and scoring application.
Upload the call recordings to this application from the call center and store them on an Amazon EFS mount point. Use AWS Backup to archive
the call recordings after 90 days. Transcribe the call recordings with Amazon Transcribe.
D. Upload the call recordings to Amazon S3 from the call center and put the object key in an Amazon SQS queue. Set up an S3 lifecycle policy
to move the call recordings to Amazon S3 Glacier after 90 days. Use Amazon EC2 instances in an Auto Scaling group to send the recordings to
Amazon Mechanical Turk for transcription. Use the number of objects in the queue as the scaling metric. Use Amazon S3, Amazon API
Gateway, and AWS Lambda to host the review and scoring application.
Correct Answer: A
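The Transcribe step in option A is a single API call from an S3-triggered Lambda function. A minimal sketch with hypothetical bucket names and an assumed recording format:

import boto3

transcribe = boto3.client("transcribe")

def handler(event, context):
    # Triggered by an S3 ObjectCreated event for a new call recording.
    record = event["Records"][0]["s3"]
    bucket, key = record["bucket"]["name"], record["object"]["key"]

    transcribe.start_transcription_job(
        TranscriptionJobName=key.replace("/", "-"),
        Media={"MediaFileUri": f"s3://{bucket}/{key}"},
        MediaFormat="wav",                            # assumed recording format
        LanguageCode="en-US",
        OutputBucketName="call-transcripts-example",  # hypothetical output bucket
    )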
A Solutions Architect is building a containerized .NET Core application that will run in AWS Fargate. The backend of the application requires
Microsoft SQL Server with high availability. All tiers of the application must be highly available. The credentials used for the connection string to
SQL Server should not be stored on disk within the .NET Core front-end containers.
Which strategies should the Solutions Architect use to meet these requirements?
A. Set up SQL Server to run in Fargate with Service Auto Scaling. Create an Amazon ECS task execution role that allows the Fargate task
definition to get the secret value for the credentials to SQL Server running in Fargate. Specify the ARN of the secret in AWS Secrets Manager in
the secrets section of the Fargate task definition so the sensitive data can be injected into the containers as environment variables on startup
for reading into the application to construct the connection string. Set up the .NET Core service using Service Auto Scaling behind an
Application Load Balancer in multiple Availability Zones.
B. Create a Multi-AZ deployment of SQL Server on Amazon RDS. Create a secret in AWS Secrets Manager for the credentials to the RDS
database. Create an Amazon ECS task execution role that allows the Fargate task de+nition to get the secret value for the credentials to the
RDS database in Secrets Manager. Specify the ARN of the secret in Secrets Manager in the secrets section of the Fargate task de+nition so
the sensitive data can be injected into the containers as environment variables on startup for reading into the application to construct the
connection string. Set up the .NET Core service in Fargate using Service Auto Scaling behind an Application Load Balancer in multiple
Availability Zones.
C. Create an Auto Scaling group to run SQL Server on Amazon EC2. Create a secret in AWS Secrets Manager for the credentials to SQL Server
running on EC2. Create an Amazon ECS task execution role that allows the Fargate task definition to get the secret value for the credentials to
SQL Server on EC2. Specify the ARN of the secret in Secrets Manager in the secrets section of the Fargate task definition so the sensitive data
can be injected into the containers as environment variables on startup for reading into the application to construct the connection string. Set
up the .NET Core service using Service Auto Scaling behind an Application Load Balancer in multiple Availability Zones.
D. Create a Multi-AZ deployment of SQL Server on Amazon RDS. Create a secret in AWS Secrets Manager for the credentials to the RDS
database. Create non-persistent empty storage for the .NET Core containers in the Fargate task definition to store the sensitive information.
Create an Amazon ECS task execution role that allows the Fargate task definition to get the secret value for the credentials to the RDS
database in Secrets Manager. Specify the ARN of the secret in Secrets Manager in the secrets section of the Fargate task definition so the
sensitive data can be written to the non-persistent empty storage on startup for reading into the application to construct the connection string.
Set up the .NET Core service using Service Auto Scaling behind an Application Load Balancer in multiple Availability Zones.
Correct Answer: D
storage.html https://aws.amazon.com/premiumsupport/knowledge-center/ecs-data-security-container-task/
upvoted 1 times
" # kangtamo 4 months, 1 week ago
Selected Answer: B
It should be B, retrieving RDS credentials from Secrets Manager.
upvoted 1 times
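The "secrets section of the Fargate task definition" mentioned in option B maps a Secrets Manager ARN to a container environment variable at startup. A minimal register_task_definition sketch with illustrative ARNs and names:

import boto3

boto3.client("ecs").register_task_definition(
    family="dotnet-frontend",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="512",
    memory="1024",
    executionRoleArn="arn:aws:iam::111122223333:role/ecsTaskExecutionRole",  # must allow secretsmanager:GetSecretValue
    containerDefinitions=[
        {
            "name": "frontend",
            "image": "111122223333.dkr.ecr.us-east-1.amazonaws.com/dotnet-frontend:latest",
            "essential": True,
            # The execution role injects the secret value as DB_CONNECTION when the container starts;
            # nothing is written to disk inside the container.
            "secrets": [
                {
                    "name": "DB_CONNECTION",
                    "valueFrom": "arn:aws:secretsmanager:us-east-1:111122223333:secret:rds-sqlserver-credentials",
                }
            ],
        }
    ],
)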
An enterprise company wants to implement cost controls for all its accounts in AWS Organizations, which has full features enabled. The company
has mapped organizational units (OUs) to its business units, and it wants to bill these business units for their individual AWS spending. There has
been a recent spike in the company's AWS bill, which is generating attention from the Finance team. A Solutions Architect needs to investigate the
cause of the spike while designing a solution that will track AWS costs in Organizations and generate a notification to the required teams if costs
from a business unit exceed a specific monetary threshold.
Which solution will meet these requirements?
A. Use Cost Explorer to troubleshoot the reason for the additional costs. Set up an AWS Lambda function to monitor the company's AWS bill
by each AWS account in an OU. Store the threshold amount set by the Finance team in the AWS Systems Manager Parameter Store. Write the
custom rules in the Lambda function to verify any hidden costs for the AWS accounts. Trigger a notification from the Lambda function to an
Amazon SNS topic when a budget threshold is breached.
B. Use AWS Trusted Advisor to troubleshoot the reason for the additional costs. Set up an AWS Lambda function to monitor the company's
AWS bill by each AWS account in an OU. Store the threshold amount set by the Finance team in the AWS Systems Manager Parameter Store.
Write custom rules in the Lambda function to verify any hidden costs for the AWS accounts. Trigger an email to the required teams from the
Lambda function using Amazon SNS when a budget threshold is breached.
C. Use Cost Explorer to troubleshoot the reason for the additional costs. Create a budget using AWS Budgets with the monetary amount set by
the Finance team for each OU by grouping the linked accounts. Configure an Amazon SNS notification to the required teams in the budget.
D. Use AWS Trusted Advisor to troubleshoot the reason for the additional costs. Create a budget using AWS Budgets with the monetary
amount set by the Finance team for each OU by grouping the linked accounts. Add the Amazon EC2 instance types to be used in the company
as a budget filter. Configure an Amazon SNS topic with a subscription for the Finance team email address to receive budget notifications.
Correct Answer: C
AWS Trusted Advisor – Get real-time identification of potential areas for optimization.
AWS Budgets – Set custom budgets that trigger alerts when cost or usage exceed (or are forecasted to exceed) a budgeted amount. Budgets
can be set based on tags and accounts as well as resource types.
upvoted 6 times
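The budget in option C, grouped by the OU's linked accounts with an SNS notification, can be created with the Budgets API. A minimal sketch; the account IDs, amount, and SNS topic are illustrative:

import boto3

boto3.client("budgets").create_budget(
    AccountId="111122223333",                  # management (payer) account ID
    Budget={
        "BudgetName": "bu-marketing-monthly",
        "BudgetType": "COST",
        "TimeUnit": "MONTHLY",
        "BudgetLimit": {"Amount": "10000", "Unit": "USD"},
        # Group the OU's linked accounts into one budget.
        "CostFilters": {"LinkedAccount": ["222233334444", "333344445555"]},
    },
    NotificationsWithSubscribers=[
        {
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 100.0,            # percent of the budgeted amount
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [
                {"SubscriptionType": "SNS", "Address": "arn:aws:sns:us-east-1:111122223333:budget-alerts"}
            ],
        }
    ],
)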
A company is developing a new service that will be accessed using TCP on a static port. A solutions architect must ensure that the service is
highly available, has redundancy across Availability Zones, and is accessible using the DNS name my.service.com, which is publicly accessible.
The service must use fixed address assignments so other companies can add the addresses to their allow lists.
Assuming that resources are deployed in multiple Availability Zones in a single Region, which solution will meet these requirements?
A. Create Amazon EC2 instances with an Elastic IP address for each instance. Create a Network Load Balancer (NLB) and expose the static
TCP port. Register EC2 instances with the NLB. Create a new name server record set named my.service.com, and assign the Elastic IP
addresses of the EC2 instances to the record set. Provide the Elastic IP addresses of the EC2 instances to the other companies to add to their
allow lists.
B. Create an Amazon ECS cluster and a service definition for the application. Create and assign public IP addresses for the ECS cluster. Create
a Network Load Balancer (NLB) and expose the TCP port. Create a target group and assign the ECS cluster name to the NLB. Create a new A
record set named my.service.com, and assign the public IP addresses of the ECS cluster to the record set. Provide the public IP addresses of
the ECS cluster to the other companies to add to their allow lists.
C. Create Amazon EC2 instances for the service. Create one Elastic IP address for each Availability Zone. Create a Network Load Balancer
(NLB) and expose the assigned TCP port. Assign the Elastic IP addresses to the NLB for each Availability Zone. Create a target group and
register the EC2 instances with the NLB. Create a new A (alias) record set named my.service.com, and assign the NLB DNS name to the record
set.
D. Create an Amazon ECS cluster and a service definition for the application. Create and assign public IP address for each host in the cluster.
Create an Application Load Balancer (ALB) and expose the static TCP port. Create a target group and assign the ECS service definition name
to the ALB. Create a new CNAME record set and associate the public IP addresses to the record set. Provide the Elastic IP addresses of the
Amazon EC2 instances to the other companies to add to their allow lists.
Correct Answer: B
Selected Answer: C
No-brainer
upvoted 2 times
" # Devgela 9 months, 2 weeks ago
C. Assigning the Elastic IP addresses to the NLB makes this answer correct.
upvoted 1 times
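Option C's static addressing comes from attaching one Elastic IP per Availability Zone when the NLB is created. A minimal sketch with hypothetical subnet IDs:

import boto3

ec2 = boto3.client("ec2")
elbv2 = boto3.client("elbv2")

# Allocate one Elastic IP per Availability Zone.
eip_az1 = ec2.allocate_address(Domain="vpc")["AllocationId"]
eip_az2 = ec2.allocate_address(Domain="vpc")["AllocationId"]

# Create an internet-facing NLB with a fixed address in each AZ.
elbv2.create_load_balancer(
    Name="my-service-nlb",
    Type="network",
    Scheme="internet-facing",
    SubnetMappings=[
        {"SubnetId": "subnet-0aaa1111", "AllocationId": eip_az1},   # hypothetical subnet IDs
        {"SubnetId": "subnet-0bbb2222", "AllocationId": eip_az2},
    ],
)
# Route 53 then gets an alias A record (my.service.com) pointing at the NLB's DNS name,
# and the two Elastic IPs are what partner companies add to their allow lists.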
A company is running a web application with On-Demand Amazon EC2 instances in Auto Scaling groups that scale dynamically based on custom
metrics. After extensive testing, the company determines that the m5.2xlarge instance size is optimal for the workload. Application data is stored
in db.r4.4xlarge Amazon RDS instances that are confirmed to be optimal. The traffic to the web application spikes randomly during the day.
What other cost-optimization methods should the company implement to further reduce costs without impacting the reliability of the application?
A. Double the instance count in the Auto Scaling groups and reduce the instance size to m5.large.
B. Reserve capacity for the RDS database and the minimum number of EC2 instances that are constantly running.
C. Reduce the RDS instance size to db.r4.xlarge and add five equivalently sized read replicas to provide reliability.
D. Reserve capacity for all EC2 instances and leverage Spot Instance pricing for the RDS database.
Correct Answer: B
This article by AWS clearly states that by 'reserving capacity' you are reserving the instances and reducing your costs. See -
https://aws.amazon.com/aws-cost-management/aws-cost-optimization/reserved-instances/
upvoted 1 times
https://aws.amazon.com/blogs/aws/s3-lifecycle-management-update-support-for-multipart-uploads-and-delete-markers/
upvoted 1 times
During an audit, a security team discovered that a development team was putting IAM user secret access keys in their code and then committing it
to an AWS
CodeCommit repository. The security team wants to automatically find and remediate instances of this security vulnerability.
Which solution will ensure that the credentials are appropriately secured automatically?
A. Run a script nightly using AWS Systems Manager Run Command to search for credentials on the development instances. If found, use AWS
Secrets Manager to rotate the credentials.
B. Use a scheduled AWS Lambda function to download and scan the application code from CodeCommit. If credentials are found, generate
new credentials and store them in AWS KMS.
C. Configure Amazon Macie to scan for credentials in CodeCommit repositories. If credentials are found, trigger an AWS Lambda function to
disable the credentials and notify the user.
D. Configure a CodeCommit trigger to invoke an AWS Lambda function to scan new code submissions for credentials. If credentials are found,
disable them in AWS IAM and notify the user.
Correct Answer: C
Reference:
https://aws.amazon.com/blogs/security/how-to-find-update-access-keys-password-mfa-aws-management-console/
amazon-s3-using-codebuild-and-cloudwatch-events.html
https://aws.amazon.com/blogs/compute/discovering-sensitive-data-in-aws-codecommit-with-aws-lambda-2/
upvoted 2 times
Using Macie means you are saving your code artefacts to S3 instead.
upvoted 1 times
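A rough sketch of the Lambda function behind option D: it is invoked by a CodeCommit repository trigger, lists the files at the pushed commit, and deactivates any access key ID it finds. The regex, owner lookup, and SNS topic are simplified assumptions:

import re
import boto3

codecommit = boto3.client("codecommit")
iam = boto3.client("iam")
sns = boto3.client("sns")

ACCESS_KEY_PATTERN = re.compile(r"AKIA[0-9A-Z]{16}")  # classic IAM access key ID format

def find_key_owner(key_id):
    # Walk the account's IAM users to find which one owns the leaked key (fine for small accounts).
    for page in iam.get_paginator("list_users").paginate():
        for user in page["Users"]:
            keys = iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]
            if any(k["AccessKeyId"] == key_id for k in keys):
                return user["UserName"]
    return None

def handler(event, context):
    record = event["Records"][0]
    repo = record["eventSourceARN"].split(":")[-1]
    commit_id = record["codecommit"]["references"][0]["commit"]

    # Scan the contents of the files at the pushed commit for access key IDs.
    for diff in codecommit.get_differences(repositoryName=repo, afterCommitSpecifier=commit_id)["differences"]:
        blob_id = diff.get("afterBlob", {}).get("blobId")
        if not blob_id:
            continue
        content = codecommit.get_blob(repositoryName=repo, blobId=blob_id)["content"].decode(errors="ignore")
        for key_id in ACCESS_KEY_PATTERN.findall(content):
            owner = find_key_owner(key_id)
            if owner:
                iam.update_access_key(UserName=owner, AccessKeyId=key_id, Status="Inactive")
            sns.publish(
                TopicArn="arn:aws:sns:us-east-1:111122223333:security-alerts",  # hypothetical topic
                Message=f"Access key {key_id} detected in {repo}@{commit_id}",
            )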
A company is using AWS CodePipeline for the CI/CD of an application to an Amazon EC2 Auto Scaling group. All AWS resources are defined in
AWS
CloudFormation templates. The application artifacts are stored in an Amazon S3 bucket and deployed to the Auto Scaling group using instance
user data scripts.
As the application has become more complex, recent resource changes in the CloudFormation templates have caused unplanned downtime.
How should a solutions architect improve the CI/CD pipeline to reduce the likelihood that changes in the templates will cause downtime?
A. Adapt the deployment scripts to detect and report CloudFormation error conditions when performing deployments. Write test plans for a
testing team to execute in a non-production environment before approving the change for production.
B. Implement automated testing using AWS CodeBuild in a test environment. Use CloudFormation change sets to evaluate changes before
deployment. Use AWS CodeDeploy to leverage blue/green deployment patterns to allow evaluations and the ability to revert changes, if
needed.
C. Use plugins for the integrated development environment (IDE) to check the templates for errors, and use the AWS CLI to validate that the
templates are correct. Adapt the deployment code to check for error conditions and generate notifications on errors. Deploy to a test
environment and execute a manual test plan before approving the change for production.
D. Use AWS CodeDeploy and a blue/green deployment pattern with CloudFormation to replace the user data deployment scripts. Have the
operators log in to running instances and go through a manual test plan to verify the application is running as expected.
Correct Answer: D
https://aws.amazon.com/blogs/devops/performing-bluegreen-deployments-with-aws-codedeploy-and-auto-scaling-groups/
upvoted 21 times
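The change-set step in option B can be scripted inside the pipeline: create a change set, inspect what would change, then execute only after the review passes. A minimal sketch with hypothetical stack and template names:

import boto3

cfn = boto3.client("cloudformation")

# Create a change set instead of updating the stack directly.
cfn.create_change_set(
    StackName="web-app-prod",
    ChangeSetName="release-2024-01-15",
    TemplateURL="https://s3.amazonaws.com/artifacts-example/template.yaml",  # hypothetical artifact location
    Capabilities=["CAPABILITY_IAM"],
)
cfn.get_waiter("change_set_create_complete").wait(
    StackName="web-app-prod", ChangeSetName="release-2024-01-15"
)

# Review the planned actions (Add/Modify/Remove) before anything is touched.
for change in cfn.describe_change_set(StackName="web-app-prod", ChangeSetName="release-2024-01-15")["Changes"]:
    print(change["ResourceChange"]["Action"], change["ResourceChange"]["LogicalResourceId"])

# Only execute once the review (or an automated policy check) passes.
cfn.execute_change_set(StackName="web-app-prod", ChangeSetName="release-2024-01-15")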
A financial services company is moving to AWS and wants to enable developers to experiment and innovate while preventing access to production
applications.
The company has the following requirements:
✑ Production workloads cannot be directly connected to the internet.
✑ All workloads must be restricted to the us-west-2 and eu-central-1 Regions.
✑ Notification should be sent when developer sandboxes exceed $500 in AWS spending monthly.
Which combination of actions needs to be taken to create a multi-account structure that meets the company's requirements? (Choose three.)
A. Create accounts for each production workload within an organization in AWS Organizations. Place the production accounts within an
organizational unit (OU). For each account, delete the default VPC. Create an SCP with a Deny rule for the attach an internet gateway and
create a default VPC actions. Attach the SCP to the OU for the production accounts.
B. Create accounts for each production workload within an organization in AWS Organizations. Place the production accounts within an
organizational unit (OU). Create an SCP with a Deny rule on the attach an internet gateway action. Create an SCP with a Deny rule to prevent
use of the default VPC. Attach the SCPs to the OU for the production accounts.
C. Create an SCP containing a Deny Effect for cloudfront:*, iam:*, route53:*, and support:* with a StringNotEquals condition on an
aws:RequestedRegion condition key with us-west-2 and eu-central-1 values. Attach the SCP to the organization's root.
D. Create an IAM permission boundary containing a Deny Effect for cloudfront:*, iam:*, route53:*, and support:* with a StringNotEquals
condition on an aws:RequestedRegion condition key with us-west-2 and eu-central-1 values. Attach the permission boundary to an IAM group
containing the development and production users.
E. Create accounts for each development workload within an organization in AWS Organizations. Place the development accounts within an
organizational unit (OU). Create a custom AWS Config rule to deactivate all IAM users when an account's monthly bill exceeds $500.
F. Create accounts for each development workload within an organization in AWS Organizations. Place the development accounts within an
organizational unit (OU). Create a budget within AWS Budgets for each development account to monitor and report on monthly spending
exceeding $500.
In conclusion, ACF
upvoted 7 times
"and create a default VPC actions. Create an SCP with a Deny rule to prevent use of the default VPC"
Obviousy it can be understood that "create default vpc actions" means the default vpc for the prod environment....
And when it is said that..."Create an SCP with a Deny rule to prevent use of the default VPC"... It can be understood that it is talking about th
original "default VPC" no the new one... isn´t it?
In any case It is too much "It can be understood"... So I go for ACF, nobody will use never that VPC so I for me it has more sense cleaning the
entire network structure of prod (consdering B syntax).
upvoted 1 times
" # Ebi Highly Voted $ 1 year ago
ACF is the right answer.
B cannot be the answer; there is no way to have a single SCP at the OU or root level that denies use of the default VPC in each account.
upvoted 23 times
B(wrong): "Create an SCP with a Deny rule to prevent use of the default VPC." It is impossible to do this.
D(wrong): Permission boundary can only be attached to user or role, rather than IAM group.
E(wrong): Obviously wrong. AWS Budgets should be used.
upvoted 2 times
" # tkanmani76 9 months, 3 weeks ago
A - Why not B? I tried searching SCPs for VPC actions: we can deny creation of the default VPC (CreateDefaultVpc), but there is no action to stop
using it. So the only way is to delete it.
D - Why not C? Per AWS, it is not good practice to attach an SCP to the root.
F - No contention with E here.
upvoted 1 times
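For reference, the region-restriction SCP described in option C generally follows the pattern AWS documents: a Deny on all actions except a short list of global services when the requested Region is outside the allowed set. The sketch below builds that policy as a Python dict and attaches it with boto3; the policy name, the exact exemption list, and the root ID are assumptions for illustration.

# Sketch: SCP that denies actions outside us-west-2 and eu-central-1, exempting a few global services.
# Policy name, exemption list, and root ID are illustrative assumptions.
import json
import boto3

region_restriction = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyOutsideAllowedRegions",
            "Effect": "Deny",
            "NotAction": ["cloudfront:*", "iam:*", "route53:*", "support:*"],
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {"aws:RequestedRegion": ["us-west-2", "eu-central-1"]}
            },
        }
    ],
}

org = boto3.client("organizations")
policy = org.create_policy(
    Content=json.dumps(region_restriction),
    Description="Restrict workloads to us-west-2 and eu-central-1",
    Name="region-restriction",
    Type="SERVICE_CONTROL_POLICY",
)
# Attach to the organization root (placeholder root ID).
org.attach_policy(PolicyId=policy["Policy"]["PolicySummary"]["Id"], TargetId="r-examplerootid")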
A company is hosting a three-tier web application in an on-premises environment. Due to a recent surge in traffic that resulted in downtime and a
significant financial impact, company management has ordered that the application be moved to AWS. The application is written in .NET and has a
dependency on a MySQL database. A solutions architect must design a scalable and highly available solution to meet the demand of 200,000 daily
users.
Which steps should the solutions architect take to design an appropriate solution?
A. Use AWS Elastic Beanstalk to create a new application with a web server environment and an Amazon RDS MySQL Multi-AZ DB instance.
The environment should launch a Network Load Balancer (NLB) in front of an Amazon EC2 Auto Scaling group in multiple Availability Zones.
Use an Amazon Route 53 alias record to route traffic from the company's domain to the NLB.
B. Use AWS CloudFormation to launch a stack containing an Application Load Balancer (ALB) in front of an Amazon EC2 Auto Scaling group
spanning three Availability Zones. The stack should launch a Multi-AZ deployment of an Amazon Aurora MySQL DB cluster with a Retain
deletion policy. Use an Amazon Route 53 alias record to route traffic from the company's domain to the ALB.
C. Use AWS Elastic Beanstalk to create an automatically scaling web server environment that spans two separate Regions with an Application
Load Balancer (ALB) in each Region. Create a Multi-AZ deployment of an Amazon Aurora MySQL DB cluster with a cross-Region read replica.
Use Amazon Route 53 with a geoproximity routing policy to route traffic between the two Regions.
D. Use AWS CloudFormation to launch a stack containing an Application Load Balancer (ALB) in front of an Amazon ECS cluster of Spot
instances spanning three Availability Zones. The stack should launch an Amazon RDS MySQL DB instance with a Snapshot deletion policy. Use
an Amazon Route 53 alias record to route traffic from the company's domain to the ALB.
Correct Answer: A
The question focuses on HA.
Amazon Aurora is designed to spread storage across three AZs => more HA than RDS alone.
upvoted 1 times
" # Sathish1412 2 months, 1 week ago
B is the best option for this requirement.
upvoted 1 times
The daily demand of 200,000 users is well within what a Network Load Balancer can handle: it is capable of handling millions of requests per second
while maintaining ultra-low latencies.
https://aws.amazon.com/elasticloadbalancing/network-load-balancer/
upvoted 2 times
You can also add an Amazon RDS DB instance to your .NET application environment.
upvoted 1 times
A solutions architect is designing a publicly accessible web application that is on an Amazon CloudFront distribution with an Amazon S3 website
endpoint as the origin. When the solution is deployed, the website returns an Error 403: Access Denied message.
Which steps should the solutions architect take to correct the issue? (Choose two.)
C. Remove the origin access identity (OAI) from the CloudFront distribution.
D. Change the storage class from S3 Standard to S3 One Zone-Infrequent Access (S3 One Zone-IA).
Correct Answer: AC
https://docs.aws.amazon.com/AmazonS3/latest/userguide/RequesterPaysBuckets.html
upvoted 1 times
" # Hari008 7 months, 3 weeks ago
Here the key phrase is "publicly accessible"; I will go with A & C.
upvoted 1 times
The question says it is using an S3 website endpoint. An OAI can only be used when CloudFront accesses an S3 REST API endpoint, so removing the
OAI would fix this problem.
- Using a REST API endpoint as the origin, with access restricted by an origin access identity (OAI)
- Using a website endpoint as the origin, with anonymous (public) access allowed
- Using a website endpoint as the origin, with access restricted by a Referer header
upvoted 5 times
A web application is hosted in a dedicated VPC that is connected to a company's on-premises data center over a Site-to-Site VPN connection. The
application is accessible from the company network only. This is a temporary non-production application that is used during business hours. The
workload is generally low with occasional surges.
The application has an Amazon Aurora MySQL provisioned database cluster on the backend. The VPC has an internet gateway and NAT
gateways attached.
The web servers are in private subnets in an Auto Scaling group behind an Elastic Load Balancer. The web servers also upload data to an Amazon
S3 bucket through the internet.
A solutions architect needs to reduce operational costs and simplify the architecture.
Which strategy should the solutions architect use?
A. Review the Auto Scaling group settings and ensure the scheduled actions are specified to operate the Amazon EC2 instances during
business hours only. Use 3-year scheduled Reserved Instances for the web server EC2 instances. Detach the internet gateway and remove the
NAT gateways from the VPC. Use an Aurora Serverless database and set up a VPC endpoint for the S3 bucket.
B. Review the Auto Scaling group settings and ensure the scheduled actions are specified to operate the Amazon EC2 instances during
business hours only. Detach the internet gateway and remove the NAT gateways from the VPC. Use an Aurora Serverless database and set up
a VPC endpoint for the S3 bucket, then update the network routing and security rules and policies related to the changes.
C. Review the Auto Scaling group settings and ensure the scheduled actions are specified to operate the Amazon EC2 instances during
business hours only. Detach the internet gateway from the VPC, and use an Aurora Serverless database. Set up a VPC endpoint for the S3
bucket, then update the network routing and security rules and policies related to the changes.
D. Use 3-year scheduled Reserved Instances for the web server Amazon EC2 instances. Remove the NAT gateways from the VPC, and set up a
VPC endpoint for the S3 bucket. Use Amazon CloudWatch and AWS Lambda to stop and start the Aurora DB cluster so it operates during
business hours only. Update the network routing and security rules and policies related to the changes.
Correct Answer: C
This link shows you how to create a site-to-site VPN connection to your AWS VPCs. No internet gateway or NAT gateway is required
upvoted 1 times
B is the right answer. A and D are out because scheduled reserved instances are not required as it is a temporary application. C is identical to B
but it keeps the NAT Gateway which has extra unnecessary cost when we are using VPC endpoint to talk to S3.
upvoted 2 times
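Since answers B and C both hinge on replacing the NAT path to S3 with a gateway VPC endpoint, here is a hedged boto3 sketch of creating one; the VPC ID, Region, and route table ID are placeholders.

# Sketch: create a gateway VPC endpoint so web servers reach S3 without a NAT gateway.
# VPC ID, the Region in the service name, and the route table ID are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

resp = ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],  # route table(s) of the private web subnets
)
print(resp["VpcEndpoint"]["VpcEndpointId"])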
A company plans to refactor a monolithic application into a modern application design deployed on AWS. The CI/CD pipeline needs to be
upgraded to support the modern design for the application with the following requirements:
✑ It should allow changes to be released several times every hour.
✑ It should be able to roll back the changes as quickly as possible.
Which design will meet these requirements?
A. Deploy a CI/CD pipeline that incorporates AMIs to contain the application and their configurations. Deploy the application by replacing
Amazon EC2 instances.
B. Specify AWS Elastic Beanstalk to stage in a secondary environment as the deployment target for the CI/CD pipeline of the application. To
deploy, swap the staging and production environment URLs.
C. Use AWS Systems Manager to re-provision the infrastructure for each deployment. Update the Amazon EC2 user data to pull the latest code
artifact from Amazon S3 and use Amazon Route 53 weighted routing to point to the new environment.
D. Roll out the application updates as part of an Auto Scaling event using prebuilt AMIs. Use new versions of the AMIs to add instances, and
phase out all instances that use the previous AMI version with the configured termination policy during a deployment event.
Correct Answer: A
going with B
upvoted 2 times
" # Ebi 1 year ago
Although there is no clarification of the platform or development environment used, the closest answer here is B.
upvoted 3 times
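To make option B's swap step concrete, the boto3 call below swaps the CNAMEs of a staging and production Elastic Beanstalk environment; rolling back is simply swapping again. The environment names are hypothetical.

# Sketch: Elastic Beanstalk blue/green cutover by swapping environment CNAMEs (option B).
# Environment names are hypothetical; rolling back is just running the swap again.
import boto3

eb = boto3.client("elasticbeanstalk")
eb.swap_environment_cnames(
    SourceEnvironmentName="myapp-staging",
    DestinationEnvironmentName="myapp-prod",
)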
A company currently has data hosted in an IBM Db2 database. A web application calls an API that runs stored procedures on the database to
retrieve user information data that is read-only. This data is historical in nature and changes on a daily basis. When a user logs in to the
application, this data needs to be retrieved within 3 seconds. Each time a user logs in, the stored procedures run. Users log in several times a day
to check stock prices.
Running this database has become cost-prohibitive due to Db2 CPU licensing. Performance goals are not being met. Timeouts from Db2 are
common due to long-running queries.
Which approach should a solutions architect take to migrate this solution to AWS?
A. Rehost the Db2 database in Amazon Fargate. Migrate all the data. Enable caching in Fargate. Refactor the API to use the Fargate Db2
database. Implement Amazon API Gateway and enable API caching.
B. Use AWS DMS to migrate data to Amazon DynamoDB using a continuous replication task. Refactor the API to use the DynamoDB data.
Implement the refactored API in Amazon API Gateway and enable API caching.
C. Create a local cache on the mainframe to store query outputs. Use SFTP to sync to Amazon S3 on a daily basis. Refactor the API to use
Amazon EFS. Implement Amazon API Gateway and enable API caching.
D. Extract data daily and copy the data to AWS Snowball for storage on Amazon S3. Sync daily. Refactor the API to use the S3 data. Implement
Amazon API Gateway and enable API caching.
Correct Answer: A
A company is planning to deploy a new business analytics application that requires 10,000 hours of compute time each month. The compute
resources can have flexible availability, but must be as cost-effective as possible. The company will also provide a reporting service to distribute
analytics reports, which needs to run at all times.
How should the Solutions Architect design a solution that meets these requirements?
A. Deploy the reporting service on a Spot Fleet. Deploy the analytics application as a container in Amazon ECS with AWS Fargate as the
compute option. Set the analytics application to use a custom metric with Service Auto Scaling.
B. Deploy the reporting service on an On-Demand Instance. Deploy the analytics application as a container in AWS Batch with AWS Fargate as
the compute option. Set the analytics application to use a custom metric with Service Auto Scaling.
C. Deploy the reporting service as a container in Amazon ECS with AWS Fargate as the compute option. Deploy the analytics application on a
Spot Fleet. Set the analytics application to use a custom metric with Amazon EC2 Auto Scaling applied to the Spot Fleet.
D. Deploy the reporting service as a container in Amazon ECS with AWS Fargate as the compute option. Deploy the analytics application on an
On-Demand Instance and purchase a Reserved Instance with a 3-year term. Set the analytics application to use a custom metric with Amazon
EC2 Auto Scaling applied to the On-Demand Instance.
Correct Answer: C
So pretty much both answers are valid. But consider the business perspective: it's a new application. Would you want to commit yourself
for the next 3 years with an unknown outcome? Sure, you can modify or resell them later, but still.
I'd choose C.
upvoted 2 times
A company is migrating its three-tier web application from on-premises to the AWS Cloud. The company has the following requirements for the
migration process:
✑ Ingest machine images from the on-premises environment.
✑ Synchronize changes from the on-premises environment to the AWS environment until the production cutover.
✑ Minimize downtime when executing the production cutover.
✑ Migrate the virtual machines' root volumes and data volumes.
Which solution will satisfy these requirements with minimal operational overhead?
A. Use AWS Server Migration Service (SMS) to create and launch a replication job for each tier of the application. Launch instances from the
AMIs created by AWS SMS. After initial testing, perform a final replication and create new instances from the updated AMIs.
B. Create an AWS CLI VM Import/Export script to migrate each virtual machine. Schedule the script to run incrementally to maintain changes
in the application. Launch instances from the AMIs created by VM Import/Export. Once testing is done, rerun the script to do a final import
and launch the instances from the AMIs.
C. Use AWS Server Migration Service (SMS) to upload the operating system volumes. Use the AWS CLI import-snapshot command for the data
volumes. Launch instances from the AMIs created by AWS SMS and attach the data volumes to the instances. After initial testing, perform a
final replication, launch new instances from the replicated AMIs, and attach the data volumes to the instances.
D. Use AWS Application Discovery Service and AWS Migration Hub to group the virtual machines as an application. Use the AWS CLI VM
Import/Export script to import the virtual machines as AMIs. Schedule the script to run incrementally to maintain changes in the application.
Launch instances from the AMIs. After initial testing, perform a final virtual machine import and launch new instances from the AMIs.
Correct Answer: B
The right option is A.
upvoted 1 times
An enterprise company's data science team wants to provide a safe, cost-effective way to provide easy access to Amazon SageMaker. The data
scientists have limited AWS knowledge and need to be able to launch a Jupyter notebook instance. The notebook instance needs to have a
preconfigured AWS KMS key to encrypt data at rest on the machine learning storage volume without exposing the complex setup requirements.
Which approach will allow the company to set up a self-service mechanism for the data scientists to launch Jupyter notebooks in its AWS
accounts with the
LEAST amount of operational overhead?
A. Create a serverless front end using a static Amazon S3 website to allow the data scientists to request a Jupyter notebook instance by filling
out a form. Use Amazon API Gateway to receive requests from the S3 website and trigger a central AWS Lambda function to make an API call
to Amazon SageMaker that will launch a notebook instance with a preconfigured KMS key for the data scientists. Then call back to the front-
end website to display the URL to the notebook instance.
B. Create an AWS CloudFormation template to launch a Jupyter notebook instance using the AWS::SageMaker::NotebookInstance resource
type with a preconfigured KMS key. Add a user-friendly name to the CloudFormation template. Display the URL to the notebook using the
Outputs section. Distribute the CloudFormation template to the data scientists using a shared Amazon S3 bucket.
C. Create an AWS CloudFormation template to launch a Jupyter notebook instance using the AWS::SageMaker::NotebookInstance resource
type with a preconfigured KMS key. Simplify the parameter names, such as the instance size, by mapping them to Small, Large, and X-Large
using the Mappings section in CloudFormation. Display the URL to the notebook using the Outputs section, then upload the template into an
AWS Service Catalog product in the data scientist's portfolio, and share it with the data scientist's IAM role.
D. Create an AWS CLI script that the data scientists can run locally. Provide step-by-step instructions about the parameters to be provided
while executing the AWS CLI script to launch a Jupyter notebook with a preconfigured KMS key. Distribute the CLI script to the data scientists
using a shared Amazon S3 bucket.
Correct Answer: B
upvoted 2 times
" # AzureDP900 11 months ago
I will go with C
upvoted 1 times
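Underneath the AWS::SageMaker::NotebookInstance resource used in options B and C is a create-notebook-instance call that accepts the KMS key; the boto3 sketch below shows the equivalent API call with placeholder names, role ARN, and key ID.

# Sketch: the API call that the AWS::SageMaker::NotebookInstance CloudFormation resource wraps.
# Notebook name, role ARN, and KMS key ID are placeholders.
import boto3

sm = boto3.client("sagemaker")
sm.create_notebook_instance(
    NotebookInstanceName="data-science-notebook",
    InstanceType="ml.t3.medium",
    RoleArn="arn:aws:iam::111122223333:role/SageMakerExecutionRole",
    KmsKeyId="arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
    VolumeSizeInGB=20,  # ML storage volume encrypted with the key above
)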
A company is migrating its applications to AWS. The applications will be deployed to AWS accounts owned by business units. The company has
several teams of developers who are responsible for the development and maintenance of all applications. The company is expecting rapid growth
in the number of users.
The company's chief technology officer has the following requirements:
✑ Developers must launch the AWS infrastructure using AWS CloudFormation.
Developers must not be able to create resources outside of CloudFormation.
A. Using CloudFormation, create an IAM role that can be assumed by CloudFormation that has permissions to create all the resources the
company needs. Use CloudFormation StackSets to deploy this template to each AWS account.
B. In a central account, create an IAM role that can be assumed by developers, and attach a policy that allows interaction with
CloudFormation. Modify the AssumeRolePolicyDocument action to allow the IAM role to be passed to CloudFormation.
C. Using CloudFormation, create an IAM role that can be assumed by developers, and attach policies that allow interaction with and passing a
role to CloudFormation. Attach an inline policy to deny access to all other AWS services. Use CloudFormation StackSets to deploy this
template to each AWS account.
D. Using CloudFormation, create an IAM role for each developer, and attach policies that allow interaction with CloudFormation. Use
CloudFormation StackSets to deploy this template to each AWS account.
E. In a central AWS account, create an IAM role that can be assumed by CloudFormation that has permissions to create the resources the
company requires. Create a CloudFormation stack policy that allows the IAM role to manage resources. Use CloudFormation StackSets to
deploy the CloudFormation stack policy to each AWS account.
Correct Answer: CE
Reference:
https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_boundaries.html
"AssumeRolePolicyDocument
The trust policy that is associated with this role. Trust policies define which entities can assume the role."
upvoted 1 times
" # tomosabc1 1 month ago
Selected Answer: AC
The answer is AC.
B(wrong):"Modify the AssumeRolePolicyDocument action to allow the IAM role to be passed to CloudFormation." => this sentence is wrong.
"AssumeRolePolicyDocument
The trust policy that is associated with this role. Trust policies define which entities can assume the role."
We need to use iam:Passrole to pass the role from developer to cloudformation.
D(wrong): "create an IAM role for each developer". This sentence is wrong.
E(wrong): The newly created role in the central account cannot be used directly by CloudFormation to create resources in other accounts. In addition,
similar to an S3 bucket policy, a CloudFormation stack policy is used to control who can update the stack, rather than allowing the stack to
create/manage AWS resources.
upvoted 1 times
A media company has a static web application that is generated programmatically. The company has a build pipeline that generates HTML
content that is uploaded to an Amazon S3 bucket served by Amazon CloudFront. The build pipeline runs inside a Build Account. The S3 bucket and
CloudFront distribution are in a Distribution Account. The build pipeline uploads the files to Amazon S3 using an IAM role in the Build Account.
The S3 bucket has a bucket policy that only allows CloudFront to read objects using an origin access identity (OAI). During testing all attempts to
access the application using the CloudFront URL result in an
HTTP 403 Access Denied response.
What should a solutions architect suggest to the company to allow access to the objects in Amazon S3 through CloudFront?
A. Modify the S3 upload process in the Build Account to add the bucket-owner-full-control ACL to the objects at upload.
B. Create a new cross-account IAM role in the Distribution Account with write access to the S3 bucket. Modify the build pipeline to assume
this role to upload the files to the Distribution Account.
C. Modify the S3 upload process in the Build Account to set the object owner to the Distribution Account.
D. Create a new IAM role in the Distribution Account with read access to the S3 bucket. Configure CloudFront to use this new role as its OAI.
Modify the build pipeline to assume this role when uploading files from the Build Account.
Correct Answer: B
The cross-account role makes the Distribution Account the owner of the uploaded objects.
upvoted 1 times
https://docs.aws.amazon.com/AmazonS3/latest/userguide/about-object-ownership.html#object-ownership-replication
upvoted 1 times
One of those AWS questions that evaluates the ability to pick the BEST answer, not just a right one.
upvoted 1 times
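Option A boils down to adding one ACL to the cross-account upload. A minimal boto3 sketch, with the bucket, key, and local file names as placeholders:

# Sketch for option A: upload from the Build Account with bucket-owner-full-control
# so the Distribution Account owns the object and the OAI can read it.
# Bucket, key, and local file name are placeholders.
import boto3

s3 = boto3.client("s3")
with open("index.html", "rb") as f:
    s3.put_object(
        Bucket="distribution-account-site-bucket",
        Key="index.html",
        Body=f,
        ACL="bucket-owner-full-control",
        ContentType="text/html",
    )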
A company has built a high performance computing (HPC) cluster in AWS for a tightly coupled workload that generates a large number of shared
files stored in
Amazon EFS. The cluster was performing well when the number of Amazon EC2 instances in the cluster was 100. However, when the company
increased the cluster size to 1,000 EC2 instances, overall performance was well below expectations.
Which collection of design choices should a solutions architect make to achieve the maximum performance from the HPC cluster? (Choose
three.)
B. Launch the EC2 instances and attach elastic network interfaces in multiples of four.
C. Select EC2 instance types with an Elastic Fabric Adapter (EFA) enabled.
E. Replace Amazon EFS with multiple Amazon EBS volumes in a RAID array.
Cluster – packs instances close together inside an Availability Zone. This strategy enables workloads to achieve the low-latency network
performance necessary for tightly-coupled node-to-node communication that is typical of HPC applications.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html
upvoted 40 times
A company with multiple accounts is currently using a configuration that does not meet the following security governance policies:
✑ Prevent ingress from port 22 to any Amazon EC2 instance.
✑ Require billing and application tags for resources.
✑ Encrypt all Amazon EBS volumes.
A solutions architect wants to provide preventive and detective controls, including notifications about a specific resource, if there are policy
deviations.
Which solution should the solutions architect implement?
A. Create an AWS CodeCommit repository containing policy-compliant AWS CloudFormation templates. Create an AWS Service Catalog
portfolio. Import the CloudFormation templates by attaching the CodeCommit repository to the portfolio. Restrict users across all accounts to
items from the AWS Service Catalog portfolio. Use AWS Config managed rules to detect deviations from the policies. Configure an Amazon
CloudWatch Events rule for deviations, and associate a CloudWatch alarm to send notifications when the TriggeredRules metric is greater than
zero.
B. Use AWS Service Catalog to build a portfolio with products that are in compliance with the governance policies in a central account.
Restrict users across all accounts to AWS Service Catalog products. Share a compliant portfolio to other accounts. Use AWS Config managed
rules to detect deviations from the policies. Configure an Amazon CloudWatch Events rule to send a notification when a deviation occurs.
C. Implement policy-compliant AWS CloudFormation templates for each account, and ensure that all provisioning is completed by
CloudFormation. Configure Amazon Inspector to perform regular checks against resources. Perform policy validation and write the
assessment output to Amazon CloudWatch Logs. Create a CloudWatch Logs metric filter to increment a metric when a deviation occurs.
Configure a CloudWatch alarm to send notifications when the configured metric is greater than zero.
D. Restrict users and enforce least privilege access using AWS IAM. Consolidate all AWS CloudTrail logs into a single account. Send the
CloudTrail logs to Amazon Elasticsearch Service (Amazon ES). Implement monitoring, alerting, and reporting using the Kibana dashboard in
Amazon ES and with Amazon SNS.
Correct Answer: C
B is the correct answer.
upvoted 1 times
" # tgv 1 year ago
BBB
---
upvoted 2 times
A company is manually deploying its application to production and wants to move to a more mature deployment pattern. The company has asked
a solutions architect to design a solution that leverages its current Chef tools and knowledge. The application must be deployed to a staging
environment for testing and verification before being deployed to production. Any new deployment must be rolled back in 5 minutes if errors are
discovered after a deployment.
Which AWS service and deployment pattern should the solutions architect use to meet these requirements?
A. Use AWS Elastic Beanstalk and deploy the application using a rolling update deployment strategy.
B. Use AWS CodePipeline and deploy the application using a rolling update deployment strategy.
C. Use AWS CodeBuild and deploy the application using a canary deployment strategy.
D. Use AWS OpsWorks and deploy the application using a blue/green deployment strategy.
Correct Answer: A
upvoted 2 times
" # kopper2019 1 year ago
Chef = OpsWorks = D
upvoted 2 times
A company has been using a third-party provider for its content delivery network and recently decided to switch to Amazon CloudFront. The
development team wants to maximize performance for the global user base. The company uses a content management system (CMS) that serves
both static and dynamic content.
The CMS is behind an Application Load Balancer (ALB) which is set as the default origin for the distribution. Static assets are served from an
Amazon S3 bucket.
The Origin Access Identity (OAI) was created properly and the S3 bucket policy has been updated to allow the GetObject action from the OAI, but
static assets are receiving a 404 error.
Which combination of steps should the solutions architect take to fix the error? (Choose two.)
A. Add another origin to the CloudFront distribution for the static assets.
B. Add a path-based rule to the ALB to forward requests for the static assets.
C. Add an RTMP distribution to allow caching of both static and dynamic content.
D. Add a behavior to the CloudFront distribution for the path pattern and the origin of the static assets.
E. Add a host header condition to the ALB listener and forward the header from CloudFront to add traffic to the allow list.
Correct Answer: AB
upvoted 4 times
A financial services company logs personally identifiable information to its application logs stored in Amazon S3. Due to regulatory compliance
requirements, the log files must be encrypted at rest. The security team has mandated that the company's on-premises hardware security modules
(HSMs) be used to generate the
CMK material.
Which steps should the solutions architect take to meet these requirements?
A. Create an AWS CloudHSM cluster. Create a new CMK in AWS KMS using AWS_CloudHSM as the source for the key material and an origin of
AWS_CLOUDHSM. Enable automatic key rotation on the CMK with a duration of 1 year. Configure a bucket policy on the logging bucket that
disallows uploads of unencrypted data and requires that the encryption source be AWS KMS.
B. Provision an AWS Direct Connect connection, ensuring there is no overlap of the RFC 1918 address space between on-premises hardware
and the VPCs. Configure an AWS bucket policy on the logging bucket that requires all objects to be encrypted. Configure the logging
application to query the on-premises HSMs from the AWS environment for the encryption key material, and create a unique CMK for each
logging event.
C. Create a CMK in AWS KMS with no key material and an origin of EXTERNAL. Import the key material generated from the on-premises HSMs
into the CMK using the public key and import token provided by AWS. Configure a bucket policy on the logging bucket that disallows uploads
of non-encrypted data and requires that the encryption source be AWS KMS.
D. Create a new CMK in AWS KMS with AWS-provided key material and an origin of AWS_KMS. Disable this CMK, and overwrite the key
material with the key material from the on-premises HSM using the public key and import token provided by AWS. Re-enable the CMK. Enable
automatic key rotation on the CMK with a duration of 1 year. Configure a bucket policy on the logging bucket that disallows uploads of non-
encrypted data and requires that the encryption source be AWS KMS.
Correct Answer: D
Selected Answer: C
C,https://aws.amazon.com/blogs/security/how-to-byok-bring-your-own-key-to-aws-kms-for-less-than-15-00-a-year-using-aws-cloudhsm/
upvoted 2 times
" # tgv 1 year ago
CCC
---
upvoted 1 times
If successful, you’ll see an output on the CLI similar to below. The KeyState will be PendingImport and the Origin will be EXTERNAL.
upvoted 5 times
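The BYOK flow in option C maps to three KMS API calls: create a CMK with no key material, fetch the wrapping public key and import token, then import the key material that was wrapped on the on-premises HSM. A hedged boto3 sketch follows; the wrapped-material file name is a placeholder, and the wrapping step on the HSM is out of scope here.

# Sketch for option C: create a CMK with EXTERNAL origin and import on-premises key material.
# 'wrapped_key_material.bin' is assumed to have been produced by the on-premises HSM
# using the public key returned by get_parameters_for_import.
import boto3

kms = boto3.client("kms")

key_id = kms.create_key(
    Description="CMK with imported key material for S3 log encryption",
    Origin="EXTERNAL",
)["KeyMetadata"]["KeyId"]

params = kms.get_parameters_for_import(
    KeyId=key_id,
    WrappingAlgorithm="RSAES_OAEP_SHA_256",
    WrappingKeySpec="RSA_2048",
)
# params["PublicKey"] is handed to the on-premises HSM to wrap the generated key material.

with open("wrapped_key_material.bin", "rb") as f:
    kms.import_key_material(
        KeyId=key_id,
        ImportToken=params["ImportToken"],
        EncryptedKeyMaterial=f.read(),
        ExpirationModel="KEY_MATERIAL_DOES_NOT_EXPIRE",
    )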
A solutions architect is implementing infrastructure as code for a two-tier web application in an AWS CloudFormation template. The web frontend
application will be deployed on Amazon EC2 instances in an Auto Scaling group. The backend database will be an Amazon RDS for MySQL DB
instance. The database password will be rotated every 60 days.
How can the solutions architect MOST securely manage the configuration of the application's database credentials?
A. Provide the database password as a parameter in the CloudFormation template. Create an initialization script in the Auto Scaling group's
launch configuration UserData property to reference the password parameter using the Ref intrinsic function. Store the password on the EC2
instances. Reference the parameter for the value of the MasterUserPassword property in the AWS::RDS::DBInstance resource using the Ref
intrinsic function.
B. Create a new AWS Secrets Manager secret resource in the CloudFormation template to be used as the database password. Configure the
application to retrieve the password from Secrets Manager when needed. Reference the secret resource for the value of the
MasterUserPassword property in the AWS::RDS::DBInstance resource using a dynamic reference.
C. Create a new AWS Secrets Manager secret resource in the CloudFormation template to be used as the database password. Create an
initialization script in the Auto Scaling group's launch configuration UserData property to reference the secret resource using the Ref intrinsic
function. Reference the secret resource for the value of the MasterUserPassword property in the AWS::RDS::DBInstance resource using the Ref
intrinsic function.
D. Create a new AWS Systems Manager Parameter Store parameter in the CloudFormation template to be used as the database password.
Create an initialization script in the Auto Scaling group's launch configuration UserData property to reference the parameter. Reference the
parameter for the value of the MasterUserPassword property in the AWS::RDS::DBInstance resource using the Fn::GetAtt intrinsic function.
Correct Answer: D
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/dynamic-references.html
But I'm not saying you're wrong; it appears from the document you referenced that you definitely CAN do this with the Ref function as well.
So it appears B and C are both feasible answers. It would come down to which one you think is the better answer, and that might be a matter
of personal preference?
upvoted 1 times
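For what it's worth, option B's dynamic reference does not go through Ref at all; the MasterUserPassword is resolved at deploy time with the documented {{resolve:secretsmanager:...}} syntax. A rough sketch of the relevant resource properties, expressed as a Python dict (the secret logical name and the identifier values are placeholders):

# Sketch of option B: RDS resource properties using a Secrets Manager dynamic reference.
# "AppDBSecret" and the identifier values are placeholders for this illustration.
db_instance_properties = {
    "Type": "AWS::RDS::DBInstance",
    "Properties": {
        "Engine": "mysql",
        "DBInstanceClass": "db.t3.medium",
        "AllocatedStorage": "20",
        "MasterUsername": "admin",
        # Resolved from the AWS::SecretsManager::Secret resource at deploy time.
        "MasterUserPassword": "{{resolve:secretsmanager:AppDBSecret:SecretString:password}}",
    },
}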
C. Configure an organization-level CloudTrail in the parent account to deliver log events to a central S3 bucket. Migrate the existing CloudTrail
logs from each member account to the central S3 bucket. Delete the existing CloudTrail and logs in the member accounts.
D. Configure an organization-level CloudTrail in the parent account to deliver log events to a central S3 bucket. Configure CloudTrail in each
member account to deliver log events to the central S3 bucket.
upvoted 1 times
A company built an application based on AWS Lambda deployed in an AWS CloudFormation stack. The last production release of the web
application introduced an issue that resulted in an outage lasting several minutes. A solutions architect must adjust the deployment process to
support a canary release.
Which solution will meet these requirements?
A. Create an alias for every new deployed version of the Lambda function. Use the AWS CLI update-alias command with the routing-config
parameter to distribute the load.
B. Deploy the application into a new CloudFormation stack. Use an Amazon Route 53 weighted routing policy to distribute the load.
C. Create a version for every new deployed Lambda function. Use the AWS CLI update-function-configuration command with the routing-config
parameter to distribute the load.
D. Configure AWS CodeDeploy and use CodeDeployDefault.OneAtATime in the Deployment configuration to distribute the load.
Correct Answer: C
Between A and C.
A is wrong because of "Create an alias for every new deployed version": the alias stays the same; only the weights between the versions behind the
alias differ. You point to the alias and then operate with versions.
C is wrong because you have to use update-alias instead of update-function-configuration.
https://docs.aws.amazon.com/lambda/latest/dg/configuration-aliases.html
upvoted 1 times
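To illustrate the routing-config mechanics being debated here, a hedged boto3 sketch that shifts 10% of an alias's traffic to a newly published version; the function and alias names are placeholders, and rolling back is just clearing AdditionalVersionWeights.

# Sketch: weighted alias routing for a Lambda canary (what "update-alias --routing-config" does).
# Function and alias names are placeholders.
import boto3

lam = boto3.client("lambda")

new_version = lam.publish_version(FunctionName="orders-api")["Version"]

lam.update_alias(
    FunctionName="orders-api",
    Name="live",
    FunctionVersion="1",  # current stable version keeps 90% of traffic
    RoutingConfig={"AdditionalVersionWeights": {new_version: 0.10}},
)

# Promote: point the alias fully at the new version and clear the extra weights.
# lam.update_alias(FunctionName="orders-api", Name="live",
#                  FunctionVersion=new_version, RoutingConfig={"AdditionalVersionWeights": {}})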
A manufacturing company is growing exponentially and has secured funding to improve its IT infrastructure and ecommerce presence. The
company's ecommerce platform consists of:
✑ Static assets primarily comprised of product images stored in Amazon S3.
✑ Amazon DynamoDB tables that store product information, user information, and order information.
✑ Web servers containing the application's front-end behind Elastic Load Balancers.
The company wants to set up a disaster recovery site in a separate Region.
Which combination of actions should the solutions architect take to implement the new design while meeting all the requirements? (Choose
three.)
A. Enable Amazon Route 53 health checks to determine if the primary site is down, and route traffic to the disaster recovery site if there is an
issue.
B. Enable Amazon S3 cross-Region replication on the buckets that contain static assets.
C. Enable multi-Region targets on the Elastic Load Balancer and target Amazon EC2 instances in both Regions.
E. Enable Amazon CloudWatch and create CloudWatch alarms that route traffic to the disaster recovery site when application latency exceeds
the desired threshold.
F. Enable Amazon S3 versioning on the source and destination buckets containing static assets to ensure there is a rollback version available
in the event of data corruption.
upvoted 1 times
However, the other picks are not correct either, so I would answer ABD :D
C: There are no multi-Region targets in ELB. However, you can load balance traffic with IP addresses, so you could do it.
upvoted 3 times
A company is developing a gene reporting device that will collect genomic information to assist researchers with collecting large samples of data
from a diverse population. The device will push 8 KB of genomic data every second to a data platform that will need to process and analyze the
data and provide information back to researchers. The data platform must meet the following requirements:
✑ Provide near-real-time analytics of the inbound genomic data
✑ Ensure the data is flexible, parallel, and durable
✑ Deliver results of processing to a data warehouse
Which strategy should a solutions architect use to meet these requirements?
A. Use Amazon Kinesis Data Firehose to collect the inbound sensor data, analyze the data with Kinesis clients, and save the results to an
Amazon RDS instance.
B. Use Amazon Kinesis Data Streams to collect the inbound sensor data, analyze the data with Kinesis clients, and save the results to an
Amazon Redshift cluster using Amazon EMR.
C. Use Amazon S3 to collect the inbound device data, analyze the data from Amazon SQS with Kinesis, and save the results to an Amazon
Redshift cluster.
D. Use an Amazon API Gateway to put requests into an Amazon SQS queue, analyze the data with an AWS Lambda function, and save the
results to an Amazon Redshift cluster using Amazon EMR.
Correct Answer: B
A company needs to move its on-premises resources to AWS. The current environment consists of 100 virtual machines (VMs) with a total of 40
TB of storage.
Most of the VMs can be taken offline because they support functions during business hours only; however, some are mission critical, so downtime
must be minimized.
The administrator of the on-premises network provisioned 10 Mbps of internet bandwidth for the migration. The on-premises network throughput
has reached capacity and would be costly to increase. A solutions architect must design a migration solution that can be performed within the
next 3 months.
Which method would fulfill these requirements?
A. Set up a 1 Gbps AWS Direct Connect connection. Then, provision a private virtual interface, and use AWS Server Migration Service (SMS) to
migrate the VMs into Amazon EC2.
B. Use AWS Application Discovery Service to assess each application, and determine how to refactor and optimize each using AWS services or
AWS Marketplace solutions.
C. Export the VMs locally, beginning with the most mission-critical servers first. Use AWS Transfer for SFTP to securely upload each VM to
Amazon S3 after they are exported. Use VM Import/Export to import the VMs into Amazon EC2.
D. Migrate mission-critical VMs with AWS SMS. Export the other VMs locally and transfer them to Amazon S3 using AWS Snowball. Use VM
Import/Export to import the VMs into Amazon EC2.
Correct Answer: A
A Direct Connect link is needed only for the migration period, so ordering one for just 3 months doesn't seem right; it's also a costly option. That rules
out A.
And refactoring 100 applications in 3 months doesn't sound right to me either. That rules out B.
So we're left with D. The problem with D is that the Snowball transfer also takes some time, but I guess it's OK for non-critical systems to be down for a week.
If we can keep using the on-premises servers while setting up the AWS instances and then transfer only the delta of the data, the downtime will be minimized.
upvoted 10 times
A company runs a popular public-facing ecommerce website. Its user base is growing quickly from a local market to a national market. The
website is hosted in an on-premises data center with web servers and a MySQL database. The company wants to migrate its workload to AWS. A
solutions architect needs to create a solution to:
✑ Improve security
✑ Improve reliability
✑ Improve availability
✑ Reduce latency
✑ Reduce maintenance
Which combination of steps should the solutions architect take to meet these requirements? (Choose three.)
A. Use Amazon EC2 instances in two Availability Zones for the web servers in an Auto Scaling group behind an Application Load Balancer.
C. Use Amazon EC2 instances in two Availability Zones to host a highly available MySQL database cluster.
D. Host static website content in Amazon S3. Use S3 Transfer Acceleration to reduce latency while serving webpages. Use AWS WAF to
improve website security.
E. Host static website content in Amazon S3. Use Amazon CloudFront to reduce latency while serving webpages. Use AWS WAF to improve
website security.
ABE for me. With this option, the web app and DB are highly available, and latency and security are covered by answer E.
upvoted 1 times
A company has an internal application running on AWS that is used to track and process shipments in the company's warehouse. Currently, after
the system receives an order, it emails the staff the information needed to ship a package. Once the package is shipped, the staff replies to the
email and the order is marked as shipped.
The company wants to stop using email in the application and move to a serverless application model.
Which architecture solution meets these requirements?
A. Use AWS Batch to configure the different tasks required to ship a package. Have AWS Batch trigger an AWS Lambda function that creates
and prints a shipping label. Once that label is scanned, as it leaves the warehouse, have another Lambda function move the process to the
next step in the AWS Batch job.
B. When a new order is created, store the order information in Amazon SQS. Have AWS Lambda check the queue every 5 minutes and process
any needed work. When an order needs to be shipped, have Lambda print the label in the warehouse. Once the label has been scanned, as it
leaves the warehouse, have an Amazon EC2 instance update Amazon SQS.
C. Update the application to store new order information in Amazon DynamoDB. When a new order is created, trigger an AWS Step Functions
workflow, mark the order as "in progress", and print a package label to the warehouse. Once the label has been scanned and fulfilled, the
application will trigger an AWS Lambda function that will mark the order as shipped and complete the workflow.
D. Store new order information in Amazon EFS. Have instances pull the new information from the NFS and send that information to printers in
the warehouse. Once the label has been scanned, as it leaves the warehouse, have Amazon API Gateway call the instances to remove the
order information from Amazon EFS.
Correct Answer: A
C
Step functions
upvoted 1 times
" # tgv 1 year ago
CCC
---
upvoted 1 times
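A very rough Amazon States Language sketch of the workflow described in option C, with hypothetical Lambda ARNs for the status-update and label-printing steps; the wait-for-scan step uses the documented task-token callback pattern.

# Sketch of option C's order workflow in Amazon States Language (Lambda ARNs are hypothetical).
import json
import boto3

definition = {
    "StartAt": "MarkInProgress",
    "States": {
        "MarkInProgress": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111122223333:function:mark-order-in-progress",
            "Next": "PrintLabelAndWaitForScan",
        },
        "PrintLabelAndWaitForScan": {
            # Callback pattern: the state pauses until the scanning app returns the task token.
            "Type": "Task",
            "Resource": "arn:aws:states:::lambda:invoke.waitForTaskToken",
            "Parameters": {
                "FunctionName": "print-shipping-label",
                "Payload": {"orderId.$": "$.orderId", "taskToken.$": "$$.Task.Token"},
            },
            "Next": "MarkShipped",
        },
        "MarkShipped": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111122223333:function:mark-order-shipped",
            "End": True,
        },
    },
}

sfn = boto3.client("stepfunctions")
sfn.create_state_machine(
    name="warehouse-shipping",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::111122223333:role/StepFunctionsShippingRole",  # hypothetical
)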
A company has developed a mobile game. The backend for the game runs on several virtual machines located in an on-premises data center. The
business logic is exposed using a REST API with multiple functions. Player session data is stored in central file storage. Backend services use
different API keys for throttling and to distinguish between live and test traffic.
The load on the game backend varies throughout the day. During peak hours, the server capacity is not sufficient. There are also latency issues
when fetching player session data. Management has asked a solutions architect to present a cloud architecture that can handle the game's
varying load and provide low-latency data access. The API model should not be changed.
Which solution meets these requirements?
A. Implement the REST API using a Network Load Balancer (NLB). Run the business logic on an Amazon EC2 instance behind the NLB. Store
player session data in Amazon Aurora Serverless.
B. Implement the REST API using an Application Load Balancer (ALB). Run the business logic in AWS Lambda. Store player session data in
Amazon DynamoDB with on-demand capacity.
C. Implement the REST API using Amazon API Gateway. Run the business logic in AWS Lambda. Store player session data in Amazon
DynamoDB with on- demand capacity.
D. Implement the REST API using AWS AppSync. Run the business logic in AWS Lambda. Store player session data in Amazon Aurora
Serverless.
Correct Answer: A
C is the answer.
upvoted 1 times
" # Waiweng 1 year ago
It's C.
upvoted 3 times
An enterprise company wants to allow its developers to purchase third-party software through AWS Marketplace. The company uses an AWS
Organizations account structure with full features enabled, and has a shared services account in each organizational unit (OU) that will be used by
procurement managers. The procurement team's policy indicates that developers should be able to obtain third-party software from an approved
list only and use Private Marketplace in AWS
Marketplace to achieve this requirement. The procurement team wants administration of Private Marketplace to be restricted to a role named
procurement-manager-role, which could be assumed by procurement managers. Other IAM users, groups, roles, and account administrators in the
company should be denied
Private Marketplace administrative access.
What is the MOST efficient way to design an architecture to meet these requirements?
A. Create an IAM role named procurement-manager-role in all AWS accounts in the organization. Add the PowerUserAccess managed policy to
the role. Apply an inline policy to all IAM users and roles in every AWS account to deny permissions on the
AWSPrivateMarketplaceAdminFullAccess managed policy.
B. Create an IAM role named procurement-manager-role in all AWS accounts in the organization. Add the AdministratorAccess managed policy
to the role. Define a permissions boundary with the AWSPrivateMarketplaceAdminFullAccess managed policy and attach it to all the developer
roles.
C. Create an IAM role named procurement-manager-role in all the shared services accounts in the organization. Add the
AWSPrivateMarketplaceAdminFullAccess managed policy to the role. Create an organization root-level SCP to deny permissions to administer
Private Marketplace to everyone except the role named procurement-manager-role. Create another organization root-level SCP to deny
permissions to create an IAM role named procurement-manager-role to everyone in the organization.
D. Create an IAM role named procurement-manager-role in all AWS accounts that will be used by developers. Add the
AWSPrivateMarketplaceAdminFullAccess managed policy to the role. Create an SCP in Organizations to deny permissions to administer
Private Marketplace to everyone except the role named procurement-manager-role. Apply the SCP to all the shared services accounts in the
organization.
Correct Answer: D
https://aws.amazon.com/blogs/awsmarketplace/controlling-access-to-a-well-architected-private-marketplace-using-iam-and-aws-organizations/
upvoted 19 times
I would choose D
upvoted 2 times
Selected Answer: C
C. SCP to deny permissions to administer Private Marketplace to everyone except the role named procurement-manager-role.
https://aws.amazon.com/blogs/awsmarketplace/controlling-access-to-a-well-architected-private-marketplace-using-iam-and-aws-organizations/
upvoted 1 times
" # andylogan 1 year ago
It's C
upvoted 1 times
A solutions architect is designing the data storage and retrieval architecture for a new application that a company will be launching soon. The
application is designed to ingest millions of small records per minute from devices all around the world. Each record is less than 4 KB in size and
needs to be stored in a durable location where it can be retrieved with low latency. The data is ephemeral and the company is required to store the
data for 120 days only, after which the data can be deleted.
The solutions architect calculates that, during the course of a year, the storage requirements would be about 10-15 TB.
Which storage strategy is the MOST cost-effective and meets the design requirements?
A. Design the application to store each incoming record as a single .csv file in an Amazon S3 bucket to allow for indexed retrieval. Configure a
lifecycle policy to delete data older than 120 days.
B. Design the application to store each incoming record in an Amazon DynamoDB table properly configured for the scale. Configure the
DynamoDB Time to Live (TTL) feature to delete records older than 120 days.
C. Design the application to store each incoming record in a single table in an Amazon RDS MySQL database. Run a nightly cron job that
executes a query to delete any records older than 120 days.
D. Design the application to batch incoming records before writing them to an Amazon S3 bucket. Update the metadata for the object to
contain the list of records in the batch and use the Amazon S3 metadata search feature to retrieve the data. Configure a lifecycle policy to
delete the data after 120 days.
Correct Answer: C
upvoted 1 times
Re: the PUT request header limit of 2 KB for user-defined metadata: that should be OK. You're not storing the 4 KB of data in the metadata; you'd
be combining multiple 4 KB data pieces into a very large flat file, and the metadata would only tell you which data pieces are in that very large flat
file.
upvoted 1 times
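To show what option B's TTL setup amounts to, a hedged boto3 sketch that enables TTL on a table and writes a record with an expiry timestamp 120 days out; the table and attribute names are placeholders.

# Sketch for option B: enable DynamoDB TTL and write records that expire after 120 days.
# Table name and attribute names are placeholders.
import time
import boto3

TABLE = "device-records"
dynamodb = boto3.client("dynamodb")

dynamodb.update_time_to_live(
    TableName=TABLE,
    TimeToLiveSpecification={"Enabled": True, "AttributeName": "expires_at"},
)

expires_at = int(time.time()) + 120 * 24 * 3600  # epoch seconds, 120 days from now
dynamodb.put_item(
    TableName=TABLE,
    Item={
        "device_id": {"S": "sensor-0001"},
        "recorded_at": {"N": str(int(time.time()))},
        "payload": {"B": b"...up to 4 KB of record data..."},
        "expires_at": {"N": str(expires_at)},  # DynamoDB deletes the item after this time
    },
)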
A company provides auction services for artwork and has users across North America and Europe. The company hosts its application in Amazon
EC2 instances in the us-east-1 Region. Artists upload photos of their work as large-size, high-resolution image files from their mobile phones to a
centralized Amazon S3 bucket created in the us-east-1 Region. The users in Europe are reporting slow performance for their image uploads.
How can a solutions architect improve the performance of the image upload process?
B. Create an Amazon CloudFront distribution and point to the application as a custom origin.
D. Create an Auto Scaling group for the EC2 instances and create a scaling policy.
Correct Answer: C
A company has developed a new release of a popular video game and wants to make it available for public download. The new release package is
approximately
5 GB in size. The company provides downloads for existing releases from a Linux-based, publicly facing FTP site hosted in an on-premises data
center. The company expects the new release will be downloaded by users worldwide. The company wants a solution that provides improved
download performance and low transfer costs, regardless of a user's location.
Which solutions will meet these requirements?
A. Store the game files on Amazon EBS volumes mounted on Amazon EC2 instances within an Auto Scaling group. Configure an FTP service
on the EC2 instances. Use an Application Load Balancer in front of the Auto Scaling group. Publish the game download URL for users to
download the package.
B. Store the game files on Amazon EFS volumes that are attached to Amazon EC2 instances within an Auto Scaling group. Configure an FTP
service on each of the EC2 instances. Use an Application Load Balancer in front of the Auto Scaling group. Publish the game download URL
for users to download the package.
C. Configure Amazon Route 53 and an Amazon S3 bucket for website hosting. Upload the game files to the S3 bucket. Use Amazon CloudFront
for the website. Publish the game download URL for users to download the package.
D. Configure Amazon Route 53 and an Amazon S3 bucket for website hosting. Upload the game files to the S3 bucket. Set Requester Pays for
the S3 bucket. Publish the game download URL for users to download the package.
Correct Answer: C
A new startup is running a serverless application using AWS Lambda as the primary source of compute. New versions of the application must be
made available to a subset of users before deploying changes to all users. Developers should also have the ability to abort the deployment and
have access to an easy rollback
mechanism. A solutions architect decides to use AWS CodeDeploy to deploy changes when a new version is available.
Which CodeDeploy configuration should the solutions architect use?
A. A blue/green deployment
B. A linear deployment
C. A canary deployment
D. An all-at-once deployment
Correct Answer: D
Reference:
https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/automating-updates-to-serverless-apps.html
upvoted 1 times
" # tgv 1 year ago
CCC
---
upvoted 1 times
A solutions architect is implementing federated access to AWS for users of the company's mobile application. Due to regulatory and security
requirements, the application must use a custom-built solution for authenticating users and must use IAM roles for authorization.
Which of the following actions would enable authentication and authorization and satisfy the requirements? (Choose two.)
A. Use a custom-built SAML-compatible solution for authentication and AWS SSO for authorization.
B. Create a custom-built LDAP connector using Amazon API Gateway and AWS Lambda for authentication. Store authorization tokens in
Amazon DynamoDB, and validate authorization requests using another Lambda function that reads the credentials from DynamoDB.
C. Use a custom-built OpenID Connect-compatible solution with AWS SSO for authentication and authorization.
D. Use a custom-built SAML-compatible solution that uses LDAP for authentication and uses a SAML assertion to perform authorization to the
IAM identity provider.
E. Use a custom-built OpenID Connect-compatible solution for authentication and use Amazon Cognito for authorization.
Correct Answer: AC
No. AWS SSO supports single sign-on to business applications through web browsers only.
https://aws.amazon.com/single-sign-on/faqs/?nc1=h_ls
upvoted 1 times
No. AWS SSO supports single sign-on to business applications through web browsers only.
upvoted 2 times
I will go with DE
upvoted 4 times
" # Bulti 1 year, 1 month ago
D&E is correct. AWS SSO does not support mobile authentication.
upvoted 1 times
A company has developed a custom tool used in its workflow that runs within a Docker container. The company must perform manual steps each
time the container code is updated to make the container image available to new workflow executions. The company wants to automate this
process to eliminate manual effort and ensure a new container image is generated every time the tool code is updated.
Which combination of actions should a solutions architect take to meet these requirements? (Choose three.)
A. Configure an Amazon ECR repository for the tool. Configure an AWS CodeCommit repository containing code for the tool being deployed to
the container image in Amazon ECR.
B. Configure an AWS CodeDeploy application that triggers an application version update that pulls the latest tool container image from
Amazon ECR, updates the container with code from the source AWS CodeCommit repository, and pushes the updated container image to
Amazon ECR.
C. Configure an AWS CodeBuild project that pulls the latest tool container image from Amazon ECR, updates the container with code from
the source AWS CodeCommit repository, and pushes the updated container image to Amazon ECR.
D. Configure an AWS CodePipeline pipeline that sources the tool code from the AWS CodeCommit repository and initiates an AWS CodeDeploy
application update.
E. Configure an Amazon EventBridge rule that triggers on commits to the AWS CodeCommit repository for the tool. Configure the event to
trigger an update to the tool container image in Amazon ECR. Push the updated container image to Amazon ECR.
F. Configure an AWS CodePipeline pipeline that sources the tool code from the AWS CodeCommit repository and initiates an AWS CodeBuild
build.
Why A?
From it we only need "Configure an Amazon ECR repository for the tool." The rest is crap. C and F cover the whole process, from pulling from
CodeCommit to pushing to ECR.
Why the hell did they add a second sentence in A, "Configure an AWS CodeCommit repository containing code for the tool being deployed to the
container image in Amazon ECR"?
Whose sick mind is this a product of?
upvoted 1 times
Configure an AWS CodeBuild project that pulls the latest tool container image from Amazon ECR,
upvoted 2 times
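As a side note, here is a rough Python sketch of the ECR login a CodeBuild step typically performs before docker build/push; the repository and registry values are placeholders, and a real buildspec would normally shell out to the AWS CLI and Docker instead:

import base64
import boto3

ecr = boto3.client("ecr")

# Fetch a temporary registry credential for the account's ECR registry.
token_data = ecr.get_authorization_token()["authorizationData"][0]
username, password = base64.b64decode(token_data["authorizationToken"]).decode().split(":")
registry = token_data["proxyEndpoint"]  # e.g. https://<account>.dkr.ecr.<region>.amazonaws.com

# These values are what "docker login" needs; in a buildspec this is usually
# "aws ecr get-login-password | docker login --username AWS --password-stdin <registry>".
print(f"docker login {registry} as {username}")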
A company hosts an application on Amazon EC2 instances and needs to store files in Amazon S3. The files should never traverse the public
internet, and only the application EC2 instances are granted access to a specific Amazon S3 bucket. A solutions architect has created a VPC
endpoint for Amazon S3 and connected the endpoint to the application VPC.
Which additional steps should the solutions architect take to meet these requirements?
A. Assign an endpoint policy to the endpoint that restricts access to a specific S3 bucket. Attach a bucket policy to the S3 bucket that grants
access to the VPC endpoint. Add the gateway prefix list to a NACL of the instances to limit access to the application EC2 instances only.
B. Attach a bucket policy to the S3 bucket that grants access to application EC2 instances only using the aws:SourceIp condition. Update the
VPC route table so only the application EC2 instances can access the VPC endpoint.
C. Assign an endpoint policy to the VPC endpoint that restricts access to a specific S3 bucket. Attach a bucket policy to the S3 bucket that
grants access to the VPC endpoint. Assign an IAM role to the application EC2 instances and only allow access to this role in the S3 bucket's
policy.
D. Assign an endpoint policy to the VPC endpoint that restricts access to S3 in the current Region. Attach a bucket policy to the S3 bucket that
grants access to the VPC private subnets only. Add the gateway prefix list to a NACL to limit access to the application EC2 instances only.
Correct Answer: C
upvoted 1 times
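To make answer C more concrete, here is a hedged sketch of the bucket-policy half (bucket name and VPC endpoint ID are placeholders); the endpoint policy and the IAM-role condition would be added along the same lines:

import json
import boto3

# Deny any request that does not arrive through the VPC endpoint.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUnlessFromVpce",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": ["arn:aws:s3:::app-reports-bucket", "arn:aws:s3:::app-reports-bucket/*"],
        "Condition": {"StringNotEquals": {"aws:sourceVpce": "vpce-0abc1234def567890"}},
    }],
}

boto3.client("s3").put_bucket_policy(Bucket="app-reports-bucket", Policy=json.dumps(policy))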
A financial services company has an on-premises environment that ingests market data feeds from stock exchanges, transforms the data, and
sends the data to an internal Apache Kafka cluster. Management wants to leverage AWS services to build a scalable and near-real-time solution
with consistent network performance to provide stock market data to a web application.
Which steps should a solutions architect take to build the solution? (Choose three.)
A. Establish an AWS Direct Connect connection from the on-premises data center to AWS.
B. Create an Amazon EC2 Auto Scaling group to pull the messages from the on-premises Kafka cluster and use the Amazon Consumer Library
to put the data into an Amazon Kinesis data stream.
C. Create an Amazon EC2 Auto Scaling group to pull the messages from the on-premises Kafka cluster and use the Amazon Kinesis Producer
Library to put the data into a Kinesis data stream.
D. Create a WebSocket API in Amazon API Gateway, create an AWS Lambda function to process an Amazon Kinesis data stream, and use the
@connections command to send callback messages to connected clients.
E. Create a GraphQL API in AWS AppSync, create an AWS Lambda function to process the Amazon Kinesis data stream, and use the
@connections command to send callback messages to connected clients.
The @connections command for callbacks doesn't seem to be available in AppSync (but it is with API Gateway), as AppSync manages these constructs
internally.
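For reference, this is roughly what the @connections callback looks like from a Lambda function with boto3; the API ID, stage, and connection ID are placeholders:

import json
import boto3

apigw = boto3.client(
    "apigatewaymanagementapi",
    endpoint_url="https://a1b2c3d4e5.execute-api.us-east-1.amazonaws.com/prod",  # placeholder WebSocket API
)

def push_quote(connection_id: str, quote: dict) -> None:
    # Equivalent of POST @connections/{connection_id}: send a message to one connected client.
    apigw.post_to_connection(
        ConnectionId=connection_id,
        Data=json.dumps(quote).encode("utf-8"),
    )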
A fitness tracking company serves users around the world, with its primary markets in North America and Asia. The company needs to design an
infrastructure for its read-heavy user authorization application with the following requirements:
✑ Be resilient to problems with the application in any Region.
✑ Write to a database in a single Region.
✑ Read from multiple Regions.
✑ Support resiliency across application tiers in each Region.
✑ Support the relational database semantics reflected in the application.
Which combination of steps should a solutions architect take? (Choose two.)
A. Use an Amazon Route 53 geoproximity routing policy combined with a multivalue answer routing policy.
B. Deploy web, application, and MySQL database servers to Amazon EC2 instance in each Region. Set up the application so that reads and
writes are local to the Region. Create snapshots of the web, application, and database servers and store the snapshots in an Amazon S3
bucket in both Regions. Set up cross- Region replication for the database layer.
C. Use an Amazon Route 53 geolocation routing policy combined with a failover routing policy.
D. Set up web, application, and Amazon RDS for MySQL instances in each Region. Set up the application so that reads are local and writes are
partitioned based on the user. Set up a Multi-AZ failover for the web, application, and database servers. Set up cross-Region replication for the
database layer.
E. Set up active-active web and application servers in each Region. Deploy an Amazon Aurora global database with clusters in each Region.
Set up the application to use the in-Region Aurora database endpoints. Create snapshots of the web application servers and store them in an
Amazon S3 bucket in both Regions.
Correct Answer: BD
Why not A? A is a valid answer. But if we set up geoproximity-based routing, it will route traffic based on the closeness of AWS resources
and users. In other words, we can't give higher priority to our higher revenue Regions.
upvoted 1 times
---
The first important thing to note is that users are from all over the world, not only from North America and Asia, and that you have to be
resilient to problems with the application in ANY REGION.
What I don't like about failover is that it works by creating 2 records (primary + secondary).
Since you have to be resilient to problems with the application in ANY Region, how are you configuring the failover policy/ies?
upvoted 6 times
" # dmscountera Most Recent % 2 weeks, 5 days ago
Selected Answer: AE
As per the Q, you need to read/be resilient in ANY region not from just 2.
So multi-value supports up to 8 IPs > failover ~2
8 > 2 =>
AE
upvoted 1 times
EBS volume snapshots are stored in S3, however you cannot choose what bucket they are stored in nor can they be accessed through the S3
api.
upvoted 2 times
Why not A? Failover routing is better than multivalue answer for this case, and geolocation can be used here with no issues.
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html
upvoted 1 times
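For anyone unsure what the Aurora global database in option E involves, here is a rough boto3 sketch; the identifiers, Regions, and primary cluster ARN are assumptions, not values from the question:

import boto3

rds_primary = boto3.client("rds", region_name="us-east-1")
rds_secondary = boto3.client("rds", region_name="ap-northeast-1")

# Wrap the existing primary (writable) cluster in a global cluster.
rds_primary.create_global_cluster(
    GlobalClusterIdentifier="auth-global",
    SourceDBClusterIdentifier="arn:aws:rds:us-east-1:111122223333:cluster:auth-primary",
)

# Add a read-only secondary cluster in another Region; the application there
# uses this cluster's reader endpoint. Engine version should match the primary.
rds_secondary.create_db_cluster(
    DBClusterIdentifier="auth-secondary",
    Engine="aurora-mysql",
    GlobalClusterIdentifier="auth-global",
)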
A company needs to create a centralized logging architecture for all of its AWS accounts. The architecture should provide near-real-time data
analysis for all AWS
CloudTrail logs and VPC Flow Logs across all AWS accounts. The company plans to use Amazon Elasticsearch Service (Amazon ES) to perform
log analysis in the logging account.
Which strategy should a solutions architect use to meet these requirements?
A. Configure CloudTrail and VPC Flow Logs in each AWS account to send data to a centralized Amazon S3 bucket in the logging account.
Create an AWS Lambda function to load data from the S3 bucket to Amazon ES in the logging account.
B. Configure CloudTrail and VPC Flow Logs to send data to a log group in Amazon CloudWatch Logs in each AWS account. Configure a CloudWatch subscription
filter in each AWS account to send data to Amazon Kinesis Data Firehose in the logging account. Load data from Kinesis Data Firehose into
Amazon ES in the logging account.
C. Configure CloudTrail and VPC Flow Logs to send data to a separate Amazon S3 bucket in each AWS account. Create an AWS Lambda
function triggered by S3 events to copy the data to a centralized logging bucket. Create another Lambda function to load data from the S3
bucket to Amazon ES in the logging account.
D. Configure CloudTrail and VPC Flow Logs to send data to a log group in Amazon CloudWatch Logs in each AWS account. Create AWS
Lambda functions in each AWS account to subscribe to the log groups and stream the data to an Amazon S3 bucket in the logging account.
Create another Lambda function to load data from the S3 bucket to Amazon ES in the logging account.
Correct Answer: A
upvoted 3 times
" # andylogan 1 year ago
It's B
upvoted 1 times
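To illustrate the plumbing behind B, here is a simplified same-account sketch of a CloudWatch Logs subscription filter to Kinesis Data Firehose (ARNs are placeholders); a real cross-account setup additionally needs a CloudWatch Logs destination in the logging account for the member accounts to point their filters at:

import boto3

logs = boto3.client("logs")

logs.put_subscription_filter(
    logGroupName="/aws/vpc/flow-logs",
    filterName="to-central-firehose",
    filterPattern="",  # empty pattern forwards every log event
    destinationArn="arn:aws:firehose:us-east-1:111122223333:deliverystream/central-logs",
    roleArn="arn:aws:iam::111122223333:role/CWLtoFirehoseRole",
)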
A financial company is using a high-performance compute cluster running on Amazon EC2 instances to perform market simulations. A DNS record
must be created in an Amazon Route 53 private hosted zone when instances start. The DNS record must be removed after instances are
terminated.
Currently the company uses a combination of Amazon CloudWatch Events and AWS Lambda to create the DNS record. The solution worked well in
testing with small clusters, but in production with clusters containing thousands of instances the company sees the following error in the Lambda
logs:
HTTP 400 error (Bad request).
The response header also includes a status code element with a value of `Throttling` and a status message element with a value of `Rate
exceeded`.
Which combination of steps should the Solutions Architect take to resolve these issues? (Choose three.)
A. Configure an Amazon SQS FIFO queue and configure a CloudWatch Events rule to use this queue as a target. Remove the Lambda target
from the CloudWatch Events rule.
B. Configure an Amazon Kinesis data stream and configure a CloudWatch Events rule to use this queue as a target. Remove the Lambda target
from the CloudWatch Events rule.
C. Update the CloudWatch Events rule to trigger on Amazon EC2 "Instance Launch Successful" and "Instance Terminate Successful"
events for the Auto Scaling group used by the cluster.
D. Configure a Lambda function to retrieve messages from an Amazon SQS queue. Modify the Lambda function to retrieve a maximum of 10
messages then batch the messages by Amazon Route 53 API call type and submit. Delete the messages from the SQS queue after successful
API calls.
E. Configure an Amazon SQS standard queue and configure the existing CloudWatch Events rule to use this queue as a target. Remove the
Lambda target from the CloudWatch Events rule.
F. Configure a Lambda function to read data from the Amazon Kinesis data stream and configure the batch window to 5 minutes. Modify the
function to make a single API call to Amazon Route 53 with all records read from the Kinesis data stream.
The goal here is to support thousands of instances launching and terminating; with an SQS FIFO queue this requirement is not fulfilled. And that
was the original problem with Lambda and the concurrency.
upvoted 5 times
upvoted 4 times
" # Ebi 1 year, 1 month ago
We need FIFO queue here for exactly-once-processing feature as well as order
upvoted 4 times
The errors in the Lambda logs indicate that throttling is occurring. Throttling is intended to protect your resources and downstream applications.
Though Lambda automatically scales to accommodate incoming traffic, functions can still be throttled for various reasons.
In this case it is most likely that the throttling is not occurring in Lambda itself but in API calls made to Amazon Route 53. In Route 53 you are
limited (by default) to five requests per second per AWS account. If you submit more than five requests per second, Amazon Route 53 returns an
HTTP 400 error (Bad request). The response header also includes a Code element with a value of Throttling and a Message element with a value
of Rate exceeded.
The resolution here is to place the data for the DNS records into an SQS queue where they can buffer. AWS Lambda can then poll the queue and
process the messages, making sure to batch the messages to reduce the likelihood of receiving more errors.
upvoted 5 times
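A rough sketch of the D/E pattern, i.e. a Lambda function that polls the SQS queue (event source mapping) and batches the record changes into a single Route 53 call; the hosted zone ID and message format are assumptions, not the exam's definition:

import json
import boto3

route53 = boto3.client("route53")
HOSTED_ZONE_ID = "Z0123456789ABCDEFGHIJ"  # placeholder private hosted zone

def handler(event, context):
    changes = []
    for record in event["Records"]:          # up to 10 SQS messages per batch by default
        msg = json.loads(record["body"])     # e.g. {"action": "UPSERT", "name": ..., "ip": ...}
        changes.append({
            "Action": msg["action"],
            "ResourceRecordSet": {
                "Name": msg["name"],
                "Type": "A",
                "TTL": 60,
                "ResourceRecords": [{"Value": msg["ip"]}],
            },
        })
    if changes:
        # One API call per batch keeps us under Route 53's ~5 requests/second limit.
        route53.change_resource_record_sets(
            HostedZoneId=HOSTED_ZONE_ID,
            ChangeBatch={"Changes": changes},
        )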
A North American company with headquarters on the East Coast is deploying a new web application running on Amazon EC2 in the us-east-1
Region. The application should dynamically scale to meet user demand and maintain resiliency. Additionally, the application must have disaster
recovery capabilities in an active-passive configuration with the us-west-1 Region.
Which steps should a solutions architect take after creating a VPC in the us-east-1 Region?
A. Create a VPC in the us-west-1 Region. Use inter-Region VPC peering to connect both VPCs. Deploy an Application Load Balancer (ALB)
spanning multiple Availability Zones (AZs) to the VPC in the us-east-1 Region. Deploy EC2 instances across multiple AZs in each Region as
part of an Auto Scaling group spanning both VPCs and served by the ALB.
B. Deploy an Application Load Balancer (ALB) spanning multiple Availability Zones (AZs) to the VPC in the us-east-1 Region. Deploy EC2
instances across multiple AZs as part of an Auto Scaling group served by the ALB. Deploy the same solution to the us-west-1 Region. Create
an Amazon Route 53 record set with a failover routing policy and health checks enabled to provide high availability across both Regions.
C. Create a VPC in the us-west-1 Region. Use inter-Region VPC peering to connect both VPCs. Deploy an Application Load Balancer (ALB) that
spans both VPCs. Deploy EC2 instances across multiple Availability Zones as part of an Auto Scaling group in each VPC served by the ALB.
Create an Amazon Route 53 record that points to the ALB.
D. Deploy an Application Load Balancer (ALB) spanning multiple Availability Zones (AZs) to the VPC in the us-east-1 Region. Deploy EC2
instances across multiple AZs as part of an Auto Scaling group served by the ALB. Deploy the same solution to the us-west-1 Region. Create
separate Amazon Route 53 records in each Region that point to the ALB in the Region. Use Route 53 health checks to provide high availability
across both Regions.
Correct Answer: D
upvoted 1 times
Between B vs D... Route53 doesn't have per-region records. It's a global service. So D is wrong. B should work great.
upvoted 2 times
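To make B concrete, here is a hedged boto3 sketch of the primary/secondary failover alias records; every name, ALB value, hosted zone ID, and health check ID below is a placeholder:

import boto3

route53 = boto3.client("route53")

def failover_record(role, alb_dns, alb_zone_id, health_check_id=None):
    # Build one half of a failover pair pointing at a Regional ALB.
    rrset = {
        "Name": "app.example.com",
        "Type": "A",
        "SetIdentifier": f"app-{role.lower()}",
        "Failover": role,  # "PRIMARY" or "SECONDARY"
        "AliasTarget": {
            "HostedZoneId": alb_zone_id,
            "DNSName": alb_dns,
            "EvaluateTargetHealth": True,
        },
    }
    if health_check_id:
        rrset["HealthCheckId"] = health_check_id
    return {"Action": "UPSERT", "ResourceRecordSet": rrset}

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789ABCDEFGHIJ",
    ChangeBatch={"Changes": [
        failover_record("PRIMARY", "useast1-alb.example.elb.amazonaws.com", "ZALBZONEEAST", "hc-primary-id"),
        failover_record("SECONDARY", "uswest1-alb.example.elb.amazonaws.com", "ZALBZONEWEST"),
    ]},
)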
A company standardized its method of deploying applications to AWS using AWS CodePipeline and AWS CloudFormation. The applications are in
TypeScript and
Python. The company has recently acquired another business that deploys applications to AWS using Python scripts.
Developers from the newly acquired company are hesitant to move their applications under CloudFormation because it would require that they
learn a new domain-specific language and eliminate their access to language features, such as looping.
How can the acquired applications quickly be brought up to deployment standards while addressing the developers' concerns?
A. Create CloudFormation templates and re-use parts of the Python scripts as instance user data. Use the AWS Cloud Development Kit (AWS
CDK) to deploy the application using these templates. Incorporate the AWS CDK into CodePipeline and deploy the application to AWS using
these templates.
B. Use a third-party resource provisioning engine inside AWS CodeBuild to standardize the deployment processes of the existing and acquired
company. Orchestrate the CodeBuild job using CodePipeline.
C. Standardize on AWS OpsWorks. Integrate OpsWorks with CodePipeline. Have the developers create Chef recipes to deploy their
applications on AWS.
D. Define the AWS resources using TypeScript or Python. Use the AWS Cloud Development Kit (AWS CDK) to create CloudFormation templates
from the developers' code, and use the AWS CDK to create CloudFormation stacks. Incorporate the AWS CDK as a CodeBuild job in
CodePipeline.
Correct Answer: B
upvoted 3 times
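A minimal sketch of what answer D looks like in Python (assuming aws-cdk-lib v2 and constructs are installed); the bucket-per-feature loop is only there to show an ordinary language feature that raw CloudFormation templates lack:

from aws_cdk import App, Stack, aws_s3 as s3
from constructs import Construct

class AcquiredAppStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, *, features: list, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        for name in features:                       # plain Python looping, no DSL required
            s3.Bucket(self, f"{name.capitalize()}Bucket")

app = App()
AcquiredAppStack(app, "AcquiredApp", features=["reports", "exports"])
app.synth()   # emits the CloudFormation template that CodePipeline then deploys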
A company has a single AWS master billing account, which is the root of the AWS Organizations hierarchy.
The company has multiple AWS accounts within this hierarchy, all organized into organization units (OUs). More OUs and AWS accounts will
continue to be created as other parts of the business migrate applications to AWS. These business units may need to use different AWS services.
The Security team is implementing the following requirements for all current and future AWS accounts:
✑ Control policies must be applied across all accounts to prohibit AWS services.
✑ Exceptions to the control policies are allowed based on valid use cases.
Which solution will meet these requirements with minimal operational overhead?
A. Use an SCP in Organizations to implement a deny list of AWS services. Apply this SCP at the root level. For any specific exceptions for an OU,
create a new SCP for that OU and add the required AWS services to the allow list.
B. Use an SCP in Organizations to implement a deny list of AWS services. Apply this SCP at the root level and each OU. Remove the default
AWS managed SCP from the root level and all OU levels. For any specific exceptions, modify the SCP attached to that OU, and add the required
AWS services to the allow list.
C. Use an SCP in Organizations to implement a deny list of AWS services. Apply this SCP at each OU level. Leave the default AWS managed SCP
at the root level. For any specific exceptions for an OU, create a new SCP for that OU.
D. Use an SCP in Organizations to implement an allow list of AWS services. Apply this SCP at the root level. Remove the default AWS managed
SCP from the root level and all OU levels. For any specific exceptions for an OU, modify the SCP attached to that OU, and add the required AWS
services to the allow list.
Correct Answer: B
1 - The allowed rights work as the intersection of the rights given by SCPs at root, OU and IAM policy level. Therefore if you implement in an
SCP at OU level a deny of an AWS service you then wish to grant, the only option is to modify your SCP, which rules out answers A and C,
which recommend you create a new SCP.
2 - In answers A, B and C it is suggested to implement an explicit deny, and for options B and C this deny is at root level. It is not possible
with this strategy to allow exceptions, because an explicit deny takes precedence over an explicit allow, then implicit deny,
then implicit allow. The only way to address this problem is to set an implicit deny at the root level, so that our explicit allow in the SCP at
OU level overrides the implicit deny, which is what is proposed in answer D: it is an allow list of AWS services not including the restricted
AWS services, which are implicitly denied.
upvoted 6 times
Using Allow List Strategy, to allow a permission, SCPs with allow statement must be added to the account and every OU above it including
root. Every SCP in the hierarchy must explicitly allow the APIs you want to use.
Explicit allow at a lower level of organization hierarchy cannot overwrite the implicit deny at a higher level.
upvoted 1 times
One more negative for C: once you implement a deny at the top level, it will override any allow in a child OU. Not that it is stated within this
question, but with that in mind it could be the case, so whitelisting makes more sense for me.
upvoted 21 times
With D we are giving the same AWS Services to all the units.
upvoted 2 times
D cannot work: if FullAccess is replaced with a specific-access SCP, it should be applied at all levels including the OU and account levels (intersection).
Overall none of the answers is fully complete, but I have to go with C.
upvoted 2 times
The default configuration of AWS Organizations supports using SCPs as deny lists. Using a deny list strategy, account administrators can
delegate all services and actions until you create and attach an SCP that denies a specific service or set of actions. Deny statements require less
maintenance, because you don't need to update them when AWS adds new services. Deny statements usually use less space, thus making it
easier to stay within the maximum size for SCPs. In a statement where the Effect element has a value of Deny, you can also restrict access to
specific resources, or define conditions for when SCPs are in effect.
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_inheritance_auth.html
upvoted 3 times
upvoted 1 times
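For completeness, a hedged sketch of the allow-list approach in D; the service list, policy name, and root ID are assumptions, and the FullAWSAccess managed SCP must be detached for the implicit deny to take effect:

import json
import boto3

org = boto3.client("organizations")

allow_list = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["ec2:*", "s3:*", "rds:*", "cloudwatch:*"],  # pre-approved services only
        "Resource": "*",
    }],
}

policy = org.create_policy(
    Name="ApprovedServicesAllowList",
    Description="Allow only pre-approved AWS services",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(allow_list),
)

# Attach at the root; exceptions are handled by editing the SCP attached to an OU.
org.attach_policy(PolicyId=policy["Policy"]["PolicySummary"]["Id"], TargetId="r-examplerootid")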
A healthcare company runs a production workload on AWS that stores highly sensitive personal information. The security team mandates that, for
auditing purposes, any AWS API action using AWS account root user credentials must automatically create a high-priority ticket in the company's
ticketing system. The ticketing system has a monthly 3-hour maintenance window when no tickets can be created.
To meet security requirements, the company enabled AWS CloudTrail logs and wrote a scheduled AWS Lambda function that uses Amazon Athena
to query API actions performed by the root user. The Lambda function submits any actions found to the ticketing system API. During a recent
security audit, the security team discovered that several tickets were not created because the ticketing system was unavailable due to planned
maintenance.
Which combination of steps should a solutions architect take to ensure that the incidents are reported to the ticketing system even during planned
maintenance?
(Choose two.)
A. Create an Amazon SNS topic to which Amazon CloudWatch alarms will be published. Configure a CloudWatch alarm to invoke the Lambda
function.
B. Create an Amazon SQS queue to which Amazon CloudWatch alarms will be published. Configure a CloudWatch alarm to publish to the SQS
queue.
C. Modify the Lambda function to be triggered by messages published to an Amazon SNS topic. Update the existing application code to retry
every 5 minutes if the ticketing system's API endpoint is unavailable.
D. Modify the Lambda function to be triggered when there are messages in the Amazon SQS queue and to return successfully when the
ticketing system API has processed the request.
E. Create an Amazon EventBridge rule that triggers on all API events where the invoking user identity is root. Configure the EventBridge rule to
write the event to an Amazon SQS queue.
Correct Answer: BD
Why do we need an event and a queue when the Lambda is already scheduled... unless the event in E were "the ticketing system is available" rather than
"the invoking user identity is root". E does not address the main concern, which is the unavailability of the ticketing system.
upvoted 1 times
With Dead Letter Queuing option as an alternative solution for on-failure destination :
https://docs.aws.amazon.com/lambda/latest/dg/with-sqs.html
2 - SNS is possible as a destination from Event Source Mapping, having SQS->SNS->Lambda, plus for multiple destination
notifications such as email sending would be useful, hence C&E could be feasible assuming this link between SQS and SNS.
upvoted 3 times
" # beso Highly Voted $ 1 year, 1 month ago
B and D, CloudWatch--> SQS--> Lambda-->Ticketing system
upvoted 13 times
The existing system can be modified to use Amazon EventBridge instead of using AWS CloudTrail with Amazon Athena. Eventbridge can be
configured with a rule that checks all AWS API calls via CloudTrail. The rule can be configured to look for the usage of the root user account.
Eventbridge can then be configured with an Amazon SQS queue as a target that puts a message in the queue waiting to be processed.
The Lambda function can then be configured to poll the queue for messages (event-source mapping), process the event synchronously and only
return a successful result when the ticketing system has processed the request. The message will be deleted only if the result is successful,
allowing for retries.
This system will ensure that the important events are not missed when the ticketing system is unavailable.
upvoted 3 times
A & B are wrong because CloudWatch Alarms are based on metrics, not an event/action (that's CloudWatch Events)
C is eliminated because it could have only worked in combo with A, and A is wrong
D is valid per your links
E is valid per your links
(Note that you'd probably have to be careful with D that you don't have a Lambda function running for a LONG time trying to reach the API!)
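Here is an approximate shape of answer E with boto3; the rule name and queue ARN are placeholders, and the SQS queue policy must separately allow events.amazonaws.com to send messages:

import json
import boto3

events = boto3.client("events")

# Match any CloudTrail-delivered API call or console sign-in made by the root user.
root_activity_pattern = {
    "detail-type": ["AWS API Call via CloudTrail", "AWS Console Sign In via CloudTrail"],
    "detail": {"userIdentity": {"type": ["Root"]}},
}

events.put_rule(
    Name="root-api-activity",
    EventPattern=json.dumps(root_activity_pattern),
    State="ENABLED",
)

events.put_targets(
    Rule="root-api-activity",
    Targets=[{"Id": "ticketing-queue", "Arn": "arn:aws:sqs:us-east-1:111122223333:root-activity-queue"}],
)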
A solutions architect is migrating an existing workload to AWS Fargate. The task can only run in a private subnet within the VPC where there is no
direct connectivity from outside the system to the application. When the Fargate task is launched, the task fails with the following error:
CannotPullContainerError: API error (500): Get https://111122223333.dkr.ecr.us-east-1.amazonaws.com/v2/: net/http: request canceled while
waiting for connection
How should the solutions architect correct this error?
A. Ensure the task is set to ENABLED for the auto-assign public IP setting when launching the task.
B. Ensure the task is set to DISABLED for the auto-assign public IP setting when launching the task. Configure a NAT gateway in the public
subnet in the VPC to route requests to the internet.
C. Ensure the task is set to DISABLED for the auto-assign public IP setting when launching the task. Configure a NAT gateway in the private
subnet in the VPC to route requests to the internet.
D. Ensure the network mode is set to bridge in the Fargate task definition.
Correct Answer: C
When a Fargate task is launched, its elastic network interface requires a route to the internet to pull container
images. If you receive an error similar to the following when launching a task, it is because a route to the internet
does not exist:
CannotPullContainerError: API error (500): Get https://111122223333.dkr.ecr.us-east-1.amazonaws.com/v2/:
net/http: request canceled while waiting for connection
To resolve this issue, you can:
o For tasks in public subnets, specify ENABLED for Auto-assign public IP when launching the task.
o For tasks in private subnets, specify DISABLED for Auto-assign public IP when launching the task, and
configure a NAT gateway in your VPC to route requests to the internet.
upvoted 5 times
upvoted 1 times
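A rough sketch of the NAT gateway half of answer B (subnet and route table IDs are placeholders); the Fargate task itself keeps auto-assign public IP DISABLED:

import boto3

ec2 = boto3.client("ec2")

eip = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0publicaaaabbbbcccc",        # the NAT gateway lives in a PUBLIC subnet
    AllocationId=eip["AllocationId"],
)["NatGateway"]

ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat["NatGatewayId"]])

# Default route from the private subnet (where the Fargate ENI sits) to the NAT gateway,
# so the task can reach ECR to pull its image.
ec2.create_route(
    RouteTableId="rtb-0privateddddeeeeffff",
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat["NatGatewayId"],
)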
A company is running a two-tier web-based application in an on-premises data center. The application tier consists of a single server running a
stateful application. The application connects to a PostgreSQL database running on a separate server. The application's user base is expected to
grow signi+cantly, so the company is migrating the application and database to AWS. The solution will use Amazon Aurora PostgreSQL, Amazon
EC2 Auto Scaling, and Elastic Load
Balancing.
Which solution will provide a consistent user experience that will allow the application and database tiers to scale?
A. Enable Aurora Auto Scaling for Aurora Replicas. Use a Network Load Balancer with the least outstanding requests routing algorithm and
sticky sessions enabled.
B. Enable Aurora Auto Scaling for Aurora writes. Use an Application Load Balancer with the round robin routing algorithm and sticky sessions
enabled.
C. Enable Aurora Auto Scaling for Aurora Replicas. Use an Application Load Balancer with round robin routing and sticky sessions enabled.
D. Enable Aurora Scaling for Aurora writers. Use a Network Load Balancer with the least outstanding requests routing algorithm and sticky
sessions enabled.
Correct Answer: B
C then
upvoted 4 times
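"Aurora Auto Scaling for Aurora Replicas" is driven by Application Auto Scaling under the hood; a hedged sketch, assuming a hypothetical cluster name and thresholds:

import boto3

aas = boto3.client("application-autoscaling")
RESOURCE_ID = "cluster:app-aurora-postgres"   # hypothetical Aurora cluster

aas.register_scalable_target(
    ServiceNamespace="rds",
    ResourceId=RESOURCE_ID,
    ScalableDimension="rds:cluster:ReadReplicaCount",
    MinCapacity=1,
    MaxCapacity=5,
)

# Add or remove Aurora Replicas to keep average reader CPU around the target.
aas.put_scaling_policy(
    PolicyName="reader-cpu-target",
    ServiceNamespace="rds",
    ResourceId=RESOURCE_ID,
    ScalableDimension="rds:cluster:ReadReplicaCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {"PredefinedMetricType": "RDSReaderAverageCPUUtilization"},
    },
)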
A solutions architect is designing a network for a new cloud deployment. Each account will need autonomy to modify route tables and make
changes. Centralized and controlled egress internet connectivity is also needed. The cloud footprint is expected to grow to thousands of AWS
accounts.
Which architecture will meet these requirements?
A. A centralized transit VPC with a VPN connection to a standalone VPC in each account. Outbound internet traffic will be controlled by firewall
appliances.
B. A centralized shared VPC with a subnet for each account. Outbound internet traffic will be controlled through a fleet of proxy servers.
C. A shared services VPC to host central assets to include a fleet of firewalls with a route to the internet. Each spoke VPC will peer to the
central VPC.
D. A shared transit gateway to which each VPC will be attached. Outbound internet access will route through a fleet of VPN-attached firewalls.
Correct Answer: A
Answer C is wrong, because there is a default limit of 50 VPC peerings per VPC, which can be increased to a maximum of 125
(https://docs.aws.amazon.com/vpc/latest/userguide/amazon-vpc-limits.html). Since the cloud footprint is expected to grow to thousands of AWS
accounts, VPC peering with one central VPC would not work. Transit Gateway can handle up to 5000 attachments and therefore is the better
choice here.
upvoted 10 times
---
upvoted 1 times
Using a software-based firewall appliance (on EC2) from AWS Marketplace as an egress point is similar to the NAT gateway setup. This option
can be used if you want to leverage the layer 7 firewall/Intrusion Prevention/Detection System (IPS/IDS) capabilities of the various vendor
offerings.
In Figure 12, we replace NAT Gateway with an EC2 instance (with SNAT enabled on EC2 instance). There are few key considerations with this
upvoted 1 times
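To show the scale argument for D in practice, a short boto3 sketch of a transit gateway and one spoke VPC attachment (IDs are placeholders; in a multi-account setup the TGW would be shared to the other accounts with AWS RAM):

import boto3

ec2 = boto3.client("ec2")

tgw = ec2.create_transit_gateway(
    Description="shared egress hub",
    Options={"DefaultRouteTableAssociation": "enable", "DefaultRouteTablePropagation": "enable"},
)["TransitGateway"]

# Each spoke VPC (potentially thousands) gets its own attachment to the hub.
ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw["TransitGatewayId"],
    VpcId="vpc-0spokeaaaabbbbcccc",
    SubnetIds=["subnet-0spoke1111", "subnet-0spoke2222"],
)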
A solutions architect needs to migrate 50 TB of NFS data to Amazon S3. The files are on several NFS file servers on the corporate network. These are
dense file systems containing tens of millions of small files. The system operators have configured the file interface on an AWS Snowball Edge
device and are using a shell script to copy data.
Developers report that copying the data to the Snowball Edge device is very slow. The solutions architect suspects this may be related to the
overhead of encrypting all the small files and transporting them over the network.
Which changes can be made to speed up the data transfer?
A. Cluster two Snowball Edge devices together to increase the throughput of the devices.
B. Change the solution to use the S3 Adapter instead of the file interface on the Snowball Edge device.
C. Increase the number of parallel copy jobs to increase the throughput of the Snowball Edge device.
D. Connect directly to the USB interface on the Snowball Edge device and copy the +les locally.
Correct Answer: B
upvoted 2 times
the first thing to do is set the S3 Adapter for Snowball, otherwise the multiple copies will throw the same problem again.
upvoted 5 times
But according to AWS, if the transfer is started with the file interface, it should continue that way until the end. Therefore, opening multiple windows will speed things
up. If we want to start over, then obviously the S3 interface would be faster.
Here is the link: https://docs.aws.amazon.com/snowball/latest/developer-guide/using-fileinterface.html#fileinterface-overview
upvoted 3 times
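A loose sketch of what answer B means in code: talk to the device through its S3 adapter endpoint rather than the file interface. The device IP, port, credentials, and file paths below are placeholders that would come from the Snowball Edge client, not real values:

import boto3

sbe_s3 = boto3.client(
    "s3",
    endpoint_url="https://192.0.2.10:8443",          # S3 adapter endpoint on the device (placeholder)
    aws_access_key_id="SNOWBALL_EDGE_ACCESS_KEY",     # from the Snowball Edge client
    aws_secret_access_key="SNOWBALL_EDGE_SECRET_KEY",
    verify=False,                                     # or point at the device certificate bundle
)

# Parallel workers would each call upload_file for their own slice of the small files.
sbe_s3.upload_file("/mnt/nfs/share1/file0001.dat", "migration-bucket", "share1/file0001.dat")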
A company is planning on hosting its ecommerce platform on AWS using a multi-tier web application designed for a NoSQL database. The
company plans to use the us-west-2 Region as its primary Region. The company wants to ensure that copies of the application and data are
available in a second Region, us-west-1, for disaster recovery. The company wants to keep the time to fail over as low as possible. Failing back to
the primary Region should be possible without administrative interaction after the primary service is restored.
Which design should the solutions architect use?
A. Use AWS CloudFormation StackSets to create the stacks in both Regions with Auto Scaling groups for the web and application tiers.
Asynchronously replicate static content between Regions using Amazon S3 cross-Region replication. Use an Amazon Route 53 DNS failover
routing policy to direct users to the secondary site in us-west-1 in the event of an outage. Use Amazon DynamoDB global tables for the
database tier.
B. Use AWS CloudFormation StackSets to create the stacks in both Regions with Auto Scaling groups for the web and application tiers.
Asynchronously replicate static content between Regions using Amazon S3 cross-Region replication. Use an Amazon Route 53 DNS failover
routing policy to direct users to the secondary site in us-west-1 in the event of an outage. Deploy an Amazon Aurora global database for the
database tier.
C. Use AWS Service Catalog to deploy the web and application servers in both Regions. Asynchronously replicate static content between the
two Regions using Amazon S3 cross-Region replication. Use Amazon Route 53 health checks to identify a primary Region failure and update
the public DNS entry listing to the secondary Region in the event of an outage. Use Amazon RDS for MySQL with cross-Region replication for
the database tier.
D. Use AWS CloudFormation StackSets to create the stacks in both Regions using Auto Scaling groups for the web and application tiers.
Asynchronously replicate static content between Regions using Amazon S3 cross-Region replication. Use Amazon CloudFront with static files
in Amazon S3, and multi-Region origins for the front-end web tier. Use Amazon DynamoDB tables in each Region with scheduled backups to
Amazon S3.
Correct Answer: C
Selected Answer: A
DynamoDB is a NoSQL solution and CloudFormation is for IaC; for C, what would Service Catalog be used for?
upvoted 1 times
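For reference, the DynamoDB global table piece of answer A might look like this (this is the original 2017.11.29 global tables API; the table must already exist with the same name and streams enabled in both Regions, and the table name is a placeholder):

import boto3

ddb = boto3.client("dynamodb", region_name="us-west-2")

ddb.create_global_table(
    GlobalTableName="ecommerce-orders",
    ReplicationGroup=[
        {"RegionName": "us-west-2"},   # primary Region
        {"RegionName": "us-west-1"},   # DR Region
    ],
)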
A company hosts a blog post application on AWS using Amazon API Gateway, Amazon DynamoDB, and AWS Lambda. The application currently
does not use
API keys to authorize requests. The API model is as follows:
GET/posts/[postid] to get post details
GET/users[userid] to get user details
GET/comments/[commentid] to get comments details
The company has noticed users are actively discussing topics in the comments section, and the company wants to increase user engagement by
making the comments appear in real time.
Which design should be used to reduce comment latency and improve user experience?
B. Modify the blog application code to request GET /comments/[commentid] every 10 seconds.
D. Change the concurrency limit of the Lambda functions to lower the API response time.
Correct Answer: D
Having to change to GraphQL shouldn't be relevant since the question doesn't ask about the Easiest way.
upvoted 1 times
https://aws.amazon.com/blogs/mobile/appsync-realtime/
upvoted 3 times
A company has a VPC with two domain controllers running Active Directory in the default configuration. The VPC DHCP options set is configured
to use the IP addresses of the two domain controllers. There is a VPC interface endpoint defined; but instances within the VPC are not able to
resolve the private endpoint addresses.
Which strategies would resolve this issue? (Choose two.)
A. Define an outbound Amazon Route 53 Resolver. Set a conditional forward rule for the Active Directory domain to the Active Directory
servers. Update the VPC DHCP options set to AmazonProvidedDNS.
B. Update the DNS service on the Active Directory servers to forward all non-authoritative queries to the VPC Resolver.
C. Define an inbound Amazon Route 53 Resolver. Set a conditional forward rule for the Active Directory domain to the Active Directory servers.
Update the VPC DHCP options set to AmazonProvidedDNS.
D. Update the DNS service on the client instances to split DNS queries between the Active Directory servers and the VPC Resolver.
E. Update the DNS service on the Active Directory servers to forward all queries to the VPC Resolver.
Correct Answer: BE
For anyone who thinks that A is not correct because outbound resolver will forward to on-premise DNS server.
Remember , our goal is to resolve records in our domain which in the question is hosted in the AD so we need to forward these requests if they
don't match the private hosts for the VPC.
The DNS being hosted inside the VPC or on premise is not relevant since you are specifying an ip in the forward rule , so technically you can
forward to the AD which inside the VPC
in AWS Docs:
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resolver-forwarding-outbound-queries.html#resolver-forwarding-outbound-
queries-rule-values
Target IP addresses
When a DNS query matches the name that you specify in Domain name, the outbound endpoint forwards the query to the IP addresses that you
specify here. These are typically the IP addresses for DNS resolvers on your network.
Selected Answer: AB
AB is answer. why?
A) correct - outbound resolver has conditional fwd rules to resolve hybrid DNS + VPC DHCP options must be reverted so other EC2 instances can resolve
DNS
B) correct - AD servers use the inbound resolver for non-authoritative queries to reach instances
C) wrong - there are no conditional fwd rules for inbound resolvers
D) wrong - splitting DNS queries based on type of app seems illogical to me
E) wrong - AD servers need to resolve internal queries as well, so it doesn't make sense
upvoted 1 times
" # RVD 2 months, 2 weeks ago
Selected Answer: BC
To resolve the AWS service CNAMEs, the queries need to be forwarded to the AWS DNS, which the on-prem DNS is trying to forward; here the question is about
EC2 not being able to resolve the endpoint DNS. EC2 -> AD DNS -> inbound resolver.
upvoted 3 times
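To make A more concrete, a rough sketch of the conditional forwarding rule; the domain, AD server IPs, and outbound resolver endpoint ID are assumptions, and the outbound endpoint itself is created separately with create_resolver_endpoint:

import boto3

r53r = boto3.client("route53resolver")

rule = r53r.create_resolver_rule(
    CreatorRequestId="ad-forward-rule-001",
    Name="forward-corp-domain-to-ad",
    RuleType="FORWARD",
    DomainName="corp.example.com",                               # hypothetical AD domain
    TargetIps=[{"Ip": "10.0.0.10", "Port": 53}, {"Ip": "10.0.0.11", "Port": 53}],
    ResolverEndpointId="rslvr-out-0123456789abcdef0",            # existing outbound endpoint
)["ResolverRule"]

# Associate the rule with the VPC; the DHCP options set goes back to AmazonProvidedDNS.
r53r.associate_resolver_rule(ResolverRuleId=rule["Id"], VPCId="vpc-0abc1234def567890")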
A company has a photo sharing social networking application. To provide a consistent experience for users, the company performs some image
processing on the photos uploaded by users before publishing on the application. The image processing is implemented using a set of Python
libraries.
The current architecture is as follows:
✑ The image processing Python code runs in a single Amazon EC2 instance and stores the processed images in an Amazon S3 bucket named
ImageBucket.
✑ The front-end application, hosted in another bucket, loads the images from ImageBucket to display to users.
With plans for global expansion, the company wants to implement changes in its existing architecture to be able to scale for increased demand on
the application and reduce management complexity as the application scales.
Which combination of changes should a solutions architect make? (Choose two.)
A. Place the image processing EC2 instance into an Auto Scaling group.
E. Deploy the applications in an Amazon ECS cluster and apply Service Auto Scaling.
Correct Answer: DE
upvoted 1 times
" # tkanmani76 10 months, 2 weeks ago
D for sure. It's not A as it's mentioned that the firm wants to change the architecture. Between B and E, Lambda would be a good choice and more
operationally efficient than ECS. It's much faster when it comes to scaling than ECS. Hence I will choose Lambda (choice B) over ECS.
https://prismatic.io/blog/why-we-moved-from-lambda-to-ecs/ - This is an interesting case study on the problems faced by Prismatic with
Lambda and why they moved to ECS - which provides a perspective. However, in our case Lambda will do the work.
upvoted 1 times
== B & D
upvoted 8 times
To be able to scale for increased demand on the application and reduce management complexity as the application scales. I would prefer
solution D,E. but not sure the answer tbh.
upvoted 1 times
A company has a web application that allows users to upload short videos. The videos are stored on Amazon EBS volumes and analyzed by
custom recognition software for categorization.
The website contains static content that has variable traffic with peaks in certain months. The architecture consists of Amazon EC2 instances
running in an Auto
Scaling group for the web application and EC2 instances running in an Auto Scaling group to process an Amazon SQS-queue. The company wants
to re-architect the application to reduce operational overhead using AWS managed services where possible and remove dependencies on third-
party software.
Which solution meets these requirements?
A. Use Amazon ECS containers for the web application and Spot instances for the Scaling group that processes the SQS queue. Replace the
custom software with Amazon Rekognition to categorize the videos.
B. Store the uploaded videos in Amazon EFS and mount the +le system to the EC2 instances for the web application. Process the SQS queue
with an AWS Lambda function that calls the Amazon Rekognition API to categorize the videos.
C. Host the web application in Amazon S3. Store the uploaded videos in Amazon S3. Use S3 event notification to publish events to the SQS
queue. Process the SQS queue with an AWS Lambda function that call the Amazon Rekognition API to categorize the videos.
D. Use AWS Elastic Beanstalk to launch EC2 instances in an Auto Scaling group for the application and launch a worker environment to
process the SQS queue. Replace the custom software with Amazon Rekognition to categorize the videos.
Correct Answer: A
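A hedged sketch of the S3 -> SQS notification piece of answer C (bucket, queue ARN, and prefix are placeholders; the queue policy must allow s3.amazonaws.com to send messages):

import boto3

s3 = boto3.client("s3")

s3.put_bucket_notification_configuration(
    Bucket="video-uploads-bucket",
    NotificationConfiguration={
        "QueueConfigurations": [{
            "QueueArn": "arn:aws:sqs:us-east-1:111122223333:video-processing-queue",
            "Events": ["s3:ObjectCreated:*"],
            "Filter": {"Key": {"FilterRules": [{"Name": "prefix", "Value": "uploads/"}]}},
        }]
    },
)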
A retail company processes point-of-sale data on application servers in its data center and writes outputs to an Amazon DynamoDB table. The
data center is connected to the company's VPC with an AWS Direct Connect (DX) connection, and the application servers require a consistent
network connection at speeds greater than 2 Gbps.
The company decides that the DynamoDB table needs to be highly available and fault tolerant. The company policy states that the data should be
available across two regions.
What changes should the company make to meet these requirements?
A. Establish a second DX connection for redundancy. Use DynamoDB global tables to replicate data to a second Region. Modify the
application to fail over to the second Region.
B. Use an AWS managed VPN as a backup to DX. Create an identical DynamoDB table in a second Region. Modify the application to replicate
data to both Regions.
C. Establish a second DX connection for redundancy. Create an identical DynamoDB table in a second Region. Enable DynamoDB auto scaling
to manage throughput capacity. Modify the application to write to the second Region.
D. Use AWS managed VPN as a backup to DX. Create an identical DynamoDB table in a second Region. Enable DynamoDB streams to capture
changes to the table. Use AWS Lambda to replicate changes to the second Region.
Correct Answer: A
upvoted 1 times
A company is using AWS CloudFormation as its deployment tool for all applications. It stages all application binaries and templates within an Amazon
S3 bucket with versioning enabled. Developers have access to an Amazon EC2 instance that hosts the integrated development environment (IDE). The
Developers download the application binaries from Amazon S3 to the EC2 instance, make changes, and upload the binaries to an S3 bucket after
running the unit tests locally. The developers want to improve the existing deployment mechanism and implement CI/CD using AWS CodePipeline.
The developers have the following requirements:
✑ Use AWS CodeCommit for source control.
✑ Automate unit testing and security scanning.
✑ Alert the Developers when unit tests fail.
✑ Turn application features on and off, and customize deployment dynamically as part of CI/CD.
✑ Have the lead Developer provide approval before deploying an application.
Which solution will meet these requirements?
A. Use AWS CodeBuild to run tests and security scans. Use an Amazon EventBridge rule to send Amazon SNS alerts to the Developers when
unit tests fail. Write AWS Cloud Development Kit (AWS CDK) constructs for different solution features, and use a manifest file to turn features on
and off in the AWS CDK application. Use a manual approval stage in the pipeline to allow the lead Developer to approve applications.
B. Use AWS Lambda to run unit tests and security scans. Use Lambda in a subsequent stage in the pipeline to send Amazon SNS alerts to the
developers when unit tests fail. Write AWS Amplify plugins for different solution features and utilize user prompts to turn features on and off.
Use Amazon SES in the pipeline to allow the lead developer to approve applications.
C. Use Jenkins to run unit tests and security scans. Use an Amazon EventBridge rule in the pipeline to send Amazon SES alerts to the
developers when unit tests fail. Use AWS CloudFormation nested stacks for different solution features and parameters to turn features on and
off. Use AWS Lambda in the pipeline to allow the lead developer to approve applications.
D. Use AWS CodeDeploy to run unit tests and security scans. Use an Amazon CloudWatch alarm in the pipeline to send Amazon SNS alerts to
the developers when unit tests fail. Use Docker images for different solution features and the AWS CLI to turn features on and off. Use a
manual approval stage in the pipeline to allow the lead developer to approve applications.
Correct Answer: C
upvoted 2 times
" # tartarus23 6 months, 1 week ago
Selected Answer: A
A. CodeBuild is the AWS managed service for unit tests and scans. I highly doubt AWS will promote third party services such as Jenkins, instead
of their own AWS services.
upvoted 1 times
An IoT company has rolled out a fleet of sensors for monitoring temperatures in remote locations. Each device connects to AWS IoT Core and
sends a message every
30 seconds, updating an Amazon DynamoDB table. A System Administrator uses AWS IoT to verify the devices are still sending messages to AWS
IoT Core; however, the database is not updating.
What should a Solutions Architect check to determine why the database is not being updated?
A. Verify the AWS IoT Device Shadow service is subscribed to the appropriate topic and is executing the AWS Lambda function.
B. Verify that AWS IoT monitoring shows that the appropriate AWS IoT rules are being executed, and that the AWS IoT rules are enabled with
the correct rule actions.
C. Check the AWS IoT Fleet indexing service and verify that the thing group has the appropriate IAM role to update DynamoDB.
D. Verify that AWS IoT things are using MQTT instead of MQTT over WebSocket, then check that the provisioning has the appropriate policy
attached.
Correct Answer: D
upvoted 1 times
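For context on B, this is roughly what an enabled IoT rule with a DynamoDB action looks like; the topic, table, and role below are placeholders:

import boto3

iot = boto3.client("iot")

iot.create_topic_rule(
    ruleName="TemperatureToDynamoDB",
    topicRulePayload={
        "sql": "SELECT * FROM 'sensors/+/temperature'",   # hypothetical MQTT topic filter
        "ruleDisabled": False,                            # B asks you to check the rule is enabled
        "actions": [{
            "dynamoDBv2": {
                "roleArn": "arn:aws:iam::111122223333:role/IoTRuleDynamoDBRole",
                "putItem": {"tableName": "SensorReadings"},
            }
        }],
    },
)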
An enterprise company is using a multi-account AWS strategy. There are separate accounts for development, staging, and production workloads.
To control costs and improve governance, the following requirements have been defined:
✑ The company must be able to calculate the AWS costs for each project.
✑ The company must be able to calculate the AWS costs for each environment: development, staging, and production.
✑ Commonly deployed IT services must be centrally managed.
✑ Business units can deploy pre-approved IT services only.
✑ Usage of AWS resources in the development account must be limited.
Which combination of actions should be taken to meet these requirements? (Choose three.)
A. Apply environment, cost center, and application name tags to all taggable resources.
C. Configure AWS Trusted Advisor to obtain weekly emails with cost-saving estimates.
D. Create a portfolio for each business unit and add products to the portfolios using AWS CloudFormation in AWS Service Catalog.
B:
Visualize, understand, and manage your AWS costs based on Tags created in A:.
Having only tagging is not enough.
F:
Use SCP to limit AWS Resources deployment only to “pre-approved IT services only”.
Why NOT D:
AWS CloudFormation will not prevent usage of unauthorized AWS Services as per requirement “Business units can deploy pre-approved IT
services only”. SCP is used for that. CloudFormation is good for deployment of approved AWS Resources, not AWS Services.
upvoted 1 times
" # roka_ua 7 months, 1 week ago
Selected Answer: ADF
Vote ADF
upvoted 2 times
upvoted 2 times
" # Kian1 1 year ago
will go with ADF
upvoted 2 times
A company is planning to migrate an existing high performance computing (HPC) solution to the AWS Cloud. The existing solution consists of a
12-node cluster running Linux with high speed interconnectivity developed on a single rack. A solutions architect needs to optimize the
performance of the HPC cluster.
Which combination of steps will meet these requirements? (Choose two.)
C. Use Amazon EC2 instances that support Elastic Fabric Adapter (EFA).
Correct Answer: BE
upvoted 1 times
" # AzureDP900 11 months, 1 week ago
B and C is correct
upvoted 1 times
https://aws.amazon.com/hpc/efa/
Elastic Fabric Adapter (EFA) is a network interface for Amazon EC2 instances that enables customers to run applications requiring high levels of
inter-node communications at scale on AWS. Its custom-built operating system (OS) bypass hardware interface enhances the performance of
inter-instance communications, which is critical to scaling these applications. With EFA, High Performance Computing (HPC) applications using
the Message Passing Interface (MPI) and Machine Learning (ML) applications using NVIDIA Collective Communications Library (NCCL) can scale
to thousands of CPUs or GPUs. As a result, you get the application performance of on-premises HPC clusters with the on-demand elasticity and
flexibility of th...
upvoted 1 times
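A short sketch of launching EFA-enabled instances into a cluster placement group (AMI, subnet, security group, and instance type are placeholders; the instance type must be one that supports EFA):

import boto3

ec2 = boto3.client("ec2")

ec2.create_placement_group(GroupName="hpc-cluster-pg", Strategy="cluster")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="c5n.18xlarge",            # an EFA-capable instance type
    MinCount=12,
    MaxCount=12,
    Placement={"GroupName": "hpc-cluster-pg"},
    NetworkInterfaces=[{
        "DeviceIndex": 0,
        "SubnetId": "subnet-0hpc1111",
        "Groups": ["sg-0hpc2222"],
        "InterfaceType": "efa",             # Elastic Fabric Adapter
    }],
)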
A company hosts a game player-matching service on a public facing, physical, on-premises instance that all users are able to access over the
internet. All traffic to the instance uses UDP. The company wants to migrate the service to AWS and provide a high level of security. A solutions
architect needs to design a solution for the player-matching service using AWS.
Which combination of steps should the solutions architect take to meet these requirements? (Choose three.)
A. Use a Network Load Balancer (NLB) in front of the player-matching instance. Use a friendly DNS entry in Amazon Route 53 pointing to the
NLB's Elastic IP address.
B. Use an Application Load Balancer (ALB) in front of the player-matching instance. Use a friendly DNS entry in Amazon Route 53 pointing to
the ALB's internet-facing fully qualified domain name (FQDN).
C. Define an AWS WAF rule to explicitly drop non-UDP traffic, and associate the rule with the load balancer.
D. Configure a network ACL rule to block all non-UDP traffic. Associate the network ACL with the subnets that hold the load balancer
instances.
upvoted 1 times
" # denccc 1 year ago
I would think BDF? Not sure if the order of answers changed? WAF for ALB.
upvoted 1 times
A company has multiple AWS accounts and manages these accounts with AWS Organizations. A developer was given IAM user credentials to
access AWS resources. The developer should have read-only access to all Amazon S3 buckets in the account. However, when the developer tries
to access the S3 buckets from the console, they receive an access denied error message with no bucket listed.
A solutions architect reviews the permissions and finds that the developer's IAM user is listed as having read-only access to all S3 buckets in the
account.
Which additional steps should the solutions architect take to troubleshoot the issue? (Choose two.)
D. Check for the permissions boundaries set for the IAM user.
Correct Answer: DE
B is INCORRECT because even though ACLs are resource-based policies you use ACLs to grant basic read/write permissions on the objects in
the bucket. You'll still be able to ListBuckets if there is an ACL on the bucket.
C is CORRECT because after the Deny Evaluation the Organization SCPs are evaluated and take effect/merged. (See Link Below)
D is CORRECT because a DENY on the permission boundary will not allow the developer to ListBuckets
E is INCORRECT because this is an IAM Permission and is applied AFTER DENY, ORG SCP, and RESOURCE-based policy evaluation. In addition
the Solution Architect checked the developer's IAM User and it was listed as read-only.
https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_evaluation-logic.html#policy-eval-denyallow
upvoted 33 times
A company is planning to migrate its business-critical applications from an on-premises data center to AWS. The company has an on-premises
installation of a
Microsoft SQL Server Always On cluster. The company wants to migrate to an AWS managed database service. A solutions architect must design
a heterogeneous database migration on AWS.
Which solution will meet these requirements?
A. Migrate the SQL Server databases to Amazon RDS for MySQL by using backup and restore utilities.
B. Use an AWS Snowball Edge Storage Optimized device to transfer data to Amazon S3. Set up Amazon RDS for MySQL. Use S3 integration
with SQL Server features, such as BULK INSERT.
C. Use the AWS Schema Conversion Tool to translate the database schema to Amazon RDS for MySQL. Then use AWS Database Migration
Service (AWS DMS) to migrate the data from on-premises databases to Amazon RDS.
D. Use AWS DataSync to migrate data over the network between on-premises storage and Amazon S3. Set up Amazon RDS for MySQL. Use S3
integration with SQL Server features, such as BULK INSERT.
Correct Answer: A
Reference:
https://docs.aws.amazon.com/dms/latest/sbs/dms-sbs-welcome.html
A company has an application that generates reports and stores them in an Amazon S3 bucket. When a user accesses their
report, the application generates a signed URL to allow the user to download the report. The company's security team has discovered that the files
are public and that anyone can download them without authentication. The company has suspended the generation of new reports until the
problem is resolved.
Which set of actions will immediately remediate the security issue without impacting the application's normal workflow?
A. Create an AWS Lambda function that applies all policy for users who are not authenticated. Create a scheduled event to invoke the Lambda
function.
B. Review the AWS Trusted advisor bucket permissions check and implement the recommend actions.
C. Run a script that puts a private ACL on all of the objects in the bucket.
D. Use the Block Public Access feature in Amazon S3 to set the IgnorePublicAcls option to TRUE on the bucket.
Correct Answer: B
---
upvoted 1 times
Block public access to buckets and objects granted through any access control lists (ACLs)
S3 will ignore all ACLs that grant public access to buckets and objects.
upvoted 3 times
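A sketch of answer D with boto3 (the bucket name is a placeholder); pre-signed URLs keep working because they are authenticated requests, not public access:

import boto3

boto3.client("s3").put_public_access_block(
    Bucket="report-output-bucket",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,      # the option the answer calls out
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)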
A company hosts a legacy application that runs on an Amazon EC2 instance inside a VPC without internet access. Users access the application
with a desktop program installed on their corporate laptops. Communication between the laptops and the VPC flows through AWS Direct Connect
(DX). A new requirement states that all data in transit must be encrypted between users and the VPC.
Which strategy should a solutions architect use to maintain consistent network performance while meeting this new requirement?
A. Create a client VPN endpoint and con+gure the laptops to use an AWS client VPN to connect to the VPC over the internet.
B. Create a new public virtual interface for the existing DX connection, and create a new VPN that connects to the VPC over the DX public
virtual interface.
C. Create a new Site-to-Site VPN that connects to the VPC over the internet.
D. Create a new private virtual interface for the existing DX connection, and create a new VPN that connects to the VPC over the DX private
virtual interface.
Correct Answer: D
upvoted 1 times
" # TechX 4 months, 1 week ago
Selected Answer: D
D for me
upvoted 1 times
https://docs.aws.amazon.com/directconnect/latest/UserGuide/WorkingWithVirtualInterfaces.html
It is also clearly explain in this blog which references all the details in any AWS doc.
https://jayendrapatil.com/tag/direct-connect/
This doc is also only 2 days old. but with the use of a transit GW you can use Private IP and IPSEC.
https://aws.amazon.com/blogs/networking-and-content-delivery/introducing-aws-site-to-site-vpn-private-ip-vpns/
upvoted 1 times
https://docs.aws.amazon.com/directconnect/latest/UserGuide/encryption-in-transit.html
upvoted 1 times
Why do we need public virtual interface for communication between laptop and VPC over DX? There are no requirements of accessing from
internet. It should be PRIVATE virtual interface.
upvoted 1 times
A company is creating a centralized logging service running on Amazon EC2 that will receive and analyze logs from hundreds of AWS accounts.
AWS PrivateLink is being used to provide connectivity between the client services and the logging service.
In each AWS account with a client, an interface endpoint has been created for the logging service and is available. The logging service running on
EC2 instances with a Network Load Balancer (NLB) are deployed in different subnets. The clients are unable to submit logs using the VPC
endpoint.
Which combination of steps should a solutions architect take to resolve this issue? (Choose two.)
A. Check that the NACL is attached to the logging service subnet to allow communications to and from the NLB subnets. Check that the NACL
is attached to the NLB subnet to allow communications to and from the logging service subnets running on EC2 instances.
B. Check that the NACL is attached to the logging service subnets to allow communications to and from the interface endpoint subnets.
Check that the NACL is attached to the interface endpoint subnet to allow communications to and from the logging service subnets running
on EC2 instances.
C. Check the security group for the logging service running on the EC2 instances to ensure it allows ingress from the NLB subnets.
D. Check the security group for the logging service running on the EC2 instances to ensure it allows ingress from the clients.
E. Check the security group for the NLB to ensure it allows ingress from the interface endpoint subnets.
Correct Answer: DE
The NLB sits in front of the logging service, so the NACLs and security groups for the corresponding logging instances (and their subnets) need to allow
the NLB ingress. A/C for me
upvoted 2 times
With an NLB, the security group attached to the target EC2 instances (fronted by the NLB) needs to allow not only the NLB's IPs but also the client IPs (if the target type
is instance). Since we are using EC2 instances only here, the instance target type applies.
https://aws.amazon.com/premiumsupport/knowledge-center/security-group-load-balancer/
upvoted 1 times
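To make the security-group fix concrete, here is a minimal boto3 sketch that opens the logging service's instance security group to both the NLB subnets and the client range; the group ID, port, and CIDRs are placeholder assumptions:
import boto3

ec2 = boto3.client("ec2")
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder: SG of the logging service instances
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [
            {"CidrIp": "10.0.1.0/24", "Description": "NLB subnets"},
            {"CidrIp": "10.1.0.0/16", "Description": "client VPCs reaching us via PrivateLink"},
        ],
    }],
)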
A company is refactoring an existing web service that provides read and write access to structured data. The service must respond to short but
significant spikes in the system load. The service must be fault tolerant across multiple AWS Regions.
Which actions should be taken to meet these requirements?
A. Store the data in Amazon DocumentDB. Create a single global Amazon CloudFront distribution with a custom origin built on edge-optimized
Amazon API Gateway and AWS Lambda. Assign the company's domain as an alternate domain for the distribution, and configure Amazon
Route 53 with an alias to the CloudFront distribution.
B. Store the data in replicated Amazon S3 buckets in two Regions. Create an Amazon CloudFront distribution in each Region, with custom
origins built on Amazon API Gateway and AWS Lambda launched in each Region. Assign the company's domain as an alternate domain for
both distributions, and configure Amazon Route 53 with a failover routing policy between them.
C. Store the data in an Amazon DynamoDB global table in two Regions using on-demand capacity mode. In both Regions, run the web service
as Amazon ECS Fargate tasks in an Auto Scaling ECS service behind an Application Load Balancer (ALB). In Amazon Route 53, configure an
alias record in the company's domain and a Route 53 latency-based routing policy with health checks to distribute traffic between the two
ALBs.
D. Store the data in Amazon Aurora global databases. Add Auto Scaling replicas to both Regions. Run the web service on Amazon EC2
instances in an Auto Scaling group behind an Application Load Balancer in each Region. Configure the instances to download the web service
code in the user data. In Amazon Route 53, configure an alias record for the company's domain and a multi-value routing policy
Correct Answer: A
For A and B, I don't see how CloudFront can have API Gateway as an origin... A and B would be ruled out because CloudFront can have:
a web server, an S3 bucket, or Elemental MediaPackage/MediaStore for VOD as origins.
upvoted 1 times
" # Ebi Highly Voted $ 1 year, 1 month ago
Answer is C.
D is not the right answer: although Aurora is a better choice for structured data, an Aurora global database supports only one master, so the other
Regions do not support writes.
upvoted 18 times
A (wrong): single point of failure; it can't provide fault tolerance across multiple Regions.
B (wrong): S3 CRR is not fast enough. AWS docs: "Most objects replicate within 15 minutes, but sometimes replication can take a couple hours
or more". By comparison, DynamoDB global tables have sync latency of less than a second: "In a global table, a newly written item is usually
propagated to all replica tables within a second."
D (wrong): unlike DynamoDB, an Aurora global database has only one master (only one writable node) in a multi-Region deployment.
upvoted 2 times
Answer is C. The solution should be able to quickly scale Fargate and be fault resilient, so both Regions should be active: a DynamoDB global table
and Route 53 latency-based records with health checks
upvoted 1 times
" # cldy 11 months ago
C. Store the data in an Amazon DynamoDB global table in two Regions using on-demand capacity mode. In both Regions, run the web service
as Amazon ECS Fargate tasks in an Auto Scaling ECS service behind an Application Load Balancer (ALB). In Amazon Route 53, configure an
alias record in the companyג€™s domain and a Route 53 latency-based routing policy with health checks to distribute traffic between the two
ALBs.
upvoted 1 times
I would still lean towards C, even so, because one DocumentDB region must be primary, and the failover process to a secondary region is not
seamless by any means. You have to stop application writes in the primary (failed) region, and then promote the secondary region to its own
standalone master. Then you have to repoint your app to the secondary region. Not ideal.
https://aws.amazon.com/documentdb/global-clusters/
https://aws.amazon.com/blogs/database/introducing-amazon-documentdb-with-mongodb-compatibility-global-clusters/
upvoted 2 times
A company plans to migrate to AWS. A solutions architect uses AWS Application Discovery Service over the fleet and discovers that there is an
Oracle data warehouse and several PostgreSQL databases.
Which combination of migration patterns will reduce licensing costs and operational overhead? (Choose two.)
A. Lift and shift the Oracle data warehouse to Amazon EC2 using AWS DMS.
B. Migrate the Oracle data warehouse to Amazon Redshift using AWS SCT and AWS DMS
C. Lift and shift the PostgreSQL databases to Amazon EC2 using AWS DMS.
D. Migrate the PostgreSQL databases to Amazon RDS for PostgreSQL using AWS DMS.
E. Migrate the Oracle data warehouse to an Amazon EMR managed cluster using AWS DMS.
Correct Answer: DE
A solutions architect needs to define a reference architecture for a solution for three-tier applications with web, application, and NoSQL data
layers. The reference architecture must meet the following requirements:
✑ High availability within an AWS Region
✑ Able to fail over in 1 minute to another AWS Region for disaster recovery
✑ Provide the most efficient solution while minimizing the impact on the user experience
Which combination of steps will meet these requirements? (Choose three.)
A. Use an Amazon Route 53 weighted routing policy set to 100/0 across the two selected Regions. Set Time to Live (TTL) to 1 hour.
B. Use an Amazon Route 53 failover routing policy for failover from the primary Region to the disaster recovery Region. Set Time to Live (TTL)
to 30 seconds.
C. Use a global table within Amazon DynamoDB so data can be accessed in the two selected Regions.
D. Back up data from an Amazon DynamoDB table in the primary Region every 60 minutes and then write the data to Amazon S3. Use S3
cross-Region replication to copy the data from the primary Region to the disaster recovery Region. Have a script import the data into
DynamoDB in a disaster recovery scenario.
E. Implement a hot standby model using Auto Scaling groups for the web and application layers across multiple Availability Zones in the
Regions. Use zonal Reserved Instances for the minimum number of servers and On-Demand Instances for any additional resources.
F. Use Auto Scaling groups for the web and application layers across multiple Availability Zones in the Regions. Use Spot Instances for the
required resources.
upvoted 4 times
" # andypham 1 year ago
Yes, BBB CCC EEE
upvoted 2 times
A company has a Microsoft SQL Server database in its data center and plans to migrate data to Amazon Aurora MySQL. The company has already
used the AWS
Schema Conversion Tool to migrate triggers, stored procedures and other schema objects to Aurora MySQL. The database contains 1 TB of data
and grows less than 1 MB per day. The company's data center is connected to AWS through a dedicated 1Gbps AWS Direct Connect connection.
The company would like to migrate data to Aurora MySQL and perform reconfigurations with minimal downtime to the applications.
Which solution meets the company's requirements?
A. Shut down applications over the weekend. Create an AWS DMS replication instance and task to migrate existing data from SQL Server to
Aurora MySQL. Perform application testing and migrate the data to the new database endpoint.
B. Create an AWS DMS replication instance and task to migrate existing data and ongoing replication from SQL Server to Aurora MySQL.
Perform application testing and migrate the data to the new database endpoint.
C. Create a database snapshot of SQL Server on Amazon S3. Restore the database snapshot from Amazon S3 to Aurora MySQL. Create an
AWS DMS replication instance and task for ongoing replication from SQL Server to Aurora MySQL. Perform application testing and migrate the
data to the new database endpoint.
D. Create a SQL Server native backup file on Amazon S3. Create an AWS DMS replication instance and task to restore the SQL Server backup
file to Aurora MySQL. Create another AWS DMS task for ongoing replication from SQL Server to Aurora MySQL. Perform application testing
and migrate the data to the new database endpoint.
Correct Answer: B
B is correct because, since you have already used AWS SCT, all you need to do for this migration is migrate the existing data and keep replication
going until cutover.
upvoted 35 times
https://docs.aws.amazon.com/dms/latest/sbs/chap-sqlserver2aurora.steps.html
upvoted 2 times
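For illustration, a DMS task that migrates the existing data and then keeps replicating ongoing changes (as option B describes) can be created roughly like this; all ARNs are placeholders, and the endpoints and replication instance are assumed to exist already:
import json
import boto3

dms = boto3.client("dms")
dms.create_replication_task(
    ReplicationTaskIdentifier="sqlserver-to-aurora-mysql",
    SourceEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:SRC",       # placeholder
    TargetEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:TGT",       # placeholder
    ReplicationInstanceArn="arn:aws:dms:us-east-1:111122223333:rep:INSTANCE",  # placeholder
    MigrationType="full-load-and-cdc",  # existing data plus ongoing replication until cutover
    TableMappings=json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-all",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
)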
" # tgv 1 year ago
BBB
---
upvoted 1 times
A company runs an application on a fleet of Amazon EC2 instances. The application requires low latency and random access to 100 GB of data.
The application must be able to access the data at up to 3,000 IOPS. A Development team has configured the EC2 launch template to provision a
100-GB Provisioned IOPS
(PIOPS) Amazon EBS volume with 3,000 IOPS provisioned. A Solutions Architect is tasked with lowering costs without impacting performance and
durability.
Which action should be taken?
A. Create an Amazon EFS file system with the performance mode set to Max I/O. Configure the EC2 operating system to mount the EFS file
system.
B. Create an Amazon EFS file system with the throughput mode set to Provisioned. Configure the EC2 operating system to mount the EFS file
system.
C. Update the EC2 launch template to allocate a new 1-TB EBS General Purpose SSD (gp2) volume.
D. Update the EC2 launch template to exclude the PIOPS volume. Configure the application to use local instance storage.
Correct Answer: A
A and B are misleading
upvoted 4 times
Selected Answer: C
Go with C.
upvoted 1 times
" # AzureDP900 11 months, 1 week ago
General Purpose SSD, It is typo in question. I will go with C.
upvoted 2 times
And we are not told how many servers are in the fleet, nor the throughput needed based on the application's average block size per operation.
Both are critical factors in making this decision. I'm not a fan of this question due to that missing info.
Without that crucial info, I'm just going to default to keeping things the way they're done today with individual disks on each instance, and save
on cost by going with gp2, but that's not really answering the question at all.
upvoted 3 times
The question never said it's a single volume mounted/shared across instances!!!!
If you don't read the question at least twice, it'll be difficult to do well in the exam
upvoted 9 times
EFS is not correct for this random access requirement, so rule out A/B
upvoted 1 times
A company recently transformed its legacy infrastructure provisioning scripts to AWS CloudFormation templates. The newly developed templates
are hosted in the company's private GitHub repository. Since adopting CloudFormation, the company has encountered several issues with updates
to the CloudFormation templates, causing errors during stack execution or environment creation. Management is concerned by the increase in errors and has asked a
Solutions Architect to design the automated testing of CloudFormation template updates.
What should the Solutions Architect do to meet these requirements?
A. Use AWS CodePipeline to create a change set from the CloudFormation templates stored in the private GitHub repository. Execute the
change set using AWS CodeDeploy. Include a CodePipeline action to test the deployment with testing scripts run by AWS CodeBuild.
B. Mirror the GitHub repository to AWS CodeCommit using AWS Lambda. Use AWS CodeDeploy to create a change set from the
CloudFormation templates and execute it. Have CodeDeploy test the deployment with testing scripts run by AWS CodeBuild.
C. Use AWS CodePipeline to create and execute a change set from the CloudFormation templates stored in the GitHub repository. Configure a
CodePipeline action to test the deployment with testing scripts run by AWS CodeBuild.
D. Mirror the GitHub repository to AWS CodeCommit using AWS Lambda. Use AWS CodeBuild to create a change set from the CloudFormation
templates and execute it. Have CodeBuild test the deployment with testing scripts.
Correct Answer: B
A company has several Amazon EC2 instances in both public and private subnets within a VPC that is not connected to the corporate network. A
security group associated with the EC2 instances allows the company to use the Windows remote desktop protocol (RDP) over the internet to
access the instances. The security team has noticed connection attempts from unknown sources. The company wants to implement a more
secure solution to access the EC2 instances.
Which strategy should a solutions architect implement?
A. Deploy a Linux bastion host on the corporate network that has access to all instances in the VPC.
B. Deploy AWS Systems Manager Agent on the EC2 instances. Access the EC2 instances using Session Manager restricting access to users
with permission.
C. Deploy a Linux bastion host with an Elastic IP address in the public subnet. Allow access to the bastion host from 0.0.0.0/0.
D. Establish a Site-to-Site VPN connecting the corporate network to the VPC. Update the security groups to allow access from the corporate
network only.
Correct Answer: A
https://awscloudsecvirtualevent.com/workshops/module1/rdp/
upvoted 2 times
it is B
upvoted 1 times
Guys, with the Systems Manager agent you can manage EC2 instances without the need to leave ports open to the world.
Also, you can control which users can access Systems Manager, giving one more security control
upvoted 2 times
https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/connecting_to_windows_instance.html#session-manager
upvoted 1 times
A retail company has a custom .NET web application running on AWS that uses Microsoft SQL Server for the database. The application servers
maintain a user's session locally.
Which combination of architecture changes are needed to ensure all tiers of the solution are highly available? (Choose three.)
A. Refactor the application to store the user's session in Amazon ElastiCache. Use Application Load Balancers to distribute the load between
application instances.
B. Set up the database to generate hourly snapshots using Amazon EBS. Configure an Amazon CloudWatch Events rule to launch a new
database instance if the primary one fails.
C. Migrate the database to Amazon RDS for SQL Server. Configure the RDS instance to use a Multi-AZ deployment.
D. Move the .NET content to an Amazon S3 bucket. Configure the bucket for static website hosting.
E. Put the application instances in an Auto Scaling group. Configure the Auto Scaling group to create new instances if an instance becomes
unhealthy.
F. Deploy Amazon CloudFront in front of the application tier. Configure CloudFront to serve content from healthy application instances only.
upvoted 1 times
A company is using an existing orchestration tool to manage thousands of Amazon EC2 instances. A recent penetration test found a vulnerability
in the company's software stack. This vulnerability has prompted the company to perform a full evaluation of its current production environment.
The analysis determined that the following vulnerabilities exist within the environment:
✑ Operating systems with outdated libraries and known vulnerabilities are being used in production.
✑ Relational databases hosted and managed by the company are running unsupported versions with known vulnerabilities.
✑ Data stored in databases is not encrypted.
The solutions architect intends to use AWS Config to continuously audit and assess the compliance of the company's AWS resource
configurations with the company's policies and guidelines.
What additional steps will enable the company to secure its environments and track resources while adhering to best practices?
A. Use AWS Application Discovery Service to evaluate all running EC2 instances. Use the AWS CLI to modify each instance, and use EC2 user
data to install the AWS Systems Manager Agent during boot. Schedule patching to run as a Systems Manager Maintenance Windows task.
Migrate all relational databases to Amazon RDS and enable AWS KMS encryption.
B. Create an AWS CloudFormation template for the EC2 instances. Use EC2 user data in the CloudFormation template to install the AWS
Systems Manager Agent, and enable AWS KMS encryption on all Amazon EBS volumes. Have CloudFormation replace all running instances.
Use Systems Manager Patch Manager to establish a patch baseline and deploy a Systems Manager Maintenance Windows task to execute
AWS-RunPatchBaseline using the patch baseline.
C. Install the AWS Systems Manager Agent on all existing instances using the company's current orchestration tool. Use the Systems Manager
Run Command to execute a list of commands to upgrade software on each instance using operating system-speci+c tools. Enable AWS KMS
encryption on all Amazon EBS volumes.
D. Install the AWS Systems Manager Agent on all existing instances using the company's current orchestration tool. Migrate all relational
databases to Amazon RDS and enable AWS KMS encryption. Use Systems Manager Patch Manager to establish a patch baseline and deploy a
Systems Manager Maintenance Windows task to execute AWS-RunPatchBaseline using the patch baseline.
Correct Answer: D
choosing DDDDDD
upvoted 4 times
" # Waiweng 1 year ago
it;s D
upvoted 6 times
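To make D concrete, patching through Systems Manager can be kicked off with the AWS-RunPatchBaseline document roughly as below; the tag key/value and the concurrency limits are placeholder assumptions:
import boto3

ssm = boto3.client("ssm")
ssm.send_command(
    DocumentName="AWS-RunPatchBaseline",
    Targets=[{"Key": "tag:PatchGroup", "Values": ["production"]}],  # assumed tag on managed instances
    Parameters={"Operation": ["Install"]},  # scan and install against the registered patch baseline
    MaxConcurrency="10%",
    MaxErrors="5%",
)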
A company wants to improve cost awareness for its Amazon EMR platform. The company has allocated budgets for each team's Amazon EMR
usage. When a budgetary threshold is reached, a notification should be sent by email to the budget office's distribution list. Teams should be able
to view their EMR cluster expenses to date. A solutions architect needs to create a solution that ensures the policy is proactively and centrally
enforced in a multi-account environment.
Which combination of steps should the solutions architect take to meet these requirements? (Choose two.)
A. Update the AWS CloudFormation template to include the AWS::Budgets::Budget resource with the NotificationsWithSubscribers property.
C. Create an EMR bootstrap action that runs at startup that calls the Cost Explorer API to set the budget on the cluster with the
GetCostForecast and NotificationsWithSubscribers actions.
D. Create an AWS Service Catalog portfolio for each team. Add each team's Amazon EMR cluster as an AWS CloudFormation template to their
Service Catalog portfolio as a Product.
E. Create an Amazon CloudWatch metric for billing. Create a custom alert when costs exceed the budgetary threshold.
Correct Answer: DE
You can use AWS Budgets to track your service costs and usage within AWS Service Catalog. You can associate
budgets with AWS Service Catalog products and portfolios.
AWS Budgets gives you the ability to set custom budgets that alert you when your costs or usage exceed (or are
forecasted to exceed) your budgeted amount.
If a budget is associated to a product, you can view information about the budget on the Products and Product
details page. If a budget is associated to a portfolio, you can view information about the budget on
the Portfolios and Portfolio details page.
When you click on a product or portfolio, you are taken to a detail page. These Portfolio detail and Product
detail pages have a section with detailed information about the associated budget. You can see the budgeted
amount, current spend, and forecasted spend. You also have the option to view budget details and edit the budget.
upvoted 3 times
" # andylogan 1 year ago
It's A D
upvoted 1 times
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-budgets-budget.html
upvoted 1 times
Anyway, the CloudFormation option is better in terms of centrally managing this requirement through a script for each account.
upvoted 1 times
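For reference, the budget plus e-mail notification that the AWS::Budgets::Budget resource represents can also be expressed as a boto3 sketch; the account ID, amount, cost-filter tag, and address are placeholders:
import boto3

budgets = boto3.client("budgets")
budgets.create_budget(
    AccountId="111122223333",  # placeholder member account
    Budget={
        "BudgetName": "team-a-emr-budget",
        "BudgetLimit": {"Amount": "1000", "Unit": "USD"},  # placeholder amount
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
        "CostFilters": {"TagKeyValue": ["user:Team$team-a"]},  # assumed cost-allocation tag
    },
    NotificationsWithSubscribers=[{
        "Notification": {
            "NotificationType": "ACTUAL",
            "ComparisonOperator": "GREATER_THAN",
            "Threshold": 80.0,
            "ThresholdType": "PERCENTAGE",
        },
        "Subscribers": [{"SubscriptionType": "EMAIL", "Address": "budget-office@example.com"}],
    }],
)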
A company is migrating its on-premises systems to AWS. The user environment consists of the following systems:
✑ Windows and Linux virtual machines running on VMware.
✑ Physical servers running Red Hat Enterprise Linux.
The company wants to be able to perform the following steps before migrating to AWS:
✑ Identify dependencies between on-premises systems.
✑ Group systems together into applications to build migration plans.
✑ Review performance data using Amazon Athena to ensure that Amazon EC2 instances are right-sized.
How can these requirements be met?
A. Populate the AWS Application Discovery Service import template with information from an on-premises configuration management
database (CMDB). Upload the completed import template to Amazon S3, then import the data into Application Discovery Service.
B. Install the AWS Application Discovery Service Discovery Agent on each of the on-premises systems. Allow the Discovery Agent to collect
data for a period of time.
C. Install the AWS Application Discovery Service Discovery Connector on each of the on-premises systems and in VMware vCenter. Allow the
Discovery Connector to collect data for one week.
D. Install the AWS Application Discovery Service Discovery Agent on the physical on-premises servers. Install the AWS Application Discovery
Service Discovery Connector in VMware vCenter. Allow the Discovery Agent to collect data for a period of time.
Correct Answer: C
A company hosts a web application on AWS in the us-east-1 Region. The application servers are distributed across three Availability Zones behind
an Application
Load Balancer. The database is a MySQL database hosted on an Amazon EC2 instance. A solutions architect needs to design a cross-Region data
recovery solution using AWS services with an RTO of less than 5 minutes and an RPO of less than 1 minute. The solutions architect is deploying
application servers in us-west-2, and has configured Amazon Route 53 health checks and DNS failover to us-west-2.
Which additional step should the solutions architect take?
A. Migrate the database to an Amazon RDS for MySQL instance with a cross-Region read replica in us-west-2.
B. Migrate the database to an Amazon Aurora global database with the primary in us-east-1 and the secondary in us-west-2.
C. Migrate the database to an Amazon RDS for MySQL instance with a Multi-AZ deployment.
Correct Answer: B
it's B
upvoted 3 times
" # alisyech 1 year ago
should be B, https://aws.amazon.com/rds/aurora/global-database/
upvoted 2 times
A company wants to migrate its on-premises data center to the AWS Cloud. This includes thousands of virtualized Linux and Microsoft Windows
servers, SAN storage, Java and PHP applications with MySQL, and Oracle databases. There are many dependent services hosted either in the
same data center or externally.
The technical documentation is incomplete and outdated. A solutions architect needs to understand the current environment and estimate the
cloud resource costs after the migration.
Which tools or services should solutions architect use to plan the cloud migration? (Choose three.)
B. AWS SMS
C. AWS X-Ray
E. Amazon Inspector
- Use the AWS Cloud Adoption Readiness Tool (CART) to generate a migration assessment report to identify gaps in organizational skills and
processes.
- Use AWS Migration Hub to discover and track the status of the application migration across AWS and partner solutions.
upvoted 7 times
These AWS tools and questionnaires are very helpful for assessment and planning before doing the migration activity.
upvoted 1 times
A company decided to purchase Amazon EC2 Reserved Instances. A solutions architect is tasked with implementing a solution where only the
master account in
AWS Organizations is able to purchase the Reserved Instances. Current and future member accounts should be blocked from purchasing Reserved
Instances.
Which solution will meet these requirements?
A. Create an SCP with the Deny effect on the ec2:PurchaseReservedInstancesOffering action. Attach the SCP to the root of the organization.
B. Create a new organizational unit (OU). Move all current member accounts to the new OU. Create an SCP with the Deny effect on the
ec2:PurchaseReservedInstancesOffering action. Attach the SCP to the new OU.
C. Create an AWS Config rule event that triggers automation that will terminate any Reserved Instances launched by member accounts.
D. Create two new organizational units (OUs): OU1 and OU2. Move all member accounts to OU2 and the master account to OU1. Create an SCP
with the Allow effect on the ec2:PurchaseReservedInstancesOffering action. Attach the SCP to OU1.
Correct Answer: C
A is CORRECT because applying the explicit deny on the API and attaching it to the root of the organization ensures that current and future accounts in ANY OU
are not able to purchase RIs.
upvoted 27 times
So correct answer is A.
upvoted 16 times
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html
upvoted 4 times
A company is using multiple AWS accounts. The DNS records are stored in a private hosted zone for Amazon Route 53 in Account A. The
company's applications and databases are running in Account B.
A solutions architect will deploy a two-tier application in a new VPC. To simplify the configuration, the db.example.com CNAME record set for the
Amazon RDS endpoint was created in a private hosted zone for Amazon Route 53.
During deployment, the application failed to start. Troubleshooting revealed that db.example.com is not resolvable on the Amazon EC2 instance.
The solutions architect confirmed that the record set was created correctly in Route 53.
Which combination of steps should the solutions architect take to resolve this issue? (Choose two.)
A. Deploy the database on a separate EC2 instance in the new VPC. Create a record set for the instance's private IP in the private hosted zone.
B. Use SSH to connect to the application tier EC2 instance. Add an RDS endpoint IP address to the /etc/resolv.conf file.
C. Create an authorization to associate the private hosted zone in Account A with the new VPC in Account B.
D. Create a private hosted zone for the example.com domain in Account B. Configure Route 53 replication between AWS accounts.
E. Associate a new VPC in Account B with a hosted zone in Account A. Delete the association authorization in Account A.
Correct Answer: BE
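The two-step cross-account association behind option C (and the first half of E) looks roughly like this in boto3; the hosted zone ID, VPC ID, and Region are placeholders, and each call must run with the credentials of the account noted in the comment:
import boto3

zone_id = "Z0123456789EXAMPLE"                                      # placeholder private hosted zone in Account A
vpc = {"VPCRegion": "us-east-1", "VPCId": "vpc-0abc1234def567890"}  # placeholder VPC in Account B

# Step 1 - run with Account A credentials: authorize the association.
boto3.client("route53").create_vpc_association_authorization(HostedZoneId=zone_id, VPC=vpc)

# Step 2 - run with Account B credentials: associate the VPC with the hosted zone.
boto3.client("route53").associate_vpc_with_hosted_zone(HostedZoneId=zone_id, VPC=vpc)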
A solutions architect needs to advise a company on how to migrate its on-premises data processing application to the AWS Cloud. Currently, users
upload input files through a web portal. The web server then stores the uploaded files on NAS and messages the processing server over a
message queue. Each media file can take up to 1 hour to process. The company has determined that the number of media files awaiting
processing is significantly higher during business hours, with the number of files rapidly declining after business hours.
What is the MOST cost-effective migration recommendation?
A. Create a queue using Amazon SQS. Configure the existing web server to publish to the new queue. When there are messages in the queue,
invoke an AWS Lambda function to pull requests from the queue and process the files. Store the processed files in an Amazon S3 bucket.
B. Create a queue using Amazon MQ. Configure the existing web server to publish to the new queue. When there are messages in the queue,
create a new Amazon EC2 instance to pull requests from the queue and process the files. Store the processed files in Amazon EFS. Shut down
the EC2 instance after the task is complete.
C. Create a queue using Amazon MQ. Configure the existing web server to publish to the new queue. When there are messages in the queue,
invoke an AWS Lambda function to pull requests from the queue and process the files. Store the processed files in Amazon EFS.
D. Create a queue using Amazon SQS. Configure the existing web server to publish to the new queue. Use Amazon EC2 instances in an EC2
Auto Scaling group to pull requests from the queue and process the files. Scale the EC2 instances based on the SQS queue length. Store the
processed files in an Amazon S3 bucket.
Correct Answer: D
D is correct.
upvoted 1 times
" # cldy 11 months ago
D. Create a queue using Amazon SQS. Configure the existing web server to publish to the new queue. Use Amazon EC2 instances in an EC2
Auto Seating group to pull requests from the queue and process the files. Scale the EC2 instances based on the SQS queue length. Store the
processed files in an Amazon S3 bucket.
upvoted 1 times
D - is correct
upvoted 2 times
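As a sketch of "scale the EC2 instances based on the SQS queue length" from D, a target tracking policy on the queue-depth metric could look like this; the ASG name, queue name, and target value are assumptions, and a production setup often tracks a backlog-per-instance custom metric instead:
import boto3

autoscaling = boto3.client("autoscaling")
autoscaling.put_scaling_policy(
    AutoScalingGroupName="media-workers-asg",   # placeholder Auto Scaling group
    PolicyName="scale-on-queue-depth",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "CustomizedMetricSpecification": {
            "MetricName": "ApproximateNumberOfMessagesVisible",
            "Namespace": "AWS/SQS",
            "Dimensions": [{"Name": "QueueName", "Value": "media-processing-queue"}],  # placeholder queue
            "Statistic": "Average",
        },
        "TargetValue": 100.0,  # assumed target backlog for the whole group
    },
)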
A company has a media catalog with metadata for each item in the catalog. Different types of metadata are extracted from the media items by an
application running on AWS Lambda. Metadata is extracted according to a number of rules with the output stored in an Amazon ElastiCache for
Redis cluster. The extraction process is done in batches and takes around 40 minutes to complete.
The update process is triggered manually whenever the metadata extraction rules change.
The company wants to reduce the amount of time it takes to extract metadata from its media catalog. To achieve this, a solutions architect has
split the single metadata extraction Lambda function into a Lambda function for each type of metadata.
Which additional steps should the solutions architect take to meet the requirements?
A. Create an AWS Step Functions workflow to run the Lambda functions in parallel. Create another Step Functions workflow that retrieves a list
of media items and executes a metadata extraction workflow for each one.
B. Create an AWS Batch compute environment for each Lambda function. Configure an AWS Batch job queue for the compute environment.
Create a Lambda function to retrieve a list of media items and write each item to the job queue.
C. Create an AWS Step Functions workflow to run the Lambda functions in parallel. Create a Lambda function to retrieve a list of media items
and write each item to an Amazon SQS queue. Configure the SQS queue as an input to the Step Functions workflow.
D. Create a Lambda function to retrieve a list of media items and write each item to an Amazon SQS queue. Subscribe the metadata extraction
Lambda functions to the SQS queue with a large batch size.
Correct Answer: C
The best solution presented is to use a combination of AWS Step Functions and Amazon SQS. This results in each
Lambda function being able to run in parallel and use a queue for buffering the jobs.
CORRECT: “Create an AWS Step Functions workflow to run the Lambda functions in parallel. Create a Lambda
function to retrieve a list of files and write each item to an Amazon SQS queue. Configure the SQS queue as an input to the Step Functions
workflow” is the correct answer
upvoted 2 times
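A rough Amazon States Language sketch of this fan-out, written as a Python dict: a Map state iterates over the media items, and a Parallel state inside it runs the per-metadata-type Lambda functions; the function ARNs, state names, and MaxConcurrency value are placeholders:
import json

definition = {
    "StartAt": "ForEachMediaItem",
    "States": {
        "ForEachMediaItem": {
            "Type": "Map",
            "ItemsPath": "$.mediaItems",   # list of media items in the execution input
            "MaxConcurrency": 40,          # assumed parallelism limit
            "End": True,
            "Iterator": {
                "StartAt": "ExtractAllMetadata",
                "States": {
                    "ExtractAllMetadata": {
                        "Type": "Parallel",
                        "End": True,
                        "Branches": [
                            {"StartAt": "ExtractExif", "States": {"ExtractExif": {
                                "Type": "Task",
                                "Resource": "arn:aws:lambda:us-east-1:111122223333:function:extract-exif",
                                "End": True}}},
                            {"StartAt": "ExtractCaptions", "States": {"ExtractCaptions": {
                                "Type": "Task",
                                "Resource": "arn:aws:lambda:us-east-1:111122223333:function:extract-captions",
                                "End": True}}},
                        ],
                    }
                },
            },
        }
    },
}
print(json.dumps(definition, indent=2))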
A utility company wants to collect usage data every 5 minutes from its smart meters to facilitate time-of-use metering. When a meter sends data
to AWS, the data is sent to Amazon API Gateway, processed by an AWS Lambda function and stored in an Amazon DynamoDB table. During the
pilot phase, the Lambda functions took from 3 to 5 seconds to complete.
As more smart meters are deployed, the Engineers notice the Lambda functions are taking from 1 to 2 minutes to complete. The functions are
also increasing in duration as new types of metrics are collected from the devices. There are many ProvisionedThroughputExceededException
errors while performing PUT operations on DynamoDB, and there are also many TooManyRequestsException errors from Lambda.
Which combination of changes will resolve these issues? (Choose two.)
C. Increase the payload size from the smart meters to send more data.
D. Stream the data into an Amazon Kinesis data stream from API Gateway and process the data in batches.
E. Collect data in an Amazon SQS FIFO queue, which triggers a Lambda function to process each message.
Correct Answer: AB
B is supported by https://aws.amazon.com/premiumsupport/knowledge-center/lambda-troubleshoot-throttling/
The blog quotes below.
Check for spikes in Duration metrics for your function
Concurrency depends on function duration. If your function code is taking too long to complete, then there might not be enough compute
resources.
Try increasing the function's memory setting. Then, use AWS X-Ray and CloudWatch Logs to isolate the cause of duration increases
D should not be ideal because it changes the whole architecture and will induce more latency I believe.
upvoted 2 times
Since this change has already passed the pilot phase and the issue is happening in the production workload, the simple fix should be
considered.
upvoted 9 times
To increase CPU for a Lambda function, oddly enough, you give it more memory: https://aws.amazon.com/blogs/compute/operating-lambda-
performance-optimization-part-2/
(This is the same kind of indirect performance increase by adjusting something seemingly unrelated like increasing an EBS disk's IOPS by
increasing the disk size.)
The "ProvisionedThroughputExceeded" exception is in the SDK the Lambda function is using to write to DynamoDB. When DynamoDB can't
keep up, it throws that error back to Lambda, and Lambda logs it. But it's indicating that you've run out of Write Capacity Units in DDB:
https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/dynamodbv2/model
/ProvisionedThroughputExceededException.html
upvoted 1 times
" # pradhyumna 1 year ago
AB may still be correct. I think the hint here is that the data collection is not real time instead every 5 minutes which is kind of queueing, so we
would not really need an SQS or a KDS. So, by simply increasing the memory, lambda can process faster and since it is processing faster, an
increase in the WCU should really fix the issue.
upvoted 4 times
An ecommerce company has an order processing application it wants to migrate to AWS. The application has inconsistent data volume patterns,
but needs to be available at all times. Orders must be processed as they occur and in the order that they are received.
Which set of steps should a solutions architect take to meet these requirements?
A. Use AWS Transfer for SFTP and upload orders as they occur. Use On-Demand Instances in multiple Availability Zones for processing.
B. Use Amazon SNS with FIFO and send orders as they occur. Use a single large Reserved Instance for processing.
C. Use Amazon SQS with FIFO and send orders as they occur. Use Reserved Instances in multiple Availability Zones for processing.
D. Use Amazon SQS with FIFO and send orders as they occur. Use Spot Instances in multiple Availability Zones for processing.
Correct Answer: C
First of all, if we set the bid price equal to the On-Demand price of a particular instance, then we are always going to get compute capacity. The
Spot price can't go higher than the On-Demand one, so it's never going to be interrupted.
Second of all, we can't predict the amount of RIs to purchase because of 1).
Third of all, the question states it "must be available all the time".
The perfect answer would be to use a fleet with RIs + Spot, because we can't predict how many RIs to purchase.
Without giving it too much thought it's C. But if you think about it for a bit longer, it seems to be D.
Following the KISS principle, let's say it's C.
upvoted 2 times
C correct.
upvoted 1 times
" # vbal 10 months, 2 weeks ago
I don't see the point in using RI with SQS; https://aws.amazon.com/blogs/compute/running-cost-effective-queue-workers-with-amazon-sqs-
and-amazon-ec2-spot-instances/
Answer: D
upvoted 3 times
An AWS partner company is building a service in AWS Organizations using its organization named org1. This service requires the partner company
to have access to AWS resources in a customer account, which is in a separate organization named org2. The company must establish least
privilege security access using an API or command line tool to the customer account.
What is the MOST secure way to allow org1 to access resources in org2?
A. The customer should provide the partner company with their AWS account access keys to log in and perform the required tasks.
B. The customer should create an IAM user and assign the required permissions to the IAM user. The customer should then provide the
credentials to the partner company to log in and perform the required tasks.
C. The customer should create an IAM role and assign the required permissions to the IAM role. The partner company should then use the IAM
role's Amazon Resource Name (ARN) when requesting access to perform the required tasks.
D. The customer should create an IAM role and assign the required permissions to the IAM role. The partner company should then use the IAM
role's Amazon Resource Name (ARN), including the external ID in the IAM role's trust policy, when requesting access to perform the required
tasks.
Correct Answer: B
https://docs.aws.amazon.com/IAM/latest/UserGuide/confused-deputy.html
upvoted 1 times
" # acloudguru 11 months, 2 weeks ago
Selected Answer: D
D is the Answer, such simple security question, hope I can have it in my real exam
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user_externalid.html
upvoted 1 times
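The external ID flow from D, seen from the partner (org1) side, is roughly the following; the role ARN, session name, and external ID are placeholders agreed with the customer out of band:
import boto3

sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::999988887777:role/PartnerAccessRole",  # placeholder role in the customer account
    RoleSessionName="org1-partner-session",
    ExternalId="unique-external-id-shared-out-of-band",          # must match the role's trust policy condition
)["Credentials"]

# Temporary credentials scoped to whatever the customer's role permits.
s3_in_customer_account = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)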
An enterprise company is building an infrastructure services platform for its users. The company has the following requirements:
✑ Provide least privilege access to users when launching AWS infrastructure so users cannot provision unapproved services.
✑ Use a central account to manage the creation of infrastructure services.
✑ Provide the ability to distribute infrastructure services to multiple accounts in AWS Organizations.
✑ Provide the ability to enforce tags on any infrastructure that is started by users.
Which combination of actions using AWS services will meet these requirements? (Choose three.)
A. Develop infrastructure services using AWS CloudFormation templates. Add the templates to a central Amazon S3 bucket and add the IAM
roles or users that require access to the S3 bucket policy.
B. Develop infrastructure services using AWS CloudFormation templates. Upload each template as an AWS Service Catalog product to
portfolios created in a central AWS account. Share these portfolios with the Organizations structure created for the company.
C. Allow user IAM roles to have AWSCloudFormationFullAccess and AmazonS3ReadOnlyAccess permissions. Add an Organizations SCP at
the AWS account root user level to deny all services except AWS CloudFormation and Amazon S3.
D. Allow user IAM roles to have ServiceCatalogEndUserAccess permissions only. Use an automation script to import the central portfolios to
local AWS accounts, copy the TagOptions, assign users access, and apply launch constraints.
E. Use the AWS Service Catalog TagOption Library to maintain a list of tags required by the company. Apply the TagOption to AWS Service
Catalog products or portfolios.
F. Use the AWS CloudFormation Resource Tags property to enforce the application of tags to any CloudFormation templates that will be
created for users.
If you apply the ServiceCatalogEndUserAccess policy, your users have access to the end user console view, but they won't have the permissions
that they need to launch products and manage provisioned products. You can grant these permissions directly to an end user in IAM, but if you
want to limit the access that end users have to AWS resources, you should attach the policy to a launch role. You then use AWS Service Catalog
to apply the launch role to a launch constraint for the product.
upvoted 1 times
AWSServiceCatalogEndUserReadOnlyAccess — Grants read-only access to the end user console view. Does not grant permission to launch
products or manage provisioned products.
BDE
upvoted 1 times
A Solutions Architect is building a solution for updating user metadata that is initiated by web servers. The solution needs to rapidly scale from
hundreds to tens of thousands of jobs in less than 30 seconds. The solution must be asynchronous, always available, and minimize costs.
Which strategies should the Solutions Architect use to meet these requirements?
A. Create an AWS SWF worker that will update user metadata. Update the web application to start a new workflow for every job.
B. Create an AWS Lambda function that will update user metadata. Create an Amazon SQS queue and configure it as an event source for the
Lambda function. Update the web application to send jobs to the queue.
C. Create an AWS Lambda function that will update user metadata. Create AWS Step Functions that will trigger the Lambda function. Update
the web application to initiate Step Functions for every job.
D. Create an Amazon SQS queue. Create an AMI with a worker to check the queue and update user metadata. Configure an Amazon EC2 Auto
Scaling group with the new AMI. Update the web application to send jobs to the queue.
Correct Answer: B
https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-limits.html
The problem with D is that it takes time to spin up new instances in an ASG. And the question said "rapidly scale from hundreds to tens of
thousands of jobs in less than 30 seconds".
upvoted 1 times
" # WhyIronMan 1 year ago
I'll go with B
upvoted 3 times
Scaling up EC2 fast enough to handle the load in 30 seconds seems impossible to me.
upvoted 4 times
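The wiring in B (SQS as an event source for Lambda) is a one-call setup; the function name, queue ARN, and batch size below are placeholders:
import boto3

lambda_client = boto3.client("lambda")
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:sqs:us-east-1:111122223333:metadata-jobs",  # placeholder queue
    FunctionName="update-user-metadata",                                # placeholder function
    BatchSize=10,   # up to 10 messages per invocation for a standard queue
    Enabled=True,
)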
A company's main intranet page has experienced degraded response times as its user base has increased although there are no reports of users
seeing error pages. The application uses Amazon DynamoDB in read-only mode.
Amazon DynamoDB latency metrics for successful requests have been in a steady state even during times when users have reported degradation.
The
Development team has correlated the issue to ProvisionedThroughputExceeded exceptions in the application logs when doing Scan and read
operations. The team also identified an access pattern of steady spikes of read activity on a distributed set of individual data items.
The Chief Technology Officer wants to improve the user experience.
Which solutions will meet these requirements with the LEAST amount of changes to the application? (Choose two.)
A. Change the data model of the DynamoDB tables to ensure that all Scan and read operations meet DynamoDB best practices of uniform data
access, reaching the full request throughput provisioned for the DynamoDB tables.
B. Enable DynamoDB Auto Scaling to manage the throughput capacity as table traffic increases. Set the upper and lower limits to control costs
and set a target utilization given the peak usage and how quickly the traffic changes.
C. Provision Amazon ElastiCache for Redis with cluster mode enabled. The cluster should be provisioned with enough shards to spread the
application load and provision at least one read replica node for each shard.
D. Implement the DynamoDB Accelerator (DAX) client and provision a DAX cluster with the appropriate node types to sustain the application
load. Tune the item and query cache configuration for an optimal user experience.
E. Remove error retries and exponential backoffs in the application code to handle throttling errors.
Correct Answer: AE
upvoted 2 times
" # tgv 1 year ago
BBB DDD
---
upvoted 2 times
A solutions architect has implemented a SAML 2.0 federated identity solution with their company's on-premises identity provider (IdP) to
authenticate users' access to the AWS environment. When the solutions architect tests authentication through the federated identity web portal,
access to the AWS environment is granted. However, when test users attempt to authenticate through the federated identity web portal, they are
not able to access the AWS environment.
Which items should the solutions architect check to ensure identity federation is properly configured? (Choose three.)
A. The IAM user's permissions policy has allowed the use of SAML federation for that user.
B. The IAM roles created for the federated users' or federated groups' trust policy have set the SAML provider as the principal.
C. Test users are not in the AWSFederatedUsers group in the company's IdP.
D. The web portal calls the AWS STS AssumeRoleWithSAML API with the ARN of the SAML provider, the ARN of the IAM role, and the SAML
assertion from the IdP.
E. The on-premises IdP's DNS hostname is reachable from the AWS environment VPCs.
F. The company's IdP defines SAML assertions that properly map users or groups in the company to IAM roles with appropriate permissions.
D: "The client app calls the AWS STS AssumeRoleWithSAML API, passing the ARN of the SAML provider, the ARN of the role to assume, and the
SAML assertion from IdP"
F: "In your organization's IdP, you define assertions that map users or groups in your organization to the IAM roles"
upvoted 18 times
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_saml.html
upvoted 3 times
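For context on D, the portal-side STS call is sketched below; the role ARN, SAML provider ARN, and assertion are placeholders (the assertion would be the base64-encoded response returned by the IdP):
import boto3

sts = boto3.client("sts")
response = sts.assume_role_with_saml(
    RoleArn="arn:aws:iam::111122223333:role/SAMLFederatedRole",      # placeholder role
    PrincipalArn="arn:aws:iam::111122223333:saml-provider/CorpIdP",  # placeholder SAML provider
    SAMLAssertion="base64-encoded-assertion-from-the-idp",           # placeholder assertion
)
creds = response["Credentials"]  # temporary keys for the federated session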
3, Verify "Prerequisites for creating a role for SAML" : Principal must has "PROVIDER-NAME"
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-idp_saml.html#idp_saml_Prerequisites
upvoted 1 times
I think
option E ('the AWS environment VPCs') is not needed: assuming a role with SAML does not require connectivity from the AWS account side
upvoted 1 times
D: "The client app calls the AWS STS AssumeRoleWithSAML API, passing the ARN of the SAML provider, the ARN of the role to assume, and the
SAML assertion from IdP"
F: "In your organization's IdP, you define assertions that map users or groups in your organization to the IAM roles"
upvoted 2 times
A company's security compliance requirements state that all Amazon EC2 images must be scanned for vulnerabilities and must pass a CVE
assessment. A solutions architect is developing a mechanism to create security-approved AMIs that can be used by developers. Any new AMIs
should go through an automated assessment process and be marked as approved before developers can use them. The approved images must be
scanned every 30 days to ensure compliance.
Which combination of steps should the solutions architect take to meet these requirements while following best practices? (Choose two.)
A. Use the AWS Systems Manager EC2 agent to run the CVE assessment on the EC2 instances launched from the AMIs that need to be
scanned.
B. Use AWS Lambda to write automatic approval rules. Store the approved AMI list in AWS Systems Manager Parameter Store. Use Amazon
EventBridge to trigger an AWS Systems Manager Automation document on all EC2 instances every 30 days.
C. Use Amazon Inspector to run the CVE assessment on the EC2 instances launched from the AMIs that need to be scanned.
D. Use AWS Lambda to write automatic approval rules. Store the approved AMI list in AWS Systems Manager Parameter Store. Use a managed
AWS Con+g rule for continuous scanning on all EC2 instances, and use AWS Systems Manager Automation documents for remediation.
E. Use AWS CloudTrail to run the CVE assessment on the EC2 instances launched from the AMIs that need to be scanned.
Correct Answer: BC
B - the question mentions 30 days, and this is the only answer that has 30 days in it.
C - a CVE assessment needs to be run; use 'Amazon Inspector', and only C has these words.
upvoted 4 times
A company uses AWS Organizations with a single OU named Production to manage multiple accounts. All accounts are members of the
Production OU.
Administrators use deny list SCPs in the root of the organization to manage access to restricted services.
The company recently acquired a new business unit and invited the new unit's existing AWS account to the organization. Once onboarded, the
administrators of the new business unit discovered that they are not able to update existing AWS Config rules to meet the company's policies.
Which option will allow administrators to make changes and continue to enforce the current policies without introducing additional long-term
maintenance?
A. Remove the organization's root SCPs that limit access to AWS Config. Create AWS Service Catalog products for the company's standard
AWS Config rules and deploy them throughout the organization, including the new account.
B. Create a temporary OU named Onboarding for the new account. Apply an SCP to the Onboarding OU to allow AWS Config actions. Move the
new account to the Production OU when adjustments to AWS Config are complete.
C. Convert the organization's root SCPs from deny list SCPs to allow list SCPs to allow the required services only. Temporarily apply an SCP to
the organization's root that allows AWS Config actions for principals only in the new account.
D. Create a temporary OU named Onboarding for the new account. Apply an SCP to the Onboarding OU to allow AWS Config actions. Move the
organization's root SCP to the Production OU. Move the new account to the Production OU when adjustments to AWS Config are complete.
Correct Answer: D
A company is launching a web-based application in multiple regions around the world. The application consists of both static content stored in a
private Amazon
S3 bucket and dynamic content hosted in Amazon ECS containers behind an Application Load Balancer (ALB). The company requires that
the static and dynamic application content be accessible through Amazon CloudFront only.
Which combination of steps should a solutions architect recommend to restrict direct content access to CloudFront? (Choose three.)
A. Create a web ACL in AWS WAF with a rule to validate the presence of a custom header and associate the web ACL with the ALB.
B. Create a web ACL in AWS WAF with a rule to validate the presence of a custom header and associate the web ACL with the CloudFront
distribution.
E. Update the S3 bucket ACL to allow access from the CloudFront distribution only.
F. Create a CloudFront Origin Access Identity (OAI) and add it to the CloudFront distribution. Update the S3 bucket policy to allow access to
the OAI only.
To deliver content through CloudFront only, we need to associate the web ACL with CloudFront, not the ALB. The ALB is for ECS here, and the OAI
doesn't need the ALB.
https://docs.aws.amazon.com/waf/latest/developerguide/web-acl-associating-aws-resource.html
https://aws.amazon.com/premiumsupport/knowledge-center/cloudfront-serve-static-website/?nc1=h_ls
upvoted 1 times
A company has multiple lines of business (LOBs) that roll up to the parent company. The company has asked its solutions architect to develop a
solution with the following requirements:
✑ Produce a single AWS invoice for all of the AWS accounts used by its LOBs.
✑ The costs for each LOB account should be broken out on the invoice.
✑ Provide the ability to restrict services and features in the LOB accounts, as defined by the company's governance policy.
✑ Each LOB account should be delegated full administrator permissions, regardless of the governance policy.
Which combination of steps should the solutions architect take to meet these requirements? (Choose two.)
A. Use AWS Organizations to create an organization in the parent account for each LOB. Then, invite each LOB account to the appropriate
organization.
B. Use AWS Organizations to create a single organization in the parent account. Then, invite each LOB's AWS account to join the organization.
C. Implement service quotas to define the services and features that are permitted and apply the quotas to each LOB as appropriate.
D. Create an SCP that allows only approved services and features, then apply the policy to the LOB accounts. Enable consolidated billing in the
parent account's billing console and link the LOB accounts.
Correct Answer: CD
upvoted 1 times
An ecommerce website running on AWS uses an Amazon RDS for MySQL DB instance with General Purpose SSD storage. The developers chose an
appropriate instance type based on demand, and configured 100 GB of storage with a sufficient amount of free space.
The website was running smoothly for a few weeks until a marketing campaign launched. On the second day of the campaign, users reported long
wait times and timeouts. Amazon CloudWatch metrics indicated that both reads and writes to the DB instance were experiencing long response
times. The CloudWatch metrics show 40% to 50% CPU and memory utilization, and sufficient free storage space is still available. The application
server logs show no evidence of database connectivity issues.
What could be the root cause of the issue with the marketing campaign?
A. It exhausted the I/O credit balance due to provisioning low disk storage during the setup phase.
B. It caused the data in the tables to change frequently, requiring indexes to be rebuilt to optimize queries.
D. It exhausted the network bandwidth available to the RDS for MySQL DB instance.
Correct Answer: C
upvoted 2 times
" # acloudguru 11 months, 3 weeks ago
Selected Answer: A
There is burst option but it can be exhausted
"When using General Purpose SSD storage, your DB instance receives an initial I/O credit balance of 5.4 million I/O credits. This initial credit
balance is enough to sustain a burst performance of 3,000 IOPS for 30 minutes."
upvoted 1 times
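A quick check of how long the quoted credit balance lasts on a 100 GB gp2 volume bursting at 3,000 IOPS, using the numbers from the burst model quoted above:
initial_credits = 5_400_000      # initial I/O credit balance
baseline_iops = 100 * 3          # gp2 baseline: 3 IOPS per provisioned GB
burst_iops = 3000

net_drain_per_second = burst_iops - baseline_iops
seconds = initial_credits / net_drain_per_second
print(f"~{seconds / 60:.0f} minutes of sustained burst")  # roughly 33 minutes at full load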
A solutions architect has been assigned to migrate a 50 TB Oracle data warehouse that contains sales data from on-premises to Amazon
Redshift. Major updates to the sales data occur on the final calendar day of the month. For the remainder of the month, the data warehouse only
receives minor daily updates and is primarily used for reading and reporting. Because of this, the migration process must start on the first day of
the month and must be complete before the next set of updates occur. This provides approximately 30 days to complete the migration and ensure
that the minor daily changes have been synchronized with the
Amazon Redshift data warehouse. Because the migration cannot impact normal business network operations, the bandwidth allocated to the
migration for moving data over the internet is 50 Mbps. The company wants to keep data migration costs low.
Which steps will allow the solutions architect to perform the migration within the specified timeline?
A. Install Oracle database software on an Amazon EC2 instance. Configure VPN connectivity between AWS and the company's data center.
Configure the Oracle database running on Amazon EC2 to join the Oracle Real Application Clusters (RAC). When the Oracle database on
Amazon EC2 finishes synchronizing, create an AWS DMS ongoing replication task to migrate the data from the Oracle database on Amazon
EC2 to Amazon Redshift. Verify the data migration is complete and perform the cut over to Amazon Redshift.
B. Create an AWS Snowball import job. Export a backup of the Oracle data warehouse. Copy the exported data to the Snowball device. Return
the Snowball device to AWS. Create an Amazon RDS for Oracle database and restore the backup file to that RDS instance. Create an AWS DMS
task to migrate the data from the RDS for Oracle database to Amazon Redshift. Copy daily incremental backups from Oracle in the data center
to the RDS for Oracle database over the internet. Verify the data migration is complete and perform the cut over to Amazon Redshift.
C. Install Oracle database software on an Amazon EC2 instance. To minimize the migration time, configure VPN connectivity between AWS
and the company's data center by provisioning a 1 Gbps AWS Direct Connect connection. Configure the Oracle database running on Amazon
EC2 to be a read replica of the data center Oracle database. Start the synchronization process between the company's on-premises data
center and the Oracle database on Amazon EC2. When the Oracle database on Amazon EC2 is synchronized with the on-premises database,
create an AWS DMS ongoing replication task to migrate the data from the Oracle database read replica that is running on Amazon EC2 to
Amazon Redshift. Verify the data migration is complete and perform the cut over to Amazon Redshift.
D. Create an AWS Snowball import job. Configure a server in the company's data center with an extraction agent. Use AWS SCT to manage the
extraction agent and convert the Oracle schema to an Amazon Redshift schema. Create a new project in AWS SCT using the registered data
extraction agent. Create a local task and an AWS DMS task in AWS SCT with replication of ongoing changes. Copy data to the Snowball device
and return the Snowball device to AWS. Allow AWS DMS to copy data from Amazon S3 to Amazon Redshift. Verify that the data migration is
complete and perform the cut over to Amazon Redshift.
Correct Answer: A
- Transmitting via Snowball (Edge) ~ 3-5 days, can hold up to 80TB usable disk, feasible
Between B and D, difference is around whether to use SCT and DMS to Snowball in your datacenter, then move to AWS. Or, copy to Snowball
in data center, move to AWS, then do DMS WITHOUT SCT within AWS. Clearly, you need SCT to go from Oracle to Redshift, so it has to be D
https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/agents.dw.html
upvoted 7 times
D - how would you make it work if the Snowball will travel to AWS for 3-4 days and one more day will be spent on restoring the database?
upvoted 1 times
since it is Oracle to Redshift, it needs SCT. Scan for the keyword SCT and the answer is D.
upvoted 1 times
" # denccc 1 year ago
D: https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/agents.dw.html
upvoted 1 times
A solutions architect is designing a disaster recovery strategy for a three-tier application. The application has an RTO of 30 minutes and an RPO of
5 minutes for the data tier. The application and web tiers are stateless and leverage a fleet of Amazon EC2 instances. The data tier consists of a
50 TB Amazon Aurora database.
Which combination of steps satisfies the RTO and RPO requirements while optimizing costs? (Choose two.)
A. Create daily snapshots of the EC2 instances and replicate the snapshots to another Region.
Correct Answer: AD
Because, if we have a fleet of stateless EC2 instances, why are we even taking snapshots? Suppose we have 5 instances in the app layer and
10 instances in the BL; what is the use of taking snapshots of the disks of the app layer, which is stateless? Instead, we can maintain a thin
hot-standby layer of 1 instance in the web tier and 1 instance in the BL behind an Auto Scaling group, with cross-Region replication of Aurora, and
we can bring the entire layer up within a few minutes by standing up the instances with CloudFormation against the DR database:
https://www.wellarchitectedlabs.com/reliability/disaster-recovery/workshop_4/
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-global-database-disaster-recovery.html
upvoted 2 times
What's needed here is warm standby (RPO in seconds, RTO in minutes), but since that is not available the only valid option is [B][D].
upvoted 1 times
A company has a primary Amazon S3 bucket that receives thousands of objects every day. The company needs to replicate these objects into
several other S3 buckets from various AWS accounts. A solutions architect is designing a new AWS Lambda function that is triggered when an
object is created in the main bucket and replicates the object into the target buckets. The objects do not need to be replicated in real time. There
is concern that this function may impact other critical
Lambda functions due to Lambda's regional concurrency limit.
How can the solutions architect ensure this new Lambda function will not impact other critical Lambda functions?
A. Set the new Lambda function reserved concurrency limit to ensure the executions do not impact other critical Lambda functions. Monitor
existing critical Lambda functions with Amazon CloudWatch alarms for the Throttles Lambda metric.
B. Increase the execution timeout of the new Lambda function to 5 minutes. Monitor existing critical Lambda functions with Amazon
CloudWatch alarms for the Throttles Lambda metric.
C. Configure S3 event notifications to add events to an Amazon SQS queue in a separate account. Create the new Lambda function in the
same account as the SQS queue and trigger the function when a message arrives in the queue.
D. Ensure the new Lambda function implements an exponential backoff algorithm. Monitor existing critical Lambda functions with Amazon
CloudWatch alarms for the Throttles Lambda metric.
Correct Answer: A
https://docs.aws.amazon.com/lambda/latest/dg/configuration-concurrency.html
"Your function can't scale out of control – Reserved concurrency also limits your function from using concurrency from the unreserved pool,
which caps its maximum concurrency. You can reserve concurrency to prevent your function from using all the available concurrency in the
Region, or from overloading downstream resources."
upvoted 1 times
C is wrong because the Lambda functions which read the messages from SQS may scale out to 1,000 if hundreds of thousands of uploads occur in
a very short time. It will impact the other Lambda functions.
Refer to https://docs.aws.amazon.com/lambda/latest/dg/with-sqs.html
upvoted 1 times
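For anyone who wants to see what option A looks like in practice, here is a minimal boto3 sketch. The function names, concurrency cap, and alarm threshold are assumptions for illustration, not values from the question.

    import boto3

    lambda_client = boto3.client("lambda")

    # Cap the new replication function so it cannot drain the Regional unreserved pool
    # (the function name and the limit of 100 are hypothetical).
    lambda_client.put_function_concurrency(
        FunctionName="s3-object-replicator",
        ReservedConcurrentExecutions=100,
    )

    # Alarm on throttling of an existing critical function (name and threshold are assumptions).
    cloudwatch = boto3.client("cloudwatch")
    cloudwatch.put_metric_alarm(
        AlarmName="critical-fn-throttles",
        Namespace="AWS/Lambda",
        MetricName="Throttles",
        Dimensions=[{"Name": "FunctionName", "Value": "critical-function"}],
        Statistic="Sum",
        Period=60,
        EvaluationPeriods=1,
        Threshold=1,
        ComparisonOperator="GreaterThanOrEqualToThreshold",
    )

Reserved concurrency both guarantees that much concurrency to the function and caps it there, which is why it protects the other functions in the Region.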
A company wants to run a serverless application on AWS. The company plans to provision its application in Docker containers running in an
Amazon ECS cluster.
The application requires a MySQL database and the company plans to use Amazon RDS. The company has documents that need to be accessed
frequently for the first 3 months, and rarely after that. The documents must be retained for 7 years.
What is the MOST cost-effective solution to meet these requirements?
A. Create an ECS cluster using On-Demand Instances. Provision the database and its read replicas in Amazon RDS using Spot Instances. Store
the documents in an encrypted EBS volume, and create a cron job to delete the documents after 7 years.
B. Create an ECS cluster using a fleet of Spot Instances, with Spot Instance draining enabled. Provision the database and its read replicas in
Amazon RDS using Reserved Instances. Store the documents in a secured Amazon S3 bucket with a lifecycle policy to move the documents
that are older than 3 months to Amazon S3 Glacier, then delete the documents from Amazon S3 Glacier that are more than 7 years old.
C. Create an ECS cluster using On-Demand Instances. Provision the database and its read replicas in Amazon RDS using On-Demand
Instances. Store the documents in Amazon EFS. Create a cron job to move the documents that are older than 3 months to Amazon S3 Glacier.
Create an AWS Lambda function to delete the documents in S3 Glacier that are older than 7 years.
D. Create an ECS cluster using a fleet of Spot Instances with Spot Instance draining enabled. Provision the database and its read replicas in
Amazon RDS using On-Demand Instances. Store the documents in a secured Amazon S3 bucket with a lifecycle policy to move the documents
that are older than 3 months to Amazon S3 Glacier, then delete the documents in Amazon S3 Glacier after 7 years.
Correct Answer: B
https://aws.amazon.com/ec2/spot/containers-for-less/get-started/
https://aws.amazon.com/ec2/spot/instance-advisor/
upvoted 2 times
B for sure
upvoted 1 times
" # AzureDP900 11 months, 1 week ago
B is right
upvoted 1 times
Choosing B as the answer.
A and C are eliminated due to cron usage.
D is eliminated due to the On-Demand DB instances, whose cost can be reduced with Reserved Instances (it seems the database needs to run for several years).
upvoted 1 times
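To make the lifecycle part of option B concrete, here is a hedged boto3 sketch. The bucket name is hypothetical, and 7 years is approximated as 2,555 days.

    import boto3

    s3 = boto3.client("s3")

    # Transition documents to Glacier after ~3 months and expire them after ~7 years.
    s3.put_bucket_lifecycle_configuration(
        Bucket="company-documents-bucket",
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "archive-then-expire",
                    "Status": "Enabled",
                    "Filter": {"Prefix": ""},
                    "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
                    "Expiration": {"Days": 2555},
                }
            ]
        },
    )

The point of the lifecycle rule is that no cron job or Lambda cleanup is needed; S3 handles both the transition and the deletion.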
A financial services company receives a regular data feed from its credit card servicing partner. Approximately 5,000 records are sent every 15
minutes in plaintext, delivered over HTTPS directly into an Amazon S3 bucket with server-side encryption. This feed contains sensitive credit card
primary account number
(PAN) data. The company needs to automatically mask the PAN before sending the data to another S3 bucket for additional internal processing.
The company also needs to remove and merge specific fields, and then transform the record into JSON format. Additionally, extra feeds are likely
to be added in the future, so any design needs to be easily expandable.
Which solution will meet these requirements?
A. Trigger an AWS Lambda function on file delivery that extracts each record and writes it to an Amazon SQS queue. Trigger another Lambda
function when new messages arrive in the SQS queue to process the records, writing the results to a temporary location in Amazon S3. Trigger
a final Lambda function once the SQS queue is empty to transform the records into JSON format and send the results to another S3 bucket for
internal processing.
B. Trigger an AWS Lambda function on file delivery that extracts each record and writes it to an Amazon SQS queue. Configure an AWS Fargate
container application to automatically scale to a single instance when the SQS queue contains messages. Have the application process each
record, and transform the record into JSON format. When the queue is empty, send the results to another S3 bucket for internal processing
and scale down the AWS Fargate instance.
C. Create an AWS Glue crawler and custom classifier based on the data feed formats and build a table definition to match. Trigger an AWS
Lambda function on file delivery to start an AWS Glue ETL job to transform the entire record according to the processing and transformation
requirements. Define the output format as JSON. Once complete, have the ETL job send the results to another S3 bucket for internal
processing.
D. Create an AWS Glue crawler and custom classifier based upon the data feed formats and build a table definition to match. Perform an
Amazon Athena query on file delivery to start an Amazon EMR ETL job to transform the entire record according to the processing and
transformation requirements. Define the output format as JSON. Once complete, send the results to another S3 bucket for internal processing
and scale down the EMR cluster.
Correct Answer: A
https://d1.awsstatic.com/Products/product-name/diagrams/product-page-diagram_Glue_Event-driven-ETL-
Pipelines.e24d59bb79a9e24cdba7f43ffd234ec0482a60e2.png
upvoted 6 times
Just in case the URL for that image gets modified, scroll down to "Use Cases" on the home page for Glue: https://aws.amazon.com/glue/
upvoted 1 times
Lambda function on file delivery to start an AWS Glue ETL job to transform the entire record according to the processing and transformation
requirements. Define the output format as JSON. Once complete, have the ETL job send the results to another S3 bucket for internal processing.
upvoted 1 times
" # AzureDP900 11 months, 1 week ago
C is correct
You can use a Glue crawler to populate the AWS Glue Data Catalog with tables. The Lambda function can be triggered
using S3 event notifications when object create events occur. The Lambda function will then trigger the Glue ETL job
to transform the records masking the sensitive data and modifying the output format to JSON. This solution meets all
requirements.
upvoted 1 times
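A minimal sketch of the Lambda trigger described in option C, assuming a hypothetical Glue job name and argument key; the S3 event notification itself would be configured on the delivery bucket.

    import boto3

    glue = boto3.client("glue")

    def handler(event, context):
        # Triggered by an S3 ObjectCreated event: start the Glue ETL job for the new feed file.
        for record in event["Records"]:
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]
            glue.start_job_run(
                JobName="mask-pan-and-transform-to-json",
                Arguments={"--source_path": f"s3://{bucket}/{key}"},
            )

Adding a new feed later would mostly mean adding another classifier/table and, if needed, another job, which is why C is considered easily expandable.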
A media company is serving video files stored in Amazon S3 using Amazon CloudFront. The development team needs access to the logs to
diagnose faults and perform service monitoring. The log files from CloudFront may contain sensitive information about users.
The company uses a log processing service to remove sensitive information before making the logs available to the development team. The
company has the following requirements for the unprocessed logs:
✑ The logs must be encrypted at rest and must be accessible by the log processing service only.
✑ Only the data protection team can control access to the unprocessed log files.
✑ AWS CloudFormation templates must be stored in AWS CodeCommit.
✑ AWS CodePipeline must be triggered on commit to perform updates made to CloudFormation templates.
CloudFront is already writing the unprocessed logs to an Amazon S3 bucket, and the log processing service is operating against this S3 bucket.
Which combination of steps should a solutions architect take to meet the company's requirements? (Choose two.)
A. Create an AWS KMS key that allows the AWS Logs Delivery account to generate data keys for encryption. Configure S3 default encryption to
use server-side encryption with KMS managed keys (SSE-KMS) on the log storage bucket using the new KMS key. Modify the KMS key policy to
allow the log processing service to perform decrypt operations.
B. Create an AWS KMS key that allows the CloudFront service role to generate data keys for encryption. Configure S3 default encryption to use
KMS managed keys (SSE-KMS) on the log storage bucket using the new KMS key. Modify the KMS key policy to allow the log processing
service to perform decrypt operations.
C. Configure S3 default encryption to use AWS KMS managed keys (SSE-KMS) on the log storage bucket using the AWS managed S3 KMS key.
Modify the KMS key policy to allow the CloudFront service role to generate data keys for encryption. Modify the KMS key policy to allow the log
processing service to perform decrypt operations.
D. Create a new CodeCommit repository for the AWS KMS key template. Create an IAM policy to allow commits to the new repository and
attach it to the data protection team's users. Create a new CodePipeline pipeline with a custom IAM role to perform KMS key updates using
CloudFormation. Modify the KMS key policy to allow the CodePipeline IAM role to modify the key policy.
E. Use the existing CodeCommit repository for the AWS KMS key template. Create an IAM policy to allow commits to the new repository and
attach it to the data protection team's users. Modify the existing CodePipeline pipeline to use a custom IAM role and to perform KMS key
updates using CloudFormation. Modify the KMS key policy to allow the CodePipeline IAM role to modify the key policy.
Correct Answer: AD
If you enabled server-side encryption for your Amazon S3 bucket using AWS KMS-managed keys (SSE-KMS) with a customer-managed
Customer Master Key (CMK), you must add the following to the key policy for your CMK to enable writing log files to the bucket. You cannot use
the default CMK because CloudFront won't be able to upload the log files to the bucket.
URL : https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/AccessLogs.html#AccessLogsKMSPermissions
upvoted 11 times
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/encrypt-log-data-kms.html
upvoted 1 times
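To make the key-policy side of option A more concrete, here is a hedged boto3 sketch. The account ID, role name, key ID, and statement Sids are placeholders, and the exact log-delivery principal should be verified against the AccessLogsKMSPermissions page linked above.

    import json
    import boto3

    kms = boto3.client("kms")

    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {   # Keep the account root as key administrator so the key cannot be locked out.
                "Sid": "AllowAccountAdministration",
                "Effect": "Allow",
                "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
                "Action": "kms:*",
                "Resource": "*",
            },
            {   # Let the log delivery service generate data keys to encrypt the access logs.
                "Sid": "AllowLogDeliveryToGenerateDataKeys",
                "Effect": "Allow",
                "Principal": {"Service": "delivery.logs.amazonaws.com"},
                "Action": "kms:GenerateDataKey*",
                "Resource": "*",
            },
            {   # Only the log processing service role may decrypt the unprocessed logs.
                "Sid": "AllowLogProcessingServiceToDecrypt",
                "Effect": "Allow",
                "Principal": {"AWS": "arn:aws:iam::111122223333:role/log-processing-service"},
                "Action": "kms:Decrypt",
                "Resource": "*",
            },
        ],
    }

    kms.put_key_policy(
        KeyId="1234abcd-12ab-34cd-56ef-1234567890ab",
        PolicyName="default",
        Policy=json.dumps(policy),
    )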
A company's service for video game recommendations has just gone viral. The company has new users from all over the world. The website for
the service is hosted on a set of Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer (ALB). The website
consists of static content with different resources being loaded depending on the device type.
Users recently reported that the load time for the website has increased. Administrators are reporting high loads on the EC2 instances that host
the service.
Which set of actions should a solutions architect take to improve response times?
A. Create separate Auto Scaling groups based on device types. Switch to Network Load Balancer (NLB). Use the User-Agent HTTP header in
the NLB to route to a different set of EC2 instances.
B. Move content to Amazon S3. Create an Amazon CloudFront distribution to serve content out of the S3 bucket. Use Lambda@Edge to load
different resources based on the User-Agent HTTP header.
C. Create a separate ALB for each device type. Create one Auto Scaling group behind each ALB. Use Amazon Route 53 to route to different
ALBs depending on the User-Agent HTTP header.
D. Move content to Amazon S3. Create an Amazon CloudFront distribution to serve content out of the S3 bucket. Use the User-Agent HTTP
header to load different content.
Correct Answer: A
https://aws.amazon.com/blogs/networking-and-content-delivery/dynamically-route-viewer-requests-to-any-origin-using-lambdaedge/
upvoted 2 times
Here's the exact fragment URL on that page to the code to redirect based on device type:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-examples.html#lambda-examples-vary-on-device-type
upvoted 1 times
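For reference, here is a minimal Lambda@Edge origin-request handler (Python runtimes are supported at the edge) that varies the S3 path by device type. It assumes the CloudFront-Is-*-Viewer headers are included in the origin request and that /mobile, /tablet, and /desktop prefixes exist in the bucket; it is only a sketch of the idea behind option B, not a complete solution.

    def handler(event, context):
        # Route to device-specific content paths based on CloudFront's device headers.
        request = event["Records"][0]["cf"]["request"]
        headers = request["headers"]

        if headers.get("cloudfront-is-mobile-viewer", [{}])[0].get("value") == "true":
            request["uri"] = "/mobile" + request["uri"]
        elif headers.get("cloudfront-is-tablet-viewer", [{}])[0].get("value") == "true":
            request["uri"] = "/tablet" + request["uri"]
        else:
            request["uri"] = "/desktop" + request["uri"]

        return request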
A company is planning a large event where a promotional offer will be introduced. The company's website is hosted on AWS and backed by an
Amazon RDS for
PostgreSQL DB instance. The website explains the promotion and includes a sign-up page that collects user information and preferences.
Management expects large and unpredictable volumes of traffic periodically, which will create many database writes. A solutions architect needs
to build a solution that does not change the underlying data model and ensures that submissions are not dropped before they are committed to
the database.
Which solution meets these requirements?
A. Immediately before the event, scale up the existing DB instance to meet the anticipated demand. Then scale down after the event.
B. Use Amazon SQS to decouple the application and database layers. Configure an AWS Lambda function to write items from the queue into
the database.
C. Migrate to Amazon DynamoDB and manage throughput capacity with automatic scaling.
D. Use Amazon ElastiCache for Memcached to increase write capacity to the DB instance.
Correct Answer: D
Reference:
https://aws.amazon.com/elasticache/faqs/
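Whichever answer you prefer, the buffering pattern described in option B is simple to sketch. The queue URL below is hypothetical, and a separate Lambda consumer (not shown) would drain the queue and insert rows into the PostgreSQL database.

    import json
    import boto3

    sqs = boto3.client("sqs")
    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/111122223333/signup-queue"  # hypothetical

    def enqueue_signup(submission: dict) -> None:
        # Buffer the sign-up so it is not dropped if the database is briefly saturated.
        sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(submission))

Because the queue absorbs write spikes, the underlying data model in RDS does not have to change.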
A mobile app has become very popular, and usage has gone from a few hundred to millions of users. Users capture and upload images of
activities within a city, and provide ratings and recommendations. Data access patterns are unpredictable. The current application is hosted on
Amazon EC2 instances behind an
Application Load Balancer (ALB). The application is experiencing slowdowns and costs are growing rapidly.
Which changes should a solutions architect make to the application architecture to control costs and improve performance?
A. Create an Amazon CloudFront distribution and place the ALB behind the distribution. Store static content in Amazon S3 in an Infrequent
Access storage class.
B. Store static content in an Amazon S3 bucket using the Intelligent Tiering storage class. Use an Amazon CloudFront distribution in front of
the S3 bucket and the ALB.
C. Place AWS Global Accelerator in front of the ALB. Migrate the static content to Amazon EFS, and then run an AWS Lambda function to
resize the images during the migration process.
D. Move the application code to AWS Fargate containers and swap out the EC2 instances with the Fargate containers.
Correct Answer: B
upvoted 1 times
A financial company with multiple departments wants to expand its on-premises environment to the AWS Cloud. The company must retain
centralized access control using an existing on-premises Active Directory (AD) service. Each department should be allowed to create AWS
accounts with preconfigured networking and should have access to only a specific list of approved services. Departments are not permitted to
have account administrator permissions.
What should a solutions architect do to meet these security requirements?
A. Configure AWS Identity and Access Management (IAM) with a SAML identity provider (IdP) linked to the on-premises Active Directory, and
create a role to grant access. Configure AWS Organizations with SCPs and create new member accounts. Use AWS CloudFormation templates
to configure the member account networking.
B. Deploy an AWS Control Tower landing zone. Create an AD Connector linked to the on-premises Active Directory. Change the identity source
in AWS Single Sign-On to use Active Directory. Allow department administrators to use Account Factory to create new member accounts and
networking. Grant the departments AWS power user permissions on the created accounts.
C. Deploy an Amazon Cloud Directory. Create a two-way trust relationship with the on-premises Active Directory, and create a role to grant
access. Set up an AWS Service Catalog to use AWS CloudFormation templates to create the new member accounts and networking. Use IAM
roles to allow access to approved AWS services.
D. Configure AWS Directory Service for Microsoft Active Directory with AWS Single Sign-On. Join the service to the on-premises Active
Directory. Use AWS CloudFormation to create new member accounts and networking. Use IAM roles to allow access to approved AWS
services.
Correct Answer: B
Reference:
https://d1.awsstatic.com/whitepapers/aws-overview.pdf
(46)
https://aws.amazon.com/controltower/features/
upvoted 14 times
1) Blueprints are available to provide identity management, federate access to accounts, centralize logging, establish cross-account security
audits, define workflows for provisioning accounts, and implement account baselines with network configurations.
2) Control Tower provides mandatory and strongly recommended high-level rules, called guardrails, that help enforce your policies using service
control policies (SCPs), or detect policy violations using AWS Config rules.
upvoted 3 times
https://aws.amazon.com/blogs/security/aws-federated-authentication-with-active-directory-federation-services-ad-fs/
upvoted 1 times
A large financial company is deploying applications that consist of Amazon EC2 and Amazon RDS instances to the AWS Cloud using AWS
CloudFormation.
The CloudFormation stack has the following stack policy:
The company wants to ensure that developers do not lose data by accidentally removing or replacing RDS instances when updating the
CloudFormation stack.
Developers also still need to be able to modify or remove EC2 instances as needed.
How should the company change the stack policy to meet these requirements?
A. Modify the statement to specify "Effect":"Deny","Action":["Update:*"] for all logical RDS resources.
B. Modify the statement to specify "Effect":"Deny","Action":["Update:Delete"] for all logical RDS resources.
C. Add a second statement that specifies "Effect":"Deny","Action":["Update:Delete","Update:Replace"] for all logical RDS resources.
D. Add a second statement that specifies "Effect":"Deny","Action":["Update:*"] for all logical RDS resources.
Correct Answer: C
upvoted 1 times
" # backfringe 11 months, 2 weeks ago
CCCCCCC
upvoted 1 times
A & B are invalid because by overwriting that allow statement, you would not allow updates to anything. Whereas C & D leave the general
allow statement in place, but add another statement with more specific deny actions for the RDS resources
Between C & D, there are four options for the Update action:
- Update:Modify
- Update:Replace
- Update:Delete
- Update:*
The question says to deny "removing or replacing RDS instances", so that means we only need to deny Update:Replace and
Update:Delete, while still allowing Update:Modify
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/protect-stack-resources.html#stack-policy-reference
upvoted 3 times
https://docs.aws.amazon.com/ja_jp/AWSCloudFormation/latest/UserGuide/protect-stack-resources.html
upvoted 3 times
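Since the stack policy referenced in the question did not survive the copy, here is a hedged reconstruction of what option C would look like when applied with boto3; the stack name and logical resource ID are assumptions.

    import json
    import boto3

    cfn = boto3.client("cloudformation")

    # Allow all updates, but deny delete/replace update actions on the RDS resource.
    stack_policy = {
        "Statement": [
            {"Effect": "Allow", "Action": "Update:*", "Principal": "*", "Resource": "*"},
            {
                "Effect": "Deny",
                "Action": ["Update:Delete", "Update:Replace"],
                "Principal": "*",
                "Resource": "LogicalResourceId/ProductionDatabase",
            },
        ]
    }

    cfn.set_stack_policy(StackName="app-stack", StackPolicyBody=json.dumps(stack_policy))

Update:Modify stays allowed on the RDS resource, so developers can still change EC2 instances and tune the database without being able to remove or replace it.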
A company is currently in the design phase of an application that will need an RPO of less than 5 minutes and an RTO of less than 10 minutes. The
solutions architecture team is forecasting that the database will store approximately 10 TB of data. As part of the design, they are looking for a
database solution that will provide the company with the ability to fail over to a secondary Region.
Which solution will meet these business requirements at the LOWEST cost?
A. Deploy an Amazon Aurora DB cluster and take snapshots of the cluster every 5 minutes. Once a snapshot is complete, copy the snapshot to
a secondary Region to serve as a backup in the event of a failure.
B. Deploy an Amazon RDS instance with a cross-Region read replica in a secondary Region. In the event of a failure, promote the read replica
to become the primary.
C. Deploy an Amazon Aurora DB cluster in the primary Region and another in a secondary Region. Use AWS DMS to keep the secondary Region
in sync.
D. Deploy an Amazon RDS instance with a read replica in the same Region. In the event of a failure, promote the read replica to become the
primary.
Correct Answer: B
upvoted 2 times
" # tvs 1 year ago
cost, so go with B.
upvoted 3 times
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-global-database.html
Even with Aurora global DB, C refers to using DMS to sync the DB, which is invalid. And even if you ignore the DMS problem, it's still going to
be more costly to run an Aurora cluster in each region as opposed to the classic use case of a single read replica in the DR region and
promote to master during a DR scenario. And B also allows you to pick any RDS engine you want (as long as it supports read replicas), not
just Aurora
upvoted 2 times
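A minimal boto3 sketch of option B, with made-up Regions, identifiers, and instance class; note that the source must be referenced by ARN for a cross-Region read replica.

    import boto3

    # Create the cross-Region read replica in the DR Region.
    rds_dr = boto3.client("rds", region_name="eu-west-1")
    rds_dr.create_db_instance_read_replica(
        DBInstanceIdentifier="app-db-replica",
        SourceDBInstanceIdentifier="arn:aws:rds:us-east-1:111122223333:db:app-db",
        DBInstanceClass="db.r5.2xlarge",
    )

    # During a DR event, promote the replica to a standalone primary.
    rds_dr.promote_read_replica(DBInstanceIdentifier="app-db-replica")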
A company has a web application that uses Amazon API Gateway, AWS Lambda, and Amazon DynamoDB. A recent marketing campaign has
increased demand.
Monitoring software reports that many requests have significantly longer response times than before the marketing campaign.
A solutions architect enabled Amazon CloudWatch Logs for API Gateway and noticed that errors are occurring on 20% of the requests. In
CloudWatch, the
Lambda function Throttles metric represents 1% of the requests and the Errors metric represents 10% of the requests. Application logs indicate
that, when errors occur, there is a call to DynamoDB.
What change should the solutions architect make to improve the current response times as the web application becomes more popular?
Correct Answer: B
upvoted 4 times
" # mustpassla 1 year ago
B for sure
upvoted 2 times
A European online newspaper service hosts its public-facing WordPress site in a collocated data center in London. The current WordPress
infrastructure consists of a load balancer, two web servers, and one MySQL database server. A solutions architect is tasked with designing a
solution with the following requirements:
✑ Improve the website's performance
✑ Make the web tier scalable and stateless
✑ Improve the database server performance for read-heavy loads
✑ Reduce latency for users across Europe and the US
✑ Design the new architecture with a goal of 99.9% availability
Which solution meets these requirements while optimizing operational efficiency?
A. Use an Application Load Balancer (ALB) in front of an Auto Scaling group of WordPress Amazon EC2 instances in one AWS Region and
three Availability Zones. Configure an Amazon ElastiCache cluster in front of a Multi-AZ Amazon Aurora MySQL DB cluster. Move the
WordPress shared files to Amazon EFS. Configure Amazon CloudFront with the ALB as the origin, and select a price class that includes the US
and Europe.
B. Use an Application Load Balancer (ALB) in front of an Auto Scaling group of WordPress Amazon EC2 instances in two AWS Regions and
two Availability Zones in each Region. Configure an Amazon ElastiCache cluster in front of a global Amazon Aurora MySQL database. Move
the WordPress shared files to Amazon EFS. Configure Amazon CloudFront with the ALB as the origin, and select a price class that includes the
US and Europe. Configure EFS cross-Region replication.
C. Use an Application Load Balancer (ALB) in front of an Auto Scaling group of WordPress Amazon EC2 instances in one AWS Region and
three Availability Zones. Configure an Amazon DocumentDB table in front of a Multi-AZ Amazon Aurora MySQL DB cluster. Move the
WordPress shared files to Amazon EFS. Configure Amazon CloudFront with the ALB as the origin, and select a price class that includes all
global locations.
D. Use an Application Load Balancer (ALB) in front of an Auto Scaling group of WordPress Amazon EC2 instances in two AWS Regions and
three Availability Zones in each Region. Configure an Amazon ElastiCache cluster in front of a global Amazon Aurora MySQL database. Move
the WordPress shared files to Amazon FSx with cross-Region synchronization. Configure Amazon CloudFront with the ALB as the origin and a
price class that includes the US and Europe.
Correct Answer: A
A company built an ecommerce website on AWS using a three-tier web architecture. The application is Java-based and composed of an Amazon
CloudFront distribution, an Apache web server layer of Amazon EC2 instances in an Auto Scaling group, and a backend Amazon Aurora MySQL
database.
Last month, during a promotional sales event, users reported errors and timeouts while adding items to their shopping carts. The operations team
recovered the logs created by the web servers and reviewed Aurora DB cluster performance metrics. Some of the web servers were terminated
before logs could be collected and the Aurora metrics were not sufficient for query performance analysis.
Which combination of steps must the solutions architect take to improve application performance visibility during peak traffic events? (Choose
three.)
A. Configure the Aurora MySQL DB cluster to publish slow query and error logs to Amazon CloudWatch Logs.
B. Implement the AWS X-Ray SDK to trace incoming HTTP requests on the EC2 instances and implement tracing of SQL queries with the X-Ray
SDK for Java.
C. Configure the Aurora MySQL DB cluster to stream slow query and error logs to Amazon Kinesis
D. Install and configure an Amazon CloudWatch Logs agent on the EC2 instances to send the Apache logs to CloudWatch Logs.
E. Enable and configure AWS CloudTrail to collect and analyze application activity from Amazon EC2 and Aurora.
F. Enable Aurora MySQL DB cluster performance benchmarking and publish the stream to AWS X-Ray.
A solutions architect has an operational workload deployed on Amazon EC2 instances in an Auto Scaling group. The VPC architecture spans two
Availability
Zones (AZ) with a subnet in each that the Auto Scaling group is targeting. The VPC is connected to an on-premises environment and connectivity
cannot be interrupted. The maximum size of the Auto Scaling group is 20 instances in service. The VPC IPv4 addressing is as follows:
A. Update the Auto Scaling group to use the AZ2 subnet only. Delete and re-create the AZ1 subnet using half the previous address space.
Adjust the Auto Scaling group to also use the new AZ1 subnet. When the instances are healthy, adjust the Auto Scaling group to use the AZ1
subnet only. Remove the current AZ2 subnet. Create a new AZ2 subnet using the second half of the address space from the original AZ1
subnet. Create a new AZ3 subnet using half the original AZ2 subnet address space, then update the Auto Scaling group to target all three new
subnets.
B. Terminate the EC2 instances in the AZ1 subnet. Delete and re-create the AZ1 subnet using half the address space. Update the Auto Scaling
group to use this new subnet. Repeat this for the second AZ. Define a new subnet in AZ3, then update the Auto Scaling group to target all three
new subnets.
C. Create a new VPC with the same IPv4 address space and define three subnets, with one for each AZ. Update the existing Auto Scaling
group to target the new subnets in the new VPC.
D. Update the Auto Scaling group to use the AZ2 subnet only. Update the AZ1 subnet to have the previous address space. Adjust the Auto
Scaling group to also use the AZ1 subnet again. When the instances are healthy, adjust the Auto Scaling group to use the AZ1 subnet only.
Update the current AZ2 subnet and assign the second half of the address space from the original AZ1 subnet. Create a new AZ3 subnet using
half the original AZ2 subnet address space, then update the Auto Scaling group to target all three new subnets.
Correct Answer: A
A: it says delete and re-create, but you also need to terminate the instances first, which option B points out clearly.
C: this approach cannot be used because the VPC is connected to on-premises and connectivity cannot be interrupted.
D: modifying a subnet's address space is not allowed; you need to delete and re-create subnets.
upvoted 2 times
A company is storing data on premises on a Windows file server. The company produces 5 GB of new data daily.
The company migrated part of its Windows-based workload to AWS and needs the data to be available on a file system in the cloud. The company
already has established an AWS Direct Connect connection between the on-premises network and AWS.
Which data migration strategy should the company use?
A. Use the file gateway option in AWS Storage Gateway to replace the existing Windows file server, and point the existing file share to the new
file gateway
B. Use AWS DataSync to schedule a daily task to replicate data between the on-premises Windows file server and Amazon FSx
C. Use AWS Data Pipeline to schedule a daily task to replicate data between the on-premises Windows file server and Amazon Elastic File
System (Amazon EFS)
D. Use AWS DataSync to schedule a daily task to replicate data between the on-premises Windows file server and Amazon Elastic File System
(Amazon EFS)
(Amazon EFS)
Correct Answer: B
The statement says "relocated", which means the migration has already happened and now what they want is just access to the data on premises.
So it has to be Storage Gateway.
upvoted 1 times
" # Anhdd 5 months, 1 week ago
Selected Answer: A
It says that the company "relocated a portion of its Windows-based workload to AWS". So in this case we have to use Storage Gateway, because we need to
access the data both from on-premises and on AWS. So we can't use DataSync, which is used to transfer 100% of the data to AWS and keeps no data remaining
on-premises. That's my opinion, so the answer should be A.
upvoted 1 times
A company uses AWS Organizations to manage one parent account and nine member accounts. The number of member accounts is expected to
grow as the business grows. A security engineer has requested consolidation of AWS CloudTrail logs into the parent account for compliance
purposes. Existing logs currently stored in Amazon S3 buckets in each individual member account should not be lost. Future member accounts
should comply with the logging strategy.
Which operationally efficient solution meets these requirements?
A. Create an AWS Lambda function in each member account with a cross-account role. Trigger the Lambda functions when new CloudTrail
logs are created and copy the CloudTrail logs to a centralized S3 bucket. Set up an Amazon CloudWatch alarm to alert if CloudTrail is not
configured properly.
B. Configure CloudTrail in each member account to deliver log events to a central S3 bucket. Ensure the central S3 bucket policy allows
PutObject access from the member accounts. Migrate existing logs to the central S3 bucket. Set up an Amazon CloudWatch alarm to alert if
CloudTrail is not configured properly.
C. Configure an organization-level CloudTrail in the parent account to deliver log events to a central S3 bucket. Migrate the existing CloudTrail
logs from each member account to the central S3 bucket. Delete the existing CloudTrail and logs in the member accounts.
D. Configure an organization-level CloudTrail in the parent account to deliver log events to a central S3 bucket. Configure CloudTrail in each
member account to deliver log events to the central S3 bucket.
Correct Answer: A
Reference:
https://aws.amazon.com/blogs/architecture/stream-amazon-cloudwatch-logs-to-a-centralized-account-for-audit-and-analysis/
upvoted 1 times
CloudTrail cannot manage the logs for other accounts; only the destination bucket can be shared centrally.
upvoted 2 times
https://d0.awsstatic.com/aws-answers/AWS_Multi_Account_Security_Strategy.pdf
https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-receive-logs-from-multiple-accounts.html
upvoted 1 times
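For completeness, an organization trail (as in options C/D) is a single API call made from the management account; the trail and bucket names below are hypothetical, and the central bucket still needs a bucket policy that allows CloudTrail to write to it.

    import boto3

    cloudtrail = boto3.client("cloudtrail")

    # One trail covers every current and future member account in the organization.
    cloudtrail.create_trail(
        Name="org-trail",
        S3BucketName="central-cloudtrail-logs",
        IsMultiRegionTrail=True,
        IsOrganizationTrail=True,
    )
    cloudtrail.start_logging(Name="org-trail")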
A weather service provides high-resolution weather maps from a web application hosted on AWS in the eu-west-1 Region. The weather maps are
updated frequently and stored in Amazon S3 along with static HTML content. The web application is fronted by Amazon CloudFront.
The company recently expanded to serve users in the us-east-1 Region, and these new users report that viewing their respective weather maps is
slow from time to time.
Which combination of steps will resolve the us-east-1 performance issues? (Choose two.)
A. Configure the AWS Global Accelerator endpoint for the S3 bucket in eu-west-1. Configure endpoint groups for TCP ports 80 and 443 in us-
east-1.
B. Create a new S3 bucket in us-east-1. Configure S3 cross-Region replication to synchronize from the S3 bucket in eu-west-1.
C. Use Lambda@Edge to modify requests from North America to use the S3 Transfer Acceleration endpoint in us-east-1.
D. Use Lambda@Edge to modify requests from North America to use the S3 bucket in us-east-1.
E. Configure the AWS Global Accelerator endpoint for us-east-1 as an origin on the CloudFront distribution. Use Lambda@Edge to modify
requests from North America to use the new origin.
Correct Answer: BC
Cross-Region Replication is an asynchronous process, and the objects are eventually replicated. Most objects replicate within 15 minutes, but
sometimes replication can take a couple hours or more.
https://aws.amazon.com/premiumsupport/knowledge-center/s3-crr-replication-time/
To serve content from these other regions, we need to route requests to the different Amazon S3 buckets we’re using. In this post, we explore
how to accomplish this by using Amazon CloudFront as a content delivery network and Lambda@Edge as a router. We will also take a quick
look at how this impacts latency and cost.
Reference : https://aws.amazon.com/blogs/apn/using-amazon-cloudfront-with-multi-region-amazon-s3-origins/
upvoted 1 times
" # WhyIronMan 1 year ago
I'll go with B, D
upvoted 1 times
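The replication half of option B looks roughly like this in boto3; the bucket names and role ARN are hypothetical, and both buckets must already have versioning enabled or the call fails.

    import boto3

    s3 = boto3.client("s3")

    # Replicate the weather maps from the eu-west-1 bucket into a new us-east-1 bucket.
    s3.put_bucket_replication(
        Bucket="weather-maps-eu-west-1",
        ReplicationConfiguration={
            "Role": "arn:aws:iam::111122223333:role/s3-crr-role",
            "Rules": [
                {
                    "ID": "replicate-to-us-east-1",
                    "Status": "Enabled",
                    "Priority": 1,
                    "Filter": {"Prefix": ""},
                    "DeleteMarkerReplication": {"Status": "Disabled"},
                    "Destination": {"Bucket": "arn:aws:s3:::weather-maps-us-east-1"},
                }
            ],
        },
    )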
A company is deploying a public-facing global application on AWS using Amazon CloudFront. The application communicates with an external
system. A solutions architect needs to ensure the data is secured during end-to-end transit and at rest.
Which combination of steps will satisfy these requirements? (Choose three.)
A. Create a public certificate for the required domain in AWS Certificate Manager and deploy it to CloudFront, an Application Load Balancer,
and Amazon EC2 instances.
B. Acquire a public certificate from a third-party vendor and deploy it to CloudFront, an Application Load Balancer, and Amazon EC2 instances.
C. Provision Amazon EBS encrypted volumes using AWS KMS and ensure explicit encryption of data when writing to Amazon EBS.
E. Use SSL or encrypt data while communicating with the external system using a VPN.
F. Communicate with the external system using plaintext and use the VPN to encrypt the data in transit.
https://docs.aws.amazon.com/acm/latest/userguide/acm-services.html
upvoted 17 times
https://docs.aws.amazon.com/acm/latest/userguide/acm-services.html
C is asking for explicit encryption on top of EBS encryption with KMS; I believe it's not needed.
upvoted 1 times
You can use public and private ACM certificates with the following AWS services:
• Elastic Load Balancing – Refer to the Elastic Load Balancing documentation
• Amazon CloudFront – Refer to the CloudFront documentation
• Amazon API Gateway – Refer to the API Gateway documentation
• AWS Elastic Beanstalk – Refer to the AWS Elastic Beanstalk documentation
• AWS CloudFormation – Support is currently limited to public certificates that use email validation. Refer to the AWS CloudFormation
documentation
In addition, you can use private certificates issued with ACM Private CA with EC2 instances, containers, IoT devices, and on your own servers.
https://aws.amazon.com/certificate-manager/faqs/?nc1=h_ls
upvoted 2 times
You can use private certificates issued with ACM Private CA with EC2 instances, containers, and on your own servers. At this time, public ACM
certificates can be used only with specific AWS services. See With which AWS services can I use ACM certificates?
https://aws.amazon.com/certificate-manager/faqs/?nc1=h_ls
upvoted 3 times
https://docs.aws.amazon.com/acm/latest/userguide/acm-services.html
upvoted 3 times
A company provides a centralized Amazon EC2 application hosted in a single shared VPC. The centralized application must be accessible from
client applications running in the VPCs of other business units. The centralized application front end is configured with a Network Load Balancer
(NLB) for scalability.
Up to 10 business unit VPCs will need to be connected to the shared VPC. Some of the business unit VPC CIDR blocks overlap with the shared
VPC, and some overlap with each other. Network connectivity to the centralized application in the shared VPC should be allowed from authorized
business unit VPCs only.
Which network configuration should a solutions architect use to provide connectivity from the client applications in the business unit VPCs to the
centralized application in the shared VPC?
A. Create an AWS Transit Gateway. Attach the shared VPC and the authorized business unit VPCs to the transit gateway. Create a single transit
gateway route table and associate it with all of the attached VPCs. Allow automatic propagation of routes from the attachments into the route
table. Configure VPC routing tables to send traffic to the transit gateway.
B. Create a VPC endpoint service using the centralized application NLB and enable the option to require endpoint acceptance. Create a VPC
endpoint in each of the business unit VPCs using the service name of the endpoint service. Accept authorized endpoint requests from the
endpoint service console.
C. Create a VPC peering connection from each business unit VPC to the shared VPC. Accept the VPC peering connections from the shared
VPC console. Configure VPC routing tables to send traffic to the VPC peering connection.
D. Configure a virtual private gateway for the shared VPC and create customer gateways for each of the authorized business unit VPCs.
Establish a Site-to-Site VPN connection from the business unit VPCs to the shared VPC. Configure VPC routing tables to send traffic to the
VPN connection.
Correct Answer: A
Reference:
https://d1.awsstatic.com/whitepapers/building-a-scalable-and-secure-multi-vpc-aws-network-infrastructure.pdf
NLBs always SNAT the client source IP address to their own IP within your VPC when the incoming request reaches the NLB via a Gateway Load
Balancer endpoint or VPC endpoint (PrivateLink):
https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-target-groups.html#client-ip-preservation
(This can be annoying if you want the NLB's client IP preservation feature!)
upvoted 2 times
B correct.
upvoted 1 times
" # AzureDP900 11 months ago
I'll go with B
upvoted 1 times
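A rough boto3 sketch of option B, with hypothetical ARNs and IDs: the shared-VPC account publishes the NLB as an endpoint service with acceptance required, and each business unit account then creates an interface endpoint to that service. Because PrivateLink only exposes the NLB, the overlapping CIDR blocks never have to be routable between the VPCs.

    import boto3

    ec2 = boto3.client("ec2")

    # In the shared VPC account: expose the NLB as an endpoint service.
    service = ec2.create_vpc_endpoint_service_configuration(
        NetworkLoadBalancerArns=[
            "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/central-app/abc123"
        ],
        AcceptanceRequired=True,
    )
    service_name = service["ServiceConfiguration"]["ServiceName"]

    # In each business unit account (separate credentials in practice): create an interface endpoint.
    ec2.create_vpc_endpoint(
        VpcEndpointType="Interface",
        VpcId="vpc-0abc1234",
        ServiceName=service_name,
        SubnetIds=["subnet-0abc1234"],
        SecurityGroupIds=["sg-0abc1234"],
    )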
A company has an on-premises monitoring solution using a PostgreSQL database for persistence of events. The database is unable to scale due
to heavy ingestion and it frequently runs out of storage.
The company wants to create a hybrid solution and has already set up a VPN connection between its network and AWS. The solution should
include the following attributes:
✑ Managed AWS services to minimize operational complexity.
✑ A buffer that automatically scales to match the throughput of data and requires no ongoing administration.
✑ A visualization tool to create dashboards to observe events in near-real time.
✑ Support for semi-structured JSON data and dynamic schemas.
Which combination of components will enable the company to create a monitoring solution that will satisfy these requirements? (Choose two.)
A. Use Amazon Kinesis Data Firehose to buffer events. Create an AWS Lambda function to process and transform events.
B. Create an Amazon Kinesis data stream to buffer events. Create an AWS Lambda function to process and transform events.
C. Configure an Amazon Aurora PostgreSQL DB cluster to receive events. Use Amazon QuickSight to read from the database and create near-
real-time visualizations and dashboards.
D. Configure Amazon Elasticsearch Service (Amazon ES) to receive events. Use the Kibana endpoint deployed with Amazon ES to create near-
real-time visualizations and dashboards.
E. Configure an Amazon Neptune DB instance to receive events. Use Amazon QuickSight to read from the database and create near-real-time
visualizations and dashboards.
Correct Answer: BC
upvoted 1 times
" # WhyIronMan 1 year ago
I'll go with B, D
upvoted 1 times
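To illustrate the buffering piece (option B), a producer on the monitoring host would push events roughly like this; the stream name and event shape are invented for the example, and a Lambda consumer would transform the JSON and index it for the dashboards.

    import json
    import boto3

    kinesis = boto3.client("kinesis")

    event = {"host": "sensor-42", "severity": "WARN", "message": "disk usage 91%"}
    kinesis.put_record(
        StreamName="monitoring-events",
        Data=json.dumps(event).encode("utf-8"),
        PartitionKey=event["host"],
    )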
A life sciences company is using a combination of open source tools to manage data analysis workflows and Docker containers running on
servers in its on-premises data center to process genomics data. Sequencing data is generated and stored on a local storage area network (SAN),
and then the data is processed.
The research and development teams are running into capacity issues and have decided to re-architect their genomics analysis platform on AWS
to scale based on workload demands and reduce the turnaround time from weeks to days.
The company has a high-speed AWS Direct Connect connection. Sequencers will generate around 200 GB of data for each genome, and individual
jobs can take several hours to process the data with ideal compute capacity. The end result will be stored in Amazon S3. The company is
expecting 10-15 job requests each day.
Which solution meets these requirements?
A. Use regularly scheduled AWS Snowball Edge devices to transfer the sequencing data into AWS. When AWS receives the Snowball Edge
device and the data is loaded into Amazon S3, use S3 events to trigger an AWS Lambda function to process the data.
B. Use AWS Data Pipeline to transfer the sequencing data to Amazon S3. Use S3 events to trigger an Amazon EC2 Auto Scaling group to
launch custom-AMI EC2 instances running the Docker containers to process the data.
C. Use AWS DataSync to transfer the sequencing data to Amazon S3. Use S3 events to trigger an AWS Lambda function that starts an AWS
Step Functions workflow. Store the Docker images in Amazon Elastic Container Registry (Amazon ECR) and trigger AWS Batch to run the
container and process the sequencing data.
D. Use an AWS Storage Gateway file gateway to transfer the sequencing data to Amazon S3. Use S3 events to trigger an AWS Batch job that
executes on Amazon EC2 instances running the Docker containers to process the data.
Correct Answer: A
A company has five physical data centers in specific locations around the world. Each data center has hundreds of physical servers with a mix of
Windows and
Linux-based applications and database services. Each data center also has an AWS Direct Connect connection of 10 Gbps to AWS with a
company-approved
VPN solution to ensure that data transfer is secure. The company needs to shut down the existing data centers as quickly as possible and migrate
the servers and applications to AWS.
Which solution meets these requirements?
A. Install the AWS Server Migration Service (AWS SMS) connector onto each physical machine. Use the AWS Management Console to select
the servers from the server catalog, and start the replication. Once the replication is complete, launch the Amazon EC2 instances created by
the service.
B. Install the AWS DataSync agent onto each physical machine. Use the AWS Management Console to configure the destination to be an AMI,
and start the replication. Once the replication is complete, launch the Amazon EC2 instances created by the service.
C. Install the CloudEndure Migration agent onto each physical machine. Create a migration blueprint, and start the replication. Once the
replication is complete, launch the Amazon EC2 instances in cutover mode.
D. Install the AWS Application Discovery Service agent onto each physical machine. Use the AWS Migration Hub import option to start the
replication. Once the replication is complete, launch the Amazon EC2 instances created by the service.
Correct Answer: A
https://aws.amazon.com/application-migration-service/
upvoted 17 times
aws.amazon.com/blogs/architecture/field-notes-choosing-a-rehost-migration-tool-cloudendure-or-aws-sms/
upvoted 1 times
AWS still does not recommend CloudEndure currently. Please check their website.
"AWS Application Migration Service (MGN) is the primary migration service recommended for lift-and-shift migrations to the AWS Cloud.
Customers who currently use CloudEndure Migration or AWS Server Migration Service (AWS SMS) are encouraged to switch to MGN for future
migrations."
upvoted 1 times
https://console.cloudendure.com/#/register/register
upvoted 1 times
" # WhyIronMan 1 year ago
I'll go with C
upvoted 2 times
A security engineer determined that an existing application retrieves credentials to an Amazon RDS for MySQL database from an encrypted file in
Amazon S3. For the next version of the application, the security engineer wants to implement the following application design changes to improve
security:
✑ The database must use strong, randomly generated passwords stored in a secure AWS managed service.
✑ The application resources must be deployed through AWS CloudFormation.
✑ The application must rotate credentials for the database every 90 days.
A solutions architect will generate a CloudFormation template to deploy the application.
Which resources specified in the CloudFormation template will meet the security engineer's requirements with the LEAST amount of operational
overhead?
A. Generate the database password as a secret resource using AWS Secrets Manager. Create an AWS Lambda function resource to rotate the
database password. Specify a Secrets Manager RotationSchedule resource to rotate the database password every 90 days.
B. Generate the database password as a SecureString parameter type using AWS Systems Manager Parameter Store. Create an AWS Lambda
function resource to rotate the database password. Specify a Parameter Store RotationSchedule resource to rotate the database password
every 90 days.
C. Generate the database password as a secret resource using AWS Secrets Manager. Create an AWS Lambda function resource to rotate the
database password. Create an Amazon EventBridge scheduled rule resource to trigger the Lambda function password rotation every 90 days.
D. Generate the database password as a SecureString parameter type using AWS Systems Manager Parameter Store. Specify an AWS AppSync
DataSource resource to automatically rotate the database password every 90 days.
Correct Answer: C
https://aws.amazon.com/secrets-manager/
upvoted 1 times
https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotating-secrets.html
upvoted 3 times
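The two CloudFormation resources in option A (AWS::SecretsManager::Secret with GenerateSecretString, plus AWS::SecretsManager::RotationSchedule) boil down to roughly the following API behaviour. This boto3 version is only to show what the template configures; the secret name and rotation Lambda ARN are placeholders.

    import json
    import boto3

    sm = boto3.client("secretsmanager")

    # Generate a strong random password and store it together with the user name.
    password = sm.get_random_password(PasswordLength=32, ExcludeCharacters='"@/\\')["RandomPassword"]
    sm.create_secret(
        Name="prod/mysql/admin",
        SecretString=json.dumps({"username": "admin", "password": password}),
    )

    # Attach the rotation Lambda and rotate every 90 days, mirroring the RotationSchedule resource.
    sm.rotate_secret(
        SecretId="prod/mysql/admin",
        RotationLambdaARN="arn:aws:lambda:us-east-1:111122223333:function:rds-mysql-rotation",
        RotationRules={"AutomaticallyAfterDays": 90},
    )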
A company has a three-tier application running on AWS with a web server, an application server, and an Amazon RDS MySQL DB instance. A
solutions architect is designing a disaster recovery (DR) solution with an RPO of 5 minutes.
Which solution will meet the company's requirements?
A. Configure AWS Backup to perform cross-Region backups of all servers every 5 minutes. Reprovision the three tiers in the DR Region from
the backups using AWS CloudFormation in the event of a disaster.
B. Maintain another running copy of the web and application server stack in the DR Region using AWS CloudFormation drift detection.
Configure cross-Region snapshots of the DB instance to the DR Region every 5 minutes. In the event of a disaster, restore the DB instance
using the snapshot in the DR Region.
C. Use Amazon EC2 Image Builder to create and copy AMIs of the web and application server to both the primary and DR Regions. Create a
cross-Region read replica of the DB instance in the DR Region. In the event of a disaster, promote the read replica to become the master and
reprovision the servers with AWS CloudFormation using the AMIs.
D. Create AMIs of the web and application servers in the DR Region. Use scheduled AWS Glue jobs to synchronize the DB instance with
another DB instance in the DR Region. In the event of a disaster, switch to the DB instance in the DR Region and reprovision the servers with
AWS CloudFormation using the AMIs.
Correct Answer: C
C and D don't really make sense to me. EC2 Image Builder is for creating and deploying new AMIs. Glue is for data integration. With B I am not
sure how drift detection would help, as that would just allow a rollback and is not geared towards backup. Also, A seemed to be the only one that
addressed backing up the web and app servers along with RDS.
upvoted 10 times
On C, the AMIs and the database are already in place at the DR site. You just need to activate the failover to make DR become production. All this can
happen in 5 minutes... Thus my pick is C.
upvoted 2 times
Now, the disaster strategy question is cold DR (backup and restore) vs hot DR (active-active);
due to the aggressive RPO => replication (hot DR).
Article on DR for RDS (though it covers SQL Server instead of MySQL, the concept remains the same) - To meet very aggressive RPO and RTO
requirements, your DR strategy needs to consider continuous replication capability.
https://aws.amazon.com/blogs/database/cross-region-disaster-recovery-of-amazon-rds-for-sql-server/
upvoted 2 times
Selected Answer: A
https://docs.aws.amazon.com/aws-backup/latest/devguide/cross-region-backup.html
A
upvoted 1 times
A company wants to migrate its corporate data center from on premises to the AWS Cloud. The data center includes physical servers and VMs
that use VMware and Hyper-V. An administrator needs to select the correct services to collect data for the initial migration discovery process. The
data format should be supported by AWS Migration Hub. The company also needs the ability to generate reports from the data.
Which solution meets these requirements?
A. Use the AWS Agentless Discovery Connector for data collection on physical servers and all VMs. Store the collected data in Amazon S3.
Query the data with S3 Select. Generate reports by using Kibana hosted on Amazon EC2.
B. Use the AWS Application Discovery Service agent for data collection on physical servers and all VMs. Store the collected data in Amazon
Elastic File System (Amazon EFS). Query the data and generate reports with Amazon Athena.
C. Use the AWS Application Discovery Service agent for data collection on physical servers and Hyper-V. Use the AWS Agentless Discovery
Connector for data collection on VMware. Store the collected data in Amazon S3. Query the data with Amazon Athena. Generate reports by
using Amazon QuickSight.
D. Use the AWS Systems Manager agent for data collection on physical servers. Use the AWS Agentless Discovery Connector for data
collection on all VMs. Store, query, and generate reports from the collected data by using Amazon Redshift.
Correct Answer: C
https://docs.aws.amazon.com/application-discovery/latest/userguide/discovery-agent.html
https://docs.aws.amazon.com/application-discovery/latest/userguide/discovery-connector.html
upvoted 1 times
it's C
upvoted 2 times
A company is using Amazon Aurora MySQL for a customer relationship management (CRM) application. The application requires frequent
maintenance on the database and the Amazon EC2 instances on which the application runs. For AWS Management Console access, the system
administrators authenticate against
AWS Identity and Access Management (IAM) using an internal identity provider. For database access, each system administrator has a user name
and password that have previously been configured within the database.
A recent security audit revealed that the database passwords are not frequently rotated. The company wants to replace the passwords with
temporary credentials using the company's existing AWS access controls.
Which set of options will meet the company's requirements?
A. Create a new AWS Systems Manager Parameter Store entry for each database password. Enable parameter expiration to invoke an AWS
Lambda function to perform password rotation by updating the parameter value. Create an IAM policy allowing each system administrator to
retrieve their current password from the Parameter Store. Use the AWS CLI to retrieve credentials when connecting to the database.
B. Create a new AWS Secrets Manager entry for each database password. Configure password rotation for each secret using an AWS Lambda
function in the same VPC as the database cluster. Create an IAM policy allowing each system administrator to retrieve their current password.
Use the AWS CLI to retrieve credentials when connecting to the database.
C. Enable IAM database authentication on the database. Attach an IAM policy to each system administrator's role to map the role to the
database user name. Install the Amazon Aurora SSL certificate bundle to the system administrators' certificate trust store. Use the AWS CLI to
generate an authentication token used when connecting to the database.
D. Enable IAM database authentication on the database. Configure the database to use the IAM identity provider to map the administrator
roles to the database user. Install the Amazon Aurora SSL certificate bundle to the system administrators' certificate trust store. Use the AWS
CLI to generate an authentication token used when connecting to the database.
Correct Answer: C
Reference:
https://aws.amazon.com/premiumsupport/knowledge-center/users-connect-rds-iam/
C and D are in the fight... from a technical perspective D would be better BUT I could not find any doc that explains how to leverage an IdP with
IAM DB Auth, so I would go for C as it follows the current process to grant an IAM user DB rights.
CCC!
upvoted 1 times
Option A means that they would still rely on storing their password rather than using temporary credentials.
upvoted 1 times
" # Suresh108 1 year ago
Going with CCCCC. Probably C or D; eliminated D due to the identity provider.
upvoted 1 times
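For options C/D, the step that replaces the stored password is the auth-token call below; the endpoint, port, and user name are hypothetical, the DB user must be created with the IAM authentication plugin, and the token is only valid for roughly 15 minutes.

    import boto3

    rds = boto3.client("rds")

    # Generates a short-lived, signed token locally; pass it as the password over an SSL connection.
    token = rds.generate_db_auth_token(
        DBHostname="aurora-crm.cluster-abc123xyz.us-east-1.rds.amazonaws.com",
        Port=3306,
        DBUsername="admin_jane",
    )
    print(token)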
A company's AWS architecture currently uses access keys and secret access keys stored on each instance to access AWS services. Database
credentials are hard-coded on each instance. SSH keys for command-line remote access are stored in a secured Amazon S3 bucket. The company
has asked its solutions architect to improve the security posture of the architecture without adding operational complexity.
Which combination of steps should the solutions architect take to accomplish this? (Choose three.)
B. Use AWS Secrets Manager to store access keys and secret access keys
D. Use a secure fleet of Amazon EC2 bastion hosts for remote access
upvoted 1 times
" # Kopa 1 year ago
A,C,F no doubt
upvoted 1 times
A company wants to change its internal cloud billing strategy for each of its business units. Currently, the cloud governance team shares reports
for overall cloud spending with the head of each business unit. The company uses AWS Organizations to manage the separate AWS accounts for
each business unit. The existing tagging standard in Organizations includes the application, environment, and owner. The cloud governance team
wants a centralized solution so each business unit receives monthly reports on its cloud spending. The solution should also send notifications for
any cloud spending that exceeds a set threshold.
Which solution is the MOST cost-effective way to meet these requirements?
A. Configure AWS Budgets in each account and configure budget alerts that are grouped by application, environment, and owner. Add each
business unit to an Amazon SNS topic for each alert. Use Cost Explorer in each account to create monthly reports for each business unit.
B. Configure AWS Budgets in the organization's master account and configure budget alerts that are grouped by application, environment, and
owner. Add each business unit to an Amazon SNS topic for each alert. Use Cost Explorer in the organization's master account to create
monthly reports for each business unit.
C. Configure AWS Budgets in each account and configure budget alerts that are grouped by application, environment, and owner. Add each
business unit to an Amazon SNS topic for each alert. Use the AWS Billing and Cost Management dashboard in each account to create monthly
reports for each business unit.
D. Enable AWS Cost and Usage Reports in the organization's master account and configure reports grouped by application, environment, and
owner. Create an AWS Lambda function that processes AWS Cost and Usage Reports, sends budget alerts, and sends monthly reports to each
business unit's email list.
Correct Answer: B
upvoted 1 times
" # WhyIronMan 1 year ago
I'll go with B
upvoted 1 times
A company is configuring connectivity to a multi-account AWS environment to support application workloads that serve users in a single
geographic region. The workloads depend on a highly available, on-premises legacy system deployed across two locations. It is critical for the
AWS workloads to maintain connectivity to the legacy system, and a minimum of 5 Gbps of bandwidth is required. All application workloads within
AWS must have connectivity with one another.
Which solution will meet these requirements?
A. Configure multiple AWS Direct Connect (DX) 10 Gbps dedicated connections from a DX partner for each on-premises location. Create
private virtual interfaces on each connection for each AWS account VPC. Associate the private virtual interface with a virtual private gateway
attached to each VPC.
B. Configure multiple AWS Direct Connect (DX) 10 Gbps dedicated connections from two DX partners for each on-premises location. Create
and attach a virtual private gateway for each AWS account VPC. Create a DX gateway in a central network account and associate it with the
virtual private gateways. Create a public virtual interface on each DX connection and associate the interface with the DX gateway.
C. Configure multiple AWS Direct Connect (DX) 10 Gbps dedicated connections from two DX partners for each on-premises location. Create a
transit gateway and a DX gateway in a central network account. Create a transit virtual interface for each DX interface and associate them
with the DX gateway. Create a gateway association between the DX gateway and the transit gateway.
D. Configure multiple AWS Direct Connect (DX) 10 Gbps dedicated connections from a DX partner for each on-premises location. Create and
attach a virtual private gateway for each AWS account VPC. Create a transit gateway in a central network account and associate it with the
virtual private gateways. Create a transit virtual interface on each DX connection and attach the interface to the transit gateway.
Correct Answer: B
An association between the Direct Connect gateway and the transit gateway.
A financial company needs to create a separate AWS account for a new digital wallet application. The company uses AWS Organizations to
manage its accounts.
A solutions architect uses the IAM user Support1 from the master account to create a new member account with finance1@example.com as the
email address.
What should the solutions architect do to create IAM users in the new member account?
A. Sign in to the AWS Management Console with AWS account root user credentials by using the 64-character password from the initial AWS
Organizations email sent to finance1@example.com. Set up the IAM users as required.
B. From the master account, switch roles to assume the OrganizationAccountAccessRole role with the account ID of the new member
account. Set up the IAM users as required.
C. Go to the AWS Management Console sign-in page. Choose "Sign in using root account credentials." Sign in by using the email address
finance1@example.com and the master account's root password. Set up the IAM users as required.
D. Go to the AWS Management Console sign-in page. Sign in by using the account ID of the new member account and the Support1 IAM
credentials. Set up the IAM users as required.
Correct Answer: A
Reference:
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_create.html
upvoted 2 times
https://aws.amazon.com/premiumsupport/knowledge-center/organizations-member-account-access/
upvoted 2 times
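The knowledge-center article linked above describes the role switch that option B relies on; a hedged boto3 sketch of that flow follows, with the member account ID and user name as placeholders.

import boto3

# Minimal sketch: from the management (master) account, assume the
# OrganizationAccountAccessRole that Organizations creates in a new member
# account, then use the temporary credentials to create IAM users there.
sts = boto3.client("sts")

creds = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/OrganizationAccountAccessRole",  # placeholder account ID
    RoleSessionName="create-iam-users",
)["Credentials"]

member_iam = boto3.client(
    "iam",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)

member_iam.create_user(UserName="finance-admin")  # placeholder user name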
A company is designing a data processing platform to process a large number of files in an Amazon S3 bucket and store the results in Amazon
DynamoDB.
These files will be processed once and must be retained for 1 year. The company wants to ensure that the original files and resulting data are
highly available in multiple AWS Regions.
Which solution will meet these requirements?
A. Create an S3 CreateObject event notification to copy the file to Amazon Elastic Block Store (Amazon EBS). Use AWS DataSync to sync the
files between EBS volumes in multiple Regions. Use an Amazon EC2 Auto Scaling group in multiple Regions to attach the EBS volumes.
Process the files and store the results in a DynamoDB global table in multiple Regions. Configure the S3 bucket with an S3 Lifecycle policy to
move the files to S3 Glacier after 1 year.
B. Create an S3 CreateObject event notification to copy the file to Amazon Elastic File System (Amazon EFS). Use AWS DataSync to sync the
files between EFS volumes in multiple Regions. Use an AWS Lambda function to process the EFS files and store the results in a DynamoDB
global table in multiple Regions. Configure the S3 buckets with an S3 Lifecycle policy to move the files to S3 Glacier after 1 year.
C. Copy the files to an S3 bucket in another Region by using cross-Region replication. Create an S3 CreateObject event notification on the
original bucket to push S3 file paths into Amazon EventBridge (Amazon CloudWatch Events). Use an AWS Lambda function to poll EventBridge
(CloudWatch Events) to process each file and store the results in a DynamoDB table in each Region. Configure both S3 buckets to use the S3
Standard-Infrequent Access (S3 Standard-IA) storage class and an S3 Lifecycle policy to delete the files after 1 year.
D. Copy the files to an S3 bucket in another Region by using cross-Region replication. Create an S3 CreateObject event notification on the
original bucket to execute an AWS Lambda function to process each file and store the results in a DynamoDB global table in multiple Regions.
Configure both S3 buckets to use the S3 Standard-Infrequent Access (S3 Standard-IA) storage class and an S3 Lifecycle policy to delete the
files after 1 year.
Correct Answer: A
https://aws.amazon.com/blogs/storage/replicating-existing-objects-between-s3-buckets/
upvoted 1 times
A company is running an Apache Hadoop cluster on Amazon EC2 instances. The Hadoop cluster stores approximately 100 TB of data for weekly
operational reports and allows occasional access for data scientists to retrieve data. The company needs to reduce the cost and operational
complexity for storing and serving this data.
Which solution meets these requirements in the MOST cost-effective manner?
A. Move the Hadoop cluster from EC2 instances to Amazon EMR. Allow data access patterns to remain the same.
B. Write a script that resizes the EC2 instances to a smaller instance type during downtime and resizes the instances to a larger instance type
before the reports are created.
C. Move the data to Amazon S3 and use Amazon Athena to query the data for reports. Allow the data scientists to access the data directly in
Amazon S3.
D. Migrate the data to Amazon DynamoDB and modify the reports to fetch data from DynamoDB. Allow the data scientists to access the data
directly in DynamoDB.
Correct Answer: C
https://blogs.perficient.com/2016/05/19/two-choices-1-amazon-emr-or-2-hadoop-on-ec2/
upvoted 1 times
A company is building a sensor data collection pipeline in which thousands of sensors write data to an Amazon Simple Queue Service (Amazon
SQS) queue every minute. The queue is processed by an AWS Lambda function that extracts a standard set of metrics from the sensor data. The
company wants to send the data to Amazon CloudWatch. The solution should allow for viewing individual and aggregate sensor metrics and
interactively querying the sensor log data using
CloudWatch Logs Insights.
What is the MOST cost-effective solution that meets these requirements?
A. Write the processed data to CloudWatch Logs in the CloudWatch embedded metric format.
B. Write the processed data to CloudWatch Logs. Then write the data to CloudWatch by using the PutMetricData API call.
C. Write the processed data to CloudWatch Logs in a structured format. Create a CloudWatch metric filter to parse the logs and publish the
metrics to CloudWatch with dimensions to uniquely identify a sensor.
D. Configure the CloudWatch Logs agent for AWS Lambda. Output the metrics for each sensor in statsd format with tags to uniquely identify a
sensor. Write the processed data to CloudWatch Logs.
Correct Answer: C
https://aws.amazon.com/about-aws/whats-new/2019/11/amazon-cloudwatch-launches-embedded-metric-format/
upvoted 1 times
A.
https://aws.amazon.com/about-aws/whats-new/2019/11/amazon-cloudwatch-launches-embedded-metric-format/
upvoted 3 times
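To make the embedded metric format (option A, favored in the comment above) concrete, here is a minimal sketch of what the Lambda function could print for each sensor reading; the namespace, dimension, and metric names are illustrative assumptions.

import json
import time

# Minimal sketch: one structured log line per reading. CloudWatch extracts
# the metric from the _aws block while the full line remains queryable with
# CloudWatch Logs Insights.
def emit_sensor_metric(sensor_id: str, temperature: float) -> None:
    log_line = {
        "_aws": {
            "Timestamp": int(time.time() * 1000),
            "CloudWatchMetrics": [
                {
                    "Namespace": "SensorPipeline",        # assumed namespace
                    "Dimensions": [["SensorId"]],
                    "Metrics": [{"Name": "Temperature", "Unit": "None"}],
                }
            ],
        },
        "SensorId": sensor_id,
        "Temperature": temperature,
    }
    print(json.dumps(log_line))  # stdout from Lambda is delivered to CloudWatch Logs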
A car rental company has built a serverless REST API to provide data to its mobile app. The app consists of an Amazon API Gateway API with a
Regional endpoint, AWS Lambda functions, and an Amazon Aurora MySQL Serverless DB cluster. The company recently opened the API to mobile
apps of partners. A significant increase in the number of requests resulted, causing sporadic database memory errors. Analysis of the API traffic
indicates that clients are making multiple HTTP GET requests for the same queries in a short period of time. Traffic is concentrated during
business hours, with spikes around holidays and other events.
The company needs to improve its ability to support the additional usage while minimizing the increase in costs associated with the solution.
Which strategy meets these requirements?
A. Convert the API Gateway Regional endpoint to an edge-optimized endpoint. Enable caching in the production stage.
B. Implement an Amazon ElastiCache for Redis cache to store the results of the database calls. Modify the Lambda functions to use the
cache.
C. Modify the Aurora Serverless DB cluster configuration to increase the maximum amount of available memory.
D. Enable throttling in the API Gateway production stage. Set the rate and burst values to limit the incoming calls.
Correct Answer: A
Reference:
https://aws.amazon.com/getting-started/projects/build-serverless-web-app-lambda-apigateway-s3-dynamodb-cognito/module-4/
You can enable API caching in Amazon API Gateway to cache your endpoint's responses. With caching, you can reduce the number of calls
made to your endpoint and also improve the latency of requests to your API.
When you enable caching for a stage, API Gateway caches responses from your endpoint for a specified time-to-live (TTL) period, in seconds.
API Gateway then responds to the request by looking up the endpoint response from the cache instead of making a request to your endpoint.
The default TTL value for API caching is 300 seconds. The maximum TTL value is 3600 seconds. TTL=0 means caching is disabled.
upvoted 1 times
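As a hedged sketch of the caching step described above, stage-level caching and a method TTL can be enabled with a single update_stage call; the REST API ID, stage name, and cache size below are placeholders.

import boto3

# Minimal sketch, assuming an existing REST API with a "prod" stage.
apigw = boto3.client("apigateway")

apigw.update_stage(
    restApiId="a1b2c3d4e5",   # placeholder API ID
    stageName="prod",
    patchOperations=[
        {"op": "replace", "path": "/cacheClusterEnabled", "value": "true"},
        {"op": "replace", "path": "/cacheClusterSize", "value": "0.5"},
        # Apply a 300-second TTL to all methods in the stage.
        {"op": "replace", "path": "/*/*/caching/ttlInSeconds", "value": "300"},
    ],
)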
A company has application services that have been containerized and deployed on multiple Amazon EC2 instances with public IPs. An Apache
Kafka cluster has been deployed to the EC2 instances. A PostgreSQL database has been migrated to Amazon RDS for PostgreSQL. The company
expects a significant increase of orders on its platform when a new version of its flagship product is released.
What changes to the current architecture will reduce operational overhead and support the product release?
A. Create an EC2 Auto Scaling group behind an Application Load Balancer. Create additional read replicas for the DB instance. Create Amazon
Kinesis data streams and configure the application services to use the data streams. Store and serve static content directly from Amazon S3.
B. Create an EC2 Auto Scaling group behind an Application Load Balancer. Deploy the DB instance in Multi-AZ mode and enable storage auto
scaling. Create Amazon Kinesis data streams and configure the application services to use the data streams. Store and serve static content
directly from Amazon S3.
C. Deploy the application on a Kubernetes cluster created on the EC2 instances behind an Application Load Balancer. Deploy the DB instance
in Multi-AZ mode and enable storage auto scaling. Create an Amazon Managed Streaming for Apache Kafka cluster and configure the
application services to use the cluster. Store static content in Amazon S3 behind an Amazon CloudFront distribution.
D. Deploy the application on Amazon Elastic Kubernetes Service (Amazon EKS) with AWS Fargate and enable auto scaling behind an
Application Load Balancer. Create additional read replicas for the DB instance. Create an Amazon Managed Streaming for Apache Kafka
cluster and configure the application services to use the cluster. Store static content in Amazon S3 behind an Amazon CloudFront distribution.
Correct Answer: B
A company recently completed a large-scale migration to AWS. Development teams that support various business units have their own accounts
in AWS
Organizations. A central cloud team is responsible for controlling which services and resources can be accessed, and for creating operational
strategies for all teams within the company. Some teams are approaching their account service quotas. The cloud team needs to create an
automated and operationally efficient solution to proactively monitor service quotas. Monitoring should occur every 15 minutes and send alerts
when a team exceeds 80% utilization.
Which solution will meet these requirements?
A. Create a scheduled AWS Config rule to trigger an AWS Lambda function to call the GetServiceQuota API. If any service utilization is above
80%, publish a message to an Amazon Simple Notification Service (Amazon SNS) topic to alert the cloud team. Create an AWS
CloudFormation template and deploy the necessary resources to each account.
B. Create an Amazon EventBridge (Amazon CloudWatch Events) rule that triggers an AWS Lambda function to refresh the AWS Trusted Advisor
service limits checks and retrieve the most current utilization and service limit data. If the current utilization is above 80%, publish a message
to an Amazon Simple Notification Service (Amazon SNS) topic to alert the cloud team. Create AWS CloudFormation StackSets that deploy the
necessary resources to all Organizations accounts.
C. Create an Amazon CloudWatch alarm that triggers an AWS Lambda function to call the Amazon CloudWatch GetInsightRuleReport API to
retrieve the most current utilization and service limit data. If the current utilization is above 80%, publish an Amazon Simple Email Service
(Amazon SES) notification to alert the cloud team. Create AWS CloudFormation StackSets that deploy the necessary resources to all
Organizations accounts.
D. Create an Amazon EventBridge (Amazon CloudWatch Events) rule that triggers an AWS Lambda function to refresh the AWS Trusted Advisor
service limits checks and retrieve the most current utilization and service limit data. If the current utilization is above 80%, use Amazon
Pinpoint to send an alert to the cloud team. Create an AWS CloudFormation template and deploy the necessary resources to each account.
Correct Answer: A
Reference:
https://aws.amazon.com/solutions/implementations/limit-monitor/
Selected Answer: B
i agree it's b
upvoted 1 times
" # cldy 10 months, 1 week ago
B is correct.
upvoted 1 times
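A rough sketch of option B's Lambda logic follows. The check ID is the commonly cited ID for the Trusted Advisor "Service Limits" check and the metadata column order is an assumption, so both should be verified; the SNS topic ARN is a placeholder, and the Support API requires a Business or Enterprise support plan.

import boto3

support = boto3.client("support", region_name="us-east-1")
sns = boto3.client("sns")

CHECK_ID = "eW7HH0l7J9"  # assumed ID of the Service Limits check
TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:quota-alerts"  # placeholder

def handler(event, context):
    # Refresh the check, then inspect each flagged resource's utilization.
    support.refresh_trusted_advisor_check(checkId=CHECK_ID)
    result = support.describe_trusted_advisor_check_result(
        checkId=CHECK_ID, language="en"
    )
    for resource in result["result"].get("flaggedResources", []):
        # Metadata layout assumed: region, service, limit name, limit, usage.
        region, service, limit_name, limit, usage = resource["metadata"][:5]
        if limit and usage and float(usage) / float(limit) > 0.8:
            sns.publish(
                TopicArn=TOPIC_ARN,
                Subject=f"Service quota above 80%: {service} {limit_name}",
                Message=f"{region}: {usage} of {limit} used",
            )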
Amazon Pinpoint is a flexible and scalable outbound and inbound marketing communications service. You can connect with customers over
channels like email, SMS, push, voice or in-app messaging. Amazon Pinpoint is easy to set up, easy to use, and is flexible for all marketing
communication scenarios. Segment your campaign audience for the right customer and personalize your messages with the right content.
Delivery and campaign metrics in Amazon Pinpoint measure the success of your communications. Amazon Pinpoint can grow with you and
scales globally to billions of messages per day across channels.
upvoted 1 times
An AWS customer has a web application that runs on premises. The web application fetches data from a third-party API that is behind a firewall.
The third party accepts only one public CIDR block in each client's allow list.
The customer wants to migrate their web application to the AWS Cloud. The application will be hosted on a set of Amazon EC2 instances behind
an Application
Load Balancer (ALB) in a VPC. The ALB is located in public subnets. The EC2 instances are located in private subnets. NAT gateways provide
internet access to the private subnets.
How should a solutions architect ensure that the web application can continue to call the third-party API after the migration?
A. Associate a block of customer-owned public IP addresses to the VPC. Enable public IP addressing for public subnets in the VPC.
B. Register a block of customer-owned public IP addresses in the AWS account. Create Elastic IP addresses from the address block and
assign them to the NAT gateways in the VPC.
C. Create Elastic IP addresses from the block of customer-owned IP addresses. Assign the static Elastic IP addresses to the ALB.
D. Register a block of customer-owned public IP addresses in the AWS account. Set up AWS Global Accelerator to use Elastic IP addresses
from the address block. Set the ALB as the accelerator endpoint.
Correct Answer: D
B obviously.
upvoted 1 times
" # WhyIronMan 1 year ago
I'll go with B
upvoted 4 times
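Option B's mechanics, allocating Elastic IPs from the customer-owned (BYOIP) block and attaching them to the NAT gateways, can be sketched as follows; the pool ID and subnet ID are placeholders.

import boto3

# Minimal sketch: outbound traffic from the private subnets then leaves
# through addresses inside the CIDR block the third party has allow-listed.
ec2 = boto3.client("ec2")

allocation = ec2.allocate_address(
    Domain="vpc",
    PublicIpv4Pool="ipv4pool-ec2-0123456789abcdef0",  # placeholder BYOIP pool
)

nat_gateway = ec2.create_nat_gateway(
    SubnetId="subnet-0123456789abcdef0",        # placeholder public subnet
    AllocationId=allocation["AllocationId"],
)
print(nat_gateway["NatGateway"]["NatGatewayId"])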
A company is using AWS Organizations to manage multiple AWS accounts. For security purposes, the company requires the creation of an
Amazon Simple
Notification Service (Amazon SNS) topic that enables integration with a third-party alerting system in all the Organizations member accounts.
A solutions architect used an AWS CloudFormation template to create the SNS topic and stack sets to automate the deployment of
CloudFormation stacks.
Trusted access has been enabled in Organizations.
What should the solutions architect do to deploy the CloudFormation StackSets in all AWS accounts?
A. Create a stack set in the Organizations member accounts. Use service-managed permissions. Set deployment options to deploy to an
organization. Use CloudFormation StackSets drift detection.
B. Create stacks in the Organizations member accounts. Use self-service permissions. Set deployment options to deploy to an organization.
Enable the CloudFormation StackSets automatic deployment.
C. Create a stack set in the Organizations master account. Use service-managed permissions. Set deployment options to deploy to the
organization. Enable CloudFormation StackSets automatic deployment.
D. Create stacks in the Organizations master account. Use service-managed permissions. Set deployment options to deploy to the
organization. Enable CloudFormation StackSets drift detection.
Correct Answer: C
Reference:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-orgs-manage-auto-deployment.html
CCC
---
upvoted 1 times
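A hedged sketch of answer C, a service-managed stack set with automatic deployment created from the management account, is shown below; the template URL and organizational unit ID are placeholders.

import boto3

cfn = boto3.client("cloudformation")

# Minimal sketch: trusted access is already enabled, so the stack set can use
# service-managed permissions and deploy automatically to new member accounts.
cfn.create_stack_set(
    StackSetName="org-alerting-sns-topic",
    TemplateURL="https://s3.amazonaws.com/example-bucket/sns-topic.yaml",  # placeholder
    PermissionModel="SERVICE_MANAGED",
    AutoDeployment={"Enabled": True, "RetainStacksOnAccountRemoval": False},
)

cfn.create_stack_instances(
    StackSetName="org-alerting-sns-topic",
    DeploymentTargets={"OrganizationalUnitIds": ["ou-abcd-11111111"]},  # placeholder OU
    Regions=["us-east-1"],
)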
A company wants to provide desktop as a service (DaaS) to a number of employees using Amazon WorkSpaces. WorkSpaces will need to access
files and services hosted on premises with authorization based on the company's Active Directory. Network connectivity will be provided through
an existing AWS Direct
Connect connection.
The solution has the following requirements:
✑ Credentials from Active Directory should be used to access on-premises files and services.
✑ Credentials from Active Directory should not be stored outside the company.
✑ End users should have single sign-on (SSO) to on-premises files and services once connected to WorkSpaces.
Which strategy should the solutions architect use for end user authentication?
A. Create an AWS Directory Service for Microsoft Active Directory (AWS Managed Microsoft AD) directory within the WorkSpaces VPC. Use the
Active Directory Migration Tool (ADMT) with the Password Export Server to copy users from the on-premises Active Directory to AWS Managed
Microsoft AD. Set up a one-way trust allowing users from AWS Managed Microsoft AD to access resources in the on-premises Active
Directory. Use AWS Managed Microsoft AD as the directory for WorkSpaces.
B. Create a service account in the on-premises Active Directory with the required permissions. Create an AD Connector in AWS Directory
Service to be deployed on premises using the service account to communicate with the on-premises Active Directory. Ensure the required TCP
ports are open from the WorkSpaces VPC to the on-premises AD Connector. Use the AD Connector as the directory for WorkSpaces.
C. Create a service account in the on-premises Active Directory with the required permissions. Create an AD Connector in AWS Directory
Service within the WorkSpaces VPC using the service account to communicate with the on-premises Active Directory. Use the AD Connector
as the directory for WorkSpaces.
D. Create an AWS Directory Service for Microsoft Active Directory (AWS Managed Microsoft AD) directory in the AWS Directory Service within
the WorkSpaces VPC. Set up a one-way trust allowing users from the on-premises Active Directory to access resources in the AWS Managed
Microsoft AD. Use AWS Managed Microsoft AD as the directory for WorkSpaces. Create an identity provider with AWS Identity and Access
Management (IAM) from an on-premises ADFS server. Allow users from this identity provider to assume a role with a policy allowing them to
run WorkSpaces.
Correct Answer: D
Reference:
https://docs.aws.amazon.com/directoryservice/latest/admin-guide/directory_microsoft_ad.html
First clue: "AD Connector is a directory gateway with which you can redirect directory requests to your on-premises Microsoft Active Directory
without caching any information in the cloud. " (https://docs.aws.amazon.com/directoryservice/latest/admin-guide/directory_ad_connector.html)
which includes pretty much everything needed in the question
Other clue: one-way trust do not work with AWS SSO (https://docs.aws.amazon.com/singlesignon/latest/userguide/connectonpremad.html) that
would eliminate D.
upvoted 7 times
https://docs.aws.amazon.com/directoryservice/latest/admin-guide/directory_ad_connector.html
upvoted 2 times
C.
Not D. It says credentials shouldn't be stored outside the company.
upvoted 2 times
" # asfsdfsdf 3 months, 4 weeks ago
I will go with C ...
Caching is not being done on cloud...
Also it requires two-way trust in order to implement D
https://docs.aws.amazon.com/workspaces/latest/adminguide/launch-workspace-trusted-domain.html
taking look at the documentation creating it with one-way trust is done using AD connector:
https://docs.aws.amazon.com/workspaces/latest/adminguide/launch-workspace-ad-connector.html
And
https://d1.awsstatic.com/Projects/deploy-amazon-workspaces-one-way-trust-with-aws-directory-service.pdf
upvoted 1 times
AD Connector simply connects your existing on-premises Active Directory to AWS. AD Connector is a directory gateway with which you can
redirect directory requests to your on-premises Microsoft Active Directory "without caching any information in the cloud. "
https://aws.amazon.com/single-sign-on/faqs/
upvoted 3 times
A company requires that all internal application connectivity use private IP addresses. To facilitate this policy, a solutions architect has created
interface endpoints to connect to AWS public services. Upon testing, the solutions architect notices that the service names are resolving to public
IP addresses, and that internal services cannot connect to the interface endpoints.
Which step should the solutions architect take to resolve this issue?
A. Update the subnet route table with a route to the interface endpoint
C. Configure the security group on the interface endpoint to allow connectivity to the AWS services
D. Configure an Amazon Route 53 private hosted zone with a conditional forwarder for the internal application
Correct Answer: B
You don't need to check private DNS because it’s turned on by default while you need to configure SG.
https://docs.aws.amazon.com/vpc/latest/privatelink/vpce-interface.html
upvoted 1 times
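For reference, both settings discussed here live on the endpoint itself; a hedged boto3 sketch of creating an interface endpoint with private DNS enabled and an attached security group follows (all IDs and the service name are placeholders).

import boto3

ec2 = boto3.client("ec2")

# Minimal sketch: private DNS makes the public service name resolve to the
# endpoint's private IPs, and the security group must allow inbound HTTPS
# from the internal services.
ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",
    VpcEndpointType="Interface",
    ServiceName="com.amazonaws.us-east-1.sqs",        # placeholder service
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],        # allow 443 from internal CIDRs
    PrivateDnsEnabled=True,
)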
A company has a data lake in Amazon S3 that needs to be accessed by hundreds of applications across many AWS accounts. The company's
information security policy states that the S3 bucket must not be accessed over the public internet and that each application should have the
minimum permissions necessary to function.
To meet these requirements, a solutions architect plans to use an S3 access point that is restricted to specific VPCs for each application.
Which combination of steps should the solutions architect take to implement this solution? (Choose two.)
A. Create an S3 access point for each application in the AWS account that owns the S3 bucket. Configure each access point to be accessible
only from the application's VPC. Update the bucket policy to require access from an access point
B. Create an interface endpoint for Amazon S3 in each application's VPC. Configure the endpoint policy to allow access to an S3 access point.
Create a VPC gateway attachment for the S3 endpoint
C. Create a gateway endpoint for Amazon S3 in each application's VPC. Configure the endpoint policy to allow access to an S3 access point.
Specify the route table that is used to access the access point.
D. Create an S3 access point for each application in each AWS account and attach the access points to the S3 bucket. Configure each access
point to be accessible only from the application's VPC. Update the bucket policy to require access from an access point.
E. Create a gateway endpoint for Amazon S3 in the data lake's VPC. Attach an endpoint policy to allow access to the S3 bucket. Specify the
route table that is used to access the bucket
Correct Answer: AC
https://aws.amazon.com/blogs/storage/managing-amazon-s3-access-with-vpc-endpoints-and-s3-access-points/
https://aws.amazon.com/blogs/storage/setting-up-cross-account-amazon-s3-access-with-
s3-access-points/ => Account A (The Data Owner). This is the account you create the Amazon S3 Access Point in
upvoted 2 times
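The blog posts above show the same pattern as option A: the bucket-owning account creates one VPC-restricted access point per application. A minimal boto3 sketch (account ID, bucket, and VPC ID are placeholders):

import boto3

s3control = boto3.client("s3control")

# Minimal sketch: the access point's network origin is pinned to the
# application's VPC, so it cannot be reached over the public internet.
s3control.create_access_point(
    AccountId="111122223333",                      # account that owns the bucket
    Name="app1-ap",
    Bucket="central-data-lake",                    # placeholder bucket
    VpcConfiguration={"VpcId": "vpc-0123456789abcdef0"},
)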
A company that runs applications on AWS recently subscribed to a new software-as-a-service (SaaS) data vendor. The vendor provides the data by
way of a
REST API that the vendor hosts in its AWS environment. The vendor offers multiple options for connectivity to the API and is working with the
company to find the best way to connect.
The company's AWS account does not allow outbound internet access from its AWS environment. The vendor's services run on AWS in the same
Region as the company's applications.
A solutions architect must implement connectivity to the vendor's API so that the API is highly available in the company's VPC.
Which solution will meet these requirements?
A. Connect to the vendor's public API address for the data service
B. Connect to the vendor by way of a VPC peering connection between the vendor's VPC and the company's VPC
C. Connect to the vendor by way of a VPC endpoint service that uses AWS PrivateLink
D. Connect to a public bastion host that the vendor provides. Tunnel the API traffic
Correct Answer: D
Reference:
https://docs.oracle.com/en-us/iaas/big-data/doc/use-bastion-host-connect-your-service.html
https://aws.amazon.com/blogs/apn/using-aws-privatelink-integrations-to-access-saas-solutions-from-apn-partners
/#:~:text=With%20AWS%20PrivateLink%2C%20you%20can,data%20to%20the%20public%20internet.
upvoted 14 times
A company is developing a web application that runs on Amazon EC2 instances in an Auto Scaling group behind a public-facing Application Load
Balancer (ALB).
Only users from a specific country are allowed to access the application. The company needs the ability to log the access requests that have been
blocked. The solution should require the least possible maintenance.
Which solution meets these requirements?
A. Create an IPSet containing a list of IP ranges that belong to the specified country. Create an AWS WAF web ACL. Configure a rule to block
any requests that do not originate from an IP range in the IPSet. Associate the rule with the web ACL. Associate the web ACL with the ALB.
B. Create an AWS WAF web ACL. Configure a rule to block any requests that do not originate from the specified country. Associate the rule
with the web ACL. Associate the web ACL with the ALB.
C. Configure AWS Shield to block any requests that do not originate from the specified country. Associate AWS Shield with the ALB.
D. Create a security group rule that allows ports 80 and 443 from IP ranges that belong to the specified country. Associate the security group
with the ALB.
Correct Answer: A
WAF is designed to serve this case. For A, building and maintaining an IP list is impractical; AWS maintains such a list and can guarantee about 99.8% accuracy, so how could a company do that on its own?
upvoted 1 times
" # AzureDP900 11 months, 2 weeks ago
B completely make sense. A is wrong answer.
upvoted 1 times
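For readers who want to see option B's geo-match rule spelled out, here is a hedged boto3 sketch of a regional web ACL that blocks requests from outside an allowed country; the ACL name, metric names, and country code are placeholders.

import boto3

wafv2 = boto3.client("wafv2")

# Minimal sketch: block any request whose source country is not in the list;
# sampled requests and CloudWatch metrics cover the logging requirement.
wafv2.create_web_acl(
    Name="country-allowlist",
    Scope="REGIONAL",                      # required for an ALB association
    DefaultAction={"Allow": {}},
    Rules=[
        {
            "Name": "block-other-countries",
            "Priority": 0,
            "Statement": {
                "NotStatement": {
                    "Statement": {"GeoMatchStatement": {"CountryCodes": ["US"]}}
                }
            },
            "Action": {"Block": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "BlockedByGeo",
            },
        }
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "CountryAllowlistAcl",
    },
)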
A multimedia company needs to deliver its video-on-demand (VOD) content to its subscribers in a cost-effective way. The video files range in size
from 1-15 GB and are typically viewed frequently for the first 6 months after creation, and then access decreases considerably. The company
requires all video files to remain immediately available for subscribers. There are now roughly 30,000 files, and the company anticipates doubling
that number over time.
What is the MOST cost-effective solution for delivering the company's VOD content?
A. Store the video files in an Amazon S3 bucket using S3 Intelligent-Tiering. Use Amazon CloudFront to deliver the content with the S3 bucket
as the origin.
B. Use AWS Elemental MediaConvert and store the adaptive bitrate video files in Amazon S3. Configure an AWS Elemental MediaPackage
endpoint to deliver the content from Amazon S3.
C. Store the video files in Amazon Elastic File System (Amazon EFS) Standard. Enable EFS lifecycle management to move the video files to
EFS Infrequent Access after 6 months. Create an Amazon EC2 Auto Scaling group behind an Elastic Load Balancer to deliver the content from
Amazon EFS.
D. Store the video files in Amazon S3 Standard. Create S3 Lifecycle rules to move the video files to S3 Standard-Infrequent Access (S3
Standard-IA) after 6 months and to S3 Glacier Deep Archive after 1 year. Use Amazon CloudFront to deliver the content with the S3 bucket as
the origin.
Correct Answer: D
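The lifecycle transitions described in the marked answer D can be expressed as a single lifecycle configuration; the bucket name, prefix, and exact day thresholds below are placeholders.

import boto3

s3 = boto3.client("s3")

# Minimal sketch of option D's tiering: Standard-IA after roughly 6 months,
# then Glacier Deep Archive after 1 year.
s3.put_bucket_lifecycle_configuration(
    Bucket="vod-content-bucket",           # placeholder bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-down-vod-files",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                "Transitions": [
                    {"Days": 180, "StorageClass": "STANDARD_IA"},
                    {"Days": 365, "StorageClass": "DEEP_ARCHIVE"},
                ],
            }
        ]
    },
)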
A company manages hundreds of AWS accounts centrally in an organization in AWS Organizations. The company recently started to allow product
teams to create and manage their own S3 access points in their accounts. The S3 access points can be accessed only within VPCs, not on the
Internet.
What is the MOST operationally efficient way to enforce this requirement?
A. Set the S3 access point resource policy to deny the s3:CreateAccessPoint action unless the s3:AccessPointNetworkOrigin condition key
evaluates to VPC.
B. Create an SCP at the root level in the organization to deny the s3:CreateAccessPoint action unless the s3:AccessPointNetworkOrigin
condition key evaluates to VPC.
C. Use AWS CloudFormation StackSets to create a new IAM policy in each AWS account that allows the s3:CreateAccessPoint action only if
the s3:AccessPointNetworkOrigin condition key evaluates to VPC.
D. Set the S3 bucket policy to deny the s3:CreateAccessPoint action unless the s3:AccessPointNetworkOrigin condition key evaluates to VPC.
Correct Answer: D
Reference:
https://aws.amazon.com/blogs/storage/managing-amazon-s3-access-with-vpc-endpoints-and-s3-access-points/
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:CreateAccessPoint",
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {
          "s3:AccessPointNetworkOrigin": "VPC"
        }
      }
    }
  ]
}
upvoted 1 times
" # jyrajan69 7 months, 3 weeks ago
The question states clearly '. Recently, the firm began allowing product teams to build and administer their own S3 access points under their own
accounts' so setting SCP at root level would not allow this, therefore only possible solution is A.
upvoted 2 times
"You can set up AWS SCPs to require any new Access Point in the organization to be restricted to VPC-Only type. This makes sure that any
Access Point created in your organization provides access only from within the VPCs and there by firewalling your data to within your private
networks."
upvoted 2 times
A company needs to architect a hybrid DNS solution. This solution will use an Amazon Route 53 private hosted zone for the domain
cloud.example.com for the resources stored within VPCs.
The company has the following DNS resolution requirements:
✑ On-premises systems should be able to resolve and connect to cloud.example.com.
✑ All VPCs should be able to resolve cloud.example.com.
There is already an AWS Direct Connect connection between the on-premises corporate network and AWS Transit Gateway.
Which architecture should the company use to meet these requirements with the HIGHEST performance?
A. Associate the private hosted zone to all the VPCs. Create a Route 53 inbound resolver in the shared services VPC. Attach all VPCs to the
transit gateway and create forwarding rules in the on-premises DNS server for cloud.example.com that point to the inbound resolver.
B. Associate the private hosted zone to all the VPCs. Deploy an Amazon EC2 conditional forwarder in the shared services VPC. Attach all
VPCs to the transit gateway and create forwarding rules in the on-premises DNS server for cloud.example.com that point to the conditional
forwarder.
C. Associate the private hosted zone to the shared services VPC. Create a Route 53 outbound resolver in the shared services VPC. Attach all
VPCs to the transit gateway and create forwarding rules in the on-premises DNS server for cloud.example.com that point to the outbound
resolver.
D. Associate the private hosted zone to the shared services VPC. Create a Route 53 inbound resolver in the shared services VPC. Attach the
shared services VPC to the transit gateway and create forwarding rules in the on-premises DNS server for cloud.example.com that point to the
inbound resolver.
Correct Answer: A
Reference:
https://aws.amazon.com/blogs/networking-and-content-delivery/centralized-dns-management-of-hybrid-cloud-with-amazon-route-53-and-aws-
transit- gateway/
"When a Route 53 private hosted zone needs to be resolved in multiple VPCs and AWS accounts as described earlier, the most reliable pattern is
to share the private hosted zone between accounts and associate it to each VPC that needs it."
upvoted 18 times
https://aws.amazon.com/vi/blogs/networking-and-content-delivery/centralized-dns-management-of-hybrid-cloud-with-amazon-route-53-and-
aws-transit-gateway/
So answer is A!!!!
upvoted 4 times
" # asfsdfsdf 3 months, 4 weeks ago
I will go with D; there is a blog for this. There is no need to associate the private zone with all VPCs, only with the shared one; the shared one will
already be associated with the others.
https://aws.amazon.com/blogs/networking-and-content-delivery/centralized-dns-management-of-hybrid-cloud-with-amazon-route-53-and-aws-
transit-gateway/
upvoted 1 times
from-> https://aws.amazon.com/blogs/networking-and-content-delivery/centralized-dns-management-of-hybrid-cloud-with-amazon-route-53-
and-aws-transit-gateway/
upvoted 1 times
If you associate the private hosted zone only with the shared services VPC, no other VPC will be able to resolve this name.
upvoted 1 times
A development team has created a new flight tracker application that provides near-real-time data to users. The application has a front end that
consists of an
Application Load Balancer (ALB) in front of two large Amazon EC2 instances in a single Availability Zone. Data is stored in a single Amazon RDS
MySQL DB instance. An Amazon Route 53 DNS record points to the ALB.
Management wants the development team to improve the solution to achieve maximum reliability with the least amount of operational overhead.
Which set of actions should the team take?
A. Create RDS MySQL read replicas. Deploy the application to multiple AWS Regions. Use a Route 53 latency-based routing policy to route to
the application.
B. Configure the DB instance as Multi-AZ. Deploy the application to two additional EC2 instances in different Availability Zones behind an ALB.
C. Replace the DB instance with Amazon DynamoDB global tables. Deploy the application in multiple AWS Regions. Use a Route 53 latency-
based routing policy to route to the application.
D. Replace the DB instance with Amazon Aurora with Aurora Replicas. Deploy the application to multiple smaller EC2 instances across
multiple Availability Zones in an Auto Scaling group behind an ALB.
Correct Answer: B
Reference:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html
Answer B is correct
upvoted 1 times
B doesn't offer MAXIMUM resiliency, following the well architected framework's resiliency pillar, DR scenario must be considered. In this scenario
we have a near real-time application, we would need DynamoDB + multi region for maximum resiliency for both App and DB. Moreover, we are
working with the development team, which can switch from RDS to NoSQL.
upvoted 6 times
https://youtu.be/ZCt3ctVfGIk?t=111
upvoted 10 times
" # cale Most Recent % 2 months ago
Selected Answer: D
It's D
upvoted 1 times
1. Currently, the app runs on two large EC2 instances in a single AZ; we can save cost with smaller EC2 instances plus Auto Scaling across multiple AZs rather than adding more large instances.
2. RDS Multi-AZ means two equal instances, which doubles the cost. With replicas, you can choose a smaller RDS instance type to save cost.
===> Finally, D
upvoted 1 times
Switching to Aurora will incur a big cost. You can simply set up Multi-AZ and switch the instances to be in different AZs. It is not the most
resilient architecture, but it is improved and the most cost-effective one here.
upvoted 2 times
" # kyo 9 months ago
D is better than B.
upvoted 1 times
A multimedia company with a single AWS account is launching an application for a global user base. The application storage and bandwidth
requirements are unpredictable. The application will use Amazon EC2 instances behind an Application Load Balancer as the web tier and will use
Amazon DynamoDB as the database tier. The environment for the application must meet the following requirements:
✑ Low latency when accessed from any part of the world
✑ WebSocket support
✑ End-to-end encryption
✑ Protection against the latest security threats
A. Use Amazon Route 53 and Amazon CloudFront for content distribution. Use Amazon S3 to store static content
B. Use Amazon Route 53 and AWS Transit Gateway for content distribution. Use an Amazon Elastic Block Store (Amazon EBS) volume to store
static content
C. Use AWS WAF with AWS Shield Advanced to protect the application
Correct Answer: BC
A company is using AWS Organizations to manage 15 AWS accounts. A solutions architect wants to run advanced analytics on the company's
cloud expenditures. The cost data must be gathered and made available from an analytics account. The analytics application runs in a VPC and
must receive the raw cost data each night to run the analytics.
The solutions architect has decided to use the Cost Explorer API to fetch the raw data and store the data in Amazon S3 in JSON format. Access to
the raw cost data must be restricted to the analytics application. The solutions architect has already created an AWS Lambda function to collect
data by using the Cost Explorer
API.
Which additional actions should the solutions architect take to meet these requirements?
A. Create an IAM role in the Organizations master account with permissions to use the Cost Explorer API, and establish trust between the role
and the analytics account. Update the Lambda function role and add sts:AssumeRole permissions. Assume the role in the master account
from the Lambda function code by using the AWS Security Token Service (AWS STS) AssumeRole API call. Create a gateway endpoint for
Amazon S3 in the analytics VPC. Create an S3 bucket policy that allows access only from the S3 endpoint.
B. Create an IAM role in the analytics account with permissions to use the Cost Explorer API. Update the Lambda function and assign the new
role. Create a gateway endpoint for Amazon S3 in the analytics VPC. Create an S3 bucket policy that allows access only from the analytics
VPC by using the aws:SourceVpc condition.
C. Create an IAM role in the Organizations master account with permissions to use the Cost Explorer API, and establish trust between the role
and the analytics account. Update the Lambda function role and add sts:AssumeRole permissions. Assume the role in the master account
from the Lambda function code by using the AWS Security Token Service (AWS STS) AssumeRole API call. Create an interface endpoint for
Amazon S3 in the analytics VPC. Create an S3 bucket policy that allows access only from the analytics VPC private CIDR range by using the
aws:SourceIp condition.
D. Create an IAM role in the analytics account with permissions to use the Cost Explorer API. Update the Lambda function and assign the new
role. Create an interface endpoint for Amazon S3 in the analytics VPC. Create an S3 bucket policy that allows access only from the S3
endpoint.
Correct Answer: B
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_permissions_overview.html
the administrator in the management account can create a role to grant cross-account permissions to a user in a member account as follows:
The management account administrator creates an IAM role and attaches a permissions policy to the role that grants permissions to the
organization's resources.
The management account administrator attaches a trust policy to the role that identifies the member account ID as the Principal who can assume
the role.
The member account administrator can then delegate permissions to assume the role to any users in the member account. Doing this allows
users in the member account to create or access resources in the management account and the organization. The principal in the trust policy can
also be an AWS service principal if you want to grant permissions to an AWS service to assume the role.
upvoted 11 times
A company wants to migrate a 30 TB Oracle data warehouse from on premises to Amazon Redshift. The company used the AWS Schema
Conversion Tool (AWS
SCT) to convert the schema of the existing data warehouse to an Amazon Redshift schema. The company also used a migration assessment
report to identify manual tasks to complete.
The company needs to migrate the data to the new Amazon Redshift cluster during an upcoming data freeze period of 2 weeks. The only network
connection between the on-premises data warehouse and AWS is a 50 Mbps internet connection.
Which migration strategy meets these requirements?
A. Create an AWS Database Migration Service (AWS DMS) replication instance. Authorize the public IP address of the replication instance to
reach the data warehouse through the corporate firewall. Create a migration task to run at the beginning of the data freeze period.
B. Install the AWS SCT extraction agents on the on-premises servers. Define the extract, upload, and copy tasks to send the data to an Amazon
S3 bucket. Copy the data into the Amazon Redshift cluster. Run the tasks at the beginning of the data freeze period.
C. Install the AWS SCT extraction agents on the on-premises servers. Create a Site-to-Site VPN connection. Create an AWS Database Migration
Service (AWS DMS) replication instance that is the appropriate size. Authorize the IP address of the replication instance to be able to access
the on-premises data warehouse through the VPN connection.
D. Create a job in AWS Snowball Edge to import data into Amazon S3. Install AWS SCT extraction agents on the on-premises servers. Define
the local and AWS Database Migration Service (AWS DMS) tasks to send the data to the Snowball Edge device. When the Snowball Edge
device is returned to AWS and the data is available in Amazon S3, run the AWS DMS subtask to copy the data to Amazon Redshift.
Correct Answer: D
I'll go with D
for data > 20TB use Snowball
upvoted 2 times
" # vimgoru24 1 year ago
D. This is the way.
upvoted 1 times
AWS Database Migration Service (AWS DMS) can use Snowball Edge and Amazon S3 to migrate large databases more quickly than by other
methods
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_LargeDBs.html
upvoted 4 times
A company that tracks medical devices in hospitals wants to migrate its existing storage solution to the AWS Cloud. The company equips all of its
devices with sensors that collect location and usage information. This sensor data is sent in unpredictable patterns with large spikes. The data is
stored in a MySQL database running on premises at each hospital. The company wants the cloud storage solution to scale with usage.
The company's analytics team uses the sensor data to calculate usage by device type and hospital. The team needs to keep analysis tools running
locally while fetching data from the cloud. The team also needs to use existing Java application and SQL queries with as few changes as possible.
How should a solutions architect meet these requirements while ensuring the sensor data is secure?
A. Store the data in an Amazon Aurora Serverless database. Serve the data through a Network Load Balancer (NLB). Authenticate users using
the NLB with credentials stored in AWS Secrets Manager.
B. Store the data in an Amazon S3 bucket. Serve the data through Amazon QuickSight using an IAM user authorized with AWS Identity and
Access Management (IAM) with the S3 bucket as the data source.
C. Store the data in an Amazon Aurora Serverless database. Serve the data through the Aurora Data API using an IAM user authorized with
AWS Identity and Access Management (IAM) and the AWS Secrets Manager ARN.
D. Store the data in an Amazon S3 bucket. Serve the data through Amazon Athena using AWS PrivateLink to secure the data in transit.
Correct Answer: A
https://aws.amazon.com/blogs/aws/new-data-api-for-amazon-aurora-serverless/
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/data-api.html
upvoted 7 times
C) Store the data in an Amazon Aurora Serverless database. Serve the data through the Aurora Data API using an IAM user authorized with AWS
Identity and Access Management (IAM) and the AWS Secrets Manager ARN.
upvoted 1 times
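Answer C's Data API call pattern can be sketched briefly with boto3; both ARNs, the database name, and the SQL are placeholders.

import boto3

rds_data = boto3.client("rds-data")

# Minimal sketch: IAM authorizes the API call, while the Secrets Manager ARN
# supplies the database credentials, so no password is handled by the client.
result = rds_data.execute_statement(
    resourceArn="arn:aws:rds:us-east-1:111122223333:cluster:sensor-cluster",            # placeholder
    secretArn="arn:aws:secretsmanager:us-east-1:111122223333:secret:sensor-db-abc123",  # placeholder
    database="sensors",
    sql="SELECT hospital_id, device_type, COUNT(*) FROM usage_events GROUP BY 1, 2",
)
print(result["records"])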
The following AWS Identity and Access Management (IAM) customer managed policy has been attached to an IAM user:
Which statement describes the access that this policy provides to the user?
A. The policy grants access to all Amazon S3 actions, including all actions in the prod-data S3 bucket
B. This policy denies access to all Amazon S3 actions, excluding all actions in the prod-data S3 bucket
C. This policy denies access to the Amazon S3 bucket and objects not having prod-data in the bucket name
D. This policy grants access to all Amazon S3 actions in the prod-data S3 bucket, but explicitly denies access to all other AWS services
Correct Answer: D
NotAction + NotResource
https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_notaction.html
https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_notresource.html
upvoted 12 times
" # AzureDP900 11 months, 1 week ago
Selected Answer: D
D is correct for given scnerio!
upvoted 1 times
A company has implemented an ordering system using an event driven architecture. During initial testing, the system stopped processing orders.
Further log analysis revealed that one order message in an Amazon Simple Queue Service (Amazon SQS) standard queue was causing an error on
the backend and blocking all subsequent order messages. The visibility timeout of the queue is set to 30 seconds, and the backend processing
timeout is set to 10 seconds. A solutions architect needs to analyze faulty order messages and ensure that the system continues to process
subsequent messages.
Which step should the solutions architect take to meet these requirements?
A. Increase the backend processing timeout to 30 seconds to match the visibility timeout.
B. Reduce the visibility timeout of the queue to automatically remove the faulty message.
C. Configure a new SQS FIFO queue as a dead-letter queue to isolate the faulty messages.
D. Configure a new SQS standard queue as a dead-letter queue to isolate the faulty messages.
Correct Answer: D
Reference:
https://aws.amazon.com/blogs/compute/using-amazon-sqs-dead-letter-queues-to-control-message-failure/
upvoted 3 times
" # vimgoru24 1 year ago
D is the way you handle faulty messages in SQS
upvoted 3 times
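Option D's dead-letter queue is attached to the source queue through a redrive policy; a hedged boto3 sketch (the queue URL, DLQ ARN, and receive count are placeholders) follows.

import json
import boto3

sqs = boto3.client("sqs")

# Minimal sketch: after three failed receives, the faulty order message is
# moved to the DLQ for analysis instead of blocking later messages.
dlq_arn = "arn:aws:sqs:us-east-1:111122223333:orders-dlq"  # placeholder

sqs.set_queue_attributes(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/111122223333/orders",  # placeholder
    Attributes={
        "RedrivePolicy": json.dumps(
            {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": "3"}
        )
    },
)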
A large company has a business-critical application that runs in a single AWS Region. The application consists of multiple Amazon EC2 instances
and an Amazon
RDS Multi-AZ DB instance. The EC2 instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones.
A solutions architect is implementing a disaster recovery (DR) plan for the application. The solutions architect has created a pilot light application
deployment in a new Region, which is referred to as the DR Region. The DR environment has an Auto Scaling group with a single EC2 instance and
a read replica of the RDS DB instance.
The solutions architect must automate a failover from the primary application environment to the pilot light environment in the DR Region.
Which solution meets these requirements with the MOST operational efficiency?
A. Publish an application availability metric to Amazon CloudWatch in the DR Region from the application environment in the primary Region.
Create a CloudWatch alarm in the DR Region that is invoked when the application availability metric stops being delivered. Configure the
CloudWatch alarm to send a notification to an Amazon Simple Notification Service (Amazon SNS) topic in the DR Region. Add an email
subscription to the SNS topic that sends messages to the application owner. Upon notification, instruct a systems operator to sign in to the
AWS Management Console and initiate failover operations for the application.
B. Create a cron task that runs every 5 minutes by using one of the application's EC2 instances in the primary Region. Configure the cron task
to check whether the application is available. Upon failure, the cron task notifies a systems operator and attempts to restart the application
services.
C. Create a cron task that runs every 5 minutes by using one of the application's EC2 instances in the primary Region. Configure the cron task
to check whether the application is available. Upon failure, the cron task modifies the DR environment by promoting the read replica and by
adding EC2 instances to the Auto Scaling group.
D. Publish an application availability metric to Amazon CloudWatch in the DR Region from the application environment in the primary Region.
Create a CloudWatch alarm in the DR Region that is invoked when the application availability metric stops being delivered. Configure the
CloudWatch alarm to send a notification to an Amazon Simple Notification Service (Amazon SNS) topic in the DR Region. Use an AWS Lambda
function that is invoked by Amazon SNS in the DR Region to promote the read replica and to add EC2 instances to the Auto Scaling group.
Correct Answer: A
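For reference, the automated failover step that option D describes (a Lambda function invoked by SNS that promotes the read replica and scales out the pilot-light Auto Scaling group) could be sketched as follows; the DR Region, identifiers, and capacity values are placeholders.

import boto3

rds = boto3.client("rds", region_name="us-west-2")                 # placeholder DR Region
autoscaling = boto3.client("autoscaling", region_name="us-west-2")

def handler(event, context):
    # Minimal sketch: promote the replica to a standalone primary and scale
    # the Auto Scaling group from the pilot-light size to production size.
    rds.promote_read_replica(DBInstanceIdentifier="app-db-dr-replica")  # placeholder
    autoscaling.update_auto_scaling_group(
        AutoScalingGroupName="app-dr-asg",                              # placeholder
        MinSize=2,
        DesiredCapacity=2,
        MaxSize=6,
    )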
An education company is running a web application used by college students around the world. The application runs in an Amazon Elastic
Container Service
(Amazon ECS) cluster in an Auto Scaling group behind an Application Load Balancer (ALB). A system administrator detects a weekly spike in the
number of failed login attempts, which overwhelm the application's authentication service. All the failed login attempts originate from about 500
different IP addresses that change each week. A solutions architect must prevent the failed login attempts from overwhelming the authentication
service.
Which solution meets these requirements with the MOST operational efficiency?
A. Use AWS Firewall Manager to create a security group and security group policy to deny access from the IP addresses
B. Create an AWS WAF web ACL with a rate-based rule, and set the rule action to Block. Connect the web ACL to the ALB
C. Use AWS Firewall Manager to create a security group and security group policy to allow access only to specific CIDR ranges
D. Create an AWS WAF web ACL with an IP set match rule, and set the rule action to Block. Connect the web ACL to the ALB
Correct Answer: A
Reference:
https://docs.aws.amazon.com/waf/latest/developerguide/security-group-policies.html
upvoted 1 times
" # acloudguru 11 months, 1 week ago
Selected Answer: B
B,WAF is designed for this kind of DDOS
upvoted 2 times
You’d have a hell of a burden manually blacklisting 500+ IPs every week.
upvoted 4 times
A company needs to store and process image data that will be uploaded from mobile devices using a custom mobile app. Usage peaks between 8
AM and 5 PM on weekdays, with thousands of uploads per minute. The app is rarely used at any other time. A user is notified when image
processing is complete.
Which combination of actions should a solutions architect take to ensure image processing can scale to handle the load? (Choose three.)
A. Upload files from the mobile software directly to Amazon S3. Use S3 event notifications to create a message in an Amazon MQ queue.
B. Upload files from the mobile software directly to Amazon S3. Use S3 event notifications to create a message in an Amazon Simple Queue
Service (Amazon SQS) standard queue.
C. Invoke an AWS Lambda function to perform image processing when a message is available in the queue.
D. Invoke an S3 Batch Operations job to perform image processing when a message is available in the queue.
E. Send a push notification to the mobile app by using Amazon Simple Notification Service (Amazon SNS) when processing is complete.
F. Send a push notification to the mobile app by using Amazon Simple Email Service (Amazon SES) when processing is complete.
" # oppai1232 1 year ago
Why BCE instead of BDE?
Lambda times out at 15 mins, what if it needed to take more than that?
upvoted 1 times
A company's processing team has an AWS account with a production application. The application runs on Amazon EC2 instances behind a
Network Load
Balancer (NLB). The EC2 instances are hosted in private subnets in a VPC in the eu-west-1 Region. The VPC was assigned the CIDR block of
10.0.0.0/16. The billing team recently created a new AWS account and deployed an application on EC2 instances that are hosted in private
subnets in a VPC in the eu-central-1
Region. The new VPC is assigned the CIDR block of 10.0.0.0/16.
The processing application needs to securely communicate with the billing application over a proprietary TCP port.
What should a solutions architect do to meet this requirement with the LEAST amount of operational effort?
A. In the billing team's account, create a new VPC and subnets in eu-central-1 that use the CIDR block of 192.168.0.0/16. Redeploy the
application to the new subnets. Configure a VPC peering connection between the two VPCs.
B. In the processing team's account, add an additional CIDR block of 192.168.0.0/16 to the VPC in eu-west-1. Restart each of the EC2
instances so that they obtain a new IP address. Configure an inter-Region VPC peering connection between the two VPCs.
C. In the billing team's account, create a new VPC and subnets in eu-west-1 that use the CIDR block of 192.168.0.0/16. Create a VPC endpoint
service (AWS PrivateLink) in the processing team's account and an interface VPC endpoint in the new VPC. Configure an inter-Region VPC
peering connection in the billing team's account between the two VPCs.
D. In each account, create a new VPC with the CIDR blocks of 192.168.0.0/16 and 172.16.0.0/16. Create inter-Region VPC peering
connections between the billing team's VPCs and the processing team's VPCs. Create gateway VPC endpoints to allow traffic to route between
the VPCs.
Correct Answer: A
C: just declare the PrivateLink + Interface endpoint (using the existing NLB). Less work
upvoted 4 times
" # jyrajan69 8 months, 2 weeks ago
There are 3 factors in this question: first, it should be the least amount of effort; then there is the NLB and the need for a secure connection. All of this can be
achieved by A, with no issues for the NLB based on the following link (https://aws.amazon.com/about-aws/whats-new/2018/10/network-load-balancer-
now-supports-inter-region-vpc-peering/). C is way more complicated and not required
upvoted 1 times
1. If it is using a VPC endpoint, why is a peering connection necessary? It can directly connect to the application via the endpoint so the extra
VPC and peering connection is an unnecessary step
2. 'Inter region peering' is enabled by default for all VPC peering connections so there is no special type of 'inter region peering' connection
3. The order is wrong. The processing account needs to access the billing application. So the VPC endpoint service should be created in the
Billing team's account, and the interface endpoint created in the processing account, which is the service consumer.
upvoted 1 times
https://aws.amazon.com/about-aws/whats-new/2018/10/aws-privatelink-now-supports-access-over-inter-region-vpc-peering/
upvoted 1 times
A company that is developing a mobile game is making game assets available in two AWS Regions. Game assets are served from a set of Amazon
EC2 instances behind an Application Load Balancer (ALB) in each Region. The company requires game assets to be fetched from the closest
Region. If game assets become unavailable in the closest Region, they should be fetched from the other Region.
What should a solutions architect do to meet these requirements?
A. Create an Amazon CloudFront distribution. Create an origin group with one origin for each ALB. Set one of the origins as primary.
B. Create an Amazon Route 53 health check for each ALB. Create a Route 53 failover routing record pointing to the two ALBs. Set the Evaluate
Target Health value to Yes.
C. Create two Amazon CloudFront distributions, each with one ALB as the origin. Create an Amazon Route 53 failover routing record pointing
to the two CloudFront distributions. Set the Evaluate Target Health value to Yes.
D. Create an Amazon Route 53 health check for each ALB. Create a Route 53 latency alias record pointing to the two ALBs. Set the Evaluate
Target Health value to Yes.
Correct Answer: D
Latency routing policy – Use when you have resources in multiple AWS Regions and you want to route traffic to the region that provides the
best latency.
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html
upvoted 11 times
A correct - CloudFront: create an origin group with two origins: a primary and a secondary. If the primary origin is unavailable or returns specific
HTTP response status codes, CloudFront automatically switches to the secondary origin
B wrong - "Create a Route 53 failover routing record pointing to the two ALBs" - you have to set failover in each Route 53 record (each ALB) as
Primary or Secondary
C wrong - "Create an Amazon Route 53 failover routing record pointing to the two CloudFront distributions." - same as above
D wrong - "Create a Route 53 latency alias record pointing to the two ALBs" - alias can use only one destination
upvoted 2 times
Latency routing for this use-case (having active resources in multiple regions)
upvoted 2 times
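A sketch of just the origin-group fragment of a CloudFront DistributionConfig for option A: two ALB origins with automatic failover on 5xx responses. The origin IDs are hypothetical and the rest of the distribution config is omitted.

```python
# Origin-group fragment of a CloudFront DistributionConfig (sketch only).
origin_groups = {
    "Quantity": 1,
    "Items": [{
        "Id": "alb-failover-group",
        "FailoverCriteria": {
            # Fail over to the secondary origin on these status codes.
            "StatusCodes": {"Quantity": 3, "Items": [500, 502, 503]},
        },
        "Members": {
            "Quantity": 2,
            "Items": [
                {"OriginId": "alb-primary-region"},    # primary
                {"OriginId": "alb-secondary-region"},  # secondary
            ],
        },
    }],
}
# Set DefaultCacheBehavior["TargetOriginId"] = "alb-failover-group" so requests
# are routed through the origin group.
```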
A large company is running a popular web application. The application runs on several Amazon EC2 Linux instances in an Auto Scaling group in a
private subnet.
An Application Load Balancer is targeting the instances in the Auto Scaling group in the private subnet. AWS Systems Manager Session Manager
is configured, and AWS Systems Manager Agent is running on all the EC2 instances.
The company recently released a new version of the application. Some EC2 instances are now being marked as unhealthy and are being
terminated. As a result, the application is running at reduced capacity. A solutions architect tries to determine the root cause by analyzing Amazon
CloudWatch logs that are collected from the application, but the logs are inconclusive.
How should the solutions architect gain access to an EC2 instance to troubleshoot the issue?
A. Suspend the Auto Scaling group's HealthCheck scaling process. Use Session Manager to log in to an instance that is marked as unhealthy.
B. Enable EC2 instance termination protection. Use Session Manager to log in to an instance that is marked as unhealthy.
C. Set the termination policy to OldestInstance on the Auto Scaling group. Use Session Manager to log in to an instance that is marked as
unhealthy.
D. Suspend the Auto Scaling group's Terminate process. Use Session Manager to log in to an instance that is marked as unhealthy.
Correct Answer: A
You can suspend the 'ReplaceUnhealthy' process to prevent unhealthy instances from being terminated.
See https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-suspend-resume-processes.html
upvoted 2 times
https://aws.amazon.com/blogs/aws/new-instance-protection-for-auto-scaling/
upvoted 1 times
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-suspend-resume-processes.html
Answer is D.
upvoted 1 times
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-suspend-resume-processes.html#choosing-suspend-resume
upvoted 4 times
Actually your link also suggests option D and now looking at it, option D is the answer -- see my separate post for the reasoning
upvoted 1 times
" # gsw 1 year, 1 month ago
AWS actually suggests you should put your instances into the standby state to troubleshoot failure but that isn't an option here
upvoted 1 times
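A minimal boto3 sketch of the suspend-and-troubleshoot flow option D (or the linked docs) describes: suspend the processes that would replace the unhealthy instance, connect with Session Manager, then resume. The group name is a placeholder.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Stop the ASG from terminating / replacing instances while troubleshooting.
autoscaling.suspend_processes(
    AutoScalingGroupName="web-app-asg",
    ScalingProcesses=["Terminate", "ReplaceUnhealthy"],
)

# ... troubleshoot the instance via Session Manager, then resume:
autoscaling.resume_processes(
    AutoScalingGroupName="web-app-asg",
    ScalingProcesses=["Terminate", "ReplaceUnhealthy"],
)
```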
A software company hosts an application on AWS with resources in multiple AWS accounts and Regions. The application runs on a group of
Amazon EC2 instances in an application VPC located in the us-east-1 Region with an IPv4 CIDR block of 10.10.0.0/16. In a different AWS account,
a shared services VPC is located in the us-east-2 Region with an IPv4 CIDR block of 10.10.10.0/24. When a cloud engineer uses AWS
CloudFormation to attempt to peer the application
VPC with the shared services VPC, an error message indicates a peering failure.
Which factors could cause this error? (Choose two.)
D. One of the VPCs was not shared through AWS Resource Access Manager
E. The IAM role in the peer accepter account does not have the correct permissions
Correct Answer: AE
I'll go with A, E
upvoted 1 times
" # vimgoru24 1 year ago
A, E is the way to go
upvoted 1 times
https://aws.amazon.com/premiumsupport/knowledge-center/cloudformation-vpc-peering-error/
upvoted 1 times
A company that develops consumer electronics with offices in Europe and Asia has 60 TB of software images stored on premises in Europe. The
company wants to transfer the images to an Amazon S3 bucket in the ap-northeast-1 Region. New software images are created daily and must be
encrypted in transit. The company needs a solution that does not require custom development to automatically transfer all existing and new
software images to Amazon S3.
What is the next step in the transfer process?
A. Deploy an AWS DataSync agent and configure a task to transfer the images to the S3 bucket
B. Configure Amazon Kinesis Data Firehose to transfer the images using S3 Transfer Acceleration
C. Use an AWS Snowball device to transfer the images with the S3 bucket as the target
D. Transfer the images over a Site-to-Site VPN connection using the S3 API with multipart upload
Correct Answer: A
DataSync provides built-in security capabilities such as encryption of data in-transit, and data integrity verification in-transit and at-rest. It
optimizes use of network bandwidth, and automatically recovers from network connectivity failures. In addition, DataSync provides control and
monitoring capabilities such as data transfer scheduling and granular visibility into the transfer process through Amazon CloudWatch metrics,
logs, and events.
upvoted 4 times
https://docs.aws.amazon.com/datasync/latest/userguide/what-is-datasync.html
https://docs.aws.amazon.com/snowball/latest/ug/shipping.html
upvoted 4 times
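A rough boto3 sketch of option A once a DataSync agent has been deployed on premises: an NFS source location, an S3 destination in ap-northeast-1, and a task that can be scheduled for the daily transfers. All hostnames, ARNs, and role names are assumptions for illustration.

```python
import boto3

datasync = boto3.client("datasync", region_name="ap-northeast-1")

# On-premises source exposed through the deployed DataSync agent.
source = datasync.create_location_nfs(
    ServerHostname="images.corp.example.com",
    Subdirectory="/exports/software-images",
    OnPremConfig={"AgentArns": [
        "arn:aws:datasync:ap-northeast-1:111122223333:agent/agent-0123456789abcdef0"
    ]},
)

# Destination S3 bucket in the target Region.
destination = datasync.create_location_s3(
    S3BucketArn="arn:aws:s3:::software-images-apne1",
    S3Config={"BucketAccessRoleArn": "arn:aws:iam::111122223333:role/datasync-s3-access"},
)

datasync.create_task(
    SourceLocationArn=source["LocationArn"],
    DestinationLocationArn=destination["LocationArn"],
    Name="daily-software-image-sync",
)
```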
A company is running a distributed application on a set of Amazon EC2 instances in an Auto Scaling group. The application stores large amounts
of data on an
Amazon Elastic File System (Amazon EFS) file system, and new data is generated monthly. The company needs to back up the data in a secondary
AWS Region to restore from in case of a performance problem in its primary Region. The company has an RTO of 1 hour. A solutions architect
needs to create a backup strategy while minimizing the extra cost.
Which backup strategy should the solutions architect recommend to meet these requirements?
A. Create a pipeline in AWS Data Pipeline. Copy the data to an EFS file system in the secondary Region. Create a lifecycle policy to move files
to the EFS One Zone-Infrequent Access storage class.
B. Set up automatic backups by using AWS Backup. Create a copy rule to copy backups to an Amazon S3 bucket in the secondary Region.
Create a lifecycle policy to move backups to the S3 Glacier storage class.
C. Set up AWS DataSync and continuously copy the files to an Amazon S3 bucket in the secondary Region. Create a lifecycle policy to move
files to the S3 Glacier Deep Archive storage class.
D. Turn on EFS Cross-Region Replication and set the secondary Region as the target. Create a lifecycle policy to move files to the EFS
Infrequent Access storage class in the secondary Region.
Correct Answer: A
By elimination:
- D: there is no such thing as "EFS Cross-Region Replication"... if you google it, everything points to AWS DataSync instead
upvoted 11 times
" # hilft 5 months ago
I go D
upvoted 1 times
AWS Documentation clearly mentions AWS Backup as a recommended service for EFS backup solution.
"Recommended Amazon EFS backup solutions
There are two recommended solutions available for backing up your Amazon EFS file systems.
"
https://docs.aws.amazon.com/efs/latest/ug/alternative-efs-backup.html#recommended-backup-solutions
upvoted 3 times
A company runs an application on AWS. An AWS Lambda function uses credentials to authenticate to an Amazon RDS for MySQL DB instance. A
security risk assessment identified that these credentials are not frequently rotated. Also, encryption at rest is not enabled for the DB instance.
The security team requires that both of these issues be resolved.
Which strategy should a solutions architect recommend to remediate these security risks?
A. Configure the Lambda function to store and retrieve the database credentials in AWS Secrets Manager and enable rotation of the
credentials. Take a snapshot of the DB instance and encrypt a copy of that snapshot. Replace the DB instance with a new DB instance that is
based on the encrypted snapshot.
B. Enable IAM DB authentication on the DB instance. Grant the Lambda execution role access to the DB instance. Modify the DB instance and
enable encryption.
C. Enable IAM DB authentication on the DB instance. Grant the Lambda execution role access to the DB instance. Create an encrypted read
replica of the DB instance. Promote the encrypted read replica to be the new primary node.
D. Configure the Lambda function to store and retrieve the database credentials as encrypted AWS Systems Manager Parameter Store
parameters. Create another Lambda function to automatically rotate the credentials. Create an encrypted read replica of the DB instance.
Promote the encrypted read replica to be the new primary node.
Correct Answer: D
Reference:
https://docs.aws.amazon.com/secretsmanager/latest/userguide/enable-rotation-rds.html
Parameter Store can store DB credentials as a SecureString but CANNOT rotate secrets natively, hence go with A. Also, you cannot enable encryption on an existing
MySQL RDS instance; you must copy the unencrypted snapshot with encryption enabled and restore a new instance from the encrypted copy.
upvoted 21 times
D is incorrect due to parameter store usage. There's no rotation provided by the service
upvoted 1 times
" # KennethTam 8 months, 1 week ago
A is correct
upvoted 1 times
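A minimal boto3 sketch of the two halves of option A. The identifiers, KMS key alias, and rotation Lambda ARN are placeholders.

```python
import boto3

rds = boto3.client("rds")
secrets = boto3.client("secretsmanager")

# 1) Encryption at rest: copy the unencrypted snapshot with a KMS key, then
#    restore a new (encrypted) instance from the encrypted copy.
rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier="mysql-prod-snapshot",
    TargetDBSnapshotIdentifier="mysql-prod-snapshot-encrypted",
    KmsKeyId="alias/rds-prod",
)
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="mysql-prod-encrypted",
    DBSnapshotIdentifier="mysql-prod-snapshot-encrypted",
)

# 2) Credential rotation with Secrets Manager.
secrets.rotate_secret(
    SecretId="prod/mysql/app-credentials",
    RotationLambdaARN="arn:aws:lambda:us-east-1:111122223333:function:SecretsManagerRDSMySQLRotation",
    RotationRules={"AutomaticallyAfterDays": 30},
)
```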
A company recently deployed a new application that runs on a group of Amazon EC2 Linux instances in a VPC. In a peered VPC, the company
launched an EC2
Linux instance that serves as a bastion host. The security group of the application instances allows access only on TCP port 22 from the private
IP of the bastion host. The security group of the bastion host allows access to TCP port 22 from 0.0.0.0/0 so that system administrators can use
SSH to remotely log in to the application instances from several branch offices.
While looking through operating system logs on the bastion host, a cloud engineer notices thousands of failed SSH logins to the bastion host from
locations around the world. The cloud engineer wants to change how remote access is granted to the application instances and wants to meet the
following requirements:
✑ Eliminate brute-force SSH login attempts.
✑ Retain a log of commands run during an SSH session.
✑ Retain the ability to forward ports.
Which solution meets these requirements for remote access to the application instances?
A. Configure the application instances to communicate with AWS Systems Manager. Grant access to the system administrators to use Session
Manager to establish a session with the application instances. Terminate the bastion host.
B. Update the security group of the bastion host to allow traffic from only the public IP addresses of the branch offices.
C. Configure an AWS Client VPN endpoint and provision each system administrator with a certificate to establish a VPN connection to the
application VPC. Update the security group of the application instances to allow traffic from only the Client VPN IPv4 CIDR. Terminate the
bastion host.
D. Configure the application instances to communicate with AWS Systems Manager. Grant access to the system administrators to issue
commands to the application instances by using Systems Manager Run Command. Terminate the bastion host.
Correct Answer: C
upvoted 1 times
Session Manager provides secure and auditable instance management without the need to open inbound ports, maintain bastion hosts, or
manage SSH keys. Session Manager also allows you to comply with corporate policies that require controlled access to instances, strict security
practices, and fully auditable logs with instance access details, while still providing end users with simple one-click cross-platform access to your
managed instances.
upvoted 2 times
It says: "Logging isn't available for Session Manager sessions that connect through port forwarding or SSH. This is because SSH encrypts all
session data, and Session Manager only serves as a tunnel for SSH connections." So A is not correct...
I will choose B.
upvoted 2 times
https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-logging.html
upvoted 2 times
A company that provisions job boards for a seasonal workforce is seeing an increase in traffic and usage. The backend services run on a pair of
Amazon EC2 instances behind an Application Load Balancer with Amazon DynamoDB as the datastore. Application read and write traffic is slow
during peak seasons.
Which option provides a scalable application architecture to handle peak seasons with the LEAST development effort?
A. Migrate the backend services to AWS Lambda. Increase the read and write capacity of DynamoDB
B. Migrate the backend services to AWS Lambda. Configure DynamoDB to use global tables
C. Use Auto Scaling groups for the backend services. Use DynamoDB auto scaling
D. Use Auto Scaling groups for the backend services. Use Amazon Simple Queue Service (Amazon SQS) and an AWS Lambda function to write
to DynamoDB
Correct Answer: C
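A minimal boto3 sketch of DynamoDB auto scaling (option C) using Application Auto Scaling: a scalable target plus a target-tracking policy for read capacity. The table name and limits are placeholders; write capacity would be configured the same way with the WriteCapacityUnits dimension.

```python
import boto3

aas = boto3.client("application-autoscaling")

aas.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/JobBoards",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    MinCapacity=5,
    MaxCapacity=500,
)

aas.put_scaling_policy(
    PolicyName="JobBoardsReadScaling",
    ServiceNamespace="dynamodb",
    ResourceId="table/JobBoards",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,  # keep consumed/provisioned reads near 70%
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
        },
    },
)
```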
A company has an application that sells tickets online and experiences bursts of demand every 7 days. The application has a stateless
presentation layer running on Amazon EC2, an Oracle database to store unstructured data catalog information, and a backend API layer. The front-
end layer uses an Elastic Load Balancer to distribute the load across nine On-Demand instances over three Availability Zones (AZs). The Oracle
database is running on a single EC2 instance. The company is experiencing performance issues when running more than two concurrent
campaigns. A solutions architect must design a solution that meets the following requirements:
✑ Address scalability issues.
✑ Increase the level of concurrency.
✑ Eliminate licensing costs.
✑ Improve reliability.
Which set of steps should the solutions architect take?
A. Create an Auto Scaling group for the front end with a combination of On-Demand and Spot Instances to reduce costs. Convert the Oracle
database into a single Amazon RDS reserved DB instance.
B. Create an Auto Scaling group for the front end with a combination of On-Demand and Spot Instances to reduce costs. Create two additional
copies of the database instance, then distribute the databases in separate AZs.
C. Create an Auto Scaling group for the front end with a combination of On-Demand and Spot Instances to reduce costs. Convert the tables in
the Oracle database into Amazon DynamoDB tables.
D. Convert the On-Demand Instances into Spot instances to reduce costs for the front end. Convert the tables in the Oracle database into
Amazon DynamoDB tables.
Correct Answer: A
A company wants to refactor its retail ordering web application that currently has a load-balanced Amazon EC2 instance fleet for web hosting,
database API services, and business logic. The company needs to create a decoupled, scalable architecture with a mechanism for retaining failed
orders while also minimizing operational costs.
Which solution will meet these requirements?
A. Use Amazon S3 for web hosting with Amazon API Gateway for database API services. Use Amazon Simple Queue Service (Amazon SQS)
for order queuing. Use Amazon Elastic Container Service (Amazon ECS) for business logic with Amazon SQS long polling for retaining failed
orders.
B. Use AWS Elastic Beanstalk for web hosting with Amazon API Gateway for database API services. Use Amazon MQ for order queuing. Use
AWS Step Functions for business logic with Amazon S3 Glacier Deep Archive for retaining failed orders.
C. Use Amazon S3 for web hosting with AWS AppSync for database API services. Use Amazon Simple Queue Service (Amazon SQS) for order
queuing. Use AWS Lambda for business logic with an Amazon SQS dead-letter queue for retaining failed orders.
D. Use Amazon Lightsail for web hosting with AWS AppSync for database API services. Use Amazon Simple Email Service (Amazon SES) for
order queuing. Use Amazon Elastic Kubernetes Service (Amazon EKS) for business logic with Amazon Elasticsearch Service (Amazon ES) for
retaining failed orders.
Correct Answer: C
Hints: Refactoring app to use GraphQL APIs (AppSync) + Serverless + DLQ for failed orders
upvoted 10 times
Method of Elimination -- look for failed order options in all the answers
upvoted 2 times
Unfortunately it's a trick question... While AppSync is no better than API GW in this context, a DLQ is a better choice than SQS long polling for
retaining failed orders.
Damn AWS...
upvoted 6 times
While AppSync is no better than API GW in this context, the latter part of the answer does mention DLQ which is a “must have”
upvoted 2 times
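A minimal boto3 sketch of the dead-letter-queue piece of option C: failed order messages move to a DLQ after a few receive attempts. Queue names and the maxReceiveCount are placeholders.

```python
import json
import boto3

sqs = boto3.client("sqs")

dlq_url = sqs.create_queue(QueueName="orders-dlq")["QueueUrl"]
dlq_arn = sqs.get_queue_attributes(
    QueueUrl=dlq_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

sqs.create_queue(
    QueueName="orders",
    Attributes={
        "RedrivePolicy": json.dumps({
            "deadLetterTargetArn": dlq_arn,
            "maxReceiveCount": "5",  # after 5 failed receives, move to the DLQ
        })
    },
)
```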
A financial company is building a system to generate monthly, immutable bank account statements for its users. Statements are stored in Amazon
S3. Users should have immediate access to their monthly statements for up to 2 years. Some users access their statements frequently, whereas
others rarely access their statements. The company's security and compliance policy requires that the statements be retained for at least 7 years.
What is the MOST cost-effective solution to meet the company's needs?
A. Create an S3 bucket with Object Lock disabled. Store statements in S3 Standard. Define an S3 Lifecycle policy to transition the data to S3
Standard-Infrequent Access (S3 Standard-IA) after 30 days. Define another S3 Lifecycle policy to move the data to S3 Glacier Deep Archive
after 2 years. Attach an S3 Glacier Vault Lock policy with deny delete permissions for archives less than 7 years old.
B. Create an S3 bucket with versioning enabled. Store statements in S3 Intelligent-Tiering. Use same-Region replication to replicate objects to
a backup S3 bucket. Define an S3 Lifecycle policy for the backup S3 bucket to move the data to S3 Glacier. Attach an S3 Glacier Vault Lock
policy with deny delete permissions for archives less than 7 years old.
C. Create an S3 bucket with Object Lock enabled. Store statements in S3 Intelligent-Tiering. Enable compliance mode with a default retention
period of 2 years. Define an S3 Lifecycle policy to move the data to S3 Glacier after 2 years. Attach an S3 Glacier Vault Lock policy with deny
delete permissions for archives less than 7 years old.
D. Create an S3 bucket with versioning disabled. Store statements in S3 One Zone-Infrequent Access (S3 One Zone-IA). Define an S3 Lifecycle
policy to move the data to S3 Glacier Deep Archive after 2 years. Attach an S3 Glacier Vault Lock policy with deny delete permissions for
archives less than 7 years old.
Correct Answer: D
S3 Object Lock protection is maintained regardless of which storage class the object resides in and throughout S3 Lifecycle transitions between
storage classes.
upvoted 2 times
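A boto3 sketch of the retention mechanics discussed above (the option C pattern): a bucket created with Object Lock, a default compliance-mode retention, and a lifecycle transition to Glacier after 2 years. The bucket name and periods are placeholders.

```python
import boto3

s3 = boto3.client("s3")

s3.create_bucket(Bucket="bank-statements-example", ObjectLockEnabledForBucket=True)

s3.put_object_lock_configuration(
    Bucket="bank-statements-example",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Years": 2}},
    },
)

s3.put_bucket_lifecycle_configuration(
    Bucket="bank-statements-example",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "to-glacier-after-2-years",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},
            "Transitions": [{"Days": 730, "StorageClass": "GLACIER"}],
        }]
    },
)
```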
A company hosts a large on-premises MySQL database at its main office that supports an issue tracking system used by employees around the
world. The company already uses AWS for some workloads and has created an Amazon Route 53 entry for the database endpoint that points to
the on-premises database.
Management is concerned about the database being a single point of failure and wants a solutions architect to migrate the database to AWS
without any data loss or downtime.
Which set of actions should the solutions architect implement?
A. Create an Amazon Aurora DB cluster. Use AWS Database Migration Service (AWS DMS) to do a full load from the on-premises database to
Aurora. Update the Route 53 entry for the database to point to the Aurora cluster endpoint, and shut down the on-premises database.
B. During nonbusiness hours, shut down the on-premises database and create a backup. Restore this backup to an Amazon Aurora DB cluster.
When the restoration is complete, update the Route 53 entry for the database to point to the Aurora cluster endpoint, and shut down the on-
premises database.
C. Create an Amazon Aurora DB cluster. Use AWS Database Migration Service (AWS DMS) to do a full load with continuous replication from
the on-premises database to Aurora. When the migration is complete, update the Route 53 entry for the database to point to the Aurora cluster
endpoint, and shut down the on-premises database.
D. Create a backup of the database and restore it to an Amazon Aurora multi-master cluster. This Aurora cluster will be in a master-master
replication configuration with the on-premises database. Update the Route 53 entry for the database to point to the Aurora cluster endpoint,
and shut down the on-premises database.
Correct Answer: C
upvoted 1 times
“Around the world” eliminates the possibility of a maintenance window at night. The other difference is the ability to leverage continuous replication in
the MySQL-to-Aurora case.
upvoted 3 times
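A minimal boto3 sketch of the option C style migration task: full load plus ongoing change data capture so the on-premises database keeps serving traffic until cutover. All ARNs are placeholders; the endpoints and replication instance must already exist.

```python
import json
import boto3

dms = boto3.client("dms")

dms.create_replication_task(
    ReplicationTaskIdentifier="mysql-to-aurora-cdc",
    SourceEndpointArn="arn:aws:dms:eu-west-1:111122223333:endpoint:SOURCE",
    TargetEndpointArn="arn:aws:dms:eu-west-1:111122223333:endpoint:TARGET",
    ReplicationInstanceArn="arn:aws:dms:eu-west-1:111122223333:rep:INSTANCE",
    MigrationType="full-load-and-cdc",  # full load, then continuous replication
    TableMappings=json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-all",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
)
```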
A company has a policy that all Amazon EC2 instances that are running a database must exist within the same subnets in a shared VPC.
Administrators must follow security compliance requirements and are not allowed to directly log in to the shared account. All company accounts
are members of the same organization in AWS Organizations. The number of accounts will rapidly increase as the company grows.
A solutions architect uses AWS Resource Access Manager to create a resource share in the shared account.
What is the MOST operationally efficient configuration to meet these requirements?
A. Add the VPC to the resource share. Add the account IDs as principals
B. Add all subnets within the VPC to the resource share. Add the account IDs as principals
C. Add all subnets within the VPC to the resource share. Add the organization as a principal
D. Add the VPC to the resource share. Add the organization as a principal
Correct Answer: B
Reference:
https://aws.amazon.com/blogs/networking-and-content-delivery/vpc-sharing-a-new-approach-to-multiple-accounts-and-vpc-management/
upvoted 1 times
" # WhyIronMan 1 year ago
I'll go with C
upvoted 1 times
You share the resources of the VPC, which are the subnets in this case, and add the organization as the principal, since the number of accounts will grow in
the future.
https://docs.aws.amazon.com/ram/latest/userguide/ram-ug.pdf
upvoted 4 times
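A minimal boto3 sketch of sharing the database subnets with the whole organization through AWS RAM, as the comment above describes. The subnet ARNs and organization ARN are placeholders.

```python
import boto3

ram = boto3.client("ram")

ram.create_resource_share(
    name="shared-db-subnets",
    resourceArns=[
        "arn:aws:ec2:eu-west-1:111122223333:subnet/subnet-0123456789abcdef0",
        "arn:aws:ec2:eu-west-1:111122223333:subnet/subnet-0fedcba9876543210",
    ],
    # Sharing with the organization means new accounts get access automatically.
    principals=["arn:aws:organizations::111122223333:organization/o-exampleorgid"],
    allowExternalPrincipals=False,
)
```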
A solutions architect is evaluating the reliability of a recently migrated application running on AWS. The front end is hosted on Amazon S3 and
accelerated by
Amazon CloudFront. The application layer is running in a stateless Docker container on an Amazon EC2 On-Demand Instance with an Elastic IP
address. The storage layer is a MongoDB database running on an EC2 Reserved Instance in the same Availability Zone as the application layer.
Which combination of steps should the solutions architect take to eliminate single points of failure with minimal application code changes?
(Choose two.)
A. Create a REST API in Amazon API Gateway and use AWS Lambda functions as the application layer
B. Create an Application Load Balancer and migrate the Docker container to AWS Fargate
E. Create an Application Load Balancer and move the storage layer to an EC2 Auto Scaling group
Correct Answer: AE
https://aws.amazon.com/documentdb/?nc1=h_ls
https://aws.amazon.com/blogs/containers/using-alb-ingress-controller-with-amazon-eks-on-fargate/
upvoted 7 times
A company operates an on-premises software-as-a-service (SaaS) solution that ingests several files daily. The company provides multiple public
SFTP endpoints to its customers to facilitate the file transfers. The customers add the SFTP endpoint IP addresses to their firewall allow list for
outbound traffic. Changes to the
SFTP endpoint IP addresses are not permitted.
The company wants to migrate the SaaS solution to AWS and decrease the operational overhead of the file transfer service.
Which solution meets these requirements?
A. Register the customer-owned block of IP addresses in the company's AWS account. Create Elastic IP addresses from the address pool and
assign them to an AWS Transfer for SFTP endpoint. Use AWS Transfer to store the files in Amazon S3.
B. Add a subnet containing the customer-owned block of IP addresses to a VPC. Create Elastic IP addresses from the address pool and assign
them to an Application Load Balancer (ALB). Launch EC2 instances hosting FTP services in an Auto Scaling group behind the ALB. Store the
files in attached Amazon Elastic Block Store (Amazon EBS) volumes.
C. Register the customer-owned block of IP addresses with Amazon Route 53. Create alias records in Route 53 that point to a Network Load
Balancer (NLB). Launch EC2 instances hosting FTP services in an Auto Scaling group behind the NLB. Store the files in Amazon S3.
D. Register the customer-owned block of IP addresses in the company's AWS account. Create Elastic IP addresses from the address pool and
assign them to an Amazon S3 VPC endpoint. Enable SFTP support on the S3 bucket.
Correct Answer: A
AWS Transfer for SFTP enables you to easily move your file transfer workloads that use the Secure Shell File Transfer Protocol (SFTP) to AWS
without needing to modify your applications or manage any SFTP servers.
https://aws.amazon.com/about-aws/whats-new/2018/11/aws-transfer-for-sftp-fully-managed-sftp-for-s3/
upvoted 5 times
seamlessly migrate your file transfer workflows to AWS by integrating with existing authentication systems, and providing DNS routing with
Amazon Route 53 so nothing changes for your customers and partners, or their applications. With your data in Amazon S3 or Amazon EFS, you
can use it with AWS services for processing, analytics, machine learning, archiving, as well as home directories and developer tools.
upvoted 4 times
https://aws.amazon.com/about-aws/whats-new/2020/01/aws-transfer-for-sftp-supports-vpc-security-groups-and-elastic-ip-addresses/
upvoted 1 times
A company is migrating a legacy application from an on-premises data center to AWS. The application consists of a single application server and
a Microsoft SQL
Server database server. Each server is deployed on a VMware VM that consumes 500 TB of data across multiple attached volumes.
The company has established a 10 Gbps AWS Direct Connect connection from the closest AWS Region to its on-premises data center. The Direct
Connect connection is not currently in use by other services.
Which combination of steps should a solutions architect take to migrate the application with the LEAST amount of downtime? (Choose two.)
A. Use an AWS Server Migration Service (AWS SMS) replication job to migrate the database server VM to AWS.
D. Use an AWS Server Migration Service (AWS SMS) replication job to migrate the application server VM to AWS.
E. Use an AWS Database Migration Service (AWS DMS) replication instance to migrate the database to an Amazon RDS DB instance.
Correct Answer: BE
Maximum storage for RDS SQL Server is 16TB. RDS cannot be part of the solution.
upvoted 2 times
A company is creating a REST API to share information with six of its partners based in the United States. The company has created an Amazon
API Gateway
Regional endpoint. Each of the six partners will access the API once per day to post daily sales figures.
After initial deployment, the company observes 1,000 requests per second originating from 500 different IP addresses around the world. The
company believes this traffic is originating from a botnet and wants to secure its API while minimizing cost.
Which approach should the company take to secure its API?
A. Create an Amazon CloudFront distribution with the API as the origin. Create an AWS WAF web ACL with a rule to block clients that submit
more than five requests per day. Associate the web ACL with the CloudFront distribution. Configure CloudFront with an origin access identity
(OAI) and associate it with the distribution. Configure API Gateway to ensure only the OAI can run the POST method.
B. Create an Amazon CloudFront distribution with the API as the origin. Create an AWS WAF web ACL with a rule to block clients that submit
more than five requests per day. Associate the web ACL with the CloudFront distribution. Add a custom header to the CloudFront distribution
populated with an API key. Configure the API to require an API key on the POST method.
C. Create an AWS WAF web ACL with a rule to allow access to the IP addresses used by the six partners. Associate the web ACL with the API.
Create a resource policy with a request limit and associate it with the API. Configure the API to require an API key on the POST method.
D. Create an AWS WAF web ACL with a rule to allow access to the IP addresses used by the six partners. Associate the web ACL with the API.
Create a usage plan with a request limit and associate it with the API. Create an API key and add it to the usage plan.
Correct Answer: B
"
A rate-based rule tracks the rate of requests for each originating IP address, and triggers the rule action on IPs with rates that go over a limit. You
set the limit as the number of requests per 5-minute time span......
The following caveats apply to AWS WAF rate-based rules:
The minimum rate that you can set is 100.
AWS WAF checks the rate of requests every 30 seconds, and counts requests for the prior five minutes each time. Because of this, it's possible
for an IP address to send requests at too high a rate for 30 seconds before AWS WAF detects and blocks it.
AWS WAF can block up to 10,000 IP addresses. If more than 10,000 IP addresses send high rates of requests at the same time, AWS WAF will
only block 10,000 of them.
"
https://docs.aws.amazon.com/waf/latest/developerguide/waf-rule-statement-type-rate-based.html
upvoted 1 times
----> Answer is B
upvoted 1 times
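For contrast with the rate-based-rule discussion above, a minimal boto3 sketch of the usage-plan half of option D: a daily quota and throttling tied to an API key for a partner. The API ID, stage, and limits are placeholders.

```python
import boto3

apigw = boto3.client("apigateway")

plan = apigw.create_usage_plan(
    name="partner-daily-plan",
    apiStages=[{"apiId": "a1b2c3d4e5", "stage": "prod"}],
    throttle={"rateLimit": 10.0, "burstLimit": 20},
    quota={"limit": 10, "period": "DAY"},  # a few requests per partner per day
)

key = apigw.create_api_key(name="partner-1", enabled=True)

apigw.create_usage_plan_key(
    usagePlanId=plan["id"],
    keyId=key["id"],
    keyType="API_KEY",
)
```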
A company is running its AWS infrastructure across two AWS Regions. The company has four VPCs in the eu-west-1 Region and has two VPCs in
the us-east-1
Region. The company also has an on-premises data center in Europe that has two AWS Direct Connect connections in eu-west-1.
The company needs a solution in which Amazon EC2 instances in each VPC can connect to each other by using private IP addresses. Servers in
the on-premises data center also must be able to connect to those VPCs by using private IP addresses.
What is the MOST cost-effective solution that meets these requirements?
A. Create an AWS Transit Gateway in each Region, and attach each VPC to the transit gateway in that Region. Create cross-Region peering
between the transit gateways. Create two transit VIFs, and attach them to a single Direct Connect gateway. Associate each transit gateway
with the Direct Connect gateway.
B. Create VPC peering between each VPC in the same Region. Create cross-Region peering between each VPC in different Regions. Create two
private VIFs, and attach them to a single Direct Connect gateway. Associate each VPC with the Direct Connect gateway.
C. Create VPC peering between each VPC in the same Region. Create cross-Region peering between each VPC in different Regions. Create two
public VIFs that are con+gured to route AWS IP addresses globally to on-premises servers.
D. Create an AWS Transit Gateway in each Region, and attach each VPC to the transit gateway in that Region. Create cross-Region peering
between the transit gateways. Create two private VIFs, and attach them to a single Direct Connect gateway. Associate each VPC with the
Direct Connect gateway.
Correct Answer: B
While this makes TGW a good default for most network architectures, VPC peering is still a valid choice due to the following advantages it has
over TGW:
Lower cost — With VPC peering you only pay for data transfer charges. Transit Gateway has an hourly charge per attachment in addition to the
data transfer fees.
Latency — Unlike VPC peering, Transit Gateway is an additional hop between VPCs.
upvoted 7 times
A company runs an application that gives users the ability to search for videos and related information by using keywords that are curated from
content providers.
The application data is stored in an on-premises Oracle database that is 800 GB in size.
The company wants to migrate the data to an Amazon Aurora MySQL DB instance. A solutions architect plans to use the AWS Schema Conversion
Tool and
AWS Database Migration Service (AWS DMS) for the migration. During the migration, the existing database must serve ongoing requests. The
migration must be completed with minimum downtime.
Which solution will meet these requirements?
A. Create primary key indexes, secondary indexes, and referential integrity constraints in the target database before starting the migration
process.
B. Use AWS DMS to run the conversion report for Oracle to Aurora MySQL. Remediate any issues. Then use AWS DMS to migrate the data.
D. Turn off automatic backups and logging of the target database until the migration and cutover processes are complete.
Correct Answer: A
Reference:
https://docs.aws.amazon.com/dms/latest/sbs/chap-rdsoracle2aurora.html
A: AWS actually recommends to: "drop primary key indexes, secondary indexes, referential integrity constraints, and data manipulation language
(DML) triggers. Or you can delay their creation until after the full load tasks are complete" --> https://docs.aws.amazon.com/dms/latest/userguide
/CHAP_BestPractices.html
C: M5 doesn't exist --> https://docs.aws.amazon.com/dms/latest/userguide/CHAP_ReplicationInstance.Types.html
D: You can't disable automated backups on Aurora. The backup retention period for Aurora is managed by the DB cluster -->
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Managing.Backups.html
upvoted 12 times
A travel company built a web application that uses Amazon Simple Email Service (Amazon SES) to send email notifications to users. The company
needs to enable logging to help troubleshoot email delivery issues. The company also needs the ability to do searches that are based on recipient,
subject, and time sent.
Which combination of steps should a solutions architect take to meet these requirements? (Choose two.)
A. Create an Amazon SES configuration set with Amazon Kinesis Data Firehose as the destination. Choose to send logs to an Amazon S3
bucket.
B. Enable AWS CloudTrail logging. Specify an Amazon S3 bucket as the destination for the logs.
C. Use Amazon Athena to query the logs in the Amazon S3 bucket for recipient, subject, and time sent.
D. Create an Amazon CloudWatch log group. Configure Amazon SES to send logs to the log group.
E. Use Amazon Athena to query the logs in Amazon CloudWatch for recipient, subject, and time sent.
Correct Answer: A
Reference -
https://docs.aws.amazon.com/ses/latest/DeveloperGuide/ses-dg.pdf
AC
If you simply want a running total of each type of event (for example, so that you can set an alarm when the total gets too high), you can use
CloudWatch.
If you want detailed event records that you can output to another service such as Amazon OpenSearch Service or Amazon Redshift for analysis,
you can use Kinesis Data Firehose.
upvoted 1 times
You can't use any of the following email headers as the Dimension Name: Received, To, From, DKIM-Signature, CC, message-id, or Return-
Path...so A&C
upvoted 1 times
" # bobsmith2000 6 months ago
It's AC.
"The event destination that you choose depends on the level of detail you want about the events, and the way you want to receive the event
information. If you simply want a running total of each type of event (for example, so that you can set an alarm when the total gets too high), you
can use CloudWatch.
If you want detailed event records that you can output to another service such as Amazon OpenSearch Service or Amazon Redshift for analysis,
you can use Kinesis Data Firehose.
If you want to receive notifications when certain events occur, you can use Amazon SNS."
Source:
https://docs.aws.amazon.com/ses/latest/dg/event-publishing-add-event-destination.html
upvoted 2 times
https://docs.aws.amazon.com/ses/latest/dg/event-publishing-retrieving-firehose.html
upvoted 1 times
address from which the request was made, who made the request, when it was made, and so on"
So CloudTrail doesn't log email content, answer should be A, C instead of B
upvoted 3 times
" # student22 1 year, 1 month ago
A,C
SES --> Kinesis Firehose --> S3 --> Query with Athena
upvoted 2 times
You can publish logs and metrics to both CloudWatch and Kinesis Data Firehose, but you can ONLY publish detailed event records to Kinesis Data
Firehose.
And of course, once in Firehose you can put the logs in S3 and analyze them with Athena
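A minimal boto3 sketch of the A + C pipeline: an SES configuration set event destination that streams detailed events through Kinesis Data Firehose into S3, then an Athena query over the delivered logs. The names, ARNs, and the Athena table schema are assumptions for illustration.

```python
import boto3

ses = boto3.client("ses")
athena = boto3.client("athena")

ses.create_configuration_set_event_destination(
    ConfigurationSetName="email-notifications",
    EventDestination={
        "Name": "to-firehose",
        "Enabled": True,
        "MatchingEventTypes": ["send", "delivery", "bounce", "complaint"],
        "KinesisFirehoseDestination": {
            "IAMRoleARN": "arn:aws:iam::111122223333:role/ses-firehose-role",
            "DeliveryStreamARN": "arn:aws:firehose:us-east-1:111122223333:deliverystream/ses-events",
        },
    },
)

# Query the JSON event records once they land in S3 (table created separately).
athena.start_query_execution(
    QueryString="""
        SELECT mail.destination, mail.commonHeaders.subject, mail.timestamp
        FROM ses_events
        WHERE eventType = 'Delivery'
    """,
    QueryExecutionContext={"Database": "email_logs"},
    ResultConfiguration={"OutputLocation": "s3://ses-event-logs/athena-results/"},
)
```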
A company is launching a new web application on Amazon EC2 instances. Development and production workloads exist in separate AWS
accounts.
According to the company's security requirements, only automated configuration tools are allowed to access the production account. The
company's security team wants to receive immediate notification if any manual access to the production AWS account or EC2 instances occurs.
Which combination of actions should a solutions architect take in the production account to meet these requirements? (Choose three.)
A. Turn on AWS CloudTrail logs in the application's primary AWS Region. Use Amazon Athena to query the logs for AwsConsoleSignIn events.
B. Configure Amazon Simple Email Service (Amazon SES) to send email to the security team when an alarm is activated.
C. Deploy EC2 instances in an Auto Scaling group. Configure the launch template to deploy instances without key pairs. Configure Amazon
CloudWatch Logs to capture system access logs. Create an Amazon CloudWatch alarm that is based on the logs to detect when a user logs in
to an EC2 instance.
D. Configure an Amazon Simple Notification Service (Amazon SNS) topic to send a message to the security team when an alarm is activated.
E. Turn on AWS CloudTrail logs for all AWS Regions. Configure Amazon CloudWatch alarms to provide an alert when an AwsConsoleSignIn
event is detected.
F. Deploy EC2 instances in an Auto Scaling group. Configure the launch template to delete the key pair after launch. Configure Amazon
CloudWatch Logs for the system access logs. Create an Amazon CloudWatch dashboard to show user logins over time.
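A boto3 sketch of the D + E detection path: a metric filter on the CloudTrail log group for console sign-in events and an alarm that notifies the security team through SNS. The log group name, namespace, and topic ARN are placeholders.

```python
import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

logs.put_metric_filter(
    logGroupName="CloudTrail/production",
    filterName="console-signin",
    filterPattern='{ $.eventName = "ConsoleLogin" }',
    metricTransformations=[{
        "metricName": "ConsoleSignInCount",
        "metricNamespace": "Security",
        "metricValue": "1",
    }],
)

cloudwatch.put_metric_alarm(
    AlarmName="manual-console-access",
    MetricName="ConsoleSignInCount",
    Namespace="Security",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=0,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:security-alerts"],
)
```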
A company is running a workload that consists of thousands of Amazon EC2 instances. The workload is running in a VPC that contains several
public subnets and private subnets. The public subnets have a route for 0.0.0.0/0 to an existing internet gateway. The private subnets have a route
for 0.0.0.0/0 to an existing NAT gateway.
A solutions architect needs to migrate the entire fleet of EC2 instances to use IPv6. The EC2 instances that are in private subnets must not be
accessible from the public internet.
What should the solutions architect do to meet these requirements?
A. Update the existing VPC, and associate a custom IPv6 CIDR block with the VPC and all subnets. Update all the VPC route tables, and add a
route for ::/0 to the internet gateway.
B. Update the existing VPC, and associate an Amazon-provided IPv6 CIDR block with the VPC and all subnets. Update the VPC route tables for
all private subnets, and add a route for ::/0 to the NAT gateway.
C. Update the existing VPC, and associate an Amazon-provided IPv6 CIDR block with the VPC and all subnets. Create an egress-only internet
gateway. Update the VPC route tables for all private subnets, and add a route for ::/0 to the egress-only internet gateway.
D. Update the existing VPC, and associate a custom IPv6 CIDR block with the VPC and all subnets. Create a new NAT gateway, and enable IPv6
support. Update the VPC route tables for all private subnets, and add a route for ::/0 to the IPv6-enabled NAT gateway.
Correct Answer: C
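A minimal boto3 sketch of option C: associate an Amazon-provided IPv6 block, create an egress-only internet gateway, and route ::/0 from the private subnets through it. All IDs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

ec2.associate_vpc_cidr_block(
    VpcId="vpc-0123456789abcdef0",
    AmazonProvidedIpv6CidrBlock=True,
)

eigw = ec2.create_egress_only_internet_gateway(VpcId="vpc-0123456789abcdef0")

# Private subnets get outbound-only IPv6 internet access.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",
    DestinationIpv6CidrBlock="::/0",
    EgressOnlyInternetGatewayId=eigw["EgressOnlyInternetGateway"]["EgressOnlyInternetGatewayId"],
)
```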
A company is migrating applications from on premises to the AWS Cloud. These applications power the company's internal web forms. These web
forms collect data for specific events several times each quarter. The web forms use simple SQL statements to save the data to a local relational
database.
Data collection occurs for each event, and the on-premises servers are idle most of the time. The company needs to minimize the amount of idle
infrastructure that supports the web forms.
Which solution will meet these requirements?
A. Use Amazon EC2 Image Builder to create AMIs for the legacy servers. Use the AMIs to provision EC2 instances to recreate the applications
in the AWS Cloud. Place an Application Load Balancer (ALB) in front of the EC2 instances. Use Amazon Route 53 to point the DNS names of
the web forms to the ALB.
B. Create one Amazon DynamoDB table to store data for all the data input. Use the application form name as the table key to distinguish data
items. Create an Amazon Kinesis data stream to receive the data input and store the input in DynamoDB. Use Amazon Route 53 to point the
DNS names of the web forms to the Kinesis data stream's endpoint.
C. Create Docker images for each server of the legacy web form applications. Create an Amazon Elastic Container Service (Amazon ECS)
cluster on AWS Fargate. Place an Application Load Balancer in front of the ECS cluster. Use Fargate task storage to store the web form data.
D. Provision an Amazon Aurora Serverless cluster. Build multiple schemas for each web form's data storage. Use Amazon API Gateway and an
AWS Lambda function to recreate the data input forms. Use Amazon Route 53 to point the DNS names of the web forms to their corresponding
API Gateway endpoint.
Correct Answer: B
Reference:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/kds.html
Selected Answer: D
Serverless API + Serverless Business Logic + Serverless DB
upvoted 4 times
" # andylogan 1 year ago
It's D
upvoted 1 times
A company wants to migrate its data analytics environment from on premises to AWS. The environment consists of two simple Node.js
applications. One of the applications collects sensor data and loads it into a MySQL database. The other application aggregates the data into
reports. When the aggregation jobs run, some of the load jobs fail to run correctly.
The company must resolve the data loading issue. The company also needs the migration to occur without interruptions or changes for the
company's customers.
What should a solutions architect do to meet these requirements?
A. Set up an Amazon Aurora MySQL database as a replication target for the on-premises database. Create an Aurora Replica for the Aurora
MySQL database, and move the aggregation jobs to run against the Aurora Replica. Set up collection endpoints as AWS Lambda functions
behind a Network Load Balancer (NLB), and use Amazon RDS Proxy to write to the Aurora MySQL database. When the databases are synced,
disable the replication job and restart the Aurora Replica as the primary instance. Point the collector DNS record to the NLB.
B. Set up an Amazon Aurora MySQL database. Use AWS Database Migration Service (AWS DMS) to perform continuous data replication from
the on-premises database to Aurora. Move the aggregation jobs to run against the Aurora MySQL database. Set up collection endpoints behind
an Application Load Balancer (ALB) as Amazon EC2 instances in an Auto Scaling group. When the databases are synced, point the collector
DNS record to the ALB. Disable the AWS DMS sync task after the cutover from on premises to AWS.
C. Set up an Amazon Aurora MySQL database. Use AWS Database Migration Service (AWS DMS) to perform continuous data replication from
the on-premises database to Aurora. Create an Aurora Replica for the Aurora MySQL database, and move the aggregation jobs to run against
the Aurora Replica. Set up collection endpoints as AWS Lambda functions behind an Application Load Balancer (ALB), and use Amazon RDS
Proxy to write to the Aurora MySQL database. When the databases are synced, point the collector DNS record to the ALB. Disable the AWS
DMS sync task after the cutover from on premises to AWS.
D. Set up an Amazon Aurora MySQL database. Create an Aurora Replica for the Aurora MySQL database, and move the aggregation jobs to run
against the Aurora Replica. Set up collection endpoints as an Amazon Kinesis data stream. Use Amazon Kinesis Data Firehose to replicate the
data to the Aurora MySQL database. When the databases are synced, disable the replication job and restart the Aurora Replica as the primary
instance. Point the collector DNS record to the Kinesis data stream.
Correct Answer: B
upvoted 2 times
" # backfringe 11 months, 2 weeks ago
I'd go with C
upvoted 2 times
A company runs an application in the cloud that consists of a database and a website. Users can post data to the website, have the data
processed, and have the data sent back to them in an email. Data is stored in a MySQL database running on an Amazon EC2 instance. The
database is running in a VPC with two private subnets. The website is running on Apache Tomcat in a single EC2 instance in a different VPC with
one public subnet. There is a single VPC peering connection between the database and website VPC.
The website has suffered several outages during the last month due to high traffic.
Which actions should a solutions architect take to increase the reliability of the application? (Choose three.)
A. Place the Tomcat server in an Auto Scaling group with multiple EC2 instances behind an Application Load Balancer.
C. Migrate the MySQL database to Amazon Aurora with one Aurora Replica.
F. Create an additional public subnet in a different Availability Zone in the website VPC.
A solutions architect is building a web application that uses an Amazon RDS for PostgreSQL DB instance. The DB instance is expected to receive
many more reads than writes. The solutions architect needs to ensure that the large amount of read traffic can be accommodated and that the DB
instance is highly available.
Which steps should the solutions architect take to meet these requirements? (Choose three.)
A. Create multiple read replicas and put them into an Auto Scaling group.
C. Create an Amazon Route 53 hosted zone and a record set for each read replica with a TTL and a weighted routing policy.
D. Create an Application Load Balancer (ALB) and put the read replicas behind the ALB.
E. Configure an Amazon CloudWatch alarm to detect failed read replicas. Set the alarm to directly invoke an AWS Lambda function to delete
its Route 53 record set.
F. Configure an Amazon Route 53 health check for each read replica using its endpoint.
You can incorporate Route 53 health checks to be sure that Route 53 directs traffic away from unavailable read replicas
upvoted 1 times
" # tgv 1 year ago
BBB CCC FFF
---
upvoted 3 times
A solutions architect at a large company needs to set up network security for outbound traffic to the internet from all AWS accounts within an
organization in AWS
Organizations. The organization has more than 100 AWS accounts, and the accounts route to each other by using a centralized AWS Transit
Gateway. Each account has both an internet gateway and a NAT gateway for outbound traffic to the internet. The company deploys resources only
into a single AWS Region.
The company needs the ability to add centrally managed rule-based filtering on all outbound traffic to the internet for all AWS accounts in the
organization. The peak load of outbound traffic will not exceed 25 Gbps in each Availability Zone.
Which solution meets these requirements?
A. Create a new VPC for outbound traffic to the internet. Connect the existing transit gateway to the new VPC. Configure a new NAT gateway.
Create an Auto Scaling group of Amazon EC2 instances that run an open-source internet proxy for rule-based filtering across all Availability
Zones in the Region. Modify all default routes to point to the proxy's Auto Scaling group.
B. Create a new VPC for outbound traffic to the internet. Connect the existing transit gateway to the new VPC. Configure a new NAT gateway.
Use an AWS Network Firewall firewall for rule-based filtering. Create Network Firewall endpoints in each Availability Zone. Modify all default
routes to point to the Network Firewall endpoints.
C. Create an AWS Network Firewall firewall for rule-based filtering in each AWS account. Modify all default routes to point to the Network
Firewall firewalls in each account.
D. In each AWS account, create an Auto Scaling group of network-optimized Amazon EC2 instances that run an open-source internet proxy for
rule-based filtering. Modify all default routes to point to the proxy's Auto Scaling group.
Correct Answer: B
=> B
upvoted 3 times
A company has multiple business units. Each business unit has its own AWS account and runs a single website within that account. The company
also has a single logging account. Logs from each business unit website are aggregated into a single Amazon S3 bucket in the logging account.
The S3 bucket policy provides each business unit with access to write data into the bucket and requires data to be encrypted.
The company needs to encrypt logs uploaded into the bucket using a single AWS Key Management Service (AWS KMS) CMK. The CMK that
protects the data must be rotated once every 365 days.
Which strategy is the MOST operationally efficient for the company to use to meet these requirements?
A. Create a customer managed CMK in the logging account. Update the CMK key policy to provide access to the logging account only.
Manually rotate the CMK every 365 days.
B. Create a customer managed CMK in the logging account. Update the CMK key policy to provide access to the logging account and business
unit accounts. Enable automatic rotation of the CMK.
C. Use an AWS managed CMK in the logging account. Update the CMK key policy to provide access to the logging account and business unit
accounts. Manually rotate the CMK every 365 days.
D. Use an AWS managed CMK in the logging account. Update the CMK key policy to provide access to the logging account only. Enable
automatic rotation of the CMK.
Correct Answer: A
https://docs.aws.amazon.com/kms/latest/developerguide/rotate-keys.html
"Automatic key rotation is disabled by default on customer managed keys but authorized users can enable and disable it. When you enable (or
re-enable) automatic key rotation, AWS KMS automatically rotates the KMS key one year (approximately 365 days) after the enable date and
every year thereafter."
upvoted 2 times
" # etopics 4 months ago
D is correct:
In May 2022, AWS KMS changed the rotation schedule for AWS managed keys from every three years (approximately 1,095 days) to every year
(approximately 365 days).
New AWS managed keys are automatically rotated one year after they are created, and approximately every year thereafter.
Existing AWS managed keys are automatically rotated one year after their most recent rotation, and every year thereafter.
https://docs.aws.amazon.com/kms/latest/developerguide/rotate-keys.html
upvoted 1 times
https://docs.aws.amazon.com/kms/latest/developerguide/rotate-keys.html
upvoted 1 times
https://docs.aws.amazon.com/whitepapers/latest/kms-best-practices/aws-managed-and-customer-managed-cmks.html
upvoted 2 times
BBB
---
upvoted 3 times
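A minimal boto3 sketch for the option B approach debated above: a customer managed key in the logging account with automatic annual rotation enabled; the key policy granting the business unit accounts use of the key is not shown.

```python
import boto3

kms = boto3.client("kms")

key = kms.create_key(Description="Log bucket encryption key (shared with BU accounts)")
key_id = key["KeyMetadata"]["KeyId"]

# Rotates the key material approximately every 365 days.
kms.enable_key_rotation(KeyId=key_id)
```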
A company wants to migrate an application to Amazon EC2 from VMware Infrastructure that runs in an on-premises data center. A solutions
architect must preserve the software and configuration settings during the migration.
What should the solutions architect do to meet these requirements?
A. Configure the AWS DataSync agent to start replicating the data store to Amazon FSx for Windows File Server. Use the SMB share to host the
VMware data store. Use VM Import/Export to move the VMs to Amazon EC2.
B. Use the VMware vSphere client to export the application as an image in Open Virtualization Format (OVF) format. Create an Amazon S3
bucket to store the image in the destination AWS Region. Create and apply an IAM role for VM Import. Use the AWS CLI to run the EC2 import
command.
C. Configure AWS Storage Gateway for files service to export a Common Internet File System (CIFS) share. Create a backup copy to the shared
folder. Sign in to the AWS Management Console and create an AMI from the backup copy. Launch an EC2 instance that is based on the AMI.
D. Create a managed-instance activation for a hybrid environment in AWS Systems Manager. Download and install Systems Manager Agent on
the on-premises VM. Register the VM with Systems Manager to be a managed instance. Use AWS Backup to create a snapshot of the VM and
create an AMI. Launch an EC2 instance that is based on the AMI.
Correct Answer: D
Reference:
https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-install-managed-linux.html
A company is running multiple workloads in the AWS Cloud. The company has separate units for software development. The company uses AWS
Organizations and federation with SAML to give permissions to developers to manage resources in their AWS accounts. The development units
each deploy their production workloads into a common production account.
Recently, an incident occurred in the production account in which members of a development unit terminated an EC2 instance that belonged to a
different development unit. A solutions architect must create a solution that prevents a similar incident from happening in the future. The solution
also must allow developers the possibility to manage the instances used for their workloads.
Which strategy will meet these requirements?
A. Create separate OUs in AWS Organizations for each development unit. Assign the created OUs to the company AWS accounts. Create
separate SCPs with a deny action and a StringNotEquals condition for the DevelopmentUnit resource tag that matches the development unit
name. Assign the SCP to the corresponding OU.
B. Pass an attribute for DevelopmentUnit as an AWS Security Token Service (AWS STS) session tag during SAML federation. Update the IAM
policy for the developers' assumed IAM role with a deny action and a StringNotEquals condition for the DevelopmentUnit resource tag and
aws:PrincipalTag/ DevelopmentUnit.
C. Pass an attribute for DevelopmentUnit as an AWS Security Token Service (AWS STS) session tag during SAML federation. Create an SCP
with an allow action and a StringEquals condition for the DevelopmentUnit resource tag and aws:PrincipalTag/DevelopmentUnit. Assign the
SCP to the root OU.
D. Create separate IAM policies for each development unit. For every IAM policy, add an allow action and a StringEquals condition for the
DevelopmentUnit resource tag and the development unit name. During SAML federation, use AWS Security Token Service (AWS STS) to assign
the IAM policy and match the development unit name to the assumed IAM role.
Correct Answer: B
A - Does not make much sense. An account can only belong to one OU. This is a single production account so it can't be in multiple OUs.
B - Session tag is used to identify which business unit a user is part of. IAM policy prevent them from modifying resources for any business unit
but their own.
C. This does not restrict any existing permissions so users can still modify resources from different business units.
D. STS cannot be used to assign a policy to an IAM role. A policy has to be assigned to the role before authentication occurs.
upvoted 9 times
It's B
upvoted 2 times
" # DerekKey 1 year ago
In my opinion
B is correct - they already have ALLOW therefore we need DENY
C is wrong - since they already have ALLOW permission adding additional ALLOW permission doesn't make sense
upvoted 2 times
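A minimal sketch of the deny statement that option B describes, assuming the SAML IdP passes a DevelopmentUnit session tag (the action list is an assumption); the policy compares the tag on the EC2 instance with the principal's session tag:
import json

# Attach this to the developers' assumed IAM role. It denies instance
# lifecycle actions when the instance's DevelopmentUnit tag does not match
# the DevelopmentUnit session tag passed during SAML federation.
deny_other_units = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyOtherDevelopmentUnits",
            "Effect": "Deny",
            "Action": [
                "ec2:TerminateInstances",
                "ec2:StopInstances",
                "ec2:RebootInstances",
            ],
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {
                    "aws:ResourceTag/DevelopmentUnit": "${aws:PrincipalTag/DevelopmentUnit}"
                }
            },
        }
    ],
}
print(json.dumps(deny_other_units, indent=2))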
A company's factory and automation applications are running in a single VPC. More than 20 applications run on a combination of Amazon EC2,
Amazon Elastic
Container Service (Amazon ECS), and Amazon RDS.
The company has software engineers spread across three teams. One of the three teams owns each application, and each team is responsible for
the cost and performance of all of its applications. Team resources have tags that represent their application and team. The teams use IAM
access for daily activities.
The company needs to determine which costs on the monthly AWS bill are attributable to each application or team. The company also must be
able to create reports to compare costs from the last 12 months and to help forecast costs for the next 12 months. A solutions architect must
recommend an AWS Billing and
Cost Management solution that provides these cost reports.
Which combination of actions will meet these requirements? (Choose three.)
A. Activate the user-defined cost allocation tags that represent the application and the team.
B. Activate the AWS generated cost allocation tags that represent the application and the team.
C. Create a cost category for each application in Billing and Cost Management.
D. By default, IAM users don't have access to the AWS Billing and Cost Management console. You or your account administrator must grant
users access.
F: You can explore your usage and costs using the main graph, the Cost Explorer cost and usage reports, or the Cost Explorer RI reports. You can
view data for up to the last 12 months, forecast how much you're likely to spend for the next 12 months, and get recommendations for what
Reserved Instances to purchase.
upvoted 19 times
Only the root account is able to access billing by default, so it is required to enable IAM access for the teams to control their costs. Then,
F: to be able to see the cost using the cost allocation tags, it is required to enable "Cost Explorer".
upvoted 7 times
1. https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/manage-cost-categories.html
2.https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/manage-cost-categories.html
3. https://docs.aws.amazon.com/cost-management/latest/userguide/ce-enable.html
upvoted 1 times
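For illustration, a short sketch of activating user-defined cost allocation tags programmatically through the Cost Explorer API (the tag keys are assumptions); the same can be done from the Billing console under Cost allocation tags:
import boto3

ce = boto3.client("ce")  # Cost Explorer API

# Activate the user-defined tags so they appear as cost allocation dimensions
# in Cost Explorer and in cost and usage reports.
ce.update_cost_allocation_tags_status(
    CostAllocationTagsStatus=[
        {"TagKey": "application", "Status": "Active"},
        {"TagKey": "team", "Status": "Active"},
    ]
)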
A team collects and routes behavioral data for an entire company. The company runs a Multi-AZ VPC environment with public subnets, private
subnets, and an internet gateway. Each public subnet also contains a NAT gateway. Most of the company's applications read from and write to
Amazon Kinesis Data Streams.
Most of the workloads run in private subnets.
A solutions architect must review the infrastructure. The solution architect needs to reduce costs and maintain the function of the applications.
The solutions architect uses Cost Explorer and notices that the cost in the EC2-Other category is consistently high. A further review shows that
NatGateway-Bytes charges are increasing the cost in the EC2-Other category.
What should the solutions architect do to meet these requirements?
A. Enable VPC Flow Logs. Use Amazon Athena to analyze the logs for traffic that can be removed. Ensure that security groups are blocking
traffic that is responsible for high costs.
B. Add an interface VPC endpoint for Kinesis Data Streams to the VPC. Ensure that applications have the correct IAM permissions to use the
interface VPC endpoint.
C. Enable VPC Flow Logs and Amazon Detective. Review Detective findings for traffic that is not related to Kinesis Data Streams. Configure
security groups to block that traffic.
D. Add an interface VPC endpoint for Kinesis Data Streams to the VPC. Ensure that the VPC endpoint policy allows traffic from the
applications.
Correct Answer: B
https://aws.amazon.com/premiumsupport/knowledge-center/vpc-reduce-nat-gateway-transfer-costs/
VPC endpoint policies enable you to control access by either attaching a policy to a VPC endpoint or by using additional fields in a policy that is
attached to an IAM user, group, or role to restrict access to only occur via the specified VPC endpoint
upvoted 7 times
Since "a default policy gets attached for you to allow full access to the service" when you create the endpoint you don't really need to ensure
that the VPC endpoint policy allows traffic from the applications. But I guess this is just AWS way to confuse us
upvoted 1 times
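As a sketch of the interface endpoint that options B and D rely on (region, VPC, subnet, and security group IDs are placeholders), so that Kinesis traffic from the private subnets no longer goes through the NAT gateways:
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Interface endpoint for Kinesis Data Streams in the private subnets.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",                        # placeholder VPC
    ServiceName="com.amazonaws.us-east-1.kinesis-streams",
    SubnetIds=["subnet-0aaa", "subnet-0bbb"],              # placeholder private subnets
    SecurityGroupIds=["sg-0ccc"],                          # must allow 443 from the workloads
    PrivateDnsEnabled=True,
)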
A company is using multiple AWS accounts. The company has a shared service account and several other accounts for different projects.
A team has a VPC in a project account. The team wants to connect this VPC to a corporate network through an AWS Direct Connect gateway that
exists in the shared services account. The team wants to automatically perform a virtual private gateway association with the Direct Connect
gateway by using an already-tested AWS Lambda function while deploying its VPC networking stack. The Lambda function code can assume a
role by using AWS Security Token Service
(AWS STS). The team is using AWS CloudFormation to deploy its infrastructure.
Which combination of steps will meet these requirements? (Choose three.)
A. Deploy the Lambda function to the project account. Update the Lambda function's IAM role with the directconnect:* permission.
B. Create a cross-account IAM role in the shared services account that grants the Lambda function the directconnect:* permission. Add the
sts:AssumeRole permission to the IAM role that is associated with the Lambda function in the shared services account.
C. Add a custom resource to the CloudFormation networking stack that references the Lambda function in the project account.
D. Deploy the Lambda function that is performing the association to the shared services account. Update the Lambda function's IAM role with
the directconnect:* permission.
E. Create a cross-account IAM role in the shared services account that grants the sts:AssumeRole permission to the Lambda function with the
directconnect:* permission acting as a resource. Add the sts:AssumeRole permission with this cross-account IAM role as a resource to the
IAM role that belongs to the Lambda function in the project account.
F. Add a custom resource to the CloudFormation networking stack that references the Lambda function in the shared services account.
upvoted 1 times
Badly worded answers
upvoted 1 times
"The owner of the virtual private gateway creates an association proposal and the owner of the Direct Connect gateway must accept the
association proposal."
So it makes sense in this case that the project account would create a virtual gateway association first, and then assume the cross-account role
to accept the association in the shared services account.
upvoted 4 times
Actually, the Lambda function can be created in either the shared services account or the project account. If the Lambda function is created in the
shared services account, you need to grant your CloudFormation custom resource permission to call the Lambda function, so option F is incomplete.
Option B is wrong because in that case sts:AssumeRole is not needed. Only when the Lambda function is in the other account and needs to assume
the role do you need to grant the sts:AssumeRole permission. So ACE is the answer.
upvoted 7 times
" # blackgamer 1 year, 1 month ago
ACE is the answer
upvoted 1 times
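A rough sketch of the cross-account flow the comments above describe (role name, IDs, and account numbers are hypothetical): the Lambda function in the project account proposes the association for its virtual private gateway, then assumes the cross-account role in the shared services account to accept the proposal.
import boto3

PROJECT_VGW_ID = "vgw-0123456789abcdef0"         # placeholder
DX_GATEWAY_ID = "dxgw-0123456789abcdef0"         # placeholder, owned by shared services
SHARED_SERVICES_ACCOUNT = "111111111111"         # placeholder account ID
CROSS_ACCOUNT_ROLE = f"arn:aws:iam::{SHARED_SERVICES_ACCOUNT}:role/DxAssociationRole"

sts = boto3.client("sts")
project_account = sts.get_caller_identity()["Account"]

# 1) In the project account: propose associating the VGW with the DX gateway.
dx_project = boto3.client("directconnect")
proposal = dx_project.create_direct_connect_gateway_association_proposal(
    directConnectGatewayId=DX_GATEWAY_ID,
    directConnectGatewayOwnerAccount=SHARED_SERVICES_ACCOUNT,
    gatewayId=PROJECT_VGW_ID,
)["directConnectGatewayAssociationProposal"]

# 2) Assume the cross-account role in the shared services account and accept it.
creds = sts.assume_role(
    RoleArn=CROSS_ACCOUNT_ROLE, RoleSessionName="vpc-networking-stack"
)["Credentials"]
dx_shared = boto3.client(
    "directconnect",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
dx_shared.accept_direct_connect_gateway_association_proposal(
    directConnectGatewayId=DX_GATEWAY_ID,
    proposalId=proposal["proposalId"],
    associatedGatewayOwnerAccount=project_account,
)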
A company is running a line-of-business (LOB) application on AWS to support its users. The application runs in one VPC, with a backup copy in a
second VPC in a different AWS Region for disaster recovery. The company has a single AWS Direct Connect connection between its on-premises
network and AWS. The connection terminates at a Direct Connect gateway.
All access to the application must originate from the company's on-premises network and traffic must be encrypted in transit through the use of
IPsec. The company is routing traffic through a VPN tunnel over the Direct Connect connection to provide the required encryption.
A business continuity audit determines that the Direct Connect connection represents a potential single point of failure for access to the
application. The company needs to remediate this issue as quickly as possible.
Which approach will meet these requirements?
A. Order a second Direct Connect connection to a different Direct Connect location. Terminate the second Direct Connect connection at the
same Direct Connect gateway.
B. Configure an AWS Site-to-Site VPN connection over the internet. Terminate the VPN connection at a virtual private gateway in the secondary
Region.
C. Create a transit gateway. Attach the VPCs to the transit gateway, and connect the transit gateway to the Direct Connect gateway. Configure
an AWS Site-to-Site VPN connection, and terminate it at the transit gateway.
D. Create a transit gateway. Attach the VPCs to the transit gateway, and connect the transit gateway to the Direct Connect gateway. Order a
second Direct Connect connection, and terminate it at the transit gateway.
Correct Answer: B
Selected Answer: B
I would choose B for its simplicity and not having to order a second DX
https://docs.aws.amazon.com/vpn/latest/s2svpn/VPNTunnels.html
A is the best solution because 1 DX is a point of failure; we have to address it by ordering a second one.
upvoted 2 times
B is just a disaster recovery site to store copy of the primary site. Also terminating the VPN to a private gateway will not help the primary region.
upvoted 5 times
A large company in Europe plans to migrate its applications to the AWS Cloud. The company uses multiple AWS accounts for various business
groups. A data privacy law requires the company to restrict developers' access to AWS European Regions only.
What should the solutions architect do to meet this requirement with the LEAST amount of management overhead?
A. Create IAM users and IAM groups in each account. Create IAM policies to limit access to non-European Regions. Attach the IAM policies to
the IAM groups.
B. Enable AWS Organizations, attach the AWS accounts, and create OUs for European Regions and non-European Regions. Create SCPs to limit
access to non-European Regions and attach the policies to the OUs.
C. Set up AWS Single Sign-On and attach AWS accounts. Create permission sets with policies to restrict access to non-European Regions.
Create IAM users and IAM groups in each account.
D. Enable AWS Organizations, attach the AWS accounts, and create OUs for European Regions and non-European Regions. Create permission
sets with policies to restrict access to non-European Regions. Create IAM users and IAM groups in the primary account.
Correct Answer: B
B is wrong, because each account (meaning each business unit) has developers, meaning there are IAM users in each account who should have
access to AWS European Regions only. There is no point in creating OUs for European Regions and non-European Regions. We can simply create
only one OU and attach an SCP to that OU or to the root OU.
upvoted 1 times
upvoted 1 times
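For reference, a sketch of the kind of region-restriction SCP that answer B relies on (the exempted global services and the Region list are assumptions based on the AWS deny-requested-region example policy):
import json

deny_outside_europe = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyAllOutsideEurope",
            "Effect": "Deny",
            # Global services such as IAM, Organizations, and Route 53 are
            # exempted because their API calls are not Region-scoped.
            "NotAction": [
                "iam:*",
                "organizations:*",
                "route53:*",
                "cloudfront:*",
                "support:*",
                "sts:*",
            ],
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {
                    "aws:RequestedRegion": [
                        "eu-west-1", "eu-west-2", "eu-west-3",
                        "eu-central-1", "eu-north-1",
                    ]
                }
            },
        }
    ],
}
print(json.dumps(deny_outside_europe, indent=2))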
A company has several applications running in an on-premises data center. The data center runs a mix of Windows and Linux VMs managed by
VMware vCenter.
A solutions architect needs to create a plan to migrate the applications to AWS. However, the solutions architect discovers that the documentation for
the applications is not up to date and that there are no complete infrastructure diagrams. The company's developers lack time to discuss their
applications and current usage with the solutions architect.
What should the solutions architect do to gather the required information?
A. Deploy the AWS Server Migration Service (AWS SMS) connector using the OVA image on the VMware cluster to collect configuration and
utilization data from the VMs.
B. Use the AWS Migration Portfolio Assessment (MPA) tool to connect to each of the VMs to collect the configuration and utilization data.
C. Install the AWS Application Discovery Service on each of the VMs to collect the configuration and utilization data.
D. Register the on-premises VMs with the AWS Migration Hub to collect configuration and utilization data.
Correct Answer: C
Reference:
https://www.youtube.com/watch?v=aq6ohCf6PBo
https://docs.aws.amazon.com/application-discovery/latest/userguide/discovery-connector.html
"We recommended that all customers currently using Discovery Connector transition to the new Agentless Collector. Customer's currently using
Discovery Connector can continue to do so until Aug 31, 2023. After this date, data sent to AWS Application Discovery Service by Discovery
Connector will not be processed. Going forward, Application Discovery Service Agentless Collector is the supported discovery tool for agentless
data collection by AWS Application Discovery Service. "
upvoted 1 times
The AWS Application Discovery Agentless Connector is delivered as an Open Virtual Appliance (OVA) package that can be deployed to a VMware
host. Once configured with credentials to connect to vCenter, the Discovery Connector collects VM inventory, configuration, and performance
history such as CPU, memory, and disk usage and uploads it to Application Discovery Service data store.
upvoted 1 times
" # acloudguru 11 months, 1 week ago
C, easy one. Hope I can have it in my exam.
upvoted 1 times
A company has 50 AWS accounts that are members of an organization in AWS Organizations. Each account contains multiple VPCs. The company
wants to use
AWS Transit Gateway to establish connectivity between the VPCs in each member account. Each time a new member account is created, the
company wants to automate the process of creating a new VPC and a transit gateway attachment.
Which combination of steps will meet these requirements? (Choose two.)
A. From the management account, share the transit gateway with member accounts by using AWS Resource Access Manager.
B. From the management account, share the transit gateway with member accounts by using an AWS Organizations SCP.
C. Launch an AWS CloudFormation stack set from the management account that automatically creates a new VPC and a VPC transit gateway
attachment in a member account. Associate the attachment with the transit gateway in the management account by using the transit gateway
ID.
D. Launch an AWS CloudFormation stack set from the management account that automatically creates a new VPC and a peering transit
gateway attachment in a member account. Share the attachment with the transit gateway in the management account by using a transit
gateway service-linked role.
E. From the management account, share the transit gateway with member accounts by using AWS Service Catalog.
Correct Answer: AC
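A minimal sketch of option A (ARNs and IDs are placeholders): share the transit gateway from the management account with the whole organization through AWS RAM, so the stack set in each member account can create its VPC attachment against the shared transit gateway ID.
import boto3

ram = boto3.client("ram")

# Share the transit gateway with every account in the organization.
# Sharing with the organization ARN (requires resource sharing with AWS
# Organizations to be enabled) avoids re-sharing when new accounts join.
ram.create_resource_share(
    name="org-transit-gateway",
    resourceArns=[
        "arn:aws:ec2:eu-west-1:111111111111:transit-gateway/tgw-0123456789abcdef0"  # placeholder
    ],
    principals=["arn:aws:organizations::111111111111:organization/o-exampleorgid"],  # placeholder
    allowExternalPrincipals=False,
)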
A scientific company needs to process text and image data from an Amazon S3 bucket. The data is collected from several radar stations during a
live, time-critical phase of a deep space mission. The radar stations upload the data to the source S3 bucket. The data is prefixed by radar station
identification number.
The company created a destination S3 bucket in a second account. Data must be copied from the source S3 bucket to the destination S3 bucket
to meet a compliance objective. The replication occurs through the use of an S3 replication rule to cover all objects in the source S3 bucket.
One specific radar station is identified as having the most accurate data. Data replication at this radar station must be monitored for completion
within 30 minutes after the radar station uploads the objects to the source S3 bucket.
What should a solutions architect do to meet these requirements?
A. Set up an AWS DataSync agent to replicate the prefixed data from the source S3 bucket to the destination S3 bucket. Select to use all
available bandwidth on the task, and monitor the task to ensure that it is in the TRANSFERRING status. Create an Amazon EventBridge
(Amazon CloudWatch Events) rule to trigger an alert if this status changes.
B. In the second account, create another S3 bucket to receive data from the radar station with the most accurate data. Set up a new
replication rule for this new S3 bucket to separate the replication from the other radar stations. Monitor the maximum replication time to the
destination. Create an Amazon EventBridge (Amazon CloudWatch Events) rule to trigger an alert when the time exceeds the desired threshold.
C. Enable Amazon S3 Transfer Acceleration on the source S3 bucket, and configure the radar station with the most accurate data to use the
new endpoint. Monitor the S3 destination bucket's TotalRequestLatency metric. Create an Amazon EventBridge (Amazon CloudWatch Events)
rule to trigger an alert if this status changes.
D. Create a new S3 replication rule on the source S3 bucket that filters for the keys that use the prefix of the radar station with the most
accurate data. Enable S3 Replication Time Control (S3 RTC). Monitor the maximum replication time to the destination. Create an Amazon
EventBridge (Amazon CloudWatch Events) rule to trigger an alert when the time exceeds the desired threshold.
Correct Answer: A
https://aws.amazon.com/about-aws/whats-new/2019/11/amazon-s3-replication-time-control-for-predictable-replication-time-backed-by-sla
upvoted 1 times
https://cloudcompiled.com/tutorials/aws-datasync-transfer-data/
https://aws.amazon.com/blogs/storage/how-to-use-aws-datasync-to-migrate-data-between-amazon-s3-buckets/
upvoted 1 times
" # Gaurav_GGG 10 months, 2 weeks ago
D only talks about expediting the transfer of the precise data. What about the rest of the data? No option talks about it, so I am a little confused.
upvoted 1 times
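A sketch of what option D describes (bucket names, the station prefix, and the role ARN are placeholders): a prefix-filtered replication rule with S3 Replication Time Control enabled, which also publishes the replication metrics that an EventBridge alert can watch.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_replication(
    Bucket="radar-source-bucket",  # placeholder source bucket
    ReplicationConfiguration={
        "Role": "arn:aws:iam::111111111111:role/s3-replication-role",  # placeholder
        "Rules": [
            {
                "ID": "priority-radar-station",
                "Priority": 1,
                "Status": "Enabled",
                "Filter": {"Prefix": "station-42/"},  # placeholder radar station prefix
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {
                    "Bucket": "arn:aws:s3:::radar-destination-bucket",  # placeholder
                    "Account": "222222222222",                          # destination account
                    "AccessControlTranslation": {"Owner": "Destination"},
                    # S3 RTC: replicate within 15 minutes and emit metrics/events.
                    "ReplicationTime": {"Status": "Enabled", "Time": {"Minutes": 15}},
                    "Metrics": {"Status": "Enabled", "EventThreshold": {"Minutes": 15}},
                },
            }
        ],
    },
)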
A company is serving files to its customers through an SFTP server that is accessible over the Internet. The SFTP server is running on a single
Amazon EC2 instance with an Elastic IP address attached. Customers connect to the SFTP server through its Elastic IP address and use SSH for
authentication. The EC2 instance also has an attached security group that allows access from all customer IP addresses.
A solutions architect must implement a solution to improve availability, minimize the complexity of infrastructure management, and minimize the
disruption to customers who access files. The solution must not change the way customers connect.
Which solution will meet these requirements?
A. Disassociate the Elastic IP address from the EC2 instance. Create an Amazon S3 bucket to be used for SFTP file hosting. Create an AWS
Transfer Family server. Configure the Transfer Family server with a publicly accessible endpoint. Associate the SFTP Elastic IP address with
the new endpoint. Point the Transfer Family server to the S3 bucket. Sync all files from the SFTP server to the S3 bucket.
B. Disassociate the Elastic IP address from the EC2 instance. Create an Amazon S3 bucket to be used for SFTP file hosting. Create an AWS
Transfer Family server. Configure the Transfer Family server with a VPC-hosted, Internet-facing endpoint. Associate the SFTP Elastic IP
address with the new endpoint. Attach the security group with customer IP addresses to the new endpoint. Point the Transfer Family server to
the S3 bucket. Sync all files from the SFTP server to the S3 bucket.
C. Disassociate the Elastic IP address from the EC2 instance. Create a new Amazon Elastic File System (Amazon EFS) file system to be used
for SFTP file hosting. Create an AWS Fargate task definition to run an SFTP server. Specify the EFS file system as a mount in the task
definition. Create a Fargate service by using the task definition, and place a Network Load Balancer (NLB) in front of the service. When
configuring the service, attach the security group with customer IP addresses to the tasks that run the SFTP server. Associate the Elastic IP
address with the NLB. Sync all files from the SFTP server to the S3 bucket.
D. Disassociate the Elastic IP address from the EC2 instance. Create a multi-attach Amazon Elastic Block Store (Amazon EBS) volume to be
used for SFTP file hosting. Create a Network Load Balancer (NLB) with the Elastic IP address attached. Create an Auto Scaling group with EC2
instances that run an SFTP server. Define in the Auto Scaling group that instances that are launched should attach the new multi-attach EBS
volume. Configure the Auto Scaling group to automatically add instances behind the NLB. Configure the Auto Scaling group to use the security
group that allows customer IP addresses for the EC2 instances that the Auto Scaling group launches. Sync all files from the SFTP server to the
new multi-attach EBS volume.
Correct Answer: B
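A rough sketch of option B (all IDs are placeholders): create the Transfer Family SFTP server with a VPC-hosted, internet-facing endpoint and attach the existing Elastic IP allocation and the customer security group, so clients keep connecting to the same address.
import boto3

transfer = boto3.client("transfer")

server = transfer.create_server(
    Protocols=["SFTP"],
    IdentityProviderType="SERVICE_MANAGED",  # SSH-key based users managed by Transfer Family
    Domain="S3",                             # files land in the S3 bucket
    EndpointType="VPC",
    EndpointDetails={
        "VpcId": "vpc-0123456789abcdef0",             # placeholder
        "SubnetIds": ["subnet-0aaa"],                 # placeholder public subnet
        "AddressAllocationIds": ["eipalloc-0ccc"],    # the Elastic IP freed from the EC2 instance
        "SecurityGroupIds": ["sg-0ddd"],              # existing customer allow-list
    },
)
print("Transfer Family server:", server["ServerId"])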
A company is running an application distributed over several Amazon EC2 instances in an Auto Scaling group behind an Application Load
Balancer. The security team requires that all application access attempts be made available for analysis. Information about the client IP address,
connection type, and user agent must be included.
Which solution will meet these requirements?
A. Enable EC2 detailed monitoring, and include network logs. Send all logs through Amazon Kinesis Data Firehose to an Amazon Elasticsearch
Service (Amazon ES) cluster that the security team uses for analysis.
B. Enable VPC Flow Logs for all EC2 instance network interfaces. Publish VPC Flow Logs to an Amazon S3 bucket. Have the security team use
Amazon Athena to query and analyze the logs.
C. Enable access logs for the Application Load Balancer, and publish the logs to an Amazon S3 bucket. Have the security team use Amazon
Athena to query and analyze the logs.
D. Enable Traffic Mirroring and specify all EC2 instance network interfaces as the source. Send all traffic information through Amazon Kinesis
Data Firehose to an Amazon Elasticsearch Service (Amazon ES) cluster that the security team uses for analysis.
Correct Answer: C
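As a sketch of option C (the ARN and bucket are placeholders), enabling access logging is a load balancer attribute change; the logs include the client IP, connection type, and user agent fields that Athena can query.
import boto3

elbv2 = boto3.client("elbv2")

elbv2.modify_load_balancer_attributes(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:111111111111:loadbalancer/app/web/0123456789abcdef",  # placeholder
    Attributes=[
        {"Key": "access_logs.s3.enabled", "Value": "true"},
        {"Key": "access_logs.s3.bucket", "Value": "my-alb-access-logs"},  # bucket policy must allow ELB log delivery
        {"Key": "access_logs.s3.prefix", "Value": "web-app"},
    ],
)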
A company is running a legacy application on Amazon EC2 instances in multiple Availability Zones behind a software load balancer that runs on
an active/standby set of EC2 instances. For disaster recovery, the company has created a warm standby version of the application environment
that is deployed in another AWS
Region. The domain for the application uses a hosted zone from Amazon Route 53.
The company needs the application to use static IP addresses, even in the case of a failover event to the secondary Region. The company also
requires the client's source IP address to be available for auditing purposes.
Which solution meets these requirements with the LEAST amount of operational overhead?
A. Replace the software load balancer with an AWS Application Load Balancer. Create an AWS Global Accelerator accelerator. Add an endpoint
group for each Region. Configure Route 53 health checks. Add an alias record that points to the accelerator.
B. Replace the software load balancer with an AWS Network Load Balancer. Create an AWS Global Accelerator accelerator. Add an endpoint
group for each Region. Configure Route 53 health checks. Add a CNAME record that points to the DNS name of the accelerator.
C. Replace the software load balancer with an AWS Application Load Balancer. Use AWS Global Accelerator to create two separate
accelerators. Add an endpoint group for each Region. Configure Route 53 health checks. Add a record set that is configured for active-passive
DNS failover. Point the record set to the DNS names of the two accelerators.
D. Replace the software load balancer with an AWS Network Load Balancer. Use AWS Global Accelerator to create two separate accelerators.
Add an endpoint group for each Region. Configure Route 53 health checks. Add a record set that is configured for weighted round-robin DNS
failover. Point the record set to the DNS names of the two accelerators.
Correct Answer: C
https://docs.aws.amazon.com/global-accelerator/latest/dg/about-endpoints-endpoint-weights.html
upvoted 2 times
B and D are out because Global Accelerator does not support client IP address preservation for Network Load Balancer and Elastic IP address
endpoints.
C is also out because it creates two separate accelerators. It needs separate endpoint groups in the same accelerator instead.
Thus the answer is A.
upvoted 1 times
https://docs.aws.amazon.com/whitepapers/latest/disaster-recovery-workloads-on-aws/disaster-recovery-options-in-the-cloud.html
upvoted 1 times
" # ryu10_09 11 months, 3 weeks ago
***accelerator DOES NOT SUPPORT client ip address preservation for NLB*****
answer is B
upvoted 1 times
Between A and B, A is the better option as it's easier to preserve the client IP with an ALB.
Answer: A
See - https://docs.aws.amazon.com/global-accelerator/latest/dg/getting-started.html#getting-started-add-endpoints
upvoted 2 times
A company maintains a restaurant review website. The website is a single-page application where files are stored in Amazon S3 and delivered
using Amazon
CloudFront. The company receives several fake postings every day that are manually removed.
The security team has identified that most of the fake posts are from bots with IP addresses that have a bad reputation within the same global
region. The team needs to create a solution to help restrict the bots from accessing the website.
Which strategy should a solutions architect use?
A. Use AWS Firewall Manager to control the CloudFront distribution security settings. Create a geographical block rule and associate it with
Firewall Manager.
B. Associate an AWS WAF web ACL with the CloudFront distribution. Select the managed Amazon IP reputation rule group for the web ACL
with a deny action.
C. Use AWS Firewall Manager to control the CloudFront distribution security settings. Select the managed Amazon IP reputation rule group
and associate it with Firewall Manager with a deny action.
D. Associate an AWS WAF web ACL with the CloudFront distribution. Create a rule group for the web ACL with a geographical match statement
with a deny action.
Correct Answer: C
A software company has deployed an application that consumes a REST API by using Amazon API Gateway, AWS Lambda functions, and an
Amazon
DynamoDB table. The application is showing an increase in the number of errors during PUT requests. Most of the PUT calls come from a small
number of clients that are authenticated with specific API keys.
A solutions architect has identified that a large number of the PUT requests originate from one client. The API is noncritical, and clients can
tolerate retries of unsuccessful calls. However, the errors are displayed to customers and are causing damage to the API's reputation.
What should the solutions architect recommend to improve the customer experience?
A. Implement retry logic with exponential backoff and irregular variation in the client application. Ensure that the errors are caught and
handled with descriptive error messages.
B. Implement API throttling through a usage plan at the API Gateway level. Ensure that the client application handles code 429 replies without
error.
C. Turn on API caching to enhance responsiveness for the production stage. Run 10-minute load tests. Verify that the cache capacity is
appropriate for the workload.
D. Implement reserved concurrency at the Lambda function level to provide the resources that are needed during sudden increases in traffic.
Correct Answer: C
API Gateway recommends that you run a 10-minute load test to verify that your cache capacity is appropriate for your workload.
Reference:
https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-caching.html
The concern is "faults are visible to clients, jeopardizing the API's reputation", which implies no failure or error should be made visible to the client.
The app can retry from the backend in the event of a 429. Hence B is preferred.
upvoted 2 times
" # Hasitha99 6 months, 3 weeks ago
Selected Answer: B
API gateway support based on customers (since they are using API keys)
upvoted 2 times
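A sketch of the usage plan behind option B (API ID, stage, key ID, and limits are placeholders): per-client throttling tied to the existing API keys, so the noisy client gets 429 responses that its retry logic can absorb.
import boto3

apigw = boto3.client("apigateway")

# Usage plan with modest steady-state and burst limits for the stage.
plan = apigw.create_usage_plan(
    name="standard-clients",
    apiStages=[{"apiId": "a1b2c3d4e5", "stage": "prod"}],  # placeholder API ID/stage
    throttle={"rateLimit": 50.0, "burstLimit": 100},
)

# Attach an existing API key to the plan; each client's key gets these limits.
apigw.create_usage_plan_key(
    usagePlanId=plan["id"],
    keyId="k9y8x7w6v5",   # placeholder API key ID
    keyType="API_KEY",
)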
A medical company is running an application in the AWS Cloud. The application simulates the effect of medical drugs in development.
The application consists of two parts: configuration and simulation. The configuration part runs in AWS Fargate containers in an Amazon Elastic
Container Service
(Amazon ECS) cluster. The simulation part runs on large, compute optimized Amazon EC2 instances. Simulations can restart if they are
interrupted.
The configuration part runs 24 hours a day with a steady load. The simulation part runs only for a few hours each night with a variable load. The
company stores simulation results in Amazon S3, and researchers use the results for 30 days. The company must store simulations for 10 years
and must be able to retrieve the simulations within 5 hours.
Which solution meets these requirements MOST cost-effectively?
A. Purchase an EC2 Instance Savings Plan to cover the usage for the configuration part. Run the simulation part by using EC2 Spot Instances.
Create an S3 Lifecycle policy to transition objects that are older than 30 days to S3 Intelligent-Tiering.
B. Purchase an EC2 Instance Savings Plan to cover the usage for the configuration part and the simulation part. Create an S3 Lifecycle policy
to transition objects that are older than 30 days to S3 Glacier.
C. Purchase Compute Savings Plans to cover the usage for the configuration part. Run the simulation part by using EC2 Spot Instances. Create
an S3 Lifecycle policy to transition objects that are older than 30 days to S3 Glacier.
D. Purchase Compute Savings Plans to cover the usage for the configuration part. Purchase EC2 Reserved Instances for the simulation part.
Create an S3 Lifecycle policy to transition objects that are older than 30 days to S3 Glacier Deep Archive.
Correct Answer: D
Reference:
https://aws.amazon.com/savingsplans/faq/
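A minimal sketch of the lifecycle rule used in options B, C, and D (the bucket name is a placeholder): transition simulation results to S3 Glacier 30 days after creation, which still allows retrieval within the 5-hour window via standard retrievals.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="simulation-results-bucket",  # placeholder
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-after-30-days",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to every object
                "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
            }
        ]
    },
)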
A company manages multiple AWS accounts by using AWS Organizations. Under the root OU, the company has two OUs: Research and DataOps.
Because of regulatory requirements, all resources that the company deploys in the organization must reside in the ap-northeast-1 Region.
Additionally, EC2 instances that the company deploys in the DataOps OU must use a predefined list of instance types.
A solutions architect must implement a solution that applies these restrictions. The solution must maximize operational efficiency and must
minimize ongoing maintenance.
Which combination of steps will meet these requirements? (Choose two.)
A. Create an IAM role in one account under the DataOps OU. Use the ec2:InstanceType condition key in an inline policy on the role to restrict
access to specific instance type.
B. Create an IAM user in all accounts under the root OU. Use the aws:RequestedRegion condition key in an inline policy on each user to restrict
access to all AWS Regions except ap-northeast-1.
C. Create an SCP. Use the aws:RequestedRegion condition key to restrict access to all AWS Regions except ap-northeast-1. Apply the SCP to
the root OU.
D. Create an SCP. Use the ec2:Region condition key to restrict access to all AWS Regions except ap-northeast-1. Apply the SCP to the root OU,
the DataOps OU, and the Research OU.
E. Create an SCP. Use the ec2:InstanceType condition key to restrict access to specific instance types. Apply the SCP to the DataOps OU.
Correct Answer: BC
Reference:
https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_examples_aws_deny-requested-region.html https://summitroute.com
/blog/2020/03/25/aws_scp_best_practices/
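For illustration, a sketch of the instance-type SCP from option E (the allowed types below are assumptions): deny ec2:RunInstances on the instance resource unless the requested type is on the predefined list, then attach it to the DataOps OU.
import json

restrict_instance_types = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyUnapprovedInstanceTypes",
            "Effect": "Deny",
            "Action": "ec2:RunInstances",
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {
                "StringNotEquals": {
                    # Hypothetical predefined list for the DataOps OU.
                    "ec2:InstanceType": ["t3.micro", "t3.small", "m5.large"]
                }
            },
        }
    ],
}
print(json.dumps(restrict_instance_types, indent=2))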
A company is hosting an image-processing service on AWS in a VPC. The VPC extends across two Availability Zones. Each Availability Zone
contains one public subnet and one private subnet.
The service runs on Amazon EC2 instances in the private subnets. An Application Load Balancer in the public subnets is in front of the service.
The service needs to communicate with the internet and does so through two NAT gateways. The service uses Amazon S3 for image storage. The
EC2 instances retrieve approximately 1 TB of data from an S3 bucket each day.
The company has promoted the service as highly secure. A solutions architect must reduce cloud expenditures as much as possible without
compromising the service's security posture or increasing the time spent on ongoing operations.
Which solution will meet these requirements?
A. Replace the NAT gateways with NAT instances. In the VPC route table, create a route from the private subnets to the NAT instances.
B. Move the EC2 instances to the public subnets. Remove the NAT gateways.
C. Set up an S3 gateway VPC endpoint in the VPC. Attach an endpoint policy to the endpoint to allow the required actions on the S3 bucket.
D. Attach an Amazon Elastic File System (Amazon EFS) volume to the EC2 instances. Host the image on the EFS volume.
Correct Answer: C
Create Amazon S3 gateway endpoint in the VPC and add a VPC endpoint policy. This VPC endpoint policy will have a statement that allows S3
access only via access points owned by the organization.
Reference:
https://lifesciences-resources.awscloud.com/aws-storage-blog/managing-amazon-s3-access-with-vpc-endpoints-and-s3-access-points?
Languages=Korean
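A sketch of option C (IDs and the bucket name are placeholders): a gateway endpoint for S3 added to the private route tables, with an endpoint policy scoped to the image bucket, so the daily S3 traffic no longer traverses the NAT gateways.
import json
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

endpoint_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::image-processing-bucket",      # placeholder bucket
                "arn:aws:s3:::image-processing-bucket/*",
            ],
        }
    ],
}

ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",            # placeholder
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0aaa", "rtb-0bbb"],   # private route tables in both Availability Zones
    PolicyDocument=json.dumps(endpoint_policy),
)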
A company needs to implement a patching process for its servers. The on-premises servers and Amazon EC2 instances use a variety of tools to
perform patching.
Management requires a single report showing the patch status of all the servers and instances.
Which set of actions should a solutions architect take to meet these requirements?
A. Use AWS Systems Manager to manage patches on the on-premises servers and EC2 instances. Use Systems Manager to generate patch
compliance reports
B. Use AWS OpsWorks to manage patches on the on-premises servers and EC2 instances. Use Amazon QuickSight integration with OpsWorks
to generate patch compliance reports.
C. Use an Amazon EventBridge (Amazon CloudWatch Events) rule to apply patches by scheduling an AWS Systems Manager patch
remediation job. Use Amazon Inspector to generate patch compliance reports.
D. Use AWS OpsWorks to manage patches on the on-premises servers and EC2 instances. Use AWS X-Ray to post the patch status to AWS
Systems Manager OpsCenter to generate patch compliance reports.
Correct Answer: A
You can use AWS Systems Manager Configuration Compliance to scan your fleet of managed instances for patch compliance.
Reference:
https://aws.amazon.com/blogs/mt/how-moodys-uses-aws-systems-manager-to-patch-servers-across-multiple-cloud-providers/
A company is running a large containerized workload in the AWS Cloud. The workload consists of approximately 100 different services. The
company uses
Amazon Elastic Container Service (Amazon ECS) to orchestrate the workload.
Recently, the company's development team started using AWS Fargate instead of Amazon EC2 instances in the ECS cluster. In the past, the
workload has come close to running the maximum number of EC2 instances that are available in the account.
The company is worried that the workload could reach the maximum number of ECS tasks that are allowed. A solutions architect must implement
a solution that will notify the development team when Fargate reaches 80% of the maximum number of tasks.
What should the solutions architect do to meet this requirement?
A. Use Amazon CloudWatch to monitor the Sample Count statistic for each service in the ECS cluster. Set an alarm for when the math
expression sample count/ SERVICE_QUOTA(service)*100 is greater than 80. Notify the development team by using Amazon Simple
Notification Service (Amazon SNS).
B. Use Amazon CloudWatch to monitor service quotas that are published under the AWS/Usage metric namespace. Set an alarm for when the
math expression metric/SERVICE_QUOTA(metric)*100 is greater than 80. Notify the development team by using Amazon Simple Notification
Service (Amazon SNS).
C. Create an AWS Lambda function to poll detailed metrics from the ECS cluster. When the number of running Fargate tasks is greater than 80,
invoke Amazon Simple Email Service (Amazon SES) to notify the development team.
D. Create an AWS Config rule to evaluate whether the Fargate SERVICE_QUOTA is greater than 80. Use Amazon Simple Email Service (Amazon
SES) to notify the development team when the AWS Config rule is not compliant.
Correct Answer: B
To visualize a service quota and optionally set an alarm.
Reference:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Quotas-Visualize-Alarms.html
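A sketch of the option B alarm (the SNS topic ARN is a placeholder, and the Fargate dimension values shown are assumptions that should be confirmed against the metrics actually published in the AWS/Usage namespace): divide current usage by SERVICE_QUOTA and alarm above 80 percent.
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="fargate-tasks-80-percent-of-quota",
    Metrics=[
        {
            "Id": "usage",
            "MetricStat": {
                "Metric": {
                    "Namespace": "AWS/Usage",
                    "MetricName": "ResourceCount",
                    # Dimension values for Fargate usage (assumed here).
                    "Dimensions": [
                        {"Name": "Service", "Value": "Fargate"},
                        {"Name": "Type", "Value": "Resource"},
                        {"Name": "Resource", "Value": "OnDemand"},
                        {"Name": "Class", "Value": "None"},
                    ],
                },
                "Period": 300,
                "Stat": "Maximum",
            },
            "ReturnData": False,
        },
        {
            "Id": "pct_of_quota",
            "Expression": "usage/SERVICE_QUOTA(usage)*100",
            "Label": "Fargate usage as % of quota",
            "ReturnData": True,
        },
    ],
    EvaluationPeriods=1,
    Threshold=80,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111111111111:dev-team-alerts"],  # placeholder topic
)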
A company has a large number of AWS accounts in an organization in AWS Organizations. A different business group owns each account. All the
AWS accounts are bound by legal compliance requirements that restrict all operations outside the eu-west-2 Region.
The company's security team has mandated the use of AWS Systems Manager Session Manager across all AWS accounts.
Which solution should a solutions architect recommend to meet these requirements?
A. Create an SCP that denies access to all requests that do not target eu-west-2. Use the NotAction element to exempt global services from
the restriction. In AWS Organizations, apply the SCP to the root of the organization.
B. Create an SCP that denies access to all requests that do not target eu-west-2. Use the NotAction element to exempt global services from
the restriction. For each AWS account, use the ArnNotLike condition key to add the ARN of the IAM role that is associated with the Session
Manager instance profile to the condition element of the SCP. In AWS Organizations, apply the SCP to the root of the organization.
C. Create an SCP that denies access to all requests that do not target eu-west-2. Use the NotAction element to exempt global services from
the restriction. In AWS Organizations, apply the SCP to the root of the organization. In each AWS account, create an IAM permissions
boundary that allows access to the IAM role that is associated with the Session Manager instance profile.
D. For each AWS account, create an IAM permissions boundary that denies access to all requests that do not target eu-west-2. For each AWS
account, apply the permissions boundary to the IAM role that is associated with the Session Manager instance profile.
Correct Answer: A
Reference:
https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_examples_aws_deny-requested-region.html
I am not too convinced by C or A. How about B? The SCP would have a deny on running EC2 with an ArnNotLike condition matching the session-manager-profile-role.
upvoted 3 times
1. Create an SCP policy that denies access to any operations outside of the specified Region.
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps_examples_general.html#example-scp-deny-region
A company uses AWS Organizations. The company has an organization that has a central management account. The company plans to provision
multiple AWS accounts for different departments. All department accounts must be a member of the company's organization.
Compliance requirements state that each account must have only one VPC. Additionally, each VPC must have an identical network security
configuration that includes fully configured subnets, gateways, network ACLs, and security groups.
The company wants this security setup to be automatically applied when a new department account is created. The company wants to use the
central management account for all security operations, but the central management account should not have the security setup.
Which approach meets these requirements with the LEAST amount of setup?
A. Create an OU within the company's organization. Add department accounts to the OU. From the central management account, create an
AWS CloudFormation template that includes the VPC and the network security configurations. Create a CloudFormation stack set by using this
template file with automated deployment enabled. Apply the CloudFormation stack set to the OU.
B. Create a new organization with the central management account. Invite all AWS department accounts into the new organization. From the
central management account, create an AWS CloudFormation template that includes the VPC and the network security configurations. Create
a CloudFormation stack that is based on this template. Apply the CloudFormation stack to the newly created organization.
C. Invite department accounts to the company's organization. From the central management account, create an AWS CloudFormation
template that includes the VPC and the network security configurations. Create an AWS CodePipeline pipeline that will deploy the network
security setup to the newly created account. Specify the creation of an account as an event hook. Apply the event hook to the pipeline.
D. Invite department accounts to the company's organization. From the central management account, create an AWS CloudFormation template
that includes the VPC and the network security configurations. Create an AWS Lambda function that will deploy the VPC and the network
security setup to the newly created account. Create an event that watches for account creation. Con+gure the event to invoke the pipeline.
Correct Answer: B
Reference:
https://aws.amazon.com/blogs/security/how-to-use-aws-organizations-to-automate-end-to-end-account-creation/
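A sketch of option A (OU ID, Region, and template file are placeholders): a service-managed stack set with automatic deployment, so any account later added to the OU receives the VPC and network security baseline without manual steps.
import boto3

cfn = boto3.client("cloudformation")

# The template body would contain the VPC, subnets, gateways, NACLs, and security groups.
with open("network-baseline.yaml") as f:      # placeholder template file
    template_body = f.read()

cfn.create_stack_set(
    StackSetName="department-network-baseline",
    TemplateBody=template_body,
    PermissionModel="SERVICE_MANAGED",        # uses Organizations-managed roles
    AutoDeployment={"Enabled": True, "RetainStacksOnAccountRemoval": False},
)

# Target the departments OU; accounts added to the OU later are deployed automatically.
cfn.create_stack_instances(
    StackSetName="department-network-baseline",
    DeploymentTargets={"OrganizationalUnitIds": ["ou-abcd-11111111"]},  # placeholder OU
    Regions=["eu-west-1"],                                              # placeholder Region
)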
A company owns a chain of travel agencies and is running an application in the AWS Cloud. Company employees use the application to search for
information about travel destinations. Destination content is updated four times each year.
Two fixed Amazon EC2 instances serve the application. The company uses an Amazon Route 53 public hosted zone with a multivalue record of
travel.example.com that returns the Elastic IP addresses for the EC2 instances. The application uses Amazon DynamoDB as its primary data
store. The company uses a self-hosted Redis instance as a caching solution.
During content updates, the load on the EC2 instances and the caching solution increases drastically. This increased load has led to downtime on
several occasions. A solutions architect must update the application so that the application is highly available and can handle the load that is
generated by the content updates.
Which solution will meet these requirements?
A. Set up DynamoDB Accelerator (DAX) as in-memory cache. Update the application to use DAX. Create an Auto Scaling group for the EC2
instances. Create an Application Load Balancer (ALB). Set the Auto Scaling group as a target for the ALB. Update the Route 53 record to use a
simple routing policy that targets the ALB's DNS alias. Configure scheduled scaling for the EC2 instances before the content updates.
B. Set up Amazon ElastiCache for Redis. Update the application to use ElastiCache. Create an Auto Scaling group for the EC2 instances.
Create an Amazon CloudFront distribution, and set the Auto Scaling group as an origin for the distribution. Update the Route 53 record to use a
simple routing policy that targets the CloudFront distribution's DNS alias. Manually scale up EC2 instances before the content updates.
C. Set up Amazon ElastiCache for Memcached. Update the application to use ElastiCache. Create an Auto Scaling group for the EC2 instances.
Create an Application Load Balancer (ALB). Set the Auto Scaling group as a target for the ALB. Update the Route 53 record to use a simple
routing policy that targets the ALB's DNS alias. Configure scheduled scaling for the application before the content updates.
D. Set up DynamoDB Accelerator (DAX) as in-memory cache. Update the application to use DAX. Create an Auto Scaling group for the EC2
instances. Create an Amazon CloudFront distribution, and set the Auto Scaling group as an origin for the distribution. Update the Route 53
record to use a simple routing policy that targets the CloudFront distribution's DNS alias. Manually scale up EC2 instances before the content
updates.
Correct Answer: A
Reference:
https://aws.amazon.com/dynamodb/dax/
A medical company is building a data lake on Amazon S3. The data must be encrypted in transit and at rest. The data must remain protected even
if the S3 bucket is inadvertently made public.
Which combination of steps will meet these requirements? (Choose three.)
A. Ensure that each S3 bucket has a bucket policy that includes a Deny statement if the aws:SecureTransport condition is not present.
B. Create a CMK in AWS Key Management Service (AWS KMS). Turn on server-side encryption (SSE) on the S3 buckets, select SSE-KMS for the
encryption type, and use the CMK as the key.
C. Ensure that each S3 bucket has a bucket policy that includes a Deny statement for PutObject actions if the request does not include an
"s3:x-amz-server-side-encryption":"aws:kms" condition.
D. Turn on server-side encryption (SSE) on the S3 buckets and select SSE-S3 for the encryption type.
E. Ensure that each S3 bucket has a bucket policy that includes a Deny statement for PutObject actions if the request does not include an
"s3:x-amz-server-side-encryption":"AES256" condition.
F. Turn on AWS Config. Use the s3-bucket-public-read-prohibited, s3-bucket-public-write-prohibited, and s3-bucket-ssl-requests-only AWS
Config managed rules to monitor the S3 buckets.
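A sketch combining the deny statements described in options A and C (the bucket name is a placeholder): reject requests made without TLS and PutObject calls that do not specify SSE-KMS, so objects stay protected even if the bucket is accidentally made public.
import json

bucket = "medical-data-lake"  # placeholder bucket name

data_lake_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        },
        {
            "Sid": "DenyUnencryptedPuts",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
            "Condition": {
                "StringNotEquals": {"s3:x-amz-server-side-encryption": "aws:kms"}
            },
        },
    ],
}
print(json.dumps(data_lake_policy, indent=2))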
A company is building an electronic document management system in which users upload their documents. The application stack is entirely
serverless and runs on AWS in the eu-central-1 Region. The system includes a web application that uses an Amazon CloudFront distribution for
delivery with Amazon S3 as the origin.
The web application communicates with Amazon API Gateway Regional endpoints. The API Gateway APIs call AWS Lambda functions that store
metadata in an
Amazon Aurora Serverless database and put the documents into an S3 bucket.
The company is growing steadily and has completed a proof of concept with its largest customer. The company must improve latency outside of
Europe
Which combination of actions will meet these requirements? (Choose two.)
A. Enable S3 Transfer Acceleration on the S3 bucket. Ensure that the web application uses the Transfer Acceleration signed URLs.
B. Create an accelerator in AWS Global Accelerator. Attach the accelerator to the CloudFront distribution.
D. Provision the entire stack in two other locations that are spread across the world. Use global databases on the Aurora Serverless cluster.
E. Add an Amazon RDS proxy between the Lambda functions and the Aurora Serverless database.
Correct Answer: BC
Reference:
https://aws.amazon.com/global-accelerator/faqs/
https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-api-endpoint-types.html
upvoted 2 times
" # jyrajan69 8 months, 2 weeks ago
For those choosing B, please justify your answer. Global Accelerator and CloudFront are 2 separate services, how can you attach a Global
Accelerator to CF? That option is not available as far as I can see. So based on elimination have to go with A and C
upvoted 3 times
A solutions architect is troubleshooting an application that runs on Amazon EC2 instances. The EC2 instances run in an Auto Scaling group. The
application needs to access user data in an Amazon DynamoDB table that has fixed provisioned capacity.
To match the increased workload, the solutions architect recently doubled the maximum size of the Auto Scaling group. Now, when many
instances launch at the same time, some application components are throttled when the components scan the DynamoDB table. The Auto Scaling
group terminates the failing instances and starts new instances until all applications are running
A solutions architect must implement a solution to mitigate the throttling issue in the MOST cost-effective manner
Which solution will meet these requirements?
B. Duplicate the DynamoDB table. Configure the running copy of the application to select at random which table it accesses.
Correct Answer: C
Reference:
https://aws.amazon.com/premiumsupport/knowledge-center/on-demand-table-throttling-dynamodb/
upvoted 1 times
" # HellGate 9 months, 2 weeks ago
Is On-Demand cheaper than DAX?
upvoted 2 times
https://aws.amazon.com/premiumsupport/knowledge-center/on-demand-table-throttling-dynamodb/
upvoted 3 times
A solutions architect must analyze a company's Amazon EC2 instances and Amazon Elastic Block Store (Amazon EBS) volumes to determine
whether the company is using resources efficiently. The company is running several large, high-memory EC2 instances to host database clusters
that are deployed in active/passive configurations. The utilization of these EC2 instances varies by the applications that use the databases, and
the company has not identi+ed a pattern.
The solutions architect must analyze the environment and take action based on the findings.
Which solution meets these requirements MOST cost-effectively?
A. Create a dashboard by using AWS Systems Manager OpsCenter. Configure visualizations for Amazon CloudWatch metrics that are
associated with the EC2 instances and their EBS volumes. Review the dashboard periodically, and identify usage patterns. Rightsize the EC2
instances based on the peaks in the metrics.
B. Turn on Amazon CloudWatch detailed monitoring for the EC2 instances and their EBS volumes. Create and review a dashboard that is based
on the metrics. Identify usage patterns. Rightsize the EC2 instances based on the peaks in the metrics.
C. Install the Amazon CloudWatch agent on each of the EC2 instances. Turn on AWS Compute Optimizer, and let it run for at least 12 hours.
Review the recommendations from Compute Optimizer, and rightsize the EC2 instances as directed.
D. Sign up for the AWS Enterprise Support plan. Turn on AWS Trusted Advisor. Wait 12 hours. Review the recommendations from Trusted
Advisor, and rightsize the EC2 instances as directed.
Correct Answer: A
A large mobile gaming company has successfully migrated all of its on-premises infrastructure to the AWS Cloud. A solutions architect is
reviewing the environment to ensure that it was built according to the design and that it is running in alignment with the Well-Architected
Framework.
While reviewing previous monthly costs in Cost Explorer, the solutions architect notices that the creation and subsequent termination of several
large instance types account for a high proportion of the costs. The solutions architect finds out that the company's developers are launching new
Amazon EC2 instances as part of their testing and that the developers are not using the appropriate instance types.
The solutions architect must implement a control mechanism to limit the instance types that the developers can launch.
Which solution will meet these requirements?
A. Create a desired-instance-type managed rule in AWS Config. Configure the rule with the instance types that are allowed. Attach the rule to
an event to run each time a new EC2 instance is launched.
B. In the EC2 console, create a launch template that specifies the instance types that are allowed. Assign the launch template to the
developers' IAM accounts.
C. Create a new IAM policy. Specify the instance types that are allowed. Attach the policy to an IAM group that contains the IAM accounts for
the developers
D. Use EC2 Image Builder to create an image pipeline for the developers and assist them in the creation of a golden image.
Correct Answer: A
Reference:
https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config_develop-rules_getting-started.html
upvoted 11 times
Config rules do not directly affect how end-users consume AWS. Config rules evaluate resource configurations only after a configuration
change has been completed and recorded by AWS Config. Config rules do not prevent the user from making changes that could be
non-compliant. To control what a user can provision on AWS and configuration parameters allowed during provisioning, please use
AWS Identity and Access Management (IAM) Policies and AWS Service Catalog respectively.
upvoted 1 times
A company with global offices has a single 1 Gbps AWS Direct Connect connection to a single AWS Region. The company's on-premises network
uses the connection to communicate with the company's resources in the AWS Cloud. The connection has a single private virtual interface that
connects to a single VPC.
A solutions architect must implement a solution that adds a redundant Direct Connect connection in the same Region. The solution also must
provide connectivity to other Regions through the same pair of Direct Connect connections as the company expands into other Regions.
Which solution meets these requirements?
A. Provision a Direct Connect gateway. Delete the existing private virtual interface from the existing connection. Create the second Direct
Connect connection. Create a new private virtual interface on each connection, and connect both private virtual interfaces to the Direct
Connect gateway. Connect the Direct Connect gateway to the single VPC.