AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics
https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

Expert Verified, Online, Free.

Question #500 Topic 1

A company must deploy multiple independent instances of an application. The front-end application is internet accessible. However, corporate
policy stipulates that the backends are to be isolated from each other and the internet, yet accessible from a centralized administration server.
The application setup should be automated to minimize the opportunity for mistakes as new instances are deployed.
Which option meets the requirements and MINIMIZES costs?

A. Use an AWS CloudFormation template to create identical IAM roles for each region. Use AWS CloudFormation StackSets to deploy each
application instance by using parameters to customize for each instance, and use security groups to isolate each instance while permitting
access to the central server.

B. Create each instance of the application IAM roles and resources in separate accounts by using AWS CloudFormation StackSets. Include a
VPN connection to the VPN gateway of the central administration server.

C. Duplicate the application IAM roles and resources in separate accounts by using a single AWS CloudFormation template. Include VPC
peering to connect the VPC of each application instance to a central VPC.

D. Use the parameters of the AWS CloudFormation template to customize the deployment into separate accounts. Include a NAT gateway to
allow communication back to the central administration server.

Correct Answer: A

Community vote distribution: C (100%)

" # dumma Highly Voted $ 1 year, 1 month ago


Option A is wrong as security groups cannot be used across regions. Also, security groups may provide an opportunity for mistakes. Inter-region
data transfer charges are higher than intra-region transfer charges. The correct answer is C.
upvoted 27 times

" # youq Highly Voted $ 1 year, 1 month ago


should be "C" ? IAM role is global so A is incorrect.
upvoted 10 times

" # Ganfeng 1 year ago


IAM role is global however, not cross account
upvoted 2 times

" # RVivek 8 months, 3 weeks ago


Answer A states "IAM roles for each region", which is wrong.
upvoted 1 times

" # Rocketeer 1 month, 3 weeks ago


It says "identical IAM roles for each region". Hence I can create role-region1 and role-region2 with the same policies. I think A is correct
for the least cost.
upvoted 1 times

" # Jonfernz Most Recent % 4 weeks, 1 day ago


What a terribly worded question.
upvoted 6 times

" # Dionenonly 1 month, 3 weeks ago


Selected Answer: C
the answer is C
upvoted 1 times

" # Rocketeer 1 month, 3 weeks ago


A
It says "identical IAM roles for each region". Hence I can create role-region1 and role-region2 with the same policies.
Security groups can be used to isolate the instances
A is much cheaper compared to C
upvoted 1 times

" # asfsdfsdf 8 months ago


The question asks to minimize cost and provide access to a central server.
Let's eliminate:
A - will not create any real connection outside the VPC - no IGW is allowed and no VPN/DC/Peering, so...
B - Can work correctly, however a VPN connection is paid by the hour per connection + egress data
D - Can't be used, as no IGW is allowed so a NAT gateway can't be created
So the answer is:
C - Can work if the administration instance is in AWS - peering is free, only data out is paid
If the question stated the admin server must be on premises, then B would be the answer regardless of cost
upvoted 3 times
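
For readers who want to see what the community's answer C looks like in practice, here is a minimal boto3 sketch of peering an application VPC to a central administration VPC and adding a return route. All IDs, the account number, and the CIDR are hypothetical placeholders, not taken from the question; in a StackSets deployment this would normally live in the per-account template rather than a script.

# Minimal sketch of answer C: peer an application VPC to the central admin VPC.
# All IDs and the CIDR below are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Request the peering connection from the application account/VPC side.
resp = ec2.create_vpc_peering_connection(
    VpcId="vpc-0appexample",            # application instance VPC
    PeerVpcId="vpc-0centralexample",    # central administration VPC
    PeerOwnerId="111122223333",         # central account ID
)
pcx_id = resp["VpcPeeringConnection"]["VpcPeeringConnectionId"]

# The accepter side must accept the request.
# In practice this client would use credentials for the central account.
central_ec2 = boto3.client("ec2", region_name="us-east-1")
central_ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# Each side still needs a route to the peer CIDR, e.g.:
ec2.create_route(
    RouteTableId="rtb-0appexample",
    DestinationCidrBlock="10.100.0.0/16",   # central VPC CIDR
    VpcPeeringConnectionId=pcx_id,
)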

" # cldy 11 months ago


C. Duplicate the application IAM roles and resources in separate accounts by using a single AWS CloudFormation template. Include VPC peering
to connect the VPC of each application instance to a central VPC.
upvoted 1 times

" # AzureDP900 11 months, 1 week ago


I agree with C as correct
upvoted 3 times

" # acloudguru 11 months, 2 weeks ago


Selected Answer: C
The issue with option A is that IAM roles are global and not regional.
upvoted 4 times

" # network_zeal 1 year ago


The issue with option A is that IAM roles are global and not regional.
But then C also is not convincing. Firstly, VPC peering will add to cost. Also if applications were already in different accounts, unless some inter
account networking was already in place(and this is not stated anywhere), they were anyway isolated from each others.
upvoted 1 times

" # tgv 1 year ago


CCC
---
upvoted 1 times

" # denccc 1 year ago


It's C
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with C
upvoted 1 times

" # Amitv2706 1 year ago


For A.
Seems slightly unclear but is this statement not enough to clarify that central server inward access is allowed through security groups "and use
security groups to isolate each instance while permitting access to the central server. "
Seems its doing two parts :
1- Isolation instances
2- at the same time allowing central server
upvoted 3 times

" # blackgamer 1 year ago


I will go with C.
upvoted 1 times

" # Waiweng 1 year ago


it's C
upvoted 3 times

" # Kian1 1 year ago


will go with C
upvoted 2 times

Question #501 Topic 1

A group of Amazon EC2 instances has been configured as a high performance computing (HPC) cluster. The instances are running in a
placement group, and are able to communicate with each other at network speeds of up to 20 Gbps.
The cluster needs to communicate with a control EC2 instance outside of the placement group. The control instance has the same instance type
and AMI as the other instances, and is configured with a public IP address.
How can the Solutions Architect improve the network speeds between the control instance and the instances in the placement group?

A. Terminate the control instance and relaunch it in the placement group.

B. Ensure that the instances are communicating using their private IP addresses.

C. Ensure that the control instance is using an Elastic Network Adapter.

D. Move the control instance inside the placement group.

Correct Answer: B
Reference:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html

Community vote distribution: D (89%) 11%

" # TechGuru Highly Voted $ 1 year, 1 month ago


D: An instance can be moved to a placement group.
upvoted 22 times

" # AWS2020 1 year, 1 month ago


YES, but you would need to terminate it first and relaunch it in the placement group, so I think the answer is A
upvoted 2 times

" # Frank1 1 year, 1 month ago


The instance does not need to be terminated. It just needs to be stopped. See https://docs.aws.amazon.com/AWSEC2/latest/UserGuide
/placement-groups.html "Changing the Placement Group for an Instance"
upvoted 30 times

" # LCC92 1 year ago


updated to -> "Change the placement group for an instance"
upvoted 1 times

" # donathon Highly Voted $ 1 year, 1 month ago


A
You cannot move an existing instance into a placement group. Instances within a placement group generally have both a public and a private IP
address. Communications across a public IP address are limited to 5Gbps or less. Low-latency, high-throughput communications between
placement group members can only occur across the private interfaces, using the private IP addresses. These communications can be either
10Gbps or 20Gbps, and are limited by the instance's network configuration.
https://awsinsider.net/articles/2017/06/12/ec2-placement-groups.aspx
upvoted 13 times

" # Phat 1 year ago


I think Donathon is right.
upvoted 1 times

" # Utpal 1 year ago


Not at all. B is right option
upvoted 1 times

" # jamjam2020 1 year, 1 month ago


B
The cluster needs to communicate with a control EC2 instance outside of the placement group.
upvoted 5 times

" # UpsetUser 1 year ago


No Buddy, You can move existing instances to placement group. Even you can move an instance from one placement group to another, or
remove an instance from a placement group.

https://acloud.guru/forums/aws-certified-solutions-architect-associate/discussion/-LBVlYuS1HKudeoD52ur
/You%20can%20move%20an%20existing%20instance%20to%20a%20placement%20group

upvoted 6 times

" # Hari008 8 months ago


The question clearly says the placement group should communicate with instance outside placement group
upvoted 1 times

" # biki1985 1 year ago


You can move instances outside the placement group to within the placement group. For that, you have to stop the instance and then make it
part of the placement group. Start the instance once again.
upvoted 4 times

" # Dionenonly Most Recent % 3 weeks, 2 days ago


Selected Answer: A
I think it's A.
An instance can't be moved live.
I also saw this question on Udemy and the answer is the same.
upvoted 1 times

" # Dionenonly 1 month, 3 weeks ago


Selected Answer: D
D. Yes, you can't move an existing instance to a placement group without downtime. But the question did not mention there should be no disruption, so
D is okay.
upvoted 1 times

" # hilft 3 months, 1 week ago


D. directly from jon bonso's exam
upvoted 1 times

" # Ni_yot 7 months, 1 week ago


Selected Answer: D
the answer is D. Although like many have said the instance needs to be stopped first
upvoted 1 times

" # Ricky53 7 months, 3 weeks ago


Answer is "A": Terminate and relaunch
upvoted 1 times

" # asfsdfsdf 8 months ago


To optimize performance, the control instance needs to be moved into the placement group.
For elimination:
A - The question does not state if it's the same region; I would assume it is - if so, there is no need to terminate the instance, just stop it, modify it, and
re-start it.
B - It can help, but I assume the instances are already communicating over private IPs, since the cluster instances don't have public ones.
C - It would help but can't reach 20 Gbps
So the answer is:
D - The best way is to migrate the instance into the cluster placement group - stop it, run the modify CLI command, and start it.
https://docs.aws.amazon.com/cli/latest/reference/ec2/modify-instance-placement.html
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html#change-instance-placement-group
upvoted 1 times
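
The stop/modify/start sequence described above can be scripted; below is a minimal boto3 sketch, assuming the control instance and the cluster placement group are in the same region. The instance ID and group name are hypothetical placeholders.

# Sketch of answer D: move the control instance into the cluster placement group.
# Instance ID and group name are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
instance_id = "i-0123456789abcdef0"

# The instance must be stopped before its placement can be changed.
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

# Move it into the existing cluster placement group.
ec2.modify_instance_placement(InstanceId=instance_id, GroupName="hpc-cluster-pg")

# Start it again; it now runs inside the placement group.
ec2.start_instances(InstanceIds=[instance_id])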

" # asfsdfsdf 8 months ago


In addition to the above:
Network traffic to the internet and over an AWS Direct Connect connection to on-premises resources is limited to 5 Gbps.
Instances that are not within a cluster placement group can use up to 5 Gbps for single-flow traffic.
So at most, using public/private IPs we can get 5 Gbps, which for sure eliminates B and C.
The correct answer is D if all instances are in the same Region/AZ.
upvoted 1 times

" # Alexey79 8 months ago


Selected Answer: D
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html#change-instance-placement-group

Why NOT A:

Before you move or remove the instance, the instance must be in the stopped state.

Termination in AWS has different results than Stop.

The key difference between stopping and terminating an instance is that the attached bootable EBS volume will not be deleted. The data on your
EBS volume will remain after stopping while all information on the local (ephemeral) hard drive will be lost as usual.

upvoted 3 times

" # jyrajan69 8 months, 3 weeks ago

D. Based on the following:


To move an instance to a placement group using the AWS CLI
Stop the instance using the stop-instances command.
Use the modify-instance-placement command and specify the name of the placement group to which to move the instance. ...
Start the instance using the start-instances command
upvoted 1 times

" # cannottellname 9 months, 3 weeks ago


How can we enhance performance?
A. Terminate the control instance and relaunch it in the placement group. - Stopping is enough to change the placement group, so termination is not needed.
B. Ensure that the instances are communicating using their private IP addresses. - There is no throughput-level difference between them.
C. Ensure that the control instance is using an Elastic Network Adapter. - It is only helpful if the instance is in a cluster placement group.
D. Move the control instance inside the placement group. - Seems the best option. But we need to stop & move - this is not mentioned and so, I will
go with A. They have also given the hint that the AMI is the same & hence, termination & recreation seem to help.
upvoted 1 times

" # RVivek 9 months, 3 weeks ago


D. Here throughput is the issue, so there is no need to change the IP. We have to change the placement group.
A is incorrect, because we can stop the instance and move the placement group. No need to terminate.
upvoted 1 times

" # Polu 9 months, 3 weeks ago


D - https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-
groups.html#:~:text=Before%20you%20move%20or%20remove%20the%20instance
%2C%20the%20instance%20must%20be%20in%20the%20stopped%20state.
upvoted 1 times

" # cldy 11 months ago


D. Move the control instance inside the placement group.
upvoted 1 times

" # wem 11 months ago


B
"The cluster must communicate with an EC2 instance that is not a member of the placement group"
upvoted 1 times

" # AzureDP900 11 months, 1 week ago


D is correct answer
upvoted 2 times

" # AzureDP900 11 months, 1 week ago


D is correct answer
upvoted 2 times

Question #502 Topic 1

A Solutions Architect has created an AWS CloudFormation template for a three-tier application that contains an Auto Scaling group of Amazon
EC2 instances running a custom AMI.
The Solutions Architect wants to ensure that future updates to the custom AMI can be deployed to a running stack by first updating the template
to refer to the new
AMI, and then invoking UpdateStack to replace the EC2 instances with instances launched from the new AMI.
How can updates to the AMI be deployed to meet these requirements?

A. Create a change set for a new version of the template, view the changes to the running EC2 instances to ensure that the AMI is correctly
updated, and then execute the change set.

B. Edit the AWS::AutoScaling::LaunchConfiguration resource in the template, changing its DeletionPolicy to Replace.

C. Edit the AWS::AutoScaling::AutoScalingGroup resource in the template, inserting an UpdatePolicy attribute.

D. Create a new stack from the updated template. Once it is successfully deployed, modify the DNS records to point to the new stack and
delete the old stack.

Correct Answer: C
Reference:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-updatepolicy.html
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-as-launchconfig.html

Community vote distribution: C (100%)

" # 9Ow30 Highly Voted $ 1 year, 1 month ago


C

Quoting
"If you want to update existing instances when you update the LaunchConfiguration resource, you must specify an UpdatePolicy attribute for the
Auto Scaling group. "

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-as-launchconfig.html
upvoted 28 times

" # Stec1980 1 year ago


That's for the AWS::AutoScaling::LaunchConfiguration resource though, not the AWS::AutoScaling::AutoScalingGroup resource, which is what
answer C suggests...
upvoted 3 times

" # Stec1980 1 year ago


Actually, yes this suggests you can "add an UpdatePolicy attribute to your Auto Scaling group to perform rolling updates"

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-as-group.html
upvoted 2 times

" # DashL 1 year ago


When you update the launch template or launch configuration for an Auto Scaling group, this update action does not deploy any change
across the running Amazon EC2 instances in the Auto Scaling group. All new instances will get the updated configuration, but existing
instances continue to run with the configuration that they were originally launched with. This works the same way as any other Auto Scaling
group.

You can add an UpdatePolicy attribute to your stack to perform rolling updates (or replace the group) when a change has been made to the
group.
upvoted 2 times
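
To make the UpdatePolicy point concrete, here is a minimal sketch of answer C: a template fragment (written as a Python dict so it can be pushed with boto3) in which the Auto Scaling group carries an AutoScalingRollingUpdate policy, so that changing the launch configuration's ImageId and calling UpdateStack rolls the new AMI onto running instances. The stack name, resource names, sizes, and AMI ID are hypothetical.

# Sketch of answer C: an AutoScalingGroup with an UpdatePolicy so UpdateStack
# rolls a new AMI onto running instances. Names and values are hypothetical.
import json
import boto3

template = {
    "Resources": {
        "LaunchConfig": {
            "Type": "AWS::AutoScaling::LaunchConfiguration",
            "Properties": {"ImageId": "ami-0123456789abcdef0", "InstanceType": "t3.micro"},
        },
        "WebServerGroup": {
            "Type": "AWS::AutoScaling::AutoScalingGroup",
            "Properties": {
                "LaunchConfigurationName": {"Ref": "LaunchConfig"},
                "MinSize": "2",
                "MaxSize": "4",
                "AvailabilityZones": {"Fn::GetAZs": ""},
            },
            # The UpdatePolicy attribute is what makes UpdateStack replace
            # running instances in batches instead of leaving them untouched.
            "UpdatePolicy": {
                "AutoScalingRollingUpdate": {
                    "MinInstancesInService": "1",
                    "MaxBatchSize": "1",
                    "PauseTime": "PT5M",
                }
            },
        },
    }
}

boto3.client("cloudformation").update_stack(
    StackName="three-tier-app",
    TemplateBody=json.dumps(template),
)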

" # dpvnme Highly Voted $ 1 year, 1 month ago


A. Ref: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-changesets.html
upvoted 16 times

" # simonyu 1 year, 1 month ago


This link says change set will change the stack, not the template.
upvoted 4 times

" # TiredDad 1 year ago

"When you use CloudFormation, you manage related resources as a single unit called a stack. You create, update, and delete a collection
of resources by creating, updating, and deleting stacks. All the resources in a stack are defined by the stack's CloudFormation template" -
to change a stack, you change its template
upvoted 1 times
" # viet1991 1 year ago
The answer is C.
Without inserting an UpdatePolicy attribute for AWS::AutoScaling::LaunchConfiguration, executing the change set will only create a new
LaunchConfiguration and existing instances are not affected.
upvoted 2 times

" # TiredDad 1 year ago


Option A says - Create a change set for a new version of the template, as part of that you would include UpdatePolicy attribute to the
resource "AWS::AutoScaling::AutoScalingGroup" (and not to LaunchConfiguration)
upvoted 3 times

" # aandc Most Recent % 4 months, 1 week ago


Selected Answer: C
vote for C
"To update existing instances when you update the AWS::AutoScaling::LaunchConfiguration resource, you can specify an UpdatePolicy attribute
for the group. "
upvoted 2 times

" # roka_ua 7 months, 1 week ago


Selected Answer: C
Vote C
upvoted 1 times

" # shotty1 9 months, 2 weeks ago


Selected Answer: C
I think the "best" answer is C
upvoted 1 times

" # tkanmani76 10 months, 3 weeks ago


Answer A - Refer to Section "Update the AMI on an Amazon EC2 instance" in https://docs.aws.amazon.com/AWSCloudFormation/latest
/UserGuide/updating.stacks.walkthrough.html.
upvoted 1 times

" # tkanmani76 9 months, 3 weeks ago


C is right - Refer Remarks section https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-as-group.html
upvoted 1 times

" # AzureDP900 11 months, 1 week ago


When you update the launch configuration for an Auto Scaling group, CloudFormation deletes that resource and creates a new launch
configuration with the updated properties and a new name. Existing instances are not affected. To update existing instances when you update
the AWS::AutoScaling::LaunchConfiguration resource , Answer C is right
upvoted 1 times

" # DerekKey 1 year ago


"UpdateStack to replace the EC2 instances with instances launched from the new AMI"
A wrong - will only apply to EC2 instances managed separately
B wrong
C correct - have to use UpdatePolicy attribute on Auto Scaling group
D wrong
upvoted 1 times

" # TiredDad 1 year ago


Your statement "A wrong - will only apply to EC2 instances managed separately" is incorrect. Option A says "Create a change set for a new
version of the template" and the template in this case corresponds to the stack which includes AutoScalingGroup
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with C
upvoted 2 times

" # Pb55 1 year ago


https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-updatepolicy.html
C.
upvoted 1 times

" # ss160700 1 year ago


B: - reason as below
{
  "Resources": {
    "LaunchConfig": {
      "Type": "AWS::AutoScaling::LaunchConfiguration",
      "Properties": {
        "KeyName": { "Ref": "KeyName" },
        "ImageId": {
          "Fn::FindInMap": [
            "AWSRegionArch2AMI",
            { "Ref": "AWS::Region" },
            {
              "Fn::FindInMap": [
                "AWSInstanceType2Arch",
upvoted 1 times
" # ss160700 1 year ago
A - it us the changes
upvoted 1 times

" # Waiweng 1 year ago


it's C
upvoted 1 times

" # cnethers 1 year ago


The best answer is A because:
- creating a change set is the accepted method for making changes to a stack.
- it includes all the changes in the template that are required.
- the changes required would not just be updating the ASG UpdatePolicy attribute; they would also require updating the AMI ID.
It would be worth parameterizing the AMI ID so that it's just a parameter change in the future. The launch config would need replacing because a
new AMI is being used and launch configs can't be updated;
they are replaced and then deleted once you have successfully deployed an "in-service" instance so that rollback can occur.

There is more to this question than first considered. Don't rush to the first answer that makes sense; sometimes there is a better answer. B and C
are not wrong but they are not the best answer. D could be done, but really, why would you do that?
upvoted 1 times

" # gpark 1 year ago


C.
===
A. is not enough. Please, reference below
https://aws.amazon.com/premiumsupport/knowledge-center/auto-scaling-group-rolling-updates/
upvoted 1 times

" # TiredDad 1 year ago


UpdatePolicy has to be included in the template, so although not mentioned explicitly, option A covers that
upvoted 1 times

" # kiev 1 year ago


Change set is to help you view changes before they are made in CF. To introduce new AMI then you have to look at the launch auto scaling group
and update it and so I would go with C.
upvoted 1 times

" # TiredDad 1 year ago


Option A includes executing the change set as well. You don't need to separately look at "the launch auto scaling group and update it"
upvoted 1 times

" # Kian1 1 year ago


going with C
upvoted 1 times

Question #503 Topic 1

A Solutions Architect is designing a multi-account structure that has 10 existing accounts. The design must meet the following requirements:
✑ Consolidate all accounts into one organization.
✑ Allow full access to the Amazon EC2 service from the master account and the secondary accounts.
✑ Minimize the effort required to add additional secondary accounts.
Which combination of steps should be included in the solution? (Choose two.)

A. Create an organization from the master account. Send invitations to the secondary accounts from the master account. Accept the
invitations and create an OU.

B. Create an organization from the master account. Send a join request to the master account from each secondary account. Accept the
requests and create an OU.

C. Create a VPC peering connection between the master account and the secondary accounts. Accept the request for the VPC peering
connection.

D. Create a service control policy (SCP) that enables full EC2 access, and attach the policy to the OU.

E. Create a full EC2 access policy and map the policy to a role in each account. Trust every other account to assume the role.

Correct Answer: AD
There is a concept of Permission Boundary vs Actual IAM Policies. That is, we have a concept of "Allow" vs "Grant". In terms of
boundaries, we have the following three boundaries:
1. SCP
2. User/Role boundaries
3. Session boundaries (ex. AssumeRole ... )
In terms of actual permission granting, we have the following:
1. Identity Policies
2. Resource Policies

Community vote distribution: AD (75%) AE (25%)

" # donathon Highly Voted $ 1 year, 1 month ago


AD
B\E: This is not minimizing the effort.
C: You don’t need VPC peering unless the EC2 needs access across the accounts.
upvoted 27 times

" # Mobidic 1 year, 1 month ago


SCPs only reduce access, never grant more. So D is clearly not a choice.
upvoted 3 times

" # sarah_t 1 year ago


SCP can deny or allow.
upvoted 4 times

" # Moon Highly Voted $ 1 year, 1 month ago


I believe answers are "A & E".
D: is wrong because SCP does not give privileges (enable EC2 full Access!).
E: is correct as it is a strategy for role assuming by other accounts, to perform full EC2 access.
upvoted 26 times

" # donathon 1 year, 1 month ago


The SCP will already enable full access for any accounts under the OU and in this case the 10 accounts. Assume role will work but is not
minimizing the effort. You can look under SCP policy and how it works
upvoted 14 times

" # chaudh 1 year, 1 month ago


I agreed with @donathon. If the question is "Grant full access..." instead of "Allow full access..." then E would be correct.
upvoted 2 times

" # tan9 1 year, 1 month ago


Option D minimizes the effort required to add additional secondary accounts, while option E doesn't.

upvoted 10 times
" # LCC92 1 year ago
You misunderstand the question.
-> "Allow full access to [the Amazon EC2 service] from [the master account and the secondary accounts]" means allowing all accounts to
access their own EC2 service, which an SCP can do.
upvoted 8 times

" # psou7 Most Recent % 2 weeks, 5 days ago


I will go with A E
upvoted 1 times

" # tomosabc1 1 month, 1 week ago


Selected Answer: AD
The answer is AD.

As the suggested answer says, there is a concept of Permission Boundary vs Actual IAM Policies. That is, we have a concept of "Allow" vs
"Grant". In terms of boundaries, we have the following three boundaries:
1. SCP
2. User/Role boundaries
3. Session boundaries (ex. AssumeRole ... )

In terms of actual permission granting, we have the following:


1. Identity Policies
2. Resource Policies

D is allowing permissions while E is granting permissions. In addition, E doesn't meet the requirement "Minimize the effort required to add
additional secondary accounts", because the trust relationships of the role in all existing accounts have to be changed when a new account needs to
be added, which is quite a lot of work. All things considered, D is a more preferable answer compared with E.
upvoted 1 times
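
As an illustration of answer D, here is a minimal boto3 sketch that creates an SCP allowing full EC2 access and attaches it to the OU. It assumes an allow-list SCP strategy, and the OU ID is a hypothetical placeholder; remember that an SCP only sets the permission boundary, so identities in each account still need IAM policies that grant EC2 actions.

# Sketch of answer D: create an SCP allowing full EC2 access and attach it to the OU.
# The OU ID is a hypothetical placeholder; this assumes an allow-list strategy.
import json
import boto3

org = boto3.client("organizations")

scp = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": "ec2:*", "Resource": "*"}],
}

policy = org.create_policy(
    Name="AllowFullEC2",
    Description="Allow full EC2 access for accounts in this OU",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)

org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-examplerootid-exampleouid",   # hypothetical OU ID
)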

" # Sathish1412 1 month, 3 weeks ago


Selected Answer: AD
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps_strategies.html ----Check bottom of this page
upvoted 1 times

" # kadev 2 months, 1 week ago


For D: https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps_strategies.html
upvoted 1 times

" # galb818 2 months, 2 weeks ago


afadfasdfasdfasdfasdfsadfasdfsadfasdf
upvoted 1 times

" # CloudHandsOn 3 months, 2 weeks ago


A,D -> Using organizations with SCP applied is more efficient approach, NOT organizations with IAM
upvoted 1 times

" # bobsmith2000 5 months, 3 weeks ago


Selected Answer: AD
I suppose A it's no doubt.
Regarding D:
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps_syntax.html
Search for "Effect": "Allow" and read carefully
upvoted 1 times

" # tomosabc1 1 month, 1 week ago


To quote from the link you provided, "Even though this statement uses the same Allow value keyword as an IAM permission policy, in an SCP
it doesn't actually grant a user permission to do anything...", since both SCP(Allowing permissions) and Identity/Resource Policies use
"Effect":"Allow", your explanation can not be used to make D correct and E incorrect.
upvoted 1 times

" # RVivek 8 months, 3 weeks ago


Selected Answer: AE
B. Requesting from each account is more effort, so prefer A.
D: SCP can be used only to deny access and not grant access.
upvoted 1 times

" # vbal 10 months, 3 weeks ago


A&E; D would have been right if the question was to give only EC2 Access by removing the FullAwsAccess and then attaching Full EC2 Access
but even that would have not been enough since IAM in each account need to give permission to each User & Group on all EC2 Actions.
upvoted 1 times

" # cldy 11 months ago


A. Create an organization from the master account. Send invitations to the secondary accounts from the master account. Accept the invitations
and create an OU.
D. Create a service control policy (SCP) that enables full EC2 access, and attach the policy to the OU.
upvoted 1 times

" # Kopa 1 year ago


Im for A,D
upvoted 1 times

" # DerekKey 1 year ago


A&E
A correct - create the ORG and invite accounts, AND THEN -->
The default AWS Organizations policy for a new ORG is "FullAWSAccess", set on each OU. It gives full access to every operation. Users from the master
can assume a role (set during the invitation process) in each connected account to get full admin access.
Therefore: D wrong.
If we want to "Allow full access to the Amazon EC2 service from the master account and >>the secondary accounts<<", we should create a role
with the required permissions in each account and allow accounts to assume those roles.
I believe E is correct.
BTW. SCP only sets what permissions can be used in an account:
"No permissions are granted by an SCP. An SCP defines a guardrail, or sets limits, on the actions that the account's administrator can delegate
to the IAM users and roles in the affected accounts."
upvoted 3 times

" # vbal 10 months, 3 weeks ago


This should be the highly accepted answer. Great Explanation.
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with A,D
upvoted 1 times

" # Pb55 1 year ago


AD.
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps_syntax.html
{
  "Statement": {
    "Effect": "Allow",
    "Action": "s3:*",
    "Resource": "*"
  }
}
The same pattern with "ec2:*" allows EC2 for all accounts.
upvoted 2 times

" # digimaniac 1 year ago


D is better than E. An SCP can grant EC2 full access as long as it is applied across the board, since there is no conditioning statement in an allow list.
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps_strategies.html#orgs_policies_allowlist
upvoted 2 times

Question #504 Topic 1

AnyCompany has acquired numerous companies over the past few years. The CIO for AnyCompany would like to keep the resources for each
acquired company separate. The CIO also would like to enforce a chargeback model where each company pays for the AWS services it uses.
The Solutions Architect is tasked with designing an AWS architecture that allows AnyCompany to achieve the following:
✑ Implementing a detailed chargeback mechanism to ensure that each company pays for the resources it uses.
✑ AnyCompany can pay for AWS services for all its companies through a single invoice.
✑ Developers in each acquired company have access to resources in their company only.
✑ Developers in an acquired company should not be able to affect resources belonging to other companies.
✑ A single identity store is used to authenticate Developers across all companies.
Which of the following approaches would meet these requirements? (Choose two.)

A. Create a multi-account strategy with an account per company. Use consolidated billing to ensure that AnyCompany needs to pay a single
bill only.

B. Create a multi-account strategy with a virtual private cloud (VPC) for each company. Reduce impact across companies by not creating any
VPC peering links. As everything is in a single account, there will be a single invoice. Use tagging to create a detailed bill for each company.

C. Create IAM users for each Developer in the account to which they require access. Create policies that allow the users access to all
resources in that account. Attach the policies to the IAM user.

D. Create a federated identity store against the company's Active Directory. Create IAM roles with appropriate permissions and set the trust
relationships with AWS and the identity store. Use AWS STS to grant users access based on the groups they belong to in the identity store.

E. Create a multi-account strategy with an account per company. For billing purposes, use a tagging solution that uses a tag to identify the
company that creates each resource.

Correct Answer: AD

Community vote distribution: AD (100%)

" # donathon Highly Voted $ 1 year, 1 month ago


AD
B: VPC is not enough, you need a separate account for each company.
C: IAM is per account based and hence does not satisfy “a single identity store”.
E: Consolidated billing is the correct answer for this part.
upvoted 42 times

" # leeo 1 year, 1 month ago


How is the below requirement satisfied by option D?
Implementing a detailed chargeback mechanism to ensure that each company pays for the resources it uses.
upvoted 2 times

" # chaudh 1 year, 1 month ago


My understanding: Consolidated Billing will show the bills for all member accounts, tags should used for AWS resources, not account. A &
D are my choices.
upvoted 3 times

" # petebear55 1 year ago


u mean a and e :)
upvoted 1 times

" # shammous 1 year ago


You are ignoring the first requirement here and everybody is upvoting without checking ....
You have chosen D over E but E has what D offers as consolidated billing is active by default when using a multi-account strategy. What is
missing is a tagging solution for chargeback mechanism, like "Activate propagation of necessary cost allocation tags to consolidated billing"
which is provided in E.
Based on that, A and E are the correct answers.
Ref: https://aws.amazon.com/blogs/architecture/handling-aws-chargebacks-for-enterprise-customers/
upvoted 7 times

" # student2020 1 year ago


A and D are correct. There is no need for tagging. Each AWS account is separate and there is no connection between the VPCs. The AWS
bill will just show the charge for each account.
upvoted 5 times

" # Viper57 1 year ago


I agree. Tagging is only required for a single account strategy, not a multi-account strategy.
upvoted 3 times

" # Moon Highly Voted $ 1 year, 1 month ago


I do support "A & D".
separate accounts, and single identity store.
upvoted 13 times

" # Dionenonly Most Recent % 1 month, 3 weeks ago


Selected Answer: AD
AD self explanatory
upvoted 1 times

" # bfal 7 months, 1 week ago


Having extensively reviewed the question, A and B are the correct answers; I shifted my position from A and D.
How so? Follow me:

1. "Create a multi-account strategy with a virtual private cloud (VPC) for each company" - This is a multi-account strategy: different accounts, with
associated VPCs.
It meets the requirement "The CIO of AnyCompany wishes to maintain a separation of resources for each acquired company."

2. "Reduce impact across companies by not creating any VPC peering links" - The requirement of separating resources is met by not peering VPCs.

3. "As everything is in a single account" - As this is one organisation, it's best practice to implement AWS Organizations for consolidated billing, so
assume AWS Organizations is implemented here.

4. "Use tagging to create a detailed bill for each company." - Tagging will help create a detailed bill for each company. The key word is detailed. AWS
Control Tower will give you the bill per company, but you will still need tagging to ensure the costs are detailed for each company.
upvoted 1 times

" # vbal 10 months, 3 weeks ago


D&E is perfect.
upvoted 1 times

" # cldy 11 months ago


A. Create a multi-account strategy with an account per company. Use consolidated billing to ensure that AnyCompany needs to pay a single bill
only.
D. Create a federated identity store against the company's Active Directory. Create IAM roles with appropriate permissions and set the trust
relationships with AWS and the identity store. Use AWS STS to grant users access based on the groups they belong to in the identity store.
upvoted 1 times
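
A minimal sketch of what option D implies, assuming SAML 2.0 federation against the corporate Active Directory: an IAM role in each account trusts the federated identity provider, and AWS STS issues temporary credentials based on the groups a Developer belongs to. The provider ARN, account ID, role name, and attached policy are hypothetical placeholders.

# Sketch of answer D: an IAM role that trusts a SAML provider backed by the
# corporate Active Directory. Provider ARN, role name, and policy are hypothetical.
import json
import boto3

iam = boto3.client("iam")

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Federated": "arn:aws:iam::111122223333:saml-provider/CorpAD"},
        "Action": "sts:AssumeRoleWithSAML",
        "Condition": {"StringEquals": {"SAML:aud": "https://signin.aws.amazon.com/saml"}},
    }],
}

iam.create_role(
    RoleName="CompanyADevelopers",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Scope Developers to resources in their own account only (example policy).
iam.attach_role_policy(
    RoleName="CompanyADevelopers",
    PolicyArn="arn:aws:iam::aws:policy/PowerUserAccess",
)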

" # AzureDP900 11 months, 1 week ago


AD is perfect answer
upvoted 2 times

" # moon2351 1 year ago


AD is correct
upvoted 1 times

" # tgv 1 year ago


AAA DDD
---
upvoted 1 times

" # zolthar_z 1 year ago


One is D. I have the doubt between A and E, even E has the best practice option (use tags) missed the requirement of set-up a single invoice. So,
It's A & E
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with A,D
upvoted 2 times

" # Waiweng 1 year ago


it's AD
upvoted 3 times

" # digimaniac 1 year ago


E is redundant, in OU and consolidate billing, we already know the detailed billing of sub companies.
upvoted 1 times

" # trap 1 year ago


Correct Answer:D,E
Consolidated billing is enabled by default in AWS Organizations (multi-account strategy need AWS Organization)

https://aws.amazon.com/organizations/faqs/

Q: Which central governance and management capabilities does AWS Organizations enable?
AWS Organizations enables the following capabilities:

Automate AWS account creation and management, and provision resources with AWS CloudFormation Stacksets
Maintain a secure environment with policies and management of AWS security services
Govern access to AWS services, resources, and regions
Centrally manage policies across multiple AWS accounts
Audit your environment for compliance
View and manage costs with consolidated billing
Configure AWS services across multiple accounts
upvoted 7 times

" # kiev 1 year ago


I got DE but reading this forum many people went with AD but surely tagging is a better way of separating resources?
upvoted 6 times

" # Kian1 1 year ago


I will go with D,E trust relationship+STS and Tags. E is more convincing to me than A
upvoted 4 times

" # Ebi 1 year ago


Ad my choice
upvoted 2 times

Question #505 Topic 1

A company deployed a three-tier web application in two regions: us-east-1 and eu-west-1. The application must be active in both regions at the
same time. The database tier of the application uses a single Amazon RDS Aurora database globally, with a master in us-east-1 and a read replica
in eu-west-1. Both regions are connected by a VPN.
The company wants to ensure that the application remains available even in the event of a region-level failure of all of the application's
components. It is acceptable for the application to be in read-only mode for up to 1 hour. The company plans to configure two Amazon Route 53
record sets, one for each of the regions.
How should the company complete the configuration to meet its requirements while providing the lowest latency for the application end-users?
(Choose two.)

A. Use failover routing and configure the us-east-1 record set as primary and the eu-west-1 record set as secondary. Configure an HTTP health
check for the web application in us-east-1, and associate it to the us-east-1 record set.

B. Use weighted routing and configure each record set with a weight of 50. Configure an HTTP health check for each region, and attach it to
the record set for that region.

C. Use latency-based routing for both record sets. Configure a health check for each region and attach it to the record set for that region.

D. Configure an Amazon CloudWatch alarm for the health checks in us-east-1, and have it invoke an AWS Lambda function that promotes the
read replica in eu-west-1.

E. Configure Amazon RDS event notifications to react to the failure of the database in us-east-1 by invoking an AWS Lambda function that
promotes the read replica in eu-west-1.

Correct Answer: CE

Community vote distribution: CE (100%)

" # huhupai Highly Voted $ 1 year, 1 month ago


I would go for C, E.
upvoted 25 times

" # aws_arn_name 1 year ago


With E, failover executes only when the DB fails. What if just the application fails but the DB does not?
upvoted 3 times

" # rb39 11 months ago


then C will take care of using web servers in the other region.
upvoted 2 times

" # LCC92 1 year ago


Ans is CD.
For E: RDS event does not support regional failure events. RDS event can only send to SNS.
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Events.Messages.html
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Events.Subscribing.html
upvoted 10 times

" # donathon Highly Voted $ 1 year, 1 month ago


CD.
A\B: This would not be based on latency.
E: Amazon RDS uses the Amazon Simple Notification Service (Amazon SNS) to provide notification when an Amazon RDS event occurs. These
notifications can be in any notification form supported by Amazon SNS for an AWS Region, such as an email, a text message, or a call to an
HTTP endpoint. https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Events.html
upvoted 23 times

" # dpvnme 1 year, 1 month ago


Yep CD. E only send notification, can't invoke lambda
upvoted 8 times

" # Warrenn 1 year, 1 month ago


https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/monitoring-health-checks.html you can use SNS and lambda on the SNS
topic
upvoted 3 times

" # joanneli77 3 weeks, 5 days ago


...then the answer should say to use RDS to send to SNS to invoke Lambda. RDS can't directly invoke lambda as-written.
upvoted 1 times
" # tan9 1 year, 1 month ago
I will go for C, E.

D: Configure an Amazon CloudWatch alarm for the health checks in us-east-1, and have it invoke an AWS Lambda function that promotes the
read replica in eu-west-1.

How can an alarm configured in one region invoke a function in another region if the region itself is in a failure state?
upvoted 5 times

" # cinopi 1 year, 1 month ago


I am also with C,E

I can see a few RDS events regarding failure, which I don't see in CloudWatch

Example:
failure

RDS-EVENT-0031
The DB instance has failed due to an incompatible configuration or an underlying storage issue. Begin a point-in-time-restore for the DB
instance.
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Events.html
upvoted 1 times

" # Kopa 1 year ago


Using Amazon CloudWatch alarms, you can set up metric thresholds and send alerts to Amazon Simple Notification Service (SNS). SNS
can send notifications using e-mail, HTTP(S) endpoints, and Short Message Service (SMS) messages to mobile phones, and it can even
trigger a Lambda function.

https://aws.amazon.com/blogs/developer/send-real-time-amazon-cloudwatch-alarm-notifications-to-amazon-chime/
upvoted 1 times

" # Kopa 1 year ago


For me its C,E. D need SNS to trigger Lambda. Answer E AWS RDS events trigger Lambda directly.
upvoted 1 times

" # kirrim 1 year ago


RDS events also have to use SNS to trigger Lambda:

https://docs.aws.amazon.com/lambda/latest/dg/services-rds.html

"Amazon RDS sends notifications to an Amazon Simple Notification Service (Amazon SNS) topic, which you can configure to invoke
a Lambda function. Amazon SNS wraps the message from Amazon RDS in its own event document and sends it to your function."
upvoted 1 times

" # b3llman 1 year ago


You can configure Route53 to trigger alarms and send notifications to SNS when health checks report unhealthy and have SNS to trigger a
Lambda function to do stuff. So, D definitely works.
upvoted 3 times

" # rcher 1 year ago


Well the key is, Cloudwatch Alarm is invoking Lambda directly (based on Ans D and i think its not supported now), so i think it doesn't
work?
upvoted 2 times

" # gpark 1 year ago


@rcher
That's for a great point.
Was so confused on why D is not right even if E is correct.
upvoted 1 times

" # AYANtheGLADIATOR Most Recent % 2 months ago


C, D because the RDS event notification can't invoke the Lambda if it's down.
upvoted 1 times

" # bobsmith2000 6 months, 2 weeks ago


Selected Answer: CE
C for latency-based routing
E. Amazon RDS event -> EventBridge -> Lambda
upvoted 1 times
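
Whichever trigger is used (a CloudWatch alarm via SNS, an RDS event subscription, or EventBridge), the promotion Lambda itself is small. Here is a minimal sketch that promotes the cross-region Aurora read replica cluster in eu-west-1; the cluster identifier is a hypothetical placeholder.

# Sketch of the promotion Lambda for answers D/E: promote the eu-west-1
# Aurora read replica cluster. The cluster identifier is hypothetical.
import boto3

def lambda_handler(event, context):
    rds = boto3.client("rds", region_name="eu-west-1")
    # Promotes the cross-region replica cluster to a standalone, writable cluster.
    rds.promote_read_replica_db_cluster(
        DBClusterIdentifier="app-aurora-replica-euw1"
    )
    return {"status": "promotion started"}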

" # jyrajan69 9 months ago


Not sure why no one here is even considering A as an answer, when you can configure it as ACTIVE-ACTIVE (https://docs.aws.amazon.com
/Route53/latest/DeveloperGuide/dns-failover-types.html) and given that we have an hour then E is the next part of the solution. So for me , it is
definitely AE. There is nothing here that even talks about Latency
upvoted 5 times

" # Burhan521 7 months, 2 weeks ago


because what would happen if the EU region was down. The users in EU wouldnt be routed to US
upvoted 1 times

" # frankzeng 10 months, 1 week ago


C, E. When there is any issue with the primary RDS, an RDS event about replication will occur and send an SNS notification. The health check for each region in
R53 can include the health of the database, the application, and other components.
upvoted 1 times
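
For completeness, a minimal sketch of the latency-based record sets from answer C, each tied to a regional health check. The hosted zone ID, record name, IP addresses, and health check IDs are hypothetical placeholders.

# Sketch of answer C: two latency-based Route 53 records, one per region,
# each tied to a regional health check. All IDs and IPs are hypothetical.
import boto3

route53 = boto3.client("route53")

def latency_record(region, set_id, ip, health_check_id):
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.example.com",
            "Type": "A",
            "SetIdentifier": set_id,
            "Region": region,
            "TTL": 60,
            "ResourceRecords": [{"Value": ip}],
            "HealthCheckId": health_check_id,
        },
    }

route53.change_resource_record_sets(
    HostedZoneId="Z0EXAMPLE",
    ChangeBatch={"Changes": [
        latency_record("us-east-1", "app-use1", "203.0.113.10", "hc-use1-example"),
        latency_record("eu-west-1", "app-euw1", "203.0.113.20", "hc-euw1-example"),
    ]},
)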

" # AzureDP900 11 months, 1 week ago


CE is my answer
upvoted 1 times

" # cldy 11 months, 1 week ago


C. Use latency-based routing for both record sets. Configure a health check for each region and attach it to the record set for that region.
D. Configure an Amazon CloudWatch alarm for the health checks in us-east-1, and have it invoke an AWS Lambda function that promotes the
read replica in eu- west-1.
upvoted 1 times

" # wahlbergusa 11 months, 1 week ago


C and D for me.
E does not cover the region failure or web/app tier failures.
upvoted 1 times

" # nsei 1 year ago


I will go for C and D. E does not cover other application component failure
upvoted 2 times

" # wannaaws 1 year ago


I suppose for D, it meant for the Route53 health check, i.e, "Configure an Amazon CloudWatch alarm for the (Route53) health checks in us-
east-1, and have it invoke an AWS Lambda function that promotes the read replica in eu-west-1. ". This makes more sense
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/monitoring-health-checks.html
upvoted 1 times

" # kevin1024 1 year ago


It look likes B, C
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover-complex-configs.html
upvoted 1 times

" # Madhu654 1 year ago


C and E

Create Eventbridge Rule that triggers on RDS Aurora event.


https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/rds-cloud-watch-events.html
upvoted 1 times

" # DerekKey 1 year ago


C&D correct
Use latency-based routing for both record sets - "The application must be active in both regions at the same time" & "while providing the lowest
latency for the application end-users"
Configure a health check for each region - "the application remains available even in the event of a region-level failure"
It is acceptable for the application to be in read-only mode for up to 1 hour - "Configure an Amazon CloudWatch alarm for the health checks in
us-east-1, and have it invoke an AWS Lambda function that promotes the read replica in eu-west-1."
E wrong - RDS can only send event to SNS. Lambda must subscribe to that SNS to be invoked
upvoted 3 times

" # TiredDad 1 year ago


https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/rds-cloud-watch-events.html
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with C,D
upvoted 2 times

" # Pb55 1 year ago


Must be CD as RDS health check is regional and won’t respond to a regional failure as it will be down.
upvoted 2 times

Question #506 Topic 1

A company runs a Windows Server host in a public subnet that is configured to allow a team of administrators to connect over RDP to
troubleshoot issues with hosts in a private subnet. The host must be available at all times outside of a scheduled maintenance window, and needs
to receive the latest operating system updates within 3 days of release.
What should be done to manage the host with the LEAST amount of administrative effort?

A. Run the host in a single-instance AWS Elastic Beanstalk environment. Configure the environment with a custom AMI to use a hardened
machine image from AWS Marketplace. Apply system updates with AWS Systems Manager Patch Manager.

B. Run the host on AWS WorkSpaces. Use Amazon WorkSpaces Application Manager (WAM) to harden the host. Configure Windows automatic
updates to occur every 3 days.

C. Run the host in an Auto Scaling group with a minimum and maximum instance count of 1. Use a hardened machine image from AWS
Marketplace. Apply system updates with AWS Systems Manager Patch Manager.

D. Run the host in AWS OpsWorks Stacks. Use a Chef recipe to harden the AMI during instance launch. Use an AWS Lambda scheduled event
to run the Upgrade Operating System stack command to apply system updates.

Correct Answer: B
Reference:
https://docs.aws.amazon.com/workspaces/latest/adminguide/workspace-maintenance.html

Community vote distribution: C (50%) B (50%)

" # Moon Highly Voted $ 1 year, 1 month ago


I would go with "C".
The least administrative, and most available is min/max 1 autoscaling. Hardened images are available in Market place. Patch manager is a
service to patch windows with updates.
upvoted 40 times

" # donathon Highly Voted $ 1 year, 1 month ago


B, least amount of effort.
https://docs.aws.amazon.com/workspaces/latest/adminguide/workspace-maintenance.html
A\C: Does not make sense
D: a lot more work than B.
upvoted 22 times

" # likku 1 year, 1 month ago


"manage the host with the LEAST amount of administrative effort" read the important point manage the host which means we have take the
managing of host into account. Option is C.
upvoted 11 times

" # Jupi 1 year ago


WorkSpaces doesn't support a Windows Server host.
upvoted 6 times

" # Smart 1 year, 1 month ago


No point in setting up an entire workspace along with directory inside a VPC to connect private instance.
upvoted 5 times

" # Kopa 1 year ago


Why C doesn't make sense?
upvoted 2 times

" # Dionenonly Most Recent % 3 weeks, 2 days ago


Selected Answer: B
If you are going to just consider what is asked B is the solution with the least amount of efforts.
upvoted 1 times

" # redipa 3 weeks ago


WorkSpaces doesn't use Windows Server, just client OS. It can't run on WorkSpaces so B is automatically out
upvoted 1 times

" # Jonfernz 4 weeks, 1 day ago

I like C but not sure how would an ASG serve any purpose in this scenario. Plus, WorkSpaces makes even less sense. Firstly, it's not cheap
especially to just be used as a host server.
upvoted 1 times

" # joanneli77 3 weeks, 5 days ago


It ensures one is always running. This handles AZ failure.
upvoted 2 times
" # linuxmaster007 1 month, 2 weeks ago
As per tutorials dojo answer is C - workspace
upvoted 2 times

" # Santo99 3 months ago


Selected Answer: C
Workspaces is for desktops and not for servers
upvoted 2 times

" # lurker8000 7 months, 2 weeks ago


Selected Answer: B
Voting for B, https://docs.aws.amazon.com/workspaces/latest/adminguide/workspace-maintenance.html I this document there's a link to MS site
to configure Group policy for patching: https://docs.microsoft.com/en-us/windows-server/administration/windows-server-update-services/deploy
/4-configure-group-policy-settings-for-automatic-updates
upvoted 2 times

" # cannottellname 9 months, 1 week ago


CCCCCCCCC
upvoted 1 times

" # pititcu667 9 months, 3 weeks ago


Selected Answer: C
c just because they mention server. it's misleading because b workspace seems right except it's just the desktop.
upvoted 1 times

" # tkanmani76 10 months, 2 weeks ago


In the WorkSpaces FAQ, check the query "How will my Amazon WorkSpaces be patched with software updates?"
The updates are automatically managed and delivered every Sunday - there is no mention of an ability to change this to something earlier, like 3 days. Hence WAM
is not an option and the right choice should be C.
upvoted 2 times
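
Tying this back to the "within 3 days of release" requirement, here is a minimal Patch Manager sketch that auto-approves Windows security and critical updates 3 days after release and associates the baseline with a patch group. The baseline name and patch group are hypothetical placeholders.

# Sketch of the Patch Manager piece of answer C: approve Windows security and
# critical updates 3 days after release. Names are hypothetical placeholders.
import boto3

ssm = boto3.client("ssm")

baseline = ssm.create_patch_baseline(
    Name="windows-rdp-host-baseline",
    OperatingSystem="WINDOWS",
    ApprovalRules={
        "PatchRules": [{
            "PatchFilterGroup": {
                "PatchFilters": [
                    {"Key": "CLASSIFICATION", "Values": ["SecurityUpdates", "CriticalUpdates"]}
                ]
            },
            "ApproveAfterDays": 3,
        }]
    },
)

# Associate the baseline with the patch group the RDP host is tagged into.
ssm.register_patch_baseline_for_patch_group(
    BaselineId=baseline["BaselineId"],
    PatchGroup="rdp-admin-hosts",
)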

" # AzureDP900 11 months, 1 week ago


I'll go with C
upvoted 1 times

" # pcops 11 months, 2 weeks ago


I will go with C.
upvoted 1 times

" # StelSen 1 year ago


I rejected B as I couldn't find any docs/links to prove WAM can be used to harden Workspaces. WAM purpose is different.
https://aws.amazon.com/workspaces/applicationmanager/. Hence choosing C.
upvoted 1 times

" # nodogoshi 1 year ago


B provide LEAST amount of administrative effort.
upvoted 1 times

" # blackgamer 1 year ago


C is my answer. It supports high availability by using auto scaling group, system patch manager can help on patching of server automatically as
well.
upvoted 2 times

" # 01037 1 year ago


Prefer C.
But B is also working, isn't it?
A
I don't think Elastic Beanstalk is used for this.
There are two kinds of environments, web server or worker.
It's not for bastion host.

D
AWS OpsWorks Stacks does not provide a way to apply updates to online Windows instances.
https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-os-windows.html
upvoted 1 times

" # ajeeshb 1 year ago


C is my answer.
B cannot be the answer due to below two reasons
1. AWS workspace is a desktop service. You cannot have windows server host
2. WAM is for application deployment to workspace, not to harden the workspace
upvoted 2 times

Question #507 Topic 1

A company has a large on-premises Apache Hadoop cluster with a 20 PB HDFS database. The cluster is growing every quarter by roughly 200
instances and 1
PB. The company's goals are to enable resiliency for its Hadoop data, limit the impact of losing cluster nodes, and significantly reduce costs. The
current cluster runs 24/7 and supports a variety of analysis workloads, including interactive queries and batch processing.
Which solution would meet these requirements with the LEAST expense and down time?

A. Use AWS Snowmobile to migrate the existing cluster data to Amazon S3. Create a persistent Amazon EMR cluster initially sized to handle
the interactive workload based on historical data from the on-premises cluster. Store the data on EMRFS. Minimize costs using Reserved
Instances for master and core nodes and Spot Instances for task nodes, and auto scale task nodes based on Amazon CloudWatch metrics.
Create job-specific, optimized clusters for batch workloads that are similarly optimized.

B. Use AWS Snowmobile to migrate the existing cluster data to Amazon S3. Create a persistent Amazon EMR cluster of a similar size and
configuration to the current cluster. Store the data on EMRFS. Minimize costs by using Reserved Instances. As the workload grows each
quarter, purchase additional Reserved Instances and add to the cluster.

C. Use AWS Snowball to migrate the existing cluster data to Amazon S3. Create a persistent Amazon EMR cluster initially sized to handle the
interactive workloads based on historical data from the on-premises cluster. Store the data on EMRFS. Minimize costs using Reserved
Instances for master and core nodes and Spot Instances for task nodes, and auto scale task nodes based on Amazon CloudWatch metrics.
Create job-specific, optimized clusters for batch workloads that are similarly optimized.

D. Use AWS Direct Connect to migrate the existing cluster data to Amazon S3. Create a persistent Amazon EMR cluster initially sized to handle
the interactive workload based on historical data from the on-premises cluster. Store the data on EMRFS. Minimize costs using Reserved
Instances for master and core nodes and Spot Instances for task nodes, and auto scale task nodes based on Amazon CloudWatch metrics.
Create job-specific, optimized clusters for batch workloads that are similarly optimized.

Correct Answer: A
To migrate large datasets of 10 PB or more in a single location, you should use Snowmobile. For datasets less than 10 PB or distributed in
multiple locations, you should use Snowball. In addition, you should evaluate the amount of available bandwidth in your network backbone. If
you have a high speed backbone with hundreds of Gb/s of spare throughput, then you can use Snowmobile to migrate the large datasets all at
once. If you have limited bandwidth on your backbone, you should consider using multiple Snowballs to migrate the data incrementally.

Community vote distribution


A (100%)

" # MGM Highly Voted $ 1 year, 1 month ago


A
Q: How should I choose between Snowmobile and Snowball?

To migrate large datasets of 10PB or more in a single location, you should use Snowmobile. For datasets less than 10PB or distributed in multiple
locations, you should use Snowball. In addition, you should evaluate the amount of available bandwidth in your network backbone. If you have a
high speed backbone with hundreds of Gb/s of spare throughput, then you can use Snowmobile to migrate the large datasets all at once. If you
have limited bandwidth on your backbone, you should consider using multiple Snowballs to migrate the data incrementally.
upvoted 28 times

" # Moon Highly Voted $ 1 year, 1 month ago


I support answer "A".
Snowmobile, is used for PB of data, Snowball can't support that. (so A, or B).
Then, A is more cost effective.
upvoted 12 times

" # Moon 1 year, 1 month ago


for snowball edge, it support 100TB, then you may need 100 of them to make 10PB. So better to have Snowmobile.
upvoted 2 times

" # bilcat 1 year, 1 month ago


even less than that. Snowball Edge has 83TB of usable disk space.
upvoted 2 times

" # Student1950 Most Recent % 3 months, 3 weeks ago


never mind, we can do autoscaling with spot instance pooling as the link. It should be A
https://aws.amazon.com/getting-started/hands-on/ec2-auto-scaling-spot-instances/
upvoted 1 times


" # Student1950 3 months, 3 weeks ago


with A, Can we apply autoscaling on spot instances ? I believe it should be B then
Minimize costs using Reserved Instances for master and core nodes and Spot Instances for task nodes, and auto scale task nodes based on
Amazon CloudWatch metrics
upvoted 1 times

" # Anhdd 5 months ago


Selected Answer: A
A for sure, no doubt
upvoted 1 times

" # CGJoon 9 months, 1 week ago


The question says: "The present cluster is available 24 hours a day". Doesn't that mean that using spot instances for task nodes in option B
might not give you 24 hours a day availability? In that case, wouldn't the correct answer be option A?
upvoted 1 times

" # cldy 10 months, 1 week ago


A. Snowmobile for PB data.
upvoted 1 times

" # Ni_yot 10 months, 2 weeks ago


A for me. Snowmobile supports PBs of data
upvoted 1 times

" # Ni_yot 10 months, 2 weeks ago


You also want to use spot instances for batch jobs
upvoted 1 times

" # AzureDP900 11 months, 1 week ago


A is right
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with A
upvoted 2 times

" # Waiweng 1 year ago


it's A
upvoted 3 times

" # Kian1 1 year ago


going with A
upvoted 1 times

" # Ebi 1 year ago


Answer is A not C,
Snowmobile is for data sets over 10PB
upvoted 3 times

" # Ashodwbi 1 year ago


Guys, A and C are same answer
upvoted 1 times

" # Justu 1 year ago


SnowMobile is not the same as SnowBall!!! Over 10PB of data -> USE SnowMobile! -> A
upvoted 2 times

" # consultsk 1 year ago


I am not sure if anyone noticed. A and C both are having the same verbiage. Word to Word. I am not sure of the arguments made here. A is
correct and eventually C also. :) A or C.
upvoted 1 times

" # consultsk 1 year ago


Sorry, my misunderstanding ... A is correct. A is SnowMobile, C is SnowBall. Except that all are the same. A is only correct.
upvoted 1 times

" # petebear55 1 year ago


I was initially drawn to C, however it is clearly A having read this
upvoted 2 times

" # T14102020 1 year ago


Correct answer is A. Snowmobile and Spot instances


upvoted 2 times


Question #508 Topic 1

A company is running a large application on premises. Its technology stack consists of Microsoft .NET for the web server platform and Apache
Cassandra for the database. The company wants to migrate this application to AWS to improve service reliability. The IT team also wants to
reduce the time it spends on capacity management and maintenance of this infrastructure. The Development team is willing and available to make
code changes to support the migration.
Which design is the LEAST complex to manage after the migration?

A. Migrate the web servers to Amazon EC2 instances in an Auto Scaling group that is running .NET. Migrate the existing Cassandra database
to Amazon Aurora with multiple read replicas, and run both in a Multi-AZ mode.

B. Migrate the web servers to an AWS Elastic Beanstalk environment that is running the .NET platform in a Multi-AZ Auto Scaling
con+guration. Migrate the Cassandra database to Amazon EC2 instances that are running in a Multi-AZ con+guration.

C. Migrate the web servers to an AWS Elastic Beanstalk environment that is running the .NET platform in a Multi-AZ Auto Scaling
con+guration. Migrate the existing Cassandra database to Amazon DynamoDB.

D. Migrate the web servers to Amazon EC2 instances in an Auto Scaling group that is running .NET. Migrate the existing Cassandra database
to Amazon DynamoDB.

Correct Answer: D

Community vote distribution


C (100%)

" # donathon Highly Voted $ 1 year, 1 month ago


C
A\B\D: Not the least complicated to manage.
upvoted 34 times

" # MrP Highly Voted $ 1 year, 1 month ago


Apache Cassandra is NoSQL ( http://cassandra.apache.org/ ), which limits us to other NoSQL solutions (DynamoDB). Beanstalk supports .Net,
so C is the only one fulfilling both requirements of LEAST complexity for DB and application migration - and also post-migration efforts (which
was the main question).
upvoted 16 times

" # CloudHandsOn Most Recent % 3 months, 2 weeks ago


C. EASIEST plays a big part in this question. Beanstalk and DynamoDB (excluding configuring an EC2 instance) would be ideal here.
upvoted 1 times

" # KennethTam 7 months, 3 weeks ago


Selected Answer: C
C is correct
upvoted 3 times

" # cldy 11 months ago


C. Migrate the web servers to an AWS Elastic Beanstalk environment that is running the .NET platform in a Multi-AZ Auto Scaling configuration.
Migrate the existing Cassandra database to Amazon DynamoDB.
upvoted 1 times

" # AzureDP900 11 months, 1 week ago


IT staff wishes to decrease the amount of time spent on capacity management and infrastructure maintenance ---- So C is correct
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with C
upvoted 2 times

" # blackgamer 1 year ago


I will go with C as it is easier to maintian.
upvoted 1 times

" # Waiweng 1 year ago


it's C
upvoted 4 times


" # Kian1 1 year ago


going with C
upvoted 2 times

" # Ebi 1 year ago


I go with C
upvoted 5 times

" # Bulti 1 year ago


Answer is C. Elastic beanstalk and Dynamo DB are both managed services and the question mentioned that the developers are ready to refactor
the application. Other options are not AWS managed options and come with overhead.
upvoted 1 times

" # T14102020 1 year ago


Correct is C. Easy. Elastic Beanstalk for .NET and DynamoDB for CASSANDRA
upvoted 2 times

" # jackdryan 1 year ago


I'll go with C
upvoted 5 times

" # gookseang 1 year ago


seems D, my friend said C ..... so complex question, I think C is not avalliable for Large product
upvoted 1 times

" # gookseang 1 year ago


change to C
upvoted 1 times

" # Spiri79 1 year ago


I vote C. D is not reducing the management effort as well.
upvoted 3 times

" # enk 1 year ago


D is the answer
'Large application' - Beanstalk shouldn't be used for large production applications therefore B&C are bad choices.
upvoted 1 times

" # ipindado2020 1 year ago


It is C...

D is just an scaling group... no load balancer, not multizone....


requirement to "improve service reliability"
upvoted 1 times


Question #509 Topic 1

A company has a requirement that only allows specially hardened AMIs to be launched into public subnets in a VPC, and for the AMIs to be associated with a specific security group. Allowing non-compliant instances to launch into the public subnet could present a significant security risk if they are allowed to operate.
A mapping of approved AMIs to subnets to security groups exists in an Amazon DynamoDB table in the same AWS account. The company created an AWS Lambda function that, when invoked, will terminate a given Amazon EC2 instance if the combination of AMI, subnet, and security group are not approved in the DynamoDB table.
What should the Solutions Architect do to MOST quickly mitigate the risk of compliance deviations?

A. Create an Amazon CloudWatch Events rule that matches each time an EC2 instance is launched using one of the allowed AMIs, and
associate it with the Lambda function as the target.

B. For the Amazon S3 bucket receiving the AWS CloudTrail logs, create an S3 event notification configuration with a filter to match when logs
contain the ec2:RunInstances action, and associate it with the Lambda function as the target.

C. Enable AWS CloudTrail and configure it to stream to an Amazon CloudWatch Logs group. Create a metric filter in CloudWatch to match
when the ec2:RunInstances action occurs, and trigger the Lambda function when the metric is greater than 0.

D. Create an Amazon CloudWatch Events rule that matches each time an EC2 instance is launched, and associate it with the Lambda function
as the target.

Correct Answer: D
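
A minimal sketch of the Lambda target described in the question, assuming a hypothetical DynamoDB table named ApprovedAmiMappings with AmiId, SubnetId, and SecurityGroupId attributes (the real table layout is not given): the function looks up the launched instance's AMI, subnet, and security groups and terminates the instance if the combination is not approved.

import boto3

dynamodb = boto3.resource("dynamodb")
ec2 = boto3.client("ec2")
approval_table = dynamodb.Table("ApprovedAmiMappings")  # hypothetical table name

def handler(event, context):
    # EC2 state-change events delivered by CloudWatch Events carry the instance id.
    instance_id = event["detail"]["instance-id"]
    instance = ec2.describe_instances(InstanceIds=[instance_id])[
        "Reservations"][0]["Instances"][0]

    ami = instance["ImageId"]
    subnet = instance.get("SubnetId", "")
    security_groups = {sg["GroupId"] for sg in instance.get("SecurityGroups", [])}

    # Assumed item layout: one record per approved AMI with its subnet and security group.
    item = approval_table.get_item(Key={"AmiId": ami}).get("Item")
    compliant = (
        item is not None
        and item.get("SubnetId") == subnet
        and item.get("SecurityGroupId") in security_groups
    )
    if not compliant:
        ec2.terminate_instances(InstanceIds=[instance_id])
    return {"instance": instance_id, "compliant": compliant}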

" # donathon Highly Voted $ 1 year, 1 month ago


D
A: This covers the hardened AMIs but not non-compliant ones. We want to execute the termination when the non-compliant ones launch.
B: S3 event notification has no filter.
C: Too tedious
upvoted 35 times

" # PacoDerek 1 year, 1 month ago


S3 event notification do have filter function
https://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html
You can configure notifications to be filtered by the prefix and suffix of the key name of objects
upvoted 2 times

" # sb333 1 year, 1 month ago


However, that's not the kind of filter required. You would need to be able to filter based on the content of an object, not on the name of the
object. S3 event notification can only filter on name of object.
upvoted 6 times

" # sarah1 Highly Voted $ 1 year, 1 month ago


a: cloudwatch cannot validate "allowed AMIs"
b: cloudtrail logs do not have a unique identifier for s3 event filters to trigger off of - https://docs.aws.amazon.com/awscloudtrail/latest/userguide
/cloudtrail-log-file-examples.html
c: cloudwatch logs group - metric filter can only trigger SNS (an additional step not mentioned, before triggering lambda)
d: cloudwatch events rule can trigger off of "pending" (as mentioned by others) and can target a lambda function directly
upvoted 11 times

" # AzureDP900 Most Recent % 10 months, 2 weeks ago


D for sure
upvoted 2 times

" # WhyIronMan 1 year ago


I'll go with D
upvoted 3 times

" # Waiweng 1 year ago


it's D
upvoted 3 times

" # Amitv2706 1 year ago


For those who are voting for C :


Can CW metric filter directly trigger lambda ?

Answer is D.
upvoted 1 times

" # Kian1 1 year ago


going with D
upvoted 2 times

" # Ebi 1 year ago


D is the answer
upvoted 5 times

" # Bulti 1 year ago


correct answer is D.
upvoted 2 times

" # T14102020 1 year ago


Correct answer is D. CloudWatch and Lambda for all instances
upvoted 1 times

" # jackdryan 1 year ago


I'll go with D
upvoted 3 times

" # gookseang 1 year ago


seems D
upvoted 1 times

" # gookseang 1 year ago


seems D
upvoted 1 times

" # sam422 1 year ago


I go with D after reading this document https://d1.awsstatic.com/whitepapers/aws-building-ami-factory-process-using-ec2-ssm-marketplace-
and-service-catalog.pdf
upvoted 1 times

" # sam422 1 year ago


I go with D after reading this document https://d1.awsstatic.com/whitepapers/aws-building-ami-factory-process-using-ec2-ssm-marketplace-
and-service-catalog.pdf
upvoted 1 times

" # fullaws 1 year ago


D is correct
upvoted 2 times

" # inf 1 year ago


Answer: C
A - incorrect - need to trigger when any EC2 instance is launched
B - incorrect - does not inspect the logs, thus can't filter on appropriate events such as starting of an instance. Instead send S3 to CloudWatch
and use an Event Rule
C - incorrect - should work but akin to using a sledgehammer to crack a nut. Also it should have mentioned CloudWatch Alarm to trigger SNS
then Lambda, however what details of the instance are being passed to Lambda from CloudWatch? the notification includes the "metric", its
value, time, etc - not the AMI, EC2 Instance ID, subnet, etc. Or should have said use Cloudwatch Event Rules, like the next answer does.
D - correct - takes 2 minutes to configure via the console - Event Rule <- [ [ EC2 Events <- "AWS API Call via CloudTrail" <- RunInstances] ->
Lambda]. Nothing stopping us using RunInstances as the trigger. Remember CloudTrail is enabled by default and "AWS API Call via CloudTrail" is
an AWS managed event source - simple
upvoted 6 times

" # Phat 1 year ago


I think you mean D
upvoted 3 times

" # hilft 3 months, 2 weeks ago


good good
upvoted 1 times


Question #510 Topic 1

A Solutions Architect must migrate an existing on-premises web application with 70 TB of static files supporting a public open-data initiative. The
Architect wants to upgrade to the latest version of the host operating system as part of the migration effort.
Which is the FASTEST and MOST cost-effective way to perform the migration?

A. Run a physical-to-virtual conversion on the application server. Transfer the server image over the internet, and transfer the static data to
Amazon S3.

B. Run a physical-to-virtual conversion on the application server. Transfer the server image over AWS Direct Connect, and transfer the static
data to Amazon S3.

C. Re-platform the server to Amazon EC2, and use AWS Snowball to transfer the static data to Amazon S3.

D. Re-platform the server by using the AWS Server Migration Service to move the code and data to a new Amazon EC2 instance.

Correct Answer: C

Community vote distribution


C (100%)

" # donathon Highly Voted $ 1 year, 1 month ago


C
A: This will be too slow.
B: Direct connect takes too long to provision.
C: Because the question did not state what is the bandwidth of the company, using Snowball to transfer 70TB make sense.
D: While this is possible, we do not know If the server is physical or virtual and SMS just migrate it does not upgrade. Wherelse in C you can
immediately select the best AMI to start and rely on Snowball to transfer the data.
upvoted 35 times

" # DashL 1 year ago


Agree with Ans C.
But all the reasons provided here doesn't sound right.
The key part of the question is: "Which is the FASTEST and MOST cost-effective way to perform the migration?"
A: First of all, you cannot convert a physical server to a virtual server. You have to create a new VM and migrate the applications and data.
Because the question did not state what is the bandwidth, it is difficult to calculate how long it will take to transfer data.
B: Reasons same as A plus it will take significant amount of time to provision DX
C: Seems to be the most reasonable solution.
D: SMS cannot migrate a physical server. On top of that Because the question did not state what is the bandwidth, it is difficult to calculate
how long it will take to transfer data.
upvoted 3 times

" # Moon Highly Voted $ 1 year, 1 month ago


I do support answer "C".
The snowball is the FASTEST option for transfer (next business day for delivery).
upvoted 8 times

" # sarah_t 1 year ago


Migration via Snowball takes about a week.
upvoted 1 times

" # joanneli77 Most Recent % 3 weeks, 5 days ago


Without knowing local bandwidth, you can't determine whether snowball is faster or not. The problem is with the question. 70TB may take a long
time to transfer, but it may not.
upvoted 1 times

" # Dionenonly 1 month, 3 weeks ago


Selected Answer: C
c is the most feasible answer here
upvoted 1 times

" # jj22222 9 months, 1 week ago


C looks right
upvoted 1 times

" # cldy 11 months ago


C. Re-platform the server to Amazon EC2, and use AWS Snowball to transfer the static data to Amazon S3.
upvoted 1 times


" # AzureDP900 11 months, 1 week ago


Internet speed is not provided, direct connect is expensive. So my answer is C.
upvoted 1 times

" # acloudguru 11 months, 1 week ago


C it is a easy one hope I can have it in my exam
upvoted 1 times

" # kirrim 1 year ago


Don't see this one mentioned, so just calling it out... another reason D is invalid is because it's storing the static file data in EBS, not in S3.
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with C
upvoted 1 times

" # Waiweng 1 year ago


it's C
upvoted 3 times

" # Ebi 1 year ago


I go with C
upvoted 5 times

" # 01037 1 year ago


C.
B is not only expensive, but also provision of DX takes time.
upvoted 2 times

" # Bulti 1 year ago


Answer is C. Using Snowball is going to be faster than Direct Connect. Other options are either not feasible or too slow.
upvoted 1 times

" # T14102020 1 year ago


Correct is C. Snowball and Re-platform
upvoted 2 times

" # jackdryan 1 year ago


I'll go with C
upvoted 5 times

" # gookseang 1 year ago


seems C
upvoted 1 times


Question #511 Topic 1

A company has an application that generates a weather forecast that is updated every 15 minutes with an output resolution of 1 billion unique
positions, each approximately 20 bytes in size (20 Gigabytes per forecast). Every hour, the forecast data is globally accessed approximately 5
million times (1,400 requests per second), and up to 10 times more during weather events. The forecast data is overwritten every update. Users of
the current weather forecast application expect responses to queries to be returned in less than two seconds for each request.
Which design meets the required request rate and response time?

A. Store forecast locations in an Amazon ES cluster. Use an Amazon CloudFront distribution targeting an Amazon API Gateway endpoint with
AWS Lambda functions responding to queries as the origin. Enable API caching on the API Gateway stage with a cache-control timeout set for
15 minutes.

B. Store forecast locations in an Amazon EFS volume. Create an Amazon CloudFront distribution that targets an Elastic Load Balancing group
of an Auto Scaling fleet of Amazon EC2 instances that have mounted the Amazon EFS volume. Set the cache-control timeout for 15 minutes in
the CloudFront distribution.

C. Store forecast locations in an Amazon ES cluster. Use an Amazon CloudFront distribution targeting an API Gateway endpoint with AWS
Lambda functions responding to queries as the origin. Create an Amazon Lambda@Edge function that caches the data locally at edge
locations for 15 minutes.

D. Store forecast locations in Amazon S3 as individual objects. Create an Amazon CloudFront distribution targeting an Elastic Load Balancing
group of an Auto Scaling fleet of EC2 instances, querying the origin of the S3 object. Set the cache-control timeout for 15 minutes in the
CloudFront distribution.

Correct Answer: C
Reference:
https://aws.amazon.com/blogs/networking-and-content-delivery/lambdaedge-design-best-practices/

Community vote distribution


D (67%) B (33%)
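
For the S3-backed options (C and D), one way to line the 15-minute refresh up with CloudFront's caching is to set a Cache-Control header when each forecast object is written; a minimal boto3 sketch with an assumed bucket and key naming scheme (not taken from the question):

import boto3

s3 = boto3.client("s3")

def publish_forecast(bucket, key, payload):
    # Edge caches expire together with the forecast update cycle.
    s3.put_object(
        Bucket=bucket,
        Key=key,
        Body=payload,
        CacheControl="max-age=900",  # 15 minutes
    )

# Example call with assumed names:
# publish_forecast("forecast-data", "positions/part-0001", payload_bytes)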

" # donathon Highly Voted $ 1 year, 1 month ago


I have new insight after doing this question for 2nd time.
B
A: Cache control should be done at the Cloudfront not API Stage.
B: EFS has better performance than S3. The data size is only 20GB so this seems suitable.
C: Lambda@Edge does not cache data. Lambda@Edge is a feature of Amazon CloudFront that lets you run code closer to users of your
application, which improves performance and reduces latency. With Lambda@Edge, you don't have to provision or manage infrastructure in
multiple locations around the world.
D: Why have the EC2 in the middle when CloudFront can set S3 as the origin?
upvoted 17 times

" # Ibranthovic 1 year, 1 month ago


I'm still not sure to be honest, I still prefer D over B.
upvoted 2 times

" # huhupai 1 year, 1 month ago


20GB is per forecast and forecast is updated every 15 minutes, use EFS is more expensive.
upvoted 2 times

" # Frank1 1 year, 1 month ago


Cache control should be done at API gateway level. https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-
caching.html
upvoted 3 times

" # JAWS1600 1 year, 1 month ago


A is good option. API Caching helps reduce calls made to origin.
https://medium.com/@bhargavshah2011/api-gateway-caching-3f86034ca491.
I just dont support B because it has EC2
Further more with number of location we have ( 1 billion) we need ES .Straight S3 wont do a good job for 15 minutes TTL.
upvoted 6 times

" # donathon Highly Voted $ 1 year, 1 month ago


D
Amazon EC2, Elastic Load Balancing, Amazon S3 buckets configured as website endpoints, or your own web server (HTTP). These are the only
origin that you can define for CloudFront.


EFS also has lower limits than S3, which makes it less suitable for this case, which may have 14k requests per second.
You can control how long your files stay in a CloudFront cache before CloudFront forwards another request to your origin. Reducing the duration
allows you to serve dynamic content. Increasing the duration means your users get better performance because your files are more likely to be
served directly from the edge cache. A longer duration also reduces the load on your origin.
To change the cache duration for an individual file, you can configure your origin to add a Cache-Control max-age or Cache-Control s-maxage
directive, or an Expires header field to the file.
upvoted 13 times

" # sarah1 1 year, 1 month ago


cloudfront can target APIgateway (and most other dns origins):
https://aws.amazon.com/premiumsupport/knowledge-center/api-gateway-cloudfront-distribution/
upvoted 1 times

" # tiana528 11 months, 1 week ago


Not D. D says `Store forecast locations in Amazon S3 as individual objects`, the question says `15-minute weather prediction with a resolution
of 1 billion distinct locations`. Uploading so many small objects to s3 every 15 minutes seems very ineffective. EFS is much more efficient.
upvoted 1 times

" # Vizz5585 Most Recent % 2 weeks, 6 days ago


Selected Answer: B
The answer is B.
Lambdas have concurrency limits
S3 has minimum storage limits
upvoted 1 times

" # tomosabc1 1 month, 1 week ago


Selected Answer: D
I think the answer should be D. The following is what I consolidated after reading the analysis from all other comments.
The question seems to be inspired from the actual case study of the weather company. All their data are stored in S3. https://aws.amazon.com
/solutions/case-studies/the-weather-company/

A(wrong): Cache-control is not available for API Gateway, for which it is TTL.
upvoted 2 times

" # tomosabc1 1 month, 1 week ago


B(wrong): EFS limits :
1 read = 1 Operation
1 Write = 5 Operations.
EFS supports a 35,000 read operations limit only if you are just READING and not WRITING anything.
EFS has a 7,000 write operations limit only if you are just WRITING and not READING anything.
So EFS cannot handle 1 billlion files ( each 20 bytes) write requests in 15mins.

C(wrong): Maximum RPS for API Gateway is 10,000requests/s, for lambda it is 1,000requests/s. They can't meet with the requirements of
maximum 14,000+ requests/s during whether events. In addition, Lambda@Edge is not used to cache data at edge locations for the specific
time.
https://aws.amazon.com/blogs/networking-and-content-delivery/lambdaedge-design-best-practices/
upvoted 1 times

" # linuxmaster007 1 month, 2 weeks ago


Answer is B. Lambda can only handle 10,000 request. Also B is the answer per dojo tutorials.
upvoted 2 times

" # Sumit_Kumar 2 months, 4 weeks ago


Amazon S3 automatically scales to high request rates. For example, your application can achieve at least 3,500 PUT/COPY/POST/DELETE or
5,500 GET/HEAD requests per second per partitioned prefix.
upvoted 1 times

" # cldy 11 months ago


B. Store forecast locations in an Amazon EFS volume. Create an Amazon CloudFront distribution that targets an Elastic Load Balancing group of
an Auto Scaling fleet of Amazon EC2 instances that have mounted the Amazon EFS volume. Set the cache-control timeout for 15 minutes in the
CloudFront distribution.
upvoted 1 times

" # AzureDP900 11 months, 1 week ago


D is right
upvoted 3 times

" # Kopa 1 year ago


Im going for B
upvoted 1 times

" # StelSen 1 year ago


The "cache-control timeout is possible in CloudFront only. API Gateway is time to live. Lambda@Edge don't have cache-control timeout option.
This left with only either B or D is right. Now, both B&D uses EC2/ASG. But From EC2, accessing EFS is faster than accessing S3.

33 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

https://dzone.com/articles/confused-by-aws-storage-options-s3-ebs-amp-efs-explained. So I chose "B".


upvoted 2 times
" # student22 1 year ago
B
---
B vs D - EFS better than S3 to query many small files frequently.
A & D - API gateway will throttle at 10k rpm by default.
upvoted 2 times

" # blackgamer 1 year ago


Yes, this is B. Lambda is out because of concurrent limit and response time, s3 is out because of update frequency.
upvoted 3 times

" # WhyIronMan 1 year ago


I'll go with D
upvoted 3 times

" # DashL 1 year ago


I have a question - hope someone can provide some insight. I did a lot of search, but couldn't find an answer.
As per the document: https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-limits.html, the Default quota for Lambda concurrent
executions is 1000, but can be increased to "Tens of thousands". So, why everybody is stuck on the figure of "1000" for lambda?
upvoted 1 times

" # DashL 1 year ago


Also, nowhere in the question it says that the solution has to be implemented immediately. So, there will be enough time to create a quota
increase request before implementing the solution.
upvoted 1 times

" # SheldonHofstadter 1 year ago


According to this https://aws.amazon.com/solutions/case-studies/the-weather-company/.. they built it with S3 and they state The platform is
robust enough to handle between 10 and 15 billion transactions each day at 100,000 to 150,000 per second, depending on the weather.”
upvoted 3 times

" # Waiweng 1 year ago


it's B
upvoted 3 times


Question #512 Topic 1

A company is using AWS CloudFormation to deploy its infrastructure. The company is concerned that, if a production CloudFormation stack is
deleted, important data stored in Amazon RDS databases or Amazon EBS volumes might also be deleted.
How can the company prevent users from accidentally deleting data in this way?

A. Modify the CloudFormation templates to add a DeletionPolicy attribute to RDS and EBS resources.

B. Configure a stack policy that disallows the deletion of RDS and EBS resources.

C. Modify IAM policies to deny deleting RDS and EBS resources that are tagged with an "aws:cloudformation:stack-name" tag.

D. Use AWS Config rules to prevent deleting RDS and EBS resources.

Correct Answer: A
With the DeletionPolicy attribute you can preserve or (in some cases) backup a resource when its stack is deleted. You specify a DeletionPolicy
attribute for each resource that you want to control. If a resource has no DeletionPolicy attribute, AWS CloudFormation deletes the resource by
default. To keep a resource when its stack is deleted, specify Retain for that resource. You can use retain for any resource. For example, you
can retain a nested stack, Amazon S3 bucket, or EC2 instance so that you can continue to use or modify those resources after you delete their
stacks.
Reference:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-deletionpolicy.html
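
A minimal sketch of option A, assuming trimmed placeholder resource properties (not a complete, deployable database definition): the stateful resources carry a DeletionPolicy so a stack delete keeps the EBS volume and takes a final RDS snapshot.

import json
import boto3

# Template fragment with DeletionPolicy on the stateful resources.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "AppDatabase": {
            "Type": "AWS::RDS::DBInstance",
            "DeletionPolicy": "Snapshot",  # take a final snapshot if the stack is deleted
            "Properties": {"Engine": "mysql", "DBInstanceClass": "db.t3.medium",
                           "AllocatedStorage": "100"},
        },
        "DataVolume": {
            "Type": "AWS::EC2::Volume",
            "DeletionPolicy": "Retain",    # keep the volume if the stack is deleted
            "Properties": {"Size": 100, "AvailabilityZone": "us-east-1a"},
        },
    },
}

template_body = json.dumps(template)
cloudformation = boto3.client("cloudformation")
# cloudformation.create_stack(StackName="prod-app", TemplateBody=template_body)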

" # donathon Highly Voted $ 1 year, 1 month ago


A
With the DeletionPolicy attribute you can preserve or (in some cases) backup a resource when its stack is deleted. You specify a DeletionPolicy
attribute for each resource that you want to control. If a resource has no DeletionPolicy attribute, AWS CloudFormation deletes the resource by
default. To keep a resource when its stack is deleted, specify Retain for that resource. You can use retain for any resource. For example, you can
retain a nested stack, Amazon S3 bucket, or EC2 instance so that you can continue to use or modify those resources after you delete their
stacks.
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-deletionpolicy.html
upvoted 25 times

" # huhupai Highly Voted $ 1 year, 1 month ago


I would go for A.
upvoted 11 times

" # cldy Most Recent % 11 months ago


A. Modify the CloudFormation templates to add a DeletionPolicy attribute to RDS and EBS resources.
upvoted 1 times

" # AzureDP900 11 months ago


A is right
upvoted 1 times

" # moon2351 1 year ago


The answer is definitely A.
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with A
upvoted 1 times

" # Waiweng 1 year ago


it's A
upvoted 2 times

" # Ebi 1 year ago


A is the answer
upvoted 3 times

" # Bulti 1 year ago


Answer is A
upvoted 2 times

" # T14102020 1 year ago


Correct is A. CloudFormation
upvoted 1 times
" # jackdryan 1 year ago
I'll go with A
upvoted 2 times

" # gookseang 1 year ago


AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
upvoted 1 times

" # ipindado2020 1 year ago


A for sure
upvoted 1 times

" # fullaws 1 year ago


A is correct
upvoted 1 times

" # NikkyDicky 1 year ago


A for sure
upvoted 1 times

" # paulwang 1 year, 1 month ago


A.
No doubt.
Just FYI,
You can prevent a stack from being accidentally deleted by enabling termination protection on the stack. That protect the whole stack from
delete.
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-protect-stacks.html
upvoted 4 times

" # amog 1 year, 1 month ago


Answer is A
upvoted 1 times


Question #513 Topic 1

A company is planning to migrate an application from on-premises to AWS. The application currently uses an Oracle database and the company
can tolerate a brief downtime of 1 hour when performing the switch to the new infrastructure. As part of the migration, the database engine will be
changed to MySQL. A
Solutions Architect needs to determine which AWS services can be used to perform the migration while minimizing the amount of work and time
required.
Which of the following will meet the requirements?

A. Use AWS SCT to generate the schema scripts and apply them on the target prior to migration. Use AWS DMS to analyze the current schema
and provide a recommendation for the optimal database engine. Then, use AWS DMS to migrate to the recommended engine. Use AWS SCT to
identify what embedded SQL code in the application can be converted and what has to be done manually.

B. Use AWS SCT to generate the schema scripts and apply them on the target prior to migration. Use AWS DMS to begin moving data from the
on-premises database to AWS. After the initial copy, continue to use AWS DMS to keep the databases in sync until cutting over to the new
database. Use AWS SCT to identify what embedded SQL code in the application can be converted and what has to be done manually.

C. Use AWS DMS to help identify the best target deployment between installing the database engine on Amazon EC2 directly or moving to
Amazon RDS. Then, use AWS DMS to migrate to the platform. Use AWS Application Discovery Service to identify what embedded SQL code in
the application can be converted and what has to be done manually.

D. Use AWS DMS to begin moving data from the on-premises database to AWS. After the initial copy, continue to use AWS DMS to keep the
databases in sync until cutting over to the new database. Use AWS Application Discovery Service to identify what embedded SQL code in the
application can be converted and what has to be done manually.

Correct Answer: B
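
A minimal sketch of the DMS piece of option B, assuming placeholder ARNs and a catch-all table mapping: a single replication task performs the initial full load and then stays in change-data-capture mode to keep the databases in sync until cutover.

import boto3

dms = boto3.client("dms")

dms.create_replication_task(
    ReplicationTaskIdentifier="oracle-to-mysql-cutover",
    SourceEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:SOURCE",  # on-premises Oracle
    TargetEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:TARGET",  # RDS for MySQL
    ReplicationInstanceArn="arn:aws:dms:us-east-1:111122223333:rep:INSTANCE",
    MigrationType="full-load-and-cdc",  # initial copy, then ongoing sync until cutover
    TableMappings='{"rules":[{"rule-type":"selection","rule-id":"1","rule-name":"include-all",'
                  '"object-locator":{"schema-name":"%","table-name":"%"},"rule-action":"include"}]}',
)

The schema itself would still be converted up front with AWS SCT, as the option describes; the task above only handles data movement and ongoing replication.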

" # donathon Highly Voted $ 1 year, 1 month ago


B
A: Need to minimize work and time required. MySQL has already been chosen why do we need to provide recommendation?
C\D: SCT needs to be used.
upvoted 22 times

" # sam422 1 year, 1 month ago


To move an instance to a placement group using the AWS CLI

Stop the instance using the stop-instances command.

Use the modify-instance-placement command and specify the name of the placement group to which to move the instance.

aws ec2 modify-instance-placement --instance-id i-0123a456700123456 --group-name MySpreadGroup


Start the instance using the start-instances command.
upvoted 1 times

" # huhupai Highly Voted $ 1 year, 1 month ago


AWS Application Discovery Service can't identify what embedded SQL code in the application, SCT can scan application source code for
embedded SQL statements and convert them as part of a database schema conversion project. So I think the correct answer is B.
upvoted 12 times

" # dpvnme 1 year, 1 month ago


Bbbbbbb
upvoted 5 times

" # cldy Most Recent % 11 months ago


B. Use AWS SCT to generate the schema scripts and apply them on the target prior to migration. Use AWS DMS to begin moving data from the
on-premises database to AWS. After the initial copy, continue to use AWS DMS to keep the databases in sync until cutting over to the new
database. Use AWS SCT to identify what embedded SQL code in the application can be converted and what has to be done manually.
upvoted 1 times

" # AzureDP900 11 months ago


I agree with B
upvoted 2 times

" # WhyIronMan 1 year ago


I'll go with B


upvoted 2 times
" # Waiweng 1 year ago
It's B
upvoted 2 times

" # Kelvin 1 year ago


BBBBBBBBBB
upvoted 1 times

" # Kian1 1 year ago


going with B
upvoted 2 times

" # Ebi 1 year ago


Answer is B
upvoted 4 times

" # rkbala 1 year ago


B is the correct answer
upvoted 2 times

" # Bulti 1 year ago


Answer is B not A as it doesn't seem to meet the 1 hour downtime req
upvoted 1 times

" # petebear55 1 year ago


This is a typical example of how aws tries to trip u up in the exam .. using terms like SCT instead of the full The AWS Schema Conversion Tool
(AWS SCT) which helps convert your existing database schema from one database engine to another. You can convert from a relational OLTP
schema or any supported data warehouse OLAP schema to Amazon RDS (for example, Amazon Aurora MySQL or Amazon Aurora PostgreSQL,
among others).
upvoted 1 times

" # T14102020 1 year ago


Correct is B. SCT + without DMS to analyze
upvoted 1 times

" # MeepMeep 1 year ago


BBBBBBBBBB
upvoted 1 times

" # jackdryan 1 year ago


I'll go with B
upvoted 3 times

" # gookseang 1 year ago


seems B
upvoted 1 times

" # Paitan 1 year ago


Option B
upvoted 1 times


Question #514 Topic 1

A company is using AWS to run an internet-facing production application written in Node.js. The Development team is responsible for pushing new
versions of their software directly to production. The application software is updated multiple times a day. The team needs guidance from a
Solutions Architect to help them deploy the software to the production fleet quickly and with the least amount of disruption to the service.
Which option meets these requirements?

A. Prepackage the software into an AMI and then use Auto Scaling to deploy the production fleet. For software changes, update the AMI and
allow Auto Scaling to automatically push the new AMI to production.

B. Use AWS CodeDeploy to push the prepackaged AMI to production. For software changes, reconfigure CodeDeploy with new AMI
identification to push the new AMI to the production fleet.

C. Use AWS Elastic Beanstalk to host the production application. For software changes, upload the new application version to Elastic
Beanstalk to push this to the production fleet using a blue/green deployment method.

D. Deploy the base AMI through Auto Scaling and bootstrap the software using user data. For software changes, SSH to each of the instances
and replace the software with the new version.

Correct Answer: A

Community vote distribution


C (100%)

" # Moon Highly Voted $ 1 year, 1 month ago


"C",
You can avoid this downtime by performing a blue/green deployment, where you deploy the new version to a separate environment, and then
swap CNAMEs of the two environments to redirect traffic to the new version instantly.
upvoted 23 times
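
A minimal sketch of the blue/green cutover described above, assuming hypothetical environment names blog-blue and blog-green (and that the green environment already exists with the same configuration): deploy the new version to the idle environment, then swap CNAMEs so traffic moves instantly and can be swapped back just as quickly.

import boto3

eb = boto3.client("elasticbeanstalk")

# Deploy the new application version to the idle ("green") environment first...
eb.update_environment(
    EnvironmentName="blog-green",
    VersionLabel="v2024-06-01-3",  # assumed version label
)

# ...then cut traffic over by exchanging the two environments' CNAMEs.
eb.swap_environment_cnames(
    SourceEnvironmentName="blog-blue",
    DestinationEnvironmentName="blog-green",
)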

" # donathon Highly Voted $ 1 year, 1 month ago


C
A\D: Not feasible
B: CodeDeploy does not push AMI.
upvoted 19 times

" # Dionenonly Most Recent % 1 month, 3 weeks ago


Selected Answer: C
C has the least amount of efforts
upvoted 1 times

" # Ni_yot 2 months, 3 weeks ago


yep C for me
upvoted 1 times

" # pal40sg 8 months, 1 week ago


Selected Answer: C
push this to the production fleet using a blue/green deployment method
upvoted 1 times

" # pititcu667 8 months, 2 weeks ago


Selected Answer: C
c seems to make sense
upvoted 1 times

" # AzureDP900 11 months ago


I will go with C
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with C
upvoted 1 times

" # Waiweng 1 year ago


it's C
upvoted 2 times


" # blackgamer 1 year ago


C is the correct answer
upvoted 1 times

" # Kian1 1 year ago


going with C
upvoted 2 times

" # Ebi 1 year ago


I go with C
upvoted 4 times

" # Firststack 1 year ago


Answer is C
upvoted 2 times

" # Bulti 1 year ago


Answer is C
upvoted 1 times

" # T14102020 1 year ago


Correct is C. Elastic Beanstalk
upvoted 1 times

" # jackdryan 1 year ago


I'll go with C
upvoted 4 times

" # gookseang 1 year ago


seems c
upvoted 1 times


Question #515 Topic 1

A company used Amazon EC2 instances to deploy a web fleet to host a blog site. The EC2 instances are behind an Application Load Balancer (ALB) and are configured in an Auto Scaling group. The web application stores all blog content on an Amazon EFS volume.
The company recently added a feature for bloggers to add video to their posts, attracting 10 times the previous user traffic. At peak times of day, users report buffering and timeout issues while attempting to reach the site or watch videos.
Which is the MOST cost-efficient and scalable deployment that will resolve the issues for users?

A. Reconfigure Amazon EFS to enable maximum I/O.

B. Update the blog site to use instance store volumes for storage. Copy the site contents to the volumes at launch and to Amazon S3 at
shutdown.

C. Configure an Amazon CloudFront distribution. Point the distribution to an S3 bucket, and migrate the videos from EFS to Amazon S3.

D. Set up an Amazon CloudFront distribution for all site contents, and point the distribution at the ALB.

Correct Answer: C
Reference:
https://aws.amazon.com/premiumsupport/knowledge-center/cloudfront-https-connection-fails/

" # donathon Highly Voted $ 1 year, 1 month ago


C
A: Issue seems to be latency and load related. EFS does not solve the issue since the issue lies with EC2.
B: Risky as an EC2 instance failure could corrupt the data.
D: Origin cannot point to ALB (either S3, EC2 or HTTP based)?
upvoted 19 times

" # leeo 1 year, 1 month ago


C looks more relevant ,but we can add ALB as CF origin.
https://aws.amazon.com/premiumsupport/knowledge-center/cloudfront-https-connection-fails/
upvoted 6 times

" # Ibranthovic 1 year, 1 month ago


We can add ALB as Cloudfront origin, then why not D ?
Why to use S3 when the data are already in EFS. and go to migration between S3 and EFS.
I think the right answer is D
upvoted 3 times

" # shammous 1 year ago


Because S3 is more cost-effective.
upvoted 1 times

" # ahmedghanem 1 year, 1 month ago


the deployment should be cost-effective and scalable
as u know EFS 10 time more expensive than S3
upvoted 12 times

" # PacoDerek 1 year, 1 month ago


origin can be ELB; the point of D is that the data to be served resides on EFS, so pointing to ELB is useless
upvoted 2 times

" # bobsmith2000 6 months ago


Origins:
Using an Amazon S3 bucket
Using a MediaStore container or a MediaPackage channel
Using an Application Load Balancer
Using a Lambda function URL
Using Amazon EC2 (or another custom origin)
Using CloudFront origin groups
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/restrict-access-to-load-balancer.html
upvoted 1 times

" # ashp Highly Voted $ 1 year, 1 month ago


C.
S3 Cheaper compare to EFS
CF woks better with S3
Using ALB which makes easy to point to Video file when needed


upvoted 9 times
" # cldy Most Recent % 11 months ago
C. Configure an Amazon CloudFront distribution. Point the distribution to an S3 bucket, and migrate the videos from EFS to Amazon S3.
upvoted 2 times

" # AzureDP900 11 months ago


C is right
upvoted 2 times

" # WhyIronMan 1 year ago


I'll go with C
upvoted 1 times

" # pradhyumna 1 year ago


I would go with D. Option C may only solve the buffering issue with videos and there would be additional changes required for the app to provide
links to s3. The question also points at users having issues in reaching the site which can't be solved with option C, so an overall caching solution
like option D would really help.
upvoted 1 times

" # Waiweng 1 year ago


it's C
upvoted 1 times

" # Kian1 1 year ago


defo going with C
upvoted 2 times

" # Ebi 1 year ago


C doesn't seem to be correct, moving videos to S3 does not mean that architecture won't have EFS anymore, all other contents are still in EFS,
I go with D, ALB of course can be the origin for CFN
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/distribution-web-values-
specify.html#DownloadDistValuesDomainName
upvoted 2 times

" # lechuk 1 year ago


C does not say the EFS is going to be removed, just say to move videos from EFS to S3.

I'll go with C
upvoted 4 times

" # Firststack 1 year ago


I'll go with C
upvoted 1 times

" # Bulti 1 year ago


C is the right answer
upvoted 1 times

" # petebear55 1 year ago


D IS TOO COMPLEX AND DOES NOT MEET THE PARAMETERS OF THE QUESTION ... ANSWER IS C WHICH IS BEST PRACTICE
upvoted 1 times

" # T14102020 1 year ago


Correct is C. S3 + CloudFront
upvoted 2 times

" # jackdryan 1 year ago


I'll go with C
upvoted 4 times

" # gookseang 1 year ago


CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
upvoted 2 times

" # SMS 1 year ago


A. Reconfigure Amazon EFS to enable maximum I/O is the right answer. https://docs.aws.amazon.com/efs/latest/ug/performance.html
upvoted 1 times

" # Paitan 1 year ago


Option C
upvoted 2 times


Question #516 Topic 1

A company runs its containerized batch jobs on Amazon ECS. The jobs are scheduled by submitting a container image, a task definition, and the
relevant data to an Amazon S3 bucket. Container images may be unique per job. Running the jobs as quickly as possible is of utmost importance,
so submitting job artifacts to the
S3 bucket triggers the job to run immediately. Sometimes there may be no jobs running at all. However, jobs of any size can be submitted with no
prior warning to the IT Operations team. Job definitions include CPU and memory resource requirements.
What solution will allow the batch jobs to complete as quickly as possible after being scheduled?

A. Schedule the jobs on an Amazon ECS cluster using the Amazon EC2 launch type. Use Service Auto Scaling to increase or decrease the
number of running tasks to suit the number of running jobs.

B. Schedule the jobs directly on EC2 instances. Use Reserved Instances for the baseline minimum load, and use On-Demand Instances in an
Auto Scaling group to scale up the platform based on demand.

C. Schedule the jobs on an Amazon ECS cluster using the Fargate launch type. Use Service Auto Scaling to increase or decrease the number of
running tasks to suit the number of running jobs.

D. Schedule the jobs on an Amazon ECS cluster using the Fargate launch type. Use Spot Instances in an Auto Scaling group to scale the
platform based on demand. Use Service Auto Scaling to increase or decrease the number of running tasks to suit the number of running jobs.

Correct Answer: C

" # donathon Highly Voted $ 1 year, 1 month ago


C
A: With the EC2 launch type you have to determine the EC2 instances beforehand, and scaling up/down is not as fast as Fargate.
B: This is not feasible as it may not be fast enough and it’s not managed.
D: You cannot use Spot instance because it is not guaranteed.
upvoted 21 times
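
One way to wire the "S3 upload triggers the job" requirement to the Fargate launch type is an S3-notified Lambda that calls run_task; a minimal sketch with assumed cluster, task definition, container name, and network values (none of which come from the question):

import boto3

ecs = boto3.client("ecs")

def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        ecs.run_task(
            cluster="batch-cluster",
            launchType="FARGATE",                 # no instance capacity to manage
            taskDefinition="batch-job",           # CPU/memory come from the task definition
            networkConfiguration={
                "awsvpcConfiguration": {
                    "subnets": ["subnet-0123456789abcdef0"],
                    "assignPublicIp": "DISABLED",
                }
            },
            overrides={
                "containerOverrides": [
                    {"name": "job", "environment": [
                        {"name": "JOB_BUCKET", "value": bucket},
                        {"name": "JOB_KEY", "value": key},
                    ]}
                ]
            },
        )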

" # dpvnme Highly Voted $ 1 year, 1 month ago


I would go with C in this situation
upvoted 12 times

" # cldy Most Recent % 11 months ago


C. Schedule the jobs on an Amazon ECS cluster using the Fargate launch type. Use Service Auto Scaling to increase or decrease the number of
running tasks to suit the number of running jobs.
upvoted 1 times

" # AzureDP900 11 months ago


I go with C
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with C
upvoted 1 times

" # Waiweng 1 year ago


it's C
upvoted 2 times

" # kiev 1 year ago


Fargate. C
upvoted 1 times

" # Kian1 1 year ago


ofc going with C
upvoted 2 times

" # Ebi 1 year ago


I go with C
upvoted 3 times

" # cox1960 1 year ago


very strange answers since a "service" in ECS is not used for batches.
upvoted 1 times


" # Bulti 1 year ago


Answer is C.
upvoted 1 times

" # petebear55 1 year ago


C THIS TOPIC IS COVERED QUITE WELL IN THE LATEST ADDED QUESTIONS ON WIZZ LABS ... REMEMBER ITS FARGATE ONLY IF YOU
WANT AWS TO MANAGE IT .. SO BE CAREFUL HERE IN THE EXAM ,,,,, ANSWER C
upvoted 3 times

" # T14102020 1 year ago


Correct is C. Fargate + without Spot
upvoted 1 times

" # jackdryan 1 year ago


I'll go with C
upvoted 3 times

" # gookseang 1 year ago


CCCCCCCCCCCCCCCCCCCC
upvoted 3 times

" # fullaws 1 year ago


C is correct
upvoted 2 times

" # NikkyDicky 1 year ago


C most likley
upvoted 2 times


Question #517 Topic 1

A company receives clickstream data files to Amazon S3 every five minutes. A Python script runs as a cron job once a day on an Amazon EC2
instance to process each file and load it into a database hosted on Amazon RDS. The cron job takes 15 to 30 minutes to process 24 hours of data.
The data consumers ask for the data to be available as soon as possible.
Which solution would accomplish the desired outcome?

A. Increase the size of the instance to speed up processing and update the schedule to run once an hour.

B. Convert the cron job to an AWS Lambda function and trigger this new function using a cron job on an EC2 instance.

C. Convert the cron job to an AWS Lambda function and schedule it to run once an hour using Amazon CloudWatch Events.

D. Create an AWS Lambda function that runs when a file is delivered to Amazon S3 using S3 event notifications.

Correct Answer: D
Reference:
https://docs.aws.amazon.com/lambda/latest/dg/with-s3.html
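
A minimal sketch of option D, assuming the existing per-file processing logic is wrapped in a placeholder load_into_rds function: the handler receives the S3 event notification and processes each new five-minute file as soon as it lands, instead of once a day.

import urllib.parse
import boto3

s3 = boto3.client("s3")

def load_into_rds(rows):
    ...  # placeholder for the existing Python processing / INSERT logic

def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        load_into_rds(body.splitlines())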

" # donathon Highly Voted $ 1 year, 1 month ago


D
A: Will not help.
B: Not feasible since it is based on a schedule not event.
C: This is still based on schedule.
upvoted 27 times

" # donathon 1 year, 1 month ago


https://docs.aws.amazon.com/lambda/latest/dg/with-s3.html
upvoted 2 times

" # PacoDerek 1 year, 1 month ago


C
as s3 events may be lost. cloudwatch event is more reliable. using rate expression to trigger Lambda is ok
upvoted 1 times

" # pixepe 2 months ago


C (hourly schedule) is INCORRECT as requirement is "The data consumers ask for the data be available as soon as possible."

D is correct
upvoted 1 times

" # PacoDerek 1 year, 1 month ago


https://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html
https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/ScheduledEvents.html
upvoted 2 times

" # LunchTime 1 year ago


D is the correct answer.
If S3 event notifications are configured property, this will not be an issue, as explained in the following link:
https://aws.amazon.com/premiumsupport/knowledge-center/lambda-configure-s3-event-notification/
upvoted 1 times

" # shammous 1 year ago


D is the answer, but I disagree with you regarding CW Event being "still based on schedule". It can do the same job as S3 events by
instantaneously triggering a lambda function based on a write event on S3, but the fact that it should be "as soon as possible" make S3
events service a better choice as it will perform quicker than CW Event.
upvoted 3 times

" # uopspop Highly Voted $ 1 year, 1 month ago


prefer D, too.
C is still hour-based, which is not "as soon as possible".
upvoted 9 times

" # Racinely Most Recent % 5 months, 2 weeks ago


Explanation
https://docs.aws.amazon.com/lambda/latest/dg/with-s3.html
upvoted 1 times


" # AzureDP900 11 months ago


D is fine with me. Amazon S3 using S3 event notifications
upvoted 1 times

" # cldy 11 months, 1 week ago


D. Create an AWS Lambda function that runs when a file is delivered to Amazon S3 using S3 event notifications.
upvoted 1 times

" # moon2351 1 year ago


D is correct
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with D
upvoted 1 times

" # Waiweng 1 year ago


it's D
upvoted 2 times

" # Kian1 1 year ago


going with D
upvoted 2 times

" # Ebi 1 year ago


Answer is D
upvoted 4 times

" # Bulti 1 year ago


D is correct
upvoted 2 times

" # T14102020 1 year ago


Correct is D. Lambda runs when a file is delivered
upvoted 2 times

" # jackdryan 1 year ago


I'll go with D
upvoted 3 times

" # gookseang 1 year ago


DDDDDDDDDDDDDD
upvoted 2 times

" # Paitan 1 year ago


Definitely D. Since we are calling Lambda based on S3 notifications, 15 minutes limit of Lambda is not an issue here.
upvoted 1 times

" # fullaws 1 year ago


D is correct
upvoted 1 times

" # 3parusr 1 year ago


Has to be A - trick question since lambda Max run time is 15 minutes? https://aws.amazon.com/about-aws/whats-new/2018/10/aws-lambda-
supports-functions-that-can-run-up-to-15-minutes/
upvoted 3 times

" # khksoma 1 year ago


The cron job takes 15 to 30 minutes to process 24 hours of data. We are not doing this once a day..we are doing it based on the s3
notification.
upvoted 7 times


Question #518 Topic 1

A company that is new to AWS reports it has exhausted its service limits across several accounts that are on the Basic Support plan. The
company would like to prevent this from happening in the future.
What is the MOST efficient way of monitoring and managing all service limits in the company's accounts?

A. Use Amazon CloudWatch and AWS Lambda to periodically calculate the limits across all linked accounts using AWS Trusted Advisor,
provide notifications using Amazon SNS if the limits are close to exceeding the threshold.

B. Reach out to AWS Support to proactively increase the limits across all accounts. That way, the customer avoids creating and managing
infrastructure just to raise the service limits.

C. Use Amazon CloudWatch and AWS Lambda to periodically calculate the limits across all linked accounts using AWS Trusted Advisor,
programmatically increase the limits that are close to exceeding the threshold.

D. Use Amazon CloudWatch and AWS Lambda to periodically calculate the limits across all linked accounts using AWS Trusted Advisor, and
use Amazon SNS for notifications if a limit is close to exceeding the threshold. Ensure that the accounts are using the AWS Business Support
plan at a minimum.

Correct Answer: A

Community vote distribution


D (86%) 14%
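
For illustration, a rough sketch of the Lambda logic behind option D for a single account, reading Trusted Advisor's service-limit checks through the AWS Support API and notifying SNS; the topic ARN and status handling are assumptions. The Support API itself is only available on Business or Enterprise Support plans, which is exactly why the answer hinges on upgrading the plan.

import boto3

support = boto3.client("support", region_name="us-east-1")  # the Support API lives in us-east-1
sns = boto3.client("sns")

def handler(event, context):
    checks = support.describe_trusted_advisor_checks(language="en")["checks"]
    for check in (c for c in checks if c["category"] == "service_limits"):
        result = support.describe_trusted_advisor_check_result(checkId=check["id"], language="en")["result"]
        flagged = [r for r in result.get("flaggedResources", [])
                   if r.get("status", "").lower() in ("warning", "error")]
        if flagged:
            sns.publish(
                TopicArn="arn:aws:sns:us-east-1:123456789012:service-limit-alerts",  # hypothetical topic
                Subject="Service limit warning: " + check["name"],
                Message=str(len(flagged)) + " resources are close to or over their limit",
            )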

" # dcdcdc3 1 month, 3 weeks ago


D
https://docs.aws.amazon.com/solutions/latest/quota-monitor-on-aws/welcome.html
"To use this solution, each account must have a Business- or Enterprise-level AWS Support plan in order to gain access to the Trusted Advisor
service quota checks."
upvoted 1 times

" # epomatti 2 months ago


Selected Answer: D
It's D. Here is the proof you need:

https://aws.amazon.com/solutions/implementations/quota-monitor/
upvoted 2 times

" # nm4u 4 months, 1 week ago


AWS Documentation says below.
If you have a Basic or Developer Support plan, you can use the Trusted Advisor console to access all checks in the Service Limits category and
six checks in the Security category.

If you have a Business, Enterprise On-Ramp, or Enterprise Support plan, you can use the Trusted Advisor console and the AWS Support API to
access all Trusted Advisor checks.

Based on this, the Correct answer should be A.


upvoted 1 times

" # nm4u 4 months, 1 week ago


Changing to D.
Reason explained below. It's a very tricky question.
Yes, with Basic Support you can use the Trusted Advisor console to access all checks in the service limits category. However, for the given use case, we need
the service/quota monitoring automated using Lambda, which will use AWS Trusted Advisor via API calls. So the correct answer should be D.
Reference: https://docs.aws.amazon.com/solutions/latest/quota-monitor-on-aws/welcome.html
upvoted 4 times

" # KengL 7 months, 3 weeks ago


D, as the Trusted Advisor API is not available on the Basic plan.
https://docs.aws.amazon.com/awssupport/latest/user/Welcome.html
upvoted 3 times

" # user0001 5 months, 3 weeks ago


A is right https://aws.amazon.com/premiumsupport/plans/
D is wrong
upvoted 1 times

" # bobsmith2000 5 months ago


https://docs.aws.amazon.com/awssupport/latest/user/trustedadvisor.html
There are no mentions of any restrictions
upvoted 1 times

" # Sonujunko 8 months ago


Selected Answer: A
https://docs.aws.amazon.com/awssupport/latest/user/trusted-advisor.html

" If you have a Basic or Developer Support plan, you can use the Trusted Advisor console to access all checks in the Service Limits category and
six checks in the Security category."
upvoted 1 times

" # wassb 4 weeks ago


Trusted Advisor CONSOLE
upvoted 1 times

" # Alexey79 8 months, 1 week ago


Selected Answer: D
https://docs.aws.amazon.com/solutions/latest/limit-monitor/deployment.html
Prerequisites
To use this solution, each account must have a Business- or Enterprise-level AWS Support plan in order to gain access to the Trusted Advisor
Service Limits checks.
upvoted 2 times

" # padel 9 months, 1 week ago


Why not A instead of D ?
upvoted 1 times

" # Bigbearcn 9 months, 1 week ago


Selected Answer: D
It's D
upvoted 2 times

" # tkanmani76 9 months, 4 weeks ago


Should be D
upvoted 2 times

" # wahlbergusa 10 months ago


From "Trusted Advisor" page : "AWS Basic Support and AWS Developer Support customers can access core security checks and all checks for
service quotas."

I think it should be A. (sample solution : https://aws.amazon.com/solutions/implementations/limit-monitor/)


upvoted 1 times

" # AndySH 10 months, 1 week ago


Answer is D
upvoted 1 times


Question #519 Topic 1

A company runs an IoT platform on AWS. IoT sensors in various locations send data to the company's Node.js API servers on Amazon EC2
instances running behind an Application Load Balancer. The data is stored in an Amazon RDS MySQL DB instance that uses a 4 TB General
Purpose SSD volume.
The number of sensors the company has deployed in the field has increased over time, and is expected to grow significantly. The API servers are
consistently overloaded and RDS metrics show high write latency.
Which of the following steps together will resolve the issues permanently and enable growth as new sensors are provisioned, while keeping this
platform cost-efficient? (Choose two.)

A. Resize the MySQL General Purpose SSD storage to 6 TB to improve the volume's IOPS

B. Re-architect the database tier to use Amazon Aurora instead of an RDS MySQL DB instance and add read replicas

C. Leverage Amazon Kinesis Data Streams and AWS Lambda to ingest and process the raw data

D. Use AWS X-Ray to analyze and debug application issues and add more API servers to match the load

E. Re-architect the database tier to use Amazon DynamoDB instead of an RDS MySQL DB instance

Correct Answer: CE
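
For illustration, a minimal sketch of the ingestion side of C and E: a Kinesis-triggered Lambda writing each sensor record into DynamoDB. The table name and payload fields are assumptions.

import base64
import json
from decimal import Decimal

import boto3

table = boto3.resource("dynamodb").Table("SensorReadings")  # hypothetical table

def handler(event, context):
    # A Kinesis-triggered Lambda receives a batch of records; each data blob is base64-encoded.
    with table.batch_writer() as batch:
        for record in event["Records"]:
            payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
            batch.put_item(Item={
                "sensor_id": payload["sensor_id"],        # assumed partition key
                "timestamp": payload["timestamp"],        # assumed sort key
                "value": Decimal(str(payload["value"])),  # DynamoDB needs Decimal, not float
            })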

" # Moon Highly Voted $ 1 year, 1 month ago


I will go with "C & E".
A: 6TB will not resolve the issue permanently.
B: the issue in the question is a write issue. So why would we need a read replica?
C: Kinesis is always best for IoT and load.
D: does not make sense.
E: the question does not say anything about keeping the same DB architecture! DynamoDB is so scalable, making it an indefinitely scalable solution.
upvoted 34 times

" # Ibranthovic 1 year, 1 month ago


I was thinking about B, but I read your comment, and I agree with you.
It's C and E
upvoted 4 times

" # MultiAZ 1 year ago


Actually B is better than E. Dynamo will add quite some cost, which is not according to the requirement.
Generally Aurora will perform better on writes than MySQL. Offloading the read queries (which are not problematic, but still add load on the
IO subsystem) to a read replica will leave more room for your writes.
And Aurora is very scalable, just like DynamoDB, though lacking the single-millisecond response times
upvoted 2 times

" # DerekKey 1 year ago


You are wrong. The problem is related to writing information to DB. This Aurora implementation will have one WRITER. The problem will
persist.
upvoted 3 times

" # AWSum1 1 year ago


Thanks for pointing out why E is valid. I didn't think of it in that way
upvoted 1 times

" # dpvnme Highly Voted $ 1 year, 1 month ago


C&E would be my choice
upvoted 21 times

" # joanneli77 Most Recent % 3 weeks, 5 days ago


I'm not sure how you can assume DynamoDB is appropriate for the data if it has already been deployed to RDS. Are we assuming intelligence, or
assuming stupidity of prior engineers?
upvoted 1 times

" # KiraguJohn 4 months ago


If you change RDS to DynamoDB, will you also not be required to make some changes to the Node.js code as well?
upvoted 1 times

" # cldy 11 months ago


C. Leverage Amazon Kinesis Data Streams and AWS Lambda to ingest and process the raw data
E. Re-architect the database tier to use Amazon DynamoDB instead of an RDS MySQL DB instance


upvoted 1 times
" # AzureDP900 11 months ago
C & E is the right answer
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with C,E
upvoted 1 times

" # Waiweng 1 year ago


It's C&E
upvoted 3 times

" # awsexamprep47 1 year ago


Going with C&E
upvoted 1 times

" # Kian1 1 year ago


going with CE
upvoted 1 times

" # Trap_D0_r 1 year ago


CD
"The API Servers are constantly overloaded" only one answer addresses the API limit issue, and that's D. D must be one of the answers. As
someone else stated, Kinesis makes the most sense here. Kinesis will address the high write latency with Lambda to do processing/transforms
and more API servers will address the API bottleneck.
upvoted 1 times

" # Ebi 1 year ago


Dude you need to change the way you think. If you want to add more instances how many you will add to fix the load issue permanently? Also
is it cost efficient to add more servers???!
upvoted 3 times

" # Ebi 1 year ago


I go with CE
upvoted 3 times

" # Firststack 1 year ago


C & E for me
upvoted 1 times

" # Bulti 1 year ago


C & E is the right answer
upvoted 2 times

" # petebear55 1 year ago


YES B WILL NOT BE 'cost efficient' c and E
upvoted 1 times

" # T14102020 1 year ago


Correct is CE. Kinesis + DynamoDB
upvoted 2 times

" # jackdryan 1 year ago


I'll go with C,E
upvoted 4 times


Question #520 Topic 1

A Solutions Architect is designing a system that will collect and store data from 2,000 internet-connected sensors. Each sensor produces 1 KB of
data every second. The data must be available for analysis within a few seconds of it being sent to the system and stored for analysis indefinitely.
Which is the MOST cost-effective solution for collecting and storing the data?

A. Put each record in Amazon Kinesis Data Streams. Use an AWS Lambda function to write each record to an object in Amazon S3 with a
prefix that organizes the records by hour and hashes the record's key. Analyze recent data from Kinesis Data Streams and historical data from
Amazon S3.

B. Put each record in Amazon Kinesis Data Streams. Set up Amazon Kinesis Data Firehose to read records from the stream and group them
into objects in Amazon S3. Analyze recent data from Kinesis Data Streams and historical data from Amazon S3.

C. Put each record into an Amazon DynamoDB table. Analyze the recent data by querying the table. Use an AWS Lambda function connected
to a DynamoDB stream to group records together, write them into objects in Amazon S3, and then delete the record from the DynamoDB table.
Analyze recent data from the DynamoDB table and historical data from Amazon S3

D. Put each record into an object in Amazon S3 with a prefix that organizes the records by hour and hashes the record's key. Use S3 lifecycle
management to transition objects to S3 infrequent access storage to reduce storage costs. Analyze recent and historical data by accessing
the data in Amazon S3

Correct Answer: C

Community vote distribution


B (75%) A (25%)
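
For illustration, a minimal boto3 sketch of option B's delivery path: a Firehose delivery stream that reads from the Kinesis stream and groups records into objects in S3. All names, ARNs, and buffering values are assumptions.

import boto3

firehose = boto3.client("firehose")

firehose.create_delivery_stream(
    DeliveryStreamName="sensor-archive",
    DeliveryStreamType="KinesisStreamAsSource",
    KinesisStreamSourceConfiguration={
        "KinesisStreamARN": "arn:aws:kinesis:us-east-1:123456789012:stream/sensor-data",
        "RoleARN": "arn:aws:iam::123456789012:role/firehose-read-stream",
    },
    ExtendedS3DestinationConfiguration={
        "BucketARN": "arn:aws:s3:::sensor-archive-bucket",
        "RoleARN": "arn:aws:iam::123456789012:role/firehose-write-s3",
        # Buffer up to 5 MiB or 60 seconds before writing each grouped object to S3.
        "BufferingHints": {"SizeInMBs": 5, "IntervalInSeconds": 60},
    },
)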

" # Moon Highly Voted $ 1 year, 1 month ago


I prefer "B" for scalability and cost-effectiveness..
Even I like A, for grouping by prefixes. But that is not a requirement in the question. Plus answer B is saying "group them into objects in amazon
S3". So it has some sort of classification for the streams in groups...maybe per second!!
So, my preference is B.
upvoted 25 times

" # Byrney 15 hours, 2 minutes ago


Custom prefixes are possible with Firehose (option B):
https://aws.amazon.com/blogs/big-data/amazon-kinesis-data-firehose-custom-prefixes-for-amazon-s3-objects/
upvoted 1 times

" # dijesim222 1 year ago


cost for s3 puts : 24 hr/day * 60 min/hr * 60 sec/min * 2000 req/sec * 0.005e-3 usd/req = 864 $/day
cost for kinesis data streams:
required shards (2) : 24 hr/day * 0.015 $/hr/shard * 2 shards = 0.72 $/day
puts (1kb is 1 payload unit): 24 hr/day * 60 min/hr * 60 sec/min * 2000 req/sec * 0.014e-6 $/req = 2.42 $/day

huge cost difference..


upvoted 6 times

" # huhupai Highly Voted $ 1 year, 1 month ago


I prefer B.
upvoted 12 times

" # dpvnme 1 year, 1 month ago


on second read, i'll go with B too
upvoted 3 times

" # jar0d 1 year ago


B is better than A , since its about cost-effective solution.
Choosing A involves lambda - that only adds cost to the equation.
upvoted 1 times

" # epomatti Most Recent % 2 months ago


Selected Answer: B
B - Firehose is cheaper than lambda. There is no requirement stated for manipulating the data, hence no requirement for Lambda making A
incorrect.
upvoted 1 times

" # Rocketeer 2 months, 1 week ago


B is more practical. I can buffer, group, and write data to S3 every 60 seconds. I do not want to write a file to S3 every second using the Lambda.
upvoted 1 times

" # bobsmith2000 5 months, 3 weeks ago


Selected Answer: B
B no-brainer.
Data Streams for ingesting data and real-time processing, Firehose for buffering and storing in S3.
Classic.
upvoted 1 times

" # Alexey79 8 months, 1 week ago


Selected Answer: A
“Within a few seconds of being submitted to the system, the data must be accessible for processing”
Kinesis Firehose's buffer time is a minimum of 60 sec.
A real-time solution is required, which replaces Firehose with Lambda.
https://aws.amazon.com/kinesis/
upvoted 1 times

" # user0001 5 months, 3 weeks ago


B is the right answer, as you can use Amazon Kinesis Data Streams for recent data and S3 for historical data.
upvoted 1 times

" # cldy 11 months ago


B. Put each record in Amazon Kinesis Data Streams. Set up Amazon Kinesis Data Firehose to read records from the stream and group them into
objects in Amazon S3. Analyze recent data from Kinesis Data Streams and historical data from Amazon S3.
upvoted 1 times

" # AzureDP900 11 months ago


B is my choice
upvoted 1 times

" # acloudguru 11 months, 2 weeks ago


Selected Answer: B
B is the right answer. Kinesis Data Streams with Kinesis Firehose reading and buffering from it to write to S3 is a standard ingestion pattern for
ingesting IoT data.
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with B
upvoted 1 times

" # Waiweng 1 year ago


it's B
upvoted 2 times

" # blackgamer 1 year ago


B is the answer
upvoted 1 times

" # awsexamprep47 1 year ago


B is the answer
Perfect use case for Kinesis Data Stream,Kinesis Firehose & S-3 combination
upvoted 1 times

" # Kian1 1 year ago


it is B
upvoted 1 times

" # lechuk 1 year ago


A.
Firehose has ~60 seconds latency
upvoted 1 times

" # lechuk 1 year ago


Nevermind, It's B
upvoted 2 times

" # Ebi 1 year ago


Definitely B
upvoted 5 times

" # Bulti 1 year ago


B is the right answer. Kinesis Data Streams with Kinesis Firehose reading and buffering from it to write to S3 is a standard ingestion pattern for
ingesting IoT data.
upvoted 2 times


Question #521 Topic 1

An auction website enables users to bid on collectible items. The auction rules require that each bid is processed only once and in the order it was
received. The current implementation is based on a fleet of Amazon EC2 web servers that write bid records into Amazon Kinesis Data Streams. A
single t2.large instance has a cron job that runs the bid processor, which reads incoming bids from Kinesis Data Streams and processes each bid.
The auction site is growing in popularity, but users are complaining that some bids are not registering.
Troubleshooting indicates that the bid processor is too slow during peak demand hours, sometimes crashes while processing, and occasionally
loses track of which record is being processed.
What changes should make the bid processing more reliable?

A. Refactor the web application to use the Amazon Kinesis Producer Library (KPL) when posting bids to Kinesis Data Streams. Refactor the
bid processor to flag each record in Kinesis Data Streams as being unread, processing, and processed. At the start of each bid processing run,
scan Kinesis Data Streams for unprocessed records.

B. Refactor the web application to post each incoming bid to an Amazon SNS topic in place of Kinesis Data Streams. Configure the SNS topic
to trigger an AWS Lambda function that processes each bid as soon as a user submits it.

C. Refactor the web application to post each incoming bid to an Amazon SQS FIFO queue in place of Kinesis Data Streams. Refactor the bid
processor to continuously poll the SQS queue. Place the bid processing EC2 instance in an Auto Scaling group with a minimum and a maximum
size of 1.

D. Switch the EC2 instance type from t2.large to a larger general compute instance type. Put the bid processor EC2 instances in an Auto
Scaling group that scales out the number of EC2 instances running the bid processor, based on the IncomingRecords metric in Kinesis Data
Streams.

Correct Answer: D
Reference:
https://d0.awsstatic.com/whitepapers/Building_a_Real_Time_Bidding_Platform_on_AWS_v1_Final.pdf

Community vote distribution


C (88%) 13%
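
For illustration, a minimal sketch of how the web tier could post bids under option C; the queue URL and message fields are assumptions. MessageGroupId keeps bids for one item in order, and the deduplication ID ensures each bid is processed only once.

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/bids.fifo"  # hypothetical FIFO queue

sqs.send_message(
    QueueUrl=QUEUE_URL,
    MessageBody='{"item_id": "A123", "bidder": "u42", "amount": 105.50}',
    MessageGroupId="item-A123",            # ordering is preserved per message group
    MessageDeduplicationId="bid-a1b2c3",   # exactly-once processing within the dedup window
)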

" # Moon Highly Voted $ 1 year, 1 month ago


I prefer "C".
FIFO is better in this case compared to Kinesis, as it guarantees the order of the bids.
Min/Max of 1 is okay, as SQS will hold the messages in the queue in case of failure of the instance, till it comes back again.
upvoted 26 times

" # AShahine21 1 year, 1 month ago


"Troubleshooting indicates that the bid processor is too slow during peak demand hours".. C will not solve this problem.
upvoted 1 times

" # 01037 1 year ago


Yes.
But the question is "What changes should make the bid processing more reliable?", only about reliability, not speed.
upvoted 5 times

" # Kelvin 1 year ago


Yes, C. Only SQS works. More than one EC2 instances in an auto scaling DOESN'T WORK as they are to serve one queue in FIFO.
upvoted 2 times

" # dijesim222 1 year ago


exactly. bids cannot processed in parallel, which rules out D completely, C is the only sensible answer left
upvoted 3 times

" # chicagomassageseeker 1 year ago


SQS is not suitable for real-time bidding. Also, SQS FIFO can scale only to a max of 300 messages per second (3,000 per second in batches). C for sure
doesn't fit the solution
upvoted 2 times

" # dijesim222 1 year ago


question mentions cron job hence no real time or near real time (cron job's max resolution 1 second)
upvoted 2 times


" # Alvindo 8 months, 1 week ago


kinesis data streams consumes data in the order they are stored which is basically going to be fifo https://aws.amazon.com/kinesis/data-
streams/faqs/
upvoted 1 times
" # dumma Highly Voted $ 1 year, 1 month ago
Correct answer is D as a single consumer is not able to keep up the bids, multiple
consumers can be used with Auto Scaling based on the incoming records metric
upvoted 21 times

" # freely`y84 1 year, 1 month ago


agree with D.
https://d0.awsstatic.com/whitepapers/Building_a_Real_Time_Bidding_Platform_on_AWS_v1_Final.pdf
upvoted 8 times

" # Smart 1 year, 1 month ago


This one refers to Real-Time Bidding which I don't think is the scenario here.

Check this out: https://aws.amazon.com/blogs/compute/solving-complex-ordering-challenges-with-amazon-sqs-fifo-queues/


upvoted 4 times

" # AjayPrajapati Most Recent % 1 hour, 59 minutes ago


Selected Answer: C
C is correct
Kinesis and SNS can have duplicates. The app needs to handle the duplicates.
upvoted 1 times

" # JohnPi 1 month ago


Ordering is guaranteed at the shard level of Kinesis Data Streams, but not across the whole stream. Hard to choose.
upvoted 2 times

" # tomosabc1 1 month, 1 week ago


Selected Answer: D
The answer is C.

Because the auction website already used Kinesis Data Stream, but still its bid processor "sometimes crashes while processing, and occasionally
loses track of which records is being processed", the question is asking us to make the bid processing more reliable, rather than faster.
As for option D, neither "switch to a larger instance type" nor "adding more EC2 instances within an Auto Scaling group" are able to solve
aforementioned reliability issue.
upvoted 1 times

" # tomosabc1 1 month, 1 week ago


The answer is C. I voted for a wrong answer remissly.
upvoted 1 times

" # Rocketeer 2 months, 1 week ago


I prefer D. There can be multiple items being auctioned. With kinesis data streams I can get the bids for different items in different shards in order.
With FIFO, they will all be going through a single queue.
upvoted 1 times

" # Ni_yot 2 months, 2 weeks ago


Ans = C. SQS FIFO was made for this.
upvoted 1 times

" # Enigmaaaaaa 3 months, 1 week ago


This is between C and D.
For D - We need to make sure ordering is in in-place and process it once - Kinesis can do ordering but cant avoid duplicates especially with an
ASG, see below link:
https://docs.aws.amazon.com/streams/latest/dev/kinesis-record-processor-duplicates.html
For C - with FIFO we have dedup already set and ordering - ASG is making sure 1 instance is alive - although its better to scale it according to
queue size since max batch processing is 3000 messages.
Since the question asks about reliability and not throughput or speed - C will make sure all bids are processed in-order and only once. D Will
process in order and quickly but with duplicates.
So C is the only valid answer
upvoted 1 times

" # TechX 4 months, 1 week ago


Selected Answer: C
Answer: C
Explanation:
A\B: Not feasible
C: FIFO is better in this case compared to Kinesis, as it guarantee the order of the bid. Min Max 1, is okay as the SQS will hold the queue in case
of failure of the instance, till it come back again.
D: Still it does not solve the ordering issue.


upvoted 3 times
" # tartarus23 6 months ago
Selected Answer: C
C. SQS then Kinesis decouples the architecture and business flow to ensure that all bids are getting sent almost real time.
upvoted 1 times

" # aloha123 6 months, 1 week ago


Selected Answer: C
My problem with D is that it didn't state how the processing of bids is coordinated among the EC2s.
upvoted 1 times

" # jj22222 7 months, 1 week ago


Selected Answer: C
C looks right
upvoted 1 times

" # KiraguJohn 10 months, 4 weeks ago


Kinesis just like SQS FIFO provides ordering of records. The only difference is that Kinesis is near real time.
upvoted 4 times

" # StelSen 1 year ago


People tend to choose Option C because of this: "The auction rules require that each bid is processed only once and in the order it was
received." But the real problem statement is different, i.e. slow processing, missing bids, etc., all because of the consumer side. Option C sticks to 1 instance. No use.
Option D resolves the problem. AWS Kinesis Data Streams also takes care of ORDER. Refer to https://aws.amazon.com/kinesis/data-streams/faqs/
(Look for word ORDER)
upvoted 4 times

" # AWSum1 1 year ago


"Troubleshooting indicates that the bid processor is too slow during peak demand hours, sometimes crashes while processing, and occasionally
loses track of which records is being processed.
What changes should make the bid processing more reliable?"

Crashes while processing = Needs to be replaced asap to continue processing the bids

Occasionally loses track = Only happens sometimes not ALL the time

From troubleshooting , the problem is the BID PROCESSOR

Then, what changes to make it more RELIABLE = continued service due to crashes and slow processing

Answer is D.
upvoted 2 times

" # DerekKey 1 year ago


A is wrong - how can the bid processor flag a record in Kinesis Data Streams? It can only read data
B is wrong - it would be OK if the answer mentioned SNS FIFO
D is wrong - we don't know the number of shards, and we can't have two consumers for the same shard
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with D
Troubleshooting indicates that the bid processor is too slow during peak demand hours
Web Servers and Kinesis are doing their job in the right way.
The bid processor is the bottleneck .
upvoted 2 times

" # JohnPi 1 month ago


order is per shard. with autoscaling multiple consumers (scaled ec2) will consume the same shard
upvoted 1 times


Question #522 Topic 1

A bank is re-architecting its mainframe-based credit card approval processing application to a cloud-native application on the AWS cloud.
The new application will receive up to 1,000 requests per second at peak load. There are multiple steps to each transaction, and each step must
receive the result of the previous step. The entire request must return an authorization response within less than 2 seconds with zero data loss.
Every request must receive a response. The solution must be Payment Card Industry Data Security Standard (PCI DSS)-compliant.
Which option will meet all of the bank's objectives with the LEAST complexity and LOWEST cost while also meeting compliance requirements?

A. Create an Amazon API Gateway to process inbound requests using a single AWS Lambda task that performs multiple steps and returns a
JSON object with the approval status. Open a support case to increase the limit for the number of concurrent Lambdas to allow room for
bursts of activity due to the new application.

B. Create an Application Load Balancer with an Amazon ECS cluster on Amazon EC2 Dedicated Instances in a target group to process
incoming requests. Use Auto Scaling to scale the cluster out/in based on average CPU utilization. Deploy a web service that processes all of
the approval steps and returns a JSON object with the approval status.

C. Deploy the application on Amazon EC2 on Dedicated Instances. Use an Elastic Load Balancer in front of a farm of application servers in an
Auto Scaling group to handle incoming requests. Scale out/in based on a custom Amazon CloudWatch metric for the number of inbound
requests per second after measuring the capacity of a single instance.

D. Create an Amazon API Gateway to process inbound requests using a series of AWS Lambda processes, each with an Amazon SQS input
queue. As each step completes, it writes its result to the next step's queue. The final step returns a JSON object with the approval status.
Open a support case to increase the limit for the number of concurrent Lambdas to allow room for bursts of activity due to the new
application.

Correct Answer: C

Community vote distribution


D (75%) C (25%)
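
For illustration, a minimal sketch of one processing step under option D: an SQS-triggered Lambda that performs its part of the approval and forwards the result to the next step's queue. The queue URL and the fraud-check field are assumptions.

import json
import boto3

sqs = boto3.client("sqs")
NEXT_QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/step2-queue"  # hypothetical

def handler(event, context):
    # An SQS-triggered Lambda receives a batch of messages from this step's input queue.
    for record in event["Records"]:
        request = json.loads(record["body"])
        request["fraud_check"] = "passed"  # placeholder for this step's real business logic
        sqs.send_message(QueueUrl=NEXT_QUEUE_URL, MessageBody=json.dumps(request))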

" # dpvnme Highly Voted $ 1 year, 1 month ago


Seems like D would be a better choice
upvoted 27 times

" # heany 3 weeks, 5 days ago


The only problem is that a Lambda function may run for up to 15 minutes (this is called the Lambda timeout), meaning Lambda is not suited to
long-running processes. According to the question, the app 'will receive up to 1,000 requests per second at peak load', which means the
Lambda function will need to keep running. So both A and D don't look right; that leaves only C.
upvoted 1 times

" # Yamchi 7 months ago


https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/welcome.html
"Allocate tasks to multiple worker nodes: process a high number of credit card validation requests."
upvoted 5 times

" # sindra 3 days, 14 hours ago


https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/standard-queues.html
Incline to D
upvoted 1 times

" # donathon Highly Voted $ 1 year, 1 month ago


A
A: The process must complete within 2 seconds. This sounds like what Lambda can do.
B\C: Not the most cost efficient compared to Lambda. Scaling may also not be fast enough.
D: SQS may have data loss due to DLQ? SQS also does not process in order so this may be another problem unless you use FIFO.
upvoted 15 times

" # PacoDerek 1 year, 1 month ago


SQS guarantee at-least-One delivery
upvoted 2 times

" # ashp 1 year, 1 month ago


The entire request must return an authorization response within less than 2 seconds with zero data loss.
upvoted 1 times


" # petebear55 1 year ago


a will cause too much bottle neck
upvoted 2 times

" # Amitv2706 1 year ago


if SQS is used we still have an option to keep storing the message as failed lambda will not delete it ideally.

But with A - Where that message/req will go if lambda fails ?

Seems D is better choice


upvoted 3 times
" # Rocketeer Most Recent % 1 month, 2 weeks ago
A for me.
Using SQS makes it asynch. How do you respond back to the API call ?
Also multiple lambdas and SQS will be slower than option A. Step function is a better option.
upvoted 3 times

" # tomosabc1 1 month, 1 week ago


"Using SQS makes it asynch. How do you respond back to the API call ?"
A good point! This definitely rules option D out.
upvoted 1 times

" # dcdcdc3 1 month, 3 weeks ago


D is preferred as it is more robust, with cloud native services etc, but 1000 requests per second with API GW may make this much more
expensive, maybe this is why C is proposed
upvoted 1 times

" # hilft 3 months, 1 week ago


D is the only choice here
upvoted 1 times

" # bobsmith2000 5 months, 2 weeks ago


Selected Answer: D
No data loss, full compliance to pci
upvoted 1 times

" # pal40sg 8 months, 1 week ago


Selected Answer: C
D. gp3 can't set IOPS
upvoted 1 times

" # czarno 7 months, 2 weeks ago


not only you specifically selected C, then you also went on to comment D... and reference to the previous question.
Other than that I think it is D... APIGW + Lambda + SQS
upvoted 1 times

" # cldy 10 months, 1 week ago


D is correct.
upvoted 1 times

" # cldy 11 months ago


D. Create an Amazon API Gateway to process inbound requests using a series of AWS Lambda processes, each with an Amazon SQS input
queue. As each step completes, it writes its result to the next step‫ג‬€™s queue. The final step returns a JSON object with the approval status.
Open a support case to increase the limit for the number of concurrent Lambdas to allow room for bursts of activity due to the new application.
upvoted 1 times

" # acloudguru 11 months, 1 week ago


Selected Answer: D
D, as SQS is also PCI DSS compliance
https://aws.amazon.com/compliance/services-in-scope/
upvoted 2 times

" # AWSum1 1 year ago


A

My simple understanding:
Multiple Lambda functions for each step can add up to 300ms/step
upvoted 3 times

" # student22 1 year ago


A

Why not D? The question is asking for the least complex working solution.


upvoted 1 times

" # mustafa1p 1 year ago


It should be 'C' since emphasis is on compliance with PCI-DSS as long as the standard allows the app to be on shared tenants.
upvoted 1 times

" # Kopa 1 year ago


I think A is more complex because all functions are included in one Lambda; it also does not offer low cost, as that one function will run the whole time, while
with several Lambdas each one processes its step once and passes on to the next Lambda. I'm more on D than A.
upvoted 1 times

" # DerekKey 1 year ago


D is wrong - the answer is not mentioning step functions
upvoted 1 times

" # DerekKey 1 year ago


https://d1.awsstatic.com/whitepapers/compliance/pci-dss-compliance-on-aws.pdf
upvoted 2 times

" # Salmariaz 1 year ago


AWS SQS is PCI compliant
https://aws.amazon.com/compliance/services-in-scope/
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with D
upvoted 3 times

" # Waiweng 1 year ago


D works best; it addresses zero data loss by implementing message queuing. SQS is a pull delivery system, so processing the queue is
dependent on the Lambda function.
Lambda responses are double-digit ms and thus could deliver sub-2s responses; a limit increase is required, as multiple Lambdas could be running for each
request.
upvoted 6 times


Question #523 Topic 1

A Solutions Architect is migrating a 10 TB PostgreSQL database to Amazon RDS for PostgreSQL. The company's internet link is 50 MB with a VPN
in the
Amazon VPC, and the Solutions Architect needs to migrate the data and synchronize the changes before the cutover. The cutover must take place
within an 8-day period.
What is the LEAST complex method of migrating the database securely and reliably?

A. Order an AWS Snowball device and copy the database using the AWS DMS. When the database is available in Amazon S3, use AWS DMS to
load it to Amazon RDS, and configure a job to synchronize changes before the cutover.

B. Create an AWS DMS job to continuously replicate the data from on premises to AWS. Cutover to Amazon RDS after the data is
synchronized.

C. Order an AWS Snowball device and copy a database dump to the device. After the data has been copied to Amazon S3, import it to the
Amazon RDS instance. Set up log shipping over a VPN to synchronize changes before the cutover.

D. Order an AWS Snowball device and copy the database by using the AWS Schema Conversion Tool. When the data is available in Amazon S3,
use AWS DMS to load it to Amazon RDS, and configure a job to synchronize changes before the cutover.

Correct Answer: B

Community vote distribution


D (43%) A (43%) 14%
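
Much of the discussion below turns on whether "50 MB" means megabytes or megabits per second, so here is a quick back-of-the-envelope check of both readings:

# Transfer time for 10 TB over the stated 50 "MB" link, under both unit readings.
data_bits = 10 * 10**12 * 8  # 10 TB expressed in bits

for label, bits_per_second in [("50 MB/s (megabytes)", 50 * 8 * 10**6),
                               ("50 Mb/s (megabits)", 50 * 10**6)]:
    days = data_bits / bits_per_second / 86400
    print(label, "-> about", round(days, 1), "days")

# Roughly 2.3 days if megabytes, or 18.5 days if megabits, which is why the thread
# disagrees on whether option B fits inside the 8-day cutover window.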

" # donathon Highly Voted $ 1 year, 1 month ago


Answer is A.
B: Not possible. Because transferring 10TB over 50Mbps will take 17 days at least.
C: Use DMS to copy not database dump.
D: You don’t need SCT since there is no need for conversion.
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_LargeDBs.Process.html
upvoted 25 times

" # 94xychen 1 year ago


B: It's 10TB over 50MB... not 50Mbps
upvoted 11 times

" # futen0326 8 months, 2 weeks ago


Question clearly states "50 megabits per second" which is Mbps. MBps is mega bytes per second.

Scary how people are testing for a cert like this, and don't even know the difference
upvoted 3 times

" # redipa 3 weeks ago


But the question does say "MB":
"The company's internet link is 50 MB with a VPN in the
Amazon VPC..."
upvoted 1 times

" # mostafasookar 6 months, 3 weeks ago


A is right
If it is megabits per second (Mbps), it will take 17 days; if it is megabytes per second (MBps), then B will work
upvoted 1 times

" # 9Ow30 Highly Voted $ 1 year, 1 month ago


B
Using the calculator here http://www.calctool.org/CALC/prof/computing/transfer_time it will take 2 days to transfer 10TB over 50MB line.
upvoted 15 times

" # cinopi 1 year, 1 month ago


It will take 17.6606 days using that tool, NOT 2 days
upvoted 3 times

" # 9Ow30 1 year, 1 month ago


I checked again, the speed is 50 MB not 50 Mb, so it will take 2 days. Please try again.
upvoted 2 times


" # syscao 7 months, 3 weeks ago


it's 50 Mb, not 50MB, if you check one more time.
upvoted 1 times

" # thirstylion 11 months, 3 weeks ago


Its Mbps (megabits) so 17 days.
upvoted 6 times
" # 9Ow30 1 year, 1 month ago
The company's internet link is 50 MB with a VPN
upvoted 2 times

" # cinopi 1 year, 1 month ago


Ahh, it's 2.20758 days. Correct.
So answer is B
upvoted 5 times

" # robsonchirara 4 months, 2 weeks ago


MB is not the same as Mb
B is for Byte
b is for bit
upvoted 2 times

" # mnizamu Most Recent % 1 week ago


It says very clearly that "The company's internet link is 50 MB with a VPN." Therefore, transferring 10TB of data over a 50 MB link will take 2 days
7 hours 33 mins 20 sec. Therefore, the answer should be B. Calculation: 8 bits/1 byte x 50 Megabytes/s = 400 Megabits/s
upvoted 2 times

" # kharakbeer 1 week, 6 days ago


Selected Answer: A
Internet speed is measured in bits per second, NOT bytes per second. B is wrong; the answer is A, as you don't need SCT when migrating
from on-prem PostgreSQL to AWS PostgreSQL
upvoted 1 times

" # Vizz5585 2 weeks, 6 days ago


Selected Answer: D
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_LargeDBs.html
upvoted 1 times

" # joanneli77 3 weeks, 5 days ago


50 MB is not 50 mb. The author wrote the question wrong. One is 800% different from the other. Remember networks are measured in 'b'its not
'B'ytes, even though almost all other data is capital B. In either case, data transfer will take too long.
upvoted 1 times

" # JohnPi 1 month ago


Selected Answer: D
When you're using an Edge device, the data migration process has the following stages:

-You use the AWS Schema Conversion Tool (AWS SCT) to extract the data locally and move it to an Edge device.
-You ship the Edge device or devices back to AWS.
-After AWS receives your shipment, the Edge device automatically loads its data into an Amazon S3 bucket.
-AWS DMS takes the files and migrates the data to the target data store. If you are using change data capture (CDC), those updates are written
to the Amazon S3 bucket and then applied to the target data store.
upvoted 2 times

" # caveman712 1 month ago


Selected Answer: A
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_LargeDBs.Process.html
upvoted 1 times

" # tomosabc1 1 month, 1 week ago


Selected Answer: B
The answer is B.
We can figure out the answer by ruling out the wrong ones.
A, D (wrong): Neither AWS DMS nor the AWS Schema Conversion Tool can be used to copy an on-premises DB to a Snowball device.
https://aws.amazon.com/dms/schema-conversion-tool/
C(wrong): Log Shipping is for SQL server on EC2, rather than AWS RDS for PostgreSQL.
https://docs.aws.amazon.com/prescriptive-guidance/latest/migration-sql-server/ec2-log-shipping.html
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_SQLServer.html#SQLServer.Concepts.General.FeatureNonSupport
upvoted 1 times

" # Cal88 1 week, 1 day ago


You are wrong , DMS can be used with snowball to migrate databases from on-premise to AWS


https://docs.aws.amazon.com/dms/latest/userguide/CHAP_LargeDBs.html
upvoted 1 times

" # dcdcdc3 1 month, 3 weeks ago


MB means nothing really. If it is MBps then B is correct, If it is Mbps D is correct (Use SCT):
https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_DMSIntegration.html
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_LargeDBs.html
upvoted 2 times

" # Cal88 1 week, 1 day ago


Why do you need schema conversion if you are migrating to the same DB
I think answer A is correct Snowball + DMS
upvoted 1 times

" # jerrykid 2 months, 2 weeks ago


C is correct. DMS now supports S3.
The only way to transfer data onto a Snowball is to dump the data and copy it with the Snowball agent, not DMS or SCT. Both DMS and SCT are services
hosted in AWS.
upvoted 1 times

" # she1989 4 months, 1 week ago


Answer is D: https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Source.PostgreSQL.html
upvoted 1 times

" # azurehunter 5 months ago


Answer is D
A is wrong because DMS is a cloud service and it cannot extract a database offline. It requires a VPN or DX for data migration and replication. So the
only solution is to use SCT to extract the data and schemas prior to copying to Snowball.
upvoted 3 times

" # johnnsmith 8 months, 2 weeks ago


D is the correct answer. https://docs.aws.amazon.com/dms/latest/userguide/CHAP_LargeDBs.html When you're using an Edge device, the data
migration process has the following stages:

You use the AWS Schema Conversion Tool (AWS SCT) to extract the data locally and move it to an Edge device.
You ship the Edge device or devices back to AWS.
After AWS receives your shipment, the Edge device automatically loads its data into an Amazon S3 bucket.
AWS DMS takes the files and migrates the data to the target data store. If you are using change data capture (CDC), those updates are written to
the Amazon S3 bucket and then applied to the target data store.
upvoted 6 times

" # AMKazi 9 months ago


Ans is B, the least complicated way.
Calculation: 50 MB/s is 50 x 60 seconds = 3,000 MB per minute.
3,000 x 60 minutes = 180,000 MB per hour
180,000 x 24 hours = 4,320,000 MB per day = 4.32 terabytes per day
so in 8 days you can transfer 4.32 x 8 = 34.56 terabytes
upvoted 2 times

" # zoliv 9 months, 1 week ago


Selected Answer: A
A. megabits = Mb
upvoted 1 times

" # cannottellname 9 months, 2 weeks ago


B is not reliable, anything can happen anytime also speeds are not consistent. For the exam as well, we should use AWS resources as much as
possible :P. A seems much more reliable than B.
upvoted 1 times


Question #524 Topic 1

A Solutions Architect must update an application environment within AWS Elastic Beanstalk using a blue/green deployment methodology. The
Solutions Architect creates an environment that is identical to the existing application environment and deploys the application to the new
environment.
What should be done next to complete the update?

A. Redirect to the new environment using Amazon Route 53

B. Select the Swap Environment URLs option

C. Replace the Auto Scaling launch configuration

D. Update the DNS records to point to the green environment

Correct Answer: B
Reference:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.CNAMESwap.html
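
For illustration, the same swap can be done programmatically; a minimal boto3 sketch with hypothetical environment names:

import boto3

eb = boto3.client("elasticbeanstalk")

# Swap the CNAMEs of the blue (live) and green (new) environments so traffic
# shifts to the newly deployed version, matching the "Swap Environment URLs" console action.
eb.swap_environment_cnames(
    SourceEnvironmentName="myapp-blue",
    DestinationEnvironmentName="myapp-green",
)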

" # awsec2 Highly Voted $ 1 year, 1 month ago


b. https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.CNAMESwap.html
upvoted 22 times

" # cldy Most Recent % 11 months ago


B. Select the Swap Environment URLs option
upvoted 2 times

" # AzureDP900 11 months ago


I'll go with B
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with B
upvoted 2 times

" # zolthar_z 1 year ago


The Answer is B, https://docs.aws.amazon.com/whitepapers/latest/blue-green-deployments/swap-the-environment-of-an-elastic-beanstalk-
application.html
upvoted 2 times

" # Waiweng 1 year ago


it's B
upvoted 1 times

" # awsexamprep47 1 year ago


B for sure
upvoted 1 times

" # Kian1 1 year ago


going with B
upvoted 2 times

" # Ebi 1 year ago


B is the answer
upvoted 3 times

" # tipzzz 1 year ago


answer is D :
https://medium.com/@kumargaurav1247/blue-green-deployment-introduction-68b01d471dde
upvoted 1 times

" # kirrim 1 year ago


That's the method for changing the CNAME entry if you have full control over the environment. In EB you can't change the CNAME record
yourself, EB controls that. You have to tell EB what environment it should point the CNAME record to. That is done by changing the FQDN
(technically not a URL, but AWS still calls it a URL) for the environment:

https://docs.aws.amazon.com/whitepapers/latest/blue-green-deployments/swap-the-environment-of-an-elastic-beanstalk-application.html


upvoted 2 times
" # Bulti 1 year ago
Answer is B. You need to swap Environment URLs
upvoted 1 times

" # spring21 1 year ago


B : When an application is developed and deployed to an AWS Elastic Beanstalk environment, having two separate, but identical, environments—
blue and green—increases availability and reduces risk. In this Quick Start architecture, the blue environment is the production environment that
normally handles live traffic. The CI/CD pipeline architecture creates a clone (green) of the live Elastic Beanstalk environment (blue). The pipeline
then swaps the URLs between the two environments.
upvoted 3 times

" # T14102020 1 year ago


Correct is B. Swap Environment URLs
upvoted 1 times

" # jackdryan 1 year ago


I'll go with B
upvoted 2 times

" # AWSKrish 1 year ago


D: Please note it is a B/G deployment, and updating DNS once suffices. Wondering why SWAP would send traffic in 2 directions once the new env is ready;
that is not needed.
upvoted 3 times

" # Phat 1 year ago


B is correct.
upvoted 1 times

" # petebear55 1 year ago


despite everyone disagreeing with us I'm inclined to go along with you
upvoted 1 times

" # fullaws 1 year ago


B is correct
upvoted 1 times

" # NikkyDicky 1 year ago


B for sure
upvoted 1 times


Question #525 Topic 1

A company has a legacy application running on servers on premises. To increase the application's reliability, the company wants to gain actionable
insights using application logs. A Solutions Architect has been given following requirements for the solution:
✑ Aggregate logs using AWS.
✑ Automate log analysis for errors.
✑ Notify the Operations team when errors go beyond a specified threshold.
What solution meets the requirements?

A. Install Amazon Kinesis Agent on servers, send logs to Amazon Kinesis Data Streams and use Amazon Kinesis Data Analytics to identify
errors, create an Amazon CloudWatch alarm to notify the Operations team of errors

B. Install an AWS X-Ray agent on servers, send logs to AWS Lambda and analyze them to identify errors, use Amazon CloudWatch Events to
notify the Operations team of errors.

C. Install Logstash on servers, send logs to Amazon S3 and use Amazon Athena to identify errors, use sendmail to notify the Operations team
of errors.

D. Install the Amazon CloudWatch agent on servers, send logs to Amazon CloudWatch Logs and use metric filters to identify errors, create a
CloudWatch alarm to notify the Operations team of errors.

Correct Answer: D
Reference:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html https://docs.aws.amazon.com/kinesis-agent-
windows/latest/userguide/what-is-kinesis-agent-windows.html
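
For illustration, a minimal boto3 sketch of the metric filter and alarm from option D; the log group, namespace, filter pattern, threshold, and SNS topic are assumptions.

import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

# Turn matching log events into a custom metric.
logs.put_metric_filter(
    logGroupName="/onprem/legacy-app",   # hypothetical log group fed by the CloudWatch agent
    filterName="error-count",
    filterPattern="ERROR",               # counts log events containing the word ERROR
    metricTransformations=[{
        "metricName": "LegacyAppErrors",
        "metricNamespace": "LegacyApp",
        "metricValue": "1",
    }],
)

# Alarm when the error count crosses the specified threshold and notify Operations via SNS.
cloudwatch.put_metric_alarm(
    AlarmName="legacy-app-error-rate",
    Namespace="LegacyApp",
    MetricName="LegacyAppErrors",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=50,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # hypothetical topic
)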

" # donathon Highly Voted $ 1 year, 1 month ago


D
A: Amazon Kinesis Data Analytics used for data analytics.
B: Cannot be implemented on premise.
C: Athena is a serverless SQL-based query system. Should use SNS instead of sendmail.
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/install-CloudWatch-Agent-on-premise.html
upvoted 22 times

" # AWS2020 1 year, 1 month ago


I think the Answer is A. D may work if we only need to identify the error not analyze it and you would need to put error filter such as 400x,
500x. What happens if we have an error that is not specified in the metric filter?
upvoted 3 times

" # LunchTime 1 year ago


This is obviously a close call between options A and D. Both appear to fulfill the requirements. However, AWS2020 makes a great point –
what if the error is not specified in the error filter? Consequently, option D may not report on some errors. Also, slides 17 and 18 in the
following AWS presentation would lead me to believe “A” is the answer they are looking for on the exam. https://www.slideshare.net
/AmazonWebServices/realtime-application-monitoring-with-amazon-kinesis-and-amazon-cloudwatch-aws-online-tech-talks
upvoted 4 times

" # AWSPro24 1 year, 1 month ago


I agree with A. It says "Log Analysis" not "Log Analytics" CloudWatch Logs can do the job. https://docs.aws.amazon.com
/AmazonCloudWatch/latest/logs/MonitoringLogData.html
upvoted 1 times

" # AWSPro24 1 year, 1 month ago


Sorry I meant D. Also the CloudWatch landing page includes the term "actionable insights"
https://aws.amazon.com/cloudwatch/

I don't see any reason why A would not work but it seems like overkill for just error counting.
upvoted 2 times

" # AWSPro24 1 year, 1 month ago


Interesting. It seems that CloudWatch Logs is even built on top of Kinesis https://forums.aws.amazon.com
/thread.jspa?threadID=157966
upvoted 3 times

" # JAWS1600 1 year ago


Here is what A is missing. Collecting the logs from on-prem servers and sending to kinesis. Option A does not provide solution for that
piece.


upvoted 2 times
" # Moon Highly Voted $ 1 year, 1 month ago
I would go with A.
https://docs.aws.amazon.com/kinesis-agent-windows/latest/userguide/what-is-kinesis-agent-windows.html
https://medium.com/@khandelwal12nidhi/build-log-analytic-solution-on-aws-cc62a70057b2
upvoted 14 times

" # SD13 1 year ago


The Kinesis agent cannot forward logs to CloudWatch, so how will the CloudWatch alarm be triggered? Option A is missing this part
upvoted 4 times

" # Student1950 9 months ago


Kinesis agent can forward logs to AWS cloudwatch as per info at this blog
https://aws.amazon.com/blogs/big-data/collect-parse-transform-and-stream-windows-events-logs-and-metrics-using-amazon-kinesis-
agent-for-microsoft-windows/
upvoted 1 times

" # ibrahimsow Most Recent % 3 months, 1 week ago


The answer is D:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html
upvoted 1 times

" # RVivek 9 months ago


D.
The key is "Notify the Operations team when errors go beyond a specified threshold"
That says " metric filters " and a defined threshold
upvoted 3 times

" # Student1950 9 months ago


As per the following AWS blog from 2018, correct answer seems to be A
https://aws.amazon.com/blogs/big-data/collect-parse-transform-and-stream-windows-events-logs-and-metrics-using-amazon-kinesis-agent-
for-microsoft-windows/
upvoted 1 times

" # AzureDP900 11 months ago


I'll go with D
upvoted 2 times

" # cldy 11 months, 1 week ago


D. Install the Amazon CloudWatch agent on servers, send logs to Amazon CloudWatch Logs and use metric filters to identify errors, create a
CloudWatch alarm to notify the Operations team of errors.
upvoted 1 times

" # AzureDP900 11 months, 1 week ago


D is right, there is no need of kinesis data analytics for this .
upvoted 2 times

" # denccc 1 year ago


I go for D
upvoted 2 times

" # blackgamer 1 year ago


I think A is better suited than D.
The CloudWatch agent can only be installed on Amazon Linux 2. Since this is a legacy application, I am assuming they are on a different OS. The Kinesis agent can
be installed on Red Hat Linux 7, so it is more reasonable here.
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with D
upvoted 2 times

" # Waiweng 1 year ago


it's D
upvoted 3 times

" # awsexamprep47 1 year ago


D is the answer.
CW Agent can be installed on On-Prem servers
upvoted 1 times

" # Kian1 1 year ago


going with D
upvoted 2 times


" # Ebi 1 year ago


D is the answer
upvoted 5 times

" # Ebi 1 year ago


A is not the answer. First of all, we don't need real-time; Kinesis is a very good use case for real-time log analysis. Second, we don't need Kinesis
Analytics; the only automation required is capturing errors, which can be done using a CW metric. Lastly, the Kinesis agent, although it can be installed on an
on-premises server, has more limited OS support compared to the CW agent, so for a legacy app it is not a good choice:

for Kinesis agent: "Your operating system must be either Amazon Linux AMI with version 2015.09 or later, or Red Hat Enterprise Linux version
7 or later."
https://docs.aws.amazon.com/streams/latest/dev/writing-with-agents.html#download-install
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Install-CloudWatch-Agent.html
upvoted 8 times

" # Rocketeer 2 months, 1 week ago


Part of the requirement is to do aggregations which can be done in kinesis data analytics. Hence I am leaning towards A.
upvoted 1 times

" # gookseang 1 year ago


go D ~
upvoted 1 times

" # 01037 1 year ago


D.
Both A and D meet the requirement, but D is simpler so is better.
upvoted 1 times


Question #526 Topic 1

What combination of steps could a Solutions Architect take to protect a web workload running on Amazon EC2 from DDoS and application layer
attacks? (Choose two.)

A. Put the EC2 instances behind a Network Load Balancer and configure AWS WAF on it.

B. Migrate the DNS to Amazon Route 53 and use AWS Shield.

C. Put the EC2 instances in an Auto Scaling group and configure AWS WAF on it.

D. Create and use an Amazon CloudFront distribution and configure AWS WAF on it.

E. Create and use an internet gateway in the VPC and use AWS Shield.

Correct Answer: DE
Reference:
https://aws.amazon.com/answers/networking/aws-ddos-attack-mitigation/

Community vote distribution


BD (100%)
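
For illustration, a minimal sketch of the WAF half of the combination: a WAFv2 web ACL scoped to CloudFront with an AWS managed rule group. The ACL and metric names are assumptions; the returned ARN would then be attached to the CloudFront distribution.

import boto3

# Web ACLs for CloudFront must be created in us-east-1 with Scope="CLOUDFRONT".
wafv2 = boto3.client("wafv2", region_name="us-east-1")

acl = wafv2.create_web_acl(
    Name="web-workload-acl",
    Scope="CLOUDFRONT",
    DefaultAction={"Allow": {}},
    Rules=[{
        "Name": "aws-common-rules",
        "Priority": 0,
        "Statement": {"ManagedRuleGroupStatement": {
            "VendorName": "AWS", "Name": "AWSManagedRulesCommonRuleSet"}},
        "OverrideAction": {"None": {}},
        "VisibilityConfig": {"SampledRequestsEnabled": True,
                             "CloudWatchMetricsEnabled": True,
                             "MetricName": "common-rules"},
    }],
    VisibilityConfig={"SampledRequestsEnabled": True,
                      "CloudWatchMetricsEnabled": True,
                      "MetricName": "web-workload-acl"},
)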

" # Waiweng Highly Voted $ 1 year ago


B and D
upvoted 11 times

" # CarisB Highly Voted $ 1 year ago


I go with B and D

"AWS Shield Standard automatically protects your Amazon Route 53 Hosted Zones from infrastructure layer DDoS attacks"
https://aws.amazon.com/shield/?nc1=h_ls&whats-new-cards.sort-by=item.additionalFields.postDateTime&whats-new-cards.sort-order=desc

"AWS WAF can be deployed on Amazon CloudFront, the Application Load Balancer (ALB), Amazon API Gateway, and AWS AppSync."
https://aws.amazon.com/waf/faqs/
upvoted 7 times

" # Vizz5585 Most Recent % 2 weeks, 6 days ago


Selected Answer: BD
B and D
upvoted 1 times

" # TechX 4 months ago


Selected Answer: BD
100% BD
upvoted 2 times

" # KiraguJohn 4 months, 2 weeks ago


I have a problem with D because we have not been told whether the web content is static or dynamic. Can we use Cloudfront on a dynamic web
content?
upvoted 1 times

" # tartarus23 6 months ago


Selected Answer: BD
B. Route 53 and AWS Shield help in mitigating the flood of DDoS attacks
D. Cloudfront and WAF also aid in preventing DDoS attacks
upvoted 2 times

" # roka_ua 7 months, 1 week ago


Selected Answer: BD
Vote BD
upvoted 1 times

" # shotty1 9 months, 2 weeks ago


Selected Answer: BD
it is BD
upvoted 2 times


" # AzureDP900 11 months ago


Selected Answer: BD
B and D is the answer
upvoted 4 times

" # tonikus 1 year ago


how on Earth could E get marked as an answer?
It's B and D
upvoted 2 times

" # WhyIronMan 1 year ago


I'll go with B, D
upvoted 2 times

" # Amitv2706 1 year ago


B and D
upvoted 1 times

" # blackgamer 1 year ago


B and D is the answer
upvoted 1 times

" # nitinz 1 year ago


B and D is the answer
upvoted 2 times

" # awsexamprep47 1 year, 1 month ago


B&D is the answer
upvoted 1 times

" # JJu 1 year, 1 month ago


i go with B, D.

- AWS Shield
Amazon CloudFront distributions
Amazon Route 53 hosted zones
AWS Global Accelerator accelerators
Application load balancers
Elastic Load Balancing (ELB) load balancers
Amazon Elastic Compute Cloud (Amazon EC2) Elastic IP addresses
- AWS WAF
Amazon CloudFront
Amazon API Gateway REST API
Application Load Balancer
AWS AppSync GraphQL API

i refer this link :


https://docs.aws.amazon.com/ko_kr/waf/latest/developerguide/waf-chapter.html
https://docs.aws.amazon.com/ko_kr/waf/latest/developerguide/shield-chapter.html
upvoted 4 times
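
On that note, the WAF piece of the voted answer (D) can be sketched with boto3. This is only an illustration, not the one way to do it: the web ACL name and metric names are hypothetical, and the AWS managed common rule set is used purely as an example rule. CloudFront-scoped web ACLs are created through the us-east-1 endpoint, and the returned ARN is then referenced from the CloudFront distribution's configuration.

    import boto3

    # Hypothetical example: create a CLOUDFRONT-scoped web ACL with one AWS
    # managed rule group. CloudFront-scoped ACLs use the us-east-1 endpoint.
    wafv2 = boto3.client("wafv2", region_name="us-east-1")

    acl = wafv2.create_web_acl(
        Name="retail-web-acl",                       # hypothetical name
        Scope="CLOUDFRONT",
        DefaultAction={"Allow": {}},                 # allow unless a rule blocks
        Rules=[{
            "Name": "common-rules",
            "Priority": 0,
            "Statement": {
                "ManagedRuleGroupStatement": {
                    "VendorName": "AWS",
                    "Name": "AWSManagedRulesCommonRuleSet",
                }
            },
            "OverrideAction": {"None": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "common-rules",
            },
        }],
        VisibilityConfig={
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "retail-web-acl",
        },
    )

    # acl["Summary"]["ARN"] is then set as the web ACL on the CloudFront
    # distribution, so requests are inspected at the edge.
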

" # didek1986 1 year, 1 month ago


i think BD
upvoted 1 times


Question #527 Topic 1

A photo-sharing and publishing company receives 10,000 to 150,000 images daily. The company receives the images from multiple suppliers and
users registered with the service. The company is moving to AWS and wants to enrich the existing metadata by adding data using Amazon
Rekognition.
The following is an example of the additional data:

As part of the cloud migration program, the company uploaded existing image data to Amazon S3 and told users to upload images directly to
Amazon S3.
What should the Solutions Architect do to support these requirements?

A. Trigger AWS Lambda based on an S3 event notification to create additional metadata using Amazon Rekognition. Use Amazon DynamoDB
to store the metadata and Amazon ES to create an index. Use a web front-end to provide search capabilities backed by Amazon ES.

B. Use Amazon Kinesis to stream data based on an S3 event. Use an application running in Amazon EC2 to extract metadata from the images.
Then store the data on Amazon DynamoDB and Amazon CloudSearch and create an index. Use a web front-end with search capabilities
backed by CloudSearch.

C. Start an Amazon SQS queue based on S3 event noti+cations. Then have Amazon SQS send the metadata information to Amazon
DynamoDB. An application running on Amazon EC2 extracts data from Amazon Rekognition using the API and adds data to DynamoDB and
Amazon ES. Use a web front-end to provide search capabilities backed by Amazon ES.

D. Trigger AWS Lambda based on an S3 event notification to create additional metadata using Amazon Rekognition. Use Amazon RDS MySQL
Multi-AZ to store the metadata information and use Lambda to create an index. Use a web front-end with search capabilities backed by
Lambda.

Correct Answer: D

Community vote distribution


A (100%)

" # nitinz Highly Voted $ 1 year, 1 month ago


A is answer
upvoted 14 times

" # Vizz5585 Most Recent % 2 weeks, 6 days ago


Selected Answer: A
A is the answer
upvoted 1 times

" # joanneli77 3 weeks, 5 days ago


I may need to search based on more than one metadata field. DynamoDB searches can't do every field. RDS. I literally had this use case IRL. D.
upvoted 1 times

" # aqiao 1 month ago


why need ES, why not search from ddb directly?
upvoted 1 times

" # epomatti 2 months ago


Selected Answer: A
A - It makes no sense to use a relational database for this use case.

Not sure why ES, it should be CloudSearch??


upvoted 1 times

" # pankajrawat 6 months, 1 week ago


Selected Answer: A
A is answer
upvoted 1 times

" # Buggie 10 months, 1 week ago


A it is
upvoted 1 times


" # RVivek 10 months, 2 weeks ago


A is correct
https://github.com/aws-samples/lambda-refarch-imagerecognition
upvoted 1 times
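
To tie the pieces of A together, here is a minimal sketch of the S3-triggered Lambda function; the DynamoDB table name is hypothetical, and the Amazon ES indexing call is only noted in a comment.

    import json
    import urllib.parse
    import boto3

    rekognition = boto3.client("rekognition")
    table = boto3.resource("dynamodb").Table("image-metadata")  # hypothetical table

    def handler(event, context):
        # Invoked by the S3 ObjectCreated event notification.
        for record in event["Records"]:
            bucket = record["s3"]["bucket"]["name"]
            key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

            # Enrich the metadata by labelling the image straight from S3.
            labels = rekognition.detect_labels(
                Image={"S3Object": {"Bucket": bucket, "Name": key}},
                MaxLabels=10,
                MinConfidence=80,
            )

            # Persist the enriched metadata; the corresponding document would
            # also be written to the Amazon ES index here (omitted for brevity).
            table.put_item(Item={
                "image_key": key,
                "bucket": bucket,
                "labels": [l["Name"] for l in labels["Labels"]],
            })

        return {"statusCode": 200, "body": json.dumps("processed")}
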

" # cldy 11 months ago


A. Trigger AWS Lambda based on an S3 event notification to create additional metadata using Amazon Rekognition. Use Amazon DynamoDB to
store the metadata and Amazon ES to create an index. Use a web front-end to provide search capabilities backed by Amazon ES.
upvoted 1 times

" # AzureDP900 11 months ago


A is right
upvoted 2 times

" # AzureDP900 11 months, 1 week ago


A is more cost effective and no need of streams!
upvoted 1 times

" # backfringe 11 months, 2 weeks ago


I'd go with A
DynamoDB and Rekognition
upvoted 1 times

" # RVD 11 months, 2 weeks ago


Selected Answer: A
A is correct.
upvoted 1 times

" # backfringe 11 months, 2 weeks ago


I go with A
upvoted 1 times

" # acloudguru 11 months, 3 weeks ago


Selected Answer: A
refer to the blog.https://aws.amazon.com/blogs/machine-learning/find-distinct-people-in-a-video-with-amazon-rekognition/
upvoted 1 times

" # Kopa 1 year ago


Im going for A, Dynamo DB and Amazon Rekognition makes the difference.
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with A
upvoted 1 times


Question #528 Topic 1

A Solutions Architect is redesigning an image-viewing and messaging platform to be delivered as SaaS. Currently, there is a farm of virtual
desktop infrastructure
(VDI) that runs a desktop image-viewing application and a desktop messaging application. Both applications use a shared database to manage
user accounts and sharing. Users log in from a web portal that launches the applications and streams the view of the application on the user's
machine. The Development Operations team wants to move away from using VDI and wants to rewrite the application.
What is the MOST cost-effective architecture that offers both security and ease of management?

A. Run a website from an Amazon S3 bucket with a separate S3 bucket for images and messaging data. Call AWS Lambda functions from
embedded JavaScript to manage the dynamic content, and use Amazon Cognito for user and sharing management.

B. Run a website from Amazon EC2 Linux servers, storing the images in Amazon S3, and use Amazon Cognito for user accounts and sharing.
Create AWS CloudFormation templates to launch the application by using EC2 user data to install and configure the application.

C. Run a website as an AWS Elastic Beanstalk application, storing the images in Amazon S3, and using an Amazon RDS database for user
accounts and sharing. Create AWS CloudFormation templates to launch the application and perform blue/green deployments.

D. Run a website from an Amazon S3 bucket that authorizes Amazon AppStream to stream applications for a combined image viewer and
messenger that stores images in Amazon S3. Have the website use an Amazon RDS database for user accounts and sharing.

Correct Answer: C

Community vote distribution


D (57%) A (43%)

" # AWSPro24 Highly Voted $ 1 year, 1 month ago


I believe the answer should be A. There are examples of filling in the dynamic elements of S3 websites with Lambda.

https://aws.amazon.com/blogs/architecture/create-dynamic-contact-forms-for-s3-static-websites-using-aws-lambda-amazon-api-gateway-and-
amazon-ses/
https://aws.amazon.com/getting-started/projects/build-serverless-web-app-lambda-apigateway-s3-dynamodb-cognito/

I feel the words "wants to rewrite the application" are key. They aren't looking to move the same code to AppStreah which is App streaming,
similar to VDI but scoped at the App level.

B - EC2 will be more expensive and "EC2 user data" is just silly and wrong
C - RDS isn't the best choice for a user store and there is no blue/green requirement
D - Don't believe AppStream can be launched from S3. Too Dynamic. Might be possible with Lambda.
upvoted 18 times

" # Smart 1 year, 1 month ago


Can "rewrite the app" means switching from VDI to App Streaming?
upvoted 2 times
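
For what it's worth, the dynamic-content piece that A relies on usually looks something like the sketch below: a Lambda function behind Amazon API Gateway, called from the site's embedded JavaScript and authorized by the Cognito user pool. The bucket name and per-user prefix layout are hypothetical.

    import json
    import boto3

    s3 = boto3.client("s3")
    DATA_BUCKET = "app-images-data"  # hypothetical bucket for images/messages

    def handler(event, context):
        # With a Cognito user pool authorizer on a REST API, the verified
        # username is available in the request context.
        user = event["requestContext"]["authorizer"]["claims"]["cognito:username"]

        # List the caller's objects and hand back short-lived presigned URLs,
        # so the static site can render them without making the bucket public.
        listing = s3.list_objects_v2(Bucket=DATA_BUCKET, Prefix=f"{user}/")
        urls = [
            s3.generate_presigned_url(
                "get_object",
                Params={"Bucket": DATA_BUCKET, "Key": obj["Key"]},
                ExpiresIn=300,
            )
            for obj in listing.get("Contents", [])
        ]

        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"images": urls}),
        }
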

" # inf Highly Voted $ 1 year ago


Answer: A
A - correct - solution will work and with low cost and management. No infrastructure to manage.
B - incorrect - cost of running and managing infrastructure expensive - not easy to maintain
C - incorrect - cost of running and managing infrastructure expensive - blue/green more so which requires the database to be external to the
environment or data will be lost.
D - incorrect - RDS for authentication/authorisation to provide secure access to S3? possible? plus cost of running infrastructure, and AppStream
is the same tech as the current streaming solution

Light reading
https://stackoverflow.com/questions/49782492/cognito-user-authorization-to-access-an-s3-object
https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_examples_s3_cognito-bucket.html
upvoted 8 times

" # nsvijay04b1 Most Recent % 1 week, 1 day ago


Selected Answer: A
A) Java script trigger lambda, S3 is cost effective, cognito for auth
https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/using-lambda-functions.html
B,C) costly and others explained already
D) App stream support user pool /sso/federated users not RDS, not cheap although pasy as u go, ques wants to refactor app from desktop not
migrate to another desktop steaming solution
upvoted 1 times


" # kharakbeer 1 week, 6 days ago


Selected Answer: A
A is right. Easy question ya 3azeezy
upvoted 1 times

" # joanneli77 3 weeks, 5 days ago


A has no database of record - where is the data? I went with D since it has a DB.
upvoted 1 times

" # tomosabc1 1 month ago


Selected Answer: D
A(wrong): AWS Lambda function cannot be called by embedded JavaScript directly, API Gateway is required, which is not mentioned by the
option.
B/C(wrong): These two options involve the use of EC2 (EC2 is used even in the case of Elastic Beanstalk), not cost effective, compared with D.
upvoted 2 times

" # tomosabc1 1 month ago


Selected Answer: D
A(wrong): AWS Lambda function cannot be called by embedded JavaScript directly, API Gateway is required, which is not mentioned by the
option.
B/C(wrong): These two options involve the use of EC2(EC2 is in used even in the case of Elastic Beanstalk), not cost effective, compared with D.
upvoted 2 times

" # tomosabc1 1 month ago


D(correct): AppStream 2.0 manages the AWS resources required to host and run your applications, scales automatically, and provides access
to your users on demand...With AppStream 2.0, you can easily add your existing desktop applications to AWS and enable your users to
instantly stream them(*** ease of management ***)...Your applications run on AWS compute resources, and data is never stored on users'
devices, which means they always get a high performance, secure experience((*** secure ***)). Unlike traditional on-premises solutions for
desktop application streaming, AppStream 2.0 offers pay-as-you-go pricing, with no upfront investment and no infrastructure to maintain(***
Cost effective ***). You can scale instantly and globally, ensuring that your users always have the best possible experience.
https://docs.aws.amazon.com/appstream2/latest/developerguide/what-is-appstream.html
upvoted 1 times

" # tomosabc1 1 month ago


Someone might argue that, as the question mentioned, the development operation team wants to move away from using VDI...I doubt
whether moving away from VDI means the same as moving away from AppStream 2.0.
upvoted 1 times

" # Dionenonly 1 month, 2 weeks ago


Selected Answer: A
A is the answer for me
upvoted 1 times

" # vbal 10 months, 2 weeks ago


A is the right Answer. Cognito Identity Pool would help run Lambda using AWS SDK for Javascript.
upvoted 1 times

" # vbal 10 months, 2 weeks ago


https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/using-lambda-functions.html
upvoted 1 times

" # cldy 11 months ago


A. Run a website from an Amazon S3 bucket with a separate S3 bucket for images and messaging data. Call AWS Lambda functions from
embedded JavaScript to manage the dynamic content, and use Amazon Cognito for user and sharing management.
upvoted 1 times

" # AzureDP900 11 months, 1 week ago


A is right because they want to discontinue VDI solutions.
upvoted 1 times

" # AWSum1 1 year ago


A is correct
upvoted 1 times

" # Bigbearcn 1 year ago


Option A is wrong. JavaScript is run on client-side and cannot load Lambda without api gateway. They don't mention api gateway anywhere.
I prefer option D. Even though it's not perfect. Considering the question require "offers both security and ease of management", It matches
AppStream better than others.
upvoted 1 times

" # 38745 1 year ago


A.


Cognito as a keyword narrows down to A/B. CloudFormation is not the case so not B.
upvoted 1 times
" # tgv 1 year ago
AAA
---
upvoted 1 times

" # DerekKey 1 year ago


A should be OK
B is wrong - "that offers both security and ease of management" -> "EC2 user data to install and configure the application"
C is wrong - "Amazon RDS database for user accounts and sharing"
D is wrong - "The Development Operations team wants to move away from using VDI and wants to rewrite the application."
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with A
upvoted 1 times


Question #529 Topic 1

A company would like to implement a serverless application by using Amazon API Gateway, AWS Lambda, and Amazon DynamoDB. They deployed
a proof of concept and stated that the average response time is greater than what their upstream services can accept. Amazon CloudWatch
metrics did not indicate any issues with DynamoDB but showed that some Lambda functions were hitting their timeout.
Which of the following actions should the Solutions Architect consider to improve performance? (Choose two.)

A. Configure the AWS Lambda function to reuse containers to avoid unnecessary startup time.

B. Increase the amount of memory and adjust the timeout on the Lambda function. Complete performance testing to identify the ideal memory
and timeout configuration for the Lambda function.

C. Create an Amazon ElastiCache cluster running Memcached, and configure the Lambda function for VPC integration with access to the
Amazon ElastiCache cluster.

D. Enable API cache on the appropriate stage in Amazon API Gateway, and override the TTL for individual methods that require a lower TTL
than the entire stage.

E. Increase the amount of CPU, and adjust the timeout on the Lambda function. Complete performance testing to identify the ideal CPU and
timeout configuration for the Lambda function.

Correct Answer: BD
Reference:
https://lumigo.io/blog/aws-lambda-timeout-best-practices/

Community vote distribution


BD (100%)

" # donathon Highly Voted $ 1 year, 1 month ago


BD
https://lumigo.io/blog/aws-lambda-timeout-best-practices/
A: While this will improve the situation, it may not be enough.
B: Memory – The amount of memory available to the function during execution. Choose an amount between 128 MB and 3,008 MB in 64 MB
increments. Lambda allocates CPU power linearly in proportion to the amount of memory configured. At 1,792 MB, a function has the equivalent
of 1 full vCPU (one vCPU-second of credits per second).
All calls made to AWS Lambda must complete execution within 900 seconds. The default timeout is 3 seconds, but you can set the timeout to
any value between 1 and 900 seconds.
C: The problem is not with the DB.
D: AWS API Gateway has a max timeout of 29 seconds for all integration types, which includes Lambda as well. It means that any API call
coming through API Gateway cannot exceed 29 seconds. It makes sense for most of the APIs except for few high computational ones.
E: Increase the memory not CPU.
upvoted 32 times
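
As a concrete illustration of B (the function name and values are hypothetical): raising memory also raises the CPU share, and the timeout gives slow invocations room while performance testing narrows down the right numbers.

    import boto3

    lambda_client = boto3.client("lambda")

    lambda_client.update_function_configuration(
        FunctionName="orders-api-handler",  # hypothetical function name
        MemorySize=1024,   # MB; CPU is allocated in proportion to memory
        Timeout=29,        # seconds; API Gateway integrations cap out at 29s
    )

The same change can of course be made in the console or in the function's deployment template; the point is simply that memory and timeout are per-function settings that can be tuned after load testing.
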

" # aandc Most Recent % 4 months, 1 week ago


Selected Answer: BD
You cannot config container being reused or not
upvoted 2 times

" # cannottellname 8 months, 3 weeks ago


A. https://aws.amazon.com/blogs/compute/container-reuse-in-lambda/ (Remember, you can’t depend on a container being reused, since it’s
Lambda’s prerogative to create a new one instead.)

B. Increase Memory is good option. (https://lumigo.io/learn/aws-lambda-timeout-best-practices/)

C. No DynamoDB

D. Sounds good to have less load on Lambda. Caching always gives things faster and better, with less computation for Lambda. (https://lumigo.io/learn/aws-lambda-timeout-best-practices/)

E. Not possible. Increase Memory to Increase CPU.


upvoted 1 times

" # tkanmani76 9 months, 3 weeks ago


A and B are right - as they help optimize and improve Lambda performance.
upvoted 1 times

" # tkanmani76 8 months, 3 weeks ago


Changing to B and D.


upvoted 1 times
" # AzureDP900 11 months, 1 week ago
Before even looking answers I decided to go with B,D . It is most appropriate.
upvoted 1 times

" # nsei 1 year ago


B & D are the answers
upvoted 1 times

" # wakame 1 year ago


A B is correct!
upvoted 1 times

" # kyoneyam 1 year ago


https://lumigo.io/blog/aws-lambda-timeout-best-practices/
A: While this will improve the situation, it may not be enough.
B: Memory – The amount of memory available to the function during execution. Choose an amount between 128 MB and 3,008 MB in 64 MB
increments. Lambda allocates CPU power linearly in proportion to the amount of memory configured. At 1,792 MB, a function has the equivalent
of 1 full vCPU (one vCPU-second of credits per second).
All calls made to AWS
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with B,D
upvoted 1 times

" # Waiweng 1 year ago


it's B,D
upvoted 1 times

" # SD13 1 year ago


A B looks correct. This question is asking to reduce execution time. D will only help if caching is applicable, not always.
upvoted 1 times

" # wakame 1 year ago


Agree with you !
API Cache is a feature that improves request latency.
But, If there is no cache, call Lambda.
Even if API Cache reduces calls to your Lambda, it often doesn't reduce the processing time of Lambda function.

On the other hand, A is correct. It is also mentioned in best practices.


https://docs.aws.amazon.com/lambda/latest/dg/best-practices.html
upvoted 1 times

" # Kian1 1 year ago


will go with BD
upvoted 1 times

" # ujizane 1 year ago


B and D is correct
upvoted 1 times

" # ujizane 1 year ago


B is correct
upvoted 1 times

" # Ebi 1 year ago


I go with BD
upvoted 1 times

" # Bulti 1 year ago


Answer is B and D.
A- incorrect because there is no configuration in Lambda to reuse the same sandbox/container
B- is correct because when memory size increases, the total time decreases. It means AWS keeps its promise and gives proportional CPU to
your function.
C:- there is no need to use ElasticCache as the problem is not related to caching data from DB.
D- This makes sense as it will increase performance and put less load on Lambda function.
E- You need to increase memory and not CPU.
upvoted 3 times

" # petebear55 1 year ago


B AND D ... BUT ANOTHER EXAMPLE OF SHI** AMAZON TYPE QUESTION PERSECUTING US !!! .. A WOULD BE CORRECT
https://docs.aws.amazon.com/lambda/latest/dg/best-practices.html BUT IT MENTIONS containers NOT WHAT IS SPECIFIED IN THE LINK ... SO


A AND D FOR ME
upvoted 2 times


Question #530 Topic 1

A company is migrating an application to AWS. It wants to use fully managed services as much as possible during the migration. The company
needs to store large, important documents within the application with the following requirements:
✑ The data must be highly durable and available.
✑ The data must always be encrypted at rest and in transit.
✑ The encryption key must be managed by the company and rotated periodically.
Which of the following solutions should the Solutions Architect recommend?

A. Deploy the storage gateway to AWS in file gateway mode. Use Amazon EBS volume encryption using an AWS KMS key to encrypt the
storage gateway volumes.

B. Use Amazon S3 with a bucket policy to enforce HTTPS for connections to the bucket and to enforce server-side encryption and AWS KMS
for object encryption.

C. Use Amazon DynamoDB with SSL to connect to DynamoDB. Use an AWS KMS key to encrypt DynamoDB objects at rest.

D. Deploy instances with Amazon EBS volumes attached to store this data. Use EBS volume encryption using an AWS KMS key to encrypt the
data.

Correct Answer: A

Community vote distribution


B (100%)

" # donathon Highly Voted $ 1 year, 1 month ago


B. As Storage Gateway is not a managed service
upvoted 27 times
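
To illustrate B (the bucket name is hypothetical): the first statement denies any request made without TLS, the second denies uploads that do not specify SSE-KMS.

    import json
    import boto3

    bucket = "company-important-documents"  # hypothetical bucket name

    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "DenyInsecureTransport",
                "Effect": "Deny",
                "Principal": "*",
                "Action": "s3:*",
                "Resource": [
                    f"arn:aws:s3:::{bucket}",
                    f"arn:aws:s3:::{bucket}/*",
                ],
                "Condition": {"Bool": {"aws:SecureTransport": "false"}},
            },
            {
                "Sid": "DenyUnencryptedUploads",
                "Effect": "Deny",
                "Principal": "*",
                "Action": "s3:PutObject",
                "Resource": f"arn:aws:s3:::{bucket}/*",
                "Condition": {
                    "StringNotEquals": {"s3:x-amz-server-side-encryption": "aws:kms"}
                },
            },
        ],
    }

    boto3.client("s3").put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
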

" # dpvnme Highly Voted $ 1 year, 1 month ago


B would be my choice
upvoted 10 times

" # kangtamo Most Recent % 4 months, 1 week ago


Selected Answer: B
Agree with B: S3 HTTPS
upvoted 1 times

" # cldy 11 months ago


B. Use Amazon S3 with a bucket policy to enforce HTTPS for connections to the bucket and to enforce server-side encryption and AWS KMS for
object encryption.
upvoted 2 times

" # AzureDP900 11 months, 1 week ago


I will pick B.
upvoted 1 times

" # AWSum1 1 year ago


B. "Highly durable and available"
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with B
upvoted 1 times

" # KittuCheeku 1 year ago


Definitely B
upvoted 1 times

" # Waiweng 1 year ago


it's B
upvoted 2 times

" # blackgamer 1 year ago


B for sure.
upvoted 1 times


" # KnightVictor 1 year ago


Should be B
upvoted 1 times

" # alisyech 1 year ago


i go with B
upvoted 1 times

" # awsexamprep47 1 year ago


B is the answer
All the encryption requirements are satisfied using S-3 bucket policy
upvoted 2 times

" # kiev 1 year ago


B for me. In fact I don't even worry about reading further when a question talks about storage that's fully managed and cost effective; I just go for S3
upvoted 4 times

" # Kian1 1 year ago


going with B
upvoted 2 times

" # ujizane 1 year ago


B is correct
upvoted 2 times

" # Ebi 1 year ago


Answer is B for sure
upvoted 4 times


Question #531 Topic 1

A Solutions Architect is designing a highly available and reliable solution for a cluster of Amazon EC2 instances.
The Solutions Architect must ensure that any EC2 instance within the cluster recovers automatically after a system failure. The solution must
ensure that the recovered instance maintains the same IP address.
How can these requirements be met?

A. Create an AWS Lambda script to restart any EC2 instances that shut down unexpectedly.

B. Create an Auto Scaling group for each EC2 instance that has a minimum and maximum size of 1.

C. Create a new t2.micro instance to monitor the cluster instances. Configure the t2.micro instance to issue an aws ec2 reboot-instances
command upon failure.

D. Create an Amazon CloudWatch alarm for the StatusCheckFailed_System metric, and then configure an EC2 action to recover the instance.

Correct Answer: D
Reference:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-recover.html

" # DJTau Highly Voted $ 1 year, 1 month ago


Answer = D
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-recover.html
upvoted 14 times
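
A minimal sketch of D (instance ID, Region, and alarm name are hypothetical); the built-in recover action keeps the instance ID and IP addresses, which is exactly what the question asks for.

    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    cloudwatch.put_metric_alarm(
        AlarmName="recover-cluster-node-1",   # hypothetical alarm name
        Namespace="AWS/EC2",
        MetricName="StatusCheckFailed_System",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
        Statistic="Maximum",
        Period=60,
        EvaluationPeriods=2,
        Threshold=1,
        ComparisonOperator="GreaterThanOrEqualToThreshold",
        # Built-in EC2 recover action; no Lambda or extra monitoring instance needed.
        AlarmActions=["arn:aws:automate:us-east-1:ec2:recover"],
    )
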

" # Ni_yot Most Recent % 10 months, 1 week ago


Yep D for me.
upvoted 1 times

" # cldy 10 months, 1 week ago


D correct.
upvoted 1 times

" # AzureDP900 11 months ago


I'll go with D
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with D
upvoted 2 times

" # Waiweng 1 year ago


it's D
upvoted 2 times

" # Kian1 1 year ago


going with D
upvoted 1 times

" # Ebi 1 year ago


Answer is D
upvoted 4 times

" # Bulti 1 year ago


Answer is D based on the ability to configure the recover action on a CloudWatch event.
upvoted 3 times

" # T14102020 1 year ago


Correct is D. Recover with CloudWatch alarm for the StatusCheckFailed_System
upvoted 1 times

" # jackdryan 1 year ago


I'll go with D
upvoted 4 times

" # gookseang 1 year ago


DDDDDDDDDDDDDDDDDDD


upvoted 2 times

" # fullaws 1 year ago


D is correct
upvoted 2 times

" # NikkyDicky 1 year ago


D for sure
upvoted 2 times

" # jv1 1 year ago


D
https://n2ws.com/blog/aws-disaster-recovery/how-aws-instance-auto-recovery-works-and-the-data-corruption-challenge
upvoted 3 times

" # dojo 1 year, 1 month ago


What about retaining IP?
upvoted 1 times

" # qianhaopower 1 year, 1 month ago


“ A recovered instance is identical to the original instance, including the instance ID, private IP addreSs”
upvoted 2 times

" # sparkf1 1 year, 1 month ago


If your instance has a public IPv4 address, it retains the public IPv4 address after recovery.

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-recover.html
upvoted 4 times


Question #532 Topic 1

A public retail web application uses an Application Load Balancer (ALB) in front of Amazon EC2 instances running across multiple Availability
Zones (AZs) in a
Region backed by an Amazon RDS MySQL Multi-AZ deployment. Target group health checks are configured to use HTTP and pointed at the
product catalog page. Auto Scaling is configured to maintain the web fleet size based on the ALB health check.
Recently, the application experienced an outage. Auto Scaling continuously replaced the instances during the outage. A subsequent investigation
determined that the web server metrics were within the normal range, but the database tier was experiencing high load, resulting in severely
elevated query response times.
Which of the following changes together would remediate these issues while improving monitoring capabilities for the availability and
functionality of the entire application stack for future growth? (Choose two.)

A. Configure read replicas for Amazon RDS MySQL and use the single reader endpoint in the web application to reduce the load on the
backend database tier.

B. Configure the target group health check to point at a simple HTML page instead of a product catalog page and the Amazon Route 53 health
check against the product page to evaluate full application functionality. Configure Amazon CloudWatch alarms to notify administrators when
the site fails.

C. Configure the target group health check to use a TCP check of the Amazon EC2 web server and the Amazon Route 53 health check against
the product page to evaluate full application functionality. Configure Amazon CloudWatch alarms to notify administrators when the site fails.

D. Configure an Amazon CloudWatch alarm for Amazon RDS with an action to recover a high-load, impaired RDS instance in the database tier.

E. Configure an Amazon ElastiCache cluster and place it between the web application and RDS MySQL instances to reduce the load on the
backend database tier.

Correct Answer: CE

Community vote distribution


BE (100%)

" # kalyan_krishna742020 Highly Voted $ 1 year ago


BE..
"A: Since the issue lies with query response (read) it is cheaper and faster to use ElastiCache which is in memory.
B\C: Unlike a Classic Load Balancer or a Network Load Balancer, an Application Load Balancer can't have transport layer (layer 4) TCP or
SSL/TLS listeners. It supports only HTTP and HTTPS listeners. Additionally, it can't use backend authentication to authenticate HTTPS
connections between the load balancer and backend instances.
D: Should not recover the RDS instance."
upvoted 12 times

" # epomatti 2 months ago


Adding replicas is more of a "remediation" (as stated in the question) than adding a cache, which will require architectural changes.

Honestly it is annoying how badly these questions are written.

Problem is that "single reader endpoint" is a feature of Aurora, not RDS MySQL.

So probably A is incorrect.
upvoted 1 times

" # qurren Highly Voted $ 1 year ago


BE
The problem in A is that "reader endpoint" is for Aurora, not RDS!
upvoted 8 times

" # WhyIronMan 1 year ago


Also, configuring the same single reader endpoint will result the same problem, since the read replica will be overloaded
upvoted 1 times

" # Kyperos Most Recent % 2 months, 2 weeks ago


Between B & C, application can be failed but TCP Port (443) still UP so If using TCP healthcheck, it will not accurate. We must HTTP Healthcheck
to get HTTP Ressponse to enhance healthcheck capability.
--> B & E
upvoted 1 times


" # hilft 3 months, 2 weeks ago


B for sure
It's between A and E. I initially thought it was A but the forum suggests that the reader endpoint is for Aurora not RDS.
upvoted 1 times

" # bkrish 6 months, 4 weeks ago


Selected Answer: BE
B --> ALB with health check
E --> ElastiCache for DB read performance and to offload huge traffic
upvoted 1 times

" # HellGate 9 months, 1 week ago


My answer is C and E.

Simple health check like TCP check (ping) will be enough because R53 also perform full health check.
upvoted 1 times

" # tracyli 2 months, 3 weeks ago


but did it mention R53 here yet?
upvoted 1 times

" # AzureDP900 11 months ago


BE for me
upvoted 1 times

" # tgv 1 year ago


BBB EEE
---
upvoted 1 times

" # nerdicbynature 1 year ago


AB:

A: Single reader endpoint will allow for easy future growths by simply adding more replicas. Costs aren't mentioned. Thus I would prefer A to D
B: Monitoring should be as cheap as possible. Compared to C, HTTP-Checks are more reliable.
D: Does not work directly.
upvoted 2 times

" # WhyIronMan 1 year ago


I'll go with B,E
upvoted 2 times

" # victordun 1 year ago


opt for B&E
upvoted 1 times

" # Waiweng 1 year ago


BE for me
upvoted 3 times

" # blackgamer 1 year ago


BE for me.
upvoted 1 times

" # CarisB 1 year ago


B and E
upvoted 2 times

" # wasabidev 1 year, 1 month ago


BE for me
upvoted 1 times

" # nitinz 1 year, 1 month ago


B and E
upvoted 1 times


Question #533 Topic 1

A company is running an email application across multiple AWS Regions. The company uses Ohio (us-east-2) as the primary Region and Northern
Virginia (us-east-1) as the Disaster Recovery (DR) Region. The data is continuously replicated from the primary Region to the DR Region by a
single instance on the public subnet in both Regions. The replication messages between the Regions have a significant backlog during certain
times of the day. The backlog clears on its own after a short time, but it affects the application's RPO.
Which of the following solutions should help remediate this performance problem? (Choose two.)

A. Increase the size of the instances.

B. Have the instance in the primary Region write the data to an Amazon SQS queue in the primary Region instead, and have the instance in the
DR Region poll from this queue.

C. Use multiple instances on the primary and DR Regions to send and receive the replication data.

D. Change the DR Region to Oregon (us-west-2) instead of the current DR Region.

E. Attach an additional elastic network interface to each of the instances in both Regions and set up load balancing between the network
interfaces.

Correct Answer: CE

Community vote distribution


AC (67%) BC (33%)

" # kalyan_krishna742020 Highly Voted $ 1 year, 1 month ago


Note: The answers are independent. It does not say which COMBINATION of answers would remediate this problem.
A - correct - someone hires a useless architect who suggested a t2.nano in region 1 and region 2. Network performance is attrocious (low).
Wonder if increasing instance size will help message replication throughput? of course it will. both compute and networking.
B - incorrect - less correct than A and C. If concerned about RPO, i'd fire myself if i suggested storing replication messages in the source region,
rather than in the SQS queue in the target region - definitely an RPO pleaser.
C - correct - if network and compute was an issue, potentially doubles the speed of replicating and processing messages
D - incorrect - increases latency if networking was the root cause
E - incorrect - teaming doesn't improve network performance
upvoted 16 times

" # epomatti 2 months ago


Where does it say t2.nano?? Doesn't show to me.
upvoted 1 times

" # MrCarter 1 year ago


amazing explanation. But please dont fire yourself!!
upvoted 2 times

" # student2020 1 year ago


Teaming is not supported in AWS
upvoted 1 times

" # kirrim 1 year ago


Agree with the above reasoning. In addition, if you assume the backlogs are occurring because the primary region servers are under heavy
load, then if the primary region servers are too busy to replicate data to the secondary region, they're probably too busy to chunk up email
data to place into SQS queue messages, too.
upvoted 1 times

" # aandc Most Recent % 4 months ago


Go for AC, E makes no sense regarding load balancing
upvoted 1 times

" # kangtamo 4 months, 1 week ago


Selected Answer: AC
I will go with AC.
upvoted 2 times

" # vbal 10 months, 2 weeks ago


A & C looks good.
upvoted 2 times


" # cldy 11 months ago


A. Increase the size of the instances.
C. Use multiple instances on the primary and DR Regions to send and receive the replication data.
upvoted 2 times

" # acloudguru 11 months, 2 weeks ago


Selected Answer: BC
For B, SQS can decouple the message for DR to reduce the backlog,For C ,it will increase the capacity
upvoted 1 times

" # Riho 10 months, 1 week ago


It can't be B. "Amazon SQS queue in the primary Region instead". Queue is in primary region and if something happens - the data is also still
in primary region. No RPO improvement..
upvoted 1 times

" # Salmariaz 1 year ago


It’s between AC and CE
A Increasing instance size would help but we don’t know what’s the current size and assuming that it’s already small. Though this might help, but
not sure if this would be a good option in the long run to keep increasing vertically
B SQS is ruled out since it is in the source region and counterfeits DR purpose
D is shifting the problem
C might help for faster processing since there are multiple instances to send and receive.
E would also increase the bandwidth as
Instances with multiple network cards provide higher network performance, including bandwidth capabilities above 100 Gbps and improved
packet rate performance
Also AWS supports load balancing to multiple IPs of the same instance

https://aws.amazon.com/about-aws/whats-new/2017/09/elastic-load-balancing-network-load-balancer-now-supports-load-balancing-to-ip-
addresses-as-targets-for-aws-and-on-premises-resources/
upvoted 1 times

" # Kopa 1 year ago


im for A,C
upvoted 1 times

" # nodogoshi 1 year ago


BC. SQS is durable for backlog
upvoted 2 times

" # student22 1 year ago


A,C

Why not C? The SQS queue in the source region would not improve RPO.
upvoted 1 times

" # denccc 1 year ago


go for A and C
upvoted 1 times

" # kevin1024 1 year ago


Go with C, E
C => multiple instances means more bandwidth too
E => Instances with multiple network cards provide higher network performance, including bandwidth capabilities above 100 Gbps and improved
packet rate performance.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html
upvoted 2 times

" # WhyIronMan 1 year ago


I'll go with A,C
upvoted 1 times

" # Waiweng 1 year ago


it's A and C
upvoted 2 times

" # blackgamer 1 year, 1 month ago


A and C to me.
upvoted 1 times

" # SD13 1 year, 1 month ago


B & C.
A is not correct as vertical scaling is not recommended.
upvoted 2 times


" # CarisB 1 year, 1 month ago


Agree on A and C. SQS max message size (256 kb) is not suited for replicating email data.
Teaming may have been interesting, but E just mentions "load balancing", not "teaming" (aggregate throughput).
upvoted 3 times


Question #534 Topic 1

A company has implemented AWS Organizations. It has recently set up a number of new accounts and wants to deny access to a specific set of
AWS services in these new accounts.
How can this be controlled MOST efficiently?

A. Create an IAM policy in each account that denies access to the services. Associate the policy with an IAM group, and add all IAM users to
the group.

B. Create a service control policy that denies access to the services. Add all of the new accounts to a single organizational unit (OU), and
apply the policy to that OU.

C. Create an IAM policy in each account that denies access to the services. Associate the policy with an IAM role, and instruct users to log in
using their corporate credentials and assume the IAM role.

D. Create a service control policy that denies access to the services, and apply the policy to the root of the organization.

Correct Answer: B
Reference:
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scp.html

Community vote distribution


B (100%)

" # donathon Highly Voted $ 1 year, 1 month ago


B
A\C: Not efficient.
D: Would affect all accounts.
upvoted 25 times
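
To make B concrete, a minimal sketch with boto3; the OU ID and the denied services are hypothetical placeholders.

    import json
    import boto3

    org = boto3.client("organizations")

    # Create an SCP that denies the unwanted services.
    scp = org.create_policy(
        Name="DenyUnapprovedServices",
        Description="Deny a specific set of services in the new accounts",
        Type="SERVICE_CONTROL_POLICY",
        Content=json.dumps({
            "Version": "2012-10-17",
            "Statement": [{
                "Effect": "Deny",
                "Action": ["redshift:*", "sagemaker:*"],  # hypothetical service list
                "Resource": "*",
            }],
        }),
    )

    # Attach it to the OU holding the new accounts, so only they are affected.
    org.attach_policy(
        PolicyId=scp["Policy"]["PolicySummary"]["Id"],
        TargetId="ou-abcd-11111111",  # hypothetical OU ID
    )
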

" # TechX Most Recent % 4 months, 1 week ago


Selected Answer: B
B for sure
upvoted 2 times

" # Ni_yot 9 months, 1 week ago


Selected Answer: B
B is correct. Applying at the root will affect all accounts.
upvoted 1 times

" # drwprch 9 months, 2 weeks ago


Selected Answer: B
B is correct
upvoted 1 times

" # AzureDP900 11 months ago


Selected Answer: B
B is correct
upvoted 1 times

" # TiredDad 1 year ago


is this an actual exam question!???
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with B
upvoted 3 times

" # Waiweng 1 year ago


it's B
upvoted 3 times

" # awsexamprep47 1 year ago


B is the answer.
Apply SCP on separate OU instead of applying it at root level, this will allow the solution to be scalable in case new accounts are added in the
future


upvoted 2 times

" # Kian1 1 year ago


going with B
upvoted 2 times

" # ujizane 1 year ago


B is correct
upvoted 2 times

" # Ebi 1 year ago


B is the answer
upvoted 3 times

" # Bulti 1 year ago


Answer is B. Services not explicitly allowed in the SCP and associated with an OU deny access to the accounts within that OU.
upvoted 1 times

" # T14102020 1 year ago


Correct B. service control policy for new OU
upvoted 1 times

" # jackdryan 1 year, 1 month ago


I'll go with B
upvoted 4 times

" # oopsy 1 year, 1 month ago


B for sure
upvoted 3 times

" # fullaws 1 year, 1 month ago


B is correct
upvoted 1 times


Question #535 Topic 1

A company has deployed an application to multiple environments in AWS, including production and testing. The company has separate accounts
for production and testing, and users are allowed to create additional application users for team members or services, as needed. The Security
team has asked the Operations team for better isolation between production and testing with centralized controls on security credentials and
improved management of permissions between environments.
Which of the following options would MOST securely accomplish this goal?

A. Create a new AWS account to hold user and service accounts, such as an identity account. Create users and groups in the identity account.
Create roles with appropriate permissions in the production and testing accounts. Add the identity account to the trust policies for the roles.

B. Modify permissions in the production and testing accounts to limit creating new IAM users to members of the Operations team. Set a
strong IAM password policy on each account. Create new IAM users and groups in each account to limit developer access to just the services
required to complete their job function.

C. Create a script that runs on each account that checks user accounts for adherence to a security policy. Disable any user or service
accounts that do not comply.

D. Create all user accounts in the production account. Create roles for access in the production account and testing accounts. Grant cross-
account access from the production account to the testing account.

Correct Answer: A
Reference:
https://aws.amazon.com/ru/blogs/security/how-to-centralize-and-automate-iam-policy-creation-in-sandbox-development-and-test-
environments/

Community vote distribution


A (100%)

" # donathon Highly Voted $ 1 year, 1 month ago


A
A: By centralizing users to a single account, a user can access the prod and test using assume role. This ensures that all actions are properly
logged and is the most secure. Adapted from this article: https://aws.amazon.com/blogs/security/how-to-centralize-and-automate-iam-policy-
creation-in-sandbox-development-and-test-environments/
B: This means the test users will still need to be created. The problem with test users is always security. Who is the actual person behind the
scene carrying out that specific actions? This is unlikely the most secure option.
C: Any answers that is asking you to write a script is very unlikely to be the answer.
D: This seems to be able to work too which is similar to A. But the Security team already asked for “better isolation with centralized controls”.
Hence I chose A.
upvoted 36 times

" # all_past 1 year, 1 month ago


I would prefer this one as well, B is not a centralized solution for me.
upvoted 1 times
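
To illustrate the mechanics behind A (all account IDs, role and session names are hypothetical): a role in the production account trusts the identity account, and a centrally managed user then assumes it with STS.

    import json
    import boto3

    IDENTITY_ACCOUNT_ID = "111111111111"  # hypothetical identity account

    trust_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{IDENTITY_ACCOUNT_ID}:root"},
            "Action": "sts:AssumeRole",
        }],
    }

    # Run with credentials in the production (or testing) account:
    # create the role that the identity account is trusted to assume.
    boto3.client("iam").create_role(
        RoleName="ProdOperators",  # hypothetical role name
        AssumeRolePolicyDocument=json.dumps(trust_policy),
    )

    # Run as an IAM user in the identity account: obtain temporary credentials
    # for the production account without any long-lived users existing there.
    creds = boto3.client("sts").assume_role(
        RoleArn="arn:aws:iam::222222222222:role/ProdOperators",  # hypothetical prod account
        RoleSessionName="jane-prod-session",
    )["Credentials"]
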

" # jj22222 Most Recent % 7 months, 1 week ago


Selected Answer: A
A. Create a new AWS account to hold user and service accounts, such as an identity account. Create users and groups in the identity account.
Create roles with appropriate permissions in the production and testing accounts. Add the identity account to the trust policies for the roles.
upvoted 1 times

" # drwprch 9 months, 2 weeks ago


Selected Answer: A
A. for sure
upvoted 1 times

" # AzureDP900 11 months ago


A is correct
upvoted 1 times

" # kirrim 1 year ago


It's definitely A, see bottom of p2 here: https://d0.awsstatic.com/aws-answers/AWS_Multi_Account_Security_Strategy.pdf

It's so commonly used, AWS even provided a way to color-code the console when you assume a role, so it shows up red when you're working in
a prod role, green in dev/test role, etc: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-console.html


upvoted 2 times
" # denccc 1 year ago
will go with A
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with A
upvoted 1 times

" # Waiweng 1 year ago


it's A
upvoted 2 times

" # ItsmeP 1 year ago


A is correct
upvoted 1 times

" # kiev 1 year ago


The Answer is A. I always knew it and have confirmed it with my questions and answer from Neal Davis. Guys choose A.
upvoted 1 times

" # Kian1 1 year ago


going with A
upvoted 2 times

" # ujizane 1 year ago


A is the answer
upvoted 1 times

" # Ebi 1 year ago


Answer is A
upvoted 3 times

" # Bulti 1 year ago


Answer is A since it is the most elegant solution when it comes to centralizing user management and granting access to the services in prod and
test account. However is it important to note that granting trust to the identity account alone will not prevent unauthorized access to services.
You will need to create a trust policy on IAM roles in prod and test accounts that grant assumeRole permission to specific groups or users in the
'identity account.
upvoted 2 times

" # petebear55 1 year ago


A B CAN NOT BE RIGHT BECAUSE THE QUESTION ASKS MOST SECURE !!! NOW HAVING 'GROUPS' ACCESS IS NOT SECURE BECAUSE
THERE IS NO WAY OF KNOWING WHICH USER IN THE GROUP ACCESSED THE RESOURCES. THUS A, WHICH IS BEST PRACTICE 'ROLES'
upvoted 1 times

" # Kian1 1 year ago


so what is your best answer?
upvoted 1 times

" # T14102020 1 year ago


For sure A. One centralized account + roles
upvoted 1 times

" # gookseang 1 year ago


seems A
upvoted 2 times


Question #536 Topic 1

The CISO of a large enterprise with multiple IT departments, each with its own AWS account, wants one central place where AWS permissions for
users can be managed and users' authentication credentials can be synchronized with the company's existing on-premises solution.
Which solution will meet the CISO's requirements?

A. Define AWS IAM roles based on the functional responsibilities of the users in a central account. Create a SAML-based identity management
provider. Map users in the on-premises groups to IAM roles. Establish trust relationships between the other accounts and the central account.

B. Deploy a common set of AWS IAM users, groups, roles, and policies in all of the AWS accounts using AWS Organizations. Implement
federation between the on-premises identity provider and the AWS accounts.

C. Use AWS Organizations in a centralized account to define service control policies (SCPs). Create a SAML-based identity management
provider in each account and map users in the on-premises groups to AWS IAM roles.

D. Perform a thorough analysis of the user base and create AWS IAM users accounts that have the necessary permissions. Set up a process to
provision and deprovision accounts based on data in the on-premises solution.

Correct Answer: C

Community vote distribution


C (50%) A (50%)

" # donathon Highly Voted $ 1 year, 1 month ago


C
To help you manage federation for multiple AWS accounts centrally, you can use AWS Single Sign-On to manage SSO access for all of your
accounts in AWS Organizations. https://aws.amazon.com/identity/federation/
A: The fact that the answer did not explain how “trust relationships” are created means I would avoid this answer if there is a better answer. In
this case C. You will also need to use a lot of assume roles in each and every account which can be tedious. This was what it used to be before
AWS Organization was launched.
B: Accounts are not centralized. (“one central place”)
D: There is no federation.
upvoted 19 times

" # donathon 1 year, 1 month ago


A
B\C: Accounts are not centralized. (“one central place”). Also SAML must be done in one account.
D: There is no federation.
upvoted 37 times

" # G3 1 year, 1 month ago


I feel it has to be C. SCPs offer central control over the maximum available permissions for all accounts in your organization. A doesnt
provide to centrally manage permissions.
upvoted 5 times

" # PacoDerek 1 year, 1 month ago


C. finally i got u once @donathon :D
SCPs are necessary but not sufficient for granting access in the accounts in your organization. Attaching an SCP to the organization root or
an organizational unit (OU) defines a guardrail for what actions accounts within the organization root or OU can do. You still need to attach
IAM policies to users and roles in your organization's accounts to actually grant permissions to them
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scp.html
upvoted 1 times

" # sam422 1 year ago


Although A looks bit vague on trust relationships, it is how aws asks to do it having an central account. For C, SCP offers perimeter control.
I go with A
upvoted 1 times

" # Musk 1 year, 1 month ago


C says "identity management provider in each account" which I think is wrong, because you just need one, not one per account.
upvoted 4 times

" # manoj101 1 year ago


C is not correct. you can't have SAML across each account. That is not going to centralise access.
upvoted 2 times

" # chandler 1 year, 1 month ago


Probably A is the answer: https://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_cross-account-with-roles.html and search "trust" on


the page.
Also, question asks about "AWS permissions for users can be managed", SCP won't help too much about that. It's more like IAM's job.
upvoted 15 times
" # nsvijay04b1 Most Recent % 1 week, 1 day ago
Selected Answer: C
each account IAM identity provider and role for SAML access created and it should be trusted external IDP provider.
upvoted 1 times

" # epomatti 2 months ago


Selected Answer: A
A
One central place to synchronize users.

C is wrong.
upvoted 1 times

" # Ni_yot 2 months, 3 weeks ago


Will go with A. https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_create_saml.html
upvoted 1 times

" # aandc 4 months, 1 week ago


A: C says " identity management provider in each account"
upvoted 1 times

" # cannottellname 9 months, 1 week ago


A Does not scale that well. Establishing trust and all, while also creating IAM role for each account because access limits can be different for
different accounts + person from 1 department should not access other departments accounts though they need same permissions. Hence,
there will be multiple IAM role + multiple account trusts....

Also, logging into central account and then assuming role for human resources does not seem a good option. This needs to be done at each and
every account level only.... and what kind of services will be needed that way...

C seems better to me here.


upvoted 1 times

" # vbal 11 months ago


why C ? Create an IDP in each Account..?
upvoted 1 times

" # student22 1 year ago


A is correct
Map on premise users to AWS Roles through SAML federation
C is similar but not centralized.
upvoted 2 times

" # denccc 1 year ago


I think it's A
upvoted 1 times

" # DerekKey 1 year ago


A correct - https://aws.amazon.com/blogs/security/how-to-use-trust-policies-with-iam-roles/
B wrong - "The CISO" + "wants one central place where AWS permissions ... can be managed and users authentication credentials can be
synchronized with the company’s existing on-premises solution"
C wrong - "The CISO" + "wants one central place where AWS permissions ... can be managed and users authentication credentials can be
synchronized with the company’s existing on-premises solution"
D wrong
upvoted 2 times

" # WhyIronMan 1 year ago


I'll go with A
upvoted 2 times

" # Waiweng 1 year ago


it's C
upvoted 2 times

" # Kian1 1 year ago


will go with A
upvoted 1 times

" # Ebi 1 year ago


Answer is A


upvoted 3 times
" # 01037 1 year ago
Either A or C needs to create roles for all accounts, so neither can really control permissions centrally.
But SCP defines boundaries, so it can provide central permission control to some extent, and simpler.
So I'm inclined to C.
upvoted 1 times

" # Bulti 1 year ago


A is the right answer. There should be only one SAML Identity Provider for the organization which would federate with the on-prem solution to
maps the groups to the IAM roles in the centralized account. Using these IAM roles the user can assume a role in other accounts like prod and
test to access services using trust and permission policies associated with the assumed role in those accounts.
upvoted 1 times

" # Manolo0101 1 year ago


answer should be A
upvoted 1 times


Question #537 Topic 1

A large company has increased its utilization of AWS over time in an unmanaged way. As such, they have a large number of independent AWS
accounts across different business units, projects, and environments. The company has created a Cloud Center of Excellence team, which is
responsible for managing all aspects of the AWS Cloud, including their AWS accounts.
Which of the following should the Cloud Center of Excellence team do to BEST address their requirements in a centralized way? (Choose two.)

A. Control all AWS account root user credentials. Assign AWS IAM users in the account of each user who needs to access AWS resources.
Follow the policy of least privilege in assigning permissions to each user.

B. Tag all AWS resources with details about the business unit, project, and environment. Send all AWS Cost and Usage reports to a central
Amazon S3 bucket, and use tools such as Amazon Athena and Amazon QuickSight to collect billing details by business unit.

C. Use the AWS Marketplace to choose and deploy a Cost Management tool. Tag all AWS resources with details about the business unit,
project, and environment. Send all AWS Cost and Usage reports for the AWS accounts to this tool for analysis.

D. Set up AWS Organizations. Enable consolidated billing, and link all existing AWS accounts to a master billing account. Tag all AWS
resources with details about the business unit, project and environment. Analyze Cost and Usage reports using tools such as Amazon Athena
and Amazon QuickSight, to collect billing details by business unit.

E. Using a master AWS account, create IAM users within the master account. Define IAM roles in the other AWS accounts, which cover each of
the required functions in the account. Follow the policy of least privilege in assigning permissions to each role, then enable the IAM users to
assume the roles that they need to use.

Correct Answer: AD

Community vote distribution


DE (75%) AD (25%)

" # ajeeshb Highly Voted $ 1 year, 1 month ago


Answers: DE
E because it manages IAM users centrally and uses roles in other accounts to which the IAM user can switch. Option A says to create IAM users
in all accounts which is not the best solution
upvoted 21 times
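For option E, the cross-account pattern boils down to an IAM user in the master account assuming a role that already exists in a member account. A minimal boto3 sketch, assuming a hypothetical role named BillingAuditRole in member account 111122223333 whose trust policy allows the master account to assume it:

import boto3

# Assumption: "BillingAuditRole" already exists in the member account and its
# trust policy allows sts:AssumeRole from the master account's IAM users.
sts = boto3.client("sts")

resp = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/BillingAuditRole",  # hypothetical member account/role
    RoleSessionName="cco-team-session",
)
creds = resp["Credentials"]

# Temporary credentials scoped to the member-account role; least privilege is
# enforced by that role's permission policy, not by the master account.
member_ec2 = boto3.client(
    "ec2",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(len(member_ec2.describe_instances()["Reservations"]))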

" # nsei Highly Voted $ 12 months ago


A & D are the right answers.
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_best-practices_mgmt-acct.html
We recommend that you use the management account and its users and roles only for tasks that can be performed only by that account. Store
all of your AWS resources in other AWS accounts in the organization and keep them out of the management account... Therefore E is incorrect
upvoted 8 times

" # tkanmani76 9 months, 2 weeks ago


Agree on this - A& D
upvoted 1 times

" # cen007 3 months, 2 weeks ago


Read the question carefully.
The requirement is the central account wants to manage all ASPECT OF THE CLOUD INCLUDING THE AWS ACCOUNTS.
Therefore ROLES will be created in the child accounts that the master will assume.
So D & E is the correct answer.
upvoted 1 times

" # gerhardbl 2 months, 3 weeks ago


Creating all IAM Users in the main AWS Organization accounts and having no IAM Users in the sub-accounts is the consequence of
what you are saying (E). That means only IAM Roles in the sub-accounts. That seems very counter intuitive. You cannot even have users
and teams manage their own account and login to the Console as an IAM User in that case. For me, central management is more that
you at least control all the sub-accounts at the Root level, bring them under an Organization, and can then enforce security settings,
SCPs, Service Catalog etc. That is more A than E.
upvoted 1 times

" # aandc 4 months, 1 week ago


agree on A D
upvoted 1 times

" # Jonfernz Most Recent % 4 weeks ago


Selected Answer: DE


I do this on a daily basis. It's definitely D, E.


upvoted 1 times

" # EgaHa 1 month, 1 week ago


Selected Answer: DE
Would go with D & E
upvoted 1 times

" # epomatti 2 months ago


Selected Answer: AD
A, D

E is wrong, it is a bad practice to use the master account for creating users.
upvoted 1 times

" # zoliv 9 months, 1 week ago


Selected Answer: DE
D & E. AWS Org & IAM Roles
upvoted 1 times

" # AzureDP900 11 months ago


D & E makes most sense.
upvoted 1 times

" # AWSum1 1 year ago


DE
Centrally manage. The wording of the other options being "each account" shows no central management
upvoted 1 times

" # DanShone 1 year ago


D and E
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with D,E
upvoted 2 times

" # Kopa 1 year ago


Im also for DE
upvoted 1 times

" # Waiweng 1 year, 1 month ago


D and E
upvoted 3 times

" # blackgamer 1 year, 1 month ago


DE is the correct answer.
upvoted 1 times

" # KnightVictor 1 year, 1 month ago


Would go with D & E
upvoted 1 times

" # CarisB 1 year, 1 month ago


D and E in my opinion. A is not really a centralized solution.
upvoted 2 times

" # nitinz 1 year, 1 month ago


D & E makes most sense.
upvoted 1 times


Question #538 Topic 1

To abide by industry regulations, a Solutions Architect must design a solution that will store a company's critical data in multiple public AWS
Regions, including in the United States, where the company's headquarters is located. The Solutions Architect is required to provide access to the
data stored in AWS to the company's global WAN network. The Security team mandates that no traffic accessing this data should traverse the
public internet.
How should the Solutions Architect design a highly available solution that meets the requirements and is cost-effective?

A. Establish AWS Direct Connect connections from the company headquarters to all AWS Regions in use. Use the company WAN to send
traffic over to the headquarters and then to the respective DX connection to access the data.

B. Establish two AWS Direct Connect connections from the company headquarters to an AWS Region. Use the company WAN to send traffic
over a DX connection. Use inter-region VPC peering to access the data in other AWS Regions.

C. Establish two AWS Direct Connect connections from the company headquarters to an AWS Region. Use the company WAN to send traffic
over a DX connection. Use an AWS transit VPC solution to access data in other AWS Regions.

D. Establish two AWS Direct Connect connections from the company headquarters to an AWS Region. Use the company WAN to send traffic
over a DX connection. Use Direct Connect Gateway to access data in other AWS Regions.

Correct Answer: D
Reference:
https://aws.amazon.com/blogs/aws/new-aws-direct-connect-gateway-inter-region-vpc-access/

Community vote distribution


D (100%)

" # donathon Highly Voted $ 1 year, 1 month ago


D
This feature also allows you to connect to any of the participating VPCs from any Direct Connect location, further reducing your costs for making
using AWS services on a cross-region basis.
https://aws.amazon.com/blogs/aws/new-aws-direct-connect-gateway-inter-region-vpc-access/
A: There is only a single DC and hence is not highly available.
B: VPC peering means there are additional cost charges when data transfer between region. Also there is a 125 VPC peering limit. Data
transferred across Inter-Region VPC Peering connections is charged at the standard inter-region data transfer rates. https://aws.amazon.com
/about-aws/whats-new/2017/11/announcing-support-for-inter-region-vpc-peering/
C: Similar to B.
D: Remember one caveat which the question did not state is if there are multiple accounts: The VPCs that reference a particular Direct Connect
Gateway must have IP address ranges that do not overlap. Today, the VPCs must all be in the same AWS account; we plan to make this more
flexible in the future. https://aws.amazon.com/blogs/aws/new-aws-direct-connect-gateway-inter-region-vpc-access/
upvoted 53 times

" # Joeylee 1 year ago


Agree on D
upvoted 1 times

" # Kuro 1 year ago


Clear explanation.
upvoted 1 times

" # shammous 1 year ago


Another argument is that there was no mention of needing a transit feature between VPCs, which is an extra reason to choose D over C (apart from being more cost-effective).
upvoted 2 times

" # DashL 1 year ago


As per aws announcement in Mar 2019 (https://aws.amazon.com/about-aws/whats-new/2019/03/announcing-multi-account-support-for-
direct-connect-gateway/):

With the launch of multi-account support for Direct Connect gateway, you can associate up to 10 Amazon VPCs from multiple accounts with a
Direct Connect gateway. The Amazon VPCs and the Direct Connect gateway must be owned by AWS Accounts that belong to the same AWS
payer account ID.
upvoted 3 times
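For option D, the inter-region piece is a Direct Connect gateway associated with the virtual private gateways of VPCs in different Regions, all reachable over the existing private VIF. A minimal boto3 sketch, assuming hypothetical gateway names and IDs (the VIF attachment itself is not shown):

import boto3

dx = boto3.client("directconnect", region_name="us-east-1")

# Create the Direct Connect gateway (a global object).
gw = dx.create_direct_connect_gateway(
    directConnectGatewayName="corp-dx-gateway"          # hypothetical name
)["directConnectGateway"]
gw_id = gw["directConnectGatewayId"]

# Associate virtual private gateways from VPCs in different Regions.
# The vgw IDs below are hypothetical placeholders.
for vgw_id in ["vgw-0aaa1111bbb22222c", "vgw-0ddd3333eee44444f"]:
    dx.create_direct_connect_gateway_association(
        directConnectGatewayId=gw_id,
        virtualGatewayId=vgw_id,
    )

The existing private VIF at the headquarters DX location is then attached to the same gateway, giving the WAN access to both Regions over one connection pair.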

" # bebo Highly Voted $ 1 year, 1 month ago


Question 144
A. Use Amazon CloudFront with Amazon ECS for hosting the website. Use AWS Secrets Manager for provide


user management and authentication functions. Use ECS Docker containers to build an API.
B. Use Amazon Route 53 latency routing with an Application Load Balancer and AWS Fargate in different
regions for hosting the website. use Amazon Cognito to provide user management and authentication
functions. Use Amazon EKS containers.
C. Use Amazon CloudFront with Amazon S3 for hosting static web resources. Use Amazon Cognito to provide
user management authentication functions. Use Amazon API Gateway with AWS Lambda to build an API.
D. Use AWS Direct Connect with Amazon CloudFront and Amazon S3 for hosting static web resource. Use
Amazon Cognito to provide user management authentication functions. Use AWS Lambda to build an API.
Correct Answer: C
upvoted 15 times
" # HellGate Most Recent % 7 months, 2 weeks ago
Selected Answer: D
B, C, D are all right way... D > C > B
D is the best answer.
upvoted 1 times

" # GeniusMikeLiu 9 months, 2 weeks ago


why need two AWS Direct Connect connections?
upvoted 2 times

" # HellGate 9 months, 1 week ago


HA purpose
upvoted 2 times

" # AzureDP900 11 months ago


D is right
upvoted 1 times

" # student22 1 year ago


D
AWS region 1 --> VIF --> Direct Connect Gateway --> multiple VIF --> multiple AWS Regions
upvoted 3 times

" # WhyIronMan 1 year ago


I'll go with D
upvoted 1 times

" # Waiweng 1 year ago


it's D
upvoted 1 times

" # Pupu86 1 year ago


A: Doesn’t fulfill HA
B: charges are implemented for both inbound and outbound peering
C: charges are implemented for outbound only but solution only suitable for VPC transits purpose to overcome VPC peering mesh
D: Correct answer to link multiple regional traffic
upvoted 1 times

" # bnagaraja9099 1 year ago


D
https://docs.aws.amazon.com/whitepapers/latest/aws-vpc-connectivity-options/aws-direct-connect-aws-transit-gateway.html
upvoted 1 times

" # Firststack 1 year ago


D is correct
upvoted 1 times

" # Ebi 1 year ago


I go with D
upvoted 3 times

" # MichaelHuang 1 year ago


D
See the link for Direct Connect Gateway for multi-regions: https://aws.amazon.com/blogs/aws/new-aws-direct-connect-gateway-inter-region-
vpc-access/
upvoted 1 times

" # Bulti 1 year ago


Answer is D
upvoted 1 times

" # jackdryan 1 year ago


I'll go with D


upvoted 3 times

" # GopiSivanathan 1 year ago


data is in the AWS public regions, so Direct connect Gateway can't be used. it should Transit VPC
upvoted 2 times

" # rcher 1 year ago


You can create Public VIF between Direct Connect and Direct Connect Gateway. and access AWS public services like S3. Transit VPC
works for VPN, which means IPSEC that work over the internet :)
upvoted 1 times
" # Edgecrusher77 1 year ago
C, Transit VPC
upvoted 1 times

" # ipindado2020 1 year ago


A. Multiple DX connections very expensive: KO
B. Transitive peering not allowed: KO.
C. transit VPC: OK
D. Direct Connect Gateway: OK

As for the additional costs of the transit VPC solution....


D seems to be correct
upvoted 1 times


Question #539 Topic 1

A company wants to manage the costs associated with a group of 20 applications that are infrequently used, but are still business-critical, by
migrating to AWS.
The applications are a mix of Java and Node.js spread across different instance clusters. The company wants to minimize costs while
standardizing by using a single deployment methodology. Most of the applications are part of month-end processing routines with a small number
of concurrent users, but they are occasionally run at other times. Average application memory consumption is less than 1 GB, though some
applications use as much as 2.5 GB of memory during peak processing. The most important application in the group is a billing report written in
Java that accesses multiple data sources and often runs for several hours.
Which is the MOST cost-effective solution?

A. Deploy a separate AWS Lambda function for each application. Use AWS CloudTrail logs and Amazon CloudWatch alarms to verify
completion of critical jobs.

B. Deploy Amazon ECS containers on Amazon EC2 with Auto Scaling configured for memory utilization of 75%. Deploy an ECS task for each
application being migrated with ECS task scaling. Monitor services and hosts by using Amazon CloudWatch.

C. Deploy AWS Elastic Beanstalk for each application with Auto Scaling to ensure that all requests have sufficient resources. Monitor each
AWS Elastic Beanstalk deployment by using CloudWatch alarms.

D. Deploy a new Amazon EC2 instance cluster that co-hosts all applications by using EC2 Auto Scaling and Application Load Balancers. Scale
cluster size based on a custom metric set on instance memory utilization. Purchase 3-year Reserved Instance reservations equal to the
GroupMaxSize parameter of the Auto Scaling group.

Correct Answer: C

Community vote distribution


B (100%)

" # meenu2225 Highly Voted $ 1 year, 1 month ago


I will go with B, B makes more sense out of the others. All of these apps are begging to be containerised :)
upvoted 23 times

" # kirrim 1 year ago


Agree, ECS is the most cost-effective answer up there. Lambda would be great except that reporting app that runs for 2 hrs, which is a no go
for Lambda with a max execution time of 900 seconds (15 mins): https://docs.aws.amazon.com/whitepapers/latest/serverless-architectures-
lambda/timeout.html

Side note: It's not an available choice, but I'd argue that since these apps are only sporadically used, Fargate would likely be even more cost
effective than EC2-based ECS: https://aws.amazon.com/blogs/containers/theoretical-cost-optimization-by-amazon-ecs-launch-type-fargate-
vs-ec2/
upvoted 2 times
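For option B, each migrated application becomes its own ECS task definition with a memory reservation sized for its normal ~1 GB footprint and a hard limit above the 2.5 GB peak. A minimal boto3 sketch with a hypothetical family name and image:

import boto3

ecs = boto3.client("ecs")

# One task definition per application; memory sized for ~1 GB normal use
# with a 2.5 GB hard limit to cover peak month-end processing.
ecs.register_task_definition(
    family="billing-report",                      # hypothetical application name
    networkMode="bridge",
    containerDefinitions=[
        {
            "name": "billing-report",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/billing-report:latest",  # hypothetical
            "memoryReservation": 1024,            # soft limit (MiB)
            "memory": 2560,                       # hard limit (MiB)
            "essential": True,
        }
    ],
)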

" # Sumit_Kumar Most Recent % 3 months ago


single deployment approach - elastic beanstalk
upvoted 2 times

" # foxrj21 5 months ago


Selected Answer: B
its B, ECS with ec2 or spot instances
upvoted 1 times

" # bobsmith2000 7 months ago


Selected Answer: B
It's B. No-brainer
upvoted 2 times

" # Hasitha99 7 months, 1 week ago


Based on the given answers,I will go with B.
upvoted 2 times

" # jyrajan69 9 months, 1 week ago


The key word is 'Most Cost Effective' so why no one looking at D, it says Reserved Instances which is designed to reduce costs, for me thats the
answer.
upvoted 1 times


" # jason2009 7 months, 4 weeks ago


D is incorrect because by purchasing RIs based on the Max value set in an ASG will for sure waste resources because you are paying for the
extra capacity unused. As matter of fact, because ECS does not have any additional cost and can utilize the RIs if you choose EC2, B for sure
is cheaper than D.
upvoted 4 times

" # Duke_YU 10 months ago


C is what Amazon want you to use, B is what most architect will choose in real life.
upvoted 3 times

" # user0001 6 months ago


C is right
upvoted 1 times

" # AzureDP900 11 months ago


I'll go with B
upvoted 1 times

" # tonikus 1 year ago


I think it's C based on "...while standardizing by using a single deployment methodology."
upvoted 2 times

" # tonikus 1 year ago


hmmm, I guess I'm changing to B
upvoted 1 times

" # nodogoshi 1 year ago


B. Microservice's question. ECS is for it.
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with B
upvoted 2 times

" # tuananhngo 1 year ago


what does cost-effective means? does it relate to price or lead time? we are not sure abt it.
I think the correct is C because B would take much effort to transform application to container type.
upvoted 2 times

" # kpcert 1 year ago


Why not use the Lambda? The question has the follwing keywords
1. Node.js and Java
2. Infrequent Usage and memory requirement is max 2.5 GB
3. Less concurrency
4. Need Cost-effective solution
upvoted 1 times

" # StanM 1 year ago


because one of the applications can run up to several hours while lambda can run only up to 15 mins. We could think of making an exception
for this application and run everything else as lambda, but the question insists on standardizing.
upvoted 8 times

" # Waiweng 1 year ago


it's B fof cost effectiveness
upvoted 4 times

" # blackgamer 1 year ago


B is the correct answer because it is more cost effective.
upvoted 1 times

" # Kian1 1 year ago


going with B
upvoted 2 times

" # Ebi 1 year ago


B and C both seem correct, I will go with B
upvoted 2 times


Question #540 Topic 1

A Solutions Architect must build a highly available infrastructure for a popular global video game that runs on a mobile phone platform. The
application runs on
Amazon EC2 instances behind an Application Load Balancer. The instances run in an Auto Scaling group across multiple Availability Zones. The
database tier is an Amazon RDS MySQL Multi-AZ instance. The entire application stack is deployed in both us-east-1 and eu-central-1. Amazon
Route 53 is used to route traffic to the two installations using a latency-based routing policy. A weighted routing policy is configured in Route 53 as
a failover to another region in case the installation in a region becomes unresponsive.
During the testing of disaster recovery scenarios, after blocking access to the Amazon RDS MySQL instance in eu-central-1 from all the application
instances running in that region, Route 53 does not automatically fail over all traffic to us-east-1.
Based on this situation, which changes would allow the infrastructure to failover to us-east-1? (Choose two.)

A. Specify a weight of 100 for the record pointing to the primary Application Load Balancer in us-east-1 and a weight of 60 for the record pointing to
the primary Application Load Balancer in eu-central-1.

B. Specify a weight of 100 for the record pointing to the primary Application Load Balancer in us-east-1 and a weight of 0 for the record
pointing to the primary Application Load Balancer in eu-central-1.

C. Set the value of Evaluate Target Health to Yes on the latency alias resources for both eu-central-1 and us-east-1.

D. Write a URL in the application that performs a health check on the database layer. Add it as a health check within the weighted routing
policy in both regions.

E. Disable any existing health checks for the resources in the policies and set a weight of 0 for the records pointing to primary in both eu-
central-1 and us-east-1, and set a weight of 100 for the primary Application Load Balancer only in the region that has healthy resources.

Correct Answer: BC

Community vote distribution


CD (100%)

" # haah Highly Voted $ 1 year, 1 month ago


support CD
upvoted 12 times

" # Madhu654 Highly Voted $ 1 year ago


Health check page targeting a database is a bad idea. The question talks about a weighted routing policy.

If all the records that have a weight greater than 0 are unhealthy, then Route 53 considers the zero-weighted records.

https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/health-checks-how-route-53-chooses-records.html

Answer is BC
upvoted 5 times
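Whichever pair is picked, the Evaluate Target Health part of option C is just a flag on the latency alias record itself. A minimal boto3 sketch with hypothetical hosted zone, record name, and ALB values:

import boto3

r53 = boto3.client("route53")

# Latency-based alias record for eu-central-1 that evaluates the health of the
# ALB target, so Route 53 stops returning it when that Region's stack is unhealthy.
r53.change_resource_record_sets(
    HostedZoneId="Z1EXAMPLE",                           # hypothetical hosted zone
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com.",
                "Type": "A",
                "SetIdentifier": "eu-central-1",
                "Region": "eu-central-1",
                "AliasTarget": {
                    "HostedZoneId": "Z2EXAMPLEALB",     # the ALB's canonical hosted zone ID (placeholder)
                    "DNSName": "my-alb-123456.eu-central-1.elb.amazonaws.com.",
                    "EvaluateTargetHealth": True,
                },
            },
        }]
    },
)

A matching record with SetIdentifier "us-east-1" would point at the other Region's ALB; option D's database health check would be attached to custom Route 53 health checks rather than to the alias flag.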

" # ArreRaja Most Recent % 5 months, 1 week ago


BC
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover-complex-configs.html
upvoted 2 times

" # bobsmith2000 5 months, 2 weeks ago


Selected Answer: CD
No-brainer
upvoted 1 times

" # Ronon 11 months ago


RDS won't use ALB, so A & B are incorrect, but we need to failover traffic to us-east-1. only B & C work.
upvoted 2 times

" # AzureDP900 11 months ago


I will go with CD
upvoted 1 times

" # acloudguru 11 months, 1 week ago


Question number is 357.
D. Write a URL in the application that performs a health check on the database layer. Add it as a health check within the weighted routing policy


in both regions.
Did we see the same question?
upvoted 1 times

" # moon2351 1 year ago


I'll go with C&D
upvoted 2 times

" # tgv 1 year ago


CCC DDD
---
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with C,D
upvoted 1 times

" # Waiweng 1 year ago


it's C and D
upvoted 2 times

" # blackgamer 1 year ago


C and D is correct answer.
upvoted 1 times

" # kiev 1 year ago


CD and yes it is a repeat question. I like some of these as it helps consolidate my understanding of the questions.
upvoted 1 times

" # Ebi 1 year ago


I will go with CD
upvoted 3 times

" # T14102020 1 year ago


Correct is CD. Evaluate target health + health check DB
upvoted 1 times

" # jackdryan 1 year ago


I'll go with C,D
upvoted 2 times

" # NikkyDicky 1 year, 1 month ago


CD, dup of Q 142
upvoted 2 times

" # Phat 1 year, 1 month ago


dup with q140, not 142.
upvoted 1 times


Question #541 Topic 1

An online e-commerce business is running a workload on AWS. The application architecture includes a web tier, an application tier for business
logic, and a database tier for user and transactional data management. The database server has a 100 GB memory requirement. The business
requires cost-efficient disaster recovery for the application with an RTO of 5 minutes and an RPO of 1 hour. The business also has a regulatory requirement for
out-of-region disaster recovery with a minimum distance between the primary and alternate sites of 250 miles.
Which of the following options can the Solutions Architect design to create a comprehensive solution for this customer that meets the disaster
recovery requirements?

A. Back up the application and database data frequently and copy them to Amazon S3. Replicate the backups using S3 cross-region
replication, and use AWS CloudFormation to instantiate infrastructure for disaster recovery and restore data from Amazon S3.

B. Employ a pilot light environment in which the primary database is configured with mirroring to build a standby database on m4.large in the
alternate region. Use AWS CloudFormation to instantiate the web servers, application servers and load balancers in case of a disaster to bring
the application up in the alternate region. Vertically resize the database to meet the full production demands, and use Amazon Route 53 to
switch traffic to the alternate region.

C. Use a scaled-down version of the fully functional production environment in the alternate region that includes one instance of the web
server, one instance of the application server, and a replicated instance of the database server in standby mode. Place the web and the
application tiers in an Auto Scaling behind a load balancer, which can automatically scale when the load arrives to the application. Use
Amazon Route 53 to switch traffic to the alternate region.

D. Employ a multi-region solution with fully functional web, application, and database tiers in both regions with equivalent capacity. Activate
the primary database in one region only and the standby database in the other region. Use Amazon Route 53 to automatically switch traffic
from one region to another using health check routing policies.

Correct Answer: D

Community vote distribution


C (100%)

" # TK2019 Highly Voted $ 1 year, 1 month ago


This should be C. As RTO is in minutes (https://docs.aws.amazon.com/wellarchitected/latest/reliability-pillar/plan-for-disaster-recovery-dr.html)
Warm standby (RPO in seconds, RTO in minutes): Maintain a scaled-down version of a fully functional environment always running in the DR
Region. Business-critical systems are fully duplicated and are always on, but with a scaled down fleet. When the time comes for recovery, the
system is scaled up quickly to handle the production load.
upvoted 20 times

" # haah Highly Voted $ 1 year, 1 month ago


support C
upvoted 12 times

" # kangtamo Most Recent % 4 months, 1 week ago


Selected Answer: C
Agree with C.
upvoted 1 times

" # nickstudy7 9 months ago


Why Not C? Pilot Light
https://aws.amazon.com/blogs/architecture/disaster-recovery-dr-architecture-on-aws-part-iii-pilot-light-and-warm-standby/
upvoted 1 times

" # Jonfernz 4 weeks ago


All those Cloudformation stacks will take way longer than 5 minutes to deploy. Will not satisfy RTO.
upvoted 2 times

" # Smartphone 10 months ago


RTO is 5 minutes. Hence is D.
upvoted 1 times

" # Punitsolanki 9 months ago


It should be C.

Warm standby (RPO in seconds, RTO in minutes): Maintain a scaled-down but fully functional version of your workload always running in the
DR Region. Business-critical systems are fully duplicated and are always on, but with a scaled down fleet. When the time comes for recovery,


the system is scaled up quickly to handle the production load. The more scaled-up the Warm Standby is, the lower RTO and control plane
reliance will be. When scaled up to full scale this is known as a Hot Standby.
upvoted 2 times

" # cldy 11 months ago


C. Use a scaled-down version of the fully functional production environment in the alternate region that includes one instance of the web server,
one instance of the application server, and a replicated instance of the database server in standby mode. Place the web and the application tiers
in an Auto Scaling behind a load balancer, which can automatically scale when the load arrives to the application. Use Amazon Route 53 to
switch traffic to the alternate region.
upvoted 1 times

" # AzureDP900 11 months ago


I will go with C
upvoted 2 times

" # AWSum1 1 year ago


C - RTO is 5 min and its a scaled down environment. D would be good if it needed to be active active and cost was not a concern
upvoted 1 times

" # student22 1 year ago


D
C is more cost effective but it seems to need manual switching over which we can't guarantee to happen in 5 minutes. D is automatic.
upvoted 3 times

" # Cotter 1 year ago


Why? you focus cost, is there the question about cost effective?
upvoted 1 times

" # Viper57 1 year ago


The questions says "The business requires cost-efficient disaster recovery for the application with an RTO of 5 minutes and an RPO of 1
hour.". It's always good to read the question carefully or you will miss important details.
upvoted 3 times

" # student22 1 year ago


Good point. Changing my answers to C.
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with C
upvoted 1 times

" # jobe42 1 year ago


C. For D I miss the replication piece; it's just another standby database missing the data from the active node. The best fit here would be an Aurora Global Database.
upvoted 1 times

" # Kopa 1 year ago


Im for C, it fulfills the requirements and its more cost effective then D.
upvoted 1 times

" # Waiweng 1 year ago


support C
upvoted 3 times

" # blackgamer 1 year ago


Not very good design, but closest one is C.
upvoted 1 times

" # kiev 1 year ago


C and another repeat question. Honestly guys I wanted to ask a genuine question here. Does anyone resort to cramming some of the answers?
Because I do my best to analyse things and get most of them right but I have had to cram some of them and if I see those questions, I kind of
know the answers already.
upvoted 1 times

" # nitinz 1 year ago


I do 3 revisions, one first pass nothing makes sense, on second pass things kind of make sense. on 3 pass things are crystal clear but some
are still not making sense. those which are not making sense after 3rd pass, I copy paste those questions and memorize the keywords in
question and keywords in answer. hope it helps.
upvoted 3 times

" # sarah_t 1 year ago


No, as the questions on the exam may vary.
If I am confident in my answer, I check with the comments. If I'm right, I move on.
If I am not confident, I look for explanations, links and resources in the comments and try to figure out the exact reasons.


If I have no clue whatsoever, I go read the documentation on AWS and/or try it myself in my AWS account(s) before even looking at the
comments. Only when I think I know the answer, I check.

This is somewhat time-consuming but that way I really learn the stuff, not cram for the exam alone.
upvoted 3 times

" # Ebi 1 year ago


DR solution needs to be cost efficient, so the D is not the answer.
A,B does not work with 5 min RTO.
C is the best answer, although still not perfect, it doesn't mention about vertical resizing of DB instance after failover
upvoted 4 times

" # T14102020 1 year, 1 month ago


Correct is C. 5min RTO = scale-down
upvoted 2 times


Question #542 Topic 1

A company runs a memory-intensive analytics application using On-Demand Amazon EC2 C5 compute optimized instances. The application is used
continuously and application demand doubles during working hours. The application currently scales based on CPU usage. When scaling in
occurs, a lifecycle hook is used because the instance requires 4 minutes to clean the application state before terminating.
Because users reported poor performance during working hours, scheduled scaling actions were implemented so additional instances would be
added during working hours. The Solutions Architect has been asked to reduce the cost of the application.
Which solution is MOST cost-effective?

A. Use the existing launch configuration that uses C5 instances, and update the application AMI to include the Amazon CloudWatch agent.
Change the Auto Scaling policies to scale based on memory utilization. Use Reserved Instances for the number of instances required after
working hours, and use Spot Instances to cover the increased demand during working hours.

B. Update the existing launch configuration to use R5 instances, and update the application AMI to include SSM Agent. Change the Auto
Scaling policies to scale based on memory utilization. Use Reserved Instances for the number of instances required after working hours, and
use Spot Instances with on-Demand instances to cover the increased demand during working hours.

C. Use the existing launch configuration that uses C5 instances, and update the application AMI to include SSM Agent. Leave the Auto Scaling
policies to scale based on CPU utilization. Use scheduled Reserved Instances for the number of instances required after working hours, and
use Spot Instances to cover the increased demand during working hours.

D. Create a new launch configuration using R5 instances, and update the application AMI to include the Amazon CloudWatch agent. Change
the Auto Scaling policies to scale based on memory utilization. Use Reserved Instances for the number of instances required after working
hours, and use Standard Reserved Instances with On-Demand Instances to cover the increased demand during working hours.

Correct Answer: D
Reference:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitoring_ec2.html

Community vote distribution


D (100%)

" # jay1ram2 Highly Voted $ 1 year, 1 month ago


The "When scaling in occurs, a lifecycle hook is used because the instance requires 4 minutes to clean the application state before terminating."
requirement eliminates spot nodes. So, R5 with reserved and on-demand is a more suitable choice. Option D
upvoted 16 times

" # joanneli77 3 weeks, 5 days ago


This eliminates A, B and C... nicely done.
upvoted 1 times

" # Amitv2706 1 year ago


good point coz spot instance termination notice time is 2 mins,
upvoted 3 times
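For option D, once the CloudWatch agent is publishing memory metrics (by default under the CWAgent namespace as mem_used_percent), the Auto Scaling group can target-track that custom metric instead of CPU. A minimal boto3 sketch, assuming a hypothetical ASG name and that the agent is configured to append the AutoScalingGroupName dimension:

import boto3

autoscaling = boto3.client("autoscaling")

# Target-tracking policy on the CloudWatch agent's memory metric instead of CPU.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="analytics-asg",             # hypothetical ASG
    PolicyName="memory-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "CustomizedMetricSpecification": {
            "MetricName": "mem_used_percent",          # published by the CloudWatch agent
            "Namespace": "CWAgent",
            "Dimensions": [
                {"Name": "AutoScalingGroupName", "Value": "analytics-asg"}
            ],
            "Statistic": "Average",
        },
        "TargetValue": 70.0,                           # hypothetical target utilization
    },
)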

" # meenu2225 Highly Voted $ 1 year, 1 month ago


One thing: You cannot update the existing launch configuration i.e.update the AMI inside it. Which leaves only option D.
upvoted 6 times

" # AWSum1 1 year ago


Probably the best and most direct way of answering this question
upvoted 1 times

" # nsvijay04b1 Most Recent % 6 days, 9 hours ago


Selected Answer: D
only option that doesn't say edit existing LC/LT.
Spot not suitable for 4 min life cycle for cleanup
reserved instances to save cost.
It wud be nice to have in the option "Scheduled RIs" for day time load and "Standard RIs" otherwise all the time.

https://aws.amazon.com/ec2/pricing/reserved-instances/

Standard RIs: These provide the most significant discount (up to 72% off On-Demand) and are best suited for steady-state usage.

Scheduled RIs: These are available to launch within the time windows you reserve. This option allows you to match your capacity reservation to a


predictable recurring schedule that only requires a fraction of a day, a week, or a month.
upvoted 1 times

" # cldy 11 months ago


D. Create a new launch configuration using R5 instances, and update the application AMI to include the Amazon CloudWatch agent. Change the
Auto Scaling policies to scale based on memory utilization. Use Reserved Instances for the number of instances required after working hours,
and use Standard Reserved Instances with On-Demand Instances to cover the increased demand during working hours.
upvoted 1 times

" # AzureDP900 11 months ago


I'll go with D
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with D
upvoted 1 times

" # Waiweng 1 year ago


it's D
upvoted 3 times

" # blackgamer 1 year ago


The answer is D. SSM agent is not relevant here and the application is memory-intensive which R5 instances are memory optimised instances.
Need cloudwatch agent to install to monitor memory utilisation as it is custom metric.
upvoted 1 times

" # Ebi 1 year ago


Answer is D
upvoted 5 times

" # MichaelHuang 1 year ago


Why not B?
R5 is for memory intensive, so B or D; SSM for memory metric, so only B.
upvoted 1 times

" # 01037 1 year ago


"the instance requires 4 minutes to clean the application state before terminating" rules out Spot Instance
upvoted 2 times

" # ujizane 1 year ago


need CloudWatch agent for memory metrix not ssm agent
upvoted 3 times

" # T14102020 1 year ago


Correct is D. New launch configuration + RI + no Spot
upvoted 1 times

" # jackdryan 1 year ago


I'll go with D
upvoted 2 times

" # srinivasa 1 year, 1 month ago


Memory Utilization of EC2 is not a cloud watch metric.
C is the right answer
upvoted 1 times

" # alexmena1981 1 year, 1 month ago


You are wrong https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitoring_ec2.html
upvoted 2 times

" # CYL 1 year, 1 month ago


D. Spot instance not suitable for such use case due to terminating timing and working hours expectation. Use R5 EC2 instances for memory
intensive usecases.
upvoted 3 times

" # NikkyDicky 1 year, 1 month ago


D. Dup of Q144
upvoted 4 times

" # JAWS1600 1 year, 1 month ago


I agree to D.
For folks that think that launch config cannot be updated/changed. here is the link
https://docs.aws.amazon.com/autoscaling/ec2/userguide/change-launch-config.html


upvoted 4 times

" # VrushaliD 1 year, 1 month ago


reserved instances for increased load??
upvoted 1 times
" # FreeSwan 1 year, 1 month ago
B is correct.
upvoted 1 times


Question #543 Topic 1

A company has a data center that must be migrated to AWS as quickly as possible. The data center has a 500 Mbps AWS Direct Connect link and
a separate, fully available 1 Gbps ISP connection. A Solutions Architect must transfer 20 TB of data from the data center to an Amazon S3 bucket.
What is the FASTEST way to transfer the data?

A. Upload the data to the S3 bucket using the existing DX link.

B. Send the data to AWS using the AWS Import/Export service.

C. Upload the data using an 80 TB AWS Snowball device.

D. Upload the data to the S3 bucket using S3 Transfer Acceleration.

Correct Answer: B
Import/Export supports importing and exporting data into and out of Amazon S3 buckets. For significant data sets, AWS Import/Export is often
faster than Internet transfer and more cost effective than upgrading your connectivity.
Reference:
https://stackshare.io/stackups/aws-direct-connect-vs-aws-import-export

Community vote distribution


D (100%)

" # jay1ram2 Highly Voted $ 1 year, 1 month ago


Transfer Acceleration over a fully available 1 Gbps can theoretically move around 10TB/Day.

1 Gbps = (1024/8) MBPs = 128 MBps


(128 MBps * 3600 secs * 24 Hrs)/1024 = 10,800 GB/Day = 10TB/Day

Along with Transfer Acceleration, which provides a consistent experience, the entire data can be moved in 2 days. However AWS Import/Export
(now snowball) takes around a week to make the data available on AWS. The Answer is D.
upvoted 23 times
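The back-of-the-envelope math above is easy to sanity check, and option D itself only needs the S3 client pointed at the accelerate endpoint (Transfer Acceleration must already be enabled on the bucket). A minimal sketch with a hypothetical bucket and object key:

import boto3
from botocore.config import Config

# Rough transfer-time estimate for 20 TB over a fully utilized 1 Gbps link.
link_mbps = 1000.0
data_tb = 20
seconds = (data_tb * 1024 * 1024 * 8) / link_mbps     # TB -> MB -> megabits, link in Mb/s
print(f"~{seconds / 86400:.1f} days at line rate")     # roughly 2 days

# Upload through the S3 Transfer Acceleration endpoint.
s3 = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
s3.upload_file("archive.tar", "my-migration-bucket", "backups/archive.tar")  # hypothetical bucket/key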

" # tomosabc1 Most Recent % 1 month, 1 week ago


Selected Answer: D
The answer should be D. Agree with jay1ram2's analysis.
upvoted 1 times

" # aloha123 6 months, 1 week ago


Selected Answer: D
AWS Import/Export is now Snowball. It takes about 4-6 days to receive a Snowball. With option D we can complete in 2 days.
upvoted 3 times

" # HellGate 9 months, 1 week ago


My answer is C

The question doesn't give any location information, so it's not easy to estimate the transfer rate through S3 Transfer Acceleration. Checking the AWS documentation, I could find a pretty similar case migrating 25 TB of data in example 2 at the link below.

https://aws.amazon.com/snowball/pricing/

However, it's worth to memorize 1Gbps = 10TB/Day.


upvoted 2 times

" # tkanmani76 9 months, 3 weeks ago


D - Considering the data is 20 TB we can complete in 2 days using S3 Transfer acceleration - if we look at higher data say 50TB, then it would
make sense to use Snowball as it would take same time (say 5 days) but almost 60% cheaper than S3 TA.
upvoted 2 times

" # AzureDP900 11 months ago


I'll go with D
upvoted 1 times

" # nodogoshi 1 year ago


A. S3 Transfer Acceleration do nothing in this situation. simply upload with direct connect.
upvoted 2 times


" # WhyIronMan 1 year ago


I'll go with D
upvoted 2 times

" # Waiweng 1 year ago


it's D
upvoted 3 times

" # blackgamer 1 year ago


D seems to be the most suitable. But very ambiguous answers still.
upvoted 2 times

" # Sun`yhome 1 year ago


Nooo D!
Please never use business internet to transfer huge data with full bandwidth. Your boss even get delay to open Google.
A is my choice. it doesn't need logistic and transport time for arrival, copy, shipping, etc.
C seems old since AWS started to offer Snowball mobile edge
https://aws.amazon.com/snowball/faqs/
upvoted 2 times

" # sarah_t 1 year ago


The question says "a separate, fully available 1 Gbps ISP connection". And asks for the FASTEST solution.

It may not be ideal in a real-world setting, but it IS the fastest.


upvoted 2 times

" # WhyIronMan 1 year ago


"a separate, fully available 1 Gbps ISP connection".
"FASTEST solution"
I'll go with D
upvoted 1 times

" # kiev 1 year ago


Another repeat question and D is the answer
upvoted 1 times

" # Ebi 1 year ago


Using 1Gbps internet speed you can transfer around 10TB of data per day much less than Snowball
D is my choice
upvoted 3 times

" # T14102020 1 year ago


Correct is D. S3 Transfer Acceleration
upvoted 3 times

" # jackdryan 1 year ago


I'll go with D
upvoted 4 times

" # ipindado2020 1 year, 1 month ago


D for sure...
Public upload should be fastest that DX
upvoted 1 times

" # NikkyDicky 1 year, 1 month ago


D. Dup of Q145
upvoted 3 times


Question #544 Topic 1

A company wants to host its website on AWS using serverless architecture design patterns for global customers. The company has outlined its
requirements as follows:
✑ The website should be responsive.
✑ The website should offer minimal latency.
✑ The website should be highly available.
✑ Users should be able to authenticate through social identity providers such as Google, Facebook, and Amazon.
✑ There should be baseline DDoS protections for spikes in traffic.
How can the design requirements be met?

A. Use Amazon CloudFront with Amazon ECS for hosting the website. Use AWS Secrets Manager to provide user management and
authentication functions. Use ECS Docker containers to build an API.

B. Use Amazon Route 53 latency routing with an Application Load Balancer and AWS Fargate in different regions for hosting the website. Use
Amazon Cognito to provide user management and authentication functions. Use Amazon EKS containers to build an API.

C. Use Amazon CloudFront with Amazon S3 for hosting static web resources. Use Amazon Cognito to provide user management and
authentication functions. Use Amazon API Gateway with AWS Lambda to build an API.

D. Use AWS Direct Connect with Amazon CloudFront and Amazon S3 for hosting static web resources. Use Amazon Cognito to provide user
management and authentication functions. Use AWS Lambda to build an API.

Correct Answer: C

" # daviens Highly Voted $ 1 year ago


C. The question mentions protection against DDoS attack. You can setup WAF on API gateway but you cannot set WAF on EKS, ECS nor
Lambda
upvoted 8 times

" # TechX Most Recent % 4 months, 1 week ago


Why not B, I see B meet the requirement...
upvoted 1 times

" # AzureDP900 11 months ago


I'll go with C
upvoted 1 times

" # vbal 11 months, 1 week ago


the ECS control plane services (those services that deliver ECS management capabilities to our customers) do not take service dependencies
outside of the Region in which we are operating to avoid multi-region impact - Not sure if that means ECS Custer is Regional?
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with C

That's why you'll use Amazon API Gateway with AWS Lambda to build an API.
And recall that:
A company wants to host its website on AWS using serverless architecture design patterns

SAM is not compatible with EKS but it is with Lambda and API Gateway
upvoted 1 times

" # digimaniac 1 year ago


B, what does a website is responsive mean? It is called Responsive UI framework. This needs server side scripting, which means you can't use
S3 for hosting.
upvoted 2 times

" # WhyIronMan 1 year ago


That's why you'll use Amazon API Gateway with AWS Lambda to build an API.
And recall that:
A company wants to host its website on AWS using serverless architecture design patterns
SAM is not compatible with EKS but it is with Lambda and API Gateway
upvoted 2 times

" # Waiweng 1 year ago


it's C
upvoted 2 times
" # Ebi 1 year, 1 month ago
C is the answer
upvoted 4 times

" # T14102020 1 year, 1 month ago


Correct is C. CloudFront + S3 + Cognito.
upvoted 1 times

" # jackdryan 1 year, 1 month ago


I'll go with C
upvoted 3 times

" # CYL 1 year, 1 month ago


C. These combination of services are stateless and highly scaleable.
upvoted 1 times

" # NikkyDicky 1 year, 1 month ago


C. Dup of Q146
upvoted 2 times

" # meenu2225 1 year, 1 month ago


Although I like option B, it has both Fargate and EKS, so this option needs a bit more explanation. Option D is a bad design (Direct Connect, seriously?). Option A is missing a Cognito/user authentication method. Which leaves only option C.
upvoted 2 times

" # Bouji1982 1 year, 1 month ago


D is the answer, Npo need for API-GW
upvoted 1 times

" # fw 1 year, 1 month ago


Answer is C
upvoted 3 times

" # LunchTime 1 year, 1 month ago


This is a replication of Question #146Topic 2
upvoted 3 times

" # nil3112 1 year ago


Hi
is that means this question#543 is not from topic2 ?
upvoted 1 times


Question #545 Topic 1

A company is currently using AWS CodeCommit for its source control and AWS CodePipeline for continuous integration. The pipeline has a build
stage for building the artifacts, which is then staged in an Amazon S3 bucket.
The company has identified various improvement opportunities in the existing process, and a Solutions Architect has been given the following
requirements:
✑ Create a new pipeline to support feature development
✑ Support feature development without impacting production applications
✑ Incorporate continuous testing with unit tests
✑ Isolate development and production artifacts
✑ Support the capability to merge tested code into production code.
How should the Solutions Architect achieve these requirements?

A. Trigger a separate pipeline from CodeCommit feature branches. Use AWS CodeBuild for running unit tests. Use CodeBuild to stage the
artifacts within an S3 bucket in a separate testing account.

B. Trigger a separate pipeline from CodeCommit feature branches. Use AWS Lambda for running unit tests. Use AWS CodeDeploy to stage the
artifacts within an S3 bucket in a separate testing account.

C. Trigger a separate pipeline from CodeCommit tags. Use Jenkins for running unit tests. Create a stage in the pipeline with S3 as the target
for staging the artifacts with an S3 bucket in a separate testing account.

D. Create a separate CodeCommit repository for feature development and use it to trigger the pipeline. Use AWS Lambda for running unit
tests. Use AWS CodeBuild to stage the artifacts within different S3 buckets in the same production account.

Correct Answer: A
Reference:
https://docs.aws.amazon.com/codebuild/latest/userguide/how-to-create-pipeline.html

" # Ebi Highly Voted $ 1 year ago


I will go with A
upvoted 6 times

" # challenger1 Most Recent % 11 months ago


My Answer: A
Use codebuild the whole way
upvoted 1 times

" # AzureDP900 11 months ago


I'll go with A
upvoted 1 times

" # tonikus 1 year ago


A is the answer since all others are nonsense (considering this is AWS exam )
upvoted 3 times

" # WhyIronMan 1 year ago


I'll go with A
upvoted 2 times

" # Waiweng 1 year ago


it's A
upvoted 4 times

" # kiev 1 year ago


A and it is another repeat
upvoted 1 times

" # nil3112 1 year ago


Hi Liev
Could you please help in identifying topic2 questions. i was told that after page40, topic2 questions are there but as per your comment it
seems this 545 belongs to topic1
upvoted 2 times


" # tvs 1 year ago


Did you found from where topic 2 starts
upvoted 3 times
" # T14102020 1 year ago
Correct is A. CodeBuild for unit tests
upvoted 2 times

" # jackdryan 1 year ago


I'll go with A
upvoted 3 times

" # CYL 1 year ago


A. Codebuild to verify against a feature branch before merging. Store artifacts in S3.
upvoted 2 times

" # NikkyDicky 1 year, 1 month ago


A. Dup of Q147
upvoted 2 times

" # seamas 1 year, 1 month ago


A, AWS CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces software packages
that are ready to deploy.
upvoted 4 times

" # Mkumar 1 year, 1 month ago


Answer is A
upvoted 2 times

" # LunchTime 1 year, 1 month ago


This is a replication of Question #147Topic 2
upvoted 1 times


Question #546 Topic 1

A company runs an ordering system on AWS using Amazon SQS and AWS Lambda, with each order received as a JSON message. Recently the
company had a marketing event that led to a tenfold increase in orders. With this increase, the following undesired behaviors started in the
ordering system:
✑ Lambda failures while processing orders lead to queue backlogs.
✑ The same orders have been processed multiple times.
A Solutions Architect has been asked to solve the existing issues with the ordering system and add the following resiliency features:
✑ Retain problematic orders for analysis.
✑ Send notification if errors go beyond a threshold value.
How should the Solutions Architect meet these requirements?

A. Receive multiple messages with each Lambda invocation, add error handling to message processing code and delete messages after
processing, increase the visibility timeout for the messages, create a dead letter queue for messages that could not be processed, create an
Amazon CloudWatch alarm on Lambda errors for notification.

B. Receive single messages with each Lambda invocation, put additional Lambda workers to poll the queue, delete messages after
processing, increase the message timer for the messages, use Amazon CloudWatch Logs for messages that could not be processed, create a
CloudWatch alarm on Lambda errors for noti+cation.

C. Receive multiple messages with each Lambda invocation, use long polling when receiving the messages, log the errors from the message
processing code using Amazon CloudWatch Logs, create a dead letter queue with AWS Lambda to capture failed invocations, create
CloudWatch events on Lambda errors for notification.

D. Receive multiple messages with each Lambda invocation, add error handling to message processing code and delete messages after
processing, increase the visibility timeout for the messages, create a delay queue for messages that could not be processed, create an
Amazon CloudWatch metric on Lambda errors for notification.

Correct Answer: D

Community vote distribution


A (100%)

" # jay1ram2 Highly Voted $ 1 year, 1 month ago


The correct answer is A.

B - Single message/lambda will increase concurrency requirements and increased failure rates. There is no "Lambda workers" just increased
concurrency limit.
C - There is no long polling in Lambda
D is incorrect, the delay queue is used to throttle incoming messages and not handle messages that could not be processed.
upvoted 27 times

" # RedKane 1 year ago


https://docs.aws.amazon.com/lambda/latest/dg/with-sqs.html "For standard queues, Lambda uses long polling to poll a queue until it
becomes active."
upvoted 2 times

" # Juks 11 months ago


Long polling is not needed as the incoming message count is high but increasing the visibility timeout in option A will help with the
processing of the message.
upvoted 1 times
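For option A, the dead letter queue and the larger visibility timeout are both plain queue attributes, and the notification is a CloudWatch alarm on the Lambda Errors metric. A minimal boto3 sketch with hypothetical queue, function, and SNS topic values:

import json
import boto3

sqs = boto3.client("sqs")

orders_queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/orders"   # hypothetical
dlq_arn = "arn:aws:sqs:us-east-1:123456789012:orders-dlq"                      # hypothetical

# Messages that fail processing 5 times move to the DLQ for later analysis;
# a visibility timeout well above the Lambda timeout avoids duplicate processing.
sqs.set_queue_attributes(
    QueueUrl=orders_queue_url,
    Attributes={
        "RedrivePolicy": json.dumps({
            "deadLetterTargetArn": dlq_arn,
            "maxReceiveCount": "5",
        }),
        "VisibilityTimeout": "360",   # seconds, ~6x a 60-second function timeout
    },
)

# Alarm for notification when errors cross a threshold.
cloudwatch = boto3.client("cloudwatch")
cloudwatch.put_metric_alarm(
    AlarmName="orders-lambda-errors",
    Namespace="AWS/Lambda",
    MetricName="Errors",
    Dimensions=[{"Name": "FunctionName", "Value": "process-orders"}],  # hypothetical function
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=10,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],    # hypothetical SNS topic
)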

" # santanu77 Highly Voted $ 1 year, 1 month ago


Delay will increase unprocessed msgs on the queue. Exactly opposite of what is needed. We rather need a dead letter queue for messages that
could not be handled. Option C.

D is wrong because of Delay queue


B is wrong Single order per message will increase more messages in the queue
A and C are possible choices
A is better as increasing the visibility timeout decreases the possibility of duplicate message processing.
upvoted 6 times

" # 01037 1 year ago


C is wrong, because no mention of deleting message.
upvoted 2 times


" # CloudHandsOn Most Recent % 3 months, 2 weeks ago


A.
DLQ and CW Alarms is all you need :)
upvoted 1 times

" # cldy 11 months ago


A. Receive multiple messages with each Lambda invocation, add error handling to message processing code and delete messages after
processing, increase the visibility timeout for the messages, create a dead letter queue for messages that could not be processed, create an
Amazon CloudWatch alarm on Lambda errors for notification.
upvoted 1 times

" # AzureDP900 11 months ago


Correct Answer is A. Dead Letter Queue, increased visibility timeout
upvoted 1 times

" # acloudguru 11 months, 2 weeks ago


Selected Answer: A
A is best option due to mention of dead-letter queue.
upvoted 2 times

" # WhyIronMan 1 year ago


I'll go with A
upvoted 2 times

" # sydon 1 year ago


C
long polling(SQS) and dead-letter queue
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-short-and-long-polling.html
upvoted 1 times

" # Waiweng 1 year ago


it's A
upvoted 3 times

" # blackgamer 1 year ago


Yes, A is the answer. Dead Letter Queue for failed messages.
upvoted 1 times

" # Amitv2706 1 year ago


A is best option due to mention of dead-letter queue.
upvoted 2 times

" # KnightVictor 1 year ago


i'll go with A
upvoted 1 times

" # Ebi 1 year ago


A is the best answer
upvoted 4 times

" # Bulti 1 year ago


Correct Answer is A. Dead Letter Queue, increased visibility timeout, multiple messages all satisfy the requriements.
upvoted 3 times

" # T14102020 1 year ago


Correct is A. dead letter queue NOT delay letter queue
upvoted 3 times

" # jackdryan 1 year ago


I'll go with A
upvoted 2 times

" # CYL 1 year ago


A. Dead letter queue for error messages.
upvoted 1 times


Question #547 Topic 1

An organization has recently grown through acquisitions. Two of the purchased companies use the same IP CIDR range. There is a new short-term
requirement to allow AnyCompany A (VPC-A) to communicate with a server that has the IP address 10.0.0.77 in AnyCompany B (VPC-B).
AnyCompany A must also communicate with all resources in AnyCompany C (VPC-C). The Network team has created the VPC peer links, but it is
having issues with communications between VPC-A and VPC-B. After an investigation, the team believes that the routing tables in the VPCs are
incorrect.
What configuration will allow AnyCompany A to communicate with AnyCompany C in addition to the database in AnyCompany B?

A. On VPC-A, create a static route for the VPC-B CIDR range (10.0.0.0/24) across VPC peer pcx-AB. Create a static route of 10.0.0.0/16 across
VPC peer pcx-AC. On VPC-B, create a static route for VPC-A CIDR (172.16.0.0/24) on peer pcx-AB. On VPC-C, create a static route for VPC-A
CIDR (172.16.0.0/24) across peer pcx-AC.

B. On VPC-A, enable dynamic route propagation on pcx-AB and pcx-AC. On VPC-B, enable dynamic route propagation and use security groups
to allow only the IP address 10.0.0.77/32 on VPC peer pcx-AB. On VPC-C, enable dynamic route propagation with VPC-A on peer pcx-AC.

C. On VPC-A, create network access control lists that block the IP address 10.0.0.77/32 on VPC peer pcx-AC. On VPC-A, create a static route
for VPC-B CIDR (10.0.0.0/24) on pcx-AB and a static route for VPC-C CIDR (10.0.0.0/24) on pcx-AC. On VPC-B, create a static route for VPC-A
CIDR (172.16.0.0/24) on peer pcx-AB. On VPC-C, create a static route for VPC-A CIDR (172.16.0.0/24) across peer pcx-AC.

D. On VPC-A, create a static route for the VPC-B (10.0.0.77/32) database across VPC peer pcx-AB. Create a static route for the VPC-C CIDR on
VPC peer pcx-AC. On VPC-B, create a static route for VPC-A CIDR (172.16.0.0/24) on peer pcx-AB. On VPC-C, create a static route for VPC-A
CIDR (172.16.0.0/24) across peer pcx-AC.

Correct Answer: C

Community vote distribution


D (100%)

" # LCC92 Highly Voted $ 1 year ago


D will work; the /32 will be prioritized (routing prefers the most specific, i.e. longest, prefix).
However, it will not be perfect: VPC-A won't be able to communicate with 10.0.0.77 in VPC-C, because traffic for that destination IP will always be routed to VPC-B.
But the question calls this a "short-term requirement", so the solution is acceptable.
upvoted 18 times

" # awsexamprep47 Highly Voted $ 1 year, 1 month ago


D is correct /32 longest prefix path
upvoted 11 times
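
To make the longest-prefix-match behaviour behind option D concrete, here is a boto3 sketch of the two static routes added to VPC-A's route table (the route table and peering connection IDs are placeholders):

    import boto3

    ec2 = boto3.client("ec2")

    VPC_A_ROUTE_TABLE = "rtb-0aaaaaaaaaaaaaaaa"   # placeholder
    PCX_AB = "pcx-0bbbbbbbbbbbbbbbb"              # placeholder: VPC-A <-> VPC-B peering
    PCX_AC = "pcx-0cccccccccccccccc"              # placeholder: VPC-A <-> VPC-C peering

    # Host route for the single database in VPC-B; the /32 is the most specific
    # prefix, so it wins over the overlapping /24 below.
    ec2.create_route(RouteTableId=VPC_A_ROUTE_TABLE,
                     DestinationCidrBlock="10.0.0.77/32",
                     VpcPeeringConnectionId=PCX_AB)

    # Everything else in the shared 10.0.0.0/24 range is routed to VPC-C.
    ec2.create_route(RouteTableId=VPC_A_ROUTE_TABLE,
                     DestinationCidrBlock="10.0.0.0/24",
                     VpcPeeringConnectionId=PCX_AC)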

" # kangtamo Most Recent % 5 months ago


Selected Answer: D


Go with D
upvoted 1 times
" # AzureDP900 11 months ago
D works fine
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with D
It's the only one that makes sense from the networking perspective
upvoted 5 times

" # Waiweng 1 year, 1 month ago


it's D
upvoted 3 times

" # CarisB 1 year, 1 month ago


Answer is D
upvoted 1 times

" # nitinz 1 year, 1 month ago


D is okay
upvoted 1 times


Question #548 Topic 1

A company is designing a new highly available web application on AWS. The application requires consistent and reliable connectivity from the
application servers in AWS to a backend REST API hosted in the company's on-premises environment. The backend connection between AWS and
on-premises will be routed over an AWS Direct Connect connection through a private virtual interface. Amazon Route 53 will be used to manage
private DNS records for the application to resolve the IP address on the backend REST API.
Which design would provide a reliable connection to the backend API?

A. Implement at least two backend endpoints for the backend REST API, and use Route 53 health checks to monitor the availability of each
backend endpoint and perform DNS-level failover.

B. Install a second Direct Connect connection from a different network carrier and attach it to the same virtual private gateway as the first
Direct Connect connection.

C. Install a second cross connect for the same Direct Connect connection from the same network carrier, and join both connections to the
same link aggregation group (LAG) on the same private virtual interface.

D. Create an IPSec VPN connection routed over the public internet from the on-premises data center to AWS and attach it to the same virtual
private gateway as the Direct Connect connection.

Correct Answer: B

Community vote distribution


B (60%) D (40%)

" # 01037 Highly Voted $ 1 year, 1 month ago


D.
Why almost everyone chose B.
B is clearly wrong. Check below.
"You cannot associate a virtual private gateway with more than one Direct Connect gateway"
in
https://docs.aws.amazon.com/directconnect/latest/UserGuide/virtualgateways.html
upvoted 23 times

" # LCC92 1 year ago


we can associate a VPG with multiple DX.
https://docs.aws.amazon.com/whitepapers/latest/aws-vpc-connectivity-options/aws-direct-connect.html
"Figure 6 - Redundant AWS Direct Connect"
The correct answer is B.
upvoted 15 times

" # heany 1 month ago


correct. Direct connect gateway is not required here. two Direct connects can be connected to VIFs of VPG
upvoted 2 times

" # GeniusMikeLiu 10 months, 1 week ago


It should be D; the question cares about security. A VPN with IPsec is secure, while Direct Connect is private but not secure (it is not encrypted).
upvoted 4 times

" # Byrney 13 hours, 50 minutes ago


"The application requires consistent and reliable connectivity" not secure.
upvoted 1 times

" # LCC92 1 year ago


"You cannot associate a virtual private gateway with more than one Direct Connect gateway"
Direct Connect Gateway is a different entity, we dont use it in this question.
upvoted 9 times

" # jay1ram2 Highly Voted $ 1 year, 1 month ago


My answer is B - 2 DX connection to on-prem provides more reliable connectivity between AWS and data center
https://aws.amazon.com/answers/networking/aws-multiple-data-center-ha-network-connectivity/

A - The ask is, Which design would provide a "reliable connection" to the backend API? not to re-design the backend implementation for High
Availability.
C - 2 DX connections from the same provider create a single point of failure
D - VPN over the public internet is generally less reliable than a dedicated DX connection.
upvoted 22 times


" # joanneli77 Most Recent % 3 weeks, 4 days ago


A VPN+DX is common because it is cheap, but it is not as reliable as two DX (add CGW hardware and different routing paths and it's messy). Two
DX can be connected to one VGW.
upvoted 1 times

" # kadev 2 months, 2 weeks ago


Maybe the question is not worded exactly.
If it asks for "a secure connection", D is probably the closest match. But why would we need a VPN over the internet while we have Direct Connect lol
upvoted 1 times

" # Harithareddynn 2 months, 3 weeks ago


Selected Answer: D
The question is about security, hence VPN connection is needed and it is D
upvoted 1 times

" # epomatti 2 months ago


No, it says "reliable", not secure. Not the same thing.
upvoted 1 times

" # hilft 3 months, 1 week ago


It's D.
upvoted 1 times

" # KiraguJohn 3 months, 2 weeks ago


Which architecture would be most likely to establish a secure connection to the backend API?
- Did i miss anything here? i thought the question is about security
upvoted 1 times

" # nexus2020 3 months ago


Not sure what you are trying to say here.
Direct Connect is private, and it is secure in the sense that no one else can see what is going on there.
A VPN is secure because it is encrypted, but a VPN is used over the public network precisely because the public network is not secure.

A VPN is not more secure than Direct Connect.

So IMO, Direct Connect is fine.


upvoted 1 times

" # Student1950 3 months, 3 weeks ago


here is the correct link
https://docs.aws.amazon.com/whitepapers/latest/aws-vpc-connectivity-options/aws-direct-connect-vpn.html
upvoted 2 times

" # Student1950 3 months, 3 weeks ago


I believe Answer should be D but its not worded correctly. Basically we can enable IPSec VPN on existing DX connection using Public VIF of DX
to establish secure communication between AWS and On-Prem as the link below
https://docs.aws.amazon.com/whitepapers/latest/aws-vpc-connectivity-options/aws-direct-connect-aws-transit-gateway-vpn.html
upvoted 1 times

" # JonnyB1001 3 months, 3 weeks ago


Not A: thats HA.
Not B: You cannot associate a virtual private gateway with more than one Direct Connect gateway and you cannot attach a private virtual
interface to more than one Direct Connect gateway.
C: same-same, so not dependable.
D: provides secure and diverse route. So D is the answer
upvoted 1 times

" # hilft 4 months ago


I would go for D
upvoted 1 times

" # KiraguJohn 4 months, 1 week ago


D: For a secure connection
B: For a redundancy
upvoted 1 times

" # bobsmith2000 5 months, 2 weeks ago


Selected Answer: B
"content and dependable" connection.
It's definitively B.
For encryption in transit via Direct Connect in case of REST the one could use HTTPS
upvoted 2 times


" # Alvindo 6 months ago


Answer is D.
B is for redundancy; the question asked for SECURE, which a VPN offers since it encrypts the network traffic.
upvoted 1 times

" # Netaji 7 months ago


Selected Answer: B
When it's DX, it's comparatively secure by default because it's not the internet. The question also states as its first condition "The application needs constant
and dependable communication between its AWS application servers and a backend REST API housed on-premises", so it's B.

People are bringing up the Direct Connect gateway; I agree it is not mentioned in the answers, so it is not in question, and a normal DX setup can attach two connections to one VPG,
as below
https://aws.amazon.com/directconnect/resiliency-recommendation/?nc=sn&loc=4&dn=2

if anyone want to see a direct connect gateway, please see below URL

https://www.stax.io/changelog/2020-10-06-new-direct-connect-functionality-for-stax-networks/
upvoted 2 times

" # Netaji 7 months ago


when its DX, it's secure by default because it's not internet comparatively - question also says as first condition " The application needs constant
and dependable communication between its AWS application servers and a backend REST API housed on-premises." so its B
upvoted 1 times

" # czarno 7 months, 2 weeks ago


The correct answer is D.
The key thing to focus on is "secure connection"... also HA.
The setup in the question is NOT secure as it has only one DX connection with a PRIVATE VIF setup.
We can only setup encryption via the Public VIF as VPN connects to the public IP of VGW/TGW.
With a DX private VIF + VPN over a public internet we get at least one secure connection (VPN) + HA.
Option B is out of the question as:
1. Adding another DX connection with a private VIF is NOT secure
2. We still have a single point of failure being the single Customer Gateway...

So the answer is definitely D


upvoted 1 times


Question #549 Topic 1

A retail company is running an application that stores invoice files in an Amazon S3 bucket and metadata about the files in an Amazon DynamoDB
table. The application software runs in both us-east-1 and eu-west-1. The S3 bucket and DynamoDB table are in us-east-1. The company wants to
protect itself from data corruption and loss of connectivity to either Region.
Which option meets these requirements?

A. Create a DynamoDB global table to replicate data between us-east-1 and eu-west-1. Enable continuous backup on the DynamoDB table in
us-east-1. Enable versioning on the S3 bucket.

B. Create an AWS Lambda function triggered by Amazon CloudWatch Events to make regular backups of the DynamoDB table. Set up S3 cross-
region replication from us-east-1 to eu-west-1. Set up MFA delete on the S3 bucket in us-east-1.

C. Create a DynamoDB global table to replicate data between us-east-1 and eu-west-1. Enable versioning on the S3 bucket. Implement strict
ACLs on the S3 bucket.

D. Create a DynamoDB global table to replicate data between us-east-1 and eu-west-1. Enable continuous backup on the DynamoDB table in
us-east-1. Set up S3 cross-region replication from us-east-1 to eu-west-1.

Correct Answer: D

" # Ebi Highly Voted $ 1 year ago


I go with D
upvoted 10 times

" # jyrajan69 Most Recent % 8 months, 3 weeks ago


Must consider the fact that they want to safeguard the data, and only B addresses that by adding MFA to prevent accidental deletes. So I will go with B
upvoted 1 times

" # [Removed] 9 months, 4 weeks ago


Most definitely D. CRR requires versioning to be turned on and the requirement is connectivity to any region. D checks all the boxes
upvoted 3 times

" # cldy 11 months ago


D. Create a DynamoDB global table to replicate data between us-east-1 and eu-west-1. Enable continuous backup on the DynamoDB table in
us-east-1. Set up S3 cross-region replication from us-east-1 to eu-west-1.
upvoted 1 times

" # AzureDP900 11 months ago


D is right
upvoted 1 times

" # backfringe 11 months, 1 week ago


I go with D
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with D
upvoted 1 times

" # Kukkuji 1 year ago


Correct answer is D. To enable s3 replication, Both source and destination buckets must have versioning enabled.
upvoted 2 times
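
For reference, a minimal boto3 sketch of the versioning plus cross-Region replication setup that option D describes (bucket names and the replication role ARN are placeholders, and the destination bucket in eu-west-1 must have versioning enabled as well):

    import boto3

    s3 = boto3.client("s3")

    SOURCE_BUCKET = "invoice-files-us-east-1"                   # placeholder
    DEST_BUCKET_ARN = "arn:aws:s3:::invoice-files-eu-west-1"    # placeholder
    ROLE_ARN = "arn:aws:iam::111122223333:role/s3-crr-role"     # placeholder

    # Replication requires versioning on both source and destination buckets.
    s3.put_bucket_versioning(
        Bucket=SOURCE_BUCKET,
        VersioningConfiguration={"Status": "Enabled"},
    )

    s3.put_bucket_replication(
        Bucket=SOURCE_BUCKET,
        ReplicationConfiguration={
            "Role": ROLE_ARN,
            "Rules": [{
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {},
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": DEST_BUCKET_ARN},
            }],
        },
    )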

" # ss160700 1 year ago


A - it is about backup and restore to prevent data loss / corruption
upvoted 1 times

" # Waiweng 1 year ago


it's D
upvoted 4 times

" # wasabidev 1 year ago


A. Amazon Route 53 supports DNSSEC for domain registration as well as DNSSEC signing
upvoted 1 times


" # Japs 1 year ago


Its D. You need to replicate data across regions with S3 cross region replication
upvoted 1 times

" # kiev 1 year ago


D is the answer and it is another repeat
upvoted 1 times

" # Japs 1 year ago


Question... did you start from topic 2? i.e. question 398?
upvoted 1 times

" # T14102020 1 year ago


Correct is D. DynamoDB global table + S3 cross-region replication
upvoted 4 times

" # jackdryan 1 year, 1 month ago


I'll go with D
upvoted 3 times

" # CYL 1 year, 1 month ago


D. Cross region replication for S3 and global dynamo tables.
upvoted 2 times

" # directconnect 1 year, 1 month ago


Answer is D
upvoted 3 times

" # NikkyDicky 1 year, 1 month ago


D. Dup of Q150
upvoted 2 times


Question #550 Topic 1

A company wants to launch an online shopping website in multiple countries and must ensure that customers are protected against potential
man-in-the-middle attacks.
Which architecture will provide the MOST secure site access?

A. Use Amazon Route 53 for domain registration and DNS services. Enable DNSSEC for all Route 53 requests. Use AWS Certificate Manager
(ACM) to register TLS/SSL certificates for the shopping website, and use Application Load Balancers configured with those TLS/SSL
certificates for the site. Use the Server Name Identification extension in all client requests to the site.

B. Register 2048-bit encryption keys from a third-party certificate service. Use a third-party DNS provider that uses the customer managed
keys for DNSSec. Upload the keys to ACM, and use ACM to automatically deploy the certificates for secure web services to an EC2 front-end
web server fleet by using NGINX. Use the Server Name Identification extension in all client requests to the site.

C. Use Route 53 for domain registration. Register 2048-bit encryption keys from a third-party certificate service. Use a third-party DNS service
that supports DNSSEC for DNS requests that use the customer managed keys. Import the customer managed keys to ACM to deploy the
certificates to Classic Load Balancers configured with those TLS/SSL certificates for the site. Use the Server Name Identification extension in
all client requests to the site.

D. Use Route 53 for domain registration, and host the company DNS root servers on Amazon EC2 instances running Bind. Enable DNSSEC for
DNS requests. Use ACM to register TLS/SSL certificates for the shopping website, and use Application Load Balancers configured with those
TLS/SSL certificates for the site. Use the Server Name Identification extension in all client requests to the site.

Correct Answer: B

Community vote distribution


A (100%)

" # wasabidev Highly Voted $ 1 year ago


A, now Amazon Route 53 supports DNSSEC for domain registration as well as DNSSEC signing
upvoted 19 times

" # Mkumar Highly Voted $ 1 year ago


Answer: D
upvoted 8 times

" # hilft 3 months, 1 week ago


A better
upvoted 1 times

" # ravisar Most Recent % 5 months ago


The answer is A - https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/domain-configure-dnssec.html "You can protect your domain
from this type of attack, known as DNS spoofing or a man-in-the-middle attack, by configuring Domain Name System Security Extensions
(DNSSEC), a protocol for securing DNS traffic"
upvoted 2 times

" # bobsmith2000 5 months, 2 weeks ago


Selected Answer: A
Seems to be A.
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-configuring-dnssec.html
upvoted 2 times

" # kyo 9 months ago


Selected Answer: A
Answer is A. Bind is not good. CLB is wrong. An ACM SSL certificate cannot be used directly on an EC2 instance.
upvoted 2 times

" # Ni_yot 9 months, 4 weeks ago


Agree its A. See link https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/domain-configure-dnssec.html
upvoted 1 times

" # cldy 10 months, 1 week ago


A correct.
upvoted 1 times


" # cldy 11 months ago


A. Use Amazon Route 53 for domain registration and DNS services. Enable DNSSEC for all Route 53 requests. Use AWS Certificate Manager
(ACM) to register TLS/SSL certificates for the shopping website, and use Application Load Balancers configured with those TLS/SSL certificates
for the site. Use the Server Name Identification extension in all client requests to the site.
upvoted 1 times

" # denccc 1 year ago


It's A
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with D
upvoted 3 times

" # Waiweng 1 year ago


it's A
upvoted 4 times

" # blackgamer 1 year ago


A for me
upvoted 1 times

" # BloodCube 1 year ago


After June 2021, the answer is A
Before that, D is correct.
upvoted 3 times

" # Amitv2706 1 year ago


A, as now AWS supports DNSSEC on its own.
upvoted 3 times

" # kalyan_krishna742020 1 year ago


Answer is D, since R53 only started supporting DNSSEC in December 2020, which is not over 6 months ago yet.
upvoted 3 times

" # Ebi 1 year ago


With very recent announcement from AWS answer should be A:
https://aws.amazon.com/about-aws/whats-new/2020/12/announcing-amazon-route-53-support-dnssec/
upvoted 7 times
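
For anyone curious what enabling DNSSEC signing on a Route 53 hosted zone actually involves, a boto3 sketch is below (the hosted zone ID and KMS key ARN are placeholders; the KMS key must be an asymmetric customer managed key created in us-east-1, and the resulting DS record still has to be published through the domain registration):

    import boto3

    r53 = boto3.client("route53")

    HOSTED_ZONE_ID = "Z0123456789EXAMPLE"                            # placeholder
    KMS_KEY_ARN = "arn:aws:kms:us-east-1:111122223333:key/example"   # placeholder

    # A key-signing key backed by the KMS key must exist before signing is enabled.
    r53.create_key_signing_key(
        CallerReference="ksk-2021-01",
        HostedZoneId=HOSTED_ZONE_ID,
        KeyManagementServiceArn=KMS_KEY_ARN,
        Name="shop-ksk",
        Status="ACTIVE",
    )

    # Turn on DNSSEC signing for the hosted zone.
    r53.enable_hosted_zone_dnssec(HostedZoneId=HOSTED_ZONE_ID)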

" # 01037 1 year ago


A.
Old question?
According to
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/domain-configure-dnssec.html
Amazon Route 53 supports DNSSEC for domain registration as well as DNSSEC signing
upvoted 3 times


Question #551 Topic 1

A company is creating an account strategy so that they can begin using AWS. The Security team will provide each team with the permissions they
need to follow the principle of least privileged access. Teams would like to keep their resources isolated from other groups, and the Finance team
would like each team's resource usage separated for billing purposes.
Which account creation process meets these requirements and allows for changes?

A. Create a new AWS Organizations account. Create groups in Active Directory and assign them to roles in AWS to grant federated access.
Require each team to tag their resources, and separate bills based on tags. Control access to resources through IAM granting the minimally
required privilege.

B. Create individual accounts for each team. Assign the security account as the master account, and enable consolidated billing for all other
accounts. Create a cross-account role for security to manage accounts, and send logs to a bucket in the security account.

C. Create a new AWS account, and use AWS Service Catalog to provide teams with the required resources. Implement a third-party billing
solution to provide the Finance team with the resource use for each team based on tagging. Isolate resources using IAM to avoid account
sprawl. Security will control and monitor logs and permissions.

D. Create a master account for billing using Organizations, and create each team's account from that master account. Create a security
account for logs and cross-account access. Apply service control policies on each account, and grant the Security team cross-account access
to all accounts. Security will create IAM policies for each account to maintain least privilege access.

Correct Answer: B
By creating individual IAM users for people accessing your account, you can give each IAM user a unique set of security credentials. You can
also grant different permissions to each IAM user. If necessary, you can change or revoke an IAM user's permissions anytime. (If you give out
your root user credentials, it can be difficult to revoke them, and it is impossible to restrict their permissions.)
Reference:
https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html

Community vote distribution


D (100%)

" # Ebi Highly Voted $ 1 year ago


D is the answer
upvoted 12 times

" # Bulti Highly Voted $ 1 year ago


Answer is D. AWS organization, separation of accounts by function is a standard AWS best practice when it comes to account creation.
upvoted 5 times

" # Naj_64 Most Recent % 1 month, 2 weeks ago


How does D satisfies "Finance team would like each team's resource usage separated for billing purposes."?
upvoted 4 times

" # Vinafec 3 weeks, 2 days ago


You don't have to enable consolidated billing
upvoted 1 times

" # jj22222 9 months ago


D. Create a master account for billing using Organizations, and create each team‫ג‬€™s account from that master account. Create a security
account for logs and cross-account access. Apply service control policies on each account, and grant the Security team cross-account access
to all accounts. Security will create IAM policies for each account to maintain least privilege access.
upvoted 1 times

" # kyo 9 months, 1 week ago


Selected Answer: D
D: AWS Organizations is the best solution
upvoted 1 times

" # cannottellname 9 months, 3 weeks ago


Selected Answer: D
D is correct
upvoted 1 times


" # Ni_yot 10 months, 1 week ago


D for me.
upvoted 1 times

" # cldy 10 months, 1 week ago


D correct.
upvoted 1 times

" # AzureDP900 11 months ago


D is the best answer
upvoted 1 times

" # ryu10_09 11 months, 3 weeks ago


D is the answer
upvoted 1 times

" # CloudChef 1 year ago


A) Reason/ Require each team to tag their resources, and separate bills based on tags.
upvoted 3 times

" # WhyIronMan 1 year ago


I'll go with D
upvoted 2 times

" # Waiweng 1 year ago


it is D
upvoted 2 times

" # blackgamer 1 year ago


D is the answer.
upvoted 1 times

" # T14102020 1 year ago


Correct is D. master account
upvoted 1 times

" # jackdryan 1 year ago


I'll go with D
upvoted 3 times

" # CYL 1 year ago


D. Use SCP to control organizational level policies.
upvoted 1 times


Question #552 Topic 1

A company has a 24 TB MySQL database in its on-premises data center that grows at the rate of 10 GB per day. The data center is connected to
the company's
AWS infrastructure with a 50 Mbps VPN connection.
The company is migrating the application and workload to AWS. The application code is already installed and tested on Amazon EC2. The
company now needs to migrate the database and wants to go live on AWS within 3 weeks.
Which of the following approaches meets the schedule with LEAST downtime?

A. 1. Use the VM Import/Export service to import a snapshot of the on-premises database into AWS. 2. Launch a new EC2 instance from the
snapshot. 3. Set up ongoing database replication from on premises to the EC2 database over the VPN. 4. Change the DNS entry to point to the
EC2 database. 5. Stop the replication.

B. 1. Launch an AWS DMS instance. 2. Launch an Amazon RDS Aurora MySQL DB instance. 3. Con+gure the AWS DMS instance with on-
premises and Amazon RDS database information. 4. Start the replication task within AWS DMS over the VPN. 5. Change the DNS entry to point
to the Amazon RDS MySQL database. 6. Stop the replication.

C. 1. Create a database export locally using database-native tools. 2. Import that into AWS using AWS Snowball. 3. Launch an Amazon RDS
Aurora DB instance. 4. Load the data in the RDS Aurora DB instance from the export. 5. Set up database replication from the on-premises
database to the RDS Aurora DB instance over the VPN. 6. Change the DNS entry to point to the RDS Aurora DB instance. 7. Stop the
replication.

D. 1. Take the on-premises application offline. 2. Create a database export locally using database-native tools. 3. Import that into AWS using
AWS Snowball. 4. Launch an Amazon RDS Aurora DB instance. 5. Load the data in the RDS Aurora DB instance from the export. 6. Change the
DNS entry to point to the Amazon RDS Aurora DB instance. 7. Put the Amazon EC2 hosted application online.

Correct Answer: C

Community vote distribution


C (100%)

" # Ebi Highly Voted $ 1 year ago


C is the answer,
with 50Mbps connection only around 11TB can be transferred in 3 weeks, so 24TB of data must be transferred differently which is Snowball in
this case
upvoted 12 times
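
The back-of-the-envelope figure above can be checked with a few lines of Python (this ignores protocol overhead and assumes the 50 Mbps VPN link is fully dedicated to the copy):

    # How much data can a 50 Mbps link move in 3 weeks?
    link_mbps = 50
    seconds = 21 * 24 * 3600                       # 3 weeks
    total_bytes = link_mbps / 8 * 1e6 * seconds    # bits/s -> bytes
    print(f"~{total_bytes / 1e12:.1f} TB")         # ~11.3 TB, well short of 24 TB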

" # jay1ram2 Highly Voted $ 1 year, 1 month ago


The Correct Answer is C
upvoted 7 times

" # skyblue07 Most Recent % 4 months, 2 weeks ago


Selected Answer: C
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/MySQL.Procedural.Importing.NonRDSRepl.html
upvoted 1 times

" # cldy 11 months ago


C. 1. Create a database export locally using database-native tools. 2. Import that into AWS using AWS Snowball. 3. Launch an Amazon RDS
Aurora DB instance. 4. Load the data in the RDS Aurora DB instance from the export. 5. Set up database replication from the on-premises
database to the RDS Aurora DB instance over the VPN. 6. Change the DNS entry to point to the RDS Aurora DB instance. 7. Stop the replication.
upvoted 1 times

" # AzureDP900 11 months ago


C is right answer!
upvoted 1 times

" # Goram113 1 year ago


Now DMS can use snowball for entry synchronization and it would be best answer, but in available options C is best.
upvoted 2 times

" # WhyIronMan 1 year ago


I'll go with C
upvoted 1 times


" # Waiweng 1 year ago


it's C
upvoted 3 times

" # ExtHo 1 year ago


C is correct and D required at least 1 week downtime.
upvoted 2 times

" # alisyech 1 year ago


C is correct
upvoted 1 times

" # T14102020 1 year ago


Correct is C. Snowball + without taking the on-premises application offline
upvoted 3 times

" # jackdryan 1 year ago


I'll go with C
upvoted 1 times

" # kopper2019 1 year ago


C is the answer. As soon as you see 24 TB over a 50 Mbps link, no way.... based on torrent leeching experience :)
upvoted 1 times

" # CYL 1 year ago


C. D requires downtime.
upvoted 1 times

" # NikkyDicky 1 year ago


C. Dup of Q153
upvoted 2 times

" # 01037 1 year ago


149 now
upvoted 1 times

" # Oleksandr 1 year ago


I think it's C
DMS: 50Mbps = 6.25 MB/sec = ... = 0.5Tb/day. 24 Tb = 48 days, which is way above 3 weeks.
upvoted 5 times

" # meenu2225 1 year, 1 month ago


C is the one.
upvoted 2 times


Question #553 Topic 1

A company wants to allow its Marketing team to perform SQL queries on customer records to identify market segments. The data is spread
across hundreds of files. The records must be encrypted in transit and at rest. The Team Manager must have the ability to manage users and
groups, but no team members should have access to services or resources not required for the SQL queries. Additionally, Administrators need to
audit the queries made and receive notifications when a query violates rules defined by the Security team.
AWS Organizations has been used to create a new account and an AWS IAM user with administrator permissions for the Team Manager.
Which design meets these requirements?

A. Apply a service control policy (SCP) that allows access to IAM, Amazon RDS, and AWS CloudTrail. Load customer records in Amazon RDS
MySQL and train users to execute queries using the AWS CLI. Stream the query logs to Amazon CloudWatch Logs from the RDS database
instance. Use a subscription filter with AWS Lambda functions to audit and alarm on queries against personal data.

B. Apply a service control policy (SCP) that denies access to all services except IAM, Amazon Athena, Amazon S3, and AWS CloudTrail. Store
customer record files in Amazon S3 and train users to execute queries using the CLI via Athena. Analyze CloudTrail events to audit and alarm
on queries against personal data.

C. Apply a service control policy (SCP) that denies access to all services except IAM, Amazon DynamoDB, and AWS CloudTrail. Store customer
records in DynamoDB and train users to execute queries using the AWS CLI. Enable DynamoDB streams to track the queries that are issued
and use an AWS Lambda function for real-time monitoring and alerting.

D. Apply a service control policy (SCP) that allows access to IAM, Amazon Athena, Amazon S3, and AWS CloudTrail. Store customer records
as files in Amazon S3 and train users to leverage the Amazon S3 Select feature and execute queries using the AWS CLI. Enable S3 object-level
logging and analyze CloudTrail events to audit and alarm on queries against personal data.

Correct Answer: D

Community vote distribution


B (100%)

" # jay1ram2 Highly Voted $ 1 year, 1 month ago


The answer is B. This is the only option that satisfies all requirements

Encryption Rest/Transit - S3/Athena


Manage users and groups - IAM
Deny Access - Ensures the strictest access.
Audit Queries - CloudTrail logs

A - RDS MySQL only pushes the slow query log to CloudWatch


C - DynamoDB Streams pushes only data changes, not SQL
D - This option allows Athena but recommends using S3 Select
upvoted 26 times

" # dcdcdc3 Most Recent % 1 month, 2 weeks ago


this is what S3 Select is:
https://aws.amazon.com/about-aws/whats-new/2018/09/amazon-s3-announces-new-features-for-s3-select/
upvoted 1 times

" # cldy 11 months ago


B. Apply a service control policy (SCP) that denies access to all services except IAM, Amazon Athena, Amazon S3, and AWS CloudTrail. Store
customer record files in Amazon S3 and train users to execute queries using the CLI via Athena. Analyze CloudTrail events to audit and alarm on
queries against personal data.
upvoted 1 times

" # AzureDP900 11 months ago


I'll go with B
upvoted 1 times

" # acloudguru 11 months, 2 weeks ago


Selected Answer: B
Encryption Rest/Transit - S3/Athena
Manage users and groups - IAM
Deny Access - Ensures the strictest access.
Audit Queries - CloudTrail logs

A - RDS MySQL only pushes the slow query log to CloudWatch


C - DynamoDB Streams pushes only data changes, not SQL


D - This option allows Athena but recommends using S3 Select

This is an easy one for solution-type questions, hope I can have it in my exam
upvoted 1 times
" # Smartphone 1 year ago
Answer is B.
Each of the following policies is an example of a deny list policy strategy. Deny list policies must be attached along with other policies that allow
the approved actions in the affected accounts.
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps_examples.html
upvoted 1 times
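
A hedged sketch of what such a deny-list SCP could look like, created with boto3 (the policy content and names are illustrative, not the exam's exact wording; in practice Athena also needs Glue Data Catalog permissions):

    import json
    import boto3

    org = boto3.client("organizations")

    # Deny everything except the services the query workflow needs.
    scp = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Deny",
            "NotAction": ["iam:*", "athena:*", "s3:*", "cloudtrail:*"],
            "Resource": "*",
        }],
    }

    org.create_policy(
        Content=json.dumps(scp),
        Description="Allow only IAM, Athena, S3 and CloudTrail",
        Name="marketing-query-only",
        Type="SERVICE_CONTROL_POLICY",
    )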

" # WhyIronMan 1 year ago


I'll go with B
upvoted 1 times

" # Waiweng 1 year ago


it's B
upvoted 2 times

" # blackgamer 1 year ago


Answer is B. Athena can query but what it S3 select.
upvoted 1 times

" # gsw 1 year ago


there is nothing to suggest in the question that it is required to pull out hundreds of queries at a time in which case why B? Surely D is ok?
upvoted 1 times

" # Viper57 1 year ago


It is not possible to grant permissions using SCP, only deny them. This means you can ignore all questions that state "Use an SCP that allows
access".
upvoted 1 times

" # Amitv2706 1 year ago


B is correct. Athena can run queries on multiple files at same time. However S3 Select is applicable for only one object at a time
upvoted 4 times

" # AWSum1 1 year ago


Correct. And the question states 100s of files
upvoted 1 times

" # Ebi 1 year ago


I will go with B
upvoted 3 times

" # T14102020 1 year ago


Correct is B. Athena + SCP denies
upvoted 3 times

" # jackdryan 1 year ago


I'll go with B
upvoted 3 times

" # CYL 1 year ago


B. Use Deny policies in order to restrict usage of services outside the allowable ones.
upvoted 1 times

" # NikkyDicky 1 year ago


B, Dup of Q154
upvoted 2 times

" # Kopa 1 year ago


why you guys write always dup of questions on 1-450 questions, i dont understand. Should we look into 1-450 questions too?
upvoted 1 times

" # 01037 1 year ago


150 now
upvoted 1 times

" # roger8978 1 year ago


B is the answer
upvoted 1 times


Question #554 Topic 1

A Solutions Architect is responsible for redesigning a legacy Java application to improve its availability, data durability, and scalability. Currently,
the application runs on a single high-memory Amazon EC2 instance. It accepts HTTP requests from upstream clients, adds them to an in-memory
queue, and responds with a
200 status. A separate application thread reads items from the queue, processes them, and persists the results to an Amazon RDS MySQL
instance. The processing time for each item takes 90 seconds on average, most of which is spent waiting on external service calls, but the
application is written to process multiple items in parallel.
Traffic to this service is unpredictable. During periods of high load, items may sit in the internal queue for over an hour while the application
processes the backlog.

In addition, the current system has issues with availability and data loss if the single application node fails.
Clients that access this service cannot be modified. They expect to receive a response to each HTTP request they send within 10 seconds before
they will time out and retry the request.
Which approach would improve the availability and durability of the system while decreasing the processing latency and minimizing costs?

A. Create an Amazon API Gateway REST API that uses Lambda proxy integration to pass requests to an AWS Lambda function. Migrate the
core processing code to a Lambda function and write a wrapper class that provides a handler method that converts the proxy events to the
internal application data model and invokes the processing module.

B. Create an Amazon API Gateway REST API that uses a service proxy to put items in an Amazon SQS queue. Extract the core processing code
from the existing application and update it to pull items from Amazon SQS instead of an in-memory queue. Deploy the new processing
application to smaller EC2 instances within an Auto Scaling group that scales dynamically based on the approximate number of messages in
the Amazon SQS queue.

C. Modify the application to use Amazon DynamoDB instead of Amazon RDS. Configure Auto Scaling for the DynamoDB table. Deploy the
application within an Auto Scaling group with a scaling policy based on CPU utilization. Back the in-memory queue with a memory-mapped file
to an instance store volume and periodically write that file to Amazon S3.

D. Update the application to use a Redis task queue instead of the in-memory queue. Build a Docker container image for the application.
Create an Amazon ECS task definition that includes the application container and a separate container to host Redis. Deploy the new task
definition as an ECS service using AWS Fargate, and enable Auto Scaling.

Correct Answer: B
Reference:
https://aws.amazon.com/blogs/database/introducing-amazon-elasticsearch-service-as-a-target-in-aws-database-migration-service/

Community vote distribution


B (100%)

" # jay1ram2 Highly Voted $ 1 year, 1 month ago


Not sure why C is the correct answer. The obvious challenges here are long workloads, scalability based on queue load, and reliability. Almost
always the defacto answer to queue related workload is SQS. Since the workloads are very long (90 minutes) Lambdas cannot be used (15 mins
max timeout). So, autoscaled smaller EC2 nodes that wait on external services to complete the task makes more sense. If the task fails, the
message is returned to the queue and retried.

My answer is B
upvoted 26 times

" # VrushaliD 1 year ago


Its 90 seconds not minutes, but still ans is B
upvoted 5 times

" # LisX 1 year ago


What is the issue with A then using Lambda?
upvoted 1 times

" # vkbajoria 1 year ago


It is not because of Lambda that option A is incorrect. It is because it didn't talk about the local queue that was used on EC2.
upvoted 2 times
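
For the queue-driven scaling in option B, one simple pattern is a CloudWatch alarm on the queue depth wired to an Auto Scaling policy; a boto3 sketch is below (the group name, queue name, threshold, and cooldown are placeholders, and a target-tracking policy on backlog per instance would work just as well):

    import boto3

    autoscaling = boto3.client("autoscaling")
    cloudwatch = boto3.client("cloudwatch")

    ASG_NAME = "processing-asg"      # placeholder
    QUEUE_NAME = "processing-jobs"   # placeholder

    # Simple scaling policy: add one instance each time the alarm fires.
    policy_arn = autoscaling.put_scaling_policy(
        AutoScalingGroupName=ASG_NAME,
        PolicyName="scale-out-on-backlog",
        AdjustmentType="ChangeInCapacity",
        ScalingAdjustment=1,
        Cooldown=120,
    )["PolicyARN"]

    # Alarm on the approximate number of visible messages in the queue.
    cloudwatch.put_metric_alarm(
        AlarmName="queue-backlog-high",
        Namespace="AWS/SQS",
        MetricName="ApproximateNumberOfMessagesVisible",
        Dimensions=[{"Name": "QueueName", "Value": QUEUE_NAME}],
        Statistic="Average",
        Period=60,
        EvaluationPeriods=2,
        Threshold=100,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=[policy_arn],
    )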

" # AWSum1 Highly Voted $ 1 year ago


This question is long and causes brain fatigue lol.

B , sqs will satisfy the need


upvoted 6 times
" # TechX Most Recent % 4 months, 1 week ago
Selected Answer: B
I'll go with B
upvoted 1 times

" # ravisar 5 months ago


Will Answer B satisfy the requirement below? "They anticipate receiving a response to each HTTP request they submit within 10 seconds, at
which point the request will time out and be retried." Will SQS processing time depends on the subscriber?
upvoted 1 times

" # jj22222 7 months, 1 week ago


Selected Answer: B
B looks right
upvoted 1 times

" # vbal 11 months ago


instance store volume - better IOPS?
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with B
upvoted 2 times

" # Chubb 1 year ago


what does it mean by upstream client?
upvoted 1 times

" # Kian1 1 year ago


going with B
upvoted 2 times

" # Ebi 1 year ago


I will go with B
upvoted 4 times

" # Bulti 1 year ago


Answer is B.
upvoted 2 times

" # Britts 1 year ago


Why B? Not sure if API Gateway can support HTTP (it only does HTTPS) and client application can't be changed to invoke HTTPS
upvoted 1 times

" # beebatov 1 year ago


I believe you can make "HTTP" API requests
https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-develop.html#http-api-examples
upvoted 1 times

" # DashL 1 year ago


That is a "HTTP API" - not making an HTTP request to API gateway endpoint. None of the answers seems right.
upvoted 1 times

" # DashL 1 year ago


My bad, C seems to be the only possible Answer.
upvoted 1 times

" # T14102020 1 year ago


Correct is B. SQS + without Lambda, DynamoDB, Redis
upvoted 2 times

" # jackdryan 1 year ago


I'll go with B
upvoted 3 times

" # CYL 1 year ago


B. Using SQS to decouple the incoming request and processing nodes. Auto scaling to scale based on traffic. This increases reliability.
upvoted 1 times


" # NikkyDicky 1 year ago


B. Dup of Q155
upvoted 3 times

" # Mkumar 1 year, 1 month ago


My answer is B
upvoted 3 times


Question #555 Topic 1

A Solutions Architect needs to migrate a legacy application from on premises to AWS. On premises, the application runs on two Linux servers
behind a load balancer and accesses a database that is master-master on two servers. Each application server requires a license file that is tied to
the MAC address of the server's network adapter. It takes the software vendor 12 hours to send new license files through email. The application
requires configuration files to use static
IPv4 addresses to access the database servers, not DNS.
Given these requirements, which steps should be taken together to enable a scalable architecture for the application servers? (Choose two.)

A. Create a pool of ENIs, request license files from the vendor for the pool, and store the license files within Amazon S3. Create automation to
download an unused license, and attach the corresponding ENI at boot time.

B. Create a pool of ENIs, request license files from the vendor for the pool, store the license files on an Amazon EC2 instance, modify the
configuration files, and create an AMI from the instance. Use this AMI for all instances.

C. Create a bootstrap automation to request a new license file from the vendor with a unique return email. Have the server configure itself with
the received license file.

D. Create bootstrap automation to attach an ENI from the pool, read the database IP addresses from AWS Systems Manager Parameter Store,
and inject those parameters into the local configuration files. Keep SSM up to date using a Lambda function.

E. Install the application on an EC2 instance, configure the application, and configure the IP address information. Create an AMI from this
instance and use it for all instances.

Correct Answer: CD

Community vote distribution


AD (100%)

" # jay1ram2 Highly Voted $ 1 year, 1 month ago


I choose A and D
upvoted 26 times

" # vkbajoria Highly Voted $ 1 year ago


Majority of the answers are incorrect. Is Examtopics purposely providing all the wrong answer?
upvoted 5 times

" # vkbajoria 1 year ago


The answer is A & D
upvoted 3 times

" # AzureDP900 11 months ago


They might have provided intentionally wrong answers, We have to read and understand what is right vs wrong
upvoted 1 times

" # aandc Most Recent % 4 months, 1 week ago


Selected Answer: AD
AD is correct
upvoted 1 times

" # bkrish 6 months, 4 weeks ago


I will go with A&D
upvoted 1 times

" # AzureDP900 11 months ago


I will go with AD
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with A,D
upvoted 1 times

" # Waiweng 1 year ago


It s A&D
upvoted 3 times


" # 01037 1 year ago


A, D
If it's supported by the vendor, C is a better choice.
upvoted 1 times

" # ExtHo 1 year ago


A&D
Having the license files on an Amazon S3 bucket reduces the management overhead for the EC2 instances, as you can easily add/remove more
license keys if needed.

Having the database IP addresses on Parameter Store ensures that all the EC2 instances will have a central location to retrieve the IP addresses.
This also reduces the need to constantly update any script from inside the EC2 instance even if you add/remove more databases in the future.
upvoted 4 times
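
A rough boto3 sketch of the bootstrap steps behind options A and D, attaching a pooled, pre-licensed ENI and rendering the database IPs from Parameter Store into the local configuration (the instance ID, ENI ID, parameter name, and file path are placeholders):

    import boto3

    ec2 = boto3.client("ec2")
    ssm = boto3.client("ssm")

    INSTANCE_ID = "i-0123456789abcdef0"   # placeholder; normally read from instance metadata
    ENI_ID = "eni-0123456789abcdef0"      # placeholder; an unused ENI from the licensed pool

    # Attach the pre-licensed ENI as a secondary interface at boot time.
    ec2.attach_network_interface(
        NetworkInterfaceId=ENI_ID,
        InstanceId=INSTANCE_ID,
        DeviceIndex=1,
    )

    # Read the static database IPs from Parameter Store and write the config file.
    db_ips = ssm.get_parameter(Name="/legacy-app/db-ips")["Parameter"]["Value"]
    with open("/etc/legacy-app/db.conf", "w") as conf:
        conf.write(f"db_hosts={db_ips}\n")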

" # Pupu86 1 year ago


The objective is to tie the license file to a recognisable NIC (ENI in this case) and extract the IP and bind it accordingly. AD fulfils both
requirements.
upvoted 1 times

" # certainly 1 year ago


AD. B is incorrect. The license is bound to a specific MAC address; you cannot store the license file on an EC2 instance whose NIC is assigned from the pool
programmatically.
upvoted 1 times

" # Kian1 1 year ago


going with AD
upvoted 2 times

" # Firststack 1 year ago


A & D is correct
upvoted 2 times

" # Ebi 1 year ago


I will go with AD
upvoted 3 times

" # 01037 1 year ago


What does it mean that tying a license file to the MAC address of the server's network adapter?
upvoted 1 times

" # Bulti 1 year ago


For security purposes its best not to store the license files on EC2 instance. So I will go with A & D
upvoted 1 times

" # gookseang 1 year ago


BD my friend says AD
upvoted 1 times

" # binhdx 1 year ago


should be AD, don't store license into Ec2.
upvoted 1 times


Question #556 Topic 1

A company has an Amazon VPC that is divided into a public subnet and a private subnet. A web application runs in Amazon VPC, and each subnet
has its own
NACL. The public subnet has a CIDR of 10.0.0.0/24. An Application Load Balancer is deployed to the public subnet. The private subnet has a CIDR
of 10.0.1.0/24.
Amazon EC2 instances that run a web server on port 80 are launched into the private subnet.
Only network traffic that is required for the Application Load Balancer to access the web application can be allowed to travel between the public
and private subnets.
What collection of rules should be written to ensure that the private subnet's NACL meets the requirement? (Choose two.)

A. An inbound rule for port 80 from source 0.0.0.0/0.

B. An inbound rule for port 80 from source 10.0.0.0/24.

C. An outbound rule for port 80 to destination 0.0.0.0/0.

D. An outbound rule for port 80 to destination 10.0.0.0/24.

E. An outbound rule for ports 1024 through 65535 to destination 10.0.0.0/24.

Correct Answer: BC
Reference:
https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Scenario3.html

Community vote distribution


BE (67%) BD (33%)

" # Riho Highly Voted $ 10 months, 1 week ago


https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-security-groups.html#elb-vpc-nacl - Right answer should be B,E
upvoted 11 times

" # Dionenonly Most Recent % 1 month, 2 weeks ago


Selected Answer: BE
B E would be the best answer
upvoted 2 times

" # CloudHandsOn 3 months, 2 weeks ago


B.E. - First choice, and believe this is the correct answer
upvoted 2 times

" # aandc 4 months ago


Selected Answer: BE
ephemeral ports are needed
upvoted 2 times

" # JonJon03 4 months ago


Selected Answer: BD
ALB terminates flow/has proxy behaviour. https://aws.amazon.com/elasticloadbalancing/features/?nc=sn&loc=2&dn=1
upvoted 2 times

" # bobsmith2000 5 months, 3 weeks ago


NACL is stateless. So we must set up both inbound and outbound.
B. An inbound rule for port 80 from source 10.0.0.0/24. Allows access from pub sub on 80.
E. An outbound rule for ports 1024 through 65535 to destination 10.0.0.0/24. Allow outbound to pub sub on ephemeral ports
upvoted 4 times
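
Those two rules, written as boto3 network ACL entries on the private subnet's NACL (the NACL ID and rule numbers are placeholders; inbound and outbound entries are numbered independently):

    import boto3

    ec2 = boto3.client("ec2")
    PRIVATE_NACL_ID = "acl-0123456789abcdef0"   # placeholder

    # Inbound: allow HTTP from the ALB's public subnet only.
    ec2.create_network_acl_entry(
        NetworkAclId=PRIVATE_NACL_ID, RuleNumber=100, Egress=False,
        Protocol="6", RuleAction="allow", CidrBlock="10.0.0.0/24",
        PortRange={"From": 80, "To": 80},
    )

    # Outbound: return traffic to the ALB leaves on ephemeral ports.
    ec2.create_network_acl_entry(
        NetworkAclId=PRIVATE_NACL_ID, RuleNumber=100, Egress=True,
        Protocol="6", RuleAction="allow", CidrBlock="10.0.0.0/24",
        PortRange={"From": 1024, "To": 65535},
    )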

" # adsdadasdad 7 months, 1 week ago


Its not, you made the mistake of thinking the application load balancer is an internal one. Thus the answer is correct
upvoted 1 times


Question #557 Topic 1

A company has an internal AWS Elastic Beanstalk worker environment inside a VPC that must access an external payment gateway API available
on an HTTPS endpoint on the public internet. Because of security policies, the payment gateway's Application team can grant access to only one
public IP address.
Which architecture will set up an Elastic Beanstalk environment to access the company's application without making multiple changes on the
company's end?

A. Configure the Elastic Beanstalk application to place Amazon EC2 instances in a private subnet with an outbound route to a NAT gateway in
a public subnet. Associate an Elastic IP address to the NAT gateway that can be whitelisted on the payment gateway application side.

B. Configure the Elastic Beanstalk application to place Amazon EC2 instances in a public subnet with an internet gateway. Associate an
Elastic IP address to the internet gateway that can be whitelisted on the payment gateway application side.

C. Configure the Elastic Beanstalk application to place Amazon EC2 instances in a private subnet. Set an HTTPS_PROXY application
parameter to send outbound HTTPS connections to an EC2 proxy server deployed in a public subnet. Associate an Elastic IP address to the
EC2 proxy host that can be whitelisted on the payment gateway application side.

D. Configure the Elastic Beanstalk application to place Amazon EC2 instances in a public subnet. Set the HTTPS_PROXY and NO_PROXY
application parameters to send non-VPC outbound HTTPS connections to an EC2 proxy server deployed in a public subnet. Associate an
Elastic IP address to the EC2 proxy host that can be whitelisted on the payment gateway application side.

Correct Answer: A
Reference:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/vpc.html

Community vote distribution


A (75%) C (25%)

" # Mkumar Highly Voted $ 1 year, 1 month ago


Answer is A
upvoted 12 times

" # sashenka 11 months, 3 weeks ago


https://aws.amazon.com/premiumsupport/knowledge-center/elastic-beanstalk-static-IP-address/
upvoted 1 times

" # Ebi Highly Voted $ 1 year ago


I will go with A
upvoted 6 times

" # Rocketeer Most Recent % 1 month, 2 weeks ago


Both A and C will work. However A is much easier to set up and least impact.
Hence A is the answer.
upvoted 1 times

" # Dionenonly 1 month, 2 weeks ago


Selected Answer: A
A. Plain and simple
upvoted 2 times

" # Israel 1 month, 3 weeks ago


Elastic Beanstalk doesn't support proxy settings like HTTPS_PROXY for configuring a web proxy.

https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/vpc.html
upvoted 2 times

" # hilft 3 months, 2 weeks ago


Why not C?
upvoted 1 times

" # tartarus23 6 months, 1 week ago


Selected Answer: A
A offers security for the EC2 instances, as they are in a private subnet and the internet connection through the NAT gateway in the public subnet is secured.
The Elastic IP of the NAT gateway can then be whitelisted on the payment gateway application side.


upvoted 1 times
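
A minimal boto3 sketch of the plumbing behind answer A: allocate an Elastic IP, create the NAT gateway in the public subnet, and route the private subnet's outbound traffic through it (the subnet and route table IDs are placeholders):

    import boto3

    ec2 = boto3.client("ec2")

    PUBLIC_SUBNET_ID = "subnet-0aaaaaaaaaaaaaaaa"      # placeholder
    PRIVATE_ROUTE_TABLE_ID = "rtb-0bbbbbbbbbbbbbbbb"   # placeholder

    # The Elastic IP is the single address the payment provider whitelists.
    eip = ec2.allocate_address(Domain="vpc")

    nat = ec2.create_nat_gateway(
        SubnetId=PUBLIC_SUBNET_ID,
        AllocationId=eip["AllocationId"],
    )["NatGateway"]

    # Wait until the NAT gateway is available, then route 0.0.0.0/0 through it.
    ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat["NatGatewayId"]])
    ec2.create_route(
        RouteTableId=PRIVATE_ROUTE_TABLE_ID,
        DestinationCidrBlock="0.0.0.0/0",
        NatGatewayId=nat["NatGatewayId"],
    )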

" # HellGate 7 months, 1 week ago


Selected Answer: C
Need Proxy service for HTTPS communication.
upvoted 1 times

" # Ni_yot 8 months, 1 week ago


A for me.
makes sense
upvoted 1 times

" # AzureDP900 11 months ago


I will go with A
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with A
upvoted 1 times

" # Waiweng 1 year ago


it's A
upvoted 4 times

" # T14102020 1 year ago


Correct is A. NAT + Elastic IP
upvoted 4 times

" # jackdryan 1 year ago


I'll go with A
upvoted 4 times

" # CYL 1 year ago


A. Simplest approach to having all the requests originating to have the same public IP.
upvoted 2 times

" # MultiAZ 1 year ago


What's wrong with B? IGW also has Elastic IP that can be whitelisted.
upvoted 1 times

" # sam422 1 year ago


Elastic beanstalk in private vpc or not exposed to internet looking at question
upvoted 3 times

" # NikkyDicky 1 year ago


A. Dup of Q158
upvoted 2 times


Question #558 Topic 1

A company has a website that enables users to upload videos. Company policy states the uploaded videos must be analyzed for restricted
content. An uploaded video is placed in Amazon S3, and a message is pushed to an Amazon SQS queue with the video's location. A backend
application pulls this location from
Amazon SQS and analyzes the video.
The video analysis is compute-intensive and occurs sporadically during the day. The website scales with demand. The video analysis application
runs on a fixed number of instances. Peak demand occurs during the holidays, so the company must add instances to the application during this
time. All instances used are currently on-demand Amazon EC2 T2 instances. The company wants to reduce the cost of the current solution.
Which of the following solutions is MOST cost-effective?

A. Keep the website on T2 instances. Determine the minimum number of website instances required during off-peak times and use Spot
Instances to cover them while using Reserved Instances to cover peak demand. Use Amazon EC2 R4 and Amazon EC2 R5 Reserved Instances
in an Auto Scaling group for the video analysis application.

B. Keep the website on T2 instances. Determine the minimum number of website instances required during off-peak times and use Reserved
Instances to cover them while using On-Demand Instances to cover peak demand. Use Spot Fleet for the video analysis application comprised
of Amazon EC2 C4 and Amazon EC2 C5 Spot Instances.

C. Migrate the website to AWS Elastic Beanstalk and Amazon EC2 C4 instances. Determine the minimum number of website instances
required during off-peak times and use On-Demand Instances to cover them while using Spot capacity to cover peak demand. Use Spot Fleet
for the video analysis application comprised of C4 and Amazon EC2 C5 instances.

D. Migrate the website to AWS Elastic Beanstalk and Amazon EC2 R4 instances. Determine the minimum number of website instances
required during off-peak times and use Reserved Instances to cover them while using On-Demand Instances to cover peak demand. Use Spot
Fleet for the video analysis application comprised of R4 and Amazon EC2 R5 instances.

Correct Answer: B

" # Pupu86 Highly Voted $ 1 year ago


B is correct as it uses C4 and C5 (compute intensive instances) while D uses R4 and R5 (memory intensive instances) even if beanstalk auto-
scaling is taken into account.
upvoted 12 times

" # joe16 1 year ago


B.
Main points to support this answer - Reserved Instances for off-peak load, spot for video processing, C4/5 for compute optimized video
processing.
upvoted 2 times

" # Waiweng Highly Voted $ 1 year ago


it's B
upvoted 7 times

" # AzureDP900 Most Recent % 10 months, 3 weeks ago


B is correct
upvoted 2 times

" # cldy 11 months ago


B. Keep the website on T2 instances. Determine the minimum number of website instances required during off-peak times and use Reserved
Instances to cover them while using On-Demand Instances to cover peak demand. Use Spot Fleet for the video analysis application comprised
of Amazon EC2 C4 and Amazon EC2 C5 Spot Instances.
upvoted 2 times

" # AzureDP900 11 months ago


B is right.. use Reserved Instances to cover them while using On-Demand Instances to cover peak demand. Use Spot Fleet
upvoted 2 times

" # WhyIronMan 1 year ago


I'll go with B
upvoted 1 times

" # hezll 1 year ago


Why not D? Beanstalk has auto-scaling.


upvoted 2 times
" # Ebi 1 year ago
I go with B
upvoted 3 times

" # T14102020 1 year ago


Correct is B. T2 + RI for off-peak + without Beanstalk
upvoted 2 times

" # jackdryan 1 year ago


I'll go with B
upvoted 2 times

" # CYL 1 year ago


B. Correct usage of reserved, on-demand and spot instances.
upvoted 1 times

" # NikkyDicky 1 year ago


B. Dup of Q159
upvoted 3 times

" # meenu2225 1 year ago


B seems right
upvoted 2 times

" # Mkumar 1 year ago


Answer is B
upvoted 4 times

" # LunchTime 1 year, 1 month ago


This is a replication of Question #159, Topic 2
upvoted 1 times

" # shyamexamprep 3 weeks, 5 days ago


where is topic 2 questions pls confirm?
I am preparing for solution architect professional exam ,kindly confirm examtopics questions from 390-953 are enough to pass the exam.
upvoted 2 times


Question #559 Topic 1

A company has an application that uses Amazon EC2 instances in an Auto Scaling group. The Quality Assurance (QA) department needs to launch
a large number of short-lived environments to test the application. The application environments are currently launched by the Manager of the
department using an AWS
CloudFormation template. To launch the stack, the Manager uses a role with permission to use CloudFormation, EC2, and Auto Scaling APIs. The
Manager wants to allow testers to launch their own environments, but does not want to grant broad permissions to each user.
Which set up would achieve these goals?

A. Upload the AWS CloudFormation template to Amazon S3. Give users in the QA department permission to assume the Manager's role and
add a policy that restricts the permissions to the template and the resources it creates. Train users to launch the template from the
CloudFormation console.

B. Create an AWS Service Catalog product from the environment template. Add a launch constraint to the product with the existing role. Give
users in the QA department permission to use AWS Service Catalog APIs only. Train users to launch the templates from the AWS Service
Catalog console.

C. Upload the AWS CloudFormation template to Amazon S3. Give users in the QA department permission to use CloudFormation and S3 APIs,
with conditions that restrict the permission to the template and the resources it creates. Train users to launch the template from the
CloudFormation console.

D. Create an AWS Elastic Beanstalk application from the environment template. Give users in the QA department permission to use Elastic
Beanstalk permissions only. Train users to launch Elastic Beanstalk environment with the Elastic Beanstalk CLI, passing the existing role to
the environment as a service role.

Correct Answer: B
Reference:
https://aws.amazon.com/ru/blogs/mt/how-to-launch-secure-and-governed-aws-resources-with-aws-cloudformation-and-aws-service-catalog/

Community vote distribution


B (100%)
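
To make answer B concrete, a tester whose IAM permissions cover only the AWS Service Catalog APIs could launch an environment roughly like this (a sketch; the product ID, artifact ID and parameter name are hypothetical, and the launch constraint supplies the Manager's role behind the scenes):

# Minimal sketch of answer B: a QA tester provisions a pre-approved
# environment through AWS Service Catalog. IDs and parameter names are
# hypothetical placeholders.
import boto3

sc = boto3.client("servicecatalog")

sc.provision_product(
    ProductId="prod-abc123example",
    ProvisioningArtifactId="pa-def456example",
    ProvisionedProductName="qa-env-tester1",
    ProvisioningParameters=[
        {"Key": "EnvironmentSize", "Value": "small"},
    ],
)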

" # jay1ram2 Highly Voted $ 1 year, 1 month ago


A? Grant QA users access to Manager Role is a blatant violation of Security.

B makes more sense to me as it restricts users to create services through the catalog.
upvoted 26 times

" # Blair77 Most Recent % 3 weeks, 3 days ago


Selected Answer: B
I'll go with B!!
upvoted 1 times

" # AzureDP900 11 months ago


Service Catalog is right option. B for sure right answer.
upvoted 4 times

" # denccc 1 year ago


It's B, not sure what other discussion is going on below.
upvoted 3 times

" # AzureDP900 11 months ago


Below discussions doesn't seems related to this question, I am not sure what they are taking :)
upvoted 4 times

" # WhyIronMan 1 year ago


I'll go with B
upvoted 3 times

" # chuck_lee 1 year ago


needs to improve the scalable performance and availability of the database.
Which solution meets these requirements?

A. Create an Amazon CloudWatch alarm action that triggers a Lambda function to add an Amazon RDS for MySQL read replica when resource

utilization hits a threshold


B. Migrate the database to Amazon Aurora, and add a read replica Add a database connection pool outside of the Lambda handler function
C. Migrate the database to Amazon Aurora, and add a read replica Use Amazon Route 53 weighted records
D. Migrate the database to Amazon Aurora, and add an Aurora Replica Configure Amazon RDS Proxy to manage database connection pools
upvoted 1 times

" # joe16 1 year ago


D.
Lambdas are stateless and can't rely on connection pool. To get over this problem, AWS provide RDS proxy for connection pool management.
upvoted 1 times
" # chuck_lee 1 year ago
A company runs a software-as-a-service (SaaS) application on AWS. The application consists of AWS Lambda functions and an Amazon RDS for
MySQL Multi-AZ database. During market events the application has a much higher workload than normal Users notice slow response times
during the peak periods because of many database connections. The company needs
upvoted 1 times

" # Bulti 1 year ago


Answer is B.
upvoted 2 times

" # T14102020 1 year ago


Correct is B. Service Catalog product
upvoted 1 times

" # jackdryan 1 year ago


I'll go with B
upvoted 4 times

" # bbnbnuyh 1 year ago


B makes sense as AWS Service Catalog is created for a use-case like this.
upvoted 1 times

" # CYL 1 year ago


B, using service catalog to show what are the allowed services will be the easiest way to approach the restrictions.
upvoted 1 times

" # JBRIAN 1 year ago


A. Verify the AWS IoT Device Shadow service is subscribed to the appropriate topic and is executing the AWS Lambda function.
B. Verify that AWS IoT monitoring shows that the appropriate AWS IoT rules are being executed, and that the AWS IoT rules are enabled with the
correct rule actions.
C. Check the AWS IoT Fleet indexing service and verify that the thing group has the appropriate IAM role to update DynamoDB.
D. Verify that AWS IoT things are using MQTT instead of MQTT over WebScocket, then check that the provisioning has the appropriate policy
attached.
upvoted 1 times

" # Phat 1 year ago


D is correct
upvoted 1 times

" # Fua 1 year ago


explain please
upvoted 1 times

" # lifebegins 8 months, 1 week ago


Answer is B: IoT Rules. https://docs.aws.amazon.com/iot/latest/developerguide/iot-rules.html
upvoted 1 times

" # JBRIAN 1 year ago


NO.80 An IoT company has rolled out a fleet of sensors for monitoring temperatures in remote locations. Each device connect to AWS IoT Core
and sends a message 30 seconds, updating an Amazon DynamoDB table. A System Administrator users AWS IoT to verify the devices are still
sending messages to AWS IoT Core: the database is not updating.
What should a Solution Architect check to determine why the database is not being updated?
upvoted 1 times

" # JBRIAN 1 year ago


A. Store the data in Amazon DocumentDB Create a single global Amazon CloudFront distribution with a custom origin built on edge-optimized
Amazon API Gateway and AWS Lambda Assign the company's domain as an alternate domain for the distribution. and configure Amazon Route
53 with an alias to the CloudFront distribution
B. Store the data in replicated Amazon S3 buckets in two Regions Create an Amazon CloudFront distribution in each Region, with custom origins
built on Amazon API Gateway and AWS Lambda launched in each Region Assign the company's domain as an alternate domain for both
distributions and configure Amazon Route 53 with a failover routing policy between them
C. Store the data in an Amazon DynamoDB global table in two Regions using on-demand capacity mode In both Regions, run the web service as
Amazon ECS Fargate tasks in an Auto Scaling ECS service behind an Application Load Balancer (ALB) In Amazon Route 53, configure an alias
record in the company's domain and a Route 53 latency-based routing policy with health checks to distribute traffic between the two ALBs

upvoted 2 times

" # SadioMane 1 year ago


Answer is ABD
upvoted 1 times

" # SadioMane 1 year ago


Sorry. The answer is meant for Q #63
upvoted 1 times

" # Nit_1 1 year ago


What is the ans for Q77
upvoted 1 times
" # JBRIAN 1 year ago
NO.77 A company is refactoring an existing web service that provides read and write access to structured data. The service must respond to
short but significant spikes in the system load The service must be fault tolerant across multiple AWS Regions.
Which actions should be taken to meet these requirements?
upvoted 1 times

" # JBRIAN 1 year ago


A. Apply environment, cost center, and application name tags to all taggable resources
B. Configure custom budgets and define thresholds using Cost Explorer
C. Configure AWS Trusted Advisor to obtain weekly emails with cost-saving estimates
D. Create a portfolio for each business unit and add products to the portfolios using AWS
CloudFormation in AWS Service Catalog
E. Configure a billing alarm in Amazon CloudWatch.
F. Configure SCPs in AWS Organizations to allow services available using AWS
upvoted 1 times

" # DuyPhan 1 year ago


the correct answer is ABD
upvoted 1 times

" # perio 1 year ago


I agree.
'A' for calculating the costs for each project, environment.
'B' for limiting the usage of resources in the dev account.
'D' for business units deploying pre-approved IT services only.
upvoted 1 times

" # Nkem 1 year ago


ADF
A: Calculating costs for each project, environment
D: Limiting deployed resources
F: Centrally managing IT services and limiting AWS resources in the development account
upvoted 4 times

" # Nkem 1 year ago


Reversed myself. Answer is ABD
upvoted 1 times

" # NNHAN 1 year ago


B is wrong, threshold should be defined in Budget service.
upvoted 2 times

" # A_New_Guy 1 year ago


The correct answer is ADF
upvoted 1 times


Question #560 Topic 1

A company has several teams, and each team has their own Amazon RDS database that totals 100 TB. The company is building a data query
platform for
Business Intelligence Analysts to generate a weekly business report. The new system must run ad-hoc SQL queries.
What is the MOST cost-effective solution?

A. Create a new Amazon Redshift cluster. Create an AWS Glue ETL job to copy data from the RDS databases to the Amazon Redshift cluster.
Use Amazon Redshift to run the query.

B. Create an Amazon EMR cluster with enough core nodes. Run an Apache Spark job to copy data from the RDS databases to a Hadoop
Distributed File System (HDFS). Use a local Apache Hive metastore to maintain the table definition. Use Spark SQL to run the query.

C. Use an AWS Glue ETL job to copy all the RDS databases to a single Amazon Aurora PostgreSQL database. Run SQL queries on the Aurora
PostgreSQL database.

D. Use an AWS Glue crawler to crawl all the databases and create tables in the AWS Glue Data Catalog. Use an AWS Glue ETL job to load data
from the RDS databases to Amazon S3, and use Amazon Athena to run the queries.

Correct Answer: A

Community vote distribution


D (100%)
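
As a concrete illustration of answer D, once the Glue crawler has catalogued the exported data in S3, an ad-hoc query can be submitted to Athena like this (a sketch; the database, table and bucket names are made up for the example):

# Minimal sketch of answer D: run an ad-hoc SQL query with Athena against
# tables the Glue crawler created in the Data Catalog. Names are hypothetical.
import boto3

athena = boto3.client("athena")

athena.start_query_execution(
    QueryString="SELECT team, SUM(amount) AS total FROM sales GROUP BY team",
    QueryExecutionContext={"Database": "rds_export_catalog"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)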

" # Nemer Highly Voted $ 1 year, 1 month ago


D. ad-hoc queries + cost advantage over Redshift -> Athena.
upvoted 15 times

" # Ebi Highly Voted $ 1 year ago


Cheapest option is D
upvoted 6 times

" # hilft Most Recent % 3 months, 2 weeks ago


Badly formed question. Each team got 100 TB of data and you're still not using Redshift? The right answer is D because it asks for the most
cost-efficient way of querying.
upvoted 1 times

" # aandc 4 months, 1 week ago


Selected Answer: D
keyword: ad-hoc SQL queries, cost -> Athena
upvoted 1 times

" # shotty1 9 months, 2 weeks ago


Selected Answer: D
correct answer is D
upvoted 1 times

" # CloudChef 9 months, 2 weeks ago


Selected Answer: D
D via Digital Cloud Training
upvoted 2 times

" # pititcu667 9 months, 3 weeks ago


Selected Answer: D
d because cheap + ad-hoc
upvoted 2 times

" # AzureDP900 11 months ago


I will go with D, This is most cost -effective solution.
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with D
upvoted 1 times


" # Waiweng 1 year ago


it's D
upvoted 3 times

" # digimaniac 1 year ago


D. Redshift, EMR, and Anthena can all do the job. Read this article.
https://aws.amazon.com/athena/faqs/#When_to_use_Athena_vs_other_big_data_services
upvoted 4 times

" # Kian1 1 year ago


going with D
upvoted 2 times

" # rkbala 1 year ago


D. Athena is cheap over Redshift
upvoted 1 times

" # Bulti 1 year ago


Correct answer is D.
upvoted 1 times

" # Britts 1 year ago


The question clearly states that the purpose is to run business-analytics-style queries, which pushes this towards Redshift, i.e. A. Otherwise why
would somebody ever need a Redshift cluster if S3 and Athena could have been used?
upvoted 3 times

" # petebear55 1 year ago


"QUERIES" !!!! IS THE KEY HERE!! ... This indicates simple things like .. how many blue jumpers were sold etc. Redshift would be more
appropriate for DEEP statistical analysis ... such as plotting flight routes .. answer is D
upvoted 3 times

" # T14102020 1 year ago


Correct is D. Athena + Glue Crawler
upvoted 1 times

" # jackdryan 1 year ago


I'll go with D
upvoted 3 times


Question #561 Topic 1

A company provides AWS solutions to its users with AWS CloudFormation templates. Users launch the templates in their accounts to have
different solutions provisioned for them. The users want to improve the deployment strategy for solutions while retaining the ability to do the
following:
✑ Add their own features to a solution for their specific deployments.
✑ Run unit tests on their changes.
✑ Turn features on and off for their deployments.
✑ Automatically update with code changes.
✑ Run security scanning tools for their deployments.
Which strategies should the Solutions Architect use to meet the requirements?

A. Allow users to download solution code as Docker images. Use AWS CodeBuild and AWS CodePipeline for the CI/CD pipeline. Use Docker
images for different solution features and the AWS CLI to turn features on and off. Use AWS CodeDeploy to run unit tests and security scans,
and for deploying and updating a solution with changes.

B. Allow users to download solution code artifacts. Use AWS CodeCommit and AWS CodePipeline for the CI/CD pipeline. Use AWS Amplify
plugins for different solution features and user prompts to turn features on and off. Use AWS Lambda to run unit tests and security scans, and
AWS CodeBuild for deploying and updating a solution with changes.

C. Allow users to download solution code artifacts in their Amazon S3 buckets. Use Amazon S3 and AWS CodePipeline for the CI/CD
pipelines. Use CloudFormation StackSets for different solution features and to turn features on and off. Use AWS Lambda to run unit tests and
security scans, and CloudFormation for deploying and updating a solution with changes.

D. Allow users to download solution code artifacts. Use AWS CodeCommit and AWS CodePipeline for the CI/CD pipeline. Use the AWS Cloud
Development Kit constructs for different solution features, and use the manifest file to turn features on and off. Use AWS CodeBuild to run unit
tests and security scans, and for deploying and updating a solution with changes.

Correct Answer: A
Reference:
https://www.slideshare.net/AmazonWebServices/cicd-for-containers-a-way-forward-for-your-devops-pipeline

Community vote distribution


D (100%)
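
To illustrate answer D, a CDK app (Python shown here) can read a manifest file and toggle optional constructs before synthesizing the template. This is only a sketch under assumed conventions; the manifest format and the "audit_logging" feature are invented for the example:

# Minimal sketch of answer D: a CDK app gates optional features on flags
# read from a manifest file. Manifest format and feature names are invented.
import json
import aws_cdk as cdk
from aws_cdk import aws_s3 as s3
from constructs import Construct


class SolutionStack(cdk.Stack):
    def __init__(self, scope: Construct, construct_id: str, features: dict, **kwargs):
        super().__init__(scope, construct_id, **kwargs)
        # Core solution resources would be defined here; optional features
        # are only instantiated when the manifest turns them on.
        if features.get("audit_logging"):
            s3.Bucket(self, "AuditLogBucket", versioned=True)


with open("manifest.json") as f:
    manifest = json.load(f)

app = cdk.App()
SolutionStack(app, "Solution", features=manifest.get("features", {}))
app.synth()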

" # bbnbnuyh Highly Voted $ 1 year, 1 month ago


D. AWS CDK enables you to define your infrastructure with code and provision it through AWS CloudFormation. You get all the benefits of
CloudFormation, including repeatable deployment, easy rollback, and drift detection.
upvoted 23 times

" # cloudgc Highly Voted $ 1 year, 1 month ago


Answer-C.
A - codedeploy - not used for unit tests and security scans
B - codebuild - not used for deploying and updating
D - codebuild - not used for deploying and updating
upvoted 6 times

" # Kelvin1477 1 year, 1 month ago


somewhat agree with stacksets use case for providing various flavor of the solution template
upvoted 1 times

" # cloudgc 1 year, 1 month ago


looks like a keyword is missing in Answer-D.

and XXXX for deploying and updating a solution with changes.

if this is true then the answer can be D.


upvoted 3 times

" # dijesim222 1 year ago


CDK as in answer D can EITHER output cloudformation tempaltes OR deploy the stack immediately (which is totally feasible to do with
codebuild). IF the output was a cloudformation template (which was NOT in answer D) it is totally feasible to deploy cloudformation
templates with codepipeline alone, no codedeploy etc. is needed. -> anwser D is perfect
upvoted 2 times


" # rahulseth Most Recent % 3 months, 3 weeks ago


A. CodeBuild can't be use for AWS run unit tests and security scans, and for deploying and updating a solution with changes.
upvoted 1 times

" # bobokyo 8 months, 1 week ago


D is correct.
https://docs.aws.amazon.com/solutions/latest/smart-product-solution/components.html
upvoted 1 times

" # cldy 11 months ago


D. Allow users to download solution code artifacts. Use AWS CodeCommit and AWS CodePipeline for the CI/CD pipeline. Use the AWS Cloud
Development Kit constructs for different solution features, and use the manifest file to turn features on and off. Use AWS CodeBuild to run unit
tests and security scans, and for deploying and updating a solution with changes.
upvoted 2 times

" # AzureDP900 11 months ago


I will go with D.
upvoted 1 times

" # acloudguru 11 months, 2 weeks ago


Selected Answer: D
Correct is D. CodePipeline + Cloud Development Kits for turn features on and off + CodeBuild to run unit tests
upvoted 4 times

" # WhyIronMan 1 year ago


I'll go with D
upvoted 2 times

" # Waiweng 1 year ago


it's D
upvoted 5 times

" # blackgamer 1 year ago


D seems to be correct.
upvoted 3 times

" # alisyech 1 year ago


D is correct
upvoted 2 times

" # Kian1 1 year ago


going with D
upvoted 3 times

" # lechuk 1 year ago


Seems there is somthing missing in D. CodeBuild is not intended to deploy
upvoted 1 times

" # Ebi 1 year ago


Answer is D
upvoted 6 times

" # Bulti 1 year ago


D is the correct answer. No other options allows the developers to modify the solution code and deploy it using CodePipeline the way D does.
upvoted 2 times

" # T14102020 1 year, 1 month ago


Correct is D. CodePipeline + Cloud Development Kits for turn features on and off + CodeBuild to run unit tests
upvoted 4 times

" # ting_66 1 year, 1 month ago


CodeBuild can build, test, and ofc run scanning job. CDK is a CloudFormation for developers.
D is correct
upvoted 3 times


Question #562 Topic 1

A company uses Amazon S3 to host a web application. Currently, the company uses a continuous integration tool running on an Amazon EC2
instance that builds and deploys the application by uploading it to an S3 bucket. A Solutions Architect needs to enhance the security of the
company's platform with the following requirements:
✑ A build process should be run in a separate account from the account hosting the web application.
✑ A build process should have minimal access in the account it operates in.
✑ Long-lived credentials should not be used.
As a start, the Development team created two AWS accounts: one for the application named web account process; other is a named build
account.
Which solution should the Solutions Architect use to meet the security requirements?

A. In the build account, create a new IAM role, which can be assumed by Amazon EC2 only. Attach the role to the EC2 instance running the
continuous integration process. Create an IAM policy to allow s3: PutObject calls on the S3 bucket in the web account. In the web account,
create an S3 bucket policy attached to the S3 bucket that allows the build account to use s3:PutObject calls.

B. In the build account, create a new IAM role, which can be assumed by Amazon EC2 only. Attach the role to the EC2 instance running the
continuous integration process. Create an IAM policy to allow s3: PutObject calls on the S3 bucket in the web account. In the web account,
create an S3 bucket policy attached to the S3 bucket that allows the newly created IAM role to use s3:PutObject calls.

C. In the build account, create a new IAM user. Store the access key and secret access key in AWS Secrets Manager. Modify the continuous
integration process to perform a lookup of the IAM user credentials from Secrets Manager. Create an IAM policy to allow s3: PutObject calls
on the S3 bucket in the web account, and attach it to the user. In the web account, create an S3 bucket policy attached to the S3 bucket that
allows the newly created IAM user to use s3:PutObject calls.

D. In the build account, modify the continuous integration process to perform a lookup of the IAM user credentials from AWS Secrets Manager.
In the web account, create a new IAM user. Store the access key and secret access key in Secrets Manager. Attach the PowerUserAccess IAM
policy to the IAM user.

Correct Answer: A

Community vote distribution


B (100%)
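
For answer B, the bucket policy in the web account would reference the build account's role rather than the whole account. A minimal sketch (account IDs, role name and bucket name are hypothetical):

# Minimal sketch of answer B: allow only the build account's EC2 instance
# role to upload build artifacts to the web account's bucket.
# Account IDs, role and bucket names are placeholders.
import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowBuildRolePutObject",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:role/ci-build-role"},
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::example-web-app-bucket/*",
        }
    ],
}

boto3.client("s3").put_bucket_policy(
    Bucket="example-web-app-bucket", Policy=json.dumps(policy)
)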

" # Nemer Highly Voted $ 1 year, 1 month ago


B. No long term credentials -> use roles. Bucket policies to grant permissions to the role, not the account itself.
upvoted 20 times

" # joe16 1 year ago


B.
https://aws.amazon.com/premiumsupport/knowledge-center/s3-cross-account-upload-access/
upvoted 4 times

" # gnic Most Recent % 2 months, 1 week ago


Selected Answer: B
It's B. "allow new role to use the API putObject"
upvoted 1 times

" # aandc 4 months, 1 week ago


Selected Answer: B
B BBBBB
upvoted 1 times

" # roka_ua 7 months, 1 week ago


Selected Answer: B
Vote B
upvoted 1 times

" # cannottellname 8 months, 3 weeks ago


When DEV account assumes role in PROD account, the s3 only has to allow the role of PROD account to make any changes. 2 way trust is not
required in S3 policy - it is at IAM level which should already be taken care.

BBB

upvoted 1 times
" # shotty1 9 months, 2 weeks ago
I am pretty sure it is A. Using a role as a trusted Principal for cross account access has never worked for me, even though the documentation is
sometimes a bit vague on that topic.
upvoted 2 times

" # pititcu667 9 months, 3 weeks ago


Selected Answer: B
roles should be used no?
upvoted 1 times

" # bwestpha 10 months, 2 weeks ago


It's A. B is just creating a policy, but not a role which can be used by anything. Additionally there should be a two way trust established, but isn't.
It's not good to enable the complete build-account to write into the bucket by the bucket policy, but at least this scenario will work and fulfills the
requirements. A
upvoted 1 times

" # AzureDP900 11 months ago


I am going with B. Initially I thought of D , however that doesn't make any sense.
upvoted 2 times

" # moon2351 1 year ago


Answer is B
upvoted 1 times

" # denccc 1 year ago


I would think it's A, no? Can you allow a remote role in your bucket policy?
upvoted 1 times

" # denccc 1 year ago


Okay it's B: https://aws.amazon.com/blogs/security/how-to-restrict-amazon-s3-bucket-access-to-a-specific-iam-role/
upvoted 5 times

" # WhyIronMan 1 year ago


I'll go with B
upvoted 3 times

" # Waiweng 1 year ago


it's B
upvoted 4 times

" # blackgamer 1 year ago


Going with B
upvoted 1 times

" # certainly 1 year ago


A. is correct. https://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_cross-account-with-roles.html
upvoted 1 times

" # sarah_t 1 year ago


Your link says B:
"At the end of this tutorial, you have the following:
- Users in the Development account (the trusted account) that are allowed to assume a specific role in the Production account.
- A role in the Production account (the trusting account) that is allowed to access a specific Amazon S3 bucket.
- The productionapp bucket in the Production account."
upvoted 2 times

" # Kian1 1 year, 1 month ago


going with B
upvoted 2 times

" # Firststack 1 year, 1 month ago


B is correct
upvoted 2 times


Question #563 Topic 1

A fleet of Amazon ECS instances is used to poll an Amazon SQS queue and update items in an Amazon DynamoDB database. Items in the table
are not being updated, and the SQS queue is filling up. Amazon CloudWatch Logs are showing consistent 400 errors when attempting to update
the table. The provisioned write capacity units are appropriately configured, and no throttling is occurring.
What is the LIKELY cause of the failure?

A. The ECS service was deleted.

B. The ECS configuration does not contain an Auto Scaling group.

C. The ECS instance task execution IAM role was modified.

D. The ECS task role was modified.

Correct Answer: C

Community vote distribution


D (100%)
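
The distinction the answers hinge on can be seen in an ECS task definition, where the execution role (used by the ECS agent to pull images and ship logs) and the task role (used by the application to call SQS and DynamoDB) are separate fields. A minimal sketch with hypothetical ARNs and names:

# Minimal sketch: executionRoleArn is used by the ECS agent, while
# taskRoleArn is what the container's code uses to call DynamoDB and SQS
# (the role at issue in answer D). All ARNs, names and the image URI are
# placeholders.
import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="sqs-to-dynamodb-worker",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",
    memory="512",
    executionRoleArn="arn:aws:iam::111122223333:role/ecsTaskExecutionRole",
    taskRoleArn="arn:aws:iam::111122223333:role/worker-dynamodb-access",
    containerDefinitions=[
        {
            "name": "worker",
            "image": "111122223333.dkr.ecr.us-east-1.amazonaws.com/worker:latest",
            "essential": True,
        }
    ],
)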

" # Nemer Highly Voted $ 1 year, 1 month ago


D. Between executionRoleArn (option C) and taskRoleArn (D), only the latter is used to interact with DynamoDB. The former is used to download
images or write logs to Cloudwatch.

https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html

Status 400 with DynamoDB. Here,probably an authn failure due to someone messing up the role.
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Programming.Errors.html#Programming.Errors.MessagesAndCodes
upvoted 32 times

" # MarcChartouny 1 year ago


Nemer in Arabic means 'Tiger'... And it seems you are a real AWS Tiger Man!! #Guru_Level
upvoted 9 times

" # RVivek Most Recent % 9 months, 2 weeks ago


D is the answer
upvoted 1 times

" # GeniusMikeLiu 11 months ago


Selected Answer: D
https://sysadmins.co.za/difference-with-ecs-task-and-execution-iam-roles-on-aws/
upvoted 2 times

" # AzureDP900 11 months ago


I will go with D, This question is part of Neal Davis practice tests.
upvoted 2 times

" # backfringe 11 months, 2 weeks ago


I go with D
upvoted 1 times

" # acloudguru 11 months, 3 weeks ago


Selected Answer: D
C is only for agent related tasks such as cloudwatch, secret manager ,ECR, while this is 400 error, must be something wrong between
DynamoDB, so such role should be D.
upvoted 2 times

" # WhyIronMan 1 year ago


I'll go with D
upvoted 3 times

" # tuananhngo 1 year ago


D IS CORRECT
upvoted 3 times

" # Waiweng 1 year ago


it's D

upvoted 4 times
" # blackgamer 1 year ago
D is the answer.
upvoted 2 times

" # certainly 1 year ago


agree D.
upvoted 1 times

" # Kian1 1 year ago


going with D
upvoted 2 times

" # Firststack 1 year ago


D - Task role modification
upvoted 2 times

" # Ebi 1 year ago


I will go with D
upvoted 4 times

" # Bulti 1 year ago


D is the correct answer.
upvoted 1 times

" # T14102020 1 year, 1 month ago


Correct is D. Task Role
upvoted 1 times

" # jackdryan 1 year, 1 month ago


I'll go with D
upvoted 2 times


Question #564 Topic 1

A mobile gaming application publishes data continuously to Amazon Kinesis Data Streams. An AWS Lambda function processes records from the
data stream and writes to an Amazon DynamoDB table. The DynamoDB table has an auto scaling policy enabled with the target utilization set to
70%.
For several minutes at the start and end of each day, there is a spike in traffic that often exceeds five times the normal load. The company notices
the
GetRecords.IteratorAgeMilliseconds metric of the Kinesis data stream temporarily spikes to over a minute for several minutes. The AWS Lambda
function writes
ProvisionedThroughputExceededException messages to Amazon CloudWatch Logs during these times, and some records are redirected to the
dead letter queue.
No exceptions are thrown by the Kinesis producer on the gaming application.
What change should the company make to resolve this issue?

A. Use Application Auto Scaling to set a scaling schedule to scale out write capacity on the DynamoDB table during predictable load spikes.

B. Use Amazon CloudWatch Events to monitor the dead letter queue and invoke a Lambda function to automatically retry failed records.

C. Reduce the DynamoDB table auto scaling policy's target utilization to 20% to more quickly respond to load spikes.

D. Increase the number of shards in the Kinesis data stream to increase throughput capacity.

Correct Answer: D

Community vote distribution


A (71%) D (29%)
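
To make answer A concrete, scheduled scaling for a DynamoDB table is configured through Application Auto Scaling. A minimal sketch (table name, capacities and cron expression are hypothetical; the idea is to raise write capacity shortly before the known daily spike):

# Minimal sketch of answer A: register the table as a scalable target and
# schedule a write-capacity scale-out ahead of the predictable spike.
# Table name, capacities and schedule are placeholders.
import boto3

aas = boto3.client("application-autoscaling")

aas.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/GameEvents",
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    MinCapacity=100,
    MaxCapacity=4000,
)

aas.put_scheduled_action(
    ServiceNamespace="dynamodb",
    ScheduledActionName="pre-warm-morning-spike",
    ResourceId="table/GameEvents",
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    Schedule="cron(45 7 * * ? *)",  # shortly before the expected spike
    ScalableTargetAction={"MinCapacity": 2000, "MaxCapacity": 4000},
)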

" # hailiang Highly Voted $ 1 year, 1 month ago


It's A. The alerts clearly indicate the problem was caused by a sudden spike in traffic. Autoscaling on DDB didn't work because of the suddenness of
the spike, which is why you need to scale out the DDB before the traffic spike comes in rather than wait for the actual spike to trigger the scaling
upvoted 18 times

" # sam422 1 year, 1 month ago


It makes sense to auto scale dynamodb when cpu utilisation is being spiked, rather than predicting the spike time
upvoted 2 times

" # sarah_t 1 year ago


This https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/AutoScaling.html points to C, not A
upvoted 1 times

" # sarah_t 1 year ago


However, after reading this https://aws.amazon.com/about-aws/whats-new/2017/11/scheduled-scaling-now-available-for-application-
auto-scaling/ I am probably going with A...
upvoted 2 times

" # b3llman Highly Voted $ 1 year, 1 month ago


Ans: C
Although it had auto scaling enabled in Dynamodb, it did not scale quick enough. Dynamodb's auto scaling relies on cloudwatch alarms and it
takes at least a minute to trigger each scaling based on the 70% utilisation target. This was explained in the GetRecords.IteratorAgeMilliseconds
matrix from Kinesis that lambda was not getting records from Kinesis quick enough.
https://docs.aws.amazon.com/streams/latest/dev/monitoring-with-cloudwatch.html

Since the spikes were huge and it hit the provisioned WCU during that time before auto-scaling could kick in. It resulted in
ProvisionedThroughputExceededException from Dynamodb. As a result, it took a few rounds (a few mins) to scale to the desired utilisation
target.
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/AutoScaling.html

So, the solution is to lower the utilisation target and let it scale ASAP.
upvoted 9 times

" # JohnPi Most Recent % 1 month ago


DynamoDB auto scaling modifies provisioned throughput settings only when the actual workload stays elevated (or depressed) for a sustained
period of several minutes. The Application Auto Scaling target tracking algorithm seeks to keep the target utilization at or near your chosen value
over the long term.

Sudden, short-duration spikes of activity are accommodated by the table's built-in burst capacity.
upvoted 2 times


" # AwsBRFan 1 month, 2 weeks ago


Selected Answer: A
Since issue can be related to consumers, then changing to A
upvoted 2 times

" # AwsBRFan 1 month, 2 weeks ago


Selected Answer: D
https://aws.amazon.com/pt/premiumsupport/knowledge-center/kinesis-data-streams-iteratorage-metric/

"However, if the processing time cannot be reduced, then consider upscaling the Kinesis stream by increasing the number of shards."
upvoted 2 times

" # jj22222 7 months, 1 week ago


Selected Answer: A
A. Use Application Auto Scaling to set a scaling schedule to scale out write capacity on the DynamoDB table during predictable load spikes.
upvoted 2 times

" # limeboi18 9 months, 4 weeks ago


Selected Answer: A
I think it's A
upvoted 1 times

" # tkanmani76 10 months, 2 weeks ago


Option A.
This is a case of piling records for processing. Kinesis GetRecords.IteratorAgeMilliseconds increasing indicates that records are being processed
slowly and this highlights the risk of records expiring. ProvisionedThroughputExceededException indicates the request rate is too high. AWS API Doc
says - Reduce the frequency of requests and use exponential backoff so they can be processed. To ensure the records are processed quickly
during surge times which is known ahead write capacity should be increased.
upvoted 2 times

" # tkanmani76 10 months, 2 weeks ago


Related information - When Kinesis Producer is writing to KDS - the capacity is determined by the number of shards ( provisioned mode
where the load is known). AWS supports on-demand mode where the shards are scaled up/down. Each shard for writing is able to handle
1MB/Sec. So if we need to increase write we need to increase the shards. This is not relevant in our case as the data is getting written and
Lambda is able to read from the shards.
upvoted 2 times

" # AzureDP900 11 months ago


A is right answer based on traffic surge that often surpasses five times the average load
upvoted 1 times

" # kirrim 1 year ago


You can tell the issue is with DynamoDB because Lambda is reporting a ProvisionedThroughputExceededException, which is part of the
DynamoDB SDK that Lambda code is using, indicating DynamoDB cannot keep up. So you know you're dealing with A or C. The root of the
problem is that even though DynamoDB is set up for autoscaling, it takes a few minutes for it to happen. Merely adjusting the auto scaling policy
thresholds can't change that fact, it's still going to take a while to scale up. If the traffic was a slow ramp up, you might be able to get away with
C, but this is a sudden flood that happens twice per day. Since this is very predictable and on a schedule, the easiest method is to schedule the
scale-up to happen in advance of the flood hitting. (A)

https://aws.amazon.com/premiumsupport/knowledge-center/kinesis-data-streams-iteratorage-metric/
https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/dynamodbv2/model
/ProvisionedThroughputExceededException.html
https://docs.aws.amazon.com/autoscaling/application/userguide/application-auto-scaling-scheduled-scaling.html
upvoted 5 times

" # tgv 1 year ago


AAA
---
upvoted 2 times

" # WhyIronMan 1 year ago


I'll go with A
upvoted 1 times

" # Kopa 1 year ago


Im for A, it happens on scheduled time so why not choose schedule automatic scale...
upvoted 2 times

" # Waiweng 1 year ago


it's A
upvoted 3 times


" # blackgamer 1 year ago


My answer is A.
upvoted 1 times

" # digimaniac 1 year ago


A
First you need to figure out where the congestion is. It is between Lambda and DynamoDB. Then, you need to understand the auto scaling of
DynamoDB: it only reacts after a few minutes of sustained spike. Adjusting the target down actually won't do anything. In reality, DynamoDB has
burst capacity which can handle a 5-minute burst.
upvoted 3 times

" # AJBA 1 year ago


A is the Answer
"The first is predictable traffic, which means the scheduled actions. An example of predictable traffic is when your Kinesis Data Stream endpoint
sees growing traffic in specific time window. In this case, you can make sure that an Application Auto Scaling scheduled action increases the
number of Kinesis Data Stream shards to meet the demand. For instance, you might increase the number of shards at 12:00 p.m. and decrease
them at 8:00 p.m."
https://aws.amazon.com/blogs/big-data/scaling-amazon-kinesis-data-streams-with-aws-application-auto-scaling/#aws-comment-trigger-
5929:~:text=The%20first%20is%20predictable%20traffic%2C%20which,and%20decrease%20them%20at%208%3A00%20p.m.
upvoted 1 times


Question #565 Topic 1

A company has a web application that securely uploads pictures and videos to an Amazon S3 bucket. The company requires that only
authenticated users are allowed to post content. The application generates a presigned URL that is used to upload objects through a browser
interface. Most users are reporting slow upload times for objects larger than 100 MB.
What can a Solutions Architect do to improve the performance of these uploads while ensuring only authenticated users are allowed to post
content?

A. Set up an Amazon API Gateway with an edge-optimized API endpoint that has a resource as an S3 service proxy. Configure the PUT method
for this resource to expose the S3 PutObject operation. Secure the API Gateway using a COGNITO_USER_POOLS authorizer. Have the browser
interface use API Gateway instead of the presigned URL to upload objects.

B. Set up an Amazon API Gateway with a regional API endpoint that has a resource as an S3 service proxy. Configure the PUT method for this
resource to expose the S3 PutObject operation. Secure the API Gateway using an AWS Lambda authorizer. Have the browser interface use API
Gateway instead of the presigned URL to upload API objects.

C. Enable an S3 Transfer Acceleration endpoint on the S3 bucket. Use the endpoint when generating the presigned URL. Have the browser
interface upload the objects to this URL using the S3 multipart upload API.

D. Configure an Amazon CloudFront distribution for the destination S3 bucket. Enable PUT and POST methods for the CloudFront cache
behavior. Update the CloudFront origin to use an origin access identity (OAI). Give the OAI user s3:PutObject permissions in the bucket policy.
Have the browser interface upload objects using the CloudFront distribution.

Correct Answer: C
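
As a concrete sketch of answer C, the presigned URL is generated against the bucket's Transfer Acceleration endpoint (the bucket must already have acceleration enabled; bucket and key names below are hypothetical, and for objects over 100 MB the browser would combine this with the multipart upload API, one presigned URL per part):

# Minimal sketch of answer C: presign an upload URL that uses the
# S3 Transfer Acceleration endpoint. Bucket/key are placeholders.
import boto3
from botocore.config import Config

s3 = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))

url = s3.generate_presigned_url(
    "put_object",
    Params={"Bucket": "example-media-uploads", "Key": "uploads/video.mp4"},
    ExpiresIn=3600,
)
print(url)  # points at <bucket>.s3-accelerate.amazonaws.com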

" # Nemer Highly Voted $ 1 year, 1 month ago


C. S3 Transfer Acceleration + multipart upload for performance, presigned URLs for access.
upvoted 16 times

" # cldy Most Recent % 11 months ago


C. Enable an S3 Transfer Acceleration endpoint on the S3 bucket. Use the endpoint when generating the presigned URL. Have the browser
interface upload the objects to this URL using the S3 multipart upload API.
upvoted 1 times

" # AzureDP900 11 months ago


S3 Transfer Acceleration is right choice, I will go with C.
upvoted 1 times

" # moon2351 1 year ago


Answer is C
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with C
upvoted 1 times

" # Chibuzo1 1 year ago


The answer is C.
. In general, when your object size reaches 100 MB, you should consider using multipart uploads instead of uploading the object in a single
operation.
upvoted 4 times

" # Waiweng 1 year ago


it's C
upvoted 3 times

" # Kian1 1 year ago


ofc going with C
upvoted 2 times

" # Ebi 1 year ago


I go with C
upvoted 3 times

" # Bulti 1 year ago


The question is about uploading the object faster not about retrieving uploaded objects faster and hence the answer is C. When using
CloudFront to upload objects with S3 as origin the request goes through the Edge servers but doesn't use the S3 Transfer acceleration feature to
accelerate the upload. Uploading speeds from slow to fast - direct S3-> Cloudfront to S3-> S3 transfer acceleration
upvoted 3 times
" # Britts 1 year ago
No brainer. C
upvoted 1 times

" # T14102020 1 year ago


Correct is C. S3 Transfer Acceleration
upvoted 1 times

" # jackdryan 1 year ago


I'll go with C
upvoted 2 times

" # CYL 1 year ago


C. Multipart upload and S3 transfer acceleration to handle the upload challenge. Presigned URL to ensure only the right users can do the upload.
upvoted 2 times

" # Paitan 1 year ago


C is the right option
upvoted 1 times

" # kanavpeer 1 year, 1 month ago


D could be the answer, but POST and PUT methods are not supported as cache in cloudfront
https://docs.aws.amazon.com/cloudfront/latest/APIReference/API_CachedMethods.html
upvoted 4 times

" # sam422 1 year ago


Issue is with S3 upload right , I didn't see a cache issue?
upvoted 1 times

" # Anila_Dhharisi 1 year, 1 month ago


C is right answer
upvoted 2 times


Question #566 Topic 1

A company's CISO has asked a Solutions Architect to re-engineer the company's current CI/CD practices to make sure patch deployments to its
applications can happen as quickly as possible with minimal downtime if vulnerabilities are discovered. The company must also be able to quickly
roll back a change in case of errors.
The web application is deployed in a fleet of Amazon EC2 instances behind an Application Load Balancer. The company is currently using GitHub
to host the application source code, and has configured an AWS CodeBuild project to build the application. The company also intends to use AWS
CodePipeline to trigger builds from GitHub commits using the existing CodeBuild project.
What CI/CD configuration meets all of the requirements?

A. Configure CodePipeline with a deploy stage using AWS CodeDeploy configured for in-place deployment. Monitor the newly deployed code,
and, if there are any issues, push another code update.

B. Configure CodePipeline with a deploy stage using AWS CodeDeploy configured for blue/green deployments. Monitor the newly deployed
code, and, if there are any issues, trigger a manual rollback using CodeDeploy.

C. Configure CodePipeline with a deploy stage using AWS CloudFormation to create a pipeline for test and production stacks. Monitor the
newly deployed code, and, if there are any issues, push another code update.

D. Configure the CodePipeline with a deploy stage using AWS OpsWorks and in-place deployments. Monitor the newly deployed code, and, if
there are any issues, push another code update.

Correct Answer: B
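
For answer B, the blue/green behaviour and the quick rollback both live in the CodeDeploy deployment group. A rough boto3 sketch (application, target group, Auto Scaling group and role names are hypothetical, not taken from the question):

# Minimal sketch of answer B: a CodeDeploy deployment group configured for
# blue/green deployments behind the ALB, with automatic rollback on failure.
# All names and ARNs are placeholders.
import boto3

codedeploy = boto3.client("codedeploy")

codedeploy.create_deployment_group(
    applicationName="web-app",
    deploymentGroupName="web-app-blue-green",
    serviceRoleArn="arn:aws:iam::111122223333:role/CodeDeployServiceRole",
    autoScalingGroups=["web-app-asg"],
    deploymentStyle={
        "deploymentType": "BLUE_GREEN",
        "deploymentOption": "WITH_TRAFFIC_CONTROL",
    },
    loadBalancerInfo={"targetGroupInfoList": [{"name": "web-app-tg"}]},
    blueGreenDeploymentConfiguration={
        "greenFleetProvisioningOption": {"action": "COPY_AUTO_SCALING_GROUP"},
        "deploymentReadyOption": {"actionOnTimeout": "CONTINUE_DEPLOYMENT"},
        "terminateBlueInstancesOnDeploymentSuccess": {
            "action": "TERMINATE",
            "terminationWaitTimeInMinutes": 60,
        },
    },
    autoRollbackConfiguration={"enabled": True, "events": ["DEPLOYMENT_FAILURE"]},
)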

" # Nemer Highly Voted $ 1 year, 1 month ago


B seems about right: blue/green deployments to minimize downtime (as opposed to in-place deployments) + deployments can be rolled back
automatically or manually with CodeDeploy.
https://docs.aws.amazon.com/codedeploy/latest/userguide/deployments.html
upvoted 17 times

" # b3llman Highly Voted $ 1 year, 1 month ago


Ans: A
Since the requirement is "as quickly as possible with minimal downtime". Blue/green is not as quick and the question didn't ask for zero
downtime.
upvoted 5 times

" # angelsrp 1 year, 1 month ago


Any links?
upvoted 1 times

" # petebear55 1 year ago


You're missing the point of the question: "push another code update" is NOT a ROLLBACK.... It's important to READ the question in the exam
... B is right as it is very fluid.
upvoted 3 times

" # sashsz 7 months, 1 week ago


You are missing the question's requirements.
upvoted 1 times

" # kadev Most Recent % 2 months, 1 week ago


"push another code update" i dont like that => need to rollback to latest stable version => B
upvoted 1 times

" # cldy 11 months ago


B. Configure CodePipeline with a deploy stage using AWS CodeDeploy configured for blue/green deployments. Monitor the newly deployed
code, and, if there are any issues, trigger a manual rollback using CodeDeploy.
upvoted 1 times

" # AzureDP900 11 months ago


I will go with B, Blue/Green is fast to rollback.
upvoted 3 times

" # WhyIronMan 1 year ago


I'll go with B
upvoted 1 times


" # Waiweng 1 year ago


it's B
upvoted 3 times

" # KnightVictor 1 year ago


No brainer. going with B
upvoted 1 times

" # Kian1 1 year ago


going with B
upvoted 2 times

" # Firststack 1 year ago


B Blue/Green
upvoted 3 times

" # Ebi 1 year ago


I go with B
upvoted 4 times

" # kopper2019 1 year ago


B as well
upvoted 1 times

" # Bulti 1 year, 1 month ago


B is the right answer as it provides the least downtime option.
upvoted 1 times

" # T14102020 1 year, 1 month ago


Correct is B. blue/green deployments
upvoted 1 times

" # jackdryan 1 year, 1 month ago


I'll go with B
upvoted 2 times

" # CYL 1 year, 1 month ago


B. Use blue/green deployment to minimize downtime. The rest of the options do not allow for low downtime during deployment.
upvoted 1 times

" # Anila_Dhharisi 1 year, 1 month ago


B is right answer. Yes Blue/Green deployments has minimal downtime when compared to in-place and can be rollback automatically.
upvoted 2 times


Question #567 Topic 1

A company wants to analyze log data using date ranges with a custom application running on AWS. The application generates about 10 GB of data
every day, which is expected to grow. A Solutions Architect is tasked with storing the data in Amazon S3 and using Amazon Athena to analyze the
data.
Which combination of steps will ensure optimal performance as the data grows? (Choose two.)

A. Store each object in Amazon S3 with a random string at the front of each key.

B. Store the data in multiple S3 buckets.

C. Store the data in Amazon S3 in a columnar format, such as Apache Parquet or Apache ORC.

D. Store the data in Amazon S3 in objects that are smaller than 10 MB.

E. Store the data using Apache Hive partitioning in Amazon S3 using a key that includes a date, such as dt=2019-02.

Correct Answer: BC

Community vote distribution


CE (100%)
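
Answers C and E come together in the table definition: store the data as Parquet and partition it by date so Athena only scans the partitions a query touches. A minimal sketch (database, column and bucket names are invented for the example):

# Minimal sketch of answers C + E: a Parquet-backed, date-partitioned
# external table for Athena. All names and locations are placeholders.
import boto3

ddl = """
CREATE EXTERNAL TABLE IF NOT EXISTS logs.app_events (
  request_id string,
  status_code int,
  latency_ms double
)
PARTITIONED BY (dt string)
STORED AS PARQUET
LOCATION 's3://example-log-bucket/parquet/'
"""

boto3.client("athena").start_query_execution(
    QueryString=ddl,
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)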

" # Nemer Highly Voted $ 1 year, 1 month ago


C & E: Optimal performance with Athena is achieved with columnar storage and partitioning the data.
https://aws.amazon.com/blogs/big-data/top-10-performance-tuning-tips-for-amazon-athena/
upvoted 25 times

" # cldy Most Recent % 10 months, 1 week ago


C and E.
upvoted 1 times

" # cldy 11 months ago


C. Store the data in Amazon S3 in a columnar format, such as Apache Parquet or Apache ORC.
E. Store the data using Apache Hive partitioning in Amazon S3 using a key that includes a date, such as dt=2019-02.
upvoted 1 times

" # challenger1 11 months ago


Selected Answer: CE
My Answer: C & E
upvoted 1 times

" # AzureDP900 11 months ago


C, E is correct . This question is part of Neal Davis practice test
upvoted 2 times

" # acloudguru 11 months, 1 week ago


Selected Answer: CE
C & E: Optimal performance with Athena is achieved with columnar storage and partitioning the data.
https://aws.amazon.com/blogs/big-data/top-10-performance-tuning-tips-for-amazon-athena/
upvoted 1 times

" # moon2351 1 year ago


CE is correct
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with C,E
upvoted 2 times

" # oscargee 1 year ago


B & C: Athena is used for S3 query only. In question they mentioned Athena not HIVE, so don't chose E.
upvoted 1 times

" # tkanmani76 10 months, 1 week ago


https://docs.aws.amazon.com/athena/latest/ug/partitions.html
This will clarify why E.
upvoted 1 times


" # Waiweng 1 year ago


it's C&E
upvoted 4 times

" # blackgamer 1 year ago


C & E. Don't confuse Apache Hive bucketing with AWS S3 Bucket.
upvoted 1 times

" # kiev 1 year ago


Full House with CE
upvoted 1 times

" # Kian1 1 year ago


going with CE
upvoted 2 times

" # Firststack 1 year ago


C & E is correct
upvoted 2 times

" # Ebi 1 year ago


Definitely C,E are correct answers
upvoted 4 times

" # Bulti 1 year, 1 month ago


Answer is C & E- Optimize Columnar data store + Partition to improve performance.
upvoted 2 times

" # T14102020 1 year, 1 month ago


Correct CE. Columnar storage and Hive partition
upvoted 1 times


Question #568 Topic 1

An advisory firm is creating a secure data analytics solution for its regulated financial services users. Users will upload their raw data to an
Amazon S3 bucket, where they have PutObject permissions only. Data will be analyzed by applications running on an Amazon EMR cluster
launched in a VPC. The firm requires that the environment be isolated from the internet. All data at rest must be encrypted using keys controlled
by the firm.
Which combination of actions should the Solutions Architect take to meet the user's security requirements? (Choose two.)

A. Launch the Amazon EMR cluster in a private subnet configured to use an AWS KMS CMK for at-rest encryption. Configure a gateway VPC
endpoint for Amazon S3 and an interface VPC endpoint for AWS KMS.

B. Launch the Amazon EMR cluster in a private subnet configured to use an AWS KMS CMK for at-rest encryption. Configure a gateway VPC
endpoint for Amazon S3 and a NAT gateway to access AWS KMS.

C. Launch the Amazon EMR cluster in a private subnet configured to use an AWS CloudHSM appliance for at-rest encryption. Configure a
gateway VPC endpoint for Amazon S3 and an interface VPC endpoint for CloudHSM.

D. Configure the S3 endpoint policies to permit access to the necessary data buckets only.

E. Configure the S3 bucket policies to permit access using an aws:sourceVpce condition to match the S3 endpoint ID.

Correct Answer: AE

Community vote distribution


AE (100%)
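
Answer E in practice is a bucket policy condition on the S3 gateway endpoint ID. A minimal sketch (bucket name and endpoint ID are hypothetical; note that a blanket Deny like this also blocks console access, so real policies are usually scoped more carefully):

# Minimal sketch of answer E: only accept requests that arrive through the
# firm's S3 gateway VPC endpoint. Bucket name and vpce ID are placeholders.
import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyAccessOutsideVpcEndpoint",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::example-raw-uploads",
                "arn:aws:s3:::example-raw-uploads/*",
            ],
            "Condition": {
                "StringNotEquals": {"aws:sourceVpce": "vpce-0123456789abcdef0"}
            },
        }
    ],
}

boto3.client("s3").put_bucket_policy(
    Bucket="example-raw-uploads", Policy=json.dumps(policy)
)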

" # ipindado2020 Highly Voted $ 1 year, 1 month ago


A. KEYS not controlled by the firm(AWS KMS). KO
B. KEYS not controlled by the firm(AWS KMS) and access through internet. KO
C. KEYS controlled by the firm (CloudHSM) and access to AWS public resources trhough internal VPC endpoints. OK.
D. This restricts that financial service users can access just to this bucket trhough the vpc link, does not prevent anybody else to read the bucket.
KO
E. This will enforce the access to the bucket from the financial users vpc. OK.

Then CE
upvoted 27 times

" # oraldevel 1 year, 1 month ago


Did you check about CMK??
upvoted 2 times

" # angelsrp 1 year ago


He is right, i think you are talking about costumer managed CMK which isnt mentioned in the answers.
upvoted 2 times

" # tiana528 11 months, 2 weeks ago


The question mentioned KMS CMK, which has two types, AWS-managed and customer-managed. So A is correct. Using KMS CMK as
the firm's key, no problem at all.
upvoted 2 times

" # arulrajjayaraj 1 year ago


I think the requirement here is " All data at rest must be encrypted using keys controlled by the firm " ,I think KMS would do that , CloudHSM
may be ideal for Customer Supplied Encryption keys with extra hardware security with no one has access to that .
upvoted 1 times

" # QCO 1 year ago


There is AWS managed CMK and customer managed CMKs. In this case as it applies to the question, the firm can use customer managed
CMK. Based on this, A is correct
upvoted 8 times

" # Sun`yhome 1 year ago


By default, AWS KMS creates the key material for a CMK. You cannot extract, export, view, or manage this key material. Also, you cannot
delete this key material; you must delete the CMK. **** However, you can import your own key material into a CMK ****
upvoted 3 times

" # joe16 1 year ago


A.E - Correct
C is incorrect. Here is the snippet from the CloudHSM FAQ page that clearly states that you need to import the CloudHSM managed key into the

AWS KMS to use SSE -


"AWS services integrate with AWS Key Management Service, which in turn is integrated with AWS CloudHSM through the KMS custom key
store feature. If you want to use the server-side encryption offered by many AWS services (such as EBS, S3, or Amazon RDS), you can do so
by configuring a custom key store in AWS KMS."
upvoted 4 times
" # Nemer Highly Voted $ 1 year, 1 month ago
A & E. VPC endpoints and bucket policies...without removing the existing PutObject permissions for the users who are uploading.
https://aws.amazon.com/premiumsupport/knowledge-center/block-s3-traffic-vpc-ip/
upvoted 27 times

" # Student1950 Most Recent % 3 months, 3 weeks ago


I would go with C and E
Reason for C: Encryption in transit is required not encryption at rest
https://docs.aws.amazon.com/cloudhsm/latest/userguide/data-protection.html
Application connects to CloudHSM using interface endpoint and S3 with gateway endpoint
Reason for E: Gateway endpoint need bucket policy to restrict from VPCE

upvoted 1 times

" # aandc 4 months ago


Selected Answer: AE
vote AE
upvoted 1 times

" # cldy 11 months ago


A. Launch the Amazon EMR cluster in a private subnet configured to use an AWS KMS CMK for at-rest encryption. Configure a gateway VPC
endpoint for Amazon S3 and an interface VPC endpoint for AWS KMS.
E. Configure the S3 bucket policies to permit access using an aws:sourceVpce condition to match the S3 endpoint ID.
upvoted 1 times

" # AzureDP900 11 months ago


A,E is correct
upvoted 1 times

" # wakame 1 year ago


Perhaps the issue is wrong.
I found that there were the following releases for CloudHSM:
https://aws.amazon.com/about-aws/whats-new/2021/02/introducing-amazon-vpc-endpoints-aws-cloudhsm/?nc1=h_ls
In other words, until February of this year, it was not possible to create a VPC endpoint in CloudHSM.
Therefore A & E is correct.
upvoted 1 times

" # Suresh108 1 year ago


SSE-S3: AWS manages both data key and master key

SSE-KMS: AWS manages data key and you manage master key

SSE-C: You manage both data key and master key

See this doc for more details: http://amzn.to/2iVsGvM

A ) Server-Side Encryption

SSE-S3 (AWS-Managed Keys) => When the requirement is to keep the encryption work simple and minimise the maintenance overhead then use
SSE-S3.

SSE-KMS (AWS KMS Keys) => When the requirement is to maintain a security audit trail then use SSE-KMS Keys.

SSE-C (Customer-Provided Keys) => When end-to-end encryption is not required and the client wants full control of his/her security keys, then
use SSE-C.

B) Client-Side Encryption

AWS KMS-managed, customer master key => When the requirement is to maintain end-to-end encryption plus a security audit trail, then use
AWS KMS Keys.

Client Managed Master Key => When the requirement is to maintain end-to-end encryption but the client wants full control of his/her security
keys, then use Client Managed Master Key.
upvoted 3 times

" # denccc 1 year ago


A and E
upvoted 2 times


" # WhyIronMan 1 year ago


I'll go with A,E
upvoted 2 times

" # pradhyumna 1 year ago


AE
While C looks like a close one it is not a complete one, the cluster instances would need HSM client software to make it work which is missing
from the answer. On the otherhand A just meets the requirements.
https://aws.amazon.com/cloudhsm/features/
upvoted 2 times

" # Waiweng 1 year ago


it's A ,E
upvoted 4 times

" # Amitv2706 1 year ago


For those who are voting for C,
Doubt that EMR supports CloudHSM based encryption option.

https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-data-encryption-options.html
upvoted 3 times

" # kiev 1 year ago


Guys inpindado is correct. I have confirmed with my materials from Neal Davis. The key requirements is to keep environment isolated from the
Internet and with that we could use AWS CLOUDHSM and VPC condition should match S3 endpoints ID.
upvoted 6 times

" # kiev 1 year ago


A and E for me. CMK is managed by firm and E is no question.
upvoted 2 times

" # Kian1 1 year ago


going for A,E
upvoted 2 times

" # Ebi 1 year ago


I will go with AE
upvoted 4 times

" # Ebi 1 year ago


KMS CMK is managed by firm, you don't need CloudHSM,
Also interface endpoint does not support CloudHSM, so C is NOT CORRECT answer
upvoted 4 times

" # RedKane 1 year ago


CloudHSM does support interface endpoint
https://docs.aws.amazon.com/vpc/latest/privatelink/integrated-services-vpce-list.html
upvoted 1 times

Question #569 Topic 1

While debugging a backend application for an IoT system that supports globally distributed devices, a Solutions Architect notices that stale data
is occasionally being sent to user devices. Devices often share data, and stale data does not cause issues in most cases. However, device
operations are disrupted when a device reads the stale data after an update.
The global system has multiple identical application stacks deployed in different AWS Regions. If a user device travels out of its home geographic
region, it will always connect to the geographically closest AWS Region to write or read data. The same data is available in all supported AWS
Regions using an Amazon
DynamoDB global table.
What change should be made to avoid causing disruptions in device operations?

A. Update the backend to use strongly consistent reads. Update the devices to always write to and read from their home AWS Region.

B. Enable strong consistency globally on a DynamoDB global table. Update the backend to use strongly consistent reads.

C. Switch the backend data store to Amazon Aurora MySQL with cross-region replicas. Update the backend to always write to the master
endpoint.

D. Select one AWS Region as a master and perform all writes in that AWS Region only. Update the backend to use strongly consistent reads.

Correct Answer: A

" # Nemer Highly Voted $ 1 year, 1 month ago


A. DynamoDB does not support strongly consistent reads ACROSS REGIONS. The stale data comes from writing to one region & reading from
another.
upvoted 14 times
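
As a concrete illustration of answer A, a strongly consistent read is just a flag on the request. The sketch below (boto3; the table name, key, and Region are made up) assumes the device writes to and reads from its home Region.

import boto3

# Hypothetical table/key names; the device's home Region is assumed to be us-east-1.
dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
table = dynamodb.Table("DeviceState")

# Write in the home Region...
table.put_item(Item={"device_id": "dev-123", "firmware": "2.4.1"})

# ...and read back with ConsistentRead=True. This only guarantees consistency
# within the same Region; DynamoDB global tables do not offer strongly
# consistent reads across Regions.
resp = table.get_item(Key={"device_id": "dev-123"}, ConsistentRead=True)
print(resp.get("Item"))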

" # cldy Most Recent % 11 months ago


A. Update the backend to use strongly consistent reads. Update the devices to always write to and read from their home AWS Region.
upvoted 1 times

" # AzureDP900 11 months ago


A is right
upvoted 1 times

" # moon2351 1 year ago


Answer is A
upvoted 1 times

" # Goram113 1 year ago


https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/V2globaltables_HowItWorks.html
upvoted 2 times

" # WhyIronMan 1 year ago


I'll go with A
upvoted 2 times

" # Waiweng 1 year ago


it's A
upvoted 2 times

" # Firststack 1 year ago


A is correct
upvoted 2 times

" # Ebi 1 year ago


A is the correct answer
upvoted 3 times

" # SD13 1 year ago


A - However, if your application requires strongly consistent reads, it must perform all of its strongly consistent reads and writes in the same
Region. DynamoDB does not support strongly consistent reads across Regions. Therefore, if you write to one Region and read from another
Region, the read response might include stale data that doesn't reflect the results of recently completed writes in the other Region.
Doc link - https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/V2globaltables_HowItWorks.html
upvoted 4 times

" # Bulti 1 year ago


A is the right answer. Dynamo DB doesn't support strong consistency on global tables cross-region. In order for strong consistency to work , the
application needs to write and read data from the same region.
upvoted 3 times

" # Britts 1 year, 1 month ago


A can't be right unless the the dynamodb global table gets replaced by a regional table first?
upvoted 2 times

" # T14102020 1 year, 1 month ago


Correct is A. Read only from Home Region
upvoted 1 times

" # jackdryan 1 year, 1 month ago


I'll go with A
upvoted 2 times

" # smartassX 1 year, 1 month ago


A. ——>
“ An application can read and write data to any replica table. If your application only uses eventually consistent reads and only issues reads
against one AWS Region, it will work without any modification. However, if your application requires strongly consistent reads, it must perform all
of its strongly consistent reads and writes in the same Region. DynamoDB does not support strongly consistent reads across Regions. Therefore,
if you write to one Region and read from another Region, the read response might include stale data that doesn't reflect the results of recently
completed writes in the other Region.

If applications update the same item in different Regions at about the same time, conflicts can arise. To help ensure eventual consistency,
DynamoDB global tables use a last writer wins reconciliation between concurrent updates, in which DynamoDB makes a best effort to determine
the last writer. With this conflict resolution mechanism, all the replicas will agree on the latest update and converge toward a state in which they
all have identical data. “
upvoted 2 times

" # CYL 1 year, 1 month ago


A. This combination allows for less restriction and impact to overall performance and allows for consistent read requirements. Tie a particular
user back to the home region.
upvoted 1 times

" # Anila_Dhharisi 1 year, 1 month ago


Answer is A. Yes, DynamoDB does not support strongly consistent reads across Regions. Therefore, if you write to one Region and read from
another Region, the read response might include stale data that doesn't reflect the results of recently completed writes in the other Region.
upvoted 2 times

Question #570 Topic 1

A software as a service (SaaS) company offers a cloud solution for document management to private law firms and the public sector. A local
government client recently mandated that highly confidential documents cannot be stored outside the country. The company CIO asks a Solutions
Architect to ensure the application can adapt to this new requirement. The CIO also wants to have a proper backup plan for these documents, as
backups are not currently performed.
What solution meets these requirements?

A. Tag documents that are not highly confidential as regular in Amazon S3. Create individual S3 buckets for each user. Upload objects to each
user's bucket. Set S3 bucket replication from these buckets to a central S3 bucket in a different AWS account and AWS Region. Configure an
AWS Lambda function triggered by scheduled events in Amazon CloudWatch to delete objects that are tagged as secret in the S3 backup
bucket.

B. Tag documents as either regular or secret in Amazon S3. Create an individual S3 backup bucket in the same AWS account and AWS Region.
Create a cross-region S3 bucket in a separate AWS account. Set proper IAM roles to allow cross-region permissions to the S3 buckets.
Configure an AWS Lambda function triggered by Amazon CloudWatch scheduled events to copy objects that are tagged as secret to the S3
backup bucket and objects tagged as normal to the cross-region S3 bucket.

C. Tag documents as either regular or secret in Amazon S3. Create an individual S3 backup bucket in the same AWS account and AWS Region.
Use S3 selective cross-region replication based on object tags to move regular documents to an S3 bucket in a different AWS Region.
Configure an AWS Lambda function that triggers when new S3 objects are created in the main bucket to replicate only documents tagged as
secret into the S3 bucket in the same AWS Region.

D. Tag highly confidential documents as secret in Amazon S3. Create an individual S3 backup bucket in the same AWS account and AWS
Region. Use S3 selective cross-region replication based on object tags to move regular documents to a different AWS Region. Create an
Amazon CloudWatch Events rule for new S3 objects tagged as secret to trigger an AWS Lambda function to replicate them into a separate
bucket in the same AWS Region.

Correct Answer: D

Community vote distribution


C (75%) D (25%)

" # Ebi Highly Voted $ 1 year ago


Answer is C
upvoted 13 times

" # Blackfry Most Recent % 1 day, 14 hours ago


Selected Answer: C
The difference between C and D is that both regular and confidential documents are tagged or only confidential documents are tagged. But we
can use Object tags, if there are any tag.
So when we wants to use Selective Cross-Region Replication based on Object Tags about regular documents, we should tag 'regular
documents'(or both).
upvoted 1 times

" # WayneYi 1 month ago


I will go with C. Because option D says that we are moving regular documents into a different region, it makes no sense at all
upvoted 1 times

" # tomosabc1 4 weeks, 1 day ago


In fact, to move regular documents is an implicit requirement from the question. Even option C says "...... to move regular documents to an S3
bucket in a different AWS Region".
upvoted 1 times

" # wassb 3 weeks, 6 days ago


@Bulti answer : Answer is C. D looks like an option except for the fact that the regular objects are not tagged.
upvoted 1 times

" # Harithareddynn 2 months, 3 weeks ago


Selected Answer: D
The only difference between C and D is S3 events/Cloud watch events - In case of C, S3 events cannot be triggered selectively based on tag, so
it would call Lambda for all documents - hence D is better.
upvoted 1 times

" # ciki 10 months, 3 weeks ago


I think the answer is D; CloudWatch Events verifies that the application can adapt to this new requirement.
upvoted 1 times

" # vbal 10 months, 3 weeks ago


https://aws.amazon.com/blogs/mt/monitor-tag-changes-on-aws-resources-with-serverless-workflows-and-amazon-cloudwatch-events/
Even after reading above page I am still not sure if it would work or not. But If I can trigger even based upon Each Object's Tag being Put into S3,
I would prefer D just because this is more efficient as Lambda would be triggered only for Secret documents and not for ALL the PUT Object
Events which is in-efficient IMO.
upvoted 1 times

" # cldy 11 months ago


C. Tag documents as either regular or secret in Amazon S3. Create an individual S3 backup bucket in the same AWS account and AWS Region.
Use S3 selective cross-region replication based on object tags to move regular documents to an S3 bucket in a different AWS Region. Configure
an AWS Lambda function that triggers when new S3 objects are created in the main bucket to replicate only documents tagged as secret into the
S3 bucket in the same AWS Region.
upvoted 1 times

" # AzureDP900 11 months ago


Use S3 selective cross-region replication , Answer is C.
upvoted 1 times

" # pcops 11 months, 2 weeks ago


I will go for C
upvoted 1 times

" # acloudguru 11 months, 3 weeks ago


Selected Answer: C
Answer = C.
https://docs.aws.amazon.com/AmazonS3/latest/dev/replication.html
upvoted 2 times

" # WhyIronMan 1 year ago


I'll go with C
upvoted 3 times

" # aws_arn_name 1 year ago


I think the answer is D. With C, only new confidential objects will be backed up; what about the old confidential objects? In addition, S3 events have
some lag, which rarely but still can cause lost data. CloudWatch is more reliable.
upvoted 2 times

" # wahlbergusa 11 months ago


Wrong. Although I got confused on the same point as well. The trick is for CloudWatch Event Rule you first need to enable CloudTrail Data
Events = > https://docs.aws.amazon.com/codepipeline/latest/userguide/create-cloudtrail-S3-source-console.html

Hence C is correct.
upvoted 1 times

" # Waiweng 1 year ago


it's C
upvoted 4 times

" # blackgamer 1 year ago


The answer is C. https://aws.amazon.com/about-aws/whats-new/2018/09/amazon-s3-announces-selective-crr-based-on-object-tags/
upvoted 3 times
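
To make the "selective cross-region replication based on object tags" part of option C concrete, a tag-filtered replication rule can be sketched as below (bucket names, role ARN, and the tag key/value are placeholders; when a Filter is used, the rule also needs a Priority and a DeleteMarkerReplication setting, and versioning must be enabled on both buckets).

import boto3

s3 = boto3.client("s3")

# Hypothetical names -- replace with real buckets and an IAM role that S3 can assume.
SOURCE_BUCKET = "example-docs-main"
DEST_BUCKET_ARN = "arn:aws:s3:::example-docs-backup-eu"
REPLICATION_ROLE = "arn:aws:iam::111122223333:role/s3-replication-role"

s3.put_bucket_replication(
    Bucket=SOURCE_BUCKET,
    ReplicationConfiguration={
        "Role": REPLICATION_ROLE,
        "Rules": [
            {
                "ID": "replicate-regular-docs",
                "Priority": 1,
                "Status": "Enabled",
                # Only objects tagged classification=regular leave the home Region.
                "Filter": {"Tag": {"Key": "classification", "Value": "regular"}},
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": DEST_BUCKET_ARN},
            }
        ],
    },
)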

" # 01037 1 year ago


C.
Region is treated as a country, though there are several Regions in US.
upvoted 1 times

" # Bulti 1 year ago


Answer is C. D looks like an option except for the fact that the regular objects are not tagged. Only highly confidential objects are tagged.
Otherwise it's possible to setup a CloudWatch Event rule on an S3 object load event and specify in the action to invoke Lambda function to copy
the secret files into a backup S3 bucket.
upvoted 3 times

" # LiamNg 6 months, 3 weeks ago


Thank you for pointing this out
upvoted 1 times

" # Britts 1 year ago


Not sure why lambda is needed here as we can do cross zone replication in AWS in the same account. Anyway will go with C

upvoted 1 times

Question #571 Topic 1

A company has an application that runs on a fleet of Amazon EC2 instances and stores 70 GB of device data for each instance in Amazon S3.
Recently, some of the S3 uploads have been failing. At the same time, the company is seeing an unexpected increase in storage data costs. The
application code cannot be modified.
What is the MOST efficient way to upload the device data to Amazon S3 while managing storage costs?

A. Upload device data using a multipart upload. Use the AWS CLI to list incomplete parts to address the failed S3 uploads. Enable the lifecycle
policy for the incomplete multipart uploads on the S3 bucket to delete the old uploads and prevent new failed uploads from accumulating.

B. Upload device data using S3 Transfer Acceleration. Use the AWS Management Console to address the failed S3 uploads. Use the Multi-
Object Delete operation nightly to delete the old uploads.

C. Upload device data using a multipart upload. Use the AWS Management Console to list incomplete parts to address the failed S3 uploads.
Configure a lifecycle policy to archive continuously to Amazon S3 Glacier.

D. Upload device data using S3 Transfer Acceleration. Use the AWS Management Console to list incomplete parts to address the failed S3
uploads. Enable the lifecycle policy for the incomplete multipart uploads on the S3 bucket to delete the old uploads and prevent new failed
uploads from accumulating.

Correct Answer: C
Reference:
https://docs.aws.amazon.com/amazonglacier/latest/dev/uploading-an-archive.html

Community vote distribution


A (100%)

" # Ebi Highly Voted $ 1 year ago


Answer is A
upvoted 10 times

" # ExtHo 1 year ago


Supporting Ebi. It should be A, because the most critical problem is that the console cannot display the information about failed multipart uploads;
this can only be viewed through the SDK/API. The question also says there are many unexpected data costs, which should refer to the storage fee
caused by failed multipart uploads (if you don't use multipart upload, the entire file upload simply fails, and there is no such partial-storage cost). It
can therefore be concluded that the application is already doing multipart uploads, so no additional code changes are required.
upvoted 10 times

" # kirrim 1 year ago


Technically, you can view failed multi-part uploads in the console using AWS Storage Lens:

https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage-lens-optimize-storage.html#locate-incomplete-mpu
https://aws.amazon.com/blogs/aws-cloud-financial-management/discovering-and-deleting-incomplete-multipart-uploads-to-lower-
amazon-s3-costs/

I still think A is the best answer, though!


upvoted 1 times
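
The two operational pieces of answer A, listing incomplete parts and expiring them with a lifecycle rule, can be sketched with boto3 as below (the bucket name and the 7-day window are assumptions).

import boto3

s3 = boto3.client("s3")
BUCKET = "example-device-data"  # hypothetical bucket name

# 1. List incomplete multipart uploads (equivalent to `aws s3api list-multipart-uploads`).
resp = s3.list_multipart_uploads(Bucket=BUCKET)
for upload in resp.get("Uploads", []):
    print(upload["Key"], upload["UploadId"], upload["Initiated"])

# 2. Lifecycle rule that aborts incomplete multipart uploads after 7 days,
#    so abandoned parts stop accumulating storage charges.
s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "abort-incomplete-mpu",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to the whole bucket
                "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
            }
        ]
    },
)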

" # Bulti Highly Voted $ 1 year ago


Between A and D, I will go with D only because A will require a code change. It is assumed that the application currently does not use multi-part
upload. Using S3 Transfer acceleration does not require code change. Identifying multi-part object failures is possible using both CLI and console
so I will go with D.
upvoted 6 times

" # Bulti 1 year ago


On reviewing the Option D again, I realized that it is assuming we are using multipart upload with S3 TA. This will also require a code change.
The only option then which will not require a code change is option B. So my final answer is Option B.
upvoted 2 times

" # 01037 1 year ago


Yes, only B doesn't need code change.
upvoted 1 times

" # 01037 1 year ago


But how to find out a failed upload?
Isn't the upload an all-or-nothing operation if it isn't a multipart upload?

upvoted 1 times
" # TechX Most Recent % 4 months, 1 week ago
Selected Answer: A
Agree with A, best solution here
upvoted 1 times

" # kangtamo 4 months, 1 week ago


Selected Answer: A
Agree with A: AWS CLI.
upvoted 1 times

" # CloudChef 10 months ago


A is correct
upvoted 1 times

" # AzureDP900 11 months ago


A is right
upvoted 1 times

" # pcops 11 months, 2 weeks ago


I will go with A
upvoted 1 times

" # sashenka 11 months, 3 weeks ago


DDD - main reason it is NOT A is because "Modifications to the application's code are not permitted." and taking advantage of S3 multipart
uploads REQUIRES modification to your code. SDK/API is provided and the S3 multipart upload function is different than the PUT of the S3
upload. Take a look here:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/mpu-upload-object.html

Additionally, TA is best practice for transferring large files to S3 buckets. As data arrives at the closest edge location, the data is routed to
Amazon S3 over an optimized network path. This will ensure more device uploads do not end up in a failed state.
upvoted 2 times

" # sashenka 11 months, 3 weeks ago


AAA - CORRECTION. It appears that if using the AWS SDK/CLI by DEFAULT when uploading a >5Mb file to an AWS S3 bucket multipart
upload will be used. That and I missed that listing of failed multipart upload objects CAN'T be viewed in the Management Console.
upvoted 3 times

" # moon2351 1 year ago


I think Answer is A
upvoted 1 times

" # DerekKey 1 year ago


"Use the AWS Management Console to list incomplete parts to address the failed S3 uploads" - not possible with Management Console
C & D - wrong
"Upload device data using S3 Transfer Acceleration" - can be used to move data between Regions. Not in this case
B & D - wrong
"Use the AWS Management Console to address the failed S3 uploads." - there is no functionality
B - wrong
"Use the AWS CLI to list incomplete parts to address the failed S3 uploads" - correct
"Enable the lifecycle policy for the incomplete multipart uploads on the S3 bucket to delete the old uploads and prevent new failed uploads from
accumulating." - correct
A - correct. I assume that they will not change the application and use CLI to upload files
upvoted 5 times

" # WhyIronMan 1 year ago


I'll go with A
upvoted 1 times

" # Desailly 1 year ago


Well described here https://aws.amazon.com/blogs/aws-cost-management/discovering-and-deleting-incomplete-multipart-uploads-to-lower-
amazon-s3-costs/
upvoted 1 times

" # SPRao 1 year ago


If some of the uploads are failing and the cost is increasing, it means the upload is already multipart.
Hence the only ask is how to reduce the cost, and that can be done by deleting the failed uploads
from S3. Hence A makes sense.
upvoted 2 times

" # oscargee 1 year ago


Note: You aren’t able to view the parts of your incomplete multipart upload in the AWS Management Console.
https://aws.amazon.com/cn/blogs/aws-cost-management/discovering-and-deleting-incomplete-multipart-uploads-to-lower-amazon-s3-costs/

So A is the only one.


upvoted 2 times

" # Rich_Rich 1 year ago


Lifecycle policies for failed uploads discussed in this blog: https://aws.amazon.com/blogs/aws-cost-management/discovering-and-deleting-
incomplete-multipart-uploads-to-lower-amazon-s3-costs/
(A)
upvoted 1 times

" # bharadhwaj 1 year ago


S3 Storage Lens is through the console. The real need here is deletion of the files, not a move to Glacier, hence A is correct
upvoted 2 times

" # Waiweng 1 year ago


it's A
upvoted 4 times

" # sarah_t 1 year ago


S3 Transfer Acceleration is used for data transfer from remote clients by routing them through AWS edge locations. How would that help when
the data is already uploaded from within an AWS region?
upvoted 2 times

" # awsnoob 1 year ago


Torn between A and B... Does multipart upload require code changes on the application level?
upvoted 1 times

Question #572 Topic 1

A company is in the process of implementing AWS Organizations to constrain its developers to use only Amazon EC2, Amazon S3, and Amazon
DynamoDB. The
Developers account resides in a dedicated organizational unit (OU). The Solutions Architect has implemented the following SCP on the Developers
account:

When this policy is deployed, IAM users in the Developers account are still able to use AWS services that are not listed in the policy.
What should the Solutions Architect do to eliminate the Developers' ability to use services outside the scope of this policy?

A. Create an explicit deny statement for each AWS service that should be constrained.

B. Remove the FullAWSAccess SCP from the Developer account's OU.

C. Modify the FullAWSAccess SCP to explicitly deny all services.

D. Add an explicit deny statement using a wildcard to the end of the SCP.

Correct Answer: B

" # Waiweng Highly Voted $ 1 year ago


B is correct
upvoted 18 times

" # student2020 Highly Voted $ 1 year ago


Answer is A - You cannot remove the FullAWSAccess SCP that is inherited from root. Test it and see.
upvoted 8 times

" # joe16 1 year ago


Yes, you can.(Ans - B)
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps_strategies.html#orgs_policies_allowlist
"To use SCPs as an allow list, you must replace the AWS managed FullAWSAccess SCP with an SCP that explicitly permits only those
services and actions that you want to allow. By removing the default FullAWSAccess SCP, all actions for all services are now implicitly denied.
Your custom SCP then overrides the implicit Deny with an explicit Allow for only those actions that you want to permit."
upvoted 8 times

" # tekkart 1 year ago


Answer is A, because as soon as an SCP was created, the FullAWSAccess SCP was already overruled (https://docs.aws.amazon.com
/organizations/latest/userguide/orgs_manage_policies_scps_strategies.html#orgs_policies_allowlist) and (https://docs.aws.amazon.com
/organizations/latest/userguide/orgs_manage_policies_scps_strategies.html#orgs_policies_allowlist), because Explicit Deny > Explicit Allow >
Implicit Deny > Implicit Allow, the only way to overcome Explicit Allow is to add Explicit Deny statements. Answers C and D would work too
good, of course everything would be blocked !
upvoted 4 times

" # wassb Most Recent % 3 weeks, 6 days ago


This question doesn't make sense AT ALL.

{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:*",
"cloudwatch:*"
],
"Resource": "*"
}
]
}
An allow list policy might look like the following example, which enables account users to perform operations for Amazon Elastic Compute Cloud
(Amazon EC2) and Amazon CloudWatch, ****but no other service****.
+ The FullAWSAccess SCP doesn't need to be deleted; simply defining a new SCP is enough.
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps_strategies.html#orgs_policies_allowlist
upvoted 1 times
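
For illustration, the allow-list approach behind answer B can be sketched with boto3 as below. The OU ID and the custom policy content are placeholders; "p-FullAWSAccess" is the ID of the AWS managed SCP, but verify it in your own organization before detaching anything.

import json
import boto3

org = boto3.client("organizations")
OU_ID = "ou-abcd-11111111"  # hypothetical Developers OU ID

# 1. Create an allow-list SCP that only permits the approved services.
allow_list = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["ec2:*", "s3:*", "dynamodb:*"],
            "Resource": "*",
        }
    ],
}
policy = org.create_policy(
    Name="DeveloperAllowList",
    Description="Only EC2, S3 and DynamoDB for the Developers OU",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(allow_list),
)
org.attach_policy(PolicyId=policy["Policy"]["PolicySummary"]["Id"], TargetId=OU_ID)

# 2. Detach the AWS managed FullAWSAccess SCP so that everything not explicitly
#    allowed above becomes implicitly denied.
org.detach_policy(PolicyId="p-FullAWSAccess", TargetId=OU_ID)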

" # aandc 4 months, 1 week ago


B
To support this, AWS Organizations attaches an AWS managed SCP named FullAWSAccess to every root and OU when it's created. This policy
allows all services and actions. It's always available for you to attach or detach from the entities in your organization as needed. Because the
policy is an AWS managed SCP, you can't modify or delete it.
upvoted 2 times

" # tkanmani76 9 months, 3 weeks ago


B is correct - An allow list strategy has you remove the FullAWSAccess SCP that is attached by default to every OU and account. This means
that no APIs are permitted anywhere unless you explicitly allow them.
upvoted 1 times

" # AzureDP900 11 months ago


it should be B
upvoted 1 times

" # pcops 11 months, 2 weeks ago


Ans is B
upvoted 1 times

" # student22 1 year ago


B

https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_inheritance_auth.html
upvoted 2 times

" # RedKane 1 year ago


Ignore the messages below - it looks like access has to be granted at each level : root, any intermediate OUs and ACCOUNT so removing
FullAWSAccess SCP from any of the nodes in the hierarchy will do the job.
upvoted 2 times

" # RedKane 1 year ago


To add to the previous post - each higher OU higher in the hierarchy, including organization root will also have FullAWSAccess SCP attached and
each of those SCPs will be inherited by each account below in the hierarchy. So each account inherits multiple copies of FullAWSAccess SCP. In
order to get rid of it one would need to remove FullAWSAccess SCP from every OU (higher in the hierarchy) and the root as well as the
ACCOUNT itself.
upvoted 1 times

" # RedKane 1 year, 1 month ago


FullAWSAccess SCP is attached automatically by default not only to each OU but also to each account individually so removing FullAWSAccess
SCP from Developers-OU will change nothing as the one attached directly to the Developers-ACCOUNT will still remain. That would only leave
option A as valid although I'm not sure if the author of this question considered the behavior I described. Also in real scenarios one would rather
attach SCP with DENY's and leave FullAWSAccess SCP untouched.
upvoted 1 times

" # alisyech 1 year, 1 month ago


it should B
upvoted 1 times

" # didek1986 1 year, 1 month ago


answ B
upvoted 1 times

" # beber3564 1 year, 1 month ago


B. allow list
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps_strategies.html#orgs_policies_denylist

upvoted 2 times
" # M_Asep 1 year, 1 month ago
I Support D
upvoted 1 times

" # nitinz 1 year, 1 month ago


D is correct answer.
upvoted 1 times

" # heyheyhei 1 year, 1 month ago


D cannot be the answer. It will also override the allowed statements.

A request results in an explicit deny if an applicable policy includes a Deny statement. If policies that apply to a request include an Allow
statement and a Deny statement, the Deny statement trumps the Allow statement. The request is explicitly denied.
https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_evaluation-logic.html
upvoted 4 times

" # nitinz 1 year, 1 month ago


You are absolutely correct. Thanks for the link I change my answer to B.
upvoted 4 times

Question #573 Topic 1

A company developed a Java application and deployed it to an Apache Tomcat server that runs on Amazon EC2 instances. The company's
Engineering team has implemented AWS CloudFormation and Chef Automate to automate the provisioning of and updates to the infrastructure
and configuration of the application in the development, test, and production environments. These implementations have led to significantly
improved reliability in releasing changes. The Engineering team reports there are frequent service disruptions due to unexpected errors when
updating the application on the Apache Tomcat server.
Which solution will increase the reliability of all releases?

A. Implement a blue/green deployment methodology.

B. Implement the canary release methodology.

C. Configure Amazon CloudFront to serve all requests from the cache while deploying the updates.

D. Implement the all at once deployment methodology.

Correct Answer: A
Reference:
https://medium.com/@tom.tikkle/blue-green-deployments-increasing-safety-reliability-speed-98a5c6b222b0

Community vote distribution


A (88%) 13%

" # Trap_D0_r Highly Voted $ 1 year ago


B
Stunning how few people can read. Opsworks isn't even mentioned anywhere in the question. This question has nothing to do with Opsworks.
It's cloudformation + Chef Automate (which indicates a Chef Serve, NOT a full Opsworks stack). Chef isn't an aws tools and doesn't require
Opsworks to work, and it supports Blue/Green, Phoenix, and Canary deployments (https://blog.chef.io/watch-chef-aws-your-path-to-devops).
The answer is B.
upvoted 25 times

" # sashsz 7 months, 1 week ago


Stunning how you are judging the other people and you yourself didn't provide any reason behind your answer.
upvoted 5 times

" # Nemer Highly Voted $ 1 year, 1 month ago


A. Blue/green deployment with Opsworks for Chef Automate, using separate stacks for each environment.
upvoted 11 times

" # Gmail78 1 year, 1 month ago


https://docs.aws.amazon.com/opsworks/latest/userguide/best-deploy.html
upvoted 2 times

" # Alexey79 8 months ago


https://aws.amazon.com/opsworks/chefautomate/
upvoted 1 times

" # nexus2020 Most Recent % 3 months ago


Selected Answer: A
Blue/green will ensure the upgrade is tested before launch. - better reliability = A is right
Canary will track the change, but without doing anything else will not make it more reliable. = b is not the better answer.
upvoted 1 times

" # Anhdd 5 months ago


Selected Answer: B
B good
upvoted 1 times

" # Anhdd 5 months ago


It's say that with the blue/green deployment, the cons is: "Cost is a drawback to blue-green deployments. Replicating a production
environment can be complex and expensive, especially when working with microservices. Quality assurance and user acceptance testing may
not identify all of the anomalies or regressions either, and so shifting all user traffic at once can present risks. An outage or issue could also
have a wide-scale business impact before a rollback is triggered, and depending on the implementation, in-flight user transactions may be lost
when the shift in traffic is made."
upvoted 1 times

" # Anhdd 5 months ago


Althought that the question not require to low cost, I will always prefer the solution that take lower cost. Plus that, with the blue/green an
outage or issue could also have a wide-scale business impact before a rollback is triggered, and depending on the implementation, in-flight
user transactions may be lost when the shift in traffic is made.
So C run out -> B for me
upvoted 1 times
" # user0001 5 months, 2 weeks ago
it is A in this case, it would be B if the question is about cost
upvoted 2 times

" # futen0326 8 months, 2 weeks ago


Selected Answer: A
Don't let the top comment fool you.. the answer is A. OpsWorks has 3 modes: Puppet Enterprise, Chef Automate, and OpsWorks..

For the exam you will default for OpsWorks if you see those keywords.
upvoted 3 times

" # AMKazi 9 months, 2 weeks ago


A is the answer.
B: is not suitable in this scenario as they are facing issues with upgrading Apache TOMCAT server. You cannot do Canary deployments for server
infrastructure. Canary is best suited for toggled features/releases that can be switched on/off. Since this is Tomcat, you cannot opt in or opt out.
upvoted 6 times

" # GeniusMikeLiu 9 months, 2 weeks ago


After reading so many comments, I am still confused. What is the main difference between blue/green and canary deployment?
upvoted 1 times

" # cannottellname 9 months, 3 weeks ago


A is reliable. nothing is mentioned about costs.
upvoted 2 times

" # GeniusMikeLiu 10 months, 1 week ago


Selected Answer: A
The question cares about 'reliable', so A is better than B. A blue/green deployment can roll back to the old version if something goes wrong.
upvoted 3 times

" # cldy 10 months, 1 week ago


B is correct.
upvoted 1 times

" # Juks 10 months, 4 weeks ago


Blue/Green deployment is more reliable as it will never cause an outage. Using Canary you are still causing outage for a set of users.
upvoted 5 times

" # Anhdd 5 months ago


Blue/green deployment: An outage or issue could also have a wide-scale business impact before a rollback is triggered, and depending on the
implementation, in-flight user transactions may be lost when the shift in traffic is made
upvoted 1 times

" # tkanmani76 11 months ago


A - The answer is Blue/Green. The question requires a "Reliable" soluton - With Canary you would still be routing to a small subset of user base
who would be impacted if there is an issue with upgrade. With Blue/Green you would test in one environment and once it works fine you could
swing over - that way there will be no customer impact or production issue.
upvoted 7 times

" # sappers 11 months, 1 week ago


Its B Canary - yep stunning that so many head-in-cloud Architects dont understand DevOps - Think if YOU were responsible for "service outages
occur(ing) OFTEN as a result of unanticipated issues" e.g. a known intermittently flawed App - would you really do Blue/Green (no mention in Q of
testing) ? then just swap over ? Good luck w that :@)
upvoted 2 times

" # sappers 11 months, 1 week ago


Its B Canary - yep stunning that so many head-in-cloud Architects dont understand DevOps - Think if YOU were responsible for "service outages
occur(ing) OFTEN as a result of unanticipated issues" e.g. a known intermittently flawed App - would you really do Blue/Green (no mention in Q of
testing) ? then just swap over ? Good lcuck w that :@)
upvoted 1 times

" # denccc 1 year ago


Issue: "The Engineering team reports there are frequent service disruptions due to unexpected errors when updating the application of the
Apache Tomcat server." This can be prevented by using B/G deployments. Only when everything is fine a switch will happen. So will go with A.
upvoted 4 times

" # Cotter 1 year ago


A or B? I am very confused!
upvoted 3 times

" # joe16 1 year ago


A and B are similar but B more suited in this scenario. Please read this -
https://martinfowler.com/bliki/CanaryRelease.html
upvoted 1 times

" # joe16 1 year ago


and this -
https://circleci.com/blog/canary-vs-blue-green-downtime/
upvoted 1 times

Question #574 Topic 1

During a security audit of a Service team's application, a Solutions Architect discovers that a username and password for an Amazon RDS
database and a set of
AWS IAM user credentials can be viewed in the AWS Lambda function code. The Lambda function uses the username and password to run queries
on the database, and it uses the IAM credentials to call AWS services in a separate management account.
The Solutions Architect is concerned that the credentials could grant inappropriate access to anyone who can view the Lambda code. The
management account and the Service team's account are in separate AWS Organizations organizational units (OUs).
Which combination of changes should the Solutions Architect make to improve the solution's security? (Choose two.)

A. Configure Lambda to assume a role in the management account with appropriate access to AWS.

B. Configure Lambda to use the stored database credentials in AWS Secrets Manager and enable automatic rotation.

C. Create a Lambda function to rotate the credentials every hour by deploying a new Lambda version with the updated credentials.

D. Use an SCP on the management account's OU to prevent IAM users from accessing resources in the Service team's account.

E. Enable AWS Shield Advanced on the management account to shield sensitive resources from unauthorized IAM access.

Correct Answer: BD

Community vote distribution


AB (100%)

" # LunchTime Highly Voted $ 1 year, 1 month ago


A & B are correct.
Concenus on B being correct.
Regarding A verse D: SCP is too restrictive. As mentioned by khksoma, the issue is only around the Lambda function. D also does not provide a
way to support the Lambda calling AWS services in the separate account. As such, D is not correct. Option "A" addresses this and is supported
by the link given by balisongjam.
upvoted 26 times
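
To show what answers A and B look like inside the function, here is a rough Lambda handler sketch. The secret name, role ARN, and account ID are made up; the database credentials come from Secrets Manager instead of the code, and the cross-account calls use a temporary role instead of long-lived IAM user keys.

import json
import boto3

SECRET_ID = "service-team/rds-credentials"                   # hypothetical secret name
MGMT_ROLE_ARN = "arn:aws:iam::999988887777:role/MgmtAccess"  # hypothetical role in the management account

secrets = boto3.client("secretsmanager")
sts = boto3.client("sts")

def handler(event, context):
    # 1. Fetch the rotated database credentials at runtime (answer B).
    secret = json.loads(secrets.get_secret_value(SecretId=SECRET_ID)["SecretString"])
    db_user, db_password = secret["username"], secret["password"]
    # ... open the database connection with db_user / db_password ...

    # 2. Assume a role in the management account instead of embedding IAM keys (answer A).
    creds = sts.assume_role(RoleArn=MGMT_ROLE_ARN, RoleSessionName="lambda-mgmt")["Credentials"]
    mgmt_s3 = boto3.client(
        "s3",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    # ... call management-account services with the temporary credentials ...
    return {"statusCode": 200}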

" # Ebi Highly Voted $ 1 year ago


Answer is AB
upvoted 6 times

" # tartarus23 Most Recent % 6 months ago


Selected Answer: AB
A. Seems a better option than using AWS organizations to address the requirements
B. AWS Secrets Manager enables lifecycle management, key rotation and securely storing the database credentials.
upvoted 1 times

" # HellGate 9 months, 1 week ago


My answer is B and D.

in the question, mentioned as “The Solutions Architect is afraid that the credentials might be misused by anybody who can examine the Lambda
code”, so proper access control is needed here. We need D for this.
upvoted 2 times

" # CloudChef 9 months, 3 weeks ago


Seems AWS has people who put a bunch of wrong answers at about the same time. Careful what you believe.
upvoted 1 times

" # AzureDP900 11 months ago


A, B is right
upvoted 1 times

" # tonikus 1 year ago


Q: Answers here.. marked as "Correct" with randomizer?
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with A,B
upvoted 2 times

" # ss160700 1 year ago

A&B - D will prevent Lambda from functioning correctly


upvoted 1 times
" # pradhyumna 1 year ago
B and D is correct. The question says, "which combination", obviously both AB are solving the same lambda problem, hence not a good
"combination". On top of it, it does not help lambda assuming a role in mgmt account while the application is completely running in service
account with lambda and RDS. Second part of the problem is how to prevent users from using the IAM credentials which can be viewed in the
code. This is what SCP is addressing, anyways SCP doesn't affect the IAM users in the mgmt account and so this SCP would prevent IAM users
from the service account . I would go with B and D "combination".
upvoted 3 times

" # pradhyumna 1 year ago


Changing to A & B rds credentials in secrets manager, use roles to eliminate mgmt creds
upvoted 3 times

" # Waiweng 1 year ago


it's A&B
upvoted 6 times

" # Kian1 1 year ago


going with AB
upvoted 3 times

" # Firststack 1 year ago


A & B is the most secure approach
upvoted 2 times

" # Justu 1 year ago


AB, You need to fix lambda getting credentials directly from the code and allow it to use mgmt account resources.

D: There's no need to restrict ServiceAccount resources by SCP.

Kanavpeer is right.
upvoted 1 times

" # Cantaloupe 1 year ago


A/B
Assuming the role is the right way to do it. And SSM is good for storing DB credentials
https://aws.amazon.com/blogs/security/rotate-amazon-rds-database-credentials-automatically-with-aws-secrets-manager/

D is wrong as users from one account cannot access resources from another account if not allowed through cross-account access using
assumed roles. There's no need to use SCP for deny
E is wrong as shield is used for ddos protection
C does not make sense with hourly redeploying of lambda
upvoted 2 times

" # petebear55 1 year ago


BEST PRACTICE WOULD BE B AND D
upvoted 3 times

" # Bulti 1 year ago


A & B is the right answer. SCP will prevent an IAM user credentials to access the services which will cause the Lambda function to fail. We don't
want the Lambda function to fail wile calling AWS services. Option A provides an elegant and standard solution to allow Lambda in one account
to access AWS services in another account by assuming the IAM role that provides it access to call those AWS services.
upvoted 2 times

Question #575 Topic 1

A company is having issues with a newly deployed serverless infrastructure that uses Amazon API Gateway, Amazon Lambda, and Amazon
DynamoDB.
In a steady state, the application performs as expected. However, during peak load, tens of thousands of simultaneous invocations are needed
and user requests fail multiple times before succeeding. The company has checked the logs for each component, focusing specifically on Amazon
CloudWatch Logs for Lambda.
There are no errors logged by the services or applications.
What might cause this problem?

A. Lambda has very low memory assigned, which causes the function to fail at peak load.

B. Lambda is in a subnet that uses a NAT gateway to reach out to the internet, and the function instance does not have sufficient Amazon EC2
resources in the VPC to scale with the load.

C. The throttle limit set on API Gateway is very low. During peak load, the additional requests are not making their way through to Lambda.

D. DynamoDB is set up in an auto scaling mode. During peak load, DynamoDB adjusts capacity and throughput behind the scenes, which is
causing the temporary downtime. Once the scaling completes, the retries go through successfully.

Correct Answer: C
Reference:
https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-request-throttling.html

" # proxyolism Highly Voted $ 1 year, 1 month ago


the answer is C. question says

The company has checked the logs for each component, focusing specifically on Amazon CloudWatch Logs for Lambda.

and it means there is no error log from lambda. the company actually did not check API gateway's cloudwatch log. if lambda fails, the company
could check it is the lambda problem with cloudwatch logs. furthermore, A is completely wrong because lambda runs pararell with concurrency.
and question says this problem only occurs when during maximum loads. if lambda memory is the cause of problem, it can be failed whenever
under maximum loads or not.
upvoted 18 times
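
If C is the culprit, raising the stage-level default throttle is a one-call change. Below is a minimal sketch; the API ID, stage name, and limit values are made up and should be sized to the real peak load (account-level limits still apply).

import boto3

apigw = boto3.client("apigateway")

# Hypothetical identifiers and limits.
REST_API_ID = "a1b2c3d4e5"
STAGE_NAME = "prod"

apigw.update_stage(
    restApiId=REST_API_ID,
    stageName=STAGE_NAME,
    patchOperations=[
        # Default method throttling for every resource/method on the stage.
        {"op": "replace", "path": "/*/*/throttling/rateLimit", "value": "10000"},
        {"op": "replace", "path": "/*/*/throttling/burstLimit", "value": "5000"},
    ],
)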

" # Nemer Highly Voted $ 1 year, 1 month ago


Fine with C. Nothing wrong with Lambda. Increase API gateway throttle limits.
upvoted 9 times

" # sam422 1 year, 1 month ago


There are no errors logged from the services or the application, so why would we assume gateway throttle errors? It would log if there were an issue. I go with A
upvoted 2 times

" # student22 1 year ago


C
No errors from Lambda because requests were throttled at API Gateway.
upvoted 2 times

" # Ni_yot Most Recent % 3 months, 1 week ago


C makes perfect sense
upvoted 1 times

" # AMKazi 9 months, 2 weeks ago


Ans is C:
cannot be D: https://aws.amazon.com/about-aws/whats-new/2017/06/announcing-amazon-dynamodb-auto-scaling/#:~:text=Starting%20today
%2C%20when%20you%20create,request%20volumes%2C%20with%20zero%20downtime.
upvoted 1 times

" # cldy 10 months, 1 week ago


C is correct.
upvoted 1 times

" # AzureDP900 11 months ago


C is correct
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with C
upvoted 2 times

" # Kopa 1 year ago


Im for A, was for D but doesnt make sense as if it was scaling issue on Dynamo it will log errors on lambda.
upvoted 2 times

" # Kopa 11 months, 3 weeks ago


sorry i mean C
upvoted 1 times

" # Waiweng 1 year ago


it's C
upvoted 2 times

" # kiev 1 year ago


It is C. I thought it was A but reading again the last line that says lambda has cloud watch has been checked and there is no problem with
lambda implies the problem isn't with lambda. Now between API gateway and Dynamodb, I think it is clear there is a problem with throttle limit in
API gateway that's causing the issue.
upvoted 1 times

" # Kian1 1 year ago


going with C
upvoted 1 times

" # Firststack 1 year ago


C is correct
upvoted 2 times

" # Ebi 1 year ago


Answer is C
upvoted 3 times

" # kopper2019 1 year ago


C API GW limit
upvoted 2 times

" # Bulti 1 year ago


Answer is C. When throttle limits are low on API Gateway, concurrent requests beyond that threshold limit are dropped and they need to be
retried. As a result after repeated retries the request succeeds when the concurrent request count drops below the throttle limit.
upvoted 2 times

" # T14102020 1 year ago


Correct is C. If no errors in logs so need increase API gateway throttle limits
upvoted 1 times

" # joos 1 year ago


A right
upvoted 2 times

Question #576 Topic 1

A large company with hundreds of AWS accounts has a newly established centralized internal process for purchasing new or modifying existing
Reserved
Instances. This process requires all business units that want to purchase or modify Reserved Instances to submit requests to a dedicated team
for procurement or execution. Previously, business units would directly purchase or modify Reserved Instances in their own respective AWS
accounts autonomously.
Which combination of steps should be taken to proactively enforce the new process in the MOST secure way possible? (Choose two.)

A. Ensure all AWS accounts are part of an AWS Organizations structure operating in all features mode.

B. Use AWS Config to report on the attachment of an IAM policy that denies access to the ec2:PurchaseReservedInstancesOffering and
ec2:ModifyReservedInstances actions.

C. In each AWS account, create an IAM policy with a DENY rule to the ec2:PurchaseReservedInstancesOffering and
ec2:ModifyReservedInstances actions.

D. Create an SCP that contains a deny rule to the ec2:PurchaseReservedInstancesOffering and ec2:ModifyReservedInstances actions. Attach
the SCP to each organizational unit (OU) of the AWS Organizations structure.

E. Ensure that all AWS accounts are part of an AWS Organizations structure operating in consolidated billing features mode.

Correct Answer: CE

Community vote distribution


AD (100%)

" # Nemer Highly Voted $ 1 year, 1 month ago


A&D. AWS Org operating in all features mode, to be able to use SCP with deny list (blacklist).
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scp-strategies.html
upvoted 31 times

" # petebear55 1 year ago


Well done Nemer
upvoted 2 times
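
The SCP in answer D is small; a sketch of its content and attachment follows (the OU ID is a placeholder). The dedicated procurement team's account would sit outside these OUs so it can still purchase and modify Reserved Instances.

import json
import boto3

org = boto3.client("organizations")

deny_ri_changes = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": [
                "ec2:PurchaseReservedInstancesOffering",
                "ec2:ModifyReservedInstances",
            ],
            "Resource": "*",
        }
    ],
}

policy = org.create_policy(
    Name="DenyReservedInstanceChanges",
    Description="Only the central procurement team may buy or modify RIs",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(deny_ri_changes),
)

# Attach to each business-unit OU (hypothetical OU ID).
org.attach_policy(PolicyId=policy["Policy"]["PolicySummary"]["Id"], TargetId="ou-abcd-22222222")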

" # jj22222 Most Recent % 7 months, 1 week ago


Selected Answer: AD
A & D for sure
upvoted 1 times

" # pititcu667 8 months, 3 weeks ago


Selected Answer: AD
I agree with nemer
upvoted 1 times

" # AzureDP900 11 months ago


A, D is right
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with A,D
upvoted 1 times

" # Waiweng 1 year ago


it's A&D
upvoted 4 times

" # blackgamer 1 year ago


A and D
upvoted 1 times

" # kiev 1 year ago


AD is the right answer. Now to those saying why A and not E, it is because AWS organisations in all feature mode include consolidated billing.
upvoted 2 times

" # Kian1 1 year ago


going with AD
upvoted 1 times

" # Ebi 1 year ago


A and D are correct answer
upvoted 3 times

" # kopper2019 1 year ago


A and D, Orgs and SCP the way to go
upvoted 2 times

" # Bulti 1 year ago


A & D is the right answer. Forst put all accounts into OU and the apply SCP to deny access to the EC2 API that procure new reserved instances
or modify existing reserved instances.
upvoted 1 times

" # darthvoodoo 1 year ago


With D in place, I wonder how the procurement team would now be able to purchase reserved instances...I know you can have exceptions in
SCPs but c'mon AWS...
upvoted 3 times

" # T14102020 1 year ago


Correct is AD. A and D. Use AWS Organization together with SCP
upvoted 1 times

" # rscloud 1 year ago


A and D
SCPs are available only in an organization that has all features enabled.
An SCP restricts permissions for IAM users and roles in member accounts
upvoted 1 times

" # jackdryan 1 year, 1 month ago


I'll go with A,D
upvoted 3 times

" # cloudgc 1 year, 1 month ago


A - https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_org_support-all-features.html
D
upvoted 1 times

Question #577 Topic 1

A Solutions Architect wants to make sure that only AWS users or roles with suitable permissions can access a new Amazon API Gateway
endpoint. The Solutions
Architect wants an end-to-end view of each request to analyze the latency of the request and create service maps.
How can the Solutions Architect design the API Gateway access control and perform request inspections?

A. For the API Gateway method, set the authorization to AWS_IAM. Then, give the IAM user or role execute-api:Invoke permission on the REST
API resource. Enable the API caller to sign requests with AWS Signature when accessing the endpoint. Use AWS X-Ray to trace and analyze
user requests to API Gateway.

B. For the API Gateway resource, set CORS to enabled and only return the company's domain in Access-Control-Allow-Origin headers. Then,
give the IAM user or role execute-api:Invoke permission on the REST API resource. Use Amazon CloudWatch to trace and analyze user
requests to API Gateway.

C. Create an AWS Lambda function as the custom authorizer, ask the API client to pass the key and secret when making the call, and then use
Lambda to validate the key/secret pair against the IAM system. Use AWS X-Ray to trace and analyze user requests to API Gateway.

D. Create a client certificate for API Gateway. Distribute the certificate to the AWS users and roles that need to access the endpoint. Enable the
API caller to pass the client certificate when accessing the endpoint. Use Amazon CloudWatch to trace and analyze user requests to API
Gateway.

Correct Answer: D
Reference:
https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-cors.html

Community vote distribution


A (100%)

" # Nemer Highly Voted $ 1 year, 1 month ago


A. Access control using Role, and request inspection with X-Ray.
upvoted 23 times

" # joe16 1 year ago


A
https://aws.amazon.com/premiumsupport/knowledge-center/iam-authentication-api-gateway/
upvoted 2 times
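
Putting answer A together involves three small pieces: IAM auth on the method, an execute-api:Invoke policy for the caller, and X-Ray tracing on the stage. A rough sketch follows; all IDs, the role name, and the stage name are placeholders.

import json
import boto3

apigw = boto3.client("apigateway")
iam = boto3.client("iam")

REST_API_ID = "a1b2c3d4e5"      # hypothetical API ID
RESOURCE_ID = "abc123"          # hypothetical resource in that API
REGION, ACCOUNT_ID = "us-east-1", "111122223333"

# 1. Require SigV4-signed requests on the method.
apigw.put_method(
    restApiId=REST_API_ID,
    resourceId=RESOURCE_ID,
    httpMethod="GET",
    authorizationType="AWS_IAM",
)

# 2. Let a specific role invoke the API.
invoke_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "execute-api:Invoke",
            "Resource": f"arn:aws:execute-api:{REGION}:{ACCOUNT_ID}:{REST_API_ID}/*/GET/*",
        }
    ],
}
iam.put_role_policy(
    RoleName="ApiCallerRole",  # hypothetical caller role
    PolicyName="AllowInvokeApi",
    PolicyDocument=json.dumps(invoke_policy),
)

# 3. Turn on X-Ray tracing for the deployed stage.
apigw.update_stage(
    restApiId=REST_API_ID,
    stageName="prod",
    patchOperations=[{"op": "replace", "path": "/tracingEnabled", "value": "true"}],
)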

" # Ebi Highly Voted $ 1 year ago


Answer is A
upvoted 5 times

" # cldy Most Recent % 11 months ago


A. For the API Gateway method, set the authorization to AWS_IAM. Then, give the IAM user or role execute-api:Invoke permission on the REST
API resource. Enable the API caller to sign requests with AWS Signature when accessing the endpoint. Use AWS X-Ray to trace and analyze user
requests to API Gateway.
upvoted 1 times

" # AzureDP900 11 months ago


A is right answer
upvoted 1 times

" # acloudguru 11 months, 2 weeks ago


Selected Answer: A
https://aws.amazon.com/premiumsupport/knowledge-center/iam-authentication-api-gateway/
upvoted 2 times

" # acloudguru 11 months, 2 weeks ago


Selected Answer: A
https://aws.amazon.com/premiumsupport/knowledge-center/iam-authentication-api-gateway/
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with A

upvoted 2 times
" # Waiweng 1 year ago
it's A
upvoted 5 times

" # blackgamer 1 year ago


A is the answer, XRay is needed here.
upvoted 1 times

" # Pupu86 1 year ago


End-to-end request already hints towards the usage of AWS X-ray. Automatically filtering out option B and D. Further the authorisation via role
rather than parsing secrets through AWS clients - so A
upvoted 3 times

" # kiev 1 year ago


A is the correct answer. Role +X-ray for better analysis
upvoted 2 times

" # Kian1 1 year ago


going with A
upvoted 1 times

" # Bulti 1 year ago


A is correct.
upvoted 3 times

" # T14102020 1 year ago


Correct is A. AWS Signature + X-Ray
upvoted 2 times

" # jackdryan 1 year ago


I'll go with A
upvoted 3 times

" # taoteching1 1 year ago


A is correct - https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-resource-policies-examples.html
https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-using-xray-maps.html
upvoted 2 times

" # liono 1 year, 1 month ago


A
https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-resource-policies-examples.html
upvoted 1 times

Question #578 Topic 1

A Solutions Architect needs to design a highly available application that will allow authenticated users to stay connected to the application even
when there are underlying failures.
Which solution will meet these requirements?

A. Deploy the application on Amazon EC2 instances. Use Amazon Route 53 to forward requests to the EC2 instances. Use Amazon DynamoDB
to save the authenticated connection details.

B. Deploy the application on Amazon EC2 instances in an Auto Scaling group. Use an internet-facing Application Load Balancer to handle
requests. Use Amazon DynamoDB to save the authenticated connection details.

C. Deploy the application on Amazon EC2 instances in an Auto Scaling group. Use an internet-facing Application Load Balancer on the front
end. Use EC2 instances to save the authenticated connection details.

D. Deploy the application on Amazon EC2 instances in an Auto Scaling group. Use an internet-facing Application Load Balancer on the front
end. Use EC2 instances hosting a MySQL database to save the authenticated connection details.

Correct Answer: C

Community vote distribution


B (100%)

" # Nemer Highly Voted $ 1 year, 1 month ago


B. ALB + ASG + DynamoDB make sense.
upvoted 24 times

" # oscargee 1 year ago


The requirement says users must stay connected after authentication, which means the info should be cached in the backend. That's why a DB is not necessary.
Otherwise, every time a user submits a request it will trigger a DB query, which is really slow.
upvoted 1 times

" # kirrim 1 year ago


Could front-end DDB with ElasticCache if you're worried about the DDB queries being too slow and you truly need that level of
performance on auth/session data and are willing to pay for it. But still need DDB behind it to populate the cache misses in that scenario.
upvoted 1 times

" # tartarus23 Most Recent % 6 months, 1 week ago


Selected Answer: B
B. DynamoDB is a better option to save the authenticated connection details rather than a standard EC2 instance.
upvoted 1 times

" # tartarus23 6 months, 3 weeks ago


Selected Answer: B
B is highly available, scalable, ALB allows connection stickiness and handling with help of DDB to save the connections and sessions.
upvoted 2 times
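
As an illustration of why DynamoDB fits here, the following is a minimal Python (boto3) sketch of keeping the authenticated connection details outside the instances, so any instance behind the ALB (including replacements launched by the Auto Scaling group) can look a session up. The table name and attributes are invented for the example.

```python
# Hypothetical sketch: persist session/connection details in DynamoDB so that
# any EC2 instance behind the ALB can validate a returning user, even if the
# instance that created the session has been replaced.
import time
import boto3

dynamodb = boto3.resource("dynamodb")
sessions = dynamodb.Table("app-sessions")  # illustrative table name

def save_session(session_id: str, user_id: str, ttl_seconds: int = 3600) -> None:
    sessions.put_item(
        Item={
            "session_id": session_id,                       # partition key
            "user_id": user_id,
            "expires_at": int(time.time()) + ttl_seconds,   # DynamoDB TTL attribute
        }
    )

def load_session(session_id: str):
    response = sessions.get_item(Key={"session_id": session_id})
    return response.get("Item")  # None if the session is unknown or already purged
```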

" # shotty1 9 months, 2 weeks ago


Selected Answer: B
B is most right I would say. I am pretty certain it is the answer that AWS wants to hear
upvoted 1 times

" # CloudChef 9 months, 2 weeks ago


B or not 2 B, that is the question, and the answer is B
upvoted 1 times

" # pititcu667 9 months, 4 weeks ago


Selected Answer: B
Voting B. the answer given makes no sense. If you store session data on an ec2 and you lost it you lost the session.
upvoted 1 times

" # AzureDP900 11 months ago


I will go with DynamoDB , B is right
upvoted 1 times

" # moon2351 1 year ago


Answer is B
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with B
upvoted 2 times

" # Waiweng 1 year ago


it's B
upvoted 3 times

" # KnightVictor 1 year ago


Reading highly available, A & D are ruled out.
B, C talk about auto scaling group+application load balancer, so highly available.
Between B & C, since DynamoDB makes more sense in this case, so going for B

My take: B
upvoted 3 times

" # kiev 1 year ago


Absolutely B
upvoted 1 times

" # Kian1 1 year ago


going for B
upvoted 1 times

" # Firststack 1 year ago


B is the answer
upvoted 1 times

" # Ebi 1 year ago


No option other than B makes sense, answer is B
upvoted 3 times

" # Superomam 1 year ago


C. the question is asking: "stay connected to the application even when there are underlying failures", it means when the DynamoDB fails so the
only way is to save the connection details locally on the EC2 instances.
upvoted 1 times

" # sarah_t 1 year ago


DynamoDB is HA by default, EC2 instances can fail.
upvoted 1 times

" # Bulti 1 year ago


Answer is B.
upvoted 1 times


Question #579 Topic 1

A company experienced a breach of highly confidential personal information due to permission issues on an Amazon S3 bucket. The Information
Security team has tightened the bucket policy to restrict access. Additionally, to be better prepared for future attacks, these requirements must be
met:
✑ Identify remote IP addresses that are accessing the bucket objects.
✑ Receive alerts when the security policy on the bucket is changed.
✑ Remediate the policy changes automatically.
Which strategies should the Solutions Architect use?

A. Use Amazon CloudWatch Logs with CloudWatch filters to identify remote IP addresses. Use CloudWatch Events rules with AWS Lambda to
automatically remediate S3 bucket policy changes. Use Amazon SES with CloudWatch Events rules for alerts.

B. Use Amazon Athena with S3 access logs to identify remote IP addresses. Use AWS Config rules with AWS Systems Manager Automation to
automatically remediate S3 bucket policy changes. Use Amazon SNS with AWS Config rules for alerts.

C. Use S3 access logs with Amazon Elasticsearch Service and Kibana to identify remote IP addresses. Use an Amazon Inspector assessment
template to automatically remediate S3 bucket policy changes. Use Amazon SNS for alerts.

D. Use Amazon Macie with an S3 bucket to identify access patterns and remote IP addresses. Use AWS Lambda with Macie to automatically
remediate S3 bucket policy changes. Use Macie automatic alerting capabilities for alerts.

Correct Answer: B

Community vote distribution


B (100%)

" # Nemer Highly Voted $ 1 year, 1 month ago


B. 1)To id remote IPs, need to look at S3 access logs. Athena helps in analyzing those logs.
https://docs.aws.amazon.com/AmazonS3/latest/dev/using-s3-access-logs-to-identify-requests.html

2) For auto-remediation, use AWS Config with Systems Manager.


https://aws.amazon.com/blogs/mt/aws-config-auto-remediation-s3-compliance/

4) For alerting, use SNS with AWS Config.


https://docs.aws.amazon.com/config/latest/developerguide/notifications-for-AWS-Config.html
upvoted 44 times

" # kpcert 1 year ago


Agree. Answer is B.
upvoted 3 times
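
For the first requirement (identifying remote IPs), here is a minimal Python (boto3) sketch of kicking off the Athena query from option B. It assumes server access logging is already enabled and that an Athena table over the logs exists (the database/table and output bucket names below are placeholders, following the linked AWS guide).

```python
# Hypothetical sketch: query S3 server access logs with Athena to list the
# remote IP addresses that read objects from the bucket.
import boto3

athena = boto3.client("athena")

query = """
SELECT remoteip, COUNT(*) AS requests
FROM s3_access_logs_db.mybucket_logs      -- placeholder Athena table over the access logs
WHERE operation LIKE 'REST.GET.OBJECT'
GROUP BY remoteip
ORDER BY requests DESC
"""

execution = athena.start_query_execution(
    QueryString=query,
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},  # placeholder bucket
)
print("Athena query started:", execution["QueryExecutionId"])
```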

" # AwsBRFan Most Recent % 1 month, 2 weeks ago


Selected Answer: B
https://docs.aws.amazon.com/AmazonS3/latest/userguide/LogFormat.html
Key word S3 server access
Ispector is for EC2 and ECS
upvoted 1 times

" # cldy 11 months ago


B. Use Amazon Athena with S3 access logs to identify remote IP addresses. Use AWS Config rules with AWS Systems Manager Automation to
automatically remediate S3 bucket policy changes. Use Amazon SNS with AWS Config rules for alerts.
upvoted 1 times

" # pcops 11 months, 2 weeks ago


B: Athena + S3 access logs to identify IP address. SNS for notifications and SM to automate the requests.
upvoted 1 times

" # sashenka 11 months, 3 weeks ago


One actually CAN get the IP ADDRESS using Amazon Macie:
policyDetails.actor.ipAddressDetails.ipAddressV4

https://docs.aws.amazon.com/de_de/macie/latest/user/findings-filter-fields.html
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with B


For those choosing D, read the question again. Twice.


✑ Identify remote IP addresses that are accessing the bucket objects.
✑ Receive alerts when the security policy on the bucket is changed.
✑ Remediate the policy changes automatically.
^ this is called "Requirements" ^
Macie is about the DATA itself; the question wants to prevent a series of events like explicitly public buckets, notify, and set them private again.
Typical use case of AWS Config rules
upvoted 1 times

" # digimaniac 1 year ago


D
B can't monitor S3 policy change. versus Macie can "Macie generates policy findings when the policies or settings for an S3 bucket are changed
in a way that reduces the security of the bucket and its objects. Macie does this only if the change occurs after you enable your Macie account."
upvoted 1 times

" # Pupu86 1 year ago


Answer is D. AWS macie is built specifically for protecting of PII information
upvoted 2 times

" # blackgamer 1 year ago


Answer is B. please refer to below link for details explanation.

https://aws.amazon.com/blogs/mt/using-aws-systems-manager-opscenter-and-aws-config-for-compliance-monitoring/
upvoted 1 times

" # alisyech 1 year ago


i go with B
upvoted 1 times

" # Kian1 1 year ago


going with B
upvoted 1 times

" # Ebi 1 year ago


I will go with B
upvoted 4 times

" # petebear55 1 year ago


Change my mind to B .. D can not do the last point in the question.
upvoted 1 times

" # petebear55 1 year ago


D: https://aws.amazon.com/blogs/security/how-to-create-custom-alerts-with-amazon-macie/
upvoted 1 times

" # petebear55 1 year ago


D is correct as its designed for just this scenario with S3
upvoted 1 times

" # vbal 10 months, 3 weeks ago


Amazon Macie is a security service that makes it easy for you to discover, classify, and protect sensitive data in Amazon Simple Storage
Service (Amazon S3). Question is About Bucket Policy Changes...Can Macie look for changes in AWS resources Configuration???
upvoted 1 times

" # Bulti 1 year ago


B is the correct answer. Only AWS config can continuously monitor changes to bucket polices and enable automatic remediation.
upvoted 2 times

" # Bulti 1 year ago


https://aws.amazon.com/blogs/mt/aws-config-auto-remediation-s3-compliance/
upvoted 1 times

" # darthvoodoo 1 year ago


D is correct because:
1. Macie can detect the source IP (https://docs.aws.amazon.com/macie/latest/user/monitoring.html)
2. It can easily send alerts out
3. Can integrate with event bridge to trigger lambda for remediation (https://docs.aws.amazon.com/macie/latest/user/findings-monitor.html)
upvoted 4 times

" # SD13 1 year ago


D seems latest and accurate: Supporting link: https://aws.amazon.com/blogs/security/deploy-an-automated-chatops-solution-for-
remediating-amazon-macie-findings/?nc1=b_rp
upvoted 2 times


" # SD13 1 year ago


Correct answer is B, Macie cannot detect remote IP Athena can
upvoted 2 times

" # kj07 1 year ago


I will go with B.
We will not know if the bucket policy is changed unless you will use an AWS Config rule.
upvoted 1 times


Question #580 Topic 1

A Solutions Architect is designing a deployment strategy for an application tier and has the following requirements:
✑ The application code will need a 500 GB static dataset to be present before application startup.
✑ The application tier must be able to scale up and down based on demand with as little startup time as possible.
✑ The Development team should be able to update the code multiple times each day.
✑ Critical operating system (OS) patches must be installed within 48 hours of being released.
Which deployment strategy meets these requirements?

A. Use AWS Systems Manager to create a new AMI with the updated OS patches. Update the Auto Scaling group to use the patched AMI and
replace existing unpatched instances. Use AWS CodeDeploy to push the application code to the instances. Store the static data in Amazon
EFS.

B. Use AWS Systems Manager to create a new AMI with updated OS patches. Update the Auto Scaling group to use the patched AMI and
replace existing unpatched instances. Update the OS patches and the application code as batch job every night. Store the static data in
Amazon EFS.

C. Use an Amazon-provided AMI for the OS. Configure an Auto Scaling group set to a static instance count. Configure an Amazon EC2 user
data script to download the data from Amazon S3. Install OS patches with AWS Systems Manager when they are released. Use AWS
CodeDeploy to push the application code to the instances.

D. Use an Amazon-provided AMI for the OS. Configure an Auto Scaling group. Configure an Amazon EC2 user data script to download the data
from Amazon S3. Replace existing instances after each updated Amazon-provided AMI release. Use AWS CodeDeploy to push the application
code to the instances.

Correct Answer: B

Community vote distribution


A (100%)

" # Nemer Highly Voted $ 1 year, 1 month ago


A. Systems Manager to update the ASG with patched AMI, CodeDeploy to push the code, and EFS for the 500 GB static data.
upvoted 25 times

" # rcher 1 year ago


Second this, System manager have pre-defined run book that can allow you to patch os :)
upvoted 4 times

" # Ebi Highly Voted $ 1 year ago


Answer is A,
B although is correct as well but does not satisfy multiple deployments per day
upvoted 9 times
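
Here is a minimal Python (boto3) sketch of the "update the Auto Scaling group to use the patched AMI and replace unpatched instances" step from option A. It assumes the group launches from a launch template that references the $Latest version; the AMI, template, and group identifiers are placeholders.

```python
# Hypothetical sketch: roll a freshly patched AMI out to the Auto Scaling group
# by publishing a new launch template version and starting an instance refresh
# so unpatched instances are replaced gradually.
import boto3

ec2 = boto3.client("ec2")
autoscaling = boto3.client("autoscaling")

PATCHED_AMI_ID = "ami-0123456789abcdef0"    # produced by the SSM patching workflow (placeholder)
LAUNCH_TEMPLATE_ID = "lt-0abc1234def567890"  # placeholder
ASG_NAME = "web-tier-asg"                    # placeholder

# New launch template version pointing at the patched AMI. The ASG must be
# configured to use the template's $Latest (or Default) version for this to apply.
ec2.create_launch_template_version(
    LaunchTemplateId=LAUNCH_TEMPLATE_ID,
    SourceVersion="$Latest",
    LaunchTemplateData={"ImageId": PATCHED_AMI_ID},
)

# Replace running instances in place while keeping most of the fleet healthy.
autoscaling.start_instance_refresh(
    AutoScalingGroupName=ASG_NAME,
    Preferences={"MinHealthyPercentage": 90},
)
```

CodeDeploy then pushes the frequently changing application code onto the instances, and the 500 GB static dataset stays on the shared EFS mount, so new instances start quickly.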

" # WayneYi Most Recent % 1 month ago


The issue with option B is that it only pushes application code once per day, but we need multiple deployments per day.
upvoted 1 times

" # Bennycy 7 months, 2 weeks ago


C Has all 4 requirements
upvoted 1 times

" # pititcu667 9 months, 3 weeks ago


Selected Answer: A
A the keywords are automatic multiple releases -> CodeDeploy, Least amount of startup time shared efs for data is faster than downloading 500
gb from s3.
upvoted 2 times

" # cldy 10 months, 1 week ago


A: CodeDeploy + EFS.
upvoted 1 times

" # AzureDP900 11 months ago


I will go with OPTION A
upvoted 1 times


" # acloudguru 11 months, 1 week ago


Selected Answer: A
codedeploy is better than B
upvoted 2 times

" # WhyIronMan 1 year ago


I'll go with A
upvoted 2 times

" # Waiweng 1 year ago


it's A
upvoted 2 times

" # blackgamer 1 year ago


A seems to be better option
upvoted 1 times

" # Kian1 1 year ago


going with A
upvoted 1 times

" # Bulti 1 year ago


I will go with A instead of B. B is a bit confusing because it appears that the AWS System Manager would be able to create new AMI as new OS
patches are released and replace the existing ones. Not sure why there is a need to do that when deploying the application code as well in B. So
I will go with A.
upvoted 1 times

" # T14102020 1 year ago


Correct is A. CodeDeploy + EFS(faster then S3)
upvoted 2 times

" # T14102020 1 year ago


Correct is A. CodeDeploy + EFS
upvoted 2 times

" # rscloud 1 year ago


A
Code deploy for multiply deploy and EFS for static data
upvoted 2 times

" # gookseang 1 year, 1 month ago


A, CodeDeploy + EFS
upvoted 1 times


Question #581 Topic 1

A company is operating a large customer service call center, and stores and processes call recordings with a custom application. Approximately
2% of the call recordings are transcribed by an offshore team for quality assurance purposes. These recordings take up to 72 hours to be
transcribed. The recordings are stored on an NFS share before they are archived to an offsite location after 90 days. The company uses Linux
servers for processing the call recordings and managing the transcription queue. There is also a web application for the quality assurance staff to
review and score call recordings.
The company plans to migrate the system to AWS to reduce storage costs and the time required to transcribe calls.
Which set of actions should be taken to meet the company's objectives?

A. Upload the call recordings to Amazon S3 from the call center. Set up an S3 lifecycle policy to move the call recordings to Amazon S3 Glacier
after 90 days. Use an AWS Lambda trigger to transcribe the call recordings with Amazon Transcribe. Use Amazon S3, Amazon API Gateway,
and Lambda to host the review and scoring application.

B. Upload the call recordings to Amazon S3 from the call center. Set up an S3 lifecycle policy to move the call recordings to Amazon S3 Glacier
after 90 days. Use an AWS Lambda trigger to transcribe the call recordings with Amazon Mechanical Turk. Use Amazon EC2 instances in an
Auto Scaling group behind an Application Load Balancer to host the review and scoring application.

C. Use Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer to host the review and scoring application.
Upload the call recordings to this application from the call center and store them on an Amazon EFS mount point. Use AWS Backup to archive
the call recordings after 90 days. Transcribe the call recordings with Amazon Transcribe.

D. Upload the call recordings to Amazon S3 from the call center and put the object key in an Amazon SQS queue. Set up an S3 lifecycle policy
to move the call recordings to Amazon S3 Glacier after 90 days. Use Amazon EC2 instances in an Auto Scaling group to send the recordings to
Amazon Mechanical Turk for transcription. Use the number of objects in the queue as the scaling metric. Use Amazon S3, Amazon API
Gateway, and AWS Lambda to host the review and scoring application.

Correct Answer: A

" # AwsSuperTrooper Highly Voted $ 10 months, 1 week ago


Answer is A
upvoted 6 times

" # awsguru1998 Most Recent % 1 week, 2 days ago


A is straightforward. MTurk is not as instant as Transcribe. Reduce storage costs with NFS so avoid EFS and EC2 and move to serverless with S3
upvoted 1 times
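
A minimal sketch of the S3-triggered Lambda from option A that starts an Amazon Transcribe job for each uploaded recording. The media format, language code, and output bucket name are assumptions for illustration, not details from the question.

```python
# Hypothetical Lambda handler: when a call recording lands in S3, start an
# Amazon Transcribe job for it.
import boto3

transcribe = boto3.client("transcribe")

def handler(event, context):
    for record in event["Records"]:                        # S3 event notification records
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        job_name = key.replace("/", "-")                   # job names must be unique per account
        transcribe.start_transcription_job(
            TranscriptionJobName=job_name,
            Media={"MediaFileUri": f"s3://{bucket}/{key}"},
            MediaFormat="wav",                              # assumption about the recordings
            LanguageCode="en-US",                           # assumption
            OutputBucketName="call-transcripts-bucket",     # placeholder
        )
```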

" # Yamchi 6 months, 1 week ago


A
S3 + Glacier + API GW + Lambda + Amazon Transcribe
upvoted 2 times


Question #582 Topic 1

A Solutions Architect is building a containerized .NET Core application that will run in AWS Fargate. The backend of the application requires
Microsoft SQL Server with high availability. All tiers of the application must be highly available. The credentials used for the connection string to
SQL Server should not be stored on disk within the .NET Core front-end containers.
Which strategies should the Solutions Architect use to meet these requirements?

A. Set up SQL Server to run in Fargate with Service Auto Scaling. Create an Amazon ECS task execution role that allows the Fargate task
definition to get the secret value for the credentials to SQL Server running in Fargate. Specify the ARN of the secret in AWS Secrets Manager in
the secrets section of the Fargate task definition so the sensitive data can be injected into the containers as environment variables on startup
for reading into the application to construct the connection string. Set up the .NET Core service using Service Auto Scaling behind an
Application Load Balancer in multiple Availability Zones.

B. Create a Multi-AZ deployment of SQL Server on Amazon RDS. Create a secret in AWS Secrets Manager for the credentials to the RDS
database. Create an Amazon ECS task execution role that allows the Fargate task definition to get the secret value for the credentials to the
RDS database in Secrets Manager. Specify the ARN of the secret in Secrets Manager in the secrets section of the Fargate task definition so
the sensitive data can be injected into the containers as environment variables on startup for reading into the application to construct the
connection string. Set up the .NET Core service in Fargate using Service Auto Scaling behind an Application Load Balancer in multiple
Availability Zones.

C. Create an Auto Scaling group to run SQL Server on Amazon EC2. Create a secret in AWS Secrets Manager for the credentials to SQL Server
running on EC2. Create an Amazon ECS task execution role that allows the Fargate task definition to get the secret value for the credentials to
SQL Server on EC2. Specify the ARN of the secret in Secrets Manager in the secrets section of the Fargate task definition so the sensitive data
can be injected into the containers as environment variables on startup for reading into the application to construct the connection string. Set
up the .NET Core service using Service Auto Scaling behind an Application Load Balancer in multiple Availability Zones.

D. Create a Multi-AZ deployment of SQL Server on Amazon RDS. Create a secret in AWS Secrets Manager for the credentials to the RDS
database. Create non-persistent empty storage for the .NET Core containers in the Fargate task definition to store the sensitive information.
Create an Amazon ECS task execution role that allows the Fargate task definition to get the secret value for the credentials to the RDS
database in Secrets Manager. Specify the ARN of the secret in Secrets Manager in the secrets section of the Fargate task definition so the
sensitive data can be written to the non-persistent empty storage on startup for reading into the application to construct the connection string.
Set up the .NET Core service using Service Auto Scaling behind an Application Load Balancer in multiple Availability Zones.

Correct Answer: D

Community vote distribution


B (100%)

" # Nemer Highly Voted $ 1 year, 1 month ago


B. Secrets Manager natively supports SQL Server on RDS. No real need to create additional 'ephemeral storage' to fetch credentials, as these
can be injected to containers as environment variables.
https://aws.amazon.com/premiumsupport/knowledge-center/ecs-data-security-container-task/
upvoted 33 times

" # rcher 1 year ago


agreed, deploying something similar in Fargate that requires secrets from Secrets Manager.
upvoted 3 times
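
Here is a minimal Python (boto3) sketch of the pieces option B describes: the container definition's "secrets" section references the Secrets Manager ARN so the value is injected as an environment variable at startup, and the task execution role (which must allow secretsmanager:GetSecretValue) performs the retrieval. All ARNs, names, and sizes are placeholders.

```python
# Hypothetical Fargate task definition registration showing the secrets section.
import boto3

ecs = boto3.client("ecs")

SECRET_ARN = "arn:aws:secretsmanager:us-east-1:111122223333:secret:rds-credentials-AbCdEf"  # placeholder

ecs.register_task_definition(
    family="dotnet-frontend",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="512",
    memory="1024",
    # This role must be allowed to call secretsmanager:GetSecretValue on SECRET_ARN.
    executionRoleArn="arn:aws:iam::111122223333:role/ecsTaskExecutionRole",
    containerDefinitions=[
        {
            "name": "web",
            "image": "111122223333.dkr.ecr.us-east-1.amazonaws.com/dotnet-frontend:latest",
            "secrets": [
                # Injected as an environment variable at container start; never written to disk.
                {"name": "DB_CREDENTIALS", "valueFrom": SECRET_ARN}
            ],
        }
    ],
)
```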

" # T14102020 Highly Voted $ 1 year ago


Correct is B. RDS + Secret Manager + Without non- persistent empty storage
upvoted 5 times

" # AwsBRFan Most Recent % 1 month, 2 weeks ago


Selected Answer: B
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/specifying-sensitive-data-secrets.html
upvoted 1 times

" # TechX 4 months, 1 week ago


Selected Answer: B
Answer: B
Explanation:
By default tasks in Fargate are assigned ephemeral storage. https://docs.aws.amazon.com/AmazonECS/latest/developerguide/fargate-task-storage.html https://aws.amazon.com/premiumsupport/knowledge-center/ecs-data-security-container-task/
upvoted 1 times
" # kangtamo 4 months, 1 week ago
Selected Answer: B
It should be B, retrieving RDS credentials from Secret Manager.
upvoted 1 times

" # jyrajan69 8 months, 1 week ago


Unless there is a specific reasons for using non-persistent storage the answer must be B. Best practice is to use roles, and B is the only answer
with that
upvoted 1 times

" # challenger1 11 months ago


My Answer: B
B uses ECS - containers solution for Fargate
upvoted 1 times

" # AzureDP900 11 months ago


I will go with B
upvoted 1 times

" # Suresh108 1 year ago


Method of elimination:

The application must be highly available = Multi-AZ (only B and D have it).

Out of those two, injecting environment variables works well, hence B.
upvoted 3 times

" # WhyIronMan 1 year ago


I'll go with B
upvoted 2 times

" # Waiweng 1 year ago


it's B
upvoted 2 times

" # blackgamer 1 year ago


B is the answer.
upvoted 1 times

" # gsw 1 year ago


whats the problem with D?
upvoted 4 times

" # Kian1 1 year ago


going with B
upvoted 1 times

" # Ebi 1 year ago


B is the correct answer
upvoted 3 times

" # 01037 1 year ago


D doesn't work?
upvoted 1 times

" # Bulti 1 year ago


Correct answer is B
upvoted 2 times


Question #583 Topic 1

An enterprise company wants to implement cost controls for all its accounts in AWS Organizations, which has full features enabled. The company
has mapped organizational units (OUs) to its business units, and it wants to bill these business units for their individual AWS spending. There has
been a recent spike in the company's AWS bill, which is generating attention from the Finance team. A Solutions Architect needs to investigate the
cause of the spike while designing a solution that will track AWS costs in Organizations and generate a notification to the required teams if costs
from a business unit exceed a specific monetary threshold.
Which solution will meet these requirements?

A. Use Cost Explorer to troubleshoot the reason for the additional costs. Set up an AWS Lambda function to monitor the company's AWS bill
by each AWS account in an OU. Store the threshold amount set by the Finance team in the AWS Systems Manager Parameter Store. Write the
custom rules in the Lambda function to verify any hidden costs for the AWS accounts. Trigger a noti+cation from the Lambda function to an
Amazon SNS topic when a budget threshold is breached.

B. Use AWS Trusted Advisor to troubleshoot the reason for the additional costs. Set up an AWS Lambda function to monitor the company's
AWS bill by each AWS account in an OU. Store the threshold amount set by the Finance team in the AWS Systems Manager Parameter Store.
Write custom rules in the Lambda function to verify any hidden costs for the AWS accounts. Trigger an email to the required teams from the
Lambda function using Amazon SNS when a budget threshold is breached.

C. Use Cost Explorer to troubleshoot the reason for the additional costs. Create a budget using AWS Budgets with the monetary amount set by
the Finance team for each OU by grouping the linked accounts. Configure an Amazon SNS notification to the required teams in the budget.

D. Use AWS Trusted Advisor to troubleshoot the reason for the additional costs. Create a budget using AWS Budgets with the monetary
amount set by the Finance team for each OU by grouping the linked accounts. Add the Amazon EC2 instance types to be used in the company
as a budget filter. Configure an Amazon SNS topic with a subscription for the Finance team email address to receive budget notifications.

Correct Answer: C

Community vote distribution


C (100%)

" # Nemer Highly Voted $ 1 year, 1 month ago


C. Typical Cost Explorer & AWS Budget use case.
upvoted 24 times

" # Ebi Highly Voted $ 1 year ago


I will go with C
upvoted 6 times
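
A minimal sketch of the AWS Budgets piece of option C: a monthly cost budget scoped to a business unit's linked accounts, with an SNS notification when actual spend crosses the threshold. The account IDs, topic ARN, and budgeted amount are placeholders (the real threshold would come from Finance per OU).

```python
# Hypothetical per-business-unit cost budget with an SNS alert.
import boto3

budgets = boto3.client("budgets")

budgets.create_budget(
    AccountId="111122223333",  # management (payer) account, placeholder
    Budget={
        "BudgetName": "bu-finance-monthly",
        "BudgetLimit": {"Amount": "10000", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
        # Group the OU's member accounts so the BU is billed/tracked as a unit.
        "CostFilters": {"LinkedAccount": ["222233334444", "333344445555"]},
    },
    NotificationsWithSubscribers=[
        {
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 100.0,          # percent of the budgeted amount
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [
                {"SubscriptionType": "SNS",
                 "Address": "arn:aws:sns:us-east-1:111122223333:budget-alerts"}  # placeholder
            ],
        }
    ],
)
```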

" # sathishleorai Most Recent % 2 months, 2 weeks ago


Selected Answer: C
Best answer
upvoted 1 times

" # cldy 11 months ago


C. Use Cost Explorer to troubleshoot the reason for the additional costs. Create a budget using AWS Budgets with the monetary amount set by
the Finance team for each OU by grouping the linked accounts. Configure an Amazon SNS notification to the required teams in the budget.
upvoted 1 times

" # AzureDP900 11 months ago


I will go with C
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with C
upvoted 3 times

" # Waiweng 1 year ago


it's C
upvoted 5 times

" # Bulti 1 year ago


Answer is C
upvoted 2 times


" # T14102020 1 year ago


Correct is C. Cost Explorer + Budget
upvoted 3 times

" # jackdryan 1 year ago


I'll go with C
upvoted 2 times

" # CYL 1 year ago


C. Cost explorer give the breakdown by OU. AWS Budget to set limits and alerts.
upvoted 2 times

" # Paitan 1 year, 1 month ago


Definitely C
upvoted 1 times

" # shakthi000005 1 year, 1 month ago


Ans is C
upvoted 2 times

" # Anila_Dhharisi 1 year, 1 month ago


Ater re-reading the question, I think its C and not B.
Cost Explorer – See patterns in AWS spend over time, project future costs, identify areas that need further inquiry, observe Reserved Instance
utilization, observe Reserved Instance coverage, and receive Reserved Instance recommendations.

AWS Trusted Advisor – Get real-time identification of potential areas for optimization.

AWS Budgets – Set custom budgets that trigger alerts when cost or usage exceed (or are forecasted to exceed) a budgeted amount. Budgets
can be set based on tags and accounts as well as resource types.
upvoted 6 times

" # qaz12wsx 1 year, 1 month ago


before writing something make sure that it's correct
upvoted 8 times

" # petebear55 1 year ago


Stop being a school teacher .. everyone is entitled to make mistakes
upvoted 13 times

" # Anila_Dhharisi 1 year, 1 month ago


B is the answer. As Cost Explorer allows users to examine usage patterns over time. Trusted Advisor alerts users about resources with low
utilization. and as well Trusted Advisor inspects your AWS environment and makes recommendations for saving money, improving system
performance, or closing security gaps.
upvoted 2 times


Question #584 Topic 1

A company is developing a new service that will be accessed using TCP on a static port. A solutions architect must ensure that the service is
highly available, has redundancy across Availability Zones, and is accessible using the DNS name my.service.com, which is publicly accessible.
The service must use fixed address assignments so other companies can add the addresses to their allow lists.
Assuming that resources are deployed in multiple Availability Zones in a single Region, which solution will meet these requirements?

A. Create Amazon EC2 instances with an Elastic IP address for each instance. Create a Network Load Balancer (NLB) and expose the static
TCP port. Register EC2 instances with the NLB. Create a new name server record set named my.service.com, and assign the Elastic IP
addresses of the EC2 instances to the record set. Provide the Elastic IP addresses of the EC2 instances to the other companies to add to their
allow lists.

B. Create an Amazon ECS cluster and a service definition for the application. Create and assign public IP addresses for the ECS cluster. Create
a Network Load Balancer (NLB) and expose the TCP port. Create a target group and assign the ECS cluster name to the NLB. Create a new A
record set named my.service.com, and assign the public IP addresses of the ECS cluster to the record set. Provide the public IP addresses of
the ECS cluster to the other companies to add to their allow lists.

C. Create Amazon EC2 instances for the service. Create one Elastic IP address for each Availability Zone. Create a Network Load Balancer
(NLB) and expose the assigned TCP port. Assign the Elastic IP addresses to the NLB for each Availability Zone. Create a target group and
register the EC2 instances with the NLB. Create a new A (alias) record set named my.service.com, and assign the NLB DNS name to the record
set.

D. Create an Amazon ECS cluster and a service definition for the application. Create and assign public IP address for each host in the cluster.
Create an Application Load Balancer (ALB) and expose the static TCP port. Create a target group and assign the ECS service definition name
to the ALB. Create a new CNAME record set and associate the public IP addresses to the record set. Provide the Elastic IP addresses of the
Amazon EC2 instances to the other companies to add to their allow lists.

Correct Answer: B

Community vote distribution


C (100%)

" # Nemer Highly Voted $ 1 year, 1 month ago


C. NLB with one Elastic IP per AZ to handle TCP traffic. Alias record set named my.service.com.
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-to-elb-load-balancer.html
upvoted 22 times

" # examguru2020 Highly Voted $ 1 year, 1 month ago


C is correct.
If you create an internet-facing load balancer, you can select an Elastic IP address for each Availability Zone. This provides your load balancer
with static IP addresses.
https://docs.aws.amazon.com/elasticloadbalancing/latest/network/create-network-load-balancer.html
upvoted 7 times
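
To illustrate option C, here is a minimal Python (boto3) sketch of creating an internet-facing NLB with one Elastic IP per Availability Zone via SubnetMappings, which is what gives partners fixed addresses to allow-list. Subnet and allocation IDs are placeholders.

```python
# Hypothetical NLB with static Elastic IPs, one per AZ.
import boto3

elbv2 = boto3.client("elbv2")

response = elbv2.create_load_balancer(
    Name="my-service-nlb",
    Type="network",
    Scheme="internet-facing",
    SubnetMappings=[
        {"SubnetId": "subnet-aaaa1111", "AllocationId": "eipalloc-11111111"},  # AZ a (placeholders)
        {"SubnetId": "subnet-bbbb2222", "AllocationId": "eipalloc-22222222"},  # AZ b (placeholders)
    ],
)

nlb_dns_name = response["LoadBalancers"][0]["DNSName"]
# A Route 53 alias A record for my.service.com then points at nlb_dns_name,
# while the Elastic IPs are the fixed addresses shared with the other companies.
print(nlb_dns_name)
```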

" # kadev Most Recent % 2 months, 2 weeks ago


"The service must use fixed address assignments in order for other businesses to add the addresses to their allow list" => That mean outbound
traffic need through fix IP.
=> C also wrong, IP public fixed of NL not related to outbound
upvoted 1 times

" # Byrney 9 hours, 4 minutes ago


C is right - the allow list on the 'other business' side includes the fixed IPs for the NLB as allowed destinations.
upvoted 1 times

" # kadev 2 months, 3 weeks ago


yep, C
https://aws.amazon.com/premiumsupport/knowledge-center/elb-attach-elastic-ip-to-public-nlb/
upvoted 1 times

" # jyrajan69 3 months, 1 week ago


Answer C is the only one with Alias DNS record which is needed to access AWS Resources
upvoted 1 times

" # bobsmith2000 5 months, 2 weeks ago


Selected Answer: C
No-brainer
upvoted 2 times
" # Devgela 9 months, 2 weeks ago
C. Assigning the Elastic IP addresses to the NLB is what makes this answer correct.
upvoted 1 times

" # cldy 11 months ago


C. Create Amazon EC2 instances for the service. Create one Elastic IP address for each Availability Zone. Create a Network Load Balancer (NLB)
and expose the assigned TCP port. Assign the Elastic IP addresses to the NLB for each Availability Zone. Create a target group and register the
EC2 instances with the NLB. Create a new A (alias) record set named my.service.com, and assign the NLB DNS name to the record set.
upvoted 1 times

" # AzureDP900 11 months ago


C is perfect.
upvoted 1 times

" # Rho_Ohm 11 months, 1 week ago


>> Ans: C
upvoted 1 times

" # acloudguru 11 months, 1 week ago


Selected Answer: C
C, only make sense one
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with C
upvoted 2 times

" # blackgamer 1 year ago


C is correct.
upvoted 2 times

" # Waiweng 1 year ago


it's C
upvoted 3 times

" # Ebi 1 year ago


I will go with C
upvoted 4 times

" # Bulti 1 year ago


Answer is C most probably. In A, B and D, ELB is created but Route53 never uses that as the resource in the A-record, Instead routes traffic
directly to the underlying EC2 or ECS instances. However there is no mention of providing Elastic IP addresses of the EC2 instances to the
external services that will use them for whitelisting. So I am a bit confused.
upvoted 3 times

" # alexmena1981 1 year ago


why not B?
upvoted 1 times

" # rcher 1 year ago


You can't register an ECS cluster name to a target group for an NLB. ECS supports dynamic port mapping and the target will "automatically" be
registered as long as it's indicated in the task definition.
upvoted 4 times


Question #585 Topic 1

A company is running a web application with On-Demand Amazon EC2 instances in Auto Scaling groups that scale dynamically based on custom
metrics. After extensive testing, the company determines that the m5.2xlarge instance size is optimal for the workload. Application data is stored
in db.r4.4xlarge Amazon RDS instances that are confirmed to be optimal. The traffic to the web application spikes randomly during the day.
What other cost-optimization methods should the company implement to further reduce costs without impacting the reliability of the application?

A. Double the instance count in the Auto Scaling groups and reduce the instance size to m5.large.

B. Reserve capacity for the RDS database and the minimum number of EC2 instances that are constantly running.

C. Reduce the RDS instance size to db.r4.xlarge and add five equivalently sized read replicas to provide reliability.

D. Reserve capacity for all EC2 instances and leverage Spot Instance pricing for the RDS database.

Correct Answer: B

Community vote distribution


B (100%)

" # Nemer Highly Voted $ 1 year, 1 month ago


B - is the reasonable option, as there is no point in changing instances sizes that are already optimal, and a spot instance for the db is a bad idea.
upvoted 20 times

" # Ebi Highly Voted $ 1 year ago


B is the right answer
upvoted 5 times

" # Ebi 1 year ago


I change to A after reading all the comments,
As the Autoscaling is based on metrics A can be the right answer if and only if minimum number of instances is 1, otherwise with min=2 this
solution does not help reducing cost
upvoted 3 times

" # AYANtheGLADIATOR Most Recent % 2 months ago


B is the answer here is the link .
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithReservedDBInstances.html
upvoted 1 times

" # Nano803 9 months, 2 weeks ago


Selected Answer: B
I like B
upvoted 1 times

" # cldy 11 months ago


B. Reserve capacity for the RDS database and the minimum number of EC2 instances that are constantly running.
upvoted 1 times

" # AzureDP900 11 months ago


I will go with B
upvoted 1 times

" # Viper57 1 year ago


The answer is clearly B. People are being confused by the term 'reserve capacity'. This is not the same as an on-demand capacity reservation.

This article by AWS clearly states that by 'reserving capacity' you are reserving the instances and reducing your costs. See -
https://aws.amazon.com/aws-cost-management/aws-cost-optimization/reserved-instances/
upvoted 1 times

" # Kopa 1 year ago


Im more for A, costs can be minimal and change more dynamic because for lower model type of instances.
upvoted 1 times

" # jobe42 1 year ago


B, the fact that RDS and EC2 is defined here as "optimal", no need to change them, so just reduce the cost with reserved instances for RDS and
EC2
upvoted 1 times


" # WhyIronMan 1 year ago


I'll go with B
You can easily safe money reserving the DB instance and the minimum number of ec2
upvoted 1 times

" # Waiweng 1 year ago


it's B
upvoted 3 times

" # ItsmeP 1 year ago


Ans B
A is incorrect as there is no change in billing if we go with double instance count with half capacity, it can minor degrade performance as well.
upvoted 1 times

" # Kian1 1 year ago


going with A
upvoted 1 times

" # rasti 1 year ago


Answer is A - to have more smaller instances to better react on dynamic scaling
The answer B is not giving you any discount. It's "Reserve capacity", not "Reserved Instance"
upvoted 5 times

" # bnh_fedi 1 year ago


and the minimum number of EC2 instance**
upvoted 2 times

" # QCO 1 year ago


Checked the pricing on the m5.2xlarge vs m5.xlarge and they are $0.384 and 0.192 per Hour. There is no further reduction in cost (no
savings), however with B, reserving capacity makes cost very granular which can result in cost reduction no matter how little. The question is
more focused on cost reduction rather than reliability/performance of the solution
upvoted 4 times

" # ju0n 1 year ago


Answer is A. Reserved capacity doesn't offer any billing discount.

https://aws.amazon.com/blogs/aws/s3-lifecycle-management-update-support-for-multipart-uploads-and-delete-markers/
upvoted 1 times

" # bnh_fedi 1 year ago


and the minimum number of EC2 instance**
upvoted 1 times

" # Bulti 1 year ago


Correct answer is B.
upvoted 1 times

" # T14102020 1 year ago


Correct is B. reserve capacity
upvoted 1 times


Question #586 Topic 1

During an audit, a security team discovered that a development team was putting IAM user secret access keys in their code and then committing it
to an AWS
CodeCommit repository. The security team wants to automatically find and remediate instances of this security vulnerability.
Which solution will ensure that the credentials are appropriately secured automatically?

A. Run a script nightly using AWS Systems Manager Run Command to search for credentials on the development instances. If found, use AWS
Secrets Manager to rotate the credentials.

B. Use a scheduled AWS Lambda function to download and scan the application code from CodeCommit. If credentials are found, generate
new credentials and store them in AWS KMS.

C. Configure Amazon Macie to scan for credentials in CodeCommit repositories. If credentials are found, trigger an AWS Lambda function to
disable the credentials and notify the user.

D. Configure a CodeCommit trigger to invoke an AWS Lambda function to scan new code submissions for credentials. If credentials are found,
disable them in AWS IAM and notify the user.

Correct Answer: C
Reference:
https://aws.amazon.com/blogs/security/how-to-find-update-access-keys-password-mfa-aws-management-console/

Community vote distribution


D (100%)

" # Nemer Highly Voted $ 1 year, 1 month ago


D. CodeCommit trigger with Lambda.
https://docs.aws.amazon.com/lambda/latest/dg/services-codecommit.html
upvoted 27 times

" # rcher 1 year ago


Sample code here https://github.com/aws-samples/discover-sensitive-data-in-aws-codecommit-with-aws-lambda/tree/main/src/handlers

Running regex after all hehe


upvoted 3 times
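
Here is a minimal sketch of the option D flow, in the spirit of the aws-samples repository linked above: the CodeCommit trigger invokes a Lambda function that diffs the pushed commit, searches changed files for access key IDs, deactivates any it finds in IAM, and notifies via SNS. The regex, topic ARN, and error handling are simplified assumptions, not the official sample code.

```python
# Hypothetical Lambda invoked by a CodeCommit repository trigger.
import re
import boto3

codecommit = boto3.client("codecommit")
iam = boto3.client("iam")
sns = boto3.client("sns")

ACCESS_KEY_RE = re.compile(r"AKIA[0-9A-Z]{16}")   # heuristic pattern for access key IDs
TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:leaked-credentials"  # placeholder

def handler(event, context):
    for record in event["Records"]:
        repo = record["eventSourceARN"].split(":")[5]
        commit_id = record["codecommit"]["references"][0]["commit"]
        parents = codecommit.get_commit(repositoryName=repo, commitId=commit_id)["commit"].get("parents", [])
        kwargs = {"repositoryName": repo, "afterCommitSpecifier": commit_id}
        if parents:
            kwargs["beforeCommitSpecifier"] = parents[0]   # diff only the new commit
        # Pagination of get_differences is omitted for brevity.
        for diff in codecommit.get_differences(**kwargs)["differences"]:
            blob = diff.get("afterBlob")
            if not blob:                                   # file was deleted in this commit
                continue
            content = codecommit.get_blob(repositoryName=repo, blobId=blob["blobId"])["content"]
            for key_id in ACCESS_KEY_RE.findall(content.decode("utf-8", errors="ignore")):
                owner = iam.get_access_key_last_used(AccessKeyId=key_id)["UserName"]
                iam.update_access_key(UserName=owner, AccessKeyId=key_id, Status="Inactive")
                sns.publish(
                    TopicArn=TOPIC_ARN,
                    Message=f"Disabled leaked access key {key_id} (user {owner}) found in {repo}",
                )
```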

" # MMARTINEZ85 Highly Voted $ 1 year, 1 month ago


C. Macie can be used with CodeCommit.
https://docs.aws.amazon.com/codecommit/latest/userguide/data-protection.html
upvoted 9 times

" # misterfaust 1 year, 1 month ago


"Use advanced managed security services such as Amazon Macie, which assists in discovering and securing personal data that is stored in
Amazon S3."
upvoted 1 times

" # Gmail78 1 year, 1 month ago


which it exclude C from my understanding...D is then the answer
upvoted 1 times

" # bbnbnuyh 1 year, 1 month ago


Macie can only scan S3 buckets. D is the answer
upvoted 3 times

" # ymengxing 1 year ago


That's right!
AWS CodeCommit stores your repositories in Amazon S3 and Amazon DynamoDB.
So use Macie.
See https://aws.amazon.com/codecommit/features/
High Availability and Durability.
upvoted 4 times

" # kirrim 1 year ago


CodeCommit may use S3 on the back end (and it also uses DynamoDB on the back end) but I don't think they're stored in buckets that
you can see or point Macie to. In fact, there are even solutions out there describing how to copy your repo from CodeCommit into S3 to
back it up: https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-event-driven-backups-from-codecommit-to-amazon-s3-using-codebuild-and-cloudwatch-events.html

D: AWS has an exact architecture for doing this: https://aws.amazon.com/blogs/compute/discovering-sensitive-data-in-aws-codecommit-


with-aws-lambda-2/
upvoted 4 times
" # Santo99 Most Recent % 3 months ago
Selected Answer: D
Macke is only for S3
upvoted 1 times

" # cannottellname 9 months, 1 week ago


Amazon Macie is only used for S3. Hence, D seems good :):)
upvoted 1 times

" # tkanmani76 9 months, 2 weeks ago


D - https://aws.amazon.com/blogs/compute/discovering-sensitive-data-in-aws-codecommit-with-aws-lambda-2/
upvoted 1 times

" # AzureDP900 11 months ago


D is right answer, I think this question in Neal Davis practice tests
upvoted 1 times

" # ryu10_09 11 months, 1 week ago


Selected Answer: D
https://aws.amazon.com/blogs/compute/discovering-sensitive-data-in-aws-codecommit-with-aws-lambda-2/
upvoted 1 times

" # nodogoshi 1 year ago


D. Amazon Macie is for S3 Service, not for CodeCommit.
https://docs.aws.amazon.com/codecommit/latest/userguide/data-protection.html
”Use advanced managed security services such as Amazon Macie, which assists in discovering and securing personal data that is stored in
Amazon S3.”
[stored in Amazon S3.]
upvoted 1 times

" # TomPaschenda 1 year ago


For D, there is a blog post describing that exact solution: https://aws.amazon.com/blogs/compute/discovering-sensitive-data-in-aws-
codecommit-with-aws-lambda-2/
For C: I dont think Macie works directly with CodeCommit
upvoted 4 times

" # student22 1 year ago


Good link, thanks.
Answer is D
upvoted 1 times

" # Suresh108 1 year ago


I am choosing DDDDDD.

https://aws.amazon.com/blogs/compute/discovering-sensitive-data-in-aws-codecommit-with-aws-lambda-2/
upvoted 2 times

" # student22 1 year ago


D
Not C - Macie is for s3
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with D
upvoted 1 times

" # Kopa 1 year ago


Only D is a prompt and immediate solution regarding security.
upvoted 1 times

" # blackgamer 1 year ago


D is answer. C is not relevant , it is to scan S3.
upvoted 1 times

" # Waiweng 1 year ago


it's D
upvoted 2 times


" # PredaOvde 1 year ago


Cannot be D. It will check only for newly committed code, not for old code, which is required. I pick A.
upvoted 1 times

" # Pupu86 1 year ago


Answer is D.
CodeCommit itself is a repository.

Using Macie means you are saving your code artefacts to S3 instead.
upvoted 1 times


Question #587 Topic 1

A company is using AWS CodePipeline for the CI/CD of an application to an Amazon EC2 Auto Scaling group. All AWS resources are defined in
AWS
CloudFormation templates. The application artifacts are stored in an Amazon S3 bucket and deployed to the Auto Scaling group using instance
user data scripts.
As the application has become more complex, recent resource changes in the CloudFormation templates have caused unplanned downtime.
How should a solutions architect improve the CI/CD pipeline to reduce the likelihood that changes in the templates will cause downtime?

A. Adapt the deployment scripts to detect and report CloudFormation error conditions when performing deployments. Write test plans for a
testing team to execute in a non-production environment before approving the change for production.

B. Implement automated testing using AWS CodeBuild in a test environment. Use CloudFormation change sets to evaluate changes before
deployment. Use AWS CodeDeploy to leverage blue/green deployment patterns to allow evaluations and the ability to revert changes, if
needed.

C. Use plugins for the integrated development environment (IDE) to check the templates for errors, and use the AWS CLI to validate that the
templates are correct. Adapt the deployment code to check for error conditions and generate notifications on errors. Deploy to a test
environment and execute a manual test plan before approving the change for production.

D. Use AWS CodeDeploy and a blue/green deployment pattern with CloudFormation to replace the user data deployment scripts. Have the
operators log in to running instances and go through a manual test plan to verify the application is running as expected.

Correct Answer: D

Community vote distribution


B (100%)

" # Nemer Highly Voted $ 1 year, 1 month ago


B. Why do manual testing in option D when it can be automated with CodeBuild? CF Change sets to preview changes, and CodeDeploy b/g
deployment with ASG.

https://aws.amazon.com/blogs/devops/performing-bluegreen-deployments-with-aws-codedeploy-and-auto-scaling-groups/
upvoted 21 times
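
A minimal Python (boto3) sketch of the change-set evaluation step from option B, which is what lets the pipeline see resource replacements before they happen. Stack, change set, and template names are placeholders.

```python
# Hypothetical change-set review before executing a CloudFormation update.
import boto3

cloudformation = boto3.client("cloudformation")

cloudformation.create_change_set(
    StackName="web-app-stack",                                        # placeholder
    ChangeSetName="pipeline-build-42",                                # placeholder
    TemplateURL="https://s3.amazonaws.com/my-artifacts/template.yaml",  # placeholder
    Capabilities=["CAPABILITY_IAM"],
)

waiter = cloudformation.get_waiter("change_set_create_complete")
waiter.wait(StackName="web-app-stack", ChangeSetName="pipeline-build-42")

changes = cloudformation.describe_change_set(
    StackName="web-app-stack", ChangeSetName="pipeline-build-42"
)["Changes"]
for change in changes:
    rc = change["ResourceChange"]
    # A "Replacement" of True here is exactly the kind of change that caused the downtime.
    print(rc["Action"], rc["LogicalResourceId"], rc.get("Replacement"))
```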

" # hilft Most Recent % 3 months, 1 week ago


of course, it's B.
upvoted 1 times

" # xyzman 5 months, 1 week ago


Selected Answer: B
it is B
upvoted 2 times

" # cldy 11 months ago


B. Implement automated testing using AWS CodeBuild in a test environment. Use CloudFormation change sets to evaluate changes before
deployment. Use AWS CodeDeploy to leverage blue/green deployment patterns to allow evaluations and the ability to revert changes, if needed.
upvoted 3 times

" # AzureDP900 11 months ago


B is right , this is straight forward question
upvoted 2 times

" # acloudguru 11 months, 2 weeks ago


Selected Answer: B
codebuild can provide automatic test
upvoted 3 times

" # nodogoshi 1 year ago


B. automated testing always best
upvoted 1 times

" # jobe42 1 year ago


B: fully automated, with ChangeSets, all other answers have way to much room for human errors.
upvoted 2 times


" # WhyIronMan 1 year ago


I'll go with B
upvoted 1 times

" # blackgamer 1 year ago


It is B.
upvoted 1 times

" # Waiweng 1 year ago


it's B
upvoted 3 times

" # Kian1 1 year ago


going with B
upvoted 1 times

" # Ebi 1 year, 1 month ago


I will go with B
upvoted 3 times

" # kopper2019 1 year, 1 month ago


Answer is B
upvoted 2 times

" # Bulti 1 year, 1 month ago


B is correct.
upvoted 2 times

" # T14102020 1 year, 1 month ago


Correct is B. automated testing using AWS CodeBuild
upvoted 1 times

" # jackdryan 1 year, 1 month ago


I'll go with B
upvoted 2 times


Question #588 Topic 1

A financial services company is moving to AWS and wants to enable developers to experiment and innovate while preventing access to production
applications.
The company has the following requirements:
✑ Production workloads cannot be directly connected to the internet.
✑ All workloads must be restricted to the us-west-2 and eu-central-1 Regions.
✑ Notification should be sent when developer sandboxes exceed $500 in AWS spending monthly.
Which combination of actions needs to be taken to create a multi-account structure that meets the company's requirements? (Choose three.)

A. Create accounts for each production workload within an organization in AWS Organizations. Place the production accounts within an
organizational unit (OU). For each account, delete the default VPC. Create an SCP with a Deny rule for the attach an internet gateway and
create a default VPC actions. Attach the SCP to the OU for the production accounts.

B. Create accounts for each production workload within an organization in AWS Organizations. Place the production accounts within an
organizational unit (OU). Create an SCP with a Deny rule on the attach an internet gateway action. Create an SCP with a Deny rule to prevent
use of the default VPC. Attach the SCPs to the OU for the production accounts.

C. Create a SCP containing a Deny Effect for cloudfront:*, iam:*, route53:*, and support:* with a StringNotEquals condition on an
aws:RequestedRegion condition key with us-west-2 and eu-central-1 values. Attach the SCP to the organization's root.

D. Create an IAM permission boundary containing a Deny Effect for cloudfront:*, iam:*, route53:*, and support:* with a StringNotEquals
condition on an aws:RequestedRegion condition key with us-west-2 and eu-central-1 values. Attach the permission boundary to an IAM group
containing the development and production users.

E. Create accounts for each development workload within an organization in AWS Organizations. Place the development accounts within an
organizational unit (OU). Create a custom AWS Config rule to deactivate all IAM users when an account's monthly bill exceeds $500.

F. Create accounts for each development workload within an organization in AWS Organizations. Place the development accounts within an
organizational unit (OU). Create a budget within AWS Budgets for each development account to monitor and report on monthly spending
exceeding $500.

Correct Answer: ACF

Community vote distribution


ACF (90%) 10%
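
For reference, here is a minimal sketch of the region-restriction SCP that option C paraphrases, following the well-known pattern from the AWS Organizations documentation (deny every action except a short list of global services whenever aws:RequestedRegion is not an approved Region), created and attached with boto3. The root ID and policy name are placeholders.

```python
# Hypothetical region-restriction SCP attached at the organization root.
import json
import boto3

organizations = boto3.client("organizations")

scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyOutsideApprovedRegions",
            "Effect": "Deny",
            # Global services are exempted via NotAction so they keep working.
            "NotAction": ["cloudfront:*", "iam:*", "route53:*", "support:*"],
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {"aws:RequestedRegion": ["us-west-2", "eu-central-1"]}
            },
        }
    ],
}

policy = organizations.create_policy(
    Name="restrict-regions",
    Description="Allow only us-west-2 and eu-central-1 (global services exempt)",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)
organizations.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="r-examplerootid111",   # organization root ID (placeholder)
)
```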

" # Nemer Highly Voted $ 1 year, 1 month ago


BCF - Production and dev accounts in separate OUs, AWS Budget for notifications.
Between A & B, deleting default VPC seems excessive. SCP should be able to prevent using it. Not 100% sure.
upvoted 25 times

" # pablobairat 1 year ago


B is wrong for one simple reason. You can delete the default VPC and create a new one. The new one will have a new arn so the SCP will not
have effect on it.
Just denying to create an IG does not prevent to create a new default VPC with the IG attached.
From here: https://docs.aws.amazon.com/vpc/latest/userguide/default-vpc.html
"Amazon creates the above resources on your behalf. IAM policies do not apply to these actions because you do not perform these actions.
For example, if you have an IAM policy that denies the ability to call CreateInternetGateway, and then you call CreateDefaultVpc, the internet
gateway in the default VPC is still created."

In conclusion, ACF
upvoted 7 times

" # student22 1 year ago


Good point.
Answer: ACF
upvoted 1 times

" # ipindado2020 1 year, 1 month ago


I get the point... both questions want to reflect equivalent actions, but for me the redaction of B is very confusing...

"and create a default VPC actions. Create an SCP with a Deny rule to prevent use of the default VPC"

Obviously it can be understood that "create a default VPC actions" means the default VPC for the prod environment.


And when it says "Create an SCP with a Deny rule to prevent use of the default VPC", it can be understood that it is talking about the
original "default VPC", not the new one... isn't it?

In any case there is too much "it can be understood"... So I go for ACF; nobody will ever use that VPC, so for me it makes more sense to clean up the
entire network structure of prod (considering B's wording).
upvoted 1 times
" # Ebi Highly Voted $ 1 year ago
ACF is the right answer.
B can not be the answer, there is no way to have one single SCP at OU or root level to deny using of default VPC in each account
upvoted 23 times

" # heany 1 month ago


Should be ADF. As there could be other types of workload which could be in other org, e.g. sandbox workloads in CTO org, etc. The question
doesn't imply there are only two orgs in this company
upvoted 2 times

" # gpark 1 year ago


Touche
upvoted 2 times

" # JohnPi Most Recent % 2 weeks, 6 days ago


Selected Answer: BCF
BCF
A does not scale
upvoted 1 times

" # JohnPi 2 weeks, 6 days ago


ACF you cannot "Create an SCP with a Deny rule to prevent use of the default VPC"
upvoted 1 times

" # tomosabc1 1 month ago


Selected Answer: ACF
The answer should be ACF.

B(wrong): "Create an SCP with a Deny rule to prevent use of the default VPC." It is impossible to do this.
D(wrong): Permission boundary can only be attached to user or role, rather than IAM group.
E(wrong): Obviously wrong. AWS Budgets should be used.
upvoted 2 times

" # Azerty1313 1 month, 1 week ago


C isn't recommended see: https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html#scp-warning-testing-
effect
upvoted 1 times

" # AwsBRFan 1 month, 2 weeks ago


Selected Answer: ACF
AWS strongly recommends that you don't attach SCPs to the root of your organization without thoroughly testing the impact that the policy has
on accounts.

But if tested why not?


upvoted 2 times

" # hilft 3 months, 2 weeks ago


It's BDF.
Don't mess around with IGW
AWS don't recommend SCP on root account
upvoted 1 times

" # aandc 4 months, 1 week ago


Selected Answer: ACF
Cannot find how to "Deny rule to prevent use of the default VPC"
upvoted 2 times

" # roka_ua 7 months, 1 week ago


Selected Answer: ACF
Vote ACF
upvoted 3 times

" # futen0326 8 months, 2 weeks ago


D instead of C. You don't have to attach an SCP to the root, it's bad practice, you can be a little more granular with D. It works better for the
requirement.


upvoted 1 times
" # tkanmani76 9 months, 3 weeks ago
A - Why not B ? Tried searching SCP for VPC - we can deny creation of default VPC (CreateDefaultVpc), there are none to stop using it. So only
way is to delete.
D - Why not C ? Per AWS it is not a good practice to attach SCP to root.
F - No contention with E here.
upvoted 1 times

" # AzureDP900 11 months ago


I have to revisit this question and confirm between ACF vs BCF
upvoted 1 times

" # ryu10_09 11 months, 2 weeks ago


why A, you cannot delete the default VPC. so A is not valid. It is BCF
upvoted 1 times

" # ryu10_09 11 months, 2 weeks ago


I change my mind. i have checked and you can delete default VPC
upvoted 1 times

" # Kopa 11 months, 3 weeks ago


A,C,F should be
upvoted 1 times

" # near22 1 year ago


ADF
for c, AWS don't recommend apply SCP to root
upvoted 1 times

" # littlecurly 1 year ago


B,D,F
D denies the root to the global services including IAM, which doesn't make sense...
upvoted 1 times

" # student2020 1 year ago


ACF is the answer
There is no action to prevent use of default VPC
https://docs.aws.amazon.com/AWSEC2/latest/APIReference/OperationList-query-vpc.html
upvoted 3 times
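
To make the thread's point concrete: there is an EC2 action an SCP can deny to stop default VPCs from being re-created (ec2:CreateDefaultVpc), but there is no action that denies "use" of a default VPC that already exists, which is why deleting it first matters. A minimal sketch, assuming a hypothetical policy name and OU ID:

import json
import boto3

# Hypothetical SCP: blocks re-creation of default VPCs/subnets. It cannot
# block use of a default VPC that already exists.
scp_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyCreateDefaultVpc",
        "Effect": "Deny",
        "Action": ["ec2:CreateDefaultVpc", "ec2:CreateDefaultSubnet"],
        "Resource": "*",
    }],
}

org = boto3.client("organizations")
policy = org.create_policy(
    Name="deny-default-vpc-creation",               # illustrative name
    Description="Block creation of default VPCs and default subnets",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp_document),
)
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-exam-pleouid1",                     # hypothetical OU ID
)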


Question #589 Topic 1

A company is hosting a three-tier web application in an on-premises environment. Due to a recent surge in traffic that resulted in downtime and a
significant financial impact, company management has ordered that the application be moved to AWS. The application is written in .NET and has a
dependency on a MySQL database. A solutions architect must design a scalable and highly available solution to meet the demand of 200,000 daily
users.
Which steps should the solutions architect take to design an appropriate solution?

A. Use AWS Elastic Beanstalk to create a new application with a web server environment and an Amazon RDS MySQL Multi-AZ DB instance.
The environment should launch a Network Load Balancer (NLB) in front of an Amazon EC2 Auto Scaling group in multiple Availability Zones.
Use an Amazon Route 53 alias record to route traffic from the company's domain to the NLB.

B. Use AWS CloudFormation to launch a stack containing an Application Load Balancer (ALB) in front of an Amazon EC2 Auto Scaling group
spanning three Availability Zones. The stack should launch a Multi-AZ deployment of an Amazon Aurora MySQL DB cluster with a Retain
deletion policy. Use an Amazon Route 53 alias record to route traffic from the company's domain to the ALB.

C. Use AWS Elastic Beanstalk to create an automatically scaling web server environment that spans two separate Regions with an Application
Load Balancer (ALB) in each Region. Create a Multi-AZ deployment of an Amazon Aurora MySQL DB cluster with a cross-Region read replica.
Use Amazon Route 53 with a geoproximity routing policy to route traffic between the two Regions.

D. Use AWS CloudFormation to launch a stack containing an Application Load Balancer (ALB) in front of an Amazon ECS cluster of Spot
instances spanning three Availability Zones. The stack should launch an Amazon RDS MySQL DB instance with a Snapshot deletion policy. Use
an Amazon Route 53 alias record to route traffic from the company's domain to the ALB.

Correct Answer: A

Community vote distribution


B (64%) A (36%)
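
The operational detail that separates option B from option D is the stack's DeletionPolicy on the database. A minimal sketch of launching such a stack with boto3; the template fragment, stack name, and secret reference are placeholders, not part of the original question:

import boto3

# Fragment only: the full template would also define the ALB, the Auto
# Scaling group across three AZs, and the Route 53 alias record.
template_body = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  AuroraCluster:
    Type: AWS::RDS::DBCluster
    DeletionPolicy: Retain        # keep the cluster even if the stack is deleted
    Properties:
      Engine: aurora-mysql
      MasterUsername: admin
      MasterUserPassword: '{{resolve:secretsmanager:placeholder-db-secret}}'
"""

cfn = boto3.client("cloudformation")
cfn.create_stack(StackName="web-app-stack", TemplateBody=template_body)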

" # Nemer Highly Voted $ 1 year, 1 month ago


B. Web app needs ALB. Multi-AZ deployment should address HA. Retain deletion policy to not delete the db with the stack.
upvoted 22 times

" # hilft 3 months, 1 week ago


i think B is better than A
upvoted 1 times

" # oscargee 1 year ago


I don't think web app combined with ALB. My understanding is access heavy using NLB, calculation heavy using ALB.
upvoted 3 times

" # GeniusMikeLiu 9 months, 2 weeks ago


why not C?
upvoted 2 times

" # Ebi Highly Voted $ 1 year ago


I will go with B
upvoted 5 times

" # Blair77 Most Recent % 1 week, 6 days ago


Selected Answer: B
BBB - Web App need ALB not NLB
upvoted 2 times

" # ToanVN1988 3 weeks ago


Selected Answer: A
Have to use CNAME record on R53 to maping with ALB. C is wrong
upvoted 2 times

" # WayneYi 1 month ago


200,000 accesses per day is really no big deal, no reason to use NLB
upvoted 1 times

" # kadev 2 months ago


The question focuses on HA.
Amazon Aurora is designed to spread storage across three AZs => more HA than plain RDS
upvoted 1 times
" # Sathish1412 2 months, 1 week ago
B is best option for the requirement
upvoted 1 times

" # MikeyJ 2 months, 3 weeks ago


Selected Answer: B
NLB in A is overkill.

daily demands of 200,000 users < Network Load Balancer is capable of handling millions of requests per second while maintaining ultra-low
latencies.
https://aws.amazon.com/elasticloadbalancing/network-load-balancer/
upvoted 2 times

" # Sathish1412 2 months, 1 week ago


You are correct!
upvoted 1 times

" # Millari 4 months ago


A. EB is already .NET ready
AWS Elastic Beanstalk for .NET makes it easier to deploy, manage, and scale your ASP.NET web applications that use Amazon Web Services.
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_NET.html

You can also add an Amazon RDS DB instance to your .NET application environment.
upvoted 1 times

" # TechX 4 months, 1 week ago


Selected Answer: A
A for me, we have 200.000 users which is heavy access, NLB will go over ALB
upvoted 1 times

" # gerhardbl 2 months, 3 weeks ago


I'd start thinking about a NLB if I had 200K users per second, not per day.
upvoted 1 times

" # kangtamo 4 months, 1 week ago


Selected Answer: B
B sounds better.
upvoted 3 times

" # Bigbearcn 9 months ago


Selected Answer: B
It's B.
upvoted 2 times

" # HellGate 9 months ago


Selected Answer: A
You can deploy system with Beanstalk since it has its source code of .Net. And there's no DR requirement (D).
upvoted 2 times

" # AzureDP900 11 months ago


B is right
upvoted 1 times

" # kaleen_bhaiya 11 months, 4 weeks ago


Answer is A
Couple of reasons; 1) NLB is high performing and 2) You cannot have an A record for Route 53 alias, ALB doesn't have IP (A type) so answer
would be NLB. Let me know if I am missing anything.
upvoted 3 times

" # WhyIronMan 1 year ago


I'll go with B
upvoted 2 times

" # blackgamer 1 year ago


I will go with B. A is incorrect as NLB doesn't have listener for Http and Https, it only works at layer 4 TCP and TLS only.
upvoted 5 times

" # Rocketeer 1 month, 2 weeks ago


NLB will act as a passthrough for the traffic and hence will work for http or https


upvoted 1 times

" # Rocketeer 1 month, 2 weeks ago


Never mind. I think we need to use ALB for http or https traffic.
upvoted 1 times


Question #590 Topic 1

A solutions architect is designing a publicly accessible web application that is on an Amazon CloudFront distribution with an Amazon S3 website
endpoint as the origin. When the solution is deployed, the website returns an Error 403: Access Denied message.
Which steps should the solutions architect take to correct the issue? (Choose two.)

A. Remove the S3 block public access option from the S3 bucket.

B. Remove the requester pays option from the S3 bucket.

C. Remove the origin access identity (OAI) from the CloudFront distribution.

D. Change the storage class from S3 Standard to S3 One Zone-Infrequent Access (S3 One Zone-IA).

E. Disable S3 object versioning.

Correct Answer: AC

Community vote distribution


AB (100%)
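
What options A and B translate to against the S3 API, as a hedged sketch (the bucket name is a placeholder); both calls can simply be re-run against the affected website bucket:

import boto3

s3 = boto3.client("s3")
bucket = "example-website-bucket"   # placeholder bucket name

# Option B: turn Requester Pays off, since anonymous requests are rejected
# on a Requester Pays bucket.
s3.put_bucket_request_payment(
    Bucket=bucket,
    RequestPaymentConfiguration={"Payer": "BucketOwner"},
)

# Option A: stop blocking the public-read access that an S3 website endpoint
# origin relies on.
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": False,
        "IgnorePublicAcls": False,
        "BlockPublicPolicy": False,
        "RestrictPublicBuckets": False,
    },
)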

" # Anila_Dhharisi Highly Voted $ 1 year, 1 month ago


https://aws.amazon.com/premiumsupport/knowledge-center/s3-website-cloudfront-error-403/
upvoted 19 times

" # viet1991 1 year ago


A&B.
Amazon S3 Block Public Access must be disabled on the bucket.
If Requester Pays is enabled, then the request must include the request-payer parameter.
upvoted 7 times

" # tekkart 1 year ago


In your link, it is written ": If you don't want to allow public (anonymous) access to your S3 objects, then change your configuration to use
the S3 REST API endpoint as the origin of your distribution. Then, configure your distribution and S3 bucket to restrict access using an
origin access identity (OAI). " Then the answer would be A&C, Requester Pays is useful to prevent DDoS attacks, just remove it (answer B)
would remove functionality with it
upvoted 3 times

" # joe16 1 year ago


Requestor pays is a feature to share cost of storing/accessing S3 objects, not DDoS prevention.
https://docs.aws.amazon.com/AmazonS3/latest/userguide/RequesterPaysBuckets.html
upvoted 1 times

" # tkanmani76 Highly Voted $ 11 months ago


https://aws.amazon.com/premiumsupport/knowledge-center/s3-troubleshoot-403/ This will settle the answer in favour of A and B.
upvoted 7 times

" # AzureDP900 10 months, 4 weeks ago


This is great information, thanks for sharing.
upvoted 1 times

" # bermo 10 months, 3 weeks ago


Thanks indeed for this useful link
upvoted 1 times

" # Punitsolanki 8 months, 4 weeks ago


But the link is when you access the S3 directly, not via cloudfront.
upvoted 1 times

" # Rocky2222 Most Recent % 2 months, 4 weeks ago


Selected Answer: AB
https://aws.amazon.com/premiumsupport/knowledge-center/s3-website-cloudfront-error-403/
upvoted 1 times

" # TechX 4 months, 1 week ago


Selected Answer: AB
AB for me.
If you enable Requester Pays on a bucket, anonymous access to that bucket is not allowed.


https://docs.aws.amazon.com/AmazonS3/latest/userguide/RequesterPaysBuckets.html
upvoted 1 times
" # Hari008 7 months, 3 weeks ago
Here the key word is publicly available, i will go with A&C
upvoted 1 times

" # peddyua 8 months, 4 weeks ago


A is weird, it can work with block public access as well (deployed on a previous project)
AB for me
upvoted 2 times

" # AzureDP900 11 months ago


Selected Answer: AB
A and B
upvoted 1 times

" # Viper57 1 year ago


A and C are correct.

The question says it is using a S3 Website Endpoint. OAI can only be used when Cloudfront needs to access a REST API endpoint, so removing
OAI would fix this problem.

See using S3 to host a static website with Cloudfront: https://aws.amazon.com/premiumsupport/knowledge-center/cloudfront-serve-static-


website/

- Using a REST API endpoint as the origin, with access restricted by an origin access identity (OAI)
- Using a website endpoint as the origin, with anonymous (public) access allowed
- Using a website endpoint as the origin, with access restricted by a Referer header
upvoted 5 times

" # student22 1 year ago


A and C
If you don't want to allow public (anonymous) access to your S3 objects, then change your configuration to use the S3 REST API endpoint as the
origin of your distribution. Then, configure your distribution and S3 bucket to restrict access using an origin access identity (OAI).
https://aws.amazon.com/premiumsupport/knowledge-center/s3-website-cloudfront-error-403/
upvoted 1 times

" # student22 1 year ago


Changing to A and B
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with A,B
upvoted 2 times

" # blackgamer 1 year ago


A & B is the better option but it is not the best too. A is not a very good solution.
upvoted 2 times

" # Waiweng 1 year ago


it's A&B
upvoted 5 times

" # Pupu86 1 year ago


Origin Access Identity feature is to control only Cloudfront has access (read) permissions to S3 so users can only access S3 contents via a valid
OAI in Cloudfront, making Cloudfront the only point of entry. So the eventual state of access logs in Cloudfront would also be accurately
reflective.
upvoted 2 times

" # Kian1 1 year ago


going with AB
upvoted 2 times

" # Ebi 1 year ago


Answer is AB
upvoted 5 times

" # kopper2019 1 year ago


A and B
https://aws.amazon.com/premiumsupport/knowledge-center/s3-website-cloudfront-error-403/
upvoted 1 times

" # Bulti 1 year ago


A and B are the right answers


upvoted 1 times


Question #591 Topic 1

A web application is hosted in a dedicated VPC that is connected to a company's on-premises data center over a Site-to-Site VPN connection. The
application is accessible from the company network only. This is a temporary non-production application that is used during business hours. The
workload is generally low with occasional surges.
The application has an Amazon Aurora MySQL provisioned database cluster on the backend. The VPC has an internet gateway and NAT
gateways attached.
The web servers are in private subnets in an Auto Scaling group behind an Elastic Load Balancer. The web servers also upload data to an Amazon
S3 bucket through the internet.
A solutions architect needs to reduce operational costs and simplify the architecture.
Which strategy should the solutions architect use?

A. Review the Auto Scaling group settings and ensure the scheduled actions are specified to operate the Amazon EC2 instances during
business hours only. Use 3-year scheduled Reserved Instances for the web server EC2 instances. Detach the internet gateway and remove the
NAT gateways from the VPC. Use an Aurora Serverless database and set up a VPC endpoint for the S3 bucket.

B. Review the Auto Scaling group settings and ensure the scheduled actions are specified to operate the Amazon EC2 instances during
business hours only. Detach the internet gateway and remove the NAT gateways from the VPC. Use an Aurora Serverless database and set up
a VPC endpoint for the S3 bucket, then update the network routing and security rules and policies related to the changes.

C. Review the Auto Scaling group settings and ensure the scheduled actions are specified to operate the Amazon EC2 instances during
business hours only. Detach the internet gateway from the VPC, and use an Aurora Serverless database. Set up a VPC endpoint for the S3
bucket, then update the network routing and security rules and policies related to the changes.

D. Use 3-year scheduled Reserved Instances for the web server Amazon EC2 instances. Remove the NAT gateways from the VPC, and set up a
VPC endpoint for the S3 bucket. Use Amazon CloudWatch and AWS Lambda to stop and start the Aurora DB cluster so it operates during
business hours only. Update the network routing and security rules and policies related to the changes.

Correct Answer: C

Community vote distribution


B (100%)
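
The S3 piece of option B is a gateway VPC endpoint, which lets the private-subnet web servers reach S3 without the NAT gateways or the internet gateway. A minimal sketch; the Region, VPC ID, and route table ID are placeholders:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")   # assumed Region

# Gateway endpoints for S3 are free and simply add S3 prefix-list routes to
# the chosen route tables, so the hourly NAT gateway charges go away.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",                    # placeholder VPC ID
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],          # private subnets' route table
)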

" # Nemer Highly Voted $ 1 year, 1 month ago


B. We are charged for each “NAT Gateway-hour" even without data going through it.
https://aws.amazon.com/vpc/pricing/
upvoted 19 times

" # oscargee 1 year ago


How would you handle The web servers also upload data to an Amazon S3 bucket through the internet? NAT has to been keep.
upvoted 1 times

" # Viper57 1 year ago


Using an S3 VPC endpoint that goes over the AWS backend solves this problem.
upvoted 5 times

" # amaltare Highly Voted $ 1 year, 1 month ago


Guys.. has anyone noticed that a Site-to-Site VPN is in place and, for it to work, an internet gateway is required? But A, B, and C all say to
remove the internet gateway; I don't think it will work then.. I will go with D
upvoted 11 times

" # JohnPi 1 month ago


no IGW required https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Scenario4.html
upvoted 1 times

" # MichaelR 1 year, 1 month ago


Just found this in the AWS S2S docs: "An Internet gateway is not required to establish a Site-to-Site VPN connection."
upvoted 2 times

" # beso 1 year, 1 month ago


a site-to-site VPN creates an IPSEC tunnel from an EC2 instance to a device of your choice on-prem. The IGW is required for that tunnel to
have a route out of the VPC
upvoted 1 times


" # pleasespammelater 1 year, 1 month ago


It would make sense that a site-to-site VPN needs an IGW. But that's incorrect.
"An internet gateway is not required to establish an AWS Site-to-Site VPN connection." https://aws.amazon.com/vpc/faqs/#Connectivity
upvoted 4 times

" # cloudgc 1 year, 1 month ago


B - https://aws.amazon.com/vpn/faqs/#:~:text=Amazon%20supports%20Internet%20Protocol%20security,-to-Site%20VPN%20connection.
upvoted 3 times
" # Heer Most Recent % 4 days, 16 hours ago
The right answer between B and C is B. The question says we do have an IGW and NAT, and option C removes only the IGW. NAT cannot operate
without an IGW, and that is why option C doesn't make sense.
upvoted 1 times

" # AwsBRFan 1 month, 2 weeks ago


Selected Answer: B
VPN requires Virtual Private Gateway
upvoted 1 times

" # kangtamo 4 months, 1 week ago


Selected Answer: B
Go with B.
upvoted 1 times

" # AzureDP900 11 months ago


B is right
upvoted 2 times

" # vbal 11 months, 1 week ago


A is wrong becoz SRI can't have 3-year reservation.
upvoted 1 times

" # Pb55 1 year ago


S3 VPC endpoint means no need for IGW or NAT. So B.
upvoted 3 times

" # oscargee 1 year ago


C! B and C are almost the same. But you need NAT to allow web servers in the VPC private subnet to upload data to an Amazon S3 bucket through the
internet.
upvoted 1 times

" # jobe42 1 year ago


B... "and set up a VPC endpoint for the S3 bucket"
upvoted 1 times

" # blackgamer 1 year ago


Going with B.
upvoted 1 times

" # Waiweng 1 year ago


it's B
upvoted 3 times

" # Pupu86 1 year ago


https://docs.aws.amazon.com/vpn/latest/s2svpn/SetUpVPNConnections.html

This link shows you how to create a site-to-site VPN connection to your AWS VPCs. No internet gateway or NAT gateway is required
upvoted 1 times

" # Kian1 1 year ago


going with B
upvoted 2 times

" # Ebi 1 year ago


Answer is B
upvoted 4 times

" # kopper2019 1 year ago


B what would you need a Internet GW and NAT GW when all is private using a VPN and RI is not needed since is temporary so buying RIs for 3
years would mean losing money
upvoted 1 times

" # Bulti 1 year ago


B is the right answer. A and D are out because scheduled reserved instances are not required as it is a temporary application. C is identical to B
but it keeps the NAT Gateway which has extra unnecessary cost when we are using VPC endpoint to talk to S3.
upvoted 2 times

" # srinivasa 1 year ago


NAT gateway and internet gateway are not required.
B is the right answer
upvoted 1 times


Question #592 Topic 1

A company plans to refactor a monolithic application into a modern application design deployed on AWS. The CI/CD pipeline needs to be
upgraded to support the modern design for the application with the following requirements:
✑ It should allow changes to be released several times every hour.
✑ It should be able to roll back the changes as quickly as possible.
Which design will meet these requirements?

A. Deploy a CI/CD pipeline that incorporates AMIs to contain the application and their configurations. Deploy the application by replacing
Amazon EC2 instances.

B. Specify AWS Elastic Beanstalk to stage in a secondary environment as the deployment target for the CI/CD pipeline of the application. To
deploy, swap the staging and production environment URLs.

C. Use AWS Systems Manager to re-provision the infrastructure for each deployment. Update the Amazon EC2 user data to pull the latest code
artifact from Amazon S3 and use Amazon Route 53 weighted routing to point to the new environment.

D. Roll out the application updates as part of an Auto Scaling event using prebuilt AMIs. Use new versions of the AMIs to add instances, and
phase out all instances that use the previous AMI version with the con+gured termination policy during a deployment event.

Correct Answer: A

Community vote distribution


B (100%)
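
Option B's fast rollback comes from the Elastic Beanstalk CNAME swap between the staging and production environments, which is a single API call in either direction. A sketch with placeholder environment names:

import boto3

eb = boto3.client("elasticbeanstalk")

# Promote staging to production by swapping the environment URLs; running the
# same call again swaps them back, which is what makes rollback near-instant.
eb.swap_environment_cnames(
    SourceEnvironmentName="myapp-staging",        # placeholder names
    DestinationEnvironmentName="myapp-production",
)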

" # Nemer Highly Voted $ 1 year, 1 month ago


B. blue/green deployment with Beanstalk.
upvoted 19 times

" # hilft Most Recent % 3 months, 2 weeks ago


B. 'staging' blue/green
upvoted 1 times

" # aandc 4 months, 1 week ago


Selected Answer: B
key words :"It should be able to roll back the changes as quickly as possible"
upvoted 2 times

" # pititcu667 9 months, 4 weeks ago


Selected Answer: B
B Blue Green no outage min interf.
upvoted 1 times

" # Baji000 10 months, 2 weeks ago


Selected Answer: B
It's B
upvoted 1 times

" # AzureDP900 11 months ago


B is right
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with B
upvoted 2 times

" # blackgamer 1 year ago


B is the right answer.
upvoted 1 times

" # Waiweng 1 year ago


it's B
upvoted 3 times

" # Kian1 1 year ago


going with B
upvoted 2 times
" # Ebi 1 year ago
Although there is no clarification of the platform and development env used, but the closest answer in here is B
upvoted 3 times

" # 01037 1 year ago


Can AWS Systems Manager be used to provision infrastructure?
upvoted 2 times

" # Bulti 1 year ago


B is the right answer. It is the fastest when it comes to rollback and deploying changes every hour. C is good but it falls short to meet the
requirement of frequent deployments as it is pretty heavy in terms of having to build a new infrastructure each time a new deployment is needed.
upvoted 2 times

" # T14102020 1 year, 1 month ago


Correct is B. Beanstalk
upvoted 1 times

" # rscloud 1 year, 1 month ago


B. blue/green deployment is good fit here.
upvoted 1 times

" # jackdryan 1 year, 1 month ago


I'll go with B
upvoted 3 times

" # CYL 1 year, 1 month ago


B. Allows for fallback.
upvoted 1 times


Question #593 Topic 1

A company currently has data hosted in an IBM Db2 database. A web application calls an API that runs stored procedures on the database to
retrieve user information data that is read-only. This data is historical in nature and changes on a daily basis. When a user logs in to the
application, this data needs to be retrieved within 3 seconds. Each time a user logs in, the stored procedures run. Users log in several times a day
to check stock prices.
Running this database has become cost-prohibitive due to Db2 CPU licensing. Performance goals are not being met. Timeouts from Db2 are
common due to long-running queries.
Which approach should a solutions architect take to migrate this solution to AWS?

A. Rehost the Db2 database in Amazon Fargate. Migrate all the data. Enable caching in Fargate. Refactor the API to use the Fargate Db2
database. Implement Amazon API Gateway and enable API caching.

B. Use AWS DMS to migrate data to Amazon DynamoDB using a continuous replication task. Refactor the API to use the DynamoDB data.
Implement the refactored API in Amazon API Gateway and enable API caching.

C. Create a local cache on the mainframe to store query outputs. Use SFTP to sync to Amazon S3 on a daily basis. Refactor the API to use
Amazon EFS. Implement Amazon API Gateway and enable API caching.

D. Extract data daily and copy the data to AWS Snowball for storage on Amazon S3. Sync daily. Refactor the API to use the S3 data. Implement
Amazon API Gateway and enable API caching.

Correct Answer: A

Community vote distribution


B (100%)
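
For option B, the continuous replication task is a DMS task in full-load-and-cdc mode between a Db2 source endpoint and a DynamoDB target endpoint. A hedged sketch; the ARNs, schema name, and task identifier are placeholders:

import json
import boto3

dms = boto3.client("dms")

table_mappings = {
    "rules": [{
        "rule-type": "selection",
        "rule-id": "1",
        "rule-name": "include-user-data",
        "object-locator": {"schema-name": "USERS", "table-name": "%"},  # placeholder schema
        "rule-action": "include",
    }]
}

dms.create_replication_task(
    ReplicationTaskIdentifier="db2-to-dynamodb",
    SourceEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:db2-source",       # placeholders
    TargetEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:dynamodb-target",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:111122223333:rep:replication-inst",
    MigrationType="full-load-and-cdc",   # initial load plus ongoing replication
    TableMappings=json.dumps(table_mappings),
)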

" # Nemer Highly Voted $ 1 year, 1 month ago


B. DMS supports DB2 as source and DynamoDB as target. Along with API caching, retrieval should be under 3 seconds.
upvoted 16 times

" # Ebi Highly Voted $ 1 year ago


Answer is B
All other answer do not make any sense
upvoted 6 times

" # AwsBRFan Most Recent % 1 month, 2 weeks ago


Selected Answer: B
B makes sense
upvoted 1 times

" # gnic 2 months, 1 week ago


It's B.
upvoted 1 times

" # CloudHandsOn 3 months, 2 weeks ago


B. is the correct answer
upvoted 1 times

" # cldy 11 months ago


B. Use AWS DMS to migrate data to Amazon DynamoDB using a continuous replication task. Refactor the API to use the DynamoDB data.
Implement the refactored API in Amazon API Gateway and enable API caching.
upvoted 1 times

" # AzureDP900 11 months ago


It is B, This question is part of Neal Davis practice test
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with B
upvoted 2 times

" # blackgamer 1 year ago


B is the answer.
upvoted 1 times


" # Waiweng 1 year ago


it's B
upvoted 2 times

" # Bemi 1 year ago


https://aws.amazon.com/about-aws/whats-new/2018/04/aws-dms-supports-ibm-db2-as-a-source/
upvoted 1 times

" # Kian1 1 year ago


going with B
upvoted 2 times

" # Bulti 1 year ago


B is the answer. DMS supports migration from DB2 to Dynamo DB.
upvoted 2 times

" # T14102020 1 year ago


Correct is B. DMS supports DB2 as source and DynamoDB as target.
upvoted 1 times

" # jackdryan 1 year, 1 month ago


I'll go with B
upvoted 3 times

" # cloudgc 1 year, 1 month ago


B - https://aws.amazon.com/about-aws/whats-new/2018/04/aws-dms-supports-ibm-db2-as-a-source
/#:~:text=AWS%20Database%20Migration%20Service%20Supports%20IBM%20Db2%20as%20a%20Source,-Posted%20On%3A%20Apr&
text=AWS%20Database%20Migration%20Service%20(DMS,to%20any%20DMS%20supported%20target.
upvoted 1 times

" # CYL 1 year, 1 month ago


B. DMS supports relational DB to DynamoDB migration. https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.DynamoDB.html
upvoted 1 times


Question #594 Topic 1

A company is planning to deploy a new business analytics application that requires 10,000 hours of compute time each month. The compute
resources can have flexible availability, but must be as cost-effective as possible. The company will also provide a reporting service to distribute
analytics reports, which needs to run at all times.
How should the Solutions Architect design a solution that meets these requirements?

A. Deploy the reporting service on a Spot Fleet. Deploy the analytics application as a container in Amazon ECS with AWS Fargate as the
compute option. Set the analytics application to use a custom metric with Service Auto Scaling.

B. Deploy the reporting service on an On-Demand Instance. Deploy the analytics application as a container in AWS Batch with AWS Fargate as
the compute option. Set the analytics application to use a custom metric with Service Auto Scaling.

C. Deploy the reporting service as a container in Amazon ECS with AWS Fargate as the compute option. Deploy the analytics application on a
Spot Fleet. Set the analytics application to use a custom metric with Amazon EC2 Auto Scaling applied to the Spot Fleet.

D. Deploy the reporting service as a container in Amazon ECS with AWS Fargate as the compute option. Deploy the analytics application on an
On-Demand Instance and purchase a Reserved Instance with a 3-year term. Set the analytics application to use a custom metric with Amazon
EC2 Auto Scaling applied to the On-Demand Instance.

Correct Answer: C
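
The "custom metric with Amazon EC2 Auto Scaling applied to the Spot Fleet" in option C is wired up through Application Auto Scaling against the fleet's target capacity. A hedged sketch; the fleet request ID, metric name, and target value are assumptions:

import boto3

aas = boto3.client("application-autoscaling")
fleet_id = "sfr-0123456789abcdef0"           # placeholder Spot Fleet request ID

aas.register_scalable_target(
    ServiceNamespace="ec2",
    ResourceId=f"spot-fleet-request/{fleet_id}",
    ScalableDimension="ec2:spot-fleet-request:TargetCapacity",
    MinCapacity=2,
    MaxCapacity=100,
)

aas.put_scaling_policy(
    PolicyName="analytics-backlog-tracking",
    ServiceNamespace="ec2",
    ResourceId=f"spot-fleet-request/{fleet_id}",
    ScalableDimension="ec2:spot-fleet-request:TargetCapacity",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 10.0,                            # assumed target per instance
        "CustomizedMetricSpecification": {
            "MetricName": "PendingAnalyticsJobs",       # hypothetical custom metric
            "Namespace": "AnalyticsApp",
            "Statistic": "Average",
        },
    },
)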

" # Nemer Highly Voted $ 1 year, 1 month ago


C seems OK. Reporting service runs constantly, and as such should be separated from the analytics application running on a Spot Fleet for cost
savings.
upvoted 14 times

" # Heer Most Recent % 4 days, 16 hours ago


Analytics service key words:
Flexible availability, i.e. can be interrupted
Cost effective

Reporting service key words:
Needs to run at all times, so containers

So the right answer is C


upvoted 1 times

" # jyrajan69 8 months, 2 weeks ago


There seems to be a tendency to jump in with an answer. First of all this is a steady state application, not short term, hence automatically you
start thinking Reserved Instance, and the clearly stated 'the organization will offer a reporting service for distributing analytics results, which must
be available at all times.", which will take Spot out of the picture. Therefore the answer has to be D. Unless someone can give proper justification
tht is the answer
upvoted 3 times

" # tobstar86 8 months, 1 week ago


Well, Spot instances in answer C are only used for the analytics application and not the reporting app. So thats ok.
Only difference between C & D is : C uses spot instances for the analytics app and D uses mixture of on demand on longterm reserved
instances.

So pretty much both answers are valid. But, considering the business perspective: it's a new application. Would you want to commit yourself
for the next 3 years with unknown outcome? Sure you can modify them later or resell, but still.
I'd choose C.
upvoted 2 times

" # lucesarano 9 months, 1 week ago


I do not understand why A and B are wrong. They both look feasible to me.
upvoted 1 times

" # AzureDP900 11 months ago


I will go with C
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with C
upvoted 1 times


" # blackgamer 1 year ago


C is cost effective compared to D. Spot instances.
upvoted 1 times

" # Waiweng 1 year ago


it's C
upvoted 3 times

" # KnightVictor 1 year ago


going with C, keyword compute resources- "cost effective", "flexible availability,"
reporting service ->needs to run at all times
upvoted 2 times

" # Kian1 1 year ago


going with C
upvoted 3 times

" # Ebi 1 year ago


I will go with C
upvoted 4 times

" # Bulti 1 year ago


Correct answer is C for Spot Fleet
upvoted 2 times

" # T14102020 1 year ago


Correct is C. report as Fargate + analytics as Spot
upvoted 1 times

" # jackdryan 1 year ago


I'll go with C
upvoted 3 times

" # CYL 1 year ago


C. Use spot instances for analytics workload. Reporting services need to be up all the time, hence should run on a reliable instance type that will
not terminate on its own.
upvoted 1 times

" # SamAWSExam99 1 year, 1 month ago


C for spot instances
upvoted 1 times

" # Anila_Dhharisi 1 year, 1 month ago


C is better option to use Spot for analytics
upvoted 2 times


Question #595 Topic 1

A company is migrating its three-tier web application from on-premises to the AWS Cloud. The company has the following requirements for the
migration process:
✑ Ingest machine images from the on-premises environment.
✑ Synchronize changes from the on-premises environment to the AWS environment until the production cutover.
✑ Minimize downtime when executing the production cutover.
✑ Migrate the virtual machines' root volumes and data volumes.
Which solution will satisfy these requirements with minimal operational overhead?

A. Use AWS Server Migration Service (SMS) to create and launch a replication job for each tier of the application. Launch instances from the
AMIs created by AWS SMS. After initial testing, perform a final replication and create new instances from the updated AMIs.

B. Create an AWS CLI VM Import/Export script to migrate each virtual machine. Schedule the script to run incrementally to maintain changes
in the application. Launch instances from the AMIs created by VM Import/Export. Once testing is done, rerun the script to do a final import
and launch the instances from the AMIs.

C. Use AWS Server Migration Service (SMS) to upload the operating system volumes. Use the AWS CLI import-snapshot command for the data
volumes. Launch instances from the AMIs created by AWS SMS and attach the data volumes to the instances. After initial testing, perform a
final replication, launch new instances from the replicated AMIs, and attach the data volumes to the instances.

D. Use AWS Application Discovery Service and AWS Migration Hub to group the virtual machines as an application. Use the AWS CLI VM
Import/Export script to import the virtual machines as AMIs. Schedule the script to run incrementally to maintain changes in the application.
Launch instances from the AMIs. After initial testing, perform a final virtual machine import and launch new instances from the AMIs.

Correct Answer: B

Community vote distribution


A (100%)
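
Under option A, each on-premises VM gets an AWS SMS replication job that keeps replicating on a schedule until cutover, producing a fresh AMI per run. A minimal sketch; the server ID, interval, and retention count are placeholders:

import boto3
from datetime import datetime, timedelta, timezone

sms = boto3.client("sms")

# One job per VM discovered by the SMS connector; the final pre-cutover run
# produces the AMIs that the new EC2 instances are launched from.
sms.create_replication_job(
    serverId="s-0123456789abcdef0",                          # placeholder server ID
    seedReplicationTime=datetime.now(timezone.utc) + timedelta(minutes=30),
    frequency=720,               # re-replicate every 12 hours until cutover
    runOnce=False,
    numberOfRecentAmisToKeep=3,
)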

" # Nemer Highly Voted $ 1 year, 1 month ago


A. SMS for automated, live incremental replication of live server volumes.
upvoted 17 times

" # LunchTime Highly Voted $ 1 year, 1 month ago


A is correct.
SMS can handle migrating the data volumes: https://aws.amazon.com/about-aws/whats-new/2018/09/aws-server-migration-service-adds-
support-for-migrating-larger-data-volumes/
upvoted 6 times

" # Heer Most Recent % 4 days, 16 hours ago


Server Migration Service on its own can do the following:
Migrate virtual machines, i.e. the volumes are also migrated.
AWS SMS incrementally replicates your server VMs as cloud-hosted Amazon Machine Images (AMIs) ready for deployment on Amazon EC2.

The right option is A
upvoted 1 times

" # roka_ua 7 months, 1 week ago


Vote A
upvoted 1 times

" # shotty1 9 months, 2 weeks ago


Selected Answer: A
It is A
upvoted 1 times

" # AzureDP900 11 months ago


A is good
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with A
upvoted 2 times


" # syc1205 1 year ago


Ingest machine images from the on-premises environment. So B
upvoted 1 times

" # blackgamer 1 year ago


A is the answer.
upvoted 1 times

" # Waiweng 1 year ago


it's A
upvoted 3 times

" # alisyech 1 year ago


A for sure
upvoted 1 times

" # Kian1 1 year ago


going with A
upvoted 1 times

" # Trap_D0_r 1 year ago


C
I wouldn't wrap up every drive for every vm into one giant ami and the phrasing of A "launch a replication job for each tier of the application" is a
little vague and sounds fishy to me. Personally C looks better--use SMS to create ami's from root volumes, snapshot and import data drives,
create EC2s from amis and reattach data drives. Test and deploy. That makes more sense to me.
upvoted 1 times

" # Ebi 1 year ago


Thirdly how you manage incremental changes on data volume?
Answer is NOT C
upvoted 6 times

" # Ebi 1 year ago


First of all snapshot import is a feature of VM Export/Import not a feature of SMS:
https://docs.aws.amazon.com/vm-import/latest/userguide/vmimport-import-snapshot.html
Secondly, if you import a snapshot you need to create a volume from the snapshot,
upvoted 4 times

" # Ebi 1 year ago


A is the answer
upvoted 4 times

" # T14102020 1 year ago


Correct is A. SMS with minimum steps opposite C
upvoted 1 times

" # T14102020 1 year ago


without AWS CLI
upvoted 1 times

" # jackdryan 1 year, 1 month ago


I'll go with A
upvoted 3 times

" # CYL 1 year, 1 month ago


A. SMS for migration of VMs.
upvoted 1 times


Question #596 Topic 1

An enterprise company's data science team wants to provide a safe, cost-effective way to provide easy access to Amazon SageMaker. The data
scientists have limited AWS knowledge and need to be able to launch a Jupyter notebook instance. The notebook instance needs to have a
preconfigured AWS KMS key to encrypt data at rest on the machine learning storage volume without exposing the complex setup requirements.
Which approach will allow the company to set up a self-service mechanism for the data scientists to launch Jupyter notebooks in its AWS
accounts with the
LEAST amount of operational overhead?

A. Create a serverless front end using a static Amazon S3 website to allow the data scientists to request a Jupyter notebook instance by filling
out a form. Use Amazon API Gateway to receive requests from the S3 website and trigger a central AWS Lambda function to make an API call
to Amazon SageMaker that will launch a notebook instance with a preconfigured KMS key for the data scientists. Then call back to the front-
end website to display the URL to the notebook instance.

B. Create an AWS CloudFormation template to launch a Jupyter notebook instance using the AWS::SageMaker::NotebookInstance resource
type with a preconfigured KMS key. Add a user-friendly name to the CloudFormation template. Display the URL to the notebook using the
Outputs section. Distribute the CloudFormation template to the data scientists using a shared Amazon S3 bucket.

C. Create an AWS CloudFormation template to launch a Jupyter notebook instance using the AWS::SageMaker::NotebookInstance resource
type with a preconfigured KMS key. Simplify the parameter names, such as the instance size, by mapping them to Small, Large, and X-Large
using the Mappings section in CloudFormation. Display the URL to the notebook using the Outputs section, then upload the template into an
AWS Service Catalog product in the data scientist's portfolio, and share it with the data scientist's IAM role.

D. Create an AWS CLI script that the data scientists can run locally. Provide step-by-step instructions about the parameters to be provided
while executing the AWS CLI script to launch a Jupyter notebook with a preconfigured KMS key. Distribute the CLI script to the data scientists
using a shared Amazon S3 bucket.

Correct Answer: B

Community vote distribution


C (100%)
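
Option C has two pieces: a CloudFormation template whose Mappings section hides the raw instance types behind friendly sizes and whose notebook uses a preconfigured KMS key, and a Service Catalog product that wraps that template for the data scientists. A hedged sketch; the role and key ARNs, the size map, and the template URL are assumptions:

import boto3

# Template fragment: friendly size names map to instance types, and the ML
# storage volume is encrypted with a pre-set KMS key.
notebook_template = """
AWSTemplateFormatVersion: '2010-09-09'
Parameters:
  NotebookSize:
    Type: String
    AllowedValues: [Small, Large, XLarge]
    Default: Small
Mappings:
  SizeMap:
    Small:  {InstanceType: ml.t3.medium}
    Large:  {InstanceType: ml.m5.xlarge}
    XLarge: {InstanceType: ml.m5.4xlarge}
Resources:
  Notebook:
    Type: AWS::SageMaker::NotebookInstance
    Properties:
      InstanceType: !FindInMap [SizeMap, !Ref NotebookSize, InstanceType]
      RoleArn: arn:aws:iam::111122223333:role/placeholder-sagemaker-role
      KmsKeyId: arn:aws:kms:us-east-1:111122223333:key/placeholder-key-id
Outputs:
  NotebookName:
    Value: !GetAtt Notebook.NotebookInstanceName
"""

# The template above would be uploaded to the S3 URL referenced below, then
# offered as a product in the data scientists' Service Catalog portfolio.
sc = boto3.client("servicecatalog")
sc.create_product(
    Name="Data Science Notebook",
    Owner="Platform Team",
    ProductType="CLOUD_FORMATION_TEMPLATE",
    ProvisioningArtifactParameters={
        "Info": {"LoadTemplateFromURL": "https://example-bucket.s3.amazonaws.com/notebook.yaml"},
        "Type": "CLOUD_FORMATION_TEMPLATE",
    },
)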

" # Nemer Highly Voted $ 1 year, 1 month ago


C. Service Catalog
https://aws.amazon.com/blogs/mt/enable-self-service-secured-data-science-using-amazon-sagemaker-notebooks-and-aws-service-catalog/
upvoted 20 times

" # spdracr713 Most Recent % 1 month, 3 weeks ago


A seems to fit the simplicity
upvoted 1 times

" # hilft 3 months, 2 weeks ago


Why it's not B?
upvoted 1 times

" # aandc 4 months, 1 week ago


Selected Answer: C
keyword "The data scientists are unfamiliar with AWS" -> service catalog
upvoted 3 times

" # tartarus23 6 months, 1 week ago


Selected Answer: C
C. Using service catalog is easier than S3 bucket for the data scientists
upvoted 1 times

" # shotty1 9 months, 2 weeks ago


Selected Answer: C
I think it is C
upvoted 1 times

" # pititcu667 9 months, 4 weeks ago


Selected Answer: C
C just because service catalog simplifies it.


upvoted 2 times
" # AzureDP900 11 months ago
I will go with C
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with C
upvoted 3 times

" # blackgamer 1 year ago


The answer is C for sure, service catalog.
upvoted 2 times

" # Waiweng 1 year ago


it's C
upvoted 3 times

" # kiev 1 year ago


Service catalog is the key. C
upvoted 2 times

" # Kian1 1 year ago


going with C
upvoted 2 times

" # Ebi 1 year ago


I will go with C
upvoted 4 times

" # T14102020 1 year ago


Correct is C. CloudFormat + ServiceCatalog
upvoted 3 times

" # jackdryan 1 year ago


I'll go with C
upvoted 3 times

" # oopsy 1 year ago


seems C
upvoted 1 times


Question #597 Topic 1

A company is migrating its applications to AWS. The applications will be deployed to AWS accounts owned by business units. The company has
several teams of developers who are responsible for the development and maintenance of all applications. The company is expecting rapid growth
in the number of users.
The company's chief technology officer has the following requirements:
✑ Developers must launch the AWS infrastructure using AWS CloudFormation.
✑ Developers must not be able to create resources outside of CloudFormation.
✑ The solution must be able to scale to hundreds of AWS accounts.


Which of the following would meet these requirements? (Choose two.)

A. Using CloudFormation, create an IAM role that can be assumed by CloudFormation that has permissions to create all the resources the
company needs. Use CloudFormation StackSets to deploy this template to each AWS account.

B. In a central account, create an IAM role that can be assumed by developers, and attach a policy that allows interaction with
CloudFormation. Modify the AssumeRolePolicyDocument action to allow the IAM role to be passed to CloudFormation.

C. Using CloudFormation, create an IAM role that can be assumed by developers, and attach policies that allow interaction with and passing a
role to CloudFormation. Attach an inline policy to deny access to all other AWS services. Use CloudFormation StackSets to deploy this
template to each AWS account.

D. Using CloudFormation, create an IAM role for each developer, and attach policies that allow interaction with CloudFormation. Use
CloudFormation StackSets to deploy this template to each AWS account.

E. In a central AWS account, create an IAM role that can be assumed by CloudFormation that has permissions to create the resources the
company requires. Create a CloudFormation stack policy that allows the IAM role to manage resources. Use CloudFormation StackSets to
deploy the CloudFormation stack policy to each AWS account.

Correct Answer: CE
Reference:
https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_boundaries.html

Community vote distribution


AC (100%)
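
The IAM detail the A+C combination hinges on: developers only get permission to talk to CloudFormation and to pass a privileged deployment role to it, while that service role (deployed into every account by StackSets) holds the actual resource-creation permissions. A hedged sketch of the developer-facing policy; the account ID and role name are placeholders:

import json

developer_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "InteractWithCloudFormationOnly",
            "Effect": "Allow",
            "Action": ["cloudformation:*"],
            "Resource": "*",
        },
        {
            "Sid": "PassDeploymentRoleToCloudFormation",
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "arn:aws:iam::111122223333:role/cfn-deployment-role",   # placeholder
            "Condition": {
                "StringEquals": {"iam:PassedToService": "cloudformation.amazonaws.com"}
            },
        },
    ],
}

print(json.dumps(developer_policy, indent=2))

An explicit inline Deny for all other services, as option C describes, can be added as a third statement; without it, the absence of any other Allow already keeps developers from creating resources directly.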

" # Kibana01 Highly Voted $ 1 year, 1 month ago


A&C seems a better combination.
upvoted 13 times

" # Anila_Dhharisi Highly Voted $ 1 year, 1 month ago


Between A & E, its better to go with A. In E, they gave option of stack policy. We use stack policy only for updates and as well to avoid any
unintentional updates. In this scenario, they had not discussed the requirement of updates on the resources of CloudFormation stack.
Between B,C,D - its better to go with B.
In option C, they mentioned about inline policy which is not appropriate as we need to embed the policy not attach it and better to use managed
policies than inline policies. Inline policies are assigned to service linked roles which is inherited from the parent or user .
In option D, its saying to create role to each of the developers which is not the right way in assigning the permissions. A role can be used by
multiple developers instead of creating each role to each developer.
upvoted 11 times

" # viet1991 1 year ago


A&C
B is wrong.
"Modify the AssumeRolePolicyDocument action to allow the IAM role to be passed to CloudFormation." => this sentence is wrong.

"AssumeRolePolicyDocument
The trust policy that is associated with this role. Trust policies define which entities can assume the role."

We need to use iam:Passrole to pass the role from developer to cloudformation.


AssumeRolePolicyDocument is used for assume the role only.
upvoted 4 times

" # Blair77 Most Recent % 3 weeks, 3 days ago


Selected Answer: AC
AAA CCC 110% sure!


upvoted 1 times
" # tomosabc1 1 month ago
Selected Answer: AC
The answer is AC.

B(wrong):"Modify the AssumeRolePolicyDocument action to allow the IAM role to be passed to CloudFormation." => this sentence is wrong.
"AssumeRolePolicyDocument
The trust policy that is associated with this role. Trust policies define which entities can assume the role."
We need to use iam:Passrole to pass the role from developer to cloudformation.

D(wrong): "create an IAM role for each developer". This sentence is wrong.
E(wrong): The newly created role in central account cannot be directly used by CloudFormation to create resources in other account. In addition,
similar to S3 bucket policy, CloudFormation stack policy is used to control who can update the stack, rather than allowing the stack to
create/manage AWS resource.
upvoted 1 times

" # hilft 3 months, 2 weeks ago


C and E
upvoted 1 times

" # aandc 4 months, 1 week ago


Selected Answer: AC
agreed with AC
upvoted 1 times

" # jyrajan69 8 months, 2 weeks ago


First we start by looking at either B or E; here E is the more detailed and complete answer, so I will go with E. Then it is between A, C and D. D is not
in line with AWS best practice (creating a role for each developer is not recommended), so then it is between A and C. My answer would be C, as it has an inline
policy that prevents the developer from accessing the services directly. So the answer is C and E
upvoted 3 times

" # AzureDP900 11 months ago


I'll go with A,C
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with A,C
upvoted 3 times

" # tekkart 1 year ago


Here is a proposition of reasoning.
First you must start from an account. Between B and E, you choose E because B is tempting (the statement about AssumeRolePolicyDocument
looks right if I look this example : https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iam-role.html#aws-
resource-iam-role--examples) but B does not allow you to export your configuration to other accounts. With E, CloudFormation gives itself the
rights needed to auto-assign IAM.
Then you use CloudFormation (options A, C, D). Remains to give rights to developers (C, D), but between these options, to respect the statement
that "developers must not be able to create resources outside of CloudFormation", the only option is C "attach an inline policy to deny access to
all other AWS services" not very fine grained, but the only present. Plus D has the keywords "for each developer" which as said by Anila is
tedious.
Therefore CE would be the right answers (as in the autocorrection, looking at the comments 90% of the answers are supposed to be good and I
see discussion about the proposed answers on 50% !)
upvoted 6 times

" # tkanmani76 10 months ago


Agree with Tekkart, C&E are the right choices.
upvoted 1 times

" # tkanmani76 8 months, 3 weeks ago


Correcting to A and C. C is the only choice which limits access to use of other services. And A deploys the template. (E deploys only stack
set policy which is not correct).
upvoted 2 times

" # Waiweng 1 year ago


it's A&C
upvoted 2 times

" # ppshein 1 year ago


I would go with A & B. Because C is kinda duplicated with A and developers can manually amend policy by itself if required.
upvoted 1 times

" # Kian1 1 year ago


going with AC
upvoted 1 times


" # Ebi 1 year ago


I will go with AC
upvoted 4 times

" # petebear55 1 year ago


A AND C
upvoted 2 times

" # Bulti 1 year ago


A& C is the right answer. E is a misleading option. You need to deploy the CloudFormation template and not just the Stack policy. Moreover, the
purpose of the stack policy is to prevent accidental changes to the resources being created by the CloudFormation template which is not the
requirement. So A&C is correct.
upvoted 4 times

" # jackdryan 1 year ago


I'll go with A,C
upvoted 4 times


Question #598 Topic 1

A media company has a static web application that is generated programmatically. The company has a build pipeline that generates HTML
content that is uploaded to an Amazon S3 bucket served by Amazon CloudFront. The build pipeline runs inside a Build Account. The S3 bucket and
CloudFront distribution are in a Distribution Account. The build pipeline uploads the files to Amazon S3 using an IAM role in the Build Account.
The S3 bucket has a bucket policy that only allows CloudFront to read objects using an origin access identity (OAI). During testing all attempts to
access the application using the CloudFront URL result in an
HTTP 403 Access Denied response.
What should a solutions architect suggest to the company to allow access the objects in Amazon S3 through CloudFront?

A. Modify the S3 upload process in the Build Account to add the bucket-owner-full-control ACL to the objects at upload.

B. Create a new cross-account IAM role in the Distribution Account with write access to the S3 bucket. Modify the build pipeline to assume
this role to upload the files to the Distribution Account.

C. Modify the S3 upload process in the Build Account to set the object owner to the Distribution Account.

D. Create a new IAM role in the Distribution Account with read access to the S3 bucket. Configure CloudFront to use this new role as its OAI.
Modify the build pipeline to assume this role when uploading files from the Build Account.

Correct Answer: B

Community vote distribution


A (100%)
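
Option A is a one-line change in the build pipeline's upload step: adding the canned ACL makes the Distribution Account (the bucket owner) the owner of the objects that CloudFront reads through the OAI. A sketch in boto3; the file, bucket, and key names are placeholders:

import boto3

s3 = boto3.client("s3")   # running under the Build Account's pipeline role

# Without this ACL the Build Account still owns the uploaded objects, so the
# Distribution Account's OAI-based bucket policy cannot grant CloudFront read
# access to them, which surfaces as the 403.
s3.upload_file(
    Filename="build/index.html",
    Bucket="distribution-account-site-bucket",     # placeholder bucket
    Key="index.html",
    ExtraArgs={"ACL": "bucket-owner-full-control"},
)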

" # Nemer Highly Voted $ 1 year, 1 month ago


A.
https://aws.amazon.com/tw/premiumsupport/knowledge-center/s3-bucket-owner-access/
upvoted 21 times

" # Byrney 8 hours, 8 minutes ago


B: This exact scenario is detailed here:
https://aws.amazon.com/premiumsupport/knowledge-center/codepipeline-artifacts-s3/

The cross account role sets the owner as the distribution account.
upvoted 1 times

" # viet1991 1 year ago


A is right.
By default, an S3 object is owned by the AWS account that uploaded it. This is true even when the bucket is owned by another account. To
get access to the object, the object owner must explicitly grant you (the bucket owner) access.

aws s3 cp BuildAccountFile s3://DistributionAccountS3/ --acl bucket-owner-full-control


upvoted 4 times

" # darthvoodoo Highly Voted $ 1 year ago


The answer is definitely A...this is one of the questions that always pops up in the security specialty exam https://aws.amazon.com
/premiumsupport/knowledge-center/s3-bucket-owner-access/
If B was the issue, you would have been getting a 404 error instead.
upvoted 10 times

" # petebear55 1 year ago


WELL DONE
upvoted 1 times

" # tekkart 1 year ago


At first I thought it was B, then I changed my mind to A.
It is not the Pipeline in Build Account which cannot access the object (answer B).
It is CloudFront, together with S3 bucket in Distribution Account, activated by OAI, which cannot access the object. Because, by giving
cross-account permission, it lost its bucket full control : https://docs.aws.amazon.com/AmazonS3/latest/userguide/example-bucket-
policies.html#example-bucket-policies-use-case-8
I go with A
upvoted 1 times

" # 01037 1 year ago


Good point
upvoted 1 times


" # oscargee 1 year ago


It's not A, because that would mean the S3 bucket owner is asking for permission on the object. In this case, the S3 bucket belongs to CloudFront's account but the accessor is the app, so the app
won't have the same permission as CloudFront. You have to choose B.
upvoted 1 times

" # Dionenonly Most Recent % 3 weeks, 1 day ago


Selected Answer: A
Answer is A. No brainer
upvoted 1 times

" # CloudHell 4 months, 4 weeks ago


My initial instinct was B, but after reading the comments A sounds like a better choice.
upvoted 1 times

" # johnnsmith 8 months, 2 weeks ago


B is correct. There are three possible Object Ownership settings: (1)Bucket owner enforced: bucket owner always owns the object. That is not the
case we have here. (2) Bucket owner preferred. If an object upload includes the bucket-owner-full-control canned ACL, the bucket owner owns
the object. Objects uploaded with other ACLs are owned by the writing account. Answer A would only work with this setting. But we don't know if the
bucket used this setting. (3) Object writer: Object writer owns the object. Answer B works regardless of the Object Ownership setting of the
bucket. Details at https://docs.aws.amazon.com/AmazonS3/latest/userguide/about-object-ownership.html
upvoted 2 times

" # cldy 11 months ago


A. Modify the S3 upload process in the Build Account to add the bucket-owner-full-control ACL to the objects at upload.
upvoted 1 times

" # AzureDP900 11 months ago


A is fine
upvoted 1 times

" # student22 1 year ago


A
https://aws.amazon.com/premiumsupport/knowledge-center/s3-bucket-owner-access/
upvoted 2 times

" # blackgamer 1 year ago


It definitely is A. This document explains it -

https://docs.aws.amazon.com/AmazonS3/latest/userguide/about-object-ownership.html#object-ownership-replication
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with A
upvoted 1 times

" # SJain50 1 year ago


B
https://aws.amazon.com/premiumsupport/knowledge-center/codepipeline-artifacts-s3/
upvoted 2 times

" # Waiweng 1 year ago


it's A
upvoted 1 times

" # kiev 1 year ago


Neal Davis went for B. I am going for my exam next week. I am lost now. I know both A and B work
upvoted 3 times

" # ele 1 year ago


A is correct. By assuming cross-account role, the pipeline would give up any permissions in Build account, that it might need to complete build
actions. So it must keep with its own role and the answer is A.
upvoted 3 times

" # Kian1 1 year ago


going with A
upvoted 1 times

" # Ebi 1 year ago


I will go with A
upvoted 3 times

" # Ebi 1 year ago


Actually A and B both are correct answer, but A is more straight forward


One of those questions from AWS which evaluates ability to pick the BEST answer not only the right one
upvoted 1 times

" # 01037 1 year ago


A.
https://aws.amazon.com/premiumsupport/knowledge-center/s3-bucket-owner-access/
upvoted 2 times


Question #599 Topic 1

A company has built a high performance computing (HPC) cluster in AWS for a tightly coupled workload that generates a large number of shared
files stored in
Amazon EFS. The cluster was performing well when the number of Amazon EC2 instances in the cluster was 100. However, when the company
increased the cluster size to 1,000 EC2 instances, overall performance was well below expectations.
Which collection of design choices should a solutions architect make to achieve the maximum performance from the HPC cluster? (Choose
three.)

A. Ensure the HPC cluster is launched within a single Availability Zone.

B. Launch the EC2 instances and attach elastic network interfaces in multiples of four.

C. Select EC2 instance types with an Elastic Fabric Adapter (EFA) enabled.

D. Ensure the clusters is launched across multiple Availability Zones.

E. Replace Amazon EFS with multiple Amazon EBS volumes in a RAID array.

F. Replace Amazon EFS with Amazon FSx for Lustre.

Correct Answer: DEF

Community vote distribution


ACF (100%)

" # easytoo Highly Voted $ 1 year, 1 month ago


A. High performance computing (HPC) workload cluster should be in a single AZ.
C. Elastic Fabric Adapter (EFA) is a network device that you can attach to your Amazon EC2 instances to accelerate High Performance
Computing (HPC)
F. Amazon FSx for Lustre - Use it for workloads where speed matters, such as machine learning, high performance computing (HPC), video
processing, and financial modeling.

Cluster – packs instances close together inside an Availability Zone. This strategy enables workloads to achieve the low-latency network
performance necessary for tightly-coupled node-to-node communication that is typical of HPC applications.

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html
upvoted 40 times

" # blackgamer 1 year ago


Yes, definitely ACF. Well explained.
upvoted 2 times
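
For anyone who wants to try the ACF combination hands-on, a minimal sketch could look like the following; all AMI, subnet, and security group IDs here are hypothetical placeholders:

# Cluster placement group keeps the nodes physically close inside one AZ (choice A)
aws ec2 create-placement-group --group-name hpc-cluster-pg --strategy cluster

# EFA-capable instance type launched into the placement group (choice C)
aws ec2 run-instances --image-id ami-EXAMPLE --instance-type c5n.18xlarge --count 10 \
  --placement GroupName=hpc-cluster-pg \
  --network-interfaces "DeviceIndex=0,InterfaceType=efa,SubnetId=subnet-EXAMPLE,Groups=sg-EXAMPLE"

# FSx for Lustre instead of EFS for the shared scratch files (choice F)
aws fsx create-file-system --file-system-type LUSTRE --storage-capacity 1200 \
  --subnet-ids subnet-EXAMPLE --lustre-configuration DeploymentType=SCRATCH_2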

" # Ebi Highly Voted $ 1 year ago


I will go with ACF
upvoted 6 times

" # gnic Most Recent % 2 months, 1 week ago


Selected Answer: ACF
ACF
A - for network perfomance (single AZ is better than multiple AZ because the latency)
C- EFA no brain for HPC
F - EFS Lustre is for HPC, no brain too
upvoted 1 times

" # ka1tw 7 months, 2 weeks ago


Why NOT B with multiple ENI?
upvoted 1 times

" # Ni_yot 9 months, 4 weeks ago


Yes ACF.
upvoted 1 times

" # cldy 10 months, 1 week ago


A.C.F.
EFA + FSx Lustre + single AZ.
upvoted 1 times


" # Tan0k 10 months, 3 weeks ago


Selected Answer: ACF
ACF got to be
upvoted 1 times

" # AzureDP900 11 months ago


ACF is correct
upvoted 1 times

" # acloudguru 11 months, 1 week ago


Selected Answer: ACF
ACF is good.
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with A,C,F
upvoted 2 times

" # Waiweng 1 year ago


it's A,C,F
upvoted 3 times

" # Kian1 1 year ago


going with ACF
upvoted 3 times

" # T14102020 1 year ago


Correct ACF. Elastic Fabric Adapter + FSx Lustre for HPC + single AZ
upvoted 4 times

" # jackdryan 1 year ago


I'll go with A,C,F
upvoted 5 times

" # oopsy 1 year ago


seems ACF
upvoted 2 times

" # CYL 1 year ago


ACF to enhance on networking and file system level optimization.
upvoted 2 times

" # Bulti 1 year ago


ACF is the right answer
upvoted 2 times


Question #600 Topic 1

A company with multiple accounts is currently using a configuration that does not meet the following security governance policies:
✑ Prevent ingress from port 22 to any Amazon EC2 instance.
✑ Require billing and application tags for resources.
✑ Encrypt all Amazon EBS volumes.
A solutions architect wants to provide preventive and detective controls, including notifications about a specific resource, if there are policy
deviations.
Which solution should the solutions architect implement?

A. Create an AWS CodeCommit repository containing policy-compliant AWS CloudFormation templates. Create an AWS Service Catalog
portfolio. Import the CloudFormation templates by attaching the CodeCommit repository to the portfolio. Restrict users across all accounts to
items from the AWS Service Catalog portfolio. Use AWS Config managed rules to detect deviations from the policies. Configure an Amazon
CloudWatch Events rule for deviations, and associate a CloudWatch alarm to send notifications when the TriggeredRules metric is greater than
zero.

B. Use AWS Service Catalog to build a portfolio with products that are in compliance with the governance policies in a central account.
Restrict users across all accounts to AWS Service Catalog products. Share a compliant portfolio to other accounts. Use AWS Config managed
rules to detect deviations from the policies. Configure an Amazon CloudWatch Events rule to send a notification when a deviation occurs.

C. Implement policy-compliant AWS CloudFormation templates for each account, and ensure that all provisioning is completed by
CloudFormation. Configure Amazon Inspector to perform regular checks against resources. Perform policy validation and write the
assessment output to Amazon CloudWatch Logs. Create a CloudWatch Logs metric filter to increment a metric when a deviation occurs.
Configure a CloudWatch alarm to send notifications when the configured metric is greater than zero.

D. Restrict users and enforce least privilege access using AWS IAM. Consolidate all AWS CloudTrail logs into a single account. Send the
CloudTrail logs to Amazon Elasticsearch Service (Amazon ES). Implement monitoring, alerting, and reporting using the Kibana dashboard in
Amazon ES and with Amazon SNS.

Correct Answer: C

Community vote distribution


B (100%)

" # bbnbnuyh Highly Voted $ 1 year, 1 month ago


B: https://aws.amazon.com/blogs/mt/use-aws-service-catalog-to-build-a-custom-catalog-of-products-from-aws-marketplace/
https://docs.aws.amazon.com/config/latest/developerguide/monitor-config-with-cloudwatchevents.html
upvoted 16 times
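
As a rough illustration of the detective half of answer B (rule, topic, and account values below are hypothetical), an AWS Config managed rule plus a CloudWatch Events rule could look like this:

# Managed rule that flags unencrypted EBS volumes
aws configservice put-config-rule --config-rule '{
  "ConfigRuleName": "ebs-volumes-encrypted",
  "Source": { "Owner": "AWS", "SourceIdentifier": "ENCRYPTED_VOLUMES" }
}'

# Notify on any compliance change detected by AWS Config
aws events put-rule --name config-compliance-change \
  --event-pattern '{"source":["aws.config"],"detail-type":["Config Rules Compliance Change"]}'
aws events put-targets --rule config-compliance-change \
  --targets 'Id=sns,Arn=arn:aws:sns:us-east-1:111122223333:config-alerts'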

" # Bulti Highly Voted $ 1 year, 1 month ago


B is the right answer. It is a standard hub and spoke service catalog approach to enabling users in multiple account launch products from their
portfolio in their local service catalog. A is incorrect because, it doesn't make sense to have users in other accounts access a single service
catalog in a central account. This is not hub and spoke service catalog model that is promoted as a best practice in a multi-account setup.
upvoted 9 times

" # AwsBRFan Most Recent % 1 month, 2 weeks ago


Selected Answer: B
Just codecommit will not work: https://docs.aws.amazon.com/codepipeline/latest/userguide/tutorials-S3-servicecatalog.html
upvoted 1 times

" # CloudHell 4 months, 4 weeks ago


Selected Answer: B
B makes sense to me.
upvoted 1 times

" # cldy 11 months ago


B. Use AWS Service Catalog to build a portfolio with products that are in compliance with the governance policies in a central account. Restrict
users across all accounts to AWS Service Catalog products. Share a compliant portfolio to other accounts. Use AWS Config managed rules to
detect deviations from the policies. Configure an Amazon CloudWatch Events rule to send a notification when a deviation occurs.
upvoted 2 times

" # AzureDP900 11 months, 1 week ago


Selected Answer: B


B is correct answer
upvoted 1 times
" # tgv 1 year ago
BBB
---
upvoted 2 times

" # WhyIronMan 1 year ago


I'll go with B
upvoted 1 times

" # Kopa 1 year ago


B, Use AWS Config managed rules to detect deviations from the policies. This is what AWS Config is made of.
upvoted 2 times

" # blackgamer 1 year ago


Answer is B.
upvoted 1 times

" # Waiweng 1 year ago


it's B
upvoted 2 times

" # Sun`yhome 1 year ago


For ppl voting C, which step does control port 22 access?
ASC's portfolio doesn't define EC2's security group, does it?
upvoted 1 times

" # Kian1 1 year, 1 month ago


going with B
upvoted 2 times

" # 01037 1 year, 1 month ago


Why is A wrong?
upvoted 1 times

" # student22 1 year ago


Bulli has explained this above.
upvoted 1 times

" # RedKane 1 year ago


Probably because CoodeCommit repository can't be attached to the Porfolio. Service Catalog seams to only be able to create products based
on CloudFormation templates or existing stacks.
upvoted 1 times

" # Ebi 1 year, 1 month ago


B is the answer
upvoted 3 times

" # T14102020 1 year, 1 month ago


Correct is B. ServiceCatalog + AWS Config managed rules to detect deviations + without Cloudwatch rules to detect deviations
upvoted 2 times

" # jackdryan 1 year, 1 month ago


I'll go with B
upvoted 3 times


Question #601 Topic 1

A company is manually deploying its application to production and wants to move to a more mature deployment pattern. The company has asked
a solutions architect to design a solution that leverages its current Chef tools and knowledge. The application must be deployed to a staging
environment for testing and verification before being deployed to production. Any new deployment must be rolled back in 5 minutes if errors are
discovered after a deployment.
Which AWS service and deployment pattern should the solutions architect use to meet these requirements?

A. Use AWS Elastic Beanstalk and deploy the application using a rolling update deployment strategy.

B. Use AWS CodePipeline and deploy the application using a rolling update deployment strategy.

C. Use AWS CodeBuild and deploy the application using a canary deployment strategy.

D. Use AWS OpsWorks and deploy the application using a blue/green deployment strategy.

Correct Answer: A

Community vote distribution


D (100%)

" # Nemer Highly Voted $ 1 year, 1 month ago


D. blue/green staging/production with OpsWorks, which leverages Chef.
upvoted 16 times
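
A very rough sketch of how the OpsWorks blue/green flow in answer D could be driven from the CLI; the stack, app, and role identifiers are hypothetical placeholders:

# Clone the current (blue) stack to create the green/staging stack
aws opsworks clone-stack --source-stack-id BLUE_STACK_ID --name my-app-green \
  --service-role-arn arn:aws:iam::111122223333:role/aws-opsworks-service-role

# Deploy the new release (the Chef recipes run as part of the deploy) to the green stack
aws opsworks create-deployment --stack-id GREEN_STACK_ID --app-id APP_ID \
  --command '{"Name": "deploy"}'

# After verification, shift traffic to the green stack (for example by swapping the load
# balancer or DNS record); rolling back within 5 minutes is just pointing traffic back to blue.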

" # Ebi Highly Voted $ 1 year ago


I will also go with D
upvoted 6 times

" # cldy Most Recent % 11 months ago


D. Use AWS OpsWorks and deploy the application using a blue/green deployment strategy.
upvoted 1 times

" # AzureDP900 11 months, 1 week ago


Selected Answer: D
I will go with D, Blue/Green deployment we can revert back so quickly.
upvoted 1 times

" # acloudguru 11 months, 2 weeks ago


Selected Answer: D
should be D. keywords " current Chef tools and knowledge."
upvoted 1 times

" # tgv 1 year ago


DDD
---
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with D
upvoted 1 times

" # blackgamer 1 year ago


D, OpsWorks is like managed service for chef and puppets.
upvoted 3 times

" # Waiweng 1 year ago


it's D
upvoted 3 times

" # KnightVictor 1 year ago


should be D. keywords " current Chef tools and knowledge."
upvoted 2 times

" # Kian1 1 year ago


going with D


upvoted 2 times
" # kopper2019 1 year ago
Chef = OpsWorks = D
upvoted 2 times

" # T14102020 1 year, 1 month ago


Correct is D. blue/green
upvoted 1 times

" # jackdryan 1 year, 1 month ago


I'll go with D
upvoted 3 times

" # oopsy 1 year, 1 month ago


must be D
upvoted 1 times

" # CYL 1 year, 1 month ago


D. Opswork to utilize Chef. Blue / Green to allow for testing before switching to live production.
upvoted 2 times

" # Bulti 1 year, 1 month ago


D is the right answer
upvoted 1 times


Question #602 Topic 1

A company has been using a third-party provider for its content delivery network and recently decided to switch to Amazon CloudFront. The
development team wants to maximize performance for the global user base. The company uses a content management system (CMS) that serves
both static and dynamic content.
The CMS is behind an Application Load Balancer (ALB) which is set as the default origin for the distribution. Static assets are served from an
Amazon S3 bucket.
The Origin Access Identity (OAI) was created properly and the S3 bucket policy has been updated to allow the GetObject action from the OAI, but
static assets are receiving a 404 error.
Which combination of steps should the solutions architect take to fix the error? (Choose two.)

A. Add another origin to the CloudFront distribution for the static assets.

B. Add a path-based rule to the ALB to forward requests for the static assets.

C. Add an RTMP distribution to allow caching of both static and dynamic content.

D. Add a behavior to the CloudFront distribution for the path pattern and the origin of the static assets.

E. Add a host header condition to the ALB listener and forward the header from CloudFront to add traffic to the allow list.

Correct Answer: AB

Community vote distribution


AD (100%)

" # Nemer Highly Voted $ 1 year, 1 month ago


AD. Add an origin and a behavior.
https://aws.amazon.com/premiumsupport/knowledge-center/cloudfront-distribution-serve-content/
upvoted 26 times

" # oscargee 1 year ago


Add another origin for S3, not for CloudFront in the page you pointed out.
upvoted 2 times
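
To make the A + D combination concrete, the distribution config roughly needs a second origin and a path-based cache behavior. A sketch of the relevant fragment and how to apply it follows; the bucket name, OAI, distribution ID, and path pattern are hypothetical:

# Fragment of the DistributionConfig JSON (answer A adds the origin, answer D adds the behavior):
#   "Origins": { "Items": [ ...existing ALB origin...,
#     { "Id": "static-assets-s3",
#       "DomainName": "my-static-bucket.s3.amazonaws.com",
#       "S3OriginConfig": { "OriginAccessIdentity": "origin-access-identity/cloudfront/E2EXAMPLE" } } ] },
#   "CacheBehaviors": { "Items": [
#     { "PathPattern": "/static/*", "TargetOriginId": "static-assets-s3", ... } ] }
aws cloudfront get-distribution-config --id EDFDVBD6EXAMPLE > dist.json
# ...save just the DistributionConfig object to dist-config.json, note the ETag, add the
# origin and behavior as above, then push the change back:
aws cloudfront update-distribution --id EDFDVBD6EXAMPLE \
  --if-match ETAG_FROM_GET --distribution-config file://dist-config.json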

" # kangtamo Most Recent % 4 months, 2 weeks ago


Selected Answer: AD
Agree with AD.
upvoted 1 times

" # AzureDP900 11 months ago


I'll go with A,D
upvoted 2 times

" # tgv 1 year ago


AAA DDD
---
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with A,D
upvoted 2 times

" # blackgamer 1 year ago


A and D .
upvoted 1 times

" # Waiweng 1 year ago


it's A&D
upvoted 3 times

" # Kian1 1 year ago


going with AD origin
upvoted 2 times

" # Ebi 1 year ago


Answer is AD


upvoted 4 times

" # 01037 1 year ago


CMS serves both static and dynamic content, so why 404 happens?
upvoted 1 times

" # T14102020 1 year ago


Correct is AD. Add an origin and a behavior.
upvoted 2 times

" # jackdryan 1 year ago


I'll go with A,D
upvoted 4 times

" # CYL 1 year ago


AD allows us to store static content on S3 and have CloudFront access it directly.
upvoted 1 times

" # ishuiyutian 1 year ago


A & D is correct
upvoted 1 times

" # Bulti 1 year ago


aD is correct
upvoted 1 times

" # wsw 1 year, 1 month ago


AD is correct yes
upvoted 2 times

" # Anila_Dhharisi 1 year, 1 month ago


AD as per the link given by Nemer
upvoted 2 times


Question #603 Topic 1

A financial services company logs personally identifiable information to its application logs stored in Amazon S3. Due to regulatory compliance
requirements, the log files must be encrypted at rest. The security team has mandated that the company's on-premises hardware security modules
(HSMs) be used to generate the
CMK material.
Which steps should the solutions architect take to meet these requirements?

A. Create an AWS CloudHSM cluster. Create a new CMK in AWS KMS using AWS_CloudHSM as the source for the key material and an origin of
AWS_CLOUDHSM. Enable automatic key rotation on the CMK with a duration of 1 year. Configure a bucket policy on the logging bucket that
disallows uploads of unencrypted data and requires that the encryption source be AWS KMS.

B. Provision an AWS Direct Connect connection, ensuring there is no overlap of the RFC 1918 address space between on-premises hardware
and the VPCs. Configure an AWS bucket policy on the logging bucket that requires all objects to be encrypted. Configure the logging
application to query the on-premises HSMs from the AWS environment for the encryption key material, and create a unique CMK for each
logging event.

C. Create a CMK in AWS KMS with no key material and an origin of EXTERNAL. Import the key material generated from the on-premises HSMs
into the CMK using the public key and import token provided by AWS. Configure a bucket policy on the logging bucket that disallows uploads
of non-encrypted data and requires that the encryption source be AWS KMS.

D. Create a new CMK in AWS KMS with AWS-provided key material and an origin of AWS_KMS. Disable this CMK, and overwrite the key
material with the key material from the on-premises HSM using the public key and import token provided by AWS. Re-enable the CMK. Enable
automatic key rotation on the CMK with a duration of 1 year. Configure a bucket policy on the logging bucket that disallows uploads of non-
encrypted data and requires that the encryption source be AWS KMS.

Correct Answer: D

Community vote distribution


C (100%)

" # Nemer Highly Voted $ 1 year, 1 month ago


C. Create CMK with origin EXTERNAL.
https://aws.amazon.com/blogs/security/how-to-byok-bring-your-own-key-to-aws-kms-for-less-than-15-00-a-year-using-aws-cloudhsm/
upvoted 21 times
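
For reference, a minimal sketch of the import flow behind answer C; the key ID and file names are hypothetical placeholders:

# 1. Create the CMK with no key material
aws kms create-key --origin EXTERNAL --description "Log encryption key (imported material)"

# 2. Get the public wrapping key and import token from AWS KMS
aws kms get-parameters-for-import --key-id KEY_ID \
  --wrapping-algorithm RSAES_OAEP_SHA_256 --wrapping-key-spec RSA_2048

# 3. Wrap the key material on the on-premises HSM with that public key, then import it
aws kms import-key-material --key-id KEY_ID \
  --encrypted-key-material fileb://EncryptedKeyMaterial.bin \
  --import-token fileb://ImportToken.bin \
  --expiration-model KEY_MATERIAL_DOES_NOT_EXPIRE

# The bucket-policy side of C is a Deny on s3:PutObject unless the request sets
# s3:x-amz-server-side-encryption to aws:kms.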

" # Ebi Highly Voted $ 1 year ago


C is my choice
upvoted 6 times

" # pankajrawat Most Recent % 6 months, 1 week ago


Selected Answer: C
C is the correct answer
upvoted 1 times

" # AzureDP900 11 months ago


I will go with C
upvoted 1 times

" # cldy 11 months ago


C. Create a CMK in AWS KMS with no key material and an origin of EXTERNAL. Import the key material generated from the on-premises HSMs
into the CMK using the public key and import token provided by AWS. Configure a bucket policy on the logging bucket that disallows uploads of
non-encrypted data and requires that the encryption source be AWS KMS.
upvoted 1 times

" # AzureDP900 11 months, 1 week ago


C is correct answer !
upvoted 1 times

" # backfringe 11 months, 2 weeks ago


I go with C
upvoted 1 times

" # acloudguru 11 months, 3 weeks ago


Selected Answer: C
C,https://aws.amazon.com/blogs/security/how-to-byok-bring-your-own-key-to-aws-kms-for-less-than-15-00-a-year-using-aws-cloudhsm/
upvoted 2 times
" # tgv 1 year ago
CCC
---
upvoted 1 times

" # blackgamer 1 year ago


C is the answer.
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with C
upvoted 1 times

" # Waiweng 1 year ago


it's C
upvoted 3 times

" # kopper2019 1 year ago


it's C

Step 1: Create the CMK with no key material associated


Begin by creating a customer master key (CMK) in AWS KMS that has no key material associated. The CLI command to create the CMK is as
follows:

$ aws kms create-key --origin EXTERNAL --region us-east-1

If successful, you’ll see an output on the CLI similar to below. The KeyState will be PendingImport and the Origin will be EXTERNAL.
upvoted 5 times

" # T14102020 1 year, 1 month ago


Correct is C. Create CMK with origin EXTERNAL.
upvoted 1 times

" # jackdryan 1 year, 1 month ago


I'll go with C
upvoted 4 times

" # CYL 1 year, 1 month ago


C. https://docs.aws.amazon.com/kms/latest/developerguide/importing-keys-create-cmk.html
upvoted 2 times

" # Bulti 1 year, 1 month ago


Answer is C
upvoted 1 times


Question #604 Topic 1

A solutions architect is implementing infrastructure as code for a two-tier web application in an AWS CloudFormation template. The web frontend
application will be deployed on Amazon EC2 instances in an Auto Scaling group. The backend database will be an Amazon RDS for MySQL DB
instance. The database password will be rotated every 60 days.
How can the solutions architect MOST securely manage the configuration of the application's database credentials?

A. Provide the database password as a parameter in the CloudFormation template. Create an initialization script in the Auto Scaling group's
launch configuration UserData property to reference the password parameter using the Ref intrinsic function. Store the password on the EC2
instances. Reference the parameter for the value of the MasterUserPassword property in the AWS::RDS::DBInstance resource using the Ref
intrinsic function.

B. Create a new AWS Secrets Manager secret resource in the CloudFormation template to be used as the database password. Configure the
application to retrieve the password from Secrets Manager when needed. Reference the secret resource for the value of the
MasterUserPassword property in the AWS::RDS::DBInstance resource using a dynamic reference.

C. Create a new AWS Secrets Manager secret resource in the CloudFormation template to be used as the database password. Create an
initialization script in the Auto Scaling group's launch configuration UserData property to reference the secret resource using the Ref intrinsic
function. Reference the secret resource for the value of the MasterUserPassword property in the AWS::RDS::DBInstance resource using the Ref
intrinsic function.

D. Create a new AWS Systems Manager Parameter Store parameter in the CloudFormation template to be used as the database password.
Create an initialization script in the Auto Scaling group's launch configuration UserData property to reference the parameter. Reference the
parameter for the value of the MasterUserPassword property in the AWS::RDS::DBInstance resource using the Fn::GetAtt intrinsic function.

Correct Answer: D

Community vote distribution


B (100%)

" # Nemer Highly Voted $ 1 year, 1 month ago


B.
https://docs.aws.amazon.com/secretsmanager/latest/userguide/integrating_cloudformation.html
upvoted 23 times
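
A minimal template fragment illustrating the dynamic reference in answer B; the resource names are hypothetical and the properties are trimmed to the essentials:

cat > rds-snippet.yaml <<'EOF'
Resources:
  DBSecret:
    Type: AWS::SecretsManager::Secret
    Properties:
      GenerateSecretString:
        SecretStringTemplate: '{"username": "admin"}'
        GenerateStringKey: password
        ExcludeCharacters: '"@/\'
  Database:
    Type: AWS::RDS::DBInstance
    Properties:
      Engine: mysql
      DBInstanceClass: db.t3.medium
      AllocatedStorage: '20'
      MasterUsername: admin
      # Resolved at deploy time; the password never appears in the template or in user data
      MasterUserPassword: !Sub '{{resolve:secretsmanager:${DBSecret}:SecretString:password}}'
EOF

The application then reads the same secret at runtime with the GetSecretValue API, which is what makes the 60-day rotation transparent to it.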

" # bobsmith2000 Most Recent % 5 months, 3 weeks ago


Selected Answer: B
B no-brainer.
RDS creds, rotation - Secret Manager
upvoted 2 times

" # cldy 11 months ago


B. Create a new AWS Secrets Manager secret resource in the CloudFormation template to be used as the database password. Configure the
application to retrieve the password from Secrets Manager when needed. Reference the secret resource for the value of the
MasterUserPassword property in the AWS::RDS::DBInstance resource using a dynamic reference.
upvoted 1 times

" # AzureDP900 11 months ago


B is right
upvoted 1 times

" # tgv 1 year ago


BBB
---
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with B
upvoted 1 times

" # Balki 1 year ago


B.
C hardcodes the passwords in the script and the new password will be lost after 60 days
upvoted 3 times


" # blackgamer 1 year ago


The solution is B as it is required for password rotation too.
upvoted 2 times

" # Waiweng 1 year ago


it's B
upvoted 3 times

" # Amitv2706 1 year ago


B. For Secret Rotation which is provided only by Secret Manager
upvoted 1 times

" # AJBA 1 year ago


B https://aws.amazon.com/blogs/security/how-to-create-and-retrieve-secrets-managed-in-aws-secrets-manager-using-aws-cloudformation-
template/#aws-comment-trigger-8922:~:text=The%20secret%20(username%20and%20password%20for,BackupRetentionPeriod%3A%200
upvoted 3 times

" # natpilot 1 year ago


C is correct. good option with Ref function ( https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-
secretsmanager-secrettargetattachment.html ) ; for B, Dynamic references for secure values, such as ssm-secure and secretsmanager, are not
currently supported;
upvoted 2 times

" # kirrim 1 year ago


You definitely CAN use a dynamic reference for secretsmanager for an RDS DB password, even with rotation:

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/dynamic-references.html

But I'm not saying you're wrong, it appears from that document you referenced, you definitely CAN do this with the Ref function as well.

So it appears B and C are both feasible answers. It would come down to which one you think is the better answer. And that might be a matter
of personal preference?
upvoted 1 times

" # Kian1 1 year ago


going with B
upvoted 2 times

" # Ebi 1 year ago


Answer is B
upvoted 4 times

" # Bulti 1 year ago


Answer is B. You need to reference secret from secret manager dynamically in the CloudFormation template where the RDS resource is
configured. Loading it as part of the userdata script is not secure.
upvoted 3 times

" # T14102020 1 year ago


Correct is B. Secrets Manager + dynamic function
upvoted 1 times

" # karoth_p 1 year ago


B. Configure CloudTrail in each member account to deliver log events to a central S3 bucket. Ensure the central S3 bucket policy allows
PutObject access from the member accounts. Migrate existing logs to the central S3 bucket. Set up an Amazon CloudWatch alarm to alert if
CloudTrail is not configured properly.

C. Configure an organization-level CloudTrail in the parent account to deliver log events to a central S3 bucket. Migrate the existing CloudTrail
logs from each member account to the central S3 bucket. Delete the existing CloudTrail and logs in the member accounts.

D. Configure an organization-level CloudTrail in the parent account to deliver log events to a central S3 bucket. Configure CloudTrail in each
member account to deliver log events to the central S3 bucket.
upvoted 1 times

" # arulrajjayaraj 1 year ago


C - organization-level solution and Existing logs are kept
upvoted 3 times


Question #605 Topic 1

A company built an application based on AWS Lambda deployed in an AWS CloudFormation stack. The last production release of the web
application introduced an issue that resulted in an outage lasting several minutes. A solutions architect must adjust the deployment process to
support a canary release.
Which solution will meet these requirements?

A. Create an alias for every new deployed version of the Lambda function. Use the AWS CLI update-alias command with the routing-config
parameter to distribute the load.

B. Deploy the application into a new CloudFormation stack. Use an Amazon Route 53 weighted routing policy to distribute the load.

C. Create a version for every new deployed Lambda function. Use the AWS CLI update-function-configuration command with the routing-config
parameter to distribute the load.

D. Configure AWS CodeDeploy and use CodeDeployDefault.OneAtATime in the Deployment configuration to distribute the load.

Correct Answer: C

Community vote distribution


A (100%)

" # Nemer Highly Voted $ 1 year, 1 month ago


A. Alias traffic shifting.
https://aws.amazon.com/blogs/compute/implementing-canary-deployments-of-aws-lambda-functions-with-alias-traffic-shifting/
upvoted 21 times

" # Sumit_Kumar Most Recent % 2 months, 2 weeks ago


# Update $LATEST version of function
aws lambda update-function-code --function-name myfunction ….

# Publish new version of function
aws lambda publish-version --function-name myfunction

# Point alias to new version, weighted at 5% (original version at 95% of traffic)
aws lambda update-alias --function-name myfunction --name myalias --routing-config '{"AdditionalVersionWeights" : {"2" : 0.05} }'

# Verify that the new version is healthy

# Set the primary version on the alias to the new version and reset the additional versions (100% weighted)
aws lambda update-alias --function-name myfunction --name myalias --function-version 2 --routing-config '{}'
This is begging to be automated.
upvoted 1 times

" # Jughead 2 months, 3 weeks ago


Selected Answer: A
A is the answer
upvoted 1 times

" # bobsmith2000 5 months, 3 weeks ago


NONE of them is correct.
B and D are nonsense.
B - there's no point to deploy a new Lambda every time and edit rte
D - Look it up here. https://docs.amazonaws.cn/en_us/codedeploy/latest/userguide/deployment-configurations.html

Between A and C.
A is wrong because "Create an alias for every new deployed version". The alias it's the same, the weight between the versions for the alias it's
different. You point out to the alias and then operate with version.
C it's wrong because you have to use update-alias instead of update-function-configuration.

So it's either A and C phrasing is messed up or none of them is correct.


upvoted 2 times

" # cldy 11 months ago


A. Create an alias for every new deployed version of the Lambda function. Use the AWS CLI update-alias command with the routing-config
parameter to distribute the load.
upvoted 1 times

" # AzureDP900 11 months, 1 week ago


Correct Answer A. there is no second thoughts also!


upvoted 1 times
" # tgv 1 year ago
AAA
---
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with A
upvoted 1 times

" # blackgamer 1 year ago


A is the answer. Refer below link for details explanation on how Lambda Alias works.

https://docs.aws.amazon.com/lambda/latest/dg/configuration-aliases.html
upvoted 1 times

" # Waiweng 1 year ago


it's A
upvoted 2 times

" # Kian1 1 year ago


going with A
upvoted 2 times

" # lechuk 1 year ago


Maybe a typo, but there's no need to create an alias for every function deployment...
upvoted 2 times

" # Ebi 1 year ago


A is my answer
upvoted 3 times

" # Bulti 1 year ago


A is the correct option for Serverless.
upvoted 2 times

" # T14102020 1 year ago


Correct is A . Update alias traffic
upvoted 1 times

" # jackdryan 1 year ago


I'll go with A
upvoted 3 times

" # CYL 1 year ago


A. Use alias to switch traffic.
upvoted 1 times


Question #606 Topic 1

A manufacturing company is growing exponentially and has secured funding to improve its IT infrastructure and ecommerce presence. The
company's ecommerce platform consists of:
✑ Static assets primarily comprised of product images stored in Amazon S3.
✑ Amazon DynamoDB tables that store product information, user information, and order information.
✑ Web servers containing the application's front-end behind Elastic Load Balancers.
The company wants to set up a disaster recovery site in a separate Region.
Which combination of actions should the solutions architect take to implement the new design while meeting all the requirements? (Choose
three.)

A. Enable Amazon Route 53 health checks to determine if the primary site is down, and route traffic to the disaster recovery site if there is an
issue.

B. Enable Amazon S3 cross-Region replication on the buckets that contain static assets.

C. Enable multi-Region targets on the Elastic Load Balancer and target Amazon EC2 instances in both Regions.

D. Enable DynamoDB global tables to achieve a multi-Region table replication.

E. Enable Amazon CloudWatch and create CloudWatch alarms that route traffic to the disaster recovery site when application latency exceeds
the desired threshold.

F. Enable Amazon S3 versioning on the source and destination buckets containing static assets to ensure there is a rollback version available
in the event of data corruption.

Correct Answer: AEF

Community vote distribution


ABD (100%)

" # Konnon Highly Voted $ 1 year, 1 month ago


The answer is ABD.
upvoted 27 times

" # Nemer Highly Voted $ 1 year, 1 month ago


ABD is right.
Route 53 + S3 CRR + DynDB global tables.
upvoted 15 times
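
Rough sketches of the three moving parts in A, B, and D; the bucket, table, role, account, and domain names are hypothetical placeholders:

# B: S3 cross-Region replication (versioning must be enabled on both buckets)
aws s3api put-bucket-versioning --bucket assets-primary --versioning-configuration Status=Enabled
aws s3api put-bucket-replication --bucket assets-primary --replication-configuration '{
  "Role": "arn:aws:iam::111122223333:role/s3-crr-role",
  "Rules": [{ "Status": "Enabled", "Priority": 1, "Filter": {},
              "DeleteMarkerReplication": { "Status": "Disabled" },
              "Destination": { "Bucket": "arn:aws:s3:::assets-dr" } }]
}'

# D: add a replica Region to the DynamoDB table (global tables version 2019.11.21)
aws dynamodb update-table --table-name orders \
  --replica-updates '[{"Create": {"RegionName": "us-west-2"}}]'

# A: health check that a Route 53 failover record set can reference
aws route53 create-health-check --caller-reference dr-demo-1 --health-check-config \
  '{"Type": "HTTPS", "FullyQualifiedDomainName": "www.example.com", "Port": 443, "ResourcePath": "/health"}'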

" # sodasu Most Recent % 1 month, 2 weeks ago


why not E ?
upvoted 1 times

" # sodasu 1 month, 2 weeks ago


BD is right.

i wanna know how about E?


Thanks!
upvoted 1 times

" # Cal88 1 week ago


The first issue with E is that its based on latency.
Second , cloud watch alarms don't just switch traffic and they need to trigger a lambda function to do that which is not mentioned
Third , even if lambda was mentioned why would you use this option if its supported natively in route 53 without the need to implement this
long process
Remember that in the professional exam some option could be done technically but you are asked for the *Best option
so in this case A is better to do this and more reliable
upvoted 1 times

" # KiraguJohn 4 months, 2 weeks ago


ABD it is
upvoted 1 times

" # roka_ua 7 months, 1 week ago


Selected Answer: ABD
Vote ABD


upvoted 1 times

" # Ni_yot 8 months ago


AB &D for me. Slightly tricky question. But R53 will cover the DR requirement. S3 CRR means there is another copy of the data in another region
and Global tables ensure multiple copies of the data in case of site down.
upvoted 1 times

" # shotty1 9 months, 2 weeks ago


Selected Answer: ABD
it is ABD
upvoted 1 times

" # AzureDP900 11 months, 1 week ago


Selected Answer: ABD
ABD is correct answer!
upvoted 2 times

" # moon2351 1 year ago


Answer is ADB
upvoted 1 times

" # tgv 1 year ago


AAA BBB DDD
---
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with A,B,D
upvoted 3 times

" # blackgamer 1 year ago


The answer is ABD for sure.
upvoted 1 times

" # Waiweng 1 year ago


it's ABD
upvoted 3 times

" # Justu 1 year ago


ABD, seems right but it doesn't fully answer the question: "Which combination of actions, while meeting all the requirements?" as it's not stating
anything about the application layer.

However, another picks are not correct either. So I would answer ABD :D

C: There's no multi-region targets in ELB. However you can load balance traffic with the IP addresses, so you could do it.
upvoted 3 times

" # Ebi 1 year ago


ABD is my answer
upvoted 3 times

" # Bulti 1 year ago


It's not clear if versioning is enabled on S3 buckets in source and destination region. Without that CRR will not work. So I will go with A,D, F.
Static content will not change. So no need to sync up but in case someone deletes it or gets corrupted you can go to the previous version.
upvoted 2 times

" # student22 1 year ago


A,B,D
Not F because the requirement is for a DR site.
upvoted 1 times

" # Bulti 1 year ago


On second thought I will go with ABD. B is correct and F is wrong because if cross region replication is enabled then versioning is enabled as
well without which cross region replication is not possible.
upvoted 2 times

" # T14102020 1 year ago


Correct is ABD. Route 53 + S3 CRR + DynDB global tables.
upvoted 2 times


Question #607 Topic 1

A company is developing a gene reporting device that will collect genomic information to assist researchers with collecting large samples of data
from a diverse population. The device will push 8 KB of genomic data every second to a data platform that will need to process and analyze the
data and provide information back to researchers. The data platform must meet the following requirements:
✑ Provide near-real-time analytics of the inbound genomic data
✑ Ensure the data is flexible, parallel, and durable
✑ Deliver results of processing to a data warehouse
Which strategy should a solutions architect use to meet these requirements?

A. Use Amazon Kinesis Data Firehouse to collect the inbound sensor data, analyze the data with Kinesis clients, and save the results to an
Amazon RDS instance.

B. Use Amazon Kinesis Data Streams to collect the inbound sensor data, analyze the data with Kinesis clients, and save the results to an
Amazon Redshift cluster using Amazon EMR.

C. Use Amazon S3 to collect the inbound device data, analyze the data from Amazon SQS with Kinesis, and save the results to an Amazon
Redshift cluster.

D. Use an Amazon API Gateway to put requests into an Amazon SQS queue, analyze the data with an AWS Lambda function, and save the
results to an Amazon Redshift cluster using Amazon EMR.

Correct Answer: B

Community vote distribution


B (100%)

" # Nemer Highly Voted $ 1 year, 1 month ago


B is right. Kinesis streams / EMR / Redshift.
upvoted 23 times
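
A tiny sketch of the ingestion side of answer B; the stream and file names are hypothetical:

# One shard handles 1 MB/s or 1,000 records/s of writes, so 8 KB/s per device leaves plenty
# of headroom; add shards as the device fleet grows
aws kinesis create-stream --stream-name genomic-data --shard-count 1

# A device (or gateway) pushes a reading
aws kinesis put-record --stream-name genomic-data \
  --partition-key device-0001 --data fileb://sample-8kb-payload.bin

# Downstream, an EMR job (e.g. Spark Streaming) consumes the stream and writes results to Redshift.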

" # Ebi Highly Voted $ 1 year ago


I will go with B
upvoted 5 times

" # WayneYi Most Recent % 1 month ago


I chose A, but then I noticed that it is sending results to RDS, oops!
upvoted 1 times

" # Ni_yot 2 months, 2 weeks ago


B is correct ans. like this one
upvoted 1 times

" # HellGate 9 months, 1 week ago


B
real-time requirement = Kinesis
upvoted 1 times

" # AzureDP900 11 months, 1 week ago


Selected Answer: B
B for sure !
upvoted 1 times

" # Kopa 11 months, 3 weeks ago


Ensuring the data is adaptable, parallel, and durable makes S3 very tempting, but most probably it's B with Kinesis.
upvoted 1 times

" # tgv 1 year ago


BBB
---
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with B
upvoted 1 times


" # KittuCheeku 1 year ago


B is the correct answer as it is best suited amongst 4 given options. KDS (Analytics) + Redshift (Data Warehouse) + Using Elastic MapReduce
upvoted 2 times

" # Waiweng 1 year ago


it's B
upvoted 2 times

" # digimaniac 1 year ago


I think C is correct too. B is an overkill. 8KB per second is really low data rate.
upvoted 1 times

" # TonyGe 1 year ago


near-real-time, always think of Kinesis
upvoted 2 times

" # Bulti 1 year ago


answer is B
upvoted 3 times

" # T14102020 1 year, 1 month ago


Correct is D. Kinesis data streams + EMR + Redshift.
upvoted 2 times

" # jackdryan 1 year, 1 month ago


I'll go with B
upvoted 3 times

" # CYL 1 year, 1 month ago


B. Redshift is the data-warehouse. EMR to do the data transformation. Kinesis for real-time data transfer.
upvoted 4 times

" # Anila_Dhharisi 1 year, 1 month ago


B is correct.
upvoted 2 times


Question #608 Topic 1

A company needs to move its on-premises resources to AWS. The current environment consists of 100 virtual machines (VMs) with a total of 40
TB of storage.
Most of the VMs can be taken offline because they support functions during business hours only; however, some are mission critical, so downtime
must be minimized.
The administrator of the on-premises network provisioned 10 Mbps of internet bandwidth for the migration. The on-premises network throughput
has reached capacity and would be costly to increase. A solutions architect must design a migration solution that can be performed within the
next 3 months.
Which method would fulfill these requirements?

A. Set up a 1 Gbps AWS Direct Connect connection. Then, provision a private virtual interface, and use AWS Server Migration Service (SMS) to
migrate the VMs into Amazon EC2.

B. Use AWS Application Discovery Service to assess each application, and determine how to refactor and optimize each using AWS services or
AWS Marketplace solutions.

C. Export the VMs locally, beginning with the most mission-critical servers first. Use AWS Transfer for SFTP to securely upload each VM to
Amazon S3 after they are exported. Use VM Import/Export to import the VMs into Amazon EC2.

D. Migrate mission-critical VMs with AWS SMS. Export the other VMs locally and transfer them to Amazon S3 using AWS Snowball. Use VM
Import/Export to import the VMs into Amazon EC2.

Correct Answer: A

Community vote distribution


D (100%)

" # Nemer Highly Voted $ 1 year, 1 month ago


D. 40 TB transfer -> snowball. SMS only needed for the mission-critical VMs that would need live incremental replication with no downtime.
upvoted 25 times

" # Justu Highly Voted $ 1 year ago


To transfer 40TB of data in 10Mbps link, it will take 400 days. So transferring anything over that link in 3 months is not feasible. Rules C out.

Direct Connect link is needed only while migration period. So ordering that for just 3 months doesn't seem correct. Also it's a costly option. Rules
out A.

And refactoring 100 applications in 3 months, doesn't sound right to me as well. Rules out B.

So we are left with D. The problem with D is that the Snowball transfer also takes some time, but I guess it's OK for non-critical systems to be down for a week.
If we can keep using the on-prem servers while setting up the AWS instances and then transfer only the delta of data, the downtime will be minimized.
upvoted 10 times
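
For the arithmetic behind the comment above (rounded, and ignoring protocol overhead):

# 40 TB ≈ 40 * 8 * 10^6 megabits; at 10 Mbps that is 32,000,000 seconds, i.e. roughly 370 days,
# which is far beyond the 3-month window and is why Snowball carries the bulk of the data
echo $(( 40 * 8 * 1000000 / 10 / 86400 ))   # ≈ 370 (days)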

" # hilft Most Recent % 3 months, 1 week ago


snowball. other options will take forever.
upvoted 1 times

" # kyo 9 months ago


Selected Answer: D
AWS Server Migration Service is better than VM I/E.
upvoted 1 times

" # cldy 10 months, 1 week ago


D correct.
upvoted 1 times

" # AzureDP900 11 months ago


D is right answer. A is not correct
upvoted 1 times

" # tgv 1 year ago


DDD
---
upvoted 1 times


" # levy_k 1 year ago


I will go with A. Firstly, there is no concern about cost in the question; the customer's purpose is to migrate all services and data to AWS in 3 months
without any effect on end users.
For D, with Snowball, the services and old data can be migrated to AWS in weeks, but how do you process the data generated during those weeks? A
10 Mbps network is obviously not the right solution for that, so the customer should find a way to keep the data consistent.
upvoted 3 times

" # Viper57 1 year ago


You did not read the question. " The on-premises network throughput has reached capacity and would be costly to increase. Cost is a
concern, so a 1 Gbps Direct Connect connection would definitely be too expensive.
upvoted 4 times

" # zolthar_z 1 year ago


I think the solution is D, the key is the cost limitation. There is a reason to put that in the question. The direct connect is the best solution if you
don't have a budget problem. With D you can create a copy of the non-essentials VMs, work with the on-premise while the VMs arrive and are
deployed in EC2.
upvoted 3 times

" # WhyIronMan 1 year ago


I'll go with D
upvoted 2 times

" # Kopa 1 year ago


D, since customer is not willing to make additional cost with direct connect. Also not full 40 TB are to be migrated as some of data will offloaded
to Snowball device.
upvoted 2 times

" # tvs 1 year ago


Should be D . SMS transfer vmdk to s3 which Need public virtual interface over DX. https://aws.amazon.com/blogs/apn/aws-server-migration-
service-server-migration-to-the-cloud-made-easy/
upvoted 2 times

" # blackgamer 1 year ago


D is the solution.
upvoted 1 times

" # Waiweng 1 year ago


it's D
upvoted 2 times

" # natpilot 1 year ago


A for sure. Direct Connect can be established in 1 month and is better suited for critical VM migration; for D, after the VM import, how is it feasible to
resync roughly 3 days' worth of data over a 10 Mbps link that is already fully used?
upvoted 2 times

" # elf78 1 year ago


D - Assuming mission critical VMs are migrated using SMS via existing link and others via Snowball. All other options don't sound reasonable.
upvoted 2 times

" # Ebi 1 year ago


A for sure,
D is not the option, snowball migration takes weeks while non-critical VMs must be available during business hours.
upvoted 6 times

" # Ebi 1 year ago


After further review I guess this questions does not have any correct answer:
A: Is not correct, as mentioned in other comments, private VIF will work for connecting to VPC not public services like S3
B: No sense, it is talking discover not actual migration
C: No sense, no needed to have SFTP
D: Transferring VMs using Snowball will take weeks while as per question non-critical application are used during business hours

Very bad question


upvoted 4 times


Question #609 Topic 1

A company runs a popular public-facing ecommerce website. Its user base is growing quickly from a local market to a national market. The
website is hosted in an on-premises data center with web servers and a MySQL database. The company wants to migrate its workload to AWS. A
solutions architect needs to create a solution to:
✑ Improve security
✑ Improve reliability
✑ Improve availability
✑ Reduce latency
✑ Reduce maintenance
Which combination of steps should the solutions architect take to meet these requirements? (Choose three.)

A. Use Amazon EC2 instances in two Availability Zones for the web servers in an Auto Scaling group behind an Application Load Balancer.

B. Migrate the database to a Multi-AZ Amazon Aurora MySQL DB cluster.

C. Use Amazon EC2 instances in two Availability Zones to host a highly available MySQL database cluster.

D. Host static website content in Amazon S3. Use S3 Transfer Acceleration to reduce latency while serving webpages. Use AWS WAF to
improve website security.

E. Host static website content in Amazon S3. Use Amazon CloudFront to reduce latency while serving webpages. Use AWS WAF to improve
website security.

F. Migrate the database to a single-AZ Amazon RDS for MySQL DB instance.

Correct Answer: DEF

Community vote distribution


ABE (85%) BDE (15%)

" # Nemer Highly Voted $ 1 year, 1 month ago


ABE.
Excluding: C does not reduce maintenance (MySQL IaaS), we need CloudFront for WAF (D is out), and F is not HA.
upvoted 39 times

" # sam422 1 year, 1 month ago


C doesn't qualify for reduce maintenance
upvoted 1 times
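
For the database part of answer B, a minimal sketch of a Multi-AZ Aurora MySQL cluster; identifiers, sizes, and the password are hypothetical placeholders. For answer E, the WAFv2 web ACL is created with scope CLOUDFRONT and referenced from the distribution configuration.

# Aurora MySQL cluster with a writer and a reader instance (use --availability-zone on each
# instance to spread them across AZs)
aws rds create-db-cluster --db-cluster-identifier shop-db --engine aurora-mysql \
  --master-username admin --master-user-password 'REPLACE_ME'
aws rds create-db-instance --db-instance-identifier shop-db-1 --db-cluster-identifier shop-db \
  --engine aurora-mysql --db-instance-class db.r5.large
aws rds create-db-instance --db-instance-identifier shop-db-2 --db-cluster-identifier shop-db \
  --engine aurora-mysql --db-instance-class db.r5.large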

" # Anila_Dhharisi Highly Voted $ 1 year, 1 month ago


yes its ABE.
upvoted 8 times

" # hilft Most Recent % 3 months, 2 weeks ago


ABE straight forward
upvoted 1 times

" # TechX 4 months ago


Selected Answer: ABE
It's ABE
upvoted 2 times

" # aandc 4 months, 1 week ago


Selected Answer: ABE
easy one
upvoted 2 times

" # tartarus23 6 months, 3 weeks ago


Selected Answer: ABE
A. High availability and performance for the web servers, since they are multi-AZ, auto-scaled, and load balanced.
B. The database is multi-AZ and moved to an Aurora DB cluster, so it is reliable and scalable.
E. Static website content in S3 cached by CloudFront reduces latency; WAF increases security.
upvoted 2 times

" # Ni_yot 8 months, 1 week ago


ABE for me. With this option, the web app and DB are highly available. And the latency and security is covered with the E answer.
upvoted 1 times

" # zoliv 9 months, 2 weeks ago


Selected Answer: ABE
ABE for sure!
upvoted 1 times

" # pititcu667 9 months, 4 weeks ago


Selected Answer: ABE
This is my choice.
upvoted 1 times

" # weequan 11 months ago


Selected Answer: BDE
A need maintenence
C need maintenence
F not reliability
upvoted 1 times

" # challenger1 10 months, 3 weeks ago


No..... ABE
upvoted 1 times

" # weequan 11 months ago


Selected Answer: BDE
A need maintenence
C is not meet with reliability
F not reliability
upvoted 1 times

" # AzureDP900 11 months, 1 week ago


Selected Answer: ABE
ABE is right answer!
upvoted 3 times

" # AzureDP900 11 months, 1 week ago


I will go with ABE !
upvoted 1 times

" # tgv 1 year ago


AAA BBB EEE
---
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with A,B,E
upvoted 2 times

" # blackgamer 1 year ago


ABE for sure.
upvoted 1 times

" # Waiweng 1 year ago


it's ABE
upvoted 3 times


Question #610 Topic 1

A company has an internal application running on AWS that is used to track and process shipments in the company's warehouse. Currently, after
the system receives an order, it emails the staff the information needed to ship a package. Once the package is shipped, the staff replies to the
email and the order is marked as shipped.
The company wants to stop using email in the application and move to a serverless application model.
Which architecture solution meets these requirements?

A. Use AWS Batch to con+gure the different tasks required to ship a package. Have AWS Batch trigger an AWS Lambda function that creates
and prints a shipping label. Once that label is scanned, as it leaves the warehouse, have another Lambda function move the process to the
next step in the AWS Batch job.

B. When a new order is created, store the order information in Amazon SQS. Have AWS Lambda check the queue every 5 minutes and process
any needed work. When an order needs to be shipped, have Lambda print the label in the warehouse. Once the label has been scanned, as it
leaves the warehouse, have an Amazon EC2 instance update Amazon SQS.

C. Update the application to store new order information in Amazon DynamoDB. When a new order is created, trigger an AWS Step Functions
workflow, mark the orders as "in progress", and print a package label to the warehouse. Once the label has been scanned and fulfilled, the
application will trigger an AWS Lambda function that will mark the order as shipped and complete the workflow.

D. Store new order information in Amazon EFS. Have instances pull the new information from the NFS and send that information to printers in
the warehouse. Once the label has been scanned, as it leaves the warehouse, have Amazon API Gateway call the instances to remove the
order information from Amazon EFS.

Correct Answer: A

Community vote distribution


C (100%)

" # Konnon Highly Voted $ 1 year, 1 month ago


I go for C. Use DynamoDB Streams to trigger lambda then trigger step function.
upvoted 17 times

" # joe16 1 year ago


Yes.
AWS Batch is ruled out as it supports only EC2/Fargate based compute not lambdas.
upvoted 2 times
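
A small sketch of what the Step Functions workflow in answer C could look like, using the task-token callback pattern so the execution pauses until the label is scanned; all ARNs, function names, and the role are hypothetical placeholders:

cat > ship-order.asl.json <<'EOF'
{
  "StartAt": "MarkInProgressAndPrintLabel",
  "States": {
    "MarkInProgressAndPrintLabel": {
      "Type": "Task",
      "Resource": "arn:aws:states:::lambda:invoke.waitForTaskToken",
      "Parameters": {
        "FunctionName": "print-label",
        "Payload": { "orderId.$": "$.orderId", "taskToken.$": "$$.Task.Token" }
      },
      "Next": "MarkShipped"
    },
    "MarkShipped": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:111122223333:function:mark-shipped",
      "End": true
    }
  }
}
EOF
aws stepfunctions create-state-machine --name ship-order \
  --definition file://ship-order.asl.json \
  --role-arn arn:aws:iam::111122223333:role/sfn-ship-order
# When the label is scanned, the scanning application calls SendTaskSuccess with the stored
# task token, which resumes the execution and marks the order as shipped.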

" # Nemer Highly Voted $ 1 year, 1 month ago


C. Step functions Standard for order fulfillment.
upvoted 10 times

" # pititcu667 Most Recent % 9 months, 3 weeks ago


Selected Answer: C
c because step functions can be used to handle the steps. workflow service would have been better but yeah.
upvoted 1 times

" # cldy 11 months ago


C. Update the application to store new order information in Amazon DynamoDB. When a new order is created, trigger an AWS Step Functions
workflow, mark the orders as "in progress", and print a package label to the warehouse. Once the label has been scanned and fulfilled, the
application will trigger an AWS Lambda function that will mark the order as shipped and complete the workflow.
upvoted 1 times

" # AzureDP900 11 months, 1 week ago


Selected Answer: C
C is right answer!
upvoted 1 times

" # acloudguru 11 months, 1 week ago


Selected Answer: C
A does not make any sense. C should be the serverless and control whole process solution.
upvoted 2 times

" # student22 1 year ago


C
Step functions
upvoted 1 times
" # tgv 1 year ago
CCC
---
upvoted 1 times

" # blackgamer 1 year ago


The answer is C.
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with C
upvoted 1 times

" # Kopa 1 year ago


C, Typical DynamoDB and AWS Step Functions usage
upvoted 1 times

" # ladh 1 year ago


Why not b?
upvoted 1 times

" # pradhyumna 1 year ago


Because EC2 instance is part of the solution which does not fit the requirement of serverless
upvoted 1 times

" # Waiweng 1 year ago


it's C
upvoted 4 times

" # KnightVictor 1 year ago


Answer is C. Keywords like "human intervention"; the workflow can be handled by Step Functions.
upvoted 1 times

" # Ebi 1 year ago


Answer is C
upvoted 3 times

" # petebear55 1 year ago


A is not serverless .. the answer is C ... wish they would stop putting wrong answers in the answer box
upvoted 1 times

" # Bulti 1 year, 1 month ago


Answer is C
upvoted 2 times


Question #611 Topic 1

A company has developed a mobile game. The backend for the game runs on several virtual machines located in an on-premises data center. The
business logic is exposed using a REST API with multiple functions. Player session data is stored in central file storage. Backend services use
different API keys for throttling and to distinguish between live and test traffic.
The load on the game backend varies throughout the day. During peak hours, the server capacity is not sufficient. There are also latency issues
when fetching player session data. Management has asked a solutions architect to present a cloud architecture that can handle the game's
varying load and provide low-latency data access. The API model should not be changed.
Which solution meets these requirements?

A. Implement the REST API using a Network Load Balancer (NLB). Run the business logic on an Amazon EC2 instance behind the NLB. Store
player session data in Amazon Aurora Serverless.

B. Implement the REST API using an Application Load Balancer (ALB). Run the business logic in AWS Lambda. Store player session data in
Amazon DynamoDB with on-demand capacity.

C. Implement the REST API using Amazon API Gateway. Run the business logic in AWS Lambda. Store player session data in Amazon
DynamoDB with on- demand capacity.

D. Implement the REST API using AWS AppSync. Run the business logic in AWS Lambda. Store player session data in Amazon Aurora
Serverless.

Correct Answer: A

Community vote distribution


C (100%)

" # Nemer Highly Voted $ 1 year, 1 month ago


C. Api Gateway . DynamoDB typical gaming use case.
https://aws.amazon.com/blogs/database/amazon-dynamodb-gaming-use-cases-and-design-patterns/
upvoted 23 times
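
Two small sketches for the pieces of answer C; the table, plan, and key names are hypothetical. On-demand DynamoDB handles the spiky session traffic, and API Gateway usage plans with API keys keep the existing key-based throttling and live-vs-test separation without changing the API model:

# Session table with on-demand (pay-per-request) capacity
aws dynamodb create-table --table-name player-sessions \
  --attribute-definitions AttributeName=playerId,AttributeType=S \
  --key-schema AttributeName=playerId,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST

# Usage plan + API key preserve the current throttling model
aws apigateway create-usage-plan --name live \
  --throttle rateLimit=1000,burstLimit=2000
aws apigateway create-api-key --name live-key --enabled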

" # Ni_yot Most Recent % 2 months, 3 weeks ago


C is good
upvoted 1 times

" # pititcu667 9 months, 3 weeks ago


Selected Answer: C
c -> api lambda dynamo classic use case
upvoted 1 times

" # vbal 11 months, 1 week ago


Answer shld be B based upon API model cant be changed as both ALB & API Gateway have similar capabilities...https://dashbird.io/blog/aws-
api-gateway-vs-application-load-balancer/
upvoted 1 times

" # AzureDP900 11 months, 1 week ago


Selected Answer: C
C is perfect answer for this use case!
upvoted 1 times

" # andylogan 1 year ago


It's C, typical use-case
upvoted 1 times

" # tgv 1 year ago


CCC
---
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with C
upvoted 2 times

" # blackgamer 1 year ago


C is the answer.
upvoted 1 times
" # Waiweng 1 year ago
it's C
upvoted 3 times

" # gsw 1 year ago


always avoid making radical changes to the architecture unless the question specifies it
upvoted 2 times

" # alisyech 1 year ago


C for sure
upvoted 1 times

" # certainly 1 year, 1 month ago


B would work. API gateway limit 10,000 Req. per Sec. while ALB doesn't have limit, hence it scale better
upvoted 2 times

" # Ebi 1 year, 1 month ago


I will go with C
upvoted 3 times

" # 01037 1 year, 1 month ago


C is the best option here, but it needs lots of work.
Without anymore requirement, C is the answer.
upvoted 3 times

" # Bulti 1 year, 1 month ago


C is the correct answer.
upvoted 1 times

" # T14102020 1 year, 1 month ago


Correct is C. API Gateway + Lambda + Dynamo
upvoted 1 times


Question #612 Topic 1

An enterprise company wants to allow its developers to purchase third-party software through AWS Marketplace. The company uses an AWS
Organizations account structure with full features enabled, and has a shared services account in each organizational unit (OU) that will be used by
procurement managers. The procurement team's policy indicates that developers should be able to obtain third-party software from an approved
list only and use Private Marketplace in AWS
Marketplace to achieve this requirement. The procurement team wants administration of Private Marketplace to be restricted to a role named
procurement-manager-role, which could be assumed by procurement managers. Other IAM users, groups, roles, and account administrators in the
company should be denied
Private Marketplace administrative access.
What is the MOST efficient way to design an architecture to meet these requirements?

A. Create an IAM role named procurement-manager-role in all AWS accounts in the organization. Add the PowerUserAccess managed policy to
the role. Apply an inline policy to all IAM users and roles in every AWS account to deny permissions on the
AWSPrivateMarketplaceAdminFullAccess managed policy.

B. Create an IAM role named procurement-manager-role in all AWS accounts in the organization. Add the AdministratorAccess managed policy
to the role. Define a permissions boundary with the AWSPrivateMarketplaceAdminFullAccess managed policy and attach it to all the developer
roles.

C. Create an IAM role named procurement-manager-role in all the shared services accounts in the organization. Add the
AWSPrivateMarketplaceAdminFullAccess managed policy to the role. Create an organization root-level SCP to deny permissions to administer
Private Marketplace to everyone except the role named procurement-manager-role. Create another organization root-level SCP to deny
permissions to create an IAM role named procurement-manager-role to everyone in the organization.

D. Create an IAM role named procurement-manager-role in all AWS accounts that will be used by developers. Add the
AWSPrivateMarketplaceAdminFullAccess managed policy to the role. Create an SCP in Organizations to deny permissions to administer
Private Marketplace to everyone except the role named procurement-manager-role. Apply the SCP to all the shared services accounts in the
organization.

Correct Answer: D

Community vote distribution


C (60%) D (40%)

" # Nemer Highly Voted $ 1 year, 1 month ago


C. SCP to deny permissions to administer Private Marketplace to everyone except the role named procurement-manager-role.

https://aws.amazon.com/blogs/awsmarketplace/controlling-access-to-a-well-architected-private-marketplace-using-iam-and-aws-organizations/
upvoted 19 times

" # Gladabhi 1 year ago


I will go with C as procurement managers need access from the shared services account. We don't want any other account to have the
procurement-manager-role, in keeping with the least-privilege principle.
upvoted 3 times

" # Nemer 1 year, 1 month ago


Changed to D. In C, there is the issue of ROOT-level SCP to deny permissions to create an IAM role named procurement-manager-role to
EVERYONE in the organization..
upvoted 9 times

" # joe16 1 year ago


D is wrong. Developers should not have the procurement-manager-role.
"...restricted to a role named procurement- manager-role, which could be assumed by procurement managers"
upvoted 3 times

" # Kelvin 1 year ago


Yes, D looks correct.
upvoted 1 times

" # shammous 1 year ago


The issue is not with the word "EVERYONE", but with the entire useless statement: "Create another organization root-level SCP to deny
permissions to create an IAM role named procurement-manager-role to everyone in the organization.": First, this could be done in the first
SCP, second, denying permissions to create an IAM role named procurement-manager-role doesn't change anything.


upvoted 1 times

" # RedKane 1 year ago


Without second SCP users/roles in other accounts that have full IAM access could create role with this name "procurement-manager-
role" and assign any permission they want - since first SCP explicitly excludes "procurement-manager-role" from the DENY that would
allow bypassing intended design of security rules.
upvoted 6 times

" # student22 1 year ago


Good explanation.
C makes sense.
upvoted 1 times
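For reference, a minimal sketch of the kind of SCP the linked blog post describes: deny Private Marketplace administration to every principal except procurement-manager-role, attached with Organizations. The wildcard action, policy names, and the root ID are simplifications/placeholders for illustration; the blog's real policy scopes the action list to the specific Private Marketplace admin actions.

    # Sketch: create an SCP that denies Marketplace administration to everyone
    # except the procurement-manager-role, then attach it with Organizations.
    # The wildcard action is a simplification; names and the root ID are hypothetical.
    import json
    import boto3

    scp = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyPrivateMarketplaceAdmin",
            "Effect": "Deny",
            "Action": "aws-marketplace:*",           # simplified for the sketch
            "Resource": "*",
            "Condition": {
                "StringNotLike": {
                    "aws:PrincipalARN": "arn:aws:iam::*:role/procurement-manager-role"
                }
            }
        }]
    }

    org = boto3.client("organizations")
    policy = org.create_policy(
        Name="deny-private-marketplace-admin",
        Description="Only procurement-manager-role may administer Private Marketplace",
        Type="SERVICE_CONTROL_POLICY",
        Content=json.dumps(scp),
    )
    org.attach_policy(PolicyId=policy["Policy"]["PolicySummary"]["Id"],
                      TargetId="r-examplerootid")   # hypothetical root ID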
" # WhyIronMan Highly Voted $ 1 year ago
I'll go with C
upvoted 6 times

" # dcdcdc3 Most Recent % 1 month, 2 weeks ago


Selected Answer: D
per the link provided below
https://aws.amazon.com/blogs/awsmarketplace/controlling-access-to-a-well-architected-private-marketplace-using-iam-and-aws-organizations/

and per this paragraph within that link


"As an additional control, I applied an SCP to all the organizational units in this example organization to restrict Private Marketplace
administration access to an IAM role called procurement-manager. This guardrail prevents other IAM roles, users, or groups from accessing the
Private Marketplace administration page, even administrators in any of these organizational units’ accounts."

I would choose D
upvoted 2 times

" # Rahu 3 weeks, 2 days ago


But here the question also says "Other IAM users, groups, roles, and account administrators in the company should be denied Private
Marketplace administrative access". That means Answer C only matches your point.
upvoted 1 times

" # hilft 3 months, 1 week ago


D. not C
never root level SCP
upvoted 2 times

" # jj22222 9 months ago


C looks right
upvoted 1 times

" # AMKazi 9 months, 2 weeks ago


Answer should be B. - meets both requirements of procurement mgmt and dev access
C- only solving requirement of procurement manager. What about developer access to use the marketplace?
D- giving procurement manager role to Developers
upvoted 1 times

" # cldy 10 months, 1 week ago


C is correct.
upvoted 1 times

" # Ni_yot 10 months, 2 weeks ago


C for me. The link attached in the write up is worth a read. https://aws.amazon.com/blogs/awsmarketplace/controlling-access-to-a-well-
architected-private-marketplace-using-iam-and-aws-organizations/
upvoted 1 times

" # cldy 11 months ago


C. Create an IAM role named procurement-manager-role in all the shared services accounts in the organization. Add the
AWSPrivateMarketplaceAdminFullAccess managed policy to the role. Create an organization root-level SCP to deny permissions to administer
Private Marketplace to everyone except the role named procurement-manager-role. Create another organization root-level SCP to deny
permissions to create an IAM role named procurement-manager-role to everyone in the organization.
upvoted 1 times

" # AzureDP900 11 months, 1 week ago


Selected Answer: C
C is right!
https://aws.amazon.com/blogs/awsmarketplace/controlling-access-to-a-well-architected-private-marketplace-using-iam-and-aws-organizations/
upvoted 2 times

" # acloudguru 11 months, 1 week ago


Selected Answer: C
C. SCP to deny permissions to administer Private Marketplace to everyone except the role named procurement-manager-role.

https://aws.amazon.com/blogs/awsmarketplace/controlling-access-to-a-well-architected-private-marketplace-using-iam-and-aws-organizations/
upvoted 1 times
" # andylogan 1 year ago
It's C
upvoted 1 times

" # tgv 1 year ago


CCC
---
upvoted 1 times

" # blackgamer 1 year ago


C is the answer. D is wrong as the SCP is applied to the shared services accounts, which are not the accounts used by developers.
upvoted 3 times

" # blackgamer 1 year ago


Please refer following links for more details why C is correct.
https://aws.amazon.com/blogs/awsmarketplace/controlling-access-to-a-well-architected-private-marketplace-using-iam-and-aws-
organizations/
upvoted 1 times

" # beebatov 1 year ago


C is the Answer! D is giving procurement-manager-role to DEVELOPERS!!
Although its not the best practice to apply SCP at root level, but C is the most viable answer for me here.
upvoted 3 times

" # gsw 1 year ago


worth checking the Jon Bonso exams as this question is in one of the exam sets and he gives answer A. PowerUserAccess is a IAM default user
role for developers https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_job-functions.html
upvoted 4 times

" # KnightVictor 1 year ago


going with D
upvoted 1 times


Question #613 Topic 1

A solutions architect is designing the data storage and retrieval architecture for a new application that a company will be launching soon. The
application is designed to ingest millions of small records per minute from devices all around the world. Each record is less than 4 KB in size and
needs to be stored in a durable location where it can be retrieved with low latency. The data is ephemeral and the company is required to store the
data for 120 days only, after which the data can be deleted.
The solutions architect calculates that, during the course of a year, the storage requirements would be about 10-15 TB.
Which storage strategy is the MOST cost-effective and meets the design requirements?

A. Design the application to store each incoming record as a single .csv file in an Amazon S3 bucket to allow for indexed retrieval. Configure a
lifecycle policy to delete data older than 120 days.

B. Design the application to store each incoming record in an Amazon DynamoDB table properly configured for the scale. Configure the
DynamoDB Time to Live (TTL) feature to delete records older than 120 days.

C. Design the application to store each incoming record in a single table in an Amazon RDS MySQL database. Run a nightly cron job that
executes a query to delete any records older than 120 days.

D. Design the application to batch incoming records before writing them to an Amazon S3 bucket. Update the metadata for the object to
contain the list of records in the batch and use the Amazon S3 metadata search feature to retrieve the data. Configure a lifecycle policy to
delete the data after 120 days.

Correct Answer: C

Community vote distribution


B (100%)

" # Nemer Highly Voted $ 1 year, 1 month ago


B. DynamoDB with TTL, cheaper for sustained throughput of small items + suited for fast retrievals. S3 cheaper for storage only, much higher
costs with writes. RDS not designed for this use case.
upvoted 29 times

" # oscargee 1 year ago


DynamoDB is a Key/Value storage. And it fits big data read/write. So it cannot be used in this situation.
upvoted 2 times

" # sashenka 11 months, 4 weeks ago


Hmmm.... "The program is meant to continuously consume millions of tiny records per minute from devices located around the globe." If
that doesn't say big data read/write than I don't know what is. Also, DynamoDB is perfect for this especially seeing that the 4k value is the
limit size.
upvoted 3 times
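A short sketch of option B's two pieces: enabling DynamoDB TTL on the table, then writing each record with an expiry timestamp 120 days out. The table and attribute names below are hypothetical.

    # Sketch: enable TTL on a DynamoDB table and write records that expire after
    # 120 days (option B). Table/attribute names are hypothetical.
    import time
    import boto3

    TABLE = "DeviceRecords"
    client = boto3.client("dynamodb")

    # One-time setup: tell DynamoDB which attribute holds the expiry epoch time.
    client.update_time_to_live(
        TableName=TABLE,
        TimeToLiveSpecification={"Enabled": True, "AttributeName": "expires_at"},
    )

    def put_record(device_id: str, payload: str) -> None:
        now = int(time.time())
        client.put_item(
            TableName=TABLE,
            Item={
                "device_id": {"S": device_id},
                "recorded_at": {"N": str(now)},
                "payload": {"S": payload},                        # < 4 KB record
                "expires_at": {"N": str(now + 120 * 24 * 3600)},  # TTL attribute
            },
        )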

" # kyo Most Recent % 9 months ago


Selected Answer: B
Only B can do it
upvoted 1 times

" # cannottellname 9 months, 2 weeks ago


BBBBBBBBBBB
upvoted 1 times

" # cldy 10 months, 1 week ago


B correct.
upvoted 1 times

" # vbal 10 months, 2 weeks ago


Answer: D; Anyone who thinks S3 Object Metadata Search is not possible: https://aws.amazon.com/blogs/architecture/swiftly-search-metadata-
with-an-amazon-s3-serverless-architecture/
upvoted 2 times

" # vbal 10 months, 2 weeks ago


I would say building the index has a cost attached, which could be offset by adding more items in a batch ...
upvoted 1 times

" # AzureDP900 11 months, 1 week ago


B is correct!


upvoted 1 times

" # acloudguru 11 months, 1 week ago


Selected Answer: B
B. DynamoDB with TTL, cheaper for sustained throughput of small items + suited for fast retrievals. S3 cheaper for storage only, much higher
costs with writes. RDS not designed for this use case.
upvoted 3 times

" # andylogan 1 year ago


It's B
upvoted 1 times

" # tgv 1 year ago


BBB
---
upvoted 1 times

" # DerekKey 1 year ago


A & C - wrong
B - should be correct
D - I am not aware of an API that you can use to search S3 objects by user-defined metadata, btw. 1,000 PUT requests cost $0.005, and the PUT
request header limits user-defined metadata to 2 KB
upvoted 1 times

" # kirrim 1 year ago


Kendra and ElasticSearch will let you search S3 object metadata, but D sounds to me like they're saying it's a native function of S3 itself,
which neither of those are. So I'm not saying D is right, just that other services can do it.

Re: PUT header request with limitation for user-defined metadata to 2KB, that should be OK, you're not storing 4KB data in metadata, you'd
be combining multiple 4KB data pieces into a very large flat file. The metadata would only tell you which data pieces are in that very large flat
file.
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with B
upvoted 2 times

" # blackgamer 1 year ago


B is cost effective compared to C. Also low latency.
upvoted 1 times

" # Waiweng 1 year ago


it's B
upvoted 2 times

" # Ajeeshpv 1 year ago


B, millions of input with size less than 4 kb and low latency
upvoted 1 times

" # Kian1 1 year ago


going with B
upvoted 1 times

" # 01037 1 year ago


I'll go with B, since it's an exam.
But I think S3 is pretty cost-effective in this case, though I don't know what "indexed retrieval" is.
As long as we give enough prefix, I think it may meet the requirement, since S3 has at least"3,500 PUT/COPY/POST/DELETE or 5,500
GET/HEAD requests per second per prefix in a bucket".
upvoted 1 times

" # RedKane 1 year ago


S3 is $5 per million requests, if you have 1 million per minute it's $216,000 per month. Roughly 6x cost of DynamoDB
upvoted 3 times

" # tomosabc1 1 month ago


Thanks for pointing this out.
upvoted 1 times

" # Ebi 1 year ago


Answer is B
Cost effective, low latency, TTL supports retention
upvoted 4 times


" # petebear55 1 year ago


This is in the exam guys !!! answer is B Dynamo db is most suitable in these cases. 'ingest millions of small records per minute from devices all
around the world.' D IS RED HERRING
upvoted 2 times


Question #614 Topic 1

A company provides auction services for artwork and has users across North America and Europe. The company hosts its application in Amazon
EC2 instances in the us-east-1 Region. Artists upload photos of their work as large-size, high-resolution image +les from their mobile phones to a
centralized Amazon S3 bucket created in the us-east-1 Region. The users in Europe are reporting slow performance for their image uploads.
How can a solutions architect improve the performance of the image upload process?

A. Redeploy the application to use S3 multipart uploads.

B. Create an Amazon CloudFront distribution and point to the application as a custom origin.

C. Configure the buckets to use S3 Transfer Acceleration.

D. Create an Auto Scaling group for the EC2 instances and create a scaling policy.

Correct Answer: C

Community vote distribution


C (100%)

" # Nemer Highly Voted $ 1 year, 1 month ago


C. Typical S3 Transfer Acceleration use case. Uses CloudFront’s globally distributed edge locations.
https://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration.html
Exclude option A, as only EU customers have latency issues.
upvoted 21 times
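A small sketch of option C with boto3: turn on Transfer Acceleration for the bucket, then have the upload client route PUTs through the accelerate endpoint. The bucket and file names are hypothetical.

    # Sketch: enable S3 Transfer Acceleration and upload through the accelerate
    # endpoint (option C). Bucket/key names are hypothetical.
    import boto3
    from botocore.config import Config

    BUCKET = "artwork-uploads-us-east-1"

    # One-time setup on the bucket (runs against the normal S3 endpoint).
    boto3.client("s3").put_bucket_accelerate_configuration(
        Bucket=BUCKET,
        AccelerateConfiguration={"Status": "Enabled"},
    )

    # Upload client used by the uploader: route PUTs through the nearest
    # CloudFront edge location via the accelerate endpoint.
    accelerated_s3 = boto3.client(
        "s3", config=Config(s3={"use_accelerate_endpoint": True})
    )
    accelerated_s3.upload_file("photo-highres.jpg", BUCKET, "uploads/photo-highres.jpg")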

" # Ebi Highly Voted $ 1 year ago


Definitely C
upvoted 6 times

" # cldy Most Recent % 11 months ago


C. Configure the buckets to use S3 Transfer Acceleration.
upvoted 1 times

" # AzureDP900 11 months, 1 week ago


Transfer acceleration is correct. C is right answer
upvoted 1 times

" # acloudguru 11 months, 2 weeks ago


Selected Answer: C
A does not make sense, should be C , to use Transfer acceleration for S3 in different region. hope I can have such easy question in my exam.
upvoted 1 times

" # andylogan 1 year ago


It's C
upvoted 1 times

" # tgv 1 year ago


CCC
---
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with C
upvoted 1 times

" # blackgamer 1 year ago


C is the answer.
upvoted 1 times

" # Waiweng 1 year ago


it's C
upvoted 3 times

" # alisyech 1 year ago


i go with C
upvoted 1 times


" # Kian1 1 year ago


going with C
upvoted 3 times

" # petebear55 1 year ago


This question is very similar to one in the exam .. where you will be asked to choose two answers ... the other answer is 'upload LARGER images'
rather than the (which one would first go for ) smaller images
upvoted 2 times

" # Bulti 1 year ago


Answer is C.
upvoted 1 times

" # T14102020 1 year, 1 month ago


Correct is C. S3 Transfer Acceleration
upvoted 1 times

" # jackdryan 1 year, 1 month ago


I'll go with C
upvoted 3 times

" # cloudgc 1 year, 1 month ago


C as the performance issue is only in uploading images from Europe.
upvoted 1 times


Question #615 Topic 1

A company has developed a new release of a popular video game and wants to make it available for public download. The new release package is
approximately
5 GB in size. The company provides downloads for existing releases from a Linux-based, publicly facing FTP site hosted in an on-premises data
center. The company expects the new release will be downloaded by users worldwide. The company wants a solution that provides improved
download performance and low transfer costs, regardless of a user's location.
Which solutions will meet these requirements?

A. Store the game files on Amazon EBS volumes mounted on Amazon EC2 instances within an Auto Scaling group. Configure an FTP service
on the EC2 instances. Use an Application Load Balancer in front of the Auto Scaling group. Publish the game download URL for users to
download the package.

B. Store the game files on Amazon EFS volumes that are attached to Amazon EC2 instances within an Auto Scaling group. Configure an FTP
service on each of the EC2 instances. Use an Application Load Balancer in front of the Auto Scaling group. Publish the game download URL
for users to download the package.

C. Configure Amazon Route 53 and an Amazon S3 bucket for website hosting. Upload the game files to the S3 bucket. Use Amazon CloudFront
for the website. Publish the game download URL for users to download the package.

D. Configure Amazon Route 53 and an Amazon S3 bucket for website hosting. Upload the game files to the S3 bucket. Set Requester Pays for
the S3 bucket. Publish the game download URL for users to download the package.

Correct Answer: C

" # Nemer Highly Voted $ 1 year, 1 month ago


C. CloudFront for the website + game download URL.

As this a public download that allows anonymous access, option D is excluded.


upvoted 16 times
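As an illustration of the Route 53 piece of option C, here is a sketch that points a download hostname at an existing CloudFront distribution with an alias record. The hosted zone ID, domain, and distribution domain name are hypothetical; Z2FDTNDATAQYW2 is the fixed hosted zone ID Route 53 uses for CloudFront alias targets (worth double-checking in the docs).

    # Sketch: create a Route 53 alias record that points the public download URL
    # at a CloudFront distribution (option C). IDs and names are hypothetical.
    import boto3

    route53 = boto3.client("route53")
    route53.change_resource_record_sets(
        HostedZoneId="Z0EXAMPLE12345",          # hypothetical hosted zone
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "downloads.example-game.com",
                    "Type": "A",
                    "AliasTarget": {
                        # CloudFront's fixed alias hosted zone ID
                        "HostedZoneId": "Z2FDTNDATAQYW2",
                        "DNSName": "d111111abcdef8.cloudfront.net",
                        "EvaluateTargetHealth": False,
                    },
                },
            }]
        },
    )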

" # Ebi Highly Voted $ 1 year ago


C is the answer
upvoted 5 times

" # loopback0 Most Recent % 3 weeks, 4 days ago


It's C
upvoted 1 times

" # cldy 11 months ago


C. Configure Amazon Route 53 and an Amazon S3 bucket for website hosting. Upload the game files to the S3 bucket. Use Amazon CloudFront
for the website. Publish the game download URL for users to download the package.
upvoted 1 times

" # AzureDP900 11 months, 1 week ago


C is right
upvoted 1 times

" # andylogan 1 year ago


It's C
upvoted 1 times

" # tgv 1 year ago


CCC
---
upvoted 1 times

" # mericov 1 year ago


C ->
https://aws.amazon.com/blogs/aws/amazon-cloudfront-support-for-20-gb-objects/
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with C
upvoted 1 times


" # Kopa 1 year ago


C, cloudfront more cost effective
upvoted 1 times

" # Waiweng 1 year ago


it's C
upvoted 3 times

" # Kian1 1 year ago


going with C
upvoted 2 times

" # Bulti 1 year ago


answer is C.
upvoted 1 times

" # srinivasa 1 year ago


D reduces the transfer costs
upvoted 1 times

" # darthvoodoo 1 year ago


D is not feasible because users are expected to have AWS accounts to absorb the download cost.
upvoted 2 times

" # petebear55 1 year ago


but C, I see where you're coming from
upvoted 1 times

" # T14102020 1 year, 1 month ago


Correct is C. CloudFront
upvoted 1 times

" # jackdryan 1 year, 1 month ago


I'll go with C
upvoted 3 times

" # CYL 1 year, 1 month ago


C. Since users are from worldwide, using cloudfront as a CDN will help to improve speed of download. Moreover, the game file is likely the same
content for each user who does the download.
upvoted 1 times


Question #616 Topic 1

A new startup is running a serverless application using AWS Lambda as the primary source of compute. New versions of the application must be
made available to a subset of users before deploying changes to all users. Developers should also have the ability to abort the deployment and
have access to an easy rollback

mechanism. A solutions architect decides to use AWS CodeDeploy to deploy changes when a new version is available.
Which CodeDeploy configuration should the solutions architect use?

A. A blue/green deployment

B. A linear deployment

C. A canary deployment

D. An all-at-once deployment

Correct Answer: D
Reference:
https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/automating-updates-to-serverless-apps.html

Community vote distribution


C (100%)

" # Blair77 3 weeks, 3 days ago


Selected Answer: C
C for Canary !!! Let's go!
upvoted 1 times

" # CloudHandsOn 3 months, 2 weeks ago


C.
"..subset of users." <- Canary is the job for this
upvoted 2 times
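A minimal sketch of wiring CodeDeploy for a Lambda canary rollout, using one of the predefined canary deployment configurations. Application, deployment group, and role names/ARNs are hypothetical.

    # Sketch: a CodeDeploy application + deployment group for Lambda that shifts
    # 10% of traffic first, waits 5 minutes, then shifts the rest (a canary),
    # with automatic rollback on failure. Names/ARNs are hypothetical.
    import boto3

    codedeploy = boto3.client("codedeploy")

    codedeploy.create_application(
        applicationName="game-backend",
        computePlatform="Lambda",
    )

    codedeploy.create_deployment_group(
        applicationName="game-backend",
        deploymentGroupName="game-backend-canary",
        serviceRoleArn="arn:aws:iam::111122223333:role/CodeDeployServiceRole",
        deploymentConfigName="CodeDeployDefault.LambdaCanary10Percent5Minutes",
        deploymentStyle={
            "deploymentType": "BLUE_GREEN",        # Lambda deployments shift traffic
            "deploymentOption": "WITH_TRAFFIC_CONTROL",
        },
        autoRollbackConfiguration={
            "enabled": True,
            "events": ["DEPLOYMENT_FAILURE"],      # easy rollback / abort
        },
    )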

" # gorodetsky 8 months ago


Selected Answer: C
C: https://docs.aws.amazon.com/codedeploy/latest/userguide/deployment-configurations.html
upvoted 2 times

" # Chuky64 5 months ago


This is the key "program must be made accessible to a subset of users"
upvoted 1 times

" # GeniusMikeLiu 9 months, 3 weeks ago


why not A? blue/green also support rollback /
upvoted 1 times

" # kadev 2 months, 1 week ago


because it's a concept defined by AWS. lambda deployment modes only: canary, linear, all
upvoted 1 times

" # cldy 11 months ago


C. A canary deployment
upvoted 1 times

" # AzureDP900 11 months, 1 week ago


C is correct answer
upvoted 1 times

" # pcops 11 months, 2 weeks ago


C: new versions of the program must be made accessible to a subset of users. Definition of canary deployment - A canary deployment, or canary
release, is a deployment pattern that allows you to roll out new code/features to a subset of users as an initial test
upvoted 1 times

" # andylogan 1 year ago


It's C


upvoted 1 times
" # tgv 1 year ago
CCC
---
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with C
upvoted 1 times

" # Kopa 1 year ago


Its C since at serverless application ECS we can use canary.
upvoted 1 times

" # blackgamer 1 year, 1 month ago


It is C, Canary deployment.
upvoted 2 times

" # Waiweng 1 year, 1 month ago


it's A, sorry Codedeploy on use blue/green deployment for lambda
upvoted 4 times

" # surekye 1 year ago


C is the Answer: https://docs.aws.amazon.com/codedeploy/latest/userguide/deployment-configurations.html.
Question is about Deployment configuration.
upvoted 3 times

" # Waiweng 1 year, 1 month ago


it's C
upvoted 4 times

" # KnightVictor 1 year, 1 month ago


would go with C
upvoted 1 times

" # nitinz 1 year, 1 month ago


C is correct answer.
upvoted 1 times

" # Ajeeshpv 1 year, 1 month ago


C , subset of users
upvoted 3 times


Question #617 Topic 1

A solutions architect is implementing federated access to AWS for users of the company's mobile application. Due to regulatory and security
requirements, the application must use a custom-built solution for authenticating users and must use IAM roles for authorization.
Which of the following actions would enable authentication and authorization and satisfy the requirements? (Choose two.)

A. Use a custom-built SAML-compatible solution for authentication and AWS SSO for authorization.

B. Create a custom-built LDAP connector using Amazon API Gateway and AWS Lambda for authentication. Store authorization tokens in
Amazon DynamoDB, and validate authorization requests using another Lambda function that reads the credentials from DynamoDB.

C. Use a custom-built OpenID Connect-compatible solution with AWS SSO for authentication and authorization.

D. Use a custom-built SAML-compatible solution that uses LDAP for authentication and uses a SAML assertion to perform authorization to the
IAM identity provider.

E. Use a custom-built OpenID Connect-compatible solution for authentication and use Amazon Cognito for authorization.

Correct Answer: AC

Community vote distribution


DE (100%)

" # Nemer Highly Voted $ 1 year, 1 month ago


DE. Custom-built SAML-compatible & OpenID Connect-compatible solutions.
A & C: AWS SSO does not support mobile apps.
B - no comment.
upvoted 22 times

" # DashL 1 year ago


Guys, the key thing is that SSO is an authentication service - not an authorization service. Whether it supports mobile apps or not is not the
key point here.
upvoted 4 times

" # DashL 1 year ago


Cognito is both an authentication and authorization service. The steps for Cognito authentication and authorization are:
User Login: User enters username and password and logs in with Cognito User Pool in which case a token will be provided by Cognito
upon successful login. User pool provides features to control user sign up, sign in and user management.
Get Temporary Credentials: Identity pool can use authentication providers like Amazon, Cognito, Custom, Facebook, Google, OpenId,
SAML, Twitter/Digits. Identity pool verifies that the token provided to it is a valid token generated by a registered authentication provider.
Cognito Identity Pool will provide temporary credentials for accessing AWS resources.
User Authorization: Cognito will use IAM to authorize the user with necessary permissions with IAM role (authenticated and
unauthenticated identities).
upvoted 1 times

" # LunchTime 1 year, 1 month ago


AWS SSO does support mobile apps: https://aws.amazon.com/single-sign-on/faqs/
upvoted 2 times

" # LunchTime 1 year, 1 month ago


Sorry, Nemer is correct. If you go to the link I posted you will see that it says "Does AWS SSO support single sign-on to native mobile and
desktop applications?
No. AWS SSO supports single sign-on to business applications through web browsers only. " My apologies.
upvoted 1 times

" # Carupano 1 year, 1 month ago


You all do not have a clue...what you're posting bunch of locos.
https://aws.amazon.com/emr/features/spark/
upvoted 2 times

" # tekkart 1 year ago


Answers A and E
A : OK, AWS SSO is compatible with SAML
B : KO, the requirements "IAM roles for authorisation" is not respected
C : KO, AWS SSO is not compatible with OIDC
D : KO, a solution cannot use SAML and LDAP at the same time, those are 2 distinct protocols
E : OK even though Cognito is more authentication than authorisation, it rests on IAM roles
upvoted 2 times


" # StelSen 1 year ago


Agree. SSO Can also do authorization via SSO >> Permission Tab
upvoted 1 times

" # acloudguru 11 months, 2 weeks ago


Does AWS SSO support single sign-on to native mobile and desktop applications?

No. AWS SSO supports single sign-on to business applications through web browsers only. https://aws.amazon.com/single-sign-on/faqs/?nc1=h_ls
upvoted 1 times

" # sashsz 7 months ago


: KO, a solution cannot use SAML and LDAP at the same time, those are 2 distinct protocols
Incorrect check your information
upvoted 2 times
" # Ni_yot Most Recent % 3 weeks, 6 days ago
like D and E
upvoted 1 times

" # asfsdfsdf 2 months, 2 weeks ago


Selected Answer: DE
This is clearly documented in AWS docs:
For D see diagram:
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_saml.html
for E:
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_oidc_cognito.html
SSO is not for mobile apps
upvoted 1 times
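To make option E concrete, here is a sketch of the exchange a custom OIDC-compatible authenticator enables: the app trades the OIDC token for temporary credentials tied to an IAM role via STS AssumeRoleWithWebIdentity (Cognito identity pools wrap the same idea). The role ARN and token value are placeholders.

    # Sketch: exchange an OIDC token issued by the custom-built identity solution
    # for temporary AWS credentials scoped to an IAM role (the authorization part).
    # The role ARN and token values are placeholders.
    import boto3

    sts = boto3.client("sts")

    resp = sts.assume_role_with_web_identity(
        RoleArn="arn:aws:iam::111122223333:role/MobileAppUserRole",
        RoleSessionName="mobile-user-1234",
        WebIdentityToken="<OIDC id_token issued by the custom IdP>",
        DurationSeconds=3600,
    )

    creds = resp["Credentials"]
    session = boto3.Session(
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    # 'session' can now call AWS APIs with only the permissions of MobileAppUserRole.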

" # Sat12345 10 months, 1 week ago


Does AWS SSO support single sign-on to native mobile and desktop applications?

No. AWS SSO supports single sign-on to business applications through web browsers only.
upvoted 2 times

" # AzureDP900 11 months, 1 week ago


Selected Answer: DE
D,E is correct !
upvoted 1 times

" # johnnsmith 1 year ago


D is wrong. To use role for authorization, you have to use STS assumerole API. There is nothing in D that calls STS API. AWS SSO and AWS
Cognito can call STS API. So it is A and E.
upvoted 1 times

" # andylogan 1 year ago


It's D, E
upvoted 1 times

" # tgv 1 year ago


DDD EEE
---
upvoted 2 times

" # WhyIronMan 1 year ago


I'll go with D,E
upvoted 1 times

" # blackgamer 1 year ago


D & E.
upvoted 1 times

" # Waiweng 1 year ago


it's D,E
upvoted 2 times

" # Kian1 1 year ago


will go with DE
upvoted 2 times

" # Ebi 1 year, 1 month ago


SSO is not purposed for end users and mobile apps.


I will go with DE
upvoted 4 times
" # Bulti 1 year, 1 month ago
D&E is correct. AWS SSO does not support mobile authentication.
upvoted 1 times

" # T14102020 1 year, 1 month ago


Correct is DE. A and C: AWS SSO does not support mobile apps.
upvoted 2 times

" # jackdryan 1 year, 1 month ago


I'll go with D,E
upvoted 3 times

" # NNHAN 1 year, 1 month ago


I choose B, D.
upvoted 2 times


Question #618 Topic 1

A company has developed a custom tool used in its workflow that runs within a Docker container. The company must perform manual steps each
time the container code is updated to make the container image available to new workflow executions. The company wants to automate this
process to eliminate manual effort and ensure a new container image is generated every time the tool code is updated.
Which combination of actions should a solutions architect take to meet these requirements? (Choose three.)

A. Configure an Amazon ECR repository for the tool. Configure an AWS CodeCommit repository containing code for the tool being deployed to
the container image in Amazon ECR.

B. Configure an AWS CodeDeploy application that triggers an application version update that pulls the latest tool container image from
Amazon ECR, updates the container with code from the source AWS CodeCommit repository, and pushes the updated container image to
Amazon ECR.

C. Configure an AWS CodeBuild project that pulls the latest tool container image from Amazon ECR, updates the container with code from
the source AWS CodeCommit repository, and pushes the updated container image to Amazon ECR.

D. Configure an AWS CodePipeline pipeline that sources the tool code from the AWS CodeCommit repository and initiates an AWS CodeDeploy
application update.

E. Configure an Amazon EventBridge rule that triggers on commits to the AWS CodeCommit repository for the tool. Configure the event to
trigger an update to the tool container image in Amazon ECR. Push the updated container image to Amazon ECR.

F. Configure an AWS CodePipeline pipeline that sources the tool code from the AWS CodeCommit repository and initiates an AWS CodeBuild
build.

Correct Answer: ACD

Community vote distribution


ACF (100%)

" # directconnect Highly Voted $ 1 year, 1 month ago


The answer is ACF. The pipeline should trigger the CodeBuild project which will store the result in ECR using CodeCommit as source.
CodeDeploy has no part to play in this.
upvoted 18 times

" # CYL Highly Voted $ 1 year ago


ACF. We are building new image, hence what we need are ECR to store the image, code build to build the image, and Codepipeline to see this
end to end. Code Deploy is not needed as we are not deploying to the container.
upvoted 6 times
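A rough sketch of the A/C/F pieces: a CodeBuild project that takes the CodeCommit repo as source and runs docker build/push to ECR (the CodePipeline stage from F then just invokes this project on each commit, typically feeding the source in itself). All names, ARNs, and the buildspec below are hypothetical.

    # Sketch: CodeBuild project that builds the tool's Docker image from the
    # CodeCommit repo and pushes it to ECR (answers A + C; F wires it into a
    # CodePipeline source->build flow). Names/ARNs/buildspec are hypothetical.
    import boto3

    BUILDSPEC = """
    version: 0.2
    phases:
      pre_build:
        commands:
          - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $ECR_REPO_URI
      build:
        commands:
          - docker build -t $ECR_REPO_URI:latest .
          - docker push $ECR_REPO_URI:latest
    """

    codebuild = boto3.client("codebuild")
    codebuild.create_project(
        name="tool-image-build",
        source={
            "type": "CODECOMMIT",
            "location": "https://git-codecommit.us-east-1.amazonaws.com/v1/repos/tool-repo",
            "buildspec": BUILDSPEC,
        },
        artifacts={"type": "NO_ARTIFACTS"},
        environment={
            "type": "LINUX_CONTAINER",
            "image": "aws/codebuild/standard:7.0",
            "computeType": "BUILD_GENERAL1_SMALL",
            "privilegedMode": True,   # needed to run Docker builds
            "environmentVariables": [
                {"name": "ECR_REPO_URI",
                 "value": "111122223333.dkr.ecr.us-east-1.amazonaws.com/tool"},
            ],
        },
        serviceRole="arn:aws:iam::111122223333:role/CodeBuildToolImageRole",
    )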

" # bobsmith2000 Most Recent % 5 months, 3 weeks ago


C, F for sure.
B and D are nonsense.
Between A and E.

Why A?
From it we only need "Configure an Amazon ECR repository for the tool.". The rest is crap. C and F cover all the process from pulling to
CodeCommit to pushing to ECR.
Why the hell the added a second sentence in A "Configure an AWS CodeCommit repository containing code for the tool being deployed to the
container image in Amazon ECR." ?
Whose sick mind is this a product of?
upvoted 1 times

" # cldy 11 months ago


A. Configure an Amazon ECR repository for the tool. Configure an AWS CodeCommit repository containing code for the tool being deployed to
the container image in Amazon ECR.
C. Configuration an AWS CodeBuild project that pulls the latest tool container image from Amazon ECR, updates the container with code from
the source AWS CodeCommit repository, and pushes the updated container image to Amazon ECR.
F. Configure an AWS CodePipeline pipeline that sources the tool code from the AWS CodeCommit repository and initiates an AWS CodeBuild
build.
upvoted 2 times

" # AzureDP900 11 months, 1 week ago


Selected Answer: ACF
ACF is correct.
upvoted 2 times


" # acloudguru 11 months, 2 weeks ago


Selected Answer: ACF
ACF. We are building new image, hence what we need are ECR to store the image, code build to build the image, and Codepipeline to see this
end to end. Code Deploy is not needed as we are not deploying to the container.
upvoted 1 times

" # andylogan 1 year ago


A, C, F
upvoted 1 times

" # tgv 1 year ago


AAA CCC FFF
---
upvoted 2 times

" # denccc 1 year ago


IIII tttthink yyyyour kkkeyboard iiis bbbroken
upvoted 2 times

" # tgv 1 year ago


it helps me to spot my answer more easily when I go through the questions the second time
upvoted 4 times

" # WhyIronMan 1 year ago


I'll go with A,C,F
upvoted 1 times

" # blackgamer 1 year ago


ACF is correct.
upvoted 1 times

" # Waiweng 1 year ago


A,C, F
upvoted 3 times

" # Kian1 1 year ago


going with ACF
upvoted 2 times

" # rcher 1 year ago


Ok with ACF, but i wonder why must CodeBuild pull the latest image while building the new code? I thought its just a matter of building an
existing DockerFile , unless its importing a base image.

Configuration an AWS CodeBuild project that pulls the latest tool container image from Amazon ECR,
upvoted 2 times

" # Ebi 1 year ago


ACF is the answer
upvoted 3 times

" # Bulti 1 year ago


ACF is the right answer.
upvoted 1 times

" # T14102020 1 year ago


Correct is ACF. CodeBuild+ Pipeline + without CodeDeploy
upvoted 2 times

" # jackdryan 1 year ago


I'll go with A,C,F
upvoted 3 times


Question #619 Topic 1

A company hosts an application on Amazon EC2 instances and needs to store files in Amazon S3. The files should never traverse the public
internet, and only the application EC2 instances are granted access to a specific Amazon S3 bucket. A solutions architect has created a VPC
endpoint for Amazon S3 and connected the endpoint to the application VPC.
Which additional steps should the solutions architect take to meet these requirements?

A. Assign an endpoint policy to the endpoint that restricts access to a specific S3 bucket. Attach a bucket policy to the S3 bucket that grants
access to the VPC endpoint. Add the gateway prefix list to a NACL of the instances to limit access to the application EC2 instances only.

B. Attach a bucket policy to the S3 bucket that grants access to application EC2 instances only using the aws:SourceIp condition. Update the
VPC route table so only the application EC2 instances can access the VPC endpoint.

C. Assign an endpoint policy to the VPC endpoint that restricts access to a specific S3 bucket. Attach a bucket policy to the S3 bucket that
grants access to the VPC endpoint. Assign an IAM role to the application EC2 instances and only allow access to this role in the S3 bucket's
policy.

D. Assign an endpoint policy to the VPC endpoint that restricts access to S3 in the current Region. Attach a bucket policy to the S3 bucket that
grants access to the VPC private subnets only. Add the gateway prefix list to a NACL to limit access to the application EC2 instances only.

Correct Answer: C

Community vote distribution


C (100%)

" # Nemer Highly Voted $ 1 year, 1 month ago


C. S3 endpoint policy to restrict access to specific bucket, bucket policy to grant access to specific VPC endpoint + specific role.
upvoted 19 times

" # kirrim 1 year ago


C is the best answer, I'm not crazy about the wording though. "Only allow access to this role in the S3 bucket's policy" ignores that the very
same answer also says the the VPCE needs to be permitted in the bucket policy, too.
upvoted 2 times

" # Konnon Highly Voted $ 1 year, 1 month ago


The answer is C. The bucket policy allows the VPCE. The VPCE policy restricts access to the specific bucket. IAM allows the instance role to use S3.
upvoted 13 times
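A sketch of option C's two policies applied with boto3: the endpoint policy narrowed to the one bucket, and a bucket policy that only allows requests arriving through that VPC endpoint (the instance role then carries the Allow side in IAM). Bucket name, endpoint ID, and role ARN are hypothetical.

    # Sketch: lock S3 access down to one bucket via the VPC endpoint policy, and
    # lock the bucket down to that endpoint with a bucket policy (option C).
    # Bucket name, vpce ID, and role ARN are hypothetical.
    import json
    import boto3

    BUCKET = "app-files-bucket"
    VPCE_ID = "vpce-0abc123example"
    APP_ROLE = "arn:aws:iam::111122223333:role/AppInstanceRole"

    endpoint_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [f"arn:aws:s3:::{BUCKET}", f"arn:aws:s3:::{BUCKET}/*"],
        }],
    }
    boto3.client("ec2").modify_vpc_endpoint(
        VpcEndpointId=VPCE_ID, PolicyDocument=json.dumps(endpoint_policy)
    )

    bucket_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "OnlyViaVpcEndpoint",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [f"arn:aws:s3:::{BUCKET}", f"arn:aws:s3:::{BUCKET}/*"],
            "Condition": {"StringNotEquals": {"aws:SourceVpce": VPCE_ID}},
        }],
    }
    boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(bucket_policy))
    # APP_ROLE then gets s3:GetObject/PutObject on this bucket in its IAM policy.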

" # cldy Most Recent % 10 months, 1 week ago


C is correct.
upvoted 1 times

" # Ni_yot 10 months, 2 weeks ago


C for me. https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-access.html
upvoted 1 times

" # AzureDP900 11 months ago


This question is in Neal Davis Practice test 5
upvoted 1 times

" # vramchn 11 months ago


C. Role + policy
upvoted 2 times

" # AzureDP900 11 months, 1 week ago


Selected Answer: C
Adrian Cantrill explained very well about Gateway endpoint policy , It is C
upvoted 1 times

" # AzureDP900 11 months, 1 week ago


Selected Answer: C
I will go with C
upvoted 1 times

" # andylogan 1 year ago


It's C


upvoted 1 times

" # tgv 1 year ago


CCC
---
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with C
upvoted 3 times

" # Waiweng 1 year ago


it's C
upvoted 4 times

" # kiev 1 year ago


Bucket policy +Role #C
upvoted 1 times

" # Ebi 1 year ago


I will go with C
upvoted 6 times

" # petebear55 1 year ago


C: Remember best practice when it comes to difficult questions like this lads !!! "ROLE"
upvoted 2 times

" # Bulti 1 year ago


C is correct.
upvoted 2 times

" # T14102020 1 year ago


Correct is C. without NACL + without aws:SourceIp
upvoted 1 times


Question #620 Topic 1

A financial services company has an on-premises environment that ingests market data feeds from stock exchanges, transforms the data, and
sends the data to an internal Apache Kafka cluster. Management wants to leverage AWS services to build a scalable and near-real-time solution
with consistent network performance to provide stock market data to a web application.
Which steps should a solutions architect take to build the solution? (Choose three.)

A. Establish an AWS Direct Connect connection from the on-premises data center to AWS.

B. Create an Amazon EC2 Auto Scaling group to pull the messages from the on-premises Kafka cluster and use the Amazon Consumer Library
to put the data into an Amazon Kinesis data stream.

C. Create an Amazon EC2 Auto Scaling group to pull the messages from the on-premises Kafka cluster and use the Amazon Kinesis Producer
Library to put the data into a Kinesis data stream.

D. Create a WebSocket API in Amazon API Gateway, create an AWS Lambda function to process an Amazon Kinesis data stream, and use the
@connections command to send callback messages to connected clients.

E. Create a GraphQL API in AWS AppSync, create an AWS Lambda function to process the Amazon Kinesis data stream, and use the
@connections command to send callback messages to connected clients.

F. Establish a Site-to-Site VPN from the on-premises data center to AWS.

Correct Answer: ADE

Community vote distribution


ACD (100%)

" # pengcp Highly Voted $ 1 year, 1 month ago


ACD, not E. Refer https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-how-to-call-websocket-api-connections.html
upvoted 20 times

" # Kelvin1477 1 year ago


yes support this as callback is only allowed for websocket
upvoted 5 times
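For option D's callback piece, here is a sketch of the Lambda that consumes the Kinesis data stream and pushes updates to connected WebSocket clients through API Gateway's @connections management API. The endpoint URL and the connection-ID lookup are placeholders.

    # Sketch: Lambda triggered by the Kinesis data stream that pushes each market
    # data record to connected WebSocket clients via API Gateway's @connections
    # management API (option D). Endpoint URL and connection IDs are placeholders.
    import base64
    import boto3

    apigw = boto3.client(
        "apigatewaymanagementapi",
        endpoint_url="https://abc123.execute-api.us-east-1.amazonaws.com/prod",
    )

    def handler(event, context):
        for record in event["Records"]:
            payload = base64.b64decode(record["kinesis"]["data"])
            for connection_id in get_connected_clients():   # hypothetical lookup,
                apigw.post_to_connection(                    # e.g. from DynamoDB
                    ConnectionId=connection_id,
                    Data=payload,
                )

    def get_connected_clients():
        # Placeholder: real code would read connection IDs stored at $connect time.
        return []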

" # jyrajan69 Most Recent % 4 months, 1 week ago


ACD, A is obvious because of consistency, B is wrong because there is no AWS library and D because @callback is a feature of Websocket API
upvoted 2 times

" # AzureDP900 11 months, 1 week ago


Selected Answer: ACD
ACD is right answer!
upvoted 1 times

" # andylogan 1 year ago


It's A C D
upvoted 1 times

" # tgv 1 year ago


AAA CCC DDD
---
upvoted 1 times

" # blackgamer 1 year ago


ACD for sure.
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with A,C,D
upvoted 2 times

" # Waiweng 1 year ago


it's ACD
upvoted 3 times

" # Amitv2706 1 year ago


@connections command for call back doesnt seem to be available in Appsync( but yes with API gateway) as it manages these constructs
internally.

Seeing this ACD is correct.


upvoted 2 times

" # Kian1 1 year ago


going with ACD
upvoted 2 times

" # Ebi 1 year ago


ACD is correct
upvoted 3 times

" # Bulti 1 year ago


ACD is correct. E is incorrect because @connections to have the backend service connect back to the connected clients is not a feature of
GraphQL API using AppSync.
upvoted 2 times

" # T14102020 1 year ago


Correct is ACD.
Direct Connect
+ Transfer Kafka content by Producer into Kinesis Data Stream
+ Websocket to connect to web application clients.
upvoted 3 times

" # hedglin 1 year ago


ACD: I don't think the GraphQL API has an @connections command.
upvoted 1 times

" # jackdryan 1 year ago


I'll go with A,C,D
upvoted 3 times

" # CYL 1 year ago


ACD. Direct Connect to ensure reliable network connection between on premise to VPC, transfer Kafka content into Kinese Data Stream and then
use websocket to connect to web application clients.
upvoted 3 times

" # Joe666 1 year, 1 month ago


ACD are correct
upvoted 3 times


Question #621 Topic 1

A fitness tracking company serves users around the world, with its primary markets in North America and Asia. The company needs to design an
infrastructure for its read-heavy user authorization application with the following requirements:
✑ Be resilient to problems with the application in any Region.
✑ Write to a database in a single Region.
✑ Read from multiple Regions.
✑ Support resiliency across application tiers in each Region.
✑ Support the relational database semantics reflected in the application.
Which combination of steps should a solutions architect take? (Choose two.)

A. Use an Amazon Route 53 geoproximity routing policy combined with a multivalue answer routing policy.

B. Deploy web, application, and MySQL database servers to Amazon EC2 instance in each Region. Set up the application so that reads and
writes are local to the Region. Create snapshots of the web, application, and database servers and store the snapshots in an Amazon S3
bucket in both Regions. Set up cross- Region replication for the database layer.

C. Use an Amazon Route 53 geolocation routing policy combined with a failover routing policy.

D. Set up web, application, and Amazon RDS for MySQL instances in each Region. Set up the application so that reads are local and writes are
partitioned based on the user. Set up a Multi-AZ failover for the web, application, and database servers. Set up cross-Region replication for the
database layer.

E. Set up active-active web and application servers in each Region. Deploy an Amazon Aurora global database with clusters in each Region.
Set up the application to use the in-Region Aurora database endpoints. Create snapshots of the web application servers and store them in an
Amazon S3 bucket in both Regions.

Correct Answer: BD

Community vote distribution


CE (67%) AE (33%)

" # bbnbnuyh Highly Voted $ 1 year, 1 month ago


C,E
C because "failover routing" gives resiliency
E because rest of the options dont make sense for read- heavy and write to central requirement
upvoted 34 times

" # keos 1 year, 1 month ago


E "...snapshots of the web application servers..." is for what?
upvoted 2 times

" # Sonujunko 8 months ago


Be robust to application-related issues in any Region.
upvoted 1 times

" # pablobairat 1 year ago


A,E
From https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html#routing-policy-multivalue
"Multivalue answer routing lets you configure Amazon Route 53 to return multiple values, such as IP addresses for your web servers, in
response to DNS queries. You can specify multiple values for almost any record, but multivalue answer routing also lets you check the health
of each resource, so Route 53 returns only values for healthy resources. It's not a substitute for a load balancer, but the ability to return
multiple health-checkable IP addresses is a way to use DNS to improve availability and load balancing."
upvoted 6 times

" # Hasitha99 7 months ago


Selected Anser : C, E.
The question says, most of the revenue comes from North America & Asia. So we can deploy our infrastructure by prioritising that. Then we
can serve all North American Users from the North American region and Asia users from Asia deployments ( geolocation routing).

Why not A? A is a valid answer. But, if we set up geoproximity base routing, it will route traffic based on the closeness of AWS resources
and users. In other terms, we can't give higher priority to our higher revenue regions.
upvoted 1 times
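A small sketch of how an application tier would use option E's Aurora global database: reads go to the Region-local cluster endpoint, writes always go to the primary Region's writer endpoint. The endpoints, credentials, and schema are hypothetical, and pymysql stands in for whatever MySQL driver the app actually uses.

    # Sketch: read locally, write to the single primary Region of an Aurora global
    # database (option E). Endpoints/credentials/schema are hypothetical.
    import os
    import pymysql

    # Writer endpoint of the primary Region's cluster (single write Region).
    WRITER = "auth-db.cluster-abc.us-east-1.rds.amazonaws.com"
    # Reader endpoint of the secondary cluster in the app's own Region.
    LOCAL_READER = os.environ.get(
        "LOCAL_READER", "auth-db.cluster-ro-def.ap-northeast-1.rds.amazonaws.com"
    )

    def connect(host: str) -> pymysql.connections.Connection:
        return pymysql.connect(host=host, user="app", password="***", database="auth")

    def get_user(user_id: int):
        # Read-heavy path: stay in-Region for low latency.
        with connect(LOCAL_READER).cursor(pymysql.cursors.DictCursor) as cur:
            cur.execute("SELECT * FROM users WHERE id = %s", (user_id,))
            return cur.fetchone()

    def update_user_email(user_id: int, email: str) -> None:
        # Writes always go to the primary Region's writer endpoint.
        conn = connect(WRITER)
        with conn.cursor() as cur:
            cur.execute("UPDATE users SET email = %s WHERE id = %s", (email, user_id))
        conn.commit()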

" # tgv Highly Voted $ 1 year ago


AAA EEE


---
The first important thing to note is that users are from all over the world and not only from North America and Asia and that you have to be
resilient to problem with the application in ANY REGION.

What I don't like about Failover is that it works by creating 2 records (primary + secondary)
Since you have to be resilient to problem with the application in ANY Region, how are you configuring the failover policy/ies?
upvoted 6 times
" # dmscountera Most Recent % 2 weeks, 5 days ago
Selected Answer: AE
As per the Q, you need to read/be resilient in ANY region not from just 2.
So multi-value supports up to 8 IPs > failover ~2
8 > 2 =>
AE
upvoted 1 times

" # kadev 2 months, 1 week ago


many people confuse A and C; here is the explanation:
"multivalue answer routing policy may cause the users to be randomly sent to other healthy regions" => not good for performance
and the point of this Q is "resiliency" => if a request fails, it can route to another endpoint => failover
upvoted 2 times

" # Student1950 3 months, 2 weeks ago


I would vote for A and E.
Multi value routing >> checks health of one or more DNS records and send traffic only to healthy record.
Fail-Over: used in active - passive traffic flow
If E is selected, network is active-active, and we need multivalve DNS routing and not failover DNS routing.
upvoted 2 times

" # bobsmith2000 6 months ago


E for sure.
Between A and C.
That a tough one.
We have global users and have to proved a failover.
On one hand with geoproximity policy we can serve the content for global users from only two regions. In case of geolocation we must set up
a default region for users outside Asia and North America, but it's not mentioned in C.
On the other hand, multi-answer is not about failover, because it's random.
So with A we cover global users, but get random distribution b/w two regions.
With C we cover only two region but provide failover.
upvoted 1 times

" # Hasitha99 7 months ago


Selected Answer: CE
The question says, most of the revenue comes from North America & Asia. So we can deploy our infrastructure by prioritising that. Then we can
serve all North American Users from the North American region and Asia users from Asia deployments ( geolocation routing).

Why not A? A is a valid answer. But, if we set up geoproximity base routing, it will route traffic based on the closeness of AWS resources and
users. In other terms, we can't give higher priority to our higher revenue regions.
upvoted 2 times

" # cldy 11 months, 1 week ago


A. Use an Amazon Route 53 geoproximity routing policy combined with a multivalue answer routing policy.
E. Set up active-active web and application servers in each Region. Deploy an Amazon Aurora global database with clusters in each Region. Set
up the application to use the in-Region Aurora database endpoints. Create snapshots of the web application servers and store them in an
Amazon S3 bucket in both Regions.
upvoted 2 times

" # backfringe 11 months, 2 weeks ago


I go for CE
upvoted 3 times

" # CloudMan01 1 year ago


A is correct, as the question says there are users all around the world but the primary markets are in North America and Asia. To have better
resilience use Geoproximity routing policy – Use when you want to route traffic based on the location of your resources and, optionally, shift
traffic from resources in one location to resources in another.
upvoted 1 times

" # johnnsmith 1 year ago


If you choose E, you have to choose A because the application is in active-active mode. If you choose C, it will become active-standby mode.
upvoted 3 times

" # RVivek 10 months, 2 weeks ago


Under normal conditions DNS will resolve to the region the user is closest to (geolocation policy); only when the region fails is the failover policy
applied. So it is active-active.
upvoted 1 times


" # Viper57 1 year ago


Answer E is either incorrect or badly written.

EBS volume snapshots are stored in S3, however you cannot choose what bucket they are stored in nor can they be accessed through the S3
api.
upvoted 2 times

" # andylogan 1 year ago


It's C, E
upvoted 1 times

" # student22 1 year ago


C,E

Why not A? Failover routing is better than multivalue answer for this case, and geolocation can be used here with no issues.

https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html
upvoted 1 times

" # student22 1 year ago


So, we don't need geoproximity.
upvoted 1 times

" # student22 1 year ago


A,E
A vs C - This applications is for 'users around the world'. So, Geoproximity is more suitable. It was for users in the given two regions, I'd have
selected C.
upvoted 1 times

" # student22 1 year ago


Changing to C,E
upvoted 1 times

" # near22 1 year ago


C,D
B,E make no sense, You cannot save any aws snapshot to s3 bucket.
upvoted 1 times

" # DerekKey 1 year ago


You are completely wrong.
In this case: You can back up the data on your Amazon EBS volumes to Amazon S3 by taking point-in-time snapshots.
upvoted 1 times

" # Viper57 1 year ago


The underlying snapshot is stored in S3, however you cannot access the snapshots in any buckets.
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with C,E
upvoted 4 times


Question #622 Topic 1

A company needs to create a centralized logging architecture for all of its AWS accounts. The architecture should provide near-real-time data
analysis for all AWS
CloudTrail logs and VPC Flow Logs across all AWS accounts. The company plans to use Amazon Elasticsearch Service (Amazon ES) to perform
log analysis in the logging account.
Which strategy should a solutions architect use to meet these requirements?

A. Configure CloudTrail and VPC Flow Logs in each AWS account to send data to a centralized Amazon S3 bucket in the logging account.
Create an AWS Lambda function to load data from the S3 bucket to Amazon ES in the logging account.

B. Configure CloudTrail and VPC Flow Logs to send data to a log group in Amazon CloudWatch account. Configure a CloudWatch subscription
filter in each AWS account to send data to Amazon Kinesis Data Firehouse in the logging account. Load data from Kinesis Data Firehouse into
Amazon ES in the logging account.

C. Configure CloudTrail and VPC Flow Logs to send data to a separate Amazon S3 bucket in each AWS account. Create an AWS Lambda
function triggered by S3 events to copy the data to a centralized logging bucket. Create another Lambda function to load data from the S3
bucket to Amazon ES in the logging account.

D. Configure CloudTrail and VPC Flow Logs to send data to a log group in Amazon CloudWatch Logs in each AWS account. Create AWS
Lambda functions in each AWS accounts to subscribe to the log groups and stream the data to an Amazon S3 bucket in the logging account.
Create another Lambda function to load data from the S3 bucket to Amazon ES in the logging account.

Correct Answer: A

Community vote distribution


B (92%) 8%

" # bbnbnuyh Highly Voted $ 1 year, 1 month ago


B. It is well defined here - https://www.cloudjourney.io/articles/publiccloud/central_logging_part_2-su/
upvoted 19 times

" # certainly 1 year ago


https://aws.amazon.com/solutions/implementations/centralized-logging/
upvoted 3 times

" # sayakan 1 year ago


Thanks certainly. This is what I need.
upvoted 1 times

" # Kopa 1 year ago


The B answer is saying: "Configure a CloudWatch subscription filter in each AWS account to send data to Amazon Kinesis Data Firehouse".
In the link it is described as sending data to an Amazon Kinesis Data Stream, then Lambda, and after that to Kinesis Firehose; it looks like the Kinesis
Data Stream is not mentioned in the answer. I'm still for B but it looks suspicious.
upvoted 2 times

" # Viper57 1 year ago


CloudWatch subscription filter support sending to Kinesis data streams and Firehose so B looks correct.
upvoted 1 times
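For illustration, a rough boto3 sketch of the subscription-filter wiring discussed above; the log group names, delivery stream ARN, and IAM role are placeholders, and in a true multi-account setup the destinationArn would point at a CloudWatch Logs destination in the logging account rather than directly at the delivery stream:

import boto3

logs = boto3.client("logs")

# Placeholders: the Firehose delivery stream that loads data into Amazon ES,
# and an IAM role that CloudWatch Logs can assume to write to it.
DELIVERY_STREAM_ARN = "arn:aws:firehose:us-east-1:111111111111:deliverystream/central-logs"
CWL_TO_FIREHOSE_ROLE = "arn:aws:iam::111111111111:role/CWLtoFirehoseRole"

# Forward every CloudTrail and VPC Flow Logs event to the central pipeline.
for log_group in ["/cloudtrail/management-events", "/vpc/flow-logs"]:
    logs.put_subscription_filter(
        logGroupName=log_group,
        filterName="to-central-logging",
        filterPattern="",            # empty pattern matches all events
        destinationArn=DELIVERY_STREAM_ARN,
        roleArn=CWL_TO_FIREHOSE_ROLE,
    )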

" # student22 1 year ago


Thanks. I also read the question as "... send data to a log group in each account". So, my answer is B. If it's really talking about a central
CloudWatch account, the answer is A.

Site admins, verify please?


upvoted 1 times

" # Kelvin1477 Highly Voted $ 1 year, 1 month ago


I think A is still a correct option.
Based on AWS documentation: A trail enables CloudTrail to deliver log files to an Amazon S3 bucket
https://docs.aws.amazon.com/AmazonS3/latest/dev/cloudtrail-logging.html
upvoted 5 times

" # DerekKey 1 year ago


CloudTrail delivers to both S3 and CloudWatch
upvoted 1 times

" # kadev Most Recent % 2 months, 1 week ago


B for sure, CloudWatch log group subscriptions support Kinesis Data Firehose now.
A: you can't reasonably trigger an event on every S3 object update as log files are delivered, it's so expensive; if you run the Lambda on a schedule instead, it's not near-real-time.
upvoted 1 times

" # gnic 2 months, 1 week ago


Selected Answer: B
the keyword is "near real time"
I was for A, but B is better
upvoted 2 times

" # fdoxxx 3 months ago


The answer is A - why not B? The service Amazon Kinesis Data Firehouse does not exist - there is Amazon Kinesis Data Firehose - this typo is
on purpose imho.
upvoted 1 times

" # shucht 3 months, 1 week ago


Selected Answer: A
It cannot be B because firehose cannot output to ElasticSearch
upvoted 1 times

" # gnic 2 months, 1 week ago


it can
upvoted 1 times

" # hilft 3 months, 1 week ago


the keyword here is real time. B
upvoted 1 times

" # Anhdd 5 months ago


Selected Answer: B
Should be B. Due to "near-real-time data analysis" -> Use Kinesis Data Firehouse to send data log to ES is best practice
upvoted 2 times

" # alexph169 5 months, 4 weeks ago


Selected Answer: B
Near real time is the keyword. Can not be Lambda here that is an async call mechanism.
upvoted 2 times

" # jyrajan69 8 months, 3 weeks ago


The requirement says near real time, based on that Kinesis will satisfy this, so the only answer likely is B
upvoted 3 times

" # Jonfernz 9 months, 1 week ago


Selected Answer: B
Firehose for near-real time.
upvoted 2 times

" # cldy 11 months ago


B. Configure CloudTrail and VPC Flow Logs to send data to a log group in Amazon CloudWatch account. Configure a CloudWatch subscription
filter in each AWS account to send data to Amazon Kinesis Data Firehouse in the logging account. Load data from Kinesis Data Firehouse into
Amazon ES in the logging account.
upvoted 1 times

" # bill_smoke 11 months ago


Could someone please confirm whether these question sets are still on the SAA-C02 exam for December? I'm taking my test in a week and want
to make sure this is all legit.
upvoted 2 times

" # AzureDP900 11 months ago


B is right
upvoted 1 times

" # Rho_Ohm 11 months, 1 week ago


>>> Ans: B
upvoted 1 times

" # acloudguru 11 months, 1 week ago


Selected Answer: B
B, near-real-time

upvoted 3 times
" # andylogan 1 year ago
It's B
upvoted 1 times


Question #623 Topic 1

A financial company is using a high-performance compute cluster running on Amazon EC2 instances to perform market simulations. A DNS record
must be created in an Amazon Route 53 private hosted zone when instances start. The DNS record must be removed after instances are
terminated.
Currently the company uses a combination of Amazon CloudWatch Events and AWS Lambda to create the DNS record. The solution worked well in
testing with small clusters, but in production with clusters containing thousands of instances the company sees the following error in the Lambda
logs:
HTTP 400 error (Bad request).
The response header also includes a status code element with a value of `Throttling` and a status message element with a value of `Rate
exceeded`.
Which combination of steps should the Solutions Architect take to resolve these issues? (Choose three.)

A. Configure an Amazon SQS FIFO queue and configure a CloudWatch Events rule to use this queue as a target. Remove the Lambda target
from the CloudWatch Events rule.

B. Configure an Amazon Kinesis data stream and configure a CloudWatch Events rule to use this stream as a target. Remove the Lambda target
from the CloudWatch Events rule.

C. Update the CloudWatch Events rule to trigger on Amazon EC2 "Instance Launch Successful" and "Instance Terminate Successful"
events for the Auto Scaling group used by the cluster.

D. Configure a Lambda function to retrieve messages from an Amazon SQS queue. Modify the Lambda function to retrieve a maximum of 10
messages then batch the messages by Amazon Route 53 API call type and submit. Delete the messages from the SQS queue after successful
API calls.

E. Configure an Amazon SQS standard queue and configure the existing CloudWatch Events rule to use this queue as a target. Remove the
Lambda target from the CloudWatch Events rule.

F. Configure a Lambda function to read data from the Amazon Kinesis data stream and configure the batch window to 5 minutes. Modify the
function to make a single API call to Amazon Route 53 with all records read from the Kinesis data stream.

Correct Answer: BEF

Community vote distribution


CDE (50%) ACD (50%)

" # bbnbnuyh Highly Voted $ 1 year, 1 month ago


C, D, E
You have to introduce SQS: FIFO has limited throughput, so maybe a standard SQS queue with batching can overcome the rate limits
upvoted 27 times

" # pablobairat 1 year ago


ACD
If you use UPSERT to introduce the DNS records: if a record does not exist, it is created; if it exists, the values are updated (in case of duplicates). For
deleting, you use a delete: if the record exists, it is deleted; if it was already deleted (duplicate message in the queue), nothing happens.

The goal here is to support thousands of instances launching and terminating; with an SQS FIFO queue this requirement is not fulfilled. And that
was the original problem with Lambda and the concurrency.
upvoted 5 times

" # Kelvin 1 year ago


ACD is better as you need FIFO mode to ensure processing DNS records exactly once.
upvoted 6 times

" # beso 1 year, 1 month ago


CloudWatch--> SQS--> Lambda (batch) --> R53
upvoted 3 times

" # Ebi Highly Voted $ 1 year, 1 month ago


I will go with ACD
upvoted 12 times

" # LCC92 1 year ago


FIFO SQS is limited to 300 messages/second. CDE is correct.

upvoted 4 times
" # Ebi 1 year, 1 month ago
We need FIFO queue here for exactly-once-processing feature as well as order
upvoted 4 times

" # JohnPi Most Recent % 1 month, 1 week ago


Selected Answer: CDE
CDE is the answer
upvoted 2 times

" # aqiao 2 months ago


Selected Answer: ACD
Even though default FIFO message groups support only 300 requests per second, you can enable the high-throughput option to improve
throughput: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/high-throughput-fifo.html
upvoted 1 times

" # jj22222 7 months, 1 week ago


Selected Answer: ACD
ACD looks right
upvoted 1 times

" # jyrajan69 8 months, 2 weeks ago


If you are choosing E, then you are okay with duplicate DNS records, which means that when you delete the records you have to figure out the
timestamp so that you do not delete the latest entry. Yes you do have limited throughput but thats where D comes in with batching, now you can
get 3000 TPS for FIFO. So my answer will have to be A,C,D
upvoted 2 times

" # tkanmani76 10 months, 2 weeks ago


Answer C, D, E -
If we would have gone with A, C, D - which makes sense from FIFO perspective, the option D does not mention 'SQS FIFO' instead just mentions
SQS. Hence C, D, E
upvoted 1 times

" # AzureDP900 11 months, 1 week ago


CDE is the perfect answer. This question is in the Neal Davis practice test.

The errors in the Lambda logs indicate that throttling is occurring. Throttling is intended to protect your resources and downstream applications.
Though Lambda automatically scales to accommodate incoming traffic, functions can still be throttled for various reasons.

In this case it is most likely that the throttling is not occurring in Lambda itself but in API calls made to Amazon Route 53. In Route 53 you are
limited (by default) to five requests per second per AWS account. If you submit more than five requests per second, Amazon Route 53 returns an
HTTP 400 error (Bad request). The response header also includes a Code element with a value of Throttling and a Message element with a value
of Rate exceeded.

The resolution here is to place the data for the DNS records into an SQS queue where they can buffer. AWS Lambda can then poll the queue and
process the messages, making sure to batch the messages to reduce the likelihood of receiving more errors.
upvoted 5 times
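For illustration, a rough sketch of the Lambda side of that flow; the hosted zone ID is a placeholder and the message body format ({"action", "hostname", "ip"}) is an assumption, not something specified in the question:

import json
import boto3

route53 = boto3.client("route53")
HOSTED_ZONE_ID = "Z0000000000000"   # placeholder private hosted zone

def handler(event, context):
    # Triggered by SQS with a batch size of up to 10 messages.
    changes = []
    for record in event["Records"]:
        body = json.loads(record["body"])          # assumed message format
        changes.append({
            "Action": body["action"],              # "UPSERT" or "DELETE"
            "ResourceRecordSet": {
                "Name": body["hostname"],
                "Type": "A",
                "TTL": 60,
                "ResourceRecords": [{"Value": body["ip"]}],
            },
        })

    if changes:
        # One Route 53 call per invocation keeps the request rate well under
        # the default limit of five requests per second.
        route53.change_resource_record_sets(
            HostedZoneId=HOSTED_ZONE_ID,
            ChangeBatch={"Changes": changes},
        )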

" # Cotter 1 year ago


CDE is better than ADE; FIFO SQS is limited to 300 messages/second as commented below.
upvoted 1 times

" # andylogan 1 year ago


It's C, D, E
upvoted 1 times

" # tgv 1 year ago


CCC DDD EEE
---
upvoted 2 times

" # WhyIronMan 1 year ago


I'll go with C,D,E
upvoted 2 times

" # Chibuzo1 1 year ago


The Correct Answer is C D E.
upvoted 1 times

" # vkbajoria 1 year ago


C, D , E
upvoted 1 times

" # Santoshhhhh 1 year ago


BDE - SQS for decoupling, no FIFO as it has limits... Kinesis is not a decoupling solution
upvoted 1 times

" # Waiweng 1 year ago


it's C,D,E
upvoted 2 times

" # anandbabu 1 year, 1 month ago


BCF is the correct answer
upvoted 2 times


Question #624 Topic 1

A North American company with headquarters on the East Coast is deploying a new web application running on Amazon EC2 in the us-east-1
Region. The application should dynamically scale to meet user demand and maintain resiliency. Additionally, the application must have disaster
recovery capabilities in an active-passive configuration with the us-west-1 Region.
Which steps should a solutions architect take after creating a VPC in the us-east-1 Region?

A. Create a VPC in the us-west-1 Region. Use inter-Region VPC peering to connect both VPCs. Deploy an Application Load Balancer (ALB)
spanning multiple Availability Zones (AZs) to the VPC in the us-east-1 Region. Deploy EC2 instances across multiple AZs in each Region as
part of an Auto Scaling group spanning both VPCs and served by the ALB.

B. Deploy an Application Load Balancer (ALB) spanning multiple Availability Zones (AZs) to the VPC in the us-east-1 Region. Deploy EC2
instances across multiple AZs as part of an Auto Scaling group served by the ALB. Deploy the same solution to the us-west-1 Region. Create
an Amazon Route 53 record set with a failover routing policy and health checks enabled to provide high availability across both Regions.

C. Create a VPC in the us-west-1 Region. Use inter-Region VPC peering to connect both VPCs. Deploy an Application Load Balancer (ALB) that
spans both VPCs. Deploy EC2 instances across multiple Availability Zones as part of an Auto Scaling group in each VPC served by the ALB.
Create an Amazon Route 53 record that points to the ALB.

D. Deploy an Application Load Balancer (ALB) spanning multiple Availability Zones (AZs) to the VPC in the us-east-1 Region. Deploy EC2
instances across multiple AZs as part of an Auto Scaling group served by the ALB. Deploy the same solution to the us-west-1 Region. Create
separate Amazon Route 53 records in each Region that point to the ALB in the Region. Use Route 53 health checks to provide high availability
across both Regions.

Correct Answer: D

Community vote distribution


B (100%)

" # porlarowl Highly Voted $ 1 year, 1 month ago


I support B.
A new web application in a active-passive DR mode.
a Route 53 record set with a failover routing policy.
upvoted 27 times
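For illustration, a rough boto3 sketch of the failover record pair in option B; the hosted zone IDs, ALB DNS names, and health check ID below are placeholders:

import boto3

route53 = boto3.client("route53")

def failover_record(set_id, role, alb_dns, alb_zone_id, health_check_id=None):
    # Alias record toward the regional ALB; the PRIMARY record carries a
    # health check so Route 53 can fail over to us-west-1 when us-east-1 fails.
    rrset = {
        "Name": "app.example.com",
        "Type": "A",
        "SetIdentifier": set_id,
        "Failover": role,                       # "PRIMARY" or "SECONDARY"
        "AliasTarget": {
            "HostedZoneId": alb_zone_id,        # the ALB's canonical zone ID
            "DNSName": alb_dns,
            "EvaluateTargetHealth": True,
        },
    }
    if health_check_id:
        rrset["HealthCheckId"] = health_check_id
    return {"Action": "UPSERT", "ResourceRecordSet": rrset}

route53.change_resource_record_sets(
    HostedZoneId="Z0000000000000",              # placeholder
    ChangeBatch={"Changes": [
        failover_record("use1", "PRIMARY", "alb-use1.elb.amazonaws.com", "ZALBUSE1PLACEHOLDER", "hc-placeholder"),
        failover_record("usw1", "SECONDARY", "alb-usw1.elb.amazonaws.com", "ZALBUSW1PLACEHOLDER"),
    ]},
)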

" # aqiao Most Recent % 2 months ago


Selected Answer: B
An ALB cannot span Regions, so A and C are ruled out. Route 53 is a global service, not a regional one, so D is ruled out.
upvoted 1 times

" # jj22222 7 months, 1 week ago


Selected Answer: B
b looks right
upvoted 1 times

" # pal40sg 8 months, 1 week ago


Selected Answer: B
A new web application in a active-passive DR mode.
a Route 53 record set with a failover routing policy.
upvoted 1 times

" # shotty1 9 months, 2 weeks ago


Selected Answer: B
It is definitely B
upvoted 1 times

" # pititcu667 10 months ago


Selected Answer: B
comes down to the route53 being a global service. i initially voted d.
upvoted 1 times

" # AzureDP900 11 months, 1 week ago


Selected Answer: B
B Correct answer

upvoted 1 times

" # kirrim 1 year ago


There is no reason for the two regions to intercommunicate with each other, so I see no need for the inter-Region peering. That rules out A and C.

Between B vs D... Route53 doesn't have per-region records. It's a global service. So D is wrong. B should work great.
upvoted 2 times

" # andylogan 1 year ago


It's B with failover routing policy
upvoted 2 times

" # tgv 1 year ago


BBB
---
upvoted 1 times

" # blackgamer 1 year ago


Definitely B.
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with B
upvoted 2 times

" # Waiweng 1 year ago


ir's B
upvoted 3 times

" # ExtHo 1 year ago


D did not mention the routing policy to be used on Amazon Route 53. The question requires that the second Region acts as a passive backup,
which means only the main Region receives all the traffic, so you need to specifically use the failover routing policy in Amazon Route 53. So B is
correct as per the requirement
upvoted 2 times

" # alisyech 1 year ago


i choose B
upvoted 2 times

" # Kian1 1 year ago


going with B
upvoted 2 times

" # Ebi 1 year ago


I will go with B
upvoted 4 times


Question #625 Topic 1

A company standardized its method of deploying applications to AWS using AWS CodePipeline and AWS CloudFormation. The applications are in
TypeScript and
Python. The company has recently acquired another business that deploys applications to AWS using Python scripts.
Developers from the newly acquired company are hesitant to move their applications under CloudFormation because it would require that they
learn a new domain-specific language and eliminate their access to language features, such as looping.
How can the acquired applications quickly be brought up to deployment standards while addressing the developers' concerns?

A. Create Cloud Formation templates and re-use parts of the Python scripts as Instance user data. Use the AWS Cloud Development Kit (AWS
CDK) to deploy the application using these templates. Incorporate the AWS CDK into CodePipeline and deploy the application to AWS using
these templates.

B. Use a third-party resource provisioning engine inside AWS CodeBuild to standardize the deployment processes of the existing and acquired
company. Orchestrate the CodeBuild job using CodePipeline.

C. Standardize on AWS OpsWorks. Integrate OpsWorks with CodePipeline. Have the developers create Chef recipes to deploy their
applications on AWS.

D. Define the AWS resources using TypeScript or Python. Use the AWS Cloud Development Kit (AWS CDK) to create CloudFormation templates
from the developers' code, and use the AWS CDK to create CloudFormation stacks. Incorporate the AWS CDK as a CodeBuild job in
CodePipeline.

Correct Answer: B

Community vote distribution


D (89%) 11%

" # AMKazi Highly Voted $ 9 months ago


D as it lets developers use their skills
upvoted 5 times

" # pititcu667 Highly Voted $ 8 months, 2 weeks ago


Selected Answer: D
answer should be d
upvoted 5 times

" # Rocky2222 Most Recent % 2 months, 4 weeks ago


Selected Answer: D
With this solution, the developers no longer need to learn the AWS CloudFormation specific language as they can continue writing TypeScript or
Python scripts. The AWS CDK stacks can be converted to AWS CloudFormation templates which can be integrated into the company
deployment process.
upvoted 3 times
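For illustration, a minimal AWS CDK v2 sketch in Python showing the idea behind D: the acquired team keeps writing Python (loops included) and the CDK synthesizes CloudFormation from it. All construct names and the asset path are hypothetical:

# app.py - "cdk synth" turns this into a CloudFormation template and
# "cdk deploy" creates the stack, so the step fits naturally into CodePipeline.
from aws_cdk import App, Stack, aws_s3 as s3, aws_lambda as _lambda
from constructs import Construct

class AcquiredAppStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Plain Python looping - the feature the developers did not want to lose.
        for env_name in ["dev", "test", "prod"]:
            s3.Bucket(self, f"DataBucket-{env_name}", versioned=True)

        _lambda.Function(
            self, "Worker",
            runtime=_lambda.Runtime.PYTHON_3_9,
            handler="index.handler",
            code=_lambda.Code.from_asset("src"),   # hypothetical source folder
        )

app = App()
AcquiredAppStack(app, "AcquiredAppStack")
app.synth()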

" # adsdadasdad 3 months, 3 weeks ago


Selected Answer: B
PEOPLE TERRAFORM. Its B
upvoted 1 times

" # fdoxxx 3 months, 1 week ago


when you have a hammer everything looks like a nail ;-) It's D
upvoted 1 times

" # hilft 5 months ago


It is D.
upvoted 2 times

" # Ni_yot 9 months, 4 weeks ago


Agree its D based on this link
upvoted 3 times

" # krisvija12 10 months, 1 week ago


Answer should be : D
Ref : https://docs.aws.amazon.com/cdk/v2/guide/home.html

upvoted 3 times


Question #626 Topic 1

A company has a single AWS master billing account, which is the root of the AWS Organizations hierarchy.
The company has multiple AWS accounts within this hierarchy, all organized into organization units (OUs). More OUs and AWS accounts will
continue to be created as other parts of the business migrate applications to AWS. These business units may need to use different AWS services.
The Security team is implementing the following requirements for all current and future AWS accounts:
✑ Control policies must be applied across all accounts to prohibit AWS servers.
✑ Exceptions to the control policies are allowed based on valid use cases.
Which solution will meet these requirements with minimal operational overhead?

A. Use an SCP in Organizations to implement a deny list of AWS servers. Apply this SCP at the level. For any specific exceptions for an OU,
create a new SCP for that OU and add the required AWS services to the allow list.

B. Use an SCP in Organizations to implement a deny list of AWS service. Apply this SCP at the root level and each OU. Remove the default
AWS managed SCP from the root level and all OU levels. For any specific exceptions, modify the SCP attached to that OU, and add the required
AWS services to the allow list.

C. Use an SCP in Organizations to implement a deny list of AWS service. Apply this SCP at each OU level. Leave the default AWS managed SCP
at the root level. For any specific exceptions for an OU, create a new SCP for that OU.

D. Use an SCP in Organizations to implement an allow list of AWS services. Apply this SCP at the root level. Remove the default AWS managed
SCP from the root level and all OU levels. For any specific exceptions for an OU, modify the SCP attached to that OU, and add the required AWS
services to the allow list.

Correct Answer: B

Community vote distribution


D (63%) C (31%) 6%

" # Bulti Highly Voted $ 1 year, 1 month ago


Correct answer is C. When you use a Deny list, you cannot explicitly allow access to services at OU or account levels. You need to explicitly deny
access to services and that's why the term deny list. By default, all services are explicitly allowed starting at the root level. So you need to
explicitly create an SCP at each OU level where you need to implement the control policy of denying access to services. In exceptional
circumstances on a use case basis, you need to allow access to the services that already have an allow access from root to this OU level where
you are creating an exception. Only C satisfies this criteria. D is not correct because it doesn't create an SCP that allow access at all level from
the OU in question upto the root level. So even if you create an SCP that allows access to a service, access won't be granted as it's not been
explicitly allowed at all level above this OU.
upvoted 22 times

" # tekkart 1 year ago


Here the correct answer must be D.

1 - The allowed rights work with as the intersection of the rights given by SCP at root, OU and IAM Policies. Therefore if you implement on a
SCP at OU level a Deny of an AWS Server you then wish to grant, the only option is to Modify your SCP, which rules out answers A and C
which recommend you to Create a new SCP

2 - In answers A, B and C it is suggested to Implement an Explicit Deny, and for options B and C, this Deny is at Root Level. It is not possible
with this strategy to allow exceptions with this configurations because Explicit Deny takes precedence over Explicit Allow, then Implicit Deny,
then Implicit Allow. The only way to address this problem is to set Implicit Deny at the Root Level, so then with our Explicit Allow on SCP at
OU Level, it overrides the Implicit Deny, which is what is proposed in Answer D : it is an Allow list of AWS Services not including the restricted
AWS Servers which are Implicitly Denied.
upvoted 6 times

" # tomosabc1 1 month ago


Your explanation is not correct. D is wrong.

Using Allow List Strategy, to allow a permission, SCPs with allow statement must be added to the account and every OU above it including
root. Every SCP in the hierarchy must explicitly allow the APIs you want to use.
Explicit allow at a lower level of organization hierarchy cannot overwrite the implicit deny at a higher level.
upvoted 1 times
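For illustration of the deny-list mechanics being debated here, a rough boto3 sketch that creates a deny-list SCP and attaches it to one OU while the default FullAWSAccess policy stays in place; the service list and OU ID are placeholders, and this is not meant to settle the C-versus-D argument:

import json
import boto3

org = boto3.client("organizations")

# A deny-list SCP: everything stays allowed by the default FullAWSAccess
# policy, and only the listed services are explicitly blocked.
deny_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyRestrictedServices",
        "Effect": "Deny",
        "Action": ["redshift:*", "sagemaker:*"],   # placeholder service list
        "Resource": "*",
    }],
}

response = org.create_policy(
    Name="baseline-deny-list",
    Description="Blocks services not approved for general use",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(deny_policy),
)

# Attach to an OU; an exception for one OU means attaching a variant of this
# policy (with the approved service removed from the deny list) instead.
org.attach_policy(
    PolicyId=response["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-xxxx-xxxxxxxx",                   # placeholder OU ID
)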

" # dutchy1988 Highly Voted $ 1 year, 1 month ago


Prohibit all AWS servers (should be services i guess) can only be achieved by whitelisting method. This means that you will have to remove the
AWS managed SCP from the root.
Whitelist SCP on the root of your organisation makes sure that any new account will apply these settings. SCP never grants access but can allow
you to make use of AWS services.
With that baseline set, granting a new set of AWS services in a separate SCP attaching it to the new account in your organisation complies here

for the minimal operational overhead.

only D will statisfy.

One more negative for C: once you implement a deny at a top level, it will override any allow in a child OU. Not that it is stated within this
question, but with that in mind, whitelisting makes more sense for me.
upvoted 21 times

" # cloudgc 1 year, 1 month ago


D would have been the answer if 'These business units may need to use different AWS services' was not required.

With D we are giving the same AWS Services to all the units.
upvoted 2 times

" # aws_arn_name 1 year ago


No, D state that "modify the SCP attached to that OU" not the root SCP
upvoted 3 times
" # tomosabc1 Most Recent % 1 month ago
Selected Answer: C
C is correct. For explanation, please refer to Bulti's answer.
upvoted 1 times

" # dcdcdc3 1 month, 2 weeks ago


Selected Answer: A
D Cannot work if SCP is not attached to Every Level of OU including root. C can work but is too much overhead;
A may have incomplete wording but as is, it is working solution, as the SCP is attached "at the Level". In A, it nowhere says to "attach deny to
root level".
Here is the whole text for A:
"A. Use an SCP in Organizations to implement a deny list of AWS servers. Apply this SCP at the level. For any specific exceptions for an OU,
create a new SCP for that OU and add the required AWS services to the allow list."
The New SCP will not have a Deny for specific service and will have an Allow statement..
upvoted 1 times

" # aqiao 2 months ago


Selected Answer: D
Three key points in SCP:
1 Explicit deny actions has the highest priority;
2 Accounts under sub OU inherit the parent OU permissions;
3 Explicit allow actions overrides default FullAWSAccess on root organizations;
4 Once a deny actions applied on a some OU, even an explicit allow action added on sub OU, all the accounts directly under this OU and its sub
OU have no permission to perform the action.
upvoted 3 times

" # aqiao 2 months ago


Here is the official statement:
If an action is blocked by a Deny statement, then all OUs and accounts affected by that SCP are denied access to that action. An SCP at a
lower level can't add a permission after it is blocked by an SCP at a higher level. SCPs can only filter; they never add permissions.
You can get the details here :https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_inheritance_auth.html
So only D satisfied. Actually there is no need to remove default permission on root OU, like key point 3 said, it will be overrode by explicit
allow lists
upvoted 1 times

" # gnic 2 months, 1 week ago


Selected Answer: D
it's D
upvoted 3 times

" # Harithareddynn 2 months, 3 weeks ago


Selected Answer: D
Minimal operational overhead compared to C
upvoted 2 times

" # CloudHandsOn 3 months, 2 weeks ago


C.
i chose C as the first answer
upvoted 1 times

" # Enigmaaaaaa 4 months ago


Selected Answer: C
A - will not work as a deny is on the root level so no specification - there is no way to add the permission back lower in the hierarchy
B - Same for A
C - Can work - deny on OU level and leave AWS Full access to all accounts at root (I can only assume we also leave for all OU levels) - for
exceptions create a new deny SCP and replace it - however it has an operational overhead as it requires attaching it to every OU and every new

OU
D- cannot work if FullAccess is replaced with specific access SCP it should be applied to all level including OU and account levels (intersection).

Overall all answers are not fully complete but I have to go with C
upvoted 2 times

" # bobsmith2000 5 months, 3 weeks ago


Selected Answer: D
Both C and D will work.
But we need a solution with the LAST operational overhead.

C) There's no shared policy. So every time we must edit the OU SCP


D) We must specify common resources on the root level with an allow list (allow explicitly with implicit deny for the rest), and then we are able to
both deny or allow any specific services: additional allow will be merged with the root SCP, and explicit deny will override the root's allow. So that
we make changes only if needed, not every time we create an account
upvoted 2 times

" # Ishu_awsguy 9 months, 1 week ago


Correct answer is C.
We need allow at the root as per the question says " a variety of services needs to be used by business units"
Deny SCP is being applied on all OU's in option C.
For any specific service allow for any OU, we can replace or edit the OU SCP.
It has the least overhead.
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps_strategies.html#orgs_policies_denylist
upvoted 1 times

" # vbal 10 months, 2 weeks ago


C says create new SCP for Exception Allow after it have been explicitly Denied...Doesn't make sense if you already have an Deny it is gonna take
precedence over explicit Allow in newly created SCP; Answer is D.
upvoted 1 times

" # Ishu_awsguy 9 months, 1 week ago


the SCP is being applied on OU level.
No precedence.
precedence is for default allow policy.
and new policy is to be created for any change.
upvoted 1 times

" # Suresh108 10 months, 2 weeks ago


Inclined towards CCCCC.

Question is asking about minimal operational overhead.


https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps_strategies.html#orgs_policies_denylist

The default configuration of AWS Organizations supports using SCPs as deny lists. Using a deny list strategy, account administrators can
delegate all services and actions until you create and attach an SCP that denies a specific service or set of actions. Deny statements require less
maintenance, because you don't need to update them when AWS adds new services. Deny statements usually use less space, thus making it
easier to stay within the maximum size for SCPs. In a statement where the Effect element has a value of Deny, you can also restrict access to
specific resources, or define conditions for when SCPs are in effect.

https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_inheritance_auth.html
upvoted 3 times

" # cldy 11 months ago


C. Use an SCP in Organizations to implement a deny list of AWS service. Apply this SCP at each OU level. Leave the default AWS managed SCP
at the root level. For any specific exceptions for an OU, create a new SCP for that OU.
upvoted 2 times

" # AzureDP900 11 months, 1 week ago


Selected Answer: C
C correct
upvoted 2 times

" # Bigbearcn 1 year ago


C is correct.
B is wrong. If you use a deny list and remove the default SCP( which is full access for all services), you cannot do anything.
D is wrong. If you apply a SCP at root level, it define the max permission and will overwrite the SCP at OU level.
upvoted 4 times

" # student22 1 year ago


C is the answer.
upvoted 1 times

" # student22 1 year ago


D lacks SCPs at each level for allow list.

upvoted 1 times

" # tiana528 11 months ago


Not needed. Each level already inherited from the root level, which allows a list of services.
upvoted 2 times


Question #627 Topic 1

A healthcare company runs a production workload on AWS that stores highly sensitive personal information. The security team mandates that, for
auditing purposes, any AWS API action using AWS account root user credentials must automatically create a high-priority ticket in the company's
ticketing system. The ticketing system has a monthly 3-hour maintenance window when no tickets can be created.
To meet security requirements, the company enabled AWS CloudTrail logs and wrote a scheduled AWS Lambda function that uses Amazon Athena
to query API actions performed by the root user. The Lambda function submits any actions found to the ticketing system API. During a recent
security audit, the security team discovered that several tickets were not created because the ticketing system was unavailable due to planned
maintenance.
Which combination of steps should a solutions architect take to ensure that the incidents are reported to the ticketing system even during planned
maintenance?
(Choose two.)

A. Create an Amazon SNS topic to which Amazon CloudWatch alarms will be published. Configure a CloudWatch alarm to invoke the Lambda
function.

B. Create an Amazon SQS queue to which Amazon CloudWatch alarms will be published. Configure a CloudWatch alarm to publish to the SQS
queue.

C. Modify the Lambda function to be triggered by messages published to an Amazon SNS topic. Update the existing application code to retry
every 5 minutes if the ticketing system's API endpoint is unavailable.

D. Modify the Lambda function to be triggered when there are messages in the Amazon SQS queue and to return successfully when the
ticketing system API has processed the request.

E. Create an Amazon EventBridge rule that triggers on all API events where the invoking user identity is root. Configure the EventBridge rule to
write the event to an Amazon SQS queue.

Correct Answer: BD

Community vote distribution


DE (100%)

" # Rajarshi Highly Voted $ 1 year, 1 month ago


D and E
upvoted 23 times

" # DashL 1 year ago


A - Uses SNS topics. Will not work.
B - Uses CloudWatch Alarms. It is required to use a CloudWatch Event/EventBridge rule
C - Correct.
D - doesn't have retries to address the situation when the ticketing system is down.
E - Correct
upvoted 4 times

" # tekkart 1 year ago


You are right, but following your logic, it should be A & C because C requires SNS
C offers a solution for when the ticketing system is unavailable. None of the solutions based on SQS triggers a solution based on its availability
upvoted 1 times

" # tekkart 1 year ago


But the phrasing of answer A is not OK: "CW Alarm to invoke the Lambda function"

Why need an Event and a queue, the Lambda is already scheduled... unless when the event is "the ticketing system is available" not
"the invoking user identity is root" in question E... E does not address the main concern which is the unavailability of ticketing system
upvoted 1 times

" # tekkart 1 year ago


Considering D&E as answers.
1 - SQS, as Event source mapping for Lambda, where errors such as unavailable ticketing system block processing until errors are
solved or items expire.
https://docs.aws.amazon.com/lambda/latest/dg/invocation-retries.html
https://docs.aws.amazon.com/lambda/latest/dg/invocation-eventsourcemapping.html
https://docs.aws.amazon.com/lambda/latest/dg/invocation-async.html#invocation-async-destinations

With Dead Letter Queuing option as an alternative solution for on-failure destination :

https://docs.aws.amazon.com/lambda/latest/dg/with-sqs.html

2 - SNS is possible as a destination from Event Source Mapping, having SQS->SNS->Lambda, plus for multiple destination
notifications such as email sending would be useful, hence C&E could be feasible assuming this link between SQS and SNS.
upvoted 3 times
" # beso Highly Voted $ 1 year, 1 month ago
B and D, CloudWatch--> SQS--> Lambda-->Ticketing system
upvoted 13 times

" # shammous 1 year ago


You need EventBridge to trigger root API calls only and then take action. Option B is too broad and doesn't satisfy the requirement of
detecting "API actions performed by the root user".
upvoted 1 times

" # Kelvin 1 year ago


You need CloudWatch Events (aka EventBridge) but not CloudWatch Alarm in this case. So D and E.
upvoted 4 times

" # hilft Most Recent % 3 months, 1 week ago


B and D, CloudWatch--> SQS--> Lambda
upvoted 1 times

" # bobsmith2000 5 months, 2 weeks ago


Selected Answer: DE
Right by the book!
upvoted 1 times

" # jj22222 9 months ago


D and E
upvoted 1 times

" # CloudChef 10 months ago


Selected Answer: DE
D and E
upvoted 2 times

" # Ni_yot 10 months, 1 week ago


D and E is good choice.
upvoted 2 times

" # cldy 10 months, 1 week ago


D and E.
upvoted 1 times

" # AzureDP900 11 months, 1 week ago


Selected Answer: DE
D,E

The existing system can be modified to use Amazon EventBridge instead of using AWS CloudTrail with Amazon Athena. Eventbridge can be
configured with a rule that checks all AWS API calls via CloudTrail. The rule can be configured to look for the usage or the root user account.
Eventbridge can then be configured with an Amazon SQS queue as a target that puts a message in the queue waiting to be processed.
The Lambda function can then be configured to poll the queue for messages (event-source mapping), process the event synchronously and only
return a successful result when the ticketing system has processed the request. The message will be deleted only if the result is successful,
allowing for retries.
This system will ensure that the important events are not missed when the ticketing system is unavailable.
upvoted 3 times
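For illustration, a rough boto3 sketch of the EventBridge-to-SQS piece (E); the queue ARN and rule name are placeholders, and the queue would also need a resource policy allowing events.amazonaws.com to send messages:

import json
import boto3

events = boto3.client("events")
QUEUE_ARN = "arn:aws:sqs:us-east-1:111111111111:root-api-activity"  # placeholder

# Match every CloudTrail-recorded API call made with root credentials.
root_activity_pattern = {
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {"userIdentity": {"type": ["Root"]}},
}

events.put_rule(
    Name="root-api-activity",
    EventPattern=json.dumps(root_activity_pattern),
    State="ENABLED",
)

events.put_targets(
    Rule="root-api-activity",
    Targets=[{"Id": "ticket-queue", "Arn": QUEUE_ARN}],
)
# The Lambda function then polls the queue (event source mapping) and only
# lets a message be deleted once the ticketing API accepts the request, so
# events raised during the maintenance window are retried afterwards.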

" # niruk 1 year ago


D&E
Eventbridge => https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-what-is.html
SQS permissions => https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-use-resource-based.html#eb-sqs-permissions
Search for root => https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-logging-monitoring.html also look at the policy.
upvoted 3 times

" # kirrim 1 year ago


+1 for citing documentation on how to make this work

A & B are wrong because CloudWatch Alarms is based on metrics, not an event/action (that's CloudWatch Events)
C is eliminated because it could have only worked in combo with A, and A is wrong
D is valid per your links
E is valid per your links

(Note that you'd probably have to be careful with D that you don't have a Lambda function running for a LONG time trying to reach the API!

Might require some extra work here to avoid that)


upvoted 1 times
" # andylogan 1 year ago
It's D E
Since we need CloudWatch Events (aka EventBridge) but not CloudWatch Alarm in this case.
upvoted 2 times

" # tgv 1 year ago


DDD EEE
---
upvoted 1 times

" # denccc 1 year ago


it's D and E
upvoted 1 times

" # DerekKey 1 year ago


A & B - wrong -> CloudWatch alarms are based on metrics
C - wrong - no services in correct answers that write to SNS
D - correct - Lambda -> SQS
E - correct - EventBridge = CW Events -> SQS
upvoted 2 times

" # blackgamer 1 year ago


I believe D and E.
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with D,E

SNS does not serve for this purpose


upvoted 2 times

" # Kopa 1 year ago


I'm for D & E; E is faster than B.
upvoted 1 times


Question #628 Topic 1

A solutions architect is migrating an existing workload to AWS Fargate. The task can only run in a private subnet within the VPC where there is no
direct connectivity from outside the system to the application. When the Fargate task is launched, the task fails with the following error:
CannotPullContainerError: API error (500): Get https://111122223333.dkr.ecr.us-east-1.amazonaws.com/v2/: net/http: request canceled while
waiting for connection
How should the solutions architect correct this error?

A. Ensure the task is set to ENABLED for the auto-assign public IP setting when launching the task.

B. Ensure the task is set to DISABLED for the auto-assign public IP setting when launching the task. Configure a NAT gateway in the public
subnet in the VPC to route requests to the internet.

C. Ensure the task is set to DISABLED for the auto-assign public IP setting when launching the task. Configure a NAT gateway in the private
subnet in the VPC to route requests to the internet.

D. Ensure the network mode is set to bridge in the Fargate task definition.

Correct Answer: C

Community vote distribution


B (100%)

" # asldavid Highly Voted $ 1 year, 1 month ago


B. https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_cannot_pull_image.html
upvoted 22 times

" # liono Highly Voted $ 1 year, 1 month ago


B is the correct answer. The NAT needs to be in public subnet.
https://stackoverflow.com/questions/48368499/stopped-cannotpullcontainererror-api-error-500
upvoted 12 times

" # porlarowl 1 year, 1 month ago


I understand that "Configure a NAT GW in the private subnet" dose not mean Creating a NAT GW. If it means creating a NAT GW, the answer
should be B. On the other hand, it means attaching a NAT GW to subnet, the answer should be C. I am not sure, cause I am not a English
native speaker.
upvoted 2 times

" # petebear55 1 year, 1 month ago


I THINK YOU'RE RIGHT .. however because they have asked you to choose between public and private in the answers .. knowing AWS this
leads to one of these being the answer .. so in the exam I would put B .. for public
upvoted 2 times

" # Kelvin1477 1 year, 1 month ago


Yes, agree; the NAT GW is always public facing because it needs a public IP to communicate with ECR
upvoted 1 times

" # kangtamo Most Recent % 4 months, 2 weeks ago


Selected Answer: B
Agree with B.
upvoted 1 times

" # AzureDP900 11 months, 1 week ago


B is right

When a Fargate task is launched, its elastic network interface requires a route to the internet to pull container
images. If you receive an error similar to the following when launching a task, it is because a route to the internet
does not exist:
CannotPullContainerError: API error (500): Get https://111122223333.dkr.ecr.us-east-1.amazonaws.com/v2/:
net/http: request canceled while waiting for connection”
To resolve this issue, you can:
o For tasks in public subnets, specify ENABLED for Auto-assign public IP when launching the task.
o For tasks in private subnets, specify DISABLED for Auto-assign public IP when launching the task, and
configure a NAT gateway in your VPC to route requests to the internet.
upvoted 5 times
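For illustration, a rough boto3 sketch of that fix; all subnet, route table, allocation, cluster, and task names below are placeholders, and the NAT gateway must be available before the route becomes usable:

import boto3

ec2 = boto3.client("ec2")
ecs = boto3.client("ecs")

# The NAT gateway lives in the PUBLIC subnet, and the private subnet's route
# table sends internet-bound traffic (including ECR image pulls) through it.
nat = ec2.create_nat_gateway(
    SubnetId="subnet-public-placeholder",
    AllocationId="eipalloc-placeholder",
)["NatGateway"]

ec2.create_route(
    RouteTableId="rtb-private-placeholder",
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat["NatGatewayId"],
)

# The Fargate task stays in the private subnet with no public IP.
ecs.run_task(
    cluster="app-cluster",
    taskDefinition="app-task",
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-private-placeholder"],
            "securityGroups": ["sg-placeholder"],
            "assignPublicIp": "DISABLED",
        }
    },
)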

" # andylogan 1 year ago


It's B


upvoted 1 times

" # nsei 1 year ago


Answer is B. NAT gateway should be in the public subnet.
upvoted 1 times

" # tgv 1 year ago


BBB
---
upvoted 1 times

" # blackgamer 1 year ago


B. NAT gateway needs to be in public subnet.
upvoted 2 times

" # AndyTokyo608 1 year ago


I go with C For tasks in public subnets, specify ENABLED for Auto-assign public IP when launching the task
upvoted 1 times

" # DerekKey 1 year ago


Strange. Read the question again.
"The task can only run in a private subnet within the VPC where there is no direct connectivity from outside the system to the application"
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with B
upvoted 2 times

" # Waiweng 1 year ago


it's B
upvoted 3 times

" # Waiweng 1 year ago


it's B
upvoted 2 times

" # Kian1 1 year ago


going with B
upvoted 2 times

" # Ebi 1 year ago


B is my choice
upvoted 4 times

" # kopper2019 1 year, 1 month ago


B, NAT GW must in a public subnet in order to work
upvoted 1 times

" # kopper2019 1 year, 1 month ago


https://aws.amazon.com/blogs/compute/task-networking-in-aws-fargate/
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-configure-network.html
upvoted 1 times

" # Bulti 1 year, 1 month ago


Answer is B. There is no difference between configuring and creating as far as this question is concerned. And we all know that NAT Gateway
needs to be created in a Public Subnet. It needs to be accessed from the private subnet via a route table attached to it that routes outbound
traffic to the NAT Gateway which is in the public subnet and from there to the internet via the Internet Gateway attached to the VPC.
upvoted 1 times

" # petebear55 1 year, 1 month ago


B . aws are sh**s ... sent to persecute us poor students of aws !! . they throw red herring questions in like this .. knowing most would go for C ..
but this is not the case .. it needs to be PUBLIC so select B ... be aware of this in the exam ... your thinking should go opposite to what your
instinct is saying !! .. it is the same in my previous answers when they mention uploading files and mention small or large ... go for large even
though instinct says small ... Wizzlabs have a good couple of questions similar to this one and very good explanations from people whom took
exam etc ... But for now lets choose B
upvoted 2 times


Question #629 Topic 1

A company is running a two-tier web-based application in an on-premises data center. The application tier consists of a single server running a
stateful application. The application connects to a PostgreSQL database running on a separate server. The application's user base is expected to
grow signi+cantly, so the company is migrating the application and database to AWS. The solution will use Amazon Aurora PostgreSQL, Amazon
EC2 Auto Scaling, and Elastic Load
Balancing.
Which solution will provide a consistent user experience that will allow the application and database tiers to scale?

A. Enable Aurora Auto Scaling for Aurora Replicas. Use a Network Load Balancer with the least outstanding requests routing algorithm and
sticky sessions enabled.

B. Enable Aurora Auto Scaling for Aurora writes. Use an Application Load Balancer with the round robin routing algorithm and sticky sessions
enabled.

C. Enable Aurora Auto Scaling for Aurora Replicas. Use an Application Load Balancer with round robin routing and sticky sessions enabled.

D. Enable Aurora Scaling for Aurora writers. Use a Network Load Balancer with the least outstanding requests routing algorithm and sticky
sessions enabled.

Correct Answer: B

Community vote distribution


C (100%)

" # beso Highly Voted $ 1 year, 1 month ago


C,
Aurora Auto Scaling enables your Aurora DB cluster to handle sudden increases in connectivity or workload. When the connectivity or workload
decreases, Aurora Auto Scaling removes unnecessary Aurora Replicas so that you don't pay for unused provisioned DB instances
upvoted 22 times
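For illustration, a rough boto3 sketch of the two pieces in option C, Aurora replica auto scaling plus ALB sticky sessions; the cluster name, target group ARN, and scaling limits are placeholders:

import boto3

autoscaling = boto3.client("application-autoscaling")
elbv2 = boto3.client("elbv2")

# Scale the Aurora reader fleet between 1 and 15 replicas on reader CPU.
autoscaling.register_scalable_target(
    ServiceNamespace="rds",
    ResourceId="cluster:app-aurora-cluster",            # placeholder cluster
    ScalableDimension="rds:cluster:ReadReplicaCount",
    MinCapacity=1,
    MaxCapacity=15,
)

autoscaling.put_scaling_policy(
    PolicyName="reader-cpu-target",
    ServiceNamespace="rds",
    ResourceId="cluster:app-aurora-cluster",
    ScalableDimension="rds:cluster:ReadReplicaCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
        },
    },
)

# Sticky sessions on the ALB target group keep the stateful app consistent.
elbv2.modify_target_group_attributes(
    TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:111111111111:targetgroup/app/placeholder",
    Attributes=[
        {"Key": "stickiness.enabled", "Value": "true"},
        {"Key": "stickiness.type", "Value": "lb_cookie"},
    ],
)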

" # MarkChoi Most Recent % 2 months, 4 weeks ago


Selected Answer: C
It's C
upvoted 1 times

" # cldy 11 months ago


C. Enable Aurora Auto Scaling for Aurora Replicas. Use an Application Load Balancer with round robin routing and sticky sessions enabled.
upvoted 2 times

" # AzureDP900 11 months, 1 week ago


C is right !
upvoted 1 times

" # andylogan 1 year ago


It's C with Aurora Replicas
upvoted 2 times

" # nsei 1 year ago


C is correct
upvoted 1 times

" # student22 1 year ago


C
Auto Scaling for Aurora Replicas + ALB with sticky sessions
upvoted 3 times

" # tgv 1 year ago


CCC
---
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with C
upvoted 1 times


" # Kopa 1 year ago


C also for me
upvoted 1 times

" # Waiweng 1 year ago


it's C
upvoted 4 times

" # alisyech 1 year ago


i choose C
upvoted 2 times

" # rcher 1 year ago


ALB because it's a web application (although I can argue that NLB can scale better, just that you need to do SSL termination at the web app)

Aurora scales read replicas; I haven't heard of scaling writes (correct me if I am wrong)

C then
upvoted 4 times

" # kirrim 1 year ago


The Least Outstanding Requests algo is only supported on the ALB, not the NLB that I could find. So it's definitely C in my mind
upvoted 2 times

" # Ebi 1 year ago


C is the correct answer.
upvoted 4 times

" # Bulti 1 year ago


Answer is C
upvoted 1 times

" # T14102020 1 year, 1 month ago


Correct is C.
upvoted 1 times

" # jackdryan 1 year, 1 month ago


I'll go with C
upvoted 2 times


Question #630 Topic 1

A solutions architect is designing a network for a new cloud deployment. Each account will need autonomy to modify route tables and make
changes. Centralized and controlled egress internet connectivity is also needed. The cloud footprint is expected to grow to thousands of AWS
accounts.
Which architecture will meet these requirements?

A. A centralized transit VPC with a VPN connection to a standalone VPC in each account. Outbound internet traffic will be controlled by firewall
appliances.

B. A centralized shared VPC with a subnet for each account. Outbound internet traffic will be controlled through a fleet of proxy servers.

C. A shared services VPC to host central assets to include a fleet of firewalls with a route to the internet. Each spoke VPC will peer to the
central VPC.

D. A shared transit gateway to which each VPC will be attached. Outbound internet access will route through a fleet of VPN-attached firewalls.

Correct Answer: A

Community vote distribution


D (100%)

" # MarkDillon1075 Highly Voted $ 1 year, 1 month ago


D - https://docs.aws.amazon.com/whitepapers/latest/building-scalable-secure-multi-vpc-network-infrastructure/centralized-egress-to-
internet.html
upvoted 24 times

" # Chris_1990 Highly Voted $ 1 year, 1 month ago


Looks like D

Answer C is wrong, because there is a default limit of 50 VPC peerings per VPC, which can be increased to a maximum of 125
(https://docs.aws.amazon.com/vpc/latest/userguide/amazon-vpc-limits.html). Since the cloud footprint is expected to grow to thousands of AWS
accounts, VPC peering with one central VPC would not work. Transit Gateway can handle up to 5000 attachments and therefore is the better
choice here.
upvoted 10 times

" # kangtamo Most Recent % 4 months, 1 week ago


Selected Answer: D
Agree with D.
upvoted 1 times

" # AzureDP900 11 months ago


I will go with D
upvoted 1 times

" # kirrim 1 year ago


A would not scale beyond 100 VPN connections to a VPC
B would not scale beyond 200 subnets in a single VPC (you can increase the subnet quota beyond 200, but ultimately this doesn't scale because
your CIDR and minimum subnet size would limit you at some point)
C would not scale beyond the VPC peering limit of 50 (you can increase this to 125, but not beyond that)
D would scale the most, but even that is not infinite, you'd have a limit of 5,000 TGW attachments (can be increased), or 10k static routes per
TGW (one for each VPC CIDR), or 50Gbps throughput, or the VPN throughput of your firewalls.
upvoted 8 times
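For illustration, a rough boto3 sketch of the hub-and-spoke wiring in D; the VPC, subnet, and route table IDs are placeholders, and in practice the attachment must reach the available state before routes through it work:

import boto3

ec2 = boto3.client("ec2")

# One shared transit gateway acts as the hub.
tgw = ec2.create_transit_gateway(
    Description="shared egress hub",
    Options={"DefaultRouteTableAssociation": "enable"},
)["TransitGateway"]

# Each spoke VPC (potentially thousands of them) attaches to the hub.
ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw["TransitGatewayId"],
    VpcId="vpc-spoke-placeholder",
    SubnetIds=["subnet-spoke-placeholder"],
)

# In each spoke VPC, the default route points at the transit gateway, so all
# egress traffic hairpins through the central inspection/egress VPC.
ec2.create_route(
    RouteTableId="rtb-spoke-placeholder",
    DestinationCidrBlock="0.0.0.0/0",
    TransitGatewayId=tgw["TransitGatewayId"],
)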

" # Kopa 11 months, 1 week ago


good explanation
upvoted 1 times

" # moon2351 1 year ago


Answer is D
upvoted 1 times

" # andylogan 1 year ago


It's D with shared transit gateway
upvoted 1 times

" # tgv 1 year ago


DDD


---
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with D
upvoted 2 times

" # student2020 1 year ago


D looks good except "VPN-attached firewalls". What is this? Did they mean VPC attached firewalls?
upvoted 3 times

" # student2020 1 year ago


Architecture for D is explained here:
https://docs.aws.amazon.com/whitepapers/latest/building-scalable-secure-multi-vpc-network-infrastructure/centralized-egress-to-
internet.html
upvoted 2 times

" # ryu10_09 11 months, 3 weeks ago


Using an EC2 instance for centralized outbound

Using a software-based firewall appliance (on EC2) from AWS Marketplace as an egress point is similar to the NAT gateway setup. This option
can be used if you want to leverage the layer 7 firewall/Intrusion Prevention/Detection System (IPS/IDS) capabilities of the various vendor
offerings.

In Figure 12, we replace NAT Gateway with an EC2 instance (with SNAT enabled on EC2 instance). There are few key considerations with this
upvoted 1 times

" # Waiweng 1 year ago


it's D
upvoted 4 times

" # certainly 1 year ago


I will go with B. https://aws.amazon.com/blogs/networking-and-content-delivery/how-to-use-aws-privatelink-to-secure-and-scale-web-filtering-
using-explicit-proxy/. D. how do you route internet traffic thru VPN connected firewall?
upvoted 2 times

" # nitinz 1 year ago


Seems B to me.
upvoted 1 times

" # nitinz 1 year ago


Changing to D
upvoted 1 times

" # certainly 1 year ago


change my Answer to D. B says " A centralized shared VPC with a subnet for each account" this would not allow to grow to support 1000s
AWS account
upvoted 1 times

" # Kian1 1 year ago


going with D transit gateway
upvoted 2 times

" # Trap_D0_r 1 year ago


D
This is *THE* use case for a Transit Gateway. All the other information in answers is a distraction.
upvoted 3 times

" # Ebi 1 year, 1 month ago


I will go with D
upvoted 5 times

" # Bulti 1 year, 1 month ago


D is the correct answer as it is the only scalable option listed.
upvoted 1 times

" # spring21 1 year, 1 month ago


D: AWS Transit Gateway helps you design and implement networks at scale by acting as a cloud router. As your network grows, the complexity
of managing incremental connections can slow you down. AWS Transit Gateway connects VPCs and on-premises networks through a central
hub. This simplifies your network and puts an end to complex peering relationships -- each new connection is only made once.
upvoted 5 times


Question #631 Topic 1

A solutions architect needs to migrate 50 TB of NFS data to Amazon S3. The files are on several NFS file servers on the corporate network. These are
dense file systems containing tens of millions of small files. The system operators have configured the file interface on an AWS Snowball Edge
device and are using a shell script to copy data.
Developers report that copying the data to the Snowball Edge device is very slow. The solutions architect suspects this may be related to the
overhead of encrypting all the small files and transporting them over the network.
Which changes can be made to speed up the data transfer?

A. Cluster two Snowball Edge devices together to increase the throughput of the devices.

B. Change the solution to use the S3 Adapter instead of the file interface on the Snowball Edge device.

C. Increase the number of parallel copy jobs to increase the throughput of the Snowball Edge device.

D. Connect directly to the USB interface on the Snowball Edge device and copy the files locally.

Correct Answer: B

Community vote distribution


C (100%)

" # Gmail78 Highly Voted $ 1 year, 1 month ago


C- Perform multiple copy operations at one time – If your workstation is powerful enough, you can perform multiple snowball cp commands at
one time. You can do this by running each command from a separate terminal window, in separate instances of the Snowball client, all connected
to the same Snowball.
https://docs.aws.amazon.com/snowball/latest/ug/performance.html
upvoted 34 times
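For illustration, a rough Python sketch of that advice: several parallel copy streams against the Snowball Edge file-interface mount; the mount paths and worker count are placeholders:

import shutil
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

# Placeholders: the source NFS export and the Snowball Edge file-interface
# mount point; each copy job is a separate stream, mirroring the "multiple
# terminal windows" advice in the Snowball performance documentation.
SOURCE = Path("/mnt/nfs-export")
SNOWBALL_MOUNT = Path("/mnt/snowball-bucket")

def copy_one(src: Path) -> None:
    dest = SNOWBALL_MOUNT / src.relative_to(SOURCE)
    dest.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src, dest)

files = [p for p in SOURCE.rglob("*") if p.is_file()]

# Several concurrent copy streams keep the device busy; tune max_workers to
# what the workstation and network can sustain.
with ThreadPoolExecutor(max_workers=8) as pool:
    list(pool.map(copy_one, files))

For tens of millions of tiny files, the same documentation also recommends batching small files into larger archives before copying.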

" # mrphuongbn 1 year ago


Sometimes the fastest way to transfer data with Snowball is to transfer data in parallel.
https://docs.aws.amazon.com/snowball/latest/ug/transfer-petabytes.html#parallel-usage
upvoted 1 times

" # StelSen 1 year ago


Agree with Answer-C. One more supporting link: https://aws.amazon.com/blogs/storage/best-practices-for-accelerating-data-migrations-
using-aws-snowball-edge/ (Section: Parallelize data transfers)
upvoted 1 times

" # taoteching1 Highly Voted $ 1 year, 1 month ago


Answer = B - The data transfer rate using the file interface is typically between 25 MB/s and 40 MB/s. If you need to transfer data faster than this,
use the Amazon S3 Adapter for Snowball, which has a data transfer rate typically between 250 MB/s and 400 MB/s.
https://docs.aws.amazon.com/snowball/latest/developer-guide/using-adapter.html
https://docs.aws.amazon.com/snowball/latest/developer-guide/BestPractices.html
upvoted 16 times

" # PAUGURU 1 year, 1 month ago


The last link you posted clearly states:
"This following list is ordered from largest to smallest positive impact on performance:
1 Perform multiple write operations at one time – To do this, run each command from multiple terminal windows on a computer with a network
connection to a single AWS Snowball Edge device."
So I say C.
upvoted 5 times

" # shammous 1 year ago


Check the note below the list that satisfy the requirement.
upvoted 3 times

" # student22 1 year ago


Yes. It looks like B is the correct answer:
https://docs.aws.amazon.com/snowball/latest/developer-guide/BestPractices.html
The data transfer rate using the file interface is typically between 25 MB/s and 40 MB/s. If you need to transfer data faster than this, use
the Amazon S3 Adapter for Snowball, which has a data transfer rate typically between 250 MB/s and 400 MB/s.
upvoted 1 times

" # student22 1 year ago


So, you have to use Amazon S3 Adapter for Snowball first even if you're planning to increase the speed with multiple parallel write
operations.

upvoted 2 times

" # AkaAka4 10 months, 3 weeks ago


I don't think there is such pre-requisite... the two options can be done separately without dependency on each other.
upvoted 1 times
" # HellGate 9 months, 1 week ago
S3 Adapter is designed for this situation over programatical approach in C.
upvoted 1 times

" # aqiao Most Recent % 2 months ago


Selected Answer: C
According to this link:https://docs.aws.amazon.com/snowball/latest/developer-guide/performance.html. For small files, you should transfer them
in batch. The biggest difference between B and C is S3 adapter used with program, but file interface is a GUI
upvoted 1 times

" # TechX 4 months, 1 week ago


Selected Answer: C
C for me.
The question said that: "The solutions architect feels this is due to the overhead associated with encrypting and transferring all the little data
across the network." -> we have network issue.
While from the AWS documentation, they clearly note that: "Because the computer workstation from which or to which you make the data
transfer is considered to be the bottleneck for transferring data, we highly recommend that your workstation be a powerful computer. It should be
able to meet high demands in terms of processing, memory, and networking" https://docs.aws.amazon.com/snowball/latest/ug/using-
adapter.html
upvoted 1 times

" # jj22222 7 months, 1 week ago


Selected Answer: C
C. Increase the number of parallel copy jobs to increase the throughput of the Snowball Edge device.
upvoted 1 times

" # jyrajan69 8 months, 2 weeks ago


This statement is imp 'architect feels this is due to the overhead associated with encrypting and transferring all the little data across the network.'
So based on this, S3 Adapter with higher transfer rates will not resolve this issue, must go with C.
upvoted 2 times

" # TechX 4 months, 1 week ago


agree with you, from the AWS documentation, they clearly note that: "Because the computer workstation from which or to which you make
the data transfer is considered to be the bottleneck for transferring data, we highly recommend that your workstation be a powerful computer.
It should be able to meet high demands in terms of processing, memory, and networking"
https://docs.aws.amazon.com/snowball/latest/ug/using-adapter.html
upvoted 1 times

" # pititcu667 9 months, 3 weeks ago


Selected Answer: C
Question is center around snowball.
upvoted 1 times

" # AzureDP900 11 months, 1 week ago


Selected Answer: C
C
Perform multiple copy operations at one time by running each command from a separate terminal window, in separate instances of the Snowball
client” is the correct answer
upvoted 3 times

" # Liongeek 12 months ago


From my understanding, it's as simple as this:
If you change to s3 adapter of course you could get higher transfer rate, but you won't cause you'r still using only ONE CPU thread to copy the
files. You want to use all your CPU cores/threads to shorten the ENCRYPTION TIME? Run parallel copies then :)
upvoted 1 times

" # Bigbearcn 1 year ago


I will go with B
upvoted 1 times

" # andylogan 1 year ago


It's B to use S3 Adapter first
upvoted 1 times

" # blackgamer 1 year ago


C is the answer.
upvoted 1 times

" # johnyc55 1 year ago


Note
The data transfer rate using the file interface is typically between 25 MB/s and 40 MB/s. If you need to transfer data faster than this, use the
Amazon S3 adapter for Snowball, which has a data transfer rate typically between 250 MB/s and 400 MB/s. For more information, see
Transferring Files Using the Amazon S3 Interface.
upvoted 2 times

" # WhyIronMan 1 year ago


I'll go with B
B and C are right, but the ORDER is:
1 - set the Amazon S3 Adapter for Snowball
2 - start multiple copies after set S3 Adapter for Snowball

the first thing to do is set the S3 Adapter for Snowball, otherwise the multiple copies will throw the same problem again.
upvoted 5 times

" # WhyIronMan 1 year ago


I'll go with B
upvoted 1 times

" # WhyIronMan 1 year ago


After a second tough, changing to C
upvoted 1 times

" # WhyIronMan 1 year ago


No, nvm, going back to B...
the order is:
1 - set the Amazon S3 Adapter for Snowball
2 - start multiple copies after set S3 Adapter for Snowball

the first thing to do is set the S3 Adapter for Snowball, otherwise the multiple copies will throw the same problem again.
upvoted 3 times

" # student22 1 year ago


Yes. B is right.
upvoted 2 times

" # vkbajoria 1 year ago


It is C for me. Eventhough B sounds very convincing as well. Plus it provide faster speed then File Interface.

But According to AWS, if transfer is started with File Interface, it should be continue till end. Therefore, opening multiple window will speed things
up. If we want to start over, then obviously s3 Interface would be faster.
Here is the link: https://docs.aws.amazon.com/snowball/latest/developer-guide/using-fileinterface.html#fileinterface-overview
upvoted 3 times

" # gcg27 1 year ago


Correct answer is C, this question is exacted from Bonso exam
upvoted 4 times

" # NerdMe 1 year ago


Yes, you are correct I also saw the exact same question. Answer is C.
upvoted 3 times
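To make the two ideas in this thread concrete (use the S3 adapter, and run many copy operations in parallel), here is a minimal Python (boto3) sketch combining them. It is illustrative only: the adapter endpoint URL, bucket name, and source directory are placeholders, and it assumes the Snowball Edge has already been unlocked and local access keys have been configured for boto3.

import concurrent.futures
import pathlib

import boto3

# S3 client pointed at the Snowball Edge S3 adapter (placeholder endpoint).
s3 = boto3.client("s3", endpoint_url="http://192.0.2.10:8080")

SOURCE_DIR = pathlib.Path("/mnt/nfs/export")   # placeholder NFS mount
BUCKET = "snowball-import-bucket"              # placeholder bucket on the device


def upload(path: pathlib.Path) -> str:
    # Upload one file, keeping its path relative to the NFS export as the object key.
    key = str(path.relative_to(SOURCE_DIR))
    s3.upload_file(str(path), BUCKET, key)
    return key


files = [p for p in SOURCE_DIR.rglob("*") if p.is_file()]

# Many small files benefit from parallel transfers, per the Snowball performance docs.
with concurrent.futures.ThreadPoolExecutor(max_workers=16) as pool:
    for key in pool.map(upload, files):
        print("uploaded", key)

Batching or archiving the smallest files before the copy would further reduce per-object overhead, which is the other recommendation in the performance documentation linked above.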

Question #632 Topic 1

A company is planning on hosting its ecommerce platform on AWS using a multi-tier web application designed for a NoSQL database. The
company plans to use the us-west-2 Region as its primary Region. The company wants to ensure that copies of the application and data are
available in a second Region, us-west-1, for disaster recovery. The company wants to keep the time to fail over as low as possible. Failing back to
the primary Region should be possible without administrative interaction after the primary service is restored.
Which design should the solutions architect use?

A. Use AWS CloudFormation StackSets to create the stacks in both Regions with Auto Scaling groups for the web and application tiers.
Asynchronously replicate static content between Regions using Amazon S3 cross-Region replication. Use an Amazon Route 53 DNS failover
routing policy to direct users to the secondary site in us-west-1 in the event of an outage. Use Amazon DynamoDB global tables for the
database tier.

B. Use AWS CloudFormation StackSets to create the stacks in both Regions with Auto Scaling groups for the web and application tiers.
Asynchronously replicate static content between Regions using Amazon S3 cross-Region replication. Use an Amazon Route 53 DNS failover
routing policy to direct users to the secondary site in us-west-1 in the event of an outage. Deploy an Amazon Aurora global database for the
database tier.

C. Use AWS Service Catalog to deploy the web and application servers in both Regions. Asynchronously replicate static content between the
two Regions using Amazon S3 cross-Region replication. Use Amazon Route 53 health checks to identify a primary Region failure and update
the public DNS entry listing to the secondary Region in the event of an outage. Use Amazon RDS for MySQL with cross-Region replication for
the database tier.

D. Use AWS CloudFormation StackSets to create the stacks in both Regions using Auto Scaling groups for the web and application tiers.
Asynchronously replicate static content between Regions using Amazon S3 cross-Region replication. Use Amazon CloudFront with static files
in Amazon S3, and multi-Region origins for the front-end web tier. Use Amazon DynamoDB tables in each Region with scheduled backups to
Amazon S3.

Correct Answer: C

Community vote distribution


A (100%)

" # porlarowl Highly Voted $ 1 year, 1 month ago


I support A.
because the platform designed for NoSQL... should be DynamoDB global table.
upvoted 24 times

" # Gmail78 1 year, 1 month ago


Agree with A - https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GlobalTables.html
upvoted 5 times

" # Ebi Highly Voted $ 1 year, 1 month ago


A for sure
upvoted 6 times

" # Binoj_1985 Most Recent % 10 months, 3 weeks ago


Selected Answer: A
Service catalog not required since cross account is not required. So CFT is enough. Also, failover for website can be done by Route53
upvoted 2 times

" # cldy 11 months ago


A. Use AWS CloudFormation StackSets to create the stacks in both Regions with Auto Scaling groups for the web and application tiers.
Asynchronously replicate static content between Regions using Amazon S3 cross-Region replication. Use an Amazon Route 53 DNS failover
routing policy to direct users to the secondary site in us-west-1 in the event of an outage. Use Amazon DynamoDB global tables for the database
tier.
upvoted 1 times

" # AzureDP900 11 months, 1 week ago


Selected Answer: A
A is right
upvoted 1 times

" # acloudguru 11 months, 2 weeks ago

Selected Answer: A
Dynamo DB is NoSql solution, Cloudformation is for Iaas, for C, use service catalogue for what?
upvoted 1 times

" # andylogan 1 year ago


It's A with NoSQL DynamoDB
upvoted 1 times

" # tgv 1 year ago


AAA
---
upvoted 1 times

" # blackgamer 1 year ago


A. Dynamo DB is NoSql solution
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with A
upvoted 2 times

" # Waiweng 1 year ago


it's A
upvoted 5 times

" # Kian1 1 year ago


going with A
upvoted 3 times

" # kopper2019 1 year ago


A for sure, NoSQL and Failover policy using Route 53
upvoted 2 times

" # Bulti 1 year, 1 month ago


A is correct
upvoted 2 times

" # rscloud 1 year, 1 month ago


A
Route53 DNS Failover, DynamoDB global table fulfill req.
upvoted 2 times

" # jackdryan 1 year, 1 month ago


I'll go with A
upvoted 2 times
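As a small illustration of the database piece of option A, the following boto3 sketch adds a us-west-1 replica to an existing us-west-2 table, turning it into a DynamoDB global table (version 2019.11.21). The table name is a placeholder and the table is assumed to already satisfy the global-table prerequisites (for example, DynamoDB Streams enabled).

import time

import boto3

dynamodb = boto3.client("dynamodb", region_name="us-west-2")

# Request a replica of the table in the secondary Region.
dynamodb.update_table(
    TableName="ecommerce-orders",  # placeholder table name
    ReplicaUpdates=[{"Create": {"RegionName": "us-west-1"}}],
)

# Poll until the replica reports ACTIVE before relying on it for failover.
while True:
    table = dynamodb.describe_table(TableName="ecommerce-orders")["Table"]
    replicas = table.get("Replicas", [])
    if any(r.get("RegionName") == "us-west-1" and r.get("ReplicaStatus") == "ACTIVE" for r in replicas):
        break
    time.sleep(10)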

Question #633 Topic 1

A company hosts a blog post application on AWS using Amazon API Gateway, Amazon DynamoDB, and AWS Lambda. The application currently
does not use
API keys to authorize requests. The API model is as follows:
GET /posts/[postid] to get post details
GET /users/[userid] to get user details
GET /comments/[commentid] to get comment details
The company has noticed users are actively discussing topics in the comments section, and the company wants to increase user engagement by
making the comments appear in real time.
Which design should be used to reduce comment latency and improve user experience?

A. Use edge-optimized API with Amazon CloudFront to cache API responses.

B. Modify the blog application code to request GET /comments/[commentid] every 10 seconds.

C. Use AWS AppSync and leverage WebSockets to deliver comments.

D. Change the concurrency limit of the Lambda functions to lower the API response time.

Correct Answer: D

Community vote distribution


C (100%)

" # liono Highly Voted $ 1 year, 1 month ago


C, https://aws.amazon.com/appsync/
upvoted 15 times

" # A_New_Guy 1 year, 1 month ago


Why this one?
upvoted 1 times

" # beso 1 year, 1 month ago


AWS AppSync is a fully managed service supports real-time updates.
upvoted 3 times

" # Ebi Highly Voted $ 1 year, 1 month ago


I go with C
upvoted 5 times

" # CloudHandsOn Most Recent % 3 months, 3 weeks ago


I believe that it is D (at first, thought it was C). Here is my explanation:
For short term, you can choose C, and just increase PC on the Lambda to handle load as it increases. However, the question asks about best
DESIGN which is more long term -> "Which design should be adopted..". Over time, you may be finding yourself stuck with paying more in costs
if you go with C. Smart Guy goes with D. Answer is DDDDDDD!!!!
upvoted 1 times

" # CloudHandsOn 3 months, 3 weeks ago


I wish they had an edit button :(. CORRECTION below:

I believe that it is C (at first, thought it was D). Here is my explanation:


For short term, you can choose D, and just increase PC on the Lambda to handle load as it increases. However, the question asks about best
DESIGN which is more long term -> "Which design should be adopted..". Over time, you may be finding yourself stuck with paying more in
costs if you go with D. Smart Guy goes with C. Answer is CCCCCCC!!!!
upvoted 2 times

" # aandc 4 months, 1 week ago


Selected Answer: C
keyword "in real time" -> APPSYNC
upvoted 1 times

" # Hasitha99 7 months ago


Selected Answer: C
AppSync is graphql based solution that supports real-time updates.
upvoted 2 times

" # kenchou73 7 months, 2 weeks ago


Selected Answer: C
https://docs.aws.amazon.com/appsync/latest/devguide/graphql-overview.html
upvoted 1 times

" # cldy 11 months ago


C. Use AWS AppSync and leverage WebSockets to deliver comments.
upvoted 2 times

" # AzureDP900 11 months, 1 week ago


I will go with C
upvoted 2 times

" # backfringe 11 months, 2 weeks ago


I go with C
https://docs.aws.amazon.com/appsync/latest/devguide/real-time-websocket-client.html
upvoted 2 times

" # ElGuru 11 months, 3 weeks ago


Selected Answer: C
"Which design should be adopted to enhance user experience and decrease comment latency?"

Having to change to GraphQL shouldn't be relevant since the question doesn't ask about the Easiest way.
upvoted 1 times

" # andylogan 1 year ago


It's C
upvoted 2 times

" # Suresh108 1 year ago


CCCCCC

https://aws.amazon.com/blogs/mobile/appsync-realtime/
upvoted 3 times

" # tgv 1 year ago


CCC
---
upvoted 2 times

" # WhyIronMan 1 year ago


I'll go with C
upvoted 2 times

" # oscargee 1 year ago


Not C! The question says: Get xxx means it is using RESTful query. Websocket doen't have such thing.
upvoted 4 times

" # WhyIronMan 1 year ago


It's C... Go study TCP/IP and HTTP protocols and leave us alone please
upvoted 3 times

" # blackgamer 1 year ago


The problem with C is that AppSync designed for GraphQL but the question is about REST api. Moreover, Oscar free is right that
websocket doesn’t work in that way. Only polling is solution here if not rewriting the app.
upvoted 1 times

" # blackgamer 1 year ago


No solution seems to be complete here, but I will go with D as it is not at least wrong although not a complete solution.
upvoted 1 times

" # Waiweng 1 year ago


it's C
upvoted 3 times

" # gsw 1 year ago


aws generally don't recommend changing the entire architecture for a solution unless specified in the question which makes C a difficult choice...
D makes more sense
upvoted 1 times

Question #634 Topic 1

A company has a VPC with two domain controllers running Active Directory in the default configuration. The VPC DHCP options set is configured
to use the IP addresses of the two domain controllers. There is a VPC interface endpoint defined, but instances within the VPC are not able to
resolve the private endpoint addresses.
Which strategies would resolve this issue? (Choose two.)

A. Define an outbound Amazon Route 53 Resolver. Set a conditional forward rule for the Active Directory domain to the Active Directory
servers. Update the VPC DHCP options set to AmazonProvidedDNS.

B. Update the DNS service on the Active Directory servers to forward all non-authoritative queries to the VPC Resolver.

C. Define an inbound Amazon Route 53 Resolver. Set a conditional forward rule for the Active Directory domain to the Active Directory servers.
Update the VPC DHCP options set to AmazonProvidedDNS.

D. Update the DNS service on the client instances to split DNS queries between the Active Directory servers and the VPC Resolver.

E. Update the DNS service on the Active Directory servers to forward all queries to the VPC Resolver.

Correct Answer: BE

Community vote distribution


AB (57%) BC (43%)

" # MichaelR Highly Voted $ 1 year, 1 month ago


I think its A, B. AwsDNS is set in DHCP options. AWS resources resolve other resources as a result, but forward AD domain queries to AD servers
via an Outbound resolver endpoint. Users hitting the AD servers from on-prem would then have non-authoritative queries pushed to the AWS
resolver.
upvoted 29 times

" # TonyGe 1 year ago


A is incorrect, an outbound resolver is for DNS queries that you want to forward outside your VPC. For example, this is used for resolving
outside domain names.
upvoted 1 times

" # MichaelR 1 year, 1 month ago


as far as I know, you can't create a forward rule in an inbound resolver. Correct me if I"m wrong
upvoted 4 times

" # liono Highly Voted $ 1 year, 1 month ago


B &C are correct options
upvoted 19 times

" # Cal88 Most Recent % 6 days, 23 hours ago


The correct answer is AB as most comments are stating.

For anyone who thinks that A is not correct because outbound resolver will forward to on-premise DNS server.
Remember , our goal is to resolve records in our domain which in the question is hosted in the AD so we need to forward these requests if they
don't match the private hosts for the VPC.
The DNS being hosted inside the VPC or on premise is not relevant since you are specifying an ip in the forward rule , so technically you can
forward to the AD which inside the VPC
in AWS Docs:
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resolver-forwarding-outbound-queries.html#resolver-forwarding-outbound-
queries-rule-values

Target IP addresses
When a DNS query matches the name that you specify in Domain name, the outbound endpoint forwards the query to the IP addresses that you
specify here. These are typically the IP addresses for DNS resolvers on your network.

so it could be any IP weather its inside or outside the VPC


upvoted 1 times

" # Cal88 6 days, 23 hours ago


Notice in the documentation it says typically its inside your network , it does not mean this is the only way to do it but it means that in most
cases this what will happen.
The use case in the question applies to using outbound resolver
upvoted 1 times

" # nsvijay04b1 2 weeks, 3 days ago

Selected Answer: AB
AB is answer. why?
A) correct - outbound resolver has conditional fwd rules to resolve hybrid DNS + VPC DHCP options must be reverted to other EC2 can resolve
DNS
B) correct - AD servers to use inbound resolver for non-authorititative queries to reach instances
C) wrong - There is no conditional fwd rules for inbound resolvers
D) wrong - splitting DNS server based on type of app seems illogical for me
E) wrong - AD servers need to resolve internal queries as well, not makes sense
upvoted 1 times
" # RVD 2 months, 2 weeks ago
Selected Answer: BC
To resolve the AWS services CNAME it needs to forward the queries to AWS DNS which on prem DNS trying to forward, here question is about
ec2 is not able to resolve the endpoint DNS. EC2->ADDNS->Inboud Resolver.
upvoted 3 times

" # Enigmaaaaaa 3 months, 1 week ago


AB
First we set all Instances to forward all queries to AmazonDNS (to resolve private interface names) and then other queries *.example.corp.com
will be forwarded with the outbound endpoint to the AD servers
upvoted 3 times

" # Sonujunko 8 months, 1 week ago


Selected Answer: AB
A,B
https://aws.amazon.com/blogs/networking-and-content-delivery/integrating-your-directory-services-dns-resolution-with-amazon-route-53-
resolvers/
upvoted 3 times

" # pititcu667 10 months ago


Guys i initially said a.b then I noticed the domain controllers are inside the vpc . so changing to bc.
upvoted 1 times

" # pititcu667 8 months, 2 weeks ago


I made a mistake it's ab forwarding requires outbound.
upvoted 1 times

" # AzureDP900 11 months ago


I will go with A & B
upvoted 1 times

" # Liongeek 12 months ago


B&C for me. Same question appeared on Udemy test
upvoted 2 times

" # Salmariaz 1 year ago


Should be A and B , as outbound endpoint not necessarily mean that the servers should be onprem for conditional forwarder rule to kick in,
instead they can reside in another VPC too and it allows DNS queries from your VPC to the VPC where the AD servers run.
Option C would also work with an inbound endpoint pointing to the 2 AD server IPs, but definitely not with forwarding rules. So clearly ruled out.
upvoted 4 times

" # andylogan 1 year ago


It's A, B - An outbound resolver
upvoted 2 times

" # tgv 1 year ago


AAA BBB
---
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resolver.html
upvoted 5 times

" # blackgamer 1 year ago


The answer is A & B. Thanks Waiweng for useful reference.
upvoted 1 times

" # pablobairat 1 year ago


B&C
"An outbound resolver is for DNS queries that you want to forward outside your VPC" -> So A is discarted since everything is inside the VPC
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with B,C
upvoted 2 times

" # botohin687 1 year ago


Answer A &B
https://aws.amazon.com/blogs/aws/new-amazon-route-53-resolver-for-hybrid-clouds/
upvoted 2 times

" # Shenannigan 1 year ago


I could see this being correct if the domain controllers were hosted on premise but in this case the DC's are hosted on the VPC as such I am
going with:
BC
upvoted 2 times
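For anyone who wants to see what option A looks like as API calls, here is a minimal boto3 sketch: an outbound Route 53 Resolver endpoint plus a conditional forwarding rule that sends queries for the Active Directory domain to the two domain controllers, then an association of that rule with the VPC. The domain name, subnet IDs, security group, VPC ID, and domain-controller IPs are all placeholders.

import uuid

import boto3

resolver = boto3.client("route53resolver")

# An outbound endpoint needs at least two IP addresses (subnets) and a security group.
endpoint = resolver.create_resolver_endpoint(
    CreatorRequestId=str(uuid.uuid4()),
    Name="outbound-to-ad",
    Direction="OUTBOUND",
    SecurityGroupIds=["sg-0123456789abcdef0"],
    IpAddresses=[{"SubnetId": "subnet-0aaa"}, {"SubnetId": "subnet-0bbb"}],
)["ResolverEndpoint"]

# Conditional forward rule: send queries for the AD domain to the domain controllers.
rule = resolver.create_resolver_rule(
    CreatorRequestId=str(uuid.uuid4()),
    Name="forward-ad-domain",
    RuleType="FORWARD",
    DomainName="corp.example.com",
    TargetIps=[{"Ip": "10.0.0.10", "Port": 53}, {"Ip": "10.0.1.10", "Port": 53}],
    ResolverEndpointId=endpoint["Id"],
)["ResolverRule"]

# Associate the rule with the VPC; the DHCP options set then reverts to AmazonProvidedDNS
# so instances resolve interface endpoint names natively.
resolver.associate_resolver_rule(ResolverRuleId=rule["Id"], VPCId="vpc-0123456789abcdef0")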

Question #635 Topic 1

A company has a photo sharing social networking application. To provide a consistent experience for users, the company performs some image
processing on the photos uploaded by users before publishing on the application. The image processing is implemented using a set of Python
libraries.
The current architecture is as follows:
✑ The image processing Python code runs in a single Amazon EC2 instance and stores the processed images in an Amazon S3 bucket named
ImageBucket.
✑ The front-end application, hosted in another bucket, loads the images from ImageBucket to display to users.
With plans for global expansion, the company wants to implement changes in its existing architecture to be able to scale for increased demand on
the application and reduce management complexity as the application scales.
Which combination of changes should a solutions architect make? (Choose two.)

A. Place the image processing EC2 instance into an Auto Scaling group.

B. Use AWS Lambda to run the image processing tasks.

C. Use Amazon Rekognition for image processing.

D. Use Amazon CloudFront in front of ImageBucket.

E. Deploy the applications in an Amazon ECS cluster and apply Service Auto Scaling.

Correct Answer: DE

Community vote distribution


BD (100%)

" # cpd Highly Voted $ 1 year, 1 month ago


"reduce management complexity as the application scales"; both A and E involves managing underlying host (unless its Fargate for E). I'd use
lambda to answer this requirement i.e., B. And D is obvious answer.
upvoted 23 times

" # DashL 1 year ago


The question doesn't say if the image processing can complete within a lambda's timeout period of 15 minutes or less. So the answer should
be AD
upvoted 3 times

" # memester 1 year ago


It also doesn't say it can't...
upvoted 7 times

" # Ebi Highly Voted $ 1 year ago


Answer is BD,
B changes existing architecture from EC2 to Lambda with minimum management overhead
upvoted 15 times

" # kadev Most Recent % 2 months, 1 week ago


Because AWS is the best , AWS can do anything ( AWS said) lol => Lambda can process image ( because first sentence => dont care about
timeout or many libs ..) => B
upvoted 1 times

" # kadev 2 months, 1 week ago


After kidding,
"to scale for increased demand on the application and reduce management complexity "
- Auto scaling with Ec2, or Ecs for processing event from S3, need add more modules: SQS, CW event, lambda to audjust Scaling size
=> Lambda is the best option
upvoted 1 times

" # kadev 2 months, 1 week ago


we are in exam, not your project.
upvoted 1 times

" # jyrajan69 8 months, 2 weeks ago


D is definitely one choice. For those choosing B, you are considering the complexity. The question states that currently there are a lot of python
libraries needed to process, so if using B then you must configure this as layers that Lambda must pull in, adding to the complexity. Now with
answer A, you are catering for growth with scaling the EC2 instances, not adding any more complexity and that should be A. Answer A and D

upvoted 1 times
" # tkanmani76 10 months, 2 weeks ago
D for sure. It's not A as its mentioned that firm wants to change the architecture. Between B and E, Lambda would be a good choice and more
operationally efficient over ECS. Its much faster when it comes to scaling over ECS. Hence will choose Lambda (Choice B) over ECS.
https://prismatic.io/blog/why-we-moved-from-lambda-to-ecs/ - This is an interesting case study on the problems faced by Prismatic with
Lambda and why they moved to ECS - which provide a perspective. However in our case Lambda will do the work.
upvoted 1 times

" # AzureDP900 11 months, 1 week ago


Selected Answer: BD
B,D is right
upvoted 3 times

" # andylogan 1 year ago


It's B, D - minimum management overhead
upvoted 1 times

" # DerekKey 1 year ago


B&D - we are using such solution in two environments
upvoted 2 times

" # tgv 1 year ago


BBB DDD
---
upvoted 2 times

" # blackgamer 1 year ago


E is okay, but B is more suitable to minimize management overhead. I will go with B and D.
upvoted 1 times

" # jobe42 1 year ago


In fact that we can change the existing architecture:
A.) Will not solve the scaling problem, more EC2 instances have somehow cooridinate the tasks, no SQS or DynamoDB here for processing
information => NOK
B.) S3 triggers a Lambda function for processing => OK
C.) Recognition not suitable for "processing", just for analyzing. you can't change the pictures (scale, add watermark etc.) just find out if this is a
dog or a copyrighted dog => NOK
D.) Obvious.. => OK
E.) Would work, but needs more to do than B => NOK

== B & D
upvoted 8 times

" # WhyIronMan 1 year ago


I'll go with B,D
upvoted 2 times

" # tvs 1 year ago


BD - Why not A - because you need to patch and other maintenance on EC2 instances.
upvoted 2 times

" # zolthar_z 1 year ago


The answer is B and D.. D is obvious. B the question give the answer (execute on process per image, if there is no new image no process should
run, python and libraries) is lambda
upvoted 1 times

" # Waiweng 1 year ago


it's A&D implement changes to it's existing architecture and reduce management complexity
upvoted 3 times

" # student22 1 year ago


B,D
Lambda will reduce management overhead better. EC2 with ASG will still have some overhead for patching etc.
upvoted 1 times

" # gpark 1 year ago


DE
===
The question asks about application tier, not processing tier..
So, AB is quite doubtful to select.
upvoted 1 times

" # Kian1 1 year ago

To be able to scale for increased demand on the application and reduce management complexity as the application scales. I would prefer
solution D,E. but not sure the answer tbh.
upvoted 1 times
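To illustrate the Lambda-based processing that option B describes, here is a minimal sketch of a handler triggered by S3 object-created events. It downloads the uploaded photo, applies a stand-in transformation with Pillow (which would have to be packaged in a layer or in the deployment bundle), and writes the result to ImageBucket. The bucket names and the processing step itself are placeholders for the company's real Python libraries.

import io

import boto3
from PIL import Image  # assumes Pillow is packaged with the function

s3 = boto3.client("s3")
OUTPUT_BUCKET = "ImageBucket"


def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        # Fetch the uploaded photo.
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()

        # Stand-in for the real image-processing pipeline.
        image = Image.open(io.BytesIO(body)).convert("RGB")
        image.thumbnail((1024, 1024))

        out = io.BytesIO()
        image.save(out, format="JPEG")

        # Publish the processed image for the front end to load.
        s3.put_object(Bucket=OUTPUT_BUCKET, Key=f"processed/{key}", Body=out.getvalue())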

Question #636 Topic 1

A company has a web application that allows users to upload short videos. The videos are stored on Amazon EBS volumes and analyzed by
custom recognition software for categorization.
The website contains static content that has variable traffic with peaks in certain months. The architecture consists of Amazon EC2 instances
running in an Auto Scaling group for the web application and EC2 instances running in an Auto Scaling group to process an Amazon SQS queue.
The company wants to re-architect the application to reduce operational overhead using AWS managed services where possible and remove
dependencies on third-party software.
Which solution meets these requirements?

A. Use Amazon ECS containers for the web application and Spot instances for the Scaling group that processes the SQS queue. Replace the
custom software with Amazon Rekognition to categorize the videos.

B. Store the uploaded videos in Amazon EFS and mount the file system to the EC2 instances for the web application. Process the SQS queue
with an AWS Lambda function that calls the Amazon Rekognition API to categorize the videos.

C. Host the web application in Amazon S3. Store the uploaded videos in Amazon S3. Use S3 event notification to publish events to the SQS
queue. Process the SQS queue with an AWS Lambda function that calls the Amazon Rekognition API to categorize the videos.

D. Use AWS Elastic Beanstalk to launch EC2 instances in an Auto Scaling group for the application and launch a worker environment to
process the SQS queue. Replace the custom software with Amazon Rekognition to categorize the videos.

Correct Answer: A

Community vote distribution


C (100%)

" # Gmail78 Highly Voted $ 1 year, 1 month ago


C is more realistic
upvoted 14 times

" # Ebi Highly Voted $ 1 year ago


I will go with C
upvoted 7 times

" # hilft Most Recent % 3 months, 1 week ago


C is full serverless
upvoted 1 times

" # kangtamo 4 months, 1 week ago


Selected Answer: C
Agree with C: S3.
upvoted 1 times

" # Ni_yot 7 months, 3 weeks ago


Yep. C is correct. they want to do away with 3rd party software and use AWS managed services
upvoted 2 times

" # AzureDP900 11 months, 1 week ago


c is correct
upvoted 1 times

" # andylogan 1 year ago


It's C - reduce operational overhead.
upvoted 1 times

" # tgv 1 year ago


CCC
---
upvoted 1 times

" # blackgamer 1 year ago


C for sure, managed services.
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with C
upvoted 2 times

" # oscargee 1 year ago


C? How can you host an app in S3? S3 is for static data.
upvoted 2 times

" # Chibuzo1 1 year ago


"the website contains static content" Answer is C buddy!!
upvoted 2 times

" # devtest01 2 months ago


Question is not clearly. "Contain" not "static website"
upvoted 1 times

" # mustpassla 1 year ago


C, reduce operational overhead.
upvoted 2 times

" # Waiweng 1 year ago


it's C
upvoted 4 times

" # kiev 1 year ago


Full House says C is the answer
upvoted 2 times

" # Kian1 1 year ago


going with C
upvoted 3 times

" # LB 1 year ago


C for me
upvoted 2 times

" # Bulti 1 year ago


C is the right answer
upvoted 2 times
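As a sketch of how the pieces in option C fit together: S3 event notifications land on the SQS queue, and a Lambda function with an SQS trigger starts an asynchronous Amazon Rekognition label-detection job for each uploaded video. The SNS topic and IAM role that Rekognition uses to report job completion are placeholders.

import json

import boto3

rekognition = boto3.client("rekognition")


def handler(event, context):
    # Each SQS record body is an S3 event notification (JSON).
    for sqs_record in event["Records"]:
        s3_event = json.loads(sqs_record["body"])
        for record in s3_event.get("Records", []):
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]

            # Kick off asynchronous video analysis; results are announced via the SNS topic.
            job = rekognition.start_label_detection(
                Video={"S3Object": {"Bucket": bucket, "Name": key}},
                NotificationChannel={
                    "SNSTopicArn": "arn:aws:sns:us-west-2:111122223333:video-labels",    # placeholder
                    "RoleArn": "arn:aws:iam::111122223333:role/RekognitionPublishRole",  # placeholder
                },
            )
            print("started label detection for", key, "job", job["JobId"])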

Question #637 Topic 1

A retail company processes point-of-sale data on application servers in its data center and writes outputs to an Amazon DynamoDB table. The
data center is connected to the company's VPC with an AWS Direct Connect (DX) connection, and the application servers require a consistent
network connection at speeds greater than 2 Gbps.
The company decides that the DynamoDB table needs to be highly available and fault tolerant. The company policy states that the data should be
available across two regions.
What changes should the company make to meet these requirements?

A. Establish a second DX connection for redundancy. Use DynamoDB global tables to replicate data to a second Region. Modify the
application to fail over to the second Region.

B. Use an AWS managed VPN as a backup to DX. Create an identical DynamoDB table in a second Region. Modify the application to replicate
data to both Regions.

C. Establish a second DX connection for redundancy. Create an identical DynamoDB table in a second Region. Enable DynamoDB auto scaling
to manage throughput capacity. Modify the application to write to the second Region.

D. Use AWS managed VPN as a backup to DX. Create an identical DynamoDB table in a second Region. Enable DynamoDB streams to capture
changes to the table. Use AWS Lambda to replicate changes to the second Region.

Correct Answer: A

" # Gmail78 Highly Voted $ 1 year, 1 month ago


A - https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GlobalTables.html
upvoted 15 times

" # mustpassla Highly Voted $ 1 year ago


A, speeds greater than 2 Gbps.
upvoted 8 times

" # hilft Most Recent % 3 months, 2 weeks ago


A. two region - > "global"
upvoted 1 times

" # jyrajan69 8 months, 2 weeks ago


Only A gives you the Global Table option for Dynamo so definitely the answer
upvoted 2 times

" # AzureDP900 11 months ago


A is right
upvoted 1 times

" # AzureDP900 11 months, 1 week ago


A for sure
upvoted 1 times

" # andylogan 1 year ago


It's A - Managed VPN max throughput 1.25Gbps
upvoted 2 times

" # tgv 1 year ago


AAA
---
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with A
upvoted 1 times

" # Waiweng 1 year ago


it;s A
upvoted 3 times

" # PredaOvde 1 year ago


A say "modify the application to fail over..." . That makes no sense.

upvoted 1 times

" # alisyech 1 year ago


A for sure
upvoted 1 times

" # bulapapa 1 year ago


A is correct
VPN doesn't meet the bandwidth. The maximum bandwidth of VPN is 1.25Gbps.
https://docs.aws.amazon.com/vpn/latest/s2svpn/vpn-limits.html
upvoted 4 times

" # Kian1 1 year ago


going with A
upvoted 2 times

" # LB 1 year ago


A - Dynamo DB global tables and redundant Direct connection
upvoted 2 times

" # kopper2019 1 year ago


A, VPN cannot be used since they required stable and consistent speed and DynamoDB global tables
upvoted 1 times

" # Ebi 1 year ago


Answer is A
upvoted 4 times

Question #638 Topic 1

A company is using AWS CloudFormation as its deployment tool for all applications. It stages all application binaries and templates within an
Amazon S3 bucket with versioning enabled. Developers have access to an Amazon EC2 instance that hosts the integrated development
environment (IDE). The Developers download the application binaries from Amazon S3 to the EC2 instance, make changes, and upload the binaries to an S3 bucket after
running the unit tests locally. The developers want to improve the existing deployment mechanism and implement CI/CD using AWS CodePipeline.
The developers have the following requirements:
✑ Use AWS CodeCommit for source control.
✑ Automate unit testing and security scanning.
✑ Alert the Developers when unit tests fail.
✑ Turn application features on and off, and customize deployment dynamically as part of CI/CD.
✑ Have the lead Developer provide approval before deploying an application.
Which solution will meet these requirements?

A. Use AWS CodeBuild to run tests and security scans. Use an Amazon EventBridge rule to send Amazon SNS alerts to the Developers when
unit tests fail. Write AWS Cloud Development Kit (AWS CDK) constructs for different solution features, and use a manifest file to turn features on
and off in the AWS CDK application. Use a manual approval stage in the pipeline to allow the lead Developer to approve applications.

B. Use AWS Lambda to run unit tests and security scans. Use Lambda in a subsequent stage in the pipeline to send Amazon SNS alerts to the
developers when unit tests fail. Write AWS Amplify plugins for different solution features and utilize user prompts to turn features on and off.
Use Amazon SES in the pipeline to allow the lead developer to approve applications.

C. Use Jenkins to run unit tests and security scans. Use an Amazon EventBridge rule in the pipeline to send Amazon SES alerts to the
developers when unit tests fail. Use AWS CloudFormation nested stacks for different solution features and parameters to turn features on and
off. Use AWS Lambda in the pipeline to allow the lead developer to approve applications.

D. Use AWS CodeDeploy to run unit tests and security scans. Use an Amazon CloudWatch alarm in the pipeline to send Amazon SNS alerts to
the developers when unit tests fail. Use Docker images for different solution features and the AWS CLI to turn features on and off. Use a
manual approval stage in the pipeline to allow the lead developer to approve applications.

Correct Answer: C

Community vote distribution


A (100%)

" # Gmail78 Highly Voted $ 1 year, 1 month ago


UI testing (AWS Lambda and AWS CodeBuild). Not sure B is correct, Jenkins is an alternative of AWS Pipeline.A seems the more close one.
upvoted 16 times

" # certainly 1 year ago


A for sure. B is incorrect. 1. Lambda limit of 15 min make it a poor candidate for running unit tests and security scans. AWS Amplify is used for
onboarding, ml, rt-collaboration, doesn't seem to be good fit here
upvoted 5 times

" # Ebi Highly Voted $ 1 year ago


I go with A
upvoted 7 times

" # nsvijay04b1 Most Recent % 2 weeks, 3 days ago


Selected Answer: A
A.
->eventbridge handling codebuild state change and trigger SNS https://docs.aws.amazon.com/codebuild/latest/userguide/sample-build-
notifications.html
-> manual approval stage in pipeline
upvoted 2 times

" # xinhui 3 months, 1 week ago


I will also go with A , AWS CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces
software packages that are ready to deploy.
upvoted 3 times

" # hilft 3 months, 2 weeks ago


A. Codebuild is the only option here

upvoted 2 times
" # tartarus23 6 months, 1 week ago
Selected Answer: A
A. CodeBuild is the AWS managed service for unit tests and scans. I highly doubt AWS will promote third party services such as Jenkins, instead
of their own AWS services.
upvoted 1 times

" # Hari008 7 months, 3 weeks ago


Why on the earth AWS promote Jenkins instead of their own product
upvoted 2 times

" # mousedolly2002 8 months, 2 weeks ago


Should be A
upvoted 1 times

" # AzureDP900 11 months, 1 week ago


A code build and lead approval is key!
upvoted 1 times

" # andylogan 1 year ago


It's A with CDK as integrated development (IDE)
upvoted 1 times

" # tgv 1 year ago


AAA
---
upvoted 1 times

" # blackgamer 1 year ago


Clearly A.
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with A
upvoted 2 times

" # mustpassla 1 year ago


A, major function of CodeBuild.
upvoted 3 times

" # victordun 1 year ago


A - Code build for testing
upvoted 3 times

" # Waiweng 1 year ago


it's A
upvoted 3 times

" # alisyech 1 year ago


i go with A
upvoted 1 times

Question #639 Topic 1

An IoT company has rolled out a fleet of sensors for monitoring temperatures in remote locations. Each device connects to AWS IoT Core and
sends a message every 30 seconds, updating an Amazon DynamoDB table. A System Administrator uses AWS IoT to verify the devices are still
sending messages to AWS IoT Core; however, the database is not updating.
What should a Solutions Architect check to determine why the database is not being updated?

A. Verify the AWS IoT Device Shadow service is subscribed to the appropriate topic and is executing the AWS Lambda function.

B. Verify that AWS IoT monitoring shows that the appropriate AWS IoT rules are being executed, and that the AWS IoT rules are enabled with
the correct rule actions.

C. Check the AWS IoT Fleet indexing service and verify that the thing group has the appropriate IAM role to update DynamoDB.

D. Verify that AWS IoT things are using MQTT instead of MQTT over WebSocket, then check that the provisioning has the appropriate policy
attached.

Correct Answer: D

Community vote distribution


B (100%)

" # keos Highly Voted $ 1 year, 1 month ago


B, https://docs.aws.amazon.com/iot/latest/developerguide/iot-rules.html
upvoted 18 times

" # beso 1 year, 1 month ago


https://docs.aws.amazon.com/iot/latest/developerguide/monitoring_overview.html
upvoted 3 times

" # hilft Most Recent % 3 months, 2 weeks ago


I thought it was C. Forum goes for the B. I guess the keyword here is "IoT rules".
upvoted 1 times

" # aandc 4 months, 1 week ago


Selected Answer: B
keyword :IoT rules
upvoted 1 times

" # AzureDP900 11 months, 1 week ago


B perfect !
upvoted 1 times

" # andylogan 1 year ago


It's B - appropriate actions on the rule to write to DDB.
upvoted 1 times

" # tgv 1 year ago


BBB
---
upvoted 1 times

" # blackgamer 1 year ago


B is the answer.
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with B
upvoted 2 times

" # Kopa 1 year ago


B, All others no reason to change in unexcpected way.
upvoted 2 times

" # mustpassla 1 year ago


Guess B.

upvoted 1 times

" # Waiweng 1 year ago


it's B
upvoted 3 times

" # Kian1 1 year ago


going for B
upvoted 2 times

" # Ebi 1 year, 1 month ago


Answer is B
upvoted 3 times

" # Bulti 1 year, 1 month ago


Answer is B.. Not C because Fleet Group index is for querying thing group not updating.
upvoted 2 times

" # petebear55 1 year, 1 month ago


KNOWING HOW SLIPPERY AWS CAN BE AND HOW THEY LIKE TO THROW A OBVIOUS ANSWER IN THERE SOMETIMES .. WHEN WERE
PROGRAMMED TO LOOK HARDER. .. IT MAY BE SIMPLY C !! ... Given experience of similar questions i may actually choose C for this. However
in all honesty any of the answers could be the right 1. So C for me
upvoted 3 times

" # T14102020 1 year, 1 month ago


Correct is B. appropriate actions
upvoted 1 times

" # cloudgc 1 year, 1 month ago


B - choose the appropriate actions on the rule. in this case write to DDB.
https://docs.aws.amazon.com/iot/latest/developerguide/iot-rule-actions.html
upvoted 1 times
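Option B is essentially asking you to confirm that a rule like the one below exists, is enabled, and has a working action. As an illustration, this boto3 sketch creates an AWS IoT topic rule whose DynamoDBv2 action writes each sensor message into a table; the topic filter, table name, and role ARN are placeholders.

import boto3

iot = boto3.client("iot")

iot.create_topic_rule(
    ruleName="temperature_to_dynamodb",
    topicRulePayload={
        "sql": "SELECT deviceId, temperature, timestamp() AS ts FROM 'sensors/+/temperature'",
        "awsIotSqlVersion": "2016-03-23",
        "ruleDisabled": False,  # a disabled rule is one common reason the table never updates
        "actions": [
            {
                "dynamoDBv2": {
                    "roleArn": "arn:aws:iam::111122223333:role/iot-dynamodb-write",  # placeholder
                    "putItem": {"tableName": "SensorReadings"},                      # placeholder
                }
            }
        ],
    },
)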

Question #640 Topic 1

An enterprise company is using a multi-account AWS strategy. There are separate accounts for development, staging, and production workloads.
To control costs and improve governance, the following requirements have been defined:
✑ The company must be able to calculate the AWS costs for each project.
✑ The company must be able to calculate the AWS costs for each environment: development, staging, and production.
✑ Commonly deployed IT services must be centrally managed.
✑ Business units can deploy pre-approved IT services only.
✑ Usage of AWS resources in the development account must be limited.
Which combination of actions should be taken to meet these requirements? (Choose three.)

A. Apply environment, cost center, and application name tags to all taggable resources.

B. Configure custom budgets and define thresholds using Cost Explorer.

C. Configure AWS Trusted Advisor to obtain weekly emails with cost-saving estimates.

D. Create a portfolio for each business unit and add products to the portfolios using AWS CloudFormation in AWS Service Catalog.

E. Configure a billing alarm in Amazon CloudWatch.

F. Configure SCPs in AWS Organizations to allow services available using AWS.

Correct Answer: CEF

Community vote distribution


ADF (75%) ABF (25%)

" # AK2020 Highly Voted $ 1 year, 1 month ago


A - Tagging & Costing
D - Preapproved and Control
F- Limited service
upvoted 34 times

" # Kelvin1477 1 year, 1 month ago


this options make more sense
upvoted 2 times

" # vipgcp Highly Voted $ 1 year, 1 month ago


A, D, F
A - TAGS for cost management - ok
B - There is no requirement of budgeting
C - no requirement on cost saving or alerts
D - controlled provisioining - ok
E - no requirement of alarm
F - central - ok
upvoted 13 times

" # hilft Most Recent % 3 months, 2 weeks ago


A,B,F
?
upvoted 1 times

" # wassb 3 weeks, 4 days ago


Cost Explorer is not for setting thresholds.
upvoted 1 times

" # aandc 4 months, 1 week ago


Selected Answer: ADF
vote ADF
upvoted 1 times

" # aandc 4 months ago


change to ABF
upvoted 1 times

" # Alexey79 6 months, 3 weeks ago

Selected Answer: ABF


A:
Tagging to have a clear segregation between staging and production workloads and each project.

B:
Visualize, understand, and manage your AWS costs based on Tags created in A:.

F:

Why NOT D:
AWS CloudFormation will not prevent usage of unauthorized AWS Services. SCP is used for that.
upvoted 1 times

" # Alexey79 6 months, 3 weeks ago


A:
Tagging to have a clear segregation between staging and production workloads and each project.

B:
Visualize, understand, and manage your AWS costs based on Tags created in A:.
Having only tagging is not enough.

F:
Use SCP to limit AWS Resources deployment only to “pre-approved IT services only”.

Why NOT D:
AWS CloudFormation will not prevent usage of unauthorized AWS Services as per requirement “Business units can deploy pre-approved IT
services only”. SCP is used for that. CloudFormation is good for deployment of approved AWS Resources, not AWS Services.
upvoted 1 times
" # roka_ua 7 months, 1 week ago
Selected Answer: ADF
Vote ADF
upvoted 2 times

" # AzureDP900 11 months, 1 week ago


I will go with ADF, It is most appropriate for given scenario.
upvoted 1 times

" # andylogan 1 year ago


It's A D F - tag, limit access with SCP
upvoted 1 times

" # tgv 1 year ago


AAA DDD FFF
---
upvoted 1 times

" # blackgamer 1 year ago


Agree with ADF.
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with A,D,F
upvoted 2 times

" # daisyli 1 year ago


I think where B is wrong is the 'Cost Explorer'. If using 'AWS Budgets', B may be one of the right answer.
B. Configure custom budgets and define thresholds using Cost Explorer.
Cost Explorer:
https://aws.amazon.com/aws-cost-management/aws-cost-explorer/?nc1=h_ls
AWS Budgets:
https://aws.amazon.com/aws-cost-management/aws-budgets/?nc1=h_ls
upvoted 3 times

" # mustpassla 1 year ago


ADF for sure.
upvoted 2 times

" # digimaniac 1 year ago


what does " Usage of AWS resources in the development account must be limited" means? meaning don't create crazy expensive EC2 or don't
use EC2 at all?
upvoted 1 times

" # Waiweng 1 year ago


it's A,D,F

upvoted 2 times
" # Kian1 1 year ago
will go with ADF
upvoted 2 times

" # Ebi 1 year ago


I will go with ADF
upvoted 4 times
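For the SCP part of the answer (F), the sketch below creates a service control policy that denies everything outside a short, purely illustrative list of pre-approved services and attaches it to the development OU. The allowed-service list and OU ID are placeholders; a real policy would be driven by the company's approved service catalog.

import json

import boto3

org = boto3.client("organizations")

scp_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "NotAction": [
                "ec2:*",
                "s3:*",
                "dynamodb:*",
                "cloudformation:*",
                "servicecatalog:*",
            ],
            "Resource": "*",
        }
    ],
}

policy = org.create_policy(
    Name="dev-approved-services",
    Description="Limit the development accounts to pre-approved services",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp_document),
)["Policy"]["PolicySummary"]

# Attach the SCP to the development organizational unit (placeholder ID).
org.attach_policy(PolicyId=policy["Id"], TargetId="ou-ab12-devunits0")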

Question #641 Topic 1

A company is planning to migrate an existing high performance computing (HPC) solution to the AWS Cloud. The existing solution consists of a
12-node cluster running Linux with high speed interconnectivity developed on a single rack. A solutions architect needs to optimize the
performance of the HPC cluster.
Which combination of steps will meet these requirements? (Choose two.)

A. Deploy instances across at least three Availability Zones.

B. Deploy Amazon EC2 instances in a placement group.

C. Use Amazon EC2 instances that support Elastic Fabric Adapter (EFA).

D. Use Amazon EC2 instances that support burstable performance.

E. Enable CPU hyperthreading.

Correct Answer: BE

Community vote distribution


BC (100%)

" # asldavid Highly Voted $ 1 year, 1 month ago


B and C
upvoted 20 times

" # Gmail78 1 year, 1 month ago


B - C https://www.iucc.ac.il/en/blog/best-practices-for-running-hpc-on-aws/
upvoted 3 times

" # Kelvin1477 1 year, 1 month ago


agree.. B&C
upvoted 1 times

" # KevinZhong Highly Voted $ 1 year ago


BC
----------------
A: HA is not the case
B: placement group is good for HPC, refer to https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html
C: EFA is good for HPC, refer to https://aws.amazon.com/hpc/efa/
D: burstable is not the case
E: we need to Disable hyper-threading, refer to https://www.iucc.ac.il/en/blog/best-practices-for-running-hpc-on-aws/
upvoted 9 times

" # Sonujunko 8 months, 1 week ago


https://aws.amazon.com/blogs/compute/disabling-intel-hyper-threading-technology-on-amazon-linux/
upvoted 1 times

" # AMKazi Most Recent % 9 months ago


B and C
upvoted 1 times

" # cldy 10 months, 1 week ago


B and C.
upvoted 1 times

" # cldy 11 months ago


B. Deploy Amazon EC2 instances in a placement group.
C. Use Amazon EC2 instances that support Elastic Fabric Adapter (EFA).
upvoted 1 times

" # Rho_Ohm 11 months ago


Selected Answer: BC
B and C
upvoted 1 times

" # AzureDP900 11 months ago


I'll go with B,C

upvoted 1 times
" # AzureDP900 11 months, 1 week ago
B and C is correct
upvoted 1 times

" # andylogan 1 year ago


It's B C, placement group and Elastic Fabric Adapter (EFA).
upvoted 1 times

" # tgv 1 year ago


BBB CCC
---
upvoted 1 times

" # blackgamer 1 year ago


B and C.
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with B,C
upvoted 2 times

" # Kopa 1 year ago


b,C for sure
upvoted 1 times

" # Waiweng 1 year ago


it's B&C
upvoted 3 times

" # PredaOvde 1 year ago


B say "in a placement group". It does not say cluster partition group, which would improve performance. A placement group could also be
spread, which would decrease performance.
upvoted 2 times

" # Taku 1 year, 1 month ago


will go with B &C .....

https://aws.amazon.com/hpc/efa/

Elastic Fabric Adapter (EFA) is a network interface for Amazon EC2 instances that enables customers to run applications requiring high levels of
inter-node communications at scale on AWS. Its custom-built operating system (OS) bypass hardware interface enhances the performance of
inter-instance communications, which is critical to scaling these applications. With EFA, High Performance Computing (HPC) applications using
the Message Passing Interface (MPI) and Machine Learning (ML) applications using NVIDIA Collective Communications Library (NCCL) can scale
to thousands of CPUs or GPUs. As a result, you get the application performance of on-premises HPC clusters with the on-demand elasticity and
flexibility of th...
upvoted 1 times

" # Kian1 1 year, 1 month ago


going with BC
upvoted 2 times
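A minimal boto3 sketch of the two chosen steps, a cluster placement group plus EFA-enabled instances, is below. The AMI, subnet, security group, and key name are placeholders, and the instance type must be one that supports EFA (c5n.18xlarge is used here purely as an example).

import boto3

ec2 = boto3.client("ec2")

# Cluster placement groups pack instances close together for low-latency networking.
ec2.create_placement_group(GroupName="hpc-cluster", Strategy="cluster")

# Launch the 12 HPC nodes with an Elastic Fabric Adapter as the primary interface.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",       # placeholder Linux AMI
    InstanceType="c5n.18xlarge",           # an EFA-capable instance type
    MinCount=12,
    MaxCount=12,
    KeyName="hpc-key",                     # placeholder key pair
    Placement={"GroupName": "hpc-cluster"},
    NetworkInterfaces=[
        {
            "DeviceIndex": 0,
            "SubnetId": "subnet-0aaa",             # placeholder subnet
            "Groups": ["sg-0123456789abcdef0"],    # placeholder security group
            "InterfaceType": "efa",                # Elastic Fabric Adapter
        }
    ],
)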

Question #642 Topic 1

A company hosts a game player-matching service on a public-facing, physical, on-premises instance that all users are able to access over the
internet. All traffic to the instance uses UDP. The company wants to migrate the service to AWS and provide a high level of security. A solutions
architect needs to design a solution for the player-matching service using AWS.
Which combination of steps should the solutions architect take to meet these requirements? (Choose three.)

A. Use a Network Load Balancer (NLB) in front of the player-matching instance. Use a friendly DNS entry in Amazon Route 53 pointing to the
NLB's Elastic IP address.

B. Use an Application Load Balancer (ALB) in front of the player-matching instance. Use a friendly DNS entry in Amazon Route 53 pointing to
the ALB's internet-facing fully qualified domain name (FQDN).

C. Define an AWS WAF rule to explicitly drop non-UDP traffic, and associate the rule with the load balancer.

D. Configure a network ACL rule to block all non-UDP traffic. Associate the network ACL with the subnets that hold the load balancer
instances.

E. Use Amazon CloudFront with an Elastic Load Balancer as an origin.

F. Enable AWS Shield Advanced on all public-facing resources.

Correct Answer: BDF

Community vote distribution


ADF (100%)

" # smartassX Highly Voted $ 1 year, 1 month ago


ADF "If your application is used only for TCP traffic, you can create a rule to deny all UDP traffic, or vice versa" https://docs.aws.amazon.com
/whitepapers/latest/aws-best-practices-ddos-resiliency/security-groups-and-network-access-control-lists-nacls-bp5.html
upvoted 24 times
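
To make the NACL approach in option D concrete, here is a minimal boto3 sketch, assuming a hypothetical network ACL ID and an example game port of 7777. NACLs are stateless, so a real deployment would also need matching outbound rules for the return traffic.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Hypothetical network ACL attached to the subnets that hold the NLB nodes.
NACL_ID = "acl-0123456789abcdef0"

# Rule 100: allow inbound UDP to the example game port.
ec2.create_network_acl_entry(
    NetworkAclId=NACL_ID,
    RuleNumber=100,
    Protocol="17",            # 17 = UDP
    RuleAction="allow",
    Egress=False,
    CidrBlock="0.0.0.0/0",
    PortRange={"From": 7777, "To": 7777},
)

# Rule 200: explicitly deny all other inbound traffic (evaluated after rule 100).
ec2.create_network_acl_entry(
    NetworkAclId=NACL_ID,
    RuleNumber=200,
    Protocol="-1",            # all protocols
    RuleAction="deny",
    Egress=False,
    CidrBlock="0.0.0.0/0",
)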

" # kangtamo Most Recent % 4 months, 1 week ago


Selected Answer: ADF
Agree with ADF.
upvoted 2 times

" # Ni_yot 9 months, 2 weeks ago


ADF. NLB supports UDP
upvoted 1 times

" # AzureDP900 11 months, 1 week ago


A,D,F is perfect answer
The Network Load Balancer (NLB) supports the UDP protocol and can be placed in front of the application instance.
This configuration may add some security if the instance is running in a private subnet.
An NLB can be configured with an Elastic IP in each subnet in which it has nodes. In this case it only has a single
subnet (one instance) and so there will be 1 EIP.
Route 53 can be configured to resolve directly to the EIP rather than the DNS name of the NLB as there is only one IP
address to return. To filter traffic the network ACL for the subnet can be configured to block all non-UDP traffic.
This solution meets all the stated requirements.
upvoted 1 times

" # andylogan 1 year ago


It's A D F, NLB + NACL + AWS Shield Advanced
upvoted 2 times

" # student22 1 year ago


ADF
NLB + NACL + AWS Shield Advanced
upvoted 2 times

" # tgv 1 year ago


AAA DDD FFF
---
upvoted 1 times

" # blackgamer 1 year ago


A over B because the application needs UDP port. NLB is the answer here. D and F are Okay.


upvoted 1 times
" # denccc 1 year ago
I would think BDF? Not sure if the order of answers changed? WAF for ALB.
upvoted 1 times

" # denccc 1 year ago


Oh yes, UDP... my bad
upvoted 2 times

" # WhyIronMan 1 year ago


I'll go with A,D,F
upvoted 3 times

" # mustpassla 1 year ago


ADF for sure. keyword: UDP and security.
upvoted 1 times

" # Waiweng 1 year ago


it's A,D,F
upvoted 4 times

" # Kian1 1 year ago


going with ADF
upvoted 3 times

" # Justu 1 year ago


I would go ACF, WAF is better solution to defend your load balancer than NACL.
upvoted 2 times

" # rasti 1 year ago


C is wrong. WAF is only for ALB, not for NLB.
ADF is correct
upvoted 11 times

" # nqobza 1 year, 1 month ago


The correct answer is AEF. I think people are getting caught up with trying to block UDP traffic. There is no need for that as on the Network load
balancer we would only add a UDP listener so anything outside of UDP would be rejected anyway.
upvoted 1 times

" # Ebi 1 year, 1 month ago


I will go with ADE,
I don't see any reason to enable expensive Advanced Shield when there is no specific requirement in the question
upvoted 3 times

" # Ebi 1 year ago


Changed my answer to ADF:
https://aws.amazon.com/blogs/networking-and-content-delivery/accelerate-protect-games-with-amazon-cloudfront-aws-shield-aws-waf/
upvoted 5 times

" # certainly 1 year ago


why not E. CloudFront is also recommended in referenced aws blog. also, https://docs.aws.amazon.com/whitepapers/latest/aws-best-
practices-ddos-resiliency/protecting-your-origin-bp1-bp5.html
upvoted 2 times

" # heyheyhei 1 year ago


CloudFront does not support UDP. Instead, Global Accelerator can be used for UDP gaming
upvoted 8 times

" # Bulti 1 year, 1 month ago


ADF. Not B because this is a traffic using layer 4 Protocol (UDP) and NLB is a better fit to handle this traffic than ALB. Not C because WAF
protects ALB and is meant to protect web application traffic mainly HTTP and HTTPS.
upvoted 4 times


Question #643 Topic 1

A company has multiple AWS accounts and manages these accounts with AWS Organizations. A developer was given IAM user credentials to
access AWS resources. The developer should have read-only access to all Amazon S3 buckets in the account. However, when the developer tries
to access the S3 buckets from the console, they receive an access denied error message with no bucket listed.
A solutions architect reviews the permissions and finds that the developer's IAM user is listed as having read-only access to all S3 buckets in the
account.
Which additional steps should the solutions architect take to troubleshoot the issue? (Choose two.)

A. Check the bucket policies for all S3 buckets.

B. Check the ACLs for all S3 buckets.

C. Check the SCPs set at the organizational units (OUs).

D. Check for the permissions boundaries set for the IAM user.

E. Check if an appropriate IAM role is attached to the IAM user.

Correct Answer: DE

Community vote distribution


CD (67%) AB (33%)

" # XRiddlerX Highly Voted $ 1 year ago


Answer C and D
- My two cents:
A is INCORRECT even though a bucket policy IS a resource based policy and will be evaluated AFTER Organizations SCPs, if a DENY is set in
the policy you will list see it listed. You will see the word "ERROR" in the Access column.

B is INCORRECT because even though ACLs are resource-based policies you use ACLs to grant basic read/write permissions on the objects in
the bucket. You'll still be able to ListBuckets if there is an ACL on the bucket.

C is CORRECT because after the Deny Evaluation a Organization SCPs are evaluated and take affect/merged. (See Link Below)

D is CORRECT because a DENY on the permission boundary will not allow the developer to ListBuckets

E is INCORRECT because this is a IAM Permission and applied AFTER DENY, ORG SCP, and RESOURCE-based policy evaluation. In addition
the Solution Architect checked the developers IAM User and it was listed as readonly.

https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_evaluation-logic.html#policy-eval-denyallow
upvoted 33 times
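
A practical way to run the checks in C and D is the IAM policy simulator, which reports whether an Organizations SCP or a permissions boundary is what blocks the call. A minimal boto3 sketch, assuming a hypothetical user ARN:

import boto3

iam = boto3.client("iam")

# Hypothetical ARN of the developer's IAM user.
USER_ARN = "arn:aws:iam::111122223333:user/developer"

resp = iam.simulate_principal_policy(
    PolicySourceArn=USER_ARN,
    ActionNames=["s3:ListAllMyBuckets", "s3:ListBucket"],
)

for result in resp["EvaluationResults"]:
    print(result["EvalActionName"], "->", result["EvalDecision"])
    # False here means an Organizations SCP blocks the action.
    org = result.get("OrganizationsDecisionDetail", {})
    if org.get("AllowedByOrganizations") is False:
        print("  blocked by an Organizations SCP")
    # False here means the user's permissions boundary blocks the action.
    pb = result.get("PermissionsBoundaryDecisionDetail", {})
    if pb.get("AllowedByPermissionsBoundary") is False:
        print("  blocked by the user's permissions boundary")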

" # liono Highly Voted $ 1 year, 1 month ago


c,d seems correct
upvoted 17 times

" # Cal88 Most Recent % 6 days, 23 hours ago


The answer is CD

A. Check the bucket policies for all S3 buckets.


Not relevant , the user is facing an issue to list all buckets.
If the question is about access denied when trying to read or write from some bucket then this might be the cause

B. Check the ACLs for all S3 buckets.


Same as A

C. Check the SCPs set at the organizational units (OUs).


Correct , even if a user has IAM permission to access a service if the SCP for his OU denies it he cant access
D. Check for the permissions boundaries set for the IAM user.
This is correct , the issue could be because of the permission set for the IAM user

E. Check if an appropriate IAM role is attached to the IAM user.


Not relevant; the permission to access S3 in the question is defined on the user, and there is no mention that the user is assuming a role or that
an ec2 instance with that role is having the problem
upvoted 1 times

" # Enigmaaaaaa 4 months ago


CD for me.
Since we cannot list any bucket at all - A& B are excluded.


E - we already have this kind of access - not relevant.


upvoted 1 times
" # kangtamo 4 months, 2 weeks ago
Selected Answer: CD
Agree with CD.
upvoted 2 times

" # HellGate 7 months, 2 weeks ago


Selected Answer: AB
There are several ways to control access S3 bucket.
- IAM user policy
- bucket policy
- ACLs
- S3 block public access
If setting related with IAM is right, we should check their bucket policies and ACLs. So answer is A and B.
upvoted 1 times

" # AzureDP900 11 months, 1 week ago


C, D is correct
A service control policy (SCP) may have been implemented that limits the API actions that are available for Amazon
S3. This will apply to all users in the account regardless of the permissions they have assigned to their user account.
Another potential cause of the issue is that the permissions boundary for the user limits the S3 API actions available
to the user. A permissions boundary is an advanced feature for using a managed policy to set the maximum
permissions that an identity-based policy can grant to an IAM entity. An entity’s permissions boundary allows it to
perform only the actions that are allowed by both its identity-based policies and its permissions boundaries
upvoted 1 times

" # andylogan 1 year ago


It's C D with evaluating Identity-based policies with boundaries
upvoted 2 times

" # DerekKey 1 year ago


C&D correct
https://aws.amazon.com/premiumsupport/knowledge-center/s3-troubleshoot-403/
upvoted 3 times

" # tgv 1 year ago


CCC DDD
---
upvoted 1 times

" # blackgamer 1 year ago


CD is the answer.
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with C,D
upvoted 2 times

" # Waiweng 1 year ago


it's C,D
upvoted 3 times

" # Amitv2706 1 year ago


C and D.

If I go with IAM Policy Evaluation Logic mentioned here :


https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_evaluation-logic.html#policy-eval-denyallow

Deny Evaluation : There is no explicit deny mentioned


Organizations SCPs : C (not checked as per question)
Resource based policies : Not given as a option or mentioned in question
IAM permission boundaries : D (not checked as per question)
Session Policies : Not given as a option or mentioned in question
Identity Based Policy : Based on question user(or its group which is implicit) is already having read-only access to all S3 buckets
Errors : Not given as a option or mentioned in question
upvoted 2 times

" # alisyech 1 year ago


i choose C & D
upvoted 1 times


" # nitinz 1 year ago


trick question C&D check the vein dig on https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_boundaries.html and things will
make sense.
upvoted 1 times

" # kiev 1 year ago


CD WOULD WORK FOR ME AS WELL.
upvoted 1 times

Question #644 Topic 1

A company is planning to migrate its business-critical applications from an on-premises data center to AWS. The company has an on-premises
installation of a
Microsoft SQL Server Always On cluster. The company wants to migrate to an AWS managed database service. A solutions architect must design
a heterogeneous database migration on AWS.
Which solution will meet these requirements?

A. Migrate the SQL Server databases to Amazon RDS for MySQL by using backup and restore utilities.

B. Use an AWS Snowball Edge Storage Optimized device to transfer data to Amazon S3. Set up Amazon RDS for MySQL. Use S3 integration
with SQL Server features, such as BULK INSERT.

C. Use the AWS Schema Conversion Tool to translate the database schema to Amazon RDS for MySQL. Then use AWS Database Migration
Service (AWS DMS) to migrate the data from on-premises databases to Amazon RDS.

D. Use AWS DataSync to migrate data over the network between on-premises storage and Amazon S3. Set up Amazon RDS for MySQL. Use S3
integration with SQL Server features, such as BULK INSERT.

Correct Answer: A
Reference:
https://docs.aws.amazon.com/dms/latest/sbs/dms-sbs-welcome.html

Community vote distribution


C (100%)

" # Ni_yot 9 months, 2 weeks ago


c of cause
upvoted 1 times

" # wpinfo 10 months, 1 week ago


Selected Answer: C
answer should be C. The AWS Schema Conversion Tool (AWS SCT) makes heterogeneous database migrations. https://aws.amazon.com
/dms/schema-conversion-tool/
upvoted 1 times

" # notabot2 10 months, 1 week ago


I go with C
upvoted 1 times

" # RamCrk 10 months, 1 week ago


C is my preference, because a heterogeneous migration must be designed.
https://aws.amazon.com/dms/schema-conversion-tool/
upvoted 1 times

" # rootx 10 months, 1 week ago


Selected Answer: C
C seems to be the best fit
upvoted 1 times


Question #645 Topic 1

A company has an application that generates reports and stores them in an Amazon S3 bucket. When a user accesses their
report, the application generates a signed URL to allow the user to download the report. The company's security team has discovered that the files
are public and that anyone can download them without authentication. The company has suspended the generation of new reports until the
problem is resolved.
Which set of actions will immediately remediate the security issue without impacting the application's normal workflow?

A. Create an AWS Lambda function that applies all policy for users who are not authenticated. Create a scheduled event to invoke the Lambda
function.

B. Review the AWS Trusted advisor bucket permissions check and implement the recommend actions.

C. Run a script that puts a private ACL on all of the objects in the bucket.

D. Use the Block Public Access feature in Amazon S3 to set the IgnorePublicAcls option to TRUE on the bucket.

Correct Answer: B

Community vote distribution


D (100%)

" # Ebi Highly Voted $ 1 year ago


I'll go with D
upvoted 7 times

" # Waiweng Highly Voted $ 1 year ago


it's D
upvoted 6 times

" # user0001 5 months, 3 weeks ago


from documentation
Setting this option to TRUE causes Amazon S3 to ignore all public ACLs on a bucket and any objects that it contains. This setting enables you
to safely block public access granted by ACLs while still allowing PUT Object calls that include a public ACL (as opposed to BlockPublicAcls,
which rejects PUT Object calls that include a public ACL). Enabling this setting doesn't affect the persistence of any existing ACLs and
doesn't prevent new public ACLs from being set.
upvoted 1 times

" # kangtamo Most Recent % 4 months, 1 week ago


Selected Answer: D
D sounds better.
upvoted 2 times

" # GeniusMikeLiu 9 months, 2 weeks ago


" The company's security staff determined that the files are accessible to the public and may be downloaded without authentication" mean want
public access right? why D? so confused
upvoted 1 times

" # cldy 11 months ago


D. Use the Block Public Access feature in Amazon S3 to set the IgnorePublicAcis option to TRUE on the bucket.
upvoted 2 times

" # AzureDP900 11 months, 1 week ago


D is right.
The S3 bucket is allowing public access and this must be immediately disabled. Setting the IgnorePublicAcls option
to TRUE causes Amazon S3 to ignore all public ACLs on a bucket and any objects that it contains.
The other settings you can configure with the Block Public Access Feature are:
o BlockPublicAcls – PUT bucket ACL and PUT objects requests are blocked if granting public access.
o BlockPublicPolicy – Rejects requests to PUT a bucket policy if granting public access.
o RestrictPublicBuckets – Restricts access to principles in the bucket owners’ AWS account.
upvoted 2 times
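
As a minimal illustration of option D, the setting can be applied with boto3 as below. The bucket name is a placeholder, and only IgnorePublicAcls is strictly required; the other flags are included just to show the full Block Public Access structure.

import boto3

s3 = boto3.client("s3")

# Hypothetical bucket that holds the generated reports.
BUCKET = "example-report-bucket"

s3.put_public_access_block(
    Bucket=BUCKET,
    PublicAccessBlockConfiguration={
        "IgnorePublicAcls": True,       # existing public ACLs are ignored immediately
        "BlockPublicAcls": False,
        "BlockPublicPolicy": False,
        "RestrictPublicBuckets": False,
    },
)

# Pre-signed URLs keep working because they are evaluated against the
# permissions of the IAM principal that signed them, not against public ACLs.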

" # andylogan 1 year ago


It's D - pre-signed URL is to allows unauthenticated users access to the bucket in private
upvoted 2 times

" # tgv 1 year ago


DDD


---
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with D
upvoted 1 times

" # Kian1 1 year ago


going with D
upvoted 5 times

" # Bulti 1 year ago


Answer is D. Remember that the purpose of creating a pre-signed URL is to allows unauthenticated users access to the bucket or the objects in
the bucket which are private. So if someone can still access the bucket then the buckets or the objects in the bucket have been granted a public
ACL which needs to be blocked and the way to do that is by using the IgnorePublicAcls setting.
upvoted 4 times

" # petebear55 1 year ago


B could be the answer; however, it would probably be AWS Macie that does this. I will go for D in this case, although I'm not a hundred
percent convinced, and I think the question is poorly written.
upvoted 1 times

" # shammous 1 year ago


B won't "immediately remediate the security issue". D would.
upvoted 2 times

" # T14102020 1 year ago


Correct is D.
upvoted 1 times

" # jackdryan 1 year, 1 month ago


I'll go with D
upvoted 2 times

" # smartassX 1 year, 1 month ago


D --> "IgnorePublicAcis" --> "Setting this option to TRUE causes Amazon S3 to ignore all public ACLs on a bucket and any objects that it
contains. This setting enables you to safely block public access granted by ACLs while still allowing PUT Object calls that include a public ACL
(as opposed to BlockPublicAcls, which rejects PUT Object calls that include a public ACL). Enabling this setting doesn't affect the persistence of
any existing ACLs and doesn't prevent new public ACLs from being set."
upvoted 4 times

" # asldavid 1 year, 1 month ago


D
https://aws.amazon.com/s3/features/block-public-access/
upvoted 3 times

" # Gmail78 1 year, 1 month ago


what is IgnorePublicAcis? I would go with A instead
upvoted 1 times

" # avland 1 year, 1 month ago


Pretty sure there's a typo there. Should be IgnorePublicAcls.

Block public access to buckets and objects granted through any access control lists (ACLs)
S3 will ignore all ACLs that grant public access to buckets and objects.
upvoted 3 times

" # Kelvin1477 1 year, 1 month ago


Support D too as mention pre-signed url that is shared to the user will not be block but the policy will block any other public access:
https://acloud.guru/forums/s3-masterclass/discussion/-LsBZBXjnnNdi4dT1Czi/block%20public%20access%20vs%20pre-
signed%20URL%20access
upvoted 2 times


Question #646 Topic 1

A company hosts a legacy application that runs on an Amazon EC2 instance inside a VPC without internet access. Users access the application
with a desktop program installed on their corporate laptops. Communication between the laptops and the VPC flows through AWS Direct Connect
(DX). A new requirement states that all data in transit must be encrypted between users and the VPC.
Which strategy should a solutions architect use to maintain consistent network performance while meeting this new requirement?

A. Create a client VPN endpoint and configure the laptops to use an AWS client VPN to connect to the VPC over the internet.

B. Create a new public virtual interface for the existing DX connection, and create a new VPN that connects to the VPC over the DX public
virtual interface.

C. Create a new Site-to-Site VPN that connects to the VPC over the internet.

D. Create a new private virtual interface for the existing DX connection, and create a new VPN that connects to the VPC over the DX private
virtual interface.

Correct Answer: D

Community vote distribution


B (67%) D (33%)

" # liono Highly Voted $ 1 year, 1 month ago


B
https://aws.amazon.com/premiumsupport/knowledge-center/create-vpn-direct-connect/
upvoted 27 times

" # user0001 5 months, 2 weeks ago


it is D, there is no requirement to access public services so no need for public VIP
upvoted 2 times

" # DashL 1 year ago


To connect to a VPC, it is required to connect to a Private Virtual interface over Direct connect. I guess an AWS document will be more
accurate than any blog post:
https://docs.aws.amazon.com/whitepapers/latest/aws-vpc-connectivity-options/aws-direct-connect-vpn.html
upvoted 4 times

" # helpaws 3 months, 1 week ago


Private VIFs do not provide encryption.. Public VIFs can via IPSEC. you also cannot establish a VPN connection without a Public VIF.
upvoted 5 times

" # Bulti Highly Voted $ 1 year ago


Answer is B. https://aws.amazon.com/premiumsupport/knowledge-center/create-vpn-direct-connect/. Remember that to connect to services
such as EC2 using just Direct Connect you need to create a private VIF. However if you want to encrypt the traffic flowing through DirectConnect,
you will need to use the public VIF of DX to create a VPN connection that will allow access to AWS services such as S3, EC2 etc. The video
describes this.
upvoted 11 times
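
For anyone curious what option B translates to at the API level, here is a rough boto3 sketch of the control-plane calls: a public virtual interface on the existing DX connection plus a Site-to-Site VPN terminating on a virtual private gateway. Every ID, ASN and IP address is a placeholder assumption, and the BGP/IPsec configuration on the customer router is not shown.

import boto3

dx = boto3.client("directconnect")
ec2 = boto3.client("ec2")

# 1. Public virtual interface on the existing Direct Connect connection
#    (connection ID, VLAN, ASN and peer addresses are placeholders).
dx.create_public_virtual_interface(
    connectionId="dxcon-EXAMPLE",
    newPublicVirtualInterface={
        "virtualInterfaceName": "vpn-over-dx",
        "vlan": 101,
        "asn": 65000,
        "amazonAddress": "203.0.113.1/30",
        "customerAddress": "203.0.113.2/30",
        "routeFilterPrefixes": [{"cidr": "198.51.100.0/24"}],
    },
)

# 2. Site-to-Site VPN terminating on a virtual private gateway in the VPC.
cgw = ec2.create_customer_gateway(BgpAsn=65000, PublicIp="198.51.100.10", Type="ipsec.1")
vgw = ec2.create_vpn_gateway(Type="ipsec.1")
ec2.attach_vpn_gateway(VpcId="vpc-EXAMPLE", VpnGatewayId=vgw["VpnGateway"]["VpnGatewayId"])
ec2.create_vpn_connection(
    Type="ipsec.1",
    CustomerGatewayId=cgw["CustomerGateway"]["CustomerGatewayId"],
    VpnGatewayId=vgw["VpnGateway"]["VpnGatewayId"],
)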

" # JohnPi Most Recent % 2 weeks, 5 days ago


Selected Answer: B
you need public VIF.
To implement a Private IP VPN with AWS Direct Connect you need a transit virtual interface, DXG, transit gateway
upvoted 1 times

" # Enigmaaaaaa 3 months, 1 week ago


This is clearly stated in AWS documentation
https://docs.aws.amazon.com/whitepapers/latest/aws-vpc-connectivity-options/aws-direct-connect-vpn.html
The answer must B as IPSec tunnels are always public
upvoted 1 times

" # hilft 3 months, 1 week ago


got the DX. D > B
upvoted 1 times

" # aandc 4 months, 1 week ago


B you need to use the public VIF of DX to create a VPN connection
https://docs.aws.amazon.com/whitepapers/latest/aws-vpc-connectivity-options/aws-direct-connect-vpn.html


upvoted 1 times
" # TechX 4 months, 1 week ago
Selected Answer: D
D for me
upvoted 1 times

" # TechX 4 months, 1 week ago


Just ignore D, after asking my experienced senior SA. It should be B, cause now he is also doing a same solution for a company has the same
case in this question
upvoted 2 times

" # Ddssssss 4 months, 2 weeks ago


I don't understand why it cant be D?? Just because 90% of the time you would use the Public interface doesn't mean you cant use the private.
Its a valid DX configuration option with IPSEC tunnel.
Private virtual interface: A private virtual interface should be used to access an Amazon VPC using private IP addresses.

https://docs.aws.amazon.com/directconnect/latest/UserGuide/WorkingWithVirtualInterfaces.html

It is also clearly explain in this blog which references all the details in any AWS doc.
https://jayendrapatil.com/tag/direct-connect/

This doc is also only 2 days old. but with the use of a transit GW you can use Private IP and IPSEC.

https://aws.amazon.com/blogs/networking-and-content-delivery/introducing-aws-site-to-site-vpn-private-ip-vpns/
upvoted 1 times

" # Hasitha99 6 months, 1 week ago


Selected Answer: B
To connect to services such as EC2 using just Direct Connect you need to create a private VIF. However if you want to encrypt the traffic flowing
through DirectConnect, you will need to use the public VIF of DX to create a VPN connection that will allow access to AWS services such as S3,
EC2.
upvoted 1 times

" # azure_kai 7 months, 2 weeks ago


Selected Answer: D
I would choose D. There is no internet connection. And the traffic is between corporate network and VPC. Most likely, it only involves private IP
addresses, which only requires a private
virtual interface over DX.
upvoted 1 times

" # jyrajan69 8 months, 1 week ago


There is no debate, link from liono clearly shows step by step solution. Answer is B
upvoted 1 times

" # lifebegins 8 months, 3 weeks ago


Answer is D:
We should go over the following:
With AWS Direct Connect and AWS Site-to-Site VPN, you can combine one or more AWS Direct Connect dedicated network connections with
the Amazon VPC VPN

https://docs.aws.amazon.com/directconnect/latest/UserGuide/encryption-in-transit.html
upvoted 1 times

" # HellGate 9 months ago


My answer is D.

Why do we need public virtual interface for communication between laptop and VPC over DX? There are no requirements of accessing from
internet. It should be PRIVATE virtual interface.
upvoted 1 times

" # futen0326 8 months, 3 weeks ago


Private VIFs do not provide encryption.. Public VIFs can via IPSEC. you also cannot establish a VPN connection without a Public VIF.
upvoted 2 times

" # Naj_64 4 weeks, 1 day ago


You can with a Transit VIF "Private IP VPN is deployed on top of Transit VIFs" -- https://aws.amazon.com/blogs/networking-and-content-
delivery/introducing-aws-site-to-site-vpn-private-ip-vpns/ Answer is still B though.
upvoted 1 times

" # GV19 9 months, 2 weeks ago


Selected Answer: B
to establish VPN over DX, Public VIF is required, Only Option B has this detail;
upvoted 2 times


" # KiraguJohn 11 months ago


VPC does not have internet connection.
Private virtual interface: used to access an VPC using private IP addresses.
Public virtual interface: can access all AWS public services using public IP addresses.
upvoted 1 times

" # AzureDP900 11 months, 1 week ago


B is right answer
Create a new public virtual interface for the existing DX connection, and create a new VPN that connects to the VPC over the DX public virtual
interface.
upvoted 1 times

" # Ronon 11 months, 3 weeks ago


Using PRIVATE virtual interface to connect to legacy application in an Amazon EC2. Answer D is correct.
upvoted 2 times


Question #647 Topic 1

A company is creating a centralized logging service running on Amazon EC2 that will receive and analyze logs from hundreds of AWS accounts.
AWS PrivateLink is being used to provide connectivity between the client services and the logging service.
In each AWS account with a client, an interface endpoint has been created for the logging service and is available. The logging service EC2
instances and the Network Load Balancer (NLB) in front of them are deployed in different subnets. The clients are unable to submit logs using the VPC
endpoint.
Which combination of steps should a solutions architect take to resolve this issue? (Choose two.)

A. Check that the NACL is attached to the logging service subnet to allow communications to and from the NLB subnets. Check that the NACL
is attached to the NLB subnet to allow communications to and from the logging service subnets running on EC2 instances.

B. Check that the NACL is attached to the logging service subnets to allow communications to and from the interface endpoint subnets.
Check that the NACL is attached to the interface endpoint subnet to allow communications to and from the logging service subnets running
on EC2 instances.

C. Check the security group for the logging service running on the EC2 instances to ensure it allows ingress from the NLB subnets.

D. Check the security group for the logging service running on the EC2 instances to ensure it allows ingress from the clients.

E. Check the security group for the NLB to ensure it allows ingress from the interface endpoint subnets.

Correct Answer: DE

Community vote distribution


BD (50%) AC (50%)

" # cloudgc Highly Voted $ 1 year ago


A&C are correct.
NLB will see traffic from interface endpoint subnet and logging service subnet.
Logging service SG will see traffic only from NLB IP.
upvoted 36 times
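
To illustrate the point about the source IP, here is a minimal boto3 sketch that opens the logging-service security group to the NLB subnets (answer C). The security group ID, CIDRs and port are placeholder assumptions.

import boto3

ec2 = boto3.client("ec2")

# Hypothetical security group on the logging-service EC2 instances and the
# CIDRs of the subnets that contain the NLB nodes.
LOGGING_SG = "sg-0123456789abcdef0"
NLB_SUBNET_CIDRS = ["10.0.1.0/24", "10.0.2.0/24"]

# PrivateLink traffic reaches the targets with the NLB's private IPs as the
# source, so allowing the NLB subnets (not the remote clients) is enough.
ec2.authorize_security_group_ingress(
    GroupId=LOGGING_SG,
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 443,          # example listener/target port
            "ToPort": 443,
            "IpRanges": [{"CidrIp": cidr} for cidr in NLB_SUBNET_CIDRS],
        }
    ],
)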

" # Bulti Highly Voted $ 1 year ago


A&C. The client of the Logging service running on EC2 is NLB and not the interface endpoint. the flow is
Client->VPCE(PrivateLink)->NLB->Logging service. So the answer is A & C 100%.
upvoted 12 times

" # rcher 1 year ago


Agreed, and i implemented this architecture for my work.

NLB sits in front of the Logging Services, so the NACL and Sec groups for the corresponding logging instances (and its subnet) need to check
for the NLB ingress. A/C for me
upvoted 2 times

" # nitinz 1 year ago


you got it right.
upvoted 1 times

" # JohnPi Most Recent % 1 month ago


Selected Answer: BD
B +D is corect
upvoted 1 times

" # foureye2004 3 months, 1 week ago


I think A&D and include C.

With an NLB, the security group attached to the target EC2 instances (fronted by the NLB) needs to allow not only the IP of the NLB but also the
client IPs (if the target type is instance). Assuming EC2 targets are used here, the instance target type applies.

https://aws.amazon.com/premiumsupport/knowledge-center/security-group-load-balancer/
upvoted 1 times

" # kangtamo 4 months, 1 week ago


Selected Answer: AC
Agree with AC: NLB
upvoted 1 times


" # bobsmith2000 5 months, 2 weeks ago


A is not correct.
The Q states "The logging service is deployed in many SUBNETS", A states "Check that the NACL is attached to the logging service SUBNET"
upvoted 1 times

" # Ddssssss 4 months, 3 weeks ago


the singulars and plurals in A are off. First it says subnets, then subnet, then subnets, then subnet. I think A is correct, just bad grammar.
upvoted 2 times

" # cldy 11 months ago


A. Check that the NACL is attached to the logging service subnet to allow communications to and from the NLB subnets. Check that the NACL is
attached to the NLB subnet to allow communications to and from the logging service subnets running on EC2 instances.
C. Check the security group for the logging service running on the EC2 instances to ensure it allows ingress from the NLB subnets.
upvoted 2 times

" # AzureDP900 11 months, 1 week ago


It seems B & D for me. I need to revisit this question again !
upvoted 2 times

" # nsei 11 months, 3 weeks ago


From this resource https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-target-groups.html#client-ip-preservation:
"Client IP preservation has no effect on AWS PrivateLink traffic. The source IP of the AWS PrivateLink traffic is always the private IP address of
the Network Load Balancer." ... hence the answer is A&C
upvoted 2 times

" # andylogan 1 year ago


It's A C since the client of the Logging service running on EC2 is NLB
upvoted 1 times

" # wakame 1 year ago


Hi guys,
NLB does not do Source NAT unlike ALB, but is the correct answer still A & C?
upvoted 1 times

" # wakame 1 year ago


I found out that there are the following specifications, so I solved it.
https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-target-groups.html#client-ip-preservation
When using PrivateLink, the NLB translates the source IP to the NLB's private IP.
upvoted 3 times

" # tgv 1 year ago


AAA CCC
---
I don't understand what NLB not having security group has to do with A/C.
I'm thinking that the clients are sending traffic to the NLB (not some kind of round robin directly on the EC2 instances).
The communication between NLB and EC2 instances still has to be configured. It doesn't work out of the box
upvoted 1 times

" # blackgamer 1 year ago


A and C
upvoted 2 times

" # WhyIronMan 1 year ago


I'll go with B,D
upvoted 4 times

" # nopenope111 1 year ago


B&D.
NLB is not like ALB. it just passes the traffic to EC2. EC2 needs to allow ingress from outside.
upvoted 2 times

" # kpcert 1 year ago


I think the answer is B and D. NACL and Security group of EC2 logging service to allow traffic from client subnets, It is NLB in front of EC2 , NLB
will preserve the client IPs and pass on the client details and source IPs of client to EC2, so the Network ACL and Security group of logging
service should have the allow rule for the ip range of client subnets subnets.
upvoted 3 times

" # Waiweng 1 year ago


it's A&C
upvoted 3 times


Question #648 Topic 1

A company is refactoring an existing web service that provides read and write access to structured data. The service must respond to short but
significant spikes in the system load. The service must be fault tolerant across multiple AWS Regions.
Which actions should be taken to meet these requirements?

A. Store the data in Amazon DocumentDB. Create a single global Amazon CloudFront distribution with a custom origin built on edge-optimized
Amazon API Gateway and AWS Lambda. Assign the company's domain as an alternate domain for the distribution, and configure Amazon
Route 53 with an alias to the CloudFront distribution.

B. Store the data in replicated Amazon S3 buckets in two Regions. Create an Amazon CloudFront distribution in each Region, with custom
origins built on Amazon API Gateway and AWS Lambda launched in each Region. Assign the company's domain as an alternate domain for
both distributions, and configure Amazon Route 53 with a failover routing policy between them.

C. Store the data in an Amazon DynamoDB global table in two Regions using on-demand capacity mode. In both Regions, run the web service
as Amazon ECS Fargate tasks in an Auto Scaling ECS service behind an Application Load Balancer (ALB). In Amazon Route 53, configure an
alias record in the company's domain and a Route 53 latency-based routing policy with health checks to distribute traffic between the two
ALBs.

D. Store the data in Amazon Aurora global databases. Add Auto Scaling replicas to both Regions. Run the web service on Amazon EC2
instances in an Auto Scaling group behind an Application Load Balancer in each Region. Configure the instances to download the web service
code in the user data. In Amazon Route 53, configure an alias record for the company's domain and a multi-value routing policy.

Correct Answer: A

Community vote distribution


C (67%) D (33%)

" # Bulti Highly Voted $ 1 year, 1 month ago


Its between B and C. I think A and D are out. A is out because of DocumentDB and D is out because of multi value. Between B and C, I think C is
a better because S3 is usually used as a static web site and not for writing dynamic data (in this case structured data). Option C is a standard
way of designing an application using a middle tier and a data tier where the middle tier is load balanced and is in an auto scaling group.
Moreover DynamoDb can be used for both structured and semi-structured data. The latency routing policy with health checks will result in
routing the traffic to the region with low latency in case the ALB endpoint is considered healthy or else it will be routed to the other region. So I
will go with C.
upvoted 26 times

" # aws_arn_name 1 year ago


"short but significant spikes" , i think Lambda is better with this than ASG so answer should be B
upvoted 2 times

" # joe16 1 year ago


In S3 CRR is not immediate(AWS Docs - "Most objects replicate within 15 minutes, but sometimes replication can take a couple hours or
more"). So B is not an option as solution.
DDB Global tables have sync latency of less than a sec - "In a global table, a newly written item is usually propagated to all replica tables
within a second."
I will go with C
upvoted 1 times

" # tekkart 1 year ago


D can work fine also
- Aurora for structured value
- Fault Tolerant because Route 53 with MVA policy allows health checks like it would w/ Failover policy : https://docs.aws.amazon.com
/Route53/latest/DeveloperGuide/resource-record-sets-values-multivalue.html
- ASG for spikes
- 1 ALB, simpler architecture than C where there are 2 ALB (one internal and one external)
upvoted 3 times

" # tekkart 1 year ago


Because of the word 'refactoring' it's right may be ECS in answer C where ASG and ALB are also available

For A and B, I don't see how CloudFront can have API Gateway as origin... A and B would be ruled out because CloudFront can have :
Web server, S3 bucket, or Elemental Media PAckage/Store for VOD as origins.
upvoted 1 times

" # acloudguru 11 months, 2 weeks ago


Cloudfront can support API gateway and lambda.https://aws.amazon.com/tw/cloudfront/?nc=sn&loc=0


upvoted 1 times
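
As a small illustration of the data layer in option C, the following boto3 sketch creates an on-demand DynamoDB table and then adds a replica in a second Region using the 2019.11.21 global tables API. The table name and Regions are assumptions.

import boto3

PRIMARY, SECONDARY = "us-east-1", "us-west-2"
ddb = boto3.client("dynamodb", region_name=PRIMARY)

# On-demand capacity absorbs short spikes without capacity planning.
ddb.create_table(
    TableName="web-service-data",
    AttributeDefinitions=[{"AttributeName": "pk", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "pk", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
    StreamSpecification={"StreamEnabled": True, "StreamViewType": "NEW_AND_OLD_IMAGES"},
)
ddb.get_waiter("table_exists").wait(TableName="web-service-data")

# Add a replica in the second Region so both Regions can read and write.
ddb.update_table(
    TableName="web-service-data",
    ReplicaUpdates=[{"Create": {"RegionName": SECONDARY}}],
)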
" # Ebi Highly Voted $ 1 year, 1 month ago
Answer is C.
D is not the right answer, although Aurora is better choice for structured data, but Aurora Global database supports one master only, so other
regions do not support write.
upvoted 18 times

" # nqobza 1 year, 1 month ago


You're overthinking it. We only need to write to the primary.
upvoted 3 times

" # Ebi 1 year ago


With multi value routing in route 53 you should be able to write in each region, otherwise you need manage failover if primary fails which
has not been mentioned in this answer, I still go with C
upvoted 3 times

" # gpark 1 year ago


This is another good point.
upvoted 1 times

" # dart93 1 year ago


aurora does support multi master: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-multi-master.html
'In a multi-master cluster, all DB instances can perform write operations.'
upvoted 3 times

" # MrCarter 1 year ago


MULTI MASTER IS A REGIONAL SERVICE NOT MULTI REGION!!
upvoted 8 times

" # Liongeek 1 year ago


"Currently, all DB instances in a multi-master cluster must be in the same AWS Region."
upvoted 1 times

" # tomosabc1 Most Recent % 1 month ago


Selected Answer: C
The answer is C

A(wrong):Single Point of Failure, can't support fault tolerant across multiple regions.

B(wrong):S3 CRR is not fast enough. AWS Docs - "Most objects replicate within 15 minutes, but sometimes replication can take a couple hours
or more". By comparison, DynamoDB Global tables has sync latency of less than a sec - "In a global table, a newly written item is usually
propagated to all replica tables within a second.".

D(wrong):Unlike DynamoDB, Aurora Global database has only one master(only one writable node) in the case of multiple region deployment.
upvoted 2 times

" # linuxmaster007 1 month, 3 weeks ago


Answer is C ( as per tutorial dojo)
upvoted 1 times

" # ASC1 2 months, 1 week ago


dynamo db can store both Structured and Semi Structured data. So C is correct.
upvoted 1 times

" # CloudHandsOn 3 months, 2 weeks ago


D. - 'STRUCTURED' data. No other option is fully structured. I believe because of this, it's the only viable option.
upvoted 2 times

" # bobsmith2000 5 months, 3 weeks ago


Selected Answer: D
The Q states "structured data". So neither NoSQL nor S3 (file storage) fits the bill.
The only answer which complies to this situation is D. Multi-answer is not a problem for a web app.
upvoted 1 times

" # user0001 5 months, 3 weeks ago


C: this is because The service must be able to react quickly to brief but large surges in system demand. Across many AWS Regions, the service
must be fault resilient
D can not scale fast
A is not fault resilient across regions
upvoted 1 times

" # RVivek 8 months, 3 weeks ago


Answer is C; the solution should be able to scale Fargate quickly and be fault resilient, so both Regions should be active: DynamoDB global table
and Route 53 latency-based records with health checks.
upvoted 1 times
" # cldy 11 months ago
C. Store the data in an Amazon DynamoDB global table in two Regions using on-demand capacity mode. In both Regions, run the web service
as Amazon ECS Fargate tasks in an Auto Scaling ECS service behind an Application Load Balancer (ALB). In Amazon Route 53, configure an
alias record in the company‫ג‬€™s domain and a Route 53 latency-based routing policy with health checks to distribute traffic between the two
ALBs.
upvoted 1 times

" # AzureDP900 11 months, 1 week ago


Answer is C.
see Ebi explanation, I am good to with it.
D is not the right answer, although Aurora is better choice for structured data, but Aurora Global database supports one master only, so other
regions do not support
upvoted 1 times

" # nerembo 11 months, 2 weeks ago


I think it shoud be D.
The question says: "Across many AWS Regions, the service must be fault resilient". There is nothing about that in both regions database should
be writable.
https://aws.amazon.com/rds/aurora/faqs/ :
Amazon Aurora Global Database is a feature that allows a single Amazon Aurora database to span multiple AWS regions. It replicates your data
with no impact on database performance, enables fast local reads in each region with typical latency of less than a second, and provides disaster
recovery from region-wide outages. In the unlikely event of a regional degradation or outage, a secondary region can be promoted to full
read/write capabilities in less than 1 minute.
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Integrating.AutoScaling.html
Aurora Auto Scaling enables your Aurora DB cluster to handle sudden increases in connectivity or workload.
upvoted 1 times

" # kirrim 1 year ago


This was clearly C in the past due to the lack of multi-region support in DocumentDB. But in 2021, AWS implemented support for DocumentDB
global clusters to support automatic replication across up to 5 regions. So now A and C are both valid candidates.

I would still lean towards C, even so, because one DocumentDB region must be primary, and the failover process to a secondary region is not
seamless by any means. You have to stop application writes in the primary (failed) region, and then promote the secondary region to its own
standalone master. Then you have to repoint your app to the secondary region. Not ideal.

https://aws.amazon.com/documentdb/global-clusters/

https://aws.amazon.com/blogs/database/introducing-amazon-documentdb-with-mongodb-compatibility-global-clusters/
upvoted 2 times

" # andylogan 1 year ago


It's C for Dynamo
upvoted 1 times

" # blackgamer 1 year ago


C for me
upvoted 2 times

" # tgv 1 year ago


CCC
---
A: it doesn't cover the fault-tolerant across multiple regions requirement
B: CloudFront is a global service
D: Creating multivalue answer alias records is not supported.
upvoted 3 times

" # TomPaschenda 1 year ago


C for me:
A - out because DocumentDB has no cross-region failover
B - out because S3 would not support writes in both regions (replication only goes one way)
C - only possible solution
D - out because EC2 ASG is not great for "short but significant spikes". Also "download web service code in user data" - why? And as pointed
out, read replica would require promotion for failover
upvoted 1 times


Question #649 Topic 1

A company plans to migrate to AWS. A solutions architect uses AWS Application Discovery Service over the fleet and discovers that there is an
Oracle data warehouse and several PostgreSQL databases.
Which combination of migration patterns will reduce licensing costs and operational overhead? (Choose two.)

A. Lift and shift the Oracle data warehouse to Amazon EC2 using AWS DMS.

B. Migrate the Oracle data warehouse to Amazon Redshift using AWS SCT and AWS DMS

C. Lift and shift the PostgreSQL databases to Amazon EC2 using AWS DMS.

D. Migrate the PostgreSQL databases to Amazon RDS for PostgreSQL using AWS DMS.

E. Migrate the Oracle data warehouse to an Amazon EMR managed cluster using AWS DMS.

Correct Answer: DE

Community vote distribution


BD (100%)

" # asldavid Highly Voted $ 1 year, 1 month ago


B&D
https://aws.amazon.com/getting-started/hands-on/migrate-oracle-to-amazon-redshift/
upvoted 18 times

" # Ebi Highly Voted $ 1 year, 1 month ago


Answer is BD
upvoted 5 times

" # pititcu667 Most Recent % 9 months, 3 weeks ago


Selected Answer: BD
i vote b and d
upvoted 1 times

" # AzureDP900 11 months, 1 week ago


B and D correct
upvoted 1 times

" # AzureDP900 11 months, 2 weeks ago


B & D is correct answer, looks like they intentionally updating wrong answers. Read the question and understand why it is wrong vs right.
upvoted 1 times

" # andylogan 1 year ago


It's B D - Redshift and RDS PostgreSQL
upvoted 1 times

" # student22 1 year ago


B,D
---
Oracle Dara Warehouse --> Redshift
PostgreSQL --> RDS PostgreSQL
upvoted 2 times

" # tgv 1 year ago


BBB DDD
---
upvoted 1 times

" # blackgamer 1 year ago


B and D
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with B,D
upvoted 4 times

" # mustpassla 1 year ago


BD, SAA level.


upvoted 1 times
" # Waiweng 1 year ago
it's B&D
upvoted 4 times

" # Kian1 1 year, 1 month ago


going with BD
upvoted 3 times

" # kopper2019 1 year, 1 month ago


B and D for sure
upvoted 2 times

" # Bulti 1 year, 1 month ago


B & D is the answer.
upvoted 1 times

" # rscloud 1 year, 1 month ago


B,D for sure
upvoted 1 times

" # T14102020 1 year, 1 month ago


For sure B & D
upvoted 1 times


Question #650 Topic 1

A solutions architect needs to define a reference architecture for a solution for three-tier applications with web, application, and NoSQL data
layers. The reference architecture must meet the following requirements:
✑ High availability within an AWS Region
✑ Able to fail over in 1 minute to another AWS Region for disaster recovery
✑ Provide the most efficient solution while minimizing the impact on the user experience
Which combination of steps will meet these requirements? (Choose three.)

A. Use an Amazon Route 53 weighted routing policy set to 100/0 across the two selected Regions. Set Time to Live (TTL) to 1 hour.

B. Use an Amazon Route 53 failover routing policy for failover from the primary Region to the disaster recovery Region. Set Time to Live (TTL)
to 30 seconds.

C. Use a global table within Amazon DynamoDB so data can be accessed in the two selected Regions.

D. Back up data from an Amazon DynamoDB table in the primary Region every 60 minutes and then write the data to Amazon S3. Use S3
cross-Region replication to copy the data from the primary Region to the disaster recovery Region. Have a script import the data into
DynamoDB in a disaster recovery scenario.

E. Implement a hot standby model using Auto Scaling groups for the web and application layers across multiple Availability Zones in the
Regions. Use zonal Reserved Instances for the minimum number of servers and On-Demand Instances for any additional resources.

F. Use Auto Scaling groups for the web and application layers across multiple Availability Zones in the Regions. Use Spot Instances for the
required resources.

Correct Answer: ACE

Community vote distribution


BCE (100%)

" # kalyan_krishna742020 Highly Voted $ 1 year, 1 month ago


Ans: BCE
upvoted 22 times

" # TechX Most Recent % 4 months, 1 week ago


Selected Answer: BCE
No-brain, it's BCE
upvoted 1 times

" # jj22222 7 months, 1 week ago


Selected Answer: BCE
bce - look right
upvoted 1 times

" # shotty1 9 months, 2 weeks ago


Selected Answer: BCE
it is BCE
upvoted 1 times

" # pititcu667 10 months ago


Selected Answer: BCE
A will only move the dns after 1 hour so ..
upvoted 1 times

" # AzureDP900 11 months, 1 week ago


B,C,E is perfect answer.
The requirements can be achieved by using an Amazon DynamoDB database with a global table. DynamoDB is a
NoSQL database so it fits the requirements. A global table also allows both reads and writes to occur in both Regions.
For the web and application tiers Auto Scaling groups should be configured. Due to the 1-minute RTO these must be
configured in an active/passive state. The best pricing model to lower price but ensure resources are available when
needed is to use a combination of zonal reserved instances and on-demand instances.
To failover between the Regions, a Route 53 failover routing policy can be configured with a TTL configured on the
record of 30 seconds. This will mean clients must resolve against Route 53 every 30 seconds to get the latest record.
In a failover scenario the clients would be redirected to the secondary site if the primary site is unhealthy.


upvoted 4 times
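
A minimal boto3 sketch of the failover records described above, assuming a placeholder hosted zone, health check ID and endpoint IPs. A real deployment would more likely use alias records pointing at the regional load balancers, in which case Route 53 manages the TTL, but plain A records make the 30-second TTL explicit:

import boto3

r53 = boto3.client("route53")

HOSTED_ZONE_ID = "Z0EXAMPLE"          # placeholder hosted zone
NAME = "app.example.com."

def failover_record(set_id, role, value, health_check_id=None):
    # Build one UPSERT change for a failover record with a 30-second TTL.
    rrset = {
        "Name": NAME,
        "Type": "A",
        "SetIdentifier": set_id,
        "Failover": role,             # "PRIMARY" or "SECONDARY"
        "TTL": 30,                    # short TTL so clients re-resolve quickly
        "ResourceRecords": [{"Value": value}],
    }
    if health_check_id:
        rrset["HealthCheckId"] = health_check_id
    return {"Action": "UPSERT", "ResourceRecordSet": rrset}

r53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Changes": [
            failover_record("primary", "PRIMARY", "192.0.2.10", "hc-primary-id"),
            failover_record("secondary", "SECONDARY", "198.51.100.10"),
        ]
    },
)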
" # andypham 1 year ago
Yes, BBB CCC EEE
upvoted 2 times

" # andylogan 1 year ago


It's B C E
upvoted 1 times

" # tgv 1 year ago


BBB CCC EEE
---
upvoted 1 times

" # blackgamer 1 year ago


BCE is the answer
upvoted 1 times

" # DanShone 1 year ago


Agree B,C,E
upvoted 1 times

" # WhyIronMan 1 year, 1 month ago


I'll go with B,C,E
upvoted 3 times

" # Kopa 1 year, 1 month ago


B,C,E for sure
upvoted 1 times

" # victordun 1 year, 1 month ago


BCE should be most optimal choices that meets requirements
upvoted 4 times

" # Waiweng 1 year, 1 month ago


BCE is correct
upvoted 4 times

" # CarisB 1 year, 1 month ago


Yes, BCE
upvoted 1 times

" # wasabidev 1 year, 1 month ago


BCE for me
upvoted 1 times


Question #651 Topic 1

A company has a Microsoft SQL Server database in its data center and plans to migrate data to Amazon Aurora MySQL. The company has already
used the AWS
Schema Conversion Tool to migrate triggers, stored procedures and other schema objects to Aurora MySQL. The database contains 1 TB of data
and grows less than 1 MB per day. The company's data center is connected to AWS through a dedicated 1Gbps AWS Direct Connect connection.
The company would like to migrate data to Aurora MySQL and perform reconfigurations with minimal downtime to the applications.
Which solution meets the company's requirements?

A. Shut down applications over the weekend. Create an AWS DMS replication instance and task to migrate existing data from SQL Server to
Aurora MySQL. Perform application testing and migrate the data to the new database endpoint.

B. Create an AWS DMS replication instance and task to migrate existing data and ongoing replication from SQL Server to Aurora MySQL.
Perform application testing and migrate the data to the new database endpoint.

C. Create a database snapshot of SQL Server on Amazon S3. Restore the database snapshot from Amazon S3 to Aurora MySQL. Create an
AWS DMS replication instance and task for ongoing replication from SQL Server to Aurora MySQL. Perform application testing and migrate the
data to the new database endpoint.

D. Create a SQL Server native backup file on Amazon S3. Create an AWS DMS replication instance and task to restore the SQL Server backup
file to Aurora MySQL. Create another AWS DMS task for ongoing replication from SQL Server to Aurora MySQL. Perform application testing
and migrate the data to the new database endpoint.

Correct Answer: B

" # XRiddlerX Highly Voted $ 1 year, 1 month ago


Answer is B
A is incorrect because shutting down the application over the weekend will cause downtime to the application.
C is incorrect because you can't restore a SQL Server snapshot to Aurora MySQL. They are two very different DBS engines
D is incorrect cause you can restore a native MSSQL backup to a Aurora MySQL because they are two different DBS engines and I'm not aware
of restoring backup file functionality in DMS.

B is correct because since your have used the AWS SCT all you need to do for this migration is migrate the existing data and keep replication
going until cutover.
upvoted 35 times

" # jackdryan 1 year, 1 month ago


https://docs.aws.amazon.com/dms/latest/sbs/CHAP_SQLServer2Aurora.Steps.html
upvoted 3 times

" # GopiSivanathan 1 year, 1 month ago


To transfer 1 TB of data over 1 Gbps will take only about 2 hours and 26 minutes; is that not acceptable given this statement in the question?
The company would like to migrate data to Aurora MySQL and perform reconfigurations with minimal downtime to the applications.
upvoted 1 times

" # StelSen 1 year ago


Read this in Option-B -> "migrate existing data and ONGOING Replication....". So, initially it will take 2.5 hrs and afterwards it will be
almost realtime sync. So, literally no downtime or may be 5 mins to just to finish last replication.
upvoted 2 times
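
For completeness, here is a rough boto3 sketch of the DMS piece of option B: a replication task with migration type full-load-and-cdc so ongoing changes keep flowing until cutover. The endpoint and replication instance ARNs are placeholders and are assumed to exist already.

import json
import boto3

dms = boto3.client("dms")

# Placeholder ARNs; the source/target endpoints and the replication instance
# are assumed to have been created already.
SOURCE_ARN = "arn:aws:dms:us-east-1:111122223333:endpoint:SRC-SQLSERVER"
TARGET_ARN = "arn:aws:dms:us-east-1:111122223333:endpoint:TGT-AURORA-MYSQL"
INSTANCE_ARN = "arn:aws:dms:us-east-1:111122223333:rep:REPL-INSTANCE"

table_mappings = {
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-all",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }
    ]
}

# full-load-and-cdc = migrate existing data, then replicate ongoing changes.
dms.create_replication_task(
    ReplicationTaskIdentifier="sqlserver-to-aurora-mysql",
    SourceEndpointArn=SOURCE_ARN,
    TargetEndpointArn=TARGET_ARN,
    ReplicationInstanceArn=INSTANCE_ARN,
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps(table_mappings),
)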

" # cldy Most Recent % 11 months ago


B. Create an AWS DMS replication instance and task to migrate existing data and ongoing replication from SQL Server to Aurora MySQL.
Perform application testing and migrate the data to the new database endpoint.
upvoted 1 times

" # AzureDP900 11 months, 1 week ago


B is right
upvoted 1 times

" # andylogan 1 year ago


It's B
upvoted 1 times

" # Suresh108 1 year ago


BBBBBBBBBBBB


https://docs.aws.amazon.com/dms/latest/sbs/chap-sqlserver2aurora.steps.html
upvoted 2 times
" # tgv 1 year ago
BBB
---
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with B
upvoted 1 times

" # Waiweng 1 year ago


it's B
upvoted 2 times

" # alisyech 1 year ago


should be B
upvoted 1 times

" # Kian1 1 year ago


going with B
upvoted 2 times

" # Trap_D0_r 1 year ago


B
LOL you can do B with 1-2 hours of total outage time MAX (if things go very VERY poorly), and schedule that time during extremely low usage
periods or scheduled outage (if you have an uptime requirement it won't affect your SLA). "A" requires a full weekend of downtime. Terrible,
terrible solution. See XRiddlerX's answer for why the other two options are garbage.
upvoted 1 times

" # Superomam 1 year ago


I've changed my idea to B. A is not wrong because it's doable but reading carefully this article https://docs.aws.amazon.com/dms/latest
/sbs/CHAP_SQLServer2Aurora.Steps.htm, it seems that B should be the right answer.
upvoted 1 times

" # Superomam 1 year ago


A. I'm currently working into the "migration arena" and every time a DB must be migrated, you've to stop the application to avoid writing to the
DB.
upvoted 1 times

" # sarah_t 1 year ago


you can use DMS for ongoing replication until you cut over
upvoted 1 times

" # pablobairat 1 year ago


Your company is lucky for having you ;) It is B
upvoted 3 times

" # Ebi 1 year, 1 month ago


B for sure,
upvoted 4 times

" # Bulti 1 year, 1 month ago


B is the right answer. Not C because you cannot use DMS to import a SQL Server snapshot into the Aurora SQL DB.
upvoted 2 times

" # petebear55 1 year, 1 month ago


Ive seen these questions before and the answer is always A ... don't think the question is written very well. if it mentions the app needs to run
24/7 then of course i would not choose A. but given experience with similar ? i will choose A
upvoted 1 times

" # T14102020 1 year, 1 month ago


For sure B
upvoted 1 times


Question #652 Topic 1

A company runs an application on a fleet of Amazon EC2 instances. The application requires low latency and random access to 100 GB of data.
The application must be able to access the data at up to 3,000 IOPS. A Development team has configured the EC2 launch template to provision a
100-GB Provisioned IOPS
(PIOPS) Amazon EBS volume with 3,000 IOPS provisioned. A Solutions Architect is tasked with lowering costs without impacting performance and
durability.
Which action should be taken?

A. Create an Amazon EFS +le system with the performance mode set to Max I/O. Con+gure the EC2 operating system to mount the EFS +le
system.

B. Create an Amazon EFS +le system with the throughput mode set to Provisioned. Con+gure the EC2 operating system to mount the EFS +le
system.

C. Update the EC2 launch template to allocate a new 1-TB EBS General Purpose SSO (gp2) volume.

D. Update the EC2 launch template to exclude the PIOPS volume. Con+gure the application to use local instance storage.

Correct Answer: A

Community vote distribution


C (100%)

" # Gmail78 Highly Voted $ 1 year, 1 month ago


Definitely C https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volume-types.html
upvoted 15 times

" # Appukk 2 weeks, 4 days ago


Answer is B as per tutorialdojo
upvoted 2 times

" # Kelvin1477 1 year, 1 month ago


But need to consider also ec2 fleet instances, 1 ec2 can tied to only 1 ebs volume
upvoted 5 times

" # Kelvin1477 1 year, 1 month ago


Looks like it: 100 GiB gp2 volume has a baseline performance of 300 IOPS.
upvoted 1 times

" # user0001 5 months, 2 weeks ago


agree it is C
upvoted 1 times

" # keos Highly Voted $ 1 year, 1 month ago


B likely
upvoted 15 times

" # keos 1 year, 1 month ago


should be C, cheaper than provisioned

A,B is misleading
upvoted 4 times

" # Enigmaaaaaa Most Recent % 3 months, 1 week ago


Let's say there are 1,000 EC2 instances and we want to save cost.
The choice is between B and C:
1,000 instances with 1 TB of gp2 data is: ~10K a month
1,000 instances with 100 GB of io1 with 3,000 IOPS is: ~20.7K a month
So C is valid.
Now regarding EFS, it's not clear from the question whether the data can be shared between the instances or is unique per instance - if it can be shared,
100 GB of EFS is cheaper;
if we need 100 GB * 1,000 = 100 TB, C is cheaper.
Since the question does not state that data can be shared between instances, I will have to choose C
upvoted 4 times
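
To make the arithmetic above easy to reproduce, here is a small Python sketch; the per-GB and per-IOPS prices are illustrative assumptions (roughly us-east-1 list prices), not figures from the question:

# gp2 baseline performance is 3 IOPS per GiB, so a 1,000 GiB volume gets
# 3,000 IOPS without paying separately for provisioned IOPS.
GP2_IOPS_PER_GIB = 3

def gp2_size_for_iops(required_iops: int) -> int:
    """Smallest gp2 size (GiB) whose baseline meets the required IOPS."""
    return -(-required_iops // GP2_IOPS_PER_GIB)  # ceiling division

# Illustrative monthly prices (assumptions; check current AWS pricing).
GP2_PER_GIB = 0.10      # USD per GiB-month
IO1_PER_GIB = 0.125     # USD per GiB-month
IO1_PER_PIOPS = 0.065   # USD per provisioned IOPS-month

required_iops = 3000
gp2_gib = gp2_size_for_iops(required_iops)                      # 1000 GiB
gp2_cost = gp2_gib * GP2_PER_GIB                                # ~100 USD per instance per month
io1_cost = 100 * IO1_PER_GIB + required_iops * IO1_PER_PIOPS    # ~207.5 USD per instance per month

print(f"gp2 {gp2_gib} GiB: ~${gp2_cost:.2f}/month vs io1 100 GiB + 3000 PIOPS: ~${io1_cost:.2f}/month")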

" # kangtamo 5 months ago

Selected Answer: C
Go with C.
upvoted 1 times
" # AzureDP900 11 months, 1 week ago
General Purpose SSD, It is typo in question. I will go with C.
upvoted 2 times

" # kirrim 1 year ago


The key consideration here is that the company is paying for individual disks for every EC2 instance in their fleet. The question translates to: Is it
cheaper to provision that on an individual disk basis for every EC2 instance in the fleet using a different disk approach, or is it cheaper to
provision a shared EFS volume and mount it on every instance in the fleet?

Ultimately this question comes down to:


- how much EFS data throughput is your app going to need (which you would need to multiply by the number of servers accessing the EFS
filesystem)
- how many servers are in your fleet?

And we are not told how many servers are in the fleet, nor the throughput needed based on the application's average block size per operation.
Both are critical factors in making this decision. I'm not a fan of this question due to that missing info.

Without that crucial info, I'm just going to default to keeping things the way they're doing today with individual disks on each instance, and save
on cost by going with gp2, but that's not really answering the question at all.
upvoted 3 times

" # andylogan 1 year ago


It's C - 1-TB EBS General Purpose SSO (gp2)
upvoted 3 times

" # tgv 1 year ago


CCC
---
upvoted 2 times

" # blackgamer 1 year ago


B is the answer. It can’t be A because of the unnecessary cost for max io.
upvoted 1 times

" # saggarwal4114 1 year ago


It is B
upvoted 2 times

" # mericov 1 year ago


"Launch templates are immutable. To modify a launch template, you must create a new version of the launch template." - there is no option to
update the launch templates (C&D). Provisioned for EFS is referring to the throughput (MiB/s), for which we do not have any info. The remaining
option is A - MaxIO
upvoted 3 times

" # tkanmani76 10 months, 2 weeks ago


Agree on the launch templates - C and D are incorrect. The below passage sounds A is not the right choice.
Some latency-sensitive workloads require the higher I/O levels provided by Max I/O performance mode and the lower latency provided by
General Purpose performance mode. For this type of workload, we recommend creating multiple General Purpose performance mode file
systems.
https://docs.aws.amazon.com/efs/latest/ug/performance.html
Hence going with B.
upvoted 2 times

" # WhyIronMan 1 year ago


I'll go with C

Guys, please READ the question!!!!!

The questions never said that's a single volume mounted/shared across instances!!!!

So, instead of having:

- 1 x 100 GB PIOPS volume with 3,000 IOPS PER INSTANCE, it is cheaper to have
- 1 x 1,000 GB gp2 volume (1,000 GB x 3 IOPS/GB = 3,000 IOPS) PER INSTANCE

If you guys don't read the question at least twice it'll be difficult to go well in the exam
upvoted 9 times

" # AWSum1 1 year ago


C- because of this exact reason
upvoted 2 times

" # somebodyelse 1 year ago


https://aws.amazon.com/ebs/general-purpose/
I think GP might do the required 3000 iops
upvoted 3 times
" # qurren 1 year ago
It is C

EFS is not correct for this random access requirement, so rule out A/B
upvoted 1 times

" # XAvenger 1 year ago


There are concerns related to the EBS volume attached to multiple EC2 instances (how are they going to use single volume for multiple EC2
instances??) If they are going to user multiple EBS volumes then EFS looks cheaper.
BUT I tried to find any information related to EFS IOPS. About EFS throughput - there is much information, but not about EFS IOPS.

I would choose C despite the fact the option is weird.


upvoted 1 times

" # zolthar_z 1 year ago


The Answer is C. with A (File systems in the Max I/O mode can scale to higher levels of aggregate throughput and operations per second. This
scaling is done with a tradeoff of slightly higher latencies for file metadata operations. Highly parallelized applications and workloads, such as big
data analysis, media processing, and genomic analysis, can benefit from this mode.)
upvoted 1 times

" # tonywang806 1 year ago


B is incorrect.
Because IOPS is a unit about performance mode, not about throughput mode.
upvoted 4 times

" # Waiweng 1 year ago


it's B
upvoted 8 times

Question #653 Topic 1

A company recently transformed its legacy infrastructure provisioning scripts to AWS CloudFormation templates. The newly developed templates
are hosted in the company's private GitHub repository. Since adopting CloudFormation, the company has encountered several issues with updates
to the CloudFormation templates, causing errors during execution or environment creation. Management is concerned by the increase in errors and has asked a
Solutions Architect to design the automated testing of CloudFormation template updates.
What should the Solutions Architect do to meet these requirements?

A. Use AWS CodePipeline to create a change set from the CloudFormation templates stored in the private GitHub repository. Execute the
change set using AWS CodeDeploy. Include a CodePipeline action to test the deployment with testing scripts run by AWS CodeBuild.

B. Mirror the GitHub repository to AWS CodeCommit using AWS Lambda. Use AWS CodeDeploy to create a change set from the
CloudFormation templates and execute it. Have CodeDeploy test the deployment with testing scripts run by AWS CodeBuild.

C. Use AWS CodePipeline to create and execute a change set from the CloudFormation templates stored in the GitHub repository. Configure a
CodePipeline action to test the deployment with testing scripts run by AWS CodeBuild.

D. Mirror the GitHub repository to AWS CodeCommit using AWS Lambda. Use AWS CodeBuild to create a change set from the CloudFormation
templates and execute it. Have CodeBuild test the deployment with testing scripts.

Correct Answer: B

Community vote distribution


C (100%)

" # Ebi Highly Voted $ 1 year ago


C is my answer
upvoted 11 times

" # Bulti Highly Voted $ 1 year ago


Answer is C. Not A because CodeDeploy is not required to execute the changeset. CodePiepline action can do that.
upvoted 8 times
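
For reference, the "create and execute a change set" step that option C delegates to CodePipeline's built-in CloudFormation actions boils down to the following API calls; here is a minimal boto3 sketch with a placeholder stack name and template URL:

import boto3

cfn = boto3.client("cloudformation")

# Create a change set against an existing stack from an updated template.
cfn.create_change_set(
    StackName="app-stack",                                       # placeholder
    ChangeSetName="pipeline-proposed-changes",
    TemplateURL="https://s3.amazonaws.com/bucket/template.yml",  # placeholder
    Capabilities=["CAPABILITY_NAMED_IAM"],
    ChangeSetType="UPDATE",
)

# Wait until the change set has been evaluated, then review/execute it.
waiter = cfn.get_waiter("change_set_create_complete")
waiter.wait(StackName="app-stack", ChangeSetName="pipeline-proposed-changes")

cfn.execute_change_set(StackName="app-stack", ChangeSetName="pipeline-proposed-changes")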

" # kangtamo Most Recent % 4 months, 2 weeks ago


Selected Answer: C
Agree with C.
upvoted 1 times

" # challenger1 11 months ago


My Answer: C
upvoted 1 times

" # AzureDP900 11 months, 1 week ago


C is perfect
upvoted 1 times

" # andylogan 1 year ago


It's C - CodePipeline create and execute change set
upvoted 2 times

" # tgv 1 year ago


CCC
---
upvoted 1 times

" # blackgamer 1 year ago


C run test during build
upvoted 1 times

" # mericov 1 year ago


C: - https://aws.amazon.com/blogs/devops/building-a-ci-cd-pipeline-to-update-an-aws-cloudformation-stacksets/
upvoted 2 times

" # kirrim 1 year ago


Agree, that architecture exactly matches C (except it uses CodeCommit instead of GitHub as in the question, but minor difference)
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with C
upvoted 2 times

" # mustpassla 1 year ago


It is B
upvoted 2 times

" # Waiweng 1 year ago


it's C
upvoted 3 times

" # alisyech 1 year ago


C seems correct
upvoted 2 times

" # Kian1 1 year ago


going with C
upvoted 4 times

" # Trap_D0_r 1 year ago


I want to vote for A since C is actual gibberish and there's a lot of reading between the lines to make it coherent...
upvoted 3 times

" # RLai 1 year ago


C is the answer.
https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/use-third-party-git-source-repositories-in-aws-codepipeline.html
upvoted 2 times

" # rscloud 1 year ago


I will go with C
upvoted 1 times

Question #654 Topic 1

A company has several Amazon EC2 instances in both public and private subnets within a VPC that is not connected to the corporate network. A
security group associated with the EC2 instances allows the company to use the Windows remote desktop protocol (RDP) over the internet to
access the instances. The security team has noticed connection attempts from unknown sources. The company wants to implement a more
secure solution to access the EC2 instances.
Which strategy should a solutions architect implement?

A. Deploy a Linux bastion host on the corporate network that has access to all instances in the VPC.

B. Deploy AWS Systems Manager Agent on the EC2 instances. Access the EC2 instances using Session Manager restricting access to users
with permission.

C. Deploy a Linux bastion host with an Elastic IP address in the public subnet. Allow access to the bastion host from 0.0.0.0/0.

D. Establish a Site-to-Site VPN connecting the corporate network to the VPC. Update the security groups to allow access from the corporate
network only.

Correct Answer: A

Community vote distribution


B (100%)

" # Ebi Highly Voted $ 1 year, 1 month ago


Answer is B, with Systems Manager agent you can manage EC2 instances without the need to open inbound ports.
upvoted 23 times

" # Kelvin1477 Highly Voted $ 1 year, 1 month ago


I still prefer B or SSM Session Manager, as the other option is using Linux bastion where the question here is talking about Windows RDP which i
believe the target instance is Windows Server
upvoted 13 times

" # joancarles Most Recent % 1 month, 3 weeks ago


It would be necessary to add a role to the EC2 computers for SSM access, installing only the agent is not enough. On the other hand, users
would have to change the use of RDP to open the session through the Fleet Manager, since from the connect tab, they would only get a power
shell. For me, the most balanced answer would be D
upvoted 1 times

" # shotty1 9 months, 2 weeks ago


Selected Answer: B
it is B
upvoted 1 times

" # pititcu667 10 months ago


Selected Answer: B
It's a Windows / SSM based question. It's trying to assert whether you know about the remote login option of SSM.
upvoted 1 times

" # vbal 11 months ago


https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/connecting_to_windows_instance.html
upvoted 1 times

" # AzureDP900 11 months, 1 week ago


B is right. Systems manager manages EC2 instances
upvoted 1 times

" # AzureDP900 11 months, 1 week ago


D is right. Systems manager manages EC2 instances
upvoted 1 times

" # Liongeek 1 year ago


I totally agree with all those who say it's B. I'd mark B too. I'm just a bit concern by "Windows remote desktop protocol" which can't be used with
session manager.
upvoted 1 times

" # Viper57 1 year ago


RDP isn't required if you use Session Manager as it can be accessed through the console. You can create an RDP tunnel through Session
Manager if it's completely necessary.

https://awscloudsecvirtualevent.com/workshops/module1/rdp/
upvoted 2 times

" # Liongeek 12 months ago


I switch to B thanks to this lab, we can use SSM to RDP an EC2 Windows instance
upvoted 1 times

" # andylogan 1 year ago


It's B - Systems Manager agent can manage EC2 instances with RDP
upvoted 2 times

" # Goram113 1 year ago


https://aws.amazon.com/blogs/mt/manage-aws-managed-microsoft-ad-resources-with-session-manager-port-forwarding/

it is B
upvoted 1 times

" # tgv 1 year ago


BBB
---
upvoted 1 times

" # blackgamer 1 year ago


B for sure
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with B

Guys, with Systems Manager agent you can manage EC2 instances without the need to leave open ports to the world.

Also, you can control which user's can access Systems Manager, giving one more security control
upvoted 2 times

" # jobe42 1 year ago


After Reading:https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/connecting_to_windows_instance.html#session-manager " After
the session is started, you can run Powershell commands as you would through any other connection type. " , so no RDP, just PS => D
upvoted 1 times

" # TomPaschenda 1 year ago


You can use Port Forwarding with SSM to still access via Remote Desktop: https://aws.amazon.com/about-aws/whats-new/2019/08/now-
forward-traffic-between-a-local-and-remote-port-using-session-manager/
B is correct
upvoted 2 times
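
To illustrate the port-forwarding approach mentioned above, here is a sketch of the underlying Session Manager call; in practice this is normally run via the AWS CLI with the Session Manager plugin installed, and the instance ID and ports below are placeholder examples:

import boto3

ssm = boto3.client("ssm")

# Negotiates a port-forwarding session that tunnels local port 13389 to RDP (3389)
# on the instance, without opening any inbound ports in the security group.
# Note: an interactive tunnel still requires the Session Manager plugin; this
# API call only starts the session.
session = ssm.start_session(
    Target="i-0123456789abcdef0",                  # placeholder instance ID
    DocumentName="AWS-StartPortForwardingSession",
    Parameters={
        "portNumber": ["3389"],
        "localPortNumber": ["13389"],
    },
)
print(session["SessionId"])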

" # Tony_W 1 year ago


The security team has noticed connection attempts. The ONLY way to stop this it seems is a site-to-site VPN. A and C won't work with Windows
so they are auto out. Seems to me a VPN secures the connection, stops the outside attempts, and would allow RDP without any other config
changes.
upvoted 1 times

" # WhyIronMan 1 year ago


You're wrong. SSM Sessions manager works for windows and you don't need to leave the ports open to the world.
https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/connecting_to_windows_instance.html#session-manager
upvoted 2 times

" # zapper1234 1 year ago


It has to be "D". All other answer still leave the EC2 open to the word.
upvoted 2 times

" # DerekKey 1 year ago


You are wrong. Session Manager allows for connection without any change to your VPC/Security configuration
upvoted 1 times

" # WhyIronMan 1 year ago


You're wrong. SSM Sessions manager works for windows and you don't need to leave the ports open to the world.

https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/connecting_to_windows_instance.html#session-manager
upvoted 1 times

" # zapper1234 1 year ago


It has to be D. B would still allow (but deny) connection attempts from outside. Whereas, D would not.

upvoted 1 times

" # WhyIronMan 1 year ago


You're wrong. SSM Sessions manager works for windows and you don't need to leave the ports open to the world.
https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/connecting_to_windows_instance.html#session-manager
upvoted 1 times

Question #655 Topic 1

A retail company has a custom .NET web application running on AWS that uses Microsoft SQL Server for the database. The application servers
maintain a user's session locally.
Which combination of architecture changes are needed to ensure all tiers of the solution are highly available? (Choose three.)

A. Refactor the application to store the user's session in Amazon ElastiCache. Use Application Load Balancers to distribute the load between
application instances.

B. Set up the database to generate hourly snapshots using Amazon EBS. Configure an Amazon CloudWatch Events rule to launch a new
database instance if the primary one fails.

C. Migrate the database to Amazon RDS for SQL Server. Configure the RDS instance to use a Multi-AZ deployment.

D. Move the .NET content to an Amazon S3 bucket. Configure the bucket for static website hosting.

E. Put the application instances in an Auto Scaling group. Configure the Auto Scaling group to create new instances if an instance becomes
unhealthy.

F. Deploy Amazon CloudFront in front of the application tier. Configure CloudFront to serve content from healthy application instances only.

Correct Answer: BDE

Community vote distribution


ACE (100%)

" # liono Highly Voted $ 1 year, 1 month ago


A, C, E are correct
upvoted 30 times

" # MarkDillon1075 1 year, 1 month ago


C,D,E. The question asks for a highly available solution for all tiers. A - gives load balancing, not high availability.
upvoted 1 times

" # cpd 1 year, 1 month ago


"Move the .NET content to an Amazon S3 bucket" does NOT make sense.
upvoted 4 times

" # WhyIronMan 1 year ago


yeah? hey genius, how you will handle session management ? you need A
upvoted 3 times

" # XRiddlerX 1 year, 1 month ago


Answer is A,C,E
D is incorrect because S3 doesn't support server-side scripting like "ASP.NET, PHP, or JSP". In addition, high availability is one of the
features of ELB.
ELB Features - https://aws.amazon.com/elasticloadbalancing/features
/#:~:text=High%20availability,response%20to%20incoming%20application%20traffic.

S3 static web site hosting - https://docs.amazonaws.cn/en_us/AmazonS3/latest/user-guide/static-website-hosting.html


"You can host a static website on Amazon S3. On a static website, individual webpages include static content. A static website might also
contain client-side scripts. By contrast, a dynamic website relies on server-side processing, including server-side scripts such as PHP, JSP,
or ASP.NET. Amazon S3 does not support server-side scripting."
upvoted 8 times

" # Ebi Highly Voted $ 1 year ago


Answer is ACE
upvoted 10 times
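
For readers who want to see what "store the user's session in Amazon ElastiCache" (option A) looks like in practice, here is a minimal Python sketch of the pattern; the endpoint hostname and session fields are illustrative assumptions, and the real application would do this from .NET rather than Python:

import json
import redis

# Connect to the ElastiCache for Redis cluster endpoint (placeholder hostname).
r = redis.Redis(host="my-cache.abc123.0001.use1.cache.amazonaws.com", port=6379)

def save_session(session_id, data, ttl_seconds=3600):
    # Any application instance behind the ALB can now read/write the session,
    # so losing an instance no longer loses the user's session.
    r.setex(f"session:{session_id}", ttl_seconds, json.dumps(data))

def load_session(session_id):
    raw = r.get(f"session:{session_id}")
    return json.loads(raw) if raw else None

save_session("abc123", {"user_id": 42, "cart": ["sku-1", "sku-2"]})
print(load_session("abc123"))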

" # cldy Most Recent % 11 months ago


A. Refactor the application to store the user's session in Amazon ElastiCache. Use Application Load Balancers to distribute the load between
application instances.
C. Migrate the database to Amazon RDS for SQL Server. Configure the RDS instance to use a Multi-AZ deployment.
E. Put the application instances in an Auto Scaling group. Configure the Auto Scaling group to create new instances if an instance becomes
unhealthy.
upvoted 1 times

" # AzureDP900 11 months, 1 week ago


A,C, E is correct

upvoted 1 times

" # acloudguru 11 months, 2 weeks ago


hope I can have it in my exam as difficile ones
upvoted 1 times

" # acloudguru 11 months, 2 weeks ago


Selected Answer: ACE
D is incorrect because S3 doesn't support server-side scripting like "ASP.NET, PHP, or JSP". In addition, high availability is one of the features of
ELB.
ELB Features - https://aws.amazon.com/elasticloadbalancing/features
/#:~:text=High%20availability,response%20to%20incoming%20application%20traffic.

S3 static web site hosting - https://docs.amazonaws.cn/en_us/AmazonS3/latest/user-guide/static-website-hosting.html


"You can host a static website on Amazon S3. On a static website, individual webpages include static content. A static website might also
contain client-side scripts. By contrast, a dynamic website relies on server-side processing, including server-side scripts such as PHP, JSP, or
ASP.NET. Amazon S3 does not support server-side scripting."
upvoted 1 times

" # andylogan 1 year ago


It's A C E - ElasticCache, RDS Multi AZ, Auto Scaling group
upvoted 2 times

" # student22 1 year ago


A,C,E
A - Session management
C - HA DB
E - HA Application
upvoted 1 times

" # tgv 1 year ago


AAA CCC EEE
---
upvoted 1 times

" # blackgamer 1 year ago


ACE is correct.
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with A,C,E
upvoted 1 times

" # Waiweng 1 year ago


it;s A,C,E
upvoted 6 times

" # nil3112 1 year ago


can AWS RDS supports SQL server or we have to put it on EC2 ?
upvoted 1 times

" # alisyech 1 year ago


A C E for sure
upvoted 1 times

" # kiev 1 year ago


ACE is the correct answer. S3 don't use NET applications as they are dynamic and that also rules out cloudfront.
upvoted 1 times

" # Kian1 1 year ago


going with ACE
upvoted 2 times

" # kopper2019 1 year ago


A,C,E without thinking twice about it
upvoted 2 times

Question #656 Topic 1

A company is using an existing orchestration tool to manage thousands of Amazon EC2 instances. A recent penetration test found a vulnerability
in the company's software stack. This vulnerability has prompted the company to perform a full evaluation of its current production environment.
The analysis determined that the following vulnerabilities exist within the environment:
✑ Operating systems with outdated libraries and known vulnerabilities are being used in production.
✑ Relational databases hosted and managed by the company are running unsupported versions with known vulnerabilities.
✑ Data stored in databases is not encrypted.
The solutions architect intends to use AWS Config to continuously audit and assess the compliance of the company's AWS resource
configurations with the company's policies and guidelines.
What additional steps will enable the company to secure its environments and track resources while adhering to best practices?

A. Use AWS Application Discovery Service to evaluate all running EC2 instances. Use the AWS CLI to modify each instance, and use EC2 user
data to install the AWS Systems Manager Agent during boot. Schedule patching to run as a Systems Manager Maintenance Windows task.
Migrate all relational databases to Amazon RDS and enable AWS KMS encryption.

B. Create an AWS CloudFormation template for the EC2 instances. Use EC2 user data in the CloudFormation template to install the AWS
Systems Manager Agent, and enable AWS KMS encryption on all Amazon EBS volumes. Have CloudFormation replace all running instances.
Use Systems Manager Patch Manager to establish a patch baseline and deploy a Systems Manager Maintenance Windows task to execute
AWS-RunPatchBaseline using the patch baseline.

C. Install the AWS Systems Manager Agent on all existing instances using the company's current orchestration tool. Use the Systems Manager
Run Command to execute a list of commands to upgrade software on each instance using operating system-specific tools. Enable AWS KMS
encryption on all Amazon EBS volumes.

D. Install the AWS Systems Manager Agent on all existing instances using the company's current orchestration tool. Migrate all relational
databases to Amazon RDS and enable AWS KMS encryption. Use Systems Manager Patch Manager to establish a patch baseline and deploy a
Systems Manager Maintenance Windows task to execute AWS-RunPatchBaseline using the patch baseline.

Correct Answer: D

Community vote distribution


D (100%)

" # liono Highly Voted $ 1 year, 1 month ago


D looks correct
upvoted 15 times
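
For readers unfamiliar with the patching piece of option D, here is a minimal boto3 sketch that runs the AWS-RunPatchBaseline document against tagged instances (the tag key/value are placeholders); in the answer itself this would be scheduled as a Maintenance Window task rather than invoked ad hoc:

import boto3

ssm = boto3.client("ssm")

# Run Patch Manager's AWS-RunPatchBaseline document against all instances that
# carry a given tag. "Scan" reports missing patches; "Install" applies them.
response = ssm.send_command(
    Targets=[{"Key": "tag:PatchGroup", "Values": ["production"]}],  # placeholder tag
    DocumentName="AWS-RunPatchBaseline",
    Parameters={"Operation": ["Install"]},
    Comment="Apply the approved patch baseline",
)
print(response["Command"]["CommandId"])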

" # Ebi Highly Voted $ 1 year ago


D for sure
upvoted 10 times

" # AzureDP900 Most Recent % 11 months, 1 week ago


D is correct answer !
upvoted 1 times

" # Kopa 1 year ago


D correct
upvoted 1 times

" # tgv 1 year ago


DDD
---
upvoted 1 times

" # Suresh108 1 year ago


B, C -eliminated, no mention of RDS
A - AWS Application Discovery Service involved with on premise migration , elimiate it.

choosing DDDDDD
upvoted 4 times

" # WhyIronMan 1 year ago


I'll go with D

upvoted 1 times
" # Waiweng 1 year ago
it;s D
upvoted 6 times

" # Kian1 1 year ago


going with D
upvoted 5 times

" # Bulti 1 year, 1 month ago


D is correct. You do not want to go with B because it's a lot of work to replace the current orchestration toll with cloud formation templates
upvoted 4 times

" # T14102020 1 year, 1 month ago


D is correct
upvoted 2 times

" # jackdryan 1 year, 1 month ago


I'll go with D
upvoted 3 times

" # liono 1 year, 1 month ago


You need to encrypt DB during creation.
upvoted 1 times

Question #657 Topic 1

A company wants to improve cost awareness for its Amazon EMR platform. The company has allocated budgets for each team's Amazon EMR
usage. When a budgetary threshold is reached, a notification should be sent by email to the budget office's distribution list. Teams should be able
to view their EMR cluster expenses to date. A solutions architect needs to create a solution that ensures the policy is proactively and centrally
enforced in a multi-account environment.
Which combination of steps should the solutions architect take to meet these requirements? (Choose two.)

A. Update the AWS CloudFormation template to include the AWS::Budgets::Budget::resource with the NotificationsWithSubscribers property.

B. Implement Amazon CloudWatch dashboards for Amazon EMR usage.

C. Create an EMR bootstrap action that runs at startup that calls the Cost Explorer API to set the budget on the cluster with the
GetCostForecast and NotificationsWithSubscribers actions.

D. Create an AWS Service Catalog portfolio for each team. Add each team's Amazon EMR cluster as an AWS CloudFormation template to their
Service Catalog portfolio as a Product.

E. Create an Amazon CloudWatch metric for billing. Create a custom alert when costs exceed the budgetary threshold.

Correct Answer: DE

Community vote distribution


AD (100%)

" # liono Highly Voted $ 1 year, 1 month ago


A&D
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-budgets-budget.html
upvoted 18 times

" # arulrajjayaraj 1 year, 1 month ago


Why not DE - custom alert when costs exceed the budgetary threshold
upvoted 1 times

" # oscargee 1 year ago


It's AWS::Budgets::Budget not AWS::Budgets::Budget::resource. Please read the html you posted more carefully. A is wrong!
upvoted 1 times

" # Waiweng 1 year ago


there's a typo in the answer, as it should be AWS::Budgets::Budget. The content of the answer is otherwise complete. The question wants the
requirement centrally managed, which A & D satisfy
upvoted 2 times
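
For what the budget-plus-notification part of option A amounts to, here is a minimal boto3 sketch (account ID, budget amount, and the distribution-list address are placeholders); the answer itself expresses the same thing declaratively in the CloudFormation template via the NotificationsWithSubscribers property:

import boto3

budgets = boto3.client("budgets")

budgets.create_budget(
    AccountId="123456789012",                               # placeholder
    Budget={
        "BudgetName": "team-emr-budget",
        "BudgetLimit": {"Amount": "1000", "Unit": "USD"},   # placeholder amount
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    # Email the budget office's distribution list when 80% of the budget is spent.
    NotificationsWithSubscribers=[{
        "Notification": {
            "NotificationType": "ACTUAL",
            "ComparisonOperator": "GREATER_THAN",
            "Threshold": 80,
            "ThresholdType": "PERCENTAGE",
        },
        "Subscribers": [{"SubscriptionType": "EMAIL", "Address": "budget-office@example.com"}],
    }],
)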

" # Anhdd Most Recent % 5 months ago


why not BE?
upvoted 1 times

" # Anhdd 5 months ago


sorry, typo. I mean why not DE?
upvoted 1 times

" # Netaji 7 months ago


A: Update the AWS CloudFormation template to include the AWS::Budgets::Budget::resource with the NotificationsWithSubscribers property.
upvoted 1 times

" # itznotme 8 months, 1 week ago


Selected Answer: AD
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-budgets-budget.html
upvoted 1 times

" # cldy 10 months, 1 week ago


A and D.
upvoted 1 times

" # AzureDP900 11 months, 1 week ago


I will go with A & D

You can use AWS Budgets to track your service costs and usage within AWS Service Catalog. You can associate
budgets with AWS Service Catalog products and portfolios.
AWS Budgets gives you the ability to set custom budgets that alert you when your costs or usage exceed (or are
forecasted to exceed) your budgeted amount.
If a budget is associated to a product, you can view information about the budget on the Products and Product
details page. If a budget is associated to a portfolio, you can view information about the budget on
the Portfolios and Portfolio details page.
When you click on a product or portfolio, you are taken to a detail page. These Portfolio detail and Product
detail pages have a section with detailed information about the associated budget. You can see the budgeted
amount, current spend, and forecasted spend. You also have the option to view budget details and edit the budget.
upvoted 3 times
" # andylogan 1 year ago
It's A D
upvoted 1 times

" # tgv 1 year ago


AAA DDD
---
upvoted 1 times

" # blackgamer 1 year ago


Yes, A & D.
upvoted 1 times

" # blackgamer 1 year ago


https://docs.aws.amazon.com/servicecatalog/latest/adminguide/catalogs_budgets.html#budgets-view

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-budgets-budget.html
upvoted 1 times

" # WhyIronMan 1 year ago


Changing to A,D
upvoted 2 times

" # Waiweng 1 year ago


it's A&D
upvoted 2 times

" # Amitv2706 1 year ago


Option E's language is not clear. It seems they want to create an alert manually when limit cross.

Any ways CF option is better in terms of centrally managing this requirement through script for each account.
upvoted 1 times

" # alisyech 1 year ago


i think A & D
upvoted 1 times

" # Kian1 1 year ago


going with AD
upvoted 2 times

" # Ebi 1 year ago


AD is answer
upvoted 4 times

" # Bulti 1 year, 1 month ago


A &D are the combination of steps required to meet the requirements
upvoted 3 times

" # rscloud 1 year, 1 month ago


A,D
D- Create Budget in AWS Budget associate it with AWS service catalog product.
https://docs.aws.amazon.com/servicecatalog/latest/adminguide/catalogs_budgets.html
upvoted 2 times

" # elf78 1 year ago


+1 for the link.
upvoted 1 times

Question #658 Topic 1

A company is migrating its on-premises systems to AWS. The user environment consists of the following systems:
✑ Windows and Linux virtual machines running on VMware.
Physical servers running Red Hat Enterprise Linux.

The company wants to be able to perform the following steps before migrating to AWS:
✑ Identify dependencies between on-premises systems.
✑ Group systems together into applications to build migration plans.
✑ Review performance data using Amazon Athena to ensure that Amazon EC2 instances are right-sized.
How can these requirements be met?

A. Populate the AWS Application Discovery Service import template with information from an on-premises configuration management
database (CMDB). Upload the completed import template to Amazon S3, then import the data into Application Discovery Service.

B. Install the AWS Application Discovery Service Discovery Agent on each of the on-premises systems. Allow the Discovery Agent to collect
data for a period of time.

C. Install the AWS Application Discovery Service Discovery Connector on each of the on-premises systems and in VMware vCenter. Allow the
Discovery Connector to collect data for one week.

D. Install the AWS Application Discovery Service Discovery Agent on the physical on-premises servers. Install the AWS Application Discovery
Service Discovery Connector in VMware vCenter. Allow the Discovery Agent to collect data for a period of time.

Correct Answer: C

Community vote distribution


D (83%) B (17%)

" # pankajrawat Highly Voted $ 6 months, 1 week ago


D is the correct ans for me
upvoted 6 times

" # snakecharmer2 Highly Voted $ 6 months, 2 weeks ago


Selected Answer: D
The agentless Discovery Connector can only work with VMware; you still need the agent for the physical servers.
"AWS Application Discovery Service supports agent-based and agentless modes of operation. With the agentless discovery, VMware customers
collect VM configuration and performance profiles without deploying the AWS Application Discovery Agent on each host, which accelerates data
collection. Customers in a non-VMware environment or that need additional information, like network dependencies and information about
running processes, may install the Application Discovery Agent on servers and virtual machines (VMs) to collect data" (taken from the Application
Discovery FAQ)
upvoted 5 times

" # wassb Most Recent % 2 weeks, 2 days ago


Selected Answer: B
I think it's B since we need the agent to evaluate dependencies between on premise system.
upvoted 1 times

Question #659 Topic 1

A company hosts a web application on AWS in the us-east-1 Region. The application servers are distributed across three Availability Zones behind
an Application
Load Balancer. The database is hosted in a MySQL database on an Amazon EC2 instance. A solutions architect needs to design a cross-Region data
recovery solution using AWS services with an RTO of less than 5 minutes and an RPO of less than 1 minute. The solutions architect is deploying
application servers in us-west-2, and has configured Amazon Route 53 health checks and DNS failover to us-west-2.
Which additional step should the solutions architect take?

A. Migrate the database to an Amazon RDS for MySQL instance with a cross-Region read replica in us-west-2.

B. Migrate the database to an Amazon Aurora global database with the primary in us-east-1 and the secondary in us-west-2.

C. Migrate the database to an Amazon RDS for MySQL instance with a Multi-AZ deployment.

D. Create a MySQL standby database on an Amazon EC2 instance in us-west-2.

Correct Answer: B

" # liono Highly Voted $ 1 year, 1 month ago


B
https://aws.amazon.com/rds/aurora/global-database/
upvoted 11 times

" # Ebi Highly Voted $ 1 year ago


B is my choice
upvoted 6 times

" # AkaAka4 Most Recent % 10 months, 3 weeks ago


Why not C though? :/
upvoted 1 times

" # pixepe 2 months, 1 week ago


Because Multi-AZ provides reliability w.r.t Availability Zone (AZ) where as question mentions reliability w.r.t Regions (AWS regions)
upvoted 2 times

" # Tokyoboy 10 months ago


I think not C because the application is deployed in two AZs. Multi-AZ RDS works in a primary-standby instance fashion. It will not be efficient
for the application as it is deployed in 2 AZs.
upvoted 2 times

" # AzureDP900 11 months, 1 week ago


B
Amazon Aurora Global Database is designed for globally distributed applications, allowing a single Amazon Aurora
database to span multiple AWS regions. It replicates your data with no impact on database performance, enables fast
local reads with low latency in each region, and provides disaster recovery from region-wide outages
upvoted 1 times

" # andylogan 1 year ago


It's B - RTO is 1 mins, and RPO is 1 second.
upvoted 1 times

" # tgv 1 year ago


BBB
---
upvoted 1 times

" # CloudFan 1 year ago


Aurora is must for RTO of 1 min. here RTO is 5 mins. you can easily promote a RDS read replica in 5 mins. RPO of 1 min is met by both. Why not
A?
upvoted 5 times

" # WhyIronMan 1 year ago


I'll go with B
upvoted 2 times

" # Waiweng 1 year ago

it's B
upvoted 3 times
" # alisyech 1 year ago
should be B, https://aws.amazon.com/rds/aurora/global-database/
upvoted 2 times

" # Kian1 1 year ago


going with B
upvoted 3 times

" # MichaelHuang 1 year ago


Is A an option? For B, RTO is 1 mins, and RPO is 1 second.
upvoted 1 times

" # 01037 1 year ago


Unlike an Amazon RDS Multi-AZ configuration, failover to a Read Replica is not an automated process, I don't think A is an option.
upvoted 6 times

" # Bulti 1 year ago


Answer is B.
upvoted 1 times

" # petebear55 1 year ago


Could also be A ...
upvoted 1 times

" # T14102020 1 year, 1 month ago


Its only Aurora for RTO of 1 min. Answer is B.
upvoted 1 times

" # jackdryan 1 year, 1 month ago


I'll go with B
upvoted 4 times

" # gookseang 1 year, 1 month ago


B for sure
upvoted 1 times

" # Kelvin1477 1 year, 1 month ago


Agree esp with RTO of 1 min
upvoted 1 times

Question #660 Topic 1

A company wants to migrate its on-premises data center to the AWS Cloud. This includes thousands of virtualized Linux and Microsoft Windows
servers, SAN storage, Java and PHP applications with MYSQL, and Oracle databases. There are many dependent services hosted either in the
same data center or externally.
The technical documentation is incomplete and outdated. A solutions architect needs to understand the current environment and estimate the
cloud resource costs after the migration.
Which tools or services should solutions architect use to plan the cloud migration? (Choose three.)

A. AWS Application Discovery Service

B. AWS SMS

C. AWS x-Ray

D. AWS Cloud Adoption Readiness Tool (CART)

E. Amazon Inspector

F. AWS Migration Hub

Correct Answer: BCF

Community vote distribution


ADF (100%)

" # keos Highly Voted $ 1 year, 1 month ago


ADF, as for planning
upvoted 12 times

" # smartassX 1 year, 1 month ago


F is for visibility/ current status during migration. --> "AWS Migration Hub provides a single place to monitor migrations in any AWS region
where your migration tools are available. There is no additional charge"
upvoted 4 times

" # ExtHo Highly Voted $ 1 year ago


ADF
- Use AWS Application Discovery Service to gather information about the running virtual machines and running applications inside the servers.

- Use the AWS Cloud Adoption Readiness Tool (CART) to generate a migration assessment report to identify gaps in organizational skills and
processes.

- Use AWS Migration Hub to discover and track the status of the application migration across AWS and partner solutions.
upvoted 7 times

" # hilft Most Recent % 3 months, 2 weeks ago


ADF.
Directly from Job Bonso's exam
upvoted 1 times

" # aandc 4 months, 1 week ago


Selected Answer: ADF
keyword:planning
upvoted 2 times

" # tartarus23 6 months, 3 weeks ago


Selected Answer: ADF
A. AWS Application Discovery Service
D. AWS Cloud Adoption Readiness Tool (CART)
F. AWS Migration Hub

These AWS tools and questionnaires are very helpful for assessment and planning before doing the migration activity.
upvoted 1 times

" # pititcu667 9 months, 4 weeks ago


Selected Answer: ADF
B is more for implementation the question talk about planning.
upvoted 3 times

" # edgarrodriguez2303 10 months, 3 weeks ago


Selected Answer: ADF
The best option
upvoted 1 times

" # AzureDP900 11 months, 1 week ago


A,D, F is correct
upvoted 1 times

" # pcops 11 months, 2 weeks ago


A D and F
upvoted 1 times

" # andylogan 1 year ago


It's A D F - refer ExtHo
upvoted 1 times

" # Kopa 1 year ago


Also for A,D,F
upvoted 1 times

" # tgv 1 year ago


AAA DDD FFF
---
upvoted 2 times

" # blackgamer 1 year ago


A , D and F
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with A,D,F
upvoted 2 times

" # Waiweng 1 year ago


it's A,D,F
upvoted 4 times

" # alisyech 1 year ago


im going with A, D & F
upvoted 1 times

" # Kian1 1 year ago


going with ADF
upvoted 3 times

Question #661 Topic 1

A company decided to purchase Amazon EC2 Reserved Instances. A solutions architect is tasked with implementing a solution where only the
master account in
AWS Organizations is able to purchase the Reserved Instances. Current and future member accounts should be blocked from purchasing Reserved
Instances.
Which solution will meet these requirements?

A. Create an SCP with the Deny effect on the ec2:PurchaseReservedInstancesOffering action. Attach the SCP to the root of the organization.

B. Create a new organizational unit (OU). Move all current member accounts to the new OU. Create an SCP with the Deny effect on the
ec2:PurchaseReservedInstancesOffering action. Attach the SCP to the new OU.

C. Create an AWS Config rule event that triggers automation that will terminate any Reserved Instances launched by member accounts.

D. Create two new organizational units (OUs): OU1 and OU2. Move all member accounts to OU2 and the master account to OU1. Create an SCP
with the Allow effect on the ec2:PurchaseReservedInstancesOffering action. Attach the SCP to OU1.

Correct Answer: C

Community vote distribution


A (100%)

" # XRiddlerX Highly Voted $ 1 year, 1 month ago


Answer is A
B is INCORRECT because this doesn't satisfy the requirement for future accounts possibility in OTHER OU's that might be created. The SCP in
this answer would only affect the OU the SCP is applied too.
C is INCORRECT because this allows accounts to continue to purchase RIs and the requirement is to BLOCK from purchasing
D is INCORRECT because the "master account (a.k.a management account) is the root account of the org and should not and can not be in an
OU. See the following diagram here (https://docs.aws.amazon.com/organizations/latest/userguide/orgs_getting-started_concepts.html)

A is CORRECT because applying the explicit deny on the API and attaching it to the root org allows for current and future account in ANY OU to
not be able to purchase RI's.
upvoted 27 times

" # blackgamer 1 year ago


Answer is D. It is incorrect that "master account can not be in an OU". It is tested and it is allowed. A is incorrect because it will block
everyone from purchasing reserved instances including the management account if you attach to root.
upvoted 2 times

" # blackgamer 1 year ago


Sorry , after having more research, A is correct because "SCPs don't affect users or roles in the management account. They affect only the
member accounts in your organization."
upvoted 7 times

" # student2020 1 year ago


Just a correction, you can move a management account into an OU, I just tested it.
upvoted 2 times

" # cpd 1 year, 1 month ago


I was about to go with B but after reading above, A makes sense to ensure any future accounts to also not buy RI's. With A. member accounts
cannot purchase RI but, master account can still purchase RI, "SCPs cannot restrict the Master account of the Organization. This is a primary
reason why it is best practice not to use the Organization Master account for anything other than Organization activities."
upvoted 6 times

" # student22 1 year ago


A is correct.
SCPs don't affect users or roles in the management account. They affect only the member accounts in your organization.
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html
upvoted 2 times

" # T14102020 Highly Voted $ 1 year, 1 month ago


https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html
SCPs don't affect users or roles in the management (master) account. They affect only the member accounts in your organization.

So correct answer is A.
upvoted 16 times

" # 01037 1 year ago


Good point.
upvoted 1 times
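
To make option A concrete, here is a minimal boto3 sketch that creates the deny SCP and attaches it to the organization root (the policy name and description are arbitrary examples). Because SCPs do not affect the management (master) account, that account can still purchase Reserved Instances:

import json
import boto3

org = boto3.client("organizations")

scp_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyRIPurchases",
        "Effect": "Deny",
        "Action": "ec2:PurchaseReservedInstancesOffering",
        "Resource": "*",
    }],
}

policy = org.create_policy(
    Name="deny-reserved-instance-purchase",        # arbitrary example name
    Description="Block member accounts from buying Reserved Instances",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp_document),
)

# Attaching at the root covers every current and future member account and OU.
root_id = org.list_roots()["Roots"][0]["Id"]
org.attach_policy(PolicyId=policy["Policy"]["PolicySummary"]["Id"], TargetId=root_id)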
" # cldy Most Recent % 11 months ago
A. Create an SCP with the Deny effect on the ec2:PurchaseReservedInstancesOffering action. Attach the SCP to the root of the organization.
upvoted 1 times

" # AzureDP900 11 months, 1 week ago


A is right
upvoted 1 times

" # acloudguru 11 months, 3 weeks ago


Selected Answer: A
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html
upvoted 1 times

" # andylogan 1 year ago


It's A - The master account of the organization is not affected by any SCPs that are attached either to it or to any root or OU the master account
might be in.
upvoted 1 times

" # tgv 1 year ago


AAA
---
upvoted 1 times

" # denccc 1 year ago


go with A
upvoted 1 times

" # tekkart 1 year ago


The answer should be D
- Rights work as intersection between Root, OU SCP and IAM Policy
- Explicit Deny > Explicit Allow > Implicit Deny > Implicit Allow

A : Explicit Deny. Blocks everybody, none can purchase instances


B : Same
C : As XRiddlerX states, ruled out
D : Explicit Allow in OU1 SCP for master account, and Implicit Deny in OU2 SCP - as long as no Explicit Allow on OU2 SCP, works fine.
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with A
upvoted 2 times

" # Akhil254 1 year ago


A Correct
upvoted 1 times

" # Pb55 1 year ago


If you assign SCP to root, how does the master account buy reserved instances? It will be blocked as well. Has to be B.
upvoted 2 times

" # OAS1 1 year ago


SCPs affect only member accounts in the organization hence applying it on root will not impact master account. Answer is 𝗔.
upvoted 2 times

" # hk436 1 year ago


I don't think it's A!
AWS strongly recommends that you don't attach SCPs to the root of your organization without thoroughly testing the impact that the policy has
on accounts. Instead, create an OU that you can move your accounts into one at a time, or at least in small numbers, to ensure that you don't
inadvertently lock users out of key services.

https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html
upvoted 4 times

" # Waiweng 1 year ago


it's A
upvoted 2 times

" # tvs 1 year ago


AWS strongly recommends that you don't attach SCPs to the root of your organization without thoroughly testing the impact that the policy has
on accounts. Instead, create an OU that you can move your accounts into one at a time, or at least in small numbers, to ensure that you don't

inadvertently lock users out of key services. https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html


However we dont know how many OU currently exist whether all current belongs to one OU , or what are existing SCP applied old OU's. So I will
go with A
upvoted 2 times
" # victornj 1 year ago
Putting a deny on the root does not make sense; it means you have to add explicit allows for other services. Therefore A cannot be the answer.
C & D are incorrect .. therefore it seems B is the right answer
upvoted 3 times

Question #662 Topic 1

A company is using multiple AWS accounts. The DNS records are stored in a private hosted zone for Amazon Route 53 in Account A. The
company's applications and databases are running in Account B.
A solutions architect will deploy a two-tier application in a new VPC. To simplify the configuration, the db.example.com CNAME record set for the
Amazon RDS endpoint was created in a private hosted zone for Amazon Route 53.
During deployment, the application failed to start. Troubleshooting revealed that db.example.com is not resolvable on the Amazon EC2 instance.
The solutions architect confirmed that the record set was created correctly in Route 53.
Which combination of steps should the solutions architect take to resolve this issue? (Choose two.)

A. Deploy the database on a separate EC2 instance in the new VPC. Create a record set for the instance's private IP in the private hosted zone.

B. Use SSH to connect to the application tier EC2 instance. Add an RDS endpoint IP address to the /etc/resolv.conf file.

C. Create an authorization to associate the private hosted zone in Account A with the new VPC in Account B.

D. Create a private hosted zone for the example.com domain in Account B. Configure Route 53 replication between AWS accounts.

E. Associate a new VPC in Account B with a hosted zone in Account A. Delete the association authorization in Account A.

Correct Answer: BE

Community vote distribution


CE (100%)

" # ali98 Highly Voted $ 1 year, 1 month ago


C&E
https://aws.amazon.com/premiumsupport/knowledge-center/private-hosted-zone-different-account/
upvoted 37 times

" # student22 1 year ago


C,E
Authorize --> Associate --> Remove Authorization
As in the ali98's link.
upvoted 3 times

" # cloudgc 1 year, 1 month ago


Perfect!
upvoted 4 times
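
Following the flow in ali98's link, here is a minimal boto3 sketch of the C + E sequence (authorize in Account A, associate from Account B, then remove the authorization); the hosted zone ID, VPC details, and profile names are placeholders:

import boto3

HOSTED_ZONE_ID = "Z1234567890"                              # private hosted zone in Account A (placeholder)
VPC = {"VPCRegion": "us-east-1", "VPCId": "vpc-0abc123"}    # new VPC in Account B (placeholder)

# Step C: in Account A, authorize Account B's VPC to be associated with the zone.
r53_account_a = boto3.Session(profile_name="account-a").client("route53")
r53_account_a.create_vpc_association_authorization(HostedZoneId=HOSTED_ZONE_ID, VPC=VPC)

# Step E (part 1): in Account B, associate the VPC with the hosted zone.
r53_account_b = boto3.Session(profile_name="account-b").client("route53")
r53_account_b.associate_vpc_with_hosted_zone(HostedZoneId=HOSTED_ZONE_ID, VPC=VPC)

# Step E (part 2): back in Account A, the authorization is no longer needed.
r53_account_a.delete_vpc_association_authorization(HostedZoneId=HOSTED_ZONE_ID, VPC=VPC)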

" # RVD Most Recent % 7 months, 3 weeks ago


Selected Answer: CE
Ans: C & E
upvoted 2 times

" # AzureDP900 11 months, 1 week ago


C, E is correct
upvoted 1 times

" # andylogan 1 year ago


It's C E
A Authorize B --> B Associate A --> A Remove Authorization
upvoted 2 times

" # tgv 1 year ago


CCC EEE
---
upvoted 1 times

" # blackgamer 1 year ago


C & E for me.
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with C,E
upvoted 1 times

" # ibrahimsow 1 year ago


For sure, the correct answers are C & E. https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/hosted-zone-private-associate-vpcs-
different-accounts.html
upvoted 2 times

" # Waiweng 1 year ago


it's C,E
upvoted 3 times

" # Kian1 1 year ago


going for CE
upvoted 3 times

" # Ebi 1 year ago


CE for sure
upvoted 4 times

" # Bulti 1 year ago


C&E is the right answer
upvoted 2 times

" # rscloud 1 year ago


CE for sure
upvoted 2 times

" # T14102020 1 year, 1 month ago


For sure CE
upvoted 2 times

" # jackdryan 1 year, 1 month ago


I'll go with C,E
Thanks to ali98 for providing the right on link
upvoted 3 times

Question #663 Topic 1

A solutions architect needs to advise a company on how to migrate its on-premises data processing application to the AWS Cloud. Currently, users
upload input files through a web portal. The web server then stores the uploaded files on NAS and messages the processing server over a
message queue. Each media file can take up to 1 hour to process. The company has determined that the number of media files awaiting
processing is significantly higher during business hours, with the number of files rapidly declining after business hours.
What is the MOST cost-effective migration recommendation?

A. Create a queue using Amazon SQS. Configure the existing web server to publish to the new queue. When there are messages in the queue,
invoke an AWS Lambda function to pull requests from the queue and process the files. Store the processed files in an Amazon S3 bucket.

B. Create a queue using Amazon MQ. Configure the existing web server to publish to the new queue. When there are messages in the queue,
create a new Amazon EC2 instance to pull requests from the queue and process the files. Store the processed files in Amazon EFS. Shut down
the EC2 instance after the task is complete.

C. Create a queue using Amazon MQ. Configure the existing web server to publish to the new queue. When there are messages in the queue,
invoke an AWS Lambda function to pull requests from the queue and process the files. Store the processed files in Amazon EFS.

D. Create a queue using Amazon SQS. Configure the existing web server to publish to the new queue. Use Amazon EC2 instances in an EC2
Auto Scaling group to pull requests from the queue and process the files. Scale the EC2 instances based on the SQS queue length. Store the
processed files in an Amazon S3 bucket.

Correct Answer: D

Community vote distribution


D (100%)

" # liono Highly Voted $ 1 year, 1 month ago


As the length of processing the files take 1 hour, Lambda seems to be out of question, then we are left with EC2 option, D seems to be correct as
we are auto-scaling EC2
upvoted 31 times
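
For reference, the "scale the EC2 instances based on the SQS queue length" part of option D can be wired up roughly as in the boto3 sketch below; the group name, queue name, and thresholds are placeholder assumptions, and a target-tracking policy on a backlog-per-instance metric is a common alternative:

import boto3

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

# Add two instances whenever the alarm below fires.
policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName="media-workers-asg",       # placeholder
    PolicyName="scale-out-on-queue-depth",
    PolicyType="SimpleScaling",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=2,
    Cooldown=300,
)

# Alarm on the SQS queue length (ApproximateNumberOfMessagesVisible).
cloudwatch.put_metric_alarm(
    AlarmName="media-queue-backlog-high",
    Namespace="AWS/SQS",
    MetricName="ApproximateNumberOfMessagesVisible",
    Dimensions=[{"Name": "QueueName", "Value": "media-processing-queue"}],  # placeholder
    Statistic="Average",
    Period=60,
    EvaluationPeriods=2,
    Threshold=100,                                   # placeholder threshold
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[policy["PolicyARN"]],
)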

" # Ebi Highly Voted $ 1 year ago


Answer is D
upvoted 7 times

" # hilft Most Recent % 3 months, 1 week ago


keyword here is sqs length + asg
upvoted 1 times

" # bobsmith2000 5 months, 2 weeks ago


Selected Answer: D
Right by the book
upvoted 1 times

" # tyrk 7 months, 3 weeks ago


Selected Answer: D
DDDDDDDDDDDD
upvoted 1 times

" # jj22222 9 months ago


D looks right
upvoted 1 times

" # kaush4u 9 months, 2 weeks ago


https://aws.amazon.com/blogs/compute/operating-lambda-performance-optimization-part-1/
upvoted 1 times

" # lulz111 9 months, 3 weeks ago


D is correct even though there is a typo (auto seating group) its obvious when you look at the other answers. The Lambda execution limit is 15
minutes which instantly rules out two. The remaining answer that suggests creating a new EC2 instance when something goes in the queue is not
a good solution (slow).
upvoted 1 times

" # cldy 10 months, 1 week ago

D is correct.
upvoted 1 times
" # cldy 11 months ago
D. Create a queue using Amazon SQS. Configure the existing web server to publish to the new queue. Use Amazon EC2 instances in an EC2
Auto Seating group to pull requests from the queue and process the files. Scale the EC2 instances based on the SQS queue length. Store the
processed files in an Amazon S3 bucket.
upvoted 1 times

" # KiraguJohn 11 months, 1 week ago


I would have chosen Lambda until i saw the processing time. Therefore i will also go with D
upvoted 1 times

" # AzureDP900 11 months, 1 week ago


D is correct, initially i thought A but it is not scalable.
upvoted 1 times

" # acloudguru 11 months, 1 week ago


this is really a easy one, hope I can have it as complex one in my exam
upvoted 1 times

" # andylogan 1 year ago


It's D
upvoted 1 times

" # tgv 1 year ago


DDD
---
upvoted 1 times

" # Suresh108 1 year ago


going for DDDDD

eliminate A & C for lambda (15 mins timeout)


eliminate B - solution is incomplete, it creates EC2 processes and shutdown, what triggers up new EC2??

D - is correct
upvoted 2 times

" # WhyIronMan 1 year ago


MOST cost-effective
upvoted 2 times


Question #664 Topic 1

A company has a media catalog with metadata for each item in the catalog. Different types of metadata are extracted from the media items by an
application running on AWS Lambda. Metadata is extracted according to a number of rules with the output stored in an Amazon ElastiCache for
Redis cluster. The extraction process is done in batches and takes around 40 minutes to complete.
The update process is triggered manually whenever the metadata extraction rules change.
The company wants to reduce the amount of time it takes to extract metadata from its media catalog. To achieve this, a solutions architect has
split the single metadata extraction Lambda function into a Lambda function for each type of metadata.
Which additional steps should the solutions architect take to meet the requirements?

A. Create an AWS Step Functions workflow to run the Lambda functions in parallel. Create another Step Functions workflow that retrieves a list
of media items and executes a metadata extraction workflow for each one.

B. Create an AWS Batch compute environment for each Lambda function. Configure an AWS Batch job queue for the compute environment.
Create a Lambda function to retrieve a list of media items and write each item to the job queue.

C. Create an AWS Step Functions workflow to run the Lambda functions in parallel. Create a Lambda function to retrieve a list of media items
and write each item to an Amazon SQS queue. Configure the SQS queue as an input to the Step Functions workflow.

D. Create a Lambda function to retrieve a list of media items and write each item to an Amazon SQS queue. Subscribe the metadata extraction
Lambda functions to the SQS queue with a large batch size.

Correct Answer: C

Community vote distribution


A (100%)

" # avland Highly Voted $ 1 year, 1 month ago


A.

B is nearly nonsensical with how it uses Batch.


C wouldn't work since Step Functions operate with JSON inputs/outputs. SQS wouldn't work as an input quite like that.
D wouldn't work because the metadata extraction Lambda functions are non-identical workers, so one function will pull a message containing
the media item and process it, but then the other functions won't see that item and process their part of the metadata for it. For something like
that to work, you'd actually want a separate SQS queue for each type of metadata function, and use an SNS topic to publish each item to each
queue.
upvoted 18 times
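
For readers who want to see what answer A's parallel fan-out looks like, below is a hedged sketch of a Step Functions state machine with a Parallel state that runs one Lambda task per metadata type; the function ARNs, role ARN, and branch names are hypothetical, and the outer workflow that iterates over media items is omitted.

# A hedged sketch of answer A: a Parallel state fans out to one Lambda task per metadata type.
import json
import boto3

lambda_arn = "arn:aws:lambda:us-east-1:111122223333:function:{}"   # hypothetical functions

definition = {
    "StartAt": "ExtractMetadata",
    "States": {
        "ExtractMetadata": {
            "Type": "Parallel",
            "End": True,
            "Branches": [
                {"StartAt": "ExtractExif",
                 "States": {"ExtractExif": {"Type": "Task",
                                            "Resource": lambda_arn.format("extract-exif"),
                                            "End": True}}},
                {"StartAt": "ExtractCaptions",
                 "States": {"ExtractCaptions": {"Type": "Task",
                                                "Resource": lambda_arn.format("extract-captions"),
                                                "End": True}}},
            ],
        }
    },
}

sfn = boto3.client("stepfunctions")
sfn.create_state_machine(
    name="metadata-extraction",                                        # per-item workflow
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::111122223333:role/stepfunctions-exec-role",  # placeholder role
)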

" # dmscountera 2 weeks, 4 days ago


https://docs.aws.amazon.com/step-functions/latest/dg/concepts-invoke-sfn.html
Points to A
upvoted 1 times

" # alexmena1981 1 year, 1 month ago


You are incorrect,check https://www.youtube.com/watch?v=tPYa1r_cZ2E , Ans. C
upvoted 1 times

" # MrCarter 1 year ago


Ans AAAAAAAA
upvoted 1 times

" # gbrnq 1 year, 1 month ago


That video shows how to trigger SQS from SF.. so it’s not relevant. Ans A
upvoted 3 times

" # MrCarter 1 year ago


No, you are incorrect, SQS cannot be the input of an AWS Step Functions workflow.
upvoted 4 times

" # liono Highly Voted $ 1 year, 1 month ago


C seems to be correct answer
upvoted 12 times

" # keos 1 year, 1 month ago


SQS queue cannot trigger Step function
upvoted 8 times


" # caveman712 Most Recent % 1 month ago


Selected Answer: A
Workflow task can trigger other workflows: https://docs.aws.amazon.com/step-functions/latest/dg/concepts-nested-workflows.html
upvoted 1 times

" # Enigmaaaaaa 4 months ago


Selected Answer: A
Only A and C are valid but Answer is A since SQS cannot trigger SFW ...
https://docs.aws.amazon.com/step-functions/latest/dg/concepts-invoke-sfn.html
Only lambda and another SF can do it (as described in A)
upvoted 3 times

" # Dohecadi 8 months ago


Selected Answer: A
C is wrong. SQS has to start a Lambda function first, which in turn will start the Step Function.
upvoted 1 times

" # frankzeng 10 months ago


C. when there are new items in the media catalog, a lambda function retrieve the list of media items and write the item information into SQS. The
step function workflow read the item from SQS and run the LAMBDA functions in parallel.
upvoted 1 times

" # wem 11 months ago


Ans C
https://www.youtube.com/watch?v=tPYa1r_cZ2E
upvoted 2 times

" # wassb 3 weeks, 4 days ago


this step function does broadcast to SQS ... Useless LINK
upvoted 1 times

" # AzureDP900 11 months, 1 week ago


C

The best solution presented is to use a combination of AWS Step Functions and Amazon SQS. This results in each
Lambda function being able to run in parallel and use a queue for buffering the jobs.
CORRECT: “Create an AWS Step Functions workflow to run the Lambda functions in parallel. Create a Lambda
function to retrieve a list of files and write each item to an Amazon SQS queue. Configure the SQS queue as an input to the Step Functions
workflow” is the correct answer
upvoted 2 times

" # AzureDP900 11 months, 1 week ago


c is correct
upvoted 1 times

" # tiana528 11 months, 2 weeks ago


D. It is simple, straightforward. Stepfunctions is not needed here. lambda triggered by SQS can run in parallel pretty well.
upvoted 1 times

" # acloudguru 11 months, 3 weeks ago


Selected Answer: A
SQS needs lambda to trigger step function, can not do it directly. So C is not right.
upvoted 2 times

" # StelSen 1 year ago


Option-C. Taken from AWS Website. https://docs.aws.amazon.com/step-functions/latest/dg/sample-project-express-high-volume-sqs.html (We
can process items from SQS).
upvoted 2 times

" # andylogan 1 year ago


It's A - SQS queue is not an input to the Step Functions workflow
upvoted 2 times

" # tgv 1 year ago


AAA
---
upvoted 2 times

" # blackgamer 1 year ago


I will go with A.
upvoted 1 times


" # mericov 1 year ago


A - https://aws.amazon.com/blogs/compute/accelerating-workloads-using-parallelism-in-aws-step-functions/
upvoted 3 times

" # kirrim 1 year ago


This is absolutely the right answer. The entire question is about reducing the overall time to extract media by splitting it into multiple parallel
processes, which is exactly what this blog post describes.
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with A
upvoted 2 times


Question #665 Topic 1

A utility company wants to collect usage data every 5 minutes from its smart meters to facilitate time-of-use metering. When a meter sends data
to AWS, the data is sent to Amazon API Gateway, processed by an AWS Lambda function and stored in an Amazon DynamoDB table. During the
pilot phase, the Lambda functions took from 3 to 5 seconds to complete.
As more smart meters are deployed, the Engineers notice the Lambda functions are taking from 1 to 2 minutes to complete. The functions are
also increasing in duration as new types of metrics are collected from the devices. There are many ProvisionedThroughputExceededException
errors while performing PUT operations on DynamoDB, and there are also many TooManyRequestsException errors from Lambda.
Which combination of changes will resolve these issues? (Choose two.)

A. Increase the write capacity units to the DynamoDB table.

B. Increase the memory available to the Lambda functions.

C. Increase the payload size from the smart meters to send more data.

D. Stream the data into an Amazon Kinesis data stream from API Gateway and process the data in batches.

E. Collect data in an Amazon SQS FIFO queue, which triggers a Lambda function to process each message.

Correct Answer: AB

Community vote distribution


AD (100%)

" # keos Highly Voted $ 1 year, 1 month ago


would go for AD
upvoted 23 times

" # tuananhngo 1 year ago


I DONT THINK SO. THE QUESTION DOES NOT MENTION DATA SHOULD BE PROCESSED IN REAL-TIME.
I AM GOING WITH AB.
upvoted 6 times

" # Bulti Highly Voted $ 1 year, 1 month ago


E is out as IoT core cannot use Amazon SQS FIFO as target. C is out as increasing the payload size would not necessarily result in reduced
volume over a period of time. Between B and D, I would go with D as batching the data before sending to Lambda would result in reducing the
concurrency which is the reason for TooManyRequestsException. A is correct because Lambda is writing to Dynamo DB and batching wouldn't
help with the rate at which Lambda is writing data to DynamoDB which is the cause of ProvisionedThroughputExceededException. So need to
increase the WCU in DynamoDB. Correct answer is A and D.
upvoted 18 times
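
A rough sketch of what A and D translate to in boto3 is shown below; the table name, capacity numbers, stream ARN, function name, and batch size are illustrative assumptions, not values from the question.

# Sketch of the two fixes (A and D).
import boto3

# A: raise the table's provisioned write capacity (or enable DynamoDB auto scaling instead).
dynamodb = boto3.client("dynamodb")
dynamodb.update_table(
    TableName="meter-readings",
    ProvisionedThroughput={"ReadCapacityUnits": 50, "WriteCapacityUnits": 500},
)

# D: have the Lambda function consume the Kinesis stream in batches instead of one
# API Gateway invocation per meter reading.
lambda_client = boto3.client("lambda")
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:kinesis:us-east-1:111122223333:stream/meter-data",  # placeholder
    FunctionName="process-meter-batch",
    StartingPosition="LATEST",
    BatchSize=500,   # one invocation handles many readings, cutting concurrent executions
)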

" # gnic Most Recent % 2 months, 1 week ago


Selected Answer: AD
AD
D - The question talk about "tooManyRequest". Kinesis can batch data
upvoted 2 times

" # johnnsmith 8 months, 2 weeks ago


D is vague. Does it use the same Lambda function? If yes, it doesn't work. If you do a batch of 10, it will take 20 minutes to finish. If a new design
with EC2 is allowed, D is correct. The question says "modification" which implies same Lambda function. Then B is correct. Then what is
currently memory size? If it is already 10GB, B is wrong. Overall, it is a badly worded question.
upvoted 1 times

" # Ishu_awsguy 9 months ago


I would go for A and B.
The error on DynamoDb is because of the resources constraint since the requests are too high.
A for increasing WCU

B is supported by https://aws.amazon.com/premiumsupport/knowledge-center/lambda-troubleshoot-throttling/
The blog quotes below.
Check for spikes in Duration metrics for your function

Concurrency depends on function duration. If your function code is taking too long to complete, then there might not be enough compute
resources.

Try increasing the function's memory setting. Then, use AWS X-Ray and CloudWatch Logs to isolate the cause of duration increases

D should not be ideal because it changes the whole architecture and will induce more latency I believe.


upvoted 2 times

" # joancarles 1 month, 3 weeks ago


Basically AWS Lambda has a default safety throttle of 100 concurrent executions per account per region. Increasing the lambda size doesn't
solve the root problem, so D seems to be a better option.
upvoted 2 times

" # cldy 11 months ago


A. Increase the write capacity units to the DynamoDB table.
D. Stream the data into an Amazon Kinesis data stream from API Gateway and process the data in batches.
upvoted 2 times

" # AzureDP900 11 months, 1 week ago


A,D seems to be correct
upvoted 1 times

" # Bigbearcn 1 year ago


AD.
https://alienattack.workshop.aws/en/short-labs/kinesis/300-ingestion-to-dynamodb.html
upvoted 1 times

" # andylogan 1 year ago


It's A D
upvoted 1 times

" # student22 1 year ago


B,D
A good explanation is in XRiddlerX's answer below.
upvoted 3 times

" # tgv 1 year ago


AAA DDD
---
upvoted 1 times

" # blackgamer 1 year ago


It is A and D.
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go for A,D
upvoted 2 times

" # Akhil254 1 year ago


AD Correct
upvoted 3 times

" # MrCarter 1 year ago


This is straight out of Jon Bonso's exam questions. Answers are A and D
upvoted 2 times

" # kpcert 1 year ago


Ans : A & B
There are 2 issues here.
Issue 1 - Lambda function started taking more time when the load increases. Fix : Increase Lambda CPU by increasing the memory.
Issue 2 - TooManyRequestsException from DynamoDB. Fix: Increase WCU of DynamoDB

Since this change has already passed the pilot phase and the issue is happening in the production workload, the simple fix should be
considered.
upvoted 9 times

" # kirrim 1 year ago


+1

To increase CPU for a Lambda function, oddly enough, you give it more memory: https://aws.amazon.com/blogs/compute/operating-lambda-
performance-optimization-part-2/
(This is the same kind of indirect performance increase by adjusting something seemingly unrelated like increasing an EBS disk's IOPS by
increasing the disk size.)

The "ProvisionedThroughputExceeded" exception is in the SDK the Lambda function is using to write to DynamoDB. When DynamoDB can't
keep up, it throws that error back to Lambda, and Lambda logs it. But it's indicating that you've run out of Write Capacity Units in DDB:

https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/dynamodbv2/model/ProvisionedThroughputExceededException.html


upvoted 1 times
" # pradhyumna 1 year ago
AB may still be correct. I think the hint here is that the data collection is not real time instead every 5 minutes which is kind of queueing, so we
would not really need an SQS or a KDS. So, by simply increasing the memory, lambda can process faster and since it is processing faster, an
increase in the WCU should really fix the issue.
upvoted 4 times

" # kpcert 1 year ago


I Agree
upvoted 1 times


Question #666 Topic 1

An ecommerce company has an order processing application it wants to migrate to AWS. The application has inconsistent data volume patterns,
but needs to be available at all times. Orders must be processed as they occur and in the order that they are received.
Which set of steps should a solutions architect take to meet these requirements?

A. Use AWS Transfer for SFTP and upload orders as they occur. Use On-Demand Instances in multiple Availability Zones for processing.

B. Use Amazon SNS with FIFO and send orders as they occur. Use a single large Reserved Instance for processing.

C. Use Amazon SQS with FIFO and send orders as they occur. Use Reserved Instances in multiple Availability Zones for processing.

D. Use Amazon SQS with FIFO and send orders as they occur. Use Spot Instances in multiple Availability Zones for processing.

Correct Answer: C

Community vote distribution


C (100%)

" # liono Highly Voted $ 1 year, 1 month ago


C is correct, SQS with FIFO to process the orders as they come and reserved instances for availability at all times
upvoted 19 times

" # kirrim 1 year ago


Agree, C is the best answer given

Better approach might be to:


- start with On-Demand instances in an ASG
- set the ASG scaling metric to SQS FIFO queue depth
- monitor for steady-state minimum number of instances needed
- purchase RIs for minimum number of instances needed
- use On-Demand instances for additional bursting instances in the ASG above base
upvoted 2 times
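
As a quick illustration of the queue side of answer C, the sketch below creates a FIFO queue and publishes an order; the queue name, message body, and message group are made-up examples.

# Sketch of answer C's queue: a FIFO queue preserves order per message group and deduplicates retries.
import boto3

sqs = boto3.client("sqs")

queue = sqs.create_queue(
    QueueName="orders.fifo",                      # FIFO queue names must end in .fifo
    Attributes={
        "FifoQueue": "true",
        "ContentBasedDeduplication": "true",      # dedupe identical order submissions
    },
)

sqs.send_message(
    QueueUrl=queue["QueueUrl"],
    MessageBody='{"orderId": "12345", "sku": "ABC"}',
    MessageGroupId="orders",                      # a single group => strict first-in, first-out
)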

" # Anhdd Most Recent % 5 months ago


Selected Answer: C
C for sure
upvoted 1 times

" # bobsmith2000 5 months, 2 weeks ago


OK, everyone is on the same ground that it's between C and D.
RI vs Spot.
1) Data pattern is erratic.
2) The app must be available all the time.
3) Cost-effectiveness isn't mentioned.

First of all, if we set the bid price equal to the on-demand price of a particular instance, then we are always gonna get compute power. The
spot price can't go higher than the on-demand one, so it's never gonna be interrupted.
Second of all, we can't predict the amount of RI to purchase due to "1)".
Third of all, the Q states "must be available all the time".

The perfect answer would be use fleet with RI + Spot, because we can't predict how many RI to purchase.
Without giving it too much thoughts it's C. But if you think about it for a bit longer, it seems to be D.
Following KISS principle, let's say it's C.
upvoted 2 times

" # vijay1319 1 week, 5 days ago


awesome explanation Bob !!!
upvoted 1 times

" # cooldeity 6 months, 1 week ago


I think always means available to accept order, not necessarily for processing it
upvoted 1 times

" # AMKazi 9 months ago


C is the right answer
upvoted 1 times

" # cldy 10 months, 1 week ago


C correct.
upvoted 1 times
" # vbal 10 months, 2 weeks ago
I don't see the point in using RI with SQS; https://aws.amazon.com/blogs/compute/running-cost-effective-queue-workers-with-amazon-sqs-
and-amazon-ec2-spot-instances/
Answer: D
upvoted 3 times

" # kemalgoklen 10 months, 1 week ago


They meant RI for processing the SQS queue so answer is C
upvoted 1 times

" # user0001 5 months, 2 weeks ago


it is C because Orders must be handled on a first-come, first-serve basis and in the order in which they are received.

they are not asking for most cost effective


upvoted 1 times

" # AzureDP900 11 months, 1 week ago


C is correct
upvoted 1 times

" # andylogan 1 year ago


It's C with Reserved instance
upvoted 1 times

" # tgv 1 year ago


CCC
---
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with C
upvoted 2 times

" # KittuCheeku 1 year ago


Option C: In order (FIFO) + Reserved Instances in X AZs (Availability)
upvoted 1 times

" # mustpassla 1 year ago


C, SAA level question
upvoted 1 times

" # Waiweng 1 year ago


it's C
upvoted 3 times

" # KnightVictor 1 year ago


Answer is C. Keywords: "needs to be available at all times", process the orders as they come
upvoted 2 times

" # alisyech 1 year ago


C for sure
upvoted 2 times

" # Kian1 1 year ago


going for C
upvoted 2 times


Question #667 Topic 1

An AWS partner company is building a service in AWS Organizations using its organization named org1. This service requires the partner company
to have access to AWS resources in a customer account, which is in a separate organization named org2. The company must establish least
privilege security access using an API or command line tool to the customer account.
What is the MOST secure way to allow org1 to access resources in org2?

A. The customer should provide the partner company with their AWS account access keys to log in and perform the required tasks.

B. The customer should create an IAM user and assign the required permissions to the IAM user. The customer should then provide the
credentials to the partner company to log in and perform the required tasks.

C. The customer should create an IAM role and assign the required permissions to the IAM role. The partner company should then use the IAM
role's Amazon Resource Name (ARN) when requesting access to perform the required tasks.

D. The customer should create an IAM role and assign the required permissions to the IAM role. The partner company should then use the IAM
role's Amazon Resource Name (ARN), including the external ID in the IAM role's trust policy, when requesting access to perform the required
tasks.

Correct Answer: B

Community vote distribution


D (100%)

" # A_New_Guy Highly Voted $ 1 year, 1 month ago


I think D is the Answer:
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user_externalid.html
upvoted 20 times

" # oscargee 1 year ago


Not correct. I think you mixed client and host role. Host owner should create IAM with external ID. But in this case, reversed.
upvoted 2 times

" # student22 1 year ago


Yes. I think the answer is C.
upvoted 1 times

" # student22 1 year ago


Changing my answer to D.
Assuming the answer is not suggesting to include the external id when making the request.
upvoted 1 times

" # Kelvin1477 1 year, 1 month ago


agree, external id is a safety precaution to only allow certain users in that third party app organization to assume the role
upvoted 3 times

" # kirrim 1 year ago


Agree, it's addressing the "Confused Deputy problem":

https://docs.aws.amazon.com/IAM/latest/UserGuide/confused-deputy.html
upvoted 1 times
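
To illustrate answer D, here is a minimal sketch of both halves of the handshake: the customer's trust policy that requires an external ID, and the partner's AssumeRole call that supplies it. Account IDs, the role name, and the external ID value are placeholders.

# Sketch of answer D: external ID in the trust policy plus the partner's AssumeRole call.
import json
import boto3

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111111111111:root"},   # partner (org1) account
        "Action": "sts:AssumeRole",
        "Condition": {"StringEquals": {"sts:ExternalId": "org1-unique-id"}},
    }],
}

iam = boto3.client("iam")   # run in the customer (org2) account
iam.create_role(RoleName="PartnerAccessRole",
                AssumeRolePolicyDocument=json.dumps(trust_policy))

# Partner side (org1): assume the role with the agreed external ID.
sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::222222222222:role/PartnerAccessRole",
    RoleSessionName="partner-session",
    ExternalId="org1-unique-id",
)["Credentials"]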

" # OBA1 Most Recent % 7 months, 1 week ago


Selected Answer: D
Answer is D
Difference between C and D is “ What is the SECUREST”
upvoted 3 times

" # cldy 11 months ago


D. The customer should create an IAM role and assign the required permissions to the IAM role. The partner company should then use the IAM
role's Amazon Resource Name (ARN), including the external ID in the IAM role's trust policy, when requesting access to perform the
required tasks.
upvoted 2 times

" # AzureDP900 11 months, 1 week ago


D is correct


upvoted 1 times
" # acloudguru 11 months, 2 weeks ago
Selected Answer: D
D is the Answer, such simple security question, hope I can have it in my real exam
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user_externalid.html
upvoted 1 times

" # andylogan 1 year ago


It's D
upvoted 1 times

" # tgv 1 year ago


DDD
---
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with D
upvoted 1 times

" # Waiweng 1 year ago


it's D
upvoted 3 times

" # blackgamer 1 year ago


Anwer is D.
upvoted 2 times

" # alisyech 1 year ago


i choose D
upvoted 1 times

" # Joaster 1 year ago


Definitely D: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user_externalid.html
upvoted 1 times

" # Kian1 1 year ago


will go with D
upvoted 2 times

" # Ebi 1 year ago


D is my choice
upvoted 4 times

" # Bulti 1 year ago


Answer is D.
upvoted 3 times

" # petebear55 1 year ago


Why do they have the correct answer as B ? when it is clearly D ?
upvoted 1 times

" # kopper2019 1 year ago


the idea behind examtopics is resolved the Qs use the crow and share and debate about the correct answers here not the ones depicted
upvoted 4 times

" # devilman222 1 year, 1 month ago


There is no way that A or B are correct.
Does reveal solution just pick a random answer?
upvoted 2 times

" # somebodyelse 1 year ago


Yes it does.
upvoted 1 times


Question #668 Topic 1

An enterprise company is building an infrastructure services platform for its users. The company has the following requirements:
✑ Provide least privilege access to users when launching AWS infrastructure so users cannot provision unapproved services.
✑ Use a central account to manage the creation of infrastructure services.
✑ Provide the ability to distribute infrastructure services to multiple accounts in AWS Organizations.
✑ Provide the ability to enforce tags on any infrastructure that is started by users.

Which combination of actions using AWS services will meet these requirements? (Choose three.)

A. Develop infrastructure services using AWS CloudFormation templates. Add the templates to a central Amazon S3 bucket and add the IAM
roles or users that require access to the S3 bucket policy.

B. Develop infrastructure services using AWS CloudFormation templates. Upload each template as an AWS Service Catalog product to
portfolios created in a central AWS account. Share these portfolios with the Organizations structure created for the company.

C. Allow user IAM roles to have AWSCloudFormationFullAccess and AmazonS3ReadOnlyAccess permissions. Add an Organizations SCP at
the AWS account root user level to deny all services except AWS CloudFormation and Amazon S3.

D. Allow user IAM roles to have ServiceCatalogEndUserAccess permissions only. Use an automation script to import the central portfolios to
local AWS accounts, copy the TagOptions, assign users access, and apply launch constraints.

E. Use the AWS Service Catalog TagOption Library to maintain a list of tags required by the company. Apply the TagOption to AWS Service
Catalog products or portfolios.

F. Use the AWS CloudFormation Resource Tags property to enforce the application of tags to any CloudFormation templates that will be
created for users.

Correct Answer: ABE

Community vote distribution


BDE (75%) BCE (25%)

" # liono Highly Voted $ 1 year, 1 month ago


B,D,E seems to be the correct options
upvoted 20 times
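
For answer E specifically, the sketch below shows how a required tag could be expressed as a Service Catalog TagOption and attached to a portfolio; the tag key/value and portfolio ID are hypothetical.

# Sketch of answer E: define a required tag as a TagOption and attach it to a portfolio
# so every product launched from it carries the tag.
import boto3

sc = boto3.client("servicecatalog")

tag_option = sc.create_tag_option(Key="cost-center", Value="infrastructure")

sc.associate_tag_option_with_resource(
    ResourceId="port-abc123example",                  # hypothetical portfolio ID
    TagOptionId=tag_option["TagOptionDetail"]["Id"],
)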

" # AK2020 Highly Voted $ 1 year, 1 month ago


B, D , E - Seems correct
upvoted 8 times

" # AwsBRFan Most Recent % 1 month ago


Selected Answer: BCE
Considering BCE
https://docs.aws.amazon.com/servicecatalog/latest/adminguide/controlling_access.html

If you apply the ServiceCatalogEndUserAccess policy, your users have access to the end user console view, but they won't have the permissions
that they need to launch products and manage provisioned products. You can grant these permissions directly to an end user in IAM, but if you
want to limit the access that end users have to AWS resources, you should attach the policy to a launch role. You then use AWS Service Catalog
to apply the launch role to a launch constraint for the product.
upvoted 1 times

" # psou7 1 month ago


Not 100% accurate. The question does not specify what kind of EndUserAccess is granted.
With EndUserFullAccess, user can launch products.
End users
AWSServiceCatalogEndUserFullAccess — Grants full access to the end user console view. Grants permission to launch products and manage
provisioned products.

AWSServiceCatalogEndUserReadOnlyAccess — Grants read-only access to the end user console view. Does not grant permission to launch
products or manage provisioned products.

BDE
upvoted 1 times

" # Ell89 1 month, 1 week ago


Selected Answer: BDE


BDE gets my vote
upvoted 1 times
" # CloudHell 4 months, 4 weeks ago
Selected Answer: BDE
It's BDE to me.
upvoted 1 times

" # bobsmith2000 5 months, 3 weeks ago


Selected Answer: BDE
No-brainer.
Choose everything which is related to Service Catalog
upvoted 1 times

" # AzureDP900 11 months, 1 week ago


B,D,E is correct answer
upvoted 1 times

" # andylogan 1 year ago


It's B D E
upvoted 1 times

" # Kopa 1 year ago


B,D,E all related to Service Catalog
upvoted 2 times

" # tgv 1 year ago


BBB DDD EEE
---
upvoted 3 times

" # blackgamer 1 year ago


BDE is the answer.
upvoted 1 times

" # Suresh108 1 year ago


"user cannot provision unapproved services" --- choose 'service catalog' in all the options given. thats BDE.
upvoted 3 times

" # WhyIronMan 1 year ago


I'll go with B,D,E
upvoted 3 times

" # student2020 1 year ago


I think BEF is a better option. D looks good but the ServiceCatalogEndUserAccess permission only allows read only access and users cannot
launch products. And in B the portfolios have already been shared, why share again using automation scripts?
https://docs.aws.amazon.com/servicecatalog/latest/adminguide/controlling_access.html
upvoted 2 times

" # mustpassla 1 year ago


BDE. Easy question.
upvoted 2 times

" # Waiweng 1 year ago


it's B,D,E
upvoted 5 times

" # awsnoob 1 year ago


BDE
https://aws.amazon.com/blogs/mt/how-to-launch-secure-and-governed-aws-resources-with-aws-cloudformation-and-aws-service-catalog/
upvoted 2 times


Question #669 Topic 1

A Solutions Architect is building a solution for updating user metadata that is initiated by web servers. The solution needs to rapidly scale from
hundreds to tens of thousands of jobs in less than 30 seconds. The solution must be asynchronous always avertable and minimize costs.
Which strategies should the Solutions Architect use to meet these requirements?

A. Create an AWS SWF worker that will update user metadata. Update the web application to start a new workflow for every job.

B. Create an AWS Lambda function that will update user metadata. Create an Amazon SOS queue and configure it as an event source for the
Lambda function. Update the web application to send jobs to the queue.

C. Create an AWS Lambda function that will update user metadata. Create AWS Step Functions that will trigger the Lambda function. Update
the web application to initiate Step Functions for every job.

D. Create an Amazon SQS queue. Create an AMI with a worker to check the queue and update user metadata. Configure an Amazon EC2 Auto
Scaling group with the new AMI. Update the web application to send jobs to the queue.

Correct Answer: B

" # Bulti Highly Voted $ 1 year, 1 month ago


Since this is a just a simple job to update the metadata, I would eliminate workflow options such as A and C. Between B and D I would chose B
because it will be easier to scale with Lambda using SQS as an event source as per the requirement than it is with EC2 Auto scaling.
upvoted 7 times
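
The wiring behind answer B is small; a sketch is below, with the queue ARN, function name, and batch size as assumed values.

# Sketch of answer B's wiring: the queue becomes an event source for the Lambda function,
# which then scales out automatically with queue traffic.
import boto3

lambda_client = boto3.client("lambda")

lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:sqs:us-east-1:111122223333:user-metadata-jobs",  # placeholder
    FunctionName="update-user-metadata",
    BatchSize=10,        # up to 10 messages per invocation for standard SQS queues
)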

" # hilft Most Recent % 3 months, 1 week ago


queue + lambda to decouple the task
upvoted 2 times

" # KiraguJohn 4 months, 1 week ago


What is SOS queue?
upvoted 1 times

" # challenger1 11 months ago


My answer B:
No Simple Workflow Service (SWF) or Step Functions needed.
upvoted 2 times

" # AzureDP900 11 months, 1 week ago


B is perfect
upvoted 1 times

" # andylogan 1 year ago


It's B
upvoted 1 times

" # student22 1 year ago


B
SQS with Lambda
upvoted 1 times

" # tgv 1 year ago


BBB
---
upvoted 1 times

" # blackgamer 1 year ago


Yes, it is B.

Lambda concurrent quota can increase up to Tens of thousands.


upvoted 3 times

" # kirrim 1 year ago


Agree, here's the documentation for that:

https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-limits.html

The problem with D is that it takes time to spin up new instances in an ASG. And the question said "rapidly scale from hundreds to tens of
thousands of jobs in less than 30 seconds".


upvoted 1 times
" # WhyIronMan 1 year ago
I'll go with B
upvoted 3 times

" # Waiweng 1 year ago


it's B
upvoted 3 times

" # alisyech 1 year ago


i go with B
upvoted 2 times

" # cnethers 1 year ago


B is talking about SOS not SQS
If it was talking about SQS then B would make sense .. is that just a typo ?
upvoted 2 times

" # student22 1 year ago


It's a typo. The questions also mentions 'avertable' instead of 'available'
upvoted 1 times

" # champcloud 1 year ago


Yup probably a typo. Ans is B.
upvoted 2 times

" # Joaster 1 year ago


B. Asynchronous workflows have always been the primary use case for SQS.
upvoted 3 times

" # Kian1 1 year ago


going with B
upvoted 2 times

" # Ebi 1 year ago


My choice is B, but even that is not a complete solution as Lambda concurrency limit must be increased as well which hasn't been included in
this answer.
All other answers can be eliminated easily
A, no SWF is needed in here
C, No step function is needed, simple metadata update
D, Auto scaling group can not scale fast, can take more than 30 sec to launch new instances in asg
upvoted 3 times

" # nqobza 1 year, 1 month ago


The answer is D. Anything involving lambda will not scale well in this scenario. Look at concurrent execution limits. https://docs.aws.amazon.com
/lambda/latest/dg/gettingstarted-limits.html
upvoted 1 times

" # Justu 1 year ago


Nope, it's still B. As Lambda max concurrent execution limits can be requested from default 1000 to Hundreds of thousands.

Scaling up ec2 enough to handle the need in 30 seconds seems impossible for me.
upvoted 4 times

" # RedKane 1 year ago


Even that might not be needed if Lambda consumes SQS messages in batches. Batch size can be up to 10 000. With that it's possible to
consume and process tens of thousands of messages with just several lambda functions running concurrently (if one wants to minimize
number of concurrent functions and execution time for all messages in batch does not exceed 15 minutes)
upvoted 1 times


Question #670 Topic 1

A company's main intranet page has experienced degraded response times as its user base has increased although there are no reports of users
seeing error pages. The application uses Amazon DynamoDB in read-only mode.
Amazon DynamoDB latency metrics for successful requests have been in a steady state even during times when users have reported degradation.
The
Development team has correlated the issue to ProvisionedThroughputExceeded exceptions in the application logs when doing Scan and read
operations. The team also identified an access pattern of steady spikes of read activity on a distributed set of individual data items.
The Chief Technology Officer wants to improve the user experience.
Which solutions will meet these requirements with the LEAST amount of changes to the application? (Choose two.)

A. Change the data model of the DynamoDB tables to ensure that all Scan and read operations meet DynamoDB best practices of uniform data
access, reaching the full request throughput provisioned for the DynamoDB tables.

B. Enable DynamoDB Auto Scaling to manage the throughput capacity as table traffic increases. Set the upper and lower limits to control costs
and set a target utilization given the peak usage and how quickly the traffic changes.

C. Provision Amazon ElastiCache for Redis with cluster mode enabled. The cluster should be provisioned with enough shards to spread the
application load and provision at least one read replica node for each shard.

D. Implement the DynamoDB Accelerator (DAX) client and provision a DAX cluster with the appropriate node types to sustain the application
load. Tune the item and query cache configuration for an optimal user experience.

E. Remove error retries and exponential backoffs in the application code to handle throttling errors.

Correct Answer: AE

Community vote distribution


BD (100%)

" # liono Highly Voted $ 1 year, 1 month ago


B and D will require least amount of changes in the application while increasing over all performance
upvoted 29 times
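
As an illustration of answer B, the sketch below registers the table's read capacity with Application Auto Scaling and adds a target-tracking policy; the table name and capacity limits are placeholders. Answer D's DAX cluster would sit in front of this and is not shown.

# Sketch of answer B: let Application Auto Scaling manage the table's read capacity.
import boto3

aas = boto3.client("application-autoscaling")

aas.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/intranet-content",               # hypothetical table
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    MinCapacity=100,
    MaxCapacity=4000,
)

aas.put_scaling_policy(
    PolicyName="read-capacity-target-tracking",
    ServiceNamespace="dynamodb",
    ResourceId="table/intranet-content",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,                            # aim for roughly 70% utilization
        "PredefinedMetricSpecification": {"PredefinedMetricType": "DynamoDBReadCapacityUtilization"},
    },
)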

" # Ebi Highly Voted $ 1 year ago


BD my choice
upvoted 7 times

" # fdoxxx Most Recent % 2 weeks ago


Selected Answer: BD
For sure B, D
upvoted 1 times

" # cldy 11 months ago


B. Enable DynamoDB Auto Scaling to manage the throughput capacity as table traffic increases. Set the upper and lower limits to control costs
and set a target utilization given the peak usage and how quickly the traffic changes.
D. Implement the DynamoDB Accelerator (DAX) client and provision a DAX cluster with the appropriate node types to sustain the application
load. Tune the item and query cache configuration for an optimal user experience.
upvoted 1 times

" # AzureDP900 11 months, 1 week ago


B,D is my choice
upvoted 1 times

" # acloudguru 11 months, 2 weeks ago


Selected Answer: BD
BD are very easy one, hope I can have it in my exam
upvoted 1 times

" # AzureDP900 11 months, 2 weeks ago


B & D is correct
upvoted 1 times

" # andylogan 1 year ago


It's B D - Auto scaling and DAX


upvoted 2 times
" # tgv 1 year ago
BBB DDD
---
upvoted 2 times

" # blackgamer 1 year ago


B and D. Least amount of changes.
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with B,D
upvoted 1 times

" # ss160700 1 year ago


why not B & C ?
upvoted 2 times

" # Viper57 1 year ago


D is better than C. DAX is specifically built for caching queries for DynamoDB.
upvoted 1 times

" # pradhyumna 1 year ago


A and D. Implementing best practices for uniform data access and DAX to improve the read performance.
B may not be right, auto-scaling may not really help if the data is distributed, it will only increase cost by scaling up resources.
upvoted 3 times

" # Viper57 1 year ago


A is wrong. This requires changing the data model of the entire database. The questions asks for the solution with the LEAST amount of
changes to the application. B and D are correct.
upvoted 2 times

" # frankzeng 9 months, 3 weeks ago


It is DynamoDB. The data model can be changed with adding Global secondary index. No change to the application
upvoted 1 times

" # Waiweng 1 year ago


it's B&D
upvoted 3 times

" # alisyech 1 year ago


i choose B & D
upvoted 2 times

" # eji 1 year ago


why not CD?
upvoted 3 times

" # Kian1 1 year ago


going for BD
upvoted 2 times


Question #671 Topic 1

A solutions architect has implemented a SAML 2.0 federated identity solution with their company's on-premises identity provider (IdP) to
authenticate users' access to the AWS environment. When the solutions architect tests authentication through the federated identity web portal,
access to the AWS environment is granted. However, when test users attempt to authenticate through the federated identity web portal, they are
not able to access the AWS environment.
Which items should the solutions architect check to ensure identity federation is properly configured? (Choose three.)

A. The IAM user's permissions policy has allowed the use of SAML federation for that user.

B. The IAM roles created for the federated users' or federated groups' trust policy have set the SAML provider as the principal.

C. Test users are not in the AWSFederatedUsers group in the company's IdP.

D. The web portal calls the AWS STS AssumeRoleWithSAML API with the ARN of the SAML provider, the ARN of the IAM role, and the SAML
assertion from IdP.

E. The on-premises IdP's DNS hostname is reachable from the AWS environment VPCs.

F. The company's IdP defines SAML assertions that properly map users or groups in the company to IAM roles with appropriate permissions.

Correct Answer: DEF

Community vote distribution


BDF (100%)

" # liono Highly Voted $ 1 year, 1 month ago


B, D & F looks correct
upvoted 29 times

" # Cantaloupe Highly Voted $ 1 year, 1 month ago


B: "In IAM, you create one or more IAM roles. In the role's trust policy, you set the SAML provider as the principal, which establishes a trust
relationship between your organization and AWS"

D: "The client app calls the AWS STS AssumeRoleWithSAML API, passing the ARN of the SAML provider, the ARN of the role to assume, and the
SAML assertion from IdP"

F: "In your organization's IdP, you define assertions that map users or groups in your organization to the IAM roles"
upvoted 18 times

" # kirrim 1 year ago


Where these quotes came from:

https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_saml.html
upvoted 3 times
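
Item D refers to the STS call shown below. It is a sketch only: the role and provider ARNs are placeholders, and the base64 SAML assertion is assumed to come from the IdP's POST response. Note the call is unsigned, so no AWS credentials are needed to make it.

# Sketch of the call described in answer D: exchange the IdP's SAML assertion for temporary credentials.
import boto3
from botocore import UNSIGNED
from botocore.config import Config

saml_assertion_base64 = "<base64-encoded SAMLResponse from the IdP>"   # placeholder

# AssumeRoleWithSAML is an unsigned call, so no AWS credentials are required here.
sts = boto3.client("sts", config=Config(signature_version=UNSIGNED))

response = sts.assume_role_with_saml(
    RoleArn="arn:aws:iam::111122223333:role/FederatedDevelopers",        # placeholder role
    PrincipalArn="arn:aws:iam::111122223333:saml-provider/CorpIdP",      # placeholder provider
    SAMLAssertion=saml_assertion_base64,
)
temporary_credentials = response["Credentials"]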

" # kadev Most Recent % 2 months, 1 week ago


B,D,F
Explain:
Follow the flow to access the AWS console via a third-party IdP:
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_enable-console-saml.html
From the situation in the question, we know that the flow succeeds up to step 3 (the grant from the IdP is received)
=> the user cannot access the AWS console because it fails at step 4 or step 5
So:
1. we need to verify SAML assertion (D) : "The IAM role and IAM identity provider are specified as a comma-delimited pair of ARNs in the same
format as the RoleArn and PrincipalArn"
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_create_saml_assertions.html#saml-attribute-mapping

2. Next, verify the mapping of users and groups to roles is exact (F)

3. Verify the "Prerequisites for creating a role for SAML": the principal must have "PROVIDER-NAME"
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-idp_saml.html#idp_saml_Prerequisites
upvoted 1 times

" # RVivek 8 months, 4 weeks ago


Answer BDF
E is also require ? Can some one say why E is not rquired ?
upvoted 1 times

" # hancoms 2 weeks, 1 day ago


i think
'the AWS environment VPCs' is the clue -> the on-premises IdP does not need to be reachable from the VPCs for the SAML role assumption on the AWS account side
upvoted 1 times

" # vbal 10 months, 3 weeks ago


I can't see B as an answer: The IAM roles created FOR the federated users' or federated groups???
upvoted 1 times

" # vbal 10 months, 3 weeks ago


B is fine
upvoted 1 times

" # CloudChef 11 months ago


BDF is it.
upvoted 1 times

" # cldy 11 months ago


B. The IAM roles created for the federated users' or federated groups' trust policy have set the SAML provider as the principal.
D. The web portal calls the AWS STS AssumeRoleWithSAML API with the ARN of the SAML provider, the ARN of the IAM role, and the SAML
assertion from IdP.
F. The company's IdP defines SAML assertions that properly map users or groups in the company to IAM roles with appropriate permissions.
upvoted 1 times

" # AzureDP900 11 months, 2 weeks ago


B,D,F is perfect answer for given scenario.
upvoted 1 times

" # acloudguru 11 months, 2 weeks ago


Selected Answer: BDF
B: "In IAM, you create one or more IAM roles. In the role's trust policy, you set the SAML provider as the principal, which establishes a trust
relationship between your organization and AWS"

D: "The client app calls the AWS STS AssumeRoleWithSAML API, passing the ARN of the SAML provider, the ARN of the role to assume, and the
SAML assertion from IdP"

F: "In your organization's IdP, you define assertions that map users or groups in your organization to the IAM roles"
upvoted 2 times

" # andylogan 1 year ago


It's B D F
upvoted 1 times

" # tgv 1 year ago


BBB DDD FFF
---
upvoted 2 times

" # blackgamer 1 year ago


BDF for me.
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with B,D,F
upvoted 1 times

" # Waiweng 1 year ago


it's B,D and F
upvoted 4 times

" # Kian1 1 year ago


will go with BDF
upvoted 2 times

" # Ebi 1 year ago


BDF is my choice
upvoted 4 times

" # gookseang 1 year ago


I will go BDF
upvoted 1 times


Question #672 Topic 1

A company's security compliance requirements state that all Amazon EC2 images must be scanned for vulnerabilities and must pass a CVE
assessment. A solutions architect is developing a mechanism to create security-approved AMIs that can be used by developers. Any new AMIs
should go through an automated assessment process and be marked as approved before developers can use them. The approved images must be
scanned every 30 days to ensure compliance.
Which combination of steps should the solutions architect take to meet these requirements while following best practices? (Choose two.)

A. Use the AWS Systems Manager EC2 agent to run the CVE assessment on the EC2 instances launched from the AMIs that need to be
scanned.

B. Use AWS Lambda to write automatic approval rules. Store the approved AMI list in AWS Systems Manager Parameter Store. Use Amazon
EventBridge to trigger an AWS Systems Manager Automation document on all EC2 instances every 30 days.

C. Use Amazon Inspector to run the CVE assessment on the EC2 instances launched from the AMIs that need to be scanned.

D. Use AWS Lambda to write automatic approval rules. Store the approved AMI list in AWS Systems Manager Parameter Store. Use a managed
AWS Config rule for continuous scanning on all EC2 instances, and use AWS Systems Manager Automation documents for remediation.

E. Use AWS CloudTrail to run the CVE assessment on the EC2 instances launched from the AMIs that need to be scanned.

Correct Answer: BC

Community vote distribution


BC (100%)

" # Ebi Highly Voted $ 1 year ago


BC,
For CVE we don't need continuous scanning, so D is ruled out.
upvoted 16 times
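
A rough sketch of the moving parts in B and C follows: an EventBridge rule on a 30-day schedule and an Amazon Inspector (Classic) assessment run that performs the CVE scan. The ARNs are placeholders, and the glue that actually connects the rule to the scan (for example a Lambda target or a Systems Manager Automation document) is omitted.

# Sketch of B + C: a 30-day schedule plus an Inspector Classic CVE assessment run.
import boto3

events = boto3.client("events")
events.put_rule(
    Name="ami-cve-rescan",
    ScheduleExpression="rate(30 days)",   # compliance requires a scan every 30 days
    State="ENABLED",
)

inspector = boto3.client("inspector")     # Inspector Classic API
inspector.start_assessment_run(
    assessmentTemplateArn="arn:aws:inspector:us-east-1:111122223333:target/0-abc/template/0-def",  # placeholder
    assessmentRunName="cve-scan-approved-amis",
)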

" # bbnbnuyh Highly Voted $ 1 year, 1 month ago


https://aws.amazon.com/blogs/security/how-to-set-up-continuous-golden-ami-vulnerability-assessments-with-amazon-inspector/
upvoted 8 times

" # elf78 1 year ago


+1. Answers are B&C
upvoted 2 times

" # AwsBRFan Most Recent % 1 month ago


Selected Answer: BC
https://aws.amazon.com/blogs/mt/automate-vulnerability-management-and-remediation-in-aws-using-amazon-inspector-and-aws-systems-
manager-part-1/
upvoted 1 times

" # cldy 11 months, 1 week ago


B. Use AWS Lambda to write automatic approval rules. Store the approved AMI list in AWS Systems Manager Parameter Store. Use Amazon
EventBridge to trigger an AWS Systems Manager Automation document on all EC2 instances every 30 days.
C. Use Amazon Inspector to run the CVE assessment on the EC2 instances launched from the AMIs that need to be scanned.
upvoted 1 times

" # AzureDP900 11 months, 1 week ago


B,C Is my option
upvoted 1 times

" # AzureDP900 11 months, 2 weeks ago


B,C is right choice
upvoted 1 times

" # andylogan 1 year ago


It's B C - Inspector and 30 days
upvoted 1 times

" # tgv 1 year ago


BBB CCC
---
upvoted 1 times


" # Suresh108 1 year ago


easy to remember trick -

B - the question has 30 days; this is the only answer that has 30 days in it.
C - CVE needs to be inspected, use 'Amazon Inspector'; only C has these words.
upvoted 4 times

" # Sean2021 1 year ago


C&D
You cannot use SSM document to scan
upvoted 1 times

" # Waiweng 1 year ago


it's B and C
upvoted 3 times

" # Amitv2706 1 year ago


B and C for sure
upvoted 3 times

" # Kian1 1 year ago


going with BC
upvoted 4 times

" # kopper2019 1 year ago


B and C, for sure Amazon inspectos is needed
upvoted 1 times

" # rkbala 1 year, 1 month ago


A and B
https://aws.amazon.com/about-aws/whats-new/2020/10/now-use-aws-systems-manager-to-view-vulnerability-identifiers-for-missing-patches-on-your-linux-instances/
upvoted 1 times

" # Superomam 1 year, 1 month ago


B, C. Remediation activity is not asked into the question.
upvoted 1 times

" # Bulti 1 year, 1 month ago


C is correct, Now between B and D, both might work but since we are asked to scan the EC2 instances every 30 days I will go with B. So the
final answer is B and C.
upvoted 4 times


Question #673 Topic 1

A company uses AWS Organizations with a single OU named Production to manage multiple accounts. All accounts are members of the
Production OU.
Administrators use deny list SCPs in the root of the organization to manage access to restricted services.
The company recently acquired a new business unit and invited the new unit's existing AWS account to the organization. Once onboarded, the
administrators of the new business unit discovered that they are not able to update existing AWS Config rules to meet the company's policies.
Which option will allow administrators to make changes and continue to enforce the current policies without introducing additional long-term
maintenance?

A. Remove the organization's root SCPs that limit access to AWS Config. Create AWS Service Catalog products for the company's standard
AWS Config rules and deploy them throughout the organization, including the new account.

B. Create a temporary OU named Onboarding for the new account. Apply an SCP to the Onboarding OU to allow AWS Config actions. Move the
new account to the Production OU when adjustments to AWS Config are complete.

C. Convert the organization's root SCPs from deny list SCPs to allow list SCPs to allow the required services only. Temporarily apply an SCP to
the organization's root that allows AWS Config actions for principals only in the new account.

D. Create a temporary OU named Onboarding for the new account. Apply an SCP to the Onboarding OU to allow AWS Config actions. Move the
organization's root SCP to the Production OU. Move the new account to the Production OU when adjustments to AWS Config are complete.

Correct Answer: D

" # avland Highly Voted $ 1 year, 1 month ago


D. The problem with B is that the new OU will be created within the root of the organization, and so the Deny on changes to Config rules (from
the root of the organization) will then apply to the new OU as well. The new OU must not have a parent that denies changes to Config rules. That
would be the case for D.
upvoted 26 times
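
The mechanics behind answer D can be sketched as below; the root ID, account ID, and the exact SCP content are assumptions for illustration only.

# Sketch of answer D's mechanics: a temporary Onboarding OU with an SCP that permits
# AWS Config actions, then the new account is moved into it.
import json
import boto3

org = boto3.client("organizations")

onboarding_ou = org.create_organizational_unit(ParentId="r-examplerootid", Name="Onboarding")

scp = org.create_policy(
    Name="AllowConfigDuringOnboarding",
    Description="Temporary allowance for AWS Config changes",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{"Effect": "Allow", "Action": ["config:*"], "Resource": "*"}],
    }),
)

org.attach_policy(PolicyId=scp["Policy"]["PolicySummary"]["Id"],
                  TargetId=onboarding_ou["OrganizationalUnit"]["Id"])

org.move_account(AccountId="333333333333",
                 SourceParentId="r-examplerootid",
                 DestinationParentId=onboarding_ou["OrganizationalUnit"]["Id"])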

" # smartassX Highly Voted $ 1 year, 1 month ago


D --> "Administrators use deny list SCPs in the root of the organization to manage access to restricted services." In Option D "Move the
organization's root SCP to the Production OU. Move the new account to the Production OU when adjustments to AWS Config are complete."
upvoted 10 times

" # cldy Most Recent % 11 months ago


D. Create a temporary OU named Onboarding for the new account. Apply an SCP to the Onboarding OU to allow AWS Config actions. Move the
organization's root SCP to the Production OU. Move the new account to the Production OU when adjustments to AWS Config are complete.
upvoted 1 times

" # AzureDP900 11 months, 2 weeks ago


D Is right
upvoted 1 times

" # andylogan 1 year ago


It's D - create temporary OU
upvoted 2 times

" # tgv 1 year ago


DDD
---
upvoted 2 times

" # WhyIronMan 1 year ago


I'll go with D
upvoted 1 times

" # Waiweng 1 year ago


it's D
upvoted 2 times

" # ExtHo 1 year ago


D is correct as for B If the SCP applied on the organization's root has a "deny" permission, all OUs under the organization will inherit that rule.
You cannot override an explicit "deny" permission with an explicit "allow" applied to the temporary Onboarding OU.
upvoted 4 times


" # awsnoob 1 year ago


B is not correct.... Deny takes precedent... it should be D
upvoted 1 times

" # kiev 1 year ago


OK, reviewing again. The answer is B. Don't forget the SCP was already applied to the root, so there is no need to apply it again, and that's why D isn't correct
upvoted 2 times

" # kiev 1 year ago


Guys anyone here having problems accessing exam topics from laptop? I just can't get access for over two days now. My answer is D
upvoted 1 times

" # Ebi 1 year ago


B does not work, deny at root does not allow member account even with an allow
D for sure
upvoted 4 times

" # njthomas 1 year ago


Going with D, due to the "allow administrators to make changes and continue to enforce the current policies" part.
upvoted 1 times

" # njthomas 1 year ago


If scp is applied via deny policy at the root, we cannot enable it at a lower level. https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_inheritance_auth.html
I suggest C
upvoted 1 times

" # Bulti 1 year ago


I think the correct answer is D. The best practice is to not assign an SCP to the root of the organization. So B is incorrect.
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html - Search for "Testing effects of SCPs".
upvoted 2 times

" # T14102020 1 year, 1 month ago


D is correct answer. The new OU must not have a parent that denies changes to Config rules.
upvoted 2 times


Question #674 Topic 1

A company is launching a web-based application in multiple regions around the world. The application consists of both static content stored in a
private Amazon
S3 bucket and dynamic content hosted in Amazon ECS containers behind an Application Load Balancer (ALB). The company requires that
the static and dynamic application content be accessible through Amazon CloudFront only.
Which combination of steps should a solutions architect recommend to restrict direct content access to CloudFront? (Choose three.)

A. Create a web ACL in AWS WAF with a rule to validate the presence of a custom header and associate the web ACL with the ALB.

B. Create a web ACL in AWS WAF with a rule to validate the presence of a custom header and associate the web ACL with the CloudFront
distribution.

C. Configure CloudFront to add a custom header to origin requests.

D. Configure the ALB to add a custom header to HTTP requests.

E. Update the S3 bucket ACL to allow access from the CloudFront distribution only.

F. Create a CloudFront Origin Access Identity (OAI) and add it to the CloudFront distribution. Update the S3 bucket policy to allow access to
the OAI only.

Correct Answer: ADF

Community vote distribution


ACF (83%) BCF (17%)

" # gookseang Highly Voted $ 1 year, 1 month ago


A.C.F for sure
upvoted 34 times

" # petebear55 1 year, 1 month ago


Your answer a is wrong !!! .. If your going to come out with wild statements like this then back up your answers .. its B
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/distribution-web-awswaf.html
upvoted 1 times

" # shammous 1 year ago


He is right, ... pete Red Herring.
upvoted 2 times

" # petebear55 1 year, 1 month ago


Replace A with B
upvoted 2 times

" # MrCarter 1 year ago


no mate nobody believes you
upvoted 5 times

" # tvs 1 year ago


Pete read this https://aws.amazon.com/blogs/security/how-to-enhance-amazon-cloudfront-origin-security-with-aws-waf-and-aws-secrets-manager/#:~:text=In%20this%20blog%20post%2C%20you,it%20sends%20to%20your%20origin.
upvoted 4 times

" # MrCarter 1 year ago


ANSWER IS OBVIOUSLY A,C,F
Straight out of Jon Bonso's exams
upvoted 5 times

" # Kopa 11 months, 3 weeks ago


good explanation
upvoted 1 times

" # Bulti Highly Voted $ 1 year ago


To ensure all requests are coming from CloudFront, the combination of steps should be C, A, F. C will add the custom header. A will detect the
presence of custom header using Web ACL rules in the WAF around ALB and then F will ensure that all request to access S3 buckets are coming
from Cloudfront using the OAI.
upvoted 27 times


" # shammous 1 year ago


Great job Bulti, as always. Thank you for taking the time to explain your choices. I suggest everybody else do the same instead of just
throwing their answers...
upvoted 7 times

" # AkaAka4 10 months, 3 weeks ago


Indeed, rookies like me really appreciate you guys' information :D
upvoted 1 times

" # 01037 1 year ago


Agree.
B and C are contradiction to each other.
If CloudFront adds the custom header, how could WAF in front of CloudFront validate the presence of the custom header.
upvoted 6 times
" # hilft Most Recent % 3 months, 2 weeks ago
ACF
I saw the same one in Jon Bonso's exam. MrCarter got it too
upvoted 2 times

" # aandc 4 months, 1 week ago


Selected Answer: ACF
deny direct access to S3 and ALB
upvoted 2 times

" # KennethTam 7 months, 1 week ago


Selected Answer: ACF
ACF; you need to deny direct access to the origin (ALB), not to CloudFront.
upvoted 2 times

" # Ni_yot 8 months, 1 week ago


Agree with ACF. CF is used to add custom http headers to request.
upvoted 1 times

" # HellGate 9 months, 1 week ago


My answer is B C F.

To deliver content through CloudFront only, we need to associate the Web ACL with CloudFront, not the ALB. The ALB is for ECS here, and the OAI
doesn't involve the ALB.

https://docs.aws.amazon.com/waf/latest/developerguide/web-acl-associating-aws-resource.html
https://aws.amazon.com/premiumsupport/knowledge-center/cloudfront-serve-static-website/?nc1=h_ls
upvoted 1 times

" # pititcu667 9 months, 3 weeks ago


Selected Answer: ACF
i chose acf
upvoted 1 times

" # Binoj_1985 10 months, 3 weeks ago


Selected Answer: BCF
BCF right? Since static and dynamic application material must be available through Amazon CloudFront.
upvoted 1 times

" # Binoj_1985 10 months, 3 weeks ago


ACF - Since validate rule to ALB
upvoted 1 times

" # AzureDP900 11 months, 1 week ago


A,C,F Is right
upvoted 1 times

" # andylogan 1 year ago


It's C A F
upvoted 2 times

" # DerekKey 1 year ago


A/C/F: a very similar configuration is used by us in an environment that serves over 1 mln requests per second. The only difference is in the usage of
WAF. That part of this answer is a waste of money: the ALB can check the header in incoming traffic itself, so you don't strictly need WAF to do it.

CF sets custom header with a value


ALB check if custom header exists with this value
S3 uses CF OAI


upvoted 4 times

" # tgv 1 year ago


AAA CCC FFF
---
upvoted 2 times

" # blackgamer 1 year ago


CAF for me.
upvoted 2 times

" # mericov 1 year ago


ACF https://blogs.halodoc.io/implementation-of-custom-header-to-origin-requests/
upvoted 2 times

" # Akhil254 1 year ago


ACF Correct
upvoted 2 times

" # MrCarter 1 year ago


Official explanation:
The option that says: Use CloudFront to add a custom header to all origin requests. Using AWS WAF, create a web rule that denies all requests
without this custom header. Associate the web ACL to the CloudFront distribution is incorrect. If any new requests are going to CloudFront, they
won't have the custom header initially so AWS WAF may block the request immediately. This could deny any new connections to CloudFront.
Therefore, you need to associate the web ACL to the ALB, which is after the CloudFront adds the custom header.
upvoted 3 times
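
For illustration, a minimal boto3 sketch of the A+C pattern described in this thread: CloudFront adds a secret custom header to origin requests, and a regional AWS WAF web ACL attached to the ALB only allows requests that carry it. The header name, secret value, and ARNs below are placeholders, not from the question.

# Sketch of options A+C: a regional web ACL that blocks by default and allows only
# requests carrying the secret header that CloudFront injects on origin requests.
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")  # region of the ALB

resp = wafv2.create_web_acl(
    Name="require-cloudfront-header",
    Scope="REGIONAL",                      # REGIONAL = attachable to an ALB
    DefaultAction={"Block": {}},           # block anything that bypasses CloudFront
    Rules=[{
        "Name": "allow-if-secret-header-present",
        "Priority": 0,
        "Statement": {
            "ByteMatchStatement": {
                "SearchString": b"my-shared-secret",                 # placeholder value
                "FieldToMatch": {"SingleHeader": {"Name": "x-origin-verify"}},
                "TextTransformations": [{"Priority": 0, "Type": "NONE"}],
                "PositionalConstraint": "EXACTLY",
            }
        },
        "Action": {"Allow": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "AllowCloudFrontHeader",
        },
    }],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "RequireCloudFrontHeader",
    },
)

# Attach the web ACL to the ALB (option A). The matching custom header itself is set
# on the CloudFront origin configuration (option C).
wafv2.associate_web_acl(
    WebACLArn=resp["Summary"]["ARN"],
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/my-alb/abc123",
)

Rotating the shared secret (for example via Secrets Manager, as in the AWS blog linked earlier in the thread) is left out of this sketch.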


Question #675 Topic 1

A company has multiple lines of business (LOBs) that roll up to the parent company. The company has asked its solutions architect to develop a
solution with the following requirements:
✑ Produce a single AWS invoice for all of the AWS accounts used by its LOBs.
✑ The costs for each LOB account should be broken out on the invoice.
✑ Provide the ability to restrict services and features in the LOB accounts, as defined by the company's governance policy.
✑ Each LOB account should be delegated full administrator permissions, regardless of the governance policy.
Which combination of steps should the solutions architect take to meet these requirements? (Choose two.)

A. Use AWS Organizations to create an organization in the parent account for each LOB. Then, invite each LOB account to the appropriate
organization.

B. Use AWS Organizations to create a single organization in the parent account. Then, invite each LOB's AWS account to join the organization.

C. Implement service quotas to define the services and features that are permitted and apply the quotas to each LOB as appropriate.

D. Create an SCP that allows only approved services and features, then apply the policy to the LOB accounts. Enable consolidated billing in the
parent account's billing console and link the LOB accounts.

Correct Answer: CD

Community vote distribution


BD (100%)

" # keos Highly Voted $ 1 year, 1 month ago


BD, most likely
upvoted 23 times

" # Ebi Highly Voted $ 1 year ago


BD is my choice
upvoted 6 times

" # gnic Most Recent % 2 months, 3 weeks ago


Selected Answer: BD
BD is the right answer
upvoted 1 times

" # Dohecadi 8 months ago


Selected Answer: BD
Choice is between A and B for Organizations, and A is wrong. You cannot belong to more than one organization.
For permissions, the choice is between C and D, and C is wrong. SCP is right.
upvoted 3 times
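
As a rough illustration of the SCP that option D describes (assuming an organization with all features enabled; the approved services and the target account ID below are placeholders):

# Sketch of option D: an allow-list SCP with only approved services, attached to a LOB account.
import json
import boto3

org = boto3.client("organizations")

scp_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",                      # allow-list style SCP
        "Action": ["ec2:*", "s3:*", "rds:*"],   # placeholder list of approved services
        "Resource": "*",
    }],
}

policy = org.create_policy(
    Name="lob-approved-services",
    Description="Only services approved by the governance policy",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp_document),
)

# Attach the SCP to a LOB account (or to an OU containing the LOB accounts).
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="111122223333",   # placeholder LOB account ID
)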

" # Buggie 10 months, 1 week ago


SCP can be applied to OU. NO other choice. hence A and D
upvoted 1 times

" # cldy 11 months ago


B. Use AWS Organizations to create a single organization in the parent account. Then, invite each LOB's AWS account to join the
organization.
D. Create an SCP that allows only approved services and features, then apply the policy to the LOB accounts. Enable consolidated billing in the
parent account's billing console and link the LOB accounts.
upvoted 2 times

" # AzureDP900 11 months, 2 weeks ago


B, D is right
upvoted 1 times

" # acloudguru 11 months, 3 weeks ago


Selected Answer: BD
C is not right, should be BD.
upvoted 2 times

" # acloudguru 11 months, 3 weeks ago


A rare simple question, BD hope I can have it in my exam


upvoted 1 times

" # andylogan 1 year ago


It's B D
upvoted 1 times

" # wakame 1 year ago


Not B
This question says "Produce a single AWS invoice for all of the AWS accounts used by its LOBs," and consolidated billing puts together billing for
accounts in an organization.
So B is wrong.
The answer is A and D.
upvoted 1 times

" # wakame 1 year ago


I made a mistake.
Consolidated billing puts together billing for accounts in an organization, so the answer is BD.
upvoted 1 times

" # DerekKey 1 year ago


A wrong - correct but adds additional management overhead later. In this scenario we must assure: ability to restrict services and features in the
LOB accounts, as defined by the company's governance policy. Same policy/restrictions for all.
B correct - and more suited to this case (simplification of management)
C wrong - quotas
D correct
E wrong - billing is by default. The other option is full features meaning billing and management. Btw. if you start with billing you can go up but
this is one way process.
upvoted 1 times

" # tgv 1 year ago


BBB DDD
---
upvoted 1 times

" # blackgamer 1 year ago


BD but weird question.
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with B,D
upvoted 1 times

" # digimaniac 1 year ago


Can someone comment on "Each LOB account should be delegated full administrator permissions, regardless of the governance policy." I think
only A can satisfy this one.
upvoted 2 times

" # DerekKey 1 year ago


It doesn't matter if you have one org or many orgs. The process of registering a delegated account uses the account ID and a service principal, which
means that this procedure works directly on an account.
upvoted 1 times

" # Waiweng 1 year ago


it's B,D
upvoted 2 times


Question #676 Topic 1

An ecommerce website running on AWS uses an Amazon RDS for MySQL DB instance with General Purpose SSD storage. The developers chose an
appropriate instance type based on demand, and configured 100 GB of storage with a sufficient amount of free space.
The website was running smoothly for a few weeks until a marketing campaign launched. On the second day of the campaign, users reported long
wait times and time outs. Amazon CloudWatch metrics indicated that both reads and writes to the DB instance were experiencing long response
times. The CloudWatch metrics show 40% to 50% CPU and memory utilization, and sufficient free storage space is still available. The application
server logs show no evidence of database connectivity issues.
What could be the root cause of the issue with the marketing campaign?

A. It exhausted the I/O credit balance due to provisioning low disk storage during the setup phase.

B. It caused the data in the tables to change frequently, requiring indexes to be rebuilt to optimize queries.

C. It exhausted the maximum number of allowed connections to the database instance.

D. It exhausted the network bandwidth available to the RDS for MySQL DB instance.

Correct Answer: C

Community vote distribution


A (100%)

" # bbnbnuyh Highly Voted $ 1 year, 1 month ago


A. 100 GB of GP2 is going to give roughly 300 IOPS, which is too low
upvoted 27 times

" # Cantaloupe 1 year, 1 month ago


Yes. There is burst option but it can be exhausted
"When using General Purpose SSD storage, your DB instance receives an initial I/O credit balance of 5.4 million I/O credits. This initial credit
balance is enough to sustain a burst performance of 3,000 IOPS for 30 minutes."
upvoted 6 times
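
A quick back-of-the-envelope check of those gp2 numbers (the sustained campaign workload of 2,000 IOPS below is just an assumed figure, not from the question):

# Back-of-the-envelope check of the gp2 baseline and I/O credit figures quoted above.
volume_gib = 100
baseline_iops = max(100, 3 * volume_gib)        # gp2 baseline: 3 IOPS per GiB, minimum 100
initial_credits = 5_400_000                     # initial I/O credit balance
burst_iops = 3000

workload_iops = 2000                            # assumed sustained campaign load
drain_rate = workload_iops - baseline_iops      # credits consumed per second above baseline
seconds_until_empty = initial_credits / drain_rate

print(f"baseline: {baseline_iops} IOPS")                                    # 300 IOPS
print(f"credits exhausted after ~{seconds_until_empty / 3600:.1f} hours")   # ~0.9 hours

So a sustained load well above the 300 IOPS baseline would drain the credit bucket long before the second day of the campaign, matching answer A.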

" # Ebi Highly Voted $ 1 year, 1 month ago


Answer is A,
Key is "on the second day", so all the credit have been used by then
upvoted 11 times

" # 5kk Most Recent % 3 months, 2 weeks ago


Selected Answer: A
A looks good.
upvoted 1 times

" # pankajrawat 6 months, 1 week ago


Selected Answer: A
A looks good
upvoted 1 times

" # AzureDP900 11 months, 1 week ago


A is right
upvoted 1 times

" # backfringe 11 months, 2 weeks ago


I'd go with A
upvoted 1 times

" # AzureDP900 11 months, 2 weeks ago


Before even reading answers my best bet was A. I am with you guys, I am not sure why they want to fool us with C. This is one of the reason
knowing concepts is very important rather than depending on answers :)
upvoted 1 times

" # Cal88 6 days, 7 hours ago


An easy way to eliminate C is to read the question carefully
“The application server logs show no evidence of database connectivity issues.“
So there are no connectivity issues
I really hope that I read the questions carefully in my exam and not jump to any conclusions quickly


upvoted 2 times
" # acloudguru 11 months, 3 weeks ago
Selected Answer: A
There is burst option but it can be exhausted
"When using General Purpose SSD storage, your DB instance receives an initial I/O credit balance of 5.4 million I/O credits. This initial credit
balance is enough to sustain a burst performance of 3,000 IOPS for 30 minutes."
upvoted 1 times

" # andylogan 1 year ago


It's A - exhausted all its initial I/O credits on the second day
upvoted 1 times

" # tgv 1 year ago


AAA
---
upvoted 1 times

" # blackgamer 1 year ago


A is the answer.
upvoted 1 times

" # jobe42 1 year ago


A. Something similar happened to us, so I've learned about the EBS burstable balance
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with A
upvoted 1 times

" # mustpassla 1 year ago


A, a typical SAP IOPS related question
upvoted 1 times

" # Waiweng 1 year ago


it's A
upvoted 3 times

" # Kian1 1 year, 1 month ago


going with A
upvoted 3 times

" # Bulti 1 year, 1 month ago


A is the right answer. It must have exhausted all its I/O credits due to the marketing event and now operates with 300 IOPS which is pretty low for
that event.
upvoted 2 times


Question #677 Topic 1

A solutions architect has been assigned to migrate a 50 TB Oracle data warehouse that contains sales data from on-premises to Amazon
Redshift. Major updates to the sales data occur on the final calendar day of the month. For the remainder of the month, the data warehouse only
receives minor daily updates and is primarily used for reading and reporting. Because of this, the migration process must start on the first day of
the month and must be complete before the next set of updates occur. This provides approximately 30 days to complete the migration and ensure
that the minor daily changes have been synchronized with the
Amazon Redshift data warehouse. Because the migration cannot impact normal business network operations, the bandwidth allocated to the
migration for moving data over the internet is 50 Mbps. The company wants to keep data migration costs low.
Which steps will allow the solutions architect to perform the migration within the specified timeline?

A. Install Oracle database software on an Amazon EC2 instance. Configure VPN connectivity between AWS and the company's data center.
Configure the Oracle database running on Amazon EC2 to join the Oracle Real Application Clusters (RAC). When the Oracle database on
Amazon EC2 finishes synchronizing, create an AWS DMS ongoing replication task to migrate the data from the Oracle database on Amazon
EC2 to Amazon Redshift. Verify the data migration is complete and perform the cut over to Amazon Redshift.

B. Create an AWS Snowball import job. Export a backup of the Oracle data warehouse. Copy the exported data to the Snowball device. Return
the Snowball device to AWS. Create an Amazon RDS for Oracle database and restore the backup +le to that RDS instance. Create an AWS DMS
task to migrate the data from the RDS for Oracle database to Amazon Redshift. Copy daily incremental backups from Oracle in the data center
to the RDS for Oracle database over the internet. Verify the data migration is complete and perform the cut over to Amazon Redshift.

C. Install Oracle database software on an Amazon EC2 instance. To minimize the migration time, configure VPN connectivity between AWS
and the company's data center by provisioning a 1 Gbps AWS Direct Connect connection. Configure the Oracle database running on Amazon
EC2 to be a read replica of the data center Oracle database. Start the synchronization process between the company's on-premises data
center and the Oracle database on Amazon EC2. When the Oracle database on Amazon EC2 is synchronized with the on-premises database,
create an AWS DMS ongoing replication task to migrate the data from the Oracle database read replica that is running on Amazon EC2 to
Amazon Redshift. Verify the data migration is complete and perform the cut over to Amazon Redshift.

D. Create an AWS Snowball import job. Configure a server in the company's data center with an extraction agent. Use AWS SCT to manage the
extraction agent and convert the Oracle schema to an Amazon Redshift schema. Create a new project in AWS SCT using the registered data
extraction agent. Create a local task and an AWS DMS task in AWS SCT with replication of ongoing changes. Copy data to the Snowball device
and return the Snowball device to AWS. Allow AWS DMS to copy data from Amazon S3 to Amazon Redshift. Verify that the data migration is
complete and perform the cut over to Amazon Redshift.

Correct Answer: A

Community vote distribution


D (75%) A (25%)

" # liono Highly Voted $ 1 year, 1 month ago


D is correct, you need Snowball for the size of the DB, SCT for converting from oracle to redshift and DMS for migration job
https://aws.amazon.com/getting-started/hands-on/migrate-oracle-to-amazon-redshift/
upvoted 26 times

" # kirrim 1 year ago


50TB to transfer
- Transmitting over 50Mbps VPN ~ 90 days, not going to work
- Transmitting over 1Gbps DX ~ 4.3 days, but ~ 60 days to provision circuit, not going to work

A and C are automatically ruled out

- Transmitting via Snowball (Edge) ~ 3-5 days, can hold up to 80TB usable disk, feasible

Between B and D, difference is around whether to use SCT and DMS to Snowball in your datacenter, then move to AWS. Or, copy to Snowball
in data center, move to AWS, then do DMS WITHOUT SCT within AWS. Clearly, you need SCT to go from Oracle to Redshift, so it has to be D

https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/agents.dw.html
upvoted 7 times

" # sangkhuu 3 weeks ago


great explain!
upvoted 1 times
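
The same transfer-time math as a small script (decimal TB and Mbps assumed, protocol overhead ignored):

# Rough transfer-time math behind the comment above: 50 TB over the two links discussed.
size_bits = 50 * 10**12 * 8          # 50 TB expressed in bits

for name, mbps in [("50 Mbps internet allowance", 50), ("1 Gbps Direct Connect", 1000)]:
    seconds = size_bits / (mbps * 10**6)
    print(f"{name}: ~{seconds / 86400:.1f} days")

# 50 Mbps  -> ~92.6 days (misses the 30-day window)
# 1 Gbps   -> ~4.6 days  (but a new DX circuit can take weeks to provision)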


" # hkwong Highly Voted $ 1 year, 1 month ago


D. SCT is a must for converting Oracle DATA WAREHOUSE to Redshift
upvoted 5 times

" # Anhdd Most Recent % 5 months, 1 week ago


Selected Answer: A
I wonder about choosing Snowball as the solution. It looks fine, but during the time the device is shipped back to AWS (maybe a week), the on-premises data
are not being synced, while the question requires that the minor daily updates be synchronized with the Amazon Redshift data warehouse.
How can this be completed?
upvoted 1 times

" # leoluo2020 7 months, 3 weeks ago


Selected Answer: D
mark D
upvoted 2 times

" # palace 11 months ago


For an Oracle database running in the AWS Cloud, SCT is not shown in the "Target architecture" diagram.
https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-an-oracle-database-to-amazon-redshift-using-aws-dms-and-aws-
sct.html
With D the changes of the month will be lost.
upvoted 1 times

" # cldy 11 months ago


D. Create an AWS Snowball import job. Configure a server in the company‫ג‬€™s data center with an extraction agent. Use AWS SCT to manage
the extraction agent and convert the Oracle schema to an Amazon Redshift schema. Create a new project in AWS SCT using the registered data
extraction agent. Create a local task and an AWS DMS task in AWS SCT with replication of ongoing changes. Copy data to the Snowball device
and return the Snowball device to AWS. Allow AWS DMS to copy data from Amazon S3 to Amazon Redshift. Verify that the data migration is
complete and perform the cut over to Amazon Redshift.
upvoted 2 times

" # backfringe 11 months, 1 week ago


I go with D
upvoted 1 times

" # AzureDP900 11 months, 2 weeks ago


I just thought about D and all the candidates mentioned same. I am getting ready for exam :)
upvoted 1 times

" # acloudguru 11 months, 3 weeks ago


Selected Answer: D
Since it is Oracle to Redshift, it needs SCT. Scan for the keyword SCT and the answer is D. 50 TB through the network is impossible for A to finish in
30 days, so Snowball is a must.
upvoted 1 times

" # moon2351 1 year ago


Answer is D
upvoted 1 times

" # andylogan 1 year ago


It's D - for Oracle to Redshift, it needs SCT
upvoted 2 times

" # DerekKey 1 year ago


Requirements:
1. "the data warehouse only receives minor daily updates and is primarily used for reading and reporting"
2. "ensure that the minor daily changes have been synchronized with the
Amazon Redshift data warehouse"

D - how would you make it work if the Snowball will travel to AWS for 3-4 days and 1 more day will be spent restoring the database?
upvoted 1 times

" # tgv 1 year ago


DDD
---
upvoted 1 times

" # blackgamer 1 year ago


Only D makes sense, but the solution is not written very clearly.
upvoted 1 times

" # Suresh108 1 year ago


This question is as long and big as 50 TB. :D

Since it is Oracle to Redshift, it needs SCT. Scan for the keyword SCT and the answer is D.
upvoted 1 times
" # denccc 1 year ago
D: https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/agents.dw.html
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with D
upvoted 1 times


Question #678 Topic 1

A solutions architect is designing a disaster recovery strategy for a three-tier application. The application has an RTO of 30 minutes and an RPO of
5 minutes for the data tier. The application and web tiers are stateless and leverage a fleet of Amazon EC2 instances. The data tier consists of a
50 TB Amazon Aurora database.
Which combination of steps satisfies the RTO and RPO requirements while optimizing costs? (Choose two.)

A. Create daily snapshots of the EC2 instances and replicate the snapshots to another Region.

B. Deploy a hot standby of the application to another Region.

C. Create snapshots of the Aurora database every 5 minutes.

D. Create a cross-Region Aurora Replica of the database.

E. Create an AWS Backup job to replicate data to another Region.

Correct Answer: AD

Community vote distribution


BD (56%) AD (44%)

" # cpd Highly Voted $ 1 year, 1 month ago


Question is asking for: RTO of 30 minutes and an RPO of 5
RPO RTO -> mode
24 24hr -> backup
12 4hr -> pilot light
1.4 15min -> warm standby
15min 5min -> active-active
B because of above ^
D is obvious
upvoted 26 times

" # student22 1 year ago


A,D
RPO/RTO is for the data tier.
upvoted 2 times

" # tgv 1 year ago


As this might be the general best practice, the question is asking to optimize costs and I think we can easily achieve the RTO / RPO with
option A
upvoted 4 times

" # MikeyJ 2 months, 3 weeks ago


My thinking too. If it hadn't specifically mentioned costs I would have said B.
upvoted 1 times

" # Ebi Highly Voted $ 1 year ago


I go with AD
upvoted 22 times

" # JohnPi Most Recent % 4 weeks, 1 day ago


Selected Answer: BD
https://aws.amazon.com/blogs/architecture/disaster-recovery-dr-architecture-on-aws-part-iii-pilot-light-and-warm-standby/
upvoted 2 times

" # psou7 1 month ago


I will go with A/D.
The question highlights "optimizing costs". B - Hot standby would work but is more expensive.
RTO and RPO is for data and D covers that.
upvoted 3 times

" # Kyperos 2 months, 2 weeks ago


Because of "The application and web layers are stateless" so dont have any data stored in EC2 Instance. If application dont have multiple
deployments in a day, option A is cost effective. Cross-Region Aurora will effect to RPO/RTO and meet requirements. --> AD are best options!
upvoted 2 times
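
For readers who want to see what option D's cross-Region replica looks like in practice, a minimal boto3 sketch (cluster identifiers and Regions are placeholders; an encrypted source cluster additionally needs a KMS key and pre-signed URL handling, omitted here):

# Sketch of option D: create a cross-Region Aurora read replica cluster in the DR Region.
import boto3

rds_dr = boto3.client("rds", region_name="us-west-2")   # DR Region

rds_dr.create_db_cluster(
    DBClusterIdentifier="sales-db-replica",
    Engine="aurora-mysql",
    ReplicationSourceIdentifier=(
        "arn:aws:rds:us-east-1:111122223333:cluster:sales-db"   # primary cluster ARN (placeholder)
    ),
)

# The replica cluster needs at least one instance before it can serve reads or be promoted.
rds_dr.create_db_instance(
    DBInstanceIdentifier="sales-db-replica-1",
    DBInstanceClass="db.r5.large",
    Engine="aurora-mysql",
    DBClusterIdentifier="sales-db-replica",
)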

" # hilft 3 months, 1 week ago


C and D is right answer.


A is non sense 24 hours "DAILY"


upvoted 1 times

" # hilft 3 months, 1 week ago


A is wrong. daily snapshot won't be enough for 30min/5min
upvoted 1 times

" # aandc 4 months, 1 week ago


AD, RTO & RPO only for Data tier,
upvoted 1 times

" # TechX 4 months, 1 week ago


Selected Answer: AD
AD for me
B will work, but it's too expensive because you have an active-active model with RTO and RPO within minutes, while the question says the RTO can
be up to 30 minutes. A will work and is more cost-effective.
upvoted 4 times

" # gorodetsky 7 months, 3 weeks ago


Selected Answer: BD
B,D https://docs.aws.amazon.com/wellarchitected/latest/reliability-pillar/plan-for-disaster-recovery-dr.html
upvoted 3 times

" # good_tea 8 months, 2 weeks ago


I go with BD
https://docs.aws.amazon.com/wellarchitected/latest/reliability-pillar/plan-for-disaster-recovery-dr.html
upvoted 2 times

" # lifebegins 9 months ago


Answer is B & D
Hot Standby is the correct answer:

Because, if we have a fleet of stateless EC2 instances, why are we even taking snapshots? Suppose we have 5 instances in the app layer and
10 instances in the BL: what is the use of taking snapshots of the disks of an app layer that is stateless? Instead, we can maintain a thin hot-standby
layer of 1 instance in Web and 1 instance in BL behind an Auto Scaling group, with cross-Region replication of Aurora, and bring the entire layer up
within a few minutes by standing up the instances with CloudFormation against the DR database:

https://www.wellarchitectedlabs.com/reliability/disaster-recovery/workshop_4/

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-global-database-disaster-recovery.html
upvoted 2 times

" # Ni_yot 9 months, 3 weeks ago


Will go with B and D. The host standby solves the 30min RTO and the DB cross replication solves the 5min RPO
upvoted 2 times

" # CloudChef 10 months ago


B and D as stated on Digital Cloud Training.
upvoted 2 times

" # AzureDP900 11 months, 2 weeks ago


I'll go with A,D
upvoted 1 times

" # Kopa 11 months, 3 weeks ago


The prob with A to be copied this snapshot in 5 min?
upvoted 1 times

" # sashenka 11 months, 4 weeks ago


BBB D. Same exact question appears on the Tutorials Dojo test which has moderators and good credibility and their answer is [B]D with the
explanation that it may take longer than 30 min to get the snapshot back up and running especially if there is manual intervention. Picture that a
regional outage happens in the middle of the night on a weekend or holiday. What is the likelihood that someone can get an EC2 instance
restored from a snapshot and fully operational from the time the region goes out. If you were the person responsible for that 30 min RTO SLA
would you put YOUR JOB on the line? I too was inclined to select AD but the more I understand the explanation I can agree with BD.
upvoted 3 times

" # ppandey96 9 months, 3 weeks ago


RPO and RTO is for data layer not application
upvoted 1 times

" # sashenka 11 months, 4 weeks ago


And to make the 30 min RTO window even more critical is this, "application and web layers are stateless and run on an Amazon EC2 fleet of
instances". Can one truly recover an EC2 fleet for both the Web and App tiers(we don't know how many but it sounds like more than a couple"


from a snapshot within 30 min from the time we have a failure?


upvoted 1 times

" # sashenka 11 months, 4 weeks ago


Take a look at the following guidance:
https://docs.aws.amazon.com/wellarchitected/latest/reliability-pillar/plan-for-disaster-recovery-dr.html
What [A][D] describe is a Pilot light (RPO in minutes, RTO in hours) which DOES NOT MEET THE REQUIREMENTS: Replicate your data
from one region to another and provision a copy of your core workload infrastructure. Resources required to support data replication and
backup such as databases and object storage are always on. Other elements such as application servers are loaded with application code
and configurations, but are switched off and are only used during testing or when Disaster Recovery failover is invoked.

What's needed here is Warm standby (RPO in seconds, RTO in minutes) but since that is not available the only valid option is [B][D] .
upvoted 1 times

" # tomosabc1 3 weeks, 4 days ago


Obviously, you don't understand what ppandey96 mean by his/her comment. Please read carefully.
upvoted 1 times


Question #679 Topic 1

A company has a primary Amazon S3 bucket that receives thousands of objects every day. The company needs to replicate these objects into
several other S3 buckets from various AWS accounts. A solutions architect is designing a new AWS Lambda function that is triggered when an
object is created in the main bucket and replicates the object into the target buckets. The objects do not need to be replicated in real time. There
is concern that this function may impact other critical
Lambda functions due to Lambda's regional concurrency limit.
How can the solutions architect ensure this new Lambda function will not impact other critical Lambda functions?

A. Set the new Lambda function reserved concurrency limit to ensure the executions do not impact other critical Lambda functions. Monitor
existing critical Lambda functions with Amazon CloudWatch alarms for the Throttles Lambda metric.

B. Increase the execution timeout of the new Lambda function to 5 minutes. Monitor existing critical Lambda functions with Amazon
CloudWatch alarms for the Throttles Lambda metric.

C. Configure S3 event notifications to add events to an Amazon SQS queue in a separate account. Create the new Lambda function in the
same account as the SQS queue and trigger the function when a message arrives in the queue.

D. Ensure the new Lambda function implements an exponential backoff algorithm. Monitor existing critical Lambda functions with Amazon
CloudWatch alarms for the Throttles Lambda metric.

Correct Answer: A

Community vote distribution


C (89%) 11%

" # liono Highly Voted $ 1 year, 1 month ago


A is correct,
https://aws.amazon.com/blogs/compute/managing-aws-lambda-function-concurrency/
upvoted 25 times

" # gerhardbl 2 months, 2 weeks ago


But those other Lambda functions would now be running in different accounts where the target buckets are, hence there would be no impact
on the 'key' Lambdas in the main account. Also by using Lambda to process SQS, it will pull multiple messages off the queue at once, instead
of firing up a new concurrent Lambda for every object that needs to be copied. Lastly, if you set a Reserved Concurrency limit on the
Lambdas that process these large quantities of S3 uploads, and the limit is hit, you will start losing data and your buckets will be out of sync. I
think just using Reserved Concurrency is too simple here; they are looking for one step further.
upvoted 1 times

" # cpd 1 year, 1 month ago


Thank you, very nice blog.
upvoted 1 times

" # rscloud 1 year, 1 month ago


Thankyou, very informative blog!
A is correct
upvoted 2 times

" # kirrim 1 year ago


Agree!

Another document supporting A:

https://docs.aws.amazon.com/lambda/latest/dg/configuration-concurrency.html

"Your function can't scale out of control – Reserved concurrency also limits your function from using concurrency from the unreserved pool,
which caps its maximum concurrency. You can reserve concurrency to prevent your function from using all the available concurrency in the
Region, or from overloading downstream resources."
upvoted 1 times
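
A minimal sketch of option A with boto3 (the function names, the limit of 100, and the alarm settings are placeholders):

# Sketch of option A: cap the new replication function with reserved concurrency and
# alarm on Throttles for an existing critical function.
import boto3

boto3.client("lambda").put_function_concurrency(
    FunctionName="s3-object-replicator",
    ReservedConcurrentExecutions=100,    # carved out of, and capped within, the regional limit
)

boto3.client("cloudwatch").put_metric_alarm(
    AlarmName="critical-fn-throttles",
    Namespace="AWS/Lambda",
    MetricName="Throttles",
    Dimensions=[{"Name": "FunctionName", "Value": "critical-function"}],
    Statistic="Sum",
    Period=60,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
)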

" # PredaOvde Highly Voted $ 1 year, 1 month ago


I choose C. Let me explain why I don't think A is the right approach: if you set Reserved concurrency to let's say 200, I make myself 2 questions:
1) 800 instances will remain available for other lambdas. How do you guarantee 800 is enough for other lambdas? Perhaps 999 lambdas was
answering in a timely manner to all the requests and adding a new lambda will break everything.
2) Assigning 200 Reserved instances to the new Lambda does not guarantee that will be enough for the new Lambda. Maybe it requires 500?

So, adding SQS queue resolve both of the problems. Thoughts?


upvoted 14 times

" # cannottellname 9 months ago


I believe an SQS queue + Lambda reserved concurrency together will help solve the issue. Not setting a Lambda limit/reservation leaves a chance
that the concurrency is fully utilized by this function alone.
upvoted 2 times
" # astalavista1 Most Recent % 2 months ago
Selected Answer: C
"The objects do not need to be processed in real-time" - SQS gives you the option to process in batches, you can also delay delivery to ensure
Lambda isn't swarmed with multiple concurrent executions from the object upload.
upvoted 2 times

" # jyrajan69 4 months, 1 week ago


Key point, addition of this new Lambda function has no adverse effect on other key Lambda functions? So how does C address that? Only A
provides an answer, with reserved concurrency
upvoted 2 times

" # azurehunter 4 months, 4 weeks ago


Answer is A.

C is wrong because the Lambda functions which read the messages from SQS may scale out to 1000 if hundreds of thousands of uploads occur in
a very short time. It will impact the other Lambda functions.

Refer to https://docs.aws.amazon.com/lambda/latest/dg/with-sqs.html
upvoted 1 times

" # jj22222 7 months, 1 week ago


Selected Answer: C
C. Configure S3 event notifications to add events to an Amazon SQS queue in a separate account. Create the new Lambda function in the same
account as the SQS queue and trigger the function when a message arrives in the queue.
upvoted 2 times

" # pititcu667 8 months, 3 weeks ago


Selected Answer: A
I think A is correct.
upvoted 1 times

" # shotty1 9 months, 2 weeks ago


Selected Answer: C
I think this is C
upvoted 2 times

" # tkanmani76 9 months, 3 weeks ago


C is right - The concurrency space available is common for all the functions in the region. By reserving concurrency for a function we ensure no
other function can use that concurrency space. However this limits the ability to use the concurrency from open pool.
upvoted 2 times

" # cldy 11 months ago


C. Configure S3 event notifications to add events to an Amazon SQS queue in a separate account. Create the new Lambda function in the same
account as the SQS queue and trigger the function when a message arrives in the queue.
upvoted 1 times

" # AzureDP900 11 months, 1 week ago


I am going with C
upvoted 1 times

" # kaleen_bhaiya 11 months, 1 week ago


Selected Answer: C
Answer is C; only C ensures all the files are copied. If you throttle the Lambda, many of the copy requests will fail. And there is no need to have
synchronous copying of data.
upvoted 2 times

" # student22 1 year ago


A
---
The question mentions "The objects do not need to be replicated in real time.", hinting at A.
C is too much work.
upvoted 1 times

" # andylogan 1 year ago


It's A
upvoted 1 times


" # DerekKey 1 year ago


Question is:
"How can the solutions architect ensure this new Lambda function will not impact other critical Lambda functions?"
WILL NOT IMPACT
Any usage of the new replication lambda in current account will impact critical Lambda functions.
C is CORRECT in my opinion
upvoted 2 times

" # tgv 1 year ago


AAA
---
upvoted 1 times

" # blackgamer 1 year ago


C is the answer to me.
upvoted 2 times


Question #680 Topic 1

A company wants to run a serverless application on AWS. The company plans to provision its application in Docker containers running in an
Amazon ECS cluster.
The application requires a MySQL database and the company plans to use Amazon RDS. The company has documents that need to be accessed
frequently for the first 3 months, and rarely after that. The documents must be retained for 7 years.
What is the MOST cost-effective solution to meet these requirements?

A. Create an ECS cluster using On-Demand Instances. Provision the database and its read replicas in Amazon RDS using Spot Instances. Store
the documents in an encrypted EBS volume, and create a cron job to delete the documents after 7 years.

B. Create an ECS cluster using a fleet of Spot Instances, with Spot Instance draining enabled. Provision the database and its read replicas in
Amazon RDS using Reserved Instances. Store the documents in a secured Amazon S3 bucket with a lifecycle policy to move the documents
that are older than 3 months to Amazon S3 Glacier, then delete the documents from Amazon S3 Glacier that are more than 7 years old.

C. Create an ECS cluster using On-Demand Instances. Provision the database and its read replicas in Amazon RDS using On-Demand
Instances. Store the documents in Amazon EFS. Create a cron job to move the documents that are older than 3 months to Amazon S3 Glacier.
Create an AWS Lambda function to delete the documents in S3 Glacier that are older than 7 years.

D. Create an ECS cluster using a fleet of Spot Instances with Spot Instance draining enabled. Provision the database and its read replicas in
Amazon RDS using On-Demand Instances. Store the documents in a secured Amazon S3 bucket with a lifecycle policy to move the documents
that are older than 3 months to Amazon S3 Glacier, then delete the documents in Amazon S3 Glacier after 7 years.

Correct Answer: B

" # liono Highly Voted $ 1 year, 1 month ago


B seems to be correct: Spot Instances for the ECS cluster and Reserved Instances for RDS
upvoted 15 times

" # kirrim 1 year ago


Agree, more info:

A with RDS on spot instances is automatically ruled out


B is feasible by using a "Diversified" allocation strategy when setting up the Spot provisioning ECS cluster
C is feasible but more expensive to do RDS on-demand instances than RDS RI as in B, and uses EFS instead of S3 to store the documents,
not as cost effective
D is feasible but more expensive to do RDS on-demand instances than RDS RI as in B

https://aws.amazon.com/ec2/spot/containers-for-less/get-started/
https://aws.amazon.com/ec2/spot/instance-advisor/
upvoted 2 times
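
A minimal sketch of the lifecycle half of option B (the bucket name is a placeholder; 2,555 days approximates 7 years):

# Sketch of option B's lifecycle policy: move documents to Glacier after 3 months and
# expire them after roughly 7 years.
import boto3

boto3.client("s3").put_bucket_lifecycle_configuration(
    Bucket="company-documents",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-then-expire",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},                            # apply to all objects
            "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
            "Expiration": {"Days": 2555},                        # ~7 years (365 * 7)
        }]
    },
)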

" # Ebi Highly Voted $ 1 year ago


B for sure
upvoted 6 times

" # KiraguJohn Most Recent % 3 months, 1 week ago


I was skeptical of spot instance on ECS until i read this;
Amazon Elastic Container Service (ECS) supports Automated Spot Instance Draining, a new capability that reduces service interruptions due to
Spot termination for ECS workloads. This feature will enable ECS customers to safely manage any interruptions of ECS tasks running on Spot
instances due to termination of the underlying EC2 Spot instance.
upvoted 2 times

" # frankzeng 9 months, 3 weeks ago


https://aws.amazon.com/about-aws/whats-new/2019/09/amazon-ecs-supports-automated-draining-for-spot-instances-running-ecs-services/
upvoted 1 times

" # cldy 10 months, 1 week ago


B is correct.
upvoted 1 times

" # Ni_yot 10 months, 2 weeks ago


B for me. you want to use reserved instances as its cost effective.
upvoted 2 times

" # vramchn 11 months ago


B for sure
upvoted 1 times
" # AzureDP900 11 months, 1 week ago
B Is right
upvoted 1 times

" # andylogan 1 year ago


It's B
upvoted 1 times

" # tgv 1 year ago


BBB
---
upvoted 1 times

" # Suresh108 1 year ago


method of elimination -

choosing B as answer.
A and C are eliminated - due to cron usage
D - eliminated due to on-demand instance where DB cost can be reduced by reserved instances (seems it needs to be run for several years).
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with B
upvoted 2 times

" # Kopa 1 year ago


B. AWS RDS Reserved Instances are more cost-efficient, as the question implies the company will operate for more than 7 years.
upvoted 1 times

" # Waiweng 1 year ago


it's B
upvoted 2 times

" # Kian1 1 year ago


going with B
upvoted 2 times

" # kopper2019 1 year ago


B is the correct one spot instances and Reserved for DB
upvoted 2 times

" # Bulti 1 year, 1 month ago


The answer is B and not X, because only in rare circumstances would you choose On-Demand RDS instances, as RDS databases are usually required 24/7
upvoted 2 times


Question #681 Topic 1

A financial services company receives a regular data feed from its credit card servicing partner. Approximately 5,000 records are sent every 15
minutes in plaintext, delivered over HTTPS directly into an Amazon S3 bucket with server-side encryption. This feed contains sensitive credit card
primary account number
(PAN) data. The company needs to automatically mask the PAN before sending the data to another S3 bucket for additional internal processing.
The company also needs to remove and merge specific fields, and then transform the record into JSON format. Additionally, extra feeds are likely
to be added in the future, so any design needs to be easily expandable.
Which solutions will meet these requirements?

A. Trigger an AWS Lambda function on file delivery that extracts each record and writes it to an Amazon SQS queue. Trigger another Lambda
function when new messages arrive in the SQS queue to process the records, writing the results to a temporary location in Amazon S3. Trigger
a final Lambda function once the SQS queue is empty to transform the records into JSON format and send the results to another S3 bucket for
internal processing.

B. Trigger an AWS Lambda function on file delivery that extracts each record and writes it to an Amazon SQS queue. Configure an AWS Fargate
container application to automatically scale to a single instance when the SQS queue contains messages. Have the application process each
record, and transform the record into JSON format. When the queue is empty, send the results to another S3 bucket for internal processing
and scale down the AWS Fargate instance.

C. Create an AWS Glue crawler and custom classifier based on the data feed formats and build a table definition to match. Trigger an AWS
Lambda function on file delivery to start an AWS Glue ETL job to transform the entire record according to the processing and transformation
requirements. Define the output format as JSON. Once complete, have the ETL job send the results to another S3 bucket for internal
processing.

D. Create an AWS Glue crawler and custom classifier based upon the data feed formats and build a table definition to match. Perform an
Amazon Athena query on file delivery to start an Amazon EMR ETL job to transform the entire record according to the processing and
transformation requirements. Define the output format as JSON. Once complete, send the results to another S3 bucket for internal processing
and scale down the EMR cluster.

Correct Answer: A

Community vote distribution


C (100%)

" # liono Highly Voted $ 1 year, 1 month ago


C seems to be correct
https://docs.aws.amazon.com/glue/latest/dg/trigger-job.html
upvoted 22 times

" # fabianjanu 1 year, 1 month ago


I agree. A) can bring cost problems and concurrency limits in lambda. Furthermore, Glue already solves these issues with much less
development.
upvoted 4 times
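
A minimal sketch of the trigger in option C: an S3-invoked Lambda handler that starts the Glue ETL job for each delivered feed file. The job name and arguments are placeholders; the masking, field merging, and JSON output live in the Glue job script itself.

# Sketch of option C's trigger: Lambda started by an S3 event notification kicks off the Glue ETL job.
import boto3

glue = boto3.client("glue")

def handler(event, context):
    for record in event["Records"]:                      # S3 event notification payload
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        glue.start_job_run(
            JobName="mask-pan-and-transform",            # placeholder Glue ETL job name
            Arguments={"--source_bucket": bucket, "--source_key": key},
        )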

" # blackgamer Highly Voted $ 1 year ago


C is the correct answer.

https://d1.awsstatic.com/Products/product-name/diagrams/product-page-diagram_Glue_Event-driven-ETL-
Pipelines.e24d59bb79a9e24cdba7f43ffd234ec0482a60e2.png
upvoted 6 times

" # kirrim 1 year ago


Beautiful diagram!

Just in case the URL for that image gets modifed, scroll down to "Use Cases" on the home page for Glue: https://aws.amazon.com/glue/
upvoted 1 times

" # CloudHell Most Recent % 4 months, 4 weeks ago


I'm going with C.
upvoted 1 times

" # cldy 11 months ago


C. Create an AWS Glue crawler and custom classifier based on the data feed formats and build a table definition to match. Trigger an AWS


Lambda function on file delivery to start an AWS Glue ETL job to transform the entire record according to the processing and transformation
requirements. Define the output format as JSON. Once complete, have the ETL job send the results to another S3 bucket for internal processing.
upvoted 1 times
" # AzureDP900 11 months, 1 week ago
c is correct
You can use a Glue crawler to populate the AWS Glue Data Catalog with tables. The Lambda function can be triggered
using S3 event notifications when object create events occur. The Lambda function will then trigger the Glue ETL job
to transform the records masking the sensitive data and modifying the output format to JSON. This solution meets all
requirements.
upvoted 1 times

" # AzureDP900 11 months, 2 weeks ago


C is correct
upvoted 1 times

" # acloudguru 11 months, 2 weeks ago


Selected Answer: C
https://aws.amazon.com/glue/
upvoted 2 times

" # andylogan 1 year ago


It's C
upvoted 1 times

" # tgv 1 year ago


CCC
---
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with C
upvoted 2 times

" # mustpassla 1 year ago


D, a use case of Glue crawler.
upvoted 2 times

" # Waiweng 1 year ago


it's C
upvoted 2 times

" # KnightVictor 1 year ago


going with C
upvoted 2 times

" # eji 1 year ago


i think D
upvoted 1 times

" # wasabidev 1 year ago


C for me
upvoted 1 times

" # Kian1 1 year ago


I will go with C
upvoted 2 times

" # Ebi 1 year ago


I go with C
upvoted 3 times


Question #682 Topic 1

A media company is serving video files stored in Amazon S3 using Amazon CloudFront. The development team needs access to the logs to
diagnose faults and perform service monitoring. The log files from CloudFront may contain sensitive information about users.
The company uses a log processing service to remove sensitive information before making the logs available to the development team. The
company has the following requirements for the unprocessed logs:
✑ The logs must be encrypted at rest and must be accessible by the log processing service only.
✑ Only the data protection team can control access to the unprocessed log files.
✑ AWS CloudFormation templates must be stored in AWS CodeCommit.
✑ AWS CodePipeline must be triggered on commit to perform updates made to CloudFormation templates.
CloudFront is already writing the unprocessed logs to an Amazon S3 bucket, and the log processing service is operating against this S3 bucket.

Which combination of steps should a solutions architect take to meet the company's requirements? (Choose two.)

A. Create an AWS KMS key that allows the AWS Logs Delivery account to generate data keys for encryption. Configure S3 default encryption to
use server-side encryption with KMS managed keys (SSE-KMS) on the log storage bucket using the new KMS key. Modify the KMS key policy to
allow the log processing service to perform decrypt operations.

B. Create an AWS KMS key that allows the CloudFront service role to generate data keys for encryption. Configure S3 default encryption to use
KMS managed keys (SSE-KMS) on the log storage bucket using the new KMS key. Modify the KMS key policy to allow the log processing
service to perform decrypt operations.

C. Configure S3 default encryption to use AWS KMS managed keys (SSE-KMS) on the log storage bucket using the AWS Managed S3 KMS key.
Modify the KMS key policy to allow the CloudFront service role to generate data keys for encryption. Modify the KMS key policy to allow the log
processing service to perform decrypt operations.

D. Create a new CodeCommit repository for the AWS KMS key template. Create an IAM policy to allow commits to the new repository and
attach it to the data protection team's users. Create a new CodePipeline pipeline with a custom IAM role to perform KMS key updates using
CloudFormation Modify the KMS key policy to allow the CodePipeline IAM role to modify the key policy.

E. Use the existing CodeCommit repository for the AWS KMS key template. Create an IAM policy to allow commits to the new repository and
attach it to the data protection team's users. Modify the existing CodePipeline pipeline to use a custom IAM role and to perform KMS key
updates using CloudFormation. Modify the KMS key policy to allow the CodePipeline IAM role to modify the key policy.

Correct Answer: AD

" # Mr`ip Highly Voted $ 1 year ago


AD
There is no such role called the CloudFront service-linked role. CloudFront uses the awslogsdelivery account to deliver logs to the S3 bucket ->
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/AccessLogs.html#AccessLogsBucketAndFileOwnership
upvoted 13 times

" # LCC92 1 year ago


From the link Meflip gives:
If you enabled server-side encryption for your Amazon S3 bucket using AWS KMS-managed keys (SSE-KMS) with a customer-managed
Customer Master Key (CMK), you must add the following to the key policy for your CMK to enable writing log files to the bucket. You cannot
use the default CMK because CloudFront won't be able to upload the log files to the bucket.
{
"Sid": "Allow CloudFront Flow Logs to use the key",
"Effect": "Allow",
"Principal": {
"Service": "delivery.logs.amazonaws.com"
},
"Action": "kms:GenerateDataKey*",
"Resource": "*"
}
upvoted 3 times
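
For completeness, a minimal boto3 sketch of the default-encryption step that options A and B describe (the bucket name and key ARN are placeholders; the key policy statement for delivery.logs.amazonaws.com is the one quoted above):

# Sketch of setting SSE-KMS default encryption with a customer managed key on the log bucket.
import boto3

boto3.client("s3").put_bucket_encryption(
    Bucket="cloudfront-raw-logs",
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": "arn:aws:kms:us-east-1:111122223333:key/placeholder",
            }
        }]
    },
)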

" # SD13 Highly Voted $ 1 year, 1 month ago


B & D : These are correct options —

If you enabled server-side encryption for your Amazon S3 bucket using AWS KMS-managed keys (SSE-KMS) with a customer-managed
Customer Master Key (CMK), you must add the following to the key policy for your CMK to enable writing log files to the bucket. You cannot use
the default CMK because CloudFront won't be able to upload the log files to the bucket.


URL : https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/AccessLogs.html#AccessLogsKMSPermissions
upvoted 11 times

" # pablobairat 1 year ago


According to that link and the paragraph you have pasted, the correct answers are C & D
upvoted 1 times

" # bobsmith2000 5 months, 3 weeks ago


C is wrong. You cannot modify a key policy of a AWS managed KMS key.
https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#aws-managed-cmk
upvoted 3 times

" # ryu10_09 Most Recent % 11 months, 1 week ago


according to AWS:
If the S3 bucket for your standard logs uses server-side encryption with AWS KMS keys (SSE-KMS) using a customer managed key, you must
add the following statement to the key policy for your customer managed key. This allows CloudFront to write log files to the bucket. (You can’t
use SSE-KMS with the AWS managed key because CloudFront won’t be able to write log files to the bucket.)
with this I go with A&D
upvoted 2 times

" # bobsmith2000 5 months, 3 weeks ago


There's no such thing as "AWS Logs Delivery account".
It's a service
upvoted 3 times

" # student22 1 year ago


A,D
AWS Logs Delivery account + new repository
upvoted 3 times

" # andylogan 1 year ago


It's A D as tgv's comment
upvoted 1 times

" # Kopa 1 year ago


Coorect A, D
upvoted 2 times

" # tgv 1 year ago


AAA DDD
---
CloudFront service role doesn't exist. It uses "delivery.logs.amazonaws.com" which is the "awslogsdelivery account"
---> https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/AccessLogs.html
upvoted 4 times

" # blackgamer 1 year ago


B &has D for me.
upvoted 1 times

" # blackgamer 1 year ago


Change to A& D after reading this document.

https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/encrypt-log-data-kms.html
upvoted 1 times

" # denccc 1 year ago


B and D
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with B,D
upvoted 2 times

" # Waiweng 1 year ago


it's A&D, no such thing as cloudfront service role
upvoted 6 times

" # DashL 1 year ago


Whichever service delivers logs to S3 needs to have permission to use the CMK. In this case CloudFront delivers the logs to AWS Logs
Delivery account. Then AWS Logs Delivery account delivers the logs to S3. In this case, CloudFront doesn't encrypt the logs - the AWS Logs
Delivery account does. CloudFront isn't even aware of the fact that the logs are being encrypted.
upvoted 5 times


" # ss160700 1 year ago


A & D - CloudFront will use "Service": "delivery.logs.amazonaws.com" to log to S3. Need action": "kms:GenerateDataKey*" to the principal.
upvoted 2 times

" # bobsmith2000 5 months, 3 weeks ago


Isn't it a service? A states that it's account, but in documentation it's shown as service
upvoted 1 times

" # ladh 1 year ago


why not E?
upvoted 1 times

" # CarisB 1 year ago


Thanks for the link SD13. My first choice was AD, but BD seems right.
upvoted 2 times

" # Pupu86 1 year, 1 month ago


Option C is correct as SSE-S3 is needed minimally to encrypt at rest and reduce unnecessary cost of SSE-KMS.
upvoted 1 times

" # M_Asep 1 year, 1 month ago


BD
For me because
A seems not right because it using AWS SSE you can't use your own key
upvoted 1 times

" # eji 1 year, 1 month ago


BD for me
upvoted 2 times


Question #683 Topic 1

A company's service for video game recommendations has just gone viral. The company has new users from all over the world. The website for
the service is hosted on a set of Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer (ALB). The website
consists of static content with different resources being loaded depending on the device type.
Users recently reported that the load time for the website has increased. Administrators are reporting high loads on the EC2 instances that host
the service.
Which set of actions should a solutions architect take to improve response times?

A. Create separate Auto Scaling groups based on device types. Switch to Network Load Balancer (NLB). Use the User-Agent HTTP header in
the NLB to route to a different set of EC2 instances.

B. Move content to Amazon S3. Create an Amazon CloudFront distribution to serve content out of the S3 bucket. Use Lambda@Edge to load
different resources based on the User-Agent HTTP header.

C. Create a separate ALB for each device type. Create one Auto Scaling group behind each ALB. Use Amazon Route 53 to route to different
ALBs depending on the User-Agent HTTP header.

D. Move content to Amazon S3. Create an Amazon CloudFront distribution to serve content out of the S3 bucket. Use the User-Agent HTTP
header to load different content.

Correct Answer: A

Community vote distribution


B (100%)

" # liono Highly Voted $ 1 year, 1 month ago


B seems to fulfill the requirements
upvoted 17 times

" # dolphina02 1 year, 1 month ago


I'll say.
upvoted 1 times

" # Ni_yot Most Recent % 2 months, 3 weeks ago


B is obs
upvoted 1 times

" # pal40sg 8 months, 1 week ago


Selected Answer: B
https://aws.amazon.com/blogs/networking-and-content-delivery/dynamically-route-viewer-requests-to-any-origin-using-lambdaedge/
upvoted 1 times

" # shotty1 9 months, 2 weeks ago


Selected Answer: B
it is B
upvoted 1 times

" # pititcu667 10 months ago


Selected Answer: B
B seems correct
upvoted 1 times

" # AzureDP900 11 months, 1 week ago


it is B
upvoted 1 times

" # andylogan 1 year ago


It's B
upvoted 1 times

" # tgv 1 year ago


BBB
---
upvoted 1 times


" # blackgamer 1 year ago


Answer is B. Refer below on the explanation.

https://aws.amazon.com/blogs/networking-and-content-delivery/dynamically-route-viewer-requests-to-any-origin-using-lambdaedge/
upvoted 2 times

" # tiffanny 1 year ago


For those who confuse between B and D. Check this link https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-
examples.html#lambda-examples-redirecting-examples
upvoted 3 times

" # kirrim 1 year ago


Totally agree on B, and great document!

Here's the exact fragment URL on that page to the code to redirect based on device type:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-examples.html#lambda-examples-vary-on-device-type
upvoted 1 times
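
To make the Lambda@Edge piece of answer B concrete, here is a minimal viewer-request sketch in Python. The /desktop and /mobile key prefixes and the very crude mobile-detection rule are illustrative assumptions only, not anything stated in the question or the linked AWS example.

    # Sketch: Lambda@Edge viewer-request handler that picks device-specific content.
    # Assumes the S3 origin stores assets under hypothetical /desktop and /mobile prefixes.
    def handler(event, context):
        request = event['Records'][0]['cf']['request']
        headers = request.get('headers', {})
        ua = headers.get('user-agent', [{'value': ''}])[0]['value'].lower()
        prefix = '/mobile' if ('mobile' in ua or 'android' in ua) else '/desktop'
        request['uri'] = prefix + request['uri']   # changes both the cache key and the S3 key
        return request

Because the rewrite happens before the cache lookup, CloudFront caches the desktop and mobile variants separately, which is what removes the load from the EC2 fleet.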

" # WhyIronMan 1 year ago


I'll go with B
upvoted 1 times

" # hk436 1 year ago


B for sure.!
upvoted 1 times

" # Waiweng 1 year ago


it's B
upvoted 4 times

" # blackgamer 1 year ago


The answer is B.
upvoted 1 times

" # wasabidev 1 year ago


B is correct
upvoted 3 times

" # Kian1 1 year ago


will go with B
upvoted 1 times

" # Ebi 1 year ago


I go with B
upvoted 3 times


Question #684 Topic 1

A company is planning a large event where a promotional offer will be introduced. The company's website is hosted on AWS and backed by an
Amazon RDS for
PostgreSQL DB instance. The website explains the promotion and includes a sign-up page that collects user information and preferences.
Management expects large and unpredictable volumes of traffic periodically, which will create many database writes. A solutions architect needs
to build a solution that does not change the underlying data model and ensures that submissions are not dropped before they are committed to
the database.
Which solution meets these requirements?

A. Immediately before the event, scale up the existing DB instance to meet the anticipated demand. Then scale down after the event.

B. Use Amazon SQS to decouple the application and database layers. Configure an AWS Lambda function to write items from the queue into
the database.

C. Migrate to Amazon DynamoDB and manage throughput capacity with automatic scaling.

D. Use Amazon ElastiCache for Memcached to increase write capacity to the DB instance.

Correct Answer: D
Reference:
https://aws.amazon.com/elasticache/faqs/

Community vote distribution


B (100%)

" # petebear55 Highly Voted $ 1 year, 1 month ago


Come on Guys !! .. Ive seen the discussions below ... if you can not get this one then you are no where near to taking the exam !! .. the problem
here is a WRITE issue .. so for that we would use SQS to help hold the solution until it is ready to be written ... Elastichche is for caching solution
.. so we would use that in READ situation .... ANSWER IS B .. really concerned why the incorrect answer appears in the answer box
upvoted 40 times

" # ryu10_09 11 months, 2 weeks ago


maybe you should not take the exam yourself. havn't you heard of write-through cache before:
https://docs.aws.amazon.com/AmazonElastiCache/latest/mem-ug/Strategies.html#Strategies.WriteThrough
upvoted 1 times

" # gerhardbl 2 months, 3 weeks ago


I guess you kind of just proved his point.
upvoted 2 times

" # redipa 5 months ago


caching is for read speed. write-through cache doesn't increase write speed to the DB, it just means any new writes to the DB are also
written to the cache along the way.
upvoted 1 times

" # PAUGURU 1 year, 1 month ago


Agree 100%, these questions are the easy ones...
upvoted 7 times

" # student22 1 year ago


Well said! Answers in the answer box make me worried. It's good that we have a great community here.
upvoted 6 times

" # smartassX Highly Voted $ 1 year, 1 month ago


B is the answer! SQS with Lambda.
upvoted 30 times

" # beso 1 year, 1 month ago


SQS with lambda is eventually consistent, we are looking here for strong consistent which is option A?
upvoted 1 times

" # petebear55 1 year ago


"large and unpredictable volumes of traffic periodically" beso a can not be the answer
upvoted 2 times


" # petebear55 1 year ago


WELL DONE
upvoted 1 times
" # skywalker Most Recent % 4 weeks ago
Selected Answer: B
B.. SQS to capture new writes before storing in DB will help
D is wrong as it only provide enhancement during READ operation.
upvoted 1 times

" # kangtamo 4 months, 1 week ago


Selected Answer: B
Agree with B: SQS.
upvoted 1 times

" # user0001 5 months, 2 weeks ago


A :because they dont want to introduce change "A solutions architect must provide a solution that does not alter the underlying data
architecture"
B would be better if they do allow changes
upvoted 1 times

" # Cal88 6 days ago


They don’t wanna introduce change to the “data mode” so no change to the DB type.
No one said anything about changing the way to process the writes.
if you choose A , what capacity will you chose to scale up?
Remember its unpredictable traffic which a really important keyword in the question.
B is the best way to achieve what is asked in the question
upvoted 1 times

" # vbal 11 months, 1 week ago


D is NOT correct. The write-through strategy adds data or updates data in the cache whenever data is written to the database. Every write
involves two trips:
A write to the cache
A write to the database
Which adds latency to the process.
upvoted 1 times

" # AzureDP900 11 months, 1 week ago


going with B
upvoted 1 times

" # ryu10_09 11 months, 2 weeks ago


I would still go with using memcached and the write-through capabilities for it:https://docs.aws.amazon.com/AmazonElastiCache/latest/mem-
ug/Strategies.html#Strategies.WriteThrough
SQS is also an option here
upvoted 1 times

" # tgv 1 year ago


BBB
---
upvoted 1 times

" # blackgamer 1 year ago


B is the answer.
upvoted 1 times

" # tiffanny 1 year ago


lol but in real life, aws user are using A option. LOL because they are too lazy to change the infra.
upvoted 1 times

" # mustpassla 1 year ago


B, use SQS to ensures that submissions are not dropped. A is not correct as scaling up manually cant help as the volumes of traffic are
unpredictable.
upvoted 4 times

" # Waiweng 1 year ago


it's B
upvoted 2 times

" # wasabidev 1 year ago


B. in my opinion, one of the easiest question
upvoted 2 times


" # Kian1 1 year ago


going for B
upvoted 3 times

" # Ebi 1 year ago


B for sure
upvoted 4 times

" # Trap_D0_r 1 year ago


B
This is literally the use case for SQS why are people fighting about it?
upvoted 4 times
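
For anyone who wants to see what answer B looks like in code, below is a minimal sketch of a Lambda consumer with the SQS queue already configured as its event source. The table name signups, its columns, and the DB_DSN environment variable are assumptions for illustration only.

    # Sketch: SQS-triggered Lambda that commits sign-up submissions to RDS for PostgreSQL.
    # Assumes psycopg2 is packaged with the function and DB_DSN holds the connection string.
    import json
    import os
    import psycopg2

    def handler(event, context):
        conn = psycopg2.connect(os.environ['DB_DSN'])
        with conn, conn.cursor() as cur:                 # commits on success
            for record in event['Records']:              # one batch of SQS messages
                body = json.loads(record['body'])
                cur.execute(
                    "INSERT INTO signups (email, preferences) VALUES (%s, %s)",
                    (body['email'], json.dumps(body.get('preferences', {}))),
                )
        conn.close()

If the function fails, SQS redelivers the messages, which is exactly the "submissions are not dropped before they are committed" guarantee the question asks for.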


Question #685 Topic 1

A mobile app has become very popular, and usage has gone from a few hundred to millions of users. Users capture and upload images of
activities within a city, and provide ratings and recommendations. Data access patterns are unpredictable. The current application is hosted on
Amazon EC2 instances behind an
Application Load Balancer (ALB). The application is experiencing slowdowns and costs are growing rapidly.
Which changes should a solutions architect make to the application architecture to control costs and improve performance?

A. Create an Amazon CloudFront distribution and place the ALB behind the distribution. Store static content in Amazon S3 in an Infrequent
Access storage class.

B. Store static content in an Amazon S3 bucket using the Intelligent Tiering storage class. Use an Amazon CloudFront distribution in front of
the S3 bucket and the ALB.

C. Place AWS Global Accelerator in front of the ALB. Migrate the static content to Amazon EFS, and then run an AWS Lambda function to
resize the images during the migration process.

D. Move the application code to AWS Fargate containers and swap out the EC2 instances with the Fargate containers.

Correct Answer: B

Community vote distribution


D (50%) B (50%)

" # kalyan_krishna742020 Highly Voted $ 1 year, 1 month ago


B. Keywords: "Data access patterns are unpredictable" best fits to Intelligent Tiering
upvoted 25 times

" # Waiweng Highly Voted $ 1 year ago


it's B
upvoted 7 times

" # hahaaaaa Most Recent % 3 months ago


I will go with D
there is no mention of storage related to the current environment, in this question.
So, B cannot be the answer.
EC2: Predictable and sustained workloads having high utilization rates
Fargate: Automatic provisioning of workloads is required. More flexibility is required
upvoted 1 times

" # kapara 3 months, 1 week ago


Selected Answer: B
its B!!
upvoted 2 times

" # skyblue07 4 months, 2 weeks ago


Selected Answer: D
"Data access patterns are unpredictable", cloud front cache will be inefficient.
And with B option you are not solving the bottleneck.
upvoted 2 times

" # cldy 11 months ago


B. Store static content in an Amazon S3 bucket using the Intelligent Tiering storage class. Use an Amazon CloudFront distribution in front of the
S3 bucket and the ALB.
upvoted 1 times

" # AzureDP900 11 months, 2 weeks ago


I will go with B, The patterns of data access are unexpected is the key word in this question.
upvoted 1 times

" # andylogan 1 year ago


It's B
upvoted 1 times

" # tgv 1 year ago


BBB
---


upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with B
upvoted 1 times

" # mustpassla 1 year ago


B, SAA level.
upvoted 2 times

" # vkbajoria 1 year ago


it's B for me
upvoted 2 times

" # CarisB 1 year, 1 month ago


B seems good.
https://aws.amazon.com/about-aws/whats-new/2018/11/s3-intelligent-tiering/?nc1=h_ls
upvoted 4 times
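
A quick illustration of the storage-class half of answer B, using boto3; the file, bucket, and key names are placeholders.

    # Sketch: upload static assets directly into the S3 Intelligent-Tiering storage class.
    import boto3

    s3 = boto3.client('s3')
    s3.upload_file(
        'img/activity-photo.jpg',            # local file (placeholder)
        'example-static-content-bucket',     # bucket name (placeholder)
        'images/activity-photo.jpg',         # object key served through CloudFront
        ExtraArgs={'StorageClass': 'INTELLIGENT_TIERING'},
    )

Intelligent-Tiering then moves each object between access tiers automatically, which is why it fits the "data access patterns are unpredictable" hint.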


Question #686 Topic 1

A financial company with multiple departments wants to expand its on-premises environment to the AWS Cloud. The company must retain
centralized access control using an existing on-premises Active Directory (AD) service. Each department should be allowed to create AWS
accounts with preconfigured networking and should have access to only a specific list of approved services. Departments are not permitted to
have account administrator permissions.
What should a solutions architect do to meet these security requirements?

A. Configure AWS Identity and Access Management (IAM) with a SAML identity provider (IdP) linked to the on-premises Active Directory, and
create a role to grant access. Configure AWS Organizations with SCPs and create new member accounts. Use AWS CloudFormation templates
to configure the member account networking.

B. Deploy an AWS Control Tower landing zone. Create an AD Connector linked to the on-premises Active Directory. Change the identity source
in AWS Single Sign-On to use Active Directory. Allow department administrators to use Account Factory to create new member accounts and
networking. Grant the departments AWS power user permissions on the created accounts.

C. Deploy an Amazon Cloud Directory. Create a two-way trust relationship with the on-premises Active Directory, and create a role to grant
access. Set up an AWS Service Catalog to use AWS CloudFormation templates to create the new member accounts and networking. Use IAM
roles to allow access to approved AWS services.

D. Configure AWS Directory Service for Microsoft Active Directory with AWS Single Sign-On. Join the service to the on-premises Active
Directory. Use AWS CloudFormation to create new member accounts and networking. Use IAM roles to allow access to approved AWS
services.

Correct Answer: B
Reference:
https://d1.awsstatic.com/whitepapers/aws-overview.pdf
(46)

" # ExtHo Highly Voted $ 1 year, 1 month ago


B it looks
AWS Control Tower automates the setup of a new landing zone using best-practices blueprints for identity, federated access, and account
structure.
The account factory automates provisioning of new accounts in your organization. As a configurable account template, it helps you standardize
the provisioning of new accounts with pre-approved account configurations. You can configure your account factory with pre-approved network
configuration and region selections.

https://aws.amazon.com/controltower/features/
upvoted 14 times

" # heany 3 weeks, 1 day ago


A doesn't address ‘Each department should be allowed to create AWS accounts with preconfigured networking’
B and C doesn't address ' should have access to only a specific list of approved services'
D doesn't make sense.
If SCP can be added to B, then b is a perfect answer. Anyway this q&A is not a good one
upvoted 1 times

" # cldy Most Recent % 11 months ago


B. Deploy an AWS Control Tower landing zone. Create an AD Connector linked to the on-premises Active Directory. Change the identity source in
AWS Single Sign-On to use Active Directory. Allow department administrators to use Account Factory to create new member accounts and
networking. Grant the departments AWS power user permissions on the created accounts.
upvoted 1 times

" # AzureDP900 11 months, 1 week ago


B is perfect
upvoted 1 times

" # student22 1 year ago


B
Control Tower + AD Connector + Account Factory
upvoted 1 times

" # andylogan 1 year ago


It's B
upvoted 1 times


" # tgv 1 year ago


BBB
---
Key: "Each department should be allowed to create AWS accounts with preconfigured networking"
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with B
upvoted 1 times

" # Waiweng 1 year ago


it's B
https://docs.aws.amazon.com/controltower/latest/userguide/what-is-control-tower.html
upvoted 2 times

" # Kayode 1 year ago


The answer is B
https://docs.aws.amazon.com/controltower/latest/userguide/what-is-control-tower.html
upvoted 2 times

" # TerrenceC 1 year, 1 month ago


Although option #A points out the key factor SCP, however, it does not emphasize its major functionality which is the service boundary in this
case. All the options here are more about how to govern the accounts. According to the introduction (https://aws.amazon.com/controltower
/?nc2=h_ql_prod_mg_ct), there are two highlights to make option #B is much more ideal than #A.

1) Blueprints are available to provide identity management, federate access to accounts, centralize logging, establish cross-account security
audits, define workflows for provisioning accounts, and implement account baselines with network configurations.

2) Control Tower provides mandatory and strongly recommended high-level rules, called guardrails, that help enforce your policies using service
control policies (SCPs), or detect policy violations using AWS Config rules.
upvoted 3 times

" # KevinZhong 1 year, 1 month ago


Seems to be B
------------
AWS Control Tower seems to maintain the control of AWS Organizations, AWS Service Catalog and AWS Config
------ https://d1.awsstatic.com/whitepapers/aws-overview.pdf (46)
upvoted 1 times

" # Flosuccess 1 year, 1 month ago


Is service catalog not more in line with "access to only a specific list of approved services"? maybe C.
upvoted 3 times

" # mijeko8879 1 year ago


C it is https://aws.amazon.com/blogs/mt/automate-account-creation-and-resource-provisioning-using-aws-service-catalog-aws-
organizations-and-aws-lambda/
upvoted 3 times

" # eji 1 year, 1 month ago


I go for A, keyword "access to only a specific list of approved services. " it means SCP
upvoted 1 times

" # sarah_t 1 year, 1 month ago


In ControlTower you can apply guardrails to OUs (restricting what those accounts can do). With AccontFactory you can determine which OU
the newly created account belongs to.
upvoted 1 times

" # awsnoob 1 year, 1 month ago


Should be A

https://aws.amazon.com/blogs/security/aws-federated-authentication-with-active-directory-federation-services-ad-fs/
upvoted 1 times

" # awsnoob 1 year, 1 month ago


Nvm, it is B. I misread the question.
upvoted 3 times

" # wasabidev 1 year, 1 month ago


I think A
upvoted 1 times


Question #687 Topic 1

A large financial company is deploying applications that consist of Amazon EC2 and Amazon RDS instances to the AWS Cloud using AWS
CloudFormation.
The CloudFormation stack has the following stack policy:

The company wants to ensure that developers do not lose data by accidentally removing or replacing RDS instances when updating the
CloudFormation stack.
Developers also still need to be able to modify or remove EC2 instances as needed.
How should the company change the stack policy to meet these requirements?

A. Modify the statement to specify "Effect":"Deny", "Action":["Update:*"] for all logical RDS resources.

B. Modify the statement to specify "Effect":"Deny", "Action":["Update:Delete"] for all logical RDS resources.

C. Add a second statement that specifies "Effect":"Deny", "Action":["Update:Delete", "Update:Replace"] for all logical RDS resources.

D. Add a second statement that specifies "Effect":"Deny", "Action":["Update:*"] for all logical RDS resources.

Correct Answer: C

Community vote distribution


C (100%)

" # Waiweng Highly Voted $ 1 year ago


it's C
upvoted 10 times

" # SD13 Highly Voted $ 1 year ago


C:
Supporting link : https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/protect-stack-resources.html#protect-stack-resources-
modifying
Prevent replacement updates for an instance
The following policy denies updates that would cause a replacement of the instance with the MyInstance logical ID. It allows all update actions on
all other stack resources with an Allow statement. The Allow statement doesn't apply to the MyInstance resource because the Deny statement
always overrides allow actions.

Prevent replacement updates for an instance


The following policy denies updates that would cause a replacement of the instance with the MyInstance logical ID. It allows all update actions on
all other stack resources with an Allow statement. The Allow statement doesn't apply to the MyInstance resource because the Deny statement
always overrides allow actions.
upvoted 6 times

" # awsa37430 Most Recent % 1 week, 1 day ago


Selected Answer: C
cccccccc
upvoted 1 times

" # cldy 11 months ago


C. Add a second statement that specifies "Effect":"Deny", "Action":["Update:Delete", "Update:Replace"] for all logical RDS resources.
upvoted 1 times

" # AzureDP900 11 months, 1 week ago


C is right


upvoted 1 times
" # backfringe 11 months, 2 weeks ago
CCCCCCC
upvoted 1 times

" # ByomkeshDas 11 months, 3 weeks ago


C is correct
upvoted 1 times

" # tgv 1 year ago


CCC
---
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with C
upvoted 1 times

" # hk436 1 year ago


c for sure.!
upvoted 3 times

" # Pupu86 1 year ago


D is correct, with Update: * (representing delete & replace) on 2nd statement. This explicitly denies any updates to database resources only. You
do not have to explicitly indicate Update with delete and replace (like what option C did).
upvoted 1 times

" # Pupu86 1 year ago


My apologies, Answer should be C as Action: Modify should be implicitly allowed
upvoted 4 times

" # kirrim 1 year ago


Agree with C!

A & B are invalid because by overwriting that allow statement, you would not allow updates to anything. Whereas C & D leave the general
allow statement in place, but add another statement with more specific deny actions for the RDS resources

Between C & D, there are four options for the Update action:
- Update:Modify
- Update:Replace
- Update:Delete
- Update:*
The question says to deny "removing or replacing RDS instances", so that means we only need to deny Update:Replace and
Update:Delete, while still allowing Update:Modify

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/protect-stack-resources.html#stack-policy-reference
upvoted 3 times

" # MrCarter 1 year ago


Yes, the question says prevent replacement or deletion but not modification of existing RDS resource
upvoted 2 times

" # nasuu 1 year, 1 month ago


This answer is C. Question is about CloudFormation Stack Policy, not CLI.
"Update:Delete" and "Update:Replace" should be applied for only RDS. C explains that.

https://docs.aws.amazon.com/ja_jp/AWSCloudFormation/latest/UserGuide/protect-stack-resources.html
upvoted 3 times

" # DashL 1 year ago


This link is in Chinese/Japanese/Korean. Open at your own risk.
Here is the correct link: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/protect-stack-resources.html
upvoted 1 times

" # eji 1 year, 1 month ago


i agree with C
upvoted 3 times

" # eji 1 year, 1 month ago


sorry i think D, because you can modify or remove for ec2, but you cannot do update or delete for rds resources
upvoted 1 times

" # awsnoob 1 year, 1 month ago


Should be C, policies are implicit deny and explicit allow.


upvoted 3 times
" # sek12324 1 year, 1 month ago
Sorry its B, you modify the the same statement
upvoted 2 times

" # sek12324 1 year, 1 month ago


C
{
"Statement" : [
{
"Effect" : "Deny",
"Action" : "Update:*",
"Principal": "*",
"Resource" : "LogicalResourceId/MyDatabase"
},
{
"Effect" : "Allow",
"Action" : "Update:*",
"Principal": "*",
"Resource" : "*"
}
]
}
upvoted 3 times

" # tvs 1 year ago


this will block all DB update ,we need to block only delete and replace.
upvoted 2 times
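
Putting the thread together, a stack policy matching answer C could look like the sketch below, applied here with boto3. The stack name and the MyRDSInstance logical ID are placeholders; the question's actual logical IDs are not shown.

    # Sketch: keep the broad Allow statement, add a Deny for delete/replace on the RDS resource only.
    import boto3, json

    stack_policy = {
        "Statement": [
            {"Effect": "Allow", "Action": "Update:*", "Principal": "*", "Resource": "*"},
            {"Effect": "Deny",
             "Action": ["Update:Delete", "Update:Replace"],
             "Principal": "*",
             "Resource": "LogicalResourceId/MyRDSInstance"},   # placeholder logical ID
        ]
    }

    boto3.client('cloudformation').set_stack_policy(
        StackName='ecommerce-stack',                # placeholder stack name
        StackPolicyBody=json.dumps(stack_policy),
    )

Because Update:Modify is not denied, developers can still change RDS settings in place, while replacement or deletion during a stack update is blocked.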


Question #688 Topic 1

A company is currently in the design phase of an application that will need an RPO of less than 5 minutes and an RTO of less than 10 minutes. The
solutions architecture team is forecasting that the database will store approximately 10 TB of data. As part of the design, they are looking for a
database solution that will provide the company with the ability to fail over to a secondary Region.
Which solution will meet these business requirements at the LOWEST cost?

A. Deploy an Amazon Aurora DB cluster and take snapshots of the cluster every 5 minutes. Once a snapshot is complete, copy the snapshot to
a secondary Region to serve as a backup in the event of a failure.

B. Deploy an Amazon RDS instance with a cross-Region read replica in a secondary Region. In the event of a failure, promote the read replica
to become the primary.

C. Deploy an Amazon Aurora DB cluster in the primary Region and another in a secondary Region. Use AWS DMS to keep the secondary Region
in sync.

D. Deploy an Amazon RDS instance with a read replica in the same Region. In the event of a failure, promote the read replica to become the
primary.

Correct Answer: B

" # nitinz Highly Voted $ 1 year, 1 month ago


B works for me.
upvoted 15 times

" # KiraguJohn Most Recent % 4 months, 1 week ago


I do not know if the questions are designed to confuse us or just typos. "response time of fewer than five minutes and a response time of less
than ten minutes." Anyway because it talks of cost i would go for B.
upvoted 1 times

" # cldy 11 months ago


B. Deploy an Amazon RDS instance with a cross-Region read replica in a secondary Region. In the event of a failure, promote the read replica to
become the primary.
upvoted 1 times

" # AzureDP900 11 months, 1 week ago


B works fine
upvoted 1 times

" # tgv 1 year ago


BBB
---
upvoted 2 times

" # WhyIronMan 1 year ago


I'll go with B
upvoted 1 times

" # hk436 1 year ago


b is my answer!
upvoted 2 times

" # mustpassla 1 year ago


B for sure, C is expensive.
upvoted 1 times

" # Waiweng 1 year, 1 month ago


it's B
upvoted 3 times

" # nil3112 1 year, 1 month ago


what is the problem with C ? Aurora can have multi-master now
upvoted 2 times

" # MrCarter 1 year ago


Multi master is a regional service, so no multi master cross region for Aurora yet


upvoted 2 times
" # tvs 1 year ago
cost, so go with B.
upvoted 3 times

" # victordun 1 year, 1 month ago


C is proposing two db clusters in 2 regions
upvoted 2 times

" # WhyIronMan 1 year ago


very expensive
upvoted 2 times

" # kirrim 1 year ago


@nil3112, it sounds to me like you're referring to Aurora global database which probably didn't exist when this question was written, but exists
now.

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-global-database.html

Even with Aurora global DB, C refers to using DMS to sync the DB, which is invalid. And even if you ignore the DMS problem, it's still going to
be more costly to run an Aurora cluster in each region as opposed to the classic use case of a single read replica in the DR region and
promote to master during a DR scenario. And B also allows you to pick any RDS engine you want (as long as it supports read replicas), not
just Aurora
upvoted 2 times

" # CarisB 1 year, 1 month ago


Agree on B
upvoted 2 times

" # Ziegler 1 year, 1 month ago


B is the more cost effective
upvoted 3 times
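
As a rough sketch of how answer B is wired up and failed over with boto3 (instance identifiers, account ID, and Regions are placeholders):

    # Sketch: create a cross-Region read replica, then promote it during a Regional failover.
    import boto3

    # Run in the secondary Region; the source instance is referenced by its ARN.
    rds_dr = boto3.client('rds', region_name='us-west-2')
    rds_dr.create_db_instance_read_replica(
        DBInstanceIdentifier='app-db-replica',
        SourceDBInstanceIdentifier='arn:aws:rds:us-east-1:111122223333:db:app-db',
        SourceRegion='us-east-1',        # lets boto3 build the cross-Region presigned URL
        # KmsKeyId would also be required here if the source instance is encrypted
    )

    # During a disaster, break replication and make the replica writable.
    rds_dr.promote_read_replica(DBInstanceIdentifier='app-db-replica')

Asynchronous replication lag is normally well under the 5-minute RPO, and promotion plus a DNS cutover fits inside the 10-minute RTO.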


Question #689 Topic 1

A company has a web application that uses Amazon API Gateway, AWS Lambda, and Amazon DynamoDB. A recent marketing campaign has
increased demand.
Monitoring software reports that many requests have significantly longer response times than before the marketing campaign.
A solutions architect enabled Amazon CloudWatch Logs for API Gateway and noticed that errors are occurring on 20% of the requests. In
CloudWatch, the
Lambda function Throttles metric represents 1% of the requests and the Errors metric represents 10% of the requests. Application logs indicate
that, when errors occur, there is a call to DynamoDB.
What change should the solutions architect make to improve the current response times as the web application becomes more popular?

A. Increase the concurrency limit of the Lambda function

B. Implement DynamoDB auto scaling on the table

C. Increase the API Gateway throttle limit

D. Re-create the DynamoDB table with a better-partitioned primary index

Correct Answer: B

" # certainly Highly Voted $ 1 year, 1 month ago


i think it is B, application log indicate there is error when access to database.
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Programming.Errors.html
upvoted 18 times

" # Kelvin 1 year ago


Yes, it's B. DynamoDB problem causing Lambda failed.
upvoted 3 times

" # nitinz Highly Voted $ 1 year, 1 month ago


B is okay
upvoted 7 times

" # losdwind Most Recent % 1 week, 2 days ago


Why Not D? Is Auto Scaling a built-in feature of DynamoDB?
upvoted 1 times

" # eshmet2012 4 days, 15 hours ago


yes it is
upvoted 1 times

" # cldy 11 months ago


B. Implement DynamoDB auto scaling on the table
upvoted 1 times

" # AzureDP900 11 months ago


B is right
upvoted 1 times

" # tgv 1 year ago


BBB
---
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with B
upvoted 1 times

" # hk436 1 year ago


B is my answer!
upvoted 1 times

" # tvs 1 year ago


could be B . "longer response time" which means API gateway is retrying the request. if it is throttle issue you get 4XX error and no retry
https://docs.aws.amazon.com/apigateway/api-reference/handling-errors/


upvoted 4 times
" # mustpassla 1 year ago
B for sure
upvoted 2 times

" # Waiweng 1 year ago


it's B
upvoted 3 times

" # Kayode 1 year ago


I will go with B
upvoted 2 times

" # consultsk 1 year, 1 month ago


Application logs indicate that, when errors occur, there is a call to DynamoDB.
It indicates that the requests passed API Gateway and happening when a call from the application is made to DB.
I would for B
upvoted 1 times

" # ExtHo 1 year, 1 month ago


looks B I think key hint is "when errors occur, there is a call to DynamoDB"
upvoted 1 times

" # SD13 1 year, 1 month ago


Correct Option C : 20% errors are occurring at the API gateway level, so this should be addressed first.
upvoted 2 times

" # MrCarter 1 year ago


No. Re-read the question properly
upvoted 1 times

" # awsnoob 1 year, 1 month ago


Should be C
upvoted 4 times

" # MrCarter 1 year ago


Absolutely not
upvoted 2 times

" # txrocker13 1 year, 1 month ago


Seems to me the problem is at the API gateway, so that's where it should be addressed.
https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-request-throttling.html
I think C is the way to go.
upvoted 2 times

" # LisX 1 year, 1 month ago


The issue is stated as "significantly longer response time". Throttling issue would return 409 instead.
upvoted 1 times

" # mijeko8879 1 year ago


slower response time indicates query and/or scan operations taking longer on the dynamodb table (as the table grows) and/or API retries
caused by timeouts and/or throttles. all relate to capacity problems caused potentially (but not necessary) by design issues. short term fix:
increase capacity. answer B (long term review design especially anticipating table growth)
upvoted 2 times
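
For reference, enabling answer B through Application Auto Scaling looks roughly like this; the table name and capacity limits are placeholders, and the same pair of calls would be repeated for read capacity if the table uses provisioned reads.

    # Sketch: register the table's write capacity with Application Auto Scaling and target-track it.
    import boto3

    aas = boto3.client('application-autoscaling')
    aas.register_scalable_target(
        ServiceNamespace='dynamodb',
        ResourceId='table/Requests',                          # placeholder table name
        ScalableDimension='dynamodb:table:WriteCapacityUnits',
        MinCapacity=5,
        MaxCapacity=500,
    )
    aas.put_scaling_policy(
        PolicyName='requests-write-scaling',
        ServiceNamespace='dynamodb',
        ResourceId='table/Requests',
        ScalableDimension='dynamodb:table:WriteCapacityUnits',
        PolicyType='TargetTrackingScaling',
        TargetTrackingScalingPolicyConfiguration={
            'TargetValue': 70.0,    # keep consumed capacity around 70% of provisioned
            'PredefinedMetricSpecification': {'PredefinedMetricType': 'DynamoDBWriteCapacityUtilization'},
        },
    )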


Question #690 Topic 1

A European online newspaper service hosts its public-facing WordPress site in a collocated data center in London. The current WordPress
infrastructure consists of a load balancer, two web servers, and one MySQL database server. A solutions architect is tasked with designing a
solution with the following requirements:
✑ Improve the website's performance
✑ Make the web tier scalable and stateless
✑ Improve the database server performance for read-heavy loads
✑ Reduce latency for users across Europe and the US
✑ Design the new architecture with a goal of 99.9% availability
Which solution meets these requirements while optimizing operational efficiency?

A. Use an Application Load Balancer (ALB) in front of an Auto Scaling group of WordPress Amazon EC2 instances in one AWS Region and
three Availability Zones. Configure an Amazon ElastiCache cluster in front of a Multi-AZ Amazon Aurora MySQL DB cluster. Move the
WordPress shared files to Amazon EFS. Configure Amazon CloudFront with the ALB as the origin, and select a price class that includes the US
and Europe.

B. Use an Application Load Balancer (ALB) in front of an Auto Scaling group of WordPress Amazon EC2 instances in two AWS Regions and
two Availability Zones in each Region. Configure an Amazon ElastiCache cluster in front of a global Amazon Aurora MySQL database. Move
the WordPress shared files to Amazon EFS. Configure Amazon CloudFront with the ALB as the origin, and select a price class that includes the
US and Europe. Configure EFS cross-Region replication.

C. Use an Application Load Balancer (ALB) in front of an Auto Scaling group of WordPress Amazon EC2 instances in one AWS Region and
three Availability Zones. Configure an Amazon DocumentDB table in front of a Multi-AZ Amazon Aurora MySQL DB cluster. Move the
WordPress shared files to Amazon EFS. Configure Amazon CloudFront with the ALB as the origin, and select a price class that includes all
global locations.

D. Use an Application Load Balancer (ALB) in front of an Auto Scaling group of WordPress Amazon EC2 instances in two AWS Regions and
three Availability Zones in each Region. Configure an Amazon ElastiCache cluster in front of a global Amazon Aurora MySQL database. Move
the WordPress shared files to Amazon FSx with cross-Region synchronization. Configure Amazon CloudFront with the ALB as the origin and a
price class that includes the US and Europe.

Correct Answer: A

Community vote distribution


A (100%)

" # certainly Highly Voted $ 1 year, 1 month ago


Correct choice is A.
B and D are eliminated since Auto Scaling spans across multiple Availability Zones within the same region but cannot span across regions. C
doesn't make sense by DocumentDB infront of another database.
upvoted 30 times

" # nitinz 1 year, 1 month ago


thanks for the comment, you are right. All LB are regional resources. you need route 53 to do it at global level.
upvoted 3 times

" # astalavista1 Most Recent % 2 months ago


Selected Answer: A
BD - Wrong as you can't configure ALB across regions but across AZ, need R53 first for multi-region config before moving to ALB.
C - Wrong once it starts mentioning DocumentDB.
A- Correct as it's a single Region with ALB across Multi-AZ, Cache in front of DB and Multi-AZ DB. Which satisfies all the requirements.
upvoted 1 times

" # hilft 3 months, 2 weeks ago


A for me
For B, regarding EFS cross region replication feature. As of 2022 Jan, AWS support EFS cross-region replication
https://aws.amazon.com/blogs/aws/new-replication-for-amazon-elastic-file-system-efs/
upvoted 2 times

" # Ni_yot 8 months, 1 week ago


A for me
upvoted 1 times


" # cldy 11 months ago


A. Use an Application Load Balancer (ALB) in front of an Auto Scaling group of WordPress Amazon EC2 instances in one AWS Region and three
Availability Zones. Configure an Amazon ElastiCache cluster in front of a Multi-AZ Amazon Aurora MySQL DB cluster. Move the WordPress
shared files to Amazon EFS. Configure Amazon CloudFront with the ALB as the origin, and select a price class that includes the US and Europe.
upvoted 1 times

" # AzureDP900 11 months, 1 week ago


A is right
upvoted 1 times

" # tgv 1 year ago


AAA
---
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with A
upvoted 2 times

" # hk436 1 year ago


A is my answer.!
upvoted 1 times

" # mustpassla 1 year ago


A, B is not operational efficiency.
upvoted 1 times

" # LCC92 1 year ago


B is wrong "Configure EFS cross- Region replication." is not possible, can only use Datasync to replicate EFS.
upvoted 2 times

" # Waiweng 1 year, 1 month ago


it's A
upvoted 3 times

" # Ziegler 1 year, 1 month ago


A is the right option
upvoted 1 times

" # SD13 1 year, 1 month ago


D for me
upvoted 2 times

" # SD13 1 year, 1 month ago


Changing it to A
upvoted 1 times

" # nitinz 1 year, 1 month ago


B works for me.
upvoted 2 times

" # nitinz 1 year, 1 month ago


changing to A.
upvoted 1 times
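
The "ElastiCache in front of Aurora" part of answer A is the classic read-through pattern for read-heavy pages; a minimal sketch follows, assuming the Redis flavour of ElastiCache and made-up endpoint, cache-key scheme, TTL, and query.

    # Sketch: read-through cache for the read-heavy article queries.
    import json
    import redis            # assumes the redis-py client is available

    cache = redis.Redis(host='example-cache.abc123.euw1.cache.amazonaws.com', port=6379)

    def get_article(article_id, db_conn):
        key = f'article:{article_id}'
        cached = cache.get(key)
        if cached:
            return json.loads(cached)                       # served from ElastiCache
        with db_conn.cursor() as cur:                       # fall back to Aurora MySQL
            cur.execute('SELECT title, body FROM articles WHERE id = %s', (article_id,))
            row = cur.fetchone()
        article = {'title': row[0], 'body': row[1]}
        cache.setex(key, 300, json.dumps(article))          # cache for 5 minutes
        return article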


Question #691 Topic 1

A company built an ecommerce website on AWS using a three-tier web architecture. The application is Java-based and composed of an Amazon
CloudFront distribution, an Apache web server layer of Amazon EC2 instances in an Auto Scaling group, and a backend Amazon Aurora MySQL
database.
Last month, during a promotional sales event, users reported errors and timeouts while adding items to their shopping carts. The operations team
recovered the logs created by the web servers and reviewed Aurora DB cluster performance metrics. Some of the web servers were terminated
before logs could be collected and the Aurora metrics were not sufficient for query performance analysis.
Which combination of steps must the solutions architect take to improve application performance visibility during peak traffic events? (Choose
three.)

A. Configure the Aurora MySQL DB cluster to publish slow query and error logs to Amazon CloudWatch Logs.

B. Implement the AWS X-Ray SDK to trace incoming HTTP requests on the EC2 instances and implement tracing of SQL queries with the X-Ray
SDK for Java.

C. Configure the Aurora MySQL DB cluster to stream slow query and error logs to Amazon Kinesis

D. Install and configure an Amazon CloudWatch Logs agent on the EC2 instances to send the Apache logs to CloudWatch Logs.

E. Enable and con+gure AWS CloudTrail to collect and analyze application activity from Amazon EC2 and Aurora.

F. Enable Aurora MySQL DB cluster performance benchmarking and publish the stream to AWS X-Ray.

Correct Answer: BCE

Community vote distribution


ABD (100%)

" # wasabidev Highly Voted $ 1 year, 1 month ago


ABD for me
upvoted 19 times

" # kpcert 1 year ago


A,B and D are correct answers.
upvoted 3 times

" # CarisB Highly Voted $ 1 year, 1 month ago


Yes, ABD:
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide
/USER_LogAccess.Concepts.MySQL.html#USER_LogAccess.MySQLDB.PublishAuroraMySQLtoCloudWatchLogs
https://aws.amazon.com/blogs/mt/simplifying-apache-server-logs-with-amazon-cloudwatch-logs-insights/
https://docs.aws.amazon.com/xray/latest/devguide/xray-sdk-dotnet-messagehandler.html
https://docs.aws.amazon.com/xray/latest/devguide/xray-sdk-java-sqlclients.html
upvoted 11 times

" # Kelvin 1 year ago


ABD is correct.
upvoted 2 times

" # Blair77 Most Recent % 3 weeks, 5 days ago


Selected Answer: ABD
ABD! Let's GO!
upvoted 3 times

" # jj22222 7 months, 1 week ago


Selected Answer: ABD
ABD looks right
upvoted 1 times

" # shotty1 9 months, 2 weeks ago


Selected Answer: ABD
i think it is abd
upvoted 1 times

" # pititcu667 10 months ago


Selected Answer: ABD


abd for me
upvoted 1 times
" # AzureDP900 11 months ago
I'll go with A,B,D
upvoted 1 times

" # moon2351 1 year ago


Answer is ABD
upvoted 1 times

" # andylogan 1 year ago


It's A B D
upvoted 1 times

" # tgv 1 year ago


AAA BBB DDD
---
upvoted 2 times

" # blackgamer 1 year ago


ABD for me as well.
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with A,B,D
upvoted 2 times

" # mustpassla 1 year ago


ABD, use case of X-Ray. Send custom logs out using CW agent.
upvoted 1 times

" # Waiweng 1 year ago


it's A,B,D
upvoted 2 times

" # blackgamer 1 year ago


Yes, ABD
upvoted 2 times

" # certainly 1 year, 1 month ago


ABD sounds good.
upvoted 2 times

" # nitinz 1 year, 1 month ago


Yes I agree ABD.
upvoted 4 times
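
Option A (exporting the Aurora MySQL slow query and error logs to CloudWatch Logs) comes down to a single API call; the cluster identifier below is a placeholder.

    # Sketch: publish Aurora MySQL slow query and error logs to CloudWatch Logs.
    import boto3

    boto3.client('rds').modify_db_cluster(
        DBClusterIdentifier='ecommerce-aurora-cluster',   # placeholder
        CloudwatchLogsExportConfiguration={'EnableLogTypes': ['slowquery', 'error']},
        ApplyImmediately=True,
    )
    # Note: slow_query_log must also be enabled in the DB cluster parameter group
    # for the slow query log to contain anything.

Together with the CloudWatch agent on the web tier (option D), the logs then survive instance termination, which was the gap during the sales event.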


Question #692 Topic 1

A solutions architect has an operational workload deployed on Amazon EC2 instances in an Auto Scaling group. The VPC architecture spans two
Availability
Zones (AZ) with a subnet in each that the Auto Scaling group is targeting. The VPC is connected to an on-premises environment and connectivity
cannot be interrupted. The maximum size of the Auto Scaling group is 20 instances in service. The VPC IPv4 addressing is as follows:

VPC CIDR: 10.0.0.0/23

AZ1 subnet CIDR: 10.0.0.0/24

AZ2 subnet CIDR: 10.0.1.0/24


Since deployment, a third AZ has become available in the Region. The solutions architect wants to adopt the new AZ without adding additional
IPv4 address space and without service downtime.
Which solution will meet these requirements?

A. Update the Auto Scaling group to use the AZ2 subnet only. Delete and re-create the AZ1 subnet using half the previous address space.
Adjust the Auto Scaling group to also use the new AZ1 subnet. When the instances are healthy, adjust the Auto Scaling group to use the AZ1
subnet only. Remove the current AZ2 subnet. Create a new AZ2 subnet using the second half of the address space from the original AZ1
subnet. Create a new AZ3 subnet using half the original AZ2 subnet address space, then update the Auto Scaling group to target all three new
subnets.

B. Terminate the EC2 instances in the AZ1 subnet. Delete and re-create the AZ1 subnet using half the address space. Update the Auto Scaling
group to use this new subnet. Repeat this for the second AZ. De+ne a new subnet in AZ3, then update the Auto Scaling group to target all three
new subnets.

C. Create a new VPC with the same IPv4 address space and de+ne three subnets, with one for each AZ. Update the existing Auto Scaling
group to target the new subnets in the new VPC.

D. Update the Auto Scaling group to use the AZ2 subnet only. Update the AZ1 subnet to have the previous address space. Adjust the Auto
Scaling group to also use the AZ1 subnet again. When the instances are healthy, adjust the Auto Scaling group to use the AZ1 subnet only.
Update the current AZ2 subnet and assign the second half of the address space from the original AZ1 subnet. Create a new AZ3 subnet using
half the original AZ2 subnet address space, then update the Auto Scaling group to target all three new subnets.

Correct Answer: A

Community vote distribution


A (60%) B (40%)

" # KevinZhong Highly Voted $ 1 year, 1 month ago


A
-------------------------
https://aws.amazon.com/premiumsupport/knowledge-center/vpc-ip-address-range/?nc1=h_ls
It's not possible to modify the IP address range of an existing virtual private cloud (VPC) or subnet. You must delete the VPC or subnet, and then
create a new VPC or subnet with your preferred CIDR block.
upvoted 20 times

" # nitinz Highly Voted $ 1 year, 1 month ago


A sounds like it
upvoted 6 times

" # kangtamo Most Recent % 4 months, 2 weeks ago


Selected Answer: A
Agree with A.
upvoted 3 times

" # dev10 6 months, 3 weeks ago


Selected Answer: B
You need to terminate the instances before you can delete the subnet which option B states.
If you no longer need a subnet, you can delete it. You cannot delete a subnet if it contains any network interfaces. For example, you must
terminate any instances in a subnet before you can delete it.
link: https://docs.aws.amazon.com/vpc/latest/userguide/working-with-subnets.html


A: It says delete and recreate, however you need to terminate instances as well which option B points out clearly.
C: does not allow to use this approach as VPC is physically attached to on-prem
D: Modify is not allowed, you need to delete and create subnets
upvoted 2 times

" # fanq10 2 weeks, 6 days ago


You do not need to terminate instances as when you update ASG only use AZ1, it will automatically recreate instances in AZ1, once all the
instances created in AZ1, then you can delete the subnet. Tested it in my lab.
So answer is A.
upvoted 1 times
" # AzureDP900 11 months ago
It is A
upvoted 1 times

" # andylogan 1 year ago


It's A - cannot modify
upvoted 2 times

" # tgv 1 year ago


AAA
---
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with A
upvoted 2 times

" # mustpassla 1 year ago


A, no downtime, D is incorrect as CIDR cant be updated in this case.
upvoted 3 times

" # vkbajoria 1 year, 1 month ago


it is A, cannot modify CIDR block
upvoted 2 times

" # Waiweng 1 year, 1 month ago


it's A
upvoted 3 times

" # eji 1 year, 1 month ago


The answer is A because we cannot modify the IPv4 CIDR for the subnet so we need to delete and recreate
upvoted 4 times
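
To make the address arithmetic behind A explicit (one possible end state, since the question leaves the exact split open): the original 10.0.0.0/24 in AZ1 can be halved into 10.0.0.0/25 (new AZ1 subnet) and 10.0.0.128/25 (new AZ2 subnet), and half of the original 10.0.1.0/24 becomes 10.0.1.0/25 for the new AZ3 subnet, leaving 10.0.1.128/25 spare. Each /25 provides 128 addresses, of which 123 are usable after the 5 AWS reserves per subnet, comfortably above the Auto Scaling group maximum of 20 instances, and no new IPv4 space is added to the VPC.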


Question #693 Topic 1

A company is storing data on premises on a Windows file server. The company produces 5 GB of new data daily.
The company migrated part of its Windows-based workload to AWS and needs the data to be available on a file system in the cloud. The company
already has established an AWS Direct Connect connection between the on-premises network and AWS.
Which data migration strategy should the company use?

A. Use the file gateway option in AWS Storage Gateway to replace the existing Windows file server, and point the existing file share to the new
file gateway

B. Use AWS DataSync to schedule a daily task to replicate data between the on-premises Windows file server and Amazon FSx

C. Use AWS Data Pipeline to schedule a daily task to replicate data between the on-premises Windows file server and Amazon Elastic File
System (Amazon EFS)

D. Use AWS DataSync to schedule a daily task to replicate data between the on-premises Windows file server and Amazon Elastic File System
(Amazon EFS)

Correct Answer: B

Community vote distribution


B (67%) A (33%)

" # awsnoob Highly Voted $ 1 year, 1 month ago


B is correct, the workload on cloud relies on the Windows based storage
upvoted 19 times

" # chris1025 Highly Voted $ 1 year, 1 month ago


I believe it's A. DataSync is for initial migration but it's not meant for ongoing. The better answer would be to use DataSync first then file gateway.
upvoted 5 times

" # chris1025 1 year ago


Going with B. While I believe my initial comment is correct, the questions asks for data migration strategy.
upvoted 4 times

" # rb39 11 months ago


A is wrong - question asks about migrating a portion of workloads, no replacement of Windows server
upvoted 3 times

" # alxjandroleiva Most Recent % 1 day, 14 hours ago


Selected Answer: B
B: "and needs the data to be available on a file system in the cloud. "
No, access like a file system....on a file system
upvoted 1 times

" # JohnPi 1 month, 1 week ago


It is clear that we need Amazon Fx on the AWS side. For the on-prem, we can achieve this with DataSync or with Amazon FSx File Gateway
(https://aws.amazon.com/storagegateway/file/fsx/). Option A is incomplete, cannot decide between A and B
upvoted 1 times

" # gondohwe 3 months ago


if the company already relocated its workload to the cloud then storage gateway is the way...A make sense tho
upvoted 1 times

" # asfsdfsdf 4 months ago


Selected Answer: B
Have to choose B - Windows based share file system + Question clearly states "requires data to be accessible through a cloud file system".
A - cannot be right since it will use NFS/SMB protocol to cache & transfer files to an S3 bucket which is not a FS.
upvoted 2 times

" # Naj_64 3 weeks, 3 days ago


The Storage Gateway can be an FSx File GW, that way the file share data is synchronized with an FSx file system in the cloud.
upvoted 1 times

" # Harry_01 5 months ago


Selected Answer: A


the statement says "Relocated" which means the migration has already happened and now what they want is just access to the data on-prem.
So has to be storage gateway.
upvoted 1 times
" # Anhdd 5 months, 1 week ago
Selected Answer: A
It's say that "relocated a portion of its Windows-based workload to AWS". So in this case we have to use Storage Gateway, because we need to
access data both from on-premis and on AWS. So we can't use DataSync which is used for transfer 100% data to AWS and keep no data remain
on-premis. That's my opinion, so the answer should be ANH
upvoted 1 times

" # Anhdd 5 months, 1 week ago


so the answer should be A* (my miss spell :D )
upvoted 1 times

" # Racinely 5 months, 2 weeks ago


Selected Answer: B
aggreed with hansmong
upvoted 1 times

" # hansmong 10 months, 1 week ago


A - access file on s3 from on-prem
C - datapipeline is an ETL tool, should be datasync in this case
D - efs does not support Windows https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/AmazonEFS.html
upvoted 3 times

" # Duke_YU 10 months ago


Why "access file on S3 from on-prem" is not an option? I don't like it but don't think it is impossible.
upvoted 1 times

" # GatesChi 7 months ago


You will need to re-write whatever program using the file share.
upvoted 1 times

" # vbal 10 months, 4 weeks ago


You can use AWS DataSync to migrate on-premises data to Amazon S3, Amazon EFS, and Amazon FSx for Windows File Server. Configure
DataSync to make an initial copy of your entire dataset, and schedule subsequent incremental transfers of changing data until the final cut-over
from on-premises to AWS.
upvoted 1 times

" # AzureDP900 11 months ago


B is right
upvoted 1 times

" # cldy 11 months, 1 week ago


B. Use AWS DataSync to schedule a daily task to replicate data between the on-premises Windows file server and Amazon FSx
upvoted 2 times

" # user0001 6 months ago


B is wrong in this case , A is right
upvoted 2 times

" # andylogan 1 year ago


It's B
upvoted 1 times

" # student22 1 year ago


B
'needs the data to be available on a file system in the cloud'
upvoted 1 times

" # tgv 1 year ago


BBB
---
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with B
upvoted 1 times
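
A rough boto3 sketch of the DataSync piece of answer B, assuming the SMB (on-premises) and FSx for Windows File Server locations have already been created and only their ARNs are needed; all identifiers are placeholders.

    # Sketch: schedule a daily DataSync task from the on-premises SMB share to Amazon FSx.
    import boto3

    boto3.client('datasync').create_task(
        SourceLocationArn='arn:aws:datasync:eu-west-1:111122223333:location/loc-smb-example',
        DestinationLocationArn='arn:aws:datasync:eu-west-1:111122223333:location/loc-fsxw-example',
        Name='daily-windows-share-sync',
        Schedule={'ScheduleExpression': 'cron(0 1 * * ? *)'},   # once a day at 01:00 UTC
    )

With only about 5 GB of new data per day over an existing Direct Connect link, the scheduled incremental transfers are small and cheap.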


Question #694 Topic 1

A company uses AWS Organizations to manage one parent account and nine member accounts. The number of member accounts is expected to
grow as the business grows. A security engineer has requested consolidation of AWS CloudTrail logs into the parent account for compliance
purposes. Existing logs currently stored in Amazon S3 buckets in each individual member account should not be lost. Future member accounts
should comply with the logging strategy.
Which operationally efficient solution meets these requirements?

A. Create an AWS Lambda function in each member account with a cross-account role. Trigger the Lambda functions when new CloudTrail
logs are created and copy the CloudTrail logs to a centralized S3 bucket. Set up an Amazon CloudWatch alarm to alert if CloudTrail is not
configured properly.

B. Configure CloudTrail in each member account to deliver log events to a central S3 bucket. Ensure the central S3 bucket policy allows
PutObject access from the member accounts. Migrate existing logs to the central S3 bucket. Set up an Amazon CloudWatch alarm to alert if
CloudTrail is not configured properly.

C. Configure an organization-level CloudTrail in the parent account to deliver log events to a central S3 bucket. Migrate the existing CloudTrail
logs from each member account to the central S3 bucket. Delete the existing CloudTrail and logs in the member accounts.

D. Configure an organization-level CloudTrail in the parent account to deliver log events to a central S3 bucket. Configure CloudTrail in each
member account to deliver log events to the central S3 bucket.

Correct Answer: A
Reference:
https://aws.amazon.com/blogs/architecture/stream-amazon-cloudwatch-logs-to-a-centralized-account-for-audit-and-analysis/

Community vote distribution


C (67%) D (33%)

" # kalyan_krishna742020 Highly Voted $ 1 year, 1 month ago


I think answer is C.
https://docs.aws.amazon.com/awscloudtrail/latest/userguide/creating-trail-organization.html
upvoted 15 times

" # ExtHo Highly Voted $ 1 year ago


C is correct
https://docs.aws.amazon.com/awscloudtrail/latest/userguide/creating-trail-organization.html
see section Best practices for moving from member account trails to organization trails why delete the existing CloudTrail and logs in the member
accounts
Thanks to kalyan_krishna742020 providing official AWS link
upvoted 9 times

" # Kyperos Most Recent % 2 months, 2 weeks ago


Selected Answer: C
I think that consolidate Cloudtrail Log will stream logs all member accounts to parent accounts. If choose D, existing logs in member account still
retain in S3 bucket member account. If choose C, existing logs in member account are migrated to S3 bucket central account.
So C will adhere to consolidate logging approach! --> Answer is C
upvoted 2 times

" # Andykris 2 months, 4 weeks ago


B & C is deleting existing logs which defeats the requirements. D is the answer
upvoted 1 times

" # asfsdfsdf 4 months ago


Selected Answer: C
Have to choose C.
The "most operationally efficient solution" is to create 1 org trail which capture and send events to a central bucket- deploy it on all member
accounts - move old member accounts logs to the central buckets and delete them. see below link:
https://docs.aws.amazon.com/awscloudtrail/latest/userguide/creating-trail-organization.html
upvoted 1 times

" # ksaru 4 months ago


People those answered C - note that the question states that logs must be retained in the member account S3 buckets and this option deletes
them.
Hence, D is correct.


upvoted 1 times

" # sb333 3 weeks, 6 days ago


The logs should not be lost, so you copy them to the centralized bucket. Then there is no more need for them in the member accounts. The
question does not state that they must remain in the member accounts. The answer is C.
upvoted 1 times
" # kangtamo 4 months, 2 weeks ago
Selected Answer: C
Agree with C.
upvoted 1 times

" # TechIsi 6 months, 1 week ago


Correct answer is C, when you create an organizational trail and specify a bucket, all account trails are automatically configured to send to that
bucket. You also have to configure the bucket policy to allow put action for all the accounts.
upvoted 2 times

" # westcon 6 months, 4 weeks ago


DDD
https://docs.aws.amazon.com/awscloudtrail/latest/userguide/creating-trail-organization.html
upvoted 1 times

" # jj22222 7 months, 1 week ago


Selected Answer: D
D. Configure an organization-level CloudTrail in the parent account to deliver log events to a central S3 bucket. Configure CloudTrail in each
member account to deliver log events to the central S3 bucket.
upvoted 1 times

" # lifebegins 8 months, 2 weeks ago


Sorry Dear Friends, Answer is C.
We can created the CloudTrail in Parent Account and the set the level to Entire Orgranization, Automatically Cloud Trail applied to all member
accounts.

When i practically done, I understand the Truth.


Answer is C:
upvoted 1 times

" # lifebegins 8 months, 2 weeks ago


Answer B:
https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-receive-logs-from-multiple-accounts.html

Cloud Trail cannot manage the logs for others. Only Destination bucket can be shared centrally
upvoted 2 times

" # lifebegins 8 months, 2 weeks ago


Answer is B:

https://d0.awsstatic.com/aws-answers/AWS_Multi_Account_Security_Strategy.pdf

Refer Logging Account Structure


upvoted 1 times

" # Yardenfayer 8 months, 2 weeks ago


its D
https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-receive-logs-from-multiple-accounts.html
upvoted 1 times

" # futen0326 8 months, 3 weeks ago


Selected Answer: D
It's D. Question explicitly states that the logs in the member accounts should not be lost. Deleting them does exactly that.
upvoted 1 times

" # Alvindo 8 months, 1 week ago


thought that as well BUT answer c says to migrate the existing logs to central s3 bucket so it wouldn't be lost and i believe enabling cloud
trail(whole organization) in the central account is enough and you don't need to do it in each account
upvoted 1 times

" # HellGate 9 months, 1 week ago


My answer is C.
We don't need to use Lambda function to move logs...

https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-receive-logs-from-multiple-accounts.html
upvoted 1 times


" # AzureDP900 11 months ago


I will go with C
upvoted 1 times


Question #695 Topic 1

A weather service provides high-resolution weather maps from a web application hosted on AWS in the eu-west-1 Region. The weather maps are
updated frequently and stored in Amazon S3 along with static HTML content. The web application is fronted by Amazon CloudFront.
The company recently expanded to serve users in the us-east-1 Region, and these new users report that viewing their respective weather maps is
slow from time to time.
Which combination of steps will resolve the us-east-1 performance issues? (Choose two.)

A. Configure the AWS Global Accelerator endpoint for the S3 bucket in eu-west-1. Configure endpoint groups for TCP ports 80 and 443 in us-east-1.

B. Create a new S3 bucket in us-east-1. Configure S3 cross-Region replication to synchronize from the S3 bucket in eu-west-1.

C. Use Lambda@Edge to modify requests from North America to use the S3 Transfer Acceleration endpoint in us-east-1.

D. Use Lambda@Edge to modify requests from North America to use the S3 bucket in us-east-1.

E. Configure the AWS Global Accelerator endpoint for us-east-1 as an origin on the CloudFront distribution. Use Lambda@Edge to modify
requests from North America to use the new origin.

Correct Answer: BC

Community vote distribution


BD (100%)

" # wasabidev Highly Voted $ 1 year, 1 month ago


BD. With replication there is no more need to use S3 Transfer Acceleration
upvoted 25 times

" # nitinz 1 year, 1 month ago


i agree
upvoted 1 times
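
Option B's cross-Region replication is configured on the eu-west-1 bucket so that new objects are copied into a replica bucket in us-east-1. A minimal boto3 sketch, assuming illustrative bucket names and a pre-existing replication IAM role (both buckets must have versioning enabled):

import boto3

s3 = boto3.client('s3')

# Both buckets must already have versioning enabled before replication is configured.
s3.put_bucket_replication(
    Bucket='weather-maps-eu-west-1',                              # source bucket (illustrative)
    ReplicationConfiguration={
        'Role': 'arn:aws:iam::123456789012:role/s3-crr-role',     # placeholder role ARN
        'Rules': [{
            'ID': 'replicate-weather-maps',
            'Priority': 1,
            'Status': 'Enabled',
            'Filter': {},                                         # replicate the whole bucket
            'DeleteMarkerReplication': {'Status': 'Disabled'},
            'Destination': {'Bucket': 'arn:aws:s3:::weather-maps-us-east-1'},
        }],
    },
)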

" # KevinZhong Highly Voted $ 1 year, 1 month ago


BC
Seems S3 Transfer Acceleration it better to work with Edge.
-------------
Amazon S3 Transfer Acceleration is a bucket-level feature that enables fast, easy, and secure transfers of files over long distances between your
client and an S3 bucket. Transfer Acceleration takes advantage of the globally distributed edge locations in Amazon CloudFront.
upvoted 6 times

" # certainly 1 year, 1 month ago


sorry, upvoted by mistake. BD are correct.
S3 Transfer Acceleration is for upload not download. https://aws.amazon.com/about-aws/whats-new/2016/04/transfer-files-into-amazon-s3-
up-to-300-percent-faster/
upvoted 3 times

" # certainly 1 year, 1 month ago


my bad, it's also can be used for download from s3.
https://aws.amazon.com/blogs/aws/aws-storage-update-amazon-s3-transfer-acceleration-larger-snowballs-in-more-regions
however, i still think it is not needed as the maps are updated frequently so caching should not help much.
upvoted 2 times

" # KevinZhong 1 year ago


changed my mind to BD, seems it's not the case to use Transfer Acceleration
------------------
Why use Transfer Acceleration?
You might want to use Transfer Acceleration on a bucket for various reasons:
1. Your customers upload to a centralized bucket from all over the world.
2. You transfer gigabytes to terabytes of data on a regular basis across continents.
3. You can't use all of your available bandwidth over the internet when uploading to Amazon S3.
upvoted 7 times

" # tomosabc1 Most Recent % 4 weeks ago


Similar to StanM's response, I don't think S3 cross region replication is a good fit for this scenario, as weather map are updated frequently. The
CRR replication lag means that the users in us-east-1 is always 15 minutes(or even 2 hours) slower than the users in eu-west-1 in seeing the
updated weather data, which doesn't sound right in a real world scenario.

S3 CRR Replication Lag


Cross-Region Replication is an asynchronous process, and the objects are eventually replicated. Most objects replicate within 15 minutes, but
sometimes replication can take a couple hours or more.
https://aws.amazon.com/premiumsupport/knowledge-center/s3-crr-replication-time/

Can anyone explain why AC is wrong?


upvoted 1 times

" # tomosabc1 4 weeks ago


Of course, BD surely resolves the issue in question, that is, these new users report that viewing their respective weather maps is slow from
time to time. But it will create a new issue, which is the users in us-east-1 is always 15 minutes(or even 2 hours) slower than the users in eu-
west-1 in seeing the updated weather data. That's not good.
upvoted 1 times

" # p2010 5 months, 2 weeks ago


Selected Answer: BD
S3 Transfer Acceleration is for upload not download
upvoted 2 times

" # user0001 6 months ago


C is wrong, you can't configure o use the S3 Transfer Acceleration endpoint in us-east-1
upvoted 1 times

" # kenchou73 7 months, 3 weeks ago


Selected Answer: BD
https://aws.amazon.com/blogs/apn/using-amazon-cloudfront-with-multi-region-amazon-s3-origins/
upvoted 1 times

" # RVD 7 months, 3 weeks ago


Selected Answer: BD
ANS: BD are correct, C is use for data upload to local region or nearest region and from there it will transfer to destination bucket using aws
backbone
upvoted 1 times

" # AzureDP900 11 months ago


I'll go with B, D
upvoted 1 times

" # cldy 11 months, 1 week ago


B. Create a new S3 bucket in us-east-1. Configure S3 cross-Region replication to synchronize from the S3 bucket in eu-west-1.
D. Use Lambda@Edge to modify requests from North America to use the S3 bucket in us-east-1.
upvoted 1 times

" # backfringe 11 months, 3 weeks ago


B and D
upvoted 2 times

" # andylogan 1 year ago


It's B D - since S3 Transfer Acceleration for transferring of files over long distances
this case we need replica and Lambda@Edge
upvoted 1 times

" # DerekKey 1 year ago


B&D correct. see also
https://aws.amazon.com/blogs/networking-and-content-delivery/dynamically-route-viewer-requests-to-any-origin-using-lambdaedge/
upvoted 4 times
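
The Lambda@Edge routing in option D is typically an origin-request function that swaps the S3 origin based on the viewer's location. A minimal Python sketch, assuming an illustrative replica bucket and that CloudFront is configured to forward the CloudFront-Viewer-Country header to the origin request:

# Hypothetical Lambda@Edge origin-request handler: North American viewers are
# served from a replica bucket in us-east-1. Names are illustrative.
US_EAST_1_BUCKET = 'weather-maps-us-east-1.s3.us-east-1.amazonaws.com'
NORTH_AMERICA = {'US', 'CA', 'MX'}

def handler(event, context):
    request = event['Records'][0]['cf']['request']
    headers = request['headers']
    country = headers.get('cloudfront-viewer-country', [{}])[0].get('value')
    if country in NORTH_AMERICA:
        # Rewrite the S3 origin and the Host header to point at the us-east-1 replica.
        request['origin']['s3']['domainName'] = US_EAST_1_BUCKET
        request['origin']['s3']['region'] = 'us-east-1'
        headers['host'] = [{'key': 'Host', 'value': US_EAST_1_BUCKET}]
    return request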

" # Kopa 1 year ago


B,D Company that has implemented the same scenarios says:

To serve content from these other regions, we need to route requests to the different Amazon S3 buckets we’re using. In this post, we explore
how to accomplished this by using Amazon CloudFront as a content delivery network and Lambda@Edge as a router. We will also take a quick
look at how this impacts latency and cost.

Reference : https://aws.amazon.com/blogs/apn/using-amazon-cloudfront-with-multi-region-amazon-s3-origins/
upvoted 1 times

" # tgv 1 year ago


BBB DDD
---
upvoted 1 times

" # blackgamer 1 year ago


B and D for sure.


upvoted 1 times
" # WhyIronMan 1 year ago
I'll go with B, D
upvoted 1 times

" # Waiweng 1 year ago


it's B,D
upvoted 2 times


Question #696 Topic 1

A company is deploying a public-facing global application on AWS using Amazon CloudFront. The application communicates with an external
system. A solutions architect needs to ensure the data is secured during end-to-end transit and at rest.
Which combination of steps will satisfy these requirements? (Choose three.)

A. Create a public certificate for the required domain in AWS Certificate Manager and deploy it to CloudFront, an Application Load Balancer,
and Amazon EC2 instances.

B. Acquire a public certificate from a third-party vendor and deploy it to CloudFront, an Application Load Balancer, and Amazon EC2 instances.

C. Provision Amazon EBS encrypted volumes using AWS KMS and ensure explicit encryption of data when writing to Amazon EBS.

D. Provision Amazon EBS encrypted volumes using AWS KMS.

E. Use SSL or encrypt data while communicating with the external system using a VPN.

F. Communicate with the external system using plaintext and use the VPN to encrypt the data in transit.

Correct Answer: ACE

Community vote distribution


BDE (100%)

" # SD13 Highly Voted $ 1 year, 1 month ago


Correct Options: BDE
C is asking for explicit encryption on top of EBS encryption with KMS, I believe it's not needed.
upvoted 17 times

" # certainly 1 year, 1 month ago


I Agree. explicit encryption on top of EBS encryption with KMS just sounds weird
upvoted 1 times

" # tuananhngo 1 year ago


A IS BETTER THAN B
upvoted 3 times

" # WhyIronMan 1 year ago


A is wrong because public ACM certificates can be used only with specific AWS services. EC2 is not included

https://docs.aws.amazon.com/acm/latest/userguide/acm-services.html
upvoted 17 times

" # eji Highly Voted $ 1 year, 1 month ago


BDE for me, we cannot use "public" certificate for ec2 from amazon certificate manager, so A cannot be the answer. and for C i agree with SD13 i
think explicit encryption it's not needed
upvoted 8 times

" # dcdcdc3 Most Recent % 1 month, 1 week ago


IRL we use self-signed cert between LB and the ec2 (or private from ACM). The way the answer is written A cannot be true.
upvoted 1 times

" # cldy 11 months ago


B. Acquire a public certificate from a third-party vendor and deploy it to CloudFront, an Application Load Balancer, and Amazon EC2 instances.
D. Provision Amazon EBS encrypted volumes using AWS KMS.
E. Use SSL or encrypt data while communicating with the external system using a VPN.
upvoted 1 times
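
On option D, EBS encryption with KMS is set at volume creation and is transparent to the application, which is why the explicit application-side encryption in option C adds nothing. A minimal boto3 sketch, assuming a hypothetical KMS key alias and Availability Zone:

import boto3

ec2 = boto3.client('ec2', region_name='eu-west-1')

volume = ec2.create_volume(
    AvailabilityZone='eu-west-1a',       # illustrative AZ
    Size=100,                            # GiB
    VolumeType='gp3',
    Encrypted=True,                      # encryption at rest
    KmsKeyId='alias/app-data-key',       # hypothetical customer managed key alias
)
print(volume['VolumeId'])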

" # acloudguru 11 months, 2 weeks ago


Selected Answer: BDE
A is wrong because public ACM certificates can be used only with specific AWS services. EC2 is not included

https://docs.aws.amazon.com/acm/latest/userguide/acm-services.html
C is asking for explicit encryption on top of EBS encryption with KMS, I believe it's not needed.
upvoted 1 times

" # student22 1 year ago


B,D,E
---


Q: With which AWS services can I use ACM certificates?

You can use public and private ACM certificates with the following AWS services:
• Elastic Load Balancing – Refer to the Elastic Load Balancing documentation
• Amazon CloudFront – Refer to the CloudFront documentation
• Amazon API Gateway – Refer to the API Gateway documentation
• AWS Elastic Beanstalk – Refer to the AWS Elastic Beanstalk documentation
• AWS CloudFormation – Support is currently limited to public certificates that use email validation. Refer to the AWS CloudFormation
documentation

In addition, you can use private certificates issued with ACM Private CA with EC2 instances, containers, IoT devices, and on your own servers.

https://aws.amazon.com/certificate-manager/faqs/?nc1=h_ls
upvoted 2 times

" # andylogan 1 year ago


It's B D E
upvoted 1 times

" # tgv 1 year ago


BBB DDD EEE
---
https://aws.amazon.com/certificate-manager/faqs/
upvoted 1 times

" # blackgamer 1 year ago


BDE is correct.
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with B,D,E

Q: Can I use certificates on Amazon EC2 instances or on my own servers?

You can use private certificates issued with ACM Private CA with EC2 instances, containers, and on your own servers. At this time, public ACM
certificates can be used only with specific AWS services. See With which AWS services can I use ACM certificates?

https://aws.amazon.com/certificate-manager/faqs/?nc1=h_ls
upvoted 3 times

" # DashL 1 year ago


BDE
For those answering ADE:
HTTPS between viewers and CloudFront – You can use a certificate that was issued by a trusted certificate authority (CA) such as Comodo,
DigiCert, or Symantec, or you can use a certificate provided by AWS Certificate Manager (ACM).
HTTPS between CloudFront and a custom origin – If the origin is not an Elastic Load Balancing (ELB) load balancer, such as Amazon EC2, the
certificate must be issued by a trusted CA such as Comodo, DigiCert, or Symantec. If your origin is an ELB load balancer, you can also use a
certificate provided by ACM.
For SSL Between ELB and EC2: Amazon-issued certificates can’t be installed on an EC2 instance. To enable end-to-end encryption, you must
use a third-party SSL certificate. Install the third-party certificate on an EC2 instance. Then, associate the third-party certificate with a load
balancer by importing it into AWS Certificate Manager (ACM) (https://aws.amazon.com/premiumsupport/knowledge-center/acm-ssl-certificate-
ec2-elb/)
The requirement of 3rd party cert between ELB and EC2 makes Option A is invalid.
upvoted 4 times

" # hk436 1 year ago


BDE are my answers!!
upvoted 2 times

" # chkmtess 1 year ago


BDE
Explanation for B - "You can't export an Amazon Issued ACM public certificate for use on an EC2 instance because ACM manages the private
key."
https://aws.amazon.com/premiumsupport/knowledge-center/configure-acm-certificates-ec2/
upvoted 2 times

" # ogryzek 1 year ago


What about get-certificate . Doesn't it export cert with whole chain. For me it is still AC
https://docs.aws.amazon.com/cli/latest/reference/acm/get-certificate.html
upvoted 1 times

" # TonyGe 1 year ago


BED for sure.
A is incorrect, public cert cannot be used in EC2.
upvoted 2 times


" # ElreySham 1 year ago


You generate the certificate for CF. Answer is ADE.
upvoted 1 times

" # Waiweng 1 year ago


it's A,D,E
upvoted 3 times

" # WhyIronMan 1 year ago


A is wrong because public ACM certificates can be used only with specific AWS services. EC2 is not included

https://docs.aws.amazon.com/acm/latest/userguide/acm-services.html
upvoted 3 times

" # Kopa 1 year ago


Can you please argument why its A? Thanks
upvoted 1 times

" # tvs 1 year ago


ACE - https://docs.aws.amazon.com/acm/latest/userguide/acm-services.html , public certificates not available for EC2 instance.
upvoted 1 times

" # tvs 1 year ago


I mean ADE
upvoted 1 times

" # tvs 1 year ago


Sorry typo BDE
upvoted 3 times


Question #697 Topic 1

A company provides a centralized Amazon EC2 application hosted in a single shared VPC. The centralized application must be accessible from
client applications running in the VPCs of other business units. The centralized application front end is configured with a Network Load Balancer
(NLB) for scalability.
Up to 10 business unit VPCs will need to be connected to the shared VPC. Some of the business unit VPC CIDR blocks overlap with the shared
VPC, and some overlap with each other. Network connectivity to the centralized application in the shared VPC should be allowed from authorized
business unit VPCs only.
Which network configuration should a solutions architect use to provide connectivity from the client applications in the business unit VPCs to the
centralized application in the shared VPC?

A. Create an AWS Transit Gateway. Attach the shared VPC and the authorized business unit VPCs to the transit gateway. Create a single transit
gateway route table and associate it with all of the attached VPCs. Allow automatic propagation of routes from the attachments into the route
table. Configure VPC routing tables to send traffic to the transit gateway.

B. Create a VPC endpoint service using the centralized application NLB and enable the option to require endpoint acceptance. Create a VPC
endpoint in each of the business unit VPCs using the service name of the endpoint service. Accept authorized endpoint requests from the
endpoint service console.

C. Create a VPC peering connection from each business unit VPC to the shared VPC. Accept the VPC peering connections from the shared
VPC console. Configure VPC routing tables to send traffic to the VPC peering connection.

D. Configure a virtual private gateway for the shared VPC and create customer gateways for each of the authorized business unit VPCs.
Establish a Site-to-Site VPN connection from the business unit VPCs to the shared VPC. Configure VPC routing tables to send traffic to the
VPN connection.

Correct Answer: A
Reference:
https://d1.awsstatic.com/whitepapers/building-a-scalable-and-secure-multi-vpc-aws-network-infrastructure.pdf

Community vote distribution


B (100%)

" # wasabidev Highly Voted $ 1 year, 1 month ago


B. Transit Gateway doesn't support routing between VPC with identical CIDRs
upvoted 18 times

" # DashL 1 year ago


Amazon Transit Gateway doesn’t support routing between Amazon VPCs with overlapping CIDRs. If you attach a new Amazon VPC that has a
CIDR which overlaps with an already attached Amazon VPC, Amazon Transit Gateway will not propagate the new Amazon VPC route into the
Amazon Transit Gateway route table.
upvoted 3 times

" # tvs Highly Voted $ 1 year ago


B. Using the NLB VPC endpoint service name overcomes the CIDR overlap issues.
upvoted 7 times

" # kirrim 1 year ago


Agree!

NLBs always SNAT the client source IP address to their own IP within your VPC when the incoming request to the NLB via a gateway load
balancer endpoint or vpc endpoint (private link):

https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-target-groups.html#client-ip-preservation

(This can be annoying if you want the NLB's client IP preservation feature!)
upvoted 2 times

" # asfsdfsdf Most Recent % 4 months ago


Selected Answer: B
B - classic use case for PrivateLink (NLB + endpoints); all other options are out because the overlapping CIDRs make routing impossible
upvoted 2 times
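
Option B is the AWS PrivateLink pattern: the shared VPC publishes the NLB as an endpoint service, and each business unit VPC creates an interface endpoint to it, which works even with overlapping CIDRs because no routes are exchanged. A rough boto3 sketch with placeholder ARNs and IDs:

import boto3

ec2 = boto3.client('ec2')

# In the shared (provider) VPC: expose the NLB as an endpoint service
# that requires manual acceptance of connection requests.
service = ec2.create_vpc_endpoint_service_configuration(
    NetworkLoadBalancerArns=['arn:aws:elasticloadbalancing:...'],   # placeholder NLB ARN
    AcceptanceRequired=True,
)
service_name = service['ServiceConfiguration']['ServiceName']

# In each business unit (consumer) VPC: create an interface endpoint to that service.
ec2.create_vpc_endpoint(
    VpcEndpointType='Interface',
    VpcId='vpc-0bu1example',             # illustrative consumer VPC
    ServiceName=service_name,
    SubnetIds=['subnet-0aexample'],      # illustrative
    SecurityGroupIds=['sg-0example'],    # illustrative
)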

" # cldy 10 months, 1 week ago


B correct.
upvoted 1 times
" # AzureDP900 11 months ago
I'll go with B
upvoted 1 times

" # acloudguru 11 months, 1 week ago


Selected Answer: B
A is not useful for overlap CIDR. B, use NLB's vpc endpoint
upvoted 1 times

" # tgv 1 year ago


BBB
---
upvoted 2 times

" # blackgamer 1 year ago


It is B
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with B
upvoted 2 times

" # Waiweng 1 year, 1 month ago


it's B
upvoted 3 times

" # aws_master 1 year, 1 month ago


B for sure
upvoted 3 times

" # SD13 1 year, 1 month ago


Correct option : B
upvoted 2 times

" # gm 1 year, 1 month ago


Yes, B
upvoted 5 times

" # kalyan_krishna742020 1 year, 1 month ago


Ans: C
https://docs.aws.amazon.com/vpc/latest/peering/peering-configurations-partial-access.html
upvoted 3 times

" # kalyan_krishna742020 1 year, 1 month ago


My bad.. it is B.
https://aws.amazon.com/blogs/networking-and-content-delivery/how-to-securely-publish-internet-applications-at-scale-using-application-
load-balancer-and-aws-privatelink/
upvoted 8 times


Question #698 Topic 1

A company has an on-premises monitoring solution using a PostgreSQL database for persistence of events. The database is unable to scale due
to heavy ingestion and it frequently runs out of storage.
The company wants to create a hybrid solution and has already set up a VPN connection between its network and AWS. The solution should
include the following attributes:
✑ Managed AWS services to minimize operational complexity.
✑ A buffer that automatically scales to match the throughput of data and requires no ongoing administration.
✑ A visualization tool to create dashboards to observe events in near-real time.
✑ Support for semi-structured JSON data and dynamic schemas.
Which combination of components will enable the company to create a monitoring solution that will satisfy these requirements? (Choose two.)

A. Use Amazon Kinesis Data Firehose to buffer events. Create an AWS Lambda function to process and transform events.

B. Create an Amazon Kinesis data stream to buffer events. Create an AWS Lambda function to process and transform events.

C. Configure an Amazon Aurora PostgreSQL DB cluster to receive events. Use Amazon QuickSight to read from the database and create near-
real-time visualizations and dashboards.

D. Configure Amazon Elasticsearch Service (Amazon ES) to receive events. Use the Kibana endpoint deployed with Amazon ES to create near-
real-time visualizations and dashboards.

E. Configure an Amazon Neptune DB instance to receive events. Use Amazon QuickSight to read from the database and create near-real-time
visualizations and dashboards.

Correct Answer: BC

Community vote distribution


AD (100%)

" # ExtHo Highly Voted $ 1 year ago


A is correct instead B and Final answer AD.
https://aws.amazon.com/kinesis/data-firehose/faqs/
Q: What is Amazon Kinesis Data Firehose?
It is a fully managed service that automatically scales to match the throughput of your data and requires no ongoing administration.
upvoted 15 times
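
The Lambda function in option A follows the Kinesis Data Firehose transformation contract: Firehose batches records, invokes the function, and expects each record back with its recordId, a result, and base64-encoded data. A minimal sketch (the enrichment itself is illustrative):

import base64
import json

def handler(event, context):
    output = []
    for record in event['records']:
        payload = json.loads(base64.b64decode(record['data']))
        payload['processed'] = True          # illustrative transformation of the JSON event
        output.append({
            'recordId': record['recordId'],
            'result': 'Ok',
            'data': base64.b64encode(json.dumps(payload).encode()).decode(),
        })
    return {'records': output}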

" # DashL 1 year ago


Unlike some other AWS services, Kinesis does not provide a native auto-scaling solution like DynamoDB On-Demand or EC2 Auto Scaling.
Therefore, there is a need for the right number of shards to be calculated for every stream based on the expected number of records and/or
the size of the records. This can lead to over/under-provisioning of shards within a stream resulting in higher costs and/or data ingestion being
throttled.
upvoted 3 times

" # Waiweng Highly Voted $ 1 year ago


It's A&D
upvoted 8 times

" # Andykris Most Recent % 2 months, 3 weeks ago


D because data is semi structure.
upvoted 1 times

" # Andykris 2 months, 3 weeks ago


A becase Kenisis data firehose can scale and handle data traffic
upvoted 1 times

" # Anhdd 5 months ago


I still don't understand why we'd choose D over C. Amazon QuickSight does the same thing as Kibana, doesn't it?
upvoted 3 times

" # Niaj 5 months, 3 weeks ago


Selected Answer: AD
AD for sure is the right answer here
upvoted 1 times

" # jj22222 9 months ago


a and D look right


upvoted 1 times

" # shaiker 10 months, 1 week ago


Selected Answer: AD
es is the only one doing the json unstructured data in the list. Aurora is relational and requires structured schema. firehose stream near realtime
into es
upvoted 1 times

" # tkanmani76 10 months, 1 week ago


A is fine.
C - Why not QuickSight ? As this specifically handles embeddable dashboards with visualizations on real time basis. Kibana can as well do this
along with ES - but why not QuickSight - AWS solution for dashboards ?.
upvoted 2 times

" # fanq10 2 weeks, 5 days ago


Why C? Here is the answer:
https://www.elastic.co/guide/en/elasticsearch/reference/current/near-real-time.html
Therefore, Amazon ES + kibana
upvoted 1 times

" # fanq10 2 weeks, 5 days ago


Sorry, a typo - Correction -> Why NOT C? Here is the answer:
https://www.elastic.co/guide/en/elasticsearch/reference/current/near-real-time.html
Therefore, the Answer is D: Amazon ES + kibana
upvoted 1 times

" # Anhdd 5 months ago


It seems that Aurora is relational and requires a structured schema, while the question requires semi-structured JSON data, in my opinion
upvoted 1 times

" # tkanmani76 10 months, 1 week ago


Supporting Links - Confirming Quicksight support for handling JSON Semi structured data as well ..https://docs.aws.amazon.com/quicksight
/latest/user/supported-data-sources.html
upvoted 1 times

" # cldy 11 months ago


A. Use Amazon Kinesis Data Firehose to buffer events. Create an AWS Lambda function to process and transform events.
D. Configure Amazon Elasticsearch Service (Amazon ES) to receive events. Use the Kibana endpoint deployed with Amazon ES to create near-
real-time visualizations and dashboards.
upvoted 1 times

" # AzureDP900 11 months ago


A & D is right
upvoted 1 times

" # acloudguru 11 months, 1 week ago


Selected Answer: AD
AD, this is a pattern of AWS, simple question, hope I can have it in my exam
upvoted 1 times

" # andylogan 1 year ago


It's A D
upvoted 1 times

" # icttss 1 year ago


how about B&E ?
https://aws-samples.github.io/aws-dbs-refarch-graph/src/writing-from-amazon-kinesis-data-streams/
upvoted 1 times

" # DerekKey 1 year ago


A&D correct
B wrong - Kinesis Datastream will not scale automatically. You must do it manually
C wrong - "dynamic schemas"
upvoted 3 times

" # tgv 1 year ago


AAA DDD
---
upvoted 1 times

" # blackgamer 1 year ago


Answer is AD


upvoted 1 times
" # WhyIronMan 1 year ago
I'll go with with B,D
upvoted 1 times

" # WhyIronMan 1 year ago


Changing to A,D after read
upvoted 1 times


Question #699 Topic 1

A life sciences company is using a combination of open source tools to manage data analysis workflows and Docker containers running on
servers in its on-premises data center to process genomics data. Sequencing data is generated and stored on a local storage area network (SAN),
and then the data is processed.
The research and development teams are running into capacity issues and have decided to re-architect their genomics analysis platform on AWS
to scale based on workload demands and reduce the turnaround time from weeks to days.
The company has a high-speed AWS Direct Connect connection. Sequencers will generate around 200 GB of data for each genome, and individual
jobs can take several hours to process the data with ideal compute capacity. The end result will be stored in Amazon S3. The company is
expecting 10-15 job requests each day.
Which solution meets these requirements?

A. Use regularly scheduled AWS Snowball Edge devices to transfer the sequencing data into AWS. When AWS receives the Snowball Edge
device and the data is loaded into Amazon S3, use S3 events to trigger an AWS Lambda function to process the data.

B. Use AWS Data Pipeline to transfer the sequencing data to Amazon S3. Use S3 events to trigger an Amazon EC2 Auto Scaling group to
launch custom-AMI EC2 instances running the Docker containers to process the data.

C. Use AWS DataSync to transfer the sequencing data to Amazon S3. Use S3 events to trigger an AWS Lambda function that starts an AWS
Step Functions workflow. Store the Docker images in Amazon Elastic Container Registry (Amazon ECR) and trigger AWS Batch to run the
container and process the sequencing data.

D. Use an AWS Storage Gateway file gateway to transfer the sequencing data to Amazon S3. Use S3 events to trigger an AWS Batch job that
executes on Amazon EC2 instances running the Docker containers to process the data.

Correct Answer: A

Community vote distribution


C (100%)

" # CarisB Highly Voted $ 1 year ago


Agree on C.
For instance: https://docs.aws.amazon.com/whitepapers/latest/genomics-data-transfer-analytics-and-machine-learning/transferring-genomics-
data-to-the-cloud-and-establishing-data-access-patterns-using-aws-datasync-and-aws-storage-gateway-for-files.html => Use AWS DataSync
to transfer data to Amazon S3
upvoted 19 times

" # ExtHo 1 year ago


Very good reference provided clears doubts :)
upvoted 1 times

" # sek12324 Highly Voted $ 1 year, 1 month ago


C for me, as docker images are used - they need ECR
upvoted 15 times

" # sjpd10 Most Recent % 1 month ago


The question has some keywords that are referenced in the AWS link - third-party tools, open-source tools - which appear in the question. It also says
that if the data is on local storage (SAN), use Storage Gateway.

Is the answer still A? I vote D.


upvoted 1 times

" # JohnPi 1 month, 1 week ago


the problem that I see with option C AWS DataSync is related to the fact that input is a storage area network (SAN) and according to the docs
AWS DataSync can connect to NAS as a NFS/SMB file share
upvoted 1 times

" # JohnPi 1 month, 1 week ago


this is the statement: "Sequencing data is generated and stored on a local storage area network (SAN), and then the data is processed." so
SAN not NAS
upvoted 1 times

" # asfsdfsdf 4 months ago


Selected Answer: C
Answer is C - docker images with ECR / DataSync to move the data with DX / Batch to run compute process with containers - this done with job


Type: "container" and an image from ECR


upvoted 1 times
" # johnnsmith 8 months, 2 weeks ago
D is the correct answer. C is incorrect. S3 event can trigger Batch job.
upvoted 2 times

" # asfsdfsdf 4 months ago


This is incorrect, S3 event can trigger only the below services:
Amazon Simple Notification Service (Amazon SNS) topics
Amazon Simple Queue Service (Amazon SQS) queues
AWS Lambda function
Answer is C - docker images - ECR / DataSync to move the data with DX
upvoted 1 times
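
In option C the S3 event invokes a Lambda function, and that function (or the Step Functions workflow it starts) is what submits the AWS Batch job running the ECR container. A minimal sketch of such a handler, with an illustrative job queue and job definition:

import boto3

batch = boto3.client('batch')

def handler(event, context):
    # Submit one Batch job per uploaded genome object.
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        key = record['s3']['object']['key']
        batch.submit_job(
            jobName='genomics-job',
            jobQueue='genomics-queue',             # illustrative queue
            jobDefinition='genomics-container',    # illustrative definition pointing at an ECR image
            parameters={'bucket': bucket, 'key': key},
        )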

" # jj22222 9 months ago


Selected Answer: C
CCCCCCCCCCCCCCCCCC
upvoted 2 times

" # pititcu667 9 months, 4 weeks ago


Selected Answer: C
c because i belive it's better to use ecs than lambda.
upvoted 2 times

" # cldy 10 months, 1 week ago


C is correct.
upvoted 1 times

" # mm84 10 months, 4 weeks ago


Agree on C.
upvoted 1 times

" # AzureDP900 11 months ago


C is right
upvoted 1 times

" # andylogan 1 year ago


It's C
refer to compare DataSync usage to others
https://aws.amazon.com/datasync/faqs/#When_to_choose_AWS_DataSync
upvoted 3 times

" # tgv 1 year ago


CCC
---
upvoted 1 times

" # blackgamer 1 year ago


C is the answer
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with C
upvoted 2 times

" # WhyIronMan 1 year ago


I'll go with C
upvoted 1 times

" # mustpassla 1 year ago


C looks good
upvoted 2 times


Question #700 Topic 1

A company has five physical data centers in specific locations around the world. Each data center has hundreds of physical servers with a mix of
Windows and
Linux-based applications and database services. Each data center also has an AWS Direct Connect connection of 10 Gbps to AWS with a
company-approved
VPN solution to ensure that data transfer is secure. The company needs to shut down the existing data centers as quickly as possible and migrate
the servers and applications to AWS.
Which solution meets these requirements?

A. Install the AWS Server Migration Service (AWS SMS) connector onto each physical machine. Use the AWS Management Console to select
the servers from the server catalog, and start the replication. Once the replication is complete, launch the Amazon EC2 instances created by
the service.

B. Install the AWS DataSync agent onto each physical machine. Use the AWS Management Console to configure the destination to be an AMI,
and start the replication. Once the replication is complete, launch the Amazon EC2 instances created by the service.

C. Install the CloudEndure Migration agent onto each physical machine. Create a migration blueprint, and start the replication. Once the
replication is complete, launch the Amazon EC2 instances in cutover mode.

D. Install the AWS Application Discovery Service agent onto each physical machine. Use the AWS Migration Hub import option to start the
replication. Once the replication is complete, launch the Amazon EC2 instances created by the service.

Correct Answer: A

Community vote distribution


A (50%) C (50%)

" # ajeeshb Highly Voted $ 1 year, 1 month ago


Answer is C.
SMS is currently not supported on physical machines.
https://aws.amazon.com/blogs/architecture/field-notes-choosing-a-rehost-migration-tool-cloudendure-or-aws-sms/
upvoted 19 times

" # blackgamer Highly Voted $ 1 year, 1 month ago


The question is outdated. Now the recommended solution from AWS is Application Migration Service.

https://aws.amazon.com/application-migration-service/
upvoted 17 times

" # MrCarter 1 year ago


awesome info, thanks
upvoted 1 times

" # cannottellname 9 months, 1 week ago


Exactly....!!
AWS Application Migration Service (MGN) is the primary migration service recommended for lift-and-shift migrations to the AWS Cloud.
Customers who currently use CloudEndure Migration or AWS Server Migration Service (AWS SMS) are encouraged to switch to MGN for
future migrations.
upvoted 1 times

" # asfsdfsdf Most Recent % 4 months ago


Only C will work for migration lift and shift physical servers.
Question is outdated - SMS is no longer recommended and it used to work only on Vmware/hyper-v/azure virtual machines.
Cloudendure is MGN now - so the correct answer if you see this in the exam is Application Migration Service. (MGN)
upvoted 1 times

" # psou7 1 month ago


Well, MGN is replacing AWS Server Migration Services & CloudEndure Migration.
So the question does not make sense for the time being.
C would be the best fit but it wont show up in the exam. It is outdated (based on material from Stephane Maarek).
upvoted 2 times

" # Enigmaaaaaa 4 months ago


Selected Answer: C
CloudEndure - which is now MGN is the correct solution for physical server.


SMS is for migrating virtual servers.


upvoted 1 times
" # Chuky64 4 months, 3 weeks ago
Selected Answer: A
With AWS Application Migration Service, you can migrate your applications from physical infrastructure, VMware vSphere, Microsoft Hyper-V,
Amazon Elastic Compute Cloud (Amazon EC2), Amazon Virtual Private Cloud (Amazon VPC), and other clouds to AWS.

December 30, 2022 - Existing CloudEndure Migration: CloudEndure is deprecated.


upvoted 1 times

" # itznotme 7 months, 3 weeks ago


Selected Answer: C
C

aws.amazon.com/blogs/architecture/field-notes-choosing-a-rehost-migration-tool-cloudendure-or-aws-sms/
upvoted 1 times

" # cannottellname 9 months, 1 week ago


Selected Answer: A
SMS is currently supported. SMS is named as MGN because of few improvements & changes in its process.

AWS still does not recommend CloudEndure currently. please check their website.

"AWS Application Migration Service (MGN) is the primary migration service recommended for lift-and-shift migrations to the AWS Cloud.
Customers who currently use CloudEndure Migration or AWS Server Migration Service (AWS SMS) are encouraged to switch to MGN for future
migrations."
upvoted 1 times

" # tkanmani76 10 months, 3 weeks ago


SMS is meant for migration of Virtual machines to AWS whereas Cloud Endure can be used for Physical, Virtual or Cloud Server. Hence C is
correct. https://aws.amazon.com/blogs/architecture/field-notes-choosing-a-rehost-migration-tool-cloudendure-or-aws-sms/
upvoted 1 times

" # AzureDP900 11 months ago


C is right
upvoted 1 times

" # andylogan 1 year ago


It's C
upvoted 1 times

" # tgv 1 year ago


CCC
---
upvoted 1 times

" # Liongeek 1 year ago


If you're reading this in 2021, the answer is A. Read the second line here:
https://docs.aws.amazon.com/es_es/mgn/latest/ug/what-is-application-migration-service.html
"AWS Application Migration Service (MGN) is a highly automated lift-and-shift (rehost) solution that simplifies, expedites, and reduces the cost of
migrating applications to AWS. It enables companies to lift-and-shift a large number of physical, virtual, or cloud servers"
upvoted 4 times

" # student22 1 year ago


This link is about MGM. The question is about SMS.
Answer is C.
upvoted 1 times

" # student22 1 year ago


I mean answer A (not the question) is about SMS.
upvoted 1 times

" # student22 1 year ago


https://aws.amazon.com/blogs/architecture/field-notes-choosing-a-rehost-migration-tool-cloudendure-or-aws-sms/
upvoted 1 times

" # Goram113 1 year ago


MGN is not SMS but new AWS flavoured version of cloudendure. So it is C :-)
Amazon made acquisition of cloudendure some time ago.
upvoted 2 times

" # Suresh108 1 year ago


CCCCCC - key word here is 'physical servers' - MGN or cloud endure

https://console.cloudendure.com/#/register/register
upvoted 1 times
" # WhyIronMan 1 year ago
I'll go with C
upvoted 2 times

" # hk436 1 year ago


c for sure
upvoted 1 times

" # Waiweng 1 year, 1 month ago


it's C
upvoted 2 times

" # blackgamer 1 year, 1 month ago


Yes agreed with C. Keyword is physical servers.
upvoted 1 times


Question #701 Topic 1

A security engineer determined that an existing application retrieves credentials to an Amazon RDS for MySQL database from an encrypted file in
Amazon S3. For the next version of the application, the security engineer wants to implement the following application design changes to improve
security:
✑ The database must use strong, randomly generated passwords stored in a secure AWS managed service.
✑ The application resources must be deployed through AWS CloudFormation.
✑ The application must rotate credentials for the database every 90 days.
A solutions architect will generate a CloudFormation template to deploy the application.
Which resources specified in the CloudFormation template will meet the security engineer's requirements with the LEAST amount of operational
overhead?

A. Generate the database password as a secret resource using AWS Secrets Manager. Create an AWS Lambda function resource to rotate the
database password. Specify a Secrets Manager RotationSchedule resource to rotate the database password every 90 days.

B. Generate the database password as a SecureString parameter type using AWS Systems Manager Parameter Store. Create an AWS Lambda
function resource to rotate the database password. Specify a Parameter Store RotationSchedule resource to rotate the database password
every 90 days.

C. Generate the database password as a secret resource using AWS Secrets Manager. Create an AWS Lambda function resource to rotate the
database password. Create an Amazon EventBridge scheduled rule resource to trigger the Lambda function password rotation every 90 days.

D. Generate the database password as a SecureString parameter type using AWS Systems Manager Parameter Store. Specify an AWS AppSync
DataSource resource to automatically rotate the database password every 90 days.

Correct Answer: C

Community vote distribution


A (100%)

" # sek12324 Highly Voted $ 1 year, 1 month ago


A
https://aws.amazon.com/blogs/security/how-to-securely-provide-database-credentials-to-lambda-functions-by-using-aws-secrets-manager/
upvoted 12 times

" # nitinz Highly Voted $ 1 year, 1 month ago


Sure Answer is A, but I do not see the need of lambda to do the rotation when secret manager can do the rotation. can some one enlighten me?
upvoted 8 times

" # kalyan_krishna742020 1 year ago


Nope. You need to choose a Lambda function in order to rotate the secret.
upvoted 5 times

" # hilft Most Recent % 3 months, 2 weeks ago


A. AWS secret manager with lambda.
upvoted 1 times

" # TechX 4 months ago


Selected Answer: A
A 100%, AWS Secret Manager auto rotate, you don't need to trigger a lambda function
upvoted 1 times

" # tartarus23 6 months, 1 week ago


Selected Answer: A
A. as AWS secrets manager supports passport rotation through Lambda functions and rotation schedule.
C. is not correct since there is no need to use Amazon EventBridge to meet the requirements

https://aws.amazon.com/secrets-manager/
upvoted 1 times

" # shotty1 9 months, 2 weeks ago


Selected Answer: A
It is A
upvoted 1 times


" # weurseuk 9 months, 3 weeks ago


A : secret manager can rotate on console but here it's with cloudformation, and it's done with the lambda, see
AWS::SecretsManager::RotationSchedule
upvoted 1 times
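
For reference, the rotation that AWS::SecretsManager::RotationSchedule declares in the template corresponds to the RotateSecret API. A boto3 sketch with placeholder ARNs (the CloudFormation resource is what the question asks for; this only shows the underlying configuration):

import boto3

sm = boto3.client('secretsmanager')

sm.rotate_secret(
    SecretId='arn:aws:secretsmanager:...:secret:crm-db-password',    # placeholder secret ARN
    RotationLambdaARN='arn:aws:lambda:...:function:rotate-mysql',    # placeholder rotation function ARN
    RotationRules={'AutomaticallyAfterDays': 90},
)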

" # GeniusMikeLiu 10 months, 2 weeks ago


why we need lambda to rotate database password???
upvoted 1 times

" # cldy 11 months ago


A. Generate the database password as a secret resource using AWS Secrets Manager. Create an AWS Lambda function resource to rotate the
database password. Specify a Secrets Manager RotationSchedule resource to rotate the database password every 90 days.
upvoted 1 times

" # AzureDP900 11 months ago


I will go with A
upvoted 1 times

" # andylogan 1 year ago


It's A
upvoted 1 times

" # mrphuongbn 1 year ago


Both A & C are ok. But "with the LEAST amount of operational overhead"
=> A.
https://aws.amazon.com/blogs/security/how-to-securely-provide-database-credentials-to-lambda-functions-by-using-aws-secrets-manager/
upvoted 1 times

" # tgv 1 year ago


AAA
---
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with A
upvoted 1 times

" # blackgamer 1 year ago


The answer is A. it can configure AWS Secret Manager to rotate key automatically, not necessary to use EventBridge for that.

https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotating-secrets.html
upvoted 3 times

" # Waiweng 1 year ago


it's A
upvoted 1 times

" # CarisB 1 year ago


A
https://docs.aws.amazon.com/secretsmanager/latest/userguide/integrating_cloudformation.html
upvoted 3 times


Question #702 Topic 1

A company has a three-tier application running on AWS with a web server, an application server, and an Amazon RDS MySQL DB instance. A
solutions architect is designing a disaster recovery (DR) solution with an RPO of 5 minutes.
Which solution will meet the company's requirements?

A. Configure AWS Backup to perform cross-Region backups of all servers every 5 minutes. Reprovision the three tiers in the DR Region from
the backups using AWS CloudFormation in the event of a disaster.

B. Maintain another running copy of the web and application server stack in the DR Region using AWS CloudFormation drift detection.
Configure cross-Region snapshots of the DB instance to the DR Region every 5 minutes. In the event of a disaster, restore the DB instance
using the snapshot in the DR Region.

C. Use Amazon EC2 Image Builder to create and copy AMIs of the web and application server to both the primary and DR Regions. Create a
cross-Region read replica of the DB instance in the DR Region. In the event of a disaster, promote the read replica to become the master and
reprovision the servers with AWS CloudFormation using the AMIs.

D. Create AMIs of the web and application servers in the DR Region. Use scheduled AWS Glue jobs to synchronize the DB instance with
another DB instance in the DR Region. In the event of a disaster, switch to the DB instance in the DR Region and reprovision the servers with
AWS CloudFormation using the AMIs.

Correct Answer: C

Community vote distribution


C (71%) A (29%)

" # hbrand Highly Voted $ 1 year ago


My pick would be A.
https://docs.aws.amazon.com/aws-backup/latest/devguide/cross-region-backup.html

C and D don't really make sense to me. EC2 image builder is for deploying an creating new AMIs. Glue is for data integration. With B I am not
sure how drift detection would help as that would just allow a rollback and is not geared towards backup. Also A seemed to be the only one that
addressed backing up the web and app servers along with RDS.
upvoted 10 times

" # sashenka 11 months, 3 weeks ago


[C} It can'b be A becasue cross-Region backup of a large RDS MySQL could take more than 5 minutes and in fact often takes 1+ hrs. This is
done with a cop;y and not a continuous replication. So a backup job would not complete fast enough to be able to meet the 5 min RPO.
upvoted 7 times

" # peddyua 8 months, 4 weeks ago


you didn't read the article...
"With this feature, database and backup administrators are able to reduce their recovery point objective (RPO) to 5 minutes or under,
directly from the AWS Backup console."

We use AWS Backups+Terraform at work for our DR Scenario.


upvoted 2 times

" # PatrickLi 8 months, 3 weeks ago


while this is true, it requires the point-in-time recovery feature of AWS backup. Option A didn't say that and implies it uses the normal
snapshot based backups, which won't meet the RPO
upvoted 3 times

" # futen0326 Highly Voted $ 8 months, 3 weeks ago


Selected Answer: C
Answer CANNOT be A, deploying a brand new RDS instance will take >30 minutes. You will use EC2 Image builder to put the AMIs into the new
region, but not use image builder to LAUNCH them. To me the answer is clearly C.
upvoted 5 times

" # razerlg 8 months ago


it doesn't matter how long it takes to deploy the RDS. Question is only asking about RPO, not RTO.
upvoted 4 times

" # Nidhin07 Most Recent % 5 days, 18 hours ago


RTO is not provided hence we are covered with option A
upvoted 1 times


" # dcdcdc3 1 month, 1 week ago


Selected Answer: C
This article describes continuous backup for RDS (not Aurora, not EC2, just RDS) with AWS backup (RPO under 5 min), which means that
backups of all servers every 5 minutes is NOT possible, only RDS can be handled, I think I will have to go with C:
https://aws.amazon.com/blogs/storage/point-in-time-recovery-and-continuous-backup-for-amazon-rds-with-aws-backup/
upvoted 3 times

" # skywalker 1 month, 2 weeks ago


Between A and C... I go for C ...
Time to restore the data will take times.. and having AWS CloudFormation also take time to rebuilt the infras.... all within 5 minutes is
questionable.

On C, AMI, and Database are already in place on DR site. Just need to activite the failover to make the DR become production.. All this can
happen in 5 mins... Thus my pick i C.
upvoted 2 times

" # pixepe 1 month, 3 weeks ago


C is Correct;

Note that we have aggressive RPO.

Now, Disaster Strategy - Cold DR (Backup & Restore) vs Hot DR (active to active);
due to aggressive RPO => Replication (Hot DR)

Article on DR for RDS (though it has SQL server instead of MySQL; concept remains same) - To meet very aggressive RPO and RTO
requirements, your DR strategy needs to consider continuous replication capability
https://aws.amazon.com/blogs/database/cross-region-disaster-recovery-of-amazon-rds-for-sql-server/
upvoted 2 times
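
With option C, failover amounts to promoting the cross-Region read replica and re-pointing the application at it. A minimal boto3 sketch with an illustrative instance identifier and DR Region:

import boto3

rds = boto3.client('rds', region_name='us-west-2')     # DR Region (illustrative)

# Promote the cross-Region read replica to a standalone primary.
rds.promote_read_replica(DBInstanceIdentifier='crm-mysql-replica')

# Wait until it is available before cutting application traffic over.
rds.get_waiter('db_instance_available').wait(DBInstanceIdentifier='crm-mysql-replica')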

" # Sathish1412 1 month, 3 weeks ago


Selected Answer: A
Because RPO is 5 minutes, not RTO.
upvoted 1 times

" # foureye2004 2 months, 3 weeks ago


Should be C
https://aws.amazon.com/image-builder/faqs/ - "Image Builder can copy AMIs to selected AWS regions using existing AMI sharing mechanisms. The distribution can be gated on the passing of tests with Image Builder."

If it were A, it should have included Point-In-Time Recovery


https://docs.aws.amazon.com/aws-backup/latest/devguide/point-in-time-recovery.html
upvoted 2 times

" # TechX 4 months, 1 week ago


Selected Answer: A
I support A, a simple require that need 5 mins of RPO (Not RTO).
A meet the requirement with least complexity. Although C would work but it have higher cost when you have active-active model which is not
require in the question, I prefer the simpler solution
upvoted 1 times

" # sudeender 5 months, 2 weeks ago


A is correct here, promoting RDS Mysql read replica as primary needs to be done manually and also RDS instance has to be rebooted.. this will
not meet 5 minutes RPO
upvoted 2 times

" # aloha123 6 months, 1 week ago


Selected Answer: A
Agree with hbrand
upvoted 1 times

" # pititcu667 8 months, 3 weeks ago


A is not feasible because of 5 min restore time. Restore point objective less than 5 min, the limit for aws backup way more than that. The only
way to achieve it is with a read replica.
upvoted 1 times

" # Bigbearcn 8 months, 3 weeks ago


Selected Answer: C
It's C.
upvoted 2 times

" # peddyua 8 months, 4 weeks ago


Selected Answer: A
https://docs.aws.amazon.com/aws-backup/latest/devguide/cross-region-backup.html
A
upvoted 1 times

" # cannottellname 9 months ago


Cross Region Replication without RDS doesn't seem a good option as well as to maintain it as Read Replica.

From https://aws.amazon.com/blogs/storage/point-in-time-recovery-and-continuous-backup-for-amazon-rds-with-aws-backup/ I think answer


should be A
upvoted 1 times

" # AzureDP900 11 months ago


I agree with C as right answer for given scnerio
upvoted 2 times

" # peddyua 8 months, 4 weeks ago


You can't use EC2 Image Builder for DR Scenario. Also Read replicas have a lag as well on replication as it's async. How big this lag can be...
depends on several factors.
upvoted 1 times

" # Liongeek 11 months, 4 weeks ago


You cannot use EC2 Image Builder as a DR strategy; its purpose isn't even related... that eliminates C (although the cross-region replica sounds good).
That leaves us A, which is the answer
upvoted 1 times


Question #703 Topic 1

A company wants to migrate its corporate data center from on premises to the AWS Cloud. The data center includes physical servers and VMs
that use VMware and Hyper-V. An administrator needs to select the correct services to collect data for the initial migration discovery process. The
data format should be supported by AWS Migration Hub. The company also needs the ability to generate reports from the data.
Which solution meets these requirements?

A. Use the AWS Agentless Discovery Connector for data collection on physical servers and all VMs. Store the collected data in Amazon S3.
Query the data with S3 Select. Generate reports by using Kibana hosted on Amazon EC2.

B. Use the AWS Application Discovery Service agent for data collection on physical servers and all VMs. Store the collected data in Amazon
Elastic File System (Amazon EFS). Query the data and generate reports with Amazon Athena.

C. Use the AWS Application Discovery Service agent for data collection on physical servers and Hyper-V. Use the AWS Agentless Discovery
Connector for data collection on VMware. Store the collected data in Amazon S3. Query the data with Amazon Athena. Generate reports by
using Amazon QuickSight.

D. Use the AWS Systems Manager agent for data collection on physical servers. Use the AWS Agentless Discovery Connector for data
collection on all VMs. Store, query, and generate reports from the collected data by using Amazon Redshift.

Correct Answer: C

" # awsnoob Highly Voted $ 1 year ago


C is correct. VMware can support agentless discovery
upvoted 14 times

" # nitinz Highly Voted $ 1 year, 1 month ago


C is correct answer.
upvoted 7 times

" # cldy Most Recent % 11 months ago


C. Use the AWS Application Discovery Service agent for data collection on physical servers and Hyper-V. Use the AWS Agentless Discovery
Connector for data collection on VMware. Store the collected data in Amazon S3. Query the data with Amazon Athena. Generate reports by using
Amazon QuickSight.
upvoted 1 times

" # AzureDP900 11 months ago


C is right
upvoted 1 times

" # andylogan 1 year ago


It's C
upvoted 1 times

" # tgv 1 year ago


CCC
---
upvoted 1 times

" # Suresh108 1 year ago


CCCC -- Possible answers are B and C , eliminate B due to 'EFS', hence CCCC.
upvoted 2 times

" # WhyIronMan 1 year ago


I'll go with C

https://docs.aws.amazon.com/application-discovery/latest/userguide/discovery-agent.html

https://docs.aws.amazon.com/application-discovery/latest/userguide/discovery-connector.html
upvoted 1 times
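
Once the discovery data has been pushed to S3 (Migration Hub's data exploration feature), querying it with Athena and reporting in QuickSight is straightforward. A boto3 sketch with an illustrative query, database name, and results bucket:

import boto3

athena = boto3.client('athena')

athena.start_query_execution(
    QueryString='SELECT host_name, total_ram_in_kb FROM os_info_agent LIMIT 100',   # illustrative query
    QueryExecutionContext={'Database': 'application_discovery_service_database'},    # assumed database name
    ResultConfiguration={'OutputLocation': 's3://discovery-query-results/'},         # illustrative bucket
)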

" # tvs 1 year ago


Question is badly defined. Athena can query the data collect by agent not the connector. No other choice os going with C eventhougth it is
partially wrong.
upvoted 1 times

" # Waiweng 1 year ago


it's C
upvoted 2 times

" # anandbabu 1 year ago


D is correct option
upvoted 1 times

" # kejam 1 year ago


C: https://docs.aws.amazon.com/application-discovery/latest/userguide/discovery-connector.html
upvoted 3 times

" # champcloud 1 year ago


Going with C
upvoted 3 times

" # SD13 1 year ago


Correct Option : C
upvoted 3 times

" # sek12324 1 year, 1 month ago


B-
Agent-based discovery can be performed by deploying the AWS Application Discovery Agent on each of your VMs and physical servers.
upvoted 1 times

" # wasabidev 1 year, 1 month ago


Athena doesn't allow to query EFS, but S3
upvoted 2 times


Question #704 Topic 1

A company is using Amazon Aurora MySQL for a customer relationship management (CRM) application. The application requires frequent
maintenance on the database and the Amazon EC2 instances on which the application runs. For AWS Management Console access, the system
administrators authenticate against
AWS Identity and Access Management (IAM) using an internal identity provider. For database access, each system administrator has a user name
and password that have previously been configured within the database.
A recent security audit revealed that the database passwords are not frequently rotated. The company wants to replace the passwords with
temporary credentials using the company's existing AWS access controls.
Which set of options will meet the company's requirements?

A. Create a new AWS Systems Manager Parameter Store entry for each database password. Enable parameter expiration to invoke an AWS
Lambda function to perform password rotation by updating the parameter value. Create an IAM policy allowing each system administrator to
retrieve their current password from the Parameter Store. Use the AWS CLI to retrieve credentials when connecting to the database.

B. Create a new AWS Secrets Manager entry for each database password. Configure password rotation for each secret using an AWS Lambda
function in the same VPC as the database cluster. Create an IAM policy allowing each system administrator to retrieve their current password.
Use the AWS CLI to retrieve credentials when connecting to the database.

C. Enable IAM database authentication on the database. Attach an IAM policy to each system administrator's role to map the role to the
database user name. Install the Amazon Aurora SSL certificate bundle to the system administrators' certificate trust store. Use the AWS CLI to
generate an authentication token used when connecting to the database.

D. Enable IAM database authentication on the database. Configure the database to use the IAM identity provider to map the administrator
roles to the database user. Install the Amazon Aurora SSL certificate bundle to the system administrators' certificate trust store. Use the AWS
CLI to generate an authentication token used when connecting to the database.

Correct Answer: C
Reference:
https://aws.amazon.com/premiumsupport/knowledge-center/users-connect-rds-iam/
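A minimal sketch of the token flow in option C, using Python with boto3 instead of the CLI; the cluster endpoint, port, database user name, and Region are placeholders:

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Generates a short-lived (15-minute) IAM authentication token locally;
# the caller's IAM role must allow rds-db:connect for this database user.
token = rds.generate_db_auth_token(
    DBHostname="crm-cluster.cluster-abc123.us-east-1.rds.amazonaws.com",  # placeholder endpoint
    Port=3306,
    DBUsername="sysadmin",
    Region="us-east-1",
)

# The token is then used as the password on an SSL connection that trusts
# the Aurora certificate bundle.
print(token[:60], "...")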

Community vote distribution


C (100%)

" # ExtHo Highly Voted $ 1 year ago


C seems correct.
"passwords are not frequently rotated" + AWS Secrets Manager is just a trap, but the actual requirement is to replace the passwords with temporary
credentials, which C fulfills
upvoted 18 times

" # ItsmeP Highly Voted $ 1 year, 1 month ago


D is correct: the company wants to replace the passwords with temporary credentials using the company's existing AWS access controls.
To set up IAM database authentication using IAM roles, follow these steps:

1. Enable IAM DB authentication on the DB instance.


2. Create a database user account that uses an AWS authentication token.
3. Add an IAM policy that maps the database user to the IAM role.
4. Attach the IAM role to the EC2 instance.
5. Generate an AWS authentication token to identify the IAM role.
6. Download the SSL root certificate file or certificate bundle file.
7. Connect to the DB instance using IAM role credentials and the authentication token or an SSL certificate.
https://aws.amazon.com/premiumsupport/knowledge-center/users-connect-rds-iam/
upvoted 14 times

" # Bilengh 1 year, 1 month ago


Doesn't step no. 3 mean that the answer is then C?
upvoted 10 times

" # sarah_t 1 year ago


C, because: "To allow an IAM user or role to connect to your DB cluster, you must create an IAM policy. After that, you attach the policy to an
IAM user or role."
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/UsingWithRDS.IAMDBAuth.IAMPolicy.html
upvoted 8 times


" # certainly 1 year ago


I think C sounds closer to what is described in the AWS document. Also, D fits better with the question's IdP wording, but it sounds a little off to
me - how do you configure the database to use the IAM identity provider to map the administrator roles to the database user?
upvoted 7 times

" # Gaurav_GGG 10 months, 2 weeks ago


Your own 3rd step maps an IAM policy to the IAM role, which means C is correct.
upvoted 1 times
" # skywalker Most Recent % 1 month ago
Here it is creating a new account, not enrolling an existing account, thus B is the answer. If we were enrolling an existing account, then A, as it
requires signing in with the 64-character password sent to the email specified during the account invitation.
upvoted 1 times

" # hilft 3 months, 1 week ago


when you see rotation => secret manager
B.
upvoted 2 times

" # Bigbearcn 7 months, 3 weeks ago


Selected Answer: C
To allow an IAM user or role to connect to your DB instance, you must create an IAM policy. After that, you attach the policy to an IAM user or
role.
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.IAMPolicy.html
upvoted 2 times

" # cldy 10 months, 1 week ago


C is correct.
https://aws.amazon.com/premiumsupport/knowledge-center/users-connect-rds-iam/
upvoted 1 times

" # cldy 11 months ago


C. Enable IAM database authentication on the database. Attach an IAM policy to each system administrator's role to map the role to the
database user name. Install the Amazon Aurora SSL certificate bundle to the system administrators' certificate trust store. Use the AWS CLI
to generate an authentication token used when connecting to the database.
upvoted 1 times

" # Liongeek 11 months, 4 weeks ago


Ans C
If you could even configure the DB to use the IAM IdP to map the administrator roles... you would not only be giving access to the administrators
but also to all the users in the IdP.
upvoted 2 times

" # andylogan 1 year ago


It's C - IAM policy with temporary credentials
upvoted 2 times

" # student22 1 year ago


C - Correct
A,B - Wrong: Retrieves password
D - Not sure if IDP can be used this way. C sounds better.
upvoted 2 times

" # DerekKey 1 year ago


C correct- https://aws.amazon.com/blogs/big-data/federate-database-user-authentication-easily-with-iam-and-amazon-redshift/
D wrong
upvoted 1 times

" # Coffeinerd 1 year ago


A and B are wrong: it does not address the use of temporary credentials and using existing company controls. It will just rotate existing
credentials but not use temporary ones.

C and D are in the fight... from a technical perspective D would be better BUT I could not find any doc that explains how to leverage an IdP with
IAM DB Auth, so I would go for C as it follows the current process to grant an IAM user DB rights.

CCC!
upvoted 1 times

" # tgv 1 year ago


CCC
---
upvoted 1 times

" # Sur272 1 year ago

480 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

why A is not correct?


upvoted 1 times

" # Viper57 1 year ago


Because the question states the company wants to replace the passwords with temporary credentials using the company's existing AWS
access controls.

Option A means they would still rely on storing their password rather than using temp credentials.
upvoted 1 times
" # Suresh108 1 year ago
Going with CCCCC. Probable C and D, eliminated D due to Identity provider
upvoted 1 times

" # denccc 1 year ago


it's D
upvoted 1 times

" # Kopa 1 year ago


Im going for C
upvoted 2 times


Question #705 Topic 1

A company's AWS architecture currently uses access keys and secret access keys stored on each instance to access AWS services. Database
credentials are hard-coded on each instance. SSH keys for command-line remote access are stored in a secured Amazon S3 bucket. The company
has asked its solutions architect to improve the security posture of the architecture without adding operational complexity.
Which combination of steps should the solutions architect take to accomplish this? (Choose three.)

A. Use Amazon EC2 instance profiles with an IAM role

B. Use AWS Secrets Manager to store access keys and secret access keys

C. Use AWS Systems Manager Parameter Store to store database credentials

D. Use a secure fleet of Amazon EC2 bastion hosts for remote access

E. Use AWS KMS to store database credentials

F. Use AWS Systems Manager Session Manager for remote access

Correct Answer: ABD

Community vote distribution


ACF (100%)

" # wasabidev Highly Voted $ 1 year, 1 month ago


I think ACF are better
upvoted 28 times

" # cnethers Highly Voted $ 1 year ago


ACF
A - roles and instance profiles attached to an instance defining who and what access is a best practice
B - not required if you're using SSM Session Manager, so you would not need access keys for instances
C - Parameter Store can be used to store secrets, so we are green; a better option would be Secrets Manager, which has password rotation (see the sketch after this comment)
D - not wrong but why would you when you can use session manager?
E - just wrong
F - no brainer https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager.html
upvoted 16 times
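A minimal sketch of A + C together (Python with boto3): code running on an instance whose instance profile allows ssm:GetParameter reads the database credential at runtime, so nothing is hard-coded on the instance. The parameter name below is hypothetical:

import boto3

# Credentials come from the EC2 instance profile (option A); nothing is stored on disk.
ssm = boto3.client("ssm", region_name="us-east-1")

# Hypothetical SecureString parameter holding the database password (option C).
param = ssm.get_parameter(Name="/app/db/password", WithDecryption=True)
db_password = param["Parameter"]["Value"]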

" # gnic Most Recent % 2 months, 1 week ago


ABF - Database Credential should be stored in secret manager
upvoted 1 times

" # JYZ 6 months, 2 weeks ago


F is not a good choice as it requires access to the console.
upvoted 1 times

" # cldy 11 months ago


A. Use Amazon EC2 instance profiles with an IAM role
C. Use AWS Systems Manager Parameter Store to store database credentials
F. Use AWS Systems Manager Session Manager for remote access
upvoted 1 times

" # AzureDP900 11 months ago


A,C,F correct
upvoted 1 times

" # acloudguru 11 months, 2 weeks ago


Selected Answer: ACF
ACF
A - roles and instance profiles attached to an instance defining who and what access is a best practice
B - not required if you're using SSM Session Manager, so you would not need access keys for instances
C - Parameter Store can be used to store secrets, so we are green; a better option would be Secrets Manager, which has password rotation
D - not wrong but why would you when you can use session manager?
E - just wrong
F - no brainer https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager.html
upvoted 3 times

" # andylogan 1 year ago


It's A C F


upvoted 1 times
" # Kopa 1 year ago
A,C,F no doubt
upvoted 1 times

" # tgv 1 year ago


AAA CCC FFF
---
upvoted 1 times

" # blackgamer 1 year ago


ACF is the answer
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with A, C, F
upvoted 1 times

" # vimgoru24 1 year ago


ACF no doubts
upvoted 1 times

" # Waiweng 1 year ago


it's A,C,F
upvoted 2 times

" # blackgamer 1 year ago


Yes, totally agree with ACF.
upvoted 1 times

" # anandbabu 1 year ago


ACD is correct answer
upvoted 1 times

" # certainly 1 year ago


B is incorrect. don't need to store secret if you can access by assuming role. so I will go for ACF
upvoted 3 times


Question #706 Topic 1

A company wants to change its internal cloud billing strategy for each of its business units. Currently, the cloud governance team shares reports
for overall cloud spending with the head of each business unit. The company uses AWS Organizations to manage the separate AWS accounts for
each business unit. The existing tagging standard in Organizations includes the application, environment, and owner. The cloud governance team
wants a centralized solution so each business unit receives monthly reports on its cloud spending. The solution should also send notifications for
any cloud spending that exceeds a set threshold.
Which solution is the MOST cost-effective way to meet these requirements?

A. Configure AWS Budgets in each account and configure budget alerts that are grouped by application, environment, and owner. Add each
business unit to an Amazon SNS topic for each alert. Use Cost Explorer in each account to create monthly reports for each business unit.

B. Configure AWS Budgets in the organization's master account and configure budget alerts that are grouped by application, environment, and
owner. Add each business unit to an Amazon SNS topic for each alert. Use Cost Explorer in the organization's master account to create
monthly reports for each business unit.

C. Configure AWS Budgets in each account and configure budget alerts that are grouped by application, environment, and owner. Add each
business unit to an Amazon SNS topic for each alert. Use the AWS Billing and Cost Management dashboard in each account to create monthly
reports for each business unit.

D. Enable AWS Cost and Usage Reports in the organization's master account and configure reports grouped by application, environment, and
owner. Create an AWS Lambda function that processes AWS Cost and Usage Reports, sends budget alerts, and sends monthly reports to each
business unit's email list.

Correct Answer: B

Community vote distribution


B (100%)

" # nitinz Highly Voted $ 1 year, 1 month ago


B: https://aws.amazon.com/about-aws/whats-new/2019/07/introducing-aws-budgets-reports/#:~:text=AWS%20Budgets%20gives%20you%20the,below%20the%20threshold%20you%20define.
upvoted 19 times
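To make option B concrete, a rough Python (boto3) sketch of one per-business-unit budget with an 80% alert to SNS. The account ID, budget amount, tag filter, and topic ARN are placeholders:

import boto3

budgets = boto3.client("budgets", region_name="us-east-1")

budgets.create_budget(
    AccountId="111122223333",  # placeholder management account ID
    Budget={
        "BudgetName": "bu-marketing-monthly",
        "BudgetLimit": {"Amount": "10000", "Unit": "USD"},
        # Tag filter in the Budgets API format; key and value are placeholders.
        "CostFilters": {"TagKeyValue": ["user:owner$marketing"]},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[
        {
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 80,
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [
                {"SubscriptionType": "SNS",
                 "Address": "arn:aws:sns:us-east-1:111122223333:budget-alerts"}
            ],
        }
    ],
)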

" # Blair77 Most Recent % 3 weeks, 4 days ago


Selected Answer: B
BBB ! Centralized + AWS Budgets + Cost Explorer
upvoted 1 times

" # AzureDP900 11 months ago


B is right
upvoted 1 times

" # cldy 11 months ago


B. Configure AWS Budgets in the organization's master account and configure budget alerts that are grouped by application, environment,
and owner. Add each business unit to an Amazon SNS topic for each alert. Use Cost Explorer in the organization's master account to create
monthly reports for each business unit.
upvoted 1 times

" # andylogan 1 year ago


It's B
upvoted 1 times

" # AkanshaR91 1 year ago


this question appeared in my exam on 12th July
upvoted 3 times

" # student22 1 year ago


B
Centralized + AWS Budgets + Cost Explorer
upvoted 1 times

" # tgv 1 year ago


BBB
---


upvoted 1 times
" # WhyIronMan 1 year ago
I'll go with B
upvoted 1 times

" # hk436 1 year, 1 month ago


B: AWS Budgets gives you the ability to set custom budgets that alert you when your costs or usage exceed (or are forecasted to exceed) your
budgeted amount.
upvoted 3 times

" # vkbajoria 1 year, 1 month ago


it's B
upvoted 2 times

" # Waiweng 1 year, 1 month ago


it's B
upvoted 4 times

" # ppshein 1 year, 1 month ago


I will go with B.
upvoted 2 times


Question #707 Topic 1

A company is configuring connectivity to a multi-account AWS environment to support application workloads that serve users in a single
geographic region. The workloads depend on a highly available, on-premises legacy system deployed across two locations. It is critical for the
AWS workloads to maintain connectivity to the legacy system, and a minimum of 5 Gbps of bandwidth is required. All application workloads within
AWS must have connectivity with one another.
Which solution will meet these requirements?

A. Configure multiple AWS Direct Connect (DX) 10 Gbps dedicated connections from a DX partner for each on-premises location. Create
private virtual interfaces on each connection for each AWS account VPC. Associate the private virtual interface with a virtual private gateway
attached to each VPC.

B. Configure multiple AWS Direct Connect (DX) 10 Gbps dedicated connections from two DX partners for each on-premises location. Create
and attach a virtual private gateway for each AWS account VPC. Create a DX gateway in a central network account and associate it with the
virtual private gateways. Create a public virtual interface on each DX connection and associate the interface with the DX gateway.

C. Configure multiple AWS Direct Connect (DX) 10 Gbps dedicated connections from two DX partners for each on-premises location. Create a
transit gateway and a DX gateway in a central network account. Create a transit virtual interface for each DX interface and associate them
with the DX gateway. Create a gateway association between the DX gateway and the transit gateway.

D. Configure multiple AWS Direct Connect (DX) 10 Gbps dedicated connections from a DX partner for each on-premises location. Create and
attach a virtual private gateway for each AWS account VPC. Create a transit gateway in a central network account and associate it with the
virtual private gateways. Create a transit virtual interface on each DX connection and attach the interface to the transit gateway.

Correct Answer: B

Community vote distribution


C (73%) B (27%)

" # aws_master Highly Voted $ 1 year ago


A - no, there is no connection between VPCs.
B - no, bcz DX gateway doesn't support routing from one VPN to another ( https://docs.aws.amazon.com/directconnect/latest/UserGuide/direct-
connect-gateways-intro.html )
C - right answer. https://docs.aws.amazon.com/whitepapers/latest/aws-vpc-connectivity-options/aws-direct-connect-aws-transit-gateway.html
D - no, you can not connect Direct Connect to the Transit gateway without Direct Connect gateway in the middle.
upvoted 27 times

" # Tony_W 1 year ago


One more thing I think I see wrong with B is at the end of the answer. It says to create a PUBLIC Virtual interface on each DX connection.
Wouldnt that be a private virtual interface making this answer wrong from the start? Plus transit gateways can do peering. So C would work
just fine.
upvoted 8 times
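A rough Python (boto3) sketch of the plumbing described in option C: a Direct Connect gateway, a transit virtual interface on one of the dedicated connections, and an association with the transit gateway. The connection ID, VLAN, ASNs, and transit gateway ID are placeholders:

import boto3

dx = boto3.client("directconnect", region_name="us-east-1")

# Direct Connect gateway in the central network account.
dxgw = dx.create_direct_connect_gateway(
    directConnectGatewayName="central-dxgw",
    amazonSideAsn=64512,
)["directConnectGateway"]

# Transit VIF on one of the dedicated connections, attached to the DX gateway.
dx.create_transit_virtual_interface(
    connectionId="dxcon-EXAMPLE",
    newTransitVirtualInterface={
        "virtualInterfaceName": "transit-vif-1",
        "vlan": 100,
        "asn": 65000,
        "directConnectGatewayId": dxgw["directConnectGatewayId"],
    },
)

# Associate the DX gateway with the transit gateway that connects the VPCs.
dx.create_direct_connect_gateway_association(
    directConnectGatewayId=dxgw["directConnectGatewayId"],
    gatewayId="tgw-0123456789abcdef0",
)

The same pattern is repeated for each DX connection; the transit gateway's VPC attachments then give the workload VPCs connectivity with one another.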

" # tvs Highly Voted $ 1 year ago


I go with C https://docs.aws.amazon.com/whitepapers/latest/building-scalable-secure-multi-vpc-network-infrastructure/direct-connect.html
upvoted 5 times

" # MrCarter 1 year ago


awesome link
upvoted 1 times

" # davideccc Most Recent % 3 weeks, 5 days ago


Selected Answer: C
c is the right one
upvoted 1 times

" # Ell89 1 month ago


Selected Answer: C
ill go for C
upvoted 1 times

" # aandc 4 months, 1 week ago


C: All AWS application workloads must be connected to one another -> transit-gateway
upvoted 5 times


" # user89 5 months, 2 weeks ago


C.
The solution involves the following components:

A transit gateway that has VPC attachments.

A Direct Connect gateway.

An association between the Direct Connect gateway and the transit gateway.

A transit virtual interface that is attached to the Direct Connect gateway.


https://docs.aws.amazon.com/directconnect/latest/UserGuide/direct-connect-gateways-intro.html
upvoted 2 times

" # MuskyWilkins 6 months ago


Selected Answer: B
The exact scenario is detailed here: https://docs.aws.amazon.com/directconnect/latest/UserGuide/direct-connect-gateways-intro.html
upvoted 3 times

" # Jonfernz 5 months ago


That link actually suggests C as the answer, bruv.
upvoted 1 times

" # LiamNg 5 months, 3 weeks ago


Agree, the answer should be B. The workloads are in separate accounts.
Refer to "Virtual private gateway associations across accounts" in MuskyWilkins's link
upvoted 1 times

" # Enigmaaaaaa 4 months ago


Where do you see "public virtual interface" in the link?
upvoted 1 times

" # jyrajan69 7 months, 1 week ago


There must be a reason for saying it's confined to one region, so a DX gateway is not required. That leaves A or D, and D is the only option that
allows for communication between VPCs, so the answer is D
upvoted 1 times

" # sb333 3 weeks, 6 days ago


DX Gateway is required when you want to use a TGW (with Transit VIF per DX connection), even if only for a single Region.
upvoted 1 times

" # jj22222 9 months, 1 week ago


Selected Answer: C
C. Configure multiple AWS Direct Connect (DX) 10 Gbps dedicated connections from two DX partners for each on-premises location. Create a
transit gateway and a DX gateway in a central network account. Create a transit virtual interface for each DX interface and associate them with
the DX gateway. Create a gateway association between the DX gateway and the transit gateway.
upvoted 2 times

" # jj22222 9 months, 1 week ago


Selected Answer: C
C looks right
upvoted 2 times

" # Derrick888 10 months, 1 week ago


Selected Answer: C
C. for sure
upvoted 2 times

" # cldy 11 months ago


C. Configure multiple AWS Direct Connect (DX) 10 Gbps dedicated connections from two DX partners for each on-premises location. Create a
transit gateway and a DX gateway in a central network account. Create a transit virtual interface for each DX interface and associate them with
the DX gateway. Create a gateway association between the DX gateway and the transit gateway.
upvoted 1 times

" # AzureDP900 11 months ago


It is C
upvoted 1 times

" # andylogan 1 year ago


It's C
upvoted 1 times


" # Kopa 1 year ago


Im for C. TransitGateway makes sense.
upvoted 1 times

" # tgv 1 year ago


CCC
---
upvoted 1 times

" # blackgamer 1 year ago


C for me
upvoted 1 times


Question #708 Topic 1

A financial company needs to create a separate AWS account for a new digital wallet application. The company uses AWS Organizations to
manage its accounts.
A solutions architect uses the IAM user Support1 from the master account to create a new member account with finance1@example.com as the
email address.
What should the solutions architect do to create IAM users in the new member account?

A. Sign in to the AWS Management Console with AWS account root user credentials by using the 64-character password from the initial AWS
Organizations email sent to finance1@example.com. Set up the IAM users as required.

B. From the master account, switch roles to assume the OrganizationAccountAccessRole role with the account ID of the new member
account. Set up the IAM users as required.

C. Go to the AWS Management Console sign-in page. Choose "Sign in using root account credentials." Sign in by using the email address
finance1@example.com and the master account's root password. Set up the IAM users as required.

D. Go to the AWS Management Console sign-in page. Sign in by using the account ID of the new member account and the Support1 IAM
credentials. Set up the IAM users as required.

Correct Answer: A
Reference:
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_create.html
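A minimal sketch of the role switch described in option B (Python with boto3); the member account ID and user name are placeholders:

import boto3

sts = boto3.client("sts")

# Assume the role that Organizations creates in every member account it creates.
creds = sts.assume_role(
    RoleArn="arn:aws:iam::222233334444:role/OrganizationAccountAccessRole",  # placeholder account ID
    RoleSessionName="setup-iam-users",
)["Credentials"]

# Use the temporary credentials to administer IAM in the new member account.
iam = boto3.client(
    "iam",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
iam.create_user(UserName="finance-app-admin")  # hypothetical user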

Community vote distribution


B (100%)

" # wasabidev Highly Voted $ 1 year, 1 month ago


B.
A wrong because "When you create an account, AWS Organizations initially assigns a long (64 characters), complex, randomly generated
password to the root user. You can't retrieve this initial password. To access the account as the root user for the first time, you must go through
the process for password recovery. "
upvoted 15 times

" # RVivek 8 months, 4 weeks ago


Good explanation. Yes, B is the answer. Additional information: https://aws.amazon.com/premiumsupport/knowledge-center/organizations-member-account-access/
upvoted 1 times

" # vkbajoria Highly Voted $ 1 year ago


it is B for me. It makes more sense
upvoted 6 times

" # dcdcdc3 Most Recent % 1 month, 1 week ago


Not A because an email with password is never sent when creating the account form Orgs
upvoted 1 times

" # dcdcdc3 1 month, 1 week ago


Selected Answer: B
B.
As added value I would always want to reset the root password once, set mfa and vault both. If not, email admins can get themselves access in
the future.
upvoted 2 times

" # Student1950 9 months, 3 weeks ago


A is the correct answer.
Reason: https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_invites.html
When an invited account joins your organization, you do not automatically have full administrator control over the account, unlike created
accounts. If you want the management account to have full administrative control over an invited member account, you must create the
OrganizationAccountAccessRole IAM role in the member account and grant permission to the management account to assume the role.
When you create an account in your organization instead of inviting an existing account to join, AWS Organizations automatically creates an IAM
role (named OrganizationAccountAccessRole by default) that you can use to grant users in the management account administrator access to the
created account.
upvoted 1 times

" # AzureDP900 11 months ago


OrganizationAccountAccessRole is keyword here. B is right


upvoted 2 times

" # AzureDP900 11 months ago


B is right answer
upvoted 2 times

" # cldy 11 months ago


B. From the master account, switch roles to assume the OrganizationAccountAccessRole role with the account ID of the new member account.
Set up the IAM users as required.
upvoted 1 times

" # acloudguru 11 months, 3 weeks ago


Selected Answer: B
When you create a new member account, Organizations sets an initial password for that account that can't be retrieved. To access the account
as the root user for the first time, follow these instructions to reset the initial password.

https://aws.amazon.com/premiumsupport/knowledge-center/organizations-member-account-access/
upvoted 2 times

" # Smartphone 11 months, 3 weeks ago


The correct option is A. Please read the below link for reference.
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_access.html
upvoted 3 times

" # andylogan 1 year ago


It's B
upvoted 1 times

" # Kopa 1 year ago


Im for B, in Neal Davis we practice this often on labs
upvoted 3 times

" # tgv 1 year ago


BBB
---
upvoted 1 times

" # blackgamer 1 year ago


Answer is B
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with B

https://aws.amazon.com/premiumsupport/knowledge-center/organizations-member-account-access/
upvoted 2 times

" # apmpm 1 year ago


It has to be A
upvoted 3 times

" # WhyIronMan 1 year ago


When you create a new member account, Organizations sets an initial password for that account that can't be retrieved. To access the
account as the root user for the first time, follow these instructions to reset the initial password.

https://aws.amazon.com/premiumsupport/knowledge-center/organizations-member-account-access/
upvoted 1 times

" # Waiweng 1 year ago


it's B
upvoted 5 times


Question #709 Topic 1

A company is designing a data processing platform to process a large number of files in an Amazon S3 bucket and store the results in Amazon
DynamoDB.
These files will be processed once and must be retained for 1 year. The company wants to ensure that the original files and resulting data are
highly available in multiple AWS Regions.
Which solution will meet these requirements?

A. Create an S3 CreateObject event notification to copy the file to Amazon Elastic Block Store (Amazon EBS). Use AWS DataSync to sync the
files between EBS volumes in multiple Regions. Use an Amazon EC2 Auto Scaling group in multiple Regions to attach the EBS volumes.
Process the files and store the results in a DynamoDB global table in multiple Regions. Configure the S3 bucket with an S3 Lifecycle policy to
move the files to S3 Glacier after 1 year.

B. Create an S3 CreateObject event notification to copy the file to Amazon Elastic File System (Amazon EFS). Use AWS DataSync to sync the
files between EFS volumes in multiple Regions. Use an AWS Lambda function to process the EFS files and store the results in a DynamoDB
global table in multiple Regions. Configure the S3 buckets with an S3 Lifecycle policy to move the files to S3 Glacier after 1 year.

C. Copy the files to an S3 bucket in another Region by using cross-Region replication. Create an S3 CreateObject event notification on the
original bucket to push S3 file paths into Amazon EventBridge (Amazon CloudWatch Events). Use an AWS Lambda function to poll EventBridge
(CloudWatch Events) to process each file and store the results in a DynamoDB table in each Region. Configure both S3 buckets to use the S3
Standard-Infrequent Access (S3 Standard-IA) storage class and an S3 Lifecycle policy to delete the files after 1 year.

D. Copy the files to an S3 bucket in another Region by using cross-Region replication. Create an S3 CreateObject event notification on the
original bucket to execute an AWS Lambda function to process each file and store the results in a DynamoDB global table in multiple Regions.
Configure both S3 buckets to use the S3 Standard-Infrequent Access (S3 Standard-IA) storage class and an S3 Lifecycle policy to delete the
files after 1 year.

Correct Answer: A

Community vote distribution


D (100%)

" # lostre Highly Voted $ 1 year, 1 month ago


D is my choice: S3 CRR, the S3 CreateObject event triggers Lambda, results go to a DynamoDB global table, S3-IA and a lifecycle rule to delete after 1 year (see the sketch below)
upvoted 26 times
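A minimal sketch of the Lambda side of option D (Python). The S3 event carries the new object's bucket and key, and a result item is written to a hypothetical DynamoDB global table named ProcessingResults:

import boto3

s3 = boto3.client("s3")
table = boto3.resource("dynamodb").Table("ProcessingResults")  # hypothetical global table

def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        # Placeholder "processing": store the object key and its size.
        table.put_item(Item={"FileKey": key, "ByteCount": len(body)})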

" # Waiweng Highly Voted $ 1 year, 1 month ago


it's D
upvoted 8 times

" # CloudHell Most Recent % 4 months, 3 weeks ago


I think it's D, and If it's not D then I'll get it wrong in the exam.
upvoted 1 times

" # cldy 11 months ago


D. Copy the files to an S3 bucket in another Region by using cross-Region replication. Create an S3 CreateObject event notification on the
original bucket to execute an AWS Lambda function to process each file and store the results in a DynamoDB global table in multiple Regions.
Configure both S3 buckets to use the S3 Standard-Infrequent Access (S3 Standard-IA) storage class and an S3 Lifecycle policy to delete the files
after 1 year.
upvoted 2 times

" # AzureDP900 11 months ago


D is right answer. We don't need files after 1 year.
upvoted 1 times

" # acloudguru 11 months, 2 weeks ago


Selected Answer: D
D is my choice. S3 CRR, C3 create triggers lambda, stores results in Dynamo DB Global, S3IA and lifecycle to delete after 1 year
upvoted 1 times

" # andylogan 1 year ago


It's B
upvoted 1 times


" # andylogan 1 year ago


typo It's D
upvoted 1 times
" # DerekKey 1 year ago
Have you noticed the requirement:
"original files ..... available in multiple AWS Regions"
How will you make it work with C & D?
"Copy the files to an S3 bucket in another Region by using cross-Region replication"
"Configure both S3 buckets"
--> Therefore C&D should be WRONG
The only solution that will work is B.
upvoted 1 times

" # Viper57 1 year ago


By your own logic question B is also wrong because it relies on a 'CreateObject' event. This assumes new objects will be created in the S3
bucket which also means D can be correct.
upvoted 1 times

" # Viper57 1 year ago


Cross region replication also supports replicating existing objects.

https://aws.amazon.com/blogs/storage/replicating-existing-objects-between-s3-buckets/
upvoted 1 times

" # Kopa 1 year ago


Im going for D.
upvoted 1 times

" # tgv 1 year ago


DDD
---
upvoted 1 times

" # blackgamer 1 year ago


It is D
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with D
upvoted 1 times

" # vimgoru24 1 year ago


A is nonsense, D is the way to go
upvoted 1 times

" # KnightVictor 1 year, 1 month ago


D is correct.
Default answer A is incorrect. Glacier cannot give data back immediately
upvoted 3 times

" # champcloud 1 year, 1 month ago


Going with D
upvoted 2 times

" # nitinz 1 year, 1 month ago


D is correct
upvoted 5 times

" # wasabidev 1 year, 1 month ago


I think D is better than A
upvoted 6 times


Question #710 Topic 1

A company is running an Apache Hadoop cluster on Amazon EC2 instances. The Hadoop cluster stores approximately 100 TB of data for weekly
operational reports and allows occasional access for data scientists to retrieve data. The company needs to reduce the cost and operational
complexity for storing and serving this data.
Which solution meets these requirements in the MOST cost-effective manner?

A. Move the Hadoop cluster from EC2 instances to Amazon EMR. Allow data access patterns to remain the same.

B. Write a script that resizes the EC2 instances to a smaller instance type during downtime and resizes the instances to a larger instance type
before the reports are created.

C. Move the data to Amazon S3 and use Amazon Athena to query the data for reports. Allow the data scientists to access the data directly in
Amazon S3.

D. Migrate the data to Amazon DynamoDB and modify the reports to fetch data from DynamoDB. Allow the data scientists to access the data
directly in DynamoDB.

Correct Answer: C

Community vote distribution


C (50%) A (50%)

" # kejam Highly Voted $ 1 year, 1 month ago


C: S3 and Athena. "The company needs to reduce the cost and operational complexity for storing and serving this data. Which solution meets
these requirements in the MOST cost-effective manner?" EMR storage is ephemeral. The company has 100TB that need to persist, they would
have to use EMRFS to backup to S3 anyway.
https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-plan-storage.html
upvoted 25 times

" # AWSum1 1 year ago


Great explanation. I suppose they deliberately put in EMR to confuse you into thinking it solves the Hadoop problem
upvoted 3 times
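To illustrate the S3 + Athena approach in option C, a short Python (boto3) sketch that defines an external table over the data copied out of HDFS and runs a report-style query. The database, table, schema, and S3 locations are invented for illustration, and the "ops" database is assumed to already exist:

import boto3

athena = boto3.client("athena", region_name="us-east-1")
results = {"OutputLocation": "s3://example-athena-results/"}

# Hypothetical external table over the data migrated from the Hadoop cluster into S3.
ddl = """
CREATE EXTERNAL TABLE IF NOT EXISTS ops.weekly_events (
    event_id string,
    event_time timestamp,
    amount double
)
STORED AS PARQUET
LOCATION 's3://example-hadoop-archive/events/'
"""
athena.start_query_execution(QueryString=ddl, ResultConfiguration=results)

# The weekly report and the data scientists' ad hoc queries run against the same table.
athena.start_query_execution(
    QueryString="SELECT date_trunc('week', event_time) AS wk, sum(amount) FROM ops.weekly_events GROUP BY 1",
    ResultConfiguration=results,
)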

" # doris0306 Highly Voted $ 1 year, 1 month ago


A - EMR helps create Hadoop clusters to analyse vast amounts of data
upvoted 7 times

" # WhyIronMan 1 year ago


but is not cost effective
upvoted 2 times

" # davideccc Most Recent % 3 weeks, 5 days ago


Selected Answer: C
athena + S3 is definitely the cheaper option here
upvoted 1 times

" # JohnPi 4 weeks, 1 day ago


Selected Answer: C
Move the data to Amazon S3 and use Amazon Athena to query the data for reports. Allow the data scientists to access the data directly in
Amazon S3.
upvoted 1 times

" # dcdcdc3 1 month, 1 week ago


Selected Answer: A
Per the below article, EMR is way cheaper than EC2. I would choose A as I am not sure if the structure of the data can be queried by Athena in a
cost-effective way

https://blogs.perficient.com/2016/05/19/two-choices-1-amazon-emr-or-2-hadoop-on-ec2/
upvoted 1 times

" # chase12345 1 month, 3 weeks ago


I will choose A AWS EMR because Amazon EMR makes it simple and cost effective to run highly distributed processing frameworks such as
Hadoop, Spark, and Presto when compared to on-premises
https://docs.aws.amazon.com/athena/latest/ug/when-should-i-use-ate.html
upvoted 1 times


" # AYANtheGLADIATOR 2 months, 2 weeks ago


C is the answer because EMR is not a cheap option.
upvoted 2 times

" # MarkChoi 3 months, 2 weeks ago


Selected Answer: A
100TB??
Is it possible to use Athena?
I'll go with A
upvoted 1 times

" # AzureDP900 11 months ago


I agree with C as right answer.
upvoted 1 times

" # cldy 11 months ago


C. Move the data to Amazon S3 and use Amazon Athena to query the data for reports. Allow the data scientists to access the data directly in
Amazon S3.
upvoted 2 times

" # andylogan 1 year ago


It's C
upvoted 1 times

" # DerekKey 1 year ago


100TB
EBS - 8.109$
S3 - 2.355$
You have saved 5.752$
This amount can be used for Athena. BTW, we don't know the indexes or the amount of data that is scanned. What we know is that it will be: "occasional
access for data scientists to retrieve data"

I am choosing C as CORRECT answer


upvoted 2 times

" # blackgamer 1 year ago


C over A because it is most cost effective.
upvoted 1 times

" # denccc 1 year ago


It's C
upvoted 1 times

" # Tony_W 1 year ago


Athena is 5.00 per TB of data scanned.
Scenario A:
10 queries on 5TB of uncompressed data stored in an S3 bucket. 250.00
10 queries on 100TB of uncompressed data ... 5000.00 for just 10 queries.
So long story short, Redshift. lol But since they are asking for the MOST cost-effective, I would have to say A. Even though I do see issues with it.
And yes, you would back end this with S3 but it just assumes you know that.
upvoted 2 times

" # DerekKey 1 year ago


You are wrong. Check my answer.
upvoted 2 times

" # vimgoru24 1 year ago


The proper answer should be: move the stuff to Redshift and use it to query / generate reports.
upvoted 1 times

" # DerekKey 1 year ago


This is not most cost-effective solution
upvoted 1 times

" # vimgoru24 1 year ago


All of them nonsense. A does address only part of the problem, and C would fail performance-wise in real world - try query 100 TB with Athena.
upvoted 5 times


Question #711 Topic 1

A company is building a sensor data collection pipeline in which thousands of sensors write data to an Amazon Simple Queue Service (Amazon
SQS) queue every minute. The queue is processed by an AWS Lambda function that extracts a standard set of metrics from the sensor data. The
company wants to send the data to Amazon CloudWatch. The solution should allow for viewing individual and aggregate sensor metrics and
interactively querying the sensor log data using
CloudWatch Logs Insights.
What is the MOST cost-effective solution that meets these requirements?

A. Write the processed data to CloudWatch Logs in the CloudWatch embedded metric format.

B. Write the processed data to CloudWatch Logs. Then write the data to CloudWatch by using the PutMetricData API call.

C. Write the processed data to CloudWatch Logs in a structured format. Create a CloudWatch metric filter to parse the logs and publish the
metrics to CloudWatch with dimensions to uniquely identify a sensor.

D. Con+gure the CloudWatch Logs agent for AWS Lambda. Output the metrics for each sensor in statsd format with tags to uniquely identify a
sensor. Write the processed data to CloudWatch Logs.

Correct Answer: C

Community vote distribution


A (100%)

" # sek12324 Highly Voted $ 1 year, 1 month ago


A
The CloudWatch embedded metric format is a JSON specification used to instruct CloudWatch Logs to automatically extract metric values
embedded in structured log events. You can use CloudWatch to graph and create alarms on the extracted metric values.
upvoted 20 times
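A small sketch of option A in Python: the Lambda function simply prints a JSON line in the embedded metric format, so CloudWatch extracts the metric automatically while the raw event stays queryable with Logs Insights. The namespace and field names are illustrative:

import json
import time

def emit_sensor_metric(sensor_id, temperature):
    # One EMF-formatted log line: the "_aws" block tells CloudWatch Logs which
    # fields to extract as metrics; the rest remains searchable log data.
    print(json.dumps({
        "_aws": {
            "Timestamp": int(time.time() * 1000),
            "CloudWatchMetrics": [{
                "Namespace": "SensorPipeline",
                "Dimensions": [["SensorId"]],
                "Metrics": [{"Name": "Temperature", "Unit": "None"}],
            }],
        },
        "SensorId": sensor_id,
        "Temperature": temperature,
    }))

emit_sensor_metric("sensor-0001", 23.5)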

" # kangtamo Most Recent % 5 months ago


Selected Answer: A
Go with A
upvoted 1 times

" # AzureDP900 11 months ago


A is right

https://aws.amazon.com/about-aws/whats-new/2019/11/amazon-cloudwatch-launches-embedded-metric-format/
upvoted 1 times

" # andylogan 1 year ago


It's A
upvoted 1 times

" # tgv 1 year ago


AAA
---
upvoted 1 times

" # blackgamer 1 year ago


A is the answer.
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with A
upvoted 2 times

" # Waiweng 1 year ago


it's A
upvoted 3 times

" # Kelvin 1 year, 1 month ago


A is correct. https://aws.amazon.com/about-aws/whats-new/2019/11/amazon-cloudwatch-launches-embedded-metric-format/
upvoted 2 times

" # Pupu86 1 year, 1 month ago


A.
https://aws.amazon.com/about-aws/whats-new/2019/11/amazon-cloudwatch-launches-embedded-metric-format/
upvoted 3 times

" # champcloud 1 year, 1 month ago


Going with A
upvoted 1 times


Question #712 Topic 1

A car rental company has built a serverless REST API to provide data to its mobile app. The app consists of an Amazon API Gateway API with a
Regional endpoint, AWS Lambda functions, and an Amazon Aurora MySQL Serverless DB cluster. The company recently opened the API to mobile
apps of partners. A significant increase in the number of requests resulted, causing sporadic database memory errors. Analysis of the API traffic
indicates that clients are making multiple HTTP GET requests for the same queries in a short period of time. Traffic is concentrated during
business hours, with spikes around holidays and other events.
The company needs to improve its ability to support the additional usage while minimizing the increase in costs associated with the solution.
Which strategy meets these requirements?

A. Convert the API Gateway Regional endpoint to an edge-optimized endpoint. Enable caching in the production stage.

B. Implement an Amazon ElastiCache for Redis cache to store the results of the database calls. Modify the Lambda functions to use the
cache.

C. Modify the Aurora Serverless DB cluster configuration to increase the maximum amount of available memory.

D. Enable throttling in the API Gateway production stage. Set the rate and burst values to limit the incoming calls.

Correct Answer: A
Reference:
https://aws.amazon.com/getting-started/projects/build-serverless-web-app-lambda-apigateway-s3-dynamodb-cognito/module-4/

Community vote distribution


A (100%)

" # certainly Highly Voted $ 1 year, 1 month ago


it is A for me. You can enable caching on API Gateway; no need for an extra cache layer at additional cost. Also, changing to an edge-optimized
endpoint would help with caching content based on cookies: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Cookies.html
upvoted 13 times

" # certainly 1 year, 1 month ago


https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-api-endpoint-types.html
upvoted 3 times

" # rahulrajtiwari Most Recent % 1 week, 2 days ago


Selected Answer: A
it's A
upvoted 1 times

" # tkanmani76 9 months, 4 weeks ago


Was inclined to B but it does add additional cost for Elasticache. Using API gateway caching helps.

You can enable API caching in Amazon API Gateway to cache your endpoint's responses. With caching, you can reduce the number of calls
made to your endpoint and also improve the latency of requests to your API.

When you enable caching for a stage, API Gateway caches responses from your endpoint for a specified time-to-live (TTL) period, in seconds.
API Gateway then responds to the request by looking up the endpoint response from the cache instead of making a request to your endpoint.
The default TTL value for API caching is 300 seconds. The maximum TTL value is 3600 seconds. TTL=0 means caching is disabled.
upvoted 1 times
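For illustration, a Python (boto3) sketch of enabling stage-level caching as in option A; the API ID, stage name, cache size, and TTL are placeholders:

import boto3

apigw = boto3.client("apigateway", region_name="us-east-1")

# Enable a 0.5 GB cache cluster on the production stage and set a 5-minute TTL for all methods.
apigw.update_stage(
    restApiId="a1b2c3d4e5",  # placeholder API ID
    stageName="prod",
    patchOperations=[
        {"op": "replace", "path": "/cacheClusterEnabled", "value": "true"},
        {"op": "replace", "path": "/cacheClusterSize", "value": "0.5"},
        {"op": "replace", "path": "/*/*/caching/ttlInSeconds", "value": "300"},
    ],
)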

" # cldy 11 months ago


A. Convert the API Gateway Regional endpoint to an edge-optimized endpoint. Enable caching in the production stage.
upvoted 1 times

" # AzureDP900 11 months ago


A is for sure
upvoted 1 times

" # andylogan 1 year ago


It's A
upvoted 1 times

" # tgv 1 year ago


AAA
---


upvoted 2 times

" # denccc 1 year ago


It's A
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with A
upvoted 2 times

" # vimgoru24 1 year ago


B is classic, but A works better to minimize cost and overhead. So A
upvoted 2 times

" # cmthiru 1 year ago


Will go for B.
https://aws.amazon.com/blogs/database/latency-reduction-of-hybrid-architectures-with-amazon-elasticache/
upvoted 1 times

" # Waiweng 1 year ago


it's A
upvoted 2 times

" # ppshein 1 year, 1 month ago


it should be A.
B = costly
upvoted 1 times

" # anandbabu 1 year, 1 month ago


its B as it was mentioned "same Queries"
upvoted 3 times

" # Pupu86 1 year, 1 month ago


The default endpoint setting is usually edge-optimised to support cookie forwarding, which in turn caches multiple identical reads in its
CloudFront cache, thus reducing the need to keep fetching identical data from the origin. While B is viable, implementing persistent memory-based
caching is definitely more costly. So I'm inclined to move towards A.
upvoted 2 times

" # SD13 1 year, 1 month ago


Correct option A. B is not an option for me because API gateway caching will eliminate lambda invocations and costs associated with it.
upvoted 3 times

" # SD13 1 year, 1 month ago


Neal Davis exam also selected this option as the correct one.
upvoted 4 times

" # MrCarter 1 year ago


yessir Neal Davis ftw
upvoted 3 times

" # kejam 1 year, 1 month ago


B: https://aws.amazon.com/getting-started/hands-on/real-time-leaderboard-amazon-aurora-serverless-elasticache/
upvoted 1 times


Question #713 Topic 1

A company has application services that have been containerized and deployed on multiple Amazon EC2 instances with public IPs. An Apache
Kafka cluster has been deployed to the EC2 instances. A PostgreSQL database has been migrated to Amazon RDS for PostgreSQL. The company
expects a significant increase of orders on its platform when a new version of its flagship product is released.
What changes to the current architecture will reduce operational overhead and support the product release?

A. Create an EC2 Auto Scaling group behind an Application Load Balancer. Create additional read replicas for the DB instance. Create Amazon
Kinesis data streams and configure the application services to use the data streams. Store and serve static content directly from Amazon S3.

B. Create an EC2 Auto Scaling group behind an Application Load Balancer. Deploy the DB instance in Multi-AZ mode and enable storage auto
scaling. Create Amazon Kinesis data streams and configure the application services to use the data streams. Store and serve static content
directly from Amazon S3.

C. Deploy the application on a Kubernetes cluster created on the EC2 instances behind an Application Load Balancer. Deploy the DB instance
in Multi-AZ mode and enable storage auto scaling. Create an Amazon Managed Streaming for Apache Kafka cluster and configure the
application services to use the cluster. Store static content in Amazon S3 behind an Amazon CloudFront distribution.

D. Deploy the application on Amazon Elastic Kubernetes Service (Amazon EKS) with AWS Fargate and enable auto scaling behind an
Application Load Balancer. Create additional read replicas for the DB instance. Create an Amazon Managed Streaming for Apache Kafka
cluster and configure the application services to use the cluster. Store static content in Amazon S3 behind an Amazon CloudFront distribution.

Correct Answer: B

Community vote distribution


D (86%) 14%

" # kalyan_krishna742020 Highly Voted $ 1 year, 1 month ago


D seems to be correct
upvoted 31 times

" # Liongeek Highly Voted $ 1 year ago


For those that don't know if the answer is C or D: they say they want to support the increased demand... Multi-AZ doesn't help with that; Multi-AZ
is meant for HA, but read replicas are meant for increased demand... that's why the answer is D
upvoted 13 times
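A one-call sketch of the read-replica part of option D (Python with boto3); the instance identifiers are placeholders:

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Add a read replica to absorb the extra read traffic expected from the product release.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="orders-db-replica-1",
    SourceDBInstanceIdentifier="orders-db",
)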

" # cen007 Most Recent % 3 months, 2 weeks ago


Selected Answer: D
Ans D.
Solution is here.
https://aws.amazon.com/blogs/big-data/power-your-kafka-streams-application-with-amazon-msk-and-aws-fargate/
Fargate+Amazon MSK

A. Running container on EC2 is more expensive compared to Fargate


B. Same as above + higher Multi AZ costs.
C. same as B.
upvoted 2 times

" # Student1950 3 months, 3 weeks ago


I would vote for C.
RDS support auto-scaling -> less operating cost
https://aws.amazon.com/about-aws/whats-new/2019/06/rds-storage-auto-scaling/#:~:text=RDS%20Storage%20Auto%20Scaling%20continuously,in%20the%20AWS%20Management%20Console.
RDS-Multi-AZ- To support sync R/W in case of failures since more write load for DB with new orders
upvoted 3 times

" # hahaaaaa 3 months, 3 weeks ago


it's C
- B : does not mention containers
- D : Fargate is a managed service; no EC2 to manage
upvoted 1 times

" # ArreRaja 2 months ago


AWS Fargate with Amazon EKS is available in all Amazon EKS Regions except AWS GovCloud (US-East) and AWS GovCloud (US-West).
from https://docs.aws.amazon.com/eks/latest/userguide/fargate.html
upvoted 1 times


" # Niaj 5 months, 3 weeks ago


Selected Answer: D
D is the right answer here
upvoted 1 times

" # Sonujunko 8 months ago


Cannot be D, as Fargate is part of ECS and not EKS
B - looks correct
1. Cost saving with ASG
2. Storage Auto Scaling for the volume
3. Multi-AZ for better support/availability
upvoted 2 times

" # wsyh 6 months, 3 weeks ago


Amazon EKS supports running Kubernetes pods on AWS Fargate
https://docs.aws.amazon.com/eks/latest/userguide/fargate.html
upvoted 1 times

" # Hasitha99 7 months ago


Fargate support for both ECS and EKS.
Ref : https://docs.aws.amazon.com/eks/latest/userguide/fargate.html
upvoted 2 times

" # omishaaaa 8 months, 1 week ago


D seems right to me as well since `deploying the DB instance in Multi-AZ mode and enable storage auto scaling` seems like a huge manual effort.
DBs are not advised to have storage autoscaling.
upvoted 1 times

" # Mechanic 8 months, 1 week ago


Selected Answer: C
I guess it's C.
Not B, because Kinesis and S3 add extra costs, in addition to the burden of modifying the apps to use Kinesis.
Not D, because EKS will cost more.
upvoted 1 times

" # RVivek 9 months, 2 weeks ago


C -- latency-based routing to direct users to the Region closest to them, and failover for resiliency
E -- DB replication instance backup
upvoted 1 times

" # RVivek 9 months, 1 week ago


Sorry, commented on the wrong question and unable to delete
upvoted 1 times

" # faisalshani001 10 months, 3 weeks ago


D seems right. Because I think auto-storage scaling is a feature of DynamoDB and/or Aurora.
upvoted 1 times

" # AzureDP900 11 months ago


D seems the right option; my only concern is read/write capacity, which is not mentioned in this option, given that the question describes a big rise in order volume
upvoted 1 times

" # cldy 11 months, 1 week ago


D. Deploy the application on Amazon Elastic Kubernetes Service (Amazon EKS) with AWS Fargate and enable auto scaling behind an Application
Load Balancer. Create additional read replicas for the DB instance. Create an Amazon Managed Streaming for Apache Kafka cluster and
configure the application services to use the cluster. Store static content in Amazon S3 behind an Amazon CloudFront distribution.
upvoted 1 times

" # cakriwut 11 months, 2 weeks ago


Selected Answer: D
D - better product release support, using PROD v1 and PROD v2 namespace in kubernetes to support product release lifecycle; better scalability
with lower cost of maintaining own fleet of ec2
upvoted 3 times

" # backfringe 11 months, 2 weeks ago


I go with D
upvoted 1 times

" # andylogan 1 year ago


It's D
upvoted 1 times


" # tgv 1 year ago


DDD
---
upvoted 1 times


Question #714 Topic 1

A company recently completed a large-scale migration to AWS. Development teams that support various business units have their own accounts
in AWS
Organizations. A central cloud team is responsible for controlling which services and resources can be accessed, and for creating operational
strategies for all teams within the company. Some teams are approaching their account service quotas. The cloud team needs to create an
automated and operationally efficient solution to proactively monitor service quotas. Monitoring should occur every 15 minutes and send alerts
when a team exceeds 80% utilization.
Which solution will meet these requirements?

A. Create a scheduled AWS Config rule to trigger an AWS Lambda function to call the GetServiceQuota API. If any service utilization is above
80%, publish a message to an Amazon Simple Notification Service (Amazon SNS) topic to alert the cloud team. Create an AWS
CloudFormation template and deploy the necessary resources to each account.

B. Create an Amazon EventBridge (Amazon CloudWatch Events) rule that triggers an AWS Lambda function to refresh the AWS Trusted Advisor
service limits checks and retrieve the most current utilization and service limit data. If the current utilization is above 80%, publish a message
to an Amazon Simple Notification Service (Amazon SNS) topic to alert the cloud team. Create AWS CloudFormation StackSets that deploy the
necessary resources to all Organizations accounts.

C. Create an Amazon CloudWatch alarm that triggers an AWS Lambda function to call the Amazon CloudWatch GetInsightRuleReport API to
retrieve the most current utilization and service limit data. If the current utilization is above 80%, publish an Amazon Simple Email Service
(Amazon SES) notification to alert the cloud team. Create AWS CloudFormation StackSets that deploy the necessary resources to all
Organizations accounts.

D. Create an Amazon EventBridge (Amazon CloudWatch Events) rule that triggers an AWS Lambda function to refresh the AWS Trusted Advisor
service limits checks and retrieve the most current utilization and service limit data. If the current utilization is above 80%, use Amazon
Pinpoint to send an alert to the cloud team. Create an AWS CloudFormation template and deploy the necessary resources to each account.

Correct Answer: A
Reference:
https://aws.amazon.com/solutions/implementations/limit-monitor/
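As a sketch of the Lambda body that option B describes (Python with boto3): it refreshes the Trusted Advisor Service Limits check and alerts on anything the check flags (the check flags limits above 80% utilization). The SNS topic ARN is a placeholder, the refresh is asynchronous, and the Support API requires a Business or Enterprise support plan:

import boto3

support = boto3.client("support", region_name="us-east-1")
sns = boto3.client("sns", region_name="us-east-1")

def handler(event, context):
    # Look up the Service Limits check instead of hard-coding its ID.
    checks = support.describe_trusted_advisor_checks(language="en")["checks"]
    check_id = next(c["id"] for c in checks if c["name"] == "Service Limits")

    # Request a refresh (asynchronous) and read the latest available result.
    support.refresh_trusted_advisor_check(checkId=check_id)
    result = support.describe_trusted_advisor_check_result(
        checkId=check_id, language="en"
    )["result"]

    flagged = [r for r in result.get("flaggedResources", []) if r["status"] != "ok"]
    if flagged:
        sns.publish(
            TopicArn="arn:aws:sns:us-east-1:111122223333:quota-alerts",  # placeholder
            Message=f"{len(flagged)} service limits above the warning threshold",
        )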

Community vote distribution


B (100%)

" # SD13 Highly Voted $ 1 year, 1 month ago


B is correct.
A: Lambda should be invoked by cloudwatch on a schedule, not by Config.
C & D : does not make much sense to me.
upvoted 14 times

" # certainly 1 year, 1 month ago


agree with B
upvoted 3 times

" # KnightVictor Highly Voted $ 1 year ago


Going with B. Verified this in Neal Davis sample questions
upvoted 5 times

" # 9xnine 1 year ago


What's Neal Davis sample questions?
upvoted 1 times

" # Chibuzo1 1 year ago


Google Neal Davis and you will find out
upvoted 3 times

" # bobsmith2000 Most Recent % 5 months, 2 weeks ago


Selected Answer: B
No-brainer
upvoted 2 times

" # pititcu667 9 months, 4 weeks ago


Selected Answer: B
i agree it's b
upvoted 1 times
" # cldy 10 months, 1 week ago
B is correct.
upvoted 1 times

" # Tan0k 11 months ago


Selected Answer: B
BBB

...
upvoted 1 times

" # AzureDP900 11 months ago


I will go with B. Amzon Pinpoint is not relevant to this question

Amazon Pinpoint is a flexible and scalable outbound and inbound marketing communications service. You can connect with customers over
channels like email, SMS, push, voice or in-app messaging. Amazon Pinpoint is easy to set up, easy to use, and is flexible for all marketing
communication scenarios. Segment your campaign audience for the right customer and personalize your messages with the right content.
Delivery and campaign metrics in Amazon Pinpoint measure the success of your communications. Amazon Pinpoint can grow with you and
scales globally to billions of messages per day across channels.
upvoted 1 times

" # andylogan 1 year ago


It's B.
Service Limits ---> Trusted Advisor
upvoted 1 times

" # Kopa 1 year ago


B for sure, Trusted Advisor
upvoted 2 times

" # tgv 1 year ago


BBB
---
upvoted 1 times

" # blackgamer 1 year ago


B, trusted advisor to check service limits.
upvoted 3 times

" # WhyIronMan 1 year ago


I'll go with B
upvoted 3 times

" # Amitv2706 1 year ago


B.
Service Limits ---> Trusted Advisor
upvoted 1 times

" # Waiweng 1 year ago


it's B
upvoted 4 times

" # ladh 1 year ago


why not c?
upvoted 2 times

" # gsw 1 year ago


The answer is A - service quotas were introduced in 2019 so B is not correct
upvoted 1 times

" # kejam 1 year ago


B: https://docs.aws.amazon.com/solutions/latest/limit-monitor/architecture.html
upvoted 2 times
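
For illustration, a minimal sketch of what the Lambda function in option B could look like, assuming Python/boto3, a placeholder SNS topic ARN, and the well-known Trusted Advisor "Service Limits" check ID (treat both values as placeholders). The Support API behind Trusted Advisor is only available in us-east-1, and the refresh call is asynchronous, so each scheduled run effectively reads the result of the previous refresh.

import boto3

SERVICE_LIMITS_CHECK_ID = "eW7HH0l7J9"  # assumed Trusted Advisor "Service Limits" check ID (placeholder)
ALERT_TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:service-quota-alerts"  # placeholder

support = boto3.client("support", region_name="us-east-1")  # Support API lives in us-east-1
sns = boto3.client("sns")

def handler(event, context):
    # Kick off a refresh (asynchronous), then read the latest available result.
    support.refresh_trusted_advisor_check(checkId=SERVICE_LIMITS_CHECK_ID)
    result = support.describe_trusted_advisor_check_result(
        checkId=SERVICE_LIMITS_CHECK_ID, language="en"
    )["result"]

    breaches = []
    for resource in result.get("flaggedResources", []):
        meta = resource.get("metadata") or []
        # Assumed metadata layout for this check:
        # [region, service, limit name, limit amount, current usage, status]
        if len(meta) < 5:
            continue
        region, service, limit_name, limit, usage = meta[:5]
        try:
            utilization = float(usage) / float(limit)
        except (TypeError, ValueError, ZeroDivisionError):
            continue  # skip limits reported as "Unlimited" or with missing data
        if utilization >= 0.8:
            breaches.append(f"{service} '{limit_name}' in {region}: {usage}/{limit}")

    if breaches:
        sns.publish(
            TopicArn=ALERT_TOPIC_ARN,
            Subject="Service quota utilization above 80%",
            Message="\n".join(breaches),
        )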


Question #715 Topic 1

An AWS customer has a web application that runs on premises. The web application fetches data from a third-party API that is behind a firewall.
The third party accepts only one public CIDR block in each client's allow list.
The customer wants to migrate their web application to the AWS Cloud. The application will be hosted on a set of Amazon EC2 instances behind
an Application
Load Balancer (ALB) in a VPC. The ALB is located in public subnets. The EC2 instances are located in private subnets. NAT gateways provide
internet access to the private subnets.
How should a solutions architect ensure that the web application can continue to call the third-party API after the migration?

A. Associate a block of customer-owned public IP addresses to the VPC. Enable public IP addressing for public subnets in the VPC.

B. Register a block of customer-owned public IP addresses in the AWS account. Create Elastic IP addresses from the address block and
assign them to the NAT gateways in the VPC.

C. Create Elastic IP addresses from the block of customer-owned IP addresses. Assign the static Elastic IP addresses to the ALB.

D. Register a block of customer-owned public IP addresses in the AWS account. Set up AWS Global Accelerator to use Elastic IP addresses
from the address block. Set the ALB as the accelerator endpoint.

Correct Answer: D

Community vote distribution


B (100%)

" # heyheyhei Highly Voted $ 1 year, 1 month ago


B should be correct. When EC2 instances reach third-party API through internet, their privates IP addresses will be masked by NAT Gateway
public IP address.
upvoted 21 times

" # wasabidev Highly Voted $ 1 year, 1 month ago


B. instances in private subnets use NAT, not ELB
upvoted 9 times

" # Mimek Most Recent % 7 months, 1 week ago


Selected Answer: B
B IMHO
upvoted 2 times

" # tkanmani76 9 months, 2 weeks ago


D also works.
upvoted 1 times

" # Sonujunko 8 months, 2 weeks ago


The ALB is for inbound connections, not outbound. EC2 will respond to sessions coming through the ALB on ports 80 and 443; for the EC2 instances to initiate anything toward a third-party source on the internet, they need a NAT gateway.
upvoted 1 times

" # cldy 10 months, 1 week ago


B is correct.
upvoted 1 times

" # AzureDP900 11 months ago


B seems right choice.
upvoted 1 times

" # andylogan 1 year ago


It's B
upvoted 1 times

" # tgv 1 year ago


BBB
---
upvoted 1 times

" # blackgamer 1 year ago


B obviously.
upvoted 1 times
" # WhyIronMan 1 year ago
I'll go with B
upvoted 4 times

" # zapper1234 1 year ago


Why not C? C is the only answer that meets the solution criteria by only have "one" IP.
upvoted 1 times

" # vimgoru24 1 year ago


There is no such criteria. The criteria is to have calling IP in the predefined CIDR range. The B covers that.
upvoted 1 times

" # Viper57 1 year ago


You cannot assign an elastic IP address to an ALB, only NLBs.
upvoted 3 times

" # hk436 1 year, 1 month ago


B is my answer!!
upvoted 1 times

" # Waiweng 1 year, 1 month ago


it's B
upvoted 4 times

" # blackgamer 1 year, 1 month ago


Agree with B. NAT.
upvoted 2 times

" # SD13 1 year, 1 month ago


B looks like a good option compared to others.
upvoted 2 times

" # kejam 1 year, 1 month ago


B: https://aws.amazon.com/blogs/networking-and-content-delivery/introducing-bring-your-own-ip-byoip-for-amazon-vpc/
upvoted 4 times
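
Once the customer-owned block has been provisioned with BYOIP, option B amounts to two EC2 API calls. A rough sketch with boto3 (the pool ID and subnet ID are placeholders):

import boto3

ec2 = boto3.client("ec2")

# Allocate an Elastic IP from the customer-owned (BYOIP) address pool.
allocation = ec2.allocate_address(
    Domain="vpc",
    PublicIpv4Pool="ipv4pool-ec2-0123456789abcdef0",  # placeholder BYOIP pool ID
)

# Attach that Elastic IP to a NAT gateway in a public subnet; outbound calls from the
# private subnets are then sourced from the customer-owned address the third party allow-lists.
nat_gw = ec2.create_nat_gateway(
    SubnetId="subnet-0123456789abcdef0",  # placeholder public subnet
    AllocationId=allocation["AllocationId"],
)
print(nat_gw["NatGateway"]["NatGatewayId"], allocation["PublicIp"])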


Question #716 Topic 1

A company is using AWS Organizations to manage multiple AWS accounts. For security purposes, the company requires the creation of an
Amazon Simple
Notification Service (Amazon SNS) topic that enables integration with a third-party alerting system in all the Organizations member accounts.
A solutions architect used an AWS CloudFormation template to create the SNS topic and stack sets to automate the deployment of
CloudFormation stacks.
Trusted access has been enabled in Organizations.
What should the solutions architect do to deploy the CloudFormation StackSets in all AWS accounts?

A. Create a stack set in the Organizations member accounts. Use service-managed permissions. Set deployment options to deploy to an
organization. Use CloudFormation StackSets drift detection.

B. Create stacks in the Organizations member accounts. Use self-service permissions. Set deployment options to deploy to an organization.
Enable the CloudFormation StackSets automatic deployment.

C. Create a stack set in the Organizations master account. Use service-managed permissions. Set deployment options to deploy to the
organization. Enable CloudFormation StackSets automatic deployment.

D. Create stacks in the Organizations master account. Use service-managed permissions. Set deployment options to deploy to the
organization. Enable CloudFormation StackSets drift detection.

Correct Answer: C
Reference:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-orgs-manage-auto-deployment.html

Community vote distribution


C (100%)

" # lostre Highly Voted $ 1 year, 1 month ago


C (master -> members, service-managed permissions, deploy to org, auto deployment)
https://aws.amazon.com/blogs/aws/use-cloudformation-stacksets-to-provision-resources-across-multiple-aws-accounts-and-regions/
upvoted 16 times

" # jj22222 Most Recent % 9 months ago


C looks right
upvoted 1 times

" # cldy 10 months, 1 week ago


C is correct.
upvoted 1 times

" # cldy 11 months ago


C. Create a stack set in the Organizations master account. Use service-managed permissions. Set deployment options to deploy to the
organization. Enable CloudFormation StackSets automatic deployment.
upvoted 1 times

" # challenger1 11 months ago


Selected Answer: C
My Answer: C
upvoted 1 times

" # AzureDP900 11 months ago


I am going with C
upvoted 1 times

" # moon2351 1 year ago


Answer is C
upvoted 1 times

" # andylogan 1 year ago


It's C
upvoted 1 times

" # tgv 1 year ago


CCC
---
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with C
upvoted 2 times

" # vimgoru24 1 year ago


Did not know you can do C. Nice feature.
upvoted 1 times

" # hk436 1 year ago


C is my answer!!
upvoted 1 times

" # Waiweng 1 year ago


it's C
upvoted 2 times

" # AJBA 1 year ago


C
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-orgs-manage-auto-deployment.html
upvoted 3 times

" # nitinz 1 year, 1 month ago


C is correct
upvoted 2 times

" # aws_master 1 year, 1 month ago


i think C as well
upvoted 2 times
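
A sketch of option C with boto3, run from the management (master) account; the template file, OU/root ID, and Region are placeholders. SERVICE_MANAGED permissions plus automatic deployment is what lets new member accounts receive the SNS topic without further action:

import boto3

cfn = boto3.client("cloudformation")

cfn.create_stack_set(
    StackSetName="org-alerting-sns-topic",
    TemplateBody=open("sns_topic.yaml").read(),  # placeholder template file
    PermissionModel="SERVICE_MANAGED",           # uses Organizations trusted access
    AutoDeployment={"Enabled": True, "RetainStacksOnAccountRemoval": False},
    Capabilities=["CAPABILITY_NAMED_IAM"],
)

# Deploy stack instances to every account under the target OU (or the organization root).
cfn.create_stack_instances(
    StackSetName="org-alerting-sns-topic",
    DeploymentTargets={"OrganizationalUnitIds": ["ou-abcd-11111111"]},  # placeholder OU/root ID
    Regions=["us-east-1"],
)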


Question #717 Topic 1

A company wants to provide desktop as a service (DaaS) to a number of employees using Amazon WorkSpaces. WorkSpaces will need to access
files and services hosted on premises with authorization based on the company's Active Directory. Network connectivity will be provided through
an existing AWS Direct
Connect connection.
The solution has the following requirements:
✑ Credentials from Active Directory should be used to access on-premises files and services.
✑ Credentials from Active Directory should not be stored outside the company.
✑ End users should have single sign-on (SSO) to on-premises files and services once connected to WorkSpaces.
Which strategy should the solutions architect use for end user authentication?

A. Create an AWS Directory Service for Microsoft Active Directory (AWS Managed Microsoft AD) directory within the WorkSpaces VPC. Use the
Active Directory Migration Tool (ADMT) with the Password Export Server to copy users from the on-premises Active Directory to AWS Managed
Microsoft AD. Set up a one- way trust allowing users from AWS Managed Microsoft AD to access resources in the on-premises Active
Directory. Use AWS Managed Microsoft AD as the directory for WorkSpaces.

B. Create a service account in the on-premises Active Directory with the required permissions. Create an AD Connector in AWS Directory
Service to be deployed on premises using the service account to communicate with the on-premises Active Directory. Ensure the required TCP
ports are open from the WorkSpaces VPC to the on-premises AD Connector. Use the AD Connector as the directory for WorkSpaces.

C. Create a service account in the on-premises Active Directory with the required permissions. Create an AD Connector in AWS Directory
Service within the WorkSpaces VPC using the service account to communicate with the on-premises Active Directory. Use the AD Connector
as the directory for WorkSpaces.

D. Create an AWS Directory Service for Microsoft Active Directory (AWS Managed Microsoft AD) directory in the AWS Directory Service within
the WorkSpaces VPC. Set up a one-way trust allowing users from the on-premises Active Directory to access resources in the AWS Managed
Microsoft AD. Use AWS Managed Microsoft AD as the directory for WorkSpaces. Create an identity provider with AWS Identity and Access
Management (IAM) from an on-premises ADFS server. Allow users from this identity provider to assume a role with a policy allowing them to
run WorkSpaces.

Correct Answer: D
Reference:
https://docs.aws.amazon.com/directoryservice/latest/admin-guide/directory_microsoft_ad.html

Community vote distribution


C (67%) D (33%)

" # heyheyhei Highly Voted $ 1 year, 1 month ago


C should be correct. One of the requirement is “Credentials from Active Directory should not be stored outside the company”, where AD
Connector will not cache any information in the cloud.
https://docs.aws.amazon.com/directoryservice/latest/admin-guide/directory_ad_connector.html
upvoted 21 times

" # Jaypdv Highly Voted $ 1 year, 1 month ago


C.

First clue: "AD Connector is a directory gateway with which you can redirect directory requests to your on-premises Microsoft Active Directory
without caching any information in the cloud. " (https://docs.aws.amazon.com/directoryservice/latest/admin-guide/directory_ad_connector.html)
which includes pretty much everything needed in the question
Other clue: one-way trust do not work with AWS SSO (https://docs.aws.amazon.com/singlesignon/latest/userguide/connectonpremad.html) that
would eliminate D.
upvoted 7 times

" # MikeyJ Most Recent % 3 months ago


Selected Answer: C
AD Connector is a directory gateway with which you can redirect directory requests to your on-premises Microsoft Active Directory without
caching any information in the cloud.

https://docs.aws.amazon.com/directoryservice/latest/admin-guide/directory_ad_connector.html
upvoted 2 times

" # hilft 3 months, 2 weeks ago


C.
Not D. it says credential shoudn't leave outside.
upvoted 2 times
" # asfsdfsdf 3 months, 4 weeks ago
I will go with C ...
Caching is not being done on cloud...
Also it requires two-way trust in order to implement D
https://docs.aws.amazon.com/workspaces/latest/adminguide/launch-workspace-trusted-domain.html
taking look at the documentation creating it with one-way trust is done using AD connector:
https://docs.aws.amazon.com/workspaces/latest/adminguide/launch-workspace-ad-connector.html
And
https://d1.awsstatic.com/Projects/deploy-amazon-workspaces-one-way-trust-with-aws-directory-service.pdf
upvoted 1 times

" # aandc 4 months, 1 week ago


Selected Answer: D
"AWS SSO" is a AWS service which dose not support one way trust. But in this Q, The SSO is the general term. hence D
upvoted 1 times

" # prathima 6 months, 4 weeks ago


Answer is D
upvoted 1 times

" # nimodaytona 7 months, 3 weeks ago


C, https://docs.aws.amazon.com/workspaces/latest/adminguide/launch-workspace-ad-connector.html
upvoted 1 times

" # lucesarano 10 months ago


There’s no need to “ensure the required TCP ports are open from the WS VPC to the on-prem AD Connector”

The connector is on aws, meaning B is wrong.

A is wrong, it violates 2nd req.

D is wrong because there’s no need to create an additional AD on AWS.

C is the only feasible answer.


upvoted 1 times

" # andylogan 1 year ago


It's C
upvoted 1 times

" # tgv 1 year ago


CCC
---
upvoted 1 times

" # blackgamer 1 year ago


C to me.
upvoted 2 times

" # Kopa 1 year ago


AD connector for this case, im going for C
upvoted 2 times

" # WhyIronMan 1 year ago


I'll go with C
upvoted 4 times

" # DashL 1 year ago


B
The document https://docs.aws.amazon.com/whitepapers/latest/best-practices-deploying-amazon-workspaces/best-practices-deploying-
amazon-workspaces.pdf provides various scenarios of deploying Workspace. In this document, there is a note: Regardless of its location (on
premises or remote), the device running the Amazon WorkSpaces client uses the same two ports for connectivity to the Amazon WorkSpaces
service. The client uses port 443 (HTTPS port) for all authentication and session-related information, and port 4172 (PCoIP port), with both
Transmission Control Protocol (TCP) and User Datagram Protocol (UDP), for pixel streaming to a given WorkSpace and network health checks.
Looks like it is required to open TCP Ports.
upvoted 1 times

" # DashL 1 year ago


I guess, I was wrong. Option B says open "TCP ports are open from the WorkSpaces VPC to the on-premises AD Connector". The AD
connector is on AWS, not on-prem.
Ans should be C.


upvoted 3 times

" # hk436 1 year ago


C is my answer!!
upvoted 2 times

" # Amitv2706 1 year ago


C.

AD Connector simply connects your existing on-premises Active Directory to AWS. AD Connector is a directory gateway with which you can
redirect directory requests to your on-premises Microsoft Active Directory "without caching any information in the cloud. "

https://aws.amazon.com/single-sign-on/faqs/
upvoted 3 times
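
Option C in API form, as a sketch only: it assumes boto3 and placeholder network and credential values. The AD Connector is created inside the WorkSpaces VPC and then registered with WorkSpaces, so credentials never leave the on-premises directory.

import boto3

ds = boto3.client("ds")
workspaces = boto3.client("workspaces")

# AD Connector in the WorkSpaces VPC, proxying authentication to the on-premises domain controllers.
connector = ds.connect_directory(
    Name="corp.example.com",
    ShortName="CORP",
    Password="service-account-password",  # placeholder: on-prem service account credential
    Size="Small",
    ConnectSettings={
        "VpcId": "vpc-0123456789abcdef0",               # WorkSpaces VPC (placeholder)
        "SubnetIds": ["subnet-aaa", "subnet-bbb"],      # placeholders
        "CustomerDnsIps": ["10.0.10.10", "10.0.10.11"], # on-prem DNS/DC IPs (placeholders)
        "CustomerUserName": "svc_adconnector",
    },
)

# Point WorkSpaces at the AD Connector directory.
workspaces.register_workspace_directory(
    DirectoryId=connector["DirectoryId"],
    EnableWorkDocs=False,
)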


Question #718 Topic 1

A company requires that all internal application connectivity use private IP addresses. To facilitate this policy, a solutions architect has created
interface endpoints to connect to AWS public services. Upon testing, the solutions architect notices that the service names are resolving to public
IP addresses, and that internal services cannot connect to the interface endpoints.
Which step should the solutions architect take to resolve this issue?

A. Update the subnet route table with a route to the interface endpoint

B. Enable the private DNS option on the VPC attributes

C. Configure the security group on the interface endpoint to allow connectivity to the AWS services

D. Configure an Amazon Route 53 private hosted zone with a conditional forwarder for the internal application

Correct Answer: B

Community vote distribution


B (100%)

" # oxfordsolutions Highly Voted $ 1 year, 1 month ago


yup its B https://docs.aws.amazon.com/vpc/latest/privatelink/vpce-interface.html#vpce-private-dns
upvoted 18 times

" # pablobairat Highly Voted $ 1 year ago


It is C, Private DNS is turned on by default for endpoints created for AWS services and AWS Marketplace Partner services, so B is ruled out of
the question.
Source: https://docs.aws.amazon.com/vpc/latest/privatelink/vpce-interface.html
upvoted 10 times

" # sashsz 7 months ago


Learn some more about the security group's purpose for EP interface
It's B
upvoted 2 times

" # tgv 1 year ago


good catch!
upvoted 1 times

" # hilft Most Recent % 3 months, 2 weeks ago


I think it's B
upvoted 2 times

" # dkp 3 months, 2 weeks ago


Ans is B
https://aws.amazon.com/premiumsupport/knowledge-center/vpc-interface-configure-dns/
upvoted 1 times

" # asfsdfsdf 3 months, 4 weeks ago


Selected Answer: B
I will go with B.
For everyone saying its C - if this was an SG issue the address would be resolved with the private IP but not accessaible.
you can see proof here:
https://aws.amazon.com/premiumsupport/knowledge-center/vpc-interface-configure-dns/
For the interface VPC endpoint, verify that private DNS names is turned on. If private DNS names isn't turned on, the service domain name or
endpoint domain name resolves to regional public IPs. For steps to turn on private DNS names, see Modify an interface endpoint.
upvoted 1 times

" # Chitty9 4 months, 3 weeks ago


Option C is appropriate.
Option B also correct if this point not highlighted-internal services are unable to connect to the interface endpoints.
upvoted 1 times

" # dev10 7 months ago


Ans B: even though private DNS option is enabled we still need to ensure the VPC attributes are set to true. To use private DNS, you must set the
following VPC attributes to true: enableDnsHostnames and enableDnsSupport.
There is a possibility that DNS private option was turned off when the interface endpoint was created inside the VPC.
upvoted 2 times


" # Bigbearcn 8 months, 3 weeks ago


Selected Answer: B
The service names are resolved to public IP addresses, so the application is using the default DNS name, not the endpoint hostname. You need to enable the private DNS option to resolve the name to a private address. Read the Private DNS for interface endpoints section in the link https://docs.aws.amazon.com/vpc/latest/privatelink/vpce-interface.html.
upvoted 2 times

" # HellGate 9 months ago


My answer is C.

You don't need to check private DNS because it’s turned on by default while you need to configure SG.
https://docs.aws.amazon.com/vpc/latest/privatelink/vpce-interface.html
upvoted 1 times

" # saeidp 9 months, 3 weeks ago


"Private DNS option on the VPC attributes" is a bit confusing. Private DNS is enabled by default on interface endpoints, not on the VPC. It appears the answer is pointing to the VPC attributes DNS hostnames and DNS resolution; without those attributes enabled, PrivateLink will not work. The answer is B.
upvoted 2 times

" # saeidp 9 months, 2 weeks ago


I change my answer. Private DNS is enabled by default. in this case C is correct
upvoted 1 times

" # tkanmani76 9 months, 3 weeks ago


C - https://docs.aws.amazon.com/vpc/latest/privatelink/vpce-interface.html
upvoted 1 times

" # vbal 10 months, 4 weeks ago


Enable private DNS for the endpoint so you can make requests to the service using its default DNS hostname; If you don't enable it the default
DNS Hostname resolves to Public IP of the Service. B.
upvoted 1 times

" # AzureDP900 11 months ago


C is right Configure the security group on the interface endpoint to allow connectivity to the AWS services. Interface endpoint need Security
groups .. look at Gateway endpoint vs interface endpoint. Neal Davis having similar question
upvoted 3 times

" # ByomkeshDas 11 months, 1 week ago


Security group issue is far beyond the question requirement. It is just like a firewall which can allow some traffic. The question clearly said that
"the service names resolve to public IP addresses". So it is a DNS resolve related issue not ACL related issue. So option B is correct..
upvoted 2 times

" # andylogan 1 year ago


It's C
upvoted 2 times

" # Cotter 1 year ago


B for sure.https://www.examtopics.com/user/student22/
upvoted 1 times

" # student22 1 year ago


B
You have enable private DNS option. Otherwise it will resolve to the public address.
upvoted 3 times
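
The fix in option B boils down to two settings: the VPC's DNS attributes and the endpoint's private DNS flag. A minimal sketch with boto3 (IDs are placeholders):

import boto3

ec2 = boto3.client("ec2")

# Private DNS on an interface endpoint requires both VPC DNS attributes to be enabled
# (modify_vpc_attribute accepts only one attribute per call).
ec2.modify_vpc_attribute(VpcId="vpc-0123456789abcdef0", EnableDnsSupport={"Value": True})
ec2.modify_vpc_attribute(VpcId="vpc-0123456789abcdef0", EnableDnsHostnames={"Value": True})

# Turn on private DNS so the service's default name resolves to the endpoint's private IPs.
ec2.modify_vpc_endpoint(
    VpcEndpointId="vpce-0123456789abcdef0",  # placeholder interface endpoint ID
    PrivateDnsEnabled=True,
)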


Question #719 Topic 1

A company has a data lake in Amazon S3 that needs to be accessed by hundreds of applications across many AWS accounts. The company's
information security policy states that the S3 bucket must not be accessed over the public internet and that each application should have the
minimum permissions necessary to function.
To meet these requirements, a solutions architect plans to use an S3 access point that is restricted to specific VPCs for each application.
Which combination of steps should the solutions architect take to implement this solution? (Choose two.)

A. Create an S3 access point for each application in the AWS account that owns the S3 bucket. Configure each access point to be accessible
only from the application's VPC. Update the bucket policy to require access from an access point

B. Create an interface endpoint for Amazon S3 in each application's VPC. Configure the endpoint policy to allow access to an S3 access point.
Create a VPC gateway attachment for the S3 endpoint

C. Create a gateway endpoint for Amazon S3 in each application's VPC. Configure the endpoint policy to allow access to an S3 access point.
Specify the route table that is used to access the access point.

D. Create an S3 access point for each application in each AWS account and attach the access points to the S3 bucket. Configure each access
point to be accessible only from the application's VPC. Update the bucket policy to require access from an access point.

E. Create a gateway endpoint for Amazon S3 in the data lake's VPC. Attach an endpoint policy to allow access to the S3 bucket. Specify the
route table that is used to access the bucket

Correct Answer: AC

Community vote distribution


AC (100%)

" # beebatov Highly Voted $ 1 year, 1 month ago


A & C.
https://joe.blog.freemansoft.com/2020/04/protect-data-in-cloud-with-s3-access.html
upvoted 6 times

" # Sumit_Kumar Most Recent % 2 months, 2 weeks ago


https://aws.amazon.com/blogs/storage/managing-amazon-s3-access-with-vpc-endpoints-and-s3-access-points/
upvoted 1 times

" # xyzman 4 months, 2 weeks ago


It's A,C but the path is App --> S3 Gateway Endpoint --> S3 Access Point --> S3 Bucket
upvoted 1 times

" # pal40sg 8 months, 1 week ago


Selected Answer: AC
It's A,C
App --> S3 Access Point --> S3 Gateway Endpoint --> S3 Bucket
upvoted 1 times

" # JohnPi 1 month, 1 week ago


path is App --> S3 Gateway Endpoint --> S3 Access Point --> S3 Bucket
upvoted 1 times

" # AzureDP900 11 months ago


A,C is right
upvoted 1 times

" # andylogan 1 year ago


It's A,C
App --> S3 Access Point --> S3 Gateway Endpoint --> S3 Bucket
upvoted 3 times

" # JohnPi 1 month, 1 week ago


path is App --> S3 Gateway Endpoint --> S3 Access Point --> S3 Bucket
upvoted 2 times

" # Goram113 1 year ago


can't see this route step here https://aws.amazon.com/blogs/storage/managing-amazon-s3-access-with-vpc-endpoints-and-s3-access-points/


but A&C seems to be the best


upvoted 1 times
" # student22 1 year ago
A,C
App --> S3 Access Point --> S3 Gateway Endpoint --> S3 Bucket
upvoted 3 times

" # tgv 1 year ago


AAA CCC
---
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with A, C
upvoted 2 times

" # Waiweng 1 year ago


it's A&C
upvoted 2 times

" # mustpassla 1 year ago


A & C, https://aws.amazon.com/s3/features/access-points/
upvoted 1 times

" # vkbajoria 1 year, 1 month ago


It is A & C.
upvoted 1 times

" # tvs 1 year, 1 month ago


AC https://aws.amazon.com/s3/features/access-points/ & https://aws.amazon.com/blogs/storage/managing-amazon-s3-access-with-vpc-
endpoints-and-s3-access-points/
upvoted 3 times

" # CarisB 1 year, 1 month ago


I vote for A & C

https://aws.amazon.com/blogs/storage/managing-amazon-s3-access-with-vpc-endpoints-and-s3-access-points/

https://aws.amazon.com/blogs/storage/setting-up-cross-account-amazon-s3-access-with-
s3-access-points/ => Account A (The Data Owner). This is the account you create the Amazon S3 Access Point in
upvoted 2 times

" # miniso8153 1 year, 1 month ago


I choose B & D
upvoted 1 times

" # towysim 1 year, 1 month ago


I chose B and E.
upvoted 1 times
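
A sketch of the per-application access point from option A, plus the common bucket-policy pattern that delegates access control to access points. The account, bucket, and VPC IDs are placeholders, and the policy shown is one documented way to require access-point access, not the only one:

import json
import boto3

s3control = boto3.client("s3control")
s3 = boto3.client("s3")

ACCOUNT_ID = "111122223333"   # bucket-owning account (placeholder)
BUCKET = "corp-data-lake"     # placeholder

# One access point per application, reachable only from that application's VPC.
s3control.create_access_point(
    AccountId=ACCOUNT_ID,
    Name="app1-ap",
    Bucket=BUCKET,
    VpcConfiguration={"VpcId": "vpc-0123456789abcdef0"},  # application VPC (placeholder)
)

# Bucket policy that only allows requests arriving through access points owned by this account.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "*"},
        "Action": "s3:*",
        "Resource": [f"arn:aws:s3:::{BUCKET}", f"arn:aws:s3:::{BUCKET}/*"],
        "Condition": {"StringEquals": {"s3:DataAccessPointAccount": ACCOUNT_ID}},
    }],
}
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))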


Question #720 Topic 1

A company that runs applications on AWS recently subscribed to a new software-as-a-service (SaaS) data vendor. The vendor provides the data by
way of a
REST API that the vendor hosts in its AWS environment. The vendor offers multiple options for connectivity to the API and is working with the
company to find the best way to connect.
The company's AWS account does not allow outbound internet access from its AWS environment. The vendor's services run on AWS in the same
Region as the company's applications.
A solutions architect must implement connectivity to the vendor's API so that the API is highly available in the company's VPC.
Which solution will meet these requirements?

A. Connect to the vendor's public API address for the data service

B. Connect to the vendor by way of a VPC peering connection between the vendor's VPC and the company's VPC

C. Connect to the vendor by way of a VPC endpoint service that uses AWS PrivateLink

D. Connect to a public bastion host that the vendor provides. Tunnel the API traffic

Correct Answer: D
Reference:
https://docs.oracle.com/en-us/iaas/big-data/doc/use-bastion-host-connect-your-service.html

Community vote distribution


C (100%)

" # Santoshhhhh Highly Voted $ 1 year, 1 month ago


C : With AWS PrivateLink, you can create an endpoint within your Amazon Virtual Private Cloud (Amazon VPC) that provides access to SaaS
applications over a secure, private connection that eliminates the exposure of private data to the public internet.

https://aws.amazon.com/blogs/apn/using-aws-privatelink-integrations-to-access-saas-solutions-from-apn-partners
/#:~:text=With%20AWS%20PrivateLink%2C%20you%20can,data%20to%20the%20public%20internet.
upvoted 14 times

" # DashL 1 year ago


AWS API Gateway Endpoint types:
• An API endpoint type refers to the hostname of the API. The API endpoint type can be edge-optimized, regional, or private, depending on
where the majority of your API traffic originates from. An edge-optimized API endpoint is best for geographically distributed clients. API
requests are routed to the nearest CloudFront Point of Presence (POP). This is the default endpoint type for API Gateway REST APIs. A
regional API endpoint is intended for clients in the same region. When a client running on an EC2 instance calls an API in the same region, or
when an API is intended to serve a small number of clients with high demands, a regional API reduces connection overhead. A private API
endpoint is an API endpoint that can only be accessed from your Amazon Virtual Private Cloud (VPC) using an interface VPC endpoint, which
is an endpoint network interface (ENI) that you create in your VPC.
upvoted 1 times

" # beebatov Highly Voted $ 1 year ago


C.
In addition to what Santosh mentioned, Endpoints are supported within the same Region only.
https://docs.aws.amazon.com/vpc/latest/privatelink/vpce-interface.html
upvoted 6 times

" # naiduveerendra Most Recent % 4 months, 2 weeks ago


Selected Answer: C
C is answer
upvoted 1 times

" # cldy 11 months ago


C. Connect to the vendor by way of a VPC endpoint service that uses AWS PrivateLink
upvoted 1 times

" # AzureDP900 11 months ago


C is right answer
upvoted 1 times

" # acloudguru 11 months, 1 week ago


Selected Answer: C
C,privatelink is designed for such senerio, hope i can have this easy question in my exam


upvoted 2 times

" # andylogan 1 year ago


It's C
upvoted 1 times

" # tgv 1 year ago


CCC
---
upvoted 1 times

" # blackgamer 1 year ago


It is C
upvoted 1 times

" # Kopa 1 year ago


Im for C
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with C
C is more secure than B
upvoted 2 times

" # tuananhngo 1 year ago


why not B?
upvoted 1 times

" # vimgoru24 1 year ago


D is for perverts. C is the proper way.
upvoted 2 times

" # mustpassla 1 year ago


C for sure
upvoted 1 times

" # Waiweng 1 year ago


it's C
upvoted 2 times

" # LCC92 1 year ago


c for sure
upvoted 1 times
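
From the consumer side, option C is a single interface endpoint pointed at the vendor's endpoint service. A sketch (service name, VPC, subnets, and security group are placeholders); spreading the endpoint network interfaces across two or more subnets is what makes the connection highly available:

import boto3

ec2 = boto3.client("ec2")

endpoint = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",  # company VPC (placeholder)
    ServiceName="com.amazonaws.vpce.us-east-1.vpce-svc-0123456789abcdef0",  # vendor endpoint service (placeholder)
    SubnetIds=["subnet-az1", "subnet-az2"],     # one ENI per AZ for high availability
    SecurityGroupIds=["sg-0123456789abcdef0"],  # allow 443 from the application tier
)
print(endpoint["VpcEndpoint"]["VpcEndpointId"])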


Question #721 Topic 1

A company is developing a web application that runs on Amazon EC2 instances in an Auto Scaling group behind a public-facing Application Load
Balancer (ALB).
Only users from a specific country are allowed to access the application. The company needs the ability to log the access requests that have been
blocked. The solution should require the least possible maintenance.
Which solution meets these requirements?

A. Create an IPSet containing a list of IP ranges that belong to the specified country. Create an AWS WAF web ACL. Configure a rule to block
any requests that do not originate from an IP range in the IPSet. Associate the rule with the web ACL. Associate the web ACL with the ALB.

B. Create an AWS WAF web ACL. Configure a rule to block any requests that do not originate from the specified country. Associate the rule
with the web ACL. Associate the web ACL with the ALB.

C. Configure AWS Shield to block any requests that do not originate from the specified country. Associate AWS Shield with the ALB.

D. Create a security group rule that allows ports 80 and 443 from IP ranges that belong to the specified country. Associate the security group
with the ALB.

Correct Answer: A

Community vote distribution


B (100%)

" # oxfordsolutions Highly Voted $ 1 year, 1 month ago


Correct Answer: B
upvoted 13 times

" # gsw 1 year, 1 month ago


how is it B?
upvoted 1 times

" # jduo 1 year, 1 month ago


https://docs.aws.amazon.com/waf/latest/developerguide/waf-rule-statement-type-geo-match.html
upvoted 2 times

" # DashL 1 year ago


If you want to allow or block web requests based on the country that the requests originate from, create one or more geo match conditions. A
geo match condition lists countries that your requests originate from. Later in the process, when you create a web ACL, you specify whether
to allow or block requests from those countries.
upvoted 3 times

" # jj22222 Most Recent % 9 months ago


B looks right
upvoted 2 times

" # Devgela 9 months, 2 weeks ago


Selected Answer: B
B is the correct answer for me
upvoted 2 times

" # tkanmani76 10 months, 1 week ago


B - https://docs.aws.amazon.com/waf/latest/developerguide/waf-rule-statement-type-geo-match.html
upvoted 1 times

" # cldy 11 months ago


B. Create an AWS WAF web ACL. Configure a rule to block any requests that do not originate from the specified country. Associate the rule with
the web ACL. Associate the web ACL with the ALB.
upvoted 2 times

" # AzureDP900 11 months ago


B is right. Option A provided to confuse you!
upvoted 1 times

" # acloudguru 11 months, 1 week ago


Selected Answer: B


WAF is designed to serve this case; for A, building and maintaining an IP list yourself is impractical. AWS has such a list and can guarantee roughly 99.8% accuracy; how could a company do that on its own?
upvoted 1 times
" # AzureDP900 11 months, 2 weeks ago
B completely make sense. A is wrong answer.
upvoted 1 times

" # andylogan 1 year ago


It's B
upvoted 1 times

" # tgv 1 year ago


BBB
---
upvoted 1 times

" # blackgamer 1 year ago


It is B
upvoted 1 times

" # Kopa 1 year ago


Im for B
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with B
upvoted 3 times

" # vimgoru24 1 year ago


It’s B. You should not manage IP lists in this case
upvoted 1 times

" # hk436 1 year ago


B is my answer!!
upvoted 1 times

" # Waiweng 1 year ago


it's B
upvoted 3 times

" # tvs 1 year, 1 month ago


B https://aws.amazon.com/blogs/aws/aws-web-application-firewall-waf-for-application-load-balancers/
upvoted 1 times
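
A sketch of option B's web ACL with boto3; the allowed country code, names, and the ALB ARN are placeholders. Blocked requests are visible through WAF sampled requests and metrics, which covers the logging requirement (full request logging would additionally need put_logging_configuration with a log destination):

import boto3

wafv2 = boto3.client("wafv2")

acl = wafv2.create_web_acl(
    Name="country-allow-list",
    Scope="REGIONAL",                # REGIONAL scope is required for an ALB association
    DefaultAction={"Allow": {}},
    Rules=[{
        "Name": "block-other-countries",
        "Priority": 0,
        # Block anything that does NOT match the allowed country.
        "Statement": {"NotStatement": {"Statement": {
            "GeoMatchStatement": {"CountryCodes": ["DE"]}  # placeholder allowed country
        }}},
        "Action": {"Block": {}},
        "VisibilityConfig": {"SampledRequestsEnabled": True,
                             "CloudWatchMetricsEnabled": True,
                             "MetricName": "BlockedByGeo"},
    }],
    VisibilityConfig={"SampledRequestsEnabled": True,
                      "CloudWatchMetricsEnabled": True,
                      "MetricName": "CountryAllowList"},
)

wafv2.associate_web_acl(
    WebACLArn=acl["Summary"]["ARN"],
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/my-alb/abc123",  # placeholder ALB ARN
)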


Question #722 Topic 1

A multimedia company needs to deliver its video-on-demand (VOD) content to its subscribers in a cost-effective way. The video files range in size
from 1-15 GB and are typically viewed frequently for the first 6 months after creation, and then access decreases considerably. The company
requires all video files to remain immediately available for subscribers. There are now roughly 30,000 files, and the company anticipates doubling
that number over time.
What is the MOST cost-effective solution for delivering the company's VOD content?

A. Store the video files in an Amazon S3 bucket using S3 Intelligent-Tiering. Use Amazon CloudFront to deliver the content with the S3 bucket
as the origin.

B. Use AWS Elemental MediaConvert and store the adaptive bitrate video files in Amazon S3. Configure an AWS Elemental MediaPackage
endpoint to deliver the content from Amazon S3.

C. Store the video files in Amazon Elastic File System (Amazon EFS) Standard. Enable EFS lifecycle management to move the video files to
EFS Infrequent Access after 6 months. Create an Amazon EC2 Auto Scaling group behind an Elastic Load Balancer to deliver the content from
Amazon EFS.

D. Store the video files in Amazon S3 Standard. Create S3 Lifecycle rules to move the video files to S3 Standard-Infrequent Access (S3
Standard-IA) after 6 months and to S3 Glacier Deep Archive after 1 year. Use Amazon CloudFront to deliver the content with the S3 bucket as
the origin.

Correct Answer: D

Community vote distribution


A (100%)

" # AndySH Highly Voted $ 10 months, 1 week ago


Answer is A
upvoted 7 times

" # jigu1981 Highly Voted $ 9 months, 3 weeks ago


Answer should be A.. D moves the file to Glacier whereas the expectation is to serve the file instantly
upvoted 5 times

" # zinoxx90 Most Recent % 6 months, 3 weeks ago


Selected Answer: A
Answer is A
upvoted 1 times

" # RVivek 8 months, 3 weeks ago


A is the answer
B- Media convert is not requested and laso will not help in reducing cost
C- EFS is at least 10 times expensive than S3
D- Glacier will not support instant access
upvoted 1 times
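
The reason A edges out D is that Intelligent-Tiering moves objects between tiers automatically while keeping them immediately retrievable. A minimal sketch, assuming boto3 and placeholder bucket/key names, of uploading new videos straight into that storage class:

import boto3

s3 = boto3.client("s3")

# New VOD assets go straight into Intelligent-Tiering; S3 then moves each object to the
# Infrequent Access tier on its own once viewing drops off, with no retrieval delay.
with open("episode-001.mp4", "rb") as f:      # placeholder local file
    s3.put_object(
        Bucket="vod-origin-bucket",           # CloudFront origin bucket (placeholder)
        Key="titles/episode-001.mp4",
        Body=f,
        StorageClass="INTELLIGENT_TIERING",
    )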


Question #723 Topic 1

A company manages hundreds of AWS accounts centrally in an organization in AWS Organizations. The company recently started to allow product
teams to create and manage their own S3 access points in their accounts. The S3 access points can be accessed only within VPCs, not on the
Internet.
What is the MOST operationally efficient way to enforce this requirement?

A. Set the S3 access point resource policy to deny the s3:CreateAccessPoint action unless the s3:AccessPointNetworkOrigin condition key
evaluates to VPC.

B. Create an SCP at the root level in the organization to deny the s3:CreateAccessPoint action unless the s3:AccessPointNetworkOrigin
condition key evaluates to VPC.

C. Use AWS CloudFormation StackSets to create a new IAM policy in each AWS account that allows the s3:CreateAccessPoint action only if
the s3:AccessPointNetworkOrigin condition key evaluates to VPC.

D. Set the S3 bucket policy to deny the s3:CreateAccessPoint action unless the s3:AccessPointNetworkOrigin condition key evaluates to VPC.

Correct Answer: D
Reference:
https://aws.amazon.com/blogs/storage/managing-amazon-s3-access-with-vpc-endpoints-and-s3-access-points/

Community vote distribution


B (100%)

" # oxfordsolutions Highly Voted $ 1 year, 1 month ago


correct answer: B
upvoted 18 times

" # AzureDP900 Highly Voted $ 11 months ago


B is right answer
https://aws.amazon.com/s3/features/access-points/
upvoted 5 times

" # caveman712 Most Recent % 1 month ago


Selected Answer: B
https://aws.amazon.com/blogs/storage/managing-amazon-s3-access-with-vpc-endpoints-and-s3-access-points/
upvoted 1 times

" # psou7 1 month ago


B. "You can control access point usage using AWS Organizations support for AWS SCPs."
https://aws.amazon.com/s3/features/access-points/
upvoted 1 times

" # gnic 2 months, 1 week ago


Selected Answer: B
It's B.
SCP to restrict permission
upvoted 1 times

" # kadev 2 months, 2 weeks ago


B. Example Service control policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:CreateAccessPoint",
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {
          "s3:AccessPointNetworkOrigin": "VPC"
        }
      }
    }
  ]
}
upvoted 1 times
" # jyrajan69 7 months, 3 weeks ago
The question states clearly '. Recently, the firm began allowing product teams to build and administer their own S3 access points under their own
accounts' so setting SCP at root level would not allow this, therefore only possible solution is A.
upvoted 2 times

" # cldy 11 months ago


B. Create an SCP at the root level in the organization to deny the s3:CreateAccessPoint action unless the s3:AccessPointNetworkOrigin condition
key evaluates to VPC.
upvoted 2 times

" # andylogan 1 year ago


It's B
upvoted 2 times

" # Kopa 1 year ago


Good point to use Access Points with an SCP. I'm for B
upvoted 2 times

" # tgv 1 year ago


BBB
---
upvoted 2 times

" # blackgamer 1 year ago


Answer is B
upvoted 2 times

" # WhyIronMan 1 year ago


I'll go with B
upvoted 2 times

" # student2020 1 year ago


B
https://aws.amazon.com/blogs/storage/managing-amazon-s3-access-with-vpc-endpoints-and-s3-access-points/
upvoted 3 times

" # vimgoru24 1 year ago


AWS Org hints it should be B
upvoted 1 times

" # apmpm 1 year ago


B
aws.amazon.com/blogs/storage/managing-amazon-s3-access-with-vpc-endpoints-and-s3-access-points/

"You can set up AWS SCPs to require any new Access Point in the organization to be restricted to VPC-Only type. This makes sure that any
Access Point created in your organization provides access only from within the VPCs and there by firewalling your data to within your private
networks."
upvoted 2 times

" # apmpm 1 year ago


B
https://aws.amazon.com/blogs/storage/managing-amazon-s3-access-with-vpc-endpoints-and-s3-access-points/
"You can set up AWS SCPs to require any new Access Point in the organization to be restricted to VPC-Only type. This makes sure that any
Access Point created in your organization provides access only from within the VPCs and there by firewalling your data to within your private
networks."
upvoted 2 times


Question #724 Topic 1

A company needs to architect a hybrid DNS solution. This solution will use an Amazon Route 53 private hosted zone for the domain
cloud.example.com for the resources stored within VPCs.
The company has the following DNS resolution requirements:
✑ On-premises systems should be able to resolve and connect to cloud.example.com.
✑ All VPCs should be able to resolve cloud.example.com.
There is already an AWS Direct Connect connection between the on-premises corporate network and AWS Transit Gateway.
Which architecture should the company use to meet these requirements with the HIGHEST performance?

A. Associate the private hosted zone to all the VPCs. Create a Route 53 inbound resolver in the shared services VPC. Attach all VPCs to the
transit gateway and create forwarding rules in the on-premises DNS server for cloud.example.com that point to the inbound resolver.

B. Associate the private hosted zone to all the VPCs. Deploy an Amazon EC2 conditional forwarder in the shared services VPC. Attach all
VPCs to the transit gateway and create forwarding rules in the on-premises DNS server for cloud.example.com that point to the conditional
forwarder.

C. Associate the private hosted zone to the shared services VPC. Create a Route 53 outbound resolver in the shared services VPC. Attach all
VPCs to the transit gateway and create forwarding rules in the on-premises DNS server for cloud.example.com that point to the outbound
resolver.

D. Associate the private hosted zone to the shared services VPC. Create a Route 53 inbound resolver in the shared services VPC. Attach the
shared services VPC to the transit gateway and create forwarding rules in the on-premises DNS server for cloud.example.com that point to the
inbound resolver.

Correct Answer: A
Reference:
https://aws.amazon.com/blogs/networking-and-content-delivery/centralized-dns-management-of-hybrid-cloud-with-amazon-route-53-and-aws-
transit- gateway/

Community vote distribution


A (60%) D (40%)

" # beebatov Highly Voted $ 1 year, 1 month ago


A is the answer, based on https://aws.amazon.com/blogs/networking-and-content-delivery/centralized-dns-management-of-hybrid-cloud-with-
amazon-route-53-and-aws-transit-gateway/

"When a Route 53 private hosted zone needs to be resolved in multiple VPCs and AWS accounts as described earlier, the most reliable pattern is
to share the private hosted zone between accounts and associate it to each VPC that needs it."
upvoted 18 times

" # DerekKey 1 year ago


There is only one Account. Not multiple account
upvoted 3 times

" # Blair77 Most Recent % 3 days, 17 hours ago


Selected Answer: A
AAA - Well explain here:
https://aws.amazon.com/blogs/networking-and-content-delivery/centralized-dns-management-of-hybrid-cloud-with-amazon-route-53-and-aws-
transit-gateway/
-"When a Route 53 private hosted zone needs to be resolved in multiple VPCs and AWS accounts as described earlier, the most reliable pattern
is to share the private hosted zone between accounts and associate it to each VPC that needs it. "
upvoted 1 times

" # Malluchan 1 month ago


Selected Answer: A
The key point is : the resources stored within VPCs, no were mentioned shared VPC
upvoted 1 times

" # Kyperos 2 months, 2 weeks ago


"Although it is possible to use forwarding rules to resolve private hosted zones in other VPCs, we do not recommend that. The most reliable,
performant and low-cost approach is to share and associate private hosted zones directly to all VPCs that need them."

https://aws.amazon.com/vi/blogs/networking-and-content-delivery/centralized-dns-management-of-hybrid-cloud-with-amazon-route-53-and-aws-transit-gateway/

So answer is A!!!!
upvoted 4 times
" # asfsdfsdf 3 months, 4 weeks ago
I will go with D there is a blog for this - there is no need to associate the private zone with all VPCs only with the shared one. the shared one will
be already associated with others.
https://aws.amazon.com/blogs/networking-and-content-delivery/centralized-dns-management-of-hybrid-cloud-with-amazon-route-53-and-aws-
transit-gateway/
upvoted 1 times

" # asfsdfsdf 2 months, 3 weeks ago


After reviewing it again i think A will work
"Create Route 53 private hosted zones in the shared services VPC and associate them. In addition, complete the cross-account private
hosted zone-VPC association of the spoke VPCs because the spoke VPCs are in different accounts. All VPC’s will need to associate their
private hosted zones to all other VPC’s if required to."
upvoted 1 times

" # aandc 4 months, 1 week ago


D, after reading https://aws.amazon.com/blogs/networking-and-content-delivery/centralized-dns-management-of-hybrid-cloud-with-amazon-
route-53-and-aws-transit-gateway/
upvoted 1 times

" # jyrajan69 4 months, 1 week ago


First criteria, On-premises systems should be able to resolve and connect to cloud.example.com which rules out outbound, must be inbound
from DC. So that leaves us with A and D, both work but based on best practice, using shared VPC is more efficient, so answer must be D
upvoted 2 times

" # jj22222 7 months, 1 week ago


Selected Answer: A
A is the answer
upvoted 1 times

" # gunjan229 7 months, 2 weeks ago


Selected Answer: D
D - We dont need to attach all the VPCs to TGW, only the Shared VPC. Private Hosted Zones and Forwarding Rules can be in shared VPC and
shared using RAM to all the VPC (doesnt need Peering or TGW between VPCs for that, so A is incorrect)
upvoted 2 times

" # Milind 9 months, 1 week ago


When a Route 53 private hosted zone needs to be resolved in multiple VPCs and AWS accounts as described earlier, the most reliable pattern is
to share the private hosted zone between accounts and associate it to each VPC that needs it

from-> https://aws.amazon.com/blogs/networking-and-content-delivery/centralized-dns-management-of-hybrid-cloud-with-amazon-route-53-
and-aws-transit-gateway/
upvoted 1 times

" # GeniusMikeLiu 10 months ago


A or D? I am so confused...
upvoted 2 times

" # Smartphone 10 months, 1 week ago


D is the Answer. Read the link below. Don't blindly assume anything.
https://aws.amazon.com/blogs/networking-and-content-delivery/centralized-dns-management-of-hybrid-cloud-with-amazon-route-53-and-aws-
transit-gateway/
upvoted 1 times

" # heyhey_00 10 months, 3 weeks ago


The answer to the recent dump is marked with D. Is the answer A still valid?
upvoted 1 times

" # tkanmani76 10 months, 3 weeks ago


Question says access to resources inside VPC and hence it is Inbound resolver. Choice is between A and D. The article https://aws.amazon.com
/blogs/networking-and-content-delivery/centralized-dns-management-of-hybrid-cloud-with-amazon-route-53-and-aws-transit-gateway/ explains
how to connect a Shared Service VPC with Inbound resolver with TGW. So answer is D.
upvoted 1 times

" # student22 1 year ago


A
1.Associate the private hosted zone to all the VPCs.
2. Create a Route 53 inbound resolver in the shared services VPC. Attach all VPCs to the transit gateway and create forwarding rules in the on-
premises DNS server for cloud.example.com that point to the inbound resolver.
upvoted 1 times


" # DerekKey 1 year ago


A correct (in my opinion)
1. You associate all VPCs (we have one AWS account) with 1 private hosted zone (100 max).
2. Route 53 is a global service so it works for all VPCs in your account. All of them will be able to use a private hosted zone.
3. You attach all VPCs to Transit Gateway.
4. You create inbound resolver in a shared services VPC.
5. You create forwarding rules in the on-premises DNS server to resolver.

If you associate a private hosted zone with shared services VPC no other VPC will be identified with this name.
upvoted 1 times

" # tgv 1 year ago


AAA
---
upvoted 1 times
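
The inbound-resolver part of the answer, sketched with boto3; subnet, security-group, and hosted-zone IDs are placeholders. The on-premises DNS servers then forward cloud.example.com to the two resolver endpoint IPs, and each spoke VPC is associated with the private hosted zone:

import boto3
import uuid

r53resolver = boto3.client("route53resolver")
route53 = boto3.client("route53")

# Inbound resolver endpoint in the shared services VPC (two subnets/AZs for availability).
endpoint = r53resolver.create_resolver_endpoint(
    CreatorRequestId=str(uuid.uuid4()),
    Name="shared-services-inbound",
    Direction="INBOUND",
    SecurityGroupIds=["sg-0123456789abcdef0"],  # allow UDP/TCP 53 from on premises
    IpAddresses=[{"SubnetId": "subnet-az1"}, {"SubnetId": "subnet-az2"}],  # placeholders
)

# Associate the cloud.example.com private hosted zone with each VPC that must resolve it.
route53.associate_vpc_with_hosted_zone(
    HostedZoneId="Z0123456789ABCDEFGHIJ",  # placeholder private hosted zone ID
    VPC={"VPCRegion": "us-east-1", "VPCId": "vpc-spoke-1"},
)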


Question #725 Topic 1

A development team has created a new flight tracker application that provides near-real-time data to users. The application has a front end that
consists of an
Application Load Balancer (ALB) in front of two large Amazon EC2 instances in a single Availability Zone. Data is stored in a single Amazon RDS
MySQL DB instance. An Amazon Route 53 DNS record points to the ALB.
Management wants the development team to improve the solution to achieve maximum reliability with the least amount of operational overhead.
Which set of actions should the team take?

A. Create RDS MySQL read replicas. Deploy the application to multiple AWS Regions. Use a Route 53 latency-based routing policy to route to
the application.

B. Con+gure the DB instance as Multi-AZ. Deploy the application to two additional EC2 instances in different Availability Zones behind an ALB.

C. Replace the DB instance with Amazon DynamoDB global tables. Deploy the application in multiple AWS Regions. Use a Route 53 latency-
based routing policy to route to the application.

D. Replace the DB instance with Amazon Aurora with Aurora Replicas. Deploy the application to multiple smaller EC2 instances across
multiple Availability Zones in an Auto Scaling group behind an ALB.

Correct Answer: B
Reference:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html

Community vote distribution


D (65%) B (35%)

" # Waiweng Highly Voted $ 1 year, 1 month ago


it's D Aurora which provides the least amount of operational overhead
upvoted 20 times

" # Coffeinerd 1 year ago


For sure D! Besides Aurora we have autoscaling -> no operational overhead on load events.
upvoted 4 times

" # czarno 7 months, 1 week ago


I don't think so.
There might be a reason why there are 2 HUGE ec2 instances running.
Maybe the application needs this kind of a performance as it can't run in parallel.
In this case you can't just scale out... while scaling in.
Aurora would be nice, but not this time.

Answer B is correct
upvoted 1 times

" # beebatov Highly Voted $ 1 year, 1 month ago


Answer: C

B doesn't offer MAXIMUM resiliency, following the well architected framework's resiliency pillar, DR scenario must be considered. In this scenario
we have a near real-time application, we would need DynamoDB + multi region for maximum resiliency for both App and DB. Moreover, we are
working the development team that can switch from RDS to NoSQL.
upvoted 6 times

" # beebatov 1 year, 1 month ago


changing to D

https://youtu.be/ZCt3ctVfGIk?t=111
upvoted 10 times

" # DashL 1 year, 1 month ago


The question says "achieve maximum reliability with the least amount of operational overhead". RDS/Aurora has much higher operational
overhead than DynamoDB.
upvoted 2 times

" # kadev 2 months ago


And you want to double large EC2 to another region + add more money for changing coding lol


upvoted 1 times
" # cale Most Recent % 2 months ago

Selected Answer: D
It's D
upvoted 1 times

" # kadev 2 months ago


"operational overhead" => saving cost
B/D

1. Currently the app runs on "two large Amazon EC2" instances in one AZ; we can save cost with smaller EC2 instances plus Auto Scaling across multiple AZs, rather than adding more large EC2 instances.
2. RDS Multi-AZ means two equal instances, which doubles the cost. With replicas, you can choose a smaller RDS instance type to save cost.

===> Finally, D
upvoted 1 times

" # KiraguJohn 3 months, 1 week ago


Which one is more costly?
1. Changing existing RDS to multi AZ or
2. Converting the existing RDS to Amazon Aurora
upvoted 1 times

" # kadev 2 months ago


RDS to multi AZ => two instance equally , double cost
Amazon Aurora or RDS ( with replicas , read Q carefully ) => you can pick small RDS type for replica instance
upvoted 1 times

" # TechX 4 months, 1 week ago


Selected Answer: B
B for me
It seems that the question has changed; it says that it needs to minimize operational expense, which made me choose B over D. Using Aurora will cost you more.
upvoted 1 times

" # Cal88 5 days, 22 hours ago


Operational overhead not expense.
Read the question carefully
Cost is not a concern here but availability is.
Adding two large nodes will not guarantee HA , but auto scaling will
So D is better I am sorry
upvoted 1 times

" # kangtamo 4 months, 1 week ago


Selected Answer: D
Agree with D: Aurora.
upvoted 1 times

" # azure_kai 7 months, 3 weeks ago


Selected Answer: D
Ans: D
upvoted 2 times

" # razerlg 8 months ago


Selected Answer: B
In my opinion D doesnt guarantee multi-AZ unless the replica is placed in a different AZ, and that is not specified. I would choose B
upvoted 2 times

" # johnnsmith 8 months, 2 weeks ago


B is correct. A and C are wrong because Route 53 health check is missing. D is wrong because it doesn't say "replica in a different AZ" or Multi-
AZ. Only B can still function when an AZ fails,
upvoted 2 times

" # pititcu667 8 months, 3 weeks ago


Selected Answer: B
B because they specifically mention least amount of effort while minimizing costs.
upvoted 1 times

" # futen0326 8 months, 3 weeks ago


Selected Answer: B
Did they change the question ..? I'm reading "The development team is tasked with enhancing the solution's dependability while minimizing
operating expenses."


Switching to Aurora will incur a big cost.. you can simply setup Multi-AZ, and switch the instances to be in different AZs. It is not the most
resilient architecture but it is improved and the most cost-effective one here.
upvoted 2 times
" # kyo 9 months ago
D is better than B.
upvoted 1 times

" # kubala 9 months, 2 weeks ago


Selected Answer: D
D my opinion
upvoted 3 times

" # cannottellname 9 months, 2 weeks ago


Selected Answer: D
DDDDDDDDDDD
upvoted 4 times

" # cldy 10 months, 1 week ago


D: Aurora + ASG + ELB + MultiAZ
upvoted 1 times

" # challenger1 11 months ago


My Answer: D
For sure
upvoted 1 times


Question #726 Topic 1

A multimedia company with a single AWS account is launching an application for a global user base. The application storage and bandwidth
requirements are unpredictable. The application will use Amazon EC2 instances behind an Application Load Balancer as the web tier and will use
Amazon DynamoDB as the database tier. The environment for the application must meet the following requirements:
✑ Low latency when accessed from any part of the world
✑ WebSocket support
✑ End-to-end encryption
Protection against the latest security threats

✑ Managed layer 7 DDoS protection


Which actions should the solutions architect take to meet these requirements? (Choose two.)

A. Use Amazon Route 53 and Amazon CloudFront for content distribution. Use Amazon S3 to store static content

B. Use Amazon Route 53 and AWS Transit Gateway for content distribution. Use an Amazon Elastic Block Store (Amazon EBS) volume to store
static content

C. Use AWS WAF with AWS Shield Advanced to protect the application

D. Use AWS WAF and Amazon Detective to protect the application

E. Use AWS Shield Standard to protect the application

Correct Answer: BC

Community vote distribution


AC (100%)

" # snakecharmer2 Highly Voted $ 6 months, 2 weeks ago


Selected Answer: AC
A&C
easy one...
upvoted 7 times

" # ravisar Most Recent % 6 months, 2 weeks ago


Answer is AC. AWS Transit Gateway enables attaching VPNs and different VPCs in the same region across different AWS accounts. It has nothing to
do with content distribution.
upvoted 3 times


Question #727 Topic 1

A company is using AWS Organizations to manage 15 AWS accounts. A solutions architect wants to run advanced analytics on the company's
cloud expenditures. The cost data must be gathered and made available from an analytics account. The analytics application runs in a VPC and
must receive the raw cost data each night to run the analytics.
The solutions architect has decided to use the Cost Explorer API to fetch the raw data and store the data in Amazon S3 in JSON format. Access to
the raw cost data must be restricted to the analytics application. The solutions architect has already created an AWS Lambda function to collect
data by using the Cost Explorer
API.
Which additional actions should the solutions architect take to meet these requirements?

A. Create an IAM role in the Organizations master account with permissions to use the Cost Explorer API, and establish trust between the role
and the analytics account. Update the Lambda function role and add sts:AssumeRole permissions. Assume the role in the master account
from the Lambda function code by using the AWS Security Token Service (AWS STS) AssumeRole API call. Create a gateway endpoint for
Amazon S3 in the analytics VPC. Create an S3 bucket policy that allows access only from the S3 endpoint.

B. Create an IAM role in the analytics account with permissions to use the Cost Explorer API. Update the Lambda function and assign the new
role. Create a gateway endpoint for Amazon S3 in the analytics VPC. Create an S3 bucket policy that allows access only from the analytics
VPC by using the aws:SourceVpc condition.

C. Create an IAM role in the Organizations master account with permissions to use the Cost Explorer API, and establish trust between the role
and the analytics account. Update the Lambda function role and add sts:AssumeRole permissions. Assume the role in the master account
from the Lambda function code by using the AWS Security Token Service (AWS STS) AssumeRole API call. Create an interface endpoint for
Amazon S3 in the analytics VPC. Create an S3 bucket policy that allows access only from the analytics VPC private CIDR range by using the
aws:SourceIp condition.

D. Create an IAM role in the analytics account with permissions to use the Cost Explorer API. Update the Lambda function and assign the new
role. Create an interface endpoint for Amazon S3 in the analytics VPC. Create an S3 bucket policy that allows access only from the S3
endpoint.

Correct Answer: B

Community vote distribution


A (100%)

" # wowznuz Highly Voted $ 1 year, 1 month ago


Agreed, A is correct. C could be correct except for the part about restricting access using a bucket policy with aws:SourceIp which leaves A.
See https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-s3.html#vpc-endpoints-s3-bucket-policies
upvoted 15 times

" # beebatov Highly Voted $ 1 year, 1 month ago


Answer: A

https://docs.aws.amazon.com/organizations/latest/userguide/orgs_permissions_overview.html

the administrator in the management account can create a role to grant cross-account permissions to a user in a member account as follows:

The management account administrator creates an IAM role and attaches a permissions policy to the role that grants permissions to the
organization's resources.

The management account administrator attaches a trust policy to the role that identifies the member account ID as the Principal who can assume
the role.

The member account administrator can then delegate permissions to assume the role to any users in the member account. Doing this allows
users in the member account to create or access resources in the management account and the organization. The principal in the trust policy can
also be an AWS service principal if you want to grant permissions to an AWS service to assume the role.
upvoted 11 times
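
To make the cross-account flow in option A concrete, here is a minimal boto3 sketch of the Lambda side, assuming hypothetical names for the management-account role, the S3 bucket, and the dates (none of these come from the question itself). The matching bucket policy would then allow access only when the request arrives through the gateway endpoint (for example with an aws:sourceVpce condition).

    import json
    import boto3

    # Hypothetical values -- replace with real ARNs/names in an actual deployment.
    MANAGEMENT_ROLE_ARN = "arn:aws:iam::111111111111:role/CostExplorerReadRole"
    RAW_COST_BUCKET = "analytics-raw-cost-data"

    def handler(event, context):
        # 1. Assume the role that lives in the Organizations management account.
        sts = boto3.client("sts")
        creds = sts.assume_role(
            RoleArn=MANAGEMENT_ROLE_ARN,
            RoleSessionName="nightly-cost-export",
        )["Credentials"]

        # 2. Call the Cost Explorer API with the temporary credentials.
        ce = boto3.client(
            "ce",
            aws_access_key_id=creds["AccessKeyId"],
            aws_secret_access_key=creds["SecretAccessKey"],
            aws_session_token=creds["SessionToken"],
        )
        costs = ce.get_cost_and_usage(
            TimePeriod={"Start": "2021-01-01", "End": "2021-01-02"},
            Granularity="DAILY",
            Metrics=["UnblendedCost"],
        )

        # 3. Write the raw JSON to the restricted bucket, reached through the
        #    S3 gateway endpoint when the function runs inside the analytics VPC.
        boto3.client("s3").put_object(
            Bucket=RAW_COST_BUCKET,
            Key="raw/2021-01-01.json",
            Body=json.dumps(costs),
        )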

" # pinhead900 Most Recent % 1 month, 3 weeks ago


answer is A - but still lambda would need to run in VPC for it to be able to use the Gateway endpoint, that part is missing.
upvoted 1 times

" # Andykris 2 months, 1 week ago


C has interface endpoint which is incorrect for S3.


upvoted 1 times

" # Andykris 2 months, 1 week ago


Correct answer is A
upvoted 1 times
" # kangtamo 4 months, 1 week ago
Selected Answer: A
Agree with A. Gateway endpoint for S3.
upvoted 1 times

" # AzureDP900 11 months ago


A is correct, S3 gateway endpoint for access within analytics VPC from analytics application.
https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints.html
upvoted 2 times

" # andylogan 1 year ago


It's A
upvoted 1 times

" # tgv 1 year ago


AAA
---
The cost of the AWS Organization is visible in the master account so B & D are out.
A is a more best practice approach. No need for an interface endpoint
upvoted 4 times

" # tgv 1 year ago


AAA
---
The cost of the AWS Organization is visible in the master account so B & D are out.
A is a more best practice approach. No need for an interface endpoint
upvoted 1 times

" # blackgamer 1 year ago


A for sure
upvoted 1 times

" # pablobairat 1 year ago


It is D. In A, the Lambda does not have permissions to write in the S3 bucket because it is not able to access the endpoint
upvoted 1 times

" # rodolfo2020 1 year ago


agreed, A is correct because gateway endpoints are supported only for S3 and DynamoDB
https://docs.aws.amazon.com/vpc/latest/privatelink/vpce-gateway.html
upvoted 2 times

" # Kopa 1 year ago


going for A
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with A
upvoted 3 times

" # vimgoru24 1 year ago


I believe A is not the only option here, however it is the most secure one, so A.
upvoted 2 times

" # student2020 1 year ago


Even though the new interface VPC endpoint option seems correct, that last part of the answer to use VPC CIDR range as sourceIP is
incorrect. Only public Ip addresses are supported.
upvoted 1 times

" # hk436 1 year, 1 month ago


A is my answer!!
upvoted 4 times

" # mustpassla 1 year, 1 month ago


A, org acc and gateway endpoint
upvoted 2 times


Question #728 Topic 1

A company wants to migrate a 30 TB Oracle data warehouse from on premises to Amazon Redshift. The company used the AWS Schema
Conversion Tool (AWS
SCT) to convert the schema of the existing data warehouse to an Amazon Redshift schema. The company also used a migration assessment
report to identify manual tasks to complete.
The company needs to migrate the data to the new Amazon Redshift cluster during an upcoming data freeze period of 2 weeks. The only network
connection between the on-premises data warehouse and AWS is a 50 Mbps internet connection.
Which migration strategy meets these requirements?

A. Create an AWS Database Migration Service (AWS DMS) replication instance. Authorize the public IP address of the replication instance to
reach the data warehouse through the corporate firewall. Create a migration task to run at the beginning of the data freeze period.

B. Install the AWS SCT extraction agents on the on-premises servers. Define the extract, upload, and copy tasks to send the data to an Amazon
S3 bucket. Copy the data into the Amazon Redshift cluster. Run the tasks at the beginning of the data freeze period.

C. Install the AWS SCT extraction agents on the on-premises servers. Create a Site-to-Site VPN connection. Create an AWS Database Migration
Service (AWS DMS) replication instance that is the appropriate size. Authorize the IP address of the replication instance to be able to access
the on-premises data warehouse through the VPN connection.

D. Create a job in AWS Snowball Edge to import data into Amazon S3. Install AWS SCT extraction agents on the on-premises servers. Define
the local and AWS Database Migration Service (AWS DMS) tasks to send the data to the Snowball Edge device. When the Snowball Edge
device is returned to AWS and the data is available in Amazon S3, run the AWS DMS subtask to copy the data to Amazon Redshift.

Correct Answer: D

" # gsw Highly Voted $ 1 year, 1 month ago


D seems correct
upvoted 11 times

" # StelSen 1 year ago


Adding one more link for architecture: https://docs.aws.amazon.com/dms/latest/userguide/CHAP_LargeDBs.Process.html
upvoted 1 times

" # ExtHo 1 year, 1 month ago


Yes its D :)
Transferring 30 TB over 50 Mbps will take around 53 days, which rules out A, B, and C
https://www.calctool.org/CALC/prof/computing/transfer_time
upvoted 9 times
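
The arithmetic behind that estimate is worth spelling out; this rough check (ignoring protocol overhead) shows why the 50 Mbps link alone cannot meet the 2-week window:

    # 30 TB over a 50 Mbps link, ignoring protocol overhead
    bits_to_move = 30 * 10**12 * 8          # 30 TB expressed in bits
    link_speed = 50 * 10**6                 # 50 Mbps in bits per second
    seconds = bits_to_move / link_speed
    print(seconds / 86400)                  # ~56 days, far beyond the 2-week freeze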

" # cldy Most Recent % 11 months ago


D. Create a job in AWS Snowball Edge to import data into Amazon S3. Install AWS SCT extraction agents on the on-premises servers. Define the
local and AWS Database Migration Service (AWS DMS) tasks to send the data to the Snowball Edge device. When the Snowball Edge device is
returned to AWS and the data is available in Amazon S3, run the AWS DMS subtask to copy the data to Amazon Redshift.
upvoted 1 times

" # backfringe 11 months, 1 week ago


I go with D
upvoted 1 times

" # AzureDP900 11 months, 2 weeks ago


D is best option, data transfer using corporate internet takes around 58 days!
upvoted 1 times

" # andylogan 1 year ago


It's D
upvoted 1 times

" # tgv 1 year ago


DDD
---
You cannot migrate 20 TB over 50 Mbps in 2 weeks without Snowball
upvoted 3 times

" # WhyIronMan 1 year ago


I'll go with D
for data > 20TB use Snowball
upvoted 2 times
" # vimgoru24 1 year ago
D. This is the way.
upvoted 1 times

" # hk436 1 year ago


D is my answer!!
upvoted 1 times

" # Waiweng 1 year ago


it's D
upvoted 2 times

" # tvs 1 year ago


It is D. Since there is no online data to sync, Snowball is the best option. Also, at 50 Mbps you can only transfer about 7 TB in 14 days.
upvoted 3 times

" # beebatov 1 year, 1 month ago


Answer: D

AWS Database Migration Service (AWS DMS) can use Snowball Edge and Amazon S3 to migrate large databases more quickly than by other
methods
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_LargeDBs.html
upvoted 4 times


Question #729 Topic 1

A company that tracks medical devices in hospitals wants to migrate its existing storage solution to the AWS Cloud. The company equips all of its
devices with sensors that collect location and usage information. This sensor data is sent in unpredictable patterns with large spikes. The data is
stored in a MySQL database running on premises at each hospital. The company wants the cloud storage solution to scale with usage.
The company's analytics team uses the sensor data to calculate usage by device type and hospital. The team needs to keep analysis tools running
locally while fetching data from the cloud. The team also needs to use existing Java application and SQL queries with as few changes as possible.
How should a solutions architect meet these requirements while ensuring the sensor data is secure?

A. Store the data in an Amazon Aurora Serverless database. Serve the data through a Network Load Balancer (NLB). Authenticate users using
the NLB with credentials stored in AWS Secrets Manager.

B. Store the data in an Amazon S3 bucket. Serve the data through Amazon QuickSight using an IAM user authorized with AWS Identity and
Access Management (IAM) with the S3 bucket as the data source.

C. Store the data in an Amazon Aurora Serverless database. Serve the data through the Aurora Data API using an IAM user authorized with
AWS Identity and Access Management (IAM) and the AWS Secrets Manager ARN.

D. Store the data in an Amazon S3 bucket. Serve the data through Amazon Athena using AWS PrivateLink to secure the data in transit.

Correct Answer: A

Community vote distribution


C (100%)

" # Jaypdv Highly Voted $ 1 year, 1 month ago


Going with C. because it works with the app still running on-prem. You will still need a few code changes but the question allows that.
D. would sound plausible but PrivateLink is for VPC only
upvoted 14 times

" # ExtHo 1 year, 1 month ago


An existing MySQL database can't easily be served from S3 (object storage), which rules out the S3 options, and the NLB is also not a good option, which leaves
only C
upvoted 4 times

" # justfmm 1 year ago


https://aws.amazon.com/blogs/aws/aws-privatelink-for-amazon-s3-now-available/
upvoted 2 times

" # beebatov Highly Voted $ 1 year, 1 month ago


Answer: C

https://aws.amazon.com/blogs/aws/new-data-api-for-amazon-aurora-serverless/
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/data-api.html
upvoted 7 times
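
For anyone unfamiliar with the Data API, a minimal boto3 sketch looks like this; the cluster ARN, Secrets Manager ARN, database name, and SQL are hypothetical placeholders. Existing SQL statements can largely be reused, which is the point of answer C.

    import boto3

    # Hypothetical ARNs -- the Data API needs the cluster ARN plus a Secrets
    # Manager secret holding the database credentials.
    CLUSTER_ARN = "arn:aws:rds:us-east-1:123456789012:cluster:sensor-data"
    SECRET_ARN = "arn:aws:secretsmanager:us-east-1:123456789012:secret:sensor-db"

    client = boto3.client("rds-data")
    result = client.execute_statement(
        resourceArn=CLUSTER_ARN,
        secretArn=SECRET_ARN,
        database="sensors",
        sql="SELECT device_type, COUNT(*) FROM usage_events GROUP BY device_type",
    )
    print(result["records"])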

" # JohnPi Most Recent % 1 month, 1 week ago


I wonder how option C satisfies the statement "use existing Java application and SQL queries with as few changes as possible"? btw Aurora
Serverless has an "invisible" NLB in front
upvoted 1 times

" # dcdcdc3 1 month, 1 week ago


Selected Answer: C
B&D - no, as SQL queries are desired..
A - no, unless anyone can show me how this works: "Authenticate users using the NLB with credentials stored in AWS Secrets Manager."
upvoted 1 times

" # AzureDP900 11 months ago


C is right choice.
upvoted 2 times

" # cldy 11 months ago


C. Store the data in an Amazon Aurora Serverless database. Serve the data through the Aurora Data API using an IAM user authorized with AWS
Identity and Access Management (IAM) and the AWS Secrets Manager ARN.
upvoted 1 times

" # AzureDP900 11 months, 2 weeks ago


C) Store the data in an Amazon Aurora Serverless database. Serve the data through the Aurora Data API using an IAM user authorized with AWS
Identity and Access Management (IAM) and the AWS Secrets Manager ARN.
upvoted 1 times

" # andylogan 1 year ago


It's C
upvoted 1 times

" # Kopa 1 year ago


going for C, as app code needs no change
upvoted 1 times

" # DerekKey 1 year ago


C is correct - https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/data-api.html#data-api.access
upvoted 2 times

" # tgv 1 year ago


CCC
---
The data is currently stored in a MySQL database running on-prem. Storing MySQL data in S3 doesn't sound good so B & D are out.
Aurora Data API "enables the SQL HTTP endpoint, a connectionless Web Service API for running SQL queries against this database. When the
SQL HTTP endpoint is enabled, you can also query your database from inside the RDS console (these features are free to use)."
upvoted 1 times

" # blackgamer 1 year ago


C for me
upvoted 1 times

" # vimgoru24 1 year ago


All of this answers are bad in real world, but for the purpose of the exam - C is the answer
upvoted 1 times

" # hk436 1 year, 1 month ago


My answer is D.
The team also needs to use existing Java application and SQL queries with as few changes as possible.
There is a requirement to be able to execute SQL queries. Athena provides the way!!
upvoted 2 times

" # MrCarter 1 year, 1 month ago


s3 through private link? i dont think so matey
upvoted 1 times

" # MrCarter 1 year ago


i stand corrected s3 and privatelink is a thing
upvoted 1 times

" # MrCarter 1 year ago


i stand corrected s3 and privatelink is a thing
upvoted 1 times

" # MrCarter 1 year ago


The team needs to keep analysis tools running locally while fetching data from the cloud. That is why it has to be C
upvoted 1 times

" # DerekKey 1 year ago


Wrong if you think it is not possible.
upvoted 1 times

" # Kukkuji 1 year, 1 month ago


Going with C.
upvoted 1 times

" # mustpassla 1 year, 1 month ago


A, unpredictable patterns with large spikes & less change.
upvoted 1 times

" # mustpassla 1 year, 1 month ago


Change to C, coz A use NLB
upvoted 1 times

" # Waiweng 1 year, 1 month ago


it's C


upvoted 2 times


Question #730 Topic 1

The following AWS Identity and Access Management (IAM) customer managed policy has been attached to an IAM user:

Which statement describes the access that this policy provides to the user?

A. The policy grants access to all Amazon S3 actions, including all actions in the prod-data S3 bucket

B. This policy denies access to all Amazon S3 actions, excluding all actions in the prod-data S3 bucket

C. This policy denies access to the Amazon S3 bucket and objects not having prod-data in the bucket name

D. This policy grants access to all Amazon S3 actions in the prod-data S3 bucket, but explicitly denies access to all other AWS services

Correct Answer: D

Community vote distribution


D (100%)

" # beebatov Highly Voted $ 1 year, 1 month ago


Answer: D

NotAction + NotResource
https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_notaction.html
https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_notresource.html
upvoted 12 times
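
Since the policy document itself is not shown on this page, the sketch below is only a hypothetical reconstruction consistent with answer D (an Allow scoped to the bucket plus an explicit Deny using NotAction), not the exact policy from the exam:

    import json
    import boto3

    # Hypothetical reconstruction: allow every S3 action on the prod-data bucket,
    # and explicitly deny anything that is not an S3 action.
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": "s3:*",
                "Resource": [
                    "arn:aws:s3:::prod-data",
                    "arn:aws:s3:::prod-data/*",
                ],
            },
            {
                "Effect": "Deny",
                "NotAction": "s3:*",
                "Resource": "*",
            },
        ],
    }

    boto3.client("iam").put_user_policy(
        UserName="example-user",
        PolicyName="prod-data-only",
        PolicyDocument=json.dumps(policy),
    )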

" # TechX Most Recent % 4 months, 1 week ago


Selected Answer: D
100% D
upvoted 1 times

" # yacin 8 months ago


D is correct
upvoted 1 times

" # jyrajan69 8 months, 1 week ago


Definitely C. The explicit deny is clearly for S3 and denies access to any bucket that is not Prod. There is nothing here that prevents access to
other AWS Services.
upvoted 2 times

" # challenger1 11 months ago


My Answer: D


upvoted 1 times
" # AzureDP900 11 months, 1 week ago
Selected Answer: D
D is correct for given scnerio!
upvoted 1 times

" # andylogan 1 year ago


It's D
upvoted 1 times

" # tgv 1 year ago


DDD
---
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with D
Trick question; you need to pay attention to the "NotAction" and "NotResource" clauses
upvoted 3 times

" # vimgoru24 1 year, 1 month ago


Today I learned there is a negate Not- prefix in policy statements 😅 D it is
upvoted 3 times

" # hk436 1 year, 1 month ago


D is my answer!!
upvoted 1 times

" # Waiweng 1 year, 1 month ago


it's D
upvoted 3 times

" # ExtHo 1 year, 1 month ago


D
https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_notaction.html
upvoted 3 times

" # gsw 1 year, 1 month ago


D seems correct
upvoted 3 times


Question #731 Topic 1

A company has implemented an ordering system using an event driven architecture. During initial testing, the system stopped processing orders.
Further log analysis revealed that one order message in an Amazon Simple Queue Service (Amazon SQS) standard queue was causing an error on
the backend and blocking all subsequent order messages. The visibility timeout of the queue is set to 30 seconds, and the backend processing
timeout is set to 10 seconds. A solutions architect needs to analyze faulty order messages and ensure that the system continues to process
subsequent messages.
Which step should the solutions architect take to meet these requirements?

A. Increase the backend processing timeout to 30 seconds to match the visibility timeout.

B. Reduce the visibility timeout of the queue to automatically remove the faulty message.

C. Configure a new SQS FIFO queue as a dead-letter queue to isolate the faulty messages.

D. Configure a new SQS standard queue as a dead-letter queue to isolate the faulty messages.

Correct Answer: D
Reference:
https://aws.amazon.com/blogs/compute/using-amazon-sqs-dead-letter-queues-to-control-message-failure/

Community vote distribution


D (100%)

" # Jaypdv Highly Voted $ 1 year, 1 month ago


D.
You need a dead-letter queue with a type that matches the queue. So a DLQ for a standard queue must be a standard queue. Hence D.
Ref: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-dead-letter-queues.html
upvoted 17 times

" # beebatov Highly Voted $ 1 year, 1 month ago


Answer: D

The dead-letter queue of a standard queue must also be a standard queue.


https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-dead-letter-queues.html
upvoted 6 times
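
As a reference for how a dead-letter queue is wired up, here is a minimal boto3 sketch; the queue names and maxReceiveCount are arbitrary example values:

    import json
    import boto3

    sqs = boto3.client("sqs")

    # Both queues must be the same type, so the DLQ is also a standard queue.
    orders_url = sqs.create_queue(QueueName="orders")["QueueUrl"]
    dlq_url = sqs.create_queue(QueueName="orders-dlq")["QueueUrl"]
    dlq_arn = sqs.get_queue_attributes(
        QueueUrl=dlq_url, AttributeNames=["QueueArn"]
    )["Attributes"]["QueueArn"]

    # After 3 failed receives, SQS moves the message to the DLQ so one faulty
    # order no longer blocks the backend from processing the rest.
    sqs.set_queue_attributes(
        QueueUrl=orders_url,
        Attributes={
            "RedrivePolicy": json.dumps(
                {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": "3"}
            )
        },
    )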

" # Blair77 Most Recent % 3 weeks ago


Selected Answer: D
+1 for DDD
upvoted 1 times

" # hilft 3 months, 1 week ago


D. DLQ
upvoted 1 times

" # cldy 11 months ago


D. Configure a new SQS standard queue as a dead-letter queue to isolate the faulty messages.
upvoted 1 times

" # AzureDP900 11 months ago


My answer is D
upvoted 1 times

" # andylogan 1 year ago


It's D
upvoted 1 times

" # tgv 1 year ago


DDD
---
dead-letter queue will do the job. FIFO is not needed here
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with D


upvoted 3 times
" # vimgoru24 1 year ago
D is the way you handle faulty messages in SQS
upvoted 3 times

" # hk436 1 year, 1 month ago


D is my answer
upvoted 1 times

" # Waiweng 1 year, 1 month ago


it's D
upvoted 4 times

" # gsw 1 year, 1 month ago


agree D
upvoted 2 times


Question #732 Topic 1

A large company has a business-critical application that runs in a single AWS Region. The application consists of multiple Amazon EC2 instances
and an Amazon
RDS Multi-AZ DB instance. The EC2 instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones.
A solutions architect is implementing a disaster recovery (DR) plan for the application. The solutions architect has created a pilot light application
deployment in a new Region, which is referred to as the DR Region. The DR environment has an Auto Scaling group with a single EC2 instance and
a read replica of the RDS DB instance.
The solutions architect must automate a failover from the primary application environment to the pilot light environment in the DR Region.
Which solution meets these requirements with the MOST operational efficiency?

A. Publish an application availability metric to Amazon CloudWatch in the DR Region from the application environment in the primary Region.
Create a CloudWatch alarm in the DR Region that is invoked when the application availability metric stops being delivered. Configure the
CloudWatch alarm to send a notification to an Amazon Simple Notification Service (Amazon SNS) topic in the DR Region. Add an email
subscription to the SNS topic that sends messages to the application owner. Upon notification, instruct a systems operator to sign in to the
AWS Management Console and initiate failover operations for the application.

B. Create a cron task that runs every 5 minutes by using one of the application's EC2 instances in the primary Region. Configure the cron task
to check whether the application is available. Upon failure, the cron task notifies a systems operator and attempts to restart the application
services.

C. Create a cron task that runs every 5 minutes by using one of the application's EC2 instances in the primary Region. Configure the cron task
to check whether the application is available. Upon failure, the cron task modifies the DR environment by promoting the read replica and by
adding EC2 instances to the Auto Scaling group.

D. Publish an application availability metric to Amazon CloudWatch in the DR Region from the application environment in the primary Region.
Create a CloudWatch alarm in the DR Region that is invoked when the application availability metric stops being delivered. Configure the
CloudWatch alarm to send a notification to an Amazon Simple Notification Service (Amazon SNS) topic in the DR Region. Use an AWS Lambda
function that is invoked by Amazon SNS in the DR Region to promote the read replica and to add EC2 instances to the Auto Scaling group.

Correct Answer: A

Community vote distribution


D (100%)

" # gsw Highly Voted $ 1 year, 1 month ago


D makes more sense
upvoted 17 times

" # Jaypdv Highly Voted $ 1 year, 1 month ago


Voting D.
upvoted 8 times
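
A rough sketch of what the Lambda function in option D might do when the SNS notification fires is below; the replica identifier, Auto Scaling group name, and capacity numbers are hypothetical placeholders:

    import boto3

    # Hypothetical names for the pilot-light resources in the DR Region.
    REPLICA_ID = "app-db-replica"
    ASG_NAME = "app-dr-asg"

    def handler(event, context):
        # Promote the read replica to a standalone, writable DB instance.
        boto3.client("rds").promote_read_replica(DBInstanceIdentifier=REPLICA_ID)

        # Scale the pilot-light Auto Scaling group up to full capacity.
        boto3.client("autoscaling").update_auto_scaling_group(
            AutoScalingGroupName=ASG_NAME,
            MinSize=2,
            MaxSize=6,
            DesiredCapacity=4,
        )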

" # linuxmaster007 Most Recent % 1 month, 1 week ago


Does this question came in real exam ?
upvoted 1 times

" # gnic 2 months, 2 weeks ago


Selected Answer: D
It's D. "Automate the failover"
I would like to know why the right answer is not updated....
upvoted 1 times

" # bobsmith2000 5 months, 3 weeks ago


Selected Answer: D
Because of "automate the failover"
upvoted 1 times

" # Yamchi 7 months ago


Neither of them.... the answer is Route53
if have to choose, I will go with DDDDDD
upvoted 5 times


" # cldy 11 months ago


D. Publish an application availability metric to Amazon CloudWatch in the DR Region from the application environment in the primary Region.
Create a CloudWatch alarm in the DR Region that is invoked when the application availability metric stops being delivered. Configure the
CloudWatch alarm to send a notification to an Amazon Simple Notification Service (Amazon SNS) topic in the DR Region. Use an AWS Lambda
function that is invoked by Amazon SNS in the DR Region to promote the read replica and to add EC2 instances to the Auto Scaling group.
upvoted 1 times

" # AzureDP900 11 months ago


D is right answer
upvoted 1 times

" # andylogan 1 year ago


It's D
upvoted 1 times

" # tgv 1 year ago


DDD
---
Involving a system operator to do stuff when he receives a notification defeats the purpose of "automate the failover", so A & B are out
Between cron task every 5 minute and AWS native services always go for the latter
upvoted 2 times

" # blackgamer 1 year ago


Answer is D
upvoted 1 times

" # nisoshabangu 1 year ago


Answer is D, I agree with vimgoru24 in real life I would probably do A.
upvoted 1 times

" # Kopa 1 year ago


Im going for D
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with D
upvoted 1 times

" # vimgoru24 1 year ago


On exam I’d answer D.
In real life I’d do A.
upvoted 2 times

" # linuxmaster007 1 month, 1 week ago


Does this question came in real exam ?
upvoted 1 times

" # hk436 1 year ago


D is my answer!!
upvoted 1 times

" # mustpassla 1 year ago


D is the ans. C also work but expensive comparing with D.
upvoted 1 times


Question #733 Topic 1

An education company is running a web application used by college students around the world. The application runs in an Amazon Elastic
Container Service
(Amazon ECS) cluster in an Auto Scaling group behind an Application Load Balancer (ALB). A system administrator detects a weekly spike in the
number of failed login attempts, which overwhelm the application's authentication service. All the failed login attempts originate from about 500
different IP addresses that change each week. A solutions architect must prevent the failed login attempts from overwhelming the authentication
service.
Which solution meets these requirements with the MOST operational efficiency?

A. Use AWS Firewall Manager to create a security group and security group policy to deny access from the IP addresses

B. Create an AWS WAF web ACL with a rate-based rule, and set the rule action to Block. Connect the web ACL to the ALB

C. Use AWS Firewall Manager to create a security group and security group policy to allow access only to specific CIDR ranges

D. Create an AWS WAF web ACL with an IP set match rule, and set the rule action to Block. Connect the web ACL to the ALB

Correct Answer: A
Reference:
https://docs.aws.amazon.com/waf/latest/developerguide/security-group-policies.html

Community vote distribution


B (100%)

" # Jaypdv Highly Voted $ 1 year, 1 month ago


Going with B.
Rate-base rule in the WAF
https://docs.aws.amazon.com/waf/latest/developerguide/waf-rule-statement-type-rate-based.html
upvoted 22 times
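
For reference, a rate-based rule as described in option B can be created and attached to the ALB roughly like this; the rule limit, names, and ALB ARN are made-up example values:

    import boto3

    wafv2 = boto3.client("wafv2")

    # Block any single IP that exceeds ~1000 requests per 5-minute window.
    acl = wafv2.create_web_acl(
        Name="login-protection",
        Scope="REGIONAL",                       # REGIONAL scope is what an ALB uses
        DefaultAction={"Allow": {}},
        Rules=[
            {
                "Name": "rate-limit-per-ip",
                "Priority": 0,
                "Statement": {
                    "RateBasedStatement": {"Limit": 1000, "AggregateKeyType": "IP"}
                },
                "Action": {"Block": {}},
                "VisibilityConfig": {
                    "SampledRequestsEnabled": True,
                    "CloudWatchMetricsEnabled": True,
                    "MetricName": "rate-limit-per-ip",
                },
            }
        ],
        VisibilityConfig={
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "login-protection",
        },
    )

    wafv2.associate_web_acl(
        WebACLArn=acl["Summary"]["ARN"],
        ResourceArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/example/abc",
    )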

" # Nidhin07 Most Recent % 5 days, 20 hours ago


The key here is IP Address changes every week, so that leaves the only suitable answer to B
upvoted 1 times

" # KengL 5 months ago


D, coz rated rule only checking 5 minutes which wasn't mentioned in the question.
upvoted 1 times

" # tkanmani76 9 months, 2 weeks ago


A is right - https://aws.amazon.com/blogs/security/automatically-block-suspicious-traffic-with-aws-network-firewall-and-amazon-guardduty/
upvoted 1 times

" # gnic 2 months, 1 week ago


which IP you will block? they change every week...
upvoted 1 times

" # AMKazi 9 months, 2 weeks ago


Ans: B - only rate base rule can be used in this situation.
https://docs.aws.amazon.com/waf/latest/developerguide/waf-rule-statement-type-rate-based.html
upvoted 1 times

" # cldy 10 months, 1 week ago


B is correct.
upvoted 1 times

" # AzureDP900 11 months ago


B is right ..
The IP set match statement inspects the IP address of a web request against a set of IP addresses and address ranges. Use this to allow or
block web requests based on the IP addresses that the requests originate from. By default, AWS WAF uses the IP address from the web request
origin, but you can configure the rule to use an HTTP header like X-Forwarded-For instead.
https://docs.aws.amazon.com/waf/latest/developerguide/waf-rule-statement-type-ipset-match.html
https://docs.aws.amazon.com/waf/latest/developerguide/waf-rule-statement-type-rate-based.html
upvoted 1 times

" # AzureDP900 11 months, 1 week ago


I will go with B only!


upvoted 1 times
" # acloudguru 11 months, 1 week ago
Selected Answer: B
B,WAF is designed for this kind of DDOS
upvoted 2 times

" # acloudguru 11 months, 1 week ago


hope i can have this easy question in my exam
upvoted 2 times

" # andylogan 1 year ago


It's B
upvoted 1 times

" # tgv 1 year ago


BBB
---
This is something that you can do with WAF and the fact that the IP addresses are changing you cannot set an IP set match rule
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with B
upvoted 3 times

" # WhyIronMan 1 year ago


https://docs.aws.amazon.com/waf/latest/developerguide/waf-rule-statement-type-rate-based.html
upvoted 2 times

" # blackgamer 1 year ago


Yes , it is B
upvoted 1 times

" # vimgoru24 1 year ago


B.

You'd have a hell of a burden manually blacklisting 500+ IPs every week
upvoted 4 times

" # SJain50 1 year ago


Security groups cannot explicitly deny traffic.
So going with B
upvoted 2 times

" # KittuCheeku 1 year ago


B is the right answer
upvoted 1 times

" # hk436 1 year, 1 month ago


B is my answer!!
upvoted 1 times

" # mustpassla 1 year, 1 month ago


B, other not make sense.
upvoted 1 times


Question #734 Topic 1

A company needs to store and process image data that will be uploaded from mobile devices using a custom mobile app. Usage peaks between 8
AM and 5 PM on weekdays, with thousands of uploads per minute. The app is rarely used at any other time. A user is notified when image
processing is complete.
Which combination of actions should a solutions architect take to ensure image processing can scale to handle the load? (Choose three.)

A. Upload files from the mobile software directly to Amazon S3. Use S3 event notifications to create a message in an Amazon MQ queue.

B. Upload files from the mobile software directly to Amazon S3. Use S3 event notifications to create a message in an Amazon Simple Queue
Service (Amazon SQS) standard queue.

C. Invoke an AWS Lambda function to perform image processing when a message is available in the queue.

D. Invoke an S3 Batch Operations job to perform image processing when a message is available in the queue.

E. Send a push notification to the mobile app by using Amazon Simple Notification Service (Amazon SNS) when processing is complete.

F. Send a push notification to the mobile app by using Amazon Simple Email Service (Amazon SES) when processing is complete.

Correct Answer: BEF

Community vote distribution


BCE (100%)

" # Jaypdv Highly Voted $ 1 year, 1 month ago


BCE
Upload to S3, process with lambda, notify with SNS
upvoted 24 times
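
A minimal sketch of the Lambda piece (C) that glues the SQS queue to the SNS push notification (E) could look like the following; the topic ARN and the image-processing step are hypothetical stand-ins:

    import json
    import boto3

    # Hypothetical SNS topic that the mobile app's platform endpoints subscribe to.
    TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:image-processing-done"
    sns = boto3.client("sns")

    def handler(event, context):
        # Lambda is invoked with a batch of SQS messages; each message body is
        # the S3 event notification that was dropped onto the queue.
        for record in event["Records"]:
            s3_event = json.loads(record["body"])
            for s3_record in s3_event.get("Records", []):
                bucket = s3_record["s3"]["bucket"]["name"]
                key = s3_record["s3"]["object"]["key"]

                # ... image processing would happen here ...

                sns.publish(
                    TopicArn=TOPIC_ARN,
                    Message=f"Processing complete for s3://{bucket}/{key}",
                )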

" # ExtHo Highly Voted $ 1 year, 1 month ago


S3 Batch Operations job seems not relevant so BCE
https://docs.aws.amazon.com/AmazonS3/latest/userguide/batch-ops-basics.html
upvoted 10 times

" # Ni_yot Most Recent % 2 weeks, 6 days ago


Selected Answer: BCE
Should BCE. The only set of ans that work
upvoted 1 times

" # Blair77 3 weeks ago


Selected Answer: BCE
+1 for BCE !
upvoted 1 times

" # kangtamo 4 months, 1 week ago


Selected Answer: BCE
BCE for sure.
upvoted 2 times

" # cldy 11 months ago


B. Upload files from the mobile software directly to Amazon S3. Use S3 event notifications to create a message in an Amazon Simple Queue
Service (Amazon SQS) standard queue.
C. Invoke an AWS Lambda function to perform image processing when a message is available in the queue.
E. Send a push notification to the mobile app by using Amazon Simple Notification Service (Amazon SNS) when processing is complete.
upvoted 1 times

" # AzureDP900 11 months ago


BCE is correct
upvoted 1 times

" # acloudguru 11 months, 2 weeks ago


i hope i can have it in my exam
upvoted 2 times

" # andylogan 1 year ago


It's B C E


upvoted 1 times
" # oppai1232 1 year ago
Why BCE instead of BDE?
Lambda times out at 15 mins, what if it needed to take more than that?
upvoted 1 times

" # oppai1232 1 year ago


ohh I read it as AWS Batch lol
S3 Batch Operations only performs operations such as Copy, Invoke AWS Lambda function, and Restore on millions or billions of objects.
upvoted 4 times

" # oppai1232 1 year ago


will go for BCE as well
upvoted 1 times

" # Kopa 1 year ago


Im for B,C,E
upvoted 1 times

" # tgv 1 year ago


BBB CCC EEE
---
upvoted 1 times

" # blackgamer 1 year ago


It is BCE
upvoted 1 times

" # Kopa 1 year ago


BCE, very corelated to each other and makes sense
upvoted 3 times

" # WhyIronMan 1 year ago


I'll go with B, C, E
upvoted 1 times

" # vimgoru24 1 year ago


BCE classic
upvoted 1 times

" # hk436 1 year ago


BCE is my answer!!
upvoted 1 times


Question #735 Topic 1

A company's processing team has an AWS account with a production application. The application runs on Amazon EC2 instances behind a
Network Load
Balancer (NLB). The EC2 instances are hosted in private subnets in a VPC in the eu-west-1 Region. The VPC was assigned the CIDR block of
10.0.0.0/16. The billing team recently created a new AWS account and deployed an application on EC2 instances that are hosted in private
subnets in a VPC in the eu-central-1
Region. The new VPC is assigned the CIDR block of 10.0.0.0/16.
The processing application needs to securely communicate with the billing application over a proprietary TCP port.
What should a solutions architect do to meet this requirement with the LEAST amount of operational effort?

A. In the billing team's account, create a new VPC and subnets in eu-central-1 that use the CIDR block of 192.168.0.0/16. Redeploy the
application to the new subnets. Configure a VPC peering connection between the two VPCs.

B. In the processing team's account, add an additional CIDR block of 192.168.0.0/16 to the VPC in eu-west-1. Restart each of the EC2
instances so that they obtain a new IP address. Configure an inter-Region VPC peering connection between the two VPCs.

C. In the billing team's account, create a new VPC and subnets in eu-west-1 that use the CIDR block of 192.168.0.0/16. Create a VPC endpoint
service (AWS PrivateLink) in the processing team's account and an interface VPC endpoint in the new VPC. Configure an inter-Region VPC
peering connection in the billing team's account between the two VPCs.

D. In each account, create a new VPC with the CIDR blocks of 192.168.0.0/16 and 172.16.0.0/16. Create inter-Region VPC peering
connections between the billing team's VPCs and the processing team's VPCs. Create gateway VPC endpoints to allow traffic to route between
the VPCs.

Correct Answer: A

Community vote distribution


C (100%)

" # AzureDP900 Highly Voted $ 10 months, 2 weeks ago


C is correct answer , This question is part of AWS official practice test.
upvoted 10 times

" # azkzmk 8 months, 2 weeks ago


You are right!
upvoted 1 times

" # Jaypdv Highly Voted $ 1 year, 1 month ago


A. seems right
upvoted 10 times

" # Guoxian Most Recent % 1 month, 4 weeks ago


This is very interesting. I would say B is less trouble than A because instead of creating a brand new VPC, AWS now allows adding an additional
CIDR. Since both solutions require redeployment of the EC2 instances, I would argue B is possibly a better option than A. I have also seen answers
ticking C. Honestly, I am not sure it will work, because we would still need a set of new EC2 instances in the new CIDR to run the service. At this point in
time, there are no EC2 services in the new CIDR.
upvoted 1 times

" # Enigmaaaaaa 3 months, 4 weeks ago


Selected Answer: C
For me it's C.
A - can work, but it's not the MINIMUM amount of operational work for sure.
The answer should be C, without redeploying the whole solution - the NLB is already configured, you just need to create the service endpoint and inter-Region VPC peering,
since PrivateLink is a regional service and it needs to be accessed from another region.
So C will do it - 1 service endpoint in the other region + inter-Region VPC peering to it
https://aws.amazon.com/about-aws/whats-new/2018/10/aws-privatelink-now-supports-access-over-inter-region-vpc-peering/
upvoted 1 times
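
A rough boto3 outline of option C's moving parts is shown below; the NLB ARN, VPC, subnet, and security group IDs are placeholders, and the inter-Region peering between the new eu-west-1 VPC and the billing VPC is assumed to be created separately:

    import boto3

    ec2 = boto3.client("ec2", region_name="eu-west-1")

    # 1. In the processing account: expose the existing NLB as an endpoint service.
    service = ec2.create_vpc_endpoint_service_configuration(
        AcceptanceRequired=False,
        NetworkLoadBalancerArns=[
            "arn:aws:elasticloadbalancing:eu-west-1:111111111111:loadbalancer/net/processing/abc"
        ],
    )["ServiceConfiguration"]

    # 2. With the billing account's credentials: create an interface endpoint to
    #    that service inside the new non-overlapping 192.168.0.0/16 VPC in eu-west-1.
    ec2.create_vpc_endpoint(
        VpcEndpointType="Interface",
        VpcId="vpc-0new192vpc",
        ServiceName=service["ServiceName"],
        SubnetIds=["subnet-0aaa", "subnet-0bbb"],
        SecurityGroupIds=["sg-0ccc"],
    )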

" # aandc 4 months, 1 week ago


C: redeploy the application means operational work
upvoted 1 times

" # guillmichel 8 months ago


A: redeploy the application -> operational work

C: just declare the PrivateLink + Interface endpoint (using the existing NLB). Less work
upvoted 4 times
" # jyrajan69 8 months, 2 weeks ago
3 factors in this question, first it should be the least amount of effort, then there is the NLB and the need for secure connection. All of this can be
achieved by A, no issues with NLB based on the following link (https://aws.amazon.com/about-aws/whats-new/2018/10/network-load-balancer-
now-supports-inter-region-vpc-peering/). C is way more complicated and not required
upvoted 1 times

" # AzureDP900 11 months ago


A seems right answer based on scenario.
upvoted 2 times

" # AzureDP900 10 months, 2 weeks ago


Changing my answer to C after attempting AWS official practice test
upvoted 1 times

" # Viper57 1 year ago


It's weird that some people think its option C. Here are the reason that it is wrong -

1. If it is using a VPC endpoint, why is a peering connection necessary? It can directly connect to the application via the endpoint so the extra
VPC and peering connection is an unnecessary step
2. 'Inter region peering' is enabled by default for all VPC peering connections so there is no special type of 'inter region peering' connection
3. The order is wrong. The processing account needs to access the billing application. So the VPC endpoint service should be created in the
Billing teams account, and the interface endpoint created in the processing account as the service provider.

A works and is much simpler.


upvoted 6 times

" # student22 1 year ago


C
---
Private Link
upvoted 2 times

" # nisoshabangu 1 year ago


C for me, application is behind an NLB, best way to achieve this is Privatelinks.
upvoted 2 times

" # andylogan 1 year ago


It's A
upvoted 1 times

" # Goram113 1 year ago


https://aws.amazon.com/about-aws/whats-new/2018/10/aws-privatelink-now-supports-access-over-inter-region-vpc-peering/ - so I think this is
C. In such approach you don't need to make any changes in application related to moving ec2 to non overlapping subnets.
upvoted 2 times

" # DerekKey 1 year ago


You will need inter-region peering.
A wrong - doesn't have it
B wrong - will not work
C correct - VPC inter-region peering and VPC endpoint
D wrong - doesn't make sense
upvoted 2 times

" # johnnsmith 1 year ago


C doesn't work. The processing application needs to communicate with the billing application. If the processing application is behind an endpoint,
how can it communicate with the billing application? Peering can be between any VPCs. You don't need to create a special inter-region peering
type.
upvoted 3 times

" # tgv 1 year ago


AAA
---
upvoted 1 times

" # Suresh108 1 year ago


i see many selected option "A"
question has eu-west-1 where answer A has below eu-central-1
did anyone notice?
upvoted 2 times

" # Suresh108 1 year ago


disregard. mis read first. going with AAAA


upvoted 1 times

" # Suresh108 10 months, 1 week ago


after reading this again and again going for CCCCCC

https://aws.amazon.com/about-aws/whats-new/2018/10/aws-privatelink-now-supports-access-over-inter-region-vpc-peering/
upvoted 1 times

" # nisoshabangu 1 year ago


A and C both work, A is less effort .
upvoted 1 times


Question #736 Topic 1

A company that is developing a mobile game is making game assets available in two AWS Regions. Game assets are served from a set of Amazon
EC2 instances behind an Application Load Balancer (ALB) in each Region. The company requires game assets to be fetched from the closest
Region. If game assets become unavailable in the closest Region, they should be fetched from the other Region.
What should a solutions architect do to meet these requirements?

A. Create an Amazon CloudFront distribution. Create an origin group with one origin for each ALB. Set one of the origins as primary.

B. Create an Amazon Route 53 health check for each ALB. Create a Route 53 failover routing record pointing to the two ALBs. Set the Evaluate
Target Health value to Yes.

C. Create two Amazon CloudFront distributions, each with one ALB as the origin. Create an Amazon Route 53 failover routing record pointing
to the two CloudFront distributions. Set the Evaluate Target Health value to Yes.

D. Create an Amazon Route 53 health check for each ALB. Create a Route 53 latency alias record pointing to the two ALBs. Set the Evaluate
Target Health value to Yes.

Correct Answer: D

Community vote distribution


D (100%)

" # Jaypdv Highly Voted $ 1 year, 1 month ago


D. is valid
upvoted 21 times

" # ExtHo 1 year, 1 month ago


Supporting Jaypdv Yes It's D
Failover routing policy – Use when you want to configure active-passive failover.

Latency routing policy – Use when you have resources in multiple AWS Regions and you want to route traffic to the region that provides the
best latency.

https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html
upvoted 11 times
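
For context, the latency-based records in option D are just two records that share a name but carry different Region values; a minimal boto3 sketch follows, with hypothetical zone ID, domain, and ALB values:

    import boto3

    route53 = boto3.client("route53")

    # Two alias records with the same name: Route 53 answers with whichever
    # Region gives the caller the lowest latency, and EvaluateTargetHealth
    # lets it fail over to the other Region when an ALB is unhealthy.
    changes = []
    for region, alb_dns, alb_zone in [
        ("us-east-1", "alb-use1.example.elb.amazonaws.com", "Z35SXDOTRQ7X7K"),
        ("eu-west-1", "alb-euw1.example.elb.amazonaws.com", "Z32O12XQLNTSW2"),
    ]:
        changes.append({
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "assets.example.com",
                "Type": "A",
                "SetIdentifier": region,
                "Region": region,
                "AliasTarget": {
                    "HostedZoneId": alb_zone,
                    "DNSName": alb_dns,
                    "EvaluateTargetHealth": True,
                },
            },
        })

    route53.change_resource_record_sets(
        HostedZoneId="Z0HYPOTHETICAL",
        ChangeBatch={"Changes": changes},
    )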

" # junehc Most Recent % 8 months, 1 week ago


Can someone please help me to understand why C is wrong?
upvoted 1 times

" # user0001 5 months, 2 weeks ago


because failover routing
upvoted 1 times

" # futen0326 8 months, 3 weeks ago


Selected Answer: D
Answer is D. An alias record can have more than 1 entry, contrary to what people are saying here.
upvoted 1 times

" # vbal 10 months, 2 weeks ago


The CloudFront origin-group options are confusing for sure, but the answer is D, because with CloudFront the request won't go to the nearest Region,
as every request first lands on the primary origin.
upvoted 1 times

" # AzureDP900 11 months ago


If the nearest Region's game assets become unavailable, they should be retrieved from the other Region. This is the key to answering this question. This
leads me to D as the right answer.
upvoted 1 times

" # andylogan 1 year ago


It's D
upvoted 1 times

" # DerekKey 1 year ago


Either I don't know how it works or all of you are wrong.
If you want such sophisticated routing one of the answers should mention Routing Policy. None of them is.

A correct - CloudFront: create an origin group with two origins: a primary and a secondary. If the primary origin is unavailable or returns specific
HTTP response status codes CloudFront automatically switches to the secondary origin
B wrong - "Create a Route 53 failover routing record pointing to the two ALBs" - you have to set failover in each Route 53 record (each ALB) as
Primary or Secondary
C wrong - "Create an Amazon Route 53 failover routing record pointing to the two CloudFront distributions." - same as above
D wrong - "Create a Route 53 latency alias record pointing to the two ALBs" - alias can use only one destination
upvoted 2 times

" # DerekKey 1 year ago


Additionally, for "company requires game assets to be fetched from the closest Region" you would need geoproximity or geolocation.
CloudFront will do it for each request based on requestor IP.
upvoted 1 times

" # johnnsmith 1 year ago


This is a badly worded question. A is wrong because "The company requires game assets to be fetched from the closest Region". With A,
you always fetch from one ALB. D is wrong because a latency alias record can't point to two ALBs. If you think "The company requires
game assets to be fetched from the closest Region" really means fetching from the closest region, A is correct. If you think a latency alias record
really means two records, D is correct.
upvoted 1 times
" # tgv 1 year ago
DDD
---
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with D
upvoted 1 times

" # vimgoru24 1 year ago


D. The proximity requirement rules out other options
upvoted 2 times

" # hk436 1 year, 1 month ago


D is my answer!!
upvoted 1 times

" # mustpassla 1 year, 1 month ago


D for sure
upvoted 1 times

" # Waiweng 1 year, 1 month ago


it's D
upvoted 2 times

" # beebatov 1 year, 1 month ago


Answer: D

Latency routing for this use-case (having active resources in multiple regions)
upvoted 2 times

" # gsw 1 year, 1 month ago


must be B
upvoted 2 times


Question #737 Topic 1

A large company is running a popular web application. The application runs on several Amazon EC2 Linux instances in an Auto Scaling group in a
private subnet.
An Application Load Balancer is targeting the instances in the Auto Scaling group in the private subnet. AWS Systems Manager Session Manager
is configured, and AWS Systems Manager Agent is running on all the EC2 instances.
The company recently released a new version of the application. Some EC2 instances are now being marked as unhealthy and are being
terminated. As a result, the application is running at reduced capacity. A solutions architect tries to determine the root cause by analyzing Amazon
CloudWatch logs that are collected from the application, but the logs are inconclusive.
How should the solutions architect gain access to an EC2 instance to troubleshoot the issue?

A. Suspend the Auto Scaling group's HealthCheck scaling process. Use Session Manager to log in to an instance that is marked as unhealthy.

B. Enable EC2 instance termination protection. Use Session Manager to log in to an instance that is marked as unhealthy.

C. Set the termination policy to OldestInstance on the Auto Scaling group. Use Session Manager to log in to an instance that is marked an
unhealthy.

D. Suspend the Auto Scaling group's Terminate process. Use Session Manager to log in to an instance that is marked as unhealthy.

Correct Answer: A

Community vote distribution


D (60%) A (40%)

" # cen007 3 months, 2 weeks ago


Selected Answer: D
D is more suitable since the EC2 instances are being terminated.
upvoted 2 times

" # skyblue07 4 months, 2 weeks ago


Selected Answer: A
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-suspend-resume-processes.html
upvoted 2 times

" # cldy 11 months ago


D. Suspend the Auto Scaling group's Terminate process. Use Session Manager to log in to an instance that is marked as unhealthy.
upvoted 2 times

" # AzureDP900 11 months ago


D is correct, I agree with acloudguru comments!
upvoted 2 times

" # acloudguru 11 months, 2 weeks ago


Selected Answer: D
Either suspending ReplaceUnhealthy or Terminate will work; only Terminate (option D) is offered here. https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-suspend-resume-processes.html
upvoted 1 times
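
The suspend step in option D is a one-liner against the Auto Scaling API; a quick sketch (the group name is a placeholder) looks like this:

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Stop the ASG from terminating instances while the unhealthy one is inspected
    # through Session Manager; ReplaceUnhealthy could be suspended the same way.
    autoscaling.suspend_processes(
        AutoScalingGroupName="web-app-asg",
        ScalingProcesses=["Terminate"],
    )

    # ... troubleshoot via Session Manager, then restore normal behaviour ...
    autoscaling.resume_processes(
        AutoScalingGroupName="web-app-asg",
        ScalingProcesses=["Terminate"],
    )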

" # GeeBeeEl 12 months ago


See https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-suspend-resume-processes.html it shows
For Amazon EC2 Auto Scaling, there are two primary process types: Launch and Terminate. The Launch process adds a new Amazon EC2
instance to an Auto Scaling group, increasing its capacity. The Terminate process removes an Amazon EC2 instance from the group, decreasing
its capacity.
HealthCheck process for EC2 autoscaling is not a primary process! It is a process along with the following
AddToLoadBalancer AlarmNotification AZRebalance HealthCheck InstanceRefresh ReplaceUnhealthy ScheduledActions
From the requirements, Some EC2 instances are now being marked as unhealthy and are being terminated. Application is running at reduced
capacity not because instances are marked unhealthy but because they are being terminated.
I go with D because it is the only option that can prevent application from running at reduced capacity
Option D is the answer
upvoted 1 times

" # Viper57 1 year ago


D is correct.

You can suspend the 'ReplaceUnhealthy' process to prevent unhealthy instances from being terminated.


See https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-suspend-resume-processes.html
upvoted 2 times

" # andylogan 1 year ago


It's D
upvoted 1 times

" # DerekKey 1 year ago


A wrong - impossible
B wrong - even with termination protection and instance scale-in protection enabled, instance will be terminated if a health check determines that
an instance is unhealthy
C wrong - stupid
D correct - but not advised
upvoted 1 times

" # tgv 1 year ago


DDD
---
upvoted 1 times

" # blackgamer 1 year ago


D for me
upvoted 1 times

" # sergioandreslq 1 year ago


A - Incorrect: disabling the health check won't tell us which instances are healthy or unhealthy; remember we need to review the unhealthy ones and identify
which instances are in that status.
B: It is correct. Termination protection prevents Auto Scaling from terminating protected instances, which is helpful for running deep diagnostics on
the EC2 instances and identifying the root cause of why they are unhealthy and being terminated.
C: Incorrect: there is no relation between unhealthy and oldest instance.
D: You can't suspend the Auto Scaling Terminate process.
upvoted 1 times

" # sergioandreslq 1 year ago


Change From B to D: The way to suspend the Scaling Termination process is using ASG "Instance Scale-in Protection": With the new control
that we are giving you today, you can protect certain instances from termination.
https://aws.amazon.com/blogs/aws/new-instance-protection-for-auto-scaling/
upvoted 1 times

" # lifebegins 1 year ago


How can we suspend the Auto Scaling termination process itself? We can suspend the processes that are running inside the instances.

I think it should be Option 'B'.

https://aws.amazon.com/blogs/aws/new-instance-protection-for-auto-scaling/
upvoted 1 times

" # Viper57 1 year ago


Read this guide, it clearly explains how to suspend the termination process on ASGs.

https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-suspend-resume-processes.html

Answer is D.
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with D
upvoted 3 times

" # Waiweng 1 year ago


it's D
upvoted 3 times

" # beebatov 1 year ago


Answer: D

https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-suspend-resume-processes.html#choosing-suspend-resume
upvoted 4 times

" # GeeBeeEl 12 months ago


Your link actually suggests option A
upvoted 1 times

" # GeeBeeEl 12 months ago

Actually your link also suggests option D and now looking at it, option D is the answer -- see my separate post for the reasoning
upvoted 1 times
" # gsw 1 year, 1 month ago
AWS actually suggests you should put your instances into the standby state to troubleshoot failure but that isn't an option here
upvoted 1 times

" # gsw 1 year, 1 month ago


no i was wrong it is option D
upvoted 2 times

Question #738 Topic 1

A software company hosts an application on AWS with resources in multiple AWS accounts and Regions. The application runs on a group of
Amazon EC2 instances in an application VPC located in the us-east-1 Region with an IPv4 CIDR block of 10.10.0.0/16. In a different AWS account,
a shared services VPC is located in the us-east-2 Region with an IPv4 CIDR block of 10.10.10.0/24. When a cloud engineer uses AWS
CloudFormation to attempt to peer the application
VPC with the shared services VPC, an error message indicates a peering failure.
Which factors could cause this error? (Choose two.)

A. The IPv4 CIDR ranges of the two VPCs overlap

B. The VPCs are not in the same Region

C. One or both accounts do not have access to an Internet gateway

D. One of the VPCs was not shared through AWS Resource Access Manager

E. The IAM role in the peer accepter account does not have the correct permissions

Correct Answer: AE

Community vote distribution


AE (100%)

" # gsw Highly Voted $ 1 year, 1 month ago


i agree A and E
upvoted 17 times
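
To make both failure causes concrete, this is roughly what the peering request looks like outside of CloudFormation; a boto3 sketch with placeholder VPC IDs and account ID (note the question's CIDRs overlap, so the peering cannot become active until the ranges are changed):

    import boto3

    # Requester side: application VPC in us-east-1.
    requester = boto3.client("ec2", region_name="us-east-1")
    resp = requester.create_vpc_peering_connection(
        VpcId="vpc-0aaaaaaaaaaaaaaaa",        # application VPC (10.10.0.0/16)
        PeerVpcId="vpc-0bbbbbbbbbbbbbbbb",    # shared services VPC (10.10.10.0/24)
        PeerOwnerId="222222222222",           # shared services account
        PeerRegion="us-east-2",
    )
    pcx_id = resp["VpcPeeringConnection"]["VpcPeeringConnectionId"]

    # Accepter side: must run with credentials for a role in the peer account,
    # which is where a missing IAM permission (option E) shows up.
    accepter = boto3.client("ec2", region_name="us-east-2")
    accepter.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)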

" # Netaji Most Recent % 7 months ago


https://aws.amazon.com/about-aws/whats-new/2017/11/announcing-support-for-inter-region-vpc-peering/
upvoted 1 times

" # jj22222 7 months, 1 week ago


Selected Answer: AE
these look right: a and e
upvoted 1 times

" # pal40sg 8 months, 1 week ago


Selected Answer: AE
A&E - https://aws.amazon.com/premiumsupport/knowledge-center/cloudformation-vpc-peering-error/
upvoted 1 times

" # tkanmani76 9 months, 2 weeks ago


A&E - https://aws.amazon.com/premiumsupport/knowledge-center/cloudformation-vpc-peering-error/
upvoted 1 times

" # AzureDP900 11 months ago


A,E sounds good to me.
upvoted 1 times

" # AzureDP900 11 months ago


A,E sound good to me
upvoted 1 times

" # andylogan 1 year ago


It's A E
upvoted 1 times

" # tgv 1 year ago


AAA EEE
---
No need for the VPCs to be in the same region
No need for an internet gateway
No need for the VPCs to be shared using RAM
upvoted 4 times

" # WhyIronMan 1 year ago

I'll go with A, E
upvoted 1 times
" # vimgoru24 1 year ago
A,E is way to go
upvoted 1 times

" # hk436 1 year ago


A & E is my answer!!
upvoted 1 times

" # Waiweng 1 year ago


it's A and E
upvoted 3 times

" # digimaniac 1 year, 1 month ago


AD
A is obvious, D is documented below
Shared VPCs prerequisites
You must enable resource sharing from the management account for your organization.
https://docs.aws.amazon.com/vpc/latest/userguide/vpc-sharing.html#vpc-share-prerequisites
upvoted 1 times

" # beebatov 1 year ago


its A & E. Cloud engineer uses "AWS CloudFormation" to attempt to peer the application.
https://aws.amazon.com/premiumsupport/knowledge-center/cloudformation-vpc-peering-error/
upvoted 2 times

" # digimaniac 1 year ago


it is shared service VPC, I read it wrong.
AE
upvoted 1 times

" # beebatov 1 year, 1 month ago


Answer: C & E

https://aws.amazon.com/premiumsupport/knowledge-center/cloudformation-vpc-peering-error/
upvoted 1 times

" # beebatov 1 year, 1 month ago


Sorry it was a Typo, Answer: A & E
upvoted 2 times

Question #739 Topic 1

A company that develops consumer electronics with offices in Europe and Asia has 60 TB of software images stored on premises in Europe. The
company wants to transfer the images to an Amazon S3 bucket in the ap-northeast-1 Region. New software images are created daily and must be
encrypted in transit. The company needs a solution that does not require custom development to automatically transfer all existing and new
software images to Amazon S3.
What is the next step in the transfer process?

A. Deploy an AWS DataSync agent and configure a task to transfer the images to the S3 bucket

B. Configure Amazon Kinesis Data Firehose to transfer the images using S3 Transfer Acceleration

C. Use an AWS Snowball device to transfer the images with the S3 bucket as the target

D. Transfer the images over a Site-to-Site VPN connection using the S3 API with multipart upload

Correct Answer: A

Community vote distribution


A (100%)

" # Chuky64 4 months, 4 weeks ago


Selected Answer: A
AWS DataSync for current and new objects.
upvoted 2 times

" # bobsmith2000 5 months, 3 weeks ago


Selected Answer: A
"current and new objects" = DataSync
upvoted 3 times

" # cldy 11 months ago


A. Deploy an AWS DataSync agent and configure a task to transfer the images to the S3 bucket
upvoted 1 times

" # AzureDP900 11 months ago


A is right
https://docs.aws.amazon.com/datasync/latest/userguide/what-is-datasync.html
upvoted 1 times

" # andylogan 1 year ago


It's A
upvoted 1 times

" # tgv 1 year ago


AAA
---
upvoted 1 times

" # Kopa 1 year ago


its A,

. DataSync provides built-in security capabilities such as encryption of data in-transit, and data integrity verification in-transit and at-rest. It
optimizes use of network bandwidth, and automatically recovers from network connectivity failures. In addition, DataSync provides control and
monitoring capabilities such as data transfer scheduling and granular visibility into the transfer process through Amazon CloudWatch metrics,
logs, and events.
upvoted 4 times
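
As an illustration of how little custom development is involved once the agent and the source/destination locations exist, a boto3 sketch of a scheduled DataSync task (all ARNs are placeholders):

    import boto3

    datasync = boto3.client("datasync", region_name="ap-northeast-1")
    task = datasync.create_task(
        SourceLocationArn="arn:aws:datasync:ap-northeast-1:111111111111:location/loc-onprem-nfs",
        DestinationLocationArn="arn:aws:datasync:ap-northeast-1:111111111111:location/loc-s3-images",
        Name="software-images-to-s3",
        # Copy only changed files, verify what was transferred; data is encrypted with TLS in transit.
        Options={"TransferMode": "CHANGED", "VerifyMode": "ONLY_FILES_TRANSFERRED"},
        Schedule={"ScheduleExpression": "rate(1 day)"},   # pick up the daily new images
    )
    datasync.start_task_execution(TaskArn=task["TaskArn"])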

" # WhyIronMan 1 year ago


I'll go with A
Unfortunately in real world it will be a combination of D and A due the first high volume
upvoted 4 times

" # StelSen 1 year ago


I would say combination of C&A.
upvoted 2 times

" # vimgoru24 1 year ago


No custom development + no time constraints = A
upvoted 3 times

" # hk436 1 year ago


A is my answer!!
upvoted 1 times

" # Rich_Rich 1 year ago


(A) https://aws.amazon.com/blogs/storage/migrating-hundreds-of-tb-of-data-to-amazon-s3-with-aws-datasync/
upvoted 1 times

" # Waiweng 1 year ago


it's A
upvoted 2 times

" # CarisB 1 year ago


Agree on A
upvoted 4 times

" # beebatov 1 year ago


Answer: A

DataSync can automate the transfer + Snowball can't be shipped cross-region!

https://docs.aws.amazon.com/datasync/latest/userguide/what-is-datasync.html
https://docs.aws.amazon.com/snowball/latest/ug/shipping.html
upvoted 4 times

" # gsw 1 year, 1 month ago


datasync agent uses TLS so it could well be A
upvoted 2 times

" # Chibuzo1 1 year, 1 month ago


How long will it take to transfer 60TB from one region to another?
upvoted 1 times

" # sergioandreslq 1 year ago


There is no time constraint in the requirements, so, DataSync is the best option to comply with requirements of: "automatically transfer all
existing and new software images to Amazon S3 encrypted in transit"
C: Incorrect: Won't fit the requirement to "automatically transfer all existing and NEW SOFTWARE IMAGES"
D: Incorrect: it works however, it is easier use DataSync as this process is automatic, upload files using Site To Site VPN will require a
manual intervention to review status of upload.
upvoted 1 times

Question #740 Topic 1

A company is running a distributed application on a set of Amazon EC2 instances in an Auto Scaling group. The application stores large amounts
of data on an
Amazon Elastic File System (Amazon EFS) file system, and new data is generated monthly. The company needs to back up the data in a secondary
AWS Region to restore from in case of a performance problem in its primary Region. The company has an RTO of 1 hour. A solutions architect
needs to create a backup strategy while minimizing the extra cost.
Which backup strategy should the solutions architect recommend to meet these requirements?

A. Create a pipeline in AWS Data Pipeline. Copy the data to an EFS file system in the secondary Region. Create a lifecycle policy to move files
to the EFS One Zone-Infrequent Access storage class.

B. Set up automatic backups by using AWS Backup. Create a copy rule to copy backups to an Amazon S3 bucket in the secondary Region.
Create a lifecycle policy to move backups to the S3 Glacier storage class.

C. Set up AWS DataSync and continuously copy the files to an Amazon S3 bucket in the secondary Region. Create a lifecycle policy to move
files to the S3 Glacier Deep Archive storage class.

D. Turn on EFS Cross-Region Replication and set the secondary Region as the target. Create a lifecycle policy to move files to the EFS
Infrequent Access storage class in the secondary Region.

Correct Answer: A

Community vote distribution


D (100%)

" # seyik Highly Voted $ 8 months, 2 weeks ago


D, EFS cross-Region Replication is now possible with RTO of 15mins
https://aws.amazon.com/blogs/aws/new-replication-for-amazon-elastic-file-system-efs/
upvoted 16 times
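
A short boto3 sketch of that approach, assuming placeholder file system IDs and Regions (the replica's ID is returned by the replication call):

    import boto3

    efs_primary = boto3.client("efs", region_name="us-east-1")
    resp = efs_primary.create_replication_configuration(
        SourceFileSystemId="fs-0123456789abcdef0",
        Destinations=[{"Region": "us-west-2"}],   # secondary Region
    )
    replica_id = resp["Destinations"][0]["FileSystemId"]

    # Per option D, transition files not accessed for 30 days to EFS Infrequent Access
    # in the secondary Region (shown against the replica's file system ID).
    efs_replica = boto3.client("efs", region_name="us-west-2")
    efs_replica.put_lifecycle_configuration(
        FileSystemId=replica_id,
        LifecyclePolicies=[{"TransitionToIA": "AFTER_30_DAYS"}],
    )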

" # WhyIronMan Highly Voted $ 1 year ago


I'll go with A

By elimination:

- B and C because of S3 Glacier will not attend the 1 hour RTO

- D: there is no such thing "EFS Cross-Region Replication".... if you google it, everything points to AWS DataSync instead
upvoted 11 times

" # JohnPi 4 weeks ago


https://aws.amazon.com/blogs/aws/new-replication-for-amazon-elastic-file-system-efs/
upvoted 1 times

" # Ni_yot Most Recent % 3 months, 1 week ago


Change my ans to D. EFS cross region replication does exist. It depends on really when answered the question. As of Jan 2022 EFS CRR is a
thing.
https://aws.amazon.com/blogs/aws/new-replication-for-amazon-elastic-file-system-efs/
upvoted 1 times

" # asfsdfsdf 3 months, 4 weeks ago


D should be the answer - RTO of 15
https://aws.amazon.com/blogs/aws/new-replication-for-amazon-elastic-file-system-efs/
before jan 2022 answer its A - Since C is for deep archive more than 1 hour, B you cant copy backup data to s3 only vaults, D wasn't exists
before 2022 jan
upvoted 1 times

" # Anhdd 5 months ago


Selected Answer: D
anyone who choose A can tell me why One Zone - IA could be anh failover? I support for D
upvoted 1 times

" # Anhdd 5 months ago


anyone who choose A can tell me why One Zone - IA could be a failover? I support for D
- my typo sorry

upvoted 1 times
" # hilft 5 months ago
I go D
upvoted 1 times

" # bobsmith2000 5 months, 2 weeks ago


Selected Answer: D
https://docs.aws.amazon.com/efs/latest/ug/efs-replication.html
upvoted 1 times

" # Mimek 7 months, 1 week ago


Selected Answer: D
D as of Jan 2022
upvoted 8 times

" # futen0326 8 months, 3 weeks ago


B. Glacier supports expedited archive. AWS Backup is the recommended backup solution for EFS. Not sure about cost here but the question
does not mention anything about cost-efficiency.
upvoted 1 times

" # AMKazi 9 months, 1 week ago


Ans is C: "use as a fallback in the event of a main Region performance issue." meets this requirements as data needs to be continuously
replicated.
Remember its being copied to S3 so it can meet RTO requirement. Glacier will come into picture as a lifecycle policy , not in effect immediately.
upvoted 1 times

" # lulz111 9 months, 3 weeks ago


Its C in my opinion
https://aws.amazon.com/about-aws/whats-new/2019/05/aws-datasync-now-supports-efs-to-efs-transfer/ - copying data between EFS file
systems
https://docs.aws.amazon.com/amazonglacier/latest/dev/downloading-an-archive-two-steps.html - Expedited recovery from Glacier is between
1-5 minutes.
upvoted 1 times

" # lulz111 9 months, 3 weeks ago


B sorry, i missed the Deep archive bit of C.
https://aws.amazon.com/about-aws/whats-new/2020/01/aws-backup-supports-cross-region-backup/
upvoted 2 times

" # AzureDP900 11 months ago


I will go with A
https://aws.amazon.com/about-aws/whats-new/2019/05/aws-datasync-now-supports-efs-to-efs-transfer/
upvoted 1 times

" # AzureDP900 11 months ago


A is right
upvoted 1 times

" # tkanmani76 11 months ago


Option C
A- Is a legacy solution for EFS backup - Refer https://docs.aws.amazon.com/efs/latest/ug/alternative-efs-backup.html#backup-considerations -
hence not preferred
2. Batch - Preferred approach - But not cost effective
3. Data Sync - Supports data movement across regions and can be used to hook EFS with S3. Also Glacier Deep archive cheaper than Glacier
storage and hence the best choice, considering that the expenses should be minimum.
upvoted 1 times

" # tkanmani76 10 months, 1 week ago


Pricing wise S3 Glacier is cheaper than EFS - $0.0133 for EFS Vs .0036$ for Glacier. Using Glacier we can retrieve in minutes using expedited
option. DeepArchive though cheaper retrieval can take more than 12 hours. I retract from C and B is the right option.
upvoted 1 times

" # Gaurav_GGG 11 months, 1 week ago


Should be B.

AWS Documentation clearly mentions AWS Backup as a recommended service for EFS backup solution.
"Recommended Amazon EFS backup solutions
There are two recommended solutions available for backing up your Amazon EFS file systems.

AWS Backup service

The EFS-to-EFS backup solution

"

https://docs.aws.amazon.com/efs/latest/ug/alternative-efs-backup.html#recommended-backup-solutions
upvoted 3 times

" # Gaurav_GGG 11 months, 1 week ago


And Glacier expedited retrieval supports 1-5 minutes retrieval time.
upvoted 2 times
" # sashenka 12 months ago
Technically speaking both B and C would meet the requirements of the 1 hour RTO! Creating a lifecycle policy to move backups to the S3 Glacier
or S# Glacier Deep Archive storage class happens asynchronously and the actual transition typically takes over 24 hrs. Both B and C do not state
the number of days to transition to Glacier from S3 and even if 0 is selected one would have at least 24 hrs and certainly more than the required 1
hour of RTO to have direct access to the backup data. Again, the point I am making is that the data is not being moved DIRECTLY into S3 Glacier
or S3 Glacier Archive and as such the lifecycle policy transition is not immediate and will allow for a 1 hour RTO.
upvoted 1 times

" # student22 1 year ago


B
---
Glacier expedited retrieval supports 1-5 minutes retrieval time.
A is more expensive with EFS and less reliable with One-Zone IA.
upvoted 5 times

Question #741 Topic 1

A company runs an application on AWS. An AWS Lambda function uses credentials to authenticate to an Amazon RDS for MySQL DB instance. A
security risk assessment identified that these credentials are not frequently rotated. Also, encryption at rest is not enabled for the DB instance.
The security team requires that both of these issues be resolved.
Which strategy should a solutions architect recommend to remediate these security risks?

A. Configure the Lambda function to store and retrieve the database credentials in AWS Secrets Manager and enable rotation of the
credentials. Take a snapshot of the DB instance and encrypt a copy of that snapshot. Replace the DB instance with a new DB instance that is
based on the encrypted snapshot.

B. Enable IAM DB authentication on the DB instance. Grant the Lambda execution role access to the DB instance. Modify the DB instance and
enable encryption.

C. Enable IAM DB authentication on the DB instance. Grant the Lambda execution role access to the DB instance. Create an encrypted read
replica of the DB instance. Promote the encrypted read replica to be the new primary node.

D. Configure the Lambda function to store and retrieve the database credentials as encrypted AWS Systems Manager Parameter Store
parameters. Create another Lambda function to automatically rotate the credentials. Create an encrypted read replica of the DB instance.
Promote the encrypted read replica to be the new primary node.

Correct Answer: D
Reference:
https://docs.aws.amazon.com/secretsmanager/latest/userguide/enable-rotation-rds.html

Community vote distribution


A (100%)

" # beebatov Highly Voted $ 1 year, 1 month ago


Answer: A

Parameter store can store DB credentials as secure string but CANNOT rotate secrets, hence, go with A + Cannot enable encryption on existing
MySQL RDS instance, must create a new encrypted one from unencrypted snapshot.
upvoted 21 times

" # AnonymousJhb 9 months ago


https://aws.amazon.com/blogs/security/rotate-amazon-rds-database-credentials-automatically-with-aws-secrets-manager
/#:~:text=Secrets%20Manager%20offers%20built%2Din%20integrations%20for%20rotating%20credentials%20for,rotate%20other%20type
s%20of%20secrets.
upvoted 1 times

" # ExtHo Highly Voted $ 1 year, 1 month ago


A
https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotating-secrets.html
upvoted 11 times
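
The two remediation steps in option A map to a handful of API calls. A hedged boto3 sketch with placeholder identifiers and a placeholder rotation Lambda ARN:

    import boto3

    rds = boto3.client("rds")
    # Copy the unencrypted snapshot with a KMS key to produce an encrypted copy,
    # then restore a new (encrypted) instance from it.
    rds.copy_db_snapshot(
        SourceDBSnapshotIdentifier="mysql-app-snapshot",
        TargetDBSnapshotIdentifier="mysql-app-snapshot-encrypted",
        KmsKeyId="alias/aws/rds",
    )
    rds.restore_db_instance_from_db_snapshot(
        DBInstanceIdentifier="mysql-app-encrypted",
        DBSnapshotIdentifier="mysql-app-snapshot-encrypted",
    )

    # Turn on automatic rotation for the credentials stored in Secrets Manager.
    secrets = boto3.client("secretsmanager")
    secrets.rotate_secret(
        SecretId="prod/app/mysql",
        RotationLambdaARN="arn:aws:lambda:us-east-1:111111111111:function:SecretsManagerRotation",
        RotationRules={"AutomaticallyAfterDays": 30},
    )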

" # AYANtheGLADIATOR Most Recent % 2 months, 1 week ago


A for sure.
upvoted 1 times

" # CloudHandsOn 3 months, 3 weeks ago


My first answer was A
upvoted 1 times

" # bobsmith2000 6 months, 2 weeks ago


Selected Answer: A
B and C are wrong because of RDS encryption limitation
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Encryption.html#Overview.Encryption.Limitations

D is incorrect due to parameter store usage. There's no rotation provided by the service
upvoted 1 times

" # RVD 7 months, 3 weeks ago


Selected Answer: A
Ans: A

upvoted 1 times
" # KennethTam 8 months, 1 week ago
A is correct
upvoted 1 times

" # ashehzad 9 months, 1 week ago


Selected Answer: A
A is the right answer
upvoted 1 times

" # mattfaz 9 months, 3 weeks ago


Here is why D cannot be correct: https://aws.amazon.com/premiumsupport/knowledge-center/rds-encrypt-instance-mysql-mariadb/
In the short description of this link - it specifically states that you cannot create an encrypted read-replica from an unencrypted DB. The only way
to set encryption on an RDS instance is during deployment of the initial instance or creating a new instance from a snapshot and selecting the
Encryption and Key in the parameters page. So that eliminates B,C,D. D is also incorrect since you would not need to create another Lambda
function to rotate the keys - this is a feature included in Secrets Manager OOTB.
upvoted 2 times

" # RVivek 10 months, 2 weeks ago


Answer: A
Encrypting a unencrypted instance of DB or creating a encrypted replica of an un encrypted DB instance are not possible Hence A is the only
solution possible.
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Encryption.html#Overview.Encryption.Limitations
upvoted 1 times

" # AzureDP900 11 months ago


A is correct
upvoted 1 times

" # rogan1821 11 months ago


Selected Answer: A
지금 사용중
upvoted 1 times

" # cldy 11 months, 1 week ago


A. Configure the Lambda function to store and retrieve the database credentials in AWS Secrets Manager and enable rotation of the credentials.
Take a snapshot of the DB instance and encrypt a copy of that snapshot. Replace the DB instance with a new DB instance that is based on the
encrypted snapshot.
upvoted 2 times

" # RVD 11 months, 2 weeks ago


Selected Answer: A
RDS has integration with Secret Manger with Key rotation fuction.
upvoted 1 times

" # Gaurav_GGG 11 months, 2 weeks ago


Answer is A. Secret manager will store and rotate secrets. And need encrypted snapshot to create encryption at rest DB.
upvoted 1 times

" # backfringe 11 months, 3 weeks ago


AAAAAAAAAAAAAAAAA
upvoted 1 times

" # ByomkeshDas 12 months ago


Option A is correct. Because you can't create an encrypted read replica from an unencrypted instance. https://aws.amazon.com/premiumsupport
/knowledge-center/rds-encrypt-instance-mysql-mariadb
upvoted 1 times

Question #742 Topic 1

A company recently deployed a new application that runs on a group of Amazon EC2 Linux instances in a VPC. In a peered VPC, the company
launched an EC2
Linux instance that serves as a bastion host. The security group of the application instances allows access only on TCP port 22 from the private
IP of the bastion host. The security group of the bastion host allows access to TCP port 22 from 0.0.0.0/0 so that system administrators can use
SSH to remotely log in to the application instances from several branch omces.
While looking through operating system logs on the bastion host, a cloud engineer notices thousands of failed SSH logins to the bastion host from
locations around the world. The cloud engineer wants to change how remote access is granted to the application instances and wants to meet the
following requirements:
✑ Eliminate brute-force SSH login attempts.
✑ Retain a log of commands run during an SSH session.
✑ Retain the ability to forward ports.
Which solution meets these requirements for remote access to the application instances?

A. Configure the application instances to communicate with AWS Systems Manager. Grant access to the system administrators to use Session
Manager to establish a session with the application instances. Terminate the bastion host.

B. Update the security group of the bastion host to allow traffic from only the public IP addresses of the branch offices.

C. Configure an AWS Client VPN endpoint and provision each system administrator with a certificate to establish a VPN connection to the
application VPC. Update the security group of the application instances to allow traffic from only the Client VPN IPv4 CIDR. Terminate the
bastion host.

D. Configure the application instances to communicate with AWS Systems Manager. Grant access to the system administrators to issue
commands to the application instances by using Systems Manager Run Command. Terminate the bastion host.

Correct Answer: C

Community vote distribution


A (100%)

" # Jaypdv Highly Voted $ 1 year, 1 month ago


A.
"Session Manager removes the need to open inbound ports, manage SSH keys, or use bastion hosts"
Ref: https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager.html
upvoted 20 times
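
Port forwarding is preserved because Session Manager has its own port-forwarding document. A boto3 sketch (instance ID and ports are placeholders; in practice the AWS CLI plus the Session Manager plugin opens the tunnel from the returned session):

    import boto3

    ssm = boto3.client("ssm", region_name="us-east-1")
    # Forward local port 9022 to port 22 on the application instance,
    # without any inbound SSH rule or bastion host.
    session = ssm.start_session(
        Target="i-0123456789abcdef0",
        DocumentName="AWS-StartPortForwardingSession",
        Parameters={"portNumber": ["22"], "localPortNumber": ["9022"]},
    )
    print(session["SessionId"])   # the Session Manager plugin uses this to open the tunnel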

" # SJain50 Highly Voted $ 1 year ago


B
question says " Retain the ability to forward ports" - NAT gateway can not do this. Only NAT instance or bastian host is capable to do this.
upvoted 6 times

" # Shankar124 Most Recent % 4 months, 4 weeks ago


A is correct:
As its now also support port forwarding
Ref: https://aws.amazon.com/about-aws/whats-new/2022/05/aws-systems-manager-support-port-forwarding-remote-hosts-using-session-
manager/
upvoted 1 times

" # user89 5 months, 2 weeks ago


A.
Session Manager logs the commands you enter and their output during a session depending on your session preferences. so it covers all
requirement.
https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-logging.html
upvoted 2 times

" # tartarus23 6 months ago


Selected Answer: A
A. Session manager enables secure SSH Access, port forwarding, and logging of sesssions
upvoted 1 times

" # chatvinoth 10 months ago


I go for A, as session manager also allows port forwarding - Refer below blog
https://aws.amazon.com/blogs/aws/new-port-forwarding-using-aws-system-manager-sessions-manager/

upvoted 1 times

" # AzureDP900 11 months ago


A right answer
upvoted 1 times

" # andylogan 1 year ago


It's A
upvoted 1 times

" # tgv 1 year ago


AAA
---
Good job @ExtHo on sharing:
Retain a log of commands run during an SSH session. https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-
logging.html
Retain the ability to forward ports. https://aws.amazon.com/blogs/aws/new-port-forwarding-using-aws-system-manager-sessions-manager/
upvoted 1 times

" # blackgamer 1 year ago


A is the answer.
upvoted 1 times

" # sergioandreslq 1 year ago


A: Incorrect: It is the most secure, However, it does not comply with requirement to: "Retain the ability to forward ports."
B: Correct: It is the easy way just allowing SSH from offices, the SysAdmins will continue connecting in the same way they are doing today and
Retain the ability to forward ports.
C: Incorrect, It will work but the issue is the amount of work of the deployment for VPN.
upvoted 1 times

" # sergioandreslq 1 year ago


changed From B to A. At the end, session manager is the most secure. I like the B because it is faster and easier, but exist the risk of brute
force even from the on-premise network. So, the most secure is option A.
upvoted 2 times

" # Suresh108 1 year ago


AAAAAAA
upvoted 1 times

" # Kopa 1 year ago


Its A,

Session Manager provides secure and auditable instance management without the need to open inbound ports, maintain bastion hosts, or
manage SSH keys. Session Manager also allows you to comply with corporate policies that require controlled access to instances, strict security
practices, and fully auditable logs with instance access details, while still providing end users with simple one-click cross-platform access to your
managed instances.
upvoted 2 times

" # WhyIronMan 1 year ago


I'll go with A
upvoted 2 times

" # qurren 1 year ago


https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-logging.html

It says: "Logging isn't available for Session Manager sessions that connect through port forwarding or SSH. This is because SSH encrypts all
session data, and Session Manager only serves as a tunnel for SSH connections." So A is not correct...

I will choose B.
upvoted 2 times

" # hk436 1 year ago


A is my answer!!
Session Manager logs the commands you enter and their output during a session depending on your session preferences. To prevent sensitive
data, such as passwords, from being viewed in your session logs we recommend using the following commands when entering sensitive data
during a session.

https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-logging.html
upvoted 2 times

" # Karthic 1 year ago


It should be A
ref: https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager.html
upvoted 1 times

Question #743 Topic 1

A company that provisions job boards for a seasonal workforce is seeing an increase in traffic and usage. The backend services run on a pair of
Amazon EC2 instances behind an Application Load Balancer with Amazon DynamoDB as the datastore. Application read and write traffic is slow
during peak seasons.
Which option provides a scalable application architecture to handle peak seasons with the LEAST development effort?

A. Migrate the backend services to AWS Lambda. Increase the read and write capacity of DynamoDB

B. Migrate the backend services to AWS Lambda. Configure DynamoDB to use global tables

C. Use Auto Scaling groups for the backend services. Use DynamoDB auto scaling

D. Use Auto Scaling groups for the backend services. Use Amazon Simple Queue Service (Amazon SQS) and an AWS Lambda function to write
to DynamoDB

Correct Answer: C

" # kpcert Highly Voted $ 1 year, 1 month ago


Agree , C [ Key : LEAST development work]
upvoted 8 times
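
DynamoDB auto scaling is configured through Application Auto Scaling; a boto3 sketch with a placeholder table name and capacity bounds:

    import boto3

    aas = boto3.client("application-autoscaling")
    aas.register_scalable_target(
        ServiceNamespace="dynamodb",
        ResourceId="table/JobBoard",
        ScalableDimension="dynamodb:table:WriteCapacityUnits",
        MinCapacity=5,
        MaxCapacity=500,
    )
    # Track 70% write capacity utilization and scale provisioned throughput to match.
    aas.put_scaling_policy(
        PolicyName="JobBoardWriteScaling",
        ServiceNamespace="dynamodb",
        ResourceId="table/JobBoard",
        ScalableDimension="dynamodb:table:WriteCapacityUnits",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "TargetValue": 70.0,
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "DynamoDBWriteCapacityUtilization"
            },
        },
    )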

" # AzureDP900 Most Recent % 11 months ago


C works perfectly fine. There is no need of SQS as mentioned in option D
upvoted 1 times

" # acloudguru 11 months, 1 week ago


simple one, hope i can have it in my exam C
upvoted 1 times

" # andylogan 1 year ago


It's C
upvoted 1 times

" # tgv 1 year ago


CCC
---
LEAST development work
upvoted 3 times

" # WhyIronMan 1 year ago


I'll go with C
upvoted 4 times

" # hk436 1 year ago


C is my answer!!
upvoted 1 times

" # mustpassla 1 year ago


C for sure
upvoted 1 times

" # vkbajoria 1 year, 1 month ago


least amount of work. Answer is C
upvoted 2 times

" # Waiweng 1 year, 1 month ago


it's C
upvoted 4 times

" # gsw 1 year, 1 month ago


i agree C
upvoted 2 times

" # Jaypdv 1 year, 1 month ago


C. works
upvoted 3 times


Question #744 Topic 1

A company has an application that sells tickets online and experiences bursts of demand every 7 days. The application has a stateless
presentation layer running on Amazon EC2, an Oracle database to store unstructured data catalog information, and a backend API layer. The front-
end layer uses an Elastic Load Balancer to distribute the load across nine On-Demand instances over three Availability Zones (AZs). The Oracle
database is running on a single EC2 instance. The company is experiencing performance issues when running more than two concurrent
campaigns. A solutions architect must design a solution that meets the following requirements:
✑ Address scalability issues.
✑ Increase the level of concurrency.
✑ Eliminate licensing costs.
✑ Improve reliability.
Which set of steps should the solutions architect take?

A. Create an Auto Scaling group for the front end with a combination of On-Demand and Spot Instances to reduce costs. Convert the Oracle
database into a single Amazon RDS reserved DB instance.

B. Create an Auto Scaling group for the front end with a combination of On-Demand and Spot Instances to reduce costs. Create two additional
copies of the database instance, then distribute the databases in separate AZs.

C. Create an Auto Scaling group for the front end with a combination of On-Demand and Spot Instances to reduce costs. Convert the tables in
the Oracle database into Amazon DynamoDB tables.

D. Convert the On-Demand Instances into Spot instances to reduce costs for the front end. Convert the tables in the Oracle database into
Amazon DynamoDB tables.

Correct Answer: A

Community vote distribution


C (100%)

" # ExtHo Highly Voted $ 1 year, 1 month ago


C
Combination of On-Demand and Spot Instances + DynamoDB.
D Should be eliminated due to only use of Spot Instance as any time can be taken back by AWS
upvoted 23 times

" # hilft 3 months, 2 weeks ago


CCC. GJ
upvoted 1 times
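
For reference, the "combination of On-Demand and Spot Instances" is expressed as a mixed instances policy on the Auto Scaling group. A boto3 sketch with placeholder launch template, subnet IDs and capacity numbers:

    import boto3

    autoscaling = boto3.client("autoscaling")
    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="ticket-frontend",
        MinSize=3,
        MaxSize=30,
        VPCZoneIdentifier="subnet-aaa,subnet-bbb,subnet-ccc",   # three AZs
        MixedInstancesPolicy={
            "LaunchTemplate": {
                "LaunchTemplateSpecification": {
                    "LaunchTemplateId": "lt-0123456789abcdef0",
                    "Version": "$Latest",
                }
            },
            "InstancesDistribution": {
                "OnDemandBaseCapacity": 3,                    # steady On-Demand floor
                "OnDemandPercentageAboveBaseCapacity": 30,    # burst capacity mostly on Spot
                "SpotAllocationStrategy": "capacity-optimized",
            },
        },
    )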

" # HellGate Most Recent % 9 months, 1 week ago


Answer is C!!!
upvoted 2 times

" # pititcu667 9 months, 4 weeks ago


Selected Answer: C
c because eliminate licensing costs and scalability.
upvoted 1 times

" # AzureDP900 11 months ago


C is right answer for scalability and reduce license cost
upvoted 1 times

" # andylogan 1 year ago


It's C
upvoted 1 times

" # tgv 1 year ago


CCC
---
A: single RDS doesn't improve reliability
B: two additional copies of the Oracle database doesn't eliminate licensing costs
D: doesn't address the scalability issues
upvoted 3 times

" # blackgamer 1 year ago


C is the answer.
upvoted 1 times

" # Kopa 1 year ago


Im going for C
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with C
upvoted 4 times

" # mustpassla 1 year ago


C for sure
upvoted 1 times

" # vkbajoria 1 year ago


my vote goes to C
upvoted 1 times

" # Waiweng 1 year ago


it's C
upvoted 2 times

" # beebatov 1 year ago


Answer: C

Hints: Unstructured data store + Eliminate licenses cost


upvoted 4 times

" # Jaypdv 1 year, 1 month ago


Voting for D. Option A. does not eliminate licensing cost. At best it's included in the RDS instance price. And the question mentions "unstructured
data" which fits DynamoDB well.
upvoted 2 times

" # Jaypdv 1 year, 1 month ago


Typo. I mean C.
upvoted 3 times

" # gsw 1 year, 1 month ago


doesn't make sense - structured data would normally be Dynamo DB
upvoted 1 times

" # DashL 1 year ago


It is required to address "Scalability issue", which can be addressed only by AutoScaling. That means the options are A, B or C. No "Licensing
cost - That is addressed by only option C.
The other two requirements are: "Increase the level of concurrency "and "Improve reliability" - Options C meets those.
upvoted 2 times

" # gsw 1 year, 1 month ago


has to be C or D
upvoted 1 times

" # hdomingo 1 year ago


Spot instances (option D) is not valid response, because dont meet de requirements of "Address scalability issues". Correct option is C
upvoted 1 times

" # gsw 1 year, 1 month ago


i think C
upvoted 2 times

" # jduo 1 year, 1 month ago


Oracle database to store unstructured data catalog information
upvoted 1 times

Question #745 Topic 1

A company wants to refactor its retail ordering web application that currently has a load-balanced Amazon EC2 instance fleet for web hosting,
database API services, and business logic. The company needs to create a decoupled, scalable architecture with a mechanism for retaining failed
orders while also minimizing operational costs.
Which solution will meet these requirements?

A. Use Amazon S3 for web hosting with Amazon API Gateway for database API services. Use Amazon Simple Queue Service (Amazon SQS)
for order queuing. Use Amazon Elastic Container Service (Amazon ECS) for business logic with Amazon SQS long polling for retaining failed
orders.

B. Use AWS Elastic Beanstalk for web hosting with Amazon API Gateway for database API services. Use Amazon MQ for order queuing. Use
AWS Step Functions for business logic with Amazon S3 Glacier Deep Archive for retaining failed orders.

C. Use Amazon S3 for web hosting with AWS AppSync for database API services. Use Amazon Simple Queue Service (Amazon SQS) for order
queuing. Use AWS Lambda for business logic with an Amazon SQS dead-letter queue for retaining failed orders.

D. Use Amazon Lightsail for web hosting with AWS AppSync for database API services. Use Amazon Simple Email Service (Amazon SES) for
order queuing. Use Amazon Elastic Kubernetes Service (Amazon EKS) for business logic with Amazon Elasticsearch Service (Amazon ES) for
retaining failed orders.

Correct Answer: C

" # testtaker3434 Highly Voted $ 1 year, 1 month ago


It should be C. You dont use long pooling " for retaining failed orders while also minimizing operational costs.". You use DLQ.
upvoted 15 times
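
A minimal boto3 sketch of the dead-letter queue wiring (queue names and maxReceiveCount are placeholders):

    import boto3, json

    sqs = boto3.client("sqs")

    # Dead-letter queue that retains orders the business logic failed to process.
    dlq_url = sqs.create_queue(QueueName="orders-dlq")["QueueUrl"]
    dlq_arn = sqs.get_queue_attributes(
        QueueUrl=dlq_url, AttributeNames=["QueueArn"]
    )["Attributes"]["QueueArn"]

    # Main order queue; after 5 failed receives a message moves to the DLQ.
    sqs.create_queue(
        QueueName="orders",
        Attributes={
            "RedrivePolicy": json.dumps(
                {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": "5"}
            )
        },
    )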

" # beebatov Highly Voted $ 1 year, 1 month ago


Answer C:

Hints: Refactoring app to use GraphQL APIs (AppSync) + Serverless + DLQ for failed orders
upvoted 10 times

" # AMKazi Most Recent % 8 months, 4 weeks ago


C: solves all problems
upvoted 1 times

" # cldy 10 months, 1 week ago


C: DLQ
upvoted 1 times

" # challenger1 11 months ago


My Answer: C
Use a Dead Letter Queue, not long polling
upvoted 1 times

" # tkanmani76 11 months ago


Everyone is in favour of C - however the application mentioned is not a static one - retail ordering online application - so how can S3 host it ? So
it cannot be Option C. The next decoupled scalable architecture is with Step Functions - Option B is correct in that case.
upvoted 1 times

" # AMKazi 8 months, 4 weeks ago


You are only hosting website on S3, for all server side processing you have lambda
upvoted 2 times

" # AzureDP900 11 months ago


It is C, SQS is required.
upvoted 1 times

" # andylogan 1 year ago


It's C
upvoted 1 times

" # tgv 1 year ago


CCC
---

A: you don't retain failed orders with SQS long polling


B: Amazon S3 Glacier Deep Archive for retaining failed orders doesn't sound good
C: You cannot use Amazon Simple Email Service (Amazon SES) for order queuing
upvoted 4 times

" # tgv 1 year ago


D: You cannot use Amazon Simple Email Service (Amazon SES) for order queuing*
upvoted 1 times
" # Suresh108 1 year ago
CCCCC

Method of Elimination -- look for failed order options in all the answers
upvoted 2 times

" # WhyIronMan 1 year ago


I'll go with C

Unfortunately is a Trick question...While AppSync is no better than API GW in this context, DLQ is better choice than SQS long polling for
retaining failed orders

Damn aws...
upvoted 6 times

" # santhoshmp 1 year ago


I think its C itself
upvoted 1 times

" # santhoshmp 1 year ago


can S3 be used to host a retail web application. ? answer should be B or D ?
upvoted 2 times

" # vimgoru24 1 year ago


C

While AppSync is no better than API GW in this context, the latter part of the answer does mention DLQ which is a “must have”
upvoted 2 times

" # hk436 1 year ago


C is my answer!!
upvoted 1 times

" # vkbajoria 1 year ago


It is C
upvoted 1 times

" # Waiweng 1 year, 1 month ago


it's C
upvoted 3 times

Question #746 Topic 1

A financial company is building a system to generate monthly, immutable bank account statements for its users. Statements are stored in Amazon
S3. Users should have immediate access to their monthly statements for up to 2 years. Some users access their statements frequently, whereas
others rarely access their statements. The company's security and compliance policy requires that the statements be retained for at least 7 years.
What is the MOST cost-effective solution to meet the company's needs?

A. Create an S3 bucket with Object Lock disabled. Store statements in S3 Standard. Define an S3 Lifecycle policy to transition the data to S3
Standard-Infrequent Access (S3 Standard-IA) after 30 days. Define another S3 Lifecycle policy to move the data to S3 Glacier Deep Archive
after 2 years. Attach an S3 Glacier Vault Lock policy with deny delete permissions for archives less than 7 years old.

B. Create an S3 bucket with versioning enabled. Store statements in S3 Intelligent-Tiering. Use same-Region replication to replicate objects to
a backup S3 bucket. Define an S3 Lifecycle policy for the backup S3 bucket to move the data to S3 Glacier. Attach an S3 Glacier Vault Lock
policy with deny delete permissions for archives less than 7 years old.

C. Create an S3 bucket with Object Lock enabled. Store statements in S3 Intelligent-Tiering. Enable compliance mode with a default retention
period of 2 years. Define an S3 Lifecycle policy to move the data to S3 Glacier after 2 years. Attach an S3 Glacier Vault Lock policy with deny
delete permissions for archives less than 7 years old.

D. Create an S3 bucket with versioning disabled. Store statements in S3 One Zone-Infrequent Access (S3 One Zone-IA). Define an S3 Lifecycle
policy to move the data to S3 Glacier Deep Archive after 2 years. Attach an S3 Glacier Vault Lock policy with deny delete permissions for
archives less than 7 years old.

Correct Answer: D

Community vote distribution


C (100%)

" # Jaypdv Highly Voted $ 1 year ago


C.
Object lock and compliance mode: https://aws.amazon.com/about-aws/whats-new/2018/11/s3-object-lock/. The rest makes the most sense
upvoted 16 times
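
A boto3 sketch of the Object Lock part of option C, using a placeholder bucket name; the lifecycle transition to S3 Glacier and the Intelligent-Tiering storage class are configured separately:

    import boto3

    s3 = boto3.client("s3", region_name="us-east-1")

    # Object Lock must be enabled at bucket creation (this also turns on versioning).
    s3.create_bucket(Bucket="statements-example", ObjectLockEnabledForBucket=True)

    # Default retention: 2 years in compliance mode, so statements are immutable
    # and cannot be deleted, even by the root user, until retention expires.
    s3.put_object_lock_configuration(
        Bucket="statements-example",
        ObjectLockConfiguration={
            "ObjectLockEnabled": "Enabled",
            "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Years": 2}},
        },
    )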

" # KennethTam Most Recent % 7 months, 3 weeks ago


Selected Answer: C
C is correct
upvoted 1 times

" # AzureDP900 11 months ago


C is right answer
upvoted 2 times

" # cldy 11 months ago


C. Create an S3 bucket with Object Lock enabled. Store statements in S3 Intelligent-Tiering. Enable compliance mode with a default retention
period of 2 years. Define an S3 Lifecycle policy to move the data to S3 Glacier after 2 years. Attach an S3 Glacier Vault Lock policy with deny
delete permissions for archives less than 7 years old.
upvoted 1 times

" # acloudguru 11 months, 2 weeks ago


Selected Answer: C
C https://aws.amazon.com/about-aws/whats-new/2018/11/s3-object-lock/. The rest makes the most sense
upvoted 2 times

" # andylogan 1 year ago


It's C
upvoted 1 times

" # Liongeek 1 year ago


CCC because of Object lock
upvoted 2 times

" # tgv 1 year ago


CCC
---
upvoted 1 times

" # blackgamer 1 year ago


It is C
upvoted 1 times

" # Suresh108 1 year ago


CCCCC -- 'object lock enabled'
upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with C, you convinced me
upvoted 2 times

" # vimgoru24 1 year ago


C is far superior than any other answer
upvoted 1 times

" # hk436 1 year ago


C is my answer!!
upvoted 1 times

" # vkbajoria 1 year ago


C.
if you don't know Object lock and compliance mode, you will surely get it wrong like I did.
https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock-overview.html
upvoted 3 times

" # Waiweng 1 year ago


it's C
upvoted 3 times

" # beebatov 1 year ago


Answer: C

S3 Object Lock protection is maintained regardless of which storage class the object resides in and throughout S3 Lifecycle transitions between
storage classes.
upvoted 2 times

" # gsw 1 year, 1 month ago


has to be C the others are silly
upvoted 3 times

Question #747 Topic 1

A company hosts a large on-premises MySQL database at its main office that supports an issue tracking system used by employees around the
world. The company already uses AWS for some workloads and has created an Amazon Route 53 entry for the database endpoint that points to
the on-premises database.
Management is concerned about the database being a single point of failure and wants a solutions architect to migrate the database to AWS
without any data loss or downtime.
Which set of actions should the solutions architect implement?

A. Create an Amazon Aurora DB cluster. Use AWS Database Migration Service (AWS DMS) to do a full load from the on-premises database to
Aurora. Update the Route 53 entry for the database to point to the Aurora cluster endpoint, and shut down the on-premises database.

B. During nonbusiness hours, shut down the on-premises database and create a backup. Restore this backup to an Amazon Aurora DB cluster.
When the restoration is complete, update the Route 53 entry for the database to point to the Aurora cluster endpoint, and shut down the on-
premises database.

C. Create an Amazon Aurora DB cluster. Use AWS Database Migration Service (AWS DMS) to do a full load with continuous replication from
the on-premises database to Aurora. When the migration is complete, update the Route 53 entry for the database to point to the Aurora cluster
endpoint, and shut down the on- premises database.

D. Create a backup of the database and restore it to an Amazon Aurora multi-master cluster. This Aurora cluster will be in a master-master
replication con+guration with the on-premises database. Update the Route 53 entry for the database to point to the Aurora cluster endpoint,
and shut down the on- premises database.

Correct Answer: C

Community vote distribution


C (100%)

" # Jaypdv Highly Voted $ 1 year, 1 month ago


C.
Got the feeling some of those questions come from the DB Specialist cert.
upvoted 13 times
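
The "full load with continuous replication" is the full-load-and-cdc migration type in DMS. A boto3 sketch with placeholder endpoint and replication instance ARNs:

    import boto3, json

    dms = boto3.client("dms")
    table_mappings = {
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-all",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }]
    }
    dms.create_replication_task(
        ReplicationTaskIdentifier="mysql-to-aurora",
        SourceEndpointArn="arn:aws:dms:us-east-1:111111111111:endpoint:onprem-mysql",
        TargetEndpointArn="arn:aws:dms:us-east-1:111111111111:endpoint:aurora-mysql",
        ReplicationInstanceArn="arn:aws:dms:us-east-1:111111111111:rep:repl-instance",
        MigrationType="full-load-and-cdc",   # full load, then ongoing change capture until cutover
        TableMappings=json.dumps(table_mappings),
    )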

" # hilft Most Recent % 3 months, 2 weeks ago


A or C?
Seems it's C
upvoted 1 times

" # tartarus23 6 months, 1 week ago


Selected Answer: C
C. does not have a downtime or data loss since live replication happens so on-premise DB is still operational until the new AWS Aurora DB is
completely migrated.
upvoted 2 times

" # tartarus23 7 months ago


Selected Answer: C
continuous replication from on-premises to Aurora is a feasible solution.
upvoted 1 times

" # Ni_yot 8 months, 1 week ago


Yah its C. Continuous replication is the key here. No downtime or data loss. Just do a cutover to finalize the migration when complete.
upvoted 1 times

" # AzureDP900 11 months ago


I will go with C, there is no dataloss with this option
upvoted 1 times

" # andylogan 1 year ago


It's C
upvoted 1 times

" # tgv 1 year ago


CCC
---

upvoted 1 times

" # WhyIronMan 1 year ago


I'll go with C
upvoted 4 times

" # vimgoru24 1 year ago


C

“Around the world” eliminates possibility for the maintenance window at night. The other difference is ability to leverage continuous replication in
MySQL to Aurora case.
upvoted 3 times

" # hk436 1 year ago


C is my answer!!
upvoted 1 times

" # vkbajoria 1 year ago


Always choose full load with continuous replication.
C
upvoted 2 times

" # Waiweng 1 year, 1 month ago


it's C
upvoted 2 times

" # LCC92 1 year, 1 month ago


go with C
upvoted 1 times

" # beebatov 1 year, 1 month ago


Answer: C

AWS DMS + CDC to capture daily changes


upvoted 2 times

" # gsw 1 year, 1 month ago


i agree with C
upvoted 3 times

Question #748 Topic 1

A company has a policy that all Amazon EC2 instances that are running a database must exist within the same subnets in a shared VPC.
Administrators must follow security compliance requirements and are not allowed to directly log in to the shared account. All company accounts
are members of the same organization in AWS Organizations. The number of accounts will rapidly increase as the company grows.
A solutions architect uses AWS Resource Access Manager to create a resource share in the shared account.
What is the MOST operationally efficient configuration to meet these requirements?

A. Add the VPC to the resource share. Add the account IDs as principals

B. Add all subnets within the VPC to the resource share. Add the account IDs as principals

C. Add all subnets within the VPC to the resource share. Add the organization as a principal

D. Add the VPC to the resource share. Add the organization as a principal

Correct Answer: B
Reference:
https://aws.amazon.com/blogs/networking-and-content-delivery/vpc-sharing-a-new-approach-to-multiple-accounts-and-vpc-management/

Community vote distribution


C (100%)

" # Jaypdv Highly Voted $ 1 year ago


C.
Adding the organization as a principal ensure that current and future accounts will have access to the share. The question mentions that there will
be many new accounts, that's the clue
upvoted 21 times
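
A boto3 sketch of that share, with placeholder subnet ARNs and organization ARN; passing the organization ARN as the principal is what covers accounts created later:

    import boto3

    ram = boto3.client("ram")
    ram.create_resource_share(
        name="shared-db-subnets",
        resourceArns=[
            "arn:aws:ec2:us-east-1:111111111111:subnet/subnet-aaa",
            "arn:aws:ec2:us-east-1:111111111111:subnet/subnet-bbb",
        ],
        # Share with the whole organization rather than individual account IDs.
        principals=["arn:aws:organizations::111111111111:organization/o-abcd1234efgh5678"],
        allowExternalPrincipals=False,   # restrict the share to the organization
    )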

" # bobsmith2000 Most Recent % 5 months, 3 weeks ago


Selected Answer: C
We have to add organization as principle due to "swift increase in account num".: So it's C or D.
There's no reason to share a VPC, so it's C.
upvoted 3 times

" # caveman712 4 weeks, 1 day ago


You can't share VPC anyways in RAM..only subnets
upvoted 1 times

" # cldy 11 months ago


C. Add all subnets within the VPC to the resource share. Add the organization as a principal
upvoted 1 times

" # AzureDP900 11 months ago


I will go with C as mentioned @https://docs.aws.amazon.com/ram/latest/userguide/getting-started-sharing.html#getting-started-sharing-create
To restrict resource sharing to only principals in your organization, choose Allow sharing with principals in your organization only.
upvoted 1 times

" # pcops 11 months, 3 weeks ago


How to get access to questions from 390 onwards. I can only see questions upto 389. Please suggest
upvoted 2 times

" # acloudguru 11 months, 2 weeks ago


use incognito mode directly with the url
upvoted 4 times

" # wassb 3 weeks, 2 days ago


and a Cloudfront distribution with OAI configured
upvoted 1 times

" # andylogan 1 year ago


It's C
upvoted 1 times

" # tgv 1 year ago


CCC
---

upvoted 1 times
" # WhyIronMan 1 year ago
I'll go with C
upvoted 1 times

" # Akbarali 1 year ago


New Solutions Architect Pro question.

A company runs an application in the cloud that consists of a database and a website. Users can post data to the website, have the data
processed, and have the data sent back to them in an email. Data is stored in a MySQL database running on an Amazon EC2 instance. The
database is running with two private subnets. The website is running on Apache Tomcat in a single EC2 instance in a different VPC with one
public subnet. There is a single VPC peering connection between the database and website VPC.

The website has suffered several outages during the last month due to high traffic.

Which actions should a solutions architect take to increase the reliability of the application? (select three)

A – Place the Tomcat server in an Autoscaling group with multiple EC2 instances behind an Application Load Balancer

B – Provision an additional VPC peering connection

C – Migrate the MySQL database to Amazon Aurora with one Aurora Replica

D – Provision two NAT gateways in the database VPC

E – Move the tomcat server to the database VPC

F – Create an additional public subnet in a different Availability Zone in the website VPC
upvoted 3 times

" # sergioandreslq 1 year ago


This questing is the 760, Answer is ACF.
upvoted 1 times

" # Chibuzo1 1 year ago


Answer is A C F
upvoted 3 times

" # pradhyumna 1 year ago


looks like ACE
upvoted 2 times

" # Waiweng 1 year ago


it's C
https://docs.aws.amazon.com/ram/latest/userguide/getting-started-sharing.html#getting-started-sharing-create
upvoted 2 times

" # Chibuzo1 1 year ago


To specify a principal from the list, for each principal, select the principal type, enter the ID or ARN,
AWS account: To add an AWS account, enter the 12-digit account ID. For example:
123456789012
Organization: To add your entire organization, enter the ID of the organization. For example:
o-abcd1234efgh5678
The right answer is B. To add organization, you add the ID, and to add an Account you add the ID.
upvoted 2 times

" # beebatov 1 year ago


Answer: C

You share the resources of the VPC which are Subnets in this case + add Organization as the principal as the number of accounts will grow in
future.

https://docs.aws.amazon.com/ram/latest/userguide/ram-ug.pdf
upvoted 4 times

" # gsw 1 year, 1 month ago


i think B is correct see the link
upvoted 1 times

" # Jaypdv 1 year ago


C. Is better, you can add the entire organization as principal. This covers all existing and future accounts as per the question. See
https://docs.aws.amazon.com/ram/latest/userguide/getting-started-sharing.html#getting-started-sharing-create
upvoted 7 times

Question #749 Topic 1

A solutions architect is evaluating the reliability of a recently migrated application running on AWS. The front end is hosted on Amazon S3 and
accelerated by
Amazon CloudFront. The application layer is running in a stateless Docker container on an Amazon EC2 On-Demand Instance with an Elastic IP
address. The storage layer is a MongoDB database running on an EC2 Reserved Instance in the same Availability Zone as the application layer.
Which combination of steps should the solutions architect take to eliminate single points of failure with minimal application code changes?
(Choose two.)

A. Create a REST API in Amazon API Gateway and use AWS Lambda functions as the application layer

B. Create an Application Load Balancer and migrate the Docker container to AWS Fargate

C. Migrate the storage layer to Amazon DynamoDB

D. Migrate the storage layer to Amazon DocumentDB (with MongoDB compatibility)

E. Create an Application Load Balancer and move the storage layer to an EC2 Auto Scaling group

Correct Answer: AE

Community vote distribution


BD (100%)

" # Jaypdv Highly Voted $ 1 year, 1 month ago


BD.
Seems natural to me
upvoted 17 times

" # WhyIronMan Highly Voted $ 1 year ago


I'll go with B,D

https://aws.amazon.com/documentdb/?nc1=h_ls

https://aws.amazon.com/blogs/containers/using-alb-ingress-controller-with-amazon-eks-on-fargate/
upvoted 7 times

" # kubala Most Recent % 9 months, 3 weeks ago


Selected Answer: BD
BD for sure
upvoted 2 times

" # AzureDP900 11 months ago


B,D is my choice. MongoDB is compatible with DocumentDB and Containers can be hosted on Fargate
upvoted 2 times

" # andylogan 1 year ago


It's B D
upvoted 1 times

" # student22 1 year ago


B,D
Minimum code changes
upvoted 1 times

" # tgv 1 year ago


BBB DDD
----
upvoted 1 times

" # blackgamer 1 year ago


B and D
upvoted 2 times

" # vimgoru24 1 year ago


It’s BD
upvoted 1 times

" # nik_aws 1 year, 1 month ago


Given that Lambda now supports containers and it specifically says the containers are stateless, option A also seems good.
upvoted 1 times

" # vimgoru24 1 year, 1 month ago


On paper - yes. But if you’d really try to convert a regular web app to a Lambda-compatible image, you’d see that this option is still far away
from "little code changes" :)
upvoted 1 times

" # hk436 1 year, 1 month ago


BD is my answer!!
upvoted 1 times

" # glahitette 1 year, 1 month ago


BD for me too
upvoted 1 times

" # mustpassla 1 year, 1 month ago


BD for sure
upvoted 1 times

" # Waiweng 1 year, 1 month ago


it's B,D
upvoted 4 times

" # beebatov 1 year, 1 month ago


Answer: BD
upvoted 2 times

" # gsw 1 year, 1 month ago


if they meant lambda layers then its poorly expressed
upvoted 1 times


Question #750 Topic 1

A company operates an on-premises software-as-a-service (SaaS) solution that ingests several files daily. The company provides multiple public
SFTP endpoints to its customers to facilitate the file transfers. The customers add the SFTP endpoint IP addresses to their firewall allow list for
outbound traffic. Changes to the SFTP endpoint IP addresses are not permitted.
The company wants to migrate the SaaS solution to AWS and decrease the operational overhead of the file transfer service.
Which solution meets these requirements?

A. Register the customer-owned block of IP addresses in the company's AWS account. Create Elastic IP addresses from the address pool and
assign them to an AWS Transfer for SFTP endpoint. Use AWS Transfer to store the files in Amazon S3.

B. Add a subnet containing the customer-owned block of IP addresses to a VPC. Create Elastic IP addresses from the address pool and assign
them to an Application Load Balancer (ALB). Launch EC2 instances hosting FTP services in an Auto Scaling group behind the ALB. Store the
files in attached Amazon Elastic Block Store (Amazon EBS) volumes.

C. Register the customer-owned block of IP addresses with Amazon Route 53. Create alias records in Route 53 that point to a Network Load
Balancer (NLB). Launch EC2 instances hosting FTP services in an Auto Scaling group behind the NLB. Store the files in Amazon S3.

D. Register the customer-owned block of IP addresses in the company's AWS account. Create Elastic IP addresses from the address pool and
assign them to an Amazon S3 VPC endpoint. Enable SFTP support on the S3 bucket.

Correct Answer: A

" # miniso8153 Highly Voted $ 1 year, 1 month ago


I vote for A
https://aws.amazon.com/premiumsupport/knowledge-center/sftp-enable-elastic-ip-addresses/
upvoted 21 times

" # sergioandreslq 1 year ago


A:
Bring your own IP addresses (BYOIP)
You can bring part or all of your publicly routable IPv4 or IPv6 address range from your on-premises network to your AWS account. You
continue to own the address range, but AWS advertises it on the internet by default. After you bring the address range to AWS, it appears in
your AWS account as an address pool.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-byoip.html

AWS Transfer for SFTP enables you to easily move your file transfer workloads that use the Secure Shell File Transfer Protocol (SFTP) to AWS
without needing to modify your applications or manage any SFTP servers.
https://aws.amazon.com/about-aws/whats-new/2018/11/aws-transfer-for-sftp-fully-managed-sftp-for-s3/
upvoted 5 times
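A minimal boto3 sketch of answer A, following the stop/update/start flow from the knowledge-center article linked above: allocate an Elastic IP from the BYOIP pool, then attach it to a VPC-hosted AWS Transfer Family SFTP server. The pool, VPC, and subnet IDs are placeholders.

import boto3

ec2 = boto3.client("ec2")
transfer = boto3.client("transfer")

# Elastic IP from the customer-owned (BYOIP) address pool, so the customers'
# firewall allow lists keep working. The pool ID is a placeholder.
eip = ec2.allocate_address(Domain="vpc", PublicIpv4Pool="ipv4pool-ec2-0123456789abcdef0")

# SFTP-only Transfer Family server with a VPC endpoint, backed by Amazon S3.
server = transfer.create_server(
    Protocols=["SFTP"],
    Domain="S3",
    IdentityProviderType="SERVICE_MANAGED",
    EndpointType="VPC",
    EndpointDetails={"VpcId": "vpc-0abc1234", "SubnetIds": ["subnet-0abc1234"]},
)
server_id = server["ServerId"]

# Elastic IPs can only be attached while the server is offline.
transfer.stop_server(ServerId=server_id)
# ...wait until describe_server reports State == "OFFLINE" before continuing...
transfer.update_server(
    ServerId=server_id,
    EndpointDetails={"AddressAllocationIds": [eip["AllocationId"]]},
)
transfer.start_server(ServerId=server_id)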

" # AzureDP900 Most Recent % 11 months ago


I will go with A after reading the AWS docs mentioned by miniso8153
upvoted 1 times

" # cldy 11 months ago


A. Register the customer-owned block of IP addresses in the company's AWS account. Create Elastic IP addresses from the address pool
and assign them to an AWS Transfer for SFTP endpoint. Use AWS Transfer to store the files in Amazon S3.
upvoted 1 times

" # andylogan 1 year ago


It's A with AWS Transfer for SFTP
upvoted 2 times

" # tgv 1 year ago


AAA
---
B: You cannot assign an Elastic IP to an Application Load Balancer
C: You cannot register the customer-owned block of IP addresses with Amazon Route 53
D: Not sure if you can assign an IP to S3 VPC endpoint
upvoted 1 times

" # Kopa 1 year ago


A for sure.
The AWS Transfer Family provides fully managed support for file transfers directly into and out of Amazon S3 or Amazon EFS. With support for
Secure File Transfer Protocol (SFTP), File Transfer Protocol over SSL (FTPS), and File Transfer Protocol (FTP), the AWS Transfer Family helps you


seamlessly migrate your file transfer workflows to AWS by integrating with existing authentication systems, and providing DNS routing with
Amazon Route 53 so nothing changes for your customers and partners, or their applications. With your data in Amazon S3 or Amazon EFS, you
can use it with AWS services for processing, analytics, machine learning, archiving, as well as home directories and developer tools.
upvoted 4 times

" # student22 1 year ago


A
https://aws.amazon.com/aws-transfer-family/
upvoted 1 times
" # WhyIronMan 1 year ago
I'll go for A
upvoted 1 times

" # vimgoru24 1 year ago


It’s A
upvoted 1 times

" # mustpassla 1 year ago


A for sure
upvoted 1 times

" # Waiweng 1 year, 1 month ago


it's A
upvoted 3 times

" # beebatov 1 year, 1 month ago


Answer: A

COIP is registered with an AWS Account (Not Route 53)

https://aws.amazon.com/about-aws/whats-new/2020/01/aws-transfer-for-sftp-supports-vpc-security-groups-and-elastic-ip-addresses/
upvoted 1 times

" # ExtHo 1 year, 1 month ago


C is out because of the NLB. A and D look valid at first glance, but D can be ruled out because an Amazon S3 VPC endpoint is mainly for accessing S3 without
the internet (from AWS resources in private subnets). That leaves A as the only valid option, I think.
upvoted 3 times

" # gsw 1 year, 1 month ago


could be C?
upvoted 1 times

" # digimaniac 1 year, 1 month ago


watch out for operational overhead cue in the question. host FTP will increase op. overhead
upvoted 1 times


Question #751 Topic 1

A company is migrating a legacy application from an on-premises data center to AWS. The application consists of a single application server and
a Microsoft SQL
Server database server. Each server is deployed on a VMware VM that consumes 500 TB of data across multiple attached volumes.
The company has established a 10 Gbps AWS Direct Connect connection from the closest AWS Region to its on-premises data center. The Direct
Connect connection is not currently in use by other services.
Which combination of steps should a solutions architect take to migrate the application with the LEAST amount of downtime? (Choose two.)

A. Use an AWS Server Migration Service (AWS SMS) replication job to migrate the database server VM to AWS.

B. Use VM Import/Export to import the application server VM.

C. Export the VM images to an AWS Snowball Edge Storage Optimized device.

D. Use an AWS Server Migration Service (AWS SMS) replication job to migrate the application server VM to AWS.

E. Use an AWS Database Migration Service (AWS DMS) replication instance to migrate the database to an Amazon RDS DB instance.

Correct Answer: BE

Community vote distribution


DE (100%)

" # Jupi Highly Voted $ 1 year ago


D&E

A Not correct, database need to be migrated to RDS, not to EC2


B - AWS Server Migration Service is a significant enhancement of EC2 VM Import. The AWS Server Migration Service provides automated, live
incremental server replication and AWS Console support. For customers using EC2 VM Import for migration, we recommend using AWS Server
Migration Service.
C - Least amount of downtime will not work
D - is correct, SMS for the application server
E - is correct, DMS for the database
upvoted 10 times
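If the DMS option (E) is taken, a hedged boto3 sketch of a task that does a full load plus ongoing replication (CDC), which is what limits downtime to the final cutover. The endpoint and replication-instance ARNs are placeholders that would already have been created.

import boto3
import json

dms = boto3.client("dms")

# Full load + CDC keeps the on-premises SQL Server and the RDS target in sync,
# so the application only stops for the final cutover. ARNs are placeholders.
table_mappings = {
    "rules": [{
        "rule-type": "selection",
        "rule-id": "1",
        "rule-name": "include-all",
        "object-locator": {"schema-name": "%", "table-name": "%"},
        "rule-action": "include",
    }]
}

task = dms.create_replication_task(
    ReplicationTaskIdentifier="sqlserver-to-rds",
    SourceEndpointArn="arn:aws:dms:eu-west-1:111122223333:endpoint:SOURCE",
    TargetEndpointArn="arn:aws:dms:eu-west-1:111122223333:endpoint:TARGET",
    ReplicationInstanceArn="arn:aws:dms:eu-west-1:111122223333:rep:INSTANCE",
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps(table_mappings),
)

dms.start_replication_task(
    ReplicationTaskArn=task["ReplicationTask"]["ReplicationTaskArn"],
    StartReplicationTaskType="start-replication",
)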

" # resnef Most Recent % 2 days, 5 hours ago


The question doesn't mention the project timeline, only the LEAST amount of downtime. Excluding Snowball in favor of SMS/DMS replication jobs:
D/E
upvoted 1 times

" # examaws 2 days, 12 hours ago


A & D.
This is for legacy apps; it may not be compatible with RDS, plus RDS for MS SQL has a capacity limit of 16 TB.
upvoted 1 times

" # redipa 4 weeks ago


The question says EACH server has 500TB of attached storage.

Maximum storage for RDS SQL Server is 16TB. RDS cannot be part of the solution.
upvoted 2 times

" # kangtamo 4 months, 1 week ago


Selected Answer: DE
Go with DE: SMS + DMS
upvoted 1 times

" # timlow84 9 months, 1 week ago


Product Update: As of March 31, 2022, AWS will discontinue AWS Server Migration Service (AWS SMS). You can initiate new migration jobs
using AWS SMS until January 1, 2022. Please complete your active migration projects using AWS SMS by March 31, 2022. Going forward, we
recommend AWS Application Migration Service (AWS MGN) as the primary migration service for lift-and-shift migrations.
upvoted 3 times

" # cldy 11 months ago


D. Use an AWS Server Migration Service (AWS SMS) replication job to migrate the application server VM to AWS.
E. Use an AWS Database Migration Service (AWS DMS) replication instance to migrate the database to an Amazon RDS DB instance.
upvoted 1 times


" # AzureDP900 11 months ago


D, E sounds good to me
upvoted 1 times

" # andylogan 1 year ago


It's D E
upvoted 1 times

" # tgv 1 year ago


DDD EEE
---
upvoted 1 times

" # blackgamer 1 year ago


D and E
upvoted 1 times

" # denccc 1 year, 1 month ago


D and E
upvoted 1 times

" # fukuyama 1 year, 1 month ago


It's DE
upvoted 2 times

" # pkboy78 1 year, 1 month ago


It is D and E
upvoted 2 times


Question #752 Topic 1

A company is creating a REST API to share information with six of its partners based in the United States. The company has created an Amazon
API Gateway
Regional endpoint. Each of the six partners will access the API once per day to post daily sales figures.
After initial deployment, the company observes 1,000 requests per second originating from 500 different IP addresses around the world. The
company believes this traffic is originating from a botnet and wants to secure its API while minimizing cost.
Which approach should the company take to secure its API?

A. Create an Amazon CloudFront distribution with the API as the origin. Create an AWS WAF web ACL with a rule to block clients that submit
more than five requests per day. Associate the web ACL with the CloudFront distribution. Configure CloudFront with an origin access identity
(OAI) and associate it with the distribution. Configure API Gateway to ensure only the OAI can run the POST method.

B. Create an Amazon CloudFront distribution with the API as the origin. Create an AWS WAF web ACL with a rule to block clients that submit
more than five requests per day. Associate the web ACL with the CloudFront distribution. Add a custom header to the CloudFront distribution
populated with an API key. Configure the API to require an API key on the POST method.

C. Create an AWS WAF web ACL with a rule to allow access to the IP addresses used by the six partners. Associate the web ACL with the API.
Create a resource policy with a request limit and associate it with the API. Configure the API to require an API key on the POST method.

D. Create an AWS WAF web ACL with a rule to allow access to the IP addresses used by the six partners. Associate the web ACL with the API.
Create a usage plan with a request limit and associate it with the API. Create an API key and add it to the usage plan.

Correct Answer: B

Community vote distribution


D (100%)

" # mericov Highly Voted $ 1 year ago


D. - "A usage plan specifies who can access one or more deployed API stages and methods—and also how much and how fast they can access
them. The plan uses API keys to identify API clients and meters access to the associated API stages for each key. It also lets you configure
throttling limits and quota limits that are enforced on individual client API keys." https://docs.aws.amazon.com/apigateway/latest/developerguide
/api-gateway-api-usage-plans.html
upvoted 17 times

" # Jupi Highly Voted $ 1 year ago


A - wrong. You can use WAF to protect your API Gateway directly without CloudFront.
B - wrong. You can use WAF to protect your API Gateway directly without CloudFront.
C - wrong. You can use API Gateway resource policies to allow users from a specified AWS account, from specified IP ranges or CIDR blocks, or from
specified VPCs or VPC endpoints. A request limit is not part of resource policies.
D - correct. API Gateway usage plans can limit API access and ensure that usage does not exceed the thresholds we define.
upvoted 12 times
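A boto3 sketch of the pieces in D: an IP allow list in WAF plus a usage plan with a daily quota and an API key. The API ID, stage name, and partner IPs are placeholders, not values from the question.

import boto3

apigw = boto3.client("apigateway")
waf = boto3.client("wafv2")

# REGIONAL scope matches an API Gateway Regional endpoint; the addresses are
# placeholders standing in for the six partners' IPs.
waf.create_ip_set(
    Name="partner-ips",
    Scope="REGIONAL",
    IPAddressVersion="IPV4",
    Addresses=["203.0.113.10/32", "203.0.113.11/32"],
)

# Usage plan sized for one POST per partner per day, linked to an API key.
plan = apigw.create_usage_plan(
    name="partner-daily-uploads",
    apiStages=[{"apiId": "a1b2c3d4e5", "stage": "prod"}],
    throttle={"rateLimit": 1.0, "burstLimit": 2},
    quota={"limit": 5, "period": "DAY"},
)
key = apigw.create_api_key(name="partner-1", enabled=True)
apigw.create_usage_plan_key(usagePlanId=plan["id"], keyId=key["id"], keyType="API_KEY")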

" # tomosabc1 Most Recent % 3 weeks, 1 day ago


Selected Answer: D
A(wrong): OAI is used only for S3.
B(wrong): This is not possible to define a WAF web ACL rule to block clients that submit more than five requests per day, because

"
A rate-based rule tracks the rate of requests for each originating IP address, and triggers the rule action on IPs with rates that go over a limit. You
set the limit as the number of requests per 5-minute time span......
The following caveats apply to AWS WAF rate-based rules:
The minimum rate that you can set is 100.
AWS WAF checks the rate of requests every 30 seconds, and counts requests for the prior five minutes each time. Because of this, it's possible
for an IP address to send requests at too high a rate for 30 seconds before AWS WAF detects and blocks it.
AWS WAF can block up to 10,000 IP addresses. If more than 10,000 IP addresses send high rates of requests at the same time, AWS WAF will
only block 10,000 of them.
"
https://docs.aws.amazon.com/waf/latest/developerguide/waf-rule-statement-type-rate-based.html
upvoted 1 times

" # Kyperos 2 months, 2 weeks ago


A CDN is also a way to prevent DDoS. This question focuses on "500 unique IP addresses worldwide", so the 500 IPs come from multiple Regions around the
world; if you use CloudFront, you distribute the DDoS attack traffic to the nearest PoPs and apply rate limiting at those PoPs.

----> Answer is B
upvoted 1 times


" # Jughead 2 months, 3 weeks ago


Selected Answer: D
D is the answer
upvoted 1 times

" # RVivek 10 months, 1 week ago


Why Not B ? Adding Cloud front provides AWS Shield service which is a free DDoS protection.https://aws.amazon.com/shield/?whats-new-
cards.sort-by=item.additionalFields.postDateTime&whats-new-cards.sort-order=desc
upvoted 1 times

" # AzureDP900 11 months ago


Read both docs and choose your option. I am going with D
https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-api-usage-plans.html
https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-resource-policies.html
upvoted 3 times

" # cldy 11 months ago


D. Create an AWS WAF web ACL with a rule to allow access to the IP addresses used by the six partners. Associate the web ACL with the API.
Create a usage plan with a request limit and associate it with the API. Create an API key and add it to the usage plan.
upvoted 2 times

" # andylogan 1 year ago


It's D
upvoted 2 times

" # tgv 1 year ago


DDD
---
upvoted 2 times

" # blackgamer 1 year ago


D is the answer
upvoted 2 times

" # denccc 1 year ago


I'll go with D
upvoted 3 times

" # neta1o 1 year ago


+1 to D, seems like usage plans support the referenced rate limits where resource policies don't.
upvoted 3 times

" # vjawscert 1 year ago


My vote - D
AB - Ignored as Cloudfront not required as its regional based resource. It is gonna add costs. Also WAF can directly sit on top of APIGW
C - Incorrect as resource policies are used to restrict access and not to provide limit. Request limit is done with usage plan.
upvoted 6 times

" # zolthar_z 1 year ago


I think is C, resource policy allows control the IP source: https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-resource-
policies.html
upvoted 4 times

" # rb39 10 months, 3 weeks ago


but you cannot define a limit in a resource policy, it's just allow/deny access
upvoted 2 times

" # pkboy78 1 year, 1 month ago


I think it is C.
upvoted 4 times


Question #753 Topic 1

A company is running its AWS infrastructure across two AWS Regions. The company has four VPCs in the eu-west-1 Region and has two VPCs in
the us-east-1
Region. The company also has an on-premises data center in Europe that has two AWS Direct Connect connections in eu-west-1.
The company needs a solution in which Amazon EC2 instances in each VPC can connect to each other by using private IP addresses. Servers in
the on-premises data center also must be able to connect to those VPCs by using private IP addresses.
What is the MOST cost-effective solution that meets these requirements?

A. Create an AWS Transit Gateway in each Region, and attach each VPC to the transit gateway in that Region. Create cross-Region peering
between the transit gateways. Create two transit VIFs, and attach them to a single Direct Connect gateway. Associate each transit gateway
with the Direct Connect gateway.

B. Create VPC peering between each VPC in the same Region. Create cross-Region peering between each VPC in different Regions. Create two
private VIFs, and attach them to a single Direct Connect gateway. Associate each VPC with the Direct Connect gateway.

C. Create VPC peering between each VPC in the same Region. Create cross-Region peering between each VPC in different Regions. Create two
public VIFs that are configured to route AWS IP addresses globally to on-premises servers.

D. Create an AWS Transit Gateway in each Region, and attach each VPC to the transit gateway in that Region. Create cross-Region peering
between the transit gateways. Create two private VIFs, and attach them to a single Direct Connect gateway. Associate each VPC with the
Direct Connect gateway.

Correct Answer: B

Community vote distribution


A (44%) B (44%) 11%

" # Jupi Highly Voted $ 1 year ago


A - In correct. It will work, but there is cost for each transit gateway
B - Correct. https://docs.aws.amazon.com/whitepapers/latest/building-scalable-secure-multi-vpc-network-infrastructure/transit-gateway-vs-vpc-
peering.html
"Lower cost — With VPC peering you only pay for data transfer charges. Transit Gateway has an hourly charge per attachment in addition to the
data transfer fees. "
C - public VIFs is for public IP
D - for transit gateways, you need transit VIFs, not private VIFs..
upvoted 13 times

" # anandkl80 Highly Voted $ 1 year ago


Answer: B

What is the MOST cost-effective solution that meets these requirements?

While this makes TGW a good default for most network architectures, VPC peering is still a valid choice due to the following advantages it has
over TGW:

Lower cost — With VPC peering you only pay for data transfer charges. Transit Gateway has an hourly charge per attachment in addition to the
data transfer fees.

Latency — Unlike VPC peering, Transit Gateway is an additional hop between VPCs.
upvoted 7 times
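If the peering-based option (B) is chosen, cross-Region peering is requested from one side and accepted from the other, roughly as below. VPC IDs, route table IDs, and CIDRs are placeholders, not values from the question.

import boto3

# Requester side in eu-west-1 asks to peer with a VPC in us-east-1.
ec2_euw1 = boto3.client("ec2", region_name="eu-west-1")
pcx = ec2_euw1.create_vpc_peering_connection(
    VpcId="vpc-0aaa1111",        # placeholder, eu-west-1 VPC
    PeerVpcId="vpc-0bbb2222",    # placeholder, us-east-1 VPC
    PeerRegion="us-east-1",
)
pcx_id = pcx["VpcPeeringConnection"]["VpcPeeringConnectionId"]

# Accepter side accepts in its own Region (the request may take a moment to propagate).
ec2_use1 = boto3.client("ec2", region_name="us-east-1")
ec2_use1.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# Each side then routes the other VPC's CIDR over the peering connection.
ec2_euw1.create_route(
    RouteTableId="rtb-0ccc3333",          # placeholder
    DestinationCidrBlock="10.1.0.0/16",   # placeholder peer CIDR
    VpcPeeringConnectionId=pcx_id,
)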

" # alxjandroleiva Most Recent % 4 days, 3 hours ago


Selected Answer: B
Clearly B choice, most cost-effective
upvoted 1 times

" # JohnPi 4 weeks ago


Selected Answer: A
AWS Transit Gateway + transit VIF
upvoted 2 times

" # JohnPi 2 weeks, 4 days ago


option B cost less (you do not pay the transit gateway) but is missing the VGW (each VGW is attached to a VPC)
upvoted 1 times


" # pinhead900 1 month, 2 weeks ago


Selected Answer: A
you can ONLY associate a transit gateway or a virtual private gateway to the direct connect gateways, therefore B cannot be correct
https://docs.aws.amazon.com/directconnect/latest/UserGuide/direct-connect-gateways-intro.html
upvoted 2 times

" # hilft 3 months, 2 weeks ago


B. Forum is right.
This is a terrible question. I would go for a real-world environment. As an architect, I would always go for transit gateway because peering over
peering got scalability issues.
upvoted 1 times

" # asfsdfsdf 3 months, 3 weeks ago


A is the only correct answer
Why not B?
1) VPC peering edge-to-edge is not possible; it is also not transitive with VPN / DX - how would on-prem servers be able to communicate with VPCs
in another region?
https://docs.aws.amazon.com/vpc/latest/peering/invalid-peering-configurations.html#edge-to-edge-vgw
2) you can't associate VPCs with a Direct Connect GW, only a virtual private GW or a transit GW
https://docs.aws.amazon.com/directconnect/latest/UserGuide/direct-connect-gateways-intro.html
C - uses public VIFs cant be correct
D - will not route traffic between regions as it uses private VIFs and not transit VIFs
For A its documented architecture - Two DCs, Two VIFs, DX GW and inter-region transit peering, here:
https://docs.aws.amazon.com/whitepapers/latest/hybrid-connectivity/aws-dx-dxgw-with-aws-transit-gateway-multi-regions-and-aws-public-
peering.html
upvoted 5 times

" # bkrish 6 months, 3 weeks ago


Selected Answer: B
B for cost-effective solution and it make use of DC gateway
upvoted 1 times

" # Ni_yot 8 months, 1 week ago


Will go with B as well. We must use private VIFs for connectivity. https://aws.amazon.com/premiumsupport/knowledge-center/public-private-
interface-dx/
upvoted 1 times

" # futen0326 8 months, 2 weeks ago


Selected Answer: B
Not enough VPCs for A to be correct, we're going for cost-efficiency.. B will work fine here.
upvoted 1 times

" # Clandestine60 9 months, 1 week ago


Selected Answer: B
B is the answer. cross-region vpc peering handles inter-vpc communication and 2 private VIFs are all thats needed for the ON-prem DC to the
differnt VPC connections. 1 private VIF for 1 direct connect connection and the 2nd private VIF for the second DConnect connection. This is
possible because we are using a DConnect gateway. with DC gateway, we just need a single private VIF for connectivity to multiple VPCs.
Details Here: https://docs.aws.amazon.com/whitepapers/latest/building-scalable-secure-multi-vpc-network-infrastructure/direct-connect.html
upvoted 1 times

" # HellGate 9 months, 1 week ago


Selected Answer: D
Answer is D.

This question ask on Transit Gateway Association. https://docs.aws.amazon.com/directconnect/latest/UserGuide/direct-connect-gateways-


intro.html
upvoted 1 times

" # ByomkeshDas 9 months, 3 weeks ago


Seems option B has lowest cost. But there they have said to create only 2 Private VIFs. But there are total 4+2=6 VPCs in the two regions. So it
require 6 private VIFs not 2. Otherwise option A is correct.
upvoted 1 times

" # andypham 1 year ago


B is NOT correct, because VPC peering is point to point. So if you want to connect 6 total vpc to on-premises, you need 6 private VIFs, not only
2.
upvoted 3 times

" # Bigbearcn 7 months, 3 weeks ago


A Direct Connect Gateway can connect to up to 10 VPCs (via VGWs) globally in any AWS account over a single private VIF. so it's B.
upvoted 1 times


" # andylogan 1 year ago


It's B - Lower cos
upvoted 1 times

" # DerekKey 1 year ago


VPC peering is cheaper and additionally you don't pay for transfer within the region if vpcs are in the same zone.
upvoted 1 times

" # tgv 1 year ago


BBB
---
Both A & B are correct but the question is asking to be cost-efficient.
According to AWS, you have Lower cost — With VPC peering
https://docs.aws.amazon.com/whitepapers/latest/building-scalable-secure-multi-vpc-network-infrastructure/transit-gateway-vs-vpc-peering.html
upvoted 2 times


Question #754 Topic 1

A company runs an application that gives users the ability to search for videos and related information by using keywords that are curated from
content providers.
The application data is stored in an on-premises Oracle database that is 800 GB in size.
The company wants to migrate the data to an Amazon Aurora MySQL DB instance. A solutions architect plans to use the AWS Schema Conversion
Tool and
AWS Database Migration Service (AWS DMS) for the migration. During the migration, the existing database must serve ongoing requests. The
migration must be completed with minimum downtime.
Which solution will meet these requirements?

A. Create primary key indexes, secondary indexes, and referential integrity constraints in the target database before starting the migration
process.

B. Use AWS DMS to run the conversion report for Oracle to Aurora MySQL. Remediate any issues. Then use AWS DMS to migrate the data.

C. Use the M5 or C5 DMS replication instance type for ongoing replication.

D. Turn off automatic backups and logging of the target database until the migration and cutover processes are complete.

Correct Answer: A
Reference:
https://docs.aws.amazon.com/dms/latest/sbs/chap-rdsoracle2aurora.html

Community vote distribution


B (75%) D (25%)

" # tgv Highly Voted $ 1 year, 1 month ago


BBB
---
Key words: "A solutions architect PLANS to use the AWS Schema Conversion Tool and
AWS Database Migration Service (AWS DMS) for the migration" so running the conversion report for Oracle to Aurora MySQL hasn't been done
yet
https://docs.aws.amazon.com/dms/latest/sbs/chap-rdsoracle2aurora.html

A: AWS actually recommends to: "drop primary key indexes, secondary indexes, referential integrity constraints, and data manipulation language
(DML) triggers. Or you can delay their creation until after the full load tasks are complete" --> https://docs.aws.amazon.com/dms/latest/userguide
/CHAP_BestPractices.html
C: M5 doesn't exist --> https://docs.aws.amazon.com/dms/latest/userguide/CHAP_ReplicationInstance.Types.html
D: You can't disable automated backups on Aurora. The backup retention period for Aurora is managed by the DB cluster -->
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Managing.Backups.html
upvoted 12 times

" # Coffeinerd 1 year ago


B seems the most logical, we are talking about migration with no downtime, not about increase speed of loading data, so B is the best here.
upvoted 1 times

" # TomPaschenda Highly Voted $ 1 year ago


Following the replies here it seems that no answer can be correct (see tgv + Jupi)
My guess: there is a typo in answer B - it should be AWS SCT in the first part of the answer. Then it all makes sense.
upvoted 8 times

" # bobsmith2000 Most Recent % 5 months, 2 weeks ago


It seems like the answers don't match the question or there are some typos or mistakes
upvoted 2 times

" # Kuang 8 months ago


B is my answer
upvoted 1 times

" # Bigbearcn 9 months, 1 week ago


Selected Answer: D
It's D.
upvoted 1 times

" # cldy 11 months ago


B. Use AWS SCT to run the conversion report for Oracle to Aurora MySQL. Remediate any issues. Then use AWS DMS to migrate the data.


upvoted 1 times

" # AzureDP900 11 months ago


B is right
upvoted 1 times

" # acloudguru 11 months, 2 weeks ago


Selected Answer: B
The first tool in B should be SCT; then B will be the answer. Anyway, this is the simplest migration question I've met so far - hope I can have it in
my exam.
upvoted 3 times

" # andylogan 1 year ago


It's B
upvoted 1 times

" # Kopa 1 year ago


I will go for B
upvoted 1 times

" # tgv 1 year, 1 month ago


GUYS, You can't disable automated backups on Aurora
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Managing.Backups.html
It cannot be D
upvoted 2 times

" # network_zeal 1 year ago


good catch. B seems to be right answer. yes, schema Conversion is done by SCT but SCT is a part of DMS.
upvoted 2 times

" # student22 1 year ago


Yes, the link clearly states that:
"You can't disable automated backups on Aurora. The backup retention period for Aurora is managed by the DB cluster."
As TomPaschenda has mentioned, it looks like there's a typo in Answer B which should be SCT in the first part.
upvoted 1 times

" # blackgamer 1 year, 1 month ago


Only D seems to make sense.
upvoted 1 times

" # Jupi 1 year, 1 month ago


A - Incorrect This will be done by DMS by default.
B - Incorrect - Conversion reports are by Schema Conversion tool
C - Incorrect AWS DMS creates the replication instance on an Amazon EC2 instance. AWS DMS currently supports the T2, T3, C4, C5, R4, and
R5 Amazon EC2 instance classes for replication instances:
D - Correct. This will ensure migration is faster.
upvoted 1 times

" # Cotter 1 year, 1 month ago


OK the most sound is D.
upvoted 1 times

" # pablobairat 1 year, 1 month ago


D
Source: https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Troubleshooting.html#CHAP_Troubleshooting.General.SlowTask
upvoted 1 times

" # DerekKey 1 year ago


This is a general source not Aurora MySQL specific:
You can't disable automated backups on Aurora. The backup retention period for Aurora is managed by the DB cluster.
D is WRONG
upvoted 1 times

" # mericov 1 year, 1 month ago


D - When migrating to an Amazon RDS database, it's a good idea to turn off backups and Multi-AZ on the target until you're ready to cut over.
Similarly, when migrating to systems other than Amazon RDS, turning off any logging on the target until after cutover is usually a good idea.
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_BestPractices.html
upvoted 1 times

" # DerekKey 1 year ago


WRONG we have Aurora MySQL
upvoted 1 times


" # pkboy78 1 year, 1 month ago


It should be D
upvoted 1 times


Question #755 Topic 1

A travel company built a web application that uses Amazon Simple Email Service (Amazon SES) to send email notifications to users. The company
needs to enable logging to help troubleshoot email delivery issues. The company also needs the ability to do searches that are based on recipient,
subject, and time sent.
Which combination of steps should a solutions architect take to meet these requirements? (Choose two.)

A. Create an Amazon SES configuration set with Amazon Kinesis Data Firehose as the destination. Choose to send logs to an Amazon S3
bucket.

B. Enable AWS CloudTrail logging. Specify an Amazon S3 bucket as the destination for the logs.

C. Use Amazon Athena to query the logs in the Amazon S3 bucket for recipient, subject, and time sent.

D. Create an Amazon CloudWatch log group. Configure Amazon SES to send logs to the log group.

E. Use Amazon Athena to query the logs in Amazon CloudWatch for recipient, subject, and time sent.

Correct Answer: A
Reference -
https://docs.aws.amazon.com/ses/latest/DeveloperGuide/ses-dg.pdf

Community vote distribution


A (50%) D (25%) C (25%)

" # mericov Highly Voted $ 1 year, 1 month ago


A / C - https://docs.aws.amazon.com/ses/latest/dg/monitor-sending-activity.html
upvoted 10 times

" # RVivek 8 months, 4 weeks ago


Kinesis Firehose data type is JSON. Athena query will not work on that.
D & E is the answer. https://docs.aws.amazon.com/ses/latest/dg/monitor-using-event-publishing.html
upvoted 1 times

" # denccc Highly Voted $ 1 year, 1 month ago


B and C
upvoted 7 times

" # denccc 1 year, 1 month ago


Change to A and C: To enable you to track your email sending at a granular level, you can set up Amazon SES to publish email sending events
to Amazon CloudWatch, Amazon Kinesis Data Firehose, or Amazon Simple Notification Service based on characteristics that you define.
https://docs.aws.amazon.com/ses/latest/dg/monitor-using-event-publishing.html
upvoted 1 times

" # denccc 1 year, 1 month ago


FYI https://docs.aws.amazon.com/ses/latest/DeveloperGuide/using-configuration-sets.html
upvoted 1 times

" # pek77 Most Recent % 2 weeks, 4 days ago


https://docs.aws.amazon.com/ses/latest/dg/event-publishing-add-event-destination.html

AC

If you simply want a running total of each type of event (for example, so that you can set an alarm when the total gets too high), you can use
CloudWatch.

If you want detailed event records that you can output to another service such as Amazon OpenSearch Service or Amazon Redshift for analysis,
you can use Kinesis Data Firehose.
upvoted 1 times

" # Yashar1691 1 month, 3 weeks ago


Selected Answer: D
https://docs.aws.amazon.com/athena/latest/ug/connectors-cloudwatch.html
upvoted 1 times

" # JonnyB1001 3 months, 3 weeks ago


https://docs.aws.amazon.com/ses/latest/dg/event-publishing-add-event-destination-cloudwatch.html
Email Header – Amazon SES retrieves the dimension name and value from a header in the email. Note


You can't use any of the following email headers as the Dimension Name: Received, To, From, DKIM-Signature, CC, message-id, or Return-
Path...so A&C
upvoted 1 times
" # bobsmith2000 6 months ago
It's AC.

"The event destination that you choose depends on the level of detail you want about the events, and the way you want to receive the event
information. If you simply want a running total of each type of event (for example, so that you can set an alarm when the total gets too high), you
can use CloudWatch.

If you want detailed event records that you can output to another service such as Amazon OpenSearch Service or Amazon Redshift for analysis,
you can use Kinesis Data Firehose.

If you want to receive notifications when certain events occur, you can use Amazon SNS."
Source:
https://docs.aws.amazon.com/ses/latest/dg/event-publishing-add-event-destination.html
upvoted 2 times

" # Hasitha99 6 months, 3 weeks ago


Selected Answer: A
Answer: A, C https://docs.aws.amazon.com/ses/latest/DeveloperGuide/using-configuration-sets.html
upvoted 2 times

" # Yasyas86 8 months ago


Answer is A/C
https://aws.amazon.com/getting-started/hands-on/build-serverless-real-time-data-processing-app-lambda-kinesis-s3-dynamodb-cognito-
athena/4/#:~:text=Amazon%20Athena%20allows%20us%20to,to%20an%20Amazon%20S3%20bucket.
upvoted 1 times

" # RVivek 8 months, 4 weeks ago


A & C will work.
D & E also work.
Now which combination is the best solution?
upvoted 1 times

" # RVivek 8 months, 4 weeks ago


Answer is D & E.
Kinesis Firehose delivers data in JSON format. To run an Athena query, the JSON format should be converted using a Lambda function.
upvoted 1 times

" # AzureDP900 11 months ago


A,C is right
upvoted 2 times

" # cldy 11 months ago


A. Create an Amazon SES configuration set with Amazon Kinesis Data Firehose as the destination. Choose to send logs to an Amazon S3
bucket.
C. Use Amazon Athena to query the logs in the Amazon S3 bucket for recipient, subject, and time sent.
upvoted 1 times

" # acloudguru 11 months, 3 weeks ago


Selected Answer: C
CHoose A and C
upvoted 1 times

" # Kopa 12 months ago


A and C

https://docs.aws.amazon.com/ses/latest/dg/event-publishing-retrieving-firehose.html
upvoted 1 times

" # andylogan 1 year ago


It's A, C
upvoted 1 times

" # AkanshaR91 1 year ago


Answer should be B and C.
Reference-
https://aws.amazon.com/about-aws/whats-new/2015/05/amazon-ses-is-now-integrated-with-cloudtrail/
upvoted 1 times

" # andylogan 1 year ago


Quote from your link "Using the information collected by CloudTrail, you can determine what request was made to Amazon SES, the source IP


address from which the request was made, who made the request, when it was made, and so on"
So CloudTrail doesn't log email content, answer should be A, C instead of B
upvoted 3 times
" # student22 1 year, 1 month ago
A,C
SES --> Kinesis Firehose --> S3 --> Query with Athena
upvoted 2 times

" # Liongeek 1 year, 1 month ago


Ans is A and C . WHY?

You can both publish logs and metric to CloudWatch and Kinesis Data Firehose, but ONLY can publish detailed event records to Kinesis Data
Firehose.
And of course, once in Firehose you can put the logs in S3 and analyze them with Athena

Official ref: https://docs.aws.amazon.com/ses/latest/dg/event-publishing-add-event-destination.html


upvoted 3 times
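A hedged boto3 sketch of the A part: an SES configuration set whose event destination is a Kinesis Data Firehose stream that delivers to S3, where Athena can then query the JSON event records. The IAM role and delivery stream ARNs are placeholders.

import boto3

ses = boto3.client("ses")

ses.create_configuration_set(ConfigurationSet={"Name": "email-logging"})

# Publish send/delivery/bounce/complaint events to a Firehose stream that writes to S3.
# Role and stream ARNs are placeholders.
ses.create_configuration_set_event_destination(
    ConfigurationSetName="email-logging",
    EventDestination={
        "Name": "to-firehose",
        "Enabled": True,
        "MatchingEventTypes": ["send", "delivery", "bounce", "complaint"],
        "KinesisFirehoseDestination": {
            "IAMRoleARN": "arn:aws:iam::111122223333:role/ses-firehose-role",
            "DeliveryStreamARN": "arn:aws:firehose:us-east-1:111122223333:deliverystream/ses-events",
        },
    },
)
# Senders then reference the configuration set name when sending (for example via the
# X-SES-CONFIGURATION-SET header) so their messages generate events.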


Question #756 Topic 1

A company is launching a new web application on Amazon EC2 instances. Development and production workloads exist in separate AWS
accounts.
According to the company's security requirements, only automated configuration tools are allowed to access the production account. The
company's security team wants to receive immediate notification if any manual access to the production AWS account or EC2 instances occurs.
Which combination of actions should a solutions architect take in the production account to meet these requirements? (Choose three.)

A. Turn on AWS CloudTrail logs in the application's primary AWS Region. Use Amazon Athena to query the logs for AwsConsoleSignIn events.

B. Configure Amazon Simple Email Service (Amazon SES) to send email to the security team when an alarm is activated.

C. Deploy EC2 instances in an Auto Scaling group. Configure the launch template to deploy instances without key pairs. Configure Amazon
CloudWatch Logs to capture system access logs. Create an Amazon CloudWatch alarm that is based on the logs to detect when a user logs in
to an EC2 instance.

D. Configure an Amazon Simple Notification Service (Amazon SNS) topic to send a message to the security team when an alarm is activated.

E. Turn on AWS CloudTrail logs for all AWS Regions. Configure Amazon CloudWatch alarms to provide an alert when an AwsConsoleSignIn
event is detected.

F. Deploy EC2 instances in an Auto Scaling group. Configure the launch template to delete the key pair after launch. Configure Amazon
CloudWatch Logs for the system access logs. Create an Amazon CloudWatch dashboard to show user logins over time.

Correct Answer: CDE

Community vote distribution


CDE (100%)

" # iillii 8 months ago


C,D,E is right!!
upvoted 4 times

" # AzureDP900 11 months ago


C,D,E is right
upvoted 2 times

" # Meghaaaa 11 months ago


Why not B?
upvoted 1 times

" # lingxian 5 months, 2 weeks ago


SES is a service that helps you send/receive emails, not a service that could subscribe an event.
upvoted 2 times

" # cldy 11 months ago


C. Deploy EC2 instances in an Auto Scaling group. Configure the launch template to deploy instances without key pairs. Configure Amazon
CloudWatch Logs to capture system access logs. Create an Amazon CloudWatch alarm that is based on the logs to detect when a user logs in to
an EC2 instance.
D. Configure an Amazon Simple Notification Service (Amazon SNS) topic to send a message to the security team when an alarm is activated.
E. Turn on AWS CloudTrail logs for all AWS Regions. Configure Amazon CloudWatch alarms to provide an alert when an AwsConsoleSignin event
is detected.
upvoted 2 times
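For the E+D part, a minimal sketch: a metric filter on the CloudTrail log group that counts ConsoleLogin events, and an alarm whose action is an SNS topic for the security team. The log group name and topic ARN are placeholders.

import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

# Assumes the multi-Region CloudTrail trail already delivers to this log group (placeholder name).
logs.put_metric_filter(
    logGroupName="CloudTrail/management-events",
    filterName="console-signin",
    filterPattern='{ $.eventName = "ConsoleLogin" }',
    metricTransformations=[{
        "metricName": "ConsoleSignInCount",
        "metricNamespace": "Security",
        "metricValue": "1",
    }],
)

# The alarm notifies the security team's SNS topic (placeholder ARN) on any sign-in.
cloudwatch.put_metric_alarm(
    AlarmName="manual-console-signin",
    Namespace="Security",
    MetricName="ConsoleSignInCount",
    Statistic="Sum",
    Period=60,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:security-alerts"],
)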

" # AzureDP900 11 months, 2 weeks ago


C,D,E is correct answer
upvoted 2 times

" # acloudguru 11 months, 2 weeks ago


Selected Answer: CDE
F is not right; a CloudWatch dashboard does not provide a way to meet the notification requirement.
upvoted 2 times

" # Kopa 12 months ago


yep C.D.E
upvoted 2 times


" # andylogan 1 year ago


It's C D E
upvoted 2 times

" # tgv 1 year ago


CCC DDD EEE
---
upvoted 3 times

" # blackgamer 1 year ago


CDE is the answer.
upvoted 1 times

" # Cotter 1 year ago


Sure for C,D and E.
upvoted 1 times

" # denccc 1 year, 1 month ago


Also going for CDE
upvoted 1 times

" # pablobairat 1 year, 1 month ago


C,D,E it is
upvoted 3 times

" # pkboy78 1 year, 1 month ago


I think it is C, D and E
upvoted 2 times


Question #757 Topic 1

A company is running a workload that consists of thousands of Amazon EC2 instances. The workload is running in a VPC that contains several
public subnets and private subnets. The public subnets have a route for 0.0.0.0/0 to an existing internet gateway. The private subnets have a route
for 0.0.0.0/0 to an existing NAT gateway.
A solutions architect needs to migrate the entire fleet of EC2 instances to use IPv6. The EC2 instances that are in private subnets must not be
accessible from the public internet.
What should the solutions architect do to meet these requirements?

A. Update the existing VPC, and associate a custom IPv6 CIDR block with the VPC and all subnets. Update all the VPC route tables, and add a
route for ::/0 to the internet gateway.

B. Update the existing VPC, and associate an Amazon-provided IPv6 CIDR block with the VPC and all subnets. Update the VPC route tables for
all private subnets, and add a route for ::/0 to the NAT gateway.

C. Update the existing VPC, and associate an Amazon-provided IPv6 CIDR block with the VPC and all subnets. Create an egress-only internet
gateway. Update the VPC route tables for all private subnets, and add a route for ::/0 to the egress-only internet gateway.

D. Update the existing VPC, and associate a custom IPv6 CIDR block with the VPC and all subnets. Create a new NAT gateway, and enable IPv6
support. Update the VPC route tables for all private subnets, and add a route for ::/0 to the IPv6-enabled NAT gateway.

Correct Answer: C

Community vote distribution


C (100%)

" # Jupi Highly Voted $ 1 year ago


A - Incorrect - It will allow instances to be accessed from internet
B - Incorrect - NAT gateways are not supported for IPv6 traffic—use an outbound-only (egress-only) internet gateway instead.
C- Correct. https://docs.aws.amazon.com/vpc/latest/userguide/vpc-migrate-ipv6.html#vpc-migrate-ipv6-cidr
D: Incorrect - NAT gateways are not supported for IPv6 traffic—use an outbound-only (egress-only) internet gateway instead.
upvoted 9 times
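A boto3 sketch of C, assuming placeholder VPC and route table IDs: associate an Amazon-provided IPv6 block, create an egress-only internet gateway, and give the private route tables a ::/0 route to it (each subnet would also need a /64 carved from the new block).

import boto3

ec2 = boto3.client("ec2")
VPC_ID = "vpc-0abc1234"  # placeholder

# Amazon-provided /56 IPv6 block for the VPC.
ec2.associate_vpc_cidr_block(VpcId=VPC_ID, AmazonProvidedIpv6CidrBlock=True)

# Egress-only internet gateway: IPv6 traffic can leave, but inbound connections
# from the internet are blocked, keeping the private instances unreachable.
eigw = ec2.create_egress_only_internet_gateway(VpcId=VPC_ID)
eigw_id = eigw["EgressOnlyInternetGateway"]["EgressOnlyInternetGatewayId"]

# Default IPv6 route for each private-subnet route table (placeholder ID).
ec2.create_route(
    RouteTableId="rtb-0priv1234",
    DestinationIpv6CidrBlock="::/0",
    EgressOnlyInternetGatewayId=eigw_id,
)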

" # TechX Most Recent % 4 months, 1 week ago


Selected Answer: C
C, easy one
upvoted 1 times

" # jj22222 7 months, 1 week ago


Selected Answer: C
C looks right
upvoted 1 times

" # shotty1 9 months, 2 weeks ago


Selected Answer: C
it is C
upvoted 1 times

" # AzureDP900 11 months ago


I will go with C
upvoted 1 times

" # andylogan 1 year ago


It's C
upvoted 1 times

" # Kopa 1 year ago


key word egress gateway, so C is correct
upvoted 1 times

" # tgv 1 year ago


CCC
---
upvoted 3 times


" # Cotter 1 year ago


Yes C. > https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html.
upvoted 1 times

" # denccc 1 year, 1 month ago


Also go for C
upvoted 1 times

" # pablobairat 1 year, 1 month ago


C
Sources:
https://docs.aws.amazon.com/vpc/latest/userguide/vpc-migrate-ipv6.html
https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html
upvoted 2 times

" # pkboy78 1 year, 1 month ago


It should be C
upvoted 1 times


Question #758 Topic 1

A company is migrating applications from on premises to the AWS Cloud. These applications power the company's internal web forms. These web
forms collect data for specific events several times each quarter. The web forms use simple SQL statements to save the data to a local relational
database.
Data collection occurs for each event, and the on-premises servers are idle most of the time. The company needs to minimize the amount of idle
infrastructure that supports the web forms.
Which solution will meet these requirements?

A. Use Amazon EC2 Image Builder to create AMIs for the legacy servers. Use the AMIs to provision EC2 instances to recreate the applications
in the AWS Cloud. Place an Application Load Balancer (ALB) in front of the EC2 instances. Use Amazon Route 53 to point the DNS names of
the web forms to the ALB.

B. Create one Amazon DynamoDB table to store data for all the data input. Use the application form name as the table key to distinguish data
items. Create an Amazon Kinesis data stream to receive the data input and store the input in DynamoDB. Use Amazon Route 53 to point the
DNS names of the web forms to the Kinesis data stream's endpoint.

C. Create Docker images for each server of the legacy web form applications. Create an Amazon Elastic Container Service (Amazon ECS)
cluster on AWS Fargate. Place an Application Load Balancer in front of the ECS cluster. Use Fargate task storage to store the web form data.

D. Provision an Amazon Aurora Serverless cluster. Build multiple schemas for each web form's data storage. Use Amazon API Gateway and an
AWS Lambda function to recreate the data input forms. Use Amazon Route 53 to point the DNS names of the web forms to their corresponding
API Gateway endpoint.

Correct Answer: B
Reference:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/kds.html

Community vote distribution


D (100%)

" # kadev 2 months, 3 weeks ago


Main point is "The company's goal should be to reduce the quantity of idle infrastructure supporting online forms." => serverless or scheduled
autoscaling, but has no option about autoscaling => answer is D
upvoted 2 times
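A hedged sketch of the database half of D: an Aurora Serverless (v1) cluster that auto-pauses between the quarterly form events, so nothing idles. The identifier, credentials, and capacity numbers are placeholders.

import boto3

rds = boto3.client("rds")

# Auto-pause drops the cluster to zero capacity after 5 idle minutes, which matches
# the "idle most of the time" workload. All values below are placeholders.
rds.create_db_cluster(
    DBClusterIdentifier="webforms",
    Engine="aurora-mysql",
    EngineMode="serverless",
    MasterUsername="admin",
    MasterUserPassword="change-me-please",
    ScalingConfiguration={
        "MinCapacity": 1,
        "MaxCapacity": 4,
        "AutoPause": True,
        "SecondsUntilAutoPause": 300,
    },
)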

" # hilft 3 months, 1 week ago


B vs. D
But don't need kinesis here.
D
upvoted 1 times

" # awsgorilla 4 months, 1 week ago


D. Provision an Amazon Aurora Serverless cluster. Build multiple schemas for each web form's data storage. Use Amazon API Gateway and an
AWS Lambda function to recreate the data input forms. Use Amazon Route 53 to point the DNS names of the web forms to their corresponding
API Gateway endpoint.
upvoted 3 times

" # JYZ 6 months, 2 weeks ago


The reference is about using Kinesis for capturing changes to DynamoDB not about updating data in DynamoDB. Also how will customer supply
the data?
upvoted 1 times

" # AMKazi 10 months ago


Kineses data streams will increase the cost exponentially. Also the volume is low and per qtr.
D: is the answer
upvoted 2 times

" # cldy 11 months ago


D. Provision an Amazon Aurora Serverless cluster. Build multiple schemas for each web form's data storage. Use Amazon API Gateway and
an AWS Lambda function to recreate the data input forms. Use Amazon Route 53 to point the DNS names of the web forms to their
corresponding API Gateway endpoint.
upvoted 1 times

" # acloudguru 11 months, 1 week ago


Selected Answer: D
Serverless API + Serverless Business Logic + Serverless DB
upvoted 4 times
" # andylogan 1 year ago
It's D
upvoted 1 times

" # student22 1 year ago


D
Serverless API + Serverless Business Logic + Serverless DB
upvoted 1 times

" # tgv 1 year ago


DDD
---
upvoted 1 times

" # blackgamer 1 year ago


It should be D.
upvoted 1 times

" # zolthar_z 1 year, 1 month ago


Answer is D. Fargate Storage is ephemeral storage, so, if the cluster has some problem the data will be lost
upvoted 2 times

" # denccc 1 year, 1 month ago


Would go for D
upvoted 3 times

" # denccc 1 year ago


Because of "respond to each event" and "minimize the amount of idle infrastructure"
upvoted 2 times

" # Rmukh 1 year, 1 month ago


I think it should be C. D says create multiple schemas for each data format storage which is unnecessary.
upvoted 1 times

" # wassb 3 weeks, 2 days ago


Fargate task storage is ephemeral storage
upvoted 1 times

" # AMKazi 10 months ago


Where will the data be stored? Fargate task is a compute node.
upvoted 1 times

" # pablobairat 1 year, 1 month ago


D for sure
upvoted 3 times


Question #759 Topic 1

A company wants to migrate its data analytics environment from on premises to AWS. The environment consists of two simple Node.js
applications. One of the applications collects sensor data and loads it into a MySQL database. The other application aggregates the data into
reports. When the aggregation jobs run, some of the load jobs fail to run correctly.
The company must resolve the data loading issue. The company also needs the migration to occur without interruptions or changes for the
company's customers.
What should a solutions architect do to meet these requirements?

A. Set up an Amazon Aurora MySQL database as a replication target for the on-premises database. Create an Aurora Replica for the Aurora
MySQL database, and move the aggregation jobs to run against the Aurora Replica. Set up collection endpoints as AWS Lambda functions
behind a Network Load Balancer (NLB), and use Amazon RDS Proxy to write to the Aurora MySQL database. When the databases are synced,
disable the replication job and restart the Aurora Replica as the primary instance. Point the collector DNS record to the NLB.

B. Set up an Amazon Aurora MySQL database. Use AWS Database Migration Service (AWS DMS) to perform continuous data replication from
the on-premises database to Aurora. Move the aggregation jobs to run against the Aurora MySQL database. Set up collection endpoints behind
an Application Load Balancer (ALB) as Amazon EC2 instances in an Auto Scaling group. When the databases are synced, point the collector
DNS record to the ALB. Disable the AWS DMS sync task after the cutover from on premises to AWS.

C. Set up an Amazon Aurora MySQL database. Use AWS Database Migration Service (AWS DMS) to perform continuous data replication from
the on-premises database to Aurora. Create an Aurora Replica for the Aurora MySQL database, and move the aggregation jobs to run against
the Aurora Replica. Set up collection endpoints as AWS Lambda functions behind an Application Load Balancer (ALB), and use Amazon RDS
Proxy to write to the Aurora MySQL database. When the databases are synced, point the collector DNS record to the ALB. Disable the AWS
DMS sync task after the cutover from on premises to AWS.

D. Set up an Amazon Aurora MySQL database. Create an Aurora Replica for the Aurora MySQL database, and move the aggregation jobs to run
against the Aurora Replica. Set up collection endpoints as an Amazon Kinesis data stream. Use Amazon Kinesis Data Firehose to replicate the
data to the Aurora MySQL database. When the databases are synced, disable the replication job and restart the Aurora Replica as the primary
instance. Point the collector DNS record to the Kinesis data stream.

Correct Answer: B

Community vote distribution


C (100%)

" # Jupi Highly Voted $ 1 year ago


A - Incorrect - Network Load Balancers do not support the Lambda target type. Application Load Balancers are the only load balancers that
support the Lambda target type.
B - Incorrect - This will not solve the problem of load jobs failing while the aggregation job runs.
C - Correct - Aurora Replica for the aggregation jobs and RDS Proxy for better RDS connection handling.
D - Incorrect - Kinesis Firehose cannot replicate data to Aurora directly.
upvoted 21 times
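A boto3 sketch of the detail that rules out A: only an ALB supports Lambda targets. It creates a lambda-type target group, grants Elastic Load Balancing permission to invoke the function, and registers it. The function name and ARNs are placeholders.

import boto3

elbv2 = boto3.client("elbv2")
lam = boto3.client("lambda")

# Target groups of type "lambda" take no protocol/port/VPC; only ALBs can use them.
tg = elbv2.create_target_group(Name="collector-fn", TargetType="lambda")
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# The load balancer needs permission to invoke the collection-endpoint function.
lam.add_permission(
    FunctionName="collector",
    StatementId="alb-invoke",
    Action="lambda:InvokeFunction",
    Principal="elasticloadbalancing.amazonaws.com",
    SourceArn=tg_arn,
)

# Register the function (placeholder ARN) as the target behind the ALB listener rule.
elbv2.register_targets(
    TargetGroupArn=tg_arn,
    Targets=[{"Id": "arn:aws:lambda:us-east-1:111122223333:function:collector"}],
)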

" # mericov Highly Voted $ 1 year, 1 month ago


C - Migrate the database with DMS -> Create a read replica - aggregation jobs will read data from RR -> Endpoints with Lambda behind an ALB
-> use Proxy to write to master DB. Once synchronized, stop the task and point collectors to ALB. For B, is too complicated to configure the
apps as EC2 instances. Instead Lambda is suitable. A, Lambda cannot run behind NLB.
upvoted 10 times

" # aandc Most Recent % 4 months, 1 week ago


Selected Answer: C
C adds read replica to solve aggregation jobs issue
upvoted 1 times

" # AzureDP900 11 months ago


I will go with C
upvoted 3 times

" # cldy 11 months ago


C. Set up an Amazon Aurora MySQL database. Use AWS Database Migration Service (AWS DMS) to perform continuous data replication from
the on-premises database to Aurora. Create an Aurora Replica for the Aurora MySQL database, and move the aggregation jobs to run against the
Aurora Replica. Set up collection endpoints as AWS Lambda functions behind an Application Load Balancer (ALB), and use Amazon RDS Proxy
to write to the Aurora MySQL database. When the databases are synced, point the collector DNS record to the ALB. Disable the AWS DMS sync
task after the cutover from on premises to AWS.


upvoted 2 times
" # backfringe 11 months, 2 weeks ago
I'd go with C
upvoted 2 times

" # acloudguru 11 months, 3 weeks ago


Selected Answer: C
B - InCorrect - This will not solve problem of load jobs fail, while the aggregate job runs
upvoted 3 times

" # andylogan 1 year ago


It''s C -
Amazon RDS Proxy allows applications to pool and share connections established with the database, improving database efficiency and
application scalability. With RDS Proxy, failover times for Aurora and RDS databases are reduced by up to 66%
upvoted 3 times

" # tgv 1 year ago


CCC
---
upvoted 2 times

" # blackgamer 1 year ago


C for sure.
upvoted 1 times

" # Cotter 1 year ago


I think C.
upvoted 2 times

" # zolthar_z 1 year ago


Answer is C; even though B works, it does not fix the read issue from the aggregation task.
upvoted 4 times

" # Abhiju2019 1 year ago


it should be B
upvoted 1 times

" # pablobairat 1 year, 1 month ago


C it is
upvoted 4 times

" # pkboy78 1 year, 1 month ago


I think it is B
upvoted 1 times


Question #760 Topic 1

A company runs an application in the cloud that consists of a database and a website. Users can post data to the website, have the data
processed, and have the data sent back to them in an email. Data is stored in a MySQL database running on an Amazon EC2 instance. The
database is running in a VPC with two private subnets. The website is running on Apache Tomcat in a single EC2 instance in a different VPC with
one public subnet. There is a single VPC peering connection between the database and website VPC.
The website has suffered several outages during the last month due to high traffic.
Which actions should a solutions architect take to increase the reliability of the application? (Choose three.)

A. Place the Tomcat server in an Auto Scaling group with multiple EC2 instances behind an Application Load Balancer.

B. Provision an additional VPC peering connection.

C. Migrate the MySQL database to Amazon Aurora with one Aurora Replica.

D. Provision two NAT gateways in the database VPC.

E. Move the Tomcat server to the database VPC.

F. Create an additional public subnet in a different Availability Zone in the website VPC.

Correct Answer: ACF

Community vote distribution


ACF (100%)

" # snakecharmer2 Highly Voted $ 6 months, 2 weeks ago


Selected Answer: ACF
A - Auto Scaling behind an ALB
C - Aurora with a read replica
F - second public subnet for Multi-AZ
upvoted 7 times
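As a rough illustration of option A, the following boto3 sketch attaches an Auto Scaling group of Tomcat instances to an existing ALB target group. The launch template name, target group ARN, and subnet IDs are all hypothetical placeholders.

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Auto Scaling group spanning two AZs, registered with an existing ALB target group
    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="tomcat-asg",
        LaunchTemplate={"LaunchTemplateName": "tomcat-template", "Version": "$Latest"},
        MinSize=2,
        MaxSize=6,
        DesiredCapacity=2,
        VPCZoneIdentifier="subnet-0aaa,subnet-0bbb",   # public subnets in two different AZs (option F)
        TargetGroupARNs=["arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/tomcat/abc123"],
        HealthCheckType="ELB",
        HealthCheckGracePeriod=300,
    )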

" # Ni_yot Most Recent % 2 weeks, 6 days ago


Selected Answer: ACF
ACF looks good.
upvoted 2 times

" # Yamchi 6 months, 1 week ago


go with ACF
upvoted 3 times


Question #761 Topic 1

A solutions architect is building a web application that uses an Amazon RDS for PostgreSQL DB instance. The DB instance is expected to receive
many more reads than writes. The solutions architect needs to ensure that the large amount of read traffic can be accommodated and that the DB
instance is highly available.
Which steps should the solutions architect take to meet these requirements? (Choose three.)

A. Create multiple read replicas and put them into an Auto Scaling group.

B. Create multiple read replicas in different Availability Zones.

C. Create an Amazon Route 53 hosted zone and a record set for each read replica with a TTL and a weighted routing policy.

D. Create an Application Load Balancer (ALB) and put the read replicas behind the ALB.

E. Configure an Amazon CloudWatch alarm to detect a failed read replica. Set the alarm to directly invoke an AWS Lambda function to delete
its Route 53 record set.

F. Configure an Amazon Route 53 health check for each read replica using its endpoint.

Correct Answer: BDE

Community vote distribution


BCF (100%)

" # pablobairat Highly Voted $ 1 year, 1 month ago


B,C,F
Source: https://aws.amazon.com/premiumsupport/knowledge-center/requests-rds-read-replicas/
upvoted 15 times

" # sergioandreslq 1 year ago


Why not B-D-E? I liked B,C,F, however, B-D-E can be a possible solution.
Either way, I go with B,C,F but if possible to know why the other are not correct?
upvoted 2 times

" # kadev 2 months, 2 weeks ago


An ALB supports the HTTP protocol (L7). To reach MySQL you must connect via TCP (L4).
upvoted 1 times

" # Viper57 1 year ago


You cannot use an ALB to distribute traffic to read replicas, you need to use a Route53 weighted routing policy.
upvoted 8 times

" # patrickznk Highly Voted $ 5 months ago


B,C,E
I believe Route 53 cannot health check an endpoint in a private subnet. Therefore we need to create a CloudWatch alarm and have Route 53 check the state of the
CloudWatch alarm.
upvoted 5 times

" # hilft Most Recent % 3 months, 2 weeks ago


B and C is the must.
F, i will memorize.
upvoted 2 times

" # kangtamo 4 months, 3 weeks ago


Selected Answer: BCF
Agree with BCF
upvoted 1 times

" # AzureDP900 11 months ago


Agree with BCF.
upvoted 2 times

" # andylogan 1 year ago


It's B C F
You can use Amazon Route 53 weighted record sets to distribute requests across your read replicas. Within a Route 53 hosted zone, create
individual record sets for each DNS endpoint associated with your read replicas and give them the same weight. Then, direct requests to the
endpoint of the record set.

You can incorporate Route 53 health checks to be sure that Route 53 directs traffic away from unavailable read replicas
upvoted 1 times
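A hedged boto3 sketch of the B/C/F approach described above, assuming two replica endpoints in different AZs, a hosted zone ID, and a record name that are all placeholders. It also assumes the replica endpoints are reachable by Route 53 health checkers; as patrickznk notes above, private-only endpoints would instead need a CloudWatch-alarm-based health check.

    import boto3

    route53 = boto3.client("route53")

    replicas = [
        ("replica-az1.abc.us-east-1.rds.amazonaws.com", "replica-1"),
        ("replica-az2.abc.us-east-1.rds.amazonaws.com", "replica-2"),
    ]

    changes = []
    for endpoint, set_id in replicas:
        # TCP health check against the PostgreSQL port of each replica endpoint
        hc = route53.create_health_check(
            CallerReference=f"hc-{set_id}",
            HealthCheckConfig={"Type": "TCP", "FullyQualifiedDomainName": endpoint, "Port": 5432,
                               "RequestInterval": 30, "FailureThreshold": 3},
        )
        # Equal-weight CNAME records so reads are spread evenly across the replicas
        changes.append({
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "reader.example.internal.",
                "Type": "CNAME",
                "SetIdentifier": set_id,
                "Weight": 50,
                "TTL": 60,
                "ResourceRecords": [{"Value": endpoint}],
                "HealthCheckId": hc["HealthCheck"]["Id"],
            },
        })

    route53.change_resource_record_sets(HostedZoneId="Z0HYPOTHETICAL", ChangeBatch={"Changes": changes})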
" # tgv 1 year ago
BBB CCC FFF
---
upvoted 3 times

" # Jupi 1 year, 1 month ago


Agree with BCF.
upvoted 3 times

" # tvs 1 year, 1 month ago


BCF. https://aws.amazon.com/premiumsupport/knowledge-center/requests-rds-read-replicas/
upvoted 3 times

" # denccc 1 year, 1 month ago


would go for BCF
upvoted 2 times

" # pkboy78 1 year, 1 month ago


I think it should be B, C and F
upvoted 2 times


Question #762 Topic 1

A solutions architect at a large company needs to set up network security for outbound traffic to the internet from all AWS accounts within an
organization in AWS
Organizations. The organization has more than 100 AWS accounts, and the accounts route to each other by using a centralized AWS Transit
Gateway. Each account has both an internet gateway and a NAT gateway for outbound traffic to the internet. The company deploys resources only
into a single AWS Region.
The company needs the ability to add centrally managed rule-based filtering on all outbound traffic to the internet for all AWS accounts in the
organization. The peak load of outbound traffic will not exceed 25 Gbps in each Availability Zone.
Which solution meets these requirements?

A. Create a new VPC for outbound traffic to the internet. Connect the existing transit gateway to the new VPC. Configure a new NAT gateway.
Create an Auto Scaling group of Amazon EC2 instances that run an open-source internet proxy for rule-based filtering across all Availability
Zones in the Region. Modify all default routes to point to the proxy's Auto Scaling group.

B. Create a new VPC for outbound traffic to the internet. Connect the existing transit gateway to the new VPC. Configure a new NAT gateway.
Use an AWS Network Firewall firewall for rule-based filtering. Create Network Firewall endpoints in each Availability Zone. Modify all default
routes to point to the Network Firewall endpoints.

C. Create an AWS Network Firewall firewall for rule-based filtering in each AWS account. Modify all default routes to point to the Network
Firewall firewalls in each account.

D. In each AWS account, create an Auto Scaling group of network-optimized Amazon EC2 instances that run an open-source internet proxy for
rule-based filtering. Modify all default routes to point to the proxy's Auto Scaling group.

Correct Answer: B

" # denccc Highly Voted $ 1 year, 1 month ago


B https://aws.amazon.com/blogs/networking-and-content-delivery/deploy-centralized-traffic-filtering-using-aws-network-firewall/
upvoted 10 times

" # pablobairat Highly Voted $ 1 year, 1 month ago


B
Source: https://aws.amazon.com/blogs/networking-and-content-delivery/deployment-models-for-aws-network-firewall/
upvoted 7 times

" # kadev Most Recent % 2 months, 2 weeks ago


If you don't know about Network Firewall, you can still eliminate the wrong answers:
- "in each AWS account" => No, the requirement is centrally managed filtering
- "Modify all default routes to point to the proxy's Auto Scaling group" => No, a route table cannot route to an Auto Scaling group

=> B
upvoted 3 times
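To make option B concrete, here is a hedged boto3 sketch that creates a domain-based stateful rule group, a firewall policy, and a Network Firewall in a central egress VPC. The rule group content, VPC ID, and subnet IDs are hypothetical and only illustrative of the centralized-egress pattern in the linked blog posts.

    import boto3

    nfw = boto3.client("network-firewall")

    # Stateful rule group that only allows outbound traffic to an approved domain list
    rule_group = nfw.create_rule_group(
        RuleGroupName="egress-allow-list",
        Type="STATEFUL",
        Capacity=100,
        RuleGroup={
            "RulesSource": {
                "RulesSourceList": {
                    "Targets": [".example.com", ".amazonaws.com"],
                    "TargetTypes": ["TLS_SNI", "HTTP_HOST"],
                    "GeneratedRulesType": "ALLOWLIST",
                }
            }
        },
    )

    policy = nfw.create_firewall_policy(
        FirewallPolicyName="central-egress-policy",
        FirewallPolicy={
            "StatelessDefaultActions": ["aws:forward_to_sfe"],
            "StatelessFragmentDefaultActions": ["aws:forward_to_sfe"],
            "StatefulRuleGroupReferences": [
                {"ResourceArn": rule_group["RuleGroupResponse"]["RuleGroupArn"]}
            ],
        },
    )

    # One firewall endpoint per Availability Zone in the central egress VPC
    nfw.create_firewall(
        FirewallName="central-egress-firewall",
        FirewallPolicyArn=policy["FirewallPolicyResponse"]["FirewallPolicyArn"],
        VpcId="vpc-0egress",
        SubnetMappings=[{"SubnetId": "subnet-0fw-az1"}, {"SubnetId": "subnet-0fw-az2"}],
    )

The default routes in the workload route tables would then be pointed at the resulting Network Firewall endpoints, as the answer describes.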

" # AzureDP900 11 months ago


B is correct
upvoted 1 times

" # Kopa 12 months ago


B for sure
upvoted 1 times

" # andylogan 1 year ago


It's B
upvoted 1 times

" # student22 1 year ago


B
AWS Network Firewall + Transit Gateway + NAT Gateway
upvoted 1 times

" # tgv 1 year ago


BBB
---
upvoted 3 times


Question #763 Topic 1

A company has multiple business units. Each business unit has its own AWS account and runs a single website within that account. The company
also has a single logging account. Logs from each business unit website are aggregated into a single Amazon S3 bucket in the logging account.
The S3 bucket policy provides each business unit with access to write data into the bucket and requires data to be encrypted.
The company needs to encrypt logs uploaded into the bucket using a single AWS Key Management Service (AWS KMS) CMK. The CMK that
protects the data must be rotated once every 365 days.
Which strategy is the MOST operationally efficient for the company to use to meet these requirements?

A. Create a customer managed CMK in the logging account. Update the CMK key policy to provide access to the logging account only.
Manually rotate the CMK every 365 days.

B. Create a customer managed CMK in the logging account. Update the CMK key policy to provide access to the logging account and business
unit accounts. Enable automatic rotation of the CMK.

C. Use an AWS managed CMK in the logging account. Update the CMK key policy to provide access to the logging account and business unit
accounts. Manually rotate the CMK every 365 days.

D. Use an AWS managed CMK in the logging account. Update the CMK key policy to provide access to the logging account only. Enable
automatic rotation of the CMK.

Correct Answer: A

Community vote distribution


B (71%) D (29%)

" # Jupi Highly Voted $ 1 year, 1 month ago


A - Incorrect - Manual rotation of the key is not ideal
B - Correct - Use a customer managed CMK if you want to grant cross-account access to your S3 objects. You can configure the policy of a
customer managed CMK to allow access from another account.
https://docs.aws.amazon.com/kms/latest/developerguide/key-policy-modifying-external-accounts.html
C,D - Incorrect. you cannot manage Amazon managed CMKs, rotate them, or change their key policies. AWS managed customer master key
(CMK) key policies can't be modified because they're read-only
upvoted 14 times
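A brief boto3 sketch of option B, assuming hypothetical account IDs: create a customer managed key in the logging account, grant the business unit accounts use of the key in its key policy, and enable yearly automatic rotation.

    import boto3, json

    kms = boto3.client("kms")

    key_policy = {
        "Version": "2012-10-17",
        "Statement": [
            {"Sid": "EnableRootAccess", "Effect": "Allow",
             "Principal": {"AWS": "arn:aws:iam::111111111111:root"},      # logging account (placeholder)
             "Action": "kms:*", "Resource": "*"},
            {"Sid": "AllowBusinessUnitsToEncryptLogs", "Effect": "Allow",
             "Principal": {"AWS": ["arn:aws:iam::222222222222:root",       # business unit accounts (placeholders)
                                   "arn:aws:iam::333333333333:root"]},
             "Action": ["kms:Encrypt", "kms:GenerateDataKey*"], "Resource": "*"},
        ],
    }

    key = kms.create_key(
        Description="CMK for the centralized log bucket",
        Policy=json.dumps(key_policy),
    )

    # Customer managed keys rotate every 365 days once automatic rotation is enabled
    kms.enable_key_rotation(KeyId=key["KeyMetadata"]["KeyId"])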

" # mericov Highly Voted $ 1 year, 1 month ago


B - https://docs.aws.amazon.com/kms/latest/developerguide/key-policy-modifying-external-accounts.html
upvoted 5 times

" # AwsBRFan Most Recent % 3 weeks, 6 days ago


Selected Answer: D
https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingKMSEncryption.html
"Server-side encryption is the encryption of data at its destination by the application or service that receives it."

https://docs.aws.amazon.com/kms/latest/developerguide/rotate-keys.html
"Automatic key rotation is disabled by default on customer managed keys but authorized users can enable and disable it. When you enable (or
re-enable) automatic key rotation, AWS KMS automatically rotates the KMS key one year (approximately 365 days) after the enable date and
every year thereafter."
upvoted 2 times

" # Jonfernz 3 weeks, 1 day ago


wrong. there's no yearly automatic rotation for AWS managed KMS keys.
upvoted 1 times

" # pek77 1 month ago


What's the AWS managed CMK (AWS Managed Customer Managed Key)? is there the concept of this?
B is Correct
upvoted 1 times

" # Enigmaaaaaa 3 months, 3 weeks ago


Selected Answer: B
B is correct
A - will not work no access provided to other accounts and manual rotate.
C - AWS managed key you cant provide access to others + manual rotate - will not work
D - AWS managed CMK you cant provide access to other
https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#key-mgmt

upvoted 2 times
" # etopics 4 months ago
D is correct:

In May 2022, AWS KMS changed the rotation schedule for AWS managed keys from every three years (approximately 1,095 days) to every year
(approximately 365 days).

New AWS managed keys are automatically rotated one year after they are created, and approximately every year thereafter.

Existing AWS managed keys are automatically rotated one year after their most recent rotation, and every year thereafter.

https://docs.aws.amazon.com/kms/latest/developerguide/rotate-keys.html
upvoted 1 times

" # WebMaria 9 months, 4 weeks ago


So many answers here. Still wrong. It is D. The question asks for MOST OPTIMAL. D works and is more optimal than any other.

https://docs.aws.amazon.com/kms/latest/developerguide/rotate-keys.html
upvoted 1 times

" # Punitsolanki 9 months ago


AWS managed CMK is rotated once every three years automatically, so cant be C or D.
ref- https://docs.aws.amazon.com/whitepapers/latest/kms-best-practices/aws-managed-and-customer-managed-cmks.html
upvoted 1 times

" # Enigmaaaaaa 3 months, 3 weeks ago


with D - How can you provide access to other accounts/roles to use AWS managed key?
upvoted 1 times

" # AzureDP900 11 months ago


B is right
upvoted 1 times

" # cldy 11 months ago


B. Create a customer managed CMK in the logging account. Update the CMK key policy to provide access to the logging account and business
unit accounts. Enable automatic rotation of the CMK.
upvoted 1 times

" # javiems 11 months, 1 week ago


It's A. B is incorrect.
AWS-managed CMK (rotation): Once every three years automatically
Customer-managed CMK: Once a year automatically through opt-in or on-demand manually

https://docs.aws.amazon.com/whitepapers/latest/kms-best-practices/aws-managed-and-customer-managed-cmks.html
upvoted 2 times

" # backfringe 11 months, 1 week ago


I go with B
upvoted 1 times

" # RVD 11 months, 2 weeks ago


Selected Answer: B
enable automatic rotation after 1 year.
upvoted 1 times

" # acloudguru 11 months, 3 weeks ago


Selected Answer: B
A -incorrect - Manual Roation of key is not good
B - Correct - Use a customer managed CMK if you want to grant cross-account access to your S3 objects. You can configure the policy of a
customer managed CMK to allow access from another account.
https://docs.aws.amazon.com/kms/latest/developerguide/key-policy-modifying-external-accounts.html
C,D - Incorrect. you cannot manage Amazon managed CMKs, rotate them, or change their key policies. AWS managed customer master key
(CMK) key policies can't be modified because they're read-only
upvoted 2 times

" # Pratap 1 year ago


got to be B
upvoted 1 times

" # andylogan 1 year ago


It's B
upvoted 1 times

" # tgv 1 year ago

BBB
---
upvoted 3 times

" # blackgamer 1 year ago


B is answer.

AWS managed CMK rotate every three year.


https://docs.aws.amazon.com/whitepapers/latest/kms-best-practices/aws-managed-and-customer-managed-cmks.html
upvoted 3 times


Question #764 Topic 1

A company wants to migrate an application to Amazon EC2 from VMware Infrastructure that runs in an on-premises data center. A solutions
architect must preserve the software and configuration settings during the migration.
What should the solutions architect do to meet these requirements?

A. Configure the AWS DataSync agent to start replicating the data store to Amazon FSx for Windows File Server. Use the SMB share to host the
VMware data store. Use VM Import/Export to move the VMs to Amazon EC2.

B. Use the VMware vSphere client to export the application as an image in Open Virtualization Format (OVF) format. Create an Amazon S3
bucket to store the image in the destination AWS Region. Create and apply an IAM role for VM Import. Use the AWS CLI to run the EC2 import
command.

C. Configure AWS Storage Gateway for files service to export a Common Internet File System (CIFS) share. Create a backup copy to the shared
folder. Sign in to the AWS Management Console and create an AMI from the backup copy. Launch an EC2 instance that is based on the AMI.

D. Create a managed-instance activation for a hybrid environment in AWS Systems Manager. Download and install Systems Manager Agent on
the on-premises VM. Register the VM with Systems Manager to be a managed instance. Use AWS Backup to create a snapshot of the VM and
create an AMI. Launch an EC2 instance that is based on the AMI.

Correct Answer: D
Reference:
https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-install-managed-linux.html

Community vote distribution


B (100%)

" # pablobairat Highly Voted $ 1 year, 1 month ago


B
Source:
https://docs.aws.amazon.com/vm-import/latest/userguide/vmimport-image-import.html
upvoted 13 times

" # blackgamer 1 year, 1 month ago


Agree B
upvoted 1 times

" # mericov Highly Voted $ 1 year, 1 month ago


B
- Export an OVF Template
- Create / use an Amazon S3 bucket for storing the exported images. The bucket must be in the Region where you want to import your VMs.
- Create an IAM role named vmimport.
- You'll use AWS CLI to run the import commands.
upvoted 6 times
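A hedged boto3 sketch of the import step, after the OVA has been exported from vSphere and uploaded to S3. The bucket name, object key, and Region are placeholders, and the vmimport service role is assumed to already be configured as in the linked guide.

    import boto3

    ec2 = boto3.client("ec2", region_name="eu-west-1")  # Region where the AMI should land

    # Import the exported OVA from S3 as an AMI, preserving the installed software and settings
    task = ec2.import_image(
        Description="Application migrated from on-premises VMware",
        DiskContainers=[{
            "Description": "Exported OVA",
            "Format": "ova",
            "UserBucket": {"S3Bucket": "vm-export-bucket", "S3Key": "app.ova"},
        }],
    )

    # Poll the task until the AMI is ready, then launch an EC2 instance from it
    print(ec2.describe_import_image_tasks(ImportTaskIds=[task["ImportTaskId"]]))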

" # Ni_yot Most Recent % 2 weeks, 6 days ago


Selected Answer: B
agree its B
upvoted 1 times

" # Enigmaaaaaa 3 months, 3 weeks ago


Selected Answer: B
B - classic VM Export/Import
upvoted 1 times

" # kangtamo 4 months, 1 week ago


Selected Answer: B
Agree with B: VMware vSphere.
upvoted 1 times

" # tkanmani76 10 months, 3 weeks ago


https://docs.vmware.com/en/VMware-vSphere/5.5/com.vmware.vsphere.hostclient.doc/GUID-AFEDC48B-C96F-4088-9C1F-
4F0A30E965DE.html
VMware vSphere is not an AWS solution. While it works, the better option would be to use AWS Systems Manager. Option D is right.
upvoted 1 times

" # AzureDP900 11 months ago


B is right
upvoted 1 times

" # nsei 1 year ago


It is B https://aws.amazon.com/premiumsupport/knowledge-center/import-instances/
upvoted 1 times

" # Pratap 1 year ago


B is the answer
upvoted 1 times

" # andylogan 1 year ago


It's B
upvoted 1 times

" # tgv 1 year, 1 month ago


BBB
---
upvoted 2 times

" # denccc 1 year, 1 month ago


I think it's B: https://docs.aws.amazon.com/vm-import/latest/userguide/vmimport-image-import.html
upvoted 4 times

" # AWSDEvops 1 year, 1 month ago


I think its D
upvoted 1 times

" # pkboy78 1 year, 1 month ago


I think it is B? I am not sure how AWS backup can make snapshot of the on-premise server from SSM
upvoted 2 times


Question #765 Topic 1

A company is running multiple workloads in the AWS Cloud. The company has separate units for software development. The company uses AWS
Organizations and federation with SAML to give permissions to developers to manage resources in their AWS accounts. The development units
each deploy their production workloads into a common production account.
Recently, an incident occurred in the production account in which members of a development unit terminated an EC2 instance that belonged to a
different development unit. A solutions architect must create a solution that prevents a similar incident from happening in the future. The solution
also must allow developers the possibility to manage the instances used for their workloads.
Which strategy will meet these requirements?

A. Create separate OUs in AWS Organizations for each development unit. Assign the created OUs to the company AWS accounts. Create
separate SCPs with a deny action and a StringNotEquals condition for the DevelopmentUnit resource tag that matches the development unit
name. Assign the SCP to the corresponding OU.

B. Pass an attribute for DevelopmentUnit as an AWS Security Token Service (AWS STS) session tag during SAML federation. Update the IAM
policy for the developers' assumed IAM role with a deny action and a StringNotEquals condition for the DevelopmentUnit resource tag and
aws:PrincipalTag/ DevelopmentUnit.

C. Pass an attribute for DevelopmentUnit as an AWS Security Token Service (AWS STS) session tag during SAML federation. Create an SCP
with an allow action and a StringEquals condition for the DevelopmentUnit resource tag and aws:PrincipalTag/DevelopmentUnit. Assign the
SCP to the root OU.

D. Create separate IAM policies for each development unit. For every IAM policy, add an allow action and a StringEquals condition for the
DevelopmentUnit resource tag and the development unit name. During SAML federation, use AWS Security Token Service (AWS STS) to assign
the IAM policy and match the development unit name to the assumed IAM role.

Correct Answer: B

Community vote distribution


B (100%)

" # Viper57 Highly Voted $ 1 year ago


B is the correct answer IMO.

A - Does not make much sense. An account can only belong to one OU. This is a single production account so it can't be in multiple OUs.
B - The session tag identifies which business unit a user is part of. The IAM policy prevents them from modifying resources for any business unit
but their own.
C. This does not restrict any existing permissions so users can still modify resources from different business units.
D. STS cannot be used to assign a policy to an IAM role. A policy has to be assigned to the role before authentication occurs.
upvoted 9 times
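A sketch of the deny statement behind option B, attached with boto3 to the developers' assumed role. The role name and policy name are hypothetical; the DevelopmentUnit session tag itself is passed as a SAML attribute at federation time, which is not shown here.

    import boto3, json

    iam = boto3.client("iam")

    # Deny instance termination unless the resource's DevelopmentUnit tag matches the caller's session tag
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyCrossUnitTermination",
            "Effect": "Deny",
            "Action": "ec2:TerminateInstances",
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {
                    "ec2:ResourceTag/DevelopmentUnit": "${aws:PrincipalTag/DevelopmentUnit}"
                }
            },
        }],
    }

    iam.put_role_policy(
        RoleName="DeveloperFederatedRole",
        PolicyName="deny-cross-unit-termination",
        PolicyDocument=json.dumps(policy),
    )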

" # Jupi Highly Voted $ 1 year ago


It is B
https://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_abac-saml.html
upvoted 8 times

" # caveman712 Most Recent % 4 weeks, 1 day ago


Selected Answer: B
B is correct

C is incorrect because we can't use "Allow" with conditions in SCP


upvoted 1 times

" # cldy 11 months ago


B. Pass an attribute for DevelopmentUnit as an AWS Security Token Service (AWS STS) session tag during SAML federation. Update the IAM
policy for the developers' assumed IAM role with a deny action and a StringNotEquals condition for the DevelopmentUnit resource tag and
aws:PrincipalTag/ DevelopmentUnit.
upvoted 1 times

" # Bigbearcn 1 year ago


It is B.
https://docs.aws.amazon.com/IAM/latest/UserGuide/access_iam-tags.html
upvoted 1 times

" # andylogan 1 year ago

It's B
upvoted 2 times
" # DerekKey 1 year ago
In my opinion
B is correct - they already have ALLOW therefore we need DENY
C is wrong - since they already have ALLOW permission adding additional ALLOW permission doesn't make sense
upvoted 2 times

" # student22 1 year ago


C
Centrally controlled via SCP added to root.
upvoted 1 times

" # student22 1 year ago


Changing to B.
Answer from DerekKey makes sense.
upvoted 1 times

" # tgv 1 year ago


BBB
---
upvoted 3 times

" # denccc 1 year ago


It's B
upvoted 1 times

" # blackgamer 1 year ago


D to me.
upvoted 2 times

" # neta1o 1 year, 1 month ago


Seems we have some research to do on this one, I was thinking B.
upvoted 2 times

" # Cotter 1 year, 1 month ago


I'm not sure, maybe B?
upvoted 2 times

" # pablobairat 1 year, 1 month ago


D it is
upvoted 1 times

" # AWSDEvops 1 year, 1 month ago


I think its A
upvoted 3 times

" # AWSDEvops 1 year, 1 month ago


changing it to C
upvoted 1 times

" # pkboy78 1 year, 1 month ago


I think it is C
upvoted 2 times


Question #766 Topic 1

A company's factory and automation applications are running in a single VPC. More than 20 applications run on a combination of Amazon EC2,
Amazon Elastic
Container Service (Amazon ECS), and Amazon RDS.
The company has software engineers spread across three teams. One of the three teams owns each application, and each team is responsible for
the cost and performance of all of its applications. Team resources have tags that represent their application and team. The teams use IAM
access for daily activities.
The company needs to determine which costs on the monthly AWS bill are attributable to each application or team. The company also must be
able to create reports to compare costs from the last 12 months and to help forecast costs for the next 12 months. A solutions architect must
recommend an AWS Billing and
Cost Management solution that provides these cost reports.
Which combination of actions will meet these requirements? (Choose three.)

A. Activate the user-defined cost allocation tags that represent the application and the team.

B. Activate the AWS generated cost allocation tags that represent the application and the team.

C. Create a cost category for each application in Billing and Cost Management.

D. Activate IAM access to Billing and Cost Management.

E. Create a cost budget.

F. Enable Cost Explorer.

Correct Answer: BCF

Community vote distribution


ACF (67%) ADF (33%)

" # Jupi Highly Voted $ 1 year ago


A: After you create and apply user-defined tags, you can activate them for cost allocation.

D. By default, IAM users don't have access to the AWS Billing and Cost Management console. You or your account administrator must grant
users access.

F You can explore your usage and costs using the main graph, the Cost Explorer cost and usage reports, or the Cost Explorer RI reports. You can
view data for up to the last 12 months, forecast how much you're likely to spend for the next 12 months, and get recommendations for what
Reserved Instances to purchase.
upvoted 19 times

" # dmscountera 2 weeks, 1 day ago


ACF
Not D:
Important
Activating IAM access alone doesn't grant IAM users and roles the necessary permissions for these Billing console pages. In addition to
activating IAM access, you must also attach the required IAM policies to those users or roles. For more information, see Using identity-based
policies (IAM policies) for AWS Billing.
https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/control-access-billing.html
upvoted 1 times

" # hilft 3 months, 1 week ago


ADF is the answer
upvoted 1 times

" # Shanmahi 1 year ago


Agree Jupi.
https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/control-access-billing.html
By default, IAM users dont have access to Billing & Cost Mgmt. One of the ask is that the users should be able to create reports of usage and
forecast, for which they will require access to the billing console.
upvoted 1 times

" # sergioandreslq 1 year ago


Agree: ADF
A: Key line in the question: "Team resources have tags that represent their application and team", meaning the teams are already using tags, so the
user-defined cost allocation tags must be activated. Then:
D: Key line: "each team is responsible for the cost", meaning they need access to AWS Billing and Cost Management. Remember that
only the root account is able to access billing by default, so IAM access must be activated for the teams to control their cost. Then:
F: to see costs by cost allocation tag, Cost Explorer must be enabled.
upvoted 7 times

" # tgv 1 year ago


I disagree on --> D: Key question:"each time is responsible for the cost""
I believe the Key is in the request: "The company needs to determine which costs on the monthly AWS bill are attributable to each
application or team" and for this I will choose C over D
upvoted 1 times
" # tgv Highly Voted $ 1 year ago
AAA CCC FFF
---
A: After you create and apply user-defined tags, you can activate them for cost allocation
C: I believe the Key is in the request: "The company needs to determine which costs on the monthly AWS bill are attributable to each application
or team"
F: You can explore your usage and costs using the main graph, the Cost Explorer cost and usage reports, or the Cost Explorer RI reports. You
can view data for up to the last 12 months, forecast how much you're likely to spend for the next 12 months
upvoted 11 times

" # tgv 1 year ago


C: The COMPANY and not the IAM users
upvoted 5 times

" # sashenka 12 months ago


Then how do you suggest you satisfy the requirement that "each team is responsible for the cost and performance of all of its
applications"? We also are given that they use IAM for their daily use. We are also given that “teams rely on IAM access for day-to-day
operations.” Only possible solution here is D.
upvoted 3 times

" # AwsBRFan Most Recent % 3 weeks, 6 days ago


Selected Answer: ADF
https://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_billing.html
"IAM users cannot access billing data until the account owner activates IAM access and also attaches policies that provide billing actions to the
user or role."
upvoted 1 times

" # dmscountera 2 weeks, 1 day ago


Important
Activating IAM access alone doesn't grant IAM users and roles the necessary permissions for these Billing console pages. In addition to
activating IAM access, you must also attach the required IAM policies to those users or roles. For more information, see Using identity-based
policies (IAM policies) for AWS Billing.
upvoted 1 times

" # Enigmaaaaaa 3 months, 3 weeks ago


ACF
E - Incorrect - no need for budgets here
B - Incorrect - need to create custom cost allocation tags not to use the default ones
D - Incorrect -This is a consolidated billing organization - IAM access to billing is enabled by default
https://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_billing.html#tutorial-billing-step1
"Note
If you create a member account using AWS Organizations, this feature is enabled by default."
all other answers make sense A C F - Cost categories + user defined CAT + enable CE
upvoted 1 times

" # wassb 3 weeks, 2 days ago


There is no AWS ORGANIZATION in the question
upvoted 1 times

" # TechX 4 months, 1 week ago


Selected Answer: ACF
Answer: ACF
Explanation:
https://aws.amazon.com/aws-cost-management/aws-cost-categories/
upvoted 1 times

" # Kb80 4 months, 2 weeks ago


Selected Answer: ACF
I tend to lean towards ACF for the reason that the teams can manage their billing through cost explorer without needing to be provided access to
the billing and cost management portal. By default a member account will have access to cost explorer and you can control what they have
access to (or revoke access). https://docs.aws.amazon.com/cost-management/latest/userguide/ce-enable.html
https://docs.aws.amazon.com/cost-management/latest/userguide/ce-access.html
upvoted 1 times

" # jyrajan69 8 months, 1 week ago


Has to be A,D,F.
C is wrong as it only refers to applications, and the question clearly says either team or application
upvoted 1 times

" # lifebegins 8 months, 3 weeks ago


Answer is ACF:

1. https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/manage-cost-categories.html
2.https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/manage-cost-categories.html
3. https://docs.aws.amazon.com/cost-management/latest/userguide/ce-enable.html
upvoted 1 times

" # cannottellname 9 months, 2 weeks ago


Team already includes tags then why activate that?
CDF seems better to me.
upvoted 1 times

" # vbal 10 months, 2 weeks ago


https://aws.amazon.com/premiumsupport/knowledge-center/cost-explorer-analyze-spending-and-usage/ ; ACF.
upvoted 1 times

" # Ni_yot 10 months, 2 weeks ago


ACF for me. you can create a cost category for each team. Since its the org that needs the info no need for users to have access to billing/cost
explorer.
upvoted 1 times

" # Gaurav_GGG 11 months, 1 week ago


It's A, C, F. For A and F I agree with the other comments. C is because the question says each software team manages an application, so with
C you can create a cost category for each application and monitor the corresponding cost using tags.
upvoted 1 times

" # andylogan 1 year ago


It's A D F
I am with D, not C because you can't create cost category for application, below link also suggest to manage by team: "You can create groupings
of costs using cost categories. For example, assume that your business is organized by teams and that each team has multiple accounts within.
To build this structure in cost categories, create a cost category named Team. Then, you can map costs to a cost category value that's named
Team 1"
https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/manage-cost-categories.html#cost-categories-dimensions
upvoted 1 times

" # johnnsmith 1 year ago


C is wrong. The requirement is "application or team". C only allows application.
upvoted 1 times

" # Kopa 1 year ago


I'm for A,C,F. I'm for C because the question refers to the company, not the users. Users don't need access to Billing and Cost Management.
upvoted 1 times

" # Bigbearcn 1 year ago


ACF.
https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/manage-cost-categories.html
upvoted 2 times

" # student22 1 year ago


A,C,F
It seems C is more relevant than D as the question doesn't mention that the reports are generated by the application teams.
upvoted 1 times

" # sashenka 12 months ago


Then how do you suggest you satisfy the requirement that "each team is responsible for the cost and performance of all of its applications"?
We are also given that “teams rely on IAM access for day-to-day operations.” Only option here is D to satisfy the requirement.
upvoted 1 times
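Once the user-defined tags are activated as cost allocation tags and Cost Explorer is enabled, the per-team breakdown and the 12-month forecast can also be pulled programmatically. A hedged boto3 sketch, assuming a hypothetical "team" tag key and example dates:

    import boto3

    ce = boto3.client("ce")  # Cost Explorer

    # Monthly unblended cost for the last 12 months, grouped by the "team" cost allocation tag
    report = ce.get_cost_and_usage(
        TimePeriod={"Start": "2021-11-01", "End": "2022-11-01"},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "TAG", "Key": "team"}],
    )

    # 12-month cost forecast
    forecast = ce.get_cost_forecast(
        TimePeriod={"Start": "2022-11-01", "End": "2023-11-01"},
        Metric="UNBLENDED_COST",
        Granularity="MONTHLY",
    )

    for group in report["ResultsByTime"][0]["Groups"]:
        print(group["Keys"], group["Metrics"]["UnblendedCost"]["Amount"])
    print("Forecast:", forecast["Total"]["Amount"])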


Question #767 Topic 1

A team collects and routes behavioral data for an entire company. The company runs a Multi-AZ VPC environment with public subnets, private
subnets, and an internet gateway. Each public subnet also contains a NAT gateway. Most of the company's applications read from and write to
Amazon Kinesis Data Streams.
Most of the workloads run in private subnets.
A solutions architect must review the infrastructure. The solution architect needs to reduce costs and maintain the function of the applications.
The solutions architect uses Cost Explorer and notices that the cost in the EC2-Other category is consistently high. A further review shows that
NatGateway-Bytes charges are increasing the cost in the EC2-Other category.
What should the solutions architect do to meet these requirements?

A. Enable VPC Flow Logs. Use Amazon Athena to analyze the logs for traffic that can be removed. Ensure that security groups are blocking
traffic that is responsible for high costs.

B. Add an interface VPC endpoint for Kinesis Data Streams to the VPC. Ensure that applications have the correct IAM permissions to use the
interface VPC endpoint.

C. Enable VPC Flow Logs and Amazon Detective. Review Detective findings for traffic that is not related to Kinesis Data Streams. Configure
security groups to block that traffic.

D. Add an interface VPC endpoint for Kinesis Data Streams to the VPC. Ensure that the VPC endpoint policy allows traffic from the
applications.

Correct Answer: B

Community vote distribution


D (100%)

" # pablobairat Highly Voted $ 1 year, 1 month ago


It is D
Source: https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-access.html
upvoted 13 times

" # Jupi Highly Voted $ 1 year ago


D: If most traffic through your NAT gateway is to AWS services that support interface VPC endpoints, then create an interface VPC endpoint for
the services.

https://aws.amazon.com/premiumsupport/knowledge-center/vpc-reduce-nat-gateway-transfer-costs/

VPC endpoint policies enable you to control access by either attaching a policy to a VPC endpoint or by using additional fields in a policy that is
attached to an IAM user, group, or role to restrict access to only occur via the specified VPC endpoint
upvoted 7 times

" # tgv 1 year ago


At first I was under the impression that you cannot attach a policy to a VPC interface endpoint (only to VPC gateway endpoints), but did a bit of
research and found this --> https://aws.amazon.com/about-aws/whats-new/2019/06/now-add-endpoint-policies-to-interface-endpoints-for-
aws-services/

Since "a default policy gets attached for you to allow full access to the service" when you create the endpoint you don't really need to ensure
that the VPC endpoint policy allows traffic from the applications. But I guess this is just AWS way to confuse us
upvoted 1 times
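A minimal boto3 sketch of option D, assuming hypothetical VPC, subnet, and security group IDs; the endpoint policy here simply allows common Kinesis actions from within the VPC.

    import boto3, json

    ec2 = boto3.client("ec2", region_name="us-east-1")

    endpoint_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["kinesis:PutRecord", "kinesis:PutRecords", "kinesis:GetRecords",
                       "kinesis:GetShardIterator", "kinesis:DescribeStream", "kinesis:ListShards"],
            "Resource": "*",
        }],
    }

    # The interface endpoint keeps Kinesis traffic inside the VPC instead of going through the NAT gateway
    ec2.create_vpc_endpoint(
        VpcEndpointType="Interface",
        VpcId="vpc-0abc123",
        ServiceName="com.amazonaws.us-east-1.kinesis-streams",
        SubnetIds=["subnet-0priv-az1", "subnet-0priv-az2"],
        SecurityGroupIds=["sg-0kinesis-endpoint"],
        PrivateDnsEnabled=True,
        PolicyDocument=json.dumps(endpoint_policy),
    )

With private DNS enabled, the applications keep using the normal Kinesis endpoint name, so no code change is needed and the NatGateway-Bytes charges for Kinesis traffic go away.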

" # Bill_Wiiliam Most Recent % 4 months, 1 week ago


D is the correct answer
upvoted 2 times

" # kangtamo 4 months, 2 weeks ago


Selected Answer: D
Agree with D.
upvoted 1 times

" # bfal 6 months, 3 weeks ago


Correct answer is B.
D is wrong because it states a VPC endpoint policy for traffic "from" the application. With a VPC endpoint policy, you can't control traffic from the
application, only traffic to the application. This should be viewed from the service consumer's perspective.
upvoted 1 times

" # cen007 3 months, 2 weeks ago


D.
IAM is Identity and Access Management.
upvoted 1 times

" # andylogan 1 year ago


It's D
upvoted 1 times

" # tgv 1 year ago


DDD
---
upvoted 2 times

" # blackgamer 1 year ago


D is the answer.
upvoted 1 times

" # denccc 1 year ago


Would go for D since you don't need a IAM policy to USE vpc endpoints (only to create/update/delete them).
upvoted 3 times

" # pkboy78 1 year, 1 month ago


I think it is D?
upvoted 1 times


Question #768 Topic 1

A company is using multiple AWS accounts. The company has a shared service account and several other accounts for different projects.
A team has a VPC in a project account. The team wants to connect this VPC to a corporate network through an AWS Direct Connect gateway that
exists in the shared services account. The team wants to automatically perform a virtual private gateway association with the Direct Connect
gateway by using an already- tested AWS Lambda function while deploying its VPC networking stack. The Lambda function code can assume a
role by using AWS Security Token Service
(AWS STS). The team is using AWS CloudFormation to deploy its infrastructure.
Which combination of steps will meet these requirements? (Choose three.)

A. Deploy the Lambda function to the project account. Update the Lambda function's IAM role with the directconnect:* permission.

B. Create a cross-account IAM role in the shared services account that grants the Lambda function the directconnect:* permission. Add the
sts:AssumeRole permission to the IAM role that is associated with the Lambda function in the shared services account.

C. Add a custom resource to the CloudFormation networking stack that references the Lambda function in the project account.

D. Deploy the Lambda function that is performing the association to the shared services account. Update the Lambda function's IAM role with
the directconnect:* permission.

E. Create a cross-account IAM role in the shared services account that grants the sts:AssumeRole permission to the Lambda function with the
directconnect:* permission acting as a resource. Add the sts:AssumeRole permission with this cross-account IAM role as a resource to the
IAM role that belongs to the Lambda function in the project account.

F. Add a custom resource to the CloudFormation networking stack that references the Lambda function in the shared services account.

Correct Answer: CEF

Community vote distribution


ACE (100%)

" # Rmukh Highly Voted $ 1 year, 1 month ago


I would say it is B, D, F
upvoted 6 times

" # foureye2004 Highly Voted $ 2 months, 4 weeks ago


Selected Answer: ACE
https://docs.aws.amazon.com/directconnect/latest/UserGuide/multi-account-associate-vgw.html
With the above guideline, you should have:
Project account: a Lambda function to:
- Create an association proposal from the VGW to the DX gateway ID (with the DX gateway owner as input) (answer A, needs the directconnect:* permission to do this)
- Assume a role in the shared services account to accept the association request.

Shared services account:


- an IAM role that the project account's Lambda can assume, with the directconnect:* permission to accept/deny the request (answer E)

So A,C,E are correct


upvoted 5 times
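A hedged Python sketch of the Lambda flow described above, with every ID and ARN a placeholder: it proposes the association from the project account (owner of the VGW) and then assumes the cross-account role in the shared services account to accept the proposal.

    import boto3

    DXGW_ID = "dxgw-0abc123"            # Direct Connect gateway in the shared services account
    VGW_ID = "vgw-0def456"              # virtual private gateway in the project account
    SHARED_ROLE_ARN = "arn:aws:iam::999999999999:role/dx-association-role"

    def handler(event, context):
        # Step 1: propose the association from the project account
        dx_project = boto3.client("directconnect")
        proposal = dx_project.create_direct_connect_gateway_association_proposal(
            directConnectGatewayId=DXGW_ID,
            directConnectGatewayOwnerAccount="999999999999",
            gatewayId=VGW_ID,
        )["directConnectGatewayAssociationProposal"]

        # Step 2: assume the cross-account role in the shared services account and accept the proposal
        creds = boto3.client("sts").assume_role(
            RoleArn=SHARED_ROLE_ARN, RoleSessionName="dx-association"
        )["Credentials"]
        dx_shared = boto3.client(
            "directconnect",
            aws_access_key_id=creds["AccessKeyId"],
            aws_secret_access_key=creds["SecretAccessKey"],
            aws_session_token=creds["SessionToken"],
        )
        dx_shared.accept_direct_connect_gateway_association_proposal(
            directConnectGatewayId=DXGW_ID,
            proposalId=proposal["proposalId"],
            associatedGatewayOwnerAccount=proposal["associatedGateway"]["ownerAccount"],
        )

A CloudFormation custom resource in the project account's networking stack (answer C) would invoke this function during deployment.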

" # tomosabc1 Most Recent % 1 month ago


Selected Answer: ACE
The answer is ACE.
Quick Resolution: The Lambda must be in the same account with network stack, meaning project account. Therefore,
B(wrong): ... Lambda function in the shared services account.
D(wrong): Deploy the lambda function .... to the shared services account
F(wrong): ... references the lambda function in the shared services account

E(correct): Really bad worded answer.


1) Create a cross-account IAM role in the shared services account
2) The cross-account IAM role grants sts:AssumeRole permission to the Lambda function, allowing lambda function in project account to assume
it.
3) The cross-account IAM role has directconnect:* permission
4) Add policy to allow Lambda execution role to assume the cross-account role.
This is a typical use case of cross account IAM role, but E describes it in a very vague and misleading way.
upvoted 4 times

" # i_bird 2 weeks, 6 days ago


Well said..thanks

upvoted 1 times

" # bobsmith2000 5 months, 3 weeks ago


The correct version is a combination of B and E:
1) The role in shared services contains allow for "directconnect:*". Its trusted policy allows sts:AssumeRole for a principle which is lambda role in
B.
2) The role in the project account allows sts:AssumeRole and Resource is the role in the shared services account.
So that lamda assumes the role (allowed by its role) and recives the privileges of ASSUMED role.
So neither is correct.
Apparently the functions should be in the same account (or created along with a VPC networking stack). So it's A without second sentence.
In case of the Lambda in the shared account, we don't need any cross-account roles.

Bad-worded answers
upvoted 1 times

" # AzureDP900 11 months ago


ACE is correct.
upvoted 2 times

" # sashsz 6 months, 4 weeks ago


Exceptionally bad written answers.
This is what E should state:
E. Create a cross-account IAM role in the shared services account that grants the sts:AssumeRole permission to the Lambda function's
execution role in the project account with the directconnect:* permission acting as a resource. Add the sts:AssumeRole permission with this
cross-account IAM role as a resource to the IAM role that belongs to the Lambda function in the project account.
upvoted 1 times

" # sashsz 6 months, 4 weeks ago


Also, the 2nd sentence in A should be removed.
upvoted 1 times

" # acloudguru 11 months, 2 weeks ago


how did you choose between B and E?
upvoted 1 times

" # acloudguru 11 months, 2 weeks ago


https://aws.amazon.com/premiumsupport/knowledge-center/lambda-function-assume-iam-role/. seems B is over E.
upvoted 1 times

" # Viper57 1 year ago


ACE makes the most sense. According to AWS

"The owner of the virtual private gateway creates an association proposal and the owner of the Direct Connect gateway must accept the
association proposal."

So it makes sense in this case that the project account would create a virtual gateway association first, and then assume the cross-account role
to accept the association in the shared services account.
upvoted 4 times

" # student22 1 year ago


A,C,E
New role in shared account + Lambda in project account
upvoted 1 times

" # Bigbearcn 1 year ago


Only 1 lambda function in shared service account is enough. BDF.
upvoted 1 times

" # Bigbearcn 1 year ago


ACE is correct.

Actually the Lambda function can be created in the shared services account or the project account. If the Lambda function is created in the shared services account,
you need to grant your CloudFormation custom resource permission to call the Lambda, so option F is incomplete. Option B is wrong because in
that case sts:AssumeRole is not needed. Only when the Lambda is in the other account and needs to assume the role do you need to grant the sts:AssumeRole
permission. So ACE is the answer.
upvoted 7 times

" # andylogan 1 year ago


It's A C E
upvoted 1 times

" # tgv 1 year ago


AAA CCC EEE
---

upvoted 3 times
" # blackgamer 1 year, 1 month ago
ACE is the answer
upvoted 1 times

" # Desailly 1 year, 1 month ago


C,D,E
Story explanation: the project team deploys something which - as a result - creates the VGW association in the shared account.
The flow works as follows (_P for Project, _S for shared)
Project team runs a lambda_P which invokes lambda_S (the one team has already tested) and lambda_S creates a desired association
In order to do so:
lambda_S needs directconnect:* permission to handle DC (option D)
lambda_P needs permissions to run lambda_S, which is done via 2 roles
- lambda_S can be assumed via dedicated cross-account IAM_1 role (option D part 1)
- lambda_P can assume IAM_1 via IAM_2 (role for lamda_P) (option D part 2)
CloudFormation is run from the _P account and runs lambda_P --> thus C
upvoted 1 times

" # Desailly 1 year, 1 month ago


"option D part x" relates to "E" in explanation
upvoted 1 times

" # tgv 1 year, 1 month ago


I don't think there are 2 lambda functions. Only 1 is enough and should be deployed in the project account.
I'll go with A C E
upvoted 3 times

" # mericov 1 year, 1 month ago


https://aws.amazon.com/premiumsupport/knowledge-center/lambda-function-assume-iam-role/
upvoted 1 times

" # mericov 1 year, 1 month ago


I would say B / C / D https://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_cross-account-with-roles.html
upvoted 1 times


Question #769 Topic 1

A company is running a line-of-business (LOB) application on AWS to support its users. The application runs in one VPC, with a backup copy in a
second VPC in a different AWS Region for disaster recovery. The company has a single AWS Direct Connect connection between its on-premises
network and AWS. The connection terminates at a Direct Connect gateway.
All access to the application must originate from the company's on-premises network and traffic must be encrypted in transit through the use of
IPsec. The company is routing traffic through a VPN tunnel over the Direct Connect connection to provide the required encryption.
A business continuity audit determines that the Direct Connect connection represents a potential single point of failure for access to the
application. The company needs to remediate this issue as quickly as possible.
Which approach will meet these requirements?

A. Order a second Direct Connect connection to a different Direct Connect location. Terminate the second Direct Connect connection at the
same Direct Connect gateway.

B. Configure an AWS Site-to-Site VPN connection over the internet. Terminate the VPN connection at a virtual private gateway in the secondary
Region.

C. Create a transit gateway. Attach the VPCs to the transit gateway, and connect the transit gateway to the Direct Connect gateway. Configure
an AWS Site-to-Site VPN connection, and terminate it at the transit gateway.

D. Create a transit gateway. Attach the VPCs to the transit gateway, and connect the transit gateway to the Direct Connect gateway. Order a
second Direct Connect connection, and terminate it at the transit gateway.

Correct Answer: B

Community vote distribution


C (64%) B (18%) A (18%)

" # Viper57 Highly Voted $ 1 year ago


I think the answer is B.

A - Wrong, because this is too slow.


B - Only option that works and does not take too long
C - Wrong. Looks good at first, but a transit gateway can't be associated with multiple VPCs in different regions, only the same region. This only
works if the transit gateway is peered with another gateway in a different region, which is not stated in the question.
D - Wrong, because this is too slow.

A transit gateway is a regional construct, not global. See - https://aws.amazon.com/blogs/networking-and-content-delivery/building-a-global-network-using-aws-transit-gateway-inter-region-peering/
upvoted 11 times

" # Gaurav_GGG 10 months, 2 weeks ago


TG supports InterRegion VPC Peering- https://aws.amazon.com/about-aws/whats-new/2019/12/aws-transit-gateway-supports-inter-region-
peering/
upvoted 3 times

" # aandc 4 months ago


you need two TG https://aws.amazon.com/blogs/networking-and-content-delivery/building-a-global-network-using-aws-transit-gateway-
inter-region-peering/
upvoted 2 times

" # Rmukh Highly Voted $ 1 year, 1 month ago


I agree with C , as of D it will take more time plus it will require additional VPN tunneling for encryption which is not mentioned in D.
upvoted 7 times

" # sergioandreslq 1 year, 1 month ago


TGW is required to receive the Site To Site VPN and the DX Gateway is required to reach the DR in secondary region. (Transit VIF is required to
connect DX GW to TGW).
A Site-to-Site VPN complies with the encryption requirement; answer D doesn't, since it uses only DX.
upvoted 2 times

" # dmscountera Most Recent % 4 days, 12 hours ago


https://aws.amazon.com/premiumsupport/knowledge-center/dx-configure-dx-and-vpn-failover-tgw/ C
upvoted 2 times

" # dcdcdc3 1 month, 1 week ago

Selected Answer: B
I would choose B for its simplicity and not having to order a second DX
https://docs.aws.amazon.com/vpn/latest/s2svpn/VPNTunnels.html

A requires another DX GW if DX terminates in another location


C and D are an overkill in comparison (if they were adding a redundant WAN/DX link)
also D is not adding encryption in this wording
upvoted 1 times
" # kadev 2 months, 1 week ago
B
Many people confuse B and C; the key point is resolving the "single point of failure as quickly as possible".
=> B is right: the VPN is the secondary path when the DX connection goes down.
C says "Create a transit gateway. Attach the VPCs to the transit gateway" - note "a TG" (singular) => the VPCs are in different Regions, so you cannot attach a VPC in
one Region to a TG in another Region.
upvoted 2 times

" # kadev 2 months, 1 week ago


And add TG not resolve "single point", because is still using the same DX connection
upvoted 2 times

" # foureye2004 3 months, 2 weeks ago


Selected Answer: A
https://aws.amazon.com/directconnect/resiliency-recommendation/

A is the best solution because 1 DX is a point of failure, we have to address it by order the second
upvoted 2 times

" # kadev 2 months, 1 week ago


"remediate this issue as quickly as possible" you will fail exam if not read Q carefully
upvoted 2 times

" # dethblow 2 months, 2 weeks ago


C
https://aws.amazon.com/about-aws/whats-new/2019/12/aws-transit-gateway-supports-inter-region-peering/
upvoted 1 times

" # cen007 3 months, 2 weeks ago


Selected Answer: C
Ans is C.
The question is about resiliency of connection.
The connection goes from 2 VPC > Transit Gateway > DXGateway > VPN+DX connection (on-premise).

B is just a disaster recovery site to store copy of the primary site. Also terminating the VPN to a private gateway will not help the primary region.
upvoted 5 times

" # Enigmaaaaaa 3 months, 3 weeks ago


I will go with B
A - will take a lot of time + incorrect configuration
C - you can't attach VPCs from different Regions to the same transit gateway - you would need two TGWs with inter-Region peering
D - you can't attach VPCs from different Regions to the same transit gateway + will take a lot of time
B - can be addressed immediately = add an extra VPN from on premises
upvoted 1 times

" # aandc 4 months ago


Selected Answer: B
Vote for B,
C D are wrong, two VPC are in different regional, a transit gateway is not enough
upvoted 1 times

" # cen007 3 months, 2 weeks ago


Ans is C.
The question is about resiliency of connection.
The connection goes from 2 VPC > Transit Gateway > DXGateway > VPN+DX connection (on-premise).

B is just a disaster recovery site to store copy of the primary site. Also terminating the VPN to a private gateway will not help the primary
region.
upvoted 1 times

" # JonJon03 4 months ago


Selected Answer: C
Secondary region is for disaster recovery. We're not associating the TGW with multiple regions, we are increasing the resiliency of the primary
connection via a TGW/DXW/S2S VPN
upvoted 2 times

" # sb333 4 weeks, 1 day ago


Answer C specifically says to connect the "VPCs" to the TGW. That is completely wrong - can't do it. There are only two VPCs in the solution.
One in the primary region and one in the secondary (DR) region. So with the creation of only one TGW, this is invalid. You would need a TGW
in both Regions for this to work.
upvoted 1 times

" # riched99 7 months, 3 weeks ago


C: each VPC would need a TGW and there is no mention of a Transit VIF, the correct answer is B
upvoted 1 times

" # vbal 10 months, 2 weeks ago


B makes more sense to remove the SPOF. Two issues with C & D: the Transit Gateway is a SPOF, and it can't take connections from VPCs in different Regions.
upvoted 1 times

" # Fuccon 8 months ago


You do not need additional transit gateways for high availability, because transit gateways are highly available by design.
https://docs.aws.amazon.com/vpc/latest/tgw/tgw-best-design-practices.html
upvoted 1 times

" # cldy 11 months ago


C. Create a transit gateway. Attach the VPCs to the transit gateway, and connect the transit gateway to the Direct Connect gateway. Configure an
AWS Site-to- Site VPN connection, and terminate it at the transit gateway.
upvoted 2 times

" # ByomkeshDas 11 months, 1 week ago


They haven't asked that the backup connection must terminate in the secondary region, so there is no need to connect VPC's of the secondary
region to the TGW. The existing VPCs (of main region) must connect to the TGW, it's enough. And why will you connect the S-toS VPN to only the
secondary region (as per option-B)?
Option-C is correct.
upvoted 1 times

" # student22 1 year ago


C
Better redundancy than B.
upvoted 2 times

" # Smartphone 1 year ago


Answer is B. The question emphasizes "as quickly as possible". Using a Site-to-Site VPN is a good approach in a failover situation. However, if you are using an
AWS Site-to-Site VPN connection to a virtual gateway (VGW) that is associated with your Direct Connect gateway, you can use your VPN
connection for failover.
https://aws.amazon.com/directconnect/faqs/
upvoted 1 times

" # Smartphone 1 year ago


After reading the AWS documentation, I am changing my answer. The Correct Anwer is C. With option B, two site-to-site connections need to
be created to make the connection for two VPCs. Option C provides a way to create a single site-to-site connection for both the VPC.
Answer is C
https://aws.amazon.com/premiumsupport/knowledge-center/dx-configure-dx-and-vpn-failover-tgw/
upvoted 7 times

" # 38745 1 year ago


B. Two approaches to Direct Connect HA solution. The first is a second Direct Connection, terminate in a different endpoints. The second is a
VPN. A is wrong because it terminate at the same endpoint.
upvoted 2 times


Question #770 Topic 1

A large company in Europe plans to migrate its applications to the AWS Cloud. The company uses multiple AWS accounts for various business
groups. A data privacy law requires the company to restrict developers' access to AWS European Regions only.
What should the solutions architect do to meet this requirement with the LEAST amount of management overhead?

A. Create IAM users and IAM groups in each account. Create IAM policies to limit access to non-European Regions. Attach the IAM policies to
the IAM groups.

B. Enable AWS Organizations, attach the AWS accounts, and create OUs for European Regions and non-European Regions. Create SCPs to limit
access to non-European Regions and attach the policies to the OUs.

C. Set up AWS Single Sign-On and attach AWS accounts. Create permission sets with policies to restrict access to non-European Regions.
Create IAM users and IAM groups in each account.

D. Enable AWS Organizations, attach the AWS accounts, and create OUs for European Regions and non-European Regions. Create permission
sets with policies to restrict access to non-European Regions. Create IAM users and IAM groups in the primary account.

Correct Answer: B

Community vote distribution


B (67%) C (33%)

" # mericov Highly Voted $ 1 year, 1 month ago


B - "This policy uses the Deny effect to deny access to all requests for operations that don't target one of the two approved regions (eu-central-1
and eu-west-1)." https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps_examples_general.html#example-
scp-deny-region
upvoted 19 times
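
As an illustration of what option B looks like, here is a minimal boto3 sketch that creates and attaches an SCP using the aws:RequestedRegion condition key, in the spirit of the AWS documentation example linked above. The OU ID and the list of exempted global services are hypothetical and would need to be adjusted.

import json
import boto3

org = boto3.client("organizations")

# Deny everything outside the approved European Regions, except a few global
# services that are only reachable through us-east-1 endpoints (list is illustrative).
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyAllOutsideEU",
            "Effect": "Deny",
            "NotAction": ["iam:*", "organizations:*", "route53:*", "support:*", "sts:*"],
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {"aws:RequestedRegion": ["eu-central-1", "eu-west-1"]}
            },
        }
    ],
}

policy = org.create_policy(
    Name="restrict-to-eu-regions",
    Description="Deny requests outside approved EU Regions",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)

# Attach to an OU (hypothetical ID); attaching at the root would also work.
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-examplerootid111-exampleouid111",
)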

" # SureNot Most Recent % 6 days, 21 hours ago


Selected Answer: B
B. Answer B is a little bit weird: it's enough to have only one OU and attach the SCP to it.
But having two OUs with the same SCP is still OK.
upvoted 1 times

" # tomosabc1 2 weeks, 3 days ago


Selected Answer: C
C is correct.

B is wrong, because each account(meaning each business unit) has developers, meaning there are some IAM users in each account who has
access to AWS European Regions only. There is no point to create OUs for European Regions and non-European Regions. We can simply create
only one OU and attach SCP to that OU or root OU.
upvoted 1 times

" # tomosabc1 2 weeks, 3 days ago


https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps_examples_general.html#example-scp-deny-region
upvoted 1 times

" # Blair77 3 weeks ago


Selected Answer: B
+1 for BBB
upvoted 1 times

" # Ni_yot 10 months, 1 week ago


B defo. Use service control policies to restrict access to certain accounts
upvoted 2 times

" # cldy 10 months, 1 week ago


B is correct.
upvoted 1 times

" # acloudguru 11 months, 1 week ago


hope i can have this question in my exam
upvoted 1 times

" # andypham 1 year ago


B is correct


upvoted 1 times

" # Liongeek 1 year ago


BBBBBBBBBBBB
upvoted 1 times

" # andylogan 1 year ago


It's B
upvoted 1 times

" # johnnsmith 1 year ago


How about non-developers if B is correct? SCP will restrict them as well. It has to be A.
upvoted 1 times

" # AMKazi 9 months, 2 weeks ago


you can restrict which groups you want to deny access in the policy.
https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_condition.html
upvoted 1 times

" # tgv 1 year ago


BBB
---
upvoted 4 times

" # denccc 1 year ago


would go for B
upvoted 4 times

" # pkboy78 1 year, 1 month ago


I think it is B
upvoted 2 times


Question #771 Topic 1

A company has several applications running in an on-premises data center. The data center runs a mix of Windows and Linux VMs managed by
VMware vCenter.
A solutions architect needs to create a plan to migrate the applications to AWS. However, the solutions architect discovers that the documentation for
the applications is not up to date and that there are no complete infrastructure diagrams. The company's developers lack time to discuss their
applications and current usage with the solutions architect.
What should the solutions architect do to gather the required information?

A. Deploy the AWS Server Migration Service (AWS SMS) connector using the OVA image on the VMware cluster to collect configuration and
utilization data from the VMs.

B. Use the AWS Migration Portfolio Assessment (MPA) tool to connect to each of the VMs to collect the configuration and utilization data.

C. Install the AWS Application Discovery Service on each of the VMs to collect the configuration and utilization data.

D. Register the on-premises VMs with the AWS Migration Hub to collect configuration and utilization data.

Correct Answer: C

Community vote distribution


C (100%)

" # Shanmahi Highly Voted $ 1 year ago


Ans: C
Reason: AWS Application Discovery Service collects and presents configuration, usage, and behavior data from your servers to help you better
understand your workloads.
Link: https://aws.amazon.com/application-discovery/
upvoted 15 times

" # sergioandreslq 1 year ago


answer C: it is the correct because AWS Application Discovery Service is used to collect data using Discovery Connector for VMWare or
Discovery Agent for VMware, Hyper-V or Physical Server
upvoted 4 times

" # AwsBRFan Most Recent % 3 weeks, 6 days ago


Selected Answer: C
I would to say C correct, but this seems a old question. D is wrong cuz you dont need register the servers, the discover connector brings this
data to migration Hub.

Reference:
https://www.youtube.com/watch?v=aq6ohCf6PBo
https://docs.aws.amazon.com/application-discovery/latest/userguide/discovery-connector.html

"We recommended that all customers currently using Discovery Connector transition to the new Agentless Collector. Customer's currently using
Discovery Connector can continue to do so until Aug 31, 2023. After this date, data sent to AWS Application Discovery Service by Discovery
Connector will not be processed. Going forward, Application Discovery Service Agentless Collector is the supported discovery tool for agentless
data collection by AWS Application Discovery Service. "
upvoted 1 times

" # dcdcdc3 1 month, 1 week ago


The Discovery connector VM is connected to vCenter not on all VMs but for the lack of better options it would be C
upvoted 1 times

" # cannottellname 8 months, 2 weeks ago


Q: What data does the AWS Application Discovery Agentless Connector capture?

The AWS Application Discovery Agentless Connector is delivered as an Open Virtual Appliance (OVA) package that can be deployed to a VMware
host. Once configured with credentials to connect to vCenter, the Discovery Connector collects VM inventory, configuration, and performance
history such as CPU, memory, and disk usage and uploads it to Application Discovery Service data store.
upvoted 1 times

" # cldy 10 months, 1 week ago


C: Application Discovery Service.
upvoted 1 times

" # AzureDP900 11 months ago


C is right!


upvoted 1 times
" # acloudguru 11 months, 1 week ago
C, EASY ONE ,HOPE i can have it in my exam
upvoted 1 times

" # Ni_yot 11 months, 1 week ago


C of cause.
upvoted 1 times

" # Liongeek 1 year ago


CCCCCCCCCCCCC
upvoted 1 times

" # andylogan 1 year ago


It's C
upvoted 1 times

" # andylogan 1 year ago


It's C
upvoted 1 times

" # tgv 1 year ago


CCC
---
upvoted 1 times

" # zolthar_z 1 year, 1 month ago


Answer is D, Application discovery is oriented to map resources from on-premise to AWS Cloud, Migration hub helps to create migration plans
upvoted 1 times

" # zolthar_z 1 year, 1 month ago


Sorry, Answer is C, Migration hub uses Application Discovery to collect the data
upvoted 5 times

" # mericov 1 year, 1 month ago


A - Deploy SMS connector, then with Server Migration Service, you have automated, incremental and scheduled migrations towards AMI images
which can be used to deploy EC2 instances. AWS SMS migrates VMware vSphere, Hyper-V and Azure VMs.
upvoted 3 times

" # vjawscert 1 year ago


They are just in the planning phase so no SMS needed here. My vote would be - C (that is the close one even not accurate as we can collect
information with agentless connectors)
upvoted 1 times

" # sergioandreslq 1 year ago


A: it is incorrect because SMS is used to migrate servers not to collect data to plan the migration, in this case, the issue is: "What should the
solutions architect do to GATHER!!! the required information.
So, answer C is the correct because AWS Application Discovery Service is used to collect data using Discovery Connector for VMWare or
Discovery Agent for VMware, Hyper-V or Physical Server
upvoted 1 times

" # Rmukh 1 year, 1 month ago


No it is C
upvoted 2 times

" # pkboy78 1 year, 1 month ago


I think it is D
upvoted 1 times

" # pkboy78 1 year ago


Yes I think C is correct
upvoted 2 times


Question #772 Topic 1

A company has 50 AWS accounts that are members of an organization in AWS Organizations. Each account contains multiple VPCs. The company
wants to use
AWS Transit Gateway to establish connectivity between the VPCs in each member account. Each time a new member account is created, the
company wants to automate the process of creating a new VPC and a transit gateway attachment.
Which combination of steps will meet these requirements? (Choose two.)

A. From the management account, share the transit gateway with member accounts by using AWS Resource Access Manager.

B. From the management account, share the transit gateway with member accounts by using an AWS Organizations SCP.

C. Launch an AWS CloudFormation stack set from the management account that automatically creates a new VPC and a VPC transit gateway
attachment in a member account. Associate the attachment with the transit gateway in the management account by using the transit gateway
ID.

D. Launch an AWS CloudFormation stack set from the management account that automatically creates a new VPC and a peering transit
gateway attachment in a member account. Share the attachment with the transit gateway in the management account by using a transit
gateway service-linked role.

E. From the management account, share the transit gateway with member accounts by using AWS Service Catalog.

Correct Answer: AC

Community vote distribution


AC (100%)

" # AndySH Highly Voted $ 10 months, 1 week ago


A and C
upvoted 12 times

" # Ell89 Most Recent % 1 month ago


Selected Answer: AC
A&C
you need to share the TGW via the RAM.
the VPC TGW attachment needs to be associated with the TGW.
upvoted 3 times
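
A minimal boto3 sketch of the RAM part of this answer (the ARNs are hypothetical placeholders, and it assumes "Enable sharing with AWS Organizations" has already been turned on in RAM). Sharing with the organization ARN lets every current and future member account see the transit gateway, so the StackSet-deployed VPC attachment can reference its ID:

import boto3

ram = boto3.client("ram")

# Share the transit gateway with the whole organization from the management account.
share = ram.create_resource_share(
    name="org-transit-gateway-share",
    resourceArns=["arn:aws:ec2:us-east-1:111111111111:transit-gateway/tgw-0123456789abcdef0"],
    principals=["arn:aws:organizations::111111111111:organization/o-exampleorgid"],
    allowExternalPrincipals=False,  # keep the share inside the organization
)
print(share["resourceShare"]["resourceShareArn"])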


Question #773 Topic 1

A scientific company needs to process text and image data from an Amazon S3 bucket. The data is collected from several radar stations during a
live, time-critical phase of a deep space mission. The radar stations upload the data to the source S3 bucket. The data is prefixed by radar station
identification number.
The company created a destination S3 bucket in a second account. Data must be copied from the source S3 bucket to the destination S3 bucket
to meet a compliance objective. The replication occurs through the use of an S3 replication rule to cover all objects in the source S3 bucket.
One specific radar station is identified as having the most accurate data. Data replication at this radar station must be monitored for completion
within 30 minutes after the radar station uploads the objects to the source S3 bucket.
What should a solutions architect do to meet these requirements?

A. Set up an AWS DataSync agent to replicate the prefixed data from the source S3 bucket to the destination S3 bucket. Select to use all
available bandwidth on the task, and monitor the task to ensure that it is in the TRANSFERRING status. Create an Amazon EventBridge
(Amazon CloudWatch Events) rule to trigger an alert if this status changes.

B. In the second account, create another S3 bucket to receive data from the radar station with the most accurate data. Set up a new
replication rule for this new S3 bucket to separate the replication from the other radar stations. Monitor the maximum replication time to the
destination. Create an Amazon EventBridge (Amazon CloudWatch Events) rule to trigger an alert when the time exceeds the desired threshold.

C. Enable Amazon S3 Transfer Acceleration on the source S3 bucket, and configure the radar station with the most accurate data to use the
new endpoint. Monitor the S3 destination bucket's TotalRequestLatency metric. Create an Amazon EventBridge (Amazon CloudWatch Events)
rule to trigger an alert if this status changes.

D. Create a new S3 replication rule on the source S3 bucket that filters for the keys that use the prefix of the radar station with the most
accurate data. Enable S3 Replication Time Control (S3 RTC). Monitor the maximum replication time to the destination. Create an Amazon
EventBridge (Amazon CloudWatch Events) rule to trigger an alert when the time exceeds the desired threshold.

Correct Answer: A

Community vote distribution


D (100%)

" # nl11121 Highly Voted $ 1 year, 1 month ago


Answer is D - https://docs.aws.amazon.com/AmazonS3/latest/userguide/replication-time-control.html
upvoted 10 times

" # Jupi Highly Voted $ 1 year, 1 month ago


D - S3 Replication Time Control is designed to replicate 99.99% of objects within 15 minutes after upload, with the majority of those new objects
replicated in seconds.
upvoted 5 times

" # [Removed] Most Recent % 4 months, 1 week ago


Selected Answer: D
Key: "S3 Replication Time Control"
"S3 Replication Time Control is designed to replicate 99.99% of objects within 15 minutes after upload, with the majority of those new objects
replicated in seconds"

https://aws.amazon.com/about-aws/whats-new/2019/11/amazon-s3-replication-time-control-for-predictable-replication-time-backed-by-sla
upvoted 1 times
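
A minimal boto3 sketch of the rule described in option D, assuming hypothetical bucket names, account ID, replication role, and a "station-07/" key prefix for the most accurate radar station. S3 RTC requires the replication metrics block, which is what the replication-latency alert would be built on. Note that put_bucket_replication replaces the whole configuration, so in practice the existing all-objects rule would be included alongside this one.

import boto3

s3 = boto3.client("s3")

s3.put_bucket_replication(
    Bucket="source-radar-data",  # hypothetical source bucket
    ReplicationConfiguration={
        "Role": "arn:aws:iam::111111111111:role/s3-replication-role",
        "Rules": [
            {
                "ID": "rtc-station-07",
                "Priority": 1,
                "Status": "Enabled",
                "Filter": {"Prefix": "station-07/"},
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {
                    "Bucket": "arn:aws:s3:::destination-radar-data",
                    "Account": "222222222222",
                    # S3 Replication Time Control: 15-minute replication target
                    "ReplicationTime": {"Status": "Enabled", "Time": {"Minutes": 15}},
                    # Metrics are required with RTC and feed the replication latency alarm
                    "Metrics": {"Status": "Enabled", "EventThreshold": {"Minutes": 15}},
                },
            }
        ],
    },
)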

" # kangtamo 4 months, 1 week ago


Selected Answer: D
Agree with D.
upvoted 1 times

" # Ni_yot 9 months, 1 week ago


D seems to work. https://docs.aws.amazon.com/AmazonS3/latest/userguide/replication-time-control.html. Not sure how A can help here. Once
you set up DataSync, there is no way to ensure it completes on time.
upvoted 1 times

" # HellGate 9 months, 2 weeks ago


Answer is A
DataSync is designed for this kind of job.


https://cloudcompiled.com/tutorials/aws-datasync-transfer-data/
https://aws.amazon.com/blogs/storage/how-to-use-aws-datasync-to-migrate-data-between-amazon-s3-buckets/
upvoted 1 times
" # Gaurav_GGG 10 months, 2 weeks ago
D only talks about precise data expedite transfer. How about rest of the data? No options talk about it. So i am little confused.
upvoted 1 times

" # AzureDP900 11 months ago


It is D
upvoted 1 times

" # andylogan 1 year ago


It's D
upvoted 1 times

" # tgv 1 year ago


DDD
---
upvoted 1 times

" # blackgamer 1 year, 1 month ago


Yes, it is D.
upvoted 1 times

" # pkboy78 1 year, 1 month ago


I think it is D
upvoted 2 times


Question #774 Topic 1

A company is serving files to its customers through an SFTP server that is accessible over the Internet. The SFTP server is running on a single
Amazon EC2 instance with an Elastic IP address attached. Customers connect to the SFTP server through its Elastic IP address and use SSH for
authentication. The EC2 instance also has an attached security group that allows access from all customer IP addresses.
A solutions architect must implement a solution to improve availability, minimize the complexity of infrastructure management, and minimize the
disruption to customers who access files. The solution must not change the way customers connect.
Which solution will meet these requirements?

A. Disassociate the Elastic IP address from the EC2 instance. Create an Amazon S3 bucket to be used for SFTP file hosting. Create an AWS
Transfer Family server. Configure the Transfer Family server with a publicly accessible endpoint. Associate the SFTP Elastic IP address with
the new endpoint. Point the Transfer Family server to the S3 bucket. Sync all files from the SFTP server to the S3 bucket.

B. Disassociate the Elastic IP address from the EC2 instance. Create an Amazon S3 bucket to be used for SFTP file hosting. Create an AWS
Transfer Family server. Configure the Transfer Family server with a VPC-hosted, Internet-facing endpoint. Associate the SFTP Elastic IP
address with the new endpoint. Attach the security group with customer IP addresses to the new endpoint. Point the Transfer Family server to
the S3 bucket. Sync all files from the SFTP server to the S3 bucket.

C. Disassociate the Elastic IP address from the EC2 instance. Create a new Amazon Elastic File System (Amazon EFS) file system to be used
for SFTP file hosting. Create an AWS Fargate task definition to run an SFTP server. Specify the EFS file system as a mount in the task
definition. Create a Fargate service by using the task definition, and place a Network Load Balancer (NLB) in front of the service. When
configuring the service, attach the security group with customer IP addresses to the tasks that run the SFTP server. Associate the Elastic IP
address with the NLB. Sync all files from the SFTP server to the S3 bucket.

D. Disassociate the Elastic IP address from the EC2 instance. Create a multi-attach Amazon Elastic Block Store (Amazon EBS) volume to be
used for SFTP file hosting. Create a Network Load Balancer (NLB) with the Elastic IP address attached. Create an Auto Scaling group with EC2
instances that run an SFTP server. Define in the Auto Scaling group that instances that are launched should attach the new multi-attach EBS
volume. Configure the Auto Scaling group to automatically add instances behind the NLB. Configure the Auto Scaling group to use the security
group that allows customer IP addresses for the EC2 instances that the Auto Scaling group launches. Sync all files from the SFTP server to the
new multi-attach EBS volume.

Correct Answer: B

Community vote distribution


B (100%)

" # mericov Highly Voted $ 1 year, 1 month ago


I would say B. Reasons: "The EC2 instance also has an attached security group that allows access from all customer IP addresses" - There is no
option to put a security group in the publicly accessible endpoint (A).
upvoted 12 times

" # jj22222 Most Recent % 7 months, 1 week ago


Selected Answer: B
BBBBBBBBBB
upvoted 2 times

" # tkanmani76 9 months, 2 weeks ago


Why B ? What does "Attach the security group with customer IP addresses to the new endpoint." mean ? How do we attach so many IP
addresses of customers ? And how will such solution minimize infra complexity ?
A looks reasonable. Some additional reading link https://aws.amazon.com/premiumsupport/knowledge-center/aws-sftp-endpoint-type/
upvoted 2 times

" # tkanmani76 9 months, 2 weeks ago


Realized there is an SG with the client IPs already, which can be attached to the endpoint - Option B will give access through the Elastic IP.
upvoted 2 times

" # AzureDP900 11 months ago


I will go with B
upvoted 1 times

" # andylogan 1 year ago


It's B
upvoted 1 times


" # mgurkan 1 year ago


How about availability? one EC2 does not provide high availability.
upvoted 1 times

" # andylogan 1 year ago


It's hosted with managed AWS Transfer Family server and S3 now, no need for EC2 - then B
upvoted 1 times

" # tgv 1 year ago


BBB
---
upvoted 1 times

" # blackgamer 1 year ago


B is correct.
https://docs.aws.amazon.com/transfer/latest/userguide/create-server-in-vpc.html
upvoted 1 times

" # tvs 1 year, 1 month ago


B.https://aws.amazon.com/premiumsupport/knowledge-center/aws-sftp-endpoint-type/
upvoted 2 times

" # vjawscert 1 year, 1 month ago


Correct Answer: B
CD - Maintenance overhead with EC2
A - You can't use a static elastic ip with public hosted one (ref: https://aws.amazon.com/premiumsupport/knowledge-center/aws-sftp-endpoint-
type/)
upvoted 2 times
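
A rough boto3 sketch of the Transfer Family part of option B, with hypothetical VPC, subnet, Elastic IP allocation, and security group IDs. The Elastic IP that customers already use is supplied as an address allocation on the VPC-hosted endpoint, and the existing customer security group is attached to it:

import boto3

transfer = boto3.client("transfer")

server = transfer.create_server(
    Protocols=["SFTP"],
    Domain="S3",                              # serve files from an S3 bucket
    IdentityProviderType="SERVICE_MANAGED",   # keep SSH-key authentication
    EndpointType="VPC",
    EndpointDetails={
        "VpcId": "vpc-0123456789abcdef0",
        "SubnetIds": ["subnet-0123456789abcdef0"],
        "AddressAllocationIds": ["eipalloc-0123456789abcdef0"],  # the existing Elastic IP
        "SecurityGroupIds": ["sg-0123456789abcdef0"],            # existing customer allow-list
    },
)
print(server["ServerId"])

Each customer would then be added with transfer.create_user, pointing the home directory at the S3 bucket and importing their existing SSH public key, so the connection experience stays the same.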

" # denccc 1 year, 1 month ago


Would go for B: https://aws.amazon.com/premiumsupport/knowledge-center/aws-sftp-endpoint-type/. Only this way you can use security
groups to restrict sources.
upvoted 2 times

" # pkboy78 1 year, 1 month ago


It is A
upvoted 1 times

" # tvs 1 year, 1 month ago


B. Not able to attach an EIP to the public-facing SFTP endpoint of an AWS Transfer Family server.
upvoted 2 times


Question #775 Topic 1

A company is running an application distributed over several Amazon EC2 instances in an Auto Scaling group behind an Application Load
Balancer. The security team requires that all application access attempts be made available for analysis. Information about the client IP address,
connection type, and user agent must be included.
Which solution will meet these requirements?

A. Enable EC2 detailed monitoring, and include network logs. Send all logs through Amazon Kinesis Data Firehose to an Amazon Elasticsearch
Service (Amazon ES) cluster that the security team uses for analysis.

B. Enable VPC Flow Logs for all EC2 instance network interfaces. Publish VPC Flow Logs to an Amazon S3 bucket. Have the security team use
Amazon Athena to query and analyze the logs.

C. Enable access logs for the Application Load Balancer, and publish the logs to an Amazon S3 bucket. Have the security team use Amazon
Athena to query and analyze the logs.

D. Enable Traffic Mirroring and specify all EC2 instance network interfaces as the source. Send all traffic information through Amazon Kinesis
Data Firehose to an Amazon Elasticsearch Service (Amazon ES) cluster that the security team uses for analysis.

Correct Answer: C

Community vote distribution


C (100%)

" # denccc Highly Voted $ 1 year ago


Would go for C
upvoted 8 times

" # denccc 1 year ago


https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-access-logs.html
upvoted 5 times
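
A minimal boto3 sketch of option C, assuming a hypothetical load balancer ARN and a logging bucket that already has the bucket policy required for ELB log delivery. Client IP, connection type, and user agent are all fields of the standard ALB access log entries, which Athena can then query in place:

import boto3

elbv2 = boto3.client("elbv2")

elbv2.modify_load_balancer_attributes(
    LoadBalancerArn=(
        "arn:aws:elasticloadbalancing:us-east-1:111111111111:"
        "loadbalancer/app/my-alb/0123456789abcdef"
    ),
    Attributes=[
        {"Key": "access_logs.s3.enabled", "Value": "true"},
        {"Key": "access_logs.s3.bucket", "Value": "security-alb-access-logs"},
        {"Key": "access_logs.s3.prefix", "Value": "prod"},
    ],
)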

" # nm4u Most Recent % 4 months, 1 week ago


Only and Only C.
upvoted 1 times

" # Anhdd 5 months ago


Selected Answer: C
Clearly C
upvoted 1 times

" # Netaji 7 months ago


Selected Answer: C
https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-access-logs.html
upvoted 1 times

" # jj22222 9 months, 1 week ago


Selected Answer: C
CCCCCCCCCCCCCC
upvoted 1 times

" # seyik 10 months ago


D
https://docs.aws.amazon.com/vpc/latest/mirroring/what-is-traffic-mirroring.html
ES is used for analyzing the steamed logs not Athena
upvoted 1 times

" # seyik 10 months ago


Please ignore this post
upvoted 1 times

" # cldy 11 months ago


C. Enable access logs for the Application Load Balancer, and publish the logs to an Amazon S3 bucket. Have the security team use Amazon
Athena to query and analyze the logs.
upvoted 1 times


" # AzureDP900 11 months ago


I will go with C
upvoted 1 times

" # andylogan 1 year ago


It's C
upvoted 2 times

" # tgv 1 year ago


CCC
---
upvoted 4 times

" # lifehacker0777 9 months, 2 weeks ago


i have a set of new sap c01 questions, how can i send. Thanks
upvoted 6 times

" # pkboy78 1 year, 1 month ago


It is C
upvoted 1 times


Question #776 Topic 1

A company is running a legacy application on Amazon EC2 instances in multiple Availability Zones behind a software load balancer that runs on
an active/standby set of EC2 instances. For disaster recovery, the company has created a warm standby version of the application environment
that is deployed in another AWS
Region. The domain for the application uses a hosted zone from Amazon Route 53.
The company needs the application to use static IP addresses, even in the case of a failover event to the secondary Region. The company also
requires the client's source IP address to be available for auditing purposes.
Which solution meets these requirements with the LEAST amount of operational overhead?

A. Replace the software load balancer with an AWS Application Load Balancer. Create an AWS Global Accelerator accelerator. Add an endpoint
group for each Region. Configure Route 53 health checks. Add an alias record that points to the accelerator.

B. Replace the software load balancer with an AWS Network Load Balancer. Create an AWS Global Accelerator accelerator. Add an endpoint
group for each Region. Configure Route 53 health checks. Add a CNAME record that points to the DNS name of the accelerator.

C. Replace the software load balancer with an AWS Application Load Balancer. Use AWS Global Accelerator to create two separate
accelerators. Add an endpoint group for each Region. Configure Route 53 health checks. Add a record set that is configured for active-passive
DNS failover. Point the record set to the DNS names of the two accelerators.

D. Replace the software load balancer with an AWS Network Load Balancer. Use AWS Global Accelerator to create two separate accelerators.
Add an endpoint group for each Region. Configure Route 53 health checks. Add a record set that is configured for weighted round-robin DNS
failover. Point the record set to the DNS names of the two accelerators.

Correct Answer: C

Community vote distribution


A (75%) C (25%)

" # tvs Highly Voted $ 1 year, 1 month ago


A. https://docs.aws.amazon.com/global-accelerator/latest/dg/preserve-client-ip-address.how-to-enable-preservation.html
Global Accelerator does not support client IP address preservation for Network Load Balancer and Elastic IP address endpoints.
upvoted 11 times

" # sashenka 11 months, 3 weeks ago


Furthermore, for those questioning that the GA will route to both endpoints in the endpoing group (Active region as well as the DR region
ALB's), the GA can configure endpoints with weights for routing preference. By default they are set at 128 but one can configure one to send a
tiny portion of your traffic to one endpoint and the rest to another endpoint, you might specify weights of 1 and 255. GA will route based on
the weight unless to only healthy endpoints. Once the Primary region endpoint becomes unhealthy all traffic will route to the DR endpoint even
though it has a weight of 1. The unhealthy endpoints assumes a weight of 0 during an unhealthy state.

https://docs.aws.amazon.com/global-accelerator/latest/dg/about-endpoints-endpoint-weights.html
upvoted 2 times

" # sashenka 11 months, 3 weeks ago


This clarifies it a bit more at the above link. So you can actually set the weight to 0's for the endpoint group in the DR and it will route to it
in the event that Primary endpoint group becomes unhealthy.
"If there are no healthy endpoints in an endpoint group that have a weight greater than zero, Global Accelerator tries to failover to a healthy
endpoint with a weight greater than zero in another endpoint group. For this failover, Global Accelerator ignores the traffic dial setting. So if,
for example, an endpoint group has a traffic dial set to zero, Global Accelerator still includes that endpoint group in the failover attempt."
upvoted 1 times

" # Millari 4 months ago


There are two types of endpoints that can preserve the source IP address of the client in incoming packets: Application Load Balancers
and Amazon EC2 instances.
https://docs.aws.amazon.com/global-accelerator/latest/dg/preserve-client-ip-address.html
upvoted 1 times

" # Chuky64 5 months ago


If Global Accelerator detects that your application endpoint is failing in the primary AWS Region, it instantly triggers traffic re-routing to
your application endpoint in the next available, closest AWS Region.
upvoted 1 times

" # skywalker Most Recent % 1 month, 2 weeks ago


A.

B and D are out coz : Global Accelerator does not support client IP address preservation for Network Load Balancer and Elastic IP address


endpoints.

C is also out because it creates two separate accelerators. It needs separate endpoint groups in the same accelerator instead.

Thus answer is A
upvoted 1 times

" # asfsdfsdf 3 months, 3 weeks ago


GA does not need two separate accelerators in order to route traffic between regions so C and D are out.
https://docs.aws.amazon.com/global-accelerator/latest/dg/introduction-how-it-works.html#introduction-traffic-dials-weights
between A and B - A is out since GA with NLB cannot support IP preservation.
Also its better to use alias records with AWS resources.
https://docs.aws.amazon.com/global-accelerator/latest/dg/preserve-client-ip-address.how-to-enable-preservation.html
upvoted 1 times

" # asfsdfsdf 3 months, 3 weeks ago


my bad, B is out between A and B - the correct answer is A - GA with endpoints in each region pointing to the ALB + Alias record
upvoted 2 times
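
A minimal boto3 sketch of option A with one accelerator and an endpoint group per Region (ARNs and Regions are hypothetical). ClientIPPreservationEnabled works with ALB and EC2 endpoints, which is why the ALB variant is preferred over the NLB one; the Global Accelerator control-plane API itself is called in us-west-2:

import boto3

ga = boto3.client("globalaccelerator", region_name="us-west-2")

acc = ga.create_accelerator(Name="app-accelerator", IpAddressType="IPV4", Enabled=True)
listener = ga.create_listener(
    AcceleratorArn=acc["Accelerator"]["AcceleratorArn"],
    Protocol="TCP",
    PortRanges=[{"FromPort": 443, "ToPort": 443}],
)

# One endpoint group per Region; the warm-standby Region gets a traffic dial of 0 so it
# only receives traffic when the primary Region's endpoint is unhealthy.
endpoint_groups = [
    ("eu-west-1", "arn:aws:elasticloadbalancing:eu-west-1:111111111111:loadbalancer/app/primary/abc", 100),
    ("eu-central-1", "arn:aws:elasticloadbalancing:eu-central-1:111111111111:loadbalancer/app/standby/def", 0),
]
for region, alb_arn, dial in endpoint_groups:
    ga.create_endpoint_group(
        ListenerArn=listener["Listener"]["ListenerArn"],
        EndpointGroupRegion=region,
        TrafficDialPercentage=dial,
        EndpointConfigurations=[
            {"EndpointId": alb_arn, "Weight": 128, "ClientIPPreservationEnabled": True}
        ],
    )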

" # TechX 4 months, 1 week ago


Selected Answer: A
A for me
upvoted 2 times

" # kangtamo 4 months, 1 week ago


Selected Answer: A
A sounds better.
upvoted 2 times

" # azurehunter 5 months ago


Answer is A. Both GC and R53 can be used for Regional Active-Passive Failover. One GC fulfills all requirements, and do not need to create two
GC.

Search keyword "Global Accelerator" in the link below:

https://docs.aws.amazon.com/whitepapers/latest/disaster-recovery-workloads-on-aws/disaster-recovery-options-in-the-cloud.html
upvoted 1 times

" # azurehunter 5 months ago


shall be "GA"
upvoted 1 times

" # cloudude 6 months, 2 weeks ago


Selected Answer: C
go with c
upvoted 1 times

" # pititcu667 9 months, 2 weeks ago


Selected Answer: C
going with c simply because of active failover. Not sure how you would do it with a.
upvoted 1 times

" # usmanbaigmughal 8 months, 2 weeks ago


Option A is right. you can manage Active passive in Global accelerator by adding passive endpoint weight to 0.
https://docs.aws.amazon.com/global-accelerator/latest/dg/about-endpoints-endpoint-weights.html
upvoted 1 times

" # Clandestine60 9 months, 3 weeks ago


Selected Answer: A
You don`t need DNS failover for Global accelerators. It handles failover it by default based on dials and weights you set for endpoint groups and
endpoints respectively.
"Failover
With a multi-Region active/active strategy, if your workload cannot operate in a Region, failover will route traffic away from the impacted Region
to healthy Region(s). You can accomplish this with Route 53 by updating the DNS records. Make sure you set TTL (time to live) on these records
low enough so that DNS resolvers will reflect your changes quickly enough to meet your RTO targets. Alternatively, you can use AWS Global
Accelerator for routing and failover. It does not rely on DNS. Global Accelerator gives you two static IP addresses. You then configure which
Regions user traffic goes to based on traffic dials and weights you set."
From https://aws.amazon.com/blogs/architecture/disaster-recovery-dr-architecture-on-aws-part-iv-multi-site-active-active/
upvoted 2 times

" # AzureDP900 11 months ago


A is right


upvoted 1 times
" # ryu10_09 11 months, 3 weeks ago
***accelerator DOES NOT SUPPORT client ip address preservation for NLB*****
answer is B
upvoted 1 times

" # Viper57 1 year ago


An global accelerator supports multiple endpoints in different regions (which can be ALBs or NLBs, so two accelerators are not required. This
leaves only A and B.

Between A and B, A is the better option as its easier to preserve the client IP with an ALB.

Answer: A

See - https://docs.aws.amazon.com/global-accelerator/latest/dg/getting-started.html#getting-started-add-endpoints
upvoted 2 times

" # andylogan 1 year ago


It's A - since one GA standard accelerator can config for multiple Regions
upvoted 1 times

" # andylogan 1 year ago


And warm standby mean a scaled down, but fully functional, copy of your production environment in another Region, so it can handle fail-over
request
upvoted 1 times

" # Pratap 1 year ago


I will go with B
https://aws.amazon.com/global-accelerator/?blogs-global-accelerator.sort-by=item.additionalFields.createdDate&blogs-global-accelerator.sort-
order=desc&aws-global-accelerator-wn.sort-by=item.additionalFields.postDateTime&aws-global-accelerator-wn.sort-order=desc
upvoted 1 times

" # Bigbearcn 1 year ago


A is correct. ALB preserve the source IP address of the client. So B is wrong. 1 GA is enough.
upvoted 3 times

" # AkanshaR91 1 year ago


C - https://docs.aws.amazon.com/whitepapers/latest/disaster-recovery-workloads-on-aws/disaster-recovery-workloads-on-aws.pdf
upvoted 1 times

" # DerekKey 1 year ago


In my opinion
C is correct
"needs ... to use static IP addresses, ... a failover event to the secondary Region" - means that primary is used and switched to secondary only if
primary fails therefore you must use two separate GA and active-passive DNS failover
upvoted 3 times


Question #777 Topic 1

A company maintains a restaurant review website. The website is a single-page application where files are stored in Amazon S3 and delivered
using Amazon
CloudFront. The company receives several fake postings every day that are manually removed.
The security team has identified that most of the fake posts are from bots with IP addresses that have a bad reputation within the same global
region. The team needs to create a solution to help restrict the bots from accessing the website.
Which strategy should a solutions architect use?

A. Use AWS Firewall Manager to control the CloudFront distribution security settings. Create a geographical block rule and associate it with
Firewall Manager.

B. Associate an AWS WAF web ACL with the CloudFront distribution. Select the managed Amazon IP reputation rule group for the web ACL
with a deny action.

C. Use AWS Firewall Manager to control the CloudFront distribution security settings. Select the managed Amazon IP reputation rule group
and associate it with Firewall Manager with a deny action.

D. Associate an AWS WAF web ACL with the CloudFront distribution. Create a rule group for the web ACL with a geographical match statement
with a deny action.

Correct Answer: C

Community vote distribution


B (100%)

" # Rmukh Highly Voted $ 1 year, 1 month ago


Answer is B
upvoted 12 times

" # sergioandreslq 1 year ago


B: WAF with CloudFront using WebACL with Amazon IP reputation List which:
IP reputation rule groups allow you to block requests based on their source. Choose one or more of these rule groups if you want to reduce
your exposure to BOTS!!!! traffic or exploitation attempts
upvoted 6 times
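
A minimal boto3 sketch of option B, adding the Amazon IP reputation managed rule group to a CloudFront-scoped web ACL. The ACL name and metric names are hypothetical, and CLOUDFRONT scope must be called against us-east-1. The managed rules in this group already block requests from the reputation list, so the group's own actions are kept:

import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")  # CLOUDFRONT scope lives here

wafv2.create_web_acl(
    Name="restaurant-review-acl",
    Scope="CLOUDFRONT",
    DefaultAction={"Allow": {}},
    Rules=[
        {
            "Name": "amazon-ip-reputation",
            "Priority": 0,
            "Statement": {
                "ManagedRuleGroupStatement": {
                    "VendorName": "AWS",
                    "Name": "AWSManagedRulesAmazonIpReputationList",
                }
            },
            "OverrideAction": {"None": {}},  # keep the rule group's own block actions
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "ip-reputation",
            },
        }
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "restaurant-review-acl",
    },
)

The new web ACL's ARN is then referenced from the CloudFront distribution configuration to complete the association with the distribution.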

" # azure_kai Most Recent % 6 months, 3 weeks ago


Selected Answer: B
B
https://docs.aws.amazon.com/waf/latest/developerguide/aws-managed-rule-groups-ip-rep.html
upvoted 1 times

" # pititcu667 8 months, 3 weeks ago


Selected Answer: B
You really do not need the firewall manager. It makes no mention of multiple accounts, hence waf with reputation is good enough.
upvoted 1 times

" # HellGate 9 months, 2 weeks ago


my answer is C.
I think WAF is good combination with CloudFront but WAF is best at blocking massive attacks like DDoS. Question is about blocking fraudulent
postings, so AWS Firewall Manager can be proper choice over WAF.
upvoted 1 times

" # futen0326 8 months, 3 weeks ago


WAF = SQL Injections, Cross-Site Scripting, Geolocation Blocks, Rate Awareness

AWS Shield = Provides AWS resources with DDoS protection


upvoted 1 times

" # AzureDP900 11 months ago


B is right
upvoted 1 times

" # andylogan 1 year ago


It's B
upvoted 1 times


" # andylogan 1 year ago


It's B
upvoted 1 times

" # student22 1 year ago


B
The Amazon IP reputation list rule group contains rules that are based on Amazon internal threat intelligence. This is useful if you would like to
block IP addresses typically associated with bots or other threats. Inspects for a list of IP addresses that have been identified as bots by Amazon
threat intelligence.
upvoted 2 times

" # tgv 1 year ago


BBB
---
upvoted 3 times

" # denccc 1 year ago


It's B
upvoted 1 times

" # blackgamer 1 year ago


IT is B.
upvoted 1 times

" # blackgamer 1 year ago


It is B. IT Reputation rule.
upvoted 1 times

" # dotchi 1 year, 1 month ago


I will go with D. IP reputation comes from Threat intelligence, this is not mentioned as the source.
upvoted 1 times

" # blackgamer 1 year ago


D is wrong, the IP coming from same region, only that those IPs are in bad reputation IP list.
upvoted 1 times

" # mericov 1 year, 1 month ago


I would say D - reason: "that have a bad reputation within the same global region"
upvoted 2 times

" # zolthar_z 1 year, 1 month ago


If You use Geo Match Will block the entire region. Even You can add whitelist... For this escenario is better answer B
upvoted 3 times

" # pkboy78 1 year, 1 month ago


yes I think it is B
upvoted 4 times

" # mate_kims 1 year, 1 month ago


my answer is B
upvoted 4 times


Question #778 Topic 1

A software company has deployed an application that consumes a REST API by using Amazon API Gateway, AWS Lambda functions, and an
Amazon
DynamoDB table. The application is showing an increase in the number of errors during PUT requests. Most of the PUT calls come from a small
number of clients that are authenticated with specific API keys.
A solutions architect has identified that a large number of the PUT requests originate from one client. The API is noncritical, and clients can
tolerate retries of unsuccessful calls. However, the errors are displayed to customers and are causing damage to the API's reputation.
What should the solutions architect recommend to improve the customer experience?

A. Implement retry logic with exponential backoff and irregular variation in the client application. Ensure that the errors are caught and
handled with descriptive error messages.

B. Implement API throttling through a usage plan at the API Gateway level. Ensure that the client application handles code 429 replies without
error.

C. Turn on API caching to enhance responsiveness for the production stage. Run 10-minute load tests. Verify that the cache capacity is
appropriate for the workload.

D. Implement reserved concurrency at the Lambda function level to provide the resources that are needed during sudden increases in traffic.

Correct Answer: C
API Gateway recommends that you run a 10-minute load test to verify that your cache capacity is appropriate for your workload.
Reference:
https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-caching.html

Community vote distribution


A (56%) B (44%)

" # Firelord Highly Voted $ 10 months, 1 week ago


Put can’t be cached, better throttling
upvoted 11 times

" # RVivek Highly Voted $ 8 months, 4 weeks ago


Answer A. Customers can accept delays / even failed attempts.
https://aws.amazon.com/premiumsupport/knowledge-center/aws-batch-requests-error/
upvoted 7 times
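
For reference, a small Python sketch of what option A's client-side retry with exponential backoff and jitter ("irregular variation") could look like. It is deliberately generic: call_put is a placeholder for whatever HTTP client the application uses, and 429/5xx responses are treated as retryable:

import random
import time

def put_with_backoff(call_put, max_attempts=5, base_delay=0.5, max_delay=30.0):
    """Retry a PUT call with exponential backoff and full jitter."""
    for attempt in range(1, max_attempts + 1):
        status, body = call_put()          # placeholder: returns (http_status, payload)
        if status < 400:
            return body                    # success
        if status != 429 and status < 500:
            raise RuntimeError(f"non-retryable error {status}: {body}")
        if attempt == max_attempts:
            raise RuntimeError(f"giving up after {attempt} attempts: {status}")
        # full jitter: sleep a random amount between 0 and the capped exponential delay
        delay = min(max_delay, base_delay * (2 ** (attempt - 1)))
        time.sleep(random.uniform(0, delay))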

" # Blair77 Most Recent % 1 week, 6 days ago


Selected Answer: A
AAA - all the information here:
https://aws.amazon.com/premiumsupport/knowledge-center/api-gateway-429-limit/
upvoted 2 times

" # wassb 3 weeks, 1 day ago


Selected Answer: A
I would go for A since the throttling can affect the other clients.
Answer B would have been more accurate if we were using the API key specific to the client for the usage plan.
upvoted 1 times

" # skywalker 4 weeks ago


Selected Answer: A
A.
B. is not ok since large number of the PUT requests originate from one client... Having Usage Plan to control throttle will not help.
C. No clue.
D. Problem is not Lambda issue
upvoted 1 times

" # joancarles 1 month, 1 week ago


Selected Answer: B
Usage plans seems the best way to improve the customer experience. Option A doesn't fix nothing in this sense, so I go for B here.
upvoted 2 times

" # wannaaws 3 months, 3 weeks ago


Concern is "faults are visible to clients, jeopardizing the API's reputation", implies no failure should/error be made visible to client. App can retry
from backend in event of 429. Hence B is preferred.
upvoted 2 times
" # Hasitha99 6 months, 3 weeks ago
Selected Answer: B
API gateway support based on customers (since they are using API keys)
upvoted 2 times

" # pat100 8 months ago


Answer B -
1) REST API - allows Usage Plan and API Keys - which help enable throttle requests - by request per second or burst requests
2) API Keys can be associated to client
3) When client submit requests breaching defined threshold, they will receive 429 - client error, NOT API server error - This is the key requirement
as server errors dents API reputation
Clients can catch such exceptions [429 Too Many Requests error responses ], the client can resubmit the failed requests in a way that is rate
limiting
https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-request-throttling.html
upvoted 5 times

" # user0001 6 months ago


B, since we know the majority of them are coming from 1 client, B will solve the issue
upvoted 2 times

" # Dohecadi 8 months ago


Selected Answer: A
A is correct.

Three points from the question:


a) The application is seeing an increase in the amount of PUT request failures.
b) However, the faults are visible to clients, jeopardizing the API's reputation.
It also says:
c) The API is not mission-critical, and customers may accept failed call retries.

A. From point "c" above, retries are permissable.


B. From point "b" above, throttling even further will only make it worse.
C. From point "a" above, caching on API is best suited for GET rqeuests.
D. Problem is with API Gateway not Lambda.
upvoted 1 times

" # GeniusMikeLiu 9 months, 3 weeks ago


A or B??
upvoted 1 times

" # GeniusMikeLiu 9 months, 3 weeks ago


should be A. customer can accept retry.
upvoted 1 times

" # Smartphone 10 months ago


A seems a better approach.
upvoted 1 times


Question #779 Topic 1

A medical company is running an application in the AWS Cloud. The application simulates the effect of medical drugs in development.
The application consists of two parts: configuration and simulation. The configuration part runs in AWS Fargate containers in an Amazon
Elastic Container Service (Amazon ECS) cluster. The simulation part runs on large, compute optimized Amazon EC2 instances. Simulations can
restart if they are interrupted.
The configuration part runs 24 hours a day with a steady load. The simulation part runs only for a few hours each night with a variable load. The
company stores simulation results in Amazon S3, and researchers use the results for 30 days. The company must store simulations for 10 years
and must be able to retrieve the simulations within 5 hours.
Which solution meets these requirements MOST cost-effectively?

A. Purchase an EC2 Instance Savings Plan to cover the usage for the configuration part. Run the simulation part by using EC2 Spot Instances.
Create an S3 Lifecycle policy to transition objects that are older than 30 days to S3 Intelligent-Tiering.

B. Purchase an EC2 Instance Savings Plan to cover the usage for the configuration part and the simulation part. Create an S3 Lifecycle policy
to transition objects that are older than 30 days to S3 Glacier.

C. Purchase Compute Savings Plans to cover the usage for the configuration part. Run the simulation part by using EC2 Spot Instances. Create
an S3 Lifecycle policy to transition objects that are older than 30 days to S3 Glacier.

D. Purchase Compute Savings Plans to cover the usage for the configuration part. Purchase EC2 Reserved Instances for the simulation part.
Create an S3 Lifecycle policy to transition objects that are older than 30 days to S3 Glacier Deep Archive.

Correct Answer: D
Reference:
https://aws.amazon.com/savingsplans/faq/

Community vote distribution


C (100%)

" # AndySH Highly Voted $ 8 months, 2 weeks ago


C is correct.
A is wrong: "older than 30 days to S3 Intelligent-Tiering."- Good for unpredictable retrieval requirements. Not cost effective.
B is wrong: Savings Plan to cover also for simulation part which can be interrupted and restarted - Not cost effective.
D is wrong: "be able to recover them within five hours". Glacier Deep Archive retrieval time within 12 hours. Does not meet the requirements.
upvoted 11 times

" # wassb 3 weeks, 1 day ago


C is correct.
A,B are automatically ruled out since EC2 instance savings plans doesnt apply to Fargate
upvoted 1 times

" # Yamchi 6 months, 4 weeks ago


C is correct.
upvoted 1 times

" # Anhdd 5 months ago


agree with you, C should be the best option here
upvoted 1 times
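
A minimal boto3 sketch of the lifecycle part of option C, with a hypothetical bucket name. Objects move to S3 Glacier after 30 days (retrievable in roughly 3-5 hours with standard retrieval) and expire after the 10-year retention period:

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="simulation-results",  # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-after-30-days",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to every object
                "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 3650},  # roughly 10 years
            }
        ]
    },
)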

" # hilft Most Recent % 3 months, 2 weeks ago


C. spot/s3 glacier
upvoted 1 times

" # aandc 4 months, 1 week ago


C
can be interrupted and restarted -> Spot Instances
upvoted 1 times

" # kangtamo 4 months, 1 week ago


Selected Answer: C
Agree with C: 30 days / S3 Glacier
upvoted 1 times


" # Anhdd 5 months ago


Selected Answer: C
C is good for simulation part which can be interrupted and restarted + data retrieve within 5 hours -> best cost-effective
upvoted 1 times

" # JYZ 6 months, 2 weeks ago


No option is good. (see s3 glacier retrieval time from https://docs.aws.amazon.com/prescriptive-guidance/latest/backup-recovery/amazon-s3-
glacier.html
upvoted 1 times

" # Milind 9 months, 3 weeks ago


B is correct, C mentioned spot instance
D mentioned deep archive that requires 11 hours for retrieval.
upvoted 1 times

" # gm 9 months, 3 weeks ago


Correct answer is C,
upvoted 3 times

" # tkanmani76 10 months ago


C is correct - https://aws.amazon.com/about-aws/whats-new/2019/03/S3-glacier-deep-archive/ takes 12 hr so D is not right.
upvoted 2 times

" # Soumyashree 10 months ago


If a simulation is interrupted, it may be restarted. -- C mentions Spot Instances, I go for D
upvoted 2 times

" # Student1950 9 months, 1 week ago


I believe simulation can be restarted if interrupted with the selection of spot instances. I would go with C
upvoted 2 times

" # techn00b 10 months, 1 week ago


Answer is C - 5 hours of retrieval time
https://docs.aws.amazon.com/prescriptive-guidance/latest/backup-recovery/amazon-s3-glacier.html
upvoted 1 times


Question #780 Topic 1

A company manages multiple AWS accounts by using AWS Organizations. Under the root OU, the company has two OUs: Research and DataOps.
Because of regulatory requirements, all resources that the company deploys in the organization must reside in the ap-northeast-1 Region.
Additionally, EC2 instances that the company deploys in the DataOps OU must use a predefined list of instance types.
A solutions architect must implement a solution that applies these restrictions. The solution must maximize operational efficiency and must
minimize ongoing maintenance.
Which combination of steps will meet these requirements? (Choose two.)

A. Create an IAM role in one account under the DataOps OU. Use the ec2:InstanceType condition key in an inline policy on the role to restrict
access to specific instance types.

B. Create an IAM user in all accounts under the root OU. Use the aws:RequestedRegion condition key in an inline policy on each user to restrict
access to all AWS Regions except ap-northeast-1.

C. Create an SCP. Use the aws:RequestedRegion condition key to restrict access to all AWS Regions except ap-northeast-1. Apply the SCP to
the root OU.

D. Create an SCP. Use the ec2:Region condition key to restrict access to all AWS Regions except ap-northeast-1. Apply the SCP to the root OU,
the DataOps OU, and the Research OU.

E. Create an SCP. Use the ec2:InstanceType condition key to restrict access to specific instance types. Apply the SCP to the DataOps OU.

Correct Answer: BC
Reference:
https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_examples_aws_deny-requested-region.html https://summitroute.com
/blog/2020/03/25/aws_scp_best_practices/

Community vote distribution


CE (100%)

" # gsaini Highly Voted $ 9 months, 1 week ago


C & E should be right answer.
upvoted 7 times

" # gnandam Most Recent % 1 month, 1 week ago


C& E
- https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_examples_aws_deny-requested-region.html
upvoted 1 times

" # gondohwe 3 months ago


combination of C,E make sense
upvoted 1 times

" # Hasitha99 7 months ago


Selected Answer: CE
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps_examples_ec2.html)
upvoted 2 times
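
To complement the Region SCP from option C, here is a hedged sketch of the kind of SCP that option E describes, denying RunInstances for any instance type outside an approved list. The allowed types and the DataOps OU ID are hypothetical placeholders:

import json
import boto3

allowed_instance_type_scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyUnapprovedInstanceTypes",
            "Effect": "Deny",
            "Action": "ec2:RunInstances",
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {
                "StringNotEquals": {"ec2:InstanceType": ["t3.micro", "t3.small", "m5.large"]}
            },
        }
    ],
}

org = boto3.client("organizations")
policy = org.create_policy(
    Name="dataops-allowed-instance-types",
    Description="Only allow the predefined instance types in the DataOps OU",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(allowed_instance_type_scp),
)
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-examplerootid111-dataops1111",  # hypothetical DataOps OU ID
)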

" # RVivek 9 months, 1 week ago


C & E.
B: Wrong. Creating a user account in each account and adding an inline policy for each account is too much administrative work.
upvoted 1 times

" # RVivek 9 months, 1 week ago


C & E.
upvoted 1 times

" # Buggie 10 months, 1 week ago


C and E.
upvoted 2 times

" # guruaws2021 10 months, 1 week ago


The answer should be CE here
upvoted 1 times


" # Firelord 10 months, 1 week ago


C & É (https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps_examples_ec2.html)
upvoted 1 times

Question #781 Topic 1

A company is hosting an image-processing service on AWS in a VPC. The VPC extends across two Availability Zones. Each Availability Zone
contains one public subnet and one private subnet.
The service runs on Amazon EC2 instances in the private subnets. An Application Load Balancer in the public subnets is in front of the service.
The service needs to communicate with the internet and does so through two NAT gateways. The service uses Amazon S3 for image storage. The
EC2 instances retrieve approximately 1 TB of data from an S3 bucket each day.
The company has promoted the service as highly secure. A solutions architect must reduce cloud expenditures as much as possible without
compromising the service's security posture or increasing the time spent on ongoing operations.
Which solution will meet these requirements?

A. Replace the NAT gateways with NAT instances. In the VPC route table, create a route from the private subnets to the NAT instances.

B. Move the EC2 instances to the public subnets. Remove the NAT gateways.

C. Set up an S3 gateway VPC endpoint in the VPC. Attach an endpoint policy to the endpoint to allow the required actions on the S3 bucket.

D. Attach an Amazon Elastic File System (Amazon EFS) volume to the EC2 instances. Host the image on the EFS volume.

Correct Answer: C
Create Amazon S3 gateway endpoint in the VPC and add a VPC endpoint policy. This VPC endpoint policy will have a statement that allows S3
access only via access points owned by the organization.
Reference:
https://lifesciences-resources.awscloud.com/aws-storage-blog/managing-amazon-s3-access-with-vpc-endpoints-and-s3-access-points?
Languages=Korean

Community vote distribution


C (100%)

" # Ni_yot Highly Voted $ 9 months, 2 weeks ago


my bad meant to say C.
upvoted 10 times

" # TechX Most Recent % 4 months, 1 week ago


Selected Answer: C
Answer: C
Explanation:
Create Amazon S3 gateway endpoint in the VPC and add a VPC endpoint policy. This VPC endpoint policy will have a statement that allows S3
access only via access points owned by the organization.
upvoted 3 times
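
A minimal boto3 sketch of option C, with hypothetical VPC, route table, bucket, and Region values. The gateway endpoint keeps the daily S3 retrievals on the AWS network (removing the per-GB NAT gateway data processing charge), and the endpoint policy limits which bucket and actions can be reached:

import json
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed Region

endpoint_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::image-storage-bucket",
                "arn:aws:s3:::image-storage-bucket/*",
            ],
        }
    ],
}

ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],  # route tables of the private subnets
    PolicyDocument=json.dumps(endpoint_policy),
)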

" # Bigbearcn 9 months ago


It's C.
upvoted 3 times

" # Ni_yot 9 months, 2 weeks ago


A for me. You don't want to replace the NAT gw with an instance. More administration. And you definitely don't want to move instances to the
public subnet. You don't attach EBS volumes to EC2 instances.
upvoted 1 times


Question #782 Topic 1

A company needs to implement a patching process for its servers. The on-premises servers and Amazon EC2 instances use a variety of tools to
perform patching.
Management requires a single report showing the patch status of all the servers and instances.
Which set of actions should a solutions architect take to meet these requirements?

A. Use AWS Systems Manager to manage patches on the on-premises servers and EC2 instances. Use Systems Manager to generate patch
compliance reports

B. Use AWS OpsWorks to manage patches on the on-premises servers and EC2 instances. Use Amazon QuickSight integration with OpsWorks
to generate patch compliance reports.

C. Use an Amazon EventBridge (Amazon CloudWatch Events) rule to apply patches by scheduling an AWS Systems Manager patch
remediation job. Use Amazon Inspector to generate patch compliance reports.

D. Use AWS OpsWorks to manage patches on the on-premises servers and EC2 instances. Use AWS X-Ray to post the patch status to AWS
Systems Manager OpsCenter to generate patch compliance reports.

Correct Answer: A
You can use AWS Systems Manager Configuration Compliance to scan your fleet of managed instances for patch compliance.
Reference:
https://aws.amazon.com/blogs/mt/how-moodys-uses-aws-systems-manager-to-patch-servers-across-multiple-cloud-providers/

Community vote distribution


A (100%)

" # padel Highly Voted $ 10 months, 1 week ago


A for me
upvoted 5 times

" # GeniusMikeLiu 9 months, 3 weeks ago


A right. SSM does support on-premise machines.
upvoted 2 times

" # kubala Highly Voted $ 9 months, 2 weeks ago


Selected Answer: A
surely A https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-patch.html
upvoted 5 times

" # Blair77 Most Recent % 3 weeks, 5 days ago


Selected Answer: A
A- SSM all the way!
upvoted 1 times

" # Ell89 1 month ago


Selected Answer: A
answer is A
upvoted 1 times

" # techn00b 10 months, 1 week ago


Answer is B
upvoted 1 times
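
As a rough illustration of answer A, the boto3 sketch below pulls a fleet-wide patch compliance summary from Systems Manager. On-premises servers show up alongside EC2 instances once they are registered as managed instances; no identifiers here are taken from the question.

import boto3

ssm = boto3.client("ssm")

# Fleet-wide compliance summary across all managed instances
# (EC2 and on-premises hybrid-activated servers alike).
summaries = ssm.list_compliance_summaries(
    Filters=[{"Key": "ComplianceType", "Values": ["Patch"], "Type": "EQUAL"}]
)
for item in summaries["ComplianceSummaryItems"]:
    print(item["ComplianceType"],
          "compliant:", item["CompliantSummary"]["CompliantCount"],
          "non-compliant:", item["NonCompliantSummary"]["NonCompliantCount"])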


Question #783 Topic 1

A company is running a large containerized workload in the AWS Cloud. The workload consists of approximately 100 different services. The
company uses
Amazon Elastic Container Service (Amazon ECS) to orchestrate the workload.
Recently, the company's development team started using AWS Fargate instead of Amazon EC2 instances in the ECS cluster. In the past, the
workload has come close to running the maximum number of EC2 instances that are available in the account.
The company is worried that the workload could reach the maximum number of ECS tasks that are allowed. A solutions architect must implement
a solution that will notify the development team when Fargate reaches 80% of the maximum number of tasks.
What should the solutions architect do to meet this requirement?

A. Use Amazon CloudWatch to monitor the Sample Count statistic for each service in the ECS cluster. Set an alarm for when the math
expression sample count/ SERVICE_QUOTA(service)*100 is greater than 80. Notify the development team by using Amazon Simple
Notification Service (Amazon SNS).

B. Use Amazon CloudWatch to monitor service quotas that are published under the AWS/Usage metric namespace. Set an alarm for when the
math expression metric/SERVICE_QUOTA(metric)*100 is greater than 80. Notify the development team by using Amazon Simple Notification
Service (Amazon SNS).

C. Create an AWS Lambda function to poll detailed metrics from the ECS cluster. When the number of running Fargate tasks is greater than 80,
invoke Amazon Simple Email Service (Amazon SES) to notify the development team.

D. Create an AWS Config rule to evaluate whether the Fargate SERVICE_QUOTA is greater than 80. Use Amazon Simple Email Service (Amazon
SES) to notify the development team when the AWS Config rule is not compliant.

Correct Answer: B
To visualize a service quota and optionally set an alarm.
Reference:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Quotas-Visualize-Alarms.html

Community vote distribution


B (100%)

" # sb333 1 month ago


Selected Answer: B
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Quotas-Visualize-Alarms.html#:~:text=
(Optional)%20To%20set,Create%20alarm.
upvoted 4 times

" # usmanbaigmughal 8 months, 2 weeks ago


B,
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Quotas-Visualize-Alarms.html
upvoted 3 times

" # glider79 10 months, 1 week ago


Answer is B, see https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Quotas-Visualize-Alarms.html
upvoted 3 times

" # techn00b 10 months, 1 week ago


Answer is B
upvoted 3 times
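
To make option B concrete, the sketch below creates a metric-math alarm that compares Fargate usage (published under the AWS/Usage namespace) against the service quota. The SNS topic ARN is a placeholder, and the dimension values are the ones typically published for Fargate On-Demand usage; confirm them in the CloudWatch console before relying on this.

import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="fargate-tasks-80-percent-of-quota",
    AlarmActions=["arn:aws:sns:eu-west-1:111122223333:dev-team-alerts"],  # placeholder topic
    EvaluationPeriods=1,
    ComparisonOperator="GreaterThanThreshold",
    Threshold=80,
    Metrics=[
        {
            "Id": "usage",
            "ReturnData": False,
            "MetricStat": {
                "Metric": {
                    "Namespace": "AWS/Usage",
                    "MetricName": "ResourceCount",
                    # Dimensions as typically published for Fargate usage;
                    # verify the exact values in your account.
                    "Dimensions": [
                        {"Name": "Service", "Value": "Fargate"},
                        {"Name": "Type", "Value": "Resource"},
                        {"Name": "Resource", "Value": "OnDemand"},
                        {"Name": "Class", "Value": "None"},
                    ],
                },
                "Period": 60,
                "Stat": "Maximum",
            },
        },
        {
            "Id": "pct",
            "Expression": "usage/SERVICE_QUOTA(usage)*100",
            "Label": "Fargate usage as % of quota",
            "ReturnData": True,
        },
    ],
)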


Question #784 Topic 1

A company has a large number of AWS accounts in an organization in AWS Organizations. A different business group owns each account. All the
AWS accounts are bound by legal compliance requirements that restrict all operations outside the eu-west-2 Region.
The company's security team has mandated the use of AWS Systems Manager Session Manager across all AWS accounts.
Which solution should a solutions architect recommend to meet these requirements?

A. Create an SCP that denies access to all requests that do not target eu-west-2. Use the NotAction element to exempt global services from
the restriction. In AWS Organizations, apply the SCP to the root of the organization.

B. Create an SCP that denies access to all requests that do not target eu-west-2. Use the NotAction element to exempt global services from
the restriction. For each AWS account, use the ArnNotLike condition key to add the ARN of the IAM role that is associated with the Session
Manager instance profile to the condition element of the SCP. In AWS Organizations, apply the SCP to the root of the organization.

C. Create an SCP that denies access to all requests that do not target eu-west-2. Use the NotAction element to exempt global services from
the restriction. In AWS Organizations, apply the SCP to the root of the organization. In each AWS account, create an IAM permissions
boundary that allows access to the IAM role that is associated with the Session Manager instance profile.

D. For each AWS account, create an IAM permissions boundary that denies access to all requests that do not target eu-west-2. For each AWS
account, apply the permissions boundary to the IAM role that is associated with the Session Manager instance profile.

Correct Answer: A
Reference:
https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_examples_aws_deny-requested-region.html

Community vote distribution


A (60%) B (40%)

" # SureNot 3 days, 7 hours ago


Selected Answer: A
Choosing between A and B..
Can't see a condition or any reason SSM should AVOID region restriction - so A.
upvoted 1 times

" # nsvijay04b1 1 week, 4 days ago


Selected Answer: B
A) SSM not handled.
B) region + SSM handled in SCP at root level ( ques says 'mandated')
C) region handled at root with SCP + SSM left for accounts to handle( not 'mandated') and permission boundary cannot give perm if SCP denies
it
D) not at all centrally managed , needs SCP
upvoted 2 times

" # asfsdfsdf 3 months, 3 weeks ago


Selected Answer: A
D will not work for sure - its applied to session manager only
C will not work also its aagain applied to SSM profile and it will not grant access
its either A or B - for B it means we will allow the SSM ARN role to work on all regions
only A left as the correct option - it will deny access to all regions with exception of eu-west-2 no need to do anything else as SSM is already
allowed by SCP by default
A - means we deny all access to other regions -
upvoted 2 times

" # asfsdfsdf 3 months, 3 weeks ago


if question was stated the
The company's security team has required that all AWS accounts utilize AWS Systems Manager Session Manager on all regions - I would
choose B
upvoted 1 times

" # Ddssssss 4 months, 3 weeks ago


Its not B because that would apply the SCP for all users except The session manager IAM which would allow that account to do whatever it
wants. I would say "D", why cant a simple permissions boundary simply deny access for sessions manager to all other regions? Why does it
need an SCP?
upvoted 1 times

" # DLML 8 months, 1 week ago


I am not too convinced with C nor A. How about B? SCP will have deny to run EC2 with condition ArnNotLike the session-manager-profile-role
upvoted 3 times

" # Alexey79 8 months, 2 weeks ago


It's C.

1. Create an SCP policy that denies access to any operations outside of the specified Region.
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps_examples_general.html#example-scp-deny-region

2. Create IAM Policy in each account from making certain changes


https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html
https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_boundaries.html

security team has required that all AWS accounts utilize AWS Systems Manager Session Manager

IAM must be configured to allow access.
upvoted 2 times

" # bobsmith2000 5 months, 3 weeks ago


C doesn't grant permission and don't override SCP
upvoted 2 times

" # SeanQi 4 months, 1 week ago


yes, C is missing granting permission from iam role, but it's not the point here. choosing C over B is to reduce the complexity of the setup
upvoted 1 times

" # SeanQi 4 months, 1 week ago


I mean: choose B over C to reduce the complexity
upvoted 1 times

" # user0001 6 months ago


B
C is not a scalable solution, plus the root account can change it
upvoted 2 times

" # Ddssssss 4 months, 3 weeks ago


Boundaries restrict access, they do not allow access.
upvoted 3 times

" # Bigbearcn 9 months, 1 week ago


It's C.
upvoted 2 times

" # bobsmith2000 5 months, 3 weeks ago


Permission boundaries don't grant permission and don't override SCP
upvoted 2 times

" # GeniusMikeLiu 9 months, 3 weeks ago


why A?
upvoted 1 times

" # usmanbaigmughal 8 months, 2 weeks ago


Because in A there is no solution for Session managers. C answer cover both Regional restriction and Session managers.
upvoted 3 times

" # timlow84 9 months, 1 week ago


why not C?
upvoted 1 times

" # user0001 6 months ago


root account can change it
upvoted 1 times
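
A trimmed-down version of the region-deny SCP from the referenced AWS documentation, sketched with boto3. The NotAction list of exempted global services is abbreviated and illustrative (the AWS example exempts many more), and the root ID is a placeholder.

import json
import boto3

org = boto3.client("organizations")

region_deny_scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyAllOutsideEUWest2",
        "Effect": "Deny",
        # NotAction exempts global services that are not Region-bound;
        # abbreviated here, see the AWS example policy for the full list.
        "NotAction": [
            "iam:*",
            "organizations:*",
            "route53:*",
            "cloudfront:*",
            "support:*",
        ],
        "Resource": "*",
        "Condition": {
            "StringNotEquals": {"aws:RequestedRegion": ["eu-west-2"]}
        }
    }]
}

policy = org.create_policy(
    Name="deny-outside-eu-west-2",
    Description="Restrict all Regional operations to eu-west-2",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(region_deny_scp),
)

# Attach at the organization root (placeholder root ID).
org.attach_policy(PolicyId=policy["Policy"]["PolicySummary"]["Id"],
                  TargetId="r-examplerootid111")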


Question #785 Topic 1

A company uses AWS Organizations. The company has an organization that has a central management account. The company plans to provision
multiple AWS accounts for different departments. All department accounts must be a member of the company's organization.
Compliance requirements state that each account must have only one VPC. Additionally, each VPC must have an identical network security
configuration that includes fully configured subnets, gateways, network ACLs, and security groups.
The company wants this security setup to be automatically applied when a new department account is created. The company wants to use the
central management account for all security operations, but the central management account should not have the security setup.
Which approach meets these requirements with the LEAST amount of setup?

A. Create an OU within the company's organization. Add department accounts to the OU. From the central management account, create an
AWS CloudFormation template that includes the VPC and the network security configurations. Create a CloudFormation stack set by using this
template file with automated deployment enabled. Apply the CloudFormation stack set to the OU.

B. Create a new organization with the central management account. Invite all AWS department accounts into the new organization. From the
central management account, create an AWS CloudFormation template that includes the VPC and the network security configurations. Create
a CloudFormation stack that is based on this template. Apply the CloudFormation stack to the newly created organization.

C. Invite department accounts to the company's organization. From the central management account, create an AWS CloudFormation
template that includes the VPC and the network security configurations. Create an AWS CodePipeline pipeline that will deploy the network
security setup to the newly created account. Specify the creation of an account as an event hook. Apply the event hook to the pipeline.

D. Invite department accounts to the company's organization. From the central management account, create an AWS CloudFormation template
that includes the VPC and the network security configurations. Create an AWS Lambda function that will deploy the VPC and the network
security setup to the newly created account. Create an event that watches for account creation. Configure the event to invoke the pipeline.

Correct Answer: B
Reference:
https://aws.amazon.com/blogs/security/how-to-use-aws-organizations-to-automate-end-to-end-account-creation/

Community vote distribution


A (100%)

" # Ell89 1 month ago


Selected Answer: A
A
C&D mention inviting accounts into the org. it doesnt state anywhere this needs to be done, says new accounts are to be provisioned.
B mentions creating an org, but there already is one.
leaves A by default.
upvoted 2 times

" # asfsdfsdf 3 months, 3 weeks ago


Selected Answer: A
only A mention CF automatic deployment - Also the organization already has a management accounts and it about to create new ones "The
organization intends to create many Amazon Web Services accounts for various departments" so no need to invite or create a new ORG
upvoted 4 times

" # cannottellname 9 months, 1 week ago


AAAAAA
upvoted 1 times

" # tkanmani76 9 months, 2 weeks ago


A - https://aws.amazon.com/blogs/aws/new-use-aws-cloudformation-stacksets-for-multiple-accounts-in-an-aws-organization/
upvoted 4 times

" # tkanmani76 9 months, 2 weeks ago


https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-orgs-manage-auto-deployment.html
upvoted 2 times

" # CloudChef 9 months, 4 weeks ago


https://aws.amazon.com/blogs/security/how-to-use-aws-organizations-to-automate-end-to-end-account-creation/
upvoted 2 times

" # CloudChef 9 months, 4 weeks ago


B "automated deployment enabled"? Not seeing how this is possible in answer A.


upvoted 2 times

" # wassb 3 weeks, 1 day ago


I think it's possible : https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-orgs-manage-auto-deployment.html
upvoted 1 times

" # pititcu667 10 months ago


i will go with a since the the aws organization is already configured why make a new one when you can just add an OU?
upvoted 2 times

" # GeniusMikeLiu 10 months ago


Why not A?
upvoted 2 times

" # Tokyoboy 10 months ago


Existing accounts have to be invited into the OU.
upvoted 2 times
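
A minimal boto3 sketch of the stack-set setup that option A describes: a service-managed stack set with automatic deployment targeted at the departments OU. The template URL, OU ID, and Region are hypothetical.

import boto3

cfn = boto3.client("cloudformation")

# Service-managed permissions let StackSets deploy into accounts of the
# target OU automatically, including accounts created later.
cfn.create_stack_set(
    StackSetName="department-network-baseline",
    TemplateURL="https://s3.amazonaws.com/example-bucket/vpc-baseline.yaml",  # placeholder
    PermissionModel="SERVICE_MANAGED",
    AutoDeployment={"Enabled": True, "RetainStacksOnAccountRemoval": False},
)

# Deploy the baseline to every existing account in the departments OU.
cfn.create_stack_instances(
    StackSetName="department-network-baseline",
    DeploymentTargets={"OrganizationalUnitIds": ["ou-examp-12345678"]},       # placeholder OU
    Regions=["eu-west-2"],
)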


Question #786 Topic 1

A company owns a chain of travel agencies and is running an application in the AWS Cloud. Company employees use the application to search for
information about travel destinations. Destination content is updated four times each year.
Two fixed Amazon EC2 instances serve the application. The company uses an Amazon Route 53 public hosted zone with a multivalue record of
travel.example.com that returns the Elastic IP addresses for the EC2 instances. The application uses Amazon DynamoDB as its primary data
store. The company uses a self-hosted Redis instance as a caching solution.
During content updates, the load on the EC2 instances and the caching solution increases drastically. This increased load has led to downtime on
several occasions. A solutions architect must update the application so that the application is highly available and can handle the load that is
generated by the content updates.
Which solution will meet these requirements?

A. Set up DynamoDB Accelerator (DAX) as in-memory cache. Update the application to use DAX. Create an Auto Scaling group for the EC2
instances. Create an Application Load Balancer (ALB). Set the Auto Scaling group as a target for the ALB. Update the Route 53 record to use a
simple routing policy that targets the ALB's DNS alias. Configure scheduled scaling for the EC2 instances before the content updates.

B. Set up Amazon ElastiCache for Redis. Update the application to use ElastiCache. Create an Auto Scaling group for the EC2 instances.
Create an Amazon CloudFront distribution, and set the Auto Scaling group as an origin for the distribution. Update the Route 53 record to use a
simple routing policy that targets the CloudFront distribution's DNS alias. Manually scale up EC2 instances before the content updates.

C. Set up Amazon ElastiCache for Memcached. Update the application to use ElastiCache Create an Auto Scaling group for the EC2 instances.
Create an Application Load Balancer (ALB). Set the Auto Scaling group as a target for the ALB. Update the Route 53 record to use a simple
routing policy that targets the ALB's DNS alias. Configure scheduled scaling for the application before the content updates.

D. Set up DynamoDB Accelerator (DAX) as in-memory cache. Update the application to use DAX. Create an Auto Scaling group for the EC2
instances. Create an Amazon CloudFront distribution, and set the Auto Scaling group as an origin for the distribution. Update the Route 53
record to use a simple routing policy that targets the CloudFront distribution's DNS alias. Manually scale up EC2 instances before the content
updates.

Correct Answer: A
Reference:
https://aws.amazon.com/dynamodb/dax/

Community vote distribution


A (80%) B (20%)

" # nsvijay04b1 1 week, 4 days ago


Selected Answer: B
Key points
-> he company uses a self-hosted Redis instance as a caching solution. ( Managed Redis is obvious choice as you would make use of existing
redis cache data migrated , DAX is overkill here micro secs latency ? really? )
-> During content updates, the load on the EC2 instances and the caching solution increases drastically. ( Need cloudfront as well, as load is on
EC2 too not only DB)
upvoted 1 times

" # asfsdfsdf 3 months, 3 weeks ago


Selected Answer: A
CF cant point to an ASG so B and D are out - MemCached is not HA so its out only A left
upvoted 3 times

" # AndySH 8 months, 3 weeks ago


Answer is A. DynamoDB Accelerator (DAX) is a fully managed, custom cache for DynamoDB. ElastiCache supports both Redis and Memcached
and typically used to cache results from relational databases
upvoted 4 times

" # Bigbearcn 9 months, 1 week ago


Selected Answer: A
It's A. B or D is wrong because cloudfront is not needed in this case.
upvoted 1 times

" # adsdadasdad 7 months, 1 week ago


Yet the content is "static" because it only gets updated 4 times a year? A makes sense but b and d are good alternatives. Go with A.

upvoted 1 times
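
The scheduled-scaling piece of option A is a single API call; the Auto Scaling group name, capacities, and cron expression below are illustrative only.

import boto3

autoscaling = boto3.client("autoscaling")

# Scale the group out shortly before the quarterly content update window.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="travel-app-asg",          # placeholder ASG name
    ScheduledActionName="pre-content-update-scale-out",
    Recurrence="0 4 1 1,4,7,10 *",                  # 04:00 UTC on the 1st of each quarter
    MinSize=4,
    MaxSize=12,
    DesiredCapacity=8,
)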


Question #787 Topic 1

A medical company is building a data lake on Amazon S3. The data must be encrypted in transit and at rest. The data must remain protected even
if the S3 bucket is inadvertently made public.
Which combination of steps will meet these requirements? (Choose three.)

A. Ensure that each S3 bucket has a bucket policy that includes a Deny statement if the aws:SecureTransport condition is not present.

B. Create a CMK in AWS Key Management Service (AWS KMS). Turn on server-side encryption (SSE) on the S3 buckets, select SSE-KMS for the
encryption type, and use the CMK as the key.

C. Ensure that each S3 bucket has a bucket policy that includes a Deny statement for PutObject actions if the request does not include an
"s3:x-amz-server-side-encryption": "aws:kms" condition.

D. Turn on server-side encryption (SSE) on the S3 buckets and select SSE-S3 for the encryption type.

E. Ensure that each S3 bucket has a bucket policy that includes a Deny statement for PutObject actions if the request does not include an
"s3:x-amz-server-side-encryption": "AES256" condition.

F. Turn on AWS Config. Use the s3-bucket-public-read-prohibited, s3-bucket-public-write-prohibited, and s3-bucket-ssl-requests-only AWS
Config managed rules to monitor the S3 buckets.

Correct Answer: ABC


To determine HTTP or HTTPS requests in a bucket policy, use a condition that checks for the key "aws:SecureTransport". When this key is true,
the request was sent through HTTPS. To comply with the s3-bucket-ssl-requests-only rule, create a bucket policy that explicitly denies access
when the request meets the condition "aws:SecureTransport": "false". This policy explicitly denies access to HTTP requests.
When you create an object, you can specify the use of server-side encryption with AWS Key Management Service (AWS KMS) keys to encrypt
your data. This is true when you are either uploading a new object or copying an existing object. This encryption is known as SSE-KMS.
To enforce object encryption, create an S3 bucket policy that denies any S3 Put request that does not include the x-amz-server-side-encryption
header.
Reference:
https://aws.amazon.com/premiumsupport/knowledge-center/s3-bucket-policy-for-config-rule/
https://docs.aws.amazon.com/AmazonS3/latest/userguide/specifying-kms-encryption.html
https://aws.amazon.com/blogs/security/how-to-prevent-uploads-of-unencrypted-objects-to-amazon-s3/

Community vote distribution


ABC (100%)

" # asfsdfsdf 3 months, 3 weeks ago


ABC - must use CMK to avoid public access of an unauthorized users (no access to the key)
Must enforce header of aws:KMS in order to make sure all object are encrypted with CMK (SSE-KMS)
Finally need to make sure aws:SecureTransport is set to true which means SSL
upvoted 4 times

" # RVD 7 months, 1 week ago


In order to enforce object encryption, create an S3 bucket policy that denies any S3 Put request that does not include the x-amz-server-side-
encryption header. There are two possible values for the x-amz-server-side-encryption header: AES256, which tells S3 to use S3-managed keys,
and aws:kms, which tells S3 to use AWS KMS–managed keys.
upvoted 1 times

" # kenchou73 7 months, 3 weeks ago


Selected Answer: ABC
Due to the requirement of the Data Lake, ABC is better than ADE. That needs SSE-KMS but not SSE-S3.
https://aws.amazon.com/blogs/big-data/build-secure-encrypted-data-lakes-with-aws-lake-formation/
upvoted 1 times

" # Serial_X25 7 months, 3 weeks ago


A,B,C is correct. In my opinion, If you are using SSE-S3 since you have access to the bucket, S3 service will automatically decrypt the file for
you, but with SSE-KMS we have:
"To upload an object encrypted with an AWS KMS key to Amazon S3, you need kms:GenerateDataKey permissions on the key. To download an
object encrypted with an AWS KMS key, you need kms:Decrypt permissions.", from https://docs.aws.amazon.com/AmazonS3/latest/userguide
/UsingKMSEncryption.html
Then, just making the S3 bucket public is not enough to have access to files, you also need access to KMS.
upvoted 2 times


" # panton 9 months, 2 weeks ago


I'll go ABC
A - to use encryption at rest SSL should be enabled
B - Create Key
C - Require Key on PutObject
upvoted 1 times

" # wahlbergusa 10 months, 1 week ago


A,B,C is correct. Reason SSE-S3 is not appropriate is that if the user has access privileges to S3 then they will automatically have access to the
S3 key as well.
upvoted 1 times

" # Bigbearcn 9 months, 4 weeks ago


I don't agree. SSE-S3 and SSE-KMS are same secure. The difference is who manage the key. ABC is a combination solution, but D isn't.
upvoted 2 times

" # wahlbergusa 9 months, 2 weeks ago


Not sure I understand. There is segregation of duties on SSE-KMS, you can assign a key policy to the KMS Key. Hence the reason it is
more secure in terms of access controls to the service.
upvoted 1 times
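
Putting A and C together, a bucket policy along these lines denies plain-HTTP requests and any PutObject call that does not request SSE-KMS. The bucket name is a placeholder and this is a sketch rather than a complete production policy.

import json
import boto3

bucket = "example-medical-data-lake"   # placeholder bucket name
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Deny any request made over plain HTTP (encryption in transit).
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}}
        },
        {   # Deny uploads that do not request SSE-KMS (encryption at rest).
            "Sid": "DenyUnencryptedUploads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
            "Condition": {
                "StringNotEquals": {"s3:x-amz-server-side-encryption": "aws:kms"}
            }
        }
    ]
}

boto3.client("s3").put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))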


Question #788 Topic 1

A company is building an electronic document management system in which users upload their documents. The application stack is entirely
serverless and runs on AWS in the eu-central-1 Region. The system includes a web application that uses an Amazon CloudFront distribution for
delivery with Amazon S3 as the origin.
The web application communicates with Amazon API Gateway Regional endpoints. The API Gateway APIs call AWS Lambda functions that store
metadata in an
Amazon Aurora Serverless database and put the documents into an S3 bucket.
The company is growing steadily and has completed a proof of concept with its largest customer. The company must improve latency outside of
Europe.
Which combination of actions will meet these requirements? (Choose two.)

A. Enable S3 Transfer Acceleration on the S3 bucket. Ensure that the web application uses the Transfer Acceleration signed URLs.

B. Create an accelerator in AWS Global Accelerator. Attach the accelerator to the CloudFront distribution.

C. Change the API Gateway Regional endpoints to edge-optimized endpoints.

D. Provision the entire stack in two other locations that are spread across the world. Use global databases on the Aurora Serverless cluster.

E. Add an Amazon RDS proxy between the Lambda functions and the Aurora Serverless database.

Correct Answer: BC
Reference:
https://aws.amazon.com/global-accelerator/faqs/
https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-api-endpoint-types.html

Community vote distribution


AC (73%) BD (18%) 9%

" # Smartphone Highly Voted $ 10 months, 1 week ago


Answer is A & C
upvoted 12 times

" # Ni_yot Most Recent % 2 weeks, 4 days ago


A and C for me. S3 Transfer speeds up the transfer and optimized edge endpoint reduces latency
upvoted 1 times

" # asfsdfsdf 3 months, 3 weeks ago


Selected Answer: AC
AC
GA is excluded since it cannot point to CF also no need GA if we use CF - this exclude B
D is not needed too complicated
so only ACE left - C for sure help, S3 transfer can help with upload times across the world
E - will help if there were errors connecting to the DB and not latency issues
so AC it is
upvoted 3 times

" # jj22222 7 months, 1 week ago


Selected Answer: AC
C and A look right here
upvoted 2 times

" # lifebegins 8 months ago


It cannot be Global Accelerator. CloudFront and GA are diffrent services and diffrent purposes.
https://aws.amazon.com/global-accelerator/faqs/
upvoted 3 times

" # lifebegins 8 months ago


Global Accelerator is only for Non-HTTP Services such as UDP. Not for Http Apps
upvoted 1 times

" # tobstar86 8 months ago


https://aws.amazon.com/global-accelerator/faqs/
GA good fit for.. "as well as for HTTP use cases that specifically require static IP addresses or deterministic, fast regional failover."

upvoted 2 times
" # jyrajan69 8 months, 2 weeks ago
For those choosing B, please justify your answer. Global Accelerator and CloudFront are 2 separate services, how can you attach a Global
Accelerator to CF? That option is not available as far as I can see. So based on elimination have to go with A and C
upvoted 3 times

" # Bigbearcn 8 months, 3 weeks ago


Selected Answer: AC
AAA CCC
upvoted 3 times

" # Clandestine60 9 months ago


Selected Answer: BD
i`ll go with BD. Global accelerators don`t have cloudfront as their endpoints
"For standard accelerators, the endpoints are Network Load Balancers, Application Load Balancers, Amazon EC2 instances, or Elastic IP
addresses. For custom routing accelerators, endpoints are virtual private cloud (VPC) subnets with one or more EC2 instances. The static IP
addresses accept incoming traffic onto the AWS global network from the edge location that is closest to your users."
https://docs.aws.amazon.com/global-accelerator/latest/dg/introduction-how-it-works.html
upvoted 2 times

" # vampiret 9 months, 1 week ago


Why not D?
upvoted 1 times

" # Cloudxie 3 weeks ago


No global aurora serverless db
upvoted 1 times

" # saeidp 9 months, 1 week ago


Accelerator can attach to ALB or NLB but not cloudfront
I vote for A and C
upvoted 2 times

" # Trueguy 9 months, 2 weeks ago


BC it seems
upvoted 1 times

" # tkanmani76 9 months, 3 weeks ago


BC - An edge-optimized API endpoint is best for geographically distributed clients. API requests are routed to the nearest CloudFront Point of
Presence (POP). This is the default endpoint type for API Gateway REST APIs.
upvoted 2 times

" # pititcu667 9 months, 3 weeks ago


Selected Answer: BC
i think b and c is correct. Because s3 is not exposed directly and we already use cf and there is no talk about provisioning in new regios hence
not d.
upvoted 1 times

" # lucesarano 9 months, 4 weeks ago


B,D. Not A since there is a deployed CF distribution
upvoted 1 times

" # lucesarano 9 months, 4 weeks ago


I meant B,C.
upvoted 1 times


Question #789 Topic 1

A solutions architect is troubleshooting an application that runs on Amazon EC2 instances. The EC2 instances run in an Auto Scaling group. The
application needs to access user data in an Amazon DynamoDB table that has fixed provisioned capacity.
To match the increased workload, the solutions architect recently doubled the maximum size of the Auto Scaling group. Now, when many
instances launch at the same time, some application components are throttled when the components scan the DynamoDB table. The Auto Scaling
group terminates the failing instances and starts new instances until all applications are running
A solutions architect must implement a solution to mitigate the throttling issue in the MOST cost-effective manner
Which solution will meet these requirements?

A. Double the provisioned read capacity of the DynamoDB table.

B. Duplicate the DynamoDB table. Con+gure the running copy of the application to select at random which table it access.

C. Set the DynamoDB table to on-demand mode.

D. Add DynamoDB Accelerator (DAX) to the table.

Correct Answer: C
Reference:
https://aws.amazon.com/premiumsupport/knowledge-center/on-demand-table-throttling-dynamodb/

Community vote distribution


C (71%) D (29%)

" # AndySH Highly Voted $ 8 months, 2 weeks ago


D. DAX accommodates unpredictable bursts of reading activity.
upvoted 9 times

" # nsvijay04b1 Most Recent % 1 week, 3 days ago


Selected Answer: C
Its 100% C as per AWS documentation.
Read <Peak traffic and scaling properties> section in below link
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadWriteCapacityMode.html#HowItWorks.OnDemand
upvoted 1 times

" # Ni_yot 2 weeks, 4 days ago


yeah i'd go for C as well. Several documents mention on demand mode as the way forward. there could still be queuing if data is not in cache
when using DAX
upvoted 1 times

" # wassb 3 weeks, 1 day ago


Selected Answer: D
"However, tables using the on-demand mode might still throttle."
https://aws.amazon.com/premiumsupport/knowledge-center/on-demand-table-throttling-dynamodb/
Considering that switching on demand-mode might not solve the problem, i will go for DAX
upvoted 1 times

" # JohnPi 1 month, 1 week ago


Selected Answer: D
I vote for DAX
upvoted 1 times

" # JohnPi 3 weeks, 6 days ago


https://aws.amazon.com/premiumsupport/knowledge-center/on-demand-table-throttling-dynamodb/
upvoted 1 times

" # JohnPi 3 weeks, 6 days ago


On-demand will still be throttled if the traffic is more than double the previous peak
upvoted 1 times

" # asfsdfsdf 3 months, 3 weeks ago


Selected Answer: C
Since its with peaks better to use on-demand and its more cost effective than to use DAX nodes and pay for them 24/7 - on-demand reads is
0.25$ for 1 million unit

upvoted 1 times
" # HellGate 9 months, 2 weeks ago
Is On-Demand cheaper than DAX?
upvoted 2 times

" # sTeVe86 8 months, 3 weeks ago


Yes, some application components are throttled when they scan the DynamoDB database.
Means READ capacity may not be sufficient.
DAX will fix.
upvoted 3 times

" # wannaaws 4 months, 3 weeks ago


throttling occurs when app scanning the DynamoDB database, note it's not querying, i.e, run query vs full scan. In the table full scan,
caching (DAX) not efficient. Perhaps, change DDB to on-demand may be more cost effective? https://cloud.in28minutes.com/aws-
certification-amazon-dynamodb-vs-rds
upvoted 2 times

" # wahlbergusa 9 months, 2 weeks ago


I'd go with D.
upvoted 3 times

" # lucesarano 9 months, 3 weeks ago


Selected Answer: C
C.

https://aws.amazon.com/premiumsupport/knowledge-center/on-demand-table-throttling-dynamodb/
upvoted 3 times

" # JohnPi 3 weeks, 6 days ago


It is D according to the document.
upvoted 1 times
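
Switching the table to on-demand capacity (option C) is a one-line change; the table name below is a placeholder.

import boto3

# On-demand mode removes the fixed RCU/WCU ceiling, so the scan burst at
# instance launch no longer throttles against provisioned capacity.
boto3.client("dynamodb").update_table(
    TableName="user-data",                 # placeholder table name
    BillingMode="PAY_PER_REQUEST",
)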


Question #790 Topic 1

A solutions architect must analyze a company's Amazon EC2 instances and Amazon Elastic Block Store (Amazon EBS) volumes to determine
whether the company is using resources efficiently. The company is running several large, high-memory EC2 instances to host database clusters
that are deployed in active/passive configurations. The utilization of these EC2 instances varies by the applications that use the databases, and
the company has not identified a pattern.
The solutions architect must analyze the environment and take action based on the findings.
Which solution meets these requirements MOST cost-effectively?

A. Create a dashboard by using AWS Systems Manager OpsCenter. Configure visualizations for Amazon CloudWatch metrics that are
associated with the EC2 instances and their EBS volumes. Review the dashboard periodically, and identify usage patterns. Rightsize the EC2
instances based on the peaks in the metrics.

B. Turn on Amazon CloudWatch detailed monitoring for the EC2 instances and their EBS volumes. Create and review a dashboard that is based
on the metrics. Identify usage patterns. Rightsize the EC2 instances based on the peaks in the metrics.

C. Install the Amazon CloudWatch agent on each of the EC2 instances. Turn on AWS Compute Optimizer, and let it run for at least 12 hours.
Review the recommendations from Compute Optimizer, and rightsize the EC2 instances as directed.

D. Sign up for the AWS Enterprise Support plan. Turn on AWS Trusted Advisor. Wait 12 hours. Review the recommendations from Trusted
Advisor, and rightsize the EC2 instances as directed.

Correct Answer: A

Community vote distribution


C (100%)

" # asfsdfsdf Highly Voted $ 3 months, 3 weeks ago


Selected Answer: C
C - for sure since its memory instance need to install CW agent and to configure memory metrics - optimizer will do the work and analyze and
suggest rightsizing for both Ec2 EBS and EC2 instances - need to pay just for one extra metric per EC2.
A is incorrect OpsCenter is identifying issue with resources like instance failures etc.. not for cost optimizing
D is not cost effective
B - will not fulfil the requirement - no memory data also enable detailed monitoring for all EC2 instances is expensive
upvoted 5 times

" # Ni_yot Most Recent % 2 months, 1 week ago


looks C according to this - CloudWatch Agent is a software package that autonomously and continuously runs on your servers. Using
CloudWatch Agent, we can collect metrics and logs from Amazon Elastic Compute Cloud (Amazon EC2), hybrid, and on-premises servers
running both Linux and Windows.
upvoted 1 times

" # hilft 3 months, 1 week ago


C. Directly from Jon Bonso's practice exam
upvoted 1 times

" # hilft 5 months ago


C. You need CW agent install first
upvoted 1 times

" # ablazleon 7 months, 4 weeks ago


Selected Answer: C
Both A and C seems to have the functional requirements to be useful, but C seems to be cheaper. I checked cheris opinion, aws compute
optimizer seems to have a free tier where it is free. System Manager ops center seems not to have this free tier (https://aws.amazon.com
/compute-optimizer/pricing/, https://aws.amazon.com/systems-manager/pricing/).
upvoted 2 times

" # cheris 8 months ago


I don't think option C is cost-effective ... A sounds less costly.
upvoted 1 times

" # kaush4u 9 months, 2 weeks ago


To sign up for AWS Compute Optimizer, go to the AWS Compute Optimizer console and click “opt in”. You must have an Amazon Web Services
account to access this service. Once you opt in, AWS Compute Optimizer immediately starts analyzing your AWS resources and starts delivering
recommendations. When you first opt in AWS Compute Optimizer, it may take up to 12 hours to fully analyze the AWS resources in your account.
upvoted 1 times


" # AMKazi 10 months ago


Selected Answer: C
https://aws.amazon.com/compute-optimizer/
upvoted 1 times

" # Smartphone 10 months ago


C is the correct answer. CloudWatch Agent needs to be install to work the AWS Compute Optimizer correctly.
upvoted 1 times

" # krisvija12 10 months, 1 week ago


If not for cost wise I would have gone with D. For D one has to purchase Enterprise plan, hence the right answer should be C. Install CW agent
on each EC2
upvoted 1 times

" # kubala 10 months, 1 week ago


Selected Answer: C
need to install cloudwatch agent
upvoted 1 times

" # kubala 10 months, 1 week ago


c - need to install cloudwatch agent
upvoted 1 times
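
For the community-preferred option C, opting in to Compute Optimizer and reading its findings can be scripted roughly as shown below. The CloudWatch agent still has to be installed separately so memory metrics are available; treat the field names as illustrative.

import boto3

optimizer = boto3.client("compute-optimizer")

# Opt the account in; analysis of recent CloudWatch metrics starts automatically.
optimizer.update_enrollment_status(status="Active")

# Later, once recommendations are ready, review the rightsizing findings.
recs = optimizer.get_ec2_instance_recommendations()
for rec in recs["instanceRecommendations"]:
    top_option = rec["recommendationOptions"][0]
    print(rec["instanceArn"], rec["finding"], "->", top_option["instanceType"])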


Question #791 Topic 1

A large mobile gaming company has successfully migrated all of its on-premises infrastructure to the AWS Cloud. A solutions architect is
reviewing the environment to ensure that it was built according to the design and that it is running in alignment with the Well-Architected
Framework.
While reviewing previous monthly costs in Cost Explorer, the solutions architect notices that the creation and subsequent termination of several
large instance types account for a high proportion of the costs. The solutions architect finds out that the company's developers are launching new
Amazon EC2 instances as part of their testing and that the developers are not using the appropriate instance types.
The solutions architect must implement a control mechanism to limit the instance types that only the developers can launch.
Which solution will meet these requirements?

A. Create a desired-instance-type managed rule in AWS Config. Configure the rule with the instance types that are allowed. Attach the rule to
an event to run each time a new EC2 instance is launched.

B. In the EC2 console, create a launch template that specifies the instance types that are allowed. Assign the launch template to the
developers' IAM accounts.

C. Create a new IAM policy. Specify the instance types that are allowed. Attach the policy to an IAM group that contains the IAM accounts for
the developers

D. Use EC2 Image Builder to create an image pipeline for the developers and assist them in the creation of a golden image.

Correct Answer: A
Reference:
https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config_develop-rules_getting-started.html

Community vote distribution


C (100%)

" # bkrish Highly Voted $ 6 months, 4 weeks ago


Selected Answer: C
This is doable with IAM policy creation to restrict users to specific instance types. Found the below article.
https://blog.vizuri.com/limiting-allowed-aws-instance-type-with-iam-policy
upvoted 5 times

" # Ni_yot Most Recent % 2 weeks, 3 days ago


should be C
upvoted 1 times

" # gnic 2 months, 1 week ago


Selected Answer: C
It's C - AWS Config is for monitoring and alert, it doesn't prevent.
upvoted 1 times

" # hilft 3 months, 2 weeks ago


C.
tkanmani76 is right.
upvoted 3 times

" # Ni_yot 8 months, 2 weeks ago


will go with B. Better to restrict using IAM groups.
upvoted 1 times

" # Ni_yot 2 weeks, 3 days ago


its C my bad
upvoted 1 times

" # feddo 10 months, 1 week ago


Could this not be done with either AWS Config or an IAM Policy? Wouldn't both A and C be options in this case? I am not sure why I would pick
one over the other here...
upvoted 1 times

" # tkanmani76 9 months, 3 weeks ago


Its C - Option A doesnt restrict only for devs. It would impact everyone.


upvoted 11 times

" # wahlbergusa 9 months, 1 week ago


AWS Config is also a detection engine. You can automate things in the backend to correct whatever needs to be corrected. But it cannot
prefent it from happening.
upvoted 5 times

" # sodasu 3 weeks, 4 days ago


Right! →https://aws.amazon.com/tw/config/faq/
---
Q: Does the service prevent users from taking non-compliant actions?

Config rules do not directly affect how end-users consume AWS. Config rules evaluate resource configurations only after a configuration
change has been completed and recorded by AWS Config. Config rules do not prevent the user from making changes that could be
non-compliant. To control what a user can provision on AWS and configuration parameters allowed during provisioning, please use
AWS Identity and Access Management (IAM) Policies and AWS Service Catalog respectively.
upvoted 1 times
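
For completeness, an IAM policy of the kind option C describes might look like this sketch; the allowed instance types and group name are illustrative. Attaching the Deny policy to the developers' group blocks ec2:RunInstances for anything outside the approved list.

import json
import boto3

iam = boto3.client("iam")

restrict_instance_types = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyDisallowedInstanceTypes",
        "Effect": "Deny",
        "Action": "ec2:RunInstances",
        "Resource": "arn:aws:ec2:*:*:instance/*",
        "Condition": {
            # Deny launches of anything other than the approved types.
            "StringNotEquals": {"ec2:InstanceType": ["t3.micro", "t3.small", "t3.medium"]}
        }
    }]
}

policy = iam.create_policy(
    PolicyName="developers-allowed-instance-types",
    PolicyDocument=json.dumps(restrict_instance_types),
)
iam.attach_group_policy(
    GroupName="developers",                              # placeholder group
    PolicyArn=policy["Policy"]["Arn"],
)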


Question #792 Topic 1

A company with global offices has a single 1 Gbps AWS Direct Connect connection to a single AWS Region. The company's on-premises network
uses the connection to communicate with the company's resources in the AWS Cloud. The connection has a single private virtual interface that
connects to a single VPC.
A solutions architect must implement a solution that adds a redundant Direct Connect connection in the same Region. The solution also must
provide connectivity to other Regions through the same pair of Direct Connect connections as the company expands into other Regions.
Which solution meets these requirements?

A. Provision a Direct Connect gateway. Delete the existing private virtual interface from the existing connection. Create the second Direct
Connect connection. Create a new private virtual interface on each connection, and connect both private virtual interfaces to the Direct
Connect gateway. Connect the Direct Connect gateway to the single VPC.
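
The Direct Connect gateway setup that option A describes can be sketched with boto3 roughly as follows; the connection IDs, VLANs, ASNs, and virtual private gateway ID are placeholders.

import boto3

dx = boto3.client("directconnect")

# A Direct Connect gateway is a global object that both connections can share
# and that can later be associated with gateways in other Regions.
dxgw = dx.create_direct_connect_gateway(
    directConnectGatewayName="corp-dx-gateway",
    amazonSideAsn=64512,
)["directConnectGateway"]

# One private VIF per Direct Connect connection, both attached to the gateway.
for conn_id, vlan in [("dxcon-primary111", 101), ("dxcon-secondary222", 102)]:  # placeholders
    dx.create_private_virtual_interface(
        connectionId=conn_id,
        newPrivateVirtualInterface={
            "virtualInterfaceName": f"corp-private-vif-{vlan}",
            "vlan": vlan,
            "asn": 65001,                                   # on-premises BGP ASN (placeholder)
            "directConnectGatewayId": dxgw["directConnectGatewayId"],
        },
    )

# Associate the gateway with the VPC's virtual private gateway.
dx.create_direct_connect_gateway_association(
    directConnectGatewayId=dxgw["directConnectGatewayId"],
    gatewayId="vgw-0123456789abcdef0",                       # placeholder VGW
)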


737 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

738 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

739 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

740 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

741 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

742 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

743 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

744 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

745 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

746 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

747 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

748 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

749 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

750 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

751 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

752 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

753 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

754 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

755 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

756 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

757 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

758 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

759 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

760 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

761 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

762 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

763 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

764 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

765 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

766 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

767 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

768 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

769 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

770 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

771 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

772 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

773 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

774 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

775 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

776 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

777 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

778 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

779 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

780 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

781 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

782 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

783 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

784 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

785 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

786 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

787 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

788 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

789 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

790 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

791 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

792 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

793 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

794 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

795 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

796 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

797 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

798 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

799 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

800 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

801 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

802 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

803 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

804 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

805 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

806 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

807 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

808 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

809 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

810 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

811 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

812 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

813 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

814 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

815 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

816 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

817 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

818 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

819 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

820 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

821 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

822 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

823 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

824 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

825 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

826 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

827 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

828 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

829 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

830 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

831 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

832 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

833 of 834 11/7/22, 3:45 PM


AWS Certified Solutions Architect - Professional Exam – Free Exam Q&As, Page 1 | ExamTopics https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/custom-view/

834 of 834 11/7/22, 3:45 PM

You might also like