
Amazon Pass4sure Aws-Solution-Architect-Associate Study Guide 2022-Jun-20 by Nathan 462q Vce


100% Valid and Newest Version AWS-Solution-Architect-Associate Questions & Answers shared by Certleader

https://www.certleader.com/AWS-Solution-Architect-Associate-dumps.html (672 Q&As)

AWS-Solution-Architect-Associate Dumps

AWS Certified Solutions Architect - Associate

https://www.certleader.com/AWS-Solution-Architect-Associate-dumps.html

The Leader of IT Certification visit - https://www.certleader.com



NEW QUESTION 1
You are trying to launch an EC2 instance; however, the instance seems to go into a terminated state immediately. What would probably not be a reason that this
is happening?

A. The AMI is missing a required part.


B. The snapshot is corrupt.
C. You need to create storage in EBS first.
D. You've reached your volume limit.

Answer: C

Explanation: Amazon EC2 provides virtual computing environments, known as instances.


After you launch an instance, AWS recommends that you check its status to confirm that it goes from the pending state to the running state, not the terminated
state.
The following are a few reasons why an Amazon EBS-backed instance might terminate immediately: You've reached your volume limit.
The AMI is missing a required part. The snapshot is corrupt. Reference:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_InstanceStraightToTerminated.html

NEW QUESTION 2
You have set up an Auto Scaling group. The cooldown period for the Auto Scaling group is 7 minutes. The first instance is launched after 3 minutes, while the
second instance is launched after 4 minutes. How many minutes after the first instance is launched will Auto Scaling accept another scaling activity request?

A. 11 minutes
B. 7 minutes
C. 10 minutes
D. 14 minutes

Answer: A

Explanation: If an Auto Scaling group is launching more than one instance, the cooldown period for each instance starts after that instance is launched. The
group remains locked until the last instance that was launched has completed its cooldown period. In this case the cooldown period for the first instance starts
after 3 minutes and finishes at the 10th minute (3+7 cooldown), while for the second instance it starts at the 4th minute and finishes at the 11th minute (4+7
cooldown). Thus, the Auto Scaling group will accept another request only after 11 minutes.
Reference: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/AS_Concepts.html
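As a quick illustration of the arithmetic, here is a minimal Python sketch of the timeline described above; the cooldown and launch times come from the question itself, not from any AWS API.

cooldown = 7            # cooldown period of the Auto Scaling group, in minutes
launch_times = [3, 4]   # minutes after the first request at which each instance launched

# Each instance's cooldown starts when that instance launches; the group stays
# locked until the last launched instance has completed its cooldown.
next_scaling_allowed = max(t + cooldown for t in launch_times)
print(next_scaling_allowed)  # 11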

NEW QUESTION 3
In Amazon EC2 Container Service components, what is the name of a logical grouping of container instances on which you can place tasks?

A. A cluster
B. A container instance
C. A container
D. A task definition

Answer: A

Explanation: Amazon ECS contains the following components:


A Cluster is a logical grouping of container instances that you can place tasks on.
A Container instance is an Amazon EC2 instance that is running the Amazon ECS agent and has been registered into a cluster.
A Task definition is a description of an application that contains one or more container definitions. A Scheduler is the method used for placing tasks on container
instances.
A Service is an Amazon ECS service that allows you to run and maintain a specified number of instances of a task definition simultaneously.
A Task is an instantiation of a task definition that is running on a container instance. A Container is a Linux container that was created as part of a task.
Reference: http://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html

NEW QUESTION 4
Can a user get a notification of each instance start / terminate configured with Auto Scaling?

A. Yes, if configured with the Launch Config


B. Yes, always
C. Yes, if configured with the Auto Scaling group
D. No

Answer: C

Explanation: The user can get notifications using SNS if he has configured the notifications while creating the Auto Scaling group.
Reference: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/GettingStartedTutorial.html
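For illustration, here is a minimal boto3 sketch of attaching launch/terminate notifications to an Auto Scaling group via SNS; the group name and topic ARN are placeholder assumptions.

import boto3

autoscaling = boto3.client("autoscaling")
autoscaling.put_notification_configuration(
    AutoScalingGroupName="my-asg",                                  # assumed group name
    TopicARN="arn:aws:sns:us-east-1:123456789012:asg-events",       # assumed SNS topic
    NotificationTypes=[
        "autoscaling:EC2_INSTANCE_LAUNCH",
        "autoscaling:EC2_INSTANCE_TERMINATE",
    ],
)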

NEW QUESTION 5
Amazon EBS provides the ability to create backups of any Amazon EC2 volume into what is known as

A. snapshots
B. images
C. instance backups


D. mirrors

Answer: A

Explanation: Amazon allows you to make backups of the data stored in your EBS volumes through snapshots that can later be used to create a new EBS volume.
Reference: http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/Storage.html
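A minimal boto3 sketch of taking such a backup; the volume ID is a placeholder assumption.

import boto3

ec2 = boto3.client("ec2")
snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",                # assumed volume ID
    Description="Point-in-time backup of the data volume",
)
print(snapshot["SnapshotId"])   # the snapshot can later seed a new EBS volume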

NEW QUESTION 6
After you recommend Amazon Redshift to a client as an alternative solution to paying data warehouses to analyze his data, your client asks you to explain why you
are recommending Redshift. Which of the following would be a reasonable response to his request?

A. It has high performance at scale as data and query complexity grows.


B. It prevents reporting and analytic processing from interfering with the performance of OLTP workloads.
C. You don't have the administrative burden of running your own data warehouse and dealing with setup, durability, monitoring, scaling, and patching.
D. All answers listed are a reasonable response to his question.

Answer: D

Explanation: Amazon Redshift delivers fast query performance by using columnar storage technology to improve I/O efficiency and parallelizing queries across
multiple nodes. Redshift uses standard PostgreSQL JDBC and ODBC drivers, allowing you to use a wide range of familiar SQL clients. Data load speed scales
linearly with cluster size, with integrations to Amazon S3, Amazon DynamoDB, Amazon Elastic MapReduce,
Amazon Kinesis or any SSH-enabled host.
AWS recommends Amazon Redshift for customers who have a combination of needs, such as: High performance at scale as data and query complexity grows
Desire to prevent reporting and analytic processing from interfering with the performance of OLTP workloads
Large volumes of structured data to persist and query using standard SQL and existing BI tools Desire to avoid the administrative burden of running one's own data
warehouse and dealing with setup, durability, monitoring, scaling and patching
Reference: https://aws.amazon.com/running_databases/#redshift_anchor

NEW QUESTION 7
One of the criteria for a new deployment is that the customer wants to use AWS Storage Gateway. However you are not sure whether you should use gateway-
cached volumes or gateway-stored volumes or even what the differences are. Which statement below best describes those differences?

A. Gateway-cached lets you store your data in Amazon Simple Storage Service (Amazon S3) and retain a copy of frequently accessed data subsets locally.
B. Gateway-stored enables you to configure your on-premises gateway to store all your data locally and then asynchronously back up point-in-time snapshots of
this data to Amazon S3.
C. Gateway-cached is free whilst gateway-stored is not.
D. Gateway-cached is up to 10 times faster than gateway-stored.
E. Gateway-stored lets you store your data in Amazon Simple Storage Service (Amazon S3) and retain a copy of frequently accessed data subsets locally.
F. Gateway-cached enables you to configure your on-premises gateway to store all your data locally and then asynchronously back up point-in-time snapshots of
this data to Amazon S3.

Answer: A

Explanation: Volume gateways provide cloud-backed storage volumes that you can mount as Internet Small Computer System Interface (iSCSI) devices from
your on-premises application servers. The gateway supports the following volume configurations:
Gateway-cached volumes — You store your data in Amazon Simple Storage Service (Amazon S3) and retain a copy of frequently accessed data subsets locally.
Gateway-cached volumes offer a substantial cost savings on primary storage and minimize the need to scale your storage on-premises. You also retain low-
latency access to your frequently accessed data.
Gateway-stored volumes — If you need low-latency access to your entire data set, you can configure your on-premises gateway to store all your data locally and
then asynchronously back up point-in-time snapshots of this data to Amazon S3. This configuration provides durable and inexpensive off-site backups that you can
recover to your local data center or Amazon EC2. For example, if you need replacement capacity for disaster recovery, you can recover the backups to Amazon
EC2.
Reference: http://docs.aws.amazon.com/storagegateway/latest/userguide/volume-gateway.html

NEW QUESTION 8
A user is launching an EC2 instance in the US East region. Which of the below mentioned options is recommended by AWS with respect to the selection of the
availability zone?

A. Always select the AZ while launching an instance


B. Always select the US-East-1-a zone for HA
C. Do not select the AZ; instead let AWS select the AZ
D. The user can never select the availability zone while launching an instance

Answer: C

Explanation: When launching an instance with EC2, AWS recommends not selecting the Availability Zone (AZ); the default Availability Zone should be accepted.
This enables AWS to select the best Availability Zone based on system health and available capacity. An Availability Zone should be specified only when the user
launches additional instances and needs them to be in the same AZ as, or a different AZ from, the running instances.
Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html

NEW QUESTION 9
What is a placement group in Amazon EC2?

A. It is a group of EC2 instances within a single Availability Zone.


B. It is the edge location of your web content.


C. It is the AWS region where you run the EC2 instance of your web content.
D. It is a group used to span multiple Availability Zones.

Answer: A

Explanation: A placement group is a logical grouping of instances within a single Availability Zone. Reference:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html

NEW QUESTION 10
You are migrating an internal server in your DC to an EC2 instance with an EBS volume. Your server disk usage is around 500GB so you just copied all your data to
a 2TB disk to be used with AWS Import/Export. Where will the data be imported once it arrives at Amazon?

A. to a 2TB EBS volume


B. to an S3 bucket with 2 objects of 1TB
C. to an 500GB EBS volume
D. to an S3 bucket as a 2TB snapshot

Answer: B

Explanation: An import to Amazon EBS will have different results depending on whether the capacity of your storage device is less than or equal to 1 TB or
greater than 1 TB. The maximum size of an Amazon EBS snapshot is 1 TB, so if the device image is larger than 1 TB, the image is chunked and stored on
Amazon S3. The target location is determined based on the total capacity of the device, not the amount of data on the device.
Reference: http://docs.aws.amazon.com/AWSImportExport/latest/DG/Concepts.html

NEW QUESTION 10
A client needs you to import some existing infrastructure from a dedicated hosting provider to AWS to try and save on the cost of running his current website. He
also needs an automated process that manages backups, software patching, automatic failure detection, and recovery. You are aware that his existing set up
currently uses an Oracle database. Which of the following AWS databases would be best for accomplishing this task?

A. Amazon RDS
B. Amazon Redshift
C. Amazon SimpleDB
D. Amazon ElastiCache

Answer: A

Explanation: Amazon RDS gives you access to the capabilities of a familiar MySQL, Oracle, SQL Server, or PostgreSQL database engine. This means that the
code, applications, and tools you already use today with your existing databases can be used with Amazon RDS. Amazon RDS automatically patches the
database software and backs up your database, storing the backups for a user-defined retention period and enabling point-in-time recovery.
Reference: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Welcome.html

NEW QUESTION 11
An edge location refers to which Amazon Web Service?

A. An edge location refers to the network configured within a Zone or Region


B. An edge location is an AWS Region
C. An edge location is the location of the data center used for Amazon CloudFront.
D. An edge location is a Zone within an AWS Region

Answer: C

Explanation: Amazon CloudFront is a content distribution network. A content delivery network or content distribution network (CDN) is a large distributed system
of servers deployed in multiple data centers across the world. The location of the data center used for the CDN is called an edge location.
Amazon CloudFront can cache static content at each edge location. This means that your popular static content (e.g., your site's logo, navigational images,
cascading style sheets, JavaScript code, etc.) will be available at a nearby edge location for the browsers to download with low latency and improved performance
for viewers. Caching popular static content with Amazon CloudFront also helps you offload requests for such files from your origin server: CloudFront serves the
cached copy when available and only makes a request to your origin server if the edge location receiving the browser's request does not have a copy of the file.
Reference: http://aws.amazon.com/cloudfront/

NEW QUESTION 13
Do Amazon EBS volumes persist independently from the running life of an Amazon EC2 instance?

A. Yes, they do but only if they are detached from the instance.
B. No, you cannot attach EBS volumes to an instance.
C. No, they are dependent.
D. Yes, they do.

Answer: D

Explanation: An Amazon EBS volume behaves like a raw, unformatted, external block device that you can attach to a
single instance. The volume persists independently from the running life of an Amazon EC2 instance. Reference:
http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/Storage.html


NEW QUESTION 17
Your supervisor has asked you to build a simple file synchronization service for your department. He doesn't want to spend too much money and he wants to be
notified of any changes to files by email. What do you think would be the best Amazon service to use for the email solution?

A. Amazon SES
B. Amazon CloudSearch
C. Amazon SWF
D. Amazon AppStream

Answer: A

Explanation: File change notifications can be sent via email to users following the resource with Amazon Simple Email Service (Amazon SES), an easy-to-use,
cost-effective email solution.
Reference: http://media.amazonwebservices.com/architecturecenter/AWS_ac_ra_filesync_08.pdf

NEW QUESTION 19
In Amazon AWS, which of the following statements is true of key pairs?

A. Key pairs are used only for Amazon SDKs.


B. Key pairs are used only for Amazon EC2 and Amazon CloudFront.
C. Key pairs are used only for Elastic Load Balancing and AWS IAM.
D. Key pairs are used for all Amazon service

Answer: B

Explanation: Key pairs consist of a public and private key, where you use the private key to create a digital signature, and then AWS uses the corresponding
public key to validate the signature. Key pairs are used only for Amazon EC2 and Amazon CloudFront.
Reference: http://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html

NEW QUESTION 21
Does Amazon DynamoDB support both increment and decrement atomic operations?

A. Only increment, since decrements are inherently impossible with DynamoDB's data model.
B. No, neither increment nor decrement operations.
C. Yes, both increment and decrement operations.
D. Only decrement, since increments are inherently impossible with DynamoDB's data model.

Answer: C

Explanation: Amazon DynamoDB supports increment and decrement atomic operations.


Reference: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/APISummary.html
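A minimal boto3 sketch of an atomic counter update using an ADD update expression; the table name and key are placeholder assumptions.

import boto3

table = boto3.resource("dynamodb").Table("GameScores")   # assumed table name
table.update_item(
    Key={"PlayerId": "alice"},                            # assumed key
    UpdateExpression="ADD Score :delta",
    ExpressionAttributeValues={":delta": 1},              # pass -1 to decrement atomically
)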

NEW QUESTION 22
An organization has three separate AWS accounts, one each for development, testing, and production. The organization wants the testing team to have access to
certain AWS resources in the production account. How can the organization achieve this?

A. It is not possible to access resources of one account with another account.


B. Create the IAM roles with cross account access.
C. Create the IAM user in a test account, and allow it access to the production environment with the IAM policy.
D. Create the IAM users with cross account access.

Answer: B

Explanation: An organization has multiple AWS accounts to isolate a development environment from a testing or production environment. At times the users from
one account need to access resources in the other account, such as promoting an update from the development environment to the production environment. In
this case the IAM role with cross account access will provide a solution. Cross account access lets one account share access to their resources with users in the
other AWS accounts.
Reference: http://media.amazonwebservices.com/AWS_Security_Best_Practices.pdf
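A minimal boto3 sketch, run in the production account, of creating a role that the testing account can assume; the account ID and role name are placeholder assumptions, and a permissions policy scoping the allowed production resources would still need to be attached to the role.

import json
import boto3

iam = boto3.client("iam")
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},   # assumed testing account ID
        "Action": "sts:AssumeRole",
    }],
}
iam.create_role(
    RoleName="TestTeamProductionAccess",                           # assumed role name
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)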

NEW QUESTION 24
You need to import several hundred megabytes of data from a local Oracle database to an Amazon RDS DB instance. What does AWS recommend you use to
accomplish this?

A. Oracle export/import utilities


B. Oracle SQL Developer
C. Oracle Data Pump
D. DBMS_FILE_TRANSFER

Answer: C

Explanation: How you import data into an Amazon RDS DB instance depends on the amount of data you have and the number and variety of database objects in
your database.
For example, you can use Oracle SQL Developer to import a simple, 20 MB database; you would want to use Oracle Data Pump to import complex databases or
databases that are several hundred megabytes or several terabytes in size.


Reference: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Oracle.Procedural.Importing.html

NEW QUESTION 29
A user has created an EBS volume with 1000 IOPS. What is the average IOPS that the user will get for most of the year, as per the EC2 SLA, if the volume is attached
to an EBS-optimized instance?

A. 950
B. 990
C. 1000
D. 900

Answer: D

Explanation: As per the AWS SLA, if the volume is attached to an EBS-optimized instance, then Provisioned IOPS volumes are designed to deliver within 10% of
the provisioned IOPS performance 99.9% of the time in a given year. Thus, if the user has created a volume of 1000 IOPS, the user will get a minimum of 900 IOPS
99.9% of the time during the year.
Reference: http://aws.amazon.com/ec2/faqs/

NEW QUESTION 32
You are in the process of creating a Route 53 DNS failover to direct traffic to two EC2 zones. Obviously, if one fails, you would like Route 53 to direct traffic to the
other region. Each region has an ELB with some instances being distributed. What is the best way for you to configure the Route 53 health check?

A. Route 53 doesn't support ELB with an internal health check.You need to create your own Route 53 health check of the ELB
B. Route 53 natively supports ELB with an internal health chec
C. Turn "Eva|uate target health" off and "Associate with Health Check" on and R53 will use the ELB's internal health check.
D. Route 53 doesn't support ELB with an internal health chec
E. You need to associate your resource record set for the ELB with your own health check
F. Route 53 natively supports ELB with an internal health chec
G. Turn "Eva|uate target health" on and "Associate with Health Check" off and R53 will use the ELB's internal health check.

Answer: D

Explanation: With DNS Failover, Amazon Route 53 can help detect an outage of your website and redirect your end users to alternate locations where your
application is operating properly. When you enable this feature, Route 53 uses health checks (regularly making Internet requests to your application's endpoints
from multiple locations around the world) to determine whether each endpoint of your application is up or down.
To enable DNS Failover for an ELB endpoint, create an Alias record pointing to the ELB and set the "Evaluate Target Health" parameter to true. Route 53 creates
and manages the health checks for your ELB automatically. You do not need to create your own Route 53 health check of the ELB. You also do not need to
associate your resource record set for the ELB with your own health check, because Route 53 automatically associates it with the health checks that Route 53
manages on your behalf. The ELB health check will also inherit the health of your backend instances behind that ELB.
Reference:
http://aws.amazon.com/about-aws/whats-new/2013/05/30/amazon-route-53-adds-elb-integration-for-dns-failover/
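A minimal boto3 sketch of the failover alias record described above, with EvaluateTargetHealth enabled so Route 53 relies on the ELB's own health check; the hosted zone IDs, record name, and ELB DNS name are placeholder assumptions.

import boto3

route53 = boto3.client("route53")
route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",                   # assumed hosted zone
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "www.example.com",
            "Type": "A",
            "SetIdentifier": "primary",
            "Failover": "PRIMARY",
            "AliasTarget": {
                "HostedZoneId": "Z35SXDOTRQ7X7K",        # assumed ELB hosted zone ID
                "DNSName": "my-elb-1234567890.us-east-1.elb.amazonaws.com",
                "EvaluateTargetHealth": True,            # inherit the ELB's health check
            },
        },
    }]},
)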

NEW QUESTION 36
A user wants to use an EBS-backed Amazon EC2 instance for a temporary job. Based on the input data, the job is most likely to finish within a week. Which of the
following steps should be followed to terminate the instance automatically once the job is finished?

A. Configure the EC2 instance with a stop instance to terminate it.


B. Configure the EC2 instance with ELB to terminate the instance when it remains idle.
C. Configure the CloudWatch alarm on the instance that should perform the termination action once the instance is idle.
D. Configure an Auto Scaling scheduled activity that terminates the instance after 7 days.

Answer: C

Explanation: Auto Scaling can start and stop an instance at a pre-defined time. Here, the total running time is unknown. Thus, the user has to use a
CloudWatch alarm, which monitors the CPU utilization. The user can create an alarm that is triggered when the average CPU utilization percentage has been
lower than 10 percent
for 24 hours, signaling that the instance is idle and no longer in use. When the utilization is below the threshold limit, the alarm will terminate the instance as
part of its alarm action.
Reference: http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/UsingAlarmActions.html
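A minimal boto3 sketch of such an idle-instance alarm; the instance ID and region are placeholder assumptions, and the terminate action uses the EC2 automate action ARN.

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
cloudwatch.put_metric_alarm(
    AlarmName="terminate-when-idle",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # assumed instance
    Statistic="Average",
    Period=3600,                   # one-hour periods
    EvaluationPeriods=24,          # 24 consecutive hours below the threshold
    Threshold=10.0,
    ComparisonOperator="LessThanThreshold",
    AlarmActions=["arn:aws:automate:us-east-1:ec2:terminate"],            # terminate the instance
)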

NEW QUESTION 38
Which of the following is true of Amazon EC2 security groups?

A. You can modify the outbound rules for EC2-Classic.


B. You can modify the rules for a security group only if the security group controls the traffic for just one instance.
C. You can modify the rules for a security group only when a new instance is created.
D. You can modify the rules for a security group at any time.

Answer: D

Explanation: A security group acts as a virtual firewall that controls the traffic for one or more instances. When you launch an instance, you associate one or more
security groups with the instance. You add rules to each security group that allow traffic to or from its associated instances. You can modify the rules for a security
group at any time; the new rules are automatically applied to all instances that are associated with the security group.
Reference: http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/using-network-security.html


NEW QUESTION 41
An Elastic IP address (EIP) is a static IP address designed for dynamic cloud computing. With an EIP, you can mask the failure of an instance or software by
rapidly remapping the address to another instance in your account. Your EIP is associated with your AWS account, not a particular EC2 instance, and it remains
associated with your account until you choose to explicitly release it. By default how many EIPs is each AWS account limited to on a per region basis?

A. 1
B. 5
C. Unlimited
D. 10

Answer: B

Explanation: By default, all AWS accounts are limited to 5 Elastic IP addresses per region for each AWS account, because public (IPv4) Internet addresses are a
scarce public resource. AWS strongly encourages you to use an EIP primarily for load balancing use cases, and use DNS hostnames for all other inter-node
communication.
If you feel your architecture warrants additional EIPs, you would need to complete the Amazon EC2 Elastic IP Address Request Form and give reasons as to your
need for additional addresses. Reference:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html#using-instance-addressing-limit

NEW QUESTION 44
In Amazon EC2, partial instance-hours are billed .

A. per second used in the hour


B. per minute used
C. by combining partial segments into full hours
D. as full hours

Answer: D

Explanation: Partial instance-hours are billed as full hours. Reference: http://aws.amazon.com/ec2/faqs/

NEW QUESTION 46
In EC2, what happens to the data in an instance store if an instance reboots (either intentionally or unintentionally)?

A. Data is deleted from the instance store for security reasons.


B. Data persists in the instance store.
C. Data is partially present in the instance store.
D. Data in the instance store will be lost.

Answer: B

Explanation: The data in an instance store persists only during the lifetime of its associated instance. If an instance reboots (intentionally or unintentionally), data
in the instance store persists. However, data on instance store volumes is lost under the following circumstances:
Failure of an underlying drive
Stopping an Amazon EBS-backed instance
Terminating an instance
Reference: http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/InstanceStorage.html

NEW QUESTION 48
Can you specify the security group that you created for a VPC when you launch an instance in EC2-Classic?

A. No, you can specify the security group created for EC2-Classic when you launch a VPC instance.
B. No
C. Yes
D. No, you can specify the security group created for EC2-Classic to a non-VPC based instance only.

Answer: B

Explanation: If you're using EC2-Classic, you must use security groups created specifically for EC2-Classic. When you launch an instance in EC2-Classic, you
must specify a security group in the same region as the instance. You can't specify a security group that you created for a VPC when you launch an instance in
EC2-Classic.
Reference:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html#ec2-classic-security-groups

NEW QUESTION 53
While using the EC2 GET requests as URLs, the is the URL that serves as the entry point for the web service.

A. token
B. endpoint
C. action
D. None of these

Answer: B


Explanation: The endpoint is the URL that serves as the entry point for the web service.
Reference: http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/using-query-api.html

NEW QUESTION 54
You have been asked to build a database warehouse using Amazon Redshift. You know a little about it, including that it is a SQL data warehouse solution, and
uses industry standard ODBC and JDBC connections and PostgreSQL drivers. However you are not sure about what sort of storage it uses for database tables.
What sort of storage does Amazon Redshift use for database tables?

A. InnoDB Tables
B. NDB data storage
C. Columnar data storage
D. NDB CLUSTER Storage

Answer: C

Explanation: Amazon Redshift achieves efficient storage and optimum query performance through a combination of massively parallel processing, columnar data
storage, and very efficient, targeted data compression encoding schemes.
Columnar storage for database tables is an important factor in optimizing analytic query performance because it drastically reduces the overall disk I/O
requirements and reduces the amount of data you need to load from disk.
Reference: http://docs.aws.amazon.com/redshift/latest/dg/c_columnar_storage_disk_mem_mgmnt.html

NEW QUESTION 55
Which of the below mentioned options is not available when an instance is launched by Auto Scaling with EC2 Classic?

A. Public IP
B. Elastic IP
C. Private DNS
D. Private IP

Answer: B

Explanation: Auto Scaling supports both EC2 classic and EC2-VPC. When an instance is launched as a part of EC2 classic, it will have the public IP and DNS as
well as the private IP and DNS.
Reference: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/GettingStartedTutorial.html

NEW QUESTION 57
You are building infrastructure for a data warehousing solution and an extra request has come through that there will be a lot of business reporting queries running
all the time and you are not sure if your current DB instance will be able to handle it. What would be the best solution for this?

A. DB Parameter Groups
B. Read Replicas
C. Multi-AZ DB Instance deployment
D. Database Snapshots

Answer: B

Explanation: Read Replicas make it easy to take advantage of MySQL’s built-in replication functionality to elastically scale out beyond the capacity constraints of
a single DB Instance for read-heavy database workloads. There are a variety of scenarios where deploying one or more Read Replicas for a given source DB
Instance may make sense. Common reasons for deploying a Read Replica include:
Scaling beyond the compute or I/O capacity of a single DB Instance for read-heavy database workloads. This excess read traffic can be directed to one or more
Read Replicas.
Serving read traffic while the source DB Instance is unavailable. If your source DB Instance cannot take I/O requests (e.g. due to I/O suspension for backups or
scheduled maintenance), you can direct read traffic to your Read Replica(s). For this use case, keep in mind that the data on the Read Replica may be "stale"
since the source DB Instance is unavailable.
Business reporting or data warehousing scenarios; you may want business reporting queries to run against a Read Replica, rather than your primary, production
DB Instance.
Reference: https://aws.amazon.com/rds/faqs/
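A minimal boto3 sketch of creating a read replica to absorb the reporting queries; both instance identifiers are placeholder assumptions.

import boto3

rds = boto3.client("rds")
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="warehouse-reporting-replica",    # assumed replica name
    SourceDBInstanceIdentifier="warehouse-primary",        # assumed source instance
)
# Point the business-reporting tools at the replica's endpoint so the primary
# keeps its capacity for transactional traffic.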

NEW QUESTION 59
Much of your company's data does not need to be accessed often, and can take several hours for retrieval time, so it's stored on Amazon Glacier. However
someone within your organization has expressed concerns that his data is more sensitive than the other data, and is wondering whether the high
level of encryption that he knows is on S3 is also used on the much cheaper Glacier service. Which of the following statements would be most applicable in
regards to this concern?

A. There is no encryption on Amazon Glacier, that's why it is cheaper.


B. Amazon Glacier automatically encrypts the data using AES-128, a lesser encryption method than Amazon S3, but you can change it to AES-256 if you are willing
to pay more.
C. Amazon Glacier automatically encrypts the data using AES-256, the same as Amazon S3.
D. Amazon Glacier automatically encrypts the data using AES-128, a lesser encryption method than Amazon S3.

Answer: C

Explanation: Like Amazon S3, the Amazon Glacier service provides low-cost, secure, and durable storage. But where S3 is designed for rapid retrieval, Glacier is
meant to be used as an archival service for data that is not accessed often, and for which retrieval times of several hours are suitable.
Amazon Glacier automatically encrypts the data using AES-256 and stores it durably in an immutable form. Amazon Glacier is designed to provide average annual


durability of 99.999999999% for an archive. It stores each archive in multiple facilities and multiple devices. Unlike traditional systems which can require laborious
data verification and manual repair, Glacier performs regular, systematic data integrity checks, and is built to be automatically self-healing.
Reference: http://d0.awsstatic.com/whitepapers/Security/AWS%20Security%20Whitepaper.pdf

NEW QUESTION 62
A major finance organisation has engaged your company to set up a large data mining application. Using AWS, you decide the best service for this is Amazon
Elastic MapReduce (EMR), which you know uses Hadoop. Which of the following statements best describes Hadoop?

A. Hadoop is 3rd Party software which can be installed using AMI


B. Hadoop is an open source python web framework
C. Hadoop is an open source Java software framework
D. Hadoop is an open source javascript framework

Answer: C

Explanation: Amazon EMR uses Apache Hadoop as its distributed data processing engine.
Hadoop is an open source, Java software framework that supports data-intensive distributed applications running on large clusters of commodity hardware.
Hadoop implements a programming model named "MapReduce," where the data is divided into many small fragments of work, each of which may be executed on
any node in the cluster.
This framework has been widely used by developers, enterprises and startups and has proven to be a reliable software platform for processing up to petabytes of
data on clusters of thousands of commodity machines.
Reference: http://aws.amazon.com/elasticmapreduce/faqs/

NEW QUESTION 64
In Amazon EC2 Container Service, are other container types supported?

A. Yes, EC2 Container Service supports any container service you need.
B. Yes, EC2 Container Service also supports Microsoft container service.
C. No, Docker is the only container platform supported by EC2 Container Service presently.
D. Yes, EC2 Container Service supports Microsoft container service and OpenStack.

Answer: C

Explanation: In Amazon EC2 Container Service, Docker is the only container platform supported by EC2 Container Service presently.
Reference: http://aws.amazon.com/ecs/faqs/

NEW QUESTION 69
is a fast, flexible, fully managed push messaging service.

A. Amazon SNS
B. Amazon SES
C. Amazon SQS
D. Amazon FPS

Answer: A

Explanation: Amazon Simple Notification Service (Amazon SNS) is a fast, flexible, fully managed push messaging service. Amazon SNS makes it simple and
cost-effective to push to mobile devices such as iPhone, iPad, Android, Kindle Fire, and internet connected smart devices, as well as pushing to other distributed
services.
Reference: http://aws.amazon.com/sns/?nc1=h_I2_as

NEW QUESTION 74
You have just been given a scope for a new client who has an enormous amount of data (petabytes) that he constantly needs analysed. Currently he is paying a
huge amount of money for a data warehousing company to do this for him and is wondering if AWS can provide a cheaper solution. Do you think AWS has a
solution for this?

A. Yes. Amazon SimpleDB
B. No. Not presently
C. Yes. Amazon Redshift
D. Yes. Your choice of relational AMIs on Amazon EC2 and EBS

Answer: C

Explanation: Amazon Redshift is a fast, fully managed, petabyte-scale data warehouse service that makes it simple and cost-effective to efficiently analyze all
your data using your existing business intelligence tools. You can start small for just $0.25 per hour with no commitments or upfront costs and scale to a petabyte
or more for $1,000 per terabyte per year, less than a tenth of most other data warehousing solutions. Amazon Redshift delivers fast query performance by using
columnar storage technology to improve I/O efficiency and parallelizing queries across multiple nodes. Redshift uses standard PostgreSQL JDBC and ODBC
drivers, allowing you to use a wide range of familiar SQL clients. Data load speed scales linearly with cluster size, with integrations to Amazon S3, Amazon
DynamoDB, Amazon Elastic MapReduce, Amazon Kinesis or any SSH-enabled host.
Reference: https://aws.amazon.com/running_databases/#redshift_anchor


NEW QUESTION 78
An organization has created an application which is hosted on an AWS EC2 instance. The application stores images to S3 when the end user uploads to it. The
organization does not want to store the AWS security credentials required to access S3 inside the instance. Which of the below mentioned options is a possible
solution to avoid any security threat?

A. Use IAM-based single sign-on between the AWS resources and the organization application.
B. Use the IAM role and assign it to the instance.
C. Since the application is hosted on EC2, it does not need credentials to access S3.
D. Use X.509 certificates instead of the access and secret access keys.

Answer: B

Explanation: The AWS IAM role uses temporary security credentials to access AWS services. Once the role is assigned to an instance, it will not need any
security credentials to be stored on the instance. Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
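A minimal boto3 sketch of what the application code looks like when the instance has an IAM role attached: no keys appear anywhere, because boto3 obtains the role's temporary credentials from the instance metadata automatically. The bucket and file names are placeholder assumptions.

import boto3

s3 = boto3.client("s3")   # no credentials supplied; the attached role provides them
s3.upload_file("/tmp/photo.jpg", "my-image-bucket", "uploads/photo.jpg")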

NEW QUESTION 83
Can resource record sets in a hosted zone have a different domain suffix (for example, www.blog.acme.com and www.acme.ca)?

A. Yes, it can have for a maximum of three different TLDs.


B. Yes
C. Yes, it can have depending on the TLD.
D. No

Answer: D

Explanation: The resource record sets contained in a hosted zone must share the same suffix. For example, the example.com hosted zone can contain resource
record sets for www.example.com and www.aws.example.com subdomains, but it cannot contain resource record sets for a www.example.ca subdomain.
Reference: http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/AboutHostedZones.html

NEW QUESTION 88
You are running PostgreSQL on Amazon RDS and it seems to be running smoothly, deployed in one availability zone. A database administrator asks you if DB
instances running PostgreSQL support Multi-AZ deployments. What would be a correct response to this question?

A. Yes.
B. Yes but only for small db instances.
C. No.
D. Yes, but you need to request the service from AWS.

Answer: A

Explanation: Amazon RDS supports DB instances running several versions of PostgreSQL. Currently we support PostgreSQL versions 9.3.1, 9.3.2, and 9.3.3.
You can create DB instances and DB snapshots,
point-in-time restores and backups.
DB instances running PostgreSQL support Multi-AZ deployments, Provisioned IOPS, and can be created inside a VPC. You can also use SSL to connect to a DB
instance running PostgreSQL.
You can use any standard SQL client application to run commands for the instance from your client computer. Such applications include pgAdmin, a popular Open
Source administration and development tool for PostgreSQL, or psql, a command line utility that is part of a PostgreSQL installation. In order to deliver a managed
service experience, Amazon RDS does not provide host access to DB instances, and it restricts access to certain system procedures and tables that require
advanced privileges. Amazon RDS supports access to databases on a DB instance using any standard SQL client application. Amazon RDS does not allow direct
host access to a DB instance via Telnet or Secure Shell (SSH).
Reference: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_PostgreSQL.html

NEW QUESTION 92
A user has launched 10 EC2 instances inside a placement group. Which of the below mentioned statements is true with respect to the placement group?

A. All instances must be in the same AZ


B. All instances can be across multiple regions
C. The placement group cannot have more than 5 instances
D. All instances must be in the same region

Answer: A

Explanation: A placement group is a logical grouping of EC2 instances within a single Availability Zone. Using placement groups enables applications to
participate in a low-latency, 10 Gbps network. Placement groups are recommended for applications that benefit from low network latency, high network throughput
or both.
Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html

NEW QUESTION 95
Which of the following AWS CLI commands is syntactically incorrect?
1. $ aws ec2 describe-instances
2. $ aws ec2 start-instances --instance-ids i-1348636c
3. $ aws sns publish --topic-arn arn:aws:sns:us-east-1:546419318123:OperationsError -message "Script Failure"
4. $ aws sqs receive-message --queue-url https://queue.amazonaws.com/546419318123/Test


A. 3
B. 4
C. 2
D. 1

Answer: A

Explanation: The third CLI command is missing a hyphen before "message":
aws sns publish --topic-arn arn:aws:sns:us-east-1:546419318123:OperationsError -message "Script Failure"
The corrected command is:
aws sns publish --topic-arn arn:aws:sns:us-east-1:546419318123:OperationsError --message "Script Failure"
Reference: http://aws.amazon.com/cli/

NEW QUESTION 97
An organization has developed a mobile application which allows end users to capture a photo on their mobile device, and store it inside an application. The
application internally uploads the data to AWS S3. The organization wants each user to be able to directly upload data to S3 using their Google ID. How will the
mobile app allow this?

A. Use the AWS Web identity federation for mobile applications, and use it to generate temporary security credentials for each user.
B. It is not possible to connect to AWS S3 with a Google ID.
C. Create an IAM user every time a user registers with their Google ID and use IAM to upload files to S3.
D. Create a bucket policy with a condition which allows everyone to upload if the login ID has a Google part to it.

Answer: A

Explanation: For Amazon Web Services, the Web identity federation allows you to create cloud-backed mobile apps that use public identity providers, such as
login with Facebook, Google, or Amazon. It will create temporary security credentials for each user, which will be authenticated by the AWS services, such as S3.
Reference: http://docs.aws.amazon.com/STS/latest/UsingSTS/CreatingWIF.html
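A minimal boto3 sketch of the flow: the app exchanges the Google ID token for temporary credentials with AssumeRoleWithWebIdentity, then uses them against S3. The role ARN, token value, and bucket name are placeholder assumptions.

import boto3

google_id_token = "<ID token returned by Google sign-in>"       # placeholder value

sts = boto3.client("sts")
creds = sts.assume_role_with_web_identity(
    RoleArn="arn:aws:iam::123456789012:role/MobileUploadRole",  # assumed role
    RoleSessionName="google-user-session",
    WebIdentityToken=google_id_token,
)["Credentials"]

s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
s3.upload_file("photo.jpg", "my-app-uploads", "photos/photo.jpg")   # assumed bucket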

NEW QUESTION 101


You are architecting an auto-scalable batch processing system using video processing pipelines and Amazon Simple Queue Service (Amazon SQS) for a
customer. You are unsure of the limitations of SQS and need to find out. What do you think is a correct statement about the limitations of Amazon SQS?

A. It supports an unlimited number of queues but a limited number of messages per queue for each user but automatically deletes messages that have been in the
queue for more than 4 weeks.
B. It supports an unlimited number of queues and unlimited number of messages per queue for each user but automatically deletes messages that have been in
the queue for more than 4 days.
C. It supports an unlimited number of queues but a limited number of messages per queue for each user but automatically deletes messages that have been in the
queue for more than 4 days.
D. It supports an unlimited number of queues and unlimited number of messages per queue for each user but automatically deletes messages that have been in
the queue for more than 4 weeks.

Answer: B

Explanation: Amazon Simple Queue Service (Amazon SQS) is a messaging queue service that handles messages or workflows between other components in a
system.
Amazon SQS supports an unlimited number of queues and unlimited number of messages per queue for each user. Please be aware that Amazon SQS
automatically deletes messages that have been in the queue for more than 4 days.
Reference: http://aws.amazon.com/documentation/sqs/

NEW QUESTION 104


An online gaming site asked you if you can deploy a database that is a fast, highly scalable NoSQL database service in AWS for a new site that he wants to build.
Which database should you recommend?

A. Amazon DynamoDB
B. Amazon RDS
C. Amazon Redshift
D. Amazon SimpleDB

Answer: A

Explanation: Amazon DynamoDB is ideal for database applications that require very low latency and predictable performance at any scale but don’t need
complex querying capabilities like joins or transactions. Amazon DynamoDB is a fully-managed NoSQL database service that offers high performance, predictable
throughput and low cost. It is easy to set up, operate, and scale.
With Amazon DynamoDB, you can start small, specify the throughput and storage you need, and easily scale your capacity requirements on the fly. Amazon
DynamoDB automatically partitions data over a number of servers to meet your request capacity. In addition, DynamoDB automatically replicates your data
synchronously across multiple Availability Zones within an AWS Region to ensure high-availability and data durability.
Reference: https://aws.amazon.com/running_databases/#dynamodb_anchor

NEW QUESTION 106


You have been doing a lot of testing of your VPC network by deliberately failing EC2 instances to test whether instances are failing over properly. Your customer,
who will be paying the AWS bill for all this, asks you if he is being charged for all these instances. You try to explain to him how the billing works on EC2 instances
to the best of your knowledge. What would be an appropriate response to give to the customer
in regards to this?


A. Billing commences when the Amazon EC2 AMI instance is completely up and billing ends as soon as the instance starts to shut down.
B. Billing only commences after 1 hour of uptime and billing ends when the instance terminates.
C. Billing commences when Amazon EC2 initiates the boot sequence of an AMI instance and billing ends when the instance shuts down.
D. Billing commences when Amazon EC2 initiates the boot sequence of an AMI instance and billing ends as soon as the instance starts to shut down.

Answer: C

Explanation: Billing commences when Amazon EC2 initiates the boot sequence of an AMI instance. Billing ends when the instance shuts down, which could occur
through a web services command, by running "shutdown -h", or through instance failure.
Reference: http://aws.amazon.com/ec2/faqs/#Billing

NEW QUESTION 108


You log in to IAM on your AWS console and notice the following message. "Delete your root access keys." Why do you think IAM is requesting this?

A. Because the root access keys will expire as soon as you log out.
B. Because the root access keys expire after 1 week.
C. Because the root access keys are the same for all users.
D. Because they provide unrestricted access to your AWS resource

Answer: D

Explanation: In AWS an access key is required in order to sign requests that you make using the command-line interface (CLI), using the AWS SDKs, or using
direct API calls. Anyone who has the access key for your root account has unrestricted access to all the resources in your account, including billing information.
One of the best ways to protect your account is to not have an access key for your root account. We recommend that unless you must have a root access key (this
is very rare), that you do not generate one. Instead, AWS best practice is to create one or more AWS Identity and Access Management (IAM) users, give them the
necessary permissions, and use IAM users for everyday interaction with AWS.
Reference:
http://docs.aws.amazon.com/general/latest/gr/aws-access-keys-best-practices.html#root-password

NEW QUESTION 112


Once again your customers are concerned about the security of their sensitive data and, with their latest enquiry, ask about what happens to old storage devices on
AWS. What would be the best answer to this question?

A. AWS reformats the disks and uses them again.


B. AWS uses the techniques detailed in DoD 5220.22-M to destroy data as part of the decommissioning process.
C. AWS uses their own proprietary software to destroy data as part of the decommissioning process.
D. AWS uses a 3rd party security organization to destroy data as part of the decommissioning process.

Answer: B

Explanation: When a storage device has reached the end of its useful life, AWS procedures include a decommissioning process that is designed to prevent
customer data from being exposed to unauthorized individuals.
AWS uses the techniques detailed in DoD 5220.22-M ("National Industrial Security Program Operating Manual") or NIST 800-88 ("Guidelines for Media
Sanitization") to destroy data as part of the decommissioning process.
All decommissioned magnetic storage devices are degaussed and physically destroyed in accordance
with industry-standard practices.
Reference: http://d0.awsstatic.com/whitepapers/Security/AWS%20Security%20Whitepaper.pdf

NEW QUESTION 115


Your company has been storing a lot of data in Amazon Glacier and has asked for an inventory of what is in there exactly. So you have decided that you need to
download a vault inventory. Which of the following statements is incorrect in relation to Vault Operations in Amazon Glacier?

A. You can use Amazon Simple Notification Service (Amazon SNS) notifications to notify you when the job completes.
B. A vault inventory refers to the list of archives in a vault.
C. You can use Amazon Simple Queue Service (Amazon SQS) notifications to notify you when the job completes.
D. Downloading a vault inventory is an asynchronous operation.

Answer: C

Explanation: Amazon Glacier supports various vault operations.


A vault inventory refers to the list of archives in a vault. For each archive in the list, the inventory provides archive information such as archive ID, creation date,
and size. Amazon Glacier updates the vault inventory approximately once a day, starting on the day the first archive is uploaded to the vault. A vault inventory
must exist for you to be able to download it.
Downloading a vault inventory is an asynchronous operation. You must first initiate a job to download the inventory. After receiving the job request, Amazon Glacier
prepares your inventory for download. After the job completes, you can download the inventory data.
Given the asynchronous nature of the job, you can use Amazon Simple Notification Service (Amazon SNS) notifications to notify you when the job completes. You
can specify an Amazon SNS topic for each individual job request or configure your vault to send a notification when specific vault events occur. Amazon Glacier
prepares an inventory for each vault periodically, every 24 hours. If there have been no archive additions or deletions to the vault since the last inventory, the
inventory date is not updated. When you initiate a job for a vault inventory, Amazon Glacier returns the last inventory it generated, which is a point-in-time snapshot
and not real-time data. You might not find it useful to retrieve vault inventory for each archive upload. However, suppose you maintain a database on the client-side
associating metadata about the archives you upload to Amazon Glacier. Then, you might find the vault inventory useful to reconcile information in your database
with the actual vault inventory.
Reference: http://docs.aws.amazon.com/amazonglacier/latest/dev/working-with-vaults.html
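A minimal boto3 sketch of initiating the asynchronous inventory-retrieval job with an SNS completion notification; the vault name and topic ARN are placeholder assumptions.

import boto3

glacier = boto3.client("glacier")
job = glacier.initiate_job(
    accountId="-",                                  # "-" means the current account
    vaultName="company-archive",                    # assumed vault name
    jobParameters={
        "Type": "inventory-retrieval",
        "SNSTopic": "arn:aws:sns:us-east-1:123456789012:glacier-jobs",   # assumed topic
    },
)
# After the completion notification arrives, fetch the inventory:
# glacier.get_job_output(accountId="-", vaultName="company-archive", jobId=job["jobId"])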

NEW QUESTION 120


You are in the process of building an online gaming site for a client and one of the requirements is that it must be able to process vast amounts of data easily.
Which AWS Service would be very helpful in processing all this data?

A. Amazon S3
B. AWS Data Pipeline
C. AWS Direct Connect
D. Amazon EMR

Answer: D

Explanation: Managing and analyzing high data volumes produced by online games platforms can be difficult. The back-end infrastructures of online games can
be challenging to maintain and operate. Peak usage periods, multiple players, and high volumes of write operations are some of the most common problems that
operations teams face.
Amazon Elastic MapReduce (Amazon EMR) is a service that processes vast amounts of data easily. Input data can be retrieved from web server logs stored on
Amazon S3 or from player data stored in Amazon DynamoDB tables to run analytics on player behavior, usage patterns, etc. Those results can be stored again on
Amazon S3, or inserted in a relational database for further analysis with classic business intelligence tools.
Reference: http://media.amazonwebservices.com/architecturecenter/AWS_ac_ra_games_10.pdf

NEW QUESTION 122


You need to change some settings on Amazon Relational Database Service but you do not want the database to reboot immediately which you know might
happen depending on the setting that you change. Which of the following will cause an immediate DB instance reboot to occur?

A. You change storage type from standard to PIOPS, and Apply Immediately is set to true.
B. You change the DB instance class, and Apply Immediately is set to false.
C. You change a static parameter in a DB parameter group.
D. You change the backup retention period for a DB instance from 0 to a nonzero value or from a nonzero value to 0, and Apply Immediately is set to false.

Answer: A

Explanation: A DB instance outage can occur when a DB instance is rebooted, when the DB instance is put into a state that prevents access to it, and when the
database is restarted. A reboot can occur when you manually reboot your DB instance or when you change a DB instance setting that requires a reboot before it
can take effect.
A DB instance reboot occurs immediately when one of the following occurs:
You change the backup retention period for a DB instance from 0 to a nonzero value or from a nonzero value to 0 and set Apply Immediately to true.
You change the DB instance class, and Apply Immediately is set to true.
You change storage type from standard to PIOPS, and Apply Immediately is set to true.
A DB instance reboot occurs during the maintenance window when one of the following occurs:
You change the backup retention period for a DB instance from 0 to a nonzero value or from a nonzero value to 0, and Apply Immediately is set to false.
You change the DB instance class, and Apply Immediately is set to false. Reference:
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Troubleshooting.html#CHAP_Troubleshooting.Security
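A minimal boto3 sketch of making such a change while deferring it to the maintenance window, so no immediate reboot is triggered; the instance identifier is a placeholder assumption.

import boto3

rds = boto3.client("rds")
rds.modify_db_instance(
    DBInstanceIdentifier="reporting-db",    # assumed instance identifier
    BackupRetentionPeriod=7,                # changing 0 <-> nonzero requires a reboot
    ApplyImmediately=False,                 # defer the reboot to the maintenance window
)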

NEW QUESTION 126


You are setting up a very complex financial services grid and so far it has 5 Elastic IP (EIP) addresses.
You go to assign another EIP address, but all accounts are limited to 5 Elastic IP addresses per region by default, so you aren't able to. What is the reason for
this?

A. For security reasons.


B. Hardware restrictions.
C. Public (IPV4) internet addresses are a scarce resource.
D. There are only 5 network interfaces per instance.

Answer: C

Explanation: Public (IPV4) internet addresses are a scarce resource. There is only a limited amount of public IP space available, and Amazon EC2 is committed
to helping use that space efficiently.
By default, all accounts are limited to 5 Elastic IP addresses per region. If you need more than 5 Elastic IP addresses, AWS asks that you apply for your limit to be
raised. They will ask you to think through your use case and help them understand your need for additional addresses.
Reference: http://aws.amazon.com/ec2/faqs/#How_many_instances_can_I_run_in_Amazon_EC2

NEW QUESTION 129


Amazon RDS provides high availability and failover support for DB instances using .

A. customized deployments
B. Appstream customizations
C. log events
D. Multi-AZ deployments

Answer: D

Explanation: Amazon RDS provides high availability and failover support for DB instances using Multi-AZ deployments. Multi-AZ deployments for Oracle,
PostgreSQL, MySQL, and MariaDB DB instances use Amazon's failover technology, while SQL Server DB instances use SQL Server Mirroring.
Reference: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html

NEW QUESTION 134


What does Amazon DynamoDB provide?


A. A predictable and scalable MySQL database


B. A fast and reliable PL/SQL database cluster
C. A standalone Cassandra database, managed by Amazon Web Services
D. A fast, highly scalable managed NoSQL database service

Answer: D

Explanation: Amazon DynamoDB is a managed NoSQL database service offered by Amazon. It automatically manages tasks like scalability for you while it
provides high availability and durability for your data, allowing you to concentrate on other aspects of your application.
Reference: https://aws.amazon.com/running_databases/

NEW QUESTION 135


You want to use AWS Import/Export to send data from your S3 bucket to several of your branch offices. What should you do if you want to send 10 storage units to
AWS?

A. Make sure your disks are encrypted prior to shipping.


B. Make sure you format your disks prior to shipping.
C. Make sure your disks are 1TB or more.
D. Make sure you submit a separate job request for each device.

Answer: D

Explanation: When using Amazon Import/Export, a separate job request needs to be submitted for each physical device even if they belong to the same import or
export job.
Reference: http://docs.aws.amazon.com/AWSImportExport/latest/DG/Concepts.html

NEW QUESTION 140


What would be the best way to retrieve the public IP address of your EC2 instance using the CLI?

A. Using tags
B. Using traceroute
C. Using ipconfig
D. Using instance metadata

Answer: D

Explanation: To determine your instance's public IP address from within the instance, you can use instance metadata. Use the following command to access the
public IP address: For Linux use, $ curl
http://169.254.169.254/latest/meta-data/public-ipv4, and for Windows use, $ wget http://169.254.169.254/latest/meta-data/public-ipv4.
Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-instance-addressing.html
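
A minimal Python sketch (standard library only, run from inside the instance) of the same metadata lookup shown in the curl example above; it assumes the original IMDSv1-style endpoint, while IMDSv2 would additionally require a session token:

import urllib.request

# Query the instance metadata service from inside the instance.
URL = "http://169.254.169.254/latest/meta-data/public-ipv4"
public_ip = urllib.request.urlopen(URL, timeout=2).read().decode()
print(public_ip)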

NEW QUESTION 141


You need to measure the performance of your EBS volumes as they seem to be under performing. You have come up with a measurement of 1,024 KB I/O but
your colleague tells you that EBS volume performance is measured in IOPS. How many IOPS is equal to 1,024 KB I/O?

A. 16
B. 256
C. 8
D. 4

Answer: D

Explanation: Several factors can affect the performance of Amazon EBS volumes, such as instance configuration, I/O characteristics, workload demand, and
storage configuration.
IOPS are input/output operations per second. Amazon EBS measures each I/O operation per second
(that is 256 KB or smaller) as one IOPS. I/O operations that are larger than 256 KB are counted in 256 KB capacity units.
For example, a 1,024 KB I/O operation would count as 4 IOPS.
When you provision a 4,000 IOPS volume and attach it to an EBS-optimized instance that can provide the necessary bandwidth, you can transfer up to 4,000
chunks of data per second (provided that the I/O does not exceed the 128 MB/s per volume throughput limit of General Purpose (SSD) and Provisioned IOPS
(SSD) volumes).
Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSPerformance.html
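
A quick worked check of the arithmetic above (a sketch; the 256 KB capacity unit comes from the explanation):

import math

IO_UNIT_KB = 256          # Amazon EBS counts each I/O in 256 KB capacity units
io_size_kb = 1024         # the 1,024 KB operation from the question

iops = math.ceil(io_size_kb / IO_UNIT_KB)
print(iops)               # -> 4, i.e. one 1,024 KB operation counts as 4 IOPS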

NEW QUESTION 145


Having set up a website to automatically be redirected to a backup website if it fails, you realize that there are different types of failovers that are possible. You
need all your resources to be available the majority of the time. Using Amazon Route 53 which configuration would best suit this requirement?

A. Active-active failover.
B. None. Route 53 can't failover.
C. Active-passive failover.
D. Active-active-passive and other mixed configurations.

Answer: A

Explanation:


You can set up a variety of failover configurations using Amazon Route 53 alias, weighted, latency, geolocation routing, and failover resource record sets.
Active-active failover: Use this failover configuration when you want all of your resources to be available the majority of the time. When a resource becomes
unavailable, Amazon Route 53 can detect that it's unhealthy and stop including it when responding to queries.
Active-passive failover: Use this failover configuration when you want a primary group of resources to be available the majority of the time and you want a
secondary group of resources to be on standby in case all of the primary resources become unavailable. When responding to queries, Amazon Route 53 includes
only the healthy primary resources. If all of the primary resources are unhealthy, Amazon Route 53 begins to include only the healthy secondary resources in
response to DNS queries.
Active-active-passive and other mixed configurations: You can combine alias and non-alias resource record sets to produce a variety of Amazon Route 53
behaviors.
Reference: http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover.html

NEW QUESTION 149


AWS CloudFormation is a service that helps you model and set up your Amazon Web Services resources so that you can spend less time managing those
resources and more time focusing on your applications that run in AWS. You create a template that describes all the AWS resources that you want (like Amazon
EC2 instances or Amazon RDS DB instances), and AWS CloudFormation takes care of provisioning and configuring those resources for you. What formatting is
required for this template?

A. JSON-formatted document
B. CSS-formatted document
C. XML-formatted document
D. HTML-formatted document

Answer: A

Explanation: You can write an AWS CloudFormation template (a JSON-formatted document) in a text editor or pick an existing template. The template describes
the resources you want and their settings. For example, suppose you want to create an Amazon EC2 instance. Your template can declare an Amazon EC2
instance and describe its properties, as shown in the following example:
{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Description" : "A simple Amazon EC2 instance",
  "Resources" : {
    "MyEC2Instance" : {
      "Type" : "AWS::EC2::Instance",
      "Properties" : {
        "ImageId" : "ami-2f726546",
        "InstanceType" : "t1.micro"
      }
    }
  }
}
Reference: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-whatis-howdoesitwork.html
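
A minimal boto3 sketch (the stack name and template body are illustrative, not part of the question) showing how such a JSON template could be submitted to CloudFormation:

import json
import boto3

cfn = boto3.client("cloudformation")
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "MyEC2Instance": {
            "Type": "AWS::EC2::Instance",
            "Properties": {"ImageId": "ami-2f726546", "InstanceType": "t1.micro"},
        }
    },
}
# CloudFormation accepts the JSON document as a string via TemplateBody.
cfn.create_stack(StackName="simple-ec2-stack", TemplateBody=json.dumps(template))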

NEW QUESTION 152


You decide that you need to create a number of Auto Scaling groups to try and save some money as you have noticed that at certain times most of your EC2
instances are not being used. By default, what is the maximum number of Auto Scaling groups that AWS will allow you to create?

A. 12
B. Unlimited
C. 20
D. 2

Answer: C

Explanation: Auto Scaling is an AWS service that allows you to increase or decrease the number of EC2 instances within your application's architecture. With
Auto Scaling, you create collections of EC2 instances, called Auto Scaling groups. You can create these groups from scratch, or from existing EC2 instances that
are already in production. By default, an account is limited to 20 Auto Scaling groups per region; this limit can be raised on request.
Reference: http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_autoscaling

NEW QUESTION 156


A user needs to run a batch process which runs for 10 minutes. This will only be run once, or at maximum twice, in the next month, so the processes will be
temporary only. The process needs 15 X-Large instances. The process downloads the code from S3 on each instance when it is launched, and then generates a
temporary log file. Once the instance is terminated, all the data will be lost. Which of the below mentioned pricing models should the user choose in this case?

A. Spot instance.
B. Reserved instance.
C. On-demand instance.
D. EBS optimized instance.

Answer: A

Explanation: In Amazon Web Services, the spot instance is useful when the user wants to run a process temporarily. The spot instance can terminate the
instance if the other user outbids the existing bid. In this case all storage is temporary and the data is not required to be persistent. Thus, the spot instance is a
good option to save money.
Reference: http://aws.amazon.com/ec2/purchasing-options/spot-instances/

NEW QUESTION 160


You have been storing massive amounts of data on Amazon Glacier for the past 2 years and now start to wonder if there are any limitations on this. What is the
correct answer to your question?

A. The total volume of data is limited but the number of archives you can store are unlimited.
B. The total volume of data is unlimited but the number of archives you can store are limited.
C. The total volume of data and number of archives you can store are unlimited.
D. The total volume of data is limited and the number of archives you can store are limite

Answer: C

Explanation: An archive is a durably stored block of information. You store your data in Amazon Glacier as archives. You may upload a single file as an archive,
but your costs will be lower if you aggregate your data. TAR and ZIP are common formats that customers use to aggregate multiple files into a single file before
uploading to Amazon Glacier.
The total volume of data and number of archives you can store are unlimited. Individual Amazon Glacier archives can range in size from 1 byte to 40 terabytes.
The largest archive that can be uploaded in a single upload request is 4 gigabytes.
For items larger than 100 megabytes, customers should consider using the Multipart upload capability. Archives stored in Amazon Glacier are immutable, i.e.
archives can be uploaded and deleted but cannot be edited or overwritten.
Reference: https://aws.amazon.com/glacier/faqs/

NEW QUESTION 162


You are setting up your first Amazon Virtual Private Cloud (Amazon VPC) so you decide to use the VPC wizard in the AWS console to help make it easier for you.
Which of the following statements is correct regarding instances that you launch into a default subnet via the VPC wizard?

A. Instances that you launch into a default subnet receive a public IP address and 10 private IP addresses.
B. Instances that you launch into a default subnet receive both a public IP address and a private IP address.
C. Instances that you launch into a default subnet don't receive any ip addresses and you need to define them manually.
D. Instances that you launch into a default subnet receive a public IP address and 5 private IP addresse

Answer: B

Explanation: Instances that you launch into a default subnet receive both a public IP address and a private IP address. Instances in a default subnet also receive
both public and private DNS hostnames. Instances that you launch into a nondefault subnet in a default VPC don't receive a public IP address or a DNS hostname.
You can change your subnet's default public IP addressing behavior.
Reference: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/default-vpc.html

NEW QUESTION 165


A user has configured ELB with two EBS backed EC2 instances. The user is trying to understand the DNS access and IP support for ELB. Which of the below
mentioned statements may not help the user understand the IP mechanism supported by ELB?

A. The client can connect over IPV4 or IPV6 using Dualstack


B. Communication between the load balancer and back-end instances is always through IPV4
C. ELB DNS supports both IPV4 and IPV6
D. The ELB supports either IPV4 or IPV6 but not both

Answer: D

Explanation: Elastic Load Balancing supports both Internet Protocol version 6 (IPv6) and Internet Protocol version 4 (IPv4). Clients can connect to the user’s load
balancer using either IPv4 or IPv6 (in EC2-Classic) DNS. However, communication between the load balancer and its back-end instances uses only IPv4. The
user can use the Dualstack-prefixed DNS name to enable IPv6 support for communications between the client and the load balancers. Thus, the clients are able to
access the load balancer using either IPv4 or IPv6 as their individual connectivity needs dictate.
Reference: http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/UserScenariosForEC2.html

NEW QUESTION 170


Does AWS CloudFormation support Amazon EC2 tagging?

A. Yes, AWS CloudFormation supports Amazon EC2 tagging


B. No, CloudFormation doesn’t support any tagging
C. No, it doesn’t support Amazon EC2 tagging.
D. It depends if the Amazon EC2 tagging has been defined in the template.

Answer: A

Explanation: In AWS CloudFormation, Amazon EC2 resources that support the tagging feature can also be tagged in an AWS template. The tag values can refer
to template parameters, other resource names, resource attribute values (e.g. addresses), or values computed by simple functions (e.g., a concatenated list of
strings).
Reference: http://aws.amazon.com/cloudformation/faqs/

NEW QUESTION 174


Amazon S3 allows you to set per-file permissions to grant read and/or write access. However, you have decided that you want an entire bucket with 100 files
already in it to be accessible to the public. You don't want to go through 100 files individually and set permissions. What would be the best way to do this?

A. Move the bucket to a new region


B. Add a bucket policy to the bucket.
C. Move the files to a new bucket.
D. Use Amazon EBS instead of S3


Answer: B

Explanation: Amazon S3 supports several mechanisms that give you flexibility to control who can access your data as well as how, when, and where they can
access it. Amazon S3 provides four different access control mechanisms: AWS Identity and Access Management (IAM) policies, Access Control Lists (ACLs),
bucket policies, and query string authentication. IAM enables organizations to create and manage multiple users under a single AWS account. With IAM policies,
you can grant IAM users fine-grained control to your Amazon S3 bucket or objects. You can use ACLs to selectively add (grant) certain permissions on individual
objects.
Amazon S3 bucket policies can be used to add or deny permissions across some or all of the objects within a single bucket.
With Query string authentication, you have the ability to share Amazon S3 objects through URLs that are
valid for a specified period of time.
Reference: http://aws.amazon.com/s3/details/#security
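
A minimal boto3 sketch (the bucket name is an illustrative placeholder) of the kind of single bucket policy the answer refers to, granting public read access to every object in the bucket:

import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "PublicReadForEntireBucket",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example-bucket/*",   # every object in the bucket
    }],
}
boto3.client("s3").put_bucket_policy(Bucket="example-bucket", Policy=json.dumps(policy))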

NEW QUESTION 179


You need to set up a high level of security for an Amazon Relational Database Service (RDS) you have just built in order to protect the confidential information
stored in it. What are all the possible security groups that RDS uses?

A. DB security groups, VPC security groups, and EC2 security groups.


B. DB security groups only.
C. EC2 security groups only.
D. VPC security groups, and EC2 security group

Answer: A

Explanation: A security group controls the access to a DB instance. It does so by allowing access to IP address ranges or Amazon EC2 instances that you
specify.
Amazon RDS uses DB security groups, VPC security groups, and EC2 security groups. In simple terms, a DB security group controls access to a DB instance that
is not in a VPC, a VPC security group controls access to a DB instance inside a VPC, and an Amazon EC2 security group controls access to an EC2 instance and
can be used with a DB instance.
Reference: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Welcome.html

NEW QUESTION 183


A user wants to achieve High Availability with PostgreSQL DB. Which of the below mentioned functionalities helps achieve HA?

A. Multi AZ
B. Read Replica
C. Multi region
D. PostgreSQL does not support HA

Answer: A

Explanation: The Multi AZ feature allows the user to achieve High Availability. For Multi AZ, Amazon RDS automatically provisions and maintains a synchronous
"standby" replica in a different Availability Zone.
Reference: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Welcome.html

NEW QUESTION 184


A user has created an application which will be hosted on EC2. The application makes calls to DynamoDB to fetch certain data. The application is using the
DynamoDB SDK to connect with DynamoDB from the EC2 instance. Which of the below mentioned statements is true with respect to the best practice for security in this
scenario?

A. The user should create an IAM user with DynamoDB access and use its credentials within the application to connect with DynamoDB
B. The user should attach an IAM role with DynamoDB access to the EC2 instance
C. The user should create an IAM role, which has EC2 access so that it will allow deploying the application
D. The user should create an IAM user with DynamoDB and EC2 access, and attach the user to the application so that it does not use the root account credentials

Answer: B

Explanation: With AWS IAM a user is creating an application which runs on an EC2 instance and makes requests to
AWS, such as DynamoDB or S3 calls. Here it is recommended that the user should not create an IAM user and pass the user's credentials to the application or
embed those credentials inside the application. Instead, the user should use roles for EC2 and give that role access to DynamoDB /S3. When the roles are
attached to EC2, it will give temporary security credentials to the application hosted on that EC2, to connect with DynamoDB / S3.
Reference: http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_WorkingWithGroupsAndUsers.html
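
A minimal boto3 sketch (table and key names are illustrative placeholders) run on the EC2 instance itself. Note that no access keys appear in the code: with an IAM role attached to the instance, the SDK automatically picks up the role's temporary security credentials.

import boto3

# No credentials in code or config files: on an EC2 instance with an attached
# IAM role, boto3 uses the role's temporary security credentials automatically.
table = boto3.resource("dynamodb").Table("photo-metadata")
response = table.get_item(Key={"photo_id": "12345"})
print(response.get("Item"))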

NEW QUESTION 189


After setting up several database instances in Amazon Relational Database Service (Amazon RDS) you decide that you need to track the performance and health
of your databases. How can you do this?

A. Subscribe to Amazon RDS events to be notified when changes occur with a DB instance, DB snapshot, DB parameter group, or DB security group.
B. Use the free Amazon CloudWatch service to monitor the performance and health of a DB instance.
C. All of the items listed will track the performance and health of a database.
D. View, download, or watch database log files using the Amazon RDS console or Amazon RDS APIs. You can also query some database log files that are loaded into database tables.

Answer: C

Explanation:


Amazon Relational Database Service (Amazon RDS) is a web service that makes it easier to set up, operate, and scale a relational database in the
cloud. It provides cost-efficient, resizeable capacity for an industry-standard relational database and manages common database administration tasks.
There are several ways you can track the performance and health of a database or a DB instance. You can:
Use the free Amazon CloudWatch service to monitor the performance and health of a DB instance. Subscribe to Amazon RDS events to be notified when changes
occur with a DB instance, DB snapshot, DB parameter group, or DB security group.
View, download, or watch database log files using the Amazon RDS console or Amazon RDS APIs. You can also query some database log files that are loaded
into database tables.
Use the AWS CloudTrail service to record AWS calls made by your AWS account. The calls are recorded in log files and stored in an Amazon S3 bucket.
Reference: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Monitoring.html

NEW QUESTION 192


You are building a system to distribute confidential documents to employees. Using CloudFront, what method could be used to serve content that is stored in S3,
but not publicly accessible from S3 directly?

A. Add the CloudFront account security group "amazon-cf/amazon-cf-sg" to the appropriate S3 bucket policy.
B. Create a S3 bucket policy that lists the CloudFront distribution ID as the Principal and the target bucket as the Amazon Resource Name (ARN).
C. Create an Identity and Access Management (IAM) User for CloudFront and grant access to the objects in your S3 bucket to that IAM User.
D. Create an Origin Access Identity (OAI) for CloudFront and grant access to the objects in your S3 bucket to that OAI.

Answer: D

Explanation: You restrict access to Amazon S3 content by creating an origin access identity, which is a special CloudFront user. You change Amazon S3
permissions to give the origin access identity permission to access your objects, and to remove permissions from everyone else. When your users access your
Amazon S3 objects using CloudFront URLs, the CloudFront origin access identity gets the objects on your users' behalf. If your users try to access objects using
Amazon S3 URLs, they're denied access. The origin access identity has permission to access objects in your Amazon S3 bucket, but users don't.
Reference: http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html
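
A minimal boto3 sketch of the bucket-policy side of this setup (the bucket name and OAI ID are illustrative placeholders); it grants s3:GetObject to the origin access identity and nothing else:

import json
import boto3

oai_id = "E1EXAMPLE0000"  # illustrative Origin Access Identity ID
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        # Principal format for a CloudFront origin access identity.
        "Principal": {"AWS": f"arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity {oai_id}"},
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::confidential-docs/*",
    }],
}
boto3.client("s3").put_bucket_policy(Bucket="confidential-docs", Policy=json.dumps(policy))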

NEW QUESTION 194


A user has attached 1 EBS volume to a VPC instance. The user wants to achieve the best fault tolerance of data possible. Which of the below mentioned options
can help achieve fault tolerance?

A. Attach one more volume with RAID 1 configuration.


B. Attach one more volume with RAID 0 configuration.
C. Connect multiple volumes and stripe them with RAID 6 configuration.
D. Use the EBS volume as a root device.

Answer: A

Explanation: The user can join multiple provisioned IOPS volumes together in a RAID 1 configuration to achieve better fault tolerance. RAID 1 does not provide a
write performance improvement; it requires more bandwidth than non-RAID configurations since the data is written simultaneously to multiple volumes.
Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/raid-config.html

NEW QUESTION 198


A user is aware that a huge download is occurring on his instance. He has already set the Auto Scaling policy to increase the instance count when the network I/O
increases beyond a certain limit. How can the user ensure that this temporary event does not result in scaling?

A. The network I/O are not affected during data download


B. The policy cannot be set on the network I/O
C. There is no way the user can stop scaling as it is already configured
D. Suspend scaling

Answer: D

Explanation: The user may want to stop the automated scaling processes on the Auto Scaling groups either to perform manual operations or during emergency
situations. To perform this, the user can suspend one or more scaling processes at any time. Once it is completed, the user can resume all the suspended
processes.
Reference: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/AS_Concepts.html
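
A minimal boto3 sketch (the group name and the chosen process are illustrative placeholders) of suspending alarm-driven scaling on an Auto Scaling group for the duration of the download, then resuming it:

import boto3

autoscaling = boto3.client("autoscaling")

# Suspend alarm-driven scaling so the temporary network I/O spike does not
# launch new instances; other Auto Scaling processes keep running.
autoscaling.suspend_processes(
    AutoScalingGroupName="web-asg",
    ScalingProcesses=["AlarmNotification"],
)

# ... once the download finishes, resume the suspended process.
autoscaling.resume_processes(
    AutoScalingGroupName="web-asg",
    ScalingProcesses=["AlarmNotification"],
)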

NEW QUESTION 203


Select a true statement about Amazon EC2 Security Groups (EC2-Classic).

A. After you launch an instance in EC2-Classic, you can't change its security groups.
B. After you launch an instance in EC2-Classic, you can change its security groups only once.
C. After you launch an instance in EC2-Classic, you can only add rules to a security group.
D. After you launch an instance in EC2-Classic, you cannot add or remove rules from a security group.

Answer: A

Explanation: After you launch an instance in EC2-Classic, you can't change its security groups. However, you can add rules to or remove rules from a security
group, and those changes are automatically applied to all instances that are associated with the security group.
Reference: http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/using-network-security.html

NEW QUESTION 206


A user has created photo editing software and hosted it on EC2. The software accepts requests from the user about the photo format and resolution and sends a


message to S3 to enhance the picture accordingly. Which of the below mentioned AWS services will help build scalable software with the AWS infrastructure in
this scenario?

A. AWS Simple Notification Service


B. AWS Simple Queue Service
C. AWS Elastic Transcoder
D. AWS Glacier

Answer: B

Explanation: Amazon Simple Queue Service (SQS) is a fast, reliable, scalable, and fully managed message queuing service. SQS provides a simple and cost-
effective way to decouple the components of an application. The user can configure SQS, which will decouple the call between the EC2 application and S3. Thus,
the application does not keep waiting for S3 to provide the data.
Reference: http://aws.amazon.com/sqs/faqs/
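
A minimal boto3 sketch (the queue name and message fields are illustrative placeholders) of how the EC2 application could enqueue a photo-processing request and return immediately, leaving a worker to consume the queue and talk to S3:

import json
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.create_queue(QueueName="photo-jobs")["QueueUrl"]

# The web tier enqueues the work item instead of waiting on S3; a separate
# worker process reads the queue and performs the enhancement.
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody=json.dumps({"format": "jpeg", "resolution": "1080p", "s3_key": "uploads/photo-1.raw"}),
)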

NEW QUESTION 210


Which one of the following answers is not a possible state of Amazon CloudWatch Alarm?

A. INSUFFICIENT_DATA
B. ALARM
C. OK
D. STATUS_CHECK_FAILED

Answer: D

Explanation: Amazon CloudWatch Alarms have three possible states:
OK: The metric is within the defined threshold
ALARM: The metric is outside of the defined threshold
INSUFFICIENT_DATA: The alarm has just started, the metric is not available, or not enough data is available for the metric to determine the alarm state
Reference: http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/AlarmThatSendsEmail.html

NEW QUESTION 212


A client of yours has a huge amount of data stored on Amazon S3, but is concerned about someone stealing it while it is in transit. You know that all data is
encrypted in transit on AWS, but which of the following is wrong when describing server-side encryption on AWS?

A. Amazon S3 server-side encryption employs strong multi-factor encryption.


B. Amazon S3 server-side encryption uses one of the strongest block ciphers available, 256-bit Advanced Encryption Standard (AES-256), to encrypt your data.
C. In server-side encryption, you manage encryption/decryption of your data, the encryption keys, and related tools.
D. Server-side encryption is about data encryption at rest—that is, Amazon S3 encrypts your data as it writes it to disks.

Answer: C

Explanation: Amazon S3 encrypts your object before saving it on disks in its data centers and decrypts it when you download the objects. You have two options
depending on how you choose to manage the encryption keys: Server-side encryption and client-side encryption.
Server-side encryption is about data encryption at rest—that is, Amazon S3 encrypts your data as it writes it to disks in its data centers and decrypts it for you when
you access it. As long as you authenticate your request and you have access permissions, there is no difference in the way you access encrypted or unencrypted
objects. Amazon S3 manages encryption and decryption for you. For example, if you share your objects using a pre-signed URL, that URL works the same way for
both encrypted and unencrypted objects.
In client-side encryption, you manage encryption/decryption of your data, the encryption keys, and related tools. Server-side encryption is an alternative to client-
side encryption in which Amazon S3 manages the encryption of your data, freeing you from the tasks of managing encryption and encryption keys.
Amazon S3 server-side encryption employs strong multi-factor encryption. Amazon S3 encrypts each object with a unique key. As an additional safeguard, it
encrypts the key itself with a master key that it regularly rotates. Amazon S3 server-side encryption uses one of the strongest block ciphers available, 256-bit
Advanced Encryption Standard (AES-256), to encrypt your data.
Reference: http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingServerSideEncryption.html

NEW QUESTION 216


A user is running a batch process which runs for 1 hour every day. Which of the below mentioned options is the right instance type and costing model in this case if
the user performs the same task for the whole year?

A. EBS backed instance with on-demand instance pricing.


B. EBS backed instance with heavy utilized reserved instance pricing.
C. EBS backed instance with low utilized reserved instance pricing.
D. Instance store backed instance with spot instance pricing.

Answer: A

Explanation: For Amazon Web Services, the reserved instance helps the user save money if the user is going to run the same instance for a longer period.
Generally if the user uses the instances around 30-40% annually it is recommended to use RI. Here as the instance runs only for 1 hour daily it is not
recommended to have RI as it will be costlier. The user should use on-demand with EBS in this case.
Reference: http://aws.amazon.com/ec2/purchasing-options/reserved-instances/

NEW QUESTION 217


Name the disk storage supported by Amazon Elastic Compute Cloud (EC2).

A. None of these
B. Amazon AppStream store


C. Amazon SNS store


D. Amazon Instance Store

Answer: D

Explanation: Amazon EC2 supports the following storage options:
Amazon Elastic Block Store (Amazon EBS)
Amazon EC2 Instance Store
Amazon Simple Storage Service (Amazon S3)
Reference: http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/Storage.html

NEW QUESTION 222


You are signed in as root user on your account but there is an Amazon S3 bucket under your account that you cannot access. What is a possible reason for this?

A. An IAM user assigned a bucket policy to an Amazon S3 bucket and didn't specify the root user as a principal
B. The S3 bucket is full.
C. The S3 bucket has reached the maximum number of objects allowed.
D. You are in the wrong availability zone

Answer: A

Explanation: With IAM, you can centrally manage users, security credentials such as access keys, and permissions that control which AWS resources users can
access.
In some cases, you might have an IAM user with full access to IAM and Amazon S3. If the IAM user assigns a bucket policy to an Amazon S3 bucket and doesn't
specify the root user as a principal, the root user is denied access to that bucket. However, as the root user, you can still access the bucket by modifying the
bucket policy to allow root user access.
Reference: http://docs.aws.amazon.com/IAM/latest/UserGuide/iam-troubleshooting.htmI#testing2

NEW QUESTION 226


You have a number of image files to encode. In an Amazon SQS worker queue, you create an Amazon SQS message for each file specifying the command (jpeg-
encode) and the location of the file in Amazon S3. Which of the following statements best describes the functionality of Amazon SQS?

A. Amazon SQS is a distributed queuing system that is optimized for horizontal scalability, not for single-threaded sending or receiving speeds.
B. Amazon SQS is for single-threaded sending or receiving speeds.
C. Amazon SQS is a non-distributed queuing system.
D. Amazon SQS is a distributed queuing system that is optimized for vertical scalability and for single-threaded sending or receiving speeds.

Answer: A

Explanation: Amazon SQS is a distributed queuing system that is optimized for horizontal scalability, not for
single-threaded sending or receiving speeds. A single client can send or receive Amazon SQS messages at a rate of about 5 to 50 messages per second. Higher
receive performance can be achieved by requesting multiple messages (up to 10) in a single call. It may take several seconds before a message that has been sent to
a queue is available to be received.
Reference: http://media.amazonwebservices.com/AWS_Storage_Options.pdf
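
A minimal boto3 sketch of the batching the explanation mentions (the queue URL is an illustrative placeholder), requesting up to 10 messages in a single call:

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/encode-jobs"  # placeholder

# Requesting up to 10 messages per call improves receive throughput compared
# with fetching one message at a time.
response = sqs.receive_message(
    QueueUrl=QUEUE_URL,
    MaxNumberOfMessages=10,
    WaitTimeSeconds=10,   # long polling
)
for message in response.get("Messages", []):
    print(message["Body"])
    sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=message["ReceiptHandle"])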

NEW QUESTION 230


A scope has been handed to you to set up a super fast gaming server and you decide that you will use Amazon DynamoDB as your database. For efficient access
to data in a table, Amazon DynamoDB creates and maintains indexes for the primary key attributes. A secondary index is a data structure that contains a subset of
attributes from a table, along with an alternate key to support Query operations. How many types of secondary indexes does DynamoDB support?

A. 2
B. 16
C. 4
D. As many as you need.

Answer: A

Explanation: DynamoDB supports two types of secondary indexes:


Local secondary index — an index that has the same hash key as the table, but a different range key. A local secondary index is "local" in the sense that every
partition of a local secondary index is scoped to a table partition that has the same hash key.
Global secondary index — an index with a hash and range key that can be different from those on the table. A global secondary index is considered "global"
because queries on the index can span all of the data in a table, across all partitions.
Reference: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/SecondaryIndexes.html

NEW QUESTION 232


Select the correct statement: Within Amazon EC2, when using Linux instances, the device name
/dev/sda1 is .

A. reserved for EBS volumes


B. recommended for EBS volumes
C. recommended for instance store volumes
D. reserved for the root device

Answer: D

Explanation: Within Amazon EC2, when using a Linux instance, the device name /dev/sda1 is reserved for the root device.


Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/device_naming.html

NEW QUESTION 235


The common use cases for DynamoDB Fine-Grained Access Control (FGAC) are cases in which the end user wants .

A. to change the hash keys of the table directly


B. to check if an IAM policy requires the hash keys of the tables directly
C. to read or modify any codecommit key of the table directly, without a middle-tier service
D. to read or modify the table directly, without a middle-tier service

Answer: D

Explanation: FGAC can benefit any application that tracks information in a DynamoDB table, where the end user (or application client acting on behalf of an end
user) wants to read or modify the table directly, without a middle-tier service. For instance, a developer of a mobile app named Acme can use FGAC to track the
top score of every Acme user in a DynamoDB table. FGAC allows the application client to modify only the top score for the user that is currently running the
application.
Reference: http://aws.amazon.com/dynamodb/faqs/#security_anchor

NEW QUESTION 237


A user has set up the CloudWatch alarm on the CPU utilization metric at 50%, with a time interval of 5 minutes and 10 periods to monitor. What will be the state of
the alarm at the end of 90 minutes, if the CPU utilization is constant at 80%?

A. ALERT
B. ALARM
C. OK
D. INSUFFICIENT_DATA

Answer: B

Explanation: In this case the alarm watches a metric every 5 minutes for 10 intervals. Thus, it needs at least 50 minutes to come to the "OK" state.
Till then it will be in the INSUFFICIENT_DATA state.
Since 90 minutes have passed and CPU utilization is at a constant 80%, the state of the alarm will be "ALARM".
Reference: http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/AlarmThatSendsEmail.html
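
A minimal boto3 sketch of an alarm matching the question's settings (the alarm name and instance ID are illustrative placeholders): CPUUtilization, 50% threshold, 5-minute period, 10 evaluation periods:

import boto3

cloudwatch = boto3.client("cloudwatch")
cloudwatch.put_metric_alarm(
    AlarmName="cpu-over-50",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
    Statistic="Average",
    Period=300,                 # 5-minute interval
    EvaluationPeriods=10,       # 10 periods to monitor
    Threshold=50.0,
    ComparisonOperator="GreaterThanThreshold",
)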

NEW QUESTION 241


You need to set up security for your VPC and you know that Amazon VPC provides two features that you can use to increase security for your VPC: security
groups and network access control lists (ACLs). You have already looked into security groups and you are now trying to understand ACLs. Which statement below
is incorrect in relation to ACLs?

A. Supports allow rules and deny rules.


B. Is stateful: Return traffic is automatically allowed, regardless of any rules.
C. Processes rules in number order when deciding whether to allow traffic.
D. Operates at the subnet level (second layer of defense).

Answer: B

Explanation: Amazon VPC provides two features that you can use to increase security for your VPC:
Security groups—Act as a firewall for associated Amazon EC2 instances, controlling both inbound and outbound traffic at the instance level
Network access control lists (ACLs)—Act as a firewall for associated subnets, controlling both inbound and outbound traffic at the subnet level
Security groups are stateful: (Return traffic is automatically allowed, regardless of any rules) Network ACLs are stateless: (Return traffic must be explicitly allowed
by rules)
Reference: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Security.html

NEW QUESTION 243


A user comes to you and wants access to Amazon CloudWatch but only wants to monitor a specific LoadBalancer. Is it possible to give him access to a specific
set of instances or a specific LoadBalancer?

A. No, because you can't use IAM to control access to CloudWatch data for specific resources.
B. Yes. You can use IAM to control access to CloudWatch data for specific resources.
C. No, because you need to be Sysadmin to access CloudWatch data.
D. Yes. Any user can see all CloudWatch data and needs no access rights.

Answer: A

Explanation: Amazon CloudWatch integrates with AWS Identity and Access Management (IAM) so that you can
specify which CloudWatch actions a user in your AWS Account can perform. For example, you could create an IAM policy that gives only certain users in your
organization permission to use GetMetricStatistics. They could then use the action to retrieve data about your cloud resources.
You can't use IAM to control access to CloudWatch data for specific resources. For example, you can't give a user access to CloudWatch data for only a specific
set of instances or a specific LoadBalancer. Permissions granted using IAM cover all the cloud resources you use with CloudWatch. In addition, you can't use IAM
roles with the Amazon CloudWatch command line tools.
Using Amazon CloudWatch with IAM doesn't change how you use CloudWatch. There are no changes to CloudWatch actions, and no new CloudWatch actions
related to users and access control.
Reference: http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/UsingIAM.html


NEW QUESTION 248


You have multiple VPN connections and want to provide secure communication between sites using the AWS VPN CloudHub. Which statement is the most
accurate in describing what you must do to set this up correctly?

A. Create a virtual private gateway with multiple customer gateways, each with unique Border Gateway Protocol (BGP) Autonomous System Numbers (ASNs)
B. Create a virtual private gateway with multiple customer gateways, each with a unique set of keys
C. Create a virtual public gateway with multiple customer gateways, each with a unique Private subnet
D. Create a virtual private gateway with multiple customer gateways, each with unique subnet id

Answer: A

Explanation: If you have multiple VPN connections, you can provide secure communication between sites using the AWS VPN CloudHub. The VPN CloudHub
operates on a simple hub-and-spoke model that you can use with or without a VPC. This design is suitable for customers with multiple branch offices and existing
Internet connections who'd like to implement a convenient, potentially low-cost hub-and-spoke model for primary or backup connectivity between these remote
offices.
To use the AWS VPN CIoudHub, you must create a virtual private gateway with multiple customer
gateways, each with unique Border Gateway Protocol (BGP) Autonomous System Numbers (ASNs). Customer gateways advertise the appropriate routes (BGP
prefixes) over their VPN connections. These routing advertisements are received and re-advertised to each BGP peer, enabling each site to send data to and
receive data from the other sites. The routes for each spoke must have unique ASNs and the sites must not have overlapping IP ranges. Each site can also send
and receive data from the VPC as if they were using a standard VPN connection.
Reference: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPN_CloudHub.html

NEW QUESTION 252


Which one of the below is not an AWS Storage Service?

A. Amazon S3
B. Amazon Glacier
C. Amazon CloudFront
D. Amazon EBS

Answer: C

Explanation: AWS Storage Services are:
Amazon S3
Amazon Glacier
Amazon EBS
AWS Storage Gateway
Reference: https://console.aws.amazon.com/console

NEW QUESTION 255


A user has deployed an application on his private cloud. The user is using his own monitoring tool. He wants to configure it so that whenever there is an error, the
monitoring tool will notify him via SMS. Which of the below mentioned AWS services will help in this scenario?

A. AWS SES
B. AWS SNS
C. None because the user infrastructure is in the private cloud.
D. AWS SMS

Answer: B

Explanation: Amazon Simple Notification Service (Amazon SNS) is a fast, flexible, and fully managed push messaging service. Amazon SNS can be used to
send push notifications to mobile devices. Amazon SNS can
deliver notifications by SMS text message or email, to Amazon Simple Queue Service (SQS) queues, or to any HTTP endpoint. In this case the user can use the
SNS APIs to send SMS.
Reference: http://aws.amazon.com/sns/
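
A minimal boto3 sketch (the phone number and message text are illustrative placeholders) showing how the monitoring tool could publish an SMS alert directly through SNS:

import boto3

sns = boto3.client("sns")
# Publishing directly to a phone number delivers the message as an SMS text.
sns.publish(
    PhoneNumber="+15555550100",   # placeholder number
    Message="ALERT: application error detected on the private cloud host",
)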

NEW QUESTION 259


After setting up an EC2 security group with a cluster of 20 EC2 instances, you find an error in the security group settings. You quickly make changes to the security
group settings. When will the changes to the settings be effective?

A. The settings will be effective immediately for all the instances in the security group.
B. The settings will be effective only when all the instances are restarted.
C. The settings will be effective for all the instances only after 30 minutes.
D. The settings will be effective only for the new instances added to the security group.

Answer: A

Explanation: Changes to a security group's rules are applied immediately and automatically to all running instances that are associated with that security group; no restart is required.
Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html

NEW QUESTION 261


You have a lot of data stored in the AWS Storage Gateway and your manager has come to you asking about how the billing is calculated, specifically the Virtual
Tape Shelf usage. What would be a correct response to this?


A. You are billed for the virtual tape data you store in Amazon Glacier and are billed for the size of the virtual tape.
B. You are billed for the virtual tape data you store in Amazon Glacier and billed for the portion of virtual tape capacity that you use, not for the size of the virtual
tape.
C. You are billed for the virtual tape data you store in Amazon S3 and billed for the portion of virtual tape capacity that you use, not for the size of the virtual tape.
D. You are billed for the virtual tape data you store in Amazon S3 and are billed for the size of the virtual tape.

Answer: B

Explanation: The AWS Storage Gateway is a service connecting an on-premises software appliance with cloud-based storage to provide seamless and secure
integration between an organization’s on-premises IT environment and AWS’s storage infrastructure.
AWS Storage Gateway billing is as follows. Volume storage usage (per GB per month):
You are billed for the Cached volume data you store in Amazon S3. You are only billed for volume capacity you use, not for the size of the volume you create.
Snapshot Storage usage (per GB per month): You are billed for the snapshots your gateway stores in Amazon S3. These snapshots are stored and billed as
Amazon EBS snapshots. Snapshots are incremental backups, reducing your storage charges. When taking a new snapshot, only the data that has changed since
your last snapshot is stored.
Virtual Tape Library usage (per GB per month):
You are billed for the virtual tape data you store in Amazon S3. You are only billed for the portion of virtual tape capacity that you use, not for the size of the virtual
tape.
Virtual Tape Shelf usage (per GB per month):
You are billed for the virtual tape data you store in Amazon Glacier. You are only billed for the portion of virtual tape capacity that you use, not for the size of the
virtual tape.
Reference: https://aws.amazon.com/storagegateway/faqs/

NEW QUESTION 266


You are configuring a new VPC for one of your clients for a cloud migration project, and only a public VPN will be in place. After you created your VPC, you
created a new subnet, a new internet gateway, and attached your internet gateway to your VPC. When you launched your first instance into your VPC, you
realized that you aren't able to connect to the instance, even if it is configured with an elastic IP. What should be done to access the instance?

A. A route should be created as 0.0.0.0/0 and your internet gateway as target.


B. Attach another ENI to the instance and connect via new ENI.
C. A NAT instance should be created and all traffic should be forwarded to NAT instance.
D. A NACL should be created that allows all outbound traffic.

Answer: A

Explanation: All Internet-bound traffic should be routed via the Internet Gateway. So, a route should be created with 0.0.0.0/0 as the destination and your Internet Gateway as the target.
Reference: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Scenario1.html
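
A minimal boto3 sketch (the route table and gateway IDs are illustrative placeholders) of adding that default route to the subnet's route table:

import boto3

ec2 = boto3.client("ec2")
# Send all traffic that does not match a more specific route (0.0.0.0/0)
# to the Internet Gateway attached to the VPC.
ec2.create_route(
    RouteTableId="rtb-0abc1234567890def",   # placeholder
    DestinationCidrBlock="0.0.0.0/0",
    GatewayId="igw-0abc1234567890def",      # placeholder
)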

NEW QUESTION 270


A user is currently building a website which will require a large number of instances in six months, when a demonstration of the new site will be given upon launch.
Which of the below mentioned options allows the user to procure the resources beforehand so that they need not worry about infrastructure availability during the
demonstration?

A. Procure all the instances as reserved instances beforehand.


B. Launch all the instances as part of the cluster group to ensure resource availability.
C. Pre-warm all the instances one month prior to ensure resource availability.
D. Ask AWS now to procure the dedicated instances in 6 months.

Answer: A

Explanation: Amazon Web Services has massive hardware resources at its data centers, but they are finite. The best way for users to maximize their access to
these resources is by reserving a portion of the computing capacity that they require. This can be done through reserved instances. With reserved instances, the
user literally reserves the computing capacity in the Amazon Web Services cloud.
Reference: http://media.amazonwebservices.com/AWS_Building_Fault_Tolerant_Applications.pdf

NEW QUESTION 274


You receive a bill from AWS but are confused because you see you are incurring different costs for the exact same storage size in different regions on Amazon S3.
You ask AWS why this is so. What response would you expect to receive from AWS?

A. We charge less in different time zones.


B. We charge less where our costs are less.
C. This will balance out next bill.
D. It must be a mistake.

Answer: B

Explanation: Amazon S3 is storage for the internet. It's a simple storage service that offers software developers a highly-scalable, reliable, and low-latency data
storage infrastructure at very low costs.
AWS charges less where their costs are less.
For example, their costs are lower in the US Standard Region than in the US West (Northern California) Region.
Reference: https://aws.amazon.com/s3/faqs/

NEW QUESTION 279


You are setting up some EBS volumes for a customer who has requested a setup which includes a RAID (redundant array of inexpensive disks). AWS has some
recommendations for RAID setups. Which RAID setup is not recommended for Amazon EBS?


A. RAID 5 only
B. RAID 5 and RAID 6
C. RAID 1 only
D. RAID 1 and RAID 6

Answer: B

Explanation: With Amazon EBS, you can use any of the standard RAID configurations that you can use with a traditional bare metal server, as long as that
particular RAID configuration is supported by the operating system for your instance. This is because all RAID is accomplished at the software level. For greater
I/O performance than you can achieve with a single volume, RAID 0 can stripe multiple volumes together; for on-instance redundancy, RAID 1 can mirror two
volumes together.
RAID 5 and RAID 6 are not recommended for Amazon EBS because the parity write operations of these RAID modes consume some of the IOPS available to your
volumes.
Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/raid-config.html

NEW QUESTION 280


What is the default maximum number of Access Keys per user?

A. 10
B. 15
C. 2
D. 20

Answer: C

Explanation: The default maximum number of Access Keys per user is 2.


Reference: http://docs.aws.amazon.com/IAM/latest/UserGuide/LimitationsOnEntities.html

NEW QUESTION 285


Doug has created a VPC with CIDR 10.201.0.0/16 in his AWS account. In this VPC he has created a public subnet with CIDR block 10.201.31.0/24. While
launching a new EC2 from the console, he is not able to assign the private IP address 10.201.31.6 to this instance. Which is the most likely reason for this issue?

A. Private IP address 10.201.31.6 is blocked via ACLs in Amazon infrastructure as a part of platform security.
B. Private address IP 10.201.31.6 is currently assigned to another interface.
C. Private IP address 10.201.31.6 is not part of the associated subnet's IP address range.
D. Private IP address 10.201.31.6 is reserved by Amazon for IP networking purposes.

Answer: B

Explanation: In Amazon VPC, you can assign any private IP address to your instance as long as it is:
Part of the associated subnet's IP address range
Not reserved by Amazon for IP networking purposes
Not currently assigned to another interface
Reference: http://aws.amazon.com/vpc/faqs/

NEW QUESTION 289


You need to create a JSON-formatted text file for AWS CloudFormation. This is your first template and the only thing you know is that the templates include
several major sections but there is only one that is required for it to work. What is the only section required?

A. Mappings
B. Outputs
C. Resources
D. Conditions

Answer: C

Explanation: AWS CloudFormation is a service that helps you model and set up your Amazon Web Services resources so that you can spend less time managing
those resources and more time focusing on your applications that run in AWS. You create a template that describes all the AWS resources that you want (like
Amazon EC2 instances or Amazon RDS DB instances), and AWS CloudFormation takes care of provisioning and configuring those resources for you.
A template is a JSON-formatted text file that describes your AWS infrastructure. Templates include several major sections.
The Resources section is the only section that is required.
The first character in the template must be an open brace ({), and the last character must be a closed brace (}).
Reference: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-anatomy.html

NEW QUESTION 292


You are planning and configuring some EBS volumes for an application. In order to get the most performance out of your EBS volumes, you should attach them to
an instance with enough to support your volumes.

A. Redundancy
B. Storage
C. Bandwidth
D. Memory

Answer: C

Explanation: When you plan and configure EBS volumes for your application, it is important to consider the configuration of the instances that you will attach the


volumes to. In order to get the most performance out of your EBS volumes, you should attach them to an instance with enough bandwidth to support your volumes,
such as an EBS-optimized instance or an instance with 10 Gigabit network connectivity. This is especially important when you use General Purpose (SSD) or
Provisioned IOPS (SSD) volumes, or when you stripe multiple volumes together in a RAID configuration.
Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-ec2-config.html

NEW QUESTION 293


Can a single EBS volume be attached to multiple EC2 instances at the same time?

A. Yes
B. No
C. Only for high-performance EBS volumes.
D. Only when the instances are located in the US region

Answer: B

Explanation: You can't attach an EBS volume to multiple EC2 instances. This is because it is equivalent to using a single hard drive with many computers at the
same time.
Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonEBS.html

NEW QUESTION 297


How long does an AWS free usage tier EC2 last for?

A. Forever
B. 12 Months upon signup
C. 1 Month upon signup
D. 6 Months upon signup

Answer: B

Explanation: The AWS free usage tier will expire 12 months from the date you sign up. When your free usage expires or if your application use exceeds the free
usage tiers, you simply pay the standard, pay-as-you-go service rates.
Reference: http://aws.amazon.com/free/faqs/

NEW QUESTION 301


A user is hosting a website in the US West-1 region. The website has the highest client base from the Asia-Pacific (Singapore / Japan) region. The application is
accessing data from S3 before serving it to client. Which of the below mentioned regions gives a better performance for S3 objects?

A. Japan
B. Singapore
C. US East
D. US West-1

Answer: D

Explanation: Access to Amazon S3 from within Amazon EC2 in the same region is fast. In this aspect, though the client base is Singapore, the application is
being hosted in the US West-1 region. Thus, it is recommended that S3 objects be stored in the US-West-1 region.
Reference: http://media.amazonwebservices.com/AWS_Storage_Options.pdf

NEW QUESTION 305


Which of the following statements is true of tagging an Amazon EC2 resource?

A. You don't need to specify the resource identifier while terminating a resource.
B. You can terminate, stop, or delete a resource based solely on its tags.
C. You can't terminate, stop, or delete a resource based solely on its tags.
D. You don't need to specify the resource identifier while stopping a resource.

Answer: C

Explanation: You can assign tags only to resources that already exist. You can't terminate, stop, or delete a resource based solely on its tags; you must specify
the resource identifier.
Reference: http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/Using_Tags.html

NEW QUESTION 308


You have been asked to tighten up the password policies in your organization after a serious security breach, so you need to consider every possible security
measure. Which of the following is not an account password policy for IAM Users that can be set?

A. Force IAM users to contact an account administrator when the user has allowed his or her password to expire.
B. A minimum password length.
C. Force IAM users to contact an account administrator when the user has entered his password incorrectly.
D. Prevent IAM users from reusing previous passwords.

Answer: C


Explanation: IAM users need passwords in order to access the AWS Management Console. (They do not need passwords if they will access AWS resources
programmatically by using the CLI, AWS SDKs, or the APIs.)
You can use a password policy to do these things:
Set a minimum password length.
Require specific character types, including uppercase letters, lowercase letters, numbers, and non-alphanumeric characters. Be sure to remind your users that passwords are case sensitive.
Allow all IAM users to change their own passwords.
Require IAM users to change their password after a specified period of time (enable password expiration).
Prevent IAM users from reusing previous passwords.
Force IAM users to contact an account administrator when the user has allowed his or her password to expire.
Reference: http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_ManagingPasswordPolicies.html
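For illustration only, a minimal boto3 sketch of setting an account password policy; the specific values are example choices, not requirements:

import boto3

iam = boto3.client("iam")

iam.update_account_password_policy(
    MinimumPasswordLength=12,
    RequireUppercaseCharacters=True,
    RequireLowercaseCharacters=True,
    RequireNumbers=True,
    RequireSymbols=True,
    AllowUsersToChangePassword=True,
    MaxPasswordAge=90,           # enable password expiration
    PasswordReusePrevention=5,   # prevent reuse of the last 5 passwords
    HardExpiry=True,             # expired users must contact an administrator
)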

NEW QUESTION 312


You have three Amazon EC2 instances with Elastic IP addresses in the US East (Virginia) region, and you want to distribute requests across all three IPs evenly
for users for whom US East (Virginia) is the appropriate region.
How many EC2 instances would be sufficient to distribute requests in other regions?

A. 3
B. 9
C. 2
D. 1

Answer: D

Explanation: If your application is running on Amazon EC2 instances in two or more Amazon EC2 regions, and if you have more than one Amazon EC2 instance
in one or more regions, you can use latency-based routing to route traffic to the correct region and then use weighted resource record sets to route traffic to
instances within the region based on weights that you specify.
For example, suppose you have three Amazon EC2 instances with Elastic IP addresses in the US East (Virginia) region and you want to distribute requests across
all three IPs evenly for users for whom US East (Virginia) is the appropriate region. Just one Amazon EC2 instance is sufficient in the other regions, although you
can apply the same technique to many regions at once.
Reference: http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/Tutorials.html

NEW QUESTION 313


A major client who has been spending a lot of money on his internet service provider asks you to set up an AWS Direct Connect connection to try and save him some
money. You know he needs high-speed connectivity. Which connection port speeds are available on AWS Direct Connect?

A. 500Mbps and 1Gbps


B. 1Gbps and 10Gbps
C. 100Mbps and 1Gbps
D. 1Gbps

Answer: B

Explanation: AWS Direct Connect is a network service that provides an alternative to using the internet to utilize AWS cloud services.
Using AWS Direct Connect, data that would have previously been transported over the Internet can now be delivered through a private network connection
between AWS and your datacenter or corporate network.
1Gbps and 10Gbps ports are available. Speeds of 50Mbps, 100Mbps, 200Mbps, 300Mbps, 400Mbps, and 500Mbps can be ordered from any APN partners
supporting AWS Direct Connect.
Reference: https://aws.amazon.com/directconnect/faqs/

NEW QUESTION 317


You have just set up your first Elastic Load Balancer (ELB) but it does not seem to be configured properly. You discover that before you start using ELB, you have
to configure the listeners for your load balancer. Which protocols does ELB use to support the load balancing of applications?

A. HTTP and HTTPS


B. HTTP, HTTPS , TCP, SSL and SSH
C. HTTP, HTTPS , TCP, and SSL
D. HTTP, HTTPS , TCP, SSL and SFTP

Answer: C

Explanation: Before you start using Elastic Load Balancing (ELB), you have to configure the listeners for your load balancer. A listener is a process that listens for
connection requests. It is configured with a protocol and a port number for front-end (client to load balancer) and back-end (load balancer to back-end instance)
connections.
Elastic Load Balancing supports the load balancing of applications using HTTP, HTTPS (secure HTTP), TCP, and SSL (secure TCP) protocols. HTTPS uses
the SSL protocol to establish secure connections over the HTTP layer. You can also use the SSL protocol to establish secure connections over the TCP layer.
The acceptable ports for both HTTPS/SSL and HTTP/TCP connections are 25, 80, 443, 465, 587, and
1024-65535.
Reference:
http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/elb-listener-config.html
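As a hedged sketch only (placeholder load balancer name and Availability Zone), creating a Classic Load Balancer with two of the supported listener protocols:

import boto3

elb = boto3.client("elb", region_name="us-east-1")  # Classic Load Balancer API

# Each listener pairs a front-end protocol/port with a back-end port; only
# HTTP, HTTPS, TCP and SSL are accepted protocols.
elb.create_load_balancer(
    LoadBalancerName="my-web-elb",
    AvailabilityZones=["us-east-1a"],
    Listeners=[
        {"Protocol": "HTTP", "LoadBalancerPort": 80, "InstancePort": 80},
        {"Protocol": "TCP", "LoadBalancerPort": 8443, "InstancePort": 8443},
    ],
)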

NEW QUESTION 318


When does the billing of an Amazon EC2 system begin?

A. It starts when the Status column for your distribution changes from Creating to Deployed.
B. It starts as soon as you click the create instance option on the main EC2 console.
C. It starts when your instance reaches 720 instance hours.
D. It starts when Amazon EC2 initiates the boot sequence of an AMI instance.

Answer: D

Explanation: Billing commences when Amazon EC2 initiates the boot sequence of an AMI instance. Billing ends when the instance terminates, which could occur
through a web services command, by running "shutdown -h", or through instance failure. When you stop an instance, Amazon shuts it down but doesn't charge
hourly usage for a stopped instance, or data transfer fees, but charges for the storage for any Amazon EBS volumes.
Reference: http://aws.amazon.com/ec2/faqs/

NEW QUESTION 323


You have just discovered that you can upload your objects to Amazon S3 using the Multipart Upload API. You start to test it out but are unsure of the benefits that it
would provide. Which of the following is not a benefit of using multipart uploads?

A. You can begin an upload before you know the final object size.
B. Quick recovery from any network issues.
C. Pause and resume object uploads.
D. It's more secure than a normal upload.

Answer: D

Explanation: Multipart upload in Amazon S3 allows you to upload a single object as a set of parts. Each part is a contiguous portion of the object's data. You can
upload these object parts independently and in any order.
If transmission of any part fails, you can re-transmit that part without affecting other parts. After all parts of your object are uploaded, Amazon S3 assembles these
parts and creates the object. In general, when your object size reaches 100 MB, you should consider using multipart uploads instead of uploading the object in a single operation.
Using multipart upload provides the following advantages:
Improved throughput: you can upload parts in parallel to improve throughput.
Quick recovery from any network issues: a smaller part size minimizes the impact of restarting a failed upload due to a network error.
Pause and resume object uploads: you can upload object parts over time. Once you initiate a multipart upload there is no expiry; you must explicitly complete or
abort the multipart upload.
Begin an upload before you know the final object size: you can upload an object as you are creating it.
Reference: http://docs.aws.amazon.com/AmazonS3/latest/dev/uploadobjusingmpu.html
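For illustration only (placeholder bucket and key), a minimal sketch of the low-level multipart calls; every part except the last must be at least 5 MB:

import boto3

s3 = boto3.client("s3")
bucket, key = "my-example-bucket", "videos/large-file.bin"  # placeholders

# Start the upload before the final object size is known.
upload_id = s3.create_multipart_upload(Bucket=bucket, Key=key)["UploadId"]

chunks = [b"a" * (5 * 1024 * 1024), b"tail"]  # parts can be sent in any order
parts = []
for part_number, chunk in enumerate(chunks, start=1):
    resp = s3.upload_part(
        Bucket=bucket, Key=key, UploadId=upload_id,
        PartNumber=part_number, Body=chunk,
    )
    parts.append({"PartNumber": part_number, "ETag": resp["ETag"]})

# A failed part can simply be re-sent without affecting the other parts.
s3.complete_multipart_upload(
    Bucket=bucket, Key=key, UploadId=upload_id,
    MultipartUpload={"Parts": parts},
)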

NEW QUESTION 326


What happens to Amazon EBS root device volumes, by default, when an instance terminates?

A. Amazon EBS root device volumes are moved to IAM.


B. Amazon EBS root device volumes are copied into Amazon RDS.
C. Amazon EBS root device volumes are automatically deleted.
D. Amazon EBS root device volumes remain in the database until you delete them.

Answer: C

Explanation: By default, Amazon EBS root device volumes are automatically deleted when the instance terminates. Reference:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/terminating-instances.html
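As an illustrative sketch only (placeholder AMI ID; the root device name depends on the AMI), overriding the default so the root volume is kept after termination:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    BlockDeviceMappings=[
        # DeleteOnTermination defaults to True for the root EBS volume.
        {"DeviceName": "/dev/xvda", "Ebs": {"DeleteOnTermination": False}},
    ],
)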

NEW QUESTION 327


A gaming company comes to you and asks you to build them infrastructure for their site. They are not sure how big they will be as with all start ups they have
limited money and big ideas. What they do tell you is that if the game becomes successful, like one of their previous games, it may rapidly grow to millions of users
and generate tens (or even hundreds) of thousands of writes and reads per second. After
considering all of this, you decide that they need a fully managed NoSQL database service that provides fast and predictable performance with seamless
scalability. Which of the following databases do you think would best fit their needs?

A. Amazon DynamoDB
B. Amazon Redshift
C. Any non-relational database.
D. Amazon SimpIeDB

Answer: A

Explanation: Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable
performance with seamless scalability. Amazon DynamoDB enables customers to offload the administrative burdens of operating and scaling distributed
databases to AWS, so they don’t have to worry about hardware provisioning, setup and configuration, replication, software patching, or cluster scaling.
Today’s web-based applications generate and consume massive amounts of data. For example, an
online game might start out with only a few thousand users and a light database workload consisting of 10 writes per second and 50 reads per second. However, if
the game becomes successful, it may rapidly grow to millions of users and generate tens (or even hundreds) of thousands of writes and reads per second. It may
also create terabytes or more of data per day. Developing your applications against Amazon DynamoDB enables you to start small and simply dial-up your request
capacity for a table as your requirements scale, without incurring downtime. You pay highly cost-efficient rates for the request capacity you provision, and let
Amazon DynamoDB do the work of partitioning your data and traffic over sufficient server capacity to meet your needs. Amazon DynamoDB does the database
management and administration, and you simply store and request your data. Automatic replication and failover provides built-in fault tolerance, high availability,
and data durability. Amazon DynamoDB gives you the peace of mind that your database is fully managed and can grow with your application requirements.
Reference: http://aws.amazon.com/dynamodb/faqs/
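For illustration only (table and attribute names are made up), a boto3 sketch of starting with modest provisioned capacity and dialling it up later without downtime:

import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

dynamodb.create_table(
    TableName="GameScores",
    AttributeDefinitions=[{"AttributeName": "PlayerId", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "PlayerId", "KeyType": "HASH"}],
    ProvisionedThroughput={"ReadCapacityUnits": 50, "WriteCapacityUnits": 10},
)

# When the game takes off, raise the provisioned request capacity in place.
dynamodb.update_table(
    TableName="GameScores",
    ProvisionedThroughput={"ReadCapacityUnits": 5000, "WriteCapacityUnits": 2000},
)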

NEW QUESTION 332


A favored client needs you to quickly deploy a database that is a relational database service with minimal administration as he wants to spend the least amount of
time administering it. Which database would be the best option?

A. Amazon SimpleDB
B. Your choice of relational AMIs on Amazon EC2 and EBS.
C. Amazon RDS
D. Amazon Redshift

Answer: C

Explanation: Amazon Relational Database Service (Amazon RDS) is a web service that makes it easy to set up, operate, and scale a relational database in the
cloud. It provides cost-efficient and resizable capacity while managing time-consuming database administration tasks, freeing you up to focus on your applications
and business.
Amazon RDS gives you access to the capabilities of a familiar MySQL, Oracle, SQL Server, or PostgreSQL database engine. This means that the code,
applications, and tools you already use today with your existing databases can be used with Amazon RDS. Amazon RDS automatically patches the database
software and backs up your database, storing the backups for a user-defined retention period and enabling point-in-time recovery.
Reference: https://aws.amazon.com/running_databases/#rds_anchor

NEW QUESTION 333


How many types of block devices does Amazon EC2 support?

A. 4
B. 5
C. 2
D. 1

Answer: C

Explanation: Amazon EC2 supports 2 types of block devices: instance store volumes and EBS volumes. Reference:


http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/block-device-mapping-concepts.html

NEW QUESTION 337


You are setting up some IAM user policies and have also become aware that some services support resource-based permissions, which let you attach policies to
the service's resources instead of to IAM users or groups. Which of the below statements is true in regards to resource-level permissions?

A. All services support resource-level permissions for all actions.


B. Resource-level permissions are supported by Amazon CloudFront
C. All services support resource-level permissions only for some actions.
D. Some services support resource-level permissions only for some actions.

Answer: D

Explanation: AWS Identity and Access Management is a web service that enables Amazon Web Services (AWS) customers to manage users and user
permissions in AWS. The service is targeted at organizations with multiple users or systems that use AWS products such as Amazon EC2, Amazon RDS, and the
AWS Management Console. With IAM, you can centrally manage users, security credentials such as access keys, and permissions that control which AWS
resources users can access.
In addition to supporting IAM user policies, some services support resource-based permissions, which let you attach policies to the service's resources instead of
to IAM users or groups. Resource-based permissions are supported by Amazon S3, Amazon SNS, and Amazon SQS.
Where a service supports resource-level permissions, it supports IAM policies in which you can specify individual resources using Amazon Resource Names (ARNs) in the policy's
Resource element.
Some services support resource-level permissions only for some actions.
Reference: http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_SpecificProducts.html

NEW QUESTION 338


In relation to AWS CloudHSM, High-availability (HA) recovery is hands-off resumption by failed HA group members.
Prior to the introduction of this function, the HA feature provided redundancy and performance, but required that a failed/lost group member be _______ reinstated.

A. automatically
B. periodically
C. manually
D. continuosly

Answer: C

Explanation: In relation to AWS CloudHSM, High-availability (HA) recovery is hands-off resumption by failed HA group members.
Prior to the introduction of this function, the HA feature provided redundancy and performance, but required that a failed/lost group member be manually
reinstated.
Reference: http://docs.aws.amazon.com/cloudhsm/latest/userguide/ha-best-practices.html

NEW QUESTION 342


Any person or application that interacts with AWS requires security credentials. AWS uses these credentials to identify who is making the call and whether to allow
the requested access. You have just set up a VPC network for a client and you are now thinking about the best way to secure this network. You set up a security
group called vpcsecuritygroup. Which of the following statements is true in respect to the initial settings that will be applied to this security group if you choose to use the
default settings for this group?

A. Allow all inbound traffic and allow no outbound traffic.


B. Allow no inbound traffic and allow all outbound traffic.
C. Allow inbound traffic on port 80 only and allow all outbound traffic.

D. Allow all inbound traffic and allow all outbound traffic.

Answer: B

Explanation: Amazon VPC provides advanced security features such as security groups and network access control lists to enable inbound and outbound filtering
at the instance level and subnet level.
AWS assigns each security group a unique ID in the form sg-xxxxxxxx. The following are the initial settings for a security group that you create:
Allow no inbound traffic.
Allow all outbound traffic.
Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html

NEW QUESTION 343


Having just set up your first Amazon Virtual Private Cloud (Amazon VPC) network, which defined a default network interface, you decide that you need to create
and attach an additional network interface, known as an elastic network interface (ENI) to one of your instances. Which of the following statements is true
regarding attaching network interfaces to your instances in your VPC?

A. You can attach 5 ENIs per instance type.


B. You can attach as many ENIs as you want.
C. The number of ENIs you can attach varies by instance type.
D. You can attach 100 ENIs total regardless of instance type.

Answer: C

Explanation: Each instance in your VPC has a default network interface that is assigned a private IP address from the IP address range of your VPC. You can
create and attach an additional network interface, known as an elastic network interface (ENI), to any instance in your VPC. The number of ENIs you can attach
varies by instance type.

NEW QUESTION 347


A _______ for a VPC is a collection of subnets (typically private) that you may want to designate for your backend RDS DB Instances.

A. DB Subnet Set
B. RDS Subnet Group
C. DB Subnet Group
D. DB Subnet Collection

Answer: C

Explanation: DB Subnet Groups are a set of subnets (one per Availability Zone of a particular region) designed for your DB instances that reside in a VPC. They
make it easy to manage Multi-AZ deployments as well as the conversion from a Single-AZ to a Multi-AZ one.
Reference: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.RDSVPC.html

NEW QUESTION 349


Amazon Elastic Load Balancing is used to manage traffic on a fleet of Amazon EC2 instances, distributing traffic to instances across all availability zones within a
region. Elastic Load Balancing has all the advantages of an on-premises load balancer, plus several security benefits.
Which of the following is not an advantage of ELB over an on-premise load balancer?

A. ELB uses a four-tier, key-based architecture for encryption.


B. ELB offers clients a single point of contact, and can also serve as the first line of defense against attacks on your network.
C. ELB takes over the encryption and decryption work from the Amazon EC2 instances and manages it centrally on the load balancer.
D. ELB supports end-to-end traffic encryption using TLS (previously SSL) on those networks that use secure HTTP (HTTPS) connections.

Answer: A

Explanation: Amazon Elastic Load Balancing is used to manage traffic on a fleet of Amazon EC2 instances, distributing traffic to instances across all availability
zones within a region. Elastic Load Balancing has all the advantages of an on-premises load balancer, plus several security benefits:
Takes over the encryption and decryption work from the Amazon EC2 instances and manages it centrally on the load balancer.
Offers clients a single point of contact, and can also serve as the first line of defense against attacks on your network.
When used in an Amazon VPC, supports creation and management of security groups associated with your Elastic Load Balancing to provide additional
networking and security options.
Supports end-to-end traffic encryption using TLS (previously SSL) on those networks that use secure HTTP (HTTPS) connections. When TLS is used, the TLS
server certificate used to terminate client connections can be managed centrally on the load balancer, rather than on every individual instance. Reference:
http://d0.awsstatic.com/whitepapers/Security/AWS%20Security%20Whitepaper.pdf

NEW QUESTION 350


You have set up an S3 bucket with a number of images in it and you have decided that you want anybody to be able to access these images, even anonymous
users. To accomplish this you create a bucket policy. You will need to use an Amazon S3 bucket policy that specifies a _______ in the principal element,
which means anyone can access the bucket.

A. hash tag (#)


B. anonymous user
C. wildcard (*)
D. S3 user

Answer: C

Explanation:

You can use the AWS Policy Generator to create a bucket policy for your Amazon S3 bucket. You can then use the generated document to set your
bucket policy by using the Amazon S3 console, by a number of third-party tools, or via your application.
You use an Amazon S3 bucket policy that specifies a wildcard (*) in the principal element, which means anyone can access the bucket. With anonymous access,
anyone (including users without an AWS account) will be able to access the bucket.
Reference: http://docs.aws.amazon.com/IAM/latest/UserGuide/iam-troubleshooting.html#d0e20565
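As a hedged sketch only (the bucket name is a placeholder), a policy with a wildcard principal that allows anonymous reads of the images:

import boto3, json

s3 = boto3.client("s3")
bucket = "my-public-images"  # placeholder bucket name

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "PublicReadGetObject",
        "Effect": "Allow",
        "Principal": "*",            # the wildcard means anyone, including anonymous users
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{bucket}/*",
    }],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))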

NEW QUESTION 354


You have been asked to build AWS infrastructure for disaster recovery for your local applications and within that you should use an AWS Storage Gateway as part
of the solution. Which of the following best describes the function of an AWS Storage Gateway?

A. Accelerates transferring large amounts of data between the AWS cloud and portable storage devices .
B. A web service that speeds up distribution of your static and dynamic web content.
C. Connects an on-premises software appliance with cloud-based storage to provide seamless and secure integration between your on-premises IT environment
and AWS's storage infrastructure.
D. Is a storage service optimized for infrequently used data, or "cold data."

Answer: C

Explanation: AWS Storage Gateway connects an on-premises software appliance with cloud-based storage to provide seamless integration with data security
features between your on-premises IT environment and the Amazon Web Services (AWS) storage infrastructure. You can use the service to store data in the AWS
cloud for scalable and cost-effective storage that helps maintain data security. AWS Storage Gateway offers both volume-based and tape-based storage solutions:
Volume gateways Gateway-cached volumes Gateway-stored volumes
Gateway-virtual tape library (VTL)
Reference:
http://media.amazonwebservices.com/architecturecenter/AWS_ac_ra_disasterrecovery_07.pdf

NEW QUESTION 357


A government client needs you to set up secure cryptographic key storage for some of their extremely confidential data. You decide that AWS CloudHSM is
the best service for this. However, there seem to be a few pre-requisites before this can happen, one of those being a security group that has certain ports open.
Which of the following is correct in regards to those security groups?

A. A security group that has port 22 (for SSH) or port 3389 (for RDP) open to your network.
B. A security group that has no ports open to your network.
C. A security group that has only port 3389 (for RDP) open to your network.
D. A security group that has only port 22 (for SSH) open to your network.

Answer: A

Explanation: AWS CloudHSM provides secure cryptographic key storage to customers by making hardware security modules (HSMs) available in the AWS cloud.
AWS CloudHSM requires the following environment before an HSM appliance can be provisioned:
A virtual private cloud (VPC) in the region where you want the AWS CloudHSM service.
One private subnet (a subnet with no Internet gateway) in the VPC. The HSM appliance is provisioned into this subnet.
One public subnet (a subnet with an Internet gateway attached). The control instances are attached to this subnet.
An AWS Identity and Access Management (IAM) role that delegates access to your AWS resources to AWS CloudHSM.
An EC2 instance, in the same VPC as the HSM appliance, that has the SafeNet client software installed. This instance is referred to as the control instance and is
used to connect to and manage the HSM appliance.
A security group that has port 22 (for SSH) or port 3389 (for RDP) open to your network. This security group is attached to your control instances so you can
access them remotely.

NEW QUESTION 358


In Amazon Elastic Compute Cloud, which of the following is used for communication between instances in the same network (EC2-Classic or a VPC)?

A. Private IP addresses
B. Elastic IP addresses
C. Static IP addresses
D. Public IP addresses

Answer: A

Explanation: A private IP address is an IP address that's not reachable over the Internet. You can use private IP addresses for communication between instances
in the same network (EC2-Classic or a VPC). Reference:
http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/using-instance-addressing.html

NEW QUESTION 362


A friend tells you he is being charged $100 a month to host his WordPress website, and you tell him you can move it to AWS for him and he will only pay a fraction
of that, which makes him very happy. He then tells you he is being charged $50 a month for the domain, which is registered with the same people that
set it up, and he asks if it's possible to move that to AWS as well. You tell him you aren't sure, but will look into it. Which of the following statements is true in
regards to transferring domain names to AWS?

A. You can't transfer existing domains to AWS.


B. You can transfer existing domains into Amazon Route 53’s management.
C. You can transfer existing domains via AWS Direct Connect.
D. You can transfer existing domains via AWS Import/Export.

Answer: B

Explanation: With Amazon Route 53, you can create and manage your public DNS records with the AWS Management Console or with an easy-to-use API. If you
need a domain name, you can find an available name and register it using Amazon Route 53. You can also transfer existing domains into Amazon Route 53’s
management.
Reference: http://aws.amazon.com/route53/

NEW QUESTION 364


Are penetration tests allowed as long as they are limited to the customer's instances?

A. Yes, they are allowed but only for selected regions.


B. No, they are never allowed.
C. Yes, they are allowed without any permission.
D. Yes, they are allowed but only with approval.

Answer: D

Explanation: Penetration tests are allowed after obtaining permission from AWS to perform them. Reference: http://aws.amazon.com/security/penetration-testing/

NEW QUESTION 367


A user has created an ELB with the availability zone US-East-1A. The user wants to add more zones to ELB to achieve High Availability. How can the user add
more zones to the existing ELB?

A. The user should stop the ELB and add zones and instances as required
B. The only option is to launch instances in different zones and add to ELB
C. It is not possible to add more zones to the existing ELB
D. The user can add zones on the fly from the AWS console

Answer: D

Explanation: The user has created an Elastic Load Balancer with the availability zone and wants to add more zones to the existing ELB. The user can do so in
two ways:
From the console or CLI, add new zones to ELB;
Launch instances in a separate AZ and add instances to the existing ELB. Reference:
http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/enable-disable-az.html
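For illustration only (placeholder names), adding a zone to an existing Classic Load Balancer on the fly:

import boto3

elb = boto3.client("elb", region_name="us-east-1")  # Classic Load Balancer API

elb.enable_availability_zones_for_load_balancer(
    LoadBalancerName="my-web-elb",
    AvailabilityZones=["us-east-1b"],
)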

NEW QUESTION 370


A user is sending bulk emails using AWS SES. The emails are not reaching some of the targeted audience because they are not authorized by the ISPs. How can
the user ensure that the emails are all delivered?

A. Send an email using DKIM with SES.


B. Send an email using SMTP with SES.
C. Open a ticket with AWS support to get it authorized with the ISP.
D. Authorize the ISP by sending emails from the development account.

Answer: A

Explanation: DomainKeys Identified Mail (DKIM) is a standard that allows senders to sign their email messages, and allows ISPs to use those signatures to verify that
those messages are legitimate and have not been modified by a third party in transit.
Reference: http://docs.aws.amazon.com/ses/latest/DeveloperGuide/dkim.html

NEW QUESTION 375


A user has launched a large EBS backed EC2 instance in the US-East-1a region. The user wants to achieve Disaster Recovery (DR) for that instance by creating
another small instance in Europe. How can the user achieve DR?

A. Copy the instance from the US East region to the EU region


B. Use the "Launch more like this" option to copy the instance from one region to another
C. Copy the running instance using the "|nstance Copy" command to the EU region
D. Create an AMI of the instance and copy the AMI to the EU region, then launch the instance from the EU AMI

Answer: D

Explanation: To launch an EC2 instance it is required to have an AMI in that region. If the AMI is not available in that region, then create a new AMI or use the
copy command to copy the AMI from one region to the other region.
Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/CopyingAMIs.html
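As an illustrative sketch only (AMI ID and names are placeholders), copying the AMI into the destination region and launching the smaller DR instance from the copy:

import boto3

# The copy is requested from the destination (EU) region.
ec2_eu = boto3.client("ec2", region_name="eu-west-1")
copy = ec2_eu.copy_image(
    Name="dr-copy-of-web-ami",
    SourceImageId="ami-0123456789abcdef0",
    SourceRegion="us-east-1",
)

# Wait until the copied AMI is available, then launch the DR instance from it.
ec2_eu.get_waiter("image_available").wait(ImageIds=[copy["ImageId"]])
ec2_eu.run_instances(
    ImageId=copy["ImageId"],
    InstanceType="t2.small",
    MinCount=1,
    MaxCount=1,
)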

NEW QUESTION 380


A user is trying to launch a similar EC2 instance from an existing instance with the option "Launch More like this". The AMI of the selected instance is deleted. What
will happen in this case?

A. AWS does not need an AMI for the "Launch more like this" option
B. AWS will launch the instance but will not create a new AMI
C. AWS will create a new AMI and launch the instance

D. AWS will throw an error saying that the AMI is deregistered

Answer: D

Explanation: If the user has deregistered the AMI of an EC2 instance and is trying to launch a similar instance with the option "Launch more like this", AWS will
throw an error saying that the AMI is deregistered or not available.
Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/launching-instance.html

NEW QUESTION 382


Your company has multiple IT departments, each with their own VPC. Some VPCs are located within the same AWS account, and others in a different AWS
account. You want to peer together all VPCs to enable the IT departments to have full access to each others' resources. There are certain limitations placed on
VPC peering. Which of the following statements is incorrect in relation to VPC peering?

A. Private DNS values cannot be resolved between instances in peered VPCs.


B. You can have up to 3 VPC peering connections between the same two VPCs at the same time.
C. You cannot create a VPC peering connection between VPCs in different regions.
D. You have a limit on the number of active and pending VPC peering connections that you can have per VPC.

Answer: B

Explanation: To create a VPC peering connection with another VPC, you need to be aware of the following limitations and rules:
You cannot create a VPC peering connection between VPCs that have matching or overlapping CIDR blocks.
You cannot create a VPC peering connection between VPCs in different regions.
You have a limit on the number of active and pending VPC peering connections that you can have per VPC.
VPC peering does not support transitive peering relationships; in a VPC peering connection, your VPC will not have access to any other VPCs that the peer VPC may be peered with. This includes VPC peering connections that are established entirely within your own AWS account.
You cannot have more than one VPC peering connection between the same two VPCs at the same time.
The Maximum Transmission Unit (MTU) across a VPC peering connection is 1500 bytes.
A placement group can span peered VPCs; however, you will not get full-bisection bandwidth between instances in peered VPCs.
Unicast reverse path forwarding in VPC peering connections is not supported.
You cannot reference a security group from the peer VPC as a source or destination for ingress or egress rules in your security group. Instead, reference CIDR
blocks of the peer VPC as the source or destination of your security group's ingress or egress rules.
Private DNS values cannot be resolved between instances in peered VPCs. Reference:
http://docs.aws.amazon.com/AmazonVPC/latest/PeeringGuide/vpc-peering-overview.html#vpc-peering-limitations

NEW QUESTION 383


A company wants to review the security requirements of Glacier. Which of the below mentioned statements is true with respect to the AWS Glacier data security?

A. All data stored on Glacier is protected with AES-256 serverside encryption.


B. All data stored on Glacier is protected with AES-128 serverside encryption.
C. The user can set the serverside encryption flag to encrypt the data stored on Glacier.
D. The data stored on Glacier is not encrypted by default.

Answer: A

Explanation: For Amazon Web Services, all the data stored on Amazon Glacier is protected using serverside encryption. AWS generates separate unique
encryption keys for each Amazon Glacier archive, and encrypts it using AES-256. The encryption key is then itself encrypted using AES-256 with a master key that is
stored in a secure location.
Reference: http://media.amazonwebservices.com/AWS_Security_Best_Practices.pdf

NEW QUESTION 388


You are architecting a highly-scalable and reliable web application which will have a huge amount of content. You have decided to use CloudFront as you know it
will speed up distribution of your static and dynamic web content and know that Amazon CloudFront integrates with Amazon CloudWatch metrics so that you can
monitor your web application. Because you live in Sydney you have chosen the Asia Pacific (Sydney) region in the AWS console. However, you have set
this up but no CloudFront metrics seem to be appearing in the CloudWatch console. What is the most likely reason from the possible choices below for this?

A. Metrics for CloudWatch are available only when you choose the same region as the application you are monitoring.
B. You need to pay for CloudWatch for it to become active.
C. Metrics for CloudWatch are available only when you choose the US East (N. Virginia) region.
D. Metrics for CloudWatch are not available for the Asia Pacific region as yet.

Answer: C

Explanation: CloudFront is a global service, and metrics are available only when you choose the US East (N. Virginia) region in the AWS console. If you choose
another region, no CloudFront metrics will appear in the CloudWatch console.
Reference:
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/monitoring-using-cloudwatch.html
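For illustration only, a sketch that lists CloudFront metrics by pointing the CloudWatch client at us-east-1, regardless of the region you normally work in:

import boto3

# CloudFront is global, so its metrics are published in the US East (N. Virginia) region.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

for metric in cloudwatch.list_metrics(Namespace="AWS/CloudFront")["Metrics"]:
    print(metric["MetricName"], metric["Dimensions"])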

NEW QUESTION 392


After a major security breach your manager has requested a report of all users and their credentials in AWS. You discover that in IAM you can generate and
download a credential report that lists all users in your account and the status of their various credentials, including passwords, access keys, MFA devices,
and signing certificates. Which of the following statements is incorrect in regards to the use of credential reports?

A. Credential reports are downloaded XML files.

B. You can get a credential report using the AWS Management Console, the AWS CLI, or the IAM API.
C. You can use the report to audit the effects of credential lifecycle requirements, such as password rotation.
D. You can generate a credential report as often as once every four hours.

Answer: A

Explanation: To access your AWS account resources, users must have credentials.
You can generate and download a credential report that lists all users in your account and the status of their various credentials, including passwords, access
keys, MFA devices, and signing certificates. You can get a credential report using the AWS Management Console, the AWS CLI, or the IAM API.
You can use credential reports to assist in your auditing and compliance efforts. You can use the report to audit the effects of credential lifecycle requirements,
such as password rotation. You can provide the report to an external auditor, or grant permissions to an auditor so that he or she can download the report directly.
You can generate a credential report as often as once every four hours. When you request a report, IAM first checks whether a report for the account has been
generated within the past four hours. If so, the most recent report is downloaded. If the most recent report for the account is more than four hours old, or if there
are no previous reports for the account, IAM generates and downloads a new report.
Credential reports are downloaded as comma-separated values (CSV) files.
You can open CSV files with common spreadsheet software to perform analysis, or you can build an application that consumes the CSV files programmatically and
performs custom analysis. Reference: http://docs.aws.amazon.com/IAM/latest/UserGuide/credential-reports.html

NEW QUESTION 397


A user has configured a website and launched it using the Apache web server on port 80. The user is using ELB with the EC2 instances for Load Balancing. What
should the user do to ensure that the EC2 instances accept requests only from ELB?

A. Configure the security group of EC2, which allows access to the ELB source security group
B. Configure the EC2 instance so that it only listens on the ELB port
C. Open the port for an ELB static IP in the EC2 security group
D. Configure the security group of EC2, which allows access only to the ELB listener

Answer: A

Explanation: When a user is configuring ELB and registering the EC2 instances with it, ELB will create a source security group. If the user wants to allow traffic
only from ELB, he should remove all the rules set for the other requests and open the port only for the ELB source security group.
Reference:
http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/using-elb-security-groups.html
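As a hedged sketch only (both group IDs are placeholders), allowing port 80 on the instances' security group only from the load balancer's source security group rather than from the whole internet:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.authorize_security_group_ingress(
    GroupId="sg-0aaa1111bbbb2222c",   # security group of the EC2 instances
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 80,
        "ToPort": 80,
        # reference the ELB's source security group instead of an IP range
        "UserIdGroupPairs": [{"GroupId": "sg-0ddd3333eeee4444f"}],
    }],
)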

NEW QUESTION 401


You are playing around with setting up stacks using JSON templates in CloudFormation to try and understand them a little better. You have set up about 5 or 6 but
now start to wonder if you are being charged for these stacks. What is AWS's billing policy regarding stack resources?

A. You are not charged for the stack resources if they are not taking any traffic.
B. You are charged for the stack resources for the time they were operating (even if you deleted the stack right away)
C. You are charged for the stack resources for the time they were operating (but not if you deleted the stack within 60 minutes)
D. You are charged for the stack resources for the time they were operating (but not if you deleted the stack within 30 minutes)

Answer: B

Explanation: A stack is a collection of AWS resources that you can manage as a single unit. In other words, you can create, update, or delete a collection of
resources by creating, updating, or deleting stacks. All the resources in a stack are defined by the stack's AWS CloudFormation template. A stack, for instance,
can include all the resources required to run a web application, such as a web server, a database, and networking rules. If you no longer require that web
application, you can simply delete the stack, and all of its related resources are deleted.
You are charged for the stack resources for the time they were operating (even if you deleted the stack right away).
Reference: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacks.html

NEW QUESTION 403


You have been given a scope to set up an AWS Media Sharing Framework for a new start up photo
sharing company similar to flickr. The first thing that comes to mind about this is that it will obviously need a huge amount of persistent data storage for this
framework. Which of the following storage options would be appropriate for persistent storage?

A. Amazon Glacier or Amazon S3


B. Amazon Glacier or AWS Import/Export
C. AWS Import/Export or Amazon CloudFront
D. Amazon EBS volumes or Amazon S3

Answer: D

Explanation: Persistent storage-If you need persistent virtual disk storage similar to a physical disk drive for files or other data that must persist longer than the
lifetime of a single Amazon EC2 instance, Amazon EBS volumes or Amazon S3 are more appropriate.
Reference: http://media.amazonwebservices.com/AWS_Storage_Options.pdf

NEW QUESTION 405


After deploying a new website for a client on AWS, he asks if you can set it up so that if it fails it can be automatically redirected to a backup website that he has
stored on a dedicated server elsewhere. You are wondering whether Amazon Route 53 can do this. Which statement below is correct in regards to Amazon Route
53?

A. Amazon Route 53 can't help detect an outage. You need to use another service.
B. Amazon Route 53 can help detect an outage of your website and redirect your end users to alternate locations.
C. Amazon Route 53 can help detect an outage of your website but can't redirect your end users to alternate locations.
D. Amazon Route 53 can't help detect an outage of your website, but can redirect your end users to alternate locations.

Answer: B

Explanation: With DNS Failover, Amazon Route 53 can help detect an outage of your website and redirect your end users to alternate locations where your
application is operating properly.
Reference:
http://aws.amazon.com/about-aws/whats-new/2013/02/11/announcing-dns-failover-for-route-53/

NEW QUESTION 410


In Route 53, what does a Hosted Zone refer to?

A. A hosted zone is a collection of geographical load balancing rules for Route 53.
B. A hosted zone is a collection of resource record sets hosted by Route 53.
C. A hosted zone is a selection of specific resource record sets hosted by CloudFront for distribution to Route 53.
D. A hosted zone is the Edge Location that hosts the Route 53 records for a user.

Answer: B

Explanation: A Hosted Zone refers to a collection of resource record sets hosted by Route 53.
Reference: http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/AboutHostedZones.html

NEW QUESTION 415


Which of the following statements is true of Amazon EC2 security groups?

A. You can change the outbound rules for EC2-Classic. Also, you can add and remove rules to a group at any time.
B. You can modify an existing rule in a group. However, you can't add and remove rules to a group.
C. None of the statements are correct.
D. You can't change the outbound rules for EC2-Classic. However, you can add and remove rules to a group at any time.

Answer: D

Explanation: When dealing with security groups, bear in mind that you can freely add and remove rules from a group, but you can't change the outbound rules for
EC2-Classic. If you're using the Amazon EC2 console, you can modify existing rules, and you can copy the rules from an existing security group to a new security
group.
Reference: http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/using-network-security.html

NEW QUESTION 419


All Amazon EC2 instances are assigned two IP addresses at launch. Which are those?

A. 2 Elastic IP addresses
B. A private IP address and an Elastic IP address
C. A public IP address and an Elastic IP address
D. A private IP address and a public IP address

Answer: D

Explanation: In Amazon EC2-Classic every instance is given two IP Addresses: a private IP address and a public IP address
Reference:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-instance-addressing.html#differences

NEW QUESTION 424


Your manager has asked you to set up a public subnet with instances that can send and receive internet traffic, and a private subnet that can't receive traffic
directly from the internet, but can initiate traffic to the internet (and receive responses) through a NAT instance in the public subnet. Hence, the following 3 rules
need to be allowed:
Inbound SSH traffic.
Web servers in the public subnet to read and write to MS SQL servers in the private subnet.
Inbound RDP traffic from the Microsoft Terminal Services gateway in the public subnet.
What are the respective ports that need to be opened for this?

A. Ports 22,1433,3389
B. Ports 21,1433,3389
C. Ports 25,1433,3389
D. Ports 22,1343,3999

Answer: A

Explanation: A network access control list (ACL) is an optional layer of security that acts as a firewall for controlling traffic in and out of a subnet. You might set up
network ACLs with rules similar to your security groups in order to add an additional layer of security to your VPC.
The following ports are recommended by AWS for a single subnet with instances that can receive and send Internet traffic and a private subnet that can't receive

traffic directly from the Internet. However, it can initiate traffic to the Internet (and receive responses) through a NAT instance in the public subnet.
Inbound SSH traffic: port 22.
Web servers in the public subnet to read and write to MS SQL servers in the private subnet: port 1433.
Inbound RDP traffic from the Microsoft Terminal Services gateway in the public subnet: port 3389.
Reference:
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Appendix_NACLs.html#VPC_Appendix_NACLs_Scenario_2

NEW QUESTION 429


A user has launched an EC2 instance. The instance got terminated as soon as it was launched. Which of the below mentioned options is not a possible reason for
this?

A. The user account has reached the maximum volume limit
B. The AMI is missing a required part
C. The snapshot is corrupt
D. The user account has reached the maximum EC2 instance limit

Answer: D

Explanation: When the user account has reached the maximum number of EC2 instances, it will not be allowed to launch an instance. AWS will throw an
‘Instance Limit Exceeded’ error. For all other reasons, such as
"AMI is missing part", "Corrupt Snapshot" or "VoIume limit has reached" it will launch an EC2 instance and then terminate it.
Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_InstanceStraightToTerminated.html

NEW QUESTION 430


Can you encrypt EBS volumes?

A. Yes, you can enable encryption when you create a new EBS volume using the AWS Management Console, API, or CLI.
B. No, you should use a third-party software to perform raw block-level encryption of an EBS volume.
C. Yes, but you must use a third-party API for encrypting data before it's loaded on EBS.
D. Yes, you can encrypt with the special "ebs_encrypt" command through Amazon API

Answer: A

Explanation: With Amazon EBS encryption, you can now create an encrypted EBS volume and attach it to a supported instance type. Data on the volume, disk
I/O, and snapshots created from the volume are then all encrypted. The encryption occurs on the servers that host the EC2 instances, providing encryption of data
as it moves between EC2 instances and EBS storage. EBS encryption is based on the industry standard AES-256 cryptographic algorithm.
To get started, simply enable encryption when you create a new EBS volume using the AWS Management Console, API, or CLI. Amazon EBS encryption is
available for all the latest EC2 instances in all commercially available AWS regions.
Reference:
https://aws.amazon.com/about-aws/whats-new/2014/05/21/Amazon-EBS-encryption-now-available/
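For illustration only (zone and size are examples), enabling encryption at volume creation time:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,            # GiB
    VolumeType="gp2",
    Encrypted=True,      # data at rest, disk I/O and snapshots are encrypted
)
print(volume["VolumeId"], volume["Encrypted"])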

NEW QUESTION 432


A user has created an ELB with Auto Scaling. Which of the below mentioned offerings from ELB helps the
user to stop sending new requests traffic from the load balancer to the EC2 instance when the instance is being deregistered while continuing in-flight requests?

A. ELB sticky session


B. ELB deregistration check
C. ELB auto registration Off
D. ELB connection draining

Answer: D

Explanation: The Elastic Load Balancer connection draining feature causes the load balancer to stop sending new requests to the back-end instances when the
instances are deregistering or become unhealthy, while ensuring that in-flight requests continue to be served.
Reference:
http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/config-conn-drain.html
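As an illustrative sketch only (placeholder load balancer name and an example timeout), enabling connection draining on a Classic Load Balancer:

import boto3

elb = boto3.client("elb", region_name="us-east-1")  # Classic Load Balancer API

elb.modify_load_balancer_attributes(
    LoadBalancerName="my-web-elb",
    LoadBalancerAttributes={
        # keep serving in-flight requests for up to 300 seconds after deregistration
        "ConnectionDraining": {"Enabled": True, "Timeout": 300},
    },
)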

NEW QUESTION 434


While controlling access to Amazon EC2 resources, which of the following acts as a firewall that controls the traffic allowed to reach one or more instances?

A. A security group
B. An instance type
C. A storage cluster
D. An object

Answer: A

Explanation: A security group acts as a firewall that controls the traffic allowed to reach one or more instances. When you launch an instance, you assign it one or
more security groups.
Reference: http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/UsingIAM.html

NEW QUESTION 439


A user is running a webserver on EC2. The user wants to receive an SMS when the EC2 instance utilization is above the threshold limit. Which AWS services

should the user configure in this case?

A. AWS CloudWatch + AWS SQS.
B. AWS CloudWatch + AWS SNS.
C. AWS CloudWatch + AWS SES.
D. AWS EC2 + AWS CloudWatch.

Answer: B

Explanation: Amazon SNS makes it simple and cost-effective to push to mobile devices, such as iPhone, iPad, Android, Kindle Fire, and internet connected smart
devices, as well as pushing to other distributed services. In this case, the user can configure CloudWatch to send an alarm to SNS when the threshold is crossed,
and SNS will trigger an SMS.
Reference: http://aws.amazon.com/sns/
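For illustration only (the instance ID and phone number are placeholders), wiring a CloudWatch CPU alarm to an SNS topic with an SMS subscription:

import boto3

sns = boto3.client("sns", region_name="us-east-1")
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

topic_arn = sns.create_topic(Name="ec2-high-cpu")["TopicArn"]
sns.subscribe(TopicArn=topic_arn, Protocol="sms", Endpoint="+15555550100")

cloudwatch.put_metric_alarm(
    AlarmName="webserver-cpu-above-80",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=1,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[topic_arn],   # the alarm publishes to SNS, which sends the SMS
)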

NEW QUESTION 443


A user is making a scalable web application with compartmentalization. The user wants the log module to be able to be accessed by all the application
functionalities in an asynchronous way. Each module of the application sends data to the log module, and based on the resource availability it will process the logs.
Which AWS service helps this functionality?

A. AWS Simple Queue Service.


B. AWS Simple Notification Service.
C. AWS Simple Workflow Service.
D. AWS Simple Email Service.

Answer: A

Explanation: Amazon Simple Queue Service (SQS) is a highly reliable distributed messaging system for storing messages as they travel between computers. By
using Amazon SQS, developers can simply move data between distributed application components. It is used to achieve compartmentalization or loose coupling.
In this case all the modules will send a message to the logger queue and the data will be processed by queue as per the resource availability.
Reference: http://media.amazonwebservices.com/AWS_Building_Fault_Tolerant_Applications.pdf
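As a hedged sketch only (queue name and message text are made up), decoupling application modules from the log module with a queue:

import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
queue_url = sqs.create_queue(QueueName="app-log-queue")["QueueUrl"]

# Any module can drop a log entry on the queue and carry on.
sqs.send_message(QueueUrl=queue_url, MessageBody="order-service: payment accepted")

# The log module polls and processes messages whenever it has spare capacity.
response = sqs.receive_message(
    QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20
)
for message in response.get("Messages", []):
    print(message["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])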

NEW QUESTION 444


Which one of the following can't be used as an origin server with Amazon CloudFront?

A. A web server running in your infrastructure


B. Amazon S3
C. Amazon Glacier
D. A web server running on Amazon EC2 instances

Answer: C

Explanation: Amazon CloudFront is designed to work with Amazon S3 as your origin server, but customers can also use Amazon CloudFront with origin servers
running on Amazon EC2 instances or with any other custom origin.
Reference: http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/distribution-web.html

NEW QUESTION 447


You have written a CloudFormation template that creates 1 Elastic Load Balancer fronting 2 EC2 Instances. Which section of the template should you edit so that
the DNS of the load balancer is returned upon creation of the stack?

A. Resources
B. Outputs
C. Parameters
D. Mappings

Answer: B

Explanation: You can use AWS CloudFormation's sample templates or create your own templates to describe the AWS resources, and any associated
dependencies or runtime parameters, required to run your application. The Outputs section of a template declares the values, such as the load balancer's DNS name, that are returned when the stack is created.
Reference:
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/outputs-section-structure.html
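For illustration only (resource and stack names are made up), a minimal template fragment showing an Outputs entry that returns the load balancer's DNS name:

import boto3, json

template = {
    "Resources": {
        "WebELB": {
            "Type": "AWS::ElasticLoadBalancing::LoadBalancer",
            "Properties": {
                "AvailabilityZones": ["us-east-1a"],
                "Listeners": [{"LoadBalancerPort": "80",
                               "InstancePort": "80",
                               "Protocol": "HTTP"}],
            },
        },
    },
    "Outputs": {
        "LoadBalancerDNS": {
            "Description": "Public DNS name of the ELB",
            "Value": {"Fn::GetAtt": ["WebELB", "DNSName"]},
        },
    },
}

cloudformation = boto3.client("cloudformation", region_name="us-east-1")
cloudformation.create_stack(StackName="elb-demo", TemplateBody=json.dumps(template))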

NEW QUESTION 452


You have been asked to set up a database in AWS that will require frequent and granular updates. You know that you will require a reasonable amount of storage
space but are not sure of the best option. What is the recommended storage option when you run a database on an instance with the above criteria?

A. Amazon S3
B. Amazon EBS
C. AWS Storage Gateway
D. Amazon Glacier

Answer: B

Explanation: Amazon EBS provides durable, block-level storage volumes that you can attach to a running Amazon EC2 instance. You can use Amazon EBS as a
primary storage device for data that requires frequent and granular updates. For example, Amazon EBS is the recommended storage option when you run a

database on an instance.
Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Storage.html

NEW QUESTION 456


A user has hosted an application on EC2 instances. The EC2 instances are configured with ELB and Auto Scaling. The application server session time out is 2
hours. The user wants to configure connection draining to ensure that all in-flight requests are supported by ELB even though the instance is being deregistered.
What time out period should the user specify for connection draining?

A. 1 hour
B. 30 minutes
C. 5 minutes
D. 2 hours

Answer: A

Explanation: The Elastic Load Balancer connection draining feature causes the load balancer to stop sending new requests to the back-end instances when the
instances are deregistering or become unhealthy, while ensuring that in-flight requests continue to be served. The user can specify a maximum time of 3600
seconds (1 hour) for the load balancer to keep the connections alive before reporting the instance as deregistered. If the user does not specify the maximum
timeout period, by default, the load balancer will close the connections to the deregistering instance after 300 seconds.
Reference:
http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/config-conn-drain.html

NEW QUESTION 461


How can you apply more than 100 rules to a security group for an Amazon EC2-Classic instance?

A. By adding more security groups


B. You need to create a default security group specifying your required rules if you need to use more than 100 rules per security group.
C. By default the Amazon EC2 security groups support 500 rules.
D. You can't add more than 100 rules to security groups for an Amazon EC2 instance.

Answer: D

Explanation: In EC2-Classic, you can associate an instance with up to 500 security groups and add up to 100 rules to a security group.
Reference: http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/using-network-security.html

NEW QUESTION 462


Identify a true statement about the On-Demand instances purchasing option provided by Amazon EC2.

A. Pay for the instances that you use by the hour, with no long-term commitments or up-front payments.
B. Make a low, one-time, up-front payment for an instance, reserve it for a one- or three-year term, and pay a significantly lower hourly rate for these instances.
C. Pay for the instances that you use by the hour, with long-term commitments or up-front payments.
D. Make a high, one-time, up-front payment for an instance, reserve it for a one- or three-year term, and pay a significantly higher hourly rate for these instances.

Answer: A

Explanation: On-Demand instances allow you to pay for the instances that you use by the hour, with no long-term commitments or up-front payments.
Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/reserved-instances-offerings.html

NEW QUESTION 463


Which of the following statements is NOT true about using Elastic IP Address (EIP) in EC2-Classic and EC2-VPC platforms?

A. In the EC2-VPC platform, the Elastic IP Address (EIP) does not remain associated with the instance when you stop it.
B. In the EC2-Classic platform, stopping the instance disassociates the Elastic IP Address (EIP) from it.
C. In the EC2-VPC platform, if you have attached a second network interface to an instance, when you disassociate the Elastic IP Address (EIP) from that
instance, a new public IP address is not assigned to the instance automatically; you'll have to associate an EIP with it manually.
D. In the EC2-Classic platform, if you disassociate an Elastic IP Address (EIP) from the instance, the instance is automatically assigned a new public IP address
within a few minutes.

Answer: A

Explanation: In the EC2-Classic platform, when you associate an Elastic IP Address (EIP) with an instance, the instance's current public IP address is released to
the EC2-Classic public IP address pool. If you disassociate an EIP from the instance, the instance is automatically assigned a new public IP address within a few
minutes. In addition, stopping the instance also disassociates the EIP from it.
But in the EC2-VPC platform, when you associate an EIP with an instance in a default Virtual Private Cloud (VPC), or an instance in which you assigned a public
IP to the eth0 network interface during launch, its current public IP address is released to the EC2-VPC public IP address pool. If you disassociate an
EIP from the instance, the instance is automatically assigned a new public IP address within a few minutes. However, if you have attached a second network
interface to the instance, the instance is not automatically assigned a new public IP address; you'll have to associate an EIP with it manually. The EIP remains
associated with the instance when you stop it.
Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html
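As a small illustration of the EC2-VPC behavior described above, the following boto3 sketch associates and then disassociates an Elastic IP; the instance and allocation IDs are placeholders, not values from the question:

import boto3

# Sketch: associate an EIP with a VPC instance, then disassociate it.
# The IDs below are placeholders.
ec2 = boto3.client("ec2", region_name="us-east-1")
assoc = ec2.associate_address(
    InstanceId="i-0123456789abcdef0",
    AllocationId="eipalloc-0123456789abcdef0",
)
# After disassociation, an instance with a single network interface is
# automatically assigned a new public IP address within a few minutes.
ec2.disassociate_address(AssociationId=assoc["AssociationId"])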

NEW QUESTION 464


You have a Business support plan with AWS. One of your EC2 instances is running Microsoft Windows Server 2008 R2 and you are having problems with the
software. Can you receive support from AWS for this software?


A. Yes
B. No, AWS does not support any third-party software.
C. No, Microsoft Windows Server 2008 R2 is not supported.
D. No, you need to be on the enterprise support plan.

Answer: A

Explanation: Third-party software support is available only to AWS Support customers enrolled for Business or Enterprise Support. Third-party support applies
only to software running on Amazon EC2 and does not extend to assisting with on-premises software. An exception to this is a VPN tunnel configuration running
supported devices for Amazon VPC.
Reference: https://aws.amazon.com/premiumsupport/features/

NEW QUESTION 466


You need to create a load balancer in a VPC network that you are building. You can make your load balancer internal (private) or internet-facing (public). When
you make your load balancer internal, a DNS name will be created, and it will contain the private IP address of the load balancer. An internal load balancer is not
exposed to the internet. When you make your load balancer internet-facing, a DNS name will be created with the public IP address. If you want the Internet-facing
load balancer to be connected to the Internet, where must this load balancer reside?

A. The load balancer must reside in a subnet that is connected to the internet using the internet gateway.
B. The load balancer must reside in a subnet that is not connected to the internet.
C. The load balancer must not reside in a subnet that is connected to the internet.
D. The load balancer must be completely outside of your VPC.

Answer: A

Explanation: When you create an internal Elastic Load Balancer in a VPC, you need to select private subnets that are in the same Availability Zone as your
instances. If the VPC Elastic Load Balancer is to be public facing, you need to create the Elastic Load Balancer in a public subnet. A subnet is a public subnet if it
is attached to an Internet Gateway (IGW) with a defined route to that gateway. Selecting more than one public subnet increases the availability of your Elastic Load
Balancer.
NB - Elastic Load Balancers in EC2-Classic are always Internet-facing load balancers. Reference:
http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/elb-internet-facing-load-balancers.html
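For illustration, a hedged boto3 sketch that creates an Internet-facing Classic Load Balancer in a public subnet; the subnet is assumed to already have a route to an Internet Gateway, and every name and ID below is a placeholder:

import boto3

# Sketch: create an Internet-facing Classic ELB in a public subnet.
# Omitting Scheme="internal" makes the load balancer Internet-facing.
elb = boto3.client("elb", region_name="us-east-1")
elb.create_load_balancer(
    LoadBalancerName="web-public-lb",
    Listeners=[{
        "Protocol": "HTTP",
        "LoadBalancerPort": 80,
        "InstanceProtocol": "HTTP",
        "InstancePort": 80,
    }],
    Subnets=["subnet-0123456789abcdef0"],    # public subnet (routes to an IGW)
    SecurityGroups=["sg-0123456789abcdef0"],
)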

NEW QUESTION 468


You need to develop and run some new applications on AWS and you know that Elastic Beanstalk and CloudFormation can both help as a deployment
mechanism for a broad range of AWS resources. Which of the following statements best describes the differences between Elastic Beanstalk and
CloudFormation?

A. Elastic Beanstalk uses Elastic Load Balancing and CloudFormation doesn't.


B. CloudFormation is faster in deploying applications than Elastic Beanstalk.
C. Elastic Beanstalk is faster in deploying applications than CloudFormation.
D. CloudFormation is much more powerful than Elastic Beanstalk, because you can actually design and script custom resources.

Answer: D

Explanation: These services are designed to complement each other. AWS Elastic Beanstalk provides an environment to easily develop and run applications in
the cloud. It is integrated with developer tools and provides a one-stop experience for you to manage the lifecycle of your applications. AWS CloudFormation is a
convenient deployment mechanism for a broad range of AWS resources. It supports the infrastructure needs of many different types of applications such as
existing enterprise applications, legacy applications, applications built using a variety of AWS resources and container-based solutions (including those built using
AWS Elastic Beanstalk).
AWS CloudFormation introduces two new concepts: the template, a JSON-format, text-based file that describes all the AWS resources you need to deploy to run
your application, and the stack, the set of AWS resources that are created and managed as a single unit when AWS CloudFormation instantiates a template.
Reference: http://aws.amazon.com/cloudformation/faqs/
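To make the template and stack concepts concrete, here is a minimal sketch (boto3, with a placeholder stack name and a single illustrative S3 bucket resource) that instantiates a JSON-format template as a stack:

import json
import boto3

# Sketch: a JSON-format template describing one resource, instantiated as a
# stack. The stack name and resource are placeholders for illustration.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "ExampleBucket": {"Type": "AWS::S3::Bucket"}
    },
}

cfn = boto3.client("cloudformation", region_name="us-east-1")
cfn.create_stack(StackName="example-stack", TemplateBody=json.dumps(template))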

NEW QUESTION 472


Your customer wishes to deploy an enterprise application to AWS which will consist of several web servers, several application servers and a small (50GB) Oracle
database. Information is stored both in the database and the file systems of the various servers. The backup system must support database recovery, whole server
and whole disk restores, and individual file restores with a recovery time of no more than two hours. They have chosen to use RDS Oracle as the database.
Which backup architecture will meet these requirements?

A. Backup RDS using automated daily DB backups. Backup the EC2 instances using AMIs and supplement with file-level backup to S3 using traditional enterprise
backup software to provide file level restore.
B. Backup RDS using a Multi-AZ Deployment. Backup the EC2 instances using AMIs, and supplement by copying file system data to S3 to provide file level restore.
C. Backup RDS using automated daily DB backups. Backup the EC2 instances using EBS snapshots and supplement with file-level backups to Amazon Glacier
using traditional enterprise backup software to provide file level restore.
D. Backup RDS database to S3 using Oracle RMAN. Backup the EC2 instances using AMIs, and supplement with EBS snapshots for individual volume restore.

Answer: A

Explanation: Point-In-Time Recovery


In addition to the daily automated backup, Amazon RDS archives database change logs. This enables you to recover your database to any point in time during the
backup retention period, up to the last five minutes of database usage.
Amazon RDS stores multiple copies of your data, but for Single-AZ DB instances these copies are stored in a single availability zone. If for any reason a Single-AZ
DB instance becomes unusable, you can use point-in-time recovery to launch a new DB instance with the latest restorable data. For more information on working
with point-in-time recovery, go to Restoring a DB Instance to a Specified Time.
Note
Multi-AZ deployments store copies of your data in different Availability Zones for greater levels of data durability. For more information on Multi-AZ deployments,
see High Availability (Multi-AZ).
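A minimal boto3 sketch of the point-in-time recovery path described above; the DB instance identifiers and the restore time are placeholders:

from datetime import datetime, timezone
import boto3

# Sketch: restore an RDS instance to a point in time within the backup
# retention period. Identifiers and the timestamp are placeholders.
rds = boto3.client("rds", region_name="us-east-1")
rds.restore_db_instance_to_point_in_time(
    SourceDBInstanceIdentifier="prod-oracle",
    TargetDBInstanceIdentifier="prod-oracle-restored",
    RestoreTime=datetime(2022, 6, 20, 10, 30, tzinfo=timezone.utc),
)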


NEW QUESTION 477


A customer has a 10 GB AWS Direct Connect connection to an AWS region where they have a web application hosted on Amazon Elastic Compute Cloud (EC2).
The application has dependencies on an on-premises mainframe database that uses a BASE (Basically Available, Soft state, Eventual consistency) rather than an
ACID (Atomicity, Consistency, Isolation, Durability) consistency model.
The application is exhibiting undesirable behavior because the database is not able to handle the volume of writes. How can you reduce the load on your on-
premises database resources in the most
cost-effective way?

A. Use an Amazon Elastic Map Reduce (EMR) S3DistCp as a synchronization mechanism between the on-premises database and a Hadoop cluster on AWS.
B. Modify the application to write to an Amazon SQS queue and develop a worker process to flush the queue to the on-premises database.
C. Modify the application to use DynamoDB to feed an EMR cluster which uses a map function to write to the on-premises database.
D. Provision an RDS read-replica database on AWS to handle the writes and synchronize the two databases using Data Pipeline.

Answer: A

Explanation: Reference: https://aws.amazon.com/blogs/aws/category/amazon-elastic-map-reduce/

NEW QUESTION 478


You have launched an EC2 instance with four (4) 500GB EBS Provisioned IOPS volumes attached. The EC2 instance is EBS-Optimized and supports 500 Mbps
throughput between EC2 and EBS. The four EBS volumes are configured as a single RAID 0 device, and each Provisioned IOPS volume is provisioned with
4,000 IOPS (4,000 16KB reads or writes) for a total of 16,000 random IOPS on the instance. The EC2 instance initially delivers the expected 16,000 IOPS random
read and write performance. Sometime later, in order to increase the total random I/O performance of the instance, you add an additional two 500 GB EBS
Provisioned IOPS volumes to the RAID. Each volume is provisioned to 4,000 IOPS like the original four, for a total of 24,000 IOPS on the EC2 instance. Monitoring
shows that the EC2 instance CPU utilization increased from 50% to 70%, but the total random IOPS measured at the instance level does not increase at all.
What is the problem and a valid solution?

A. Larger storage volumes support higher Provisioned IOPS rates: increase the provisioned volume storage of each of the 6 EBS volumes to 1TB.
B. The EBS-Optimized throughput limits the total IOPS that can be utilized; use an EBS-Optimized instance that provides larger throughput.
C. Small block sizes cause performance degradation, limiting the I/O throughput; configure the instance device driver and file system to use 64KB blocks to
increase throughput.
D. RAID 0 only scales linearly to about 4 devices; use RAID 0 with 4 EBS Provisioned IOPS volumes but increase each Provisioned IOPS EBS volume to 6,000
IOPS.
E. The standard EBS instance root volume limits the total IOPS rate; change the instance root volume to also be a 500GB 4,000 Provisioned IOPS volume.

Answer: E

NEW QUESTION 483


You have recently joined a startup company building sensors to measure street noise and air quality in urban areas. The company has been running a pilot
deployment of around 100 sensors for 3 months. Each sensor uploads 1KB of sensor data every minute to a backend hosted on AWS.
During the pilot, you measured a peak or 10 IOPS on the database, and you stored an average of 3GB of sensor data per month in the database.
The current deployment consists of a load-balanced auto scaled Ingestion layer using EC2 instances and a PostgreSQL RDS database with 500GB standard
storage.
The pilot is considered a success and your CEO has managed to get the attention of some potential investors. The business plan requires a deployment of at least
100K sensors, which needs to be supported by the backend. You also need to store sensor data for at least two years to be able to compare year over year
improvements.
To secure funding, you have to make sure that the platform meets these requirements and leaves room for further scaling. Which setup will meet the
requirements?

A. Add an SQS queue to the ingestion layer to buffer writes to the RDS instance
B. Ingest data into a DynamoDB table and move old data to a Redshift cluster
C. Replace the RDS instance with a 6 node Redshift cluster with 96TB of storage
D. Keep the current architecture but upgrade RDS storage to 3TB and 10K provisioned IOPS

Answer: C

NEW QUESTION 487


Your company is in the process of developing a next generation pet collar that collects biometric information to assist families with promoting healthy lifestyles for
their pets. Each collar will push 30KB of biometric data in JSON format every 2 seconds to a collection platform that will process and analyze the data, providing
health trending information back to the pet owners and veterinarians via a web portal. Management has tasked you to architect the collection platform ensuring the
following requirements are met:
Provide the ability for real-time analytics of the inbound biometric data. Ensure processing of the biometric data is highly durable, elastic and parallel. The results of
the analytic processing should be persisted for data mining.
Which architecture outlined below will meet the initial requirements for the collection platform?

A. Utilize S3 to collect the inbound sensor data, analyze the data from S3 with a daily scheduled Data Pipeline and save the results to a Redshift cluster.
B. Utilize Amazon Kinesis to collect the inbound sensor data, analyze the data with Kinesis clients and save the results to a Redshift cluster using EMR.
C. Utilize SQS to collect the inbound sensor data, analyze the data from SQS with Amazon Kinesis and save the results to a Microsoft SQL Server RDS instance.
D. Utilize EMR to collect the inbound sensor data, analyze the data from EMR with Amazon Kinesis and save the results to DynamoDB.

Answer: B

NEW QUESTION 489


You need a persistent and durable storage to trace call activity of an IVR (Interactive Voice Response) system. Call duration is mostly in the 2-3 minutes
timeframe. Each traced call can be either active or terminated. An external application needs to know each minute the list of currently active calls, which are
usually a few calls/second. But once per month there is a periodic peak up to 1000 calls/second for a few hours. The system is open 24/7 and any downtime
should be avoided.
Historical data is periodically archived to files. Cost saving is a priority for this project.
What database implementation would better fit this scenario, keeping costs as low as possible?

A. Use RDS Multi-AZ with two tables, one for "Active calls" and one for "Terminated calls". In this way the "Active calls" table is always small and effective to
access.
B. Use DynamoDB with a "Calls" table and a Global Secondary Index on an "IsActive" attribute that is present for active calls only. In this way the Global Secondary
Index is sparse and more effective.
C. Use DynamoDB with a "Calls" table and a Global Secondary Index on a "State" attribute that can equal "active" or "terminated". In this way the Global
Secondary Index can be used for all items in the table.
D. Use RDS Multi-AZ with a "CALLS" table and an indexed "STATE" field that can be equal to "ACTIVE" or "TERMINATED". In this way the SQL query is optimized
by the use of the index.

Answer: A

NEW QUESTION 490


You have been asked to design the storage layer for an application. The application requires disk
performance of at least 100,000 IOPS. In addition, the storage layer must be able to survive the loss of an individual disk, EC2 instance, or Availability Zone without
any data loss. The volume you provide must have a capacity of at least 3 TB. Which of the following designs will meet these objectives?

A. Instantiate a c3.8xlarge instance in us-east-1. Provision 4x1TB EBS volumes, attach them to the instance, and configure them as a single RAID 5 volume.
B. Ensure that EBS snapshots are performed every 15 minutes.
C. Instantiate a c3.8xlarge instance in us-east-1. Provision 3x1TB EBS volumes, attach them to the instance, and configure them as a single RAID 0 volume.
D. Ensure that EBS snapshots are performed every 15 minutes.
E. Instantiate an i2.8xlarge instance in us-east-1.
F. Create a RAID 0 volume using the four 800GB SSD ephemeral disks provided with the instance.
G. Provision 3x1TB EBS volumes, attach them to the instance, and configure them as a second RAID 0 volume.
H. Configure synchronous, block-level replication from the ephemeral-backed volume to the EBS-backed volume.
I. Instantiate a c3.8xlarge instance in us-east-1. Provision an AWS Storage Gateway and configure it for 3 TB of storage and 100,000 IOPS.
J. Attach the volume to the instance.
K. Instantiate an i2.8xlarge instance in us-east-1.
L. Create a RAID 0 volume using the four 800GB SSD ephemeral disks provided with the instance.
M. Configure synchronous, block-level replication to an identically configured instance in us-east-1.

Answer: C

NEW QUESTION 491


You would like to create a mirror image of your production environment in another region for disaster recovery purposes. Which of the following AWS resources do
not need to be recreated in the second region? (Choose 2 answers)

A. Route 53 Record Sets


B. IAM Roles
C. Elastic IP Addresses (EIP)
D. EC2 Key Pairs
E. Launch configurations
F. Security Groups

Answer: AC

Explanation: Reference:
http://tech.com/wp-content/themes/optimize/download/AWSDisaster_Recovery.pdf (page 6)

NEW QUESTION 492


An international company has deployed a multi-tier web application that relies on DynamoDB in a single region. For regulatory reasons they need disaster recovery
capability in a separate region with a Recovery Time Objective of 2 hours and a Recovery Point Objective of 24 hours. They should synchronize their data on a
regular basis and be able to provision the web application rapidly using CloudFormation.
The objective is to minimize changes to the existing web application, control the throughput of DynamoDB used for the synchronization of data and synchronize
only the modified elements.
Which design would you choose to meet these requirements?

A. Use AWS Data Pipeline to schedule a DynamoDB cross region copy once a day.
B. Create a "Lastupdated" attribute in your DynamoDB table that would represent the timestamp of the last update and use it as a filter.
C. Use EMR and write a custom script to retrieve data from DynamoDB in the current region using a SCAN operation and push it to DynamoDB in the second
region.
D. Use AWS Data Pipeline to schedule an export of the DynamoDB table to S3 in the current region once a day, then schedule another task immediately after it that
will import data from S3 to DynamoDB in the other region.
E. Send each write also into an SQS queue in the second region; use an auto-scaling group behind the SQS queue to replay the write in the second region.

Answer: A

NEW QUESTION 494


Refer to the architecture diagram above of a batch processing solution using Simple Queue Service (SQS) to set up a message queue between EC2 instances
which are used as batch processors. CloudWatch monitors the number of job requests (queued messages) and an Auto Scaling group adds or deletes
batch servers automatically based on parameters set in CloudWatch alarms. You can use this architecture to implement which of the following features in a cost
effective and efficient manner?


A. Reduce the overall time for executing jobs through parallel processing by allowing a busy EC2 instance that receives a message to pass it to the next instance
in a daisy-chain setup.
B. Implement fault tolerance against EC2 instance failure since messages would remain in SQS and work can continue with recovery of EC2 instances; implement
fault tolerance against SQS failure by backing up messages to S3.
C. Implement message passing between EC2 instances within a batch by exchanging messages through SQS.
D. Coordinate the number of EC2 instances with the number of job requests automatically, thus improving cost effectiveness.
E. Handle high priority jobs before lower priority jobs by assigning a priority metadata field to SQS messages.

Answer: D

Explanation: Reference:
There are cases where a large number of batch jobs may need processing, and where the jobs may need to be re-prioritized.
For example, one such case is one where there are differences between different levels of services for unpaid users versus subscriber users (such as the time
until publication) in services enabling, for example, presentation files to be uploaded for publication from a web browser. When the user uploads a presentation
file, the conversion processes, for example, for publication are performed as batch
processes on the system side, and the file is published after the conversion. It is then necessary to be able to assign the level of priority to the batch processes for
each type of subscriber.
Explanation of the Cloud Solution/Pattern
A queue is used in controlling batch jobs. The queue need only be provided with priority numbers. Job requests are controlled by the queue, and the job requests
in the queue are processed by a batch server. In Cloud computing, a highly reliable queue is provided as a service, which you can use to
structure a highly reliable batch system with ease. You may prepare multiple queues depending on priority levels, with job requests put into the queues depending
on their priority levels, to apply prioritization to batch processes. The performance (number) of batch servers corresponding to a queue must be in accordance with
the priority level thereof.
Implementation
In AWS, the queue service is the Simple Queue Service (SQS). Multiple SQS queues may be prepared to prepare queues for individual priority levels (with a
priority queue and a secondary queue).
Moreover, you may also use the message Delayed Send function to delay process execution. Use SQS to prepare multiple queues for the individual priority levels.
Place those processes to be executed immediately (job requests) in the high priority queue. Prepare numbers of batch servers, for processing the job requests of
the queues, depending on the priority levels.
Queues have a message "Delayed Send" function. You can use this to delay the time for starting a process.
Configuration
Benefits
You can increase or decrease the number of servers for processing jobs to change automatically the processing speeds of the priority queues and secondary
queues.
You can handle performance and service requirements through merely increasing or decreasing the number of EC2 instances used in job processing.
Even if an EC2 were to fail, the messages (jobs) would remain in the queue service, enabling processing to be continued immediately upon recovery of the EC2
instance, producing a system that is robust to failure.
Cautions
Depending on the balance between the number of EC2 instances for performing the processes and the number of messages that are queued, there may be cases
where processing in the secondary queue may be completed first, so you need to monitor the processing speeds in the primary queue and the secondary queue.
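The priority-queue pattern above can be sketched with boto3: a worker drains the primary (high-priority) queue first and falls back to the secondary queue only when the primary is empty. The queue URLs and the process_job function are placeholders, not part of the original solution text:

import boto3

# Sketch of a priority-aware SQS worker: poll the primary queue first and
# fall back to the secondary queue only when the primary is empty.
sqs = boto3.client("sqs", region_name="us-east-1")
PRIMARY_QUEUE = "https://sqs.us-east-1.amazonaws.com/123456789012/jobs-high"
SECONDARY_QUEUE = "https://sqs.us-east-1.amazonaws.com/123456789012/jobs-low"

def process_job(body):
    print("processing", body)  # placeholder for the real batch work

def poll_once():
    for queue_url in (PRIMARY_QUEUE, SECONDARY_QUEUE):
        resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1,
                                   WaitTimeSeconds=5)
        for msg in resp.get("Messages", []):
            process_job(msg["Body"])
            # A message stays in the queue until it is deleted, which is what
            # makes the pattern robust to EC2 instance failure.
            sqs.delete_message(QueueUrl=queue_url,
                               ReceiptHandle=msg["ReceiptHandle"])
            return  # served the highest-priority work that was available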

NEW QUESTION 499


An ERP application is deployed across multiple AZs in a single region. In the event of failure, the Recovery Time Objective (RTO) must be less than 3 hours, and
the Recovery Point Objective (RPO) must be 15 minutes. The customer realizes that data corruption occurred roughly 1.5 hours ago.
What DR strategy could be used to achieve this RTO and RPO in the event of this kind of failure?

A. Take hourly DB backups to S3, with transaction logs stored in S3 every 5 minutes.
B. Use synchronous database master-slave replication between two availability zones.
C. Take hourly DB backups to EC2 instance store volumes with transaction logs stored in S3 every 5 minutes.
D. Take 15 minute DB backups stored in Glacier with transaction logs stored in S3 every 5 minutes.

Answer: A

NEW QUESTION 500


Your startup wants to implement an order fulfillment process for selling a personalized gadget that needs an average of 3-4 days to produce, with some orders
taking up to 6 months. You expect 10 orders per day on your first day, 1,000 orders per day after 6 months and 10,000 orders after 12 months.
Orders coming in are checked for consistency, then dispatched to your manufacturing plant for production, quality control, packaging, shipment and payment
processing. If the product does not meet the quality standards at any stage of the process, employees may force the process to repeat a step. Customers are
notified via email about order status and any critical issues with their orders such as payment failure.
Your base architecture includes AWS Elastic Beanstalk for your website with an RDS MySQL instance for customer data and orders.
How can you implement the order fulfillment process while making sure that the emails are delivered reliably?

A. Add a business process management application to your Elastic Beanstalk app servers and re-use the RDS database for tracking order status; use one of the
Elastic Beanstalk instances to send emails to customers.
B. Use SWF with an Auto Scaling group of activity workers and a decider instance in another Auto Scaling group with min/max=1. Use the decider instance to send
emails to customers.
C. Use SWF with an Auto Scaling group of activity workers and a decider instance in another Auto Scaling group with min/max=1; use SES to send emails to
customers.
D. Use an SQS queue to manage all process tasks. Use an Auto Scaling group of EC2 instances that poll the tasks and execute them.
E. Use SES to send emails to customers.

Answer: C

NEW QUESTION 502


You have deployed a web application targeting a global audience across multiple AWS Regions under the domain name example.com. You decide to use Route53
Latency-Based Routing to serve web requests to users from the region closest to the user. To provide business continuity in the event of server downtime you
configure weighted record sets associated with two web servers in separate Availability Zones per region. During a DR test you notice that when you disable all
web servers in one of the regions, Route53 does not automatically direct all users to the other region. What could be happening? (Choose 2 answers)

A. Latency resource record sets cannot be used in combination with weighted resource record sets.
B. You did not set up an HTTP health check for one or more of the weighted resource record sets associated with the disabled web servers.
C. The value of the weight associated with the latency alias resource record set in the region with the disabled servers is higher than the weight for the other
region.
D. One of the two working web servers in the other region did not pass its HTTP health check.
E. You did not set "Evaluate Target Health" to "Yes" on the latency alias resource record set associated with example.com in the region where you disabled the
servers.

Answer: BE

Explanation: How Health Checks Work in Complex Amazon Route 53 Configurations


Checking the health of resources in complex configurations works much the same way as in simple configurations. However, in complex configurations, you use a
combination of alias resource record sets (including weighted alias, latency alias, and failover alias) and nonalias resource record sets to build a decision tree that
gives you greater control over how Amazon Route 53 responds to requests.
For more information, see How Health Checks Work in Simple Amazon Route 53 Configurations.
For example, you might use latency alias resource record sets to select a region close to a user and use weighted resource record sets for two or more resources
within each region to protect against the failure of a single endpoint or an Availability Zone. The following diagram shows this configuration.
Here's how Amazon EC2 and Amazon Route 53 are configured:
You have Amazon EC2 instances in two regions, us-east-1 and ap-southeast-2. You want Amazon Route 53 to respond to queries by using the resource record
sets in the region that provides the lowest latency for your customers, so you create a latency alias resource record set for each region.
(You create the latency alias resource record sets after you create resource record sets for the individual Amazon EC2 instances.)
Within each region, you have two Amazon EC2 instances. You create a weighted resource record set for each instance. The name and the type are the same for
both of the weighted resource record sets in each region.
When you have multiple resources in a region, you can create weighted or failover resource record sets for your resources. You can also create even more
complex configurations by creating weighted alias or failover alias resource record sets that, in turn, refer to multiple resources.
Each weighted resource record set has an associated health check. The IP address for each health check matches the IP address for the corresponding resource
record set. This isn't required, but it's the most common configuration.
For both latency alias resource record sets, you set the value of Evaluate Target Health to Yes.
You use the Evaluate Target Health setting for each latency alias resource record set to make Amazon Route 53 evaluate the health of the alias targets-the
weighted resource record sets-and respond accordingly.
The preceding diagram illustrates the following sequence of events:
Amazon Route 53 receives a query for example.com. Based on the latency for the user making the request, Amazon Route 53 selects the latency alias resource
record set for the us-east-1 region.
Amazon Route 53 selects a weighted resource record set based on weight. Evaluate Target Health is Yes for the latency alias resource record set, so Amazon
Route 53 checks the health of the selected weighted resource record set.
The health check failed, so Amazon Route 53 chooses another weighted resource record set based on weight and checks its health. That resource record set also
is unhealthy.
Amazon Route 53 backs out of that branch of the tree, looks for the latency alias resource record set with the next-best latency, and chooses the resource record
set for ap-southeast-2.
Amazon Route 53 again selects a resource record set based on weight, and then checks the health of the selected resource record set. The health check passed,
so Amazon Route 53 returns the applicable value in response to the query.
What Happens When You Associate a Health Check with an Alias Resource Record Set?
You can associate a health check with an alias resource record set instead of or in addition to setting the value of Evaluate Target Health to Yes. However, it's
generally more useful if Amazon Route 53 responds to queries based on the health of the underlying resources: the HTTP servers, database servers, and
other resources that your alias resource record sets refer to. For example, suppose the following configuration:
You assign a health check to a latency alias resource record set for which the alias target is a group of weighted resource record sets.
You set the value of Evaluate Target Health to Yes for the latency alias resource record set.
In this configuration, both of the following must be true before Amazon Route 53 will return the applicable value for a weighted resource record set:
The health check associated with the latency alias resource record set must pass.
At least one weighted resource record set must be considered healthy, either because it's associated with a health check that passes or because it's not
associated with a health check. In the latter case, Amazon Route 53 always considers the weighted resource record set healthy.
If the health check for the latency alias resource record set fails, Amazon Route 53 stops responding to queries using any of the weighted resource record sets in
the alias target, even if they're all healthy. Amazon Route 53 doesn't know the status of the weighted resource record sets because it never looks past the failed
health check on the alias resource record set.
What Happens When You Omit Health Checks?
In a complex configuration, it's important to associate health checks with all of the non-alias resource record sets. Let's return to the preceding example, but
assume that a health check is missing on one of the weighted resource record sets in the us-east-1 region:
Here's what happens when you omit a health check on a non-alias resource record set in this configuration:
Amazon Route 53 receives a query for example.com. Based on the latency for the user making the request, Amazon Route 53 selects the latency alias resource
record set for the us-east-1 region.
Amazon Route 53 looks up the alias target for the latency alias resource record set, and checks the status of the corresponding health checks. The health check
for one weighted resource record set failed, so that resource record set is omitted from consideration.
The other weighted resource record set in the alias target for the us-east-1 region has no health check. The corresponding resource might or might not be healthy,
but without a health check, Amazon Route 53 has no way to know. Amazon Route 53 assumes that the resource is healthy and returns the applicable value in
response to the query.
What Happens When You Set Evaluate Target Health to No?
In general, you also want to set Evaluate Target Health to Yes for all of the alias resource record sets. In the following example, all of the weighted resource record
sets have associated health checks, but Evaluate Target Health is set to No for the latency alias resource record set for the us-east-1 region:
Here's what happens when you set Evaluate Target Health to No for an alias resource record set in this configuration:
Amazon Route 53 receives a query for example.com. Based on the latency for the user making the request, Amazon Route 53 selects the latency alias resource
record set for the us-east-1 region.
Amazon Route 53 determines what the alias target is for the latency alias resource record set, and checks the corresponding health checks. They're both failing.
Because the value of Evaluate Target Health is No for the latency alias resource record set for the us-east-1 region, Amazon Route 53 must choose one resource
record set in this branch instead of backing out of the branch and looking for a healthy resource record set in the ap-southeast-2 region.
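As a concrete illustration of answers B and E, the boto3 sketch below upserts one weighted record set tied to a health check plus a latency alias record set with EvaluateTargetHealth enabled; the hosted zone ID, health check ID, IP address and DNS names are placeholders:

import boto3

# Sketch: one weighted record backed by a health check, and a latency alias
# record with EvaluateTargetHealth=True. All IDs and names are placeholders.
r53 = boto3.client("route53")
r53.change_resource_record_sets(
    HostedZoneId="Z3EXAMPLE",
    ChangeBatch={"Changes": [
        {   # weighted record for one web server in us-east-1
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "us-east-1.example.com",
                "Type": "A",
                "SetIdentifier": "useast1-web-a",
                "Weight": 50,
                "TTL": 60,
                "HealthCheckId": "11111111-2222-3333-4444-555555555555",
                "ResourceRecords": [{"Value": "203.0.113.10"}],
            },
        },
        {   # latency alias for the region, evaluating target health
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "example.com",
                "Type": "A",
                "SetIdentifier": "us-east-1-latency",
                "Region": "us-east-1",
                "AliasTarget": {
                    "HostedZoneId": "Z3EXAMPLE",
                    "DNSName": "us-east-1.example.com",
                    "EvaluateTargetHealth": True,
                },
            },
        },
    ]},
)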

NEW QUESTION 506


Your company hosts a social media site supporting users in multiple countries. You have been asked to provide a highly available design for the application that
leverages multiple regions for the most recently accessed content and latency sensitive portions of the web site. The most latency sensitive component of the
application involves reading user preferences to support web site personalization and ad selection. In addition to running your application in multiple regions, which
option will support this application's requirements?

A. Serve user content from S3/CloudFront and use Route53 latency-based routing between ELBs in each region. Retrieve user preferences from a local
DynamoDB table in each region and leverage SQS to capture changes to user preferences, with SQS workers for propagating updates to each table.
B. Use the S3 Copy API to copy recently accessed content to multiple regions and serve user content from S3/CloudFront with dynamic content and an ELB in
each region. Retrieve user preferences from an ElastiCache cluster in each region and leverage SNS notifications to propagate user preference changes to a
worker node in each region.
C. Use the S3 Copy API to copy recently accessed content to multiple regions and serve user content from S3/CloudFront and Route53 latency-based routing
between ELBs in each region. Retrieve user preferences from a DynamoDB table and leverage SQS to capture changes to user preferences, with SQS workers for
propagating DynamoDB updates.
D. Serve user content from S3/CloudFront with dynamic content, and an ELB in each region. Retrieve user preferences from an ElastiCache cluster in each region
and leverage Simple Workflow (SWF) to manage the propagation of user preferences from a centralized DB to each ElastiCache cluster.

Answer: A

NEW QUESTION 509


Your system recently experienced down time. During the troubleshooting process, you found that a new administrator mistakenly terminated several production
EC2 instances.
Which of the following strategies will help prevent a similar situation in the future? The administrator still must be able to:
- launch, start, stop, and terminate development resources.
- launch and start production instances.

A. Create an IAM user, which is not allowed to terminate instances by leveraging production EC2 termination protection.
B. Leverage resource based tagging along with an IAM user, which can prevent specific users from terminating production EC2 resources.
C. Leverage EC2 termination protection and multi-factor authentication, which together require users to authenticate before terminating EC2 instances
D. Create an IAM user and apply an IAM role which prevents users from terminating production EC2 instances.

Answer: B

Explanation: Working with volumes


When an API action requires a caller to specify multiple resources, you must create a policy statement that allows users to access all required resources. If you
need to use a Condition element with one or more of these resources, you must create multiple statements as shown in this example.
The following policy allows users to attach volumes with the tag "volume_user=iam-user-name" to instances with the tag "department=dev", and to detach those
volumes from those instances. If you attach this policy to an IAM group, the aws:username policy variable gives each IAM user in the group permission to attach or
detach volumes from the instances with a tag named volume_user that has his or her IAM user name as a value.
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["ec2:AttachVolume", "ec2:DetachVolume"],
    "Resource": "arn:aws:ec2:us-east-1:123456789012:instance/*",
    "Condition": {
      "StringEquals": {"ec2:ResourceTag/department": "dev"}
    }
  },
  {
    "Effect": "Allow",
    "Action": ["ec2:AttachVolume", "ec2:DetachVolume"],
    "Resource": "arn:aws:ec2:us-east-1:123456789012:volume/*",
    "Condition": {
      "StringEquals": {"ec2:ResourceTag/volume_user": "${aws:username}"}
    }
  }]
}
Launching instances (RunInstances)
The RunInstances API action launches one or more instances. RunInstances requires an AMI and creates an instance; and users can specify a key pair and
security group in the request. Launching into EC2-VPC requires a subnet, and creates a network interface. Launching from an Amazon EBS-backed AMI creates a
volume. Therefore, the user must have permission to use these Amazon EC2 resources. The caller can also configure the instance using optional parameters to
RunInstances, such as the instance type and a subnet. You can create a policy statement that requires users to specify an optional parameter, or restricts users to
particular values for a parameter. The examples in this section demonstrate some of the many possible ways that you can control the configuration of an instance
that a user can launch.
Note that by default, users don't have permission to describe, start, stop, or terminate the resulting instances. One way to grant the users permission to manage
the resulting instances is to create a specific tag for each instance, and then create a statement that enables them to manage instances with that tag. For more
information, see 2: Working with instances.
a. AMI
The following policy allows users to launch instances using only the AMIs that have the specified tag, "department=dev", associated with them. The users can't
launch instances using other AMIs because the Condition element of the first statement requires that users specify an AMI that has this tag. The users also can't
launch into a subnet, as the policy does not grant permissions for the subnet and network interface resources. They can, however, launch into EC2-Classic. The
second statement uses a wildcard to enable users to create instance resources, and requires users to specify the key pair
project_keypair and the security group sg-1a2b3c4d. Users are still able to launch instances without a key pair.
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": "ec2:RunInstances",
    "Resource": ["arn:aws:ec2:region::image/ami-*"],
    "Condition": {
      "StringEquals": {"ec2:ResourceTag/department": "dev"}
    }
  },
  {
    "Effect": "Allow",
    "Action": "ec2:RunInstances",
    "Resource": [
      "arn:aws:ec2:region:account:instance/*",
      "arn:aws:ec2:region:account:volume/*",
      "arn:aws:ec2:region:account:key-pair/project_keypair",
      "arn:aws:ec2:region:account:security-group/sg-1a2b3c4d"
    ]
  }]
}
Alternatively, the following policy allows users to launch instances using only the specified AMIs, ami-9e1670f7 and ami-45cf5c3c. The users can't launch an
instance using other AMIs (unless another statement grants the users permission to do so), and the users can't launch an instance into a subnet.
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": "ec2:RunInstances",
    "Resource": [
      "arn:aws:ec2:region::image/ami-9e1670f7",
      "arn:aws:ec2:region::image/ami-45cf5c3c",
      "arn:aws:ec2:region:account:instance/*",
      "arn:aws:ec2:region:account:volume/*",
      "arn:aws:ec2:region:account:key-pair/*",
      "arn:aws:ec2:region:account:security-group/*"
    ]
  }]
}
Alternatively, the following policy allows users to launch instances from all AMIs owned by Amazon. The Condition element of the first statement tests whether
ec2:Owner is amazon. The users can't launch an instance using other AMIs (unless another statement grants the users permission to do so).
The users are able to launch an instance into a subnet.
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": "ec2:RunInstances",
    "Resource": ["arn:aws:ec2:region::image/ami-*"],
    "Condition": {
      "StringEquals": {"ec2:Owner": "amazon"}
    }
  },
  {
    "Effect": "Allow",
    "Action": "ec2:RunInstances",
    "Resource": [
      "arn:aws:ec2:region:account:instance/*",
      "arn:aws:ec2:region:account:subnet/*",
      "arn:aws:ec2:region:account:volume/*",
      "arn:aws:ec2:region:account:network-interface/*",
      "arn:aws:ec2:region:account:key-pair/*",
      "arn:aws:ec2:region:account:security-group/*"
    ]
  }]
}
b. Instance type
The following policy allows users to launch instances using only the t2.micro or t2.small instance type, which you might do to control costs. The users can't launch
larger instances because the Condition element of the first statement tests whether ec2:InstanceType is either t2.micro or t2.small.
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": "ec2:RunInstances",
    "Resource": ["arn:aws:ec2:region:account:instance/*"],
    "Condition": {
      "StringEquals": {"ec2:InstanceType": ["t2.micro", "t2.small"]}
    }
  },
  {
    "Effect": "Allow",
    "Action": "ec2:RunInstances",
    "Resource": [
      "arn:aws:ec2:region::image/ami-*",
      "arn:aws:ec2:region:account:subnet/*",
      "arn:aws:ec2:region:account:network-interface/*",
      "arn:aws:ec2:region:account:volume/*",
      "arn:aws:ec2:region:account:key-pair/*",
      "arn:aws:ec2:region:account:security-group/*"
    ]
  }]
}
Alternatively, you can create a policy that denies users permission to launch any instances except t2.micro and t2.small instance types.
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Deny",
    "Action": "ec2:RunInstances",
    "Resource": ["arn:aws:ec2:region:account:instance/*"],
    "Condition": {
      "StringNotEquals": {"ec2:InstanceType": ["t2.micro", "t2.small"]}
    }
  },
  {
    "Effect": "Allow",
    "Action": "ec2:RunInstances",
    "Resource": [
      "arn:aws:ec2:region::image/ami-*",
      "arn:aws:ec2:region:account:network-interface/*",
      "arn:aws:ec2:region:account:instance/*",
      "arn:aws:ec2:region:account:subnet/*",
      "arn:aws:ec2:region:account:volume/*",
      "arn:aws:ec2:region:account:key-pair/*",
      "arn:aws:ec2:region:account:security-group/*"
    ]
  }]
}
c. Subnet
The following policy allows users to launch instances using only the specified subnet, subnet-12345678. The group can't launch instances into any other subnet
(unless another statement grants the users permission to do so). Users are still able to launch instances into EC2-Classic.
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": "ec2:RunInstances",
    "Resource": [
      "arn:aws:ec2:region:account:subnet/subnet-12345678",
      "arn:aws:ec2:region:account:network-interface/*",
      "arn:aws:ec2:region:account:instance/*",
      "arn:aws:ec2:region:account:volume/*",
      "arn:aws:ec2:region::image/ami-*",
      "arn:aws:ec2:region:account:key-pair/*",
      "arn:aws:ec2:region:account:security-group/*"
    ]
  }]
}
Alternatively, you could create a policy that denies users permission to launch an instance into any other subnet. The statement does this by denying permission to
create a network interface, except where subnet subnet-12345678 is specified. This denial overrides any other policies that are created to allow launching
instances into other subnets. Users are still able to launch instances into EC2-Classic.
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Deny",
    "Action": "ec2:RunInstances",
    "Resource": ["arn:aws:ec2:region:account:network-interface/*"],
    "Condition": {
      "ArnNotEquals": {"ec2:Subnet": "arn:aws:ec2:region:account:subnet/subnet-12345678"}
    }
  },
  {
    "Effect": "Allow",
    "Action": "ec2:RunInstances",
    "Resource": [
      "arn:aws:ec2:region::image/ami-*",
      "arn:aws:ec2:region:account:network-interface/*",
      "arn:aws:ec2:region:account:instance/*",
      "arn:aws:ec2:region:account:subnet/*",
      "arn:aws:ec2:region:account:volume/*",
      "arn:aws:ec2:region:account:key-pair/*",
      "arn:aws:ec2:region:account:security-group/*"
    ]
  }]
}
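Tying the explanation back to answer B, the sketch below uses boto3 to attach a resource-tag-based policy that lets a user terminate only instances tagged as development resources; the user name, account, region and tag values are placeholders, not the question's actual resources:

import json
import boto3

# Sketch for answer B: allow terminating only instances tagged environment=dev.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "ec2:TerminateInstances",
        "Resource": "arn:aws:ec2:us-east-1:123456789012:instance/*",
        "Condition": {
            "StringEquals": {"ec2:ResourceTag/environment": "dev"}
        }
    }]
}

iam = boto3.client("iam")
iam.put_user_policy(
    UserName="new-administrator",
    PolicyName="terminate-dev-instances-only",
    PolicyDocument=json.dumps(policy),
)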

NEW QUESTION 512


......


Thank You for Trying Our Product

* 100% Pass or Money Back


All our products come with a 90-day Money Back Guarantee.
* One year free update
You can enjoy free update one year. 24x7 online support.
* Trusted by Millions
We currently serve more than 30,000,000 customers.
* Shop Securely
All transactions are protected by VeriSign!

100% Pass Your AWS-Solution-Architect-Associate Exam with Our Prep Materials Via below:

https://www.certleader.com/AWS-Solution-Architect-Associate-dumps.html
