Qset (65) 3
Which Amazon service can you use to define a virtual network which closely
resembles a traditional data center?
Amazon VPC
(Correct)
Amazon ServiceBus
Amazon EMR
Amazon RDS
Explanation
Amazon Virtual Private Cloud (Amazon VPC) lets you provision a logically isolated section
of the AWS Cloud where you can launch AWS resources in a virtual network that you
define. You have complete control over your virtual networking environment, including
selection of your own IP address range, creation of subnets, and configuration of route
tables and network gateways. You can use both IPv4 and IPv6 in your VPC for secure and
easy access to resources and applications.
You can easily customize the network configuration for your Amazon VPC. For example,
you can create a public-facing subnet for your web servers that has access to the Internet,
and place your backend systems such as databases or application servers in a private-
facing subnet with no Internet access. You can leverage multiple layers of security,
including security groups and network access control lists, to help control access to Amazon
EC2 instances in each subnet.
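The public/private subnet layout described above can be sketched with Python's standard ipaddress module; the 10.0.0.0/16 VPC range and the /24 subnet size are hypothetical values chosen for the example.

```python
import ipaddress

def carve_subnets(vpc_cidr, new_prefix):
    """Split a VPC CIDR block into equally sized subnets."""
    vpc = ipaddress.ip_network(vpc_cidr)
    return [str(s) for s in vpc.subnets(new_prefix=new_prefix)]

# Carve a hypothetical 10.0.0.0/16 VPC into /24 subnets; the first could
# back the public-facing web tier, the second the private database tier.
subnets = carve_subnets("10.0.0.0/16", 24)
public_subnet, private_subnet = subnets[0], subnets[1]
```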
Resources:
https://aws.amazon.com/vpc/
Question 2: Incorrect
Your manager has asked you to deploy a web application that can collect votes for a
very popular television show. Millions of users will submit votes using mobile
phones. These votes must be collected and stored into a durable, scalable, and
highly available data store for real-time public tabulation. Which AWS service should you use?
Amazon DynamoDB
(Correct)
Amazon Redshift
(Incorrect)
Amazon Kinesis
Amazon SQS
Explanation
When the word "durability" pops up, the first service that should come to mind is Amazon S3. Since this service is not among the answer options, we can look at the other durable data store available, which is Amazon DynamoDB.
DynamoDB is a durable, scalable, and highly available data store that can be used for real-time tabulation. When using the DynamoDB Storage Backend for Titan, your data enjoys
the strong protection of DynamoDB, which runs across Amazon’s proven, high-availability
data centers. The service replicates data across three facilities in an AWS Region to
provide fault tolerance in the event of a server failure or Availability Zone outage.
Option 2 is incorrect as Amazon Redshift is mainly used as a data warehouse and for online
analytic processing (OLAP).
Option 3 is incorrect as Amazon Kinesis is used for processing streams and not for storage.
Option 4 is incorrect as Amazon Simple Queue Service is a de-coupling solution.
References:
https://aws.amazon.com/dynamodb/faqs/
Question 3: Correct
What is one of the major advantages of having a Virtual Private Network in AWS?
You can connect your AWS cloud resources to on-premise data centers using VPN
connections.
(Correct)
Explanation
One main advantage of a VPN connection is that you will be able to connect your Amazon
VPC to other remote networks.
You can create an IPsec VPN connection between your VPC and your remote network. On
the AWS side of the VPN connection, a virtual private gateway provides two VPN endpoints
(tunnels) for automatic failover. You configure your customer gateway on the remote side of
the VPN connection. If you have more than one remote network (for example, multiple
branch offices), you can create multiple AWS managed VPN connections via your virtual
private gateway to enable communication between these networks.
You can create a VPN connection to your remote network by using an Amazon EC2
instance in your VPC that's running a third party software VPN appliance. AWS does not
provide or maintain third party software VPN appliances; however, you can choose from a
range of products provided by partners and open source communities.
References:
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpn-connections.html
Question 4: Correct
You are the Solutions Architect for your company's AWS account, which has approximately 300 IAM users. A new company policy requires changing the access of 100 of the IAM users to a particular type of access to Amazon S3 buckets.
What will you do to avoid the time-consuming task of applying the policy to each individual user?
Create a new IAM group and then add the users that require access to the S3 bucket. Afterwards, apply the policy to the IAM group.
(Correct)
Create a new policy and apply it to multiple IAM users using a shell script.
Create a new S3 bucket access policy with unlimited access for each IAM user.
Create a new IAM role and add each user to the IAM role.
Explanation
In this scenario, the best option is to group the set of users in an IAM Group and then apply
a policy with the required access to the Amazon S3 bucket. This will enable you to easily
add, remove, and manage the users instead of manually attaching a policy to each of the 100 IAM users.
Option 2 is incorrect because you need a new IAM Group for this scenario, not a policy assigned to each user via a shell script. This method may save you time initially, but afterwards it will be difficult to manage all 100 users that are not contained in an IAM Group.
Option 3 is incorrect because you need a new IAM Group and the method is also time
consuming.
Option 4 is incorrect because you need to use an IAM Group and not an IAM role.
References:
http://docs.aws.amazon.com/IAM/latest/UserGuide/id_groups.html
Question 5: Correct
You are a Solutions Architect for a global news company. You are configuring a fleet
of EC2 instances in a subnet which currently is in a VPC with an Internet gateway
attached. All of these EC2 instances can be accessed from the Internet. You then launch another subnet and launch an EC2 instance in it; however, you are not able to access the EC2 instance from the Internet. What are the possible reasons for this issue? (Choose 2)
The Amazon EC2 instance does not have a public IP address associated with it.
(Correct)
The Amazon EC2 instance is not a member of the same Auto Scaling group.
The Amazon EC2 instance is running in an Availability Zone that does not support Internet
access.
The route table is not configured properly to send traffic from the EC2 instance to the
Internet through the Internet gateway.
(Correct)
Explanation
In cases where your EC2 instance cannot access the Internet, you usually have to check
two things:
-Amazon EC2 instance does not have a public IP address associated with it.
-The route table is not configured properly to send traffic from the EC2 instance to the
Internet through the Internet gateway.
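As a rough sketch, the two checks above can be expressed as a function over dictionaries shaped like the EC2 DescribeInstances and DescribeRouteTables responses; the field names follow the EC2 API, but the data here is made up.

```python
def can_reach_internet(instance, route_table):
    """Evaluate the two checks against EC2-API-shaped dictionaries."""
    # Check 1: the instance needs a public IP address.
    has_public_ip = instance.get("PublicIpAddress") is not None
    # Check 2: the route table needs a default route to an Internet gateway.
    has_igw_route = any(
        route.get("DestinationCidrBlock") == "0.0.0.0/0"
        and route.get("GatewayId", "").startswith("igw-")
        for route in route_table["Routes"]
    )
    return has_public_ip and has_igw_route
```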
References:
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Scenario2.html
Question 6: Correct
In your Amazon VPC, you have 1000 IAM users. There is a new project management
application that uses Amazon S3. To test this application, you were instructed by
your boss to allow 300 IAM users to have unlimited privileges to the S3 bucket that
the application is using.
As a Solutions Architect, how can you implement this effectively and avoid the time-consuming task of applying the policy to each of the 300 IAM users?
Create a new IAM Role and add each user to the IAM role.
Create a new IAM group and add the 300 users and then apply the policy to allow access to
the S3 bucket to the group.
(Correct)
Create a new policy and apply it to all 300 users using a shell script.
Create an S3 bucket policy with unlimited access and allow 300 users to access it via ACL.
Explanation
In this scenario, the best option is to group the set of users in an IAM Group and then apply
a policy with the required access to the Amazon S3 bucket. This will enable you to easily
add, remove, and manage the users instead of manually attaching a policy to each of the 300 IAM users.
Option 1 is incorrect because you need to use an IAM Group and not an IAM role.
Option 3 is incorrect because you need a new IAM Group for this scenario, not a policy assigned to each user via a shell script. This method may save you time initially, but afterwards it will be difficult to manage all 300 users that are not grouped in an IAM Group.
Option 4 is incorrect because you need a new IAM Group for this, not an ACL.
References:
http://docs.aws.amazon.com/IAM/latest/UserGuide/id_groups.html
Question 7: Correct
A manufacturing company has EC2 instances running in AWS. The EC2 instances
are configured with Auto Scaling. A lot of requests are being lost because of too much load on the servers. Auto Scaling is launching new EC2 instances to handle the load accordingly; yet, some requests are still being lost.
Which of the following is the most cost-effective solution to avoid losing recently
submitted requests?
Use Amazon SQS to queue the requests and process them asynchronously.
(Correct)
Keep one extra Spot EC2 instance always ready in case a spike occurs.
Explanation
In this scenario, Amazon SQS is the best solution to avoid having lost messages.
Amazon Simple Queue Service (SQS) is a fully managed message queuing service that
makes it easy to decouple and scale microservices, distributed systems, and serverless
applications. Building applications from individual components that each perform a discrete
function improves scalability and reliability, and is best practice design for modern
applications. SQS makes it simple and cost-effective to decouple and coordinate the
components of a cloud application. Using SQS, you can send, store, and receive messages
between software components at any volume, without losing messages or requiring other
services to be always available.
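The decoupling idea can be sketched locally with Python's standard queue module; this is only an analogy for the SQS pattern, not actual SQS code.

```python
import queue

# A burst of incoming requests lands in the buffer faster than the
# workers can handle them; nothing is dropped while they wait.
buffer = queue.Queue()

for i in range(1000):          # producer side: the request burst
    buffer.put({"request_id": i})

processed = []                 # consumer side: workers drain at their own pace
while not buffer.empty():
    processed.append(buffer.get())
```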
References:
https://aws.amazon.com/sqs/
Question 8: Correct
You have an On-Demand EC2 instance located in a subnet in AWS which hosts a web
application. The security group attached to this EC2 instance has the following
Inbound Rules:
The route table attached to the VPC is shown below. You can establish an SSH connection into the EC2 instance from the Internet. However, you are not able to connect to the web server using your Chrome browser. What should you do to fix this issue?
In the Security Group, add an Inbound rule to allow HTTP traffic.
(Correct)
In the Route table, add this new route entry: 10.0.0.0/27 -> local
Explanation
The scenario is that you can already connect to the EC2 instance via SSH. This means that
there is no problem in the Route Table of your VPC. To fix this issue, you simply need to
update your Security Group and add an Inbound rule to allow HTTP traffic.
Option 2 is incorrect as removing the SSH rule will not solve the issue. It will just disable SSH traffic that
is already available.
Options 3 and 4 are incorrect as there is no need to change the Route Tables.
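The missing inbound rule can be written out in the IpPermissions shape that the EC2 AuthorizeSecurityGroupIngress API accepts; a sketch, with the open 0.0.0.0/0 range chosen just for the example.

```python
# The missing inbound rule: allow HTTP (TCP port 80) from anywhere.
http_rule = {
    "IpProtocol": "tcp",
    "FromPort": 80,
    "ToPort": 80,
    "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "Allow HTTP"}],
}
```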
References:
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_SecurityGroups.html
Question 9: Correct
Which of the following EC2 features should you use to optimize performance for a
compute cluster that requires low network latency?
Placement Groups
(Correct)
References:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html
Question 10: Correct
What is the retention period for a one-minute data point in Amazon CloudWatch?
14 days
15 days
(Correct)
1 month
1 year
Explanation
CloudWatch retains metric data as follows:
-Data points with a period of less than 60 seconds are available for 3 hours. These data points are high-resolution custom metrics.
-Data points with a period of 60 seconds (1 minute) are available for 15 days.
-Data points with a period of 300 seconds (5 minutes) are available for 63 days.
-Data points with a period of 3600 seconds (1 hour) are available for 455 days (15 months).
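These tiers can be captured in a small lookup table, for example:

```python
# Retention tiers keyed by data-point period in seconds.
RETENTION = {
    1: "3 hours",                  # high-resolution (< 60 seconds)
    60: "15 days",
    300: "63 days",
    3600: "455 days (15 months)",
}

def retention_for(period_seconds):
    """Return the retention window for a given metric period."""
    if period_seconds < 60:
        return RETENTION[1]
    # Use the coarsest documented tier at or below the given period.
    tier = max(p for p in RETENTION if p <= period_seconds)
    return RETENTION[tier]
```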
References:
https://aws.amazon.com/cloudwatch/faqs/
Amazon Kinesis Data Firehose
(Correct)
Amazon Kinesis
(Incorrect)
Amazon Redshift
Amazon Macie
Explanation
Amazon Kinesis Data Firehose is the easiest way to load streaming data into data stores
and analytics tools. It can capture, transform, and load streaming data into Amazon S3,
Amazon Redshift, Amazon Elasticsearch Service, and Splunk, enabling near real-time
analytics with existing business intelligence tools and dashboards you’re already using
today.
It is a fully managed service that automatically scales to match the throughput of your data
and requires no ongoing administration. It can also batch, compress, and encrypt the data
before loading it, minimizing the amount of storage used at the destination and increasing
security.
References:
https://aws.amazon.com/kinesis/data-firehose/
Question 12: Correct
You are a Solutions Architect in your company where you are tasked to set up a
cloud infrastructure. In the planning, it was discussed that you will need two EC2
instances which should continuously run for three years. The CPU utilization of the
EC2 instances is also expected to be stable and predictable.
Which is the most cost-efficient Amazon EC2 Pricing type that is most appropriate
for this scenario?
Reserved Instances
(Correct)
On-Demand instances
Spot instances
Dedicated Hosts
Explanation
Reserved Instances provide you with a significant discount (up to 75%) compared to On-
Demand instance pricing. In addition, when Reserved Instances are assigned to a specific
Availability Zone, they provide a capacity reservation, giving you additional confidence in
your ability to launch instances when you need them.
For applications that have steady state or predictable usage, Reserved Instances can
provide significant savings compared to using On-Demand instances.
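A back-of-the-envelope comparison for the two instances over three years; the hourly rates below are hypothetical, chosen to show roughly a 60% discount.

```python
HOURS_PER_YEAR = 8766  # 365.25 days * 24 hours

def total_cost(hourly_rate, instances=2, years=3):
    return hourly_rate * HOURS_PER_YEAR * years * instances

# Hypothetical rates: $0.10/hr On-Demand vs. an effective $0.04/hr
# for a 3-year Reserved Instance commitment.
on_demand = total_cost(0.10)
reserved = total_cost(0.04)
savings = 1 - reserved / on_demand  # fraction saved over the 3 years
```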
Reserved Instances are recommended for:
-Applications with steady state usage
-Applications that may require reserved capacity
-Customers that can commit to using EC2 over a 1-year or 3-year term to reduce their total computing costs
References:
https://aws.amazon.com/ec2/pricing/
https://aws.amazon.com/ec2/pricing/reserved-instances/
There is a problem in the sensors. They probably had an intermittent connection; hence, the data was not sent to the stream.
By default, Amazon S3 stores the data for 1 day and moves it to Amazon Glacier.
Your AWS account was hacked and someone has deleted some data in your Kinesis
stream.
By default, the data records are only accessible for 24 hours from the time they are added
to a Kinesis stream.
(Correct)
Explanation
Kinesis Data Streams supports changes to the data record retention period of your stream.
A Kinesis data stream is an ordered sequence of data records meant to be written to and
read from in real-time. Data records are therefore stored in shards in your stream
temporarily.
The time period from when a record is added to when it is no longer accessible is called the retention period. A Kinesis data stream stores records for 24 hours by default, up to a maximum of 168 hours.
This is the reason why there is missing data in your S3 bucket. To fix this, you can either configure your sensors to send the data every day instead of every other day, or alternatively, increase the retention period of your Kinesis data stream.
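The retention window can be sketched as a simple check; the times are made up, and the default and maximum follow the explanation above.

```python
import datetime

def is_accessible(added_at, read_at, retention_hours=24):
    """True if a record is still inside the stream's retention window."""
    return read_at - added_at <= datetime.timedelta(hours=retention_hours)

added = datetime.datetime(2018, 1, 1, 8, 0)
two_days_later = added + datetime.timedelta(days=2)

# With the 24-hour default the record is gone...
default_ok = is_accessible(added, two_days_later)
# ...but raising retention to 48 hours keeps it readable.
extended_ok = is_accessible(added, two_days_later, retention_hours=48)
```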
References:
http://docs.aws.amazon.com/streams/latest/dev/kinesis-extended-retention.html
Question 14: Correct
You are automating the creation of EC2 instances in your VPC. Hence, you wrote a Python script to trigger the Amazon EC2 API to request 50 EC2 instances in a single Availability Zone. However, you noticed that after 20 successful requests, subsequent requests failed.
What could be a reason for this issue and how would you resolve it?
There was an issue with the Amazon EC2 API. Just resend the requests and these will be
provisioned successfully.
By default, AWS allows you to provision a maximum of 20 instances per region. Select a
different region and retry the failed request.
By default, AWS allows you to provision a maximum of 20 instances per Availability Zone.
Select a different Availability Zone and retry the failed request.
There is a soft limit of 20 instances per region which is why subsequent requests failed. Just
submit the limit increase form to AWS and retry the failed requests once approved.
(Correct)
Explanation
You are limited to running up to a total of 20 On-Demand instances across the instance family, purchasing 20 Reserved Instances, and requesting Spot Instances per your dynamic Spot limit per region. If you wish to run more than 20 instances, complete the Amazon EC2 instance request form.
References:
https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_ec2
https://aws.amazon.com/ec2/faqs/#How_many_instances_can_I_run_in_Amazon_EC2
Question 15: Correct
Which of the following is a valid bucket name on Amazon S3?
tutorialsdojo
(Correct)
TutorialsDojo
.tutorialsdojo
tutorialsdojo!
Explanation
The rules for DNS-compliant bucket names in Amazon S3 are as follows:
-Bucket names must be between 3 and 63 characters long.
-Bucket names can contain only lowercase letters, numbers, periods, and hyphens.
-Bucket names must begin and end with a lowercase letter or number.
-Bucket names must not be formatted as an IP address (for example, 192.168.5.4).
This is why "tutorialsdojo" is the only valid option: the other choices contain an uppercase letter, a leading period, or an invalid character.
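A simplified validator for the DNS-compliant naming rules, checking the four answer choices; this sketch skips a few edge cases, such as rejecting names formatted like IP addresses.

```python
import re

# 3-63 chars, lowercase letters/digits/dots/hyphens, starting and ending
# with a letter or digit. (Edge cases like IP-address-shaped names are
# not handled here.)
BUCKET_RE = re.compile(r"^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$")

def is_valid_bucket_name(name):
    return bool(BUCKET_RE.match(name))
```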
Create a new VPC peering connection between PROD and DEV with the appropriate
routes.
(Correct)
Create a new entry to PROD in the DEV route table using the VPC peering connection as
the target.
Attach a second gateway to DEV. Add a new entry in the PROD route table identifying the
gateway as the target.
Change the DEV and PROD VPCs to have overlapping CIDR blocks to be able to connect
them.
Do nothing. Since these two VPCs are already connected via UAT, they already have a
connection to each other.
(Incorrect)
Explanation
A VPC peering connection is a networking connection between two VPCs that enables you
to route traffic between them privately. Instances in either VPC can communicate with each
other as if they are within the same network. You can create a VPC peering connection
between your own VPCs, with a VPC in another AWS account, or with a VPC in a different
AWS Region.
AWS uses the existing infrastructure of a VPC to create a VPC peering connection; it is
neither a gateway nor a VPN connection, and does not rely on a separate piece of physical
hardware. There is no single point of failure for communication or a bandwidth bottleneck.
Options 2 and 3 are incorrect. Even if you configure the route tables, the two VPCs will still
be disconnected until you set up a VPC peering connection between them.
Option 4 is incorrect because you cannot peer two VPCs with overlapping CIDR blocks.
Option 5 is incorrect as transitive VPC peering is not allowed; hence, even though DEV and PROD are both connected to UAT, these two VPCs do not have a direct connection to each other.
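The overlapping-CIDR restriction can be checked with Python's standard ipaddress module; the DEV and PROD ranges in the test are hypothetical.

```python
import ipaddress

def can_peer(cidr_a, cidr_b):
    """Peering is only possible when the CIDR blocks do not overlap."""
    a = ipaddress.ip_network(cidr_a)
    b = ipaddress.ip_network(cidr_b)
    return not a.overlaps(b)
```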
References:
https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-peering.html
(Correct)
Explanation
The decider can be viewed as a special type of worker. Like workers, it can be written in
any language and asks Amazon SWF for tasks. However, it handles special tasks called
decision tasks.
Amazon SWF issues decision tasks whenever a workflow execution has transitions such as
an activity task completing or timing out. A decision task contains information on the inputs,
outputs, and current state of previously initiated activity tasks. Your decider uses this data to
decide the next steps, including any new activity tasks, and returns those to Amazon SWF.
Amazon SWF in turn enacts these decisions, initiating new activity tasks where appropriate
and monitoring them.
By responding to decision tasks in an ongoing manner, the decider controls the order,
timing, and concurrency of activity tasks and consequently the execution of processing
steps in the application. SWF issues the first decision task when an execution starts. From
there on, Amazon SWF enacts the decisions made by your decider to drive your execution.
The execution continues until your decider makes a decision to complete it.
Resources:
https://aws.amazon.com/swf/faqs/
http://docs.aws.amazon.com/amazonswf/latest/developerguide/swf-dg-dev-deciders.html
Question 18: Correct
A music company is storing data on Amazon Simple Storage Service (S3). The
company’s security policy requires that data is encrypted at rest.
Use Amazon S3 server-side encryption with AWS Key Management Service managed
keys.
(Correct)
(Correct)
(Correct)
Explanation
Data protection refers to protecting data while in-transit (as it travels to and from Amazon
S3) and at rest (while it is stored on disks in Amazon S3 data centers). You can protect data
in transit by using SSL or by using client-side encryption. You have the following options of
protecting data at rest in Amazon S3:
-Use Server-Side Encryption – You request Amazon S3 to encrypt your object before saving it on disks in its data centers and decrypt it when you download the objects.
-Use Client-Side Encryption – You can encrypt data client-side and upload the encrypted data to Amazon S3. In this case, you manage the encryption process, the encryption keys, and related tools. Your options are:
-Use Client-Side Encryption with AWS KMS–Managed Customer Master Key (CMK)
-Use Client-Side Encryption Using a Client-Side Master Key
References:
http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingEncryption.html
Question 19: Incorrect
You need to upload a large file to your S3 bucket which is 2 GB in size. What is the
best way to upload the file?
Use the S3 Multipart Upload API
(Correct)
Use Amazon Snowball
(Incorrect)
Explanation
The total volume of data and number of objects you can store are unlimited. Individual
Amazon S3 objects can range in size from a minimum of 0 bytes to a maximum of 5
terabytes. The largest object that can be uploaded in a single PUT is 5 gigabytes. For
objects larger than 100 megabytes, customers should consider using the Multipart Upload
capability.
The Multipart upload API enables you to upload large objects in parts. You can use this API
to upload new large objects or make a copy of an existing object. Multipart uploading is a
three-step process: you initiate the upload, you upload the object parts, and after you have
uploaded all the parts, you complete the multipart upload. Upon receiving the complete
multipart upload request, Amazon S3 constructs the object from the uploaded parts and you
can then access the object just as you would any other object in your bucket.
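A quick sketch of the arithmetic behind choosing part sizes, using S3's documented limits of a 5 MB minimum part size and 10,000 parts per upload; the 8 MB default part size is an arbitrary example value.

```python
import math

MIN_PART_SIZE = 5 * 1024**2   # 5 MB minimum (all parts except the last)
MAX_PARTS = 10_000            # maximum parts per multipart upload

def part_count(object_size, part_size=8 * 1024**2):
    """How many parts a multipart upload would use for an object."""
    if part_size < MIN_PART_SIZE:
        raise ValueError("part size must be at least 5 MB")
    parts = math.ceil(object_size / part_size)
    if parts > MAX_PARTS:
        raise ValueError("too many parts; increase the part size")
    return parts
```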
References:
https://docs.aws.amazon.com/AmazonS3/latest/dev/mpuoverview.html
https://aws.amazon.com/s3/faqs/
(Correct)
(Correct)
Explanation
Amazon EBS encryption offers a simple encryption solution for your EBS volumes without
the need to build, maintain, and secure your own key management infrastructure.
In Amazon S3, data protection refers to protecting data while in-transit (as it travels to and
from Amazon S3) and at rest (while it is stored on disks in Amazon S3 data centers). You
can protect data in transit by using SSL or by using client-side encryption. You have the
following options to protect data at rest in Amazon S3.
Use Server-Side Encryption – You request Amazon S3 to encrypt your object before saving it on disks
in its data centers and decrypt it when you download the objects.
Use Client-Side Encryption – You can encrypt data client-side and upload the encrypted data to
Amazon S3. In this case, you manage the encryption process, the encryption keys, and related tools.
References:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html
http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingEncryption.html
Question 21: Incorrect
One of your clients wants to leverage Amazon S3 and Amazon Glacier as part of their backup and archive infrastructure. They created a new S3 bucket called "tutorialsdojobackup". To support this integration between AWS and their on-premises network, they decided to use a third-party software.
Which approach will limit the access of the third party software to the Amazon S3
bucket only and not to other AWS resources?
Setup a custom bucket policy limited to the Amazon S3 API in the Amazon Glacier archive
“tutorialsdojobackup”.
(Incorrect)
A custom IAM user policy limited to the Amazon S3 API for the Amazon Glacier archive
“tutorialsdojobackup”.
In IAM, setup a custom user policy for the third party software that is limited to the Amazon
S3 API in the "tutorialsdojobackup" bucket.
(Correct)
Explanation
In this scenario, you have to provide the third-party software access by creating a new IAM user. Since you want to limit the access of the third-party software, you can simply manage the AWS resources it can communicate with by setting up a custom user policy, which will only allow access to a specific S3 bucket.
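A minimal example of such a custom user policy, expressed as the JSON document you would attach to the IAM user; the broad s3:* action is used for brevity, and a real policy could list only the needed actions.

```python
import json

# Identity-based policy for the third-party IAM user: S3 actions on the
# "tutorialsdojobackup" bucket and its objects, and nothing else.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::tutorialsdojobackup",
                "arn:aws:s3:::tutorialsdojobackup/*",
            ],
        }
    ],
}
policy_json = json.dumps(policy)
```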
References:
https://docs.aws.amazon.com/AmazonS3/latest/dev/example-policies-s3.html
https://docs.aws.amazon.com/AmazonS3/latest/dev/walkthrough1.html
Question 22: Correct
Which of the following is a durable key-based object store service?
Amazon S3
(Correct)
Explanation
Amazon S3 is a simple key-based object store. When you store data, you assign a unique
object key that can later be used to retrieve the data. Keys can be any string, and can be
constructed to mimic hierarchical attributes. Amazon S3 is the storage for the Internet. It’s a
simple storage service that offers software developers a highly-scalable, reliable, and low-
latency data storage infrastructure at very low costs.
Resources:
https://aws.amazon.com/s3/faqs/
Question 23: Incorrect
You are working for a startup company that has resources deployed on the AWS
Cloud. Your company is now going through a set of scheduled audits by an external
auditing firm for compliance.
Which of the following services available in AWS can be utilized to help ensure the
right information is present for auditing purposes?
AWS CloudTrail
(Correct)
AWS VPC
AWS EC2
AWS CloudWatch
(Incorrect)
Explanation
AWS CloudTrail is a service that enables governance, compliance, operational auditing,
and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor,
and retain account activity related to actions across your AWS infrastructure. CloudTrail
provides event history of your AWS account activity, including actions taken through the
AWS Management Console, AWS SDKs, command line tools, and other AWS services.
This event history simplifies security analysis, resource change tracking, and
troubleshooting.
CloudTrail provides visibility into user activity by recording actions taken on your account.
CloudTrail records important information about each action, including who made the
request, the services used, the actions performed, parameters for the actions, and the
response elements returned by the AWS service. This information helps you to track
changes made to your AWS resources and troubleshoot operational issues. CloudTrail
makes it easier to ensure compliance with internal policies and regulatory standards.
References:
https://aws.amazon.com/cloudtrail/
(Correct)
(Correct)
Explanation
This question did not mention the specific type of EC2 instance; however, it says that it will be stopped and started. Since only EBS-backed instances can be stopped and restarted, it is implied that the instance is EBS-backed. Remember that an instance store-backed instance can only be rebooted or terminated, and its data will be erased if the EC2 instance is terminated.
If you stop an EBS-backed EC2 instance, the EBS volume is preserved but the data in any attached instance store volumes will be erased. Keep in mind that an EC2 instance has an underlying physical host computer. If the instance is stopped, AWS usually moves the
instance to a new host computer. Your instance may stay on the same host computer if
there are no problems with the host computer. In addition, its Elastic IP address is
disassociated from the instance if it is an EC2-Classic instance. Otherwise, if it is an EC2-
VPC instance, the Elastic IP address remains associated.
Resources:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-lifecycle.html
Yes
No
(Correct)
Explanation
The answer is No. The standby instance will not perform any read or write operations while the primary instance is running.
Multi-AZ deployments for the MySQL, MariaDB, Oracle, and PostgreSQL engines utilize
synchronous physical replication to keep data on the standby up-to-date with the primary.
Multi-AZ deployments for the SQL Server engine use synchronous logical replication to
achieve the same result, employing SQL Server-native Mirroring technology. Both
approaches safeguard your data in the event of a DB Instance failure or loss of an
Availability Zone.
If a storage volume on your primary instance fails in a Multi-AZ deployment, Amazon RDS
automatically initiates a failover to the up-to-date standby (or to a replica in the case of
Amazon Aurora). Compare this to a Single-AZ deployment: in case of a Single-AZ database
failure, a user-initiated point-in-time-restore operation will be required. This operation can
take several hours to complete, and any data updates that occurred after the latest
restorable time (typically within the last five minutes) will not be available.
Resources:
https://aws.amazon.com/rds/details/multi-az/
Amazon S3
(Correct)
References:
https://aws.amazon.com/s3/faqs/
Question 27: Correct
Which AWS service can you use to collect and process large streams of data records
in real time?
Amazon S3
Amazon Redshift
Amazon SWF
Amazon Kinesis
(Correct)
Explanation
Amazon Kinesis Data Streams is used to collect and process large streams of data records
in real time. You can use Kinesis Data Streams for rapid and continuous data intake and
aggregation. The type of data used includes IT infrastructure log data, application logs,
social media, market data feeds, and web clickstream data. Because the response time for
the data intake and processing is in real time, the processing is typically lightweight.
Resources:
https://docs.aws.amazon.com/streams/latest/dev/introduction.html
Question 28: Correct
Which of the following can be used as an origin server in Amazon CloudFront?
(Choose 3)
(Correct)
(Correct)
Amazon Glacier
Amazon S3 bucket
(Correct)
Explanation
When you create a web distribution, you specify where CloudFront sends requests for the
files that it distributes to edge locations. CloudFront supports the following as origins:
-Amazon S3 buckets
-HTTP servers, such as web servers hosted in Amazon EC2 or on your own private web servers.
References:
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/DownloadDistS3AndCustomOrigins.html
Question 29: Incorrect
You are working as an IT Consultant for a large media company where you are tasked
to design a web application that stores static assets in an Amazon Simple Storage
Service (S3) bucket. You expect this S3 bucket to immediately receive over 200 PUT requests and 400 GET requests per second at peak hour. What should you do to ensure optimal performance?
Add a random prefix to the key names.
(Correct)
(Incorrect)
Use a predictable naming scheme in the key names such as sequential numbers or date
time sequences.
Explanation
If your workload in an Amazon S3 bucket routinely exceeds 100 PUT/LIST/DELETE
requests per second or more than 300 GET requests per second, then you need to perform
some actions to ensure the best performance and scalability of your service.
If you have workloads that include a mix of request types such as a mix of GET, PUT,
DELETE, or GET Bucket (list objects), then choosing appropriate key names for your
objects ensures better performance by providing low-latency access to the Amazon S3
index. It also ensures scalability regardless of the number of requests you send per
second.
If you have workloads that are GET-intensive then it is recommended to use Amazon
CloudFront content delivery service.
The correct answer is 2 because adding a random prefix to the key names provides low-latency access to the Amazon S3 index, which improves performance.
Option 1 is incorrect because Amazon Glacier is mainly used to archive data.
Option 3 is incorrect because although S3 is scalable, it cannot automatically optimize performance if the bucket receives more than 100 PUT/LIST/DELETE requests per second.
Option 4 is incorrect because using a sequential prefix, such as time stamp or an
alphabetical sequence, increases the likelihood that Amazon S3 will target a specific
partition for a large number of your keys, overwhelming the I/O capacity of the partition.
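One common way to implement the random-prefix technique is to derive a short hash from each key; this is a sketch, and the four-character MD5 prefix length is an arbitrary choice.

```python
import hashlib

def randomized_key(original_key):
    """Prepend a short hash-derived prefix to spread keys across
    S3 index partitions."""
    prefix = hashlib.md5(original_key.encode()).hexdigest()[:4]
    return f"{prefix}-{original_key}"

# Sequential date-based keys now begin with varied prefixes:
keys = [randomized_key(f"2018-03-0{d}/log.txt") for d in (1, 2, 3)]
```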
References:
http://docs.aws.amazon.com/AmazonS3/latest/dev/request-rate-perf-considerations.html
FTP
Amazon S3 Transfer Acceleration
(Correct)
Amazon S3 RRS
Explanation
Amazon S3 Transfer Acceleration enables fast, easy, and secure transfers of files over long
distances between your client and your Amazon S3 bucket. Transfer Acceleration leverages
Amazon CloudFront’s globally distributed AWS Edge Locations. As data arrives at an AWS
Edge Location, data is routed to your Amazon S3 bucket over an optimized network path.
Resources:
http://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration.html
Question 31: Correct
You are a new Solutions Architect working for a financial company. Your manager
wants to have the ability to automatically transfer obsolete data from their S3 bucket
to a low cost storage system in AWS.
Use an EC2 instance and a scheduled job to transfer the obsolete data from their S3
location to Amazon Glacier.
(Correct)
Explanation
In this scenario, you can use lifecycle policies in S3 to automatically move obsolete data to
Glacier.
Lifecycle configuration in Amazon S3 enables you to specify the lifecycle management of
objects in a bucket. The configuration is a set of one or more rules, where each rule defines
an action for Amazon S3 to apply to a group of objects. These actions can be classified as
follows:
Transition actions – In which you define when objects transition to another storage
class. For example, you may choose to transition objects to the STANDARD_IA (IA, for
infrequent access) storage class 30 days after creation, or archive objects to the GLACIER
storage class one year after creation.
Expiration actions – In which you specify when the objects expire. Then Amazon S3
deletes the expired objects on your behalf.
Option 1 is incorrect because you don't need to create a scheduled job in EC2 as you can
just simply use the lifecycle policy in S3.
Options 3 and 4 are incorrect as SQS and SWF are not storage services.
References:
http://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html
https://aws.amazon.com/blogs/aws/archive-s3-to-glacier/
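The transition and expiration actions described above can be expressed as a lifecycle configuration; the sketch below uses the shape boto3's `put_bucket_lifecycle_configuration` expects (the bucket name and prefix are hypothetical):

```python
# Transition objects to STANDARD_IA after 30 days, then to GLACIER after a year.
lifecycle = {
    "Rules": [
        {
            "ID": "archive-obsolete-data",
            "Filter": {"Prefix": "old-data/"},  # hypothetical prefix
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 365, "StorageClass": "GLACIER"},
            ],
        }
    ]
}

# With credentials configured, this would be applied via:
# import boto3
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="my-bucket", LifecycleConfiguration=lifecycle)
```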
Question 32: Correct
You are designing a social media website for a startup company and the founders
want to know the ways to mitigate distributed denial-of-service (DDoS) attacks to
their website.
Write a shell script to quickly add and remove rules to the instance firewall.
Use Dedicated EC2 instances to ensure that each instance has the maximum performance
possible.
Use an Amazon CloudFront service for distributing both static and dynamic content.
(Correct)
Use an Application Load Balancer with Auto Scaling groups for your EC2 instances then
restrict direct Internet traffic to your Amazon RDS database by deploying to a private
subnet.
(Correct)
Set up alerts in Amazon CloudWatch to look for high Network In and CPU utilization.
(Correct)
Add multiple elastic network interfaces (ENIs) to each EC2 instance to increase the network
bandwidth.
Explanation
A Denial of Service (DoS) attack is an attack that can make your website or application
unavailable to end users. To achieve this, attackers use a variety of techniques that
consume network or other resources, disrupting access for legitimate end users.
To protect your system from a DoS attack, you can do the following:
-Use an Amazon CloudFront service for distributing both static and dynamic content.
-Use an Application Load Balancer with Auto Scaling groups for your EC2 instances then restrict direct
Internet traffic to your Amazon RDS database by deploying to a private subnet.
-Set up alerts in Amazon CloudWatch to look for high Network In and CPU utilization.
Services that are available within AWS Regions, like Elastic Load Balancing and Amazon
Elastic Compute Cloud (EC2), allow you to build Distributed Denial of Service resiliency and
scale to handle unexpected volumes of traffic within a given region. Services that are
available in AWS edge locations, like Amazon CloudFront, AWS WAF, Amazon Route53,
and Amazon API Gateway, allow you to take advantage of a global network of edge
locations that can provide your application with greater fault tolerance and increased scale
for managing larger volumes of traffic.
Resources:
https://d0.awsstatic.com/whitepapers/DDoS_White_Paper_June2015.pdf
Question 33: Correct
You are working for a tech company that is using a lot of EBS volumes in their EC2
instances. An incident occurred that requires you to delete the EBS volumes and then
re-create them.
What step should you do before you delete the EBS volumes?
(Correct)
Explanation
You can back up the data on your Amazon EBS volumes to Amazon S3 by taking point-in-
time snapshots. Snapshots are incremental backups, which means that only the blocks on
the device that have changed after your most recent snapshot are saved.
When you no longer need an Amazon EBS volume, you can delete it. After deletion, its data
is gone and the volume can't be attached to any instance. However, before deletion, you
can store a snapshot of the volume, which you can use to re-create the volume later.
References:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-deleting-volume.html
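A minimal AWS CLI sketch of that sequence (all IDs are placeholders; wait until the snapshot reports `completed` before deleting the volume):

```shell
# 1. Snapshot the volume (a point-in-time, incremental backup stored in S3)
aws ec2 create-snapshot --volume-id vol-0abcd1234example \
    --description "Backup before deletion"

# 2. Once the snapshot is 'completed', delete the volume
aws ec2 delete-volume --volume-id vol-0abcd1234example

# 3. Later, re-create the volume from the snapshot in a chosen AZ
aws ec2 create-volume --snapshot-id snap-0abcd1234example \
    --availability-zone eu-central-1a
```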
Explanation
Amazon S3 runs on the world’s largest global cloud infrastructure and was built from the
ground up to deliver a customer promise of 99.999999999% durability. Data is automatically
distributed across a minimum of three physical facilities that are geographically separated
within an AWS Region, and Amazon S3 can also automatically replicate data to any other
AWS Region.
Since the question did not state that cross-region replication (CRR) is enabled, the
correct answer is Option 3. Amazon S3 replicates the data to multiple facilities within
the same region where it is located, which is ap-southeast-1.
References:
https://aws.amazon.com/s3/
Question 35: Correct
AWS Lambda automatically monitors functions on your behalf, reporting metrics
through Amazon CloudWatch. What type of data do these metrics monitor? (Choose
3)
Latency
(Correct)
Total Requests
(Correct)
Error Rates
(Correct)
Security Group changes
Explanation
AWS Lambda automatically monitors functions on your behalf, reporting metrics through
Amazon CloudWatch. These metrics include total requests, latency, and error rates.
Throttles, dead-letter queue errors, and iterator age for stream-based invocations are
also monitored.
You can monitor metrics for Lambda and view logs by using the Lambda console, the
CloudWatch console, the AWS CLI, or the CloudWatch API.
Resources:
https://docs.aws.amazon.com/lambda/latest/dg/monitoring-functions-access-metrics.html
https://docs.aws.amazon.com/lambda/latest/dg/monitoring-functions-metrics.html
Question 36: Correct
A company is hosting EC2 instances that are on non-production environment and
processing non-priority batch loads, which can be interrupted at any time.
What is the best pricing model which can be used for EC2 instances in this case?
Reserved Instances
On-Demand Instances
Spot Instances
(Correct)
Regular Instances
Explanation
Amazon EC2 Spot instances are spare compute capacity in the AWS cloud available to you
at steep discounts compared to On-Demand prices. A Spot instance can be interrupted by
EC2 with a two-minute notification when EC2 needs the capacity back.
Resources:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-spot-instances.html
https://aws.amazon.com/ec2/spot/
Reserved instances
Spot instances
(Correct)
Dedicated instances
On-demand instances
Explanation
You require an instance that will be used not as a primary server but as a spare compute
resource to augment the transcoding process of your application. These instances should
also be terminated once the backlog has been significantly reduced. Hence, an Amazon
EC2 Spot instance is the best option for this scenario.
Amazon EC2 Spot instances are spare compute capacity in the AWS cloud available to you
at steep discounts compared to On-Demand prices. EC2 Spot enables you to optimize your
costs on the AWS cloud and scale your application's throughput up to 10X for the same
budget. By simply selecting Spot when launching EC2 instances, you can save up to 90%
compared to On-Demand prices. The only difference between On-Demand instances and Spot
instances is that Spot instances can be interrupted by EC2 with a two-minute
notification when EC2 needs the capacity back.
Options 1 and 3 are incorrect as Reserved and Dedicated instances do not act as spare
compute capacity.
Option 4 is a valid option, but a Spot instance is much cheaper than On-Demand.
References:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/how-spot-instances-work.html
(Correct)
VPC with Public and Private Subnets and Hardware VPN Access
(Correct)
Explanation
The VPC Wizard offers the following configuration:
References:
https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Scenarios.html
https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Scenario3.html#VPC_S
cenario3_Implementation
https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Scenario4.html#VPC_S
cenario4_Implementation
Question 39: Correct
Your manager asked you to look for a hybrid cloud storage solution for your
company. You want to recommend AWS Storage Gateway as a cloud storage but
first, you need to prepare a list of the use cases for this service to ensure that it is
indeed the right service for your organization.
(Correct)
Explanation
The AWS Storage Gateway service helps customers seamlessly integrate existing on-
premises applications, infrastructure, and data with the AWS Cloud. The service uses
locally deployed virtual appliances and industry-standard storage protocols to connect
existing storage applications and workflows to AWS cloud storage services for minimal
process disruption.
Local Storage Gateway appliances cache frequently accessed data on-premises to provide
low-latency performance while securely and durably storing data in Amazon S3, Amazon
EBS, or Amazon Glacier cloud storage.
Customers commonly use hybrid cloud storage for use cases such as:
Hybrid cloud workloads. Cloud-backed file services, big data analytics and data lakes,
cloud bursting, or cloud data migration architectures may need local capacity and
performance with a connection to a central storage repository in the cloud. Storage
Gateway streamlines moving data between your organization and AWS to manage
workloads in the cloud.
Backup, archive, and disaster recovery. Storage Gateway is a drop-in replacement for
tape and tape automation, and integrates with leading industry backup software packages.
Storage Gateway can take snapshots of your local volumes, which can be restored as
Amazon EBS volumes in the event of a local site disaster.
Tiered Storage. Some customers design storage architectures that preserve or extend high
performance on-premises investments by adding a lower cost, on-demand cloud tier. This is
ideal for archival or cost-reduction projects.
References:
https://aws.amazon.com/storagegateway/faqs/
(Correct)
Transitive Peering
(Correct)
Edge to Edge routing via a gateway
(Correct)
Explanation
All of the options are invalid, except for option 4.
A VPC peering connection is a networking connection between two VPCs that enables you
to route traffic between them privately. Instances in either VPC can communicate with each
other as if they are within the same network. You can create a VPC peering connection
between your own VPCs, with a VPC in another AWS account, or with a VPC in a different
AWS Region.
AWS uses the existing infrastructure of a VPC to create a VPC peering connection; it is
neither a gateway nor a VPN connection, and does not rely on a separate piece of physical
hardware. There is no single point of failure for communication or a bandwidth bottleneck.
References:
http://docs.aws.amazon.com/AmazonVPC/latest/PeeringGuide/invalid-peering-
configurations.html
If a computer with an IP address of 110.238.109.37 sends a request to your VPC, what
will happen?
Initially, it will be allowed and then after a while, the connection will be denied.
Initially, it will be denied and then after a while, the connection will be allowed.
It will be allowed.
(Correct)
It will be denied.
Explanation
Rules are evaluated starting with the lowest numbered rule. As soon as a rule matches
traffic, it's applied immediately regardless of any higher-numbered rule that may contradict
it.
We have 3 rules here:
The Rule 100 will first be evaluated. If there is a match, then it will allow the request.
Otherwise, it will then go to Rule 101 to repeat the same process until it goes to the default
rule. In this case, when there is a request from 110.238.109.37, it will go through Rule 100
first. As Rule 100 says it will permit all traffic from any source, it will allow this request and
will not further evaluate Rule 101 (which denies 110.238.109.37) nor the default rule.
References:
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_ACLs.html
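The evaluation order can be modeled in a few lines; the rule table below is an assumption standing in for the one pictured in the question (Rule 100 allows all traffic, Rule 101 denies the host):

```python
import ipaddress

# Toy model of network ACL evaluation: rules are checked in ascending rule
# number, and the first match is applied immediately.
RULES = [
    (100, "0.0.0.0/0", "ALLOW"),         # Rule 100: allow all traffic
    (101, "110.238.109.37/32", "DENY"),  # Rule 101: deny this host (never reached)
]
DEFAULT = "DENY"  # the catch-all '*' rule

def evaluate(source_ip: str) -> str:
    ip = ipaddress.ip_address(source_ip)
    for _, cidr, action in sorted(RULES):   # lowest rule number first
        if ip in ipaddress.ip_network(cidr):
            return action                   # first match wins
    return DEFAULT

print(evaluate("110.238.109.37"))  # ALLOW — Rule 100 matches before Rule 101
```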
Question 42: Incorrect
Your company has an e-commerce application that saves the transaction logs to an
S3 bucket. You are instructed by the CTO to configure the application to keep the
transaction logs for one month for troubleshooting purposes, and then afterwards,
purge the logs. What should you do to accomplish this requirement?
Configure the lifecycle configuration rules on the Amazon S3 bucket to purge the
transaction logs after a month
(Correct)
Create a new IAM policy for the Amazon S3 bucket that automatically deletes the logs after
a month
Enable CORS on the Amazon S3 bucket which will enable the automatic monthly deletion
of data
(Incorrect)
Explanation
In this scenario, the best way to accomplish the requirement is to simply configure the
lifecycle configuration rules on the Amazon S3 bucket to purge the transaction logs after a
month.
Lifecycle configuration enables you to specify the lifecycle management of objects in a
bucket. The configuration is a set of one or more rules, where each rule defines an action
for Amazon S3 to apply to a group of objects. These actions can be classified as follows:
Transition actions – In which you define when objects transition to another storage class. For example,
you may choose to transition objects to the STANDARD_IA (IA, for infrequent access) storage class 30
days after creation, or archive objects to the GLACIER storage class one year after creation.
Expiration actions – In which you specify when the objects expire. Then Amazon S3 deletes the
expired objects on your behalf.
Resources:
https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html
Question 43: Incorrect
In Amazon EC2 security groups, what does the revoke-security-group-ingress command
do?
(Incorrect)
(Correct)
Explanation
The revoke-security-group-ingress command removes one or more ingress rules from a
security group.
This example removes TCP port 22 access for the 203.0.113.0/24 address range from the
security group named MySecurityGroup. If the command succeeds, no output is returned.
Command:
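The command itself is not reproduced above; based on the description and the AWS CLI reference for `revoke-security-group-ingress`, it would look like this:

```shell
aws ec2 revoke-security-group-ingress \
    --group-name MySecurityGroup \
    --protocol tcp \
    --port 22 \
    --cidr 203.0.113.0/24
```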
Resources:
https://docs.aws.amazon.com/cli/latest/reference/ec2/revoke-security-group-ingress.html
Configure the IAM role to permit SSH connections to your EC2 instance.
Configure the Security Group of the EC2 instance to permit ingress traffic over port 3389
from your IP.
Configure the Security Group of the EC2 instance to permit ingress traffic over port 22 from
your IP.
(Correct)
Configure the Security Group of the EC2 instance to permit ingress traffic over port 443
from your IP.
Explanation
When connecting to your EC2 instance via SSH, you need to ensure that port 22 is allowed
on the security group of your EC2 instance.
A security group acts as a virtual firewall that controls the traffic for one or more instances.
When you launch an instance, you associate one or more security groups with the instance.
You add rules to each security group that allow traffic to or from its associated instances.
You can modify the rules for a security group at any time; the new rules are automatically
applied to all instances that are associated with the security group.
Option 1 is incorrect as it is unlikely that the issue is caused by a missing OS security patch.
Option 2 is incorrect because an IAM role is not pertinent to security groups.
Option 3 is incorrect because this is relevant to RDP and not SSH.
Option 5 is incorrect as port 443 is for HTTPS and not SSH.
References:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html
Question 45: Correct
In which of the following scenarios can you use both Simple Workflow Service (SWF)
and Amazon EC2 as a solution? (Choose 2)
(Correct)
(Correct)
Explanation
You can use a combination of EC2 and SWF for the following scenarios:
Amazon Simple Workflow Service (SWF) is a web service that makes it easy to coordinate
work across distributed application components. Amazon SWF enables applications for a
range of use cases, including media processing, web application back-ends, business
process workflows, and analytics pipelines, to be designed as a coordination of tasks.
Tasks represent invocations of various processing steps in an application which can be
performed by executable code, web service calls, human actions, and scripts.
Option 1 is incorrect as ElastiCache is the best option for distributed session management.
Option 4 is incorrect as SQS is the best service to use as a message queue.
Option 5 is incorrect as CloudFront is the best option for applications that require a global
content delivery network.
References:
http://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html
https://aws.amazon.com/blogs/aws/archive-s3-to-glacier/
Question 46: Correct
In AWS Simple Queue Service, which of the following statements is false?
Standard queues provide at-least-once delivery, which means that each message is
delivered at least once.
(Correct)
Amazon SQS can help you build a distributed application with decoupled components.
Explanation
Option 2 is the false statement. Only FIFO queues preserve the order of messages;
standard queues do not.
References:
https://aws.amazon.com/sqs/faqs/
Question 47: Correct
You are a new Solutions Architect at a large insurance firm. To maintain compliance
with HIPAA, all data being backed up or stored on Amazon S3 needs to be
encrypted at rest. In this scenario, what is the best method of encryption for your
data, assuming S3 is being used for storing financial-related data? (Choose 2)
(Correct)
Encrypt the data locally using your own encryption keys, then copy the data to Amazon S3
over HTTPS endpoints
(Correct)
Store the data on EBS volumes with encryption enabled instead of using Amazon S3
Explanation
Data protection refers to protecting data while in-transit (as it travels to and from Amazon
S3) and at rest (while it is stored on disks in Amazon S3 data centers). You can protect data
in transit by using SSL or by using client-side encryption. You have the following options
for protecting data at rest in Amazon S3.
Use Server-Side Encryption – You request Amazon S3 to encrypt your object before
saving it on disks in its data centers and decrypt it when you download the objects.
Use Client-Side Encryption – You can encrypt data client-side and upload the encrypted
data to Amazon S3. In this case, you manage the encryption process, the encryption keys,
and related tools.
Resources:
https://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html
https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingClientSideEncryption.html
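As a sketch, server-side encryption with SSE-S3 is requested per object through a single parameter (bucket, key, and body below are hypothetical); for client-side encryption you would instead encrypt the body with your own keys before uploading over HTTPS:

```python
# Keyword arguments for boto3's s3_client.put_object() requesting SSE-S3,
# where Amazon S3 manages the encryption keys.
sse_s3_args = {
    "Bucket": "my-financial-data",       # hypothetical bucket
    "Key": "reports/2018-q1.csv",        # hypothetical key
    "Body": b"account,balance\n...",
    "ServerSideEncryption": "AES256",    # SSE-S3
}
# With credentials configured:
# boto3.client("s3").put_object(**sse_s3_args)
```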
Question 48: Correct
A tech company is currently using Auto Scaling for their web application. A new AMI
now needs to be used for launching a fleet of EC2 instances.
Do nothing. You can start directly launching EC2 instances in the Auto Scaling group with
the same launch configuration.
(Correct)
Explanation
For this scenario, you have to create a new launch configuration. Remember that you can't
modify a launch configuration after you've created it.
A launch configuration is a template that an Auto Scaling group uses to launch EC2
instances. When you create a launch configuration, you specify information for the
instances such as the ID of the Amazon Machine Image (AMI), the instance type, a key
pair, one or more security groups, and a block device mapping. If you've launched an EC2
instance before, you specified the same information in order to launch the instance.
You can specify your launch configuration with multiple Auto Scaling groups. However, you
can only specify one launch configuration for an Auto Scaling group at a time, and you can't
modify a launch configuration after you've created it. Therefore, if you want to change the
launch configuration for an Auto Scaling group, you must create a launch configuration and
then update your Auto Scaling group with the new launch configuration.
References:
http://docs.aws.amazon.com/autoscaling/latest/userguide/LaunchConfiguration.html
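The create-then-update flow can be sketched with the AWS CLI (names and IDs are placeholders):

```shell
# Launch configurations are immutable, so create a new one with the new AMI...
aws autoscaling create-launch-configuration \
    --launch-configuration-name my-lc-v2 \
    --image-id ami-0new1234example \
    --instance-type t2.micro

# ...then point the Auto Scaling group at it.
aws autoscaling update-auto-scaling-group \
    --auto-scaling-group-name my-asg \
    --launch-configuration-name my-lc-v2
```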
Question 49: Correct
You have a new EC2 Instance in the eu-central-1 region. This EC2 Instance has a pre-
configured software running on it. You have been requested by your manager to
create a Disaster Recovery solution in case the instance in the region unexpectedly
fails.
Create a duplicate EC2 Instance in another Availability Zone. Afterwards, keep it in the
shutdown state. When the instance is required, bring it back up.
Backup the EBS data volume of the instance. If the instance fails, bring up a new EC2
instance and attach the volume.
Store the EC2 data on Amazon S3. If the instance fails, bring up a new EC2 instance and
restore the data from S3.
Create a new Amazon Machine Image out of the EC2 Instance and copy it to another
region.
(Correct)
Explanation
Remember that an AMI is region-specific, which means that you cannot use the exact same
AMI from one region to another. However, you can copy an Amazon Machine Image (AMI)
within or across an AWS region using the AWS Management Console, the AWS command
line tools or SDKs, or the Amazon EC2 API, all of which support the CopyImage action. You
can copy both Amazon EBS-backed AMIs and instance store-backed AMIs. You can copy
encrypted AMIs and AMIs with encrypted snapshots.
References:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/CopyingAMIs.html
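The CopyImage action is a one-liner from the AWS CLI (the AMI ID and destination region are placeholders); the command runs against the destination region and pulls the AMI from the source:

```shell
aws ec2 copy-image \
    --region us-east-1 \
    --name "dr-copy-of-my-ami" \
    --source-region eu-central-1 \
    --source-image-id ami-0abcd1234example
```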
Question 50: Incorrect
You are working as a Solutions Architect for a global game development company.
They have a web application currently running on twenty EC2 instances as part of an
Auto Scaling group. All twenty instances have been running at a maximum of 100%
CPU utilization for the past 40 minutes; however, the Auto Scaling group has not
added any additional EC2 instances to the group.
(Correct)
(Correct)
The scale down policy of your Auto Scaling group is too high.
(Incorrect)
The scale up policy of your Auto Scaling group is not reached yet.
Explanation
The correct answers are:
You are limited to running up to a total of 20 On-Demand instances across the instance
family, purchasing 20 Reserved Instances, and requesting Spot Instances per your dynamic
Spot limit per region.
If the maximum size of your Auto Scaling group has already been reached, then it would not
create any new EC2 instance.
References:
https://aws.amazon.com/ec2/faqs/
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scaling-target-tracking.html
As a Solutions Architect, what advice can you give them with regard to the cost?
Tell them that the cost is only less than $10 per month.
Tell them that there is no cost in using CloudFormation templates and they only pay for the
AWS resources created by their template.
(Correct)
Tell them that there is no cost in using CloudFormation templates, including the AWS
resources created by their template.
Explanation
AWS CloudFormation provides a common language for you to describe and provision all
the infrastructure resources in your cloud environment. CloudFormation allows you to use a
simple text file to model and provision, in an automated and secure manner, all the
resources needed for your applications across all regions and accounts. This file serves as
the single source of truth for your cloud environment. AWS CloudFormation is available at
no additional charge, and you pay only for the AWS resources needed to run your
applications.
References:
https://aws.amazon.com/cloudformation/
Question 52: Correct
You are working for a large telecommunications company where you need to run
analytics against all combined log files from your Application Load Balancer as part
of the regulatory requirements.
Which AWS services can be used together to collect logs and then easily perform log
analysis?
Amazon DynamoDB for storing and EC2 for analyzing the logs.
Amazon EC2 with EBS volumes for storing and analyzing the log files.
Amazon S3 for storing the ELB log files and an EC2 instance for analyzing the log files
using a custom-built application.
Amazon S3 for storing ELB log files and Amazon EMR for analyzing the log files.
(Correct)
Explanation
In this scenario, it is best to use a combination of Amazon S3 and Amazon EMR: Amazon
S3 for storing ELB log files and Amazon EMR for analyzing the log files. Access logging in
the ELB is stored in Amazon S3 which means that options 3 and 4 are both valid answers.
However, log analysis can be automatically provided by Amazon EMR, which is more
economical than building a custom-built log analysis application and hosting it in EC2.
Hence, option 4 is the best answer between the two.
Access logging is an optional feature of Elastic Load Balancing that is disabled by default.
After you enable access logging for your load balancer, Elastic Load Balancing captures the
logs and stores them in the Amazon S3 bucket that you specify as compressed files. You
can disable access logging at any time.
Amazon EMR provides a managed Hadoop framework that makes it easy, fast, and cost-
effective to process vast amounts of data across dynamically scalable Amazon EC2
instances. It securely and reliably handles a broad set of big data use cases, including log
analysis, web indexing, data transformations (ETL), machine learning, financial analysis,
scientific simulation, and bioinformatics. You can also run other popular distributed
frameworks such as Apache Spark, HBase, Presto, and Flink in Amazon EMR, and interact
with data in other AWS data stores such as Amazon S3 and Amazon DynamoDB.
References:
https://aws.amazon.com/emr/
https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-access-
logs.html
Question 53: Incorrect
By default, what happens to your data when an EC2 instance you created terminates?
(Choose 3)
(Incorrect)
(Correct)
(Correct)
(Correct)
Explanation
By default, EBS volumes that are created and attached to an instance at launch are deleted
when that instance is terminated. You can modify this behavior by changing the value of the
flag DeleteOnTermination to false when you launch the instance. This modified value
causes the volume to persist even after the instance is terminated, and enables you to
attach the volume to another instance.
Options 2, 3, and 4 are correct. The root device volume is deleted by default. For EBS-
backed instances, the volume is deleted as well. For Instance Store-Backed AMI, all the
ephemeral (temporary) data are also deleted.
Option 1 is incorrect. When the instance is terminated, the volume of an EBS-backed
instance is deleted by default unless the DeleteOnTermination flag is set to false .
References:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumes.html
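The DeleteOnTermination flag mentioned above is set per volume in the block device mapping at launch; here is a sketch in the shape boto3's `run_instances` accepts (the device name and size are assumptions):

```python
# A BlockDeviceMappings entry that keeps the data volume after the
# instance terminates, overriding the default described above.
block_device_mappings = [
    {
        "DeviceName": "/dev/sdf",
        "Ebs": {
            "DeleteOnTermination": False,  # default for launch-attached volumes is True
            "VolumeSize": 8,               # GiB, hypothetical
            "VolumeType": "gp2",
        },
    }
]
# Passed (with credentials configured) as:
# boto3.client("ec2").run_instances(..., BlockDeviceMappings=block_device_mappings)
```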
Question 54: Incorrect
In Amazon S3, which of the following statements are true? (Choose 2)
The total volume of data and number of objects you can store are unlimited.
(Correct)
(Incorrect)
(Correct)
Explanation
The correct answers are:
Option 2 is incorrect as the largest object that can be uploaded in a single PUT is 5 GB and
not 5 TB.
Option 4 is incorrect as you can store virtually any kind of data in any format in S3.
References:
https://aws.amazon.com/s3/faqs/
Question 55: Correct
Your company is planning to deploy their new web application written in NodeJS to
AWS. Which AWS service will you use to easily deploy the new web application?
(Correct)
Amazon CloudFront
AWS CloudFormation
AWS DevOps
Explanation
With Elastic Beanstalk, you can quickly deploy and manage applications in the AWS Cloud
without worrying about the infrastructure that runs those applications. AWS Elastic
Beanstalk reduces management complexity without restricting choice or control. You simply
upload your application, and Elastic Beanstalk automatically handles the details of capacity
provisioning, load balancing, scaling, and application health monitoring.
References:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/Welcome.html
Question 56: Correct
How does Amazon ElastiCache improve database performance?
(Correct)
Explanation
ElastiCache improves the performance of your database through caching query results.
The primary purpose of an in-memory key-value store is to provide ultra-fast
(submillisecond latency) and inexpensive access to copies of data. Most data stores have
areas of data that are frequently accessed but seldom updated. Additionally, querying a
database is always slower and more expensive than locating a key in a key-value pair
cache. Some database queries are especially expensive to perform, for example, queries
that involve joins across multiple tables or queries with intensive calculations.
By caching such query results, you pay the price of the query once and then are able to
quickly retrieve the data multiple times without having to re-execute the query.
Resources:
https://aws.amazon.com/elasticache/
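The cache-aside pattern described above fits in a few lines; the dict below stands in for ElastiCache (Redis or Memcached), and the slow function stands in for an expensive database query:

```python
import time

cache = {}  # stand-in for an ElastiCache cluster

def expensive_query(key: str) -> str:
    """Stand-in for a costly database query, e.g. a multi-table join."""
    time.sleep(0.01)
    return f"result-for-{key}"

def get(key: str) -> str:
    if key not in cache:                # cache miss: pay the query cost once...
        cache[key] = expensive_query(key)
    return cache[key]                   # ...then serve repeats from memory

get("top-articles")        # first call hits the "database"
print(get("top-articles")) # second call is a sub-millisecond cache hit
```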
(Incorrect)
The security group of the EC2 instances does not allow HTTP traffic.
Cross-Zone Load Balancing is disabled.
(Correct)
Explanation
Only half of your EC2 instances are actually receiving traffic because the
cross-zone load balancing option is disabled. This option is disabled by default.
Cross-zone load balancing reduces the need to maintain equivalent numbers of instances in
each enabled Availability Zone, and improves your application's ability to handle the loss of
one or more instances.
When you create a Classic Load Balancer, the default for cross-zone load balancing
depends on how you create the load balancer. With the API or CLI, cross-zone load
balancing is disabled by default. With the AWS Management Console, the option to enable
cross-zone load balancing is selected by default. After you create a Classic Load Balancer,
you can enable or disable cross-zone load balancing at any time.
References:
http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/enable-disable-crosszone-
lb.html
Question 58: Incorrect
You are trying to convince a team to use Amazon RDS Read Replica for your multi-
tier web application. What are two benefits of using read replicas? (Choose 2)
(Correct)
Allows both read and write operations on the read replica to complement the primary
database.
(Correct)
Automatic failover in the case of Availability Zone service failures.
(Incorrect)
Explanation
Amazon RDS Read Replicas provide enhanced performance and durability for database
(DB) instances. This feature makes it easy to elastically scale out beyond the capacity
constraints of a single DB instance for read-heavy database workloads.
You can create one or more replicas of a given source DB Instance and serve high-volume
application read traffic from multiple copies of your data, thereby increasing aggregate read
throughput. Read replicas can also be promoted when needed to become standalone DB
instances. Read replicas are available in Amazon RDS for MySQL, MariaDB, and
PostgreSQL as well as Amazon Aurora.
Option 2 is incorrect as the Read Replica only offers read operations.
Option 4 is incorrect as this is a benefit of Multi-AZ and not of a Read Replica.
Option 5 is incorrect because a Read Replica does not enhance the read performance of
your primary database.
Resources:
https://aws.amazon.com/rds/details/read-replicas/
EC2 instances in a private subnet can communicate with the Internet only if they have an
Elastic IP.
The allowed block size in VPC is between a /16 netmask (65,536 IP addresses) and /27
netmask (16 IP addresses).
Every subnet that you create is automatically associated with the main route table for the
VPC.
(Correct)
If a subnet's traffic is routed to an Internet gateway, the subnet is known as a public subnet.
(Correct)
Explanation
Options 2, 4, and 6 are the right answers:
Option 1 is incorrect because EC2 instances in a private subnet can communicate with the
Internet through a NAT gateway or NAT instance; an Elastic IP alone is not sufficient.
Option 3 is incorrect because the allowed block size in VPC is between a /16 netmask
(65,536 IP addresses) and /28 netmask (16 IP addresses) and not /27 netmask. For you to
easily remember this, /27 netmask is equivalent to exactly 27 IP addresses but keep in mind
that the limit is until /28 netmask.
Option 5 is incorrect because each subnet must reside entirely within one Availability Zone
and cannot span zones.
References:
https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Subnets.html
Question 60: Correct
You are working as a solutions architect for a tech company where you are
instructed to build a web architecture using On-Demand EC2 instances and a
database in AWS. However, due to budget constraints, the company instructed you
to choose a database service in which they no longer need to worry about database
management tasks such as hardware or software provisioning, setup, configuration,
scaling and backups.
AWS RDS
DynamoDB
(Correct)
Amazon ElastiCache
Redshift
Explanation
Basically, a database service in which you no longer need to worry about database
management tasks such as hardware or software provisioning, setup and configuration is
called a fully managed database. This means that AWS fully manages all of the database
management tasks and the underlying host server.
DynamoDB is the best option to use in this scenario. It is a fully managed non-relational
database service – you simply create a database table, set your target utilization for Auto
Scaling, and let the service handle the rest. You no longer need to worry about database
management tasks such as hardware or software provisioning, setup and configuration,
software patching, operating a reliable, distributed database cluster, or partitioning data
over multiple instances as you scale. DynamoDB also lets you backup and restore all your
tables for data archival, helping you meet your corporate and governmental regulatory
requirements.
Option 1 is incorrect because Amazon RDS is a managed service, not a fully managed one.
You still have to handle administrative tasks such as choosing instance classes and scaling
the database yourself.
Option 3 is incorrect because although ElastiCache is fully managed, it is not a database
service but an in-memory data store.
Option 4 is incorrect because although Redshift is fully managed, it is not a general-purpose
database service but a data warehouse.
Option 5 is incorrect because that setup is managed entirely by you. If you have a MySQL
database running on an EBS-backed EC2 instance, AWS does not manage any
administrative database task.
References:
https://aws.amazon.com/dynamodb/
https://aws.amazon.com/products/databases/
Question 61: Incorrect
You are a Solutions Architect for a major TV network. They have a web application
running on eight Amazon EC2 instances, consuming about 55% of resources on each
instance. You are using Auto Scaling to make sure that eight instances are running at
all times. The number of requests that this application processes is consistent and
does not experience spikes. Your manager instructed you to ensure high availability of
this web application at all times to avoid any loss of revenue. You want the load to be
distributed evenly between all instances. You also want to use the same Amazon
Machine Image (AMI) for all EC2 instances.
Deploy eight EC2 instances in one Availability Zone behind an Amazon Elastic Load
Balancer.
Deploy four EC2 instances in one region and four in another region behind an Amazon
Elastic Load Balancer.
(Incorrect)
Deploy four EC2 instances in one Availability Zone and four in another availability zone in
the same region behind an Amazon Elastic Load Balancer.
(Correct)
Deploy two EC2 instances in four regions behind an Amazon Elastic Load Balancer.
Explanation
The best option is to deploy four EC2 instances in one Availability Zone and four in
another Availability Zone in the same region behind an Amazon Elastic Load Balancer. This
way, if one Availability Zone goes down, the other Availability Zone can still
accommodate the traffic.
Option 1 is incorrect because this architecture is not highly available. If that Availability Zone
goes down, then your web application will be unreachable.
Options 2 and 4 are incorrect because the ELB is designed to only run in one region and
not across multiple regions.
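The even split the answer describes can be sketched as a toy round-robin balancer; the instance IDs below are made up for illustration.

```python
from collections import Counter

# Four hypothetical instances in each of two Availability Zones.
instances = [f"az-a-{i}" for i in range(4)] + [f"az-b-{i}" for i in range(4)]

# Round-robin 8000 requests across all eight instances.
counts = Counter()
for request in range(8000):
    counts[instances[request % len(instances)]] += 1

# Every instance serves an equal share of the traffic...
assert all(n == 1000 for n in counts.values())
# ...and losing one AZ still leaves four serving instances.
survivors = [i for i in instances if i.startswith("az-b")]
print(len(survivors))  # 4
```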
References:
https://aws.amazon.com/elasticloadbalancing/
Question 62: Correct
Your IT Manager instructed you to set up a bastion host in the cheapest, most secure
way, and you should be the only person that can access it via SSH.
Setup a small EC2 instance and a security group which only allows access on port 22 via
your IP address
(Correct)
Setup a large EC2 instance and a security group which only allows access on port 22 via
your IP address
Setup a large EC2 instance and a security group which only allows access on port 22
Setup a small EC2 instance and a security group which only allows access on port 22
Explanation
A bastion host is a server whose purpose is to provide access to a private network from an
external network, such as the Internet. Because of its exposure to potential attack, a bastion
host must minimize the chances of penetration.
To create a bastion host, launch a new EC2 instance with a security group that allows
inbound access only from your particular IP address for maximum security. Since cost is
also a consideration in the question, you should choose a small instance for your host. By
default, AWS selects a t2.micro instance, but you can change this setting during
deployment.
References:
https://docs.aws.amazon.com/quickstart/latest/linux-bastion/architecture.html
https://aws.amazon.com/blogs/security/how-to-record-ssh-sessions-established-through-a-bastion-host/
pending
(Correct)
rebooting
(Correct)
running
(Correct)
stand-by
Explanation
pending, rebooting, and running are valid EC2 lifecycle states. There is no stand-by state.
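A quick membership check over the states this question names (the full EC2 lifecycle also includes stopping, stopped, shutting-down, and terminated):

```python
# States named in this question; the full EC2 lifecycle also includes
# stopping, stopped, shutting-down, and terminated.
VALID_STATES = {"pending", "rebooting", "running"}

for candidate in ("pending", "rebooting", "running", "stand-by"):
    # stand-by is not a real EC2 lifecycle state.
    print(candidate, candidate in VALID_STATES)
```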
References:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-lifecycle.html
Question 64: Correct
You are a new Solutions Architect in your department and you have created 7
CloudFormation templates. Each template has been defined for a specific purpose.
CloudFormation templates are free, but you are charged for the underlying resources they
build.
(Correct)
Explanation
There is no additional charge for AWS CloudFormation. You pay for AWS resources (such
as Amazon EC2 instances, Elastic Load Balancing load balancers, etc.) created using AWS
CloudFormation in the same manner as if you created them manually. You only pay for
what you use, as you use it; there are no minimum fees and no required upfront
commitments.
References:
https://aws.amazon.com/cloudformation/pricing/
Question 65: Correct
You are working for a large financial firm and you are instructed to set up a Linux
bastion host. It will allow access to the Amazon EC2 instances running in their VPC.
For security purposes, only the clients connecting from the corporate external public
IP address 175.45.116.100 should have SSH access to the host.
Which is the best option that can meet the customer's requirement?
Security Group Inbound Rule: Protocol – TCP, Port Range – 22, Source 175.45.116.100/32
(Correct)
Security Group Inbound Rule: Protocol – UDP, Port Range – 22, Source 175.45.116.100/32
Network ACL Inbound Rule: Protocol – UDP, Port Range – 22, Source 175.45.116.100/32
Network ACL Inbound Rule: Protocol – TCP, Port Range – 22, Source 175.45.116.100/0
Explanation
The SSH protocol uses TCP and port 22. Hence, Options 2 and 3 are incorrect.
A bastion host is a special purpose computer on a network specifically designed and
configured to withstand attacks. The computer generally hosts a single application, for
example a proxy server, and all other services are removed or limited to reduce the threat to
the computer.
When setting up a bastion host in AWS, you should only allow the individual IP address of
the client and not the entire network. Therefore, the proper CIDR notation should be used in
the Source: /32 denotes a single IP address, while /0 refers to the entire network. That is
why Option 4 is incorrect: it allows access from the entire network instead of a single IP
address.
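The /32-versus-/0 distinction can be verified with Python's standard ipaddress module (0.0.0.0/0 is the canonical CIDR form of "the entire network"):

```python
import ipaddress

single = ipaddress.ip_network("175.45.116.100/32")
print(single.num_addresses)    # 1: only the corporate IP matches

everyone = ipaddress.ip_network("0.0.0.0/0")
print(everyone.num_addresses)  # 4294967296: every IPv4 address matches

# The /32 security group rule therefore admits exactly one client:
print(ipaddress.ip_address("175.45.116.100") in single)  # True
print(ipaddress.ip_address("198.51.100.7") in single)    # False
```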