SAA-C03 Dumps
https://www.certleader.com/SAA-C03-dumps.html
NEW QUESTION 1
- (Topic 1)
A company hosts more than 300 global websites and applications. The company requires a platform to analyze more than 30 TB of clickstream data each day.
What should a solutions architect do to transmit and process the clickstream data?
A. Design an AWS Data Pipeline to archive the data to an Amazon S3 bucket and run an Amazon EMR cluster with the data to generate analytics.
B. Create an Auto Scaling group of Amazon EC2 instances to process the data and send it to an Amazon S3 data lake for Amazon Redshift to use for analysis.
C. Cache the data to Amazon CloudFront. Store the data in an Amazon S3 bucket. When an object is added to the S3 bucket, run an AWS Lambda function to process the data for analysis.
D. Collect the data from Amazon Kinesis Data Streams.
E. Use Amazon Kinesis Data Firehose to transmit the data to an Amazon S3 data lake. Load the data in Amazon Redshift for analysis.
Answer: D
Explanation:
https://aws.amazon.com/es/blogs/big-data/real-time-analytics-with-amazon-redshift-streaming-ingestion/
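As a rough illustration of the chosen pattern (Kinesis Data Streams feeding Kinesis Data Firehose into Amazon S3 and Amazon Redshift), the producer side can be a few lines of boto3. This is a minimal sketch only: it assumes a data stream named "clickstream-events" already exists and that a Firehose delivery stream reads from it; the stream name and event fields are placeholders, not part of the question.

# Minimal producer sketch: write one clickstream event to a Kinesis data stream.
import json
import boto3

kinesis = boto3.client("kinesis")

event = {"site": "example.com", "page": "/home", "user_id": "u-123"}  # hypothetical event

kinesis.put_record(
    StreamName="clickstream-events",         # placeholder stream name
    Data=json.dumps(event).encode("utf-8"),  # payload must be bytes
    PartitionKey=event["user_id"],           # spreads records across shards
)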
NEW QUESTION 2
- (Topic 1)
A company runs a highly available image-processing application on Amazon EC2 instances in a single VPC. The EC2 instances run inside several subnets across multiple Availability Zones. The EC2 instances do not communicate with each other. However, the EC2 instances download images from Amazon S3 and upload images to Amazon S3 through a single NAT gateway. The company is concerned about data transfer charges.
What is the MOST cost-effective way for the company to avoid Regional data transfer charges?
Answer: A
Explanation:
In this scenario, the company wants to avoid Regional data transfer charges while downloading and uploading images from Amazon S3. To accomplish this at the lowest cost, a NAT gateway should be launched in each Availability Zone that the EC2 instances are running in. This allows the EC2 instances to route traffic through the local NAT gateway instead of sending traffic across an Availability Zone boundary and incurring cross-AZ data transfer fees. This method helps reduce data transfer costs because the traffic stays within a single Availability Zone, and data transfer between Amazon EC2 and Amazon S3 within the same Region is free of charge.
Reference:
AWS NAT Gateway documentation: https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html
NEW QUESTION 3
- (Topic 1)
A company has an on-premises application that generates a large amount of time-sensitive data that is backed up to Amazon S3. The application has grown, and there are user complaints about internet bandwidth limitations. A solutions architect needs to design a long-term solution that allows for timely backups to Amazon S3 with minimal impact on internet connectivity for internal users.
Which solution meets these requirements?
A. Establish AWS VPN connections and proxy all traffic through a VPC gateway endpoint
B. Establish a new AWS Direct Connect connection and direct backup traffic through this new connection.
C. Order daily AWS Snowball devices Load the data onto the Snowball devices and return the devices to AWS each day.
D. Submit a support ticket through the AWS Management Console. Request the removal of S3 service limits from the account.
Answer: B
Explanation:
To address the issue of bandwidth limitations on the company's on-premises application, and to minimize the impact on internal user connectivity, a new AWS
Direct Connect connection should be established to direct backup traffic through this new connection. This solution will offer a secure, high-speed connection
between the company's data center and AWS, which will allow the company to transfer data quickly without consuming internet bandwidth.
Reference:
AWS Direct Connect documentation: https://aws.amazon.com/directconnect/
NEW QUESTION 4
- (Topic 1)
A development team runs monthly resource-intensive tests on its general purpose Amazon RDS for MySQL DB instance with Performance Insights enabled. The
testing lasts for 48 hours once a month and is the only process that uses the database. The team wants to reduce the cost of running the tests without reducing the
compute and memory attributes of the DB instance.
Which solution meets these requirements MOST cost-effectively?
Answer: A
Explanation:
To reduce the cost of running the tests without reducing the compute and memory attributes of the Amazon RDS for MySQL DB instance, the development team
can stop the instance when tests are completed and restart it when required. Stopping the DB instance when not in use can help save costs because customers
are only charged for storage while the DB instance is stopped. During this time, automated backups and automated DB instance maintenance are suspended.
When the instance is restarted, it retains the same configurations, security groups, and DB parameter groups as when it was stopped.
Reference:
Amazon RDS Documentation: Stopping and Starting a DB instance (https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_StopInstance.html)
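For reference, stopping and starting the instance around the monthly test window can be scripted with boto3. This is a hedged sketch, not the only way to do it; the DB instance identifier is a placeholder, and a scheduled job would normally call these two functions at the start and end of the 48-hour run.

# Sketch: stop the test DB instance after the run, start it before the next one.
import boto3

rds = boto3.client("rds")

def after_tests_finish():
    # Compute charges stop; storage, snapshots, and the instance configuration are kept.
    rds.stop_db_instance(DBInstanceIdentifier="monthly-test-mysql")  # placeholder identifier

def before_tests_start():
    rds.start_db_instance(DBInstanceIdentifier="monthly-test-mysql")

Note that RDS automatically starts a DB instance that has been stopped for seven consecutive days, so a monthly schedule may need to stop the instance again between test windows.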
NEW QUESTION 5
- (Topic 1)
A global company hosts its web application on Amazon EC2 instances behind an Application Load Balancer (ALB). The web application has static data and
dynamic data. The company stores its static data in an Amazon S3 bucket. The company wants to improve performance and reduce latency for the static data and
dynamic data. The company is using its own domain name registered with Amazon Route 53.
What should a solutions architect do to meet these requirements?
A. Create an Amazon CloudFront distribution that has the S3 bucket and the ALB as origins. Configure Route 53 to route traffic to the CloudFront distribution.
B. Create an Amazon CloudFront distribution that has the ALB as an origin. Create an AWS Global Accelerator standard accelerator that has the S3 bucket as an endpoint.
C. Configure Route 53 to route traffic to the CloudFront distribution.
D. Create an Amazon CloudFront distribution that has the S3 bucket as an origin. Create an AWS Global Accelerator standard accelerator that has the ALB and the CloudFront distribution as endpoints. Create a custom domain name that points to the accelerator DNS name. Use the custom domain name as an endpoint for the web application.
E. Create an Amazon CloudFront distribution that has the ALB as an origin.
F. Create an AWS Global Accelerator standard accelerator that has the S3 bucket as an endpoint. Create two domain names.
G. Point one domain name to the CloudFront DNS name for dynamic content. Point the other domain name to the accelerator DNS name for static content. Use the domain names as endpoints for the web application.
Answer: C
Explanation:
Static content can be cached at CloudFront edge locations from S3, and the dynamic content served by EC2 behind the ALB can be accelerated by Global Accelerator, which has the ALB and the CloudFront distribution as its endpoints. A custom domain name that points to the accelerator DNS name (for example, via Route 53 alias records) is then used as the endpoint for the web application. https://aws.amazon.com/blogs/networking-and-content-delivery/improving-availability-and-performance-for-application-load-balancers-using-one-click-integration-with-aws-global-accelerator/
NEW QUESTION 6
- (Topic 1)
A company runs an on-premises application that is powered by a MySQL database. The company is migrating the application to AWS to increase the application's elasticity and availability.
The current architecture shows heavy read activity on the database during times of normal operation. Every 4 hours, the company's development team pulls a full export of the production database to populate a database in the staging environment. During this period, users experience unacceptable application latency. The development team is unable to use the staging environment until the procedure completes.
A solutions architect must recommend a replacement architecture that alleviates the application latency issue. The replacement architecture also must give the development team the ability to continue using the staging environment without delay.
Which solution meets these requirements?
A. Use Amazon Aurora MySQL with Multi-AZ Aurora Replicas for production.
B. Populate the staging database by implementing a backup and restore process that uses the mysqldump utility.
C. Use Amazon Aurora MySQL with Multi-AZ Aurora Replicas for production. Use database cloning to create the staging database on-demand.
D. Use Amazon RDS for MySQL with a Multi-AZ deployment and read replicas for production. Use the standby instance for the staging database.
E. Use Amazon RDS for MySQL with a Multi-AZ deployment and read replicas for production.
F. Populate the staging database by implementing a backup and restore process that uses the mysqldump utility.
Answer: B
Explanation:
https://aws.amazon.com/blogs/aws/amazon-aurora-fast-database-cloning/
NEW QUESTION 7
- (Topic 1)
A company hosts an application on AWS Lambda functions that are invoked by an Amazon API Gateway API. The Lambda functions save customer data to an Amazon Aurora MySQL database. Whenever the company upgrades the database, the Lambda functions fail to establish database connections until the upgrade is complete. The result is that customer data is not recorded for some of the events.
A solutions architect needs to design a solution that stores customer data that is created during database upgrades.
Which solution will meet these requirements?
A. Provision an Amazon RDS proxy to sit between the Lambda functions and the database. Configure the Lambda functions to connect to the RDS proxy.
B. Increase the run time of the Lambda functions to the maximum. Create a retry mechanism in the code that stores the customer data in the database.
C. Persist the customer data to Lambda local storage.
D. Configure new Lambda functions to scan the local storage to save the customer data to the database.
E. Store the customer data in an Amazon Simple Queue Service (Amazon SQS) FIFO queue. Create a new Lambda function that polls the queue and stores the customer data in the database.
Answer: D
Explanation:
https://www.learnaws.org/2020/12/13/aws-rds-proxy-deep-dive/
RDS Proxy can improve application availability in such a situation by waiting for the new database instance to become functional and holding any requests received from the application during this time. The end result is that the application is more resilient to issues with the underlying database.
This enables the solution to hold data until the database comes back to normal. RDS Proxy exists to optimally manage the connections between Lambda and the database: Lambda can open many connections concurrently, which can be taxing on the database's compute resources, so RDS Proxy was introduced to manage and reuse these connections efficiently.
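A minimal provisioning sketch for the RDS Proxy pattern the explanation describes: create a proxy in front of the Aurora MySQL cluster and register the cluster as its target; the Lambda functions then connect to the proxy endpoint instead of the cluster endpoint. All names, ARNs, subnet IDs, and the cluster identifier below are placeholders, and the proxy reads the database credentials from a Secrets Manager secret.

# Sketch: create an RDS Proxy for the Aurora MySQL cluster and register the target.
import boto3

rds = boto3.client("rds")

rds.create_db_proxy(
    DBProxyName="customer-data-proxy",
    EngineFamily="MYSQL",
    Auth=[{
        "AuthScheme": "SECRETS",
        "SecretArn": "arn:aws:secretsmanager:us-east-1:123456789012:secret:aurora-creds",  # placeholder
        "IAMAuth": "DISABLED",
    }],
    RoleArn="arn:aws:iam::123456789012:role/rds-proxy-secrets-role",  # placeholder role
    VpcSubnetIds=["subnet-0aaa1111", "subnet-0bbb2222"],              # placeholder subnets
    RequireTLS=True,
)

rds.register_db_proxy_targets(
    DBProxyName="customer-data-proxy",
    DBClusterIdentifiers=["aurora-mysql-customer-data"],  # placeholder cluster identifier
)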
NEW QUESTION 8
- (Topic 1)
A company has created an image analysis application in which users can upload photos and add photo frames to their images. The users upload images and
metadata to indicate which photo frames they want to add to their images. The application uses a single Amazon EC2 instance and Amazon DynamoDB to store
the metadata.
The application is becoming more popular, and the number of users is increasing. The company expects the number of concurrent users to vary significantly
depending on the time of day and day of week. The company must ensure that the application can scale to meet the needs of the growing user base.
Which solution meets these requirements?
Answer: C
Explanation:
https://www.quora.com/How-can-I-use-DynamoDB-for-storing-metadata-for-Amazon-S3-objects
This solution meets the requirements of scalability, performance, and availability. AWS Lambda can process the photos in parallel and scale up or down
automatically depending on the demand. Amazon S3 can store the photos and metadata reliably and durably, and provide high availability and low latency.
DynamoDB can store the metadata efficiently and provide consistent performance. This solution also reduces the cost and complexity of managing EC2 instances
and EBS volumes.
Option A is incorrect because storing the photos in DynamoDB is not a good practice, as it can increase the storage cost and limit the throughput. Option B is
incorrect because Kinesis Data Firehose is not designed for processing photos, but for streaming data to destinations such as S3 or Redshift. Option D is incorrect
because increasing the number of EC2 instances and using Provisioned IOPS SSD volumes does not guarantee scalability, as it depends on the load balancer
and the application code. It also increases the cost and complexity of managing the infrastructure.
References:
? https://aws.amazon.com/certification/certified-solutions-architect-professional/
? https://www.examtopics.com/discussions/amazon/view/7193-exam-aws-certified-solutions-architect-professional-topic-1/
? https://aws.amazon.com/architecture/
NEW QUESTION 9
- (Topic 1)
A company needs to keep user transaction data in an Amazon DynamoDB table. The company must retain the data for 7 years.
What is the MOST operationally efficient solution that meets these requirements?
Answer: C
NEW QUESTION 10
- (Topic 1)
A company runs multiple Windows workloads on AWS. The company's employees use Windows file shares that are hosted on two Amazon EC2 instances. The
file shares synchronize data between themselves and maintain duplicate copies. The company wants a highly available and durable storage solution that
preserves how users currently access the files.
What should a solutions architect do to meet these requirements?
A. Migrate all the data to Amazon S3. Set up IAM authentication for users to access files.
B. Set up an Amazon S3 File Gateway.
C. Mount the S3 File Gateway on the existing EC2 instances.
D. Extend the file share environment to Amazon FSx for Windows File Server with a Multi-AZ configuration.
E. Migrate all the data to FSx for Windows File Server.
F. Extend the file share environment to Amazon Elastic File System (Amazon EFS) with a Multi-AZ configuration.
G. Migrate all the data to Amazon EFS.
Answer: C
Explanation:
https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/AmazonEFS.html Amazon FSx for Windows File Server provides fully managed Microsoft Windows
file servers, backed by a fully native Windows file system. https://docs.aws.amazon.com/fsx/latest/WindowsGuide/what-is.html
NEW QUESTION 10
- (Topic 1)
A company is running an SMB file server in its data center. The file server stores large files that are accessed frequently for the first few days after the files are
created. After 7 days the files are rarely accessed.
The total data size is increasing and is close to the company's total storage capacity. A solutions architect must increase the company's available storage space
without losing low-latency access to the most recently accessed files. The solutions architect must also provide file lifecycle management to avoid future storage
issues.
Which solution will meet these requirements?
A. Use AWS DataSync to copy data that is older than 7 days from the SMB file server to AWS.
B. Create an Amazon S3 File Gateway to extend the company's storage space.
C. Create an S3 Lifecycle policy to transition the data to S3 Glacier Deep Archive after 7 days.
D. Create an Amazon FSx for Windows File Server file system to extend the company's storage space.
E. Install a utility on each user's computer to access Amazon S3. Create an S3 Lifecycle policy to transition the data to S3 Glacier Flexible Retrieval after 7 days.
Answer: B
Explanation:
Amazon S3 File Gateway is a hybrid cloud storage service that enables on- premises applications to seamlessly use Amazon S3 cloud storage. It provides a file
interface to Amazon S3 and supports SMB and NFS protocols. It also supports S3 Lifecycle policies that can automatically transition data from S3 Standard to S3
Glacier Deep Archive after a specified period of time. This solution will meet the requirements of increasing the company’s available storage space without losing
low-latency access to the most recently accessed files and providing file lifecycle management to avoid future storage issues.
Reference:
https://docs.aws.amazon.com/storagegateway/latest/userguide/WhatIsStorageGateway.html
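For illustration, the lifecycle part of the chosen solution is a single API call on the bucket that backs the S3 File Gateway. This is a hedged sketch; the bucket name is a placeholder, and the rule simply transitions every object to S3 Glacier Deep Archive 7 days after creation.

# Sketch: transition objects behind the file gateway to Glacier Deep Archive after 7 days.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="smb-file-share-bucket",  # placeholder bucket backing the S3 File Gateway
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-after-7-days",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to every object in the bucket
            "Transitions": [{"Days": 7, "StorageClass": "DEEP_ARCHIVE"}],
        }]
    },
)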
NEW QUESTION 13
- (Topic 1)
A company hosts an application on multiple Amazon EC2 instances. The application processes messages from an Amazon SQS queue, writes to an Amazon RDS table, and deletes the message from the queue. Occasional duplicate records are found in the RDS table. The SQS queue does not contain any duplicate messages.
What should a solutions architect do to ensure messages are being processed once only?
Answer: D
Explanation:
The visibility timeout begins when Amazon SQS returns a message. During this time, the consumer processes and deletes the message. However, if the
consumer fails before deleting the message and your system doesn't call the DeleteMessage action for that message before the visibility timeout expires, the
message becomes visible to other consumers and the message is received again. If a message must be received only once, your consumer should delete it within
the duration of the visibility timeout. https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-visibility-timeout.html
Keyword: the SQS queue writes to an Amazon RDS table. From this, option D is the best fit and the other options are ruled out (option A - you cannot introduce one more queue into the existing one; option B - only permissions; option C - only retrieves messages). FIFO queues are designed to never introduce duplicate messages. However, your message producer might introduce duplicates in certain scenarios: for example, if the producer sends a message, does not receive a response, and then resends the same message. Amazon SQS APIs provide deduplication functionality that prevents your message producer from sending duplicates. Any duplicates introduced by the message producer are removed within a 5-minute deduplication interval. For standard queues, you might occasionally receive a duplicate copy of a message (at-least-once delivery). If you use a standard queue, you must design your applications to be idempotent (that is, they must not be affected adversely when processing the same message more than once).
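A minimal consumer sketch of the pattern the explanation describes: keep the message invisible for longer than the worst-case processing time, write to the RDS table, and delete the message only after the write succeeds so it is never processed twice. The queue URL is a placeholder, and write_to_rds is a hypothetical stand-in for the real database insert.

# Sketch: process each SQS message exactly once from the application's point of view.
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/orders"  # placeholder queue

def write_to_rds(body):
    # Placeholder for the real insert into the RDS table.
    print("inserted:", body)

resp = sqs.receive_message(
    QueueUrl=QUEUE_URL,
    MaxNumberOfMessages=10,
    VisibilityTimeout=120,  # longer than the worst-case processing time
    WaitTimeSeconds=20,     # long polling
)

for msg in resp.get("Messages", []):
    write_to_rds(msg["Body"])
    sqs.delete_message(QueUrl if False else QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"]) if False else None
    sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])  # delete only after success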
NEW QUESTION 16
- (Topic 1)
A company wants to migrate its on-premises application to AWS. The application produces output files that vary in size from tens of gigabytes to hundreds of terabytes. The application data must be stored in a standard file system structure. The company wants a solution that scales automatically, is highly available, and requires minimum operational overhead.
Which solution will meet these requirements?
A. Migrate the application to run as containers on Amazon Elastic Container Service (Amazon ECS). Use Amazon S3 for storage.
B. Migrate the application to run as containers on Amazon Elastic Kubernetes Service (Amazon EKS). Use Amazon Elastic Block Store (Amazon EBS) for storage.
C. Migrate the application to Amazon EC2 instances in a Multi-AZ Auto Scaling group.
D. Use Amazon Elastic File System (Amazon EFS) for storage.
E. Migrate the application to Amazon EC2 instances in a Multi-AZ Auto Scaling group.
F. Use Amazon Elastic Block Store (Amazon EBS) for storage.
Answer: C
Explanation:
Amazon EFS provides a standard file system structure, scales automatically, and is highly available.
NEW QUESTION 17
- (Topic 1)
A company needs to store its accounting records in Amazon S3. The records must be immediately accessible for 1 year and then must be archived for an additional 9 years. No one at the company, including administrative users and root users, should be able to delete the records during the entire 10-year period. The records must be stored with maximum resiliency.
Which solution will meet these requirements?
Answer: C
Explanation:
To meet the requirements of immediately accessible records for 1 year and then archived for an additional 9 years with maximum resiliency, we can use S3
Lifecycle policy to transition records from S3 Standard to S3 Glacier Deep Archive after 1 year. And to ensure that the records cannot be deleted by anyone,
including administrative and root users, we can use S3 Object Lock in compliance mode for a period of 10 years. Therefore, the correct answer is option C.
Reference: https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock.html
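A hedged sketch of the Object Lock part of this answer: the bucket must be created with Object Lock enabled, and a default retention rule in compliance mode then prevents anyone, including the root user, from deleting objects during the retention period. The bucket name is a placeholder, and the call assumes the us-east-1 Region (other Regions also need a CreateBucketConfiguration).

# Sketch: bucket with Object Lock and a 10-year compliance-mode default retention.
import boto3

s3 = boto3.client("s3")

s3.create_bucket(
    Bucket="accounting-records-example",  # placeholder bucket name
    ObjectLockEnabledForBucket=True,      # Object Lock must be enabled at bucket creation
)

s3.put_object_lock_configuration(
    Bucket="accounting-records-example",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Years": 10}},
    },
)

A lifecycle rule (similar to the ones shown earlier) would handle the transition from S3 Standard to S3 Glacier Deep Archive after the first year.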
NEW QUESTION 18
- (Topic 1)
A company is deploying a new public web application to AWS. The application will run behind an Application Load Balancer (ALB). The application needs to be
encrypted at the edge with an SSL/TLS certificate that is issued by an external certificate authority (CA). The certificate must be rotated each year before the
certificate expires.
What should a solutions architect do to meet these requirements?
Answer: D
Explanation:
https://www.amazonaws.cn/en/certificate-manager/faqs/#Managed_renewal_and_deployment
NEW QUESTION 22
- (Topic 1)
A bicycle sharing company is developing a multi-tier architecture to track the location of its bicycles during peak operating hours. The company wants to use these data points in its existing analytics platform. A solutions architect must determine the most viable multi-tier option to support this architecture. The data points must be accessible from the REST API.
Which action meets these requirements for storing and retrieving location data?
Answer: D
Explanation:
https://aws.amazon.com/solutions/implementations/aws-streaming-data-solution-for-amazon-kinesis/
NEW QUESTION 23
- (Topic 1)
A company runs a photo processing application that needs to frequently upload and download pictures from Amazon S3 buckets that are located in the same AWS
Region. A solutions architect has noticed an increased cost in data transfer fees and needs to implement a solution to reduce these costs.
How can the solutions architect meet this requirement?
A. Deploy Amazon API Gateway into a public subnet and adjust the route table to route S3 calls through it.
B. Deploy a NAT gateway into a public subnet and attach an endpoint policy that allows access to the S3 buckets.
C. Deploy the application into a public subnet and allow it to route through an internet gateway to access the S3 buckets.
D. Deploy an S3 VPC gateway endpoint into the VPC and attach an endpoint policy that allows access to the S3 buckets.
Answer: D
Explanation:
The correct answer is Option D. Deploy an S3 VPC gateway endpoint into the VPC and attach an endpoint policy that allows access to the S3 buckets. By
deploying an S3 VPC gateway endpoint, the application can access the S3 buckets over a private network connection within the VPC, eliminating the need for data
transfer over the internet. This can help reduce data transfer fees as well as improve the performance of the application. The endpoint policy can be used to
specify which S3 buckets the application has access to.
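As an illustration of the chosen option, creating the gateway endpoint and scoping it to the photo buckets is a single API call. This is a sketch only; the VPC ID, route table ID, Region, and bucket names are placeholders.

# Sketch: gateway VPC endpoint for S3 with a policy restricted to the photo buckets.
import json
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

endpoint_policy = {
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::photo-processing-bucket",      # placeholder bucket
            "arn:aws:s3:::photo-processing-bucket/*",
        ],
    }]
}

ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",            # placeholder VPC
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],  # placeholder route table
    PolicyDocument=json.dumps(endpoint_policy),
)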
NEW QUESTION 25
- (Topic 1)
A company is preparing to store confidential data in Amazon S3. For compliance reasons, the data must be encrypted at rest. Encryption key usage must be logged for auditing purposes. Keys must be rotated every year.
Which solution meets these requirements and is the MOST operationally efficient?
Answer: D
Explanation:
https://docs.aws.amazon.com/kms/latest/developerguide/rotate-keys.html When you enable automatic key rotation for a customer managed key, AWS KMS
generates new cryptographic material for the KMS key every year. AWS KMS also saves the KMS key's older cryptographic material in perpetuity so it can be
used to decrypt data that the KMS key encrypted.
Key rotation in AWS KMS is a cryptographic best practice that is designed to be transparent and easy to use. AWS KMS supports optional automatic key rotation only for customer managed CMKs. Automatic key rotation is disabled by default on customer managed CMKs. When you enable (or re-enable) key rotation, AWS KMS automatically rotates the CMK 365 days after the enable date and every 365 days thereafter.
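A short sketch of the operational steps behind this answer: create a customer managed key, enable automatic annual rotation, and upload objects with SSE-KMS so every use of the key is logged in CloudTrail. The bucket and object names are placeholders.

# Sketch: customer managed KMS key with annual rotation, used for SSE-KMS uploads.
import boto3

kms = boto3.client("kms")
s3 = boto3.client("s3")

key = kms.create_key(Description="Key for confidential data in S3")
key_id = key["KeyMetadata"]["KeyId"]

kms.enable_key_rotation(KeyId=key_id)  # new key material is generated every year

s3.put_object(
    Bucket="confidential-data-bucket",    # placeholder bucket
    Key="reports/2023/confidential.csv",  # placeholder object key
    Body=b"example payload",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId=key_id,
)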
NEW QUESTION 30
- (Topic 1)
A solutions architect is designing a two-tier web application. The application consists of a public-facing web tier hosted on Amazon EC2 in public subnets. The database tier consists of Microsoft SQL Server running on Amazon EC2 in a private subnet. Security is a high priority for the company.
How should security groups be configured in this situation? (Select TWO )
A. Configure the security group for the web tier to allow inbound traffic on port 443 from 0.0.0.0/0.
B. Configure the security group for the web tier to allow outbound traffic on port 443 from 0.0.0.0/0.
C. Configure the security group for the database tier to allow inbound traffic on port 1433 from the security group for the web tier.
D. Configure the security group for the database tier to allow outbound traffic on ports 443 and 1433 to the security group for the web tier.
E. Configure the security group for the database tier to allow inbound traffic on ports 443 and 1433 from the security group for the web tier.
Answer: AC
Explanation:
"Security groups create an outbound rule for every inbound rule." Not completely right. Statefull does NOT mean that if you create an inbound (or outbound) rule, it
will create an outbound (or inbound) rule. What it does mean is: suppose you create an inbound rule on port 443 for the X ip. When a request enters on port 443
from X ip, it will allow traffic out for that request in the port 443. However, if you look at the outbound rules, there will not be any outbound rule on port 443 unless
explicitly create it. In ACLs, which are stateless, you would have to create an inbound rule to allow incoming requests and an outbound rule to allow your
application responds to those incoming requests.
https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html#SecurityGro upRules
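A sketch of the two selected rules (A and C) expressed as API calls: HTTPS from anywhere into the web tier, and SQL Server traffic into the database tier only from the web tier's security group. The security group IDs are placeholders.

# Sketch: inbound rules for the web tier and database tier security groups.
import boto3

ec2 = boto3.client("ec2")

web_sg = "sg-0aaa1111bbbb22223"  # placeholder: web tier security group
db_sg = "sg-0ccc3333dddd44445"   # placeholder: database tier security group

# Rule A: allow HTTPS from the internet into the web tier.
ec2.authorize_security_group_ingress(
    GroupId=web_sg,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)

# Rule C: allow SQL Server (1433) into the database tier only from the web tier SG.
ec2.authorize_security_group_ingress(
    GroupId=db_sg,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 1433, "ToPort": 1433,
        "UserIdGroupPairs": [{"GroupId": web_sg}],
    }],
)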
NEW QUESTION 34
- (Topic 1)
An application runs on an Amazon EC2 instance in a VPC. The application processes logs that are stored in an Amazon S3 bucket. The EC2 instance needs to
access the S3 bucket without connectivity to the internet.
Which solution will provide private network connectivity to Amazon S3?
Answer: A
Explanation:
A VPC endpoint allows you to connect to AWS services over a private network instead of the public internet.
NEW QUESTION 35
- (Topic 1)
A company has thousands of edge devices that collectively generate 1 TB of status alerts each day. Each alert is approximately 2 KB in size. A solutions architect
needs to implement a solution to ingest and store the alerts for future analysis.
The company wants a highly available solution. However, the company needs to minimize costs and does not want to manage additional infrastructure. Additionally, the company wants to keep 14 days of data available for immediate analysis and archive any data older than 14 days.
What is the MOST operationally efficient solution that meets these requirements?
A. Create an Amazon Kinesis Data Firehose delivery stream to ingest the alerts. Configure the Kinesis Data Firehose stream to deliver the alerts to an Amazon S3 bucket. Set up an S3 Lifecycle configuration to transition data to Amazon S3 Glacier after 14 days.
B. Launch Amazon EC2 instances across two Availability Zones and place them behind an Elastic Load Balancer to ingest the alerts. Create a script on the EC2 instances that will store the alerts in an Amazon S3 bucket. Set up an S3 Lifecycle configuration to transition data to Amazon S3 Glacier after 14 days.
C. Create an Amazon Kinesis Data Firehose delivery stream to ingest the alerts. Configure the Kinesis Data Firehose stream to deliver the alerts to an Amazon Elasticsearch Service (Amazon ES) cluster. Set up the Amazon ES cluster to take manual snapshots every day and delete data from the cluster that is older than 14 days.
D. Create an Amazon Simple Queue Service (Amazon SQS) standard queue to ingest the alerts and set the message retention period to 14 days. Configure consumers to poll the SQS queue, check the age of the message, and analyze the message data as needed. If the message is 14 days old, the consumer should copy the message to an Amazon S3 bucket and delete the message from the SQS queue.
Answer: A
Explanation:
https://aws.amazon.com/kinesis/data-firehose/features/?nc=sn&loc=2#:~:text=into%20Amazon%20S3%2C%20Amazon%20Redshift%2C%20Amazon%20OpenSearch%20Service%2C%20Kinesis,Delivery%20streams
NEW QUESTION 38
- (Topic 1)
A company has a production web application in which users upload documents through a web interface or a mobile app. According to a new regulatory
requirement, new documents cannot be modified or deleted after they are stored.
What should a solutions architect do to meet this requirement?
A. Store the uploaded documents in an Amazon S3 bucket with S3 Versioning and S3 Object Lock enabled
B. Store the uploaded documents in an Amazon S3 bucket.
C. Configure an S3 Lifecycle policy to archive the documents periodically.
D. Store the uploaded documents in an Amazon S3 bucket with S3 Versioning enabled Configure an ACL to restrict all access to read-only.
E. Store the uploaded documents on an Amazon Elastic File System (Amazon EFS) volume.
F. Access the data by mounting the volume in read-only mode.
Answer: A
Explanation:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock-overview.html
NEW QUESTION 43
- (Topic 1)
A company recently launched a variety of new workloads on Amazon EC2 instances in its AWS account. The company needs to create a strategy to access and
administer the instances remotely and securely. The company needs to implement a repeatable process that works with native AWS services and follows the AWS
Well-Architected Framework.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use the EC2 serial console to directly access the terminal interface of each instance for administration.
B. Attach the appropriate IAM role to each existing instance and new instance.
C. Use AWS Systems Manager Session Manager to establish a remote SSH session.
D. Create an administrative SSH key pair.
E. Load the public key into each EC2 instance.
F. Deploy a bastion host in a public subnet to provide a tunnel for administration of each instance.
G. Establish an AWS Site-to-Site VPN connection.
H. Instruct administrators to use their local on-premises machines to connect directly to the instances by using SSH keys across the VPN tunnel.
Answer: B
Explanation:
https://docs.aws.amazon.com/systems-manager/latest/userguide/setup-launch-managed-instance.html
NEW QUESTION 44
- (Topic 1)
A company wants to reduce the cost of its existing three-tier web architecture. The web, application, and database servers are running on Amazon EC2 instances
for the development, test, and production environments. The EC2 instances average 30% CPU utilization during peak hours and 10% CPU utilization during non-
peak hours.
The production EC2 instances run 24 hours a day. The development and test EC2 instances run for at least 8 hours each day. The company plans to implement
automation to stop the development and test EC2 instances when they are not in use.
Which EC2 instance purchasing solution will meet the company's requirements MOST cost-effectively?
Answer: B
NEW QUESTION 46
- (Topic 2)
A company sells ringtones created from clips of popular songs. The files containing the ringtones are stored in Amazon S3 Standard and are at least 128 KB in
size. The company has millions of files, but downloads are infrequent for ringtones older than 90 days. The company needs to save money on storage while
keeping the most accessed files readily available for its users.
Which action should the company take to meet these requirements MOST cost-effectively?
A. Configure S3 Standard-Infrequent Access (S3 Standard-IA) storage for the initial storage tier of the objects.
B. Move the files to S3 Intelligent-Tiering and configure it to move objects to a less expensive storage tier after 90 days.
C. Configure S3 inventory to manage objects and move them to S3 Standard-Infrequent Access (S3 Standard-IA) after 90 days.
D. Implement an S3 Lifecycle policy that moves the objects from S3 Standard to S3 Standard-Infrequent Access (S3 Standard-IA) after 90 days.
Answer: D
Explanation:
This solution meets the requirements of saving money on storage while keeping the most accessed files readily available for the users. S3 Lifecycle policy can
automatically move objects from one storage class to another based on predefined rules. S3 Standard-IA is a lower-cost storage class for data that is accessed
less frequently, but requires rapid access when needed. It is suitable for ringtones older than 90 days that are downloaded infrequently.
Option A is incorrect because configuring S3 Standard-IA for the initial storage tier of the objects can incur higher costs for frequent access and retrieval fees.
Option B is incorrect
because moving the files to S3 Intelligent-Tiering can incur additional monitoring and automation fees that may not be necessary for ringtones older than 90 days.
Option C is incorrect because using S3 inventory to manage objects and move them to S3 Standard-IA can be complex and time-consuming, and it does not
provide automatic cost savings. References:
? https://aws.amazon.com/s3/storage-classes/
? https://aws.amazon.com/s3/cloud-storage-cost-optimization-ebook/
NEW QUESTION 48
- (Topic 2)
A company has a Windows-based application that must be migrated to AWS. The application requires the use of a shared Windows file system attached to
multiple Amazon EC2 Windows instances that are deployed across multiple Availability Zones.
What should a solutions architect do to meet this requirement?
Answer: B
Explanation:
This solution meets the requirement of migrating a Windows-based application that requires the use of a shared Windows file system attached to multiple Amazon
EC2 Windows instances that are deployed across multiple Availability Zones. Amazon FSx for Windows File Server provides fully managed shared storage built on
Windows Server, and delivers a wide range of data access, data management, and administrative capabilities. It supports the Server Message Block (SMB)
protocol and can be mounted to EC2 Windows instances across multiple Availability Zones.
Option A is incorrect because AWS Storage Gateway in volume gateway mode provides cloud-backed storage volumes that can be mounted as iSCSI devices
from on-premises application servers, but it does not support SMB protocol or EC2 Windows instances. Option C is incorrect because Amazon Elastic File System
(Amazon EFS) provides a scalable and elastic NFS file system for Linux-based workloads, but it does not support SMB protocol or EC2 Windows instances.
Option D is incorrect because Amazon Elastic Block Store (Amazon EBS) provides persistent block storage volumes for use with EC2 instances, but it does not
support SMB protocol or attaching multiple instances to the same volume.
References:
? https://aws.amazon.com/fsx/windows/
? https://docs.aws.amazon.com/fsx/latest/WindowsGuide/using-file-shares.html
NEW QUESTION 51
- (Topic 2)
A company wants to migrate its on-premises data center to AWS. According to the company's compliance requirements, the company can use only the ap-
northeast-3 Region. Company administrators are not permitted to connect VPCs to the internet.
Which solutions will meet these requirements? (Choose two.)
A. Use AWS Control Tower to implement data residency guardrails to deny internet access and deny access to all AWS Regions except ap-northeast-3.
B. Use rules in AWS WAF to prevent internet access.
C. Deny access to all AWS Regions except ap-northeast-3 in the AWS account settings.
D. Use AWS Organizations to configure service control policies (SCPs) that prevent VPCs from gaining internet access.
E. Deny access to all AWS Regions except ap-northeast-3.
F. Create an outbound rule for the network ACL in each VPC to deny all traffic from 0.0.0.0/0. Create an IAM policy for each user to prevent the use of any AWS
Region other than ap-northeast-3.
G. Use AWS Config to activate managed rules to detect and alert for internet gateways and to detect and alert for new resources deployed outside of ap-
northeast-3.
Answer: AC
Explanation:
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps_examples_vpc.html#example_vpc_2
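A hedged sketch of option C: an SCP that denies actions outside ap-northeast-3 and blocks internet gateway creation, attached through AWS Organizations. The policy below is simplified from the AWS example linked above; in practice, global services (IAM, Route 53, and so on) would need to be exempted, and the policy name is a placeholder.

# Sketch: create an SCP limiting activity to ap-northeast-3 and denying internet gateways.
import json
import boto3

orgs = boto3.client("organizations")

scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyOutsideApNortheast3",
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {"StringNotEquals": {"aws:RequestedRegion": "ap-northeast-3"}},
        },
        {
            "Sid": "DenyInternetGateways",
            "Effect": "Deny",
            "Action": ["ec2:CreateInternetGateway", "ec2:AttachInternetGateway"],
            "Resource": "*",
        },
    ],
}

orgs.create_policy(
    Name="ap-northeast-3-only",  # placeholder policy name
    Description="Region and internet access guardrail",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)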
NEW QUESTION 56
- (Topic 2)
A company wants to use the AWS Cloud to make an existing application highly available and resilient. The current version of the application resides in the
company's data center. The application recently experienced data loss after a database server crashed because of an unexpected power outage.
The company needs a solution that avoids any single points of failure. The solution must give the application the ability to scale to meet user demand.
Which solution will meet these requirements?
A. Deploy the application servers by using Amazon EC2 instances in an Auto Scaling group across multiple Availability Zones.
B. Use an Amazon RDS DB instance in a Multi-AZ configuration.
C. Deploy the application servers by using Amazon EC2 instances in an Auto Scaling group in a single Availability Zone.
D. Deploy the database on an EC2 instance.
E. Enable EC2 Auto Recovery.
F. Deploy the application servers by using Amazon EC2 instances in an Auto Scaling group across multiple Availability Zones.
G. Use an Amazon RDS DB instance with a read replica in a single Availability Zone.
H. Promote the read replica to replace the primary DB instance if the primary DB instance fails.
I. Deploy the application servers by using Amazon EC2 instances in an Auto Scaling group across multiple Availability Zones. Deploy the primary and secondary database servers on EC2 instances across multiple Availability Zones. Use Amazon Elastic Block Store (Amazon EBS) Multi-Attach to create shared storage between the instances.
Answer: A
Explanation:
Deploy the application servers by using Amazon EC2 instances in an Auto Scaling group across multiple Availability Zones. Use an Amazon RDS DB instance in a
Multi-AZ configuration. To make an existing application highly available and resilient while avoiding any single points of failure and giving the application the ability
to scale to meet user demand, the best solution would be to deploy the application servers using Amazon EC2 instances in an Auto Scaling group across multiple
Availability Zones and use an Amazon RDS DB instance in a Multi-AZ configuration. By using an Amazon RDS DB instance in a Multi-AZ configuration, the
database is automatically replicated across multiple Availability Zones, ensuring that the database is highly available and can withstand the failure of a single
Availability Zone. This provides fault tolerance and avoids any single points of failure.
NEW QUESTION 59
- (Topic 2)
An ecommerce company hosts its analytics application in the AWS Cloud. The application generates about 300 MB of data each month. The data is stored in
JSON format. The company is evaluating a disaster recovery solution to back up the data. The data must be accessible in milliseconds if it is needed, and the data
must be kept for 30 days.
Which solution meets these requirements MOST cost-effectively?
Answer: C
Explanation:
This solution meets the requirements of a disaster recovery solution to back up the data that is generated by an analytics application, stored in JSON format, and
must be accessible in milliseconds if it is needed. Amazon S3 Standard is a durable and scalable storage class for frequently accessed data. It can store any
amount of data and provide high availability and performance. It can also support millisecond access time for data retrieval.
Option A is incorrect because Amazon OpenSearch Service (Amazon Elasticsearch Service) is a search and analytics service that can index and query data, but it
is not a backup solution for data stored in JSON format. Option B is incorrect because Amazon S3 Glacier is a low-cost storage class for data archiving and long-
term backup, but it does not support millisecond access time for data retrieval. Option D is incorrect because Amazon RDS for PostgreSQL is a relational database
service that can store and query structured data, but it is not a backup solution for data stored in JSON format.
References:
? https://aws.amazon.com/s3/storage-classes/
? https://aws.amazon.com/s3/faqs/#Durability_and_data_protection
NEW QUESTION 64
- (Topic 2)
A company is building a web-based application running on Amazon EC2 instances in multiple Availability Zones. The web application will provide access to a
repository of text documents totaling about 900 TB in size. The company anticipates that the web application will experience periods of high demand. A solutions
architect must ensure that the storage component for the text documents can scale to meet the demand of the application at all times. The company is concerned
about the overall cost of the solution.
Which storage solution meets these requirements MOST cost-effectively?
Answer: D
Explanation:
Amazon S3 is cheapest and can be accessed from anywhere.
NEW QUESTION 65
- (Topic 2)
A company runs an application using Amazon ECS. The application creates resized versions of an original image and then makes Amazon S3 API calls to store the resized images in Amazon S3.
How can a solutions architect ensure that the application has permission to access Amazon S3?
A. Update the S3 role in AWS IAM to allow read/write access from Amazon ECS, and then relaunch the container.
B. Create an IAM role with S3 permissions, and then specify that role as the taskRoleArn in the task definition.
C. Create a security group that allows access from Amazon ECS to Amazon S3, and update the launch configuration used by the ECS cluster.
D. Create an IAM user with S3 permissions, and then relaunch the Amazon EC2 instances for the ECS cluster while logged in as this account.
Answer: B
Explanation:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ecs-taskdefinition.html
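A sketch of option B: register a task definition whose taskRoleArn points to an IAM role that carries the S3 permissions, so containers in the task call S3 without embedded credentials. The role ARN, image URI, and family name are placeholders.

# Sketch: ECS task definition that assigns an S3-capable IAM role to the task.
import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="image-resizer",
    taskRoleArn="arn:aws:iam::123456789012:role/ImageResizerS3Role",  # placeholder role with S3 access
    containerDefinitions=[{
        "name": "resizer",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/resizer:latest",  # placeholder image
        "memory": 512,
        "essential": True,
    }],
)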
NEW QUESTION 66
- (Topic 2)
A company has an ecommerce checkout workflow that writes an order to a database and calls a service to process the payment. Users are experiencing timeouts
during the checkout process. When users resubmit the checkout form, multiple unique orders are created for the same desired transaction.
How should a solutions architect refactor this workflow to prevent the creation of multiple orders?
A. Configure the web application to send an order message to Amazon Kinesis Data Firehose.
B. Set the payment service to retrieve the message from Kinesis Data Firehose and process the order.
C. Create a rule in AWS CloudTrail to invoke an AWS Lambda function based on the logged application path request. Use Lambda to query the database, call the payment service, and pass in the order information.
D. Store the order in the database.
E. Send a message that includes the order number to Amazon Simple Notification Service (Amazon SNS). Set the payment service to poll Amazon SNS,
F. retrieve the message, and process the order.
Answer: D
Explanation:
This approach ensures that the order creation and payment processing steps are separate and atomic. By sending the order information to an SQS FIFO queue,
the payment service can process the order one at a time and in the order they were received. If the payment service is unable to process an order, it can be retried
later, preventing the creation of multiple orders. The deletion of the message from the queue after it is processed will prevent the same message from being
processed multiple times.
NEW QUESTION 68
- (Topic 2)
A new employee has joined a company as a deployment engineer. The deployment engineer will be using AWS CloudFormation templates to create multiple AWS
resources. A solutions architect wants the deployment engineer to perform job activities while following the principle of least privilege.
Which steps should the solutions architect do in conjunction to reach this goal? (Select two.)
A. Have the deployment engineer use AWS account root user credentials for performing AWS CloudFormation stack operations.
B. Create a new IAM user for the deployment engineer and add the IAM user to a group that has the PowerUsers IAM policy attached.
C. Create a new IAM user for the deployment engineer and add the IAM user to a group that has the AdministratorAccess IAM policy attached.
D. Create a new IAM user for the deployment engineer and add the IAM user to a group that has an IAM policy that allows AWS CloudFormation actions only.
E. Create an IAM role for the deployment engineer to explicitly define the permissions specific to the AWS CloudFormation stack and launch stacks using that IAM role.
Answer: DE
Explanation:
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users.html
NEW QUESTION 71
- (Topic 2)
A company is running an online transaction processing (OLTP) workload on AWS. This workload uses an unencrypted Amazon RDS DB instance in a Multi-AZ
deployment. Daily database snapshots are taken from this instance.
What should a solutions architect do to ensure the database and snapshots are always encrypted moving forward?
Answer: A
Explanation:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_RestoreFromSnapshot.html#USER_RestoreFromSnapshot.CON
Under "Encrypt unencrypted resources" - https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html
NEW QUESTION 76
- (Topic 2)
A company is planning to move its data to an Amazon S3 bucket. The data must be encrypted when it is stored in the S3 bucket. Additionally, the encryption key
must be automatically rotated every year.
Which solution will meet these requirements with the LEAST operational overhead?
Answer: B
Explanation:
SSE-S3 - is free and uses AWS owned CMKs (CMK = Customer Master Key). The encryption key is owned and managed by AWS, and is shared among many
accounts. Its rotation is automatic with time that varies as shown in the table here. The time is not explicitly defined.
SSE-KMS - has two flavors:
AWS managed CMK. This is a free CMK generated only for your account. You can only view its policies and audit usage, but not manage it. Rotation is automatic: once per 1095 days (3 years).
Customer managed CMK. This uses your own key that you create and can manage. Rotation is not enabled by default. But if you enable it, it will be automatically
rotated every 1 year. This variant can also use an imported key material by you. If you create such key with an imported material, there is no automated rotation.
Only manual rotation.
SSE-C - customer provided key. The encryption key is fully managed by you outside of AWS. AWS will not rotate it.
This solution meets the requirements of moving data to an Amazon S3 bucket, encrypting the data when it is stored in the S3 bucket, and automatically rotating the
encryption key every year with the least operational overhead. AWS Key Management Service (AWS KMS) is a service that enables you to create and manage
encryption keys for your data. A customer managed key is a symmetric encryption key that you create and manage in AWS KMS. You can enable automatic key
rotation for a customer managed key, which means that AWS KMS generates new cryptographic material for the key every year. You can set the S3 bucket’s
default encryption behavior to use the customer managed KMS key, which means that any object that is uploaded to the bucket without specifying an encryption
method will be encrypted with that key.
Option A is incorrect because using server-side encryption with Amazon S3 managed encryption keys (SSE-S3) does not allow you to control or manage the
encryption keys. SSE-S3 uses a unique key for each object, and encrypts that key with a master key that is regularly rotated by S3. However, you cannot enable or
disable key rotation for SSE-S3 keys, or specify the rotation interval. Option C is incorrect because manually rotating the KMS key every year can increase the
operational overhead and complexity, and it may not meet the requirement of rotating the key every year if you forget or delay the rotation
process. Option D is incorrect because encrypting the data with customer key material before moving the data to the S3 bucket can increase the operational
overhead and complexity, and it may not provide consistent encryption for all objects in the bucket. Creating a KMS key without key material and importing the
customer key material into the KMS key can enable you to use your own source of random bits to generate your KMS keys, but it does not support automatic key
rotation.
References:
? https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html
? https://docs.aws.amazon.com/kms/latest/developerguide/rotate-keys.html
? https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucket-encryption.html
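A sketch of the bucket side of the chosen approach: make a customer managed KMS key (with automatic annual rotation enabled, as in the earlier KMS sketch) the bucket's default encryption, so every new object is encrypted with it without the uploader specifying anything. The bucket name and key ARN are placeholders.

# Sketch: set a customer managed KMS key as the bucket's default encryption.
import boto3

s3 = boto3.client("s3")

# Placeholder ARN of a customer managed key that has automatic rotation enabled.
KEY_ARN = "arn:aws:kms:us-east-1:123456789012:key/11111111-2222-3333-4444-555555555555"

s3.put_bucket_encryption(
    Bucket="company-data-bucket",  # placeholder bucket
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": KEY_ARN,
            },
            "BucketKeyEnabled": True,  # reduces the number of KMS requests from S3
        }]
    },
)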
NEW QUESTION 77
- (Topic 2)
A company has a highly dynamic batch processing job that uses many Amazon EC2 instances to complete it. The job is stateless in nature, can be started and
stopped at any given time with no negative impact, and typically takes upwards of 60 minutes total to complete. The company has asked a solutions architect to
design a scalable and cost- effective solution that meets the requirements of the job.
What should the solutions architect recommend?
Answer: A
Explanation:
EC2 Spot Instances allow users to bid on spare Amazon EC2 computing capacity and can be a cost-effective solution for stateless, interruptible workloads that can
be started and stopped at any time. Since the batch processing job is stateless, can be started and stopped at any time, and typically takes upwards of 60 minutes
to complete, EC2 Spot Instances would be a good fit for this workload.
NEW QUESTION 82
- (Topic 2)
A company runs a web-based portal that provides users with global breaking news, local alerts, and weather updates. The portal delivers each user a personalized
view by using a mixture of static and dynamic content. Content is served over HTTPS through an API server running on an Amazon EC2 instance behind an
Application Load Balancer (ALB). The company wants the portal to provide this content to its users across the world as quickly as possible.
How should a solutions architect design the application to ensure the LEAST amount of latency for all users?
Answer: A
Explanation:
https://aws.amazon.com/blogs/networking-and-content-delivery/deliver-your-apps-dynamic-content-using-amazon-cloudfront-getting-started-template/
NEW QUESTION 87
- (Topic 2)
A hospital wants to create digital copies for its large collection of historical written records. The hospital will continue to add hundreds of new documents each day.
The hospital's data team will scan the documents and will upload the documents to the AWS Cloud.
A solutions architect must implement a solution to analyze the documents, extract the medical information, and store the documents so that an application can run
SQL queries on the data. The solution must maximize scalability and operational efficiency.
Which combination of steps should the solutions architect take to meet these requirements? (Select TWO.)
A. Write the document information to an Amazon EC2 instance that runs a MySQL database.
B. Write the document information to an Amazon S3 bucket.
C. Use Amazon Athena to query the data.
D. Create an Auto Scaling group of Amazon EC2 instances to run a custom application that processes the scanned files and extracts the medical information.
E. Create an AWS Lambda function that runs when new documents are uploaded.
F. Use Amazon Rekognition to convert the documents to raw text.
G. Use Amazon Transcribe Medical to detect and extract relevant medical information from the text.
H. Create an AWS Lambda function that runs when new documents are uploaded.
I. Use Amazon Textract to convert the documents to raw text.
J. Use Amazon Comprehend Medical to detect and extract relevant medical information from the text.
Answer: BE
Explanation:
This solution meets the requirements of creating digital copies for a large collection of historical written records, analyzing the documents, extracting the medical
information, and storing the documents so that an application can run SQL queries on the data. Writing the document information to an Amazon S3 bucket can
provide scalable and durable storage for the scanned files. Using Amazon Athena to query the data can provide serverless and interactive SQL analysis on data
stored in S3. Creating an AWS Lambda function that runs when new documents are uploaded can provide event-driven and serverless processing of the scanned
files. Using Amazon Textract to convert the documents to raw text can provide
accurate optical character recognition (OCR) and extraction of structured data such as tables and forms from documents using artificial intelligence (AI). Using
Amazon Comprehend Medical to detect and extract relevant medical information from the text can provide natural language processing (NLP) service that uses
machine learning that has been pre-trained to understand and extract health data from medical text.
Option A is incorrect because writing the document information to an Amazon EC2 instance that runs a MySQL database can increase the infrastructure overhead
and complexity, and it may not be able to handle large volumes of data. Option C is incorrect because creating an Auto Scaling group of Amazon EC2 instances to
run a custom application that processes the scanned files and extracts the medical information can increase the infrastructure overhead and complexity, and it may
not be able to leverage existing AI and NLP services such as Textract and Comprehend Medical. Option D is incorrect because using Amazon Rekognition to
convert the documents to raw text can provide image and video analysis, but it does not support OCR or extraction of structured data from documents. Using
Amazon Transcribe Medical to detect and extract relevant medical information from the text can provide speech-to-text transcription service for medical
conversations, but it does not support text analysis or extraction of health data from medical text.
References:
? https://aws.amazon.com/s3/
? https://aws.amazon.com/athena/
? https://aws.amazon.com/lambda/
? https://aws.amazon.com/textract/
? https://aws.amazon.com/comprehend/medical/
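A hedged sketch of the Lambda side of this answer: triggered by an S3 upload, it runs Textract OCR on the scanned image, extracts medical entities with Comprehend Medical, and writes the structured result back to S3 where Athena can query it. Bucket names, prefixes, and the truncation limit are placeholders; multi-page PDFs would need the asynchronous Textract APIs instead of detect_document_text.

# Sketch: Lambda handler that OCRs a scanned document and extracts medical entities.
import json
import boto3

textract = boto3.client("textract")
medical = boto3.client("comprehendmedical")
s3 = boto3.client("s3")

def handler(event, context):
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    # OCR the scanned document (single-page image) directly from S3.
    ocr = textract.detect_document_text(
        Document={"S3Object": {"Bucket": bucket, "Name": key}}
    )
    text = " ".join(b["Text"] for b in ocr["Blocks"] if b["BlockType"] == "LINE")

    # Extract medical entities (conditions, medications, and so on).
    entities = medical.detect_entities_v2(Text=text[:20000])["Entities"]  # stay within request size limits

    # Store the structured output where Athena can query it.
    s3.put_object(
        Bucket="document-analytics-results",  # placeholder results bucket
        Key="extracted/" + key + ".json",
        Body=json.dumps({"source": key, "entities": entities}).encode("utf-8"),
    )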
NEW QUESTION 91
- (Topic 2)
A company owns an asynchronous API that is used to ingest user requests and, based on the request type, dispatch requests to the appropriate microservice for
processing. The company is using Amazon API Gateway to deploy the API front end, and an AWS Lambda function that invokes Amazon DynamoDB to store user
requests before dispatching them to the processing microservices.
The company provisioned as much DynamoDB throughput as its budget allows, but the company is still experiencing availability issues and is losing user requests.
What should a solutions architect do to address this issue without impacting existing users?
Answer: D
Explanation:
By using an SQS queue and Lambda, the solutions architect can decouple the API front end from the processing microservices and improve the overall scalability
and availability of the system. The SQS queue acts as a buffer, allowing the API front end to continue accepting user requests even if the processing microservices
are experiencing high workloads or are temporarily unavailable. The Lambda function can then retrieve requests from the SQS queue and write them to
DynamoDB, ensuring that all user requests are stored and processed. This approach allows the company to scale the processing microservices independently
from the API front end, ensuring that the API remains available to users even during periods of high demand.
NEW QUESTION 94
- (Topic 2)
A company uses AWS Organizations to create dedicated AWS accounts for each business unit to manage each business unit's account independently upon
request. The root email recipient missed a notification that was sent to the root user email address of one account. The company wants to ensure that all future
notifications are not missed. Future notifications must be limited to account administrators.
Which solution will meet these requirements?
A. Configure the company's email server to forward notification email messages that are sent to the AWS account root user email address to all users in the
organization.
B. Configure all AWS account root user email addresses as distribution lists that go to a few administrators who can respond to alerts.
C. Configure AWS account alternate contacts in the AWS Organizations console or programmatically.
D. Configure all AWS account root user email messages to be sent to one administrator who is responsible for monitoring alerts and forwarding those alerts to the
appropriate groups.
E. Configure all existing AWS accounts and all newly created accounts to use the same root user email address.
F. Configure AWS account alternate contacts in the AWS Organizations console or programmatically.
Answer: B
Explanation:
Use a group email address for the management account's root user: https://docs.aws.amazon.com/organizations/latest/userguide/orgs_best-practices_mgmt-acct.html#best-practices_mgmt-acct_email-address
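Alternate contacts can also be configured programmatically. Below is a minimal boto3 sketch using the AWS Account Management API, assuming it runs from the Organizations management account with account:PutAlternateContact permission; the account ID and contact details are placeholders.

```python
import boto3

account = boto3.client("account")

# Set billing, operations, and security contacts for a member account so that
# notifications reach the administrators instead of only the root mailbox.
for contact_type in ("BILLING", "OPERATIONS", "SECURITY"):
    account.put_alternate_contact(
        AccountId="111122223333",               # placeholder member account ID
        AlternateContactType=contact_type,
        Name="Cloud Admin Team",
        Title="Administrator",
        EmailAddress="aws-admins@example.com",
        PhoneNumber="+1-555-0100",
    )
```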
NEW QUESTION 99
- (Topic 2)
A company runs its ecommerce application on AWS. Every new order is published as a message in a RabbitMQ queue that runs on an Amazon EC2 instance in a
single Availability Zone. These messages are processed by a different application that runs on a separate EC2 instance. This application stores the details in a
PostgreSQL database on another EC2 instance. All the EC2 instances are in the same Availability Zone.
The company needs to redesign its architecture to provide the highest availability with the least operational overhead.
What should a solutions architect do to meet these requirements?
A. Migrate the queue to a redundant pair (active/standby) of RabbitMQ instances on Amazon MQ. Create a Multi-AZ Auto Scaling group for EC2 instances that host the application. Create another Multi-AZ Auto Scaling group for EC2 instances that host the PostgreSQL database.
B. Migrate the queue to a redundant pair (active/standby) of RabbitMQ instances on Amazon MQ. Create a Multi-AZ Auto Scaling group for EC2 instances that host the application. Migrate the database to run on a Multi-AZ deployment of Amazon RDS for PostgreSQL.
C. Create a Multi-AZ Auto Scaling group for EC2 instances that host the RabbitMQ queue. Create another Multi-AZ Auto Scaling group for EC2 instances that host the application. Migrate the database to run on a Multi-AZ deployment of Amazon RDS for PostgreSQL.
D. Create a Multi-AZ Auto Scaling group for EC2 instances that host the RabbitMQ queue. Create another Multi-AZ Auto Scaling group for EC2 instances that host the application. Create a third Multi-AZ Auto Scaling group for EC2 instances that host the PostgreSQL database.
Answer: B
Explanation:
Migrating the queue to Amazon MQ reduces queue-management overhead, so options C and D are dismissed. Deciding between A and B means choosing between an Auto Scaling group of EC2 instances and Amazon RDS for PostgreSQL (both Multi-AZ) for the database. The RDS option has less operational overhead because the required tooling and software are provided as a managed service. Consider, for instance, the effort needed to add a node such as a read replica to a self-managed database.
https://docs.aws.amazon.com/amazon-mq/latest/developer-guide/active-standby-broker-deployment.html https://aws.amazon.com/rds/postgresql/
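For the database half of the chosen option, a Multi-AZ RDS for PostgreSQL instance can be provisioned with a single API call. A minimal boto3 sketch follows; the identifier, instance class, storage size, and credentials are illustrative placeholders, and a real deployment would pull credentials from a secrets store.

```python
import boto3

rds = boto3.client("rds")

rds.create_db_instance(
    DBInstanceIdentifier="orders-postgres",
    Engine="postgres",
    DBInstanceClass="db.m6g.large",
    AllocatedStorage=100,
    MasterUsername="orders_admin",
    MasterUserPassword="change-me-please",   # placeholder; use a managed secret in practice
    MultiAZ=True,                            # synchronous standby in a second Availability Zone
    StorageEncrypted=True,
)
```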
A. Host the application on AWS Lambda. Integrate the application with Amazon API Gateway.
B. Host the application with AWS Amplify. Connect the application to an Amazon API Gateway API that is integrated with AWS Lambda.
C. Host the application on Amazon EC2 instances. Set up an Application Load Balancer with EC2 instances in an Auto Scaling group as targets.
D. Host the application on Amazon Elastic Container Service (Amazon ECS). Set up an Application Load Balancer with Amazon ECS as the target.
Answer: D
Explanation:
https://aws.amazon.com/blogs/compute/microservice-delivery-with-amazon-ecs-and-application-load-balancers/
Answer: A
Explanation:
SQS FIFO queues have limited throughput (300 messages per second without batching, or 3,000 messages per second with batching of up to 10 messages per batch operation), do not allow duplicate messages in the queue (exactly-once delivery), preserve message order (FIFO), and require the queue name to end with .fifo.
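A minimal boto3 sketch of creating and writing to such a FIFO queue; the queue name, message body, and message group ID are placeholders.

```python
import boto3

sqs = boto3.client("sqs")

# Queue name must end in .fifo; content-based deduplication lets SQS hash the
# message body instead of requiring an explicit deduplication ID.
queue_url = sqs.create_queue(
    QueueName="orders.fifo",
    Attributes={
        "FifoQueue": "true",
        "ContentBasedDeduplication": "true",
    },
)["QueueUrl"]

sqs.send_message(
    QueueUrl=queue_url,
    MessageBody='{"orderId": "1001"}',
    MessageGroupId="customer-42",   # ordering is preserved per message group
)
```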
A. Use Amazon DynamoDB with auto scaling. Use on-demand backups and Amazon DynamoDB Streams.
B. Use Amazon Redshift. Configure concurrency scaling. Activate audit logging. Perform database snapshots every 4 hours.
C. Use Amazon RDS with Provisioned IOPS. Activate the database auditing parameter. Perform database snapshots every 5 hours.
Answer: A
Explanation:
This solution meets the requirements of a customer-facing application that has a clearly defined access pattern throughout the year and a variable number of reads
and writes that depend on the time of year. Amazon DynamoDB is a fully managed NoSQL database service that can handle any level of request traffic and data
size. DynamoDB auto scaling can automatically adjust the provisioned read and write capacity based on the actual workload. DynamoDB on-demand backups can
create full backups of the tables for data protection and archival purposes. DynamoDB Streams can capture a time-ordered sequence of item-level modifications in
the tables for audit purposes.
Option B is incorrect because Amazon Redshift is a data warehouse service that is designed for analytical workloads, not for customer-facing applications. Option
C is incorrect because Amazon RDS with Provisioned IOPS can provide consistent performance for relational databases, but it may not be able to handle
unpredictable spikes in traffic and data size. Option D is incorrect because Amazon Aurora MySQL with auto scaling can provide high performance and availability
for relational databases, but it does not support audit logging as a parameter.
References:
? https://aws.amazon.com/dynamodb/
? https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/AutoScaling.html
? https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/BackupRestore.html
? https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.html
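For reference, a minimal boto3 sketch of enabling DynamoDB auto scaling on a hypothetical table's write capacity through Application Auto Scaling; the table name and capacity limits are illustrative.

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

# Register the table's write capacity as a scalable target, then attach a
# target tracking policy that keeps consumed capacity near 70 percent.
autoscaling.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/CustomerOrders",            # hypothetical table name
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    MinCapacity=5,
    MaxCapacity=500,
)

autoscaling.put_scaling_policy(
    PolicyName="orders-write-scaling",
    ServiceNamespace="dynamodb",
    ResourceId="table/CustomerOrders",
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBWriteCapacityUtilization"
        },
    },
)
```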
A. Create an Amazon Route 53 geolocation routing policy to route requests to one of the two NLBs. Create an Amazon CloudFront distribution. Use the Route 53 record as the distribution's origin.
B. Create a standard accelerator in AWS Global Accelerator. Create endpoint groups in us-west-2 and eu-west-1. Add the two NLBs as endpoints for the endpoint groups.
C. Attach Elastic IP addresses to the six EC2 instances. Create an Amazon Route 53 geolocation routing policy to route requests to one of the six EC2 instances. Create an Amazon CloudFront distribution. Use the Route 53 record as the distribution's origin.
D. Replace the two NLBs with two Application Load Balancers (ALBs). Create an Amazon Route 53 latency routing policy to route requests to one of the two ALBs. Create an Amazon CloudFront distribution. Use the Route 53 record as the distribution's origin.
Answer: B
Explanation:
For standard accelerators, Global Accelerator uses the AWS global network to route traffic to the optimal regional endpoint based on health, client location, and
policies that you configure, which increases the availability of your applications. Endpoints for standard accelerators can be Network Load Balancers, Application
Load Balancers, Amazon EC2 instances, or Elastic IP addresses that are located in one AWS Region or multiple Regions.
https://docs.aws.amazon.com/global-accelerator/latest/dg/what-is-global-accelerator.html
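A minimal boto3 sketch of that setup; the accelerator name and NLB ARNs are placeholders, and the Global Accelerator API is always called through the us-west-2 endpoint regardless of where the endpoints live.

```python
import boto3

ga = boto3.client("globalaccelerator", region_name="us-west-2")

accelerator = ga.create_accelerator(Name="game-api", IpAddressType="IPV4", Enabled=True)
listener = ga.create_listener(
    AcceleratorArn=accelerator["Accelerator"]["AcceleratorArn"],
    Protocol="TCP",
    PortRanges=[{"FromPort": 443, "ToPort": 443}],
)

# One endpoint group per Region, each pointing at that Region's NLB (placeholder ARNs).
nlbs = {
    "us-west-2": "arn:aws:elasticloadbalancing:us-west-2:111122223333:loadbalancer/net/nlb-usw2/abc",
    "eu-west-1": "arn:aws:elasticloadbalancing:eu-west-1:111122223333:loadbalancer/net/nlb-euw1/def",
}
for region, nlb_arn in nlbs.items():
    ga.create_endpoint_group(
        ListenerArn=listener["Listener"]["ListenerArn"],
        EndpointGroupRegion=region,
        EndpointConfigurations=[{"EndpointId": nlb_arn, "Weight": 128}],
    )
```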
A. Create a separate application tier using EC2 instances dedicated to email processing.
B. Configure the web instance to send email through Amazon Simple Email Service (Amazon SES).
C. Configure the web instance to send email through Amazon Simple Notification Service (Amazon SNS).
D. Create a separate application tier using EC2 instances dedicated to email processing. Place the instances in an Auto Scaling group.
Answer: B
Explanation:
Amazon SES is a cost-effective and scalable email service that enables businesses to send and receive email using their own email addresses and domains.
Configuring the web instance to send email through Amazon SES is a simple and effective solution that can reduce the time spent resolving complex email
delivery issues and minimize operational overhead.
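A minimal boto3 sketch of sending a message through Amazon SES, assuming the sender address or domain has already been verified in SES; all addresses are placeholders.

```python
import boto3

ses = boto3.client("ses")

ses.send_email(
    Source="orders@example.com",                       # verified sender (placeholder)
    Destination={"ToAddresses": ["customer@example.com"]},
    Message={
        "Subject": {"Data": "Your order has shipped"},
        "Body": {"Text": {"Data": "Order 1001 is on its way."}},
    },
)
```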
A. Store the static files on Amazon S3. Use Amazon CloudFront to cache objects at the edge.
B. Store the static files on Amazon S3. Use Amazon ElastiCache to cache objects at the edge.
C. Store the server-side code on Amazon Elastic File System (Amazon EFS). Mount the EFS volume on each EC2 instance to share the files.
D. Store the server-side code on Amazon FSx for Windows File Server. Mount the FSx for Windows File Server volume on each EC2 instance to share the files.
E. Store the server-side code on a General Purpose SSD (gp2) Amazon Elastic Block Store (Amazon EBS) volume. Mount the EBS volume on each EC2 instance to share the files.
Answer: AD
Explanation:
Option A is correct because ElastiCache, although well suited to use cases such as leaderboards, does not cache objects at edge locations, whereas CloudFront does. Option D is correct because FSx for Windows File Server offers higher performance for low-latency needs. https://www.techtarget.com/searchaws/tip/Amazon-FSx-vs-EFS-Compare-the-AWS-file-services "FSx is built for high performance and submillisecond latency using solid-state drive storage volumes. This design enables users to select storage capacity and latency independently. Thus, even a subterabyte file system can have 256 Mbps or higher throughput and support volumes up to 64 TB."
Amazon S3 is an object storage service that can store static files such as images, videos, documents, etc. Amazon EFS is a file storage service that can store files
in a hierarchical structure and supports NFS protocol. Amazon FSx for Windows File Server is a file storage service that can store files in a hierarchical structure
and supports SMB protocol. Amazon EBS is a block storage service that can store data in fixed-size blocks and attach to EC2 instances.
Based on these definitions, the combination of steps that should be taken to meet the requirements are:
* A. Store the static files on Amazon S3. Use Amazon CloudFront to cache objects at the edge. D. Store the server-side code on Amazon FSx for Windows File
Server. Mount the FSx for Windows File Server volume on each EC2 instance to share the files.
Answer: C
Explanation:
https://docs.aws.amazon.com/cli/latest/reference/transfer/describe-server.html
Answer: B
Explanation:
Amazon Transcribe now supports speaker labeling for streaming transcription. Amazon Transcribe is an automatic speech recognition (ASR) service that makes it
easy for you to convert speech-to-text. In live audio transcription, each stream of audio may contain multiple speakers. Now you can conveniently turn on the ability
to label speakers, thus helping to identify who is saying what in the output transcript. https://aws.amazon.com/about-aws/whats-new/2020/08/amazon-transcribe-supports-speaker-labeling-streaming-transcription/
A. Launch an Amazon EC2 instance to serve as the HTTPS endpoint and to process the messages. Configure the EC2 instance to save the results to an Amazon S3 bucket.
B. Deploy Amazon API Gateway as an HTTPS endpoint and AWS Lambda to process and save the messages to an Amazon DynamoDB table.
Answer: B
Explanation:
Deploy Amazon API Gateway as an HTTPS endpoint and AWS Lambda to process and save the messages to an Amazon DynamoDB table. This option provides
a highly available and scalable solution that can easily handle large amounts of data. It also integrates with other AWS services, making it easier to analyze and
visualize the data for the security team.
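A minimal sketch of such a Lambda handler behind an API Gateway proxy integration; the DynamoDB table name and the message fields (messageId, timestamp) are hypothetical placeholders.

```python
import json
import os

import boto3

TABLE = boto3.resource("dynamodb").Table(os.environ.get("MESSAGES_TABLE", "SecurityMessages"))


def handler(event, context):
    """Invoked by API Gateway (Lambda proxy integration) over HTTPS."""
    message = json.loads(event["body"])
    # Persist the raw message so the security team can query and visualize it later.
    TABLE.put_item(Item={
        "messageId": message["messageId"],
        "receivedAt": message["timestamp"],
        "detail": message,
    })
    return {"statusCode": 200, "body": json.dumps({"status": "stored"})}
```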
A. Use AWS Auto Scaling to adjust the ALB capacity based on request rate
B. Use AWS Auto Scaling to scale the capacity of the VPC internet gateway
C. Launch the EC2 instances in multiple AWS Regions to distribute the load across Regions
D. Use a target tracking scaling policy to scale the Auto Scaling group based on instance CPU utilization
E. Use scheduled scaling to change the Auto Scaling group minimum, maximum, and desired capacity to zero for weekends. Revert to the default values at the start of the week.
Answer: DE
Explanation:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scaling-target-tracking.html#target-tracking-choose-metrics
A target tracking scaling policy is a type of dynamic scaling policy that adjusts the capacity of an Auto Scaling group based on a specified metric and a target value. A target tracking scaling policy can automatically scale out or scale in the Auto Scaling group to keep the actual metric value at or near the target value. A target tracking scaling policy is suitable for scenarios where the load on the application changes frequently and unpredictably, such as during working hours.
To meet the requirements of the scenario, the solutions architect should use a target tracking scaling policy to scale the Auto Scaling group based on instance CPU utilization. Instance CPU utilization is a common metric that reflects the demand on the application. The solutions architect should specify a target value that represents the ideal average CPU utilization level for the application, such as 50 percent. The Auto Scaling group will then scale out or scale in to maintain that level of CPU utilization.
Scheduled scaling is a type of scaling policy that performs scaling actions based on a date and time. Scheduled scaling is suitable for scenarios where the load on the application changes periodically and predictably, such as on weekends.
To meet the requirements of the scenario, the solutions architect should also use scheduled scaling to change the Auto Scaling group minimum, maximum, and desired capacity to zero for weekends. This way, the Auto Scaling group will terminate all instances on weekends, when they are not required to operate. The solutions architect should also revert to the default values at the start of the week so that the Auto Scaling group can resume normal operation.
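A minimal boto3 sketch combining both policies for a hypothetical Auto Scaling group named app-asg; the cron expressions, capacity values, and target value are illustrative.

```python
import boto3

autoscaling = boto3.client("autoscaling")
ASG = "app-asg"  # hypothetical Auto Scaling group name

# Target tracking: keep average CPU near 50 percent during the week.
autoscaling.put_scaling_policy(
    AutoScalingGroupName=ASG,
    PolicyName="cpu-target-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 50.0,
    },
)

# Scheduled scaling: drop to zero capacity on Saturday and restore on Monday (UTC).
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName=ASG, ScheduledActionName="weekend-off",
    Recurrence="0 0 * * 6", MinSize=0, MaxSize=0, DesiredCapacity=0,
)
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName=ASG, ScheduledActionName="weekday-on",
    Recurrence="0 6 * * 1", MinSize=2, MaxSize=10, DesiredCapacity=2,
)
```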
Answer: BD
Explanation:
To resolve the issue of slow page loads for a rapidly growing e-commerce website hosted on AWS, a solutions architect can take the following two actions:
* 1. Set up an Amazon CloudFront distribution
* 2. Create a read replica for the RDS DB instance
Configuring an Amazon Redshift cluster is not relevant to this issue since Redshift is a data warehousing service and is typically used for the analytical processing
of large amounts of data.
Hosting the dynamic web content in Amazon S3 may not necessarily improve performance since S3 is an object storage service, not a web application server.
While S3 can be used to host static web content, it may not be suitable for hosting dynamic web content since S3 doesn't support server-side scripting or
processing.
Configuring a Multi-AZ deployment for the RDS DB instance will improve high availability but may not necessarily improve performance.
B. Create a NAT gateway, and place it in a public subnet. Configure the private subnet route table to use the NAT gateway as the default route.
C. Create a NAT instance, and place it in the same subnet where the EC2 instance is located. Configure the private subnet route table to use the NAT instance as the default route.
D. Create an internet gateway, and attach it to the VPC. Create a NAT instance, and place it in the same subnet where the EC2 instance is located. Configure the private subnet route table to use the internet gateway as the default route.
Answer: B
Explanation:
This approach allows the EC2 instance to access the internet and download the monthly security updates while remaining in a private subnet. Creating a NAT gateway and placing it in a public subnet lets instances in the private subnet reach the internet through the NAT gateway. Configuring the private subnet route table to use the NAT gateway as the default route ensures that all outbound traffic is directed through the NAT gateway, so the EC2 instance can access the internet while the private subnet stays closed to inbound connections.
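A minimal boto3 sketch of that configuration; the public subnet ID and the private subnet's route table ID are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Allocate an Elastic IP and create the NAT gateway in the public subnet.
eip = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0pub1234567890abc",          # placeholder public subnet
    AllocationId=eip["AllocationId"],
)
ec2.get_waiter("nat_gateway_available").wait(
    NatGatewayIds=[nat["NatGateway"]["NatGatewayId"]]
)

# Send the private subnet's internet-bound traffic through the NAT gateway.
ec2.create_route(
    RouteTableId="rtb-0priv1234567890abc",        # placeholder private route table
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat["NatGateway"]["NatGatewayId"],
)
```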
Answer: AD
Explanation:
RDS storage autoscaling will ease the storage issues, and migrating Oracle PL/SQL code to Aurora is cumbersome. Aurora also scales storage automatically by default.
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_PIOPS.StorageTypes.html#USER_PIOPS.Autoscaling
A. Write an AWS Lambda function that schedules nightly snapshots of the application's EBS volumes and copies the snapshots to a different Region.
B. Create a backup plan by using AWS Backup to perform nightly backups. Copy the backups to another Region. Add the application's EC2 instances as resources.
C. Create a backup plan by using AWS Backup to perform nightly backups. Copy the backups to another Region. Add the application's EBS volumes as resources.
D. Write an AWS Lambda function that schedules nightly snapshots of the application's EBS volumes and copies the snapshots to a different Availability Zone.
Answer: B
Explanation:
The most operationally efficient solution is to create a backup plan by using AWS Backup to perform nightly backups and to copy the backups to another Region. Adding the application's EC2 instances as resources ensures that the instance configuration and the attached EBS volume data are backed up, and copying the backups to another Region ensures that the application is recoverable in a different AWS Region.
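A minimal boto3 sketch of such a backup plan with a cross-Region copy action; the vault names and ARNs, the IAM role, and the EC2 instance ARN are placeholders.

```python
import boto3

backup = boto3.client("backup")

# Nightly rule that also copies each recovery point to a vault in another Region.
plan = backup.create_backup_plan(BackupPlan={
    "BackupPlanName": "nightly-app",
    "Rules": [{
        "RuleName": "nightly",
        "TargetBackupVaultName": "Default",
        "ScheduleExpression": "cron(0 3 * * ? *)",   # every night at 03:00 UTC
        "Lifecycle": {"DeleteAfterDays": 35},
        "CopyActions": [{
            "DestinationBackupVaultArn":
                "arn:aws:backup:us-west-2:111122223333:backup-vault:Default"
        }],
    }],
})

# Assign the application's EC2 instance to the plan.
backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "app-instances",
        "IamRoleArn": "arn:aws:iam::111122223333:role/service-role/AWSBackupDefaultServiceRole",
        "Resources": ["arn:aws:ec2:us-east-1:111122223333:instance/i-0abc1234567890def"],
    },
)
```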
A. Use the aws ec2 register-image command to create an AMI from a snapshot. Use AWS Step Functions to replace the AMI in the Auto Scaling group.
B. Enable Amazon Elastic Block Store (Amazon EBS) fast snapshot restore on a snapshot. Provision an AMI by using the snapshot. Replace the AMI in the Auto Scaling group with the new AMI.
C. Enable AMI creation and define lifecycle rules in Amazon Data Lifecycle Manager (Amazon DLM). Create an AWS Lambda function that modifies the AMI in the Auto Scaling group.
D. Use Amazon EventBridge (Amazon CloudWatch Events) to invoke AWS Backup lifecycle policies that provision AMIs. Configure Auto Scaling group capacity limits as an event source in EventBridge.
Answer: B
Explanation:
Enabling Amazon Elastic Block Store (Amazon EBS) fast snapshot restore on a snapshot allows you to quickly create a new Amazon Machine Image (AMI) from a
snapshot, which can help reduce the initialization latency when provisioning new instances. Once the AMI is provisioned, you can replace the AMI in the Auto
Scaling group with the new AMI. This will ensure that new instances are launched from the updated AMI and are able to meet the increased demand quickly.
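A minimal boto3 sketch of the flow; the snapshot ID, AMI settings, launch template name, and Auto Scaling group name are placeholders for the application's own values.

```python
import boto3

ec2 = boto3.client("ec2")
SNAPSHOT_ID = "snap-0abc1234567890def"   # placeholder snapshot of the prepared root volume

# Pre-warm the snapshot so volumes created from it are fully initialized at launch.
ec2.enable_fast_snapshot_restores(
    AvailabilityZones=["us-east-1a", "us-east-1b"],
    SourceSnapshotIds=[SNAPSHOT_ID],
)

# Register an AMI backed by that snapshot.
image = ec2.register_image(
    Name="app-ami-fsr",
    Architecture="x86_64",
    VirtualizationType="hvm",
    RootDeviceName="/dev/xvda",
    BlockDeviceMappings=[{"DeviceName": "/dev/xvda", "Ebs": {"SnapshotId": SNAPSHOT_ID}}],
)

# Point the Auto Scaling group's launch template at the new AMI.
lt_version = ec2.create_launch_template_version(
    LaunchTemplateName="app-lt",
    SourceVersion="$Latest",
    LaunchTemplateData={"ImageId": image["ImageId"]},
)["LaunchTemplateVersion"]["VersionNumber"]

boto3.client("autoscaling").update_auto_scaling_group(
    AutoScalingGroupName="app-asg",
    LaunchTemplate={"LaunchTemplateName": "app-lt", "Version": str(lt_version)},
)
```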
microservice also must use AWS Identity and Access Management (IAM) to authenticate calls. The solutions architect will write the logic for this microservice by using a single AWS Lambda function that is written in Go 1.x.
Which solution will deploy the function in the MOST operationally efficient way?
Answer: A
Explanation:
A. Create an Amazon API Gateway REST API. Configure the method to use the Lambda function. Enable IAM authentication on the API. This option is the most
operationally efficient as it allows you to use API Gateway to handle the HTTPS endpoint and also allows you to use IAM to authenticate the calls to the
microservice. API Gateway also provides many additional features such as caching, throttling, and monitoring, which can be useful for a microservice.
A. Amazon GuardDuty
B. Amazon Inspector
C. AWS CloudTrail
D. AWS Config
Answer: C
Explanation:
The best option is to use AWS CloudTrail to find the desired information. AWS CloudTrail is a service that enables governance, compliance, operational auditing,
and risk auditing of AWS account activities. CloudTrail can be used to log all changes made to resources in an AWS account, including changes made by IAM
users, EC2 instances, AWS management console, and other AWS services. By using CloudTrail, the solutions architect can identify the IAM user who made the
configuration changes to the security group rules.
A. Create multiple Amazon Kinesis data streams based on the quote type. Configure the web application to send messages to the proper data stream. Configure each backend group of application servers to use the Kinesis Client Library (KCL) to poll messages from its own data stream.
B. Create an AWS Lambda function and an Amazon Simple Notification Service (Amazon SNS) topic for each quote type. Subscribe the Lambda function to its associated SNS topic. Configure the application to publish requests for quotes to the appropriate SNS topic.
C. Create a single Amazon Simple Notification Service (Amazon SNS) topic. Subscribe Amazon Simple Queue Service (Amazon SQS) queues to the SNS topic. Configure SNS message filtering to publish messages to the proper SQS queue based on the quote type. Configure each backend application server to use its own SQS queue.
D. Create multiple Amazon Kinesis Data Firehose delivery streams based on the quote type to deliver data streams to an Amazon Elasticsearch Service (Amazon ES) cluster. Configure the application to send messages to the proper delivery stream. Configure each backend group of application servers to search for the messages from Amazon ES and process them accordingly.
Answer: C
Explanation:
https://aws.amazon.com/getting-started/hands-on/filter-messages-published-to-topics/
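A minimal boto3 sketch of the SNS-to-SQS fan-out with filter policies; the topic and queue names and the quote_type attribute are illustrative, and the SQS access policy that allows SNS to deliver messages is omitted for brevity.

```python
import json

import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")

topic_arn = sns.create_topic(Name="quote-requests")["TopicArn"]

# One queue per backend group; the filter policy routes messages by quote type.
for quote_type in ("auto", "home", "life"):
    queue_url = sqs.create_queue(QueueName=f"{quote_type}-quotes")["QueueUrl"]
    queue_arn = sqs.get_queue_attributes(
        QueueUrl=queue_url, AttributeNames=["QueueArn"]
    )["Attributes"]["QueueArn"]

    sns.subscribe(
        TopicArn=topic_arn,
        Protocol="sqs",
        Endpoint=queue_arn,
        Attributes={"FilterPolicy": json.dumps({"quote_type": [quote_type]})},
    )

# The web application publishes once; SNS delivers only to the matching queue.
sns.publish(
    TopicArn=topic_arn,
    Message=json.dumps({"customerId": "42"}),
    MessageAttributes={"quote_type": {"DataType": "String", "StringValue": "auto"}},
)
```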
A. Use AWS Key Management Service (AWS KMS) customer master keys (CMKs) to create keys. Configure the application to load the database credentials from AWS KMS. Enable automatic key rotation.
B. Create credentials on the RDS for MySQL database for the application user and store the credentials in AWS Secrets Manager. Configure the application to load the database credentials from Secrets Manager. Create an AWS Lambda function that rotates the credentials in Secrets Manager.
C. Create credentials on the RDS for MySQL database for the application user and store the credentials in AWS Secrets Manager. Configure the application to load the database credentials from Secrets Manager. Set up a credentials rotation schedule for the application user in the RDS for MySQL database using Secrets Manager.
D. Create credentials on the RDS for MySQL database for the application user and store the credentials in AWS Systems Manager Parameter Store. Configure the application to load the database credentials from Parameter Store. Set up a credentials rotation schedule for the application user in the RDS for MySQL database using Parameter Store.
Answer: C
Explanation:
https://aws.amazon.com/blogs/security/rotate-amazon-rds-database-credentials-automatically-with-aws-secrets-manager/
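A minimal boto3 sketch of reading the secret in the application and enabling scheduled rotation; the secret name and the rotation Lambda ARN are placeholders.

```python
import json

import boto3

secrets = boto3.client("secretsmanager")

# The application reads the current credentials at startup (or on demand).
secret = json.loads(
    secrets.get_secret_value(SecretId="prod/app/mysql")["SecretString"]
)
db_user, db_password = secret["username"], secret["password"]

# Rotation is delegated to a Secrets Manager rotation function for RDS MySQL;
# the Lambda ARN below is a placeholder for the deployed rotation function.
secrets.rotate_secret(
    SecretId="prod/app/mysql",
    RotationLambdaARN="arn:aws:lambda:us-east-1:111122223333:function:rds-mysql-rotation",
    RotationRules={"AutomaticallyAfterDays": 30},
)
```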
Answer: A
Explanation:
AWS Snowball is a secure data transport solution that accelerates moving large amounts of data into and out of the AWS Cloud. A single device can move up to 80 TB of data without depending on the company's limited internet bandwidth, so it is well suited for the task. Additionally, it is secure and easy to use, making it the ideal solution for this migration.
Answer: BE
Explanation:
https://docs.aws.amazon.com/prescriptive-guidance/latest/dynamodb-hierarchical-data-model/introduction.html
A. Extend the Auto Scaling groups for the web tier and the application tier to deploy instances in Availability Zones in a second Region. Use an Aurora global database to deploy the database in the primary Region and the second Region. Use Amazon Route 53 health checks with a failover routing policy to the second Region.
B. Deploy the web tier and the application tier to a second Region. Add an Aurora PostgreSQL cross-Region Aurora Replica in the second Region. Use Amazon Route 53 health checks with a failover routing policy to the second Region. Promote the secondary to primary as needed.
C. Deploy the web tier and the application tier to a second Region. Create an Aurora PostgreSQL database in the second Region. Use AWS Database Migration Service (AWS DMS) to replicate the primary database to the second Region. Use Amazon Route 53 health checks with a failover routing policy to the second Region.
D. Deploy the web tier and the application tier to a second Region. Use an Amazon Aurora global database to deploy the database in the primary Region and the second Region. Use Amazon Route 53 health checks with a failover routing policy to the second Region. Promote the secondary to primary as needed.
Answer: D
Explanation:
This option is the most efficient because it deploys the web tier and the application tier to a second Region, which provides high availability and redundancy for the application. It also uses an Amazon Aurora global database, a feature that allows a single Aurora database to span multiple AWS Regions. It also deploys the database in the primary Region and the second Region, which provides low-latency global reads and fast recovery from a Regional outage. It also uses Amazon Route 53 health checks with a failover routing policy to the second Region, which provides data protection by routing traffic to healthy endpoints in different Regions. It also promotes the secondary to primary as needed, which provides data consistency by allowing write operations in only one Region at a time. This solution meets the requirement of expanding globally and ensuring that the application has minimal downtime. Option A is less efficient because it
extends the Auto Scaling groups for the web tier and the application tier to deploy instances in Availability Zones in a second Region, which could incur higher
costs and complexity than deploying them separately. It also uses an Aurora global database to deploy the database in the primary Region and the second
Region, which is correct. However, it does not use Amazon Route 53 health checks with a failover routing policy to the second Region, which could result in traffic
being routed to unhealthy endpoints. Option B is less efficient because it deploys the web tier and the application tier to a second Region, which is correct. It also
adds an Aurora PostgreSQL cross-Region Aurora Replica in the second Region, which provides read scalability across Regions. However, it does not use an
Aurora global database, which provides faster replication and recovery than cross-Region replicas. It also uses Amazon Route 53 health checks with a failover
routing policy to the second Region, which is correct. However, it does not promote the secondary to primary as needed, which could result in data inconsistency
or loss. Option C is less efficient because it deploys the web tier and the application tier to a second Region, which is correct. It also creates an Aurora
PostgreSQL database in the second Region, which provides data redundancy across Regions. However, it does not use an Aurora global database or cross-
Region replicas, which provide faster replication and recovery than creating separate databases. It also uses AWS Database Migration Service (AWS DMS) to
replicate the primary database to the second Region, which provides data migration between different sources and targets. However, it does not use an Aurora
global database or cross-Region replicas, which provide faster replication and recovery than using AWS DMS. It also uses Amazon Route 53 health checks with a
failover routing policy to the second Region, which is correct.
Answer: B
Explanation:
https://docs.aws.amazon.com/whitepapers/latest/disaster-recovery-workloads-on-aws/disaster-recovery-options-in-the-cloud.html
Answer: B
Explanation:
The solution that will meet the requirement of ensuring that all data that is written to the EBS volumes is encrypted at rest is B. Create the EBS volumes as
encrypted volumes and attach the encrypted EBS volumes to the EC2 instances. When you create an EBS volume, you can specify whether to encrypt the
volume. If you choose to encrypt the volume, all data written to the volume is automatically encrypted at rest using AWS-managed keys. You can also use
customer-managed keys (CMKs) stored in AWS KMS to encrypt and protect your EBS volumes. You can create encrypted EBS volumes and attach them to EC2
instances to ensure that all data written to the volumes is encrypted at rest.
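A minimal boto3 sketch of creating an encrypted volume and attaching it to an instance; the Availability Zone, size, and instance ID are placeholders, and omitting KmsKeyId uses the AWS-managed aws/ebs key.

```python
import boto3

ec2 = boto3.client("ec2")

# Encryption is set at creation time; all data written to the volume is
# encrypted at rest, and snapshots of the volume inherit the encryption.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,                      # GiB
    VolumeType="gp3",
    Encrypted=True,
)

ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0abc1234567890def",   # placeholder instance in the same AZ
    Device="/dev/sdf",
)
```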
A. Create a usage plan with an API key that is shared with genuine users only.
B. Integrate logic within the Lambda function to ignore the requests from fraudulent IP addresses.
C. Implement an AWS WAF rule to target malicious requests and trigger actions to filter them out.
D. Convert the existing public API to a private API. Update the DNS records to redirect users to the new API endpoint.
E. Create an IAM role for each user attempting to access the API. A user will assume the role when making the API call.
Answer: AC
Explanation:
https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-api-usage-plans.html#:~:text=Don%27t%20rely%20on%20API%20keys%20as%20your%20only%20means%20of%20authentication%20and%20authorization%20for%20your%20APIs
https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-api-usage-plans.html
A. Create a security group with a rule to allow TCP port 443 from source 0.0.0.0/0.
B. Create a security group with a rule to allow TCP port 443 to destination 0.0.0.0/0.
C. Update the network ACL to allow TCP port 443 from source 0.0.0.0/0.
D. Update the network ACL to allow inbound/outbound TCP port 443 from source 0.0.0.0/0 and to destination 0.0.0.0/0.
E. Update the network ACL to allow inbound TCP port 443 from source 0.0.0.0/0 and outbound TCP port 32768-65535 to destination 0.0.0.0/0.
Answer: AC
Explanation:
The combination of steps that will accomplish the task of making the web server accessible from everywhere on port 443 is to create a security group with a rule
to allow TCP port 443 from source 0.0.0.0/0 (A) and to update the network ACL to allow inbound TCP port 443 from source 0.0.0.0/0 (C). This will ensure that
traffic to port 443 is allowed both at the security group level and at the network ACL level, which will make the web server accessible from everywhere on port 443.
Answer: A
Explanation:
Amazon EFS provides a simple, scalable, fully managed file system that can be simultaneously accessed from multiple EC2 instances and provides built-in
redundancy. It is optimized for multiple EC2 instances to access the same files, and it is designed to be highly available, durable, and secure. It can scale up to
petabytes of data and can handle thousands of concurrent connections, and is a cost-effective solution for storing and accessing large amounts of data.
A. Request an Amazon-issued private certificate from AWS Certificate Manager (ACM) in the us-east-1 Region.
B. Request an Amazon-issued private certificate from AWS Certificate Manager (ACM) in the us-west-1 Region.
C. Request an Amazon-issued public certificate from AWS Certificate Manager (ACM) in the us-east-1 Region.
D. Request an Amazon-issued public certificate from AWS Certificate Manager (ACM) in the us-west-1 Region.
Answer: C
Explanation:
This option is the most efficient because it requests an Amazon-issued public certificate from AWS Certificate Manager (ACM), a service that lets you easily provision, manage, and deploy public and private SSL/TLS certificates for use with AWS services and your internal connected resources. It also requests the certificate in the us-east-1 Region, which is required for using an ACM certificate with CloudFront. It also meets the requirement of deploying the certificate without incurring any additional costs, as ACM does not charge for certificates that are used with supported AWS services. This solution meets the requirement of configuring the CloudFront distribution to use SSL/TLS certificates and using a different domain name for the distribution. Option A is less efficient because it
requests an Amazon issued private certificate from ACM, which is a type of certificate that can be used only within your organization or virtual private cloud (VPC).
However, this does not meet the requirement of configuring its CloudFront distribution to use SSL/TLS certificates, as CloudFront requires a public certificate. It
also requests the certificate in the us-east-1 Region, which is correct. Option B is less efficient because it requests an Amazon issued private certificate from ACM,
which is incorrect for the same reason as option A. It also requests the certificate in the us-west-1 Region, which is incorrect as CloudFront requires a certificate in
the us-east-1 Region. Option D is less efficient because it requests an Amazon issued public certificate from ACM, which is correct. However, it requests the
certificate in the us-west-1 Region, which is incorrect as CloudFront requires a certificate in the us-east-1 Region.
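A minimal boto3 sketch of requesting the public certificate in us-east-1, the Region CloudFront requires for ACM certificates; the domain names are placeholders and DNS validation still has to be completed before the certificate can be attached.

```python
import boto3

# CloudFront only accepts ACM certificates from us-east-1, so pin the client there.
acm = boto3.client("acm", region_name="us-east-1")

cert = acm.request_certificate(
    DomainName="www.example.com",                    # placeholder domain
    SubjectAlternativeNames=["example.com"],
    ValidationMethod="DNS",
)
print("Validate via DNS, then attach to the CloudFront distribution:",
      cert["CertificateArn"])
```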
A. Use DynamoDB transactions to write new event data to the table. Configure the transactions to notify internal teams.
B. Have the current application publish a message to four Amazon Simple Notification Service (Amazon SNS) topics. Have each team subscribe to one topic.
C. Enable Amazon DynamoDB Streams on the table. Use triggers to write to a single Amazon Simple Notification Service (Amazon SNS) topic to which the teams can subscribe.
D. Add a custom attribute to each record to flag new items. Write a cron job that scans the table every minute for items that are new and notifies an Amazon Simple Queue Service (Amazon SQS) queue to which the teams can subscribe.
Answer: C
Explanation:
The best solution to meet these requirements with the least amount of operational overhead is to enable Amazon DynamoDB Streams on the table and use
triggers to write to a single Amazon Simple Notification Service (Amazon SNS) topic to which the teams can subscribe. This solution requires minimal configuration
and infrastructure setup, and Amazon DynamoDB Streams provide a low-latency way to capture changes to the DynamoDB table. The triggers automatically
capture the changes and publish them to the SNS topic, which notifies the internal teams.
A. Use Amazon EC2 Auto Scaling to scale at certain periods based on previous traffic patterns
B. Use an AWS Lambda function to scale Amazon ECS based on metric breaches that trigger an Amazon CloudWatch alarm
C. Use Amazon EC2 Auto Scaling with simple scaling policies to scale when ECS metric breaches trigger an Amazon CloudWatch alarm
D. Use AWS Application Auto Scaling with target tracking policies to scale when ECS metric breaches trigger an Amazon CloudWatch alarm
Answer: D
Explanation:
https://docs.aws.amazon.com/autoscaling/application/userguide/what-is-application-auto-scaling.html
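A minimal boto3 sketch of registering an ECS service with Application Auto Scaling and attaching a target tracking policy; the cluster and service names and the target value are illustrative.

```python
import boto3

aas = boto3.client("application-autoscaling")
RESOURCE = "service/prod-cluster/web-service"   # placeholder cluster/service names

aas.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId=RESOURCE,
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=2,
    MaxCapacity=20,
)

# Target tracking keeps average service CPU near 60 percent; Application Auto
# Scaling manages the CloudWatch alarms that drive the scaling actions.
aas.put_scaling_policy(
    PolicyName="web-cpu-60",
    ServiceNamespace="ecs",
    ResourceId=RESOURCE,
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
    },
)
```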
A. Update the Kinesis Data Streams default settings by modifying the data retention period.
B. Update the application to use the Kinesis Producer Library (KPL) to send the data to Kinesis Data Streams.
C. Update the number of Kinesis shards to handle the throughput of the data that is sent to Kinesis Data Streams.
D. Turn on S3 Versioning within the S3 bucket to preserve every version of every object that is ingested in the S3 bucket.
Answer: A
Explanation:
The data retention period of a Kinesis data stream is the time period from when a record is added to when it is no longer accessible. The default retention period for a Kinesis data stream is 24 hours, which can be extended up to 8,760 hours (365 days). The data retention period can be updated by using the AWS Management Console, the AWS CLI, or the Kinesis Data Streams API.
To meet the requirements of the scenario, the solutions architect should update the Kinesis Data Streams default settings by modifying the data retention period. The solutions architect should increase the retention period to a value that is greater than or equal to the interval at which the data is consumed and written to S3. This way, the company can ensure that S3 receives all the data that the application sends to Kinesis Data Streams.
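A minimal boto3 sketch of extending the retention period on a hypothetical stream.

```python
import boto3

kinesis = boto3.client("kinesis")

# Extend retention from the 24-hour default to 7 days so records remain
# available even if the consumer that writes to S3 falls behind.
kinesis.increase_stream_retention_period(
    StreamName="clickstream",          # placeholder stream name
    RetentionPeriodHours=168,
)
```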
A. 10.0.1.0/32
B. 192.168.0.0/24
C. 192.168.1.0/32
D. 10.0.1.0/24
Answer: D
Explanation:
The allowed block size is between a /28 netmask and /16 netmask. The CIDR block must not overlap with any existing CIDR block that's associated with the VPC.
https://docs.aws.amazon.com/vpc/latest/userguide/configure-your-vpc.html
Answer: D
Explanation:
Amazon DynamoDB Time to Live (TTL) allows you to define a per-item timestamp to determine when an item is no longer needed. Shortly after the date and time
of the specified timestamp, DynamoDB deletes the item from your table without consuming any write throughput. TTL is provided at no extra cost as a means to
reduce stored data volumes by retaining only the items that remain current for your workload’s needs.
TTL is useful if you store items that lose relevance after a specific time. The following are example TTL use cases:
Remove user or sensor data after one year of inactivity in an application.
Archive expired items to an Amazon S3 data lake via Amazon DynamoDB Streams and AWS Lambda.
Retain sensitive data for a certain amount of time according to contractual or regulatory obligations.
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/TTL.html
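A minimal boto3 sketch of enabling TTL on a hypothetical table and writing an item that expires after one year; the table name, attribute names, and values are placeholders.

```python
import time

import boto3

dynamodb = boto3.client("dynamodb")

# Tell DynamoDB which attribute holds the expiry timestamp (epoch seconds).
dynamodb.update_time_to_live(
    TableName="SensorReadings",        # placeholder table
    TimeToLiveSpecification={"Enabled": True, "AttributeName": "expires_at"},
)

# Each item carries its own expiry; here, one year from the time it is written.
dynamodb.put_item(
    TableName="SensorReadings",
    Item={
        "sensorId": {"S": "sensor-7"},
        "reading": {"N": "21.5"},
        "expires_at": {"N": str(int(time.time()) + 365 * 24 * 3600)},
    },
)
```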
100% Pass Your SAA-C03 Exam with Our Prep Materials Via below:
https://www.certleader.com/SAA-C03-dumps.html