
100% Valid and Newest Version SAA-C03 Questions & Answers shared by Certleader

https://www.certleader.com/SAA-C03-dumps.html (551 Q&As)

SAA-C03 Dumps

AWS Certified Solutions Architect - Associate (SAA-C03)

https://www.certleader.com/SAA-C03-dumps.html

The Leader of IT Certification visit - https://www.certleader.com



NEW QUESTION 1
- (Topic 1)
A company hosts more than 300 global websites and applications. The company requires a platform to analyze more than 30 TB of clickstream data each day.
What should a solutions architect do to transmit and process the clickstream data?

A. Design an AWS Data Pipeline to archive the data to an Amazon S3 bucket and run an Amazon EMR cluster with the data to generate analytics
B. Create an Auto Scaling group of Amazon EC2 instances to process the data and send it to an Amazon S3 data lake for Amazon Redshift to use for analysis
C. Cache the data to Amazon CloudFront. Store the data in an Amazon S3 bucket. When an object is added to the S3 bucket, run an AWS Lambda function to
process the data for analysis.
D. Collect the data from Amazon Kinesis Data Streams.
E. Use Amazon Kinesis Data Firehose to transmit the data to an Amazon S3 data lake. Load the data into Amazon Redshift for analysis.

Answer: D

Explanation:
https://aws.amazon.com/es/blogs/big-data/real-time-analytics-with-amazon-redshift-streaming-ingestion/
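As a rough sketch of the ingestion path this answer describes, the following Python (boto3) snippet sends one clickstream event to a Kinesis Data Firehose delivery stream that is assumed to already deliver into an S3 data lake; the stream name and event fields are hypothetical.

import json
import boto3

firehose = boto3.client("firehose")

def send_clickstream_event(event: dict) -> None:
    # Firehose buffers records and delivers them to the configured S3 data lake,
    # from which they can be loaded into Amazon Redshift for analysis.
    firehose.put_record(
        DeliveryStreamName="clickstream-to-s3",  # hypothetical delivery stream name
        Record={"Data": (json.dumps(event) + "\n").encode("utf-8")},
    )

send_clickstream_event({"site": "example.com", "page": "/home", "user_id": "u-123"})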

NEW QUESTION 2
- (Topic 1)
A company runs a highly available image-processing application on Amazon EC2 instances in a single VPC. The EC2 instances run inside several subnets across
multiple Availability Zones. The EC2 instances do not communicate with each other. However, the EC2 instances download images from Amazon S3 and upload
images to Amazon S3 through a single NAT gateway. The company is concerned about data transfer charges.
What is the MOST cost-effective way for the company to avoid Regional data transfer charges?

A. Launch the NAT gateway in each Availability Zone


B. Replace the NAT gateway with a NAT instance
C. Deploy a gateway VPC endpoint for Amazon S3
D. Provision an EC2 Dedicated Host to run the EC2 instances

Answer: A

Explanation:
In this scenario, the company wants to avoid Regional data transfer charges while downloading and uploading images from Amazon S3. To accomplish this at the
lowest cost, a NAT gateway should be launched in each Availability Zone that the EC2 instances are running in. This allows the EC2 instances to route traffic
through the local NAT gateway instead of sending traffic across an Availability Zone boundary. This method helps reduce data transfer costs because traffic that
stays within a single Availability Zone is free of charge, whereas traffic that crosses Availability Zone boundaries incurs Regional data transfer fees.
Reference:
AWS NAT Gateway documentation: https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html

NEW QUESTION 3
- (Topic 1)
A company has an on-premises application that generates a large amount of time-sensitive data that is backed up to Amazon S3. The application has grown and
there are user complaints about internet bandwidth limitations. A solutions architect needs to design a long-term solution that allows for timely backups to
Amazon S3 with minimal impact on internet connectivity for internal users.
Which solution meets these requirements?

A. Establish AWS VPN connections and proxy all traffic through a VPC gateway endpoint
B. Establish a new AWS Direct Connect connection and direct backup traffic through this new connection.
C. Order daily AWS Snowball devices. Load the data onto the Snowball devices and return the devices to AWS each day.
D. Submit a support ticket through the AWS Management Console. Request the removal of S3 service limits from the account.

Answer: B

Explanation:
To address the issue of bandwidth limitations on the company's on-premises application, and to minimize the impact on internal user connectivity, a new AWS
Direct Connect connection should be established to direct backup traffic through this new connection. This solution will offer a secure, high-speed connection
between the company's data center and AWS, which will allow the company to transfer data quickly without consuming internet bandwidth.
Reference:
AWS Direct Connect documentation: https://aws.amazon.com/directconnect/

NEW QUESTION 4
- (Topic 1)
A development team runs monthly resource-intensive tests on its general purpose Amazon RDS for MySQL DB instance with Performance Insights enabled. The
testing lasts for 48 hours once a month and is the only process that uses the database. The team wants to reduce the cost of running the tests without reducing the
compute and memory attributes of the DB instance.
Which solution meets these requirements MOST cost-effectively?

A. Stop the DB instance when tests are complete


B. Restart the DB instance when required.
C. Use an Auto Scaling policy with the DB instance to automatically scale when tests are completed.
D. Create a snapshot when tests are complete
E. Terminate the DB instance and restore the snapshot when required.
F. Modify the DB instance to a low-capacity instance when tests are complete
G. Modify the DB instance again when required.

Answer: A

Explanation:


To reduce the cost of running the tests without reducing the compute and memory attributes of the Amazon RDS for MySQL DB instance, the development team
can stop the instance when tests are completed and restart it when required. Stopping the DB instance when not in use can help save costs because customers
are only charged for storage while the DB instance is stopped. During this time, automated backups and automated DB instance maintenance are suspended.
When the instance is restarted, it retains the same configurations, security groups, and DB parameter groups as when it was stopped.
Reference:
Amazon RDS Documentation: Stopping and Starting a DB instance (https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_StopInstance.html)
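A minimal boto3 sketch of this approach, assuming a hypothetical DB instance identifier, is shown below. The stop and start calls can also be automated on a schedule, for example with Amazon EventBridge rules.

import boto3

rds = boto3.client("rds")
DB_ID = "monthly-test-mysql"  # hypothetical DB instance identifier

def stop_after_tests():
    # Stop the instance; storage, automated snapshots, and configuration are retained,
    # and only storage is billed while the instance is stopped.
    rds.stop_db_instance(DBInstanceIdentifier=DB_ID)

def start_before_tests():
    # Start the instance again with the same compute and memory attributes.
    rds.start_db_instance(DBInstanceIdentifier=DB_ID)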

NEW QUESTION 5
- (Topic 1)
A global company hosts its web application on Amazon EC2 instances behind an Application Load Balancer (ALB). The web application has static data and
dynamic data. The company stores its static data in an Amazon S3 bucket. The company wants to improve performance and reduce latency for the static data and
dynamic data. The company is using its own domain name registered with Amazon Route 53.
What should a solutions architect do to meet these requirements?

A. Create an Amazon CloudFront distribution that has the S3 bucket and the ALB as origins Configure Route 53 to route traffic to the CloudFront distribution.
B. Create an Amazon CloudFront distribution that has the ALB as an origin. Create an AWS Global Accelerator standard accelerator that has the S3 bucket as an
endpoint.
C. Configure Route 53 to route traffic to the CloudFront distribution.
D. Create an Amazon CloudFront distribution that has the S3 bucket as an origin. Create an AWS Global Accelerator standard accelerator that has the ALB and the
CloudFront distribution as endpoints. Create a custom domain name that points to the accelerator DNS name. Use the custom domain name as an endpoint for the
web application.
E. Create an Amazon CloudFront distribution that has the ALB as an origin
F. Create an AWS Global Accelerator standard accelerator that has the S3 bucket as an endpoint. Create two domain names.
G. Point one domain name to the CloudFront DNS name for dynamic content. Point the other domain name to the accelerator DNS name for static content. Use the
domain names as endpoints for the web application.

Answer: C

Explanation:
Static content from S3 can be cached at CloudFront edge locations, and dynamic content is served by EC2 instances behind the ALB, whose performance can be
improved by Global Accelerator with the ALB and the CloudFront distribution as endpoints. The custom domain name then points to the accelerator DNS name and
serves as the endpoint for the web application.
https://aws.amazon.com/blogs/networking-and-content-delivery/improving-availability-and-performance-for-application-load-balancers-using-one-click-integration-with-aws-global-accelerator/

NEW QUESTION 6
- (Topic 1)
A company runs an on-premises application that is powered by a MySQL database. The company is migrating the application to AWS to increase the application's
elasticity and availability.
The current architecture shows heavy read activity on the database during times of normal operation. Every 4 hours the company's development team pulls a full
export of the production database to populate a database in the staging environment. During this period, users experience unacceptable application latency. The
development team is unable to use the staging environment until the procedure completes.
A solutions architect must recommend a replacement architecture that alleviates the application latency issue. The replacement architecture also must give the
development team the ability to continue using the staging environment without delay.
Which solution meets these requirements?

A. Use Amazon Aurora MySQL with Multi-AZ Aurora Replicas for production.
B. Populate the staging database by implementing a backup and restore process that uses the mysqldump utility.
C. Use Amazon Aurora MySQL with Multi-AZ Aurora Replicas for production. Use database cloning to create the staging database on-demand.
D. Use Amazon RDS for MySQL with a Multi-AZ deployment and read replicas for production. Use the standby instance for the staging database.
E. Use Amazon RDS for MySQL with a Multi-AZ deployment and read replicas for production.
F. Populate the staging database by implementing a backup and restore process that uses the mysqldump utility.

Answer: B

Explanation:
https://aws.amazon.com/blogs/aws/amazon-aurora-fast-database-cloning/
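A minimal boto3 sketch of Aurora fast database cloning, which the linked blog post describes, is shown below; the cluster identifiers and instance class are hypothetical.

import boto3

rds = boto3.client("rds")

# Create a copy-on-write clone of the production cluster for the staging environment.
rds.restore_db_cluster_to_point_in_time(
    DBClusterIdentifier="staging-cluster",           # hypothetical clone name
    SourceDBClusterIdentifier="production-cluster",  # hypothetical source cluster
    RestoreType="copy-on-write",
    UseLatestRestorableTime=True,
)

# A DB instance must be added to the cloned cluster before it can serve queries.
rds.create_db_instance(
    DBInstanceIdentifier="staging-instance-1",
    DBInstanceClass="db.r6g.large",
    Engine="aurora-mysql",
    DBClusterIdentifier="staging-cluster",
)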

NEW QUESTION 7
- (Topic 1)
A company hosts an application on AWS Lambda functions that are invoked by an Amazon API Gateway API. The Lambda functions save customer data to an
Amazon Aurora MySQL database. Whenever the company upgrades the database, the Lambda functions fail to establish database connections until the upgrade is
complete. The result is that customer data is not recorded for some of the events.
A solutions architect needs to design a solution that stores customer data that is created during database upgrades.
Which solution will meet these requirements?

A. Provision an Amazon RDS proxy to sit between the Lambda functions and the database. Configure the Lambda functions to connect to the RDS proxy.
B. Increase the run time of the Lambda functions to the maximum. Create a retry mechanism in the code that stores the customer data in the database.
C. Persist the customer data to Lambda local storage.
D. Configure new Lambda functions to scan the local storage to save the customer data to the database.
E. Store the customer data in an Amazon Simple Queue Service (Amazon SQS) FIFO queue. Create a new Lambda function that polls the queue and stores the
customer data in the database

Answer: D

Explanation:
https://www.learnaws.org/2020/12/13/aws-rds-proxy-deep-dive/
RDS proxy can improve application availability in such a situation by waiting for the new database instance to be functional and maintaining any requests received
from the application during this time. The end result is that the application is more resilient to issues with the underlying database.
This enables the solution to hold requests until the database is available again. RDS Proxy also helps to optimally utilize the connections between Lambda and the
database: Lambda can open many connections concurrently, which can be taxing on database compute resources, so RDS Proxy was introduced to manage and
pool these connections efficiently.
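A minimal boto3 sketch of provisioning an RDS Proxy in front of the Aurora MySQL cluster is shown below; the proxy name, ARNs, subnet IDs, and cluster identifier are hypothetical. The Lambda functions would then connect to the proxy endpoint instead of the cluster endpoint.

import boto3

rds = boto3.client("rds")

# Create a proxy that authenticates to the database with a Secrets Manager secret.
rds.create_db_proxy(
    DBProxyName="customer-data-proxy",
    EngineFamily="MYSQL",
    Auth=[{
        "AuthScheme": "SECRETS",
        "SecretArn": "arn:aws:secretsmanager:us-east-1:123456789012:secret:aurora-creds",
        "IAMAuth": "DISABLED",
    }],
    RoleArn="arn:aws:iam::123456789012:role/rds-proxy-secrets-role",
    VpcSubnetIds=["subnet-0aaa111122223333a", "subnet-0bbb444455556666b"],
)

# Register the Aurora cluster as the proxy target; Lambda then uses the proxy endpoint.
rds.register_db_proxy_targets(
    DBProxyName="customer-data-proxy",
    DBClusterIdentifiers=["customer-aurora-cluster"],
)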

NEW QUESTION 8
- (Topic 1)
A company has created an image analysis application in which users can upload photos and add photo frames to their images. The users upload images and
metadata to indicate which photo frames they want to add to their images. The application uses a single Amazon EC2 instance and Amazon DynamoDB to store
the metadata.
The application is becoming more popular, and the number of users is increasing. The company expects the number of concurrent users to vary significantly
depending on the time of day and day of week. The company must ensure that the application can scale to meet the needs of the growing user base.
Which solution meets these requirements?

A. Use AWS Lambda to process the photos.


B. Store the photos and metadata in DynamoDB.
C. Use Amazon Kinesis Data Firehose to process the photos and to store the photos and metadata.
D. Use AWS Lambda to process the photos.
E. Store the photos in Amazon S3. Retain DynamoDB to store the metadata.
F. Increase the number of EC2 instances to three.
G. Use Provisioned IOPS SSD (io2) Amazon Elastic Block Store (Amazon EBS) volumes to store the photos and metadata.

Answer: C

Explanation:
https://www.quora.com/How-can-I-use-DynamoDB-for-storing-metadata-for-Amazon-S3-objects
This solution meets the requirements of scalability, performance, and availability. AWS Lambda can process the photos in parallel and scale up or down
automatically depending on the demand. Amazon S3 can store the photos and metadata reliably and durably, and provide high availability and low latency.
DynamoDB can store the metadata efficiently and provide consistent performance. This solution also reduces the cost and complexity of managing EC2 instances
and EBS volumes.
Option A is incorrect because storing the photos in DynamoDB is not a good practice, as it can increase the storage cost and limit the throughput. Option B is
incorrect because Kinesis Data Firehose is not designed for processing photos, but for streaming data to destinations such as S3 or Redshift. Option D is incorrect
because increasing the number of EC2 instances and using Provisioned IOPS SSD volumes does not guarantee scalability, as it depends on the load balancer
and the application code. It also increases the cost and complexity of managing the infrastructure.
References:
? https://aws.amazon.com/certification/certified-solutions-architect-professional/
? https://www.examtopics.com/discussions/amazon/view/7193-exam-aws-certified-solutions-architect-professional-topic-1/
? https://aws.amazon.com/architecture/
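A minimal sketch of the Lambda handler that the explanation recommends is shown below, assuming the photos arrive through API Gateway as base64-encoded payloads; the bucket name, table name, and event shape are hypothetical.

import base64
import json
import uuid
import boto3

s3 = boto3.client("s3")
table = boto3.resource("dynamodb").Table("photo-metadata")  # hypothetical table

def handler(event, context):
    # Store the uploaded photo in Amazon S3 and keep only its metadata in DynamoDB.
    body = json.loads(event["body"])
    photo_id = str(uuid.uuid4())

    s3.put_object(
        Bucket="photo-uploads-bucket",  # hypothetical bucket
        Key=f"uploads/{photo_id}.jpg",
        Body=base64.b64decode(body["image"]),
    )
    table.put_item(Item={
        "photo_id": photo_id,
        "frame": body["frame"],
        "user_id": body["user_id"],
    })
    return {"statusCode": 200, "body": json.dumps({"photo_id": photo_id})}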

NEW QUESTION 9
- (Topic 1)
A company needs to keep user transaction data in an Amazon DynamoDB table. The company must retain the data for 7 years.
What is the MOST operationally efficient solution that meets these requirements?

A. Use DynamoDB point-in-time recovery to back up the table continuously.


B. Use AWS Backup to create backup schedules and retention policies for the table.
C. Create an on-demand backup of the table by using the DynamoDB console.
D. Store the backup in an Amazon S3 bucket.
E. Set an S3 Lifecycle configuration for the S3 bucket.
F. Create an Amazon EventBridge (Amazon CloudWatch Events) rule to invoke an AWS Lambda function.
G. Configure the Lambda function to back up the table and to store the backup in an Amazon S3 bucket.
H. Set an S3 Lifecycle configuration for the S3 bucket.

Answer: C

NEW QUESTION 10
- (Topic 1)
A company runs multiple Windows workloads on AWS. The company's employees use Windows file shares that are hosted on two Amazon EC2 instances. The
file shares synchronize data between themselves and maintain duplicate copies. The company wants a highly available and durable storage solution that
preserves how users currently access the files.
What should a solutions architect do to meet these requirements?

A. Migrate all the data to Amazon S3. Set up IAM authentication for users to access files.
B. Set up an Amazon S3 File Gateway.
C. Mount the S3 File Gateway on the existing EC2 instances.
D. Extend the file share environment to Amazon FSx for Windows File Server with a Multi-AZ configuration.
E. Migrate all the data to FSx for Windows File Server.
F. Extend the file share environment to Amazon Elastic File System (Amazon EFS) with a Multi-AZ configuration.
G. Migrate all the data to Amazon EFS.

Answer: C

Explanation:
https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/AmazonEFS.html Amazon FSx for Windows File Server provides fully managed Microsoft Windows
file servers, backed by a fully native Windows file system. https://docs.aws.amazon.com/fsx/latest/WindowsGuide/what-is.html

NEW QUESTION 10
- (Topic 1)
A company is running an SMB file server in its data center. The file server stores large files that are accessed frequently for the first few days after the files are
created. After 7 days the files are rarely accessed.
The total data size is increasing and is close to the company's total storage capacity. A solutions architect must increase the company's available storage space
without losing low-latency access to the most recently accessed files. The solutions architect must also provide file lifecycle management to avoid future storage
issues.
Which solution will meet these requirements?

A. Use AWS DataSync to copy data that is older than 7 days from the SMB file server to AWS.
B. Create an Amazon S3 File Gateway to extend the company's storage space.
C. Create an S3 Lifecycle policy to transition the data to S3 Glacier Deep Archive after 7 days.
D. Create an Amazon FSx for Windows File Server file system to extend the company's storage space.
E. Install a utility on each user's computer to access Amazon S3. Create an S3 Lifecycle policy to transition the data to S3 Glacier Flexible Retrieval after 7 days.

Answer: B

Explanation:
Amazon S3 File Gateway is a hybrid cloud storage service that enables on- premises applications to seamlessly use Amazon S3 cloud storage. It provides a file
interface to Amazon S3 and supports SMB and NFS protocols. It also supports S3 Lifecycle policies that can automatically transition data from S3 Standard to S3
Glacier Deep Archive after a specified period of time. This solution will meet the requirements of increasing the company’s available storage space without losing
low-latency access to the most recently accessed files and providing file lifecycle management to avoid future storage issues.
Reference:
https://docs.aws.amazon.com/storagegateway/latest/userguide/WhatIsStorageGateway.html
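A minimal boto3 sketch of the lifecycle rule described above is shown below; the bucket name backing the S3 File Gateway is hypothetical.

import boto3

s3 = boto3.client("s3")

# Transition objects to S3 Glacier Deep Archive 7 days after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket="file-gateway-bucket",  # hypothetical bucket behind the S3 File Gateway
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-after-7-days",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},
            "Transitions": [{"Days": 7, "StorageClass": "DEEP_ARCHIVE"}],
        }]
    },
)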

NEW QUESTION 13
- (Topic 1)
A company hosts an application on multiple Amazon EC2 instances. The application processes messages from an Amazon SQS queue, writes to an Amazon RDS
table, and deletes the message from the queue. Occasional duplicate records are found in the RDS table. The SQS queue does not contain any duplicate
messages.
What should a solutions architect do to ensure messages are processed only once?

A. Use the CreateQueue API call to create a new queue


B. Use the AddPermission API call to add appropriate permissions
C. Use the ReceiveMessage API call to set an appropriate wait time
D. Use the ChangeMessageVisibility API call to increase the visibility timeout

Answer: D

Explanation:
The visibility timeout begins when Amazon SQS returns a message. During this time, the consumer processes and deletes the message. However, if the
consumer fails before deleting the message and your system doesn't call the DeleteMessage action for that message before the visibility timeout expires, the
message becomes visible to other consumers and the message is received again. If a message must be received only once, your consumer should delete it within
the duration of the visibility timeout. https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-visibility-timeout.html
Keyword: the application reads from an SQS queue and writes to Amazon RDS. From this, Option D is the best fit and the other options are ruled out [Option A - you
cannot introduce another queue into the existing workflow; Option B - only adds permissions; Option C - only retrieves messages]. FIFO queues are designed to never
introduce duplicate messages.
However, your message producer might introduce duplicates in certain scenarios: for example, if the producer sends a message, does not receive a response, and
then resends the same message. Amazon SQS APIs provide deduplication functionality that prevents your message producer from sending duplicates. Any
duplicates introduced by the message producer are removed within a 5-minute deduplication interval. For standard queues, you might occasionally receive a
duplicate copy of a message (at-least-once delivery). If you use a standard queue, you must design your applications to be idempotent (that is, they must not be
affected adversely when processing the same message more than once).
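A minimal boto3 sketch of adjusting the visibility timeout is shown below; the queue name and the 300-second value are hypothetical and should exceed the actual processing time.

import boto3

sqs = boto3.client("sqs")
queue_url = sqs.get_queue_url(QueueName="orders-queue")["QueueUrl"]  # hypothetical queue

# Raise the default visibility timeout for the whole queue.
sqs.set_queue_attributes(
    QueueUrl=queue_url,
    Attributes={"VisibilityTimeout": "300"},
)

# Alternatively, extend the timeout for a single in-flight message while it is processed.
resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
for msg in resp.get("Messages", []):
    sqs.change_message_visibility(
        QueueUrl=queue_url,
        ReceiptHandle=msg["ReceiptHandle"],
        VisibilityTimeout=300,
    )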

NEW QUESTION 16
- (Topic 1)
A company wants to migrate its on-premises application to AWS. The application produces output files that vary in size from tens of gigabytes to hundreds of
terabytes. The application data must be stored in a standard file system structure. The company wants a solution that scales automatically, is highly available, and
requires minimum operational overhead.
Which solution will meet these requirements?

A. Migrate the application to run as containers on Amazon Elastic Container Service (Amazon ECS). Use Amazon S3 for storage.
B. Migrate the application to run as containers on Amazon Elastic Kubernetes Service (Amazon EKS). Use Amazon Elastic Block Store (Amazon EBS) for storage.
C. Migrate the application to Amazon EC2 instances in a Multi-AZ Auto Scaling group.
D. Use Amazon Elastic File System (Amazon EFS) for storage.
E. Migrate the application to Amazon EC2 instances in a Multi-AZ Auto Scaling group.
F. Use Amazon Elastic Block Store (Amazon EBS) for storage.

Answer: C

Explanation:
Amazon EFS provides a standard file system structure, scales automatically, and is highly available.

NEW QUESTION 17
- (Topic 1)
A company needs to store its accounting records in Amazon S3. The records must be immediately accessible for 1 year and then must be archived for an
additional 9 years. No one at the company, including administrative users and root users, should be able to delete the records during the entire 10-year period. The
records must be stored with maximum resiliency.
Which solution will meet these requirements?

A. Store the records in S3 Glacier for the entire 10-year period.


B. Use an access control policy to deny deletion of the records for a period of 10 years.
C. Store the records by using S3 Intelligent-Tiering.
D. Use an IAM policy to deny deletion of the records.
E. After 10 years, change the IAM policy to allow deletion.
F. Use an S3 Lifecycle policy to transition the records from S3 Standard to S3 Glacier Deep Archive after 1 year.
G. Use S3 Object Lock in compliance mode for a period of 10 years.
H. Use an S3 Lifecycle policy to transition the records from S3 Standard to S3 One Zone-Infrequent Access (S3 One Zone-IA) after 1 year.
I. Use S3 Object Lock in governance mode for a period of 10 years.

Answer: C

Explanation:
To meet the requirements of immediately accessible records for 1 year and then archived for an additional 9 years with maximum resiliency, we can use S3
Lifecycle policy to transition records from S3 Standard to S3 Glacier Deep Archive after 1 year. And to ensure that the records cannot be deleted by anyone,
including administrative and root users, we can use S3 Object Lock in compliance mode for a period of 10 years. Therefore, the correct answer is option C.
Reference: https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock.html
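A minimal boto3 sketch of the combination that the explanation recommends is shown below; the bucket name is hypothetical, and the bucket is assumed to have been created with Object Lock enabled.

import boto3

s3 = boto3.client("s3")
BUCKET = "accounting-records-bucket"  # hypothetical; Object Lock must be enabled when the bucket is created

# Default retention in compliance mode: no one, including the root user, can delete for 10 years.
s3.put_object_lock_configuration(
    Bucket=BUCKET,
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Years": 10}},
    },
)

# Move records from S3 Standard to S3 Glacier Deep Archive after 1 year.
s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-after-1-year",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},
            "Transitions": [{"Days": 365, "StorageClass": "DEEP_ARCHIVE"}],
        }]
    },
)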

NEW QUESTION 18
- (Topic 1)
A company is deploying a new public web application to AWS. The application will run behind an Application Load Balancer (ALB). The application needs to be
encrypted at the edge with an SSL/TLS certificate that is issued by an external certificate authority (CA). The certificate must be rotated each year before the
certificate expires.
What should a solutions architect do to meet these requirements?

A. Use AWS Certificate Manager (ACM) to issue an SSL/TLS certificate.


B. Apply the certificate to the ALB.
C. Use the managed renewal feature to automatically rotate the certificate.
D. Use AWS Certificate Manager (ACM) to issue an SSL/TLS certificate.
E. Import the key material from the certificate.
F. Apply the certificate to the ALB.
G. Use the managed renewal feature to automatically rotate the certificate.
H. Use AWS Certificate Manager (ACM) Private Certificate Authority to issue an SSL/TLS certificate from the root CA.
I. Apply the certificate to the ALB.
J. Use the managed renewal feature to automatically rotate the certificate.
K. Use AWS Certificate Manager (ACM) to import an SSL/TLS certificate.
L. Apply the certificate to the ALB.
M. Use Amazon EventBridge (Amazon CloudWatch Events) to send a notification when the certificate is nearing expiration.
N. Rotate the certificate manually.

Answer: D

Explanation:
https://www.amazonaws.cn/en/certificate-manager/faqs/#Managed_renewal_and_deployment

NEW QUESTION 22
- (Topic 1)
A bicycle sharing company is developing a multi-tier architecture to track the location of its bicycles during peak operating hours. The company wants to use these
data points in its existing analytics platform. A solutions architect must determine the most viable multi-tier option to support this architecture. The data points must
be accessible from the REST API.
Which action meets these requirements for storing and retrieving location data?

A. Use Amazon Athena with Amazon S3


B. Use Amazon API Gateway with AWS Lambda
C. Use Amazon QuickSight with Amazon Redshift.
D. Use Amazon API Gateway with Amazon Kinesis Data Analytics

Answer: D

Explanation:
https://aws.amazon.com/solutions/implementations/aws-streaming-data-solution-for-amazon-kinesis/

NEW QUESTION 23
- (Topic 1)
A company runs a photo processing application that needs to frequently upload and download pictures from Amazon S3 buckets that are located in the same AWS
Region. A solutions architect has noticed an increased cost in data transfer fees and needs to implement a solution to reduce these costs.
How can the solutions architect meet this requirement?

A. Deploy Amazon API Gateway into a public subnet and adjust the route table to route S3 calls through It.
B. Deploy a NAT gateway into a public subnet and attach an endpoint policy that allows access to the S3 buckets.
C. Deploy the application into a public subnet and allow it to route through an internet gateway to access the S3 buckets.
D. Deploy an S3 VPC gateway endpoint into the VPC and attach an endpoint policy that allows access to the S3 buckets.

Answer: D

Explanation:
The correct answer is Option D. Deploy an S3 VPC gateway endpoint into the VPC and attach an endpoint policy that allows access to the S3 buckets. By
deploying an S3 VPC gateway endpoint, the application can access the S3 buckets over a private network connection within the VPC, eliminating the need for data
transfer over the internet. This can help reduce data transfer fees as well as improve the performance of the application. The endpoint policy can be used to
specify which S3 buckets the application has access to.
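A minimal boto3 sketch of creating the gateway endpoint is shown below; the VPC ID, route table ID, Region, and bucket name in the endpoint policy are hypothetical.

import json
import boto3

ec2 = boto3.client("ec2")

policy = {
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": ["arn:aws:s3:::photo-bucket", "arn:aws:s3:::photo-bucket/*"],
    }]
}

# Gateway endpoints add a route so in-Region S3 traffic stays on the AWS network.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],
    PolicyDocument=json.dumps(policy),
)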

NEW QUESTION 25
- (Topic 1)
A company is preparing to store confidential data in Amazon S3. For compliance reasons, the data must be encrypted at rest. Encryption key usage must be logged
for auditing purposes. Keys must be rotated every year.
Which solution meets these requirements and is the MOST operationally efficient?

A. Server-side encryption with customer-provided keys (SSE-C)


B. Server-side encryption with Amazon S3 managed keys (SSE-S3)
C. Server-side encryption with AWS KMS (SSE-KMS) customer master keys (CMKs) with manual rotation
D. Server-side encryption with AWS KMS (SSE-KMS) customer master keys (CMKs) with automatic rotation

Answer: D

Explanation:
https://docs.aws.amazon.com/kms/latest/developerguide/rotate-keys.html When you enable automatic key rotation for a customer managed key, AWS KMS
generates new cryptographic material for the KMS key every year. AWS KMS also saves the KMS key's older cryptographic material in perpetuity so it can be
used to decrypt data that the KMS key encrypted.
Key rotation in AWS KMS is a cryptographic best practice that is designed to be transparent and easy to use. AWS KMS supports optional automatic key rotation
only for customer managed CMKs. Enable and disable key rotation. Automatic key rotation is disabled by default on customer managed CMKs. When you enable
(or re-enable) key rotation, AWS KMS automatically rotates the CMK 365 days after the enable date and every 365 days thereafter.
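A minimal boto3 sketch of this option is shown below; the bucket and object names are hypothetical. Each use of the key for encryption or decryption is logged to AWS CloudTrail, which satisfies the auditing requirement.

import boto3

kms = boto3.client("kms")

# Create a customer managed key and turn on automatic annual rotation.
key_id = kms.create_key(Description="S3 confidential data key")["KeyMetadata"]["KeyId"]
kms.enable_key_rotation(KeyId=key_id)

# Upload an object encrypted with the customer managed key (SSE-KMS).
boto3.client("s3").put_object(
    Bucket="confidential-data-bucket",  # hypothetical bucket
    Key="reports/q1.csv",
    Body=b"confidential,data",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId=key_id,
)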

NEW QUESTION 30
- (Topic 1)
A solutions architect is designing a two-tier web application. The application consists of a public-facing web tier hosted on Amazon EC2 in public subnets. The
database tier consists of Microsoft SQL Server running on Amazon EC2 in a private subnet. Security is a high priority for the company.
How should security groups be configured in this situation? (Select TWO.)

A. Configure the security group for the web tier to allow inbound traffic on port 443 from 0.0.0.0/0.
B. Configure the security group for the web tier to allow outbound traffic on port 443 from 0.0.0.0/0.
C. Configure the security group for the database tier to allow inbound traffic on port 1433 from the security group for the web tier.
D. Configure the security group for the database tier to allow outbound traffic on ports 443 and 1433 to the security group for the web tier.
E. Configure the security group for the database tier to allow inbound traffic on ports 443 and 1433 from the security group for the web tier.

Answer: AC

Explanation:
"Security groups create an outbound rule for every inbound rule." Not completely right. Statefull does NOT mean that if you create an inbound (or outbound) rule, it
will create an outbound (or inbound) rule. What it does mean is: suppose you create an inbound rule on port 443 for the X ip. When a request enters on port 443
from X ip, it will allow traffic out for that request in the port 443. However, if you look at the outbound rules, there will not be any outbound rule on port 443 unless
explicitly create it. In ACLs, which are stateless, you would have to create an inbound rule to allow incoming requests and an outbound rule to allow your
application responds to those incoming requests.
https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html#SecurityGro upRules
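A minimal boto3 sketch of the two selected rules is shown below; the security group IDs are hypothetical.

import boto3

ec2 = boto3.client("ec2")
WEB_SG = "sg-0aaa0000000000001"  # hypothetical web-tier security group
DB_SG = "sg-0bbb0000000000002"   # hypothetical database-tier security group

# Option A: web tier accepts inbound HTTPS from anywhere.
ec2.authorize_security_group_ingress(
    GroupId=WEB_SG,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)

# Option C: database tier accepts SQL Server traffic (port 1433) only from the web tier.
ec2.authorize_security_group_ingress(
    GroupId=DB_SG,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 1433, "ToPort": 1433,
        "UserIdGroupPairs": [{"GroupId": WEB_SG}],
    }],
)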

NEW QUESTION 34
- (Topic 1)
An application runs on an Amazon EC2 instance in a VPC. The application processes logs that are stored in an Amazon S3 bucket. The EC2 instance needs to
access the S3 bucket without connectivity to the internet.
Which solution will provide private network connectivity to Amazon S3?

A. Create a gateway VPC endpoint to the S3 bucket.


B. Stream the logs to Amazon CloudWatch Logs.
C. Export the logs to the S3 bucket.
D. Create an instance profile on Amazon EC2 to allow S3 access.
E. Create an Amazon API Gateway API with a private link to access the S3 endpoint.

Answer: A

Explanation:
A VPC endpoint allows you to connect to AWS services over a private network instead of over the public internet.

NEW QUESTION 35
- (Topic 1)
A company has thousands of edge devices that collectively generate 1 TB of status alerts each day. Each alert is approximately 2 KB in size. A solutions architect
needs to implement a solution to ingest and store the alerts for future analysis.
The company wants a highly available solution. However, the company needs to minimize costs and does not want to manage additional infrastructure. Additionally,
the company wants to keep 14 days of data available for immediate analysis and archive any data older than 14 days.
What is the MOST operationally efficient solution that meets these requirements?

A. Create an Amazon Kinesis Data Firehose delivery stream to ingest the alerts. Configure the Kinesis Data Firehose stream to deliver the alerts to an Amazon S3
bucket. Set up an S3 Lifecycle configuration to transition data to Amazon S3 Glacier after 14 days.
B. Launch Amazon EC2 instances across two Availability Zones and place them behind an Elastic Load Balancer to ingest the alerts. Create a script on the EC2
instances that will store the alerts in an Amazon S3 bucket. Set up an S3 Lifecycle configuration to transition data to Amazon S3 Glacier after 14 days.
C. Create an Amazon Kinesis Data Firehose delivery stream to ingest the alerts. Configure the Kinesis Data Firehose stream to deliver the alerts to an Amazon
Elasticsearch Service (Amazon ES) cluster. Set up the Amazon ES cluster to take manual snapshots every day and delete data from the cluster that is older than 14
days.
D. Create an Amazon Simple Queue Service (Amazon SQS) standard queue to ingest the alerts and set the message retention period to 14 days. Configure
consumers to poll the SQS queue, check the age of the message, and analyze the message data as needed. If the message is 14 days old, the consumer should
copy the message to an Amazon S3 bucket and delete the message from the SQS queue.

Answer: A

Explanation:
https://aws.amazon.com/kinesis/data-firehose/features/


NEW QUESTION 38
- (Topic 1)
A company has a production web application in which users upload documents through a web interface or a mobile app. According to a new regulatory
requirement, new documents cannot be modified or deleted after they are stored.
What should a solutions architect do to meet this requirement?

A. Store the uploaded documents in an Amazon S3 bucket with S3 Versioning and S3 Object Lock enabled
B. Store the uploaded documents in an Amazon S3 bucket.
C. Configure an S3 Lifecycle policy to archive the documents periodically.
D. Store the uploaded documents in an Amazon S3 bucket with S3 Versioning enabled Configure an ACL to restrict all access to read-only.
E. Store the uploaded documents on an Amazon Elastic File System (Amazon EFS) volume.
F. Access the data by mounting the volume in read-only mode.

Answer: A

Explanation:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock-overview.html

NEW QUESTION 43
- (Topic 1)
A company recently launched a variety of new workloads on Amazon EC2 instances in its AWS account. The company needs to create a strategy to access and
administer the instances remotely and securely. The company needs to implement a repeatable process that works with native AWS services and follows the AWS
Well-Architected Framework.
Which solution will meet these requirements with the LEAST operational overhead?

A. Use the EC2 serial console to directly access the terminal interface of each instance for administration.
B. Attach the appropriate IAM role to each existing instance and new instance.
C. Use AWS Systems Manager Session Manager to establish a remote SSH session.
D. Create an administrative SSH key pair.
E. Load the public key into each EC2 instance.
F. Deploy a bastion host in a public subnet to provide a tunnel for administration of each instance.
G. Establish an AWS Site-to-Site VPN connection.
H. Instruct administrators to use their local on-premises machines to connect directly to the instances by using SSH keys across the VPN tunnel.

Answer: B

Explanation:
https://docs.aws.amazon.com/systems-manager/latest/userguide/setup-launch-managed-instance.html

NEW QUESTION 44
- (Topic 1)
A company wants to reduce the cost of its existing three-tier web architecture. The web, application, and database servers are running on Amazon EC2 instances
for the development, test, and production environments. The EC2 instances average 30% CPU utilization during peak hours and 10% CPU utilization during non-
peak hours.
The production EC2 instances run 24 hours a day. The development and test EC2 instances run for at least 8 hours each day. The company plans to implement
automation to stop the development and test EC2 instances when they are not in use.
Which EC2 instance purchasing solution will meet the company's requirements MOST cost-effectively?

A. Use Spot Instances for the production EC2 instances.


B. Use Reserved Instances for the development and test EC2 instances.
C. Use Reserved Instances for the production EC2 instances.
D. Use On-Demand Instances for the development and test EC2 instances.
E. Use Spot blocks for the production EC2 instances.
F. Use Reserved Instances for the development and test EC2 instances.
G. Use On-Demand Instances for the production EC2 instances.
H. Use Spot blocks for the development and test EC2 instances.

Answer: B

NEW QUESTION 46
- (Topic 2)
A company sells ringtones created from clips of popular songs. The files containing the ringtones are stored in Amazon S3 Standard and are at least 128 KB in
size. The company has millions of files, but downloads are infrequent for ringtones older than 90 days. The company needs to save money on storage while
keeping the most accessed files readily available for its users.
Which action should the company take to meet these requirements MOST cost-effectively?

A. Configure S3 Standard-Infrequent Access (S3 Standard-IA) storage for the initial storage tier of the objects.
B. Move the files to S3 Intelligent-Tiering and configure it to move objects to a less expensive storage tier after 90 days.
C. Configure S3 inventory to manage objects and move them to S3 Standard-Infrequent Access (S3 Standard-IA) after 90 days.
D. Implement an S3 Lifecycle policy that moves the objects from S3 Standard to S3 Standard-Infrequent Access (S3 Standard-IA) after 90 days.

Answer: D

Explanation:
This solution meets the requirements of saving money on storage while keeping the most accessed files readily available for the users. S3 Lifecycle policy can
automatically move objects from one storage class to another based on predefined rules. S3 Standard-IA is a lower-cost storage class for data that is accessed
less frequently, but requires rapid access when needed. It is suitable for ringtones older than 90 days that are downloaded infrequently.
Option A is incorrect because configuring S3 Standard-IA for the initial storage tier of the objects can incur higher costs for frequent access and retrieval fees.
Option B is incorrect
because moving the files to S3 Intelligent-Tiering can incur additional monitoring and automation fees that may not be necessary for ringtones older than 90 days.


Option C is incorrect because using S3 inventory to manage objects and move them to S3 Standard-IA can be complex and time-consuming, and it does not
provide automatic cost savings. References:
? https://aws.amazon.com/s3/storage-classes/
? https://aws.amazon.com/s3/cloud-storage-cost-optimization-ebook/

NEW QUESTION 48
- (Topic 2)
A company has a Windows-based application that must be migrated to AWS. The application requires the use of a shared Windows file system attached to
multiple Amazon EC2 Windows instances that are deployed across multiple Availability Zones.
What should a solutions architect do to meet this requirement?

A. Configure AWS Storage Gateway in volume gateway mode.


B. Mount the volume to each Windows instance.
C. Configure Amazon FSx for Windows File Server.
D. Mount the Amazon FSx file system to each Windows instance.
E. Configure a file system by using Amazon Elastic File System (Amazon EFS). Mount the EFS file system to each Windows instance.
F. Configure an Amazon Elastic Block Store (Amazon EBS) volume with the required size.
G. Attach each EC2 instance to the volume.
H. Mount the file system within the volume to each Windows instance.

Answer: B

Explanation:
This solution meets the requirement of migrating a Windows-based application that requires the use of a shared Windows file system attached to multiple Amazon
EC2 Windows instances that are deployed across multiple Availability Zones. Amazon FSx for Windows File Server provides fully managed shared storage built on
Windows Server, and delivers a wide range of data access, data management, and administrative capabilities. It supports the Server Message Block (SMB)
protocol and can be mounted to EC2 Windows instances across multiple Availability Zones.
Option A is incorrect because AWS Storage Gateway in volume gateway mode provides cloud-backed storage volumes that can be mounted as iSCSI devices
from on-premises application servers, but it does not support SMB protocol or EC2 Windows instances. Option C is incorrect because Amazon Elastic File System
(Amazon EFS) provides a scalable and elastic NFS file system for Linux-based workloads, but it does not support SMB protocol or EC2 Windows instances.
Option D is incorrect because Amazon Elastic Block Store (Amazon EBS) provides persistent block storage volumes for use with EC2 instances, but it does not
support SMB protocol or attaching multiple instances to the same volume.
References:
? https://aws.amazon.com/fsx/windows/
? https://docs.aws.amazon.com/fsx/latest/WindowsGuide/using-file-shares.html

NEW QUESTION 51
- (Topic 2)
A company wants to migrate its on-premises data center to AWS. According to the company's compliance requirements, the company can use only the
ap-northeast-3 Region. Company administrators are not permitted to connect VPCs to the internet.
Which solutions will meet these requirements? (Choose two.)

A. Use AWS Control Tower to implement data residency guardrails to deny internet access and deny access to all AWS Regions except ap-northeast-3.
B. Use rules in AWS WAF to prevent internet access.
C. Deny access to all AWS Regions except ap-northeast-3 in the AWS account settings.
D. Use AWS Organizations to configure service control policies (SCPs) that prevent VPCs from gaining internet access.
E. Deny access to all AWS Regions except ap-northeast-3.
F. Create an outbound rule for the network ACL in each VPC to deny all traffic from 0.0.0.0/0. Create an IAM policy for each user to prevent the use of any AWS
Region other than ap-northeast-3.
G. Use AWS Config to activate managed rules to detect and alert for internet gateways and to detect and alert for new resources deployed outside of
ap-northeast-3.

Answer: AC

Explanation:
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps_examples_vpc.html#example_vpc_2
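A minimal boto3 sketch of the SCP approach referenced in the linked example is shown below; the policy name is hypothetical, and a production policy would normally also exempt global services from the Region condition.

import json
import boto3

org = boto3.client("organizations")

# Deny any request made outside ap-northeast-3.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": "*",
        "Resource": "*",
        "Condition": {"StringNotEquals": {"aws:RequestedRegion": "ap-northeast-3"}},
    }],
}

org.create_policy(
    Name="restrict-to-ap-northeast-3",  # hypothetical policy name
    Description="Deny requests outside ap-northeast-3",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)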

NEW QUESTION 56
- (Topic 2)
A company wants to use the AWS Cloud to make an existing application highly available and resilient. The current version of the application resides in the
company's data center. The application recently experienced data loss after a database server crashed because of an unexpected power outage.
The company needs a solution that avoids any single points of failure. The solution must give the application the ability to scale to meet user demand.
Which solution will meet these requirements?

A. Deploy the application servers by using Amazon EC2 instances in an Auto Scaling group across multiple Availability Zones.
B. Use an Amazon RDS DB instance in a Multi-AZ configuration.
C. Deploy the application servers by using Amazon EC2 instances in an Auto Scaling group in a single Availability Zone.
D. Deploy the database on an EC2 instance.
E. Enable EC2 Auto Recovery.
F. Deploy the application servers by using Amazon EC2 instances in an Auto Scaling group across multiple Availability Zones.
G. Use an Amazon RDS DB instance with a read replica in a single Availability Zone.
H. Promote the read replica to replace the primary DB instance if the primary DB instance fails.
I. Deploy the application servers by using Amazon EC2 instances in an Auto Scaling group across multiple Availability Zones. Deploy the primary and secondary
database servers on EC2 instances across multiple Availability Zones. Use Amazon Elastic Block Store (Amazon EBS) Multi-Attach to create shared storage
between the instances.

Answer: A

Explanation:
Deploy the application servers by using Amazon EC2 instances in an Auto Scaling group across multiple Availability Zones. Use an Amazon RDS DB instance in a
Multi-AZ configuration. To make an existing application highly available and resilient while avoiding any single points of failure and giving the application the ability
to scale to meet user demand, the best solution would be to deploy the application servers using Amazon EC2 instances in an Auto Scaling group across multiple
Availability Zones and use an Amazon RDS DB instance in a Multi-AZ configuration. By using an Amazon RDS DB instance in a Multi-AZ configuration, the
database is automatically replicated across multiple Availability Zones, ensuring that the database is highly available and can withstand the failure of a single
Availability Zone. This provides fault tolerance and avoids any single points of failure.

NEW QUESTION 59
- (Topic 2)
An ecommerce company hosts its analytics application in the AWS Cloud. The application generates about 300 MB of data each month. The data is stored in
JSON format. The company is evaluating a disaster recovery solution to back up the data. The data must be accessible in milliseconds if it is needed, and the data
must be kept for 30 days.
Which solution meets these requirements MOST cost-effectively?

A. Amazon OpenSearch Service (Amazon Elasticsearch Service)


B. Amazon S3 Glacier
C. Amazon S3 Standard
D. Amazon RDS for PostgreSQL

Answer: C

Explanation:
This solution meets the requirements of a disaster recovery solution to back up the data that is generated by an analytics application, stored in JSON format, and
must be accessible in milliseconds if it is needed. Amazon S3 Standard is a durable and scalable storage class for frequently accessed data. It can store any
amount of data and provide high availability and performance. It can also support millisecond access time for data retrieval.
Option A is incorrect because Amazon OpenSearch Service (Amazon Elasticsearch Service) is a search and analytics service that can index and query data, but it
is not a backup solution for data stored in JSON format. Option B is incorrect because Amazon S3 Glacier is a low-cost storage class for data archiving and long-
term backup, but it does not support millisecond access time for data retrieval. Option D is incorrect because Amazon RDS for PostgreSQL is a relational database
service that can store and query structured data, but it is not a backup solution for data stored in JSON format.
References:
? https://aws.amazon.com/s3/storage-classes/
? https://aws.amazon.com/s3/faqs/#Durability_and_data_protection

NEW QUESTION 64
- (Topic 2)
A company is building a web-based application running on Amazon EC2 instances in multiple Availability Zones. The web application will provide access to a
repository of text documents totaling about 900 TB in size. The company anticipates that the web application will experience periods of high demand. A solutions
architect must ensure that the storage component for the text documents can scale to meet the demand of the application at all times. The company is concerned
about the overall cost of the solution.
Which storage solution meets these requirements MOST cost-effectively?

A. Amazon Elastic Block Store (Amazon EBS)


B. Amazon Elastic File System (Amazon EFS)
C. Amazon Elasticsearch Service (Amazon ES)
D. Amazon S3

Answer: D

Explanation:
Amazon S3 is the most cost-effective option, scales automatically to any amount of data, and can be accessed from anywhere.

NEW QUESTION 65
- (Topic 2)
A company runs an application using Amazon ECS. The application creates resized versions of an original image and then makes Amazon S3 API calls to store the
resized images in Amazon S3.
How can a solutions architect ensure that the application has permission to access Amazon S3?

A. Update the S3 role in AWS IAM to allow read/write access from Amazon ECS, and then relaunch the container.
B. Create an IAM role with S3 permissions, and then specify that role as the taskRoleArn in the task definition.
C. Create a security group that allows access from Amazon ECS to Amazon S3, and update the launch configuration used by the ECS cluster.
D. Create an IAM user with S3 permissions, and then relaunch the Amazon EC2 instances for the ECS cluster while logged in as this account.

Answer: B

Explanation:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ecs-taskdefinition.html
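A minimal boto3 sketch of registering a task definition with a taskRoleArn is shown below; the family name, image URI, and role ARNs are hypothetical. Containers in the task inherit the S3 permissions from the task role without any credentials stored in the image.

import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="image-resizer",
    networkMode="awsvpc",
    requiresCompatibilities=["FARGATE"],
    cpu="256",
    memory="512",
    taskRoleArn="arn:aws:iam::123456789012:role/image-resizer-s3-role",      # grants S3 access to the application
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",  # pulls the image and writes logs
    containerDefinitions=[{
        "name": "resizer",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/resizer:latest",
        "essential": True,
    }],
)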

NEW QUESTION 66
- (Topic 2)
A company has an ecommerce checkout workflow that writes an order to a database and calls a service to process the payment. Users are experiencing timeouts
during the checkout process. When users resubmit the checkout form, multiple unique orders are created for the same desired transaction.
How should a solutions architect refactor this workflow to prevent the creation of multiple orders?

A. Configure the web application to send an order message to Amazon Kinesis Data Firehose.
B. Set the payment service to retrieve the message from Kinesis Data Firehose and process the order.
C. Create a rule in AWS CloudTrail to invoke an AWS Lambda function based on the logged application path request. Use Lambda to query the database, call the
payment service, and pass in the order information.
D. Store the order in the database.
E. Send a message that includes the order number to Amazon Simple Notification Service (Amazon SNS). Set the payment service to poll Amazon SNS,
F. retrieve the message, and process the order.


G. Store the order in the database.


H. Send a message that includes the order number to an Amazon Simple Queue Service (Amazon SQS) FIFO queue.
I. Set the payment service to retrieve the message and process the order.
J. Delete the message from the queue.

Answer: D

Explanation:
This approach ensures that the order creation and payment processing steps are separate and atomic. By sending the order information to an SQS FIFO queue,
the payment service can process the order one at a time and in the order they were received. If the payment service is unable to process an order, it can be retried
later, preventing the creation of multiple orders. The deletion of the message from the queue after it is processed will prevent the same message from being
processed multiple times.
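A minimal boto3 sketch of the SQS FIFO approach described above is shown below; the queue URL and order fields are hypothetical. Using the order number as the deduplication ID means a resubmitted checkout form collapses into a single message.

import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/orders.fifo"  # hypothetical queue

def enqueue_order(order: dict) -> None:
    # Duplicate submissions with the same order number are dropped by the FIFO queue.
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps(order),
        MessageGroupId=order["customer_id"],
        MessageDeduplicationId=order["order_number"],
    )

def process_orders() -> None:
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20)
    for msg in resp.get("Messages", []):
        order = json.loads(msg["Body"])
        # ... call the payment service with the order here ...
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])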

NEW QUESTION 68
- (Topic 2)
A new employee has joined a company as a deployment engineer. The deployment engineer will be using AWS CloudFormation templates to create multiple AWS
resources. A solutions architect wants the deployment engineer to perform job activities while following the principle of least privilege.
Which steps should the solutions architect do in conjunction to reach this goal? (Select two.)

A. Have the deployment engineer use AWS account root user credentials for performing AWS CloudFormation stack operations.
B. Create a new IAM user for the deployment engineer and add the IAM user to a group that has the PowerUsers IAM policy attached.
C. Create a new IAM user for the deployment engineer and add the IAM user to a group that has the AdministratorAccess IAM policy attached.
D. Create a new IAM user for the deployment engineer and add the IAM user to a group that has an IAM policy that allows AWS CloudFormation actions only.
E. Create an IAM role for the deployment engineer to explicitly define the permissions specific to the AWS CloudFormation stack and launch stacks using that IAM
role.

Answer: DE

Explanation:
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users.html

NEW QUESTION 71
- (Topic 2)
A company is running an online transaction processing (OLTP) workload on AWS. This workload uses an unencrypted Amazon RDS DB instance in a Multi-AZ
deployment. Daily database snapshots are taken from this instance.
What should a solutions architect do to ensure the database and snapshots are always encrypted moving forward?

A. Encrypt a copy of the latest DB snapshot.


B. Replace the existing DB instance by restoring the encrypted snapshot.
C. Create a new encrypted Amazon Elastic Block Store (Amazon EBS) volume and copy the snapshots to it. Enable encryption on the DB instance.
D. Copy the snapshots and enable encryption using AWS Key Management Service (AWS KMS). Restore the encrypted snapshot to an existing DB instance.
E. Copy the snapshots to an Amazon S3 bucket that is encrypted using server-side encryption with AWS Key Management Service (AWS KMS) managed keys
(SSE-KMS)

Answer: A

Explanation:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_RestoreFromSnapshot.html#USER_RestoreFromSnapshot.CON
Under "Encrypt unencrypted resources" - https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html

NEW QUESTION 76
- (Topic 2)
A company is planning to move its data to an Amazon S3 bucket. The data must be encrypted when it is stored in the S3 bucket. Additionally, the encryption key
must be automatically rotated every year.
Which solution will meet these requirements with the LEAST operational overhead?

A. Move the data to the S3 bucket.


B. Use server-side encryption with Amazon S3 managed encryption keys (SSE-S3). Use the built-in key rotation behavior of SSE-S3 encryption keys.
C. Create an AWS Key Management Service (AWS KMS) customer managed key.
D. Enable automatic key rotation.
E. Set the S3 bucket's default encryption behavior to use the customer managed KMS key.
F. Move the data to the S3 bucket.
G. Create an AWS Key Management Service (AWS KMS) customer managed key.
H. Set the S3 bucket's default encryption behavior to use the customer managed KMS key.
I. Move the data to the S3 bucket.
J. Manually rotate the KMS key every year.
K. Encrypt the data with customer key material before moving the data to the S3 bucket.
L. Create an AWS Key Management Service (AWS KMS) key without key material.
M. Import the customer key material into the KMS key.
N. Enable automatic key rotation.

Answer: B

Explanation:
SSE-S3 - is free and uses AWS owned CMKs (CMK = Customer Master Key). The encryption key is owned and managed by AWS, and is shared among many
accounts. Its rotation is automatic with time that varies as shown in the table here. The time is not explicitly defined.
SSE-KMS - has two flavors:
AWS managed CMK. This is free CMK generated only for your account. You can only view it policies and audit usage, but not manage it. Rotation is automatic -
once per 1095 days (3 years),
Customer managed CMK. This uses your own key that you create and can manage. Rotation is not enabled by default, but if you enable it, the key is automatically
rotated every 1 year. This variant can also use key material that you import. If you create such a key with imported material, there is no automated rotation, only
manual rotation.
SSE-C - customer provided key. The encryption key is fully managed by you outside of AWS. AWS will not rotate it.
This solution meets the requirements of moving data to an Amazon S3 bucket, encrypting the data when it is stored in the S3 bucket, and automatically rotating the
encryption key every year with the least operational overhead. AWS Key Management Service (AWS KMS) is a service that enables you to create and manage
encryption keys for your data. A customer managed key is a symmetric encryption key that you create and manage in AWS KMS. You can enable automatic key
rotation for a customer managed key, which means that AWS KMS generates new cryptographic material for the key every year. You can set the S3 bucket’s
default encryption behavior to use the customer managed KMS key, which means that any object that is uploaded to the bucket without specifying an encryption
method will be encrypted with that key.
Option A is incorrect because using server-side encryption with Amazon S3 managed encryption keys (SSE-S3) does not allow you to control or manage the
encryption keys. SSE-S3 uses a unique key for each object, and encrypts that key with a master key that is regularly rotated by S3. However, you cannot enable or
disable key rotation for SSE-S3 keys, or specify the rotation interval. Option C is incorrect because manually rotating the KMS key every year can increase the
operational overhead and complexity, and it may not meet the requirement of rotating the key every year if you forget or delay the rotation
process. Option D is incorrect because encrypting the data with customer key material before moving the data to the S3 bucket can increase the operational
overhead and complexity, and it may not provide consistent encryption for all objects in the bucket. Creating a KMS key without key material and importing the
customer key material into the KMS key can enable you to use your own source of random bits to generate your KMS keys, but it does not support automatic key
rotation.
References:
? https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html
? https://docs.aws.amazon.com/kms/latest/developerguide/rotate-keys.html
? https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucket-encryption.html
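As a rough illustration of option B, the sketch below uses the boto3 SDK to create a customer managed KMS key, enable automatic annual rotation, and set the key as the bucket's default encryption. The bucket name is a hypothetical placeholder, not something from the question.

```python
import boto3

kms = boto3.client("kms")
s3 = boto3.client("s3")

# Create a customer managed key and turn on automatic annual rotation.
key = kms.create_key(Description="S3 default encryption key")
key_id = key["KeyMetadata"]["KeyId"]
kms.enable_key_rotation(KeyId=key_id)

# Point the bucket's default encryption at the customer managed key.
s3.put_bucket_encryption(
    Bucket="example-data-bucket",  # hypothetical bucket name
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": key_id,
                }
            }
        ]
    },
)
```

Any object uploaded to the bucket without an explicit encryption header is then encrypted with the customer managed key, and AWS KMS handles the yearly rotation.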

NEW QUESTION 77
- (Topic 2)
A company has a highly dynamic batch processing job that uses many Amazon EC2 instances to complete it. The job is stateless in nature, can be started and
stopped at any given time with no negative impact, and typically takes upwards of 60 minutes total to complete. The company has asked a solutions architect to
design a scalable and cost-effective solution that meets the requirements of the job.
What should the solutions architect recommend?

A. Implement EC2 Spot Instances


B. Purchase EC2 Reserved Instances
C. Implement EC2 On-Demand Instances
D. Implement the processing on AWS Lambda

Answer: A

Explanation:
EC2 Spot Instances allow users to bid on spare Amazon EC2 computing capacity and can be a cost-effective solution for stateless, interruptible workloads that can
be started and stopped at any time. Since the batch processing job is stateless, can be started and stopped at any time, and typically takes upwards of 60 minutes
to complete, EC2 Spot Instances would be a good fit for this workload.

NEW QUESTION 82
- (Topic 2)
A company runs a web-based portal that provides users with global breaking news, local alerts, and weather updates. The portal delivers each user a personalized
view by using mixture of static and dynamic content. Content is served over HTTPS through an API server running on an Amazon EC2 instance behind an
Application Load Balancer (ALB). The company wants the portal to provide this content to its users across the world as quickly as possible.
How should a solutions architect design the application to ensure the LEAST amount of latency for all users?

A. Deploy the application stack in a single AWS Region. Use Amazon CloudFront to serve all static and dynamic content by specifying the ALB as an origin.
B. Deploy the application stack in two AWS Regions. Use an Amazon Route 53 latency routing policy to serve all content from the ALB in the closest Region.
C. Deploy the application stack in a single AWS Region. Use Amazon CloudFront to serve the static content. Serve the dynamic content directly from the ALB.
D. Deploy the application stack in two AWS Regions. Use an Amazon Route 53 geolocation routing policy to serve all content from the ALB in the closest Region.

Answer: A

Explanation:
https://aws.amazon.com/blogs/networking-and-content-delivery/deliver-your-apps-dynamic-content-using-amazon-cloudfront-getting-started-template/

NEW QUESTION 87
- (Topic 2)
A hospital wants to create digital copies for its large collection of historical written records. The hospital will continue to add hundreds of new documents each day.
The hospital's data team will scan the documents and will upload the documents to the AWS Cloud.
A solutions architect must implement a solution to analyze the documents, extract the medical information, and store the documents so that an application can run
SQL queries on the data. The solution must maximize scalability and operational efficiency.
Which combination of steps should the solutions architect take to meet these requirements? (Select TWO.)

A. Write the document information to an Amazon EC2 instance that runs a MySQL database.
B. Write the document information to an Amazon S3 bucket. Use Amazon Athena to query the data.
C. Create an Auto Scaling group of Amazon EC2 instances to run a custom application that processes the scanned files and extracts the medical information.
D. Create an AWS Lambda function that runs when new documents are uploaded. Use Amazon Rekognition to convert the documents to raw text. Use Amazon Transcribe Medical to detect and extract relevant medical information from the text.
E. Create an AWS Lambda function that runs when new documents are uploaded. Use Amazon Textract to convert the documents to raw text. Use Amazon Comprehend Medical to detect and extract relevant medical information from the text.

Answer: BE

Explanation:
This solution meets the requirements of creating digital copies for a large collection of historical written records, analyzing the documents, extracting the medical
information, and storing the documents so that an application can run SQL queries on the data. Writing the document information to an Amazon S3 bucket can
provide scalable and durable storage for the scanned files. Using Amazon Athena to query the data can provide serverless and interactive SQL analysis on data
stored in S3. Creating an AWS Lambda function that runs when new documents are uploaded can provide event-driven and serverless processing of the scanned
files. Using Amazon Textract to convert the documents to raw text can provide
accurate optical character recognition (OCR) and extraction of structured data such as tables and forms from documents using artificial intelligence (AI). Using
Amazon Comprehend Medical to detect and extract relevant medical information from the text can provide natural language processing (NLP) service that uses
machine learning that has been pre-trained to understand and extract health data from medical text.
Option A is incorrect because writing the document information to an Amazon EC2 instance that runs a MySQL database can increase the infrastructure overhead
and complexity, and it may not be able to handle large volumes of data. Option C is incorrect because creating an Auto Scaling group of Amazon EC2 instances to
run a custom application that processes the scanned files and extracts the medical information can increase the infrastructure overhead and complexity, and it may
not be able to leverage existing AI and NLP services such as Textract and Comprehend Medical. Option D is incorrect because using Amazon Rekognition to
convert the documents to raw text can provide image and video analysis, but it does not support OCR or extraction of structured data from documents. Using
Amazon Transcribe Medical to detect and extract relevant medical information from the text can provide speech-to-text transcription service for medical
conversations, but it does not support text analysis or extraction of health data from medical text.
References:
? https://aws.amazon.com/s3/
? https://aws.amazon.com/athena/
? https://aws.amazon.com/lambda/
? https://aws.amazon.com/textract/
? https://aws.amazon.com/comprehend/medical/

NEW QUESTION 91
- (Topic 2)
A company owns an asynchronous API that is used to ingest user requests and, based on the request type, dispatch requests to the appropriate microservice for
processing. The company is using Amazon API Gateway to deploy the API front end, and an AWS Lambda function that invokes Amazon DynamoDB to store user
requests before dispatching them to the processing microservices.
The company provisioned as much DynamoDB throughput as its budget allows, but the company is still experiencing availability issues and is losing user requests.
What should a solutions architect do to address this issue without impacting existing users?

A. Add throttling on the API Gateway with server-side throttling limits.


B. Use DynamoDB Accelerator (DAX) and Lambda to buffer writes to DynamoDB.
C. Create a secondary index in DynamoDB for the table with the user requests.
D. Use the Amazon Simple Queue Service (Amazon SQS) queue and Lambda to buffer writes to DynamoDB.

Answer: D

Explanation:
By using an SQS queue and Lambda, the solutions architect can decouple the API front end from the processing microservices and improve the overall scalability
and availability of the system. The SQS queue acts as a buffer, allowing the API front end to continue accepting user requests even if the processing microservices
are experiencing high workloads or are temporarily unavailable. The Lambda function can then retrieve requests from the SQS queue and write them to
DynamoDB, ensuring that all user requests are stored and processed. This approach allows the company to scale the processing microservices independently
from the API front end, ensuring that the API remains available to users even during periods of high demand.
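A minimal sketch of option D, assuming the queue and table already exist; the queue URL, table name, and payload fields are hypothetical placeholders. The API-facing Lambda buffers the request in SQS, and a second Lambda, triggered by the SQS event source mapping, writes it to DynamoDB.

```python
import json
import boto3

sqs = boto3.client("sqs")
dynamodb = boto3.resource("dynamodb")

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/user-requests"  # placeholder
TABLE = dynamodb.Table("UserRequests")  # placeholder table name


def enqueue_handler(event, context):
    # Called by API Gateway: buffer the request instead of writing to DynamoDB directly.
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(event))
    return {"statusCode": 202, "body": "accepted"}


def worker_handler(event, context):
    # Triggered by the SQS event source mapping: drain messages into DynamoDB.
    for record in event["Records"]:
        TABLE.put_item(Item=json.loads(record["body"]))
```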

NEW QUESTION 94
- (Topic 2)
A company uses AWS Organizations to create dedicated AWS accounts for each business unit to manage each business unit's account independently upon
request. The root email recipient missed a notification that was sent to the root user email address of one account. The company wants to ensure that all future
notifications are not missed. Future notifications must be limited to account administrators.
Which solution will meet these requirements?

A. Configure the company's email server to forward notification email messages that are sent to the AWS account root user email address to all users in the
organization.
B. Configure all AWS account root user email addresses as distribution lists that go to a few administrators who can respond to alerts. Configure AWS account alternate contacts in the AWS Organizations console or programmatically.
C. Configure all AWS account root user email messages to be sent to one administrator who is responsible for monitoring alerts and forwarding those alerts to the appropriate groups.
D. Configure all existing AWS accounts and all newly created accounts to use the same root user email address. Configure AWS account alternate contacts in the AWS Organizations console or programmatically.

Answer: B

Explanation:
Use a group email address for the management account's root user: https://docs.aws.amazon.com/organizations/latest/userguide/orgs_best-practices_mgmt-acct.html#best-practices_mgmt-acct_email-address

NEW QUESTION 99
- (Topic 2)
A company runs its ecommerce application on AWS. Every new order is published as a message in a RabbitMQ queue that runs on an Amazon EC2 instance in a
single Availability Zone. These messages are processed by a different application that runs on a separate EC2 instance. This application stores the details in a
PostgreSQL database on another EC2 instance. All the EC2 instances are in the same Availability Zone.
The company needs to redesign its architecture to provide the highest availability with the least operational overhead.
What should a solutions architect do to meet these requirements?

A. Migrate the queue to a redundant pair (active/standby) of RabbitMQ instances on Amazon MQ. Create a Multi-AZ Auto Scaling group for EC2 instances that host the application. Create another Multi-AZ Auto Scaling group for EC2 instances that host the PostgreSQL database.
B. Migrate the queue to a redundant pair (active/standby) of RabbitMQ instances on Amazon MQ. Create a Multi-AZ Auto Scaling group for EC2 instances that host the application. Migrate the database to run on a Multi-AZ deployment of Amazon RDS for PostgreSQL.
C. Create a Multi-AZ Auto Scaling group for EC2 instances that host the RabbitMQ queue. Create another Multi-AZ Auto Scaling group for EC2 instances that host the application. Migrate the database to run on a Multi-AZ deployment of Amazon RDS for PostgreSQL.
D. Create a Multi-AZ Auto Scaling group for EC2 instances that host the RabbitMQ queue. Create another Multi-AZ Auto Scaling group for EC2 instances that host the application. Create a third Multi-AZ Auto Scaling group for EC2 instances that host the PostgreSQL database.

Answer: B

Explanation:
Migrating to Amazon MQ reduces the overhead of queue management, so C and D are dismissed. Deciding between A and B means choosing between an Auto Scaling group of EC2 instances and Amazon RDS for PostgreSQL (both Multi-AZ) for the database. The RDS option has less operational impact because it provides the required tools and software as a managed service; consider, for instance, the effort needed to add another node, such as a read replica, to the database. https://docs.aws.amazon.com/amazon-mq/latest/developer-guide/active-standby-broker-deployment.html https://aws.amazon.com/rds/postgresql/

NEW QUESTION 101


- (Topic 2)
A company wants to migrate its existing on-premises monolithic application to AWS.
The company wants to keep as much of the front- end code and the backend code as possible. However, the company wants to break the application into smaller
applications. A different team will manage each application. The company needs a highly scalable solution that minimizes operational overhead.
Which solution will meet these requirements?

A. Host the application on AWS Lambda. Integrate the application with Amazon API Gateway.
B. Host the application with AWS Amplify. Connect the application to an Amazon API Gateway API that is integrated with AWS Lambda.
C. Host the application on Amazon EC2 instances. Set up an Application Load Balancer with EC2 instances in an Auto Scaling group as targets.
D. Host the application on Amazon Elastic Container Service (Amazon ECS). Set up an Application Load Balancer with Amazon ECS as the target.

Answer: D

Explanation:
https://aws.amazon.com/blogs/compute/microservice-delivery-with-amazon-ecs-and-application-load-balancers/

NEW QUESTION 106


- (Topic 2)
A company has a legacy data processing application that runs on Amazon EC2 instances. Data is processed sequentially, but the order of results does not matter.
The application uses a monolithic architecture. The only way that the company can scale the application to meet increased demand is to increase the size of the
instances.
The company's developers have decided to rewrite the application to use a microservices architecture on Amazon Elastic Container Service (Amazon ECS).
What should a solutions architect recommend for communication between the microservices?

A. Create an Amazon Simple Queue Service (Amazon SQS) queue. Add code to the data producers, and send data to the queue. Add code to the data consumers to process data from the queue.
B. Create an Amazon Simple Notification Service (Amazon SNS) topic. Add code to the data producers, and publish notifications to the topic. Add code to the data consumers to subscribe to the topic.
C. Create an AWS Lambda function to pass messages. Add code to the data producers to call the Lambda function with a data object. Add code to the data consumers to receive a data object that is passed from the Lambda function.
D. Create an Amazon DynamoDB table. Enable DynamoDB Streams. Add code to the data producers to insert data into the table. Add code to the data consumers to use the DynamoDB Streams API to detect new table entries and retrieve the data.

Answer: A

Explanation:
An SQS queue decouples the data producers from the data consumers, which suits a microservices architecture where each service scales independently. Because the order of results does not matter, a standard queue is sufficient; a FIFO queue, which preserves ordering, provides exactly-once delivery, requires a name ending in .fifo, and is limited to 300 messages per second without batching (3,000 per second with batching of up to 10 messages per operation), is not required.

NEW QUESTION 110


- (Topic 2)
A solutions architect is designing a customer-facing application for a company. The application's database will have a clearly defined access pattern throughout the
year and will have a variable number of reads and writes that depend on the time of year. The company must retain audit records for the database for 7 days. The
recovery point objective (RPO) must be less than 5 hours.
Which solution meets these requirements?

A. Use Amazon DynamoDB with auto scaling. Use on-demand backups and Amazon DynamoDB Streams.
B. Use Amazon Redshift. Configure concurrency scaling. Activate audit logging. Perform database snapshots every 4 hours.
C. Use Amazon RDS with Provisioned IOPS. Activate the database auditing parameter. Perform database snapshots every 5 hours.
D. Use Amazon Aurora MySQL with auto scaling. Activate the database auditing parameter.

Answer: A

Explanation:
This solution meets the requirements of a customer-facing application that has a clearly defined access pattern throughout the year and a variable number of reads
and writes that depend on the time of year. Amazon DynamoDB is a fully managed NoSQL database service that can handle any level of request traffic and data
size. DynamoDB auto scaling can automatically adjust the provisioned read and write capacity based on the actual workload. DynamoDB on-demand backups can
create full backups of the tables for data protection and archival purposes. DynamoDB Streams can capture a time-ordered sequence of item-level modifications in
the tables for audit purposes.
Option B is incorrect because Amazon Redshift is a data warehouse service that is designed for analytical workloads, not for customer-facing applications. Option
C is incorrect because Amazon RDS with Provisioned IOPS can provide consistent performance for relational databases, but it may not be able to handle
unpredictable spikes in traffic and data size. Option D is incorrect because Amazon Aurora MySQL with auto scaling can provide high performance and availability
for relational databases, but it does not support audit logging as a parameter.
References:
? https://aws.amazon.com/dynamodb/
? https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/AutoScaling.html
? https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/BackupRestore.html
? https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.html
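For reference, DynamoDB auto scaling is configured through Application Auto Scaling. Below is a hedged boto3 sketch; the table name and capacity limits are placeholders, not values from the question.

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

# Register the table's read capacity as a scalable target (5 to 500 RCUs here).
autoscaling.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/CustomerOrders",            # placeholder table name
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    MinCapacity=5,
    MaxCapacity=500,
)

# Keep read utilization near 70 percent with a target tracking policy.
autoscaling.put_scaling_policy(
    ServiceNamespace="dynamodb",
    ResourceId="table/CustomerOrders",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    PolicyName="read-utilization-target",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
        },
    },
)
```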

NEW QUESTION 114


- (Topic 2)
A company has implemented a self-managed DNS solution on three Amazon EC2 instances behind a Network Load Balancer (NLB) in the us-west-2 Region. Most
of the company's users are located in the United States and Europe. The company wants to improve the performance and availability of the solution. The company
launches and configures three EC2 instances in the eu-west-1 Region and adds the EC2 instances as targets for a new NLB.
Which solution can the company use to route traffic to all the EC2 instances?

A. Create an Amazon Route 53 geolocation routing policy to route requests to one of the two NLBs. Create an Amazon CloudFront distribution. Use the Route 53 record as the distribution's origin.
B. Create a standard accelerator in AWS Global Accelerator. Create endpoint groups in us-west-2 and eu-west-1. Add the two NLBs as endpoints for the endpoint groups.
C. Attach Elastic IP addresses to the six EC2 instances. Create an Amazon Route 53 geolocation routing policy to route requests to one of the six EC2 instances. Create an Amazon CloudFront distribution. Use the Route 53 record as the distribution's origin.
D. Replace the two NLBs with two Application Load Balancers (ALBs). Create an Amazon Route 53 latency routing policy to route requests to one of the two ALBs. Create an Amazon CloudFront distribution. Use the Route 53 record as the distribution's origin.

Answer: B

Explanation:
For standard accelerators, Global Accelerator uses the AWS global network to route traffic to the optimal regional endpoint based on health, client location, and
policies that you configure, which increases the availability of your applications. Endpoints for standard accelerators can be Network Load Balancers, Application
Load Balancers, Amazon EC2 instances, or Elastic IP addresses that are located in one AWS Region or multiple Regions.
https://docs.aws.amazon.com/global-accelerator/latest/dg/what-is-global-accelerator.html
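A rough boto3 sketch of option B, assuming the two NLBs already exist; the accelerator name and load balancer ARNs are placeholders, and the UDP listener on port 53 is only an assumption for a DNS workload. Global Accelerator API calls are made against us-west-2.

```python
import boto3

ga = boto3.client("globalaccelerator", region_name="us-west-2")

accelerator = ga.create_accelerator(Name="dns-accelerator", Enabled=True)
listener = ga.create_listener(
    AcceleratorArn=accelerator["Accelerator"]["AcceleratorArn"],
    Protocol="UDP",
    PortRanges=[{"FromPort": 53, "ToPort": 53}],
)

# One endpoint group per Region, each pointing at that Region's NLB (placeholder ARNs).
for region, nlb_arn in [
    ("us-west-2", "arn:aws:elasticloadbalancing:us-west-2:123456789012:loadbalancer/net/dns-us/abc"),
    ("eu-west-1", "arn:aws:elasticloadbalancing:eu-west-1:123456789012:loadbalancer/net/dns-eu/def"),
]:
    ga.create_endpoint_group(
        ListenerArn=listener["Listener"]["ListenerArn"],
        EndpointGroupRegion=region,
        EndpointConfigurations=[{"EndpointId": nlb_arn, "Weight": 128}],
    )
```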

NEW QUESTION 116


- (Topic 3)
An ecommerce company is experiencing an increase in user traffic. The company's store is deployed on Amazon EC2 instances as a two-tier web application
consisting of a web tier and a separate database tier. As traffic increases, the company notices that the architecture is causing significant delays in sending timely
marketing and order confirmation email to users. The company wants to reduce the time it spends resolving complex email delivery issues and minimize
operational overhead.
What should a solutions architect do to meet these requirements?

A. Create a separate application tier using EC2 instances dedicated to email processing.
B. Configure the web instance to send email through Amazon Simple Email Service (Amazon SES).
C. Configure the web instance to send email through Amazon Simple Notification Service (Amazon SNS)
D. Create a separate application tier using EC2 instances dedicated to email processing. Place the instances in an Auto Scaling group.

Answer: B

Explanation:
Amazon SES is a cost-effective and scalable email service that enables businesses to send and receive email using their own email addresses and domains.
Configuring the web instance to send email through Amazon SES is a simple and effective solution that can reduce the time spent resolving complex email
delivery issues and minimize operational overhead.
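A minimal sketch of sending an order confirmation through Amazon SES from the web tier; the sender and recipient addresses are placeholders and, in practice, the sender must be verified in SES (or the account moved out of the SES sandbox).

```python
import boto3

ses = boto3.client("ses")

ses.send_email(
    Source="orders@example.com",                      # verified sender (placeholder)
    Destination={"ToAddresses": ["customer@example.com"]},
    Message={
        "Subject": {"Data": "Your order has shipped"},
        "Body": {"Text": {"Data": "Thanks for your purchase!"}},
    },
)
```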

NEW QUESTION 121


- (Topic 3)
A gaming company is moving its public scoreboard from a data center to the AWS Cloud. The company uses Amazon EC2 Windows Server instances behind an
Application Load Balancer to host its dynamic application. The company needs a highly available storage solution for the application. The application consists of
static files and dynamic server-side code.
Which combination of steps should a solutions architect take to meet these requirements? (Select TWO.)

A. Store the static files on Amazon S3. Use Amazon CloudFront to cache objects at the edge.
B. Store the static files on Amazon S3. Use Amazon ElastiCache to cache objects at the edge.
C. Store the server-side code on Amazon Elastic File System (Amazon EFS). Mount the EFS volume on each EC2 instance to share the files.

D. Store the server-side code on Amazon FSx for Windows File Server. Mount the FSx for Windows File Server volume on each EC2 instance to share the files.
E. Store the server-side code on a General Purpose SSD (gp2) Amazon Elastic Block Store (Amazon EBS) volume. Mount the EBS volume on each EC2 instance to share the files.

Answer: AD

Explanation:
A because Elasticache, despite being ideal for leaderboards per Amazon, doesn't cache at edge locations. D because FSx has higher performance for low latency
needs. https://www.techtarget.com/searchaws/tip/Amazon-FSx-vs-EFS-Compare-the-AWS-file-services "FSx is built for high performance and submillisecond
latency using solid-state drive storage volumes. This design enables users to select storage capacity and latency independently. Thus, even a subterabyte file
system can have 256 Mbps or higher throughput and support volumes up to 64 TB."
Amazon S3 is an object storage service that can store static files such as images, videos, documents, etc. Amazon EFS is a file storage service that can store files
in a hierarchical structure and supports NFS protocol. Amazon FSx for Windows File Server is a file storage service that can store files in a hierarchical structure
and supports SMB protocol. Amazon EBS is a block storage service that can store data in fixed-size blocks and attach to EC2 instances.
Based on these definitions, the combination of steps that should be taken to meet the requirements are:
* A. Store the static files on Amazon S3. Use Amazon CloudFront to cache objects at the edge. D. Store the server-side code on Amazon FSx for Windows File
Server. Mount the FSx for Windows File Server volume on each EC2 instance to share the files.

NEW QUESTION 126


- (Topic 3)
A company hosts a marketing website in an on-premises data center. The website consists of static documents and runs on a single server. An administrator
updates the website content infrequently and uses an SFTP client to upload new documents.
The company decides to host its website on AWS and to use Amazon CloudFront. The company's solutions architect creates a CloudFront distribution. The
solutions architect
must design the most cost-effective and resilient architecture for website hosting to serve as the CloudFront origin.
Which solution will meet these requirements?

A. Create a virtual server by using Amazon Lightsail. Configure the web server in the Lightsail instance. Upload website content by using an SFTP client.
B. Create an AWS Auto Scaling group for Amazon EC2 instances. Use an Application Load Balancer. Upload website content by using an SFTP client.
C. Create a private Amazon S3 bucket. Use an S3 bucket policy to allow access from a CloudFront origin access identity (OAI). Upload website content by using the AWS CLI.
D. Create a public Amazon S3 bucket. Configure AWS Transfer for SFTP. Configure the S3 bucket for website hosting. Upload website content by using the SFTP client.

Answer: C

Explanation:
https://docs.aws.amazon.com/cli/latest/reference/transfer/describe-server.html

NEW QUESTION 130


- (Topic 3)
A telemarketing company is designing its customer call center functionality on AWS. The company needs a solution that provides multiple speaker recognition and generates transcript files. The company wants to query the transcript files to analyze the business patterns. The transcript files must be stored for 7 years for auditing purposes.
Which solution will meet these requirements?

A. Use Amazon Rekognition for multiple speaker recognition. Store the transcript files in Amazon S3. Use machine learning models for transcript file analysis.
B. Use Amazon Transcribe for multiple speaker recognition. Use Amazon Athena for transcript file analysis.
C. Use Amazon Translate for multiple speaker recognition. Store the transcript files in Amazon Redshift. Use SQL queries for transcript file analysis.
D. Use Amazon Rekognition for multiple speaker recognition. Store the transcript files in Amazon S3. Use Amazon Textract for transcript file analysis.

Answer: B

Explanation:
Amazon Transcribe now supports speaker labeling for streaming transcription. Amazon Transcribe is an automatic speech recognition (ASR) service that makes it
easy for you to convert speech-to-text. In live audio transcription, each stream of audio may contain multiple speakers. Now you can conveniently turn on the ability
to label speakers, thus helping to identify who is saying what in the output transcript. https://aws.amazon.com/about-aws/whats-new/2020/08/amazon-transcribe-supports-speaker-labeling-streaming-transcription/

NEW QUESTION 133


- (Topic 3)
A company's facility has badge readers at every entrance throughout the building. When badges are scanned, the readers send a message over HTTPS to
indicate who attempted to access that particular entrance.
A solutions architect must design a system to process these messages from the sensors. The solution must be highly available, and the results must be made
available for the company's security team to analyze.
Which system architecture should the solutions architect recommend?

A. Launch an Amazon EC2 instance to serve as the HTTPS endpoint and to process the messages. Configure the EC2 instance to save the results to an Amazon S3 bucket.
B. Create an HTTPS endpoint in Amazon API Gateway. Configure the API Gateway endpoint to invoke an AWS Lambda function to process the messages and save the results to an Amazon DynamoDB table.
C. Use Amazon Route 53 to direct incoming sensor messages to an AWS Lambda function. Configure the Lambda function to process the messages and save the results to an Amazon DynamoDB table.
D. Create a gateway VPC endpoint for Amazon S3. Configure a Site-to-Site VPN connection from the facility network to the VPC so that sensor data can be written directly to an S3 bucket by way of the VPC endpoint.

Answer: B

Explanation:
Deploy Amazon API Gateway as an HTTPS endpoint and AWS Lambda to process and save the messages to an Amazon DynamoDB table. This option provides
a highly available and scalable solution that can easily handle large amounts of data. It also integrates with other AWS services, making it easier to analyze and
visualize the data for the security team.

NEW QUESTION 136


- (Topic 3)
A solutions architect is designing the architecture for a software demonstration environment. The environment will run on Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer (ALB). The system will experience significant increases in traffic during working hours but is not required to operate on weekends.
Which combination of actions should the solutions architect take to ensure that the system can scale to meet demand? (Select TWO)

A. Use AWS Auto Scaling to adjust the ALB capacity based on request rate
B. Use AWS Auto Scaling to scale the capacity of the VPC internet gateway
C. Launch the EC2 instances in multiple AWS Regions to distribute the load across Regions
D. Use a target tracking scaling policy to scale the Auto Scaling group based on instance CPU utilization
E. Use scheduled scaling to change the Auto Scaling group minimum, maximum, and desired capacity to zero for weekends Revert to the default values at the
start of the week

Answer: DE

Explanation:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scaling-target-tracking.html#target-tracking-choose-metrics
A target tracking scaling policy is a type of dynamic scaling policy that adjusts the capacity of an Auto Scaling group based on a specified metric and a target
value1. A target tracking scaling policy can automatically scale out or scale in the Auto Scaling group to keep the actual metric value at or near the target value1. A
target tracking scaling policy is suitable for scenarios where the load on the application changes frequently and unpredictably, such as during working hours2.
To meet the requirements of the scenario, the solutions architect should use a target tracking scaling policy to scale the Auto Scaling group based on instance
CPU utilization. Instance CPU utilization is a common metric that reflects the demand on the
application1. The solutions architect should specify a target value that represents the ideal average CPU utilization level for the application, such as 50 percent1.
Then, the Auto Scaling group will scale out or scale in to maintain that level of CPU utilization.
Scheduled scaling is a type of scaling policy that performs scaling actions based on a date and time3. Scheduled scaling is suitable for scenarios where the load
on the application changes periodically and predictably, such as on weekends2.
To meet the requirements of the scenario, the solutions architect should also use scheduled scaling to change the Auto Scaling group minimum, maximum, and
desired capacity to zero for weekends. This way, the Auto Scaling group will terminate all instances on weekends when they are not required to operate. The
solutions architect should also revert to the default values at the start of the week, so that the Auto Scaling group can resume normal operation.

NEW QUESTION 138


- (Topic 3)
A rapidly growing global ecommerce company is hosting its web application on AWS. The web application includes static content and dynamic content. The
website stores online transaction processing (OLTP) data in an Amazon RDS database. The website’s users are experiencing slow page loads.
Which combination of actions should a solutions architect take to resolve this issue? (Select TWO.)

A. Configure an Amazon Redshift cluster.


B. Set up an Amazon CloudFront distribution
C. Host the dynamic web content in Amazon S3
D. Create a read replica for the RDS DB instance.
E. Configure a Multi-AZ deployment for the RDS DB instance

Answer: BD

Explanation:
To resolve the issue of slow page loads for a rapidly growing e-commerce website hosted on AWS, a solutions architect can take the following two actions:
* 1. Set up an Amazon CloudFront distribution
* 2. Create a read replica for the RDS DB instance
Configuring an Amazon Redshift cluster is not relevant to this issue since Redshift is a data warehousing service and is typically used for the analytical processing
of large amounts of data.
Hosting the dynamic web content in Amazon S3 may not necessarily improve performance since S3 is an object storage service, not a web application server.
While S3 can be used to host static web content, it may not be suitable for hosting dynamic web content since S3 doesn't support server-side scripting or
processing.
Configuring a Multi-AZ deployment for the RDS DB instance will improve high availability but may not necessarily improve performance.

NEW QUESTION 140


- (Topic 3)
An Amazon EC2 instance is located in a private subnet in a new VPC. This subnet does not have outbound internet access, but the EC2 instance needs the ability
to download monthly security updates from an outside vendor.
What should a solutions architect do to meet these requirements?

A. Create an internet gateway, and attach it to the VPC. Configure the private subnet route table to use the internet gateway as the default route.
B. Create a NAT gateway, and place it in a public subnet. Configure the private subnet route table to use the NAT gateway as the default route.
C. Create a NAT instance, and place it in the same subnet where the EC2 instance is located. Configure the private subnet route table to use the NAT instance as the default route.
D. Create an internet gateway, and attach it to the VPC. Create a NAT instance, and place it in the same subnet where the EC2 instance is located. Configure the private subnet route table to use the internet gateway as the default route.

Answer: B

Explanation:
This approach will allow the EC2 instance to access the internet and download the monthly security updates while still being located in a private subnet. By
creating a NAT gateway and placing it in a public subnet, it will allow the instances in the private subnet to access the internet through the NAT gateway. And then,
configure the private subnet route table to use the NAT gateway as the default route. This will ensure that all outbound traffic is directed through the NAT gateway,
allowing the EC2 instance to access the internet while still maintaining the security of the private subnet.
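A short boto3 sketch of option B; the subnet and route table IDs are placeholders. It allocates an Elastic IP, creates the NAT gateway in a public subnet, and then points the private route table's default route at it.

```python
import boto3

ec2 = boto3.client("ec2")

eip = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0pub1234567890abc",        # public subnet (placeholder)
    AllocationId=eip["AllocationId"],
)
nat_id = nat["NatGateway"]["NatGatewayId"]

# Wait until the NAT gateway is available before adding the route.
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

ec2.create_route(
    RouteTableId="rtb-0priv1234567890abc",      # private subnet's route table (placeholder)
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_id,
)
```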

NEW QUESTION 142


- (Topic 3)
A company has a multi-tier application deployed on several Amazon EC2 instances in an Auto Scaling group. An Amazon RDS for Oracle instance is the
application’s data layer that uses Oracle-specific
PL/SQL functions. Traffic to the application has been steadily increasing. This is causing the EC2 instances to become overloaded and the RDS instance to run
out of storage. The Auto Scaling group does not have any scaling metrics and defines the minimum healthy instance count only. The company predicts that traffic
will continue to increase at a steady but unpredictable rate before levelling off.
What should a solutions architect do to ensure the system can automatically scale for the increased traffic? (Select TWO.)

A. Configure storage Auto Scaling on the RDS for Oracle Instance.


B. Migrate the database to Amazon Aurora to use Auto Scaling storage.
C. Configure an alarm on the RDS for Oracle Instance for low free storage space
D. Configure the Auto Scaling group to use the average CPU as the scaling metric
E. Configure the Auto Scaling group to use the average free memory as the scaling metric

Answer: AD

Explanation:
Storage auto scaling on the RDS for Oracle instance will ease the storage issues, and migrating the Oracle-specific PL/SQL functions to Aurora would be cumbersome (Aurora also has automatic storage scaling by default). Scaling the Auto Scaling group on average CPU utilization addresses the overloaded EC2 instances.
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_PIOPS.StorageTypes.html#USER_PIOPS.Autoscaling
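For option A, RDS storage auto scaling is enabled by setting a maximum storage threshold on the instance. A hedged boto3 sketch follows; the DB instance identifier and the 1,000 GiB limit are placeholders.

```python
import boto3

rds = boto3.client("rds")

# Setting MaxAllocatedStorage enables storage auto scaling up to 1,000 GiB.
rds.modify_db_instance(
    DBInstanceIdentifier="oltp-oracle-prod",  # placeholder identifier
    MaxAllocatedStorage=1000,
    ApplyImmediately=True,
)
```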

NEW QUESTION 144


- (Topic 3)
A company has an application that runs on several Amazon EC2 instances. Each EC2 instance has multiple Amazon Elastic Block Store (Amazon EBS) data volumes attached to it. The application's EC2 instance configuration and data need to be backed up nightly. The application also needs to be recoverable in a different AWS Region.
Which solution will meet these requirements in the MOST operationally efficient way?

A. Write an AWS Lambda function that schedules nightly snapshots of the application's EBS volumes and copies the snapshots to a different Region
B. Create a backup plan by using AWS Backup to perform nightly backups. Copy the backups to another Region. Add the application's EC2 instances as resources.
C. Create a backup plan by using AWS Backup to perform nightly backups. Copy the backups to another Region. Add the application's EBS volumes as resources.
D. Write an AWS Lambda function that schedules nightly snapshots of the application's EBS volumes and copies the snapshots to a different Availability Zone

Answer: B

Explanation:
The most operationally efficient solution to meet these requirements would be to create a backup plan by using AWS Backup to perform nightly backups and
copying the backups to another Region. Adding the application's EC2 instances as resources will ensure that the application's EC2 instance configuration and data
are backed up, and copying the backups to another Region will ensure that the application is recoverable in a different AWS Region.

NEW QUESTION 149


- (Topic 3)
A company is experiencing sudden increases in demand. The company needs to provision large Amazon EC2 instances from an Amazon Machine Image (AMI). The instances will run in an Auto Scaling group. The company needs a solution that provides minimum initialization latency to meet the demand.
Which solution meets these requirements?

A. Use the aws ec2 register-image command to create an AMI from a snapshot Use AWS Step Functions to replace the AMI in the Auto Scaling group
B. Enable Amazon Elastic Block Store (Amazon EBS) fast snapshot restore on a snapshot. Provision an AMI by using the snapshot. Replace the AMI in the Auto Scaling group with the new AMI.
C. Enable AMI creation and define lifecycle rules in Amazon Data Lifecycle Manager (Amazon DLM) Create an AWS Lambda function that modifies the AMI in the
Auto Scaling group
D. Use Amazon EventBridge (Amazon CloudWatch Events) to invoke AWS Backup lifecycle policies that provision AMIs Configure Auto Scaling group capacity
limits as an event source in EventBridge

Answer: B

Explanation:
Enabling Amazon Elastic Block Store (Amazon EBS) fast snapshot restore on a snapshot allows you to quickly create a new Amazon Machine Image (AMI) from a
snapshot, which can help reduce the initialization latency when provisioning new instances. Once the AMI is provisioned, you can replace the AMI in the Auto
Scaling group with the new AMI. This will ensure that new instances are launched from the updated AMI and are able to meet the increased demand quickly.
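A brief sketch of option B's first step; the snapshot ID and Availability Zones are placeholders. Enabling EBS fast snapshot restore means volumes created from the snapshot, including the root volumes of instances launched from the AMI, are fully initialized at creation, which reduces first-boot latency.

```python
import boto3

ec2 = boto3.client("ec2")

ec2.enable_fast_snapshot_restores(
    AvailabilityZones=["us-east-1a", "us-east-1b"],   # placeholder AZs
    SourceSnapshotIds=["snap-0123456789abcdef0"],     # placeholder snapshot ID
)
```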

NEW QUESTION 153


- (Topic 3)
A solutions architect needs to design a new microservice for a company's application. Clients must be able to call an HTTPS endpoint to reach the microservice. The microservice also must use AWS Identity and Access Management (IAM) to authenticate calls. The solutions architect will write the logic for this microservice by using a single AWS Lambda function that is written in Go 1.x.
Which solution will deploy the function in the MOST operationally efficient way?

A. Create an Amazon API Gateway REST API. Configure the method to use the Lambda function. Enable IAM authentication on the API.
B. Create a Lambda function URL for the function. Specify AWS_IAM as the authentication type.
C. Create an Amazon CloudFront distribution. Deploy the function to Lambda@Edge. Integrate IAM authentication logic into the Lambda@Edge function.
D. Create an Amazon CloudFront distribution. Deploy the function to CloudFront Functions. Specify AWS_IAM as the authentication type.

Answer: A

Explanation:
A. Create an Amazon API Gateway REST API. Configure the method to use the Lambda function. Enable IAM authentication on the API. This option is the most
operationally efficient as it allows you to use API Gateway to handle the HTTPS endpoint and also allows you to use IAM to authenticate the calls to the
microservice. API Gateway also provides many additional features such as caching, throttling, and monitoring, which can be useful for a microservice.
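A compressed boto3 sketch of option A; the API name, Region, account number, and Lambda ARN are placeholders. It creates a REST API, requires IAM (SigV4-signed) calls on the method, and proxies the method to the Lambda function.

```python
import boto3

apigw = boto3.client("apigateway")

api = apigw.create_rest_api(name="quotes-microservice")  # placeholder name
root_id = next(
    r["id"] for r in apigw.get_resources(restApiId=api["id"])["items"] if r["path"] == "/"
)

# Require IAM authentication on POST /.
apigw.put_method(
    restApiId=api["id"],
    resourceId=root_id,
    httpMethod="POST",
    authorizationType="AWS_IAM",
)

lambda_arn = "arn:aws:lambda:us-east-1:123456789012:function:quotes-handler"  # placeholder
apigw.put_integration(
    restApiId=api["id"],
    resourceId=root_id,
    httpMethod="POST",
    type="AWS_PROXY",
    integrationHttpMethod="POST",
    uri=f"arn:aws:apigateway:us-east-1:lambda:path/2015-03-31/functions/{lambda_arn}/invocations",
)
```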

NEW QUESTION 154


- (Topic 3)
An IAM user made several configuration changes to AWS resources in their company's account during a production deployment last week. A solutions architect
learned that a couple of security group rules are not configured as desired. The solutions architect wants to confirm which IAM user was responsible for making
changes.
Which service should the solutions architect use to find the desired information?

A. Amazon GuardDuty
B. Amazon Inspector
C. AWS CloudTrail
D. AWS Config

Answer: C

Explanation:
The best option is to use AWS CloudTrail to find the desired information. AWS CloudTrail is a service that enables governance, compliance, operational auditing,
and risk auditing of AWS account activities. CloudTrail can be used to log all changes made to resources in an AWS account, including changes made by IAM
users, EC2 instances, AWS management console, and other AWS services. By using CloudTrail, the solutions architect can identify the IAM user who made the
configuration changes to the security group rules.

NEW QUESTION 157


- (Topic 3)
A company is using AWS to design a web application that will process insurance quotes. Users will request quotes from the application. Quotes must be separated by quote type, must be responded to within 24 hours, and must not get lost. The solution must maximize operational efficiency and must minimize maintenance.
Which solution meets these requirements?

A. Create multiple Amazon Kinesis data streams based on the quote type. Configure the web application to send messages to the proper data stream. Configure each backend group of application servers to use the Kinesis Client Library (KCL) to poll messages from its own data stream.
B. Create an AWS Lambda function and an Amazon Simple Notification Service (Amazon SNS) topic for each quote type. Subscribe the Lambda function to its associated SNS topic. Configure the application to publish requests for quotes to the appropriate SNS topic.
C. Create a single Amazon Simple Notification Service (Amazon SNS) topic. Subscribe Amazon Simple Queue Service (Amazon SQS) queues to the SNS topic. Configure SNS message filtering to publish messages to the proper SQS queue based on the quote type. Configure each backend application server to use its own SQS queue.
D. Create multiple Amazon Kinesis Data Firehose delivery streams based on the quote type to deliver data streams to an Amazon Elasticsearch Service (Amazon ES) cluster. Configure the application to send messages to the proper delivery stream. Configure each backend group of application servers to search for the messages from Amazon ES and process them accordingly.

Answer: C

Explanation:
https://aws.amazon.com/getting-started/hands-on/filter-messages-published-to-topics/
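A minimal sketch of option C's fan-out with message filtering; the topic name, queue ARN, and the quote_type message attribute are placeholders. Each quote-type queue subscribes to the single topic with a filter policy, and producers publish with the attribute that the policy matches on.

```python
import json
import boto3

sns = boto3.client("sns")

topic_arn = sns.create_topic(Name="insurance-quotes")["TopicArn"]
auto_queue_arn = "arn:aws:sqs:us-east-1:123456789012:auto-quotes"  # placeholder queue ARN

# Only messages whose quote_type attribute is "auto" reach this queue.
sns.subscribe(
    TopicArn=topic_arn,
    Protocol="sqs",
    Endpoint=auto_queue_arn,
    Attributes={"FilterPolicy": json.dumps({"quote_type": ["auto"]})},
)

# Producers publish a quote request tagged with its quote type.
sns.publish(
    TopicArn=topic_arn,
    Message=json.dumps({"customer": "123", "details": "..."}),
    MessageAttributes={"quote_type": {"DataType": "String", "StringValue": "auto"}},
)
```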

NEW QUESTION 162


- (Topic 3)
A company has a custom application with embedded credentials that retrieves information from an Amazon RDS MySQL DB instance. Management says the
application must be made more secure with the least amount of programming effort.
What should a solutions architect do to meet these requirements?

A. Use AWS Key Management Service (AWS KMS) customer master keys (CMKs) to create keys. Configure the application to load the database credentials from AWS KMS. Enable automatic key rotation.
B. Create credentials on the RDS for MySQL database for the application user and store the credentials in AWS Secrets Manager. Configure the application to load the database credentials from Secrets Manager. Create an AWS Lambda function that rotates the credentials in Secrets Manager.
C. Create credentials on the RDS for MySQL database for the application user and store the credentials in AWS Secrets Manager. Configure the application to load the database credentials from Secrets Manager. Set up a credentials rotation schedule for the application user in the RDS for MySQL database using Secrets Manager.
D. Create credentials on the RDS for MySQL database for the application user and store the credentials in AWS Systems Manager Parameter Store. Configure the application to load the database credentials from Parameter Store. Set up a credentials rotation schedule for the application user in the RDS for MySQL database using Parameter Store.

Answer: C

Explanation:
https://aws.amazon.com/blogs/security/rotate-amazon-rds-database-credentials-automatically-with-aws-secrets-manager/
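A hedged boto3 sketch of option C; the secret name, credential values, and the rotation Lambda ARN are placeholders (Secrets Manager's managed rotation function for RDS MySQL would normally supply the Lambda). It stores the credentials and then schedules automatic rotation.

```python
import json
import boto3

sm = boto3.client("secretsmanager")

secret = sm.create_secret(
    Name="prod/app/mysql",  # placeholder secret name
    SecretString=json.dumps({
        "username": "app_user",
        "password": "initial-password",
        "host": "mydb.cluster-xyz.us-east-1.rds.amazonaws.com",
        "port": 3306,
    }),
)

# Rotate the credentials every 30 days with a rotation Lambda (placeholder ARN).
sm.rotate_secret(
    SecretId=secret["ARN"],
    RotationLambdaARN="arn:aws:lambda:us-east-1:123456789012:function:SecretsManagerMySQLRotation",
    RotationRules={"AutomaticallyAfterDays": 30},
)
```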

NEW QUESTION 164


- (Topic 3)
A company must migrate 20 TB of data from a data center to the AWS Cloud within 30 days. The company's network bandwidth is limited to 15 Mbps and cannot
exceed 70% utilization. What should a solutions architect do to meet these requirements?

A. Use AWS Snowball.


B. Use AWS DataSync.
C. Use a secure VPN connection.
D. Use Amazon S3 Transfer Acceleration.

Answer: A

Explanation:
AWS Snowball is a secure, offline data transport solution that accelerates moving large amounts of data into and out of the AWS Cloud. A single device can hold tens of terabytes, so the 20 TB fits on one device and never has to traverse the limited internet link. At 70 percent of 15 Mbps, moving 20 TB over the network would take roughly six months, far beyond the 30-day deadline, which rules out DataSync, a VPN, and S3 Transfer Acceleration.
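A back-of-the-envelope check of why the network path is ruled out, assuming 1 TB = 10^12 bytes and the stated 15 Mbps link capped at 70 percent utilization:

```python
# Effective throughput: 15 Mbps * 70% = 10.5 Mbps
data_bits = 20 * 10**12 * 8          # 20 TB expressed in bits
throughput_bps = 15_000_000 * 0.70   # usable bits per second

days = data_bits / throughput_bps / 86_400
print(f"~{days:.0f} days")           # roughly 176 days, far beyond the 30-day window
```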

NEW QUESTION 167


- (Topic 3)
A company wants to create an application to store employee data in a hierarchical structured relationship. The company needs a minimum-latency response to high-traffic queries for the employee data and must protect any sensitive data. The company also needs to receive monthly email messages if any financial information is present in the employee data.
Which combination of steps should a solutions architect take to meet these requirements? (Select TWO.)

A. Use Amazon Redshift to store the employee data in hierarchies. Unload the data to Amazon S3 every month.
B. Use Amazon DynamoDB to store the employee data in hierarchies. Export the data to Amazon S3 every month.
C. Configure Amazon Macie for the AWS account. Integrate Macie with Amazon EventBridge to send monthly events to AWS Lambda.
D. Use Amazon Athena to analyze the employee data in Amazon S3. Integrate Athena with Amazon QuickSight to publish analysis dashboards and share the dashboards with users.
E. Configure Amazon Macie for the AWS account. Integrate Macie with Amazon EventBridge to send monthly notifications through an Amazon Simple Notification Service (Amazon SNS) subscription.

Answer: BE

Explanation:
https://docs.aws.amazon.com/prescriptive-guidance/latest/dynamodb-hierarchical-data-model/introduction.html

NEW QUESTION 172


- (Topic 3)
A company has a regional subscription-based streaming service that runs in a single AWS Region. The architecture consists of web servers and application
servers on Amazon EC2 instances. The EC2 instances are in Auto Scaling groups behind Elastic Load Balancers. The architecture includes an Amazon Aurora
database cluster that extends across multiple Availability Zones.
The company wants to expand globally and to ensure that its application has minimal downtime.
Which solution will meet these requirements?

A. Extend the Auto Scaling groups for the web tier and the application tier to deploy instances in Availability Zones in a second Region. Use an Aurora global database to deploy the database in the primary Region and the second Region. Use Amazon Route 53 health checks with a failover routing policy to the second Region.
B. Deploy the web tier and the application tier to a second Region. Add an Aurora PostgreSQL cross-Region Aurora Replica in the second Region. Use Amazon Route 53 health checks with a failover routing policy to the second Region. Promote the secondary to primary as needed.
C. Deploy the web tier and the application tier to a second Region. Create an Aurora PostgreSQL database in the second Region. Use AWS Database Migration Service (AWS DMS) to replicate the primary database to the second Region. Use Amazon Route 53 health checks with a failover routing policy to the second Region.
D. Deploy the web tier and the application tier to a second Region. Use an Amazon Aurora global database to deploy the database in the primary Region and the second Region. Use Amazon Route 53 health checks with a failover routing policy to the second Region. Promote the secondary to primary as needed.

Answer: D

Explanation:
This option is the most efficient because it deploys the web tier and the application tier to a second Region, which provides high availability and redundancy for the
application. It also uses an Amazon Aurora global database, which is a feature that allows a single Aurora database to span multiple AWS Regions1. It also
deploys the database in the primary Region and the second Region, which provides low latency global reads and fast recovery from a Regional outage. It also
uses Amazon Route 53 health checks with a failover routing policy to the second Region, which provides data protection by routing traffic to healthy endpoints in
different Regions2. It also promotes the secondary to primary as needed, which provides data consistency by allowing write operations in one of the Regions at a
time3. This solution meets the requirement of expanding globally and ensuring that its application has minimal downtime. Option A is less efficient because it
extends the Auto Scaling groups for the web tier and the application tier to deploy instances in Availability Zones in a second Region, which could incur higher
costs and complexity than deploying them separately. It also uses an Aurora global database to deploy the database in the primary Region and the second
Region, which is correct. However, it does not use Amazon Route 53 health checks with a failover routing policy to the second Region, which could result in traffic
being routed to unhealthy endpoints. Option B is less efficient because it deploys the web tier and the application tier to a second Region, which is correct. It also
adds an Aurora PostgreSQL cross-Region Aurora Replica in the second Region, which provides read scalability across Regions. However, it does not use an
Aurora global database, which provides faster replication and recovery than cross-Region replicas. It also uses Amazon Route 53 health checks with a failover
routing policy to the second Region, which is correct. However, it does not promote the secondary to primary as needed, which could result in data inconsistency
or loss. Option C is less efficient because it deploys the web tier and the application tier to a second Region, which is correct. It also creates an Aurora
PostgreSQL database in the second Region, which provides data redundancy across Regions. However, it does not use an Aurora global database or cross-
Region replicas, which provide faster replication and recovery than creating separate databases. It also uses AWS Database Migration Service (AWS DMS) to
replicate the primary database to the second Region, which provides data migration between different sources and targets. However, it does not use an Aurora
global database or cross-Region replicas, which provide faster replication and recovery than using AWS DMS. It also uses Amazon Route 53 health checks with a
failover routing policy to the second Region, which is correct.

NEW QUESTION 175


- (Topic 3)
A rapidly growing ecommerce company is running its workloads in a single AWS Region. A solutions architect must create a disaster recovery (DR) strategy that includes a different AWS Region. The company wants its database to be up to date in the DR Region with the least possible latency. The remaining infrastructure in the DR Region needs to run at reduced capacity and must be able to scale up if necessary.
Which solution will meet these requirements with the LOWEST recovery time objective (RTO)?

A. Use an Amazon Aurora global database with a pilot light deployment


B. Use an Amazon Aurora global database with a warm standby deployment
C. Use an Amazon RDS Multi-AZ DB instance with a pilot light deployment
D. Use an Amazon RDS Multi-AZ DB instance with a warm standby deployment

Answer: B

Explanation:
https://docs.aws.amazon.com/whitepapers/latest/disaster-recovery-workloads-on-aws/disaster-recovery-options-in-the-cloud.html

NEW QUESTION 179


- (Topic 3)
A company is deploying a new application on Amazon EC2 instances. The application writes data to Amazon Elastic Block Store (Amazon EBS) volumes. The
company needs to ensure that all data that is written to the EBS volumes is encrypted at rest.
Which solution will meet this requirement?

A. Create an IAM role that specifies EBS encryption. Attach the role to the EC2 instances.
B. Create the EBS volumes as encrypted volumes. Attach the EBS volumes to the EC2 instances.
C. Create an EC2 instance tag that has a key of Encrypt and a value of True. Tag all instances that require encryption at the EBS level.
D. Create an AWS Key Management Service (AWS KMS) key policy that enforces EBS encryption in the account. Ensure that the key policy is active.

Answer: B

Explanation:
The solution that will meet the requirement of ensuring that all data that is written to the EBS volumes is encrypted at rest is B. Create the EBS volumes as
encrypted volumes and attach the encrypted EBS volumes to the EC2 instances. When you create an EBS volume, you can specify whether to encrypt the
volume. If you choose to encrypt the volume, all data written to the volume is automatically encrypted at rest using AWS-managed keys. You can also use
customer-managed keys (CMKs) stored in AWS KMS to encrypt and protect your EBS volumes. You can create encrypted EBS volumes and attach them to EC2
instances to ensure that all data written to the volumes is encrypted at rest.

NEW QUESTION 184


- (Topic 3)
A company is running a publicly accessible serverless application that uses Amazon API Gateway and AWS Lambda. The application's traffic recently spiked due
to fraudulent requests from botnets.
Which steps should a solutions architect take to block requests from unauthorized users? (Select TWO.)

A. Create a usage plan with an API key that is shared with genuine users only.
B. Integrate logic within the Lambda function to ignore the requests from fraudulent IP addresses.
C. Implement an AWS WAF rule to target malicious requests and trigger actions to filter them out.
D. Convert the existing public API to a private API. Update the DNS records to redirect users to the new API endpoint.
E. Create an IAM role for each user attempting to access the API. A user will assume the role when making the API call.

Answer: AC

Explanation:
https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-api-usage-plans.html#:~:text=Don%27t%20rely%20on%20API%20keys%20as%20your%20only%20means%20of%20authentication%20and%20authorization%20for%20your%20APIs
https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-api-usage-plans.html
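For the AWS WAF part of the answer, a minimal wafv2 sketch might look like the following; the web ACL name, rate limit, and API stage ARN are assumptions for illustration only:

```
# Hypothetical sketch: a rate-based AWS WAF (wafv2) rule that blocks IPs sending
# excessive requests, associated with a REST API stage. Names/ARNs are placeholders.
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

acl = wafv2.create_web_acl(
    Name="api-botnet-protection",
    Scope="REGIONAL",                 # REGIONAL scope is used for API Gateway
    DefaultAction={"Allow": {}},
    Rules=[{
        "Name": "rate-limit-per-ip",
        "Priority": 0,
        "Statement": {"RateBasedStatement": {"Limit": 1000, "AggregateKeyType": "IP"}},
        "Action": {"Block": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "RateLimitPerIp",
        },
    }],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "ApiBotnetProtection",
    },
)

wafv2.associate_web_acl(
    WebACLArn=acl["Summary"]["ARN"],
    ResourceArn="arn:aws:apigateway:us-east-1::/restapis/abc123/stages/prod",  # placeholder
)
```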

NEW QUESTION 186


- (Topic 3)
A company has a web server running on an Amazon EC2 instance in a public subnet with an Elastic IP address. The default security group is assigned to the EC2
instance. The default network ACL has been modified to block all traffic. A solutions architect needs to make the web server accessible from everywhere on port
443.
Which combination of steps will accomplish this task? (Choose two.)


A. Create a security group with a rule to allow TCP port 443 from source 0.0.0.0/0.
B. Create a security group with a rule to allow TCP port 443 to destination 0.0.0.0/0.
C. Update the network ACL to allow TCP port 443 from source 0.0.0.0/0.
D. Update the network ACL to allow inbound/outbound TCP port 443 from source 0.0.0.0/0 and to destination 0.0.0.0/0.
E. Update the network ACL to allow inbound TCP port 443 from source 0.0.0.0/0 and outbound TCP port 32768-65535 to destination 0.0.0.0/0.

Answer: AC

Explanation:
The combination of steps that will accomplish the task of making the web server accessible from everywhere on port 443 is to create a security group with a rule
to allow TCP port 443 from source 0.0.0.0/0 (A) and to update the network ACL to allow inbound TCP port 443 from source 0.0.0.0/0 (C). This will ensure that
traffic to port 443 is allowed both at the security group level and at the network ACL level, which will make the web server accessible from everywhere on port 443.
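A minimal boto3 sketch of those two steps follows; the VPC and network ACL IDs are placeholders. Because network ACLs are stateless, an outbound ephemeral-port rule (as option E hints) is also usually required for return traffic, so the sketch includes one:

```
# Rough sketch with placeholder IDs: allow HTTPS in a new security group and add
# matching rules to the (currently deny-all) network ACL.
import boto3

ec2 = boto3.client("ec2")

sg = ec2.create_security_group(
    GroupName="web-https", Description="Allow HTTPS", VpcId="vpc-0abc1234"  # placeholder VPC
)
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)

acl_id = "acl-0abc1234"  # placeholder network ACL ID

# Inbound HTTPS from anywhere.
ec2.create_network_acl_entry(
    NetworkAclId=acl_id, RuleNumber=100, Protocol="6", RuleAction="allow",
    Egress=False, CidrBlock="0.0.0.0/0", PortRange={"From": 443, "To": 443},
)
# Network ACLs are stateless, so return traffic needs an outbound ephemeral-port rule.
ec2.create_network_acl_entry(
    NetworkAclId=acl_id, RuleNumber=100, Protocol="6", RuleAction="allow",
    Egress=True, CidrBlock="0.0.0.0/0", PortRange={"From": 1024, "To": 65535},
)
```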

NEW QUESTION 187


- (Topic 3)
A solutions architect needs to design a system to store client case files. The files are core company assets and are important. The number of files will grow over
time.
The files must be simultaneously accessible from multiple application servers that run on Amazon EC2 instances. The solution must have built-in redundancy.
Which solution meets these requirements?

A. Amazon Elastic File System (Amazon EFS)


B. Amazon Elastic Block Store (Amazon EBS)
C. Amazon S3 Glacier Deep Archive
D. AWS Backup

Answer: A

Explanation:
Amazon EFS provides a simple, scalable, fully managed file system that can be simultaneously accessed from multiple EC2 instances and provides built-in
redundancy. It is optimized for multiple EC2 instances to access the same files, and it is designed to be highly available, durable, and secure. It can scale up to
petabytes of data and can handle thousands of concurrent connections, and is a cost-effective solution for storing and accessing large amounts of data.
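As an illustrative sketch (the subnet and security group IDs are placeholders), an EFS file system plus a mount target is all that is needed before the EC2 instances can mount the same share:

```
# Illustrative sketch: create an EFS file system and a mount target so multiple
# EC2 instances in the subnet can mount the same files.
import boto3

efs = boto3.client("efs")

fs = efs.create_file_system(
    CreationToken="case-files",       # idempotency token
    PerformanceMode="generalPurpose",
    Encrypted=True,
)

efs.create_mount_target(
    FileSystemId=fs["FileSystemId"],
    SubnetId="subnet-0abc1234",        # placeholder subnet
    SecurityGroups=["sg-0abc1234"],    # must allow NFS (TCP 2049) from the app servers
)

# On each EC2 instance (with amazon-efs-utils installed), something like:
#   sudo mount -t efs <FileSystemId>:/ /mnt/case-files
```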

NEW QUESTION 189


- (Topic 3)
A company wants to configure its Amazon CloudFront distribution to use SSL/TLS certificates. The company does not want to use the default domain name for the
distribution. Instead, the company wants to use a different domain name for the distribution.
Which solution will deploy the certificate without incurring any additional costs?

A. Request an Amazon issued private certificate from AWS Certificate Manager (ACM) in the us-east-1 Region
B. Request an Amazon issued private certificate from AWS Certificate Manager (ACM) in the us-west-1 Region.
C. Request an Amazon issued public certificate from AWS Certificate Manager (ACM) in the us-east-1 Region.
D. Request an Amazon issued public certificate from AWS Certificate Manager (ACM) in the us-west-1 Region.

Answer: C

Explanation:
This option is the most efficient because it requests an Amazon issued public certificate from AWS Certificate Manager (ACM), which is a service that lets you
easily provision, manage, and deploy public and private SSL/TLS certificates for use with AWS services and your internal connected resources. It also requests
the certificate in the us-east-1 Region, which is required for using an ACM certificate with CloudFront. It also meets the requirement of deploying the certificate
without incurring any additional costs, as ACM does not charge for certificates that are used with supported AWS services. This solution meets the requirement of
configuring its CloudFront distribution to use SSL/TLS certificates and using a different domain name for the distribution. Option A is less efficient because it
requests an Amazon issued private certificate from ACM, which is a type of certificate that can be used only within your organization or virtual private cloud (VPC).
However, this does not meet the requirement of configuring its CloudFront distribution to use SSL/TLS certificates, as CloudFront requires a public certificate. It
also requests the certificate in the us-east-1 Region, which is correct. Option B is less efficient because it requests an Amazon issued private certificate from ACM,
which is incorrect for the same reason as option A. It also requests the certificate in the us-west-1 Region, which is incorrect as CloudFront requires a certificate in
the us-east-1 Region. Option D is less efficient because it requests an Amazon issued public certificate from ACM, which is correct. However, it requests the
certificate in the us-west-1 Region, which is incorrect as CloudFront requires a certificate in the us-east-1 Region.
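A minimal sketch of option C with boto3 follows; the domain names are placeholders. The certificate must be requested through an ACM client in us-east-1 and, once validated, its ARN is referenced in the distribution's viewer certificate settings:

```
# Hedged sketch: request a free public certificate in us-east-1 (required for
# CloudFront) for a hypothetical alternate domain name.
import boto3

acm = boto3.client("acm", region_name="us-east-1")  # CloudFront only uses certs from us-east-1

cert = acm.request_certificate(
    DomainName="www.example.com",            # placeholder alternate domain name
    ValidationMethod="DNS",                  # prove ownership via a CNAME record
    SubjectAlternativeNames=["example.com"],
)
print(cert["CertificateArn"])  # reference this ARN in the distribution's ViewerCertificate
```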

NEW QUESTION 194


- (Topic 3)
A meteorological startup company has a custom web application to sell weather data to its users online. The company uses Amazon DynamoDB to store its data
and wants to build a new service that sends an alert to the managers of four internal teams every time a new weather event is recorded. The company does not
want the new service to affect the performance of the current application.
What should a solutions architect do to meet these requirements with the LEAST amount of operational overhead?

A. Use DynamoDB transactions to write new event data to the table. Configure the transactions to notify internal teams.
B. Have the current application publish a message to four Amazon Simple Notification Service (Amazon SNS) topics. Have each team subscribe to one topic.
C. Enable Amazon DynamoDB Streams on the table. Use triggers to write to a single Amazon Simple Notification Service (Amazon SNS) topic to which the teams can subscribe.
D. Add a custom attribute to each record to flag new items. Write a cron job that scans the table every minute for items that are new and notifies an Amazon Simple Queue Service (Amazon SQS) queue to which the teams can subscribe.

Answer: C

Explanation:
The best solution to meet these requirements with the least amount of operational overhead is to enable Amazon DynamoDB Streams on the table and use
triggers to write to a single Amazon Simple Notification Service (Amazon SNS) topic to which the teams can subscribe. This solution requires minimal configuration
and infrastructure setup, and Amazon DynamoDB Streams provide a low-latency way to capture changes to the DynamoDB table. The triggers automatically capture the changes and publish them to the SNS topic, which notifies the internal teams.
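A hypothetical Lambda handler for such a stream trigger could look like the following; the environment variable name and topic ARN are assumptions, not values from the question:

```
# Hypothetical Lambda handler wired to the table's DynamoDB stream; it forwards
# newly recorded weather events to one SNS topic that the four teams subscribe to.
import json
import os
import boto3

sns = boto3.client("sns")
TOPIC_ARN = os.environ["ALERT_TOPIC_ARN"]  # placeholder environment variable

def handler(event, context):
    for record in event["Records"]:
        if record["eventName"] != "INSERT":
            continue  # only alert on newly recorded weather events
        new_item = record["dynamodb"]["NewImage"]
        sns.publish(
            TopicArn=TOPIC_ARN,
            Subject="New weather event recorded",
            Message=json.dumps(new_item),
        )
```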

NEW QUESTION 197


- (Topic 3)
A company is launching a new application deployed on an Amazon Elastic Container Service (Amazon ECS) cluster and is using the Fargate launch type for ECS
tasks. The company is monitoring CPU and memory usage because it is expecting high traffic to the application upon its launch. However, the company wants to
reduce costs when utilization decreases.
What should a solutions architect recommend?

A. Use Amazon EC2 Auto Scaling to scale at certain periods based on previous traffic patterns
B. Use an AWS Lambda function to scale Amazon ECS based on metric breaches that trigger an Amazon CloudWatch alarm
C. Use Amazon EC2 Auto Scaling with simple scaling policies to scale when ECS metric breaches trigger an Amazon CloudWatch alarm
D. Use AWS Application Auto Scaling with target tracking policies to scale when ECS metric breaches trigger an Amazon CloudWatch alarm

Answer: D

Explanation:
https://docs.aws.amazon.com/autoscaling/application/userguide/what-is-application-auto- scaling.html
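As a hedged sketch of option D (the cluster and service names are placeholders), Application Auto Scaling registers the ECS service's desired count as a scalable target and attaches a target tracking policy on average CPU:

```
# Hedged sketch: target tracking on the Fargate service's average CPU so it
# scales out under load and scales back in when utilization drops.
import boto3

aas = boto3.client("application-autoscaling")
resource_id = "service/prod-cluster/web-service"   # format: service/<cluster>/<service>

aas.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=2,
    MaxCapacity=20,
)

aas.put_scaling_policy(
    PolicyName="cpu-target-tracking",
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,  # keep average CPU near 70%
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
        "ScaleInCooldown": 60,
        "ScaleOutCooldown": 60,
    },
)
```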

NEW QUESTION 201


- (Topic 3)
A company needs to ingest and handle large amounts of streaming data that its application generates. The application runs on Amazon EC2 instances and
sends data to Amazon Kinesis Data Streams, which is configured with default settings. Every other day, the application consumes the data and writes the data to an
Amazon S3 bucket for business intelligence (BI) processing. The company observes that Amazon S3 is not receiving all the data that the application sends to
Kinesis Data Streams.
What should a solutions architect do to resolve this issue?

A. Update the Kinesis Data Streams default settings by modifying the data retention period.
B. Update the application to use the Kinesis Producer Library (KPL) to send the data to Kinesis Data Streams.
C. Update the number of Kinesis shards to handle the throughput of the data that is sent to Kinesis Data Streams.
D. Turn on S3 Versioning within the S3 bucket to preserve every version of every object that is ingested in the S3 bucket.

Answer: A

Explanation:
The data retention period of a Kinesis data stream is the time period from when a record is added to when it is no longer accessible. The default retention period
for a Kinesis data stream is 24 hours, which can be extended up to 8760 hours (365 days). The data retention period can be updated by using the AWS
Management Console, the AWS CLI, or the Kinesis Data Streams API.
To meet the requirements of the scenario, the solutions architect should update the Kinesis Data Streams default settings by modifying the data retention period.
The solutions architect should increase the retention period to a value that is greater than or equal to the frequency of consuming the data and writing it to S3.
This way, the company can ensure that S3 receives all the data that the application sends to Kinesis Data Streams.
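A minimal sketch of that change with boto3 follows; the stream name is a placeholder, and 72 hours is just one value that comfortably covers the every-other-day consumer:

```
# Minimal sketch: extend retention beyond the 24-hour default so records survive
# until the every-other-day consumer runs.
import boto3

kinesis = boto3.client("kinesis")

kinesis.increase_stream_retention_period(
    StreamName="app-stream",        # placeholder stream name
    RetentionPeriodHours=72,        # greater than the 48 hours between consumer runs
)

summary = kinesis.describe_stream_summary(StreamName="app-stream")
print(summary["StreamDescriptionSummary"]["RetentionPeriodHours"])
```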

NEW QUESTION 203


- (Topic 3)
A development team has launched a new application that is hosted on Amazon EC2 instances inside a development VPC. A solutions architect needs to create a
new VPC in the same account. The new VPC will be peered with the development VPC. The VPC CIDR block for the development VPC is 192.168.0.0/24. The
solutions architect needs to create a CIDR block for the new VPC. The CIDR block must be valid for a VPC peering connection to the development VPC.
What is the SMALLEST CIDR block that meets these requirements?

A. 10.0.1.0/32
B. 192.168.0.0/24
C. 192.168.1.0/32
D. 10.0.1.0/24

Answer: D

Explanation:
The allowed block size is between a /28 netmask and /16 netmask. The CIDR block must not overlap with any existing CIDR block that's associated with the VPC.
https://docs.aws.amazon.com/vpc/latest/userguide/configure-your-vpc.html
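For illustration, a boto3 sketch of creating the new VPC with 10.0.1.0/24 and peering it with the development VPC might look like this; the development VPC ID is a placeholder:

```
# Illustrative sketch: the new VPC uses 10.0.1.0/24 (non-overlapping, within the
# /28 to /16 limit), then peers with the development VPC.
import boto3

ec2 = boto3.client("ec2")

new_vpc = ec2.create_vpc(CidrBlock="10.0.1.0/24")
new_vpc_id = new_vpc["Vpc"]["VpcId"]

peering = ec2.create_vpc_peering_connection(
    VpcId=new_vpc_id,
    PeerVpcId="vpc-0dev123456789",   # placeholder: development VPC (192.168.0.0/24)
)

# Same-account, same-Region peering still has to be accepted explicitly.
ec2.accept_vpc_peering_connection(
    VpcPeeringConnectionId=peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]
)
```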

NEW QUESTION 206


- (Topic 3)
A company runs an application on a large fleet of Amazon EC2 instances. The application reads and writes entries in an Amazon DynamoDB table. The size of
the DynamoDB table continuously grows, but the application needs only data from the last 30 days. The company needs a solution that minimizes cost and
development effort.
Which solution meets these requirements?

A. Use an AWS CloudFormation template to deploy the complete solution. Redeploy the CloudFormation stack every 30 days, and delete the original stack.
B. Use an EC2 instance that runs a monitoring application from AWS Marketplace. Configure the monitoring application to use Amazon DynamoDB Streams to store the timestamp when a new item is created in the table. Use a script that runs on the EC2 instance to delete items that have a timestamp that is older than 30 days.
C. Configure Amazon DynamoDB Streams to invoke an AWS Lambda function when a new item is created in the table. Configure the Lambda function to delete items in the table that are older than 30 days.
D. Extend the application to add an attribute that has a value of the current timestamp plus 30 days to each new item that is created in the table. Configure DynamoDB to use the attribute as the TTL attribute.

Answer: D


Explanation:
Amazon DynamoDB Time to Live (TTL) allows you to define a per-item timestamp to determine when an item is no longer needed. Shortly after the date and time
of the specified timestamp, DynamoDB deletes the item from your table without consuming any write throughput. TTL is provided at no extra cost as a means to
reduce stored data volumes by retaining only the items that remain current for your workload’s needs.
TTL is useful if you store items that lose relevance after a specific time. The following are example TTL use cases:
Remove user or sensor data after one year of inactivity in an application.
Archive expired items to an Amazon S3 data lake via Amazon DynamoDB Streams and AWS Lambda.
Retain sensitive data for a certain amount of time according to contractual or regulatory obligations.
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/TTL.html
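A short boto3 sketch of option D follows; the table name and attribute name are placeholders:

```
# Hedged sketch: enable TTL on the table and stamp each new item with an expiry
# 30 days out, stored as an epoch-seconds number.
import time
import boto3

dynamodb = boto3.client("dynamodb")
TABLE = "app-entries"               # placeholder table name
THIRTY_DAYS = 30 * 24 * 60 * 60

dynamodb.update_time_to_live(
    TableName=TABLE,
    TimeToLiveSpecification={"Enabled": True, "AttributeName": "expires_at"},
)

dynamodb.put_item(
    TableName=TABLE,
    Item={
        "pk": {"S": "entry#123"},
        "payload": {"S": "example data"},
        "expires_at": {"N": str(int(time.time()) + THIRTY_DAYS)},  # DynamoDB deletes after this
    },
)
```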

NEW QUESTION 210


......


Thank You for Trying Our Product

* 100% Pass or Money Back


All our products come with a 90-day Money Back Guarantee.
* One year free update
You can enjoy free updates for one year. 24x7 online support.
* Trusted by Millions
We currently serve more than 30,000,000 customers.
* Shop Securely
All transactions are protected by VeriSign!

100% Pass Your SAA-C03 Exam with Our Prep Materials Via below:

https://www.certleader.com/SAA-C03-dumps.html



