Amazon-Web-Services
Exam Questions SAA-C03
AWS Certified Solutions Architect - Associate (SAA-C03)
NEW QUESTION 1
A company uses a popular content management system (CMS) for its corporate website. However, the required patching and maintenance are burdensome. The company is redesigning its website and wants a new solution. The website will be updated four times a year and does not need to have any dynamic content available. The solution must provide high scalability and enhanced security.
Which combination of changes will meet these requirements with the LEAST operational overhead? (Choose two.)
A. Deploy an AWS WAF web ACL in front of the website to provide HTTPS functionality
B. Create and deploy an AWS Lambda function to manage and serve the website content
C. Create the new website and an Amazon S3 bucket. Deploy the website on the S3 bucket with static website hosting enabled.
D. Create the new website. Deploy the website by using an Auto Scaling group of Amazon EC2 instances behind an Application Load Balancer.
Answer: AD
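For illustration, the static-website option (C) can be enabled with a single boto3 call; a minimal sketch, assuming a hypothetical bucket name and region:
import boto3

s3 = boto3.client("s3", region_name="us-east-1")  # region is an assumption

# Enable static website hosting on an existing bucket (name is hypothetical)
s3.put_bucket_website(
    Bucket="example-corp-website",
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)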
NEW QUESTION 2
A company has an application that runs on Amazon EC2 instances and uses an Amazon Aurora database. The EC2 instances connect to the database by using
user names and passwords that are stored locally in a file. The company wants to minimize the operational overhead of credential management.
What should a solutions architect do to accomplish this goal?
Answer: B
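The option text is missing from this copy; minimizing credential-management overhead for Amazon Aurora is typically done with AWS Secrets Manager and automatic rotation. A minimal retrieval sketch (the secret name is hypothetical):
import boto3, json

secrets = boto3.client("secretsmanager")

# Fetch the database credentials at connection time instead of a local file
resp = secrets.get_secret_value(SecretId="prod/aurora/app-credentials")
creds = json.loads(resp["SecretString"])  # e.g. {"username": ..., "password": ...}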
NEW QUESTION 3
A solutions architect is designing a new hybrid architecture to extend a company's on-premises infrastructure to AWS. The company requires a highly available connection with consistent low latency to an AWS Region. The company needs to minimize costs and is willing to accept slower traffic if the primary connection fails.
What should the solutions architect do to meet these requirements?
A. Provision an AWS Direct Connect connection to a Region. Provision a VPN connection as a backup if the primary Direct Connect connection fails.
B. Provision a VPN tunnel connection to a Region for private connectivity. Provision a second VPN tunnel for private connectivity and as a backup if the primary VPN connection fails.
C. Provision an AWS Direct Connect connection to a Region. Provision a second Direct Connect connection to the same Region as a backup if the primary Direct Connect connection fails.
D. Provision an AWS Direct Connect connection to a Region. Use the Direct Connect failover attribute from the AWS CLI to automatically create a backup connection if the primary Direct Connect connection fails.
Answer: A
Explanation:
"In some cases, this connection alone is not enough. It is always better to guarantee a fallback connection as the backup of DX. There are several options, but
implementing it with an AWS Site-To-Site VPN is a real
cost-effective solution that can be exploited to reduce costs or, in the meantime, wait for the setup of a second DX."
https://www.proud2becloud.com/hybrid-cloud-networking-backup-aws-direct-connect-network-connection-with
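A Site-to-Site VPN backup such as the one in answer A can be provisioned through the EC2 API; a minimal sketch (the public IP, ASN, and gateway ID are hypothetical):
import boto3

ec2 = boto3.client("ec2")

# The customer gateway represents the on-premises VPN device
cgw = ec2.create_customer_gateway(
    BgpAsn=65000, PublicIp="198.51.100.10", Type="ipsec.1"
)

# Attach the VPN connection to the same virtual private gateway used for routing
vpn = ec2.create_vpn_connection(
    CustomerGatewayId=cgw["CustomerGateway"]["CustomerGatewayId"],
    Type="ipsec.1",
    VpnGatewayId="vgw-0123456789abcdef0",  # hypothetical existing VGW
)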
NEW QUESTION 4
A company wants to run a gaming application on Amazon EC2 instances that are part of an Auto Scaling group in the AWS Cloud. The application will transmit
data by using UDP packets. The company wants to ensure that the application can scale out and in as traffic increases and decreases.
What should a solutions architect do to meet these requirements?
Answer: B
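The options are not included in this copy; UDP traffic in front of an Auto Scaling group is normally handled by a Network Load Balancer with a UDP target group. A minimal sketch (names, port, subnets, and VPC ID are hypothetical):
import boto3

elbv2 = boto3.client("elbv2")

nlb = elbv2.create_load_balancer(
    Name="game-nlb",
    Type="network",
    Scheme="internet-facing",
    Subnets=["subnet-0aaa1111", "subnet-0bbb2222"],
)

# UDP target group; the Auto Scaling group registers instances into it
tg = elbv2.create_target_group(
    Name="game-udp-targets",
    Protocol="UDP",
    Port=7777,
    VpcId="vpc-0123456789abcdef0",
    TargetType="instance",
)

elbv2.create_listener(
    LoadBalancerArn=nlb["LoadBalancers"][0]["LoadBalancerArn"],
    Protocol="UDP",
    Port=7777,
    DefaultActions=[{"Type": "forward",
                     "TargetGroupArn": tg["TargetGroups"][0]["TargetGroupArn"]}],
)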
NEW QUESTION 5
A company has created an image analysis application in which users can upload photos and add photo frames to their images. The users upload images and
metadata to indicate which photo frames they want to add to their images. The application uses a single Amazon EC2 instance and Amazon DynamoDB to store
the metadata.
The application is becoming more popular, and the number of users is increasing. The company expects the number of concurrent users to vary significantly
depending on the time of day and day of week. The company must ensure that the application can scale to meet the needs of the growing user base.
Which solution meets these requirements?
Answer: A
NEW QUESTION 6
A development team runs monthly resource-intensive tests on its general purpose Amazon RDS for MySQL DB instance with Performance Insights enabled. The
testing lasts for 48 hours once a month and is the only process that uses the database. The team wants to reduce the cost of running the tests without reducing the
compute and memory attributes of the DB instance.
Which solution meets these requirements MOST cost-effectively?
Answer: C
NEW QUESTION 7
A company is migrating applications to AWS. The applications are deployed in different accounts. The company manages the accounts centrally by using AWS
Organizations. The company's security team needs a single sign-on (SSO) solution across all the company's accounts. The company must continue managing the
users and groups in its on-premises self-managed Microsoft Active Directory.
Which solution will meet these requirements?
A. Enable AWS Single Sign-On (AWS SSO) from the AWS SSO console. Create a one-way forest trust or a one-way domain trust to connect the company's self-managed Microsoft Active Directory with AWS SSO by using AWS Directory Service for Microsoft Active Directory.
B. Enable AWS Single Sign-On (AWS SSO) from the AWS SSO console. Create a two-way forest trust to connect the company's self-managed Microsoft Active Directory with AWS SSO by using AWS Directory Service for Microsoft Active Directory.
C. Use AWS Directory Service. Create a two-way trust relationship with the company's self-managed Microsoft Active Directory.
D. Deploy an identity provider (IdP) on premises. Enable AWS Single Sign-On (AWS SSO) from the AWS SSO console.
Answer: A
NEW QUESTION 8
A bicycle sharing company is developing a multi-tier architecture to track the location of its bicycles during peak operating hours. The company wants to use these data points in its existing analytics platform. A solutions architect must determine the most viable multi-tier option to support this architecture. The data points must be accessible from the REST API.
Which action meets these requirements for storing and retrieving location data?
Answer: D
Explanation:
https://aws.amazon.com/solutions/implementations/aws-streaming-data-solution-for-amazon-kinesis/
NEW QUESTION 9
A company runs its two-tier ecommerce website on AWS. The web tier consists of a load balancer that sends traffic to Amazon EC2 instances. The database tier
uses an Amazon RDS DB instance. The EC2 instances and the RDS DB instance should not be exposed to the public internet. The EC2 instances require internet
access to complete payment processing of orders through a third-party web service. The application must be highly available.
Which combination of configuration options will meet these requirements? (Choose two.)
A. Use an Auto Scaling group to launch the EC2 instances in private subnets. Deploy an RDS Multi-AZ DB instance in private subnets.
B. Configure a VPC with two private subnets and two NAT gateways across two Availability Zones. Deploy an Application Load Balancer in the private subnets.
C. Use an Auto Scaling group to launch the EC2 instances in public subnets across two Availability Zones. Deploy an RDS Multi-AZ DB instance in private subnets.
D. Configure a VPC with one public subnet, one private subnet, and two NAT gateways across two Availability Zones. Deploy an Application Load Balancer in the public subnet.
E. Configure a VPC with two public subnets, two private subnets, and two NAT gateways across two Availability Zones. Deploy an Application Load Balancer in the public subnets.
Answer: AE
Explanation:
Before you begin: Decide which two Availability Zones you will use for your EC2 instances. Configure your
virtual private cloud (VPC) with at least one public subnet in each of these Availability Zones. These public subnets are used to configure the load balancer. You
can launch your EC2 instances in other subnets of these Availability Zones instead.
NEW QUESTION 10
A company maintains a searchable repository of items on its website. The data is stored in an Amazon RDS for MySQL database table that contains more than 10 million rows. The database has 2 TB of General Purpose SSD storage. There are millions of updates against this data every day through the company's website. The company has noticed that some insert operations are taking 10 seconds or longer. The company has determined that the database storage performance is the problem.
Which solution addresses this performance issue?
Answer: A
Explanation:
https://aws.amazon.com/ebs/features/
"Provisioned IOPS volumes are backed by solid-state drives (SSDs) and are the highest performance EBS volumes designed for your critical, I/O intensive
database applications. These volumes are ideal for both IOPS-intensive and throughput-intensive workloads that require extremely low latency."
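For illustration, moving the DB instance to Provisioned IOPS storage can be done in place with the RDS API; a minimal sketch (the instance identifier and IOPS value are hypothetical):
import boto3

rds = boto3.client("rds")

# Switch from gp2 (General Purpose SSD) to io1 (Provisioned IOPS SSD)
rds.modify_db_instance(
    DBInstanceIdentifier="catalog-mysql-prod",
    StorageType="io1",
    Iops=20000,
    ApplyImmediately=True,
)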
NEW QUESTION 10
A company wants to migrate its on-premises data center to AWS. According to the company's compliance requirements, the company can use only the ap-northeast-3 Region. Company administrators are not permitted to connect VPCs to the internet.
Which solutions will meet these requirements? (Choose two.)
A. Use AWS Control Tower to implement data residency guardrails to deny internet access and deny access to all AWS Regions except ap-northeast-3.
B. Use rules in AWS WAF to prevent internet access. Deny access to all AWS Regions except ap-northeast-3 in the AWS account settings.
C. Use AWS Organizations to configure service control policies (SCPs) that prevent VPCs from gaining internet access. Deny access to all AWS Regions except ap-northeast-3.
D. Create an outbound rule for the network ACL in each VPC to deny all traffic from 0.0.0.0/0. Create an IAM policy for each user to prevent the use of any AWS Region other than ap-northeast-3.
E. Use AWS Config to activate managed rules to detect and alert for internet gateways and to detect and alert for new resources deployed outside of ap-northeast-3.
Answer: AC
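A Region-restriction SCP such as the one in answer C typically denies all actions outside the approved Region while exempting global services; a minimal sketch (the policy name and the exemption list are assumptions):
import boto3, json

org = boto3.client("organizations")

scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyOutsideApNortheast3",
        "Effect": "Deny",
        # Global services that must stay reachable are exempted (list is an assumption)
        "NotAction": ["iam:*", "organizations:*", "sts:*", "support:*"],
        "Resource": "*",
        "Condition": {"StringNotEquals": {"aws:RequestedRegion": "ap-northeast-3"}},
    }],
}

org.create_policy(
    Name="restrict-to-ap-northeast-3",
    Description="Deny use of all Regions except ap-northeast-3",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)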
NEW QUESTION 12
A company hosts a marketing website in an on-premises data center. The website consists of static documents and runs on a single server. An administrator
updates the website content infrequently and uses an SFTP client to upload new documents.
The company decides to host its website on AWS and to use Amazon CloudFront. The company's solutions architect creates a CloudFront distribution. The
solutions architect must design the most cost-effective and resilient architecture for website hosting to serve as the CloudFront origin.
Which solution will meet these requirements?
Answer: D
NEW QUESTION 15
A solutions architect must design a highly available infrastructure for a website. The website is powered by Windows web servers that run on Amazon EC2
instances. The solutions architect must implement a solution that can mitigate a large-scale DDoS attack that originates from thousands of IP addresses.
Downtime is not acceptable for the website.
Which actions should the solutions architect take to protect the website from such an attack? (Select TWO.)
Answer: AC
NEW QUESTION 20
A company hosts a two-tier application on Amazon EC2 instances and Amazon RDS. The application's demand varies based on the time of day. The load is
minimal after work hours and on weekends. The EC2 instances run in an EC2 Auto Scaling group that is configured with a minimum of two instances and a
maximum of five instances. The application must be available at all times, but the company is concerned about overall cost.
Which solution meets the availability requirement MOST cost-effectively?
Answer: D
NEW QUESTION 21
A company hosts a containerized web application on a fleet of on-premises servers that process incoming requests. The number of requests is growing quickly.
The on-premises servers cannot handle the increased number of requests. The company wants to move the application to AWS with minimum code changes and
minimum development effort.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use AWS Fargate on Amazon Elastic Container Service (Amazon ECS) to run the containerized web application with Service Auto Scaling. Use an Application Load Balancer to distribute the incoming requests.
B. Use two Amazon EC2 instances to host the containerized web application. Use an Application Load Balancer to distribute the incoming requests.
C. Use AWS Lambda with new code that uses one of the supported languages. Create multiple Lambda functions to support the load. Use Amazon API Gateway as an entry point to the Lambda functions.
D. Use a high performance computing (HPC) solution such as AWS ParallelCluster to establish an HPC cluster that can process the incoming requests at the appropriate scale.
Answer: A
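Service Auto Scaling for the Fargate service in answer A is configured through Application Auto Scaling; a minimal sketch (cluster and service names are hypothetical):
import boto3

scaling = boto3.client("application-autoscaling")

scaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId="service/web-cluster/web-service",
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=2,
    MaxCapacity=20,
)

# Target tracking keeps average CPU near the target by adjusting task count
scaling.put_scaling_policy(
    PolicyName="cpu-target-tracking",
    ServiceNamespace="ecs",
    ResourceId="service/web-cluster/web-service",
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
    },
)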
NEW QUESTION 25
The management account has an Amazon S3 bucket that contains project reports. The company
wants to limit access to this S3 bucket to only users of accounts within the organization in AWS
Organizations.
Which solution meets these requirements with the LEAST amount of operational overhead?
A. Add the aws:PrincipalOrgID global condition key with a reference to the organization ID to the S3 bucket policy.
B. Create an organizational unit (OU) for each department. Add the aws:PrincipalOrgPaths global condition key to the S3 bucket policy.
C. Use AWS CloudTrail to monitor the CreateAccount, InviteAccountToOrganization, LeaveOrganization, and RemoveAccountFromOrganization events. Update the S3 bucket policy accordingly.
D. Tag each user that needs access to the S3 bucket. Add the aws:PrincipalTag global condition key to the S3 bucket policy.
Answer: A
Explanation:
https://aws.amazon.com/blogs/security/control-access-to-aws-resources-by-using-the-aws-organization-of-iam-principals/
The aws:PrincipalOrgID global key provides an alternative to listing all the account IDs for all AWS
accounts in an organization. For example, the following Amazon S3 bucket policy allows members of
any account in the XXX organization to add an object into the examtopics bucket.
{"Version": "2020-09-10",
"Statement": {
"Sid": "AllowPutObject",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:PutObject",
"Resource": "arn:aws:s3:::examtopics/*",
"Condition": {"StringEquals":
{"aws:PrincipalOrgID":["XXX"]}}}}
https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html
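Applying a policy like the one above takes a single call; a minimal sketch (the bucket name mirrors the placeholder used in the quoted policy, and the organization ID is hypothetical):
import boto3, json

policy = {
    "Version": "2012-10-17",
    "Statement": {
        "Sid": "AllowPutObject",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::examtopics/*",
        # Only principals from this (hypothetical) organization may put objects
        "Condition": {"StringEquals": {"aws:PrincipalOrgID": ["o-exampleorgid"]}},
    },
}

boto3.client("s3").put_bucket_policy(Bucket="examtopics", Policy=json.dumps(policy))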
NEW QUESTION 27
A company needs to review its AWS Cloud deployment to ensure that its Amazon S3 buckets do not have unauthorized configuration changes.
What should a solutions architect do to accomplish this goal?
Answer: A
NEW QUESTION 28
A company is launching a new application and will display application metrics on an Amazon CloudWatch dashboard. The company's product manager needs to access this dashboard periodically. The product manager does not have an AWS account. A solutions architect must provide access to the product manager by following the principle of least privilege.
Which solution will meet these requirements?
Answer: A
NEW QUESTION 30
A company provides a Voice over Internet Protocol (VoIP) service that uses UDP connections. The service consists of Amazon EC2 instances that run in an Auto
Scaling group. The company has deployments across multiple AWS Regions.
The company needs to route users to the Region with the lowest latency. The company also needs automated failover between Regions.
Which solution will meet these requirements?
Answer: C
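The options are missing from this copy; lowest-latency routing with automatic Regional failover for UDP traffic normally points to AWS Global Accelerator. A minimal sketch (the name and port are hypothetical; the Global Accelerator API is served from us-west-2):
import boto3

ga = boto3.client("globalaccelerator", region_name="us-west-2")

acc = ga.create_accelerator(
    Name="voip-accelerator",
    IpAddressType="IPV4",
    Enabled=True,
)

ga.create_listener(
    AcceleratorArn=acc["Accelerator"]["AcceleratorArn"],
    Protocol="UDP",
    PortRanges=[{"FromPort": 5060, "ToPort": 5060}],
)
# Endpoint groups per Region (with traffic dials and health checks)
# provide the latency-based routing and failover.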
NEW QUESTION 31
A company that hosts its web application on AWS wants to ensure all Amazon EC2 instances, Amazon RDS DB instances, and Amazon Redshift clusters are configured with tags. The company wants to minimize the effort of configuring and operating this check.
What should a solutions architect do to accomplish this?
A. Use AWS Config rules to define and detect resources that are not properly tagged.
B. Use Cost Explorer to display resources that are not properly tagged. Tag those resources manually.
C. Write API calls to check all resources for proper tag allocation. Periodically run the code on an EC2 instance.
D. Write API calls to check all resources for proper tag allocation. Schedule an AWS Lambda function through Amazon CloudWatch to periodically run the code.
Answer: A
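The managed rule in answer A can be activated with one call; a minimal sketch (the rule name and required tag key are assumptions):
import boto3, json

config = boto3.client("config")

config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "required-tags-check",
        # REQUIRED_TAGS is the AWS managed rule identifier
        "Source": {"Owner": "AWS", "SourceIdentifier": "REQUIRED_TAGS"},
        "InputParameters": json.dumps({"tag1Key": "CostCenter"}),
        "Scope": {
            "ComplianceResourceTypes": [
                "AWS::EC2::Instance",
                "AWS::RDS::DBInstance",
                "AWS::Redshift::Cluster",
            ]
        },
    }
)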
NEW QUESTION 32
A company runs an online marketplace web application on AWS. The application serves hundreds of thousands of users during peak hours. The company needs a scalable, near-real-time solution to share the details of millions of financial transactions with several other internal applications. Transactions also need to be processed to remove sensitive data before being stored in a document database for low-latency retrieval.
What should a solutions architect recommend to meet these requirements?
A. Store the transactions data into Amazon DynamoDB. Set up a rule in DynamoDB to remove sensitive data from every transaction upon write. Use DynamoDB Streams to share the transactions data with other applications.
B. Stream the transactions data into Amazon Kinesis Data Firehose to store data in Amazon DynamoDB and Amazon S3. Use AWS Lambda integration with Kinesis Data Firehose to remove sensitive data. Other applications can consume the data stored in Amazon S3.
C. Stream the transactions data into Amazon Kinesis Data Streams. Use AWS Lambda integration to remove sensitive data from every transaction and then store the transactions data in Amazon DynamoDB. Other applications can consume the transactions data off the Kinesis data stream.
D. Store the batched transactions data in Amazon S3 as files. Use AWS Lambda to process every file and remove sensitive data before updating the files in Amazon S3. The Lambda function then stores the data in Amazon DynamoDB. Other applications can consume transaction files stored in Amazon S3.
Answer: C
Explanation:
The destination of your Kinesis Data Firehose delivery stream. Kinesis Data Firehose can send data records to various destinations, including Amazon Simple
Storage Service (Amazon S3), Amazon Redshift, Amazon OpenSearch Service,
and any HTTP endpoint that is owned by you or any of your third-party service providers. The following are the supported destinations:
* Amazon OpenSearch Service
* Amazon S3
* Datadog
* Dynatrace
* Honeycomb
* HTTP Endpoint
* Logic Monitor
* MongoDB Cloud
* New Relic
* Splunk
* Sumo Logic
https://docs.aws.amazon.com/firehose/latest/dev/create-name.html
https://aws.amazon.com/kinesis/data-streams/
Amazon Kinesis Data Streams (KDS) is a massively scalable and durable real-time data streaming service. KDS can continuously capture gigabytes of data per
second from hundreds of thousands of sources such as website clickstreams, database event streams, financial transactions, social media feeds, IT logs, and
location-tracking events.
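For illustration, a producer writing into the Kinesis data stream from answer C might look like the following minimal sketch (the stream name and record fields are hypothetical):
import boto3, json

kinesis = boto3.client("kinesis")

transaction = {"id": "txn-1001", "amount": 42.50, "card_last4": "1111"}

# PartitionKey controls shard placement; a downstream Lambda strips sensitive fields
kinesis.put_record(
    StreamName="transactions",
    Data=json.dumps(transaction).encode("utf-8"),
    PartitionKey=transaction["id"],
)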
NEW QUESTION 34
A company is building an application in the AWS Cloud. The application will store data in Amazon S3 buckets in two AWS Regions. The company must use an
AWS Key Management Service (AWS KMS) customer managed key to encrypt
all data that is stored in the S3 buckets. The data in both S3 buckets must be encrypted and decrypted with the same KMS key. The data and the key must be
stored in each of the two Regions.
Which solution will meet these requirements with the LEAST operational overhead?
A. Create an S3 bucket in each Region. Configure the S3 buckets to use server-side encryption with Amazon S3 managed encryption keys (SSE-S3). Configure replication between the S3 buckets.
B. Create a customer managed multi-Region KMS key. Create an S3 bucket in each Region. Configure replication between the S3 buckets. Configure the application to use the KMS key with client-side encryption.
C. Create a customer managed KMS key and an S3 bucket in each Region. Configure the S3 buckets to use server-side encryption with Amazon S3 managed encryption keys (SSE-S3). Configure replication between the S3 buckets.
D. Create a customer managed KMS key and an S3 bucket in each Region. Configure the S3 buckets to use server-side encryption with AWS KMS keys (SSE-KMS). Configure replication between the S3 buckets.
Answer: C
Explanation:
From https://docs.aws.amazon.com/kms/latest/developerguide/custom-key-store-overview.html: For most users, the default AWS KMS key store, which is protected by FIPS 140-2 validated cryptographic modules, fulfills their security requirements. There is no need to add an extra layer of maintenance responsibility or a dependency on an additional service. However, you might consider creating a custom key store if your organization has any of the following requirements: Key material cannot be stored in a shared environment. Key material must be subject to a secondary, independent audit path. The HSMs that generate and store key material must be certified at FIPS 140-2 Level 3.
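For reference, a customer managed multi-Region KMS key (option B) is created once and then replicated; a minimal sketch (the Regions and description are examples):
import boto3

kms = boto3.client("kms", region_name="us-east-1")

key = kms.create_key(
    Description="multi-Region key for S3 data",
    MultiRegion=True,
)

# Replicate the primary key into the second Region; both keys share key material,
# so data encrypted in one Region can be decrypted in the other
kms.replicate_key(
    KeyId=key["KeyMetadata"]["KeyId"],
    ReplicaRegion="us-west-2",
)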
NEW QUESTION 37
A company's application integrates with multiple software-as-a-service (SaaS) sources for data collection. The company runs Amazon EC2 instances to receive
the data and to upload the data to an Amazon S3 bucket for analysis. The same EC2 instance that receives and uploads the data also sends a notification to the
user when an upload is complete. The company has noticed slow application performance and wants to improve the performance as much as possible.
Which solution will meet these requirements with the LEAST operational overhead?
Answer: B
NEW QUESTION 39
A company runs a highly available image-processing application on Amazon EC2 instances in a single VPC. The EC2 instances run inside several subnets across multiple Availability Zones. The EC2 instances do not communicate with each other. However, the EC2 instances download images from Amazon S3 and upload images to Amazon S3 through a single NAT gateway. The company is concerned about data transfer charges.
What is the MOST cost-effective way for the company to avoid Regional data transfer charges?
Answer: C
NEW QUESTION 40
A company has an on-premises application that generates a large amount of time-sensitive data that is backed up to Amazon S3. The application has grown and there are user complaints about internet bandwidth limitations. A solutions architect needs to design a long-term solution that allows for both timely backups to Amazon S3 and minimal impact on internet connectivity for internal users.
Which solution meets these requirements?
A. Establish AWS VPN connections and proxy all traffic through a VPC gateway endpoint
B. Establish a new AWS Direct Connect connection and direct backup traffic through this new connection.
C. Order daily AWS Snowball devices Load the data onto the Snowball devices and return the devices to AWS each day.
D. Submit a support ticket through the AWS Management Console Request the removal of S3 service limits from the account.
Answer: B
NEW QUESTION 44
A company has an Amazon S3 bucket that contains critical data. The company must protect the data from accidental deletion.
Which combination of steps should a solutions architect take to meet these requirements? (Choose two.)
Answer: AB
NEW QUESTION 49
A company has a data ingestion workflow that consists of the following:
* An Amazon Simple Notification Service (Amazon SNS) topic for notifications about new data deliveries
* An AWS Lambda function to process the data and record metadata
The company observes that the ingestion workflow fails occasionally because of network connectivity issues. When such a failure occurs, the Lambda function does not ingest the corresponding data unless the company manually reruns the job.
Which combination of actions should a solutions architect take to ensure that the Lambda function ingests all data in the future? (Select TWO.)
Answer: BE
NEW QUESTION 50
A company has an application that provides marketing services to stores. The services are based on previous purchases by store customers. The stores upload
transaction data to the company through SFTP, and the data is processed and analyzed to generate new marketing offers. Some of the files can exceed 200 GB in
size.
Recently, the company discovered that some of the stores have uploaded files that contain personally identifiable information (PII) that should not have been
included. The company wants administrators to be alerted if PII is shared again.
The company also wants to automate remediation.
What should a solutions architect do to meet these requirements with the LEAST development effort?
Answer: B
NEW QUESTION 55
A company's website uses an Amazon EC2 instance store for its catalog of items. The company wants to make sure that the catalog is highly available and that
the catalog is stored in a durable location.
What should a solutions architect do to meet these requirements?
Answer: A
NEW QUESTION 57
A company wants to migrate its on-premises application to AWS. The application produces output files that vary in size from tens of gigabytes to hundreds of terabytes. The application data must be stored in a standard file system structure.
The company wants a solution that scales automatically, is highly available, and requires minimum operational overhead.
Which solution will meet these requirements?
A. Migrate the application to run as containers on Amazon Elastic Container Service (Amazon ECS). Use Amazon S3 for storage.
B. Migrate the application to run as containers on Amazon Elastic Kubernetes Service (Amazon EKS). Use Amazon Elastic Block Store (Amazon EBS) for storage.
C. Migrate the application to Amazon EC2 instances in a Multi-AZ Auto Scaling group. Use Amazon Elastic File System (Amazon EFS) for storage.
D. Migrate the application to Amazon EC2 instances in a Multi-AZ Auto Scaling group. Use Amazon Elastic Block Store (Amazon EBS) for storage.
Answer: C
NEW QUESTION 62
A company needs to store its accounting records in Amazon S3. The records must be immediately accessible for 1 year and then must be archived for an additional 9 years. No one at the company, including administrative users and root users, should be able to delete the records during the entire 10-year period. The records must be stored with maximum resiliency.
Which solution will meet these requirements?
Answer: C
NEW QUESTION 67
A company needs to keep user transaction data in an Amazon DynamoDB table. The company must retain the data for 7 years.
What is the MOST operationally efficient solution that meets these requirements?
Answer: C
NEW QUESTION 68
A company has more than 5 TB of file data on Windows file servers that run on premises. Users and applications interact with the data each day.
The company is moving its Windows workloads to AWS. As the company continues this process, the company requires access to AWS and on-premises file storage with minimum latency. The company needs a solution that minimizes operational overhead and requires no significant changes to the existing file access patterns. The company uses an AWS Site-to-Site VPN connection for connectivity to AWS.
What should a solutions architect do to meet these requirements?
Answer: D
NEW QUESTION 71
A company is expecting rapid growth in the near future. A solutions architect needs to configure existing users and grant permissions to new users on AWS. The solutions architect has decided to create IAM groups. The solutions architect will add the new users to IAM groups based on department.
Which additional action is the MOST secure way to grant permissions to the new users?
Answer: C
NEW QUESTION 72
A company hosts its product information webpages on AWS. The existing solution uses multiple Amazon EC2 instances behind an Application Load Balancer in an Auto Scaling group. The website also uses a custom DNS name and communicates with HTTPS only, using a dedicated SSL certificate. The company is planning a new product launch and wants to be sure that users from around the world have the best possible experience on the new website.
What should a solutions architect do to meet these requirements?
Answer: A
Explanation:
CloudFront can help provide the best experience for global users. CloudFront integrates seamlessly with an ALB and provides an option to use custom DNS names and SSL certificates.
NEW QUESTION 76
A company is storing sensitive user information in an Amazon S3 bucket. The company wants to provide secure access to this bucket from the application tier running on Amazon EC2 instances inside a VPC.
Which combination of steps should a solutions architect take to accomplish this? (Select TWO.)
Answer: BD
NEW QUESTION 79
A company is migrating its on-premises PostgreSQL database to Amazon Aurora PostgreSQL. The
on-premises database must remain online and accessible during the migration. The Aurora database must remain synchronized with the on-premises database.
Which combination of actions must a solutions architect take to meet these requirements? (Select TWO.)
Answer: CD
NEW QUESTION 82
A company's web application consists of multiple Amazon EC2 instances that run behind an Application Load Balancer in a VPC. An Amazon RDS for MySQL DB instance contains the data. The company needs the ability to automatically detect and respond to suspicious or unexpected behavior in its AWS environment. The company has already added AWS WAF to its architecture.
What should a solutions architect do next to protect against threats?
Answer: A
NEW QUESTION 84
A company has an on-premises MySQL database that handles transactional data. The company is migrating the database to the AWS Cloud. The migrated database must maintain compatibility with the company's applications that use the database. The migrated database also must scale automatically during periods of increased demand.
Which migration solution will meet these requirements?
A. Use native MySQL tools to migrate the database to Amazon RDS for MySQL. Configure elastic storage scaling.
B. Migrate the database to Amazon Redshift by using the mysqldump utility. Turn on Auto Scaling for the Amazon Redshift cluster.
C. Use AWS Database Migration Service (AWS DMS) to migrate the database to Amazon Aurora. Turn on Aurora Auto Scaling.
D. Use AWS Database Migration Service (AWS DMS) to migrate the database to Amazon DynamoDB. Configure an Auto Scaling policy.
Answer: C
NEW QUESTION 86
A company's website handles millions of requests each day, and the number of requests continues to increase. A solutions architect needs to improve the response time of the web application. The solutions architect determines that the application needs to decrease latency when retrieving product details from the Amazon DynamoDB table.
Which solution will meet these requirements with the LEAST amount of operational overhead?
A. Set up a DynamoDB Accelerator (DAX) cluster. Route all read requests through DAX.
B. Set up Amazon ElastiCache for Redis between the DynamoDB table and the web application. Route all read requests through Redis.
C. Set up Amazon ElastiCache for Memcached between the DynamoDB table and the web application. Route all read requests through Memcached.
D. Set up Amazon DynamoDB Streams on the table and have AWS Lambda read from the table and populate Amazon ElastiCache. Route all read requests through ElastiCache.
Answer: A
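Answer A's DAX cluster can be provisioned as follows; a minimal sketch (the cluster name, node type, and IAM role are hypothetical):
import boto3

dax = boto3.client("dax")

dax.create_cluster(
    ClusterName="product-cache",
    NodeType="dax.r5.large",
    ReplicationFactor=3,  # one primary plus two read replicas
    IamRoleArn="arn:aws:iam::123456789012:role/DaxToDynamoDB",
)
# The application then uses the DAX client SDK pointed at the cluster
# endpoint in place of the plain DynamoDB client for reads.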
NEW QUESTION 88
A company's ecommerce website has unpredictable traffic and uses AWS Lambda functions to directly access a private Amazon RDS for PostgreSQL DB
instance. The company wants to maintain predictable database performance and ensure that the Lambda invocations do not overload the database with too many
connections.
What should a solutions architect do to meet these requirements?
A. Point the client driver at an RDS custom endpoint. Deploy the Lambda functions inside a VPC.
B. Point the client driver at an RDS proxy endpoint. Deploy the Lambda functions inside a VPC.
C. Point the client driver at an RDS custom endpoint. Deploy the Lambda functions outside a VPC.
D. Point the client driver at an RDS proxy endpoint. Deploy the Lambda functions outside a VPC.
Answer: B
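Answer B's proxy endpoint comes from creating an RDS Proxy that pools connections in front of the DB instance; a minimal sketch (the names, ARNs, and subnets are hypothetical):
import boto3

rds = boto3.client("rds")

rds.create_db_proxy(
    DBProxyName="pg-proxy",
    EngineFamily="POSTGRESQL",
    # The proxy authenticates to the database with credentials from Secrets Manager
    Auth=[{"AuthScheme": "SECRETS",
           "SecretArn": "arn:aws:secretsmanager:us-east-1:123456789012:secret:pg-creds"}],
    RoleArn="arn:aws:iam::123456789012:role/rds-proxy-secrets",
    VpcSubnetIds=["subnet-0aaa1111", "subnet-0bbb2222"],
)
# The Lambda functions (running inside the VPC) then connect to the
# proxy's endpoint instead of the database endpoint.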
NEW QUESTION 90
A company is running a critical business application on Amazon EC2 instances behind an Application Load Balancer. The EC2 instances run in an Auto Scaling group and access an Amazon RDS DB instance.
The design did not pass an operational review because the EC2 instances and the DB instance are all located in a single Availability Zone. A solutions architect must update the design to use a second Availability Zone.
Which solution will make the application highly available?
A. Provision a subnet in each Availability Zone. Configure the Auto Scaling group to distribute the EC2 instances across both Availability Zones. Configure the DB instance with connections to each network.
B. Provision two subnets that extend across both Availability Zones. Configure the Auto Scaling group to distribute the EC2 instances across both Availability Zones. Configure the DB instance with connections to each network.
C. Provision a subnet in each Availability Zone. Configure the Auto Scaling group to distribute the EC2 instances across both Availability Zones. Configure the DB instance for Multi-AZ deployment.
D. Provision a subnet that extends across both Availability Zones. Configure the Auto Scaling group to distribute the EC2 instances across both Availability Zones. Configure the DB instance for Multi-AZ deployment.
Answer: C
NEW QUESTION 95
A company's order system sends requests from clients to Amazon EC2 instances. The EC2 instances process the orders and then store the orders in a database on Amazon RDS. Users report that they must reprocess orders when the system fails. The company wants a resilient solution that can process orders automatically if a system outage occurs.
What should a solutions architect do to meet these requirements?
Answer: C
NEW QUESTION 99
A company is building a containerized application on premises and decides to move the application to AWS. The application will have thousands of users soon after it is deployed. The company is unsure how to manage the deployment of containers at scale. The company needs to deploy the containerized application in a highly available architecture that minimizes operational overhead.
Which solution will meet these requirements?
A. Store container images in an Amazon Elastic Container Registry (Amazon ECR) repository. Use an Amazon Elastic Container Service (Amazon ECS) cluster with the AWS Fargate launch type to run the containers. Use target tracking to scale automatically based on demand.
B. Store container images in an Amazon Elastic Container Registry (Amazon ECR) repository. Use an Amazon Elastic Container Service (Amazon ECS) cluster with the Amazon EC2 launch type to run the containers. Use target tracking to scale automatically based on demand.
C. Store container images in a repository that runs on an Amazon EC2 instance. Run the containers on EC2 instances that are spread across multiple Availability Zones. Monitor the average CPU utilization in Amazon CloudWatch.
Answer: A
Answer: B
A. Have the R&D AWS account be part of both organizations during the transition.
B. Invite the R&D AWS account to be part of the new organization after the R&D AWS account has left the prior organization.
C. Create a new R&D AWS account in the new organizatio
D. Migrate resources from the period R&D AWS account to thee new R&D AWS account
E. Have the R&D AWS account into the now organisatio
F. Make the now management account a member of the prior organisation
Answer: B
Answer: D
A. Use zonal Reserved Instances for the master nodes and the core nodes. Use a Spot Fleet for the task nodes.
B. Use zonal Reserved Instances for the master nodes. Use Spot Instances for the core nodes and the task nodes.
C. Use regional Reserved Instances for the master nodes. Use a Spot Fleet for the core nodes and the task nodes.
D. Use regional Reserved Instances for the master nodes. Use On-Demand Capacity Reservations for the core nodes and the task nodes.
Answer: A
A. Use AWS Glue and write custom scripts to query CloudTrail logs for the errors.
B. Use AWS Batch and write custom scripts to query CloudTrail logs for the errors.
C. Search CloudTrail logs with Amazon Athena queries to identify the errors.
D. Search CloudTrail logs with Amazon QuickSight. Create a dashboard to identify the errors.
Answer: C
Answer: C
A. Create a read replica in us-west-1. Set the DB cluster to automatically fail over to the read replica if the primary instance is not responding.
B. Create an Aurora global database. Set us-west-1 as the secondary Region. Update connections to use the writer and reader endpoints as appropriate.
C. Set up a second Aurora DB cluster in us-west-1. Use logical replication to keep the databases synchronized. Create an Amazon EventBridge (Amazon CloudWatch Events) rule to change the database endpoint if the primary DB cluster does not respond.
D. Use Aurora automated snapshots to store data in an Amazon S3 bucket. Enable S3 Versioning. Configure S3 Cross-Region Replication to us-west-1. Create a second Aurora DB cluster in us-west-1. Create an Amazon EventBridge (Amazon CloudWatch Events) rule to restore the snapshot if the primary DB cluster does not respond.
Answer: B
A. Use existing Python libraries to extract the text from the reports and to identify the PHI from the extracted text.
B. Use Amazon Textract to extract the text from the reports. Use Amazon SageMaker to identify the PHI from the extracted text.
C. Use Amazon Textract to extract the text from the reports. Use Amazon Comprehend Medical to identify the PHI from the extracted text.
D. Use Amazon Rekognition to extract the text from the reports. Use Amazon Comprehend Medical to identify the PHI from the extracted text.
Answer: C
A. Enable S3 Versioning on the publisher account's S3 bucket. Configure S3 Same-Region Replication of the objects to the subscriber account's S3 bucket.
B. Create an AWS Lambda function that is invoked when objects are published in the publisher account's S3 bucket. Configure the Lambda function to copy the objects to the subscriber account's S3 bucket.
C. Configure Amazon EventBridge (Amazon CloudWatch Events) to invoke an AWS Lambda function when objects are published in the publisher account's S3 bucket. Configure the Lambda function to copy the objects to the subscriber account's S3 bucket.
D. Configure Amazon EventBridge (Amazon CloudWatch Events) to publish Amazon Simple Notification Service (Amazon SNS) notifications when objects are published in the publisher account's S3 bucket. When notifications are received, use the S3 console to copy the objects to the subscriber account's S3 bucket.
Answer: B
A. Configure provisioned concurrency for the Lambda function. Modify the database to be a global database in multiple AWS Regions.
B. Use Amazon RDS Proxy to create a proxy for the database. Modify the Lambda function to use the RDS Proxy endpoint instead of the database endpoint.
C. Create a read replica for the database in a different AWS Region. Use query string parameters in API Gateway to route traffic to the read replica.
D. Migrate the data from Aurora PostgreSQL to Amazon DynamoDB by using AWS Database Migration Service (AWS DMS). Modify the Lambda function to use the DynamoDB table.
Answer: C
A. Deploy EC2 instances in an additional Region. Create a DB instance with the Multi-AZ option activated.
B. Deploy all EC2 instances in the same Region and the same Availability Zone. Create a DB instance with the Multi-AZ option activated.
C. Deploy the EC2 instances across at least two Availability Zones within the same Region. Create a DB instance in a single Availability Zone.
D. Deploy the EC2 instances across at least two Availability Zones within the same Region. Create a DB instance with the Multi-AZ option activated.
Answer: D
A. Create a usage plan with an API key that is shared with genuine users only.
B. Integrate logic within the Lambda function to ignore the requests from fraudulent IP addresses.
C. Implement an AWS WAF rule to target malicious requests and trigger actions to filter them out.
D. Convert the existing public API to a private API. Update the DNS records to redirect users to the new API endpoint.
E. Create an IAM role for each user attempting to access the API. A user will assume the role when making the API call.
Answer: CD
A. Direct the requests from the API to a Network Load Balancer (NLB). Deploy the models as AWS Lambda functions that are invoked by the NLB.
B. Direct the requests from the API to an Application Load Balancer (ALB). Deploy the models as Amazon Elastic Container Service (Amazon ECS) services that read from an Amazon Simple Queue Service (Amazon SQS) queue. Use AWS App Mesh to scale the instances of the ECS cluster based on the SQS queue size.
C. Direct the requests from the API into an Amazon Simple Queue Service (Amazon SQS) queue. Deploy the models as AWS Lambda functions that are invoked by SQS events. Use AWS Auto Scaling to increase the number of vCPUs for the Lambda functions based on the SQS queue size.
D. Direct the requests from the API into an Amazon Simple Queue Service (Amazon SQS) queue. Deploy the models as Amazon Elastic Container Service (Amazon ECS) services that read from the queue. Enable AWS Auto Scaling on Amazon ECS for both the cluster and copies of the service based on the queue size.
Answer: C
A. Configure an AWS Lambda function in each developer account to copy the log files to the central account. Create an IAM role in the central account for the auditor. Attach an IAM policy providing read-only permissions to the bucket.
B. Configure CloudTrail from each developer account to deliver the log files to an S3 bucket in the central account. Create an IAM user in the central account for the auditor. Attach an IAM policy providing full permissions to the bucket.
C. Configure CloudTrail from each developer account to deliver the log files to an S3 bucket in the central account. Create an IAM role in the central account for the auditor. Attach an IAM policy providing read-only permissions to the bucket.
D. Configure an AWS Lambda function in the central account to copy the log files from the S3 bucket in each developer account. Create an IAM user in the central account for the auditor. Attach an IAM policy providing full permissions to the bucket.
Answer: C
Explanation:
https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-sharing-logs.html
A. Use Amazon Aurora MySQL with Multi-AZ Aurora Replicas for production. Populate the staging database by implementing a backup and restore process that uses the mysqldump utility.
B. Use Amazon Aurora MySQL with Multi-AZ Aurora Replicas for production. Use database cloning to create the staging database on demand.
C. Use Amazon RDS for MySQL with a Multi-AZ deployment and read replicas for production. Use the standby instance for the staging database.
D. Use Amazon RDS for MySQL with a Multi-AZ deployment and read replicas for production. Populate the staging database by implementing a backup and restore process that uses the mysqldump utility.
Answer: C
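Aurora database cloning, as referenced in the options above, is exposed through the point-in-time restore API with a copy-on-write restore type; a minimal sketch (cluster identifiers are hypothetical):
import boto3

rds = boto3.client("rds")

# Copy-on-write clone: storage is shared with the source until data diverges,
# so the staging cluster is created quickly and cheaply
rds.restore_db_cluster_to_point_in_time(
    DBClusterIdentifier="staging-clone",
    SourceDBClusterIdentifier="production-cluster",
    RestoreType="copy-on-write",
    UseLatestRestorableTime=True,
)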
A. Attach a VPC endpoint policy for DynamoDB to allow write access to only the specific DynamoDB tables.
B. Attach a security group to the interface VPC endpoint to allow write access to only the specific DynamoDB tables.
C. Create a resource-based IAM policy to grant write access to only the specific DynamoDB tables. Attach the policy to the DynamoDB tables.
D. Create a gateway VPC endpoint for DynamoDB that is associated with the Lambda VPC. Ensure that the Lambda execution role can access the gateway VPC endpoint.
E. Create an interface VPC endpoint for DynamoDB that is associated with the Lambda VPC. Ensure that the Lambda execution role can access the interface VPC endpoint.
Answer: AD
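The gateway endpoint plus endpoint policy combination from answers A and D might look like this; a minimal sketch (the VPC, route table, Region, and table ARN are hypothetical):
import boto3, json

ec2 = boto3.client("ec2", region_name="us-east-1")

endpoint_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        # Restrict the endpoint to writes against one specific table
        "Action": ["dynamodb:PutItem", "dynamodb:UpdateItem", "dynamodb:BatchWriteItem"],
        "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/orders",
    }],
}

ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.dynamodb",
    VpcEndpointType="Gateway",
    RouteTableIds=["rtb-0aaa1111"],
    PolicyDocument=json.dumps(endpoint_policy),
)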
Answer: C
Explanation:
https://aws.amazon.com/premiumsupport/knowledge-center/multivalue-versus-simple-policies/
"Use a multivalue answer routing policy to help distribute DNS responses across multiple resources. For example, use multivalue answer routing when you want to
associate your routing records with a Route 53 health check."
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html#routing-policy-multivalue
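A multivalue answer record set with an attached health check might look like the following minimal sketch (the hosted zone ID, record name, IP address, and health check ID are hypothetical):
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0EXAMPLE",
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.example.com",
            "Type": "A",
            "SetIdentifier": "server-1",   # one record set per resource
            "MultiValueAnswer": True,
            "TTL": 60,
            "ResourceRecords": [{"Value": "203.0.113.10"}],
            # Unhealthy records are dropped from DNS responses
            "HealthCheckId": "11111111-2222-3333-4444-555555555555",
        },
    }]},
)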
A. Create a database server security group with inbound and outbound rules for MySQL port 3306 traffic to and from anywhere (0.0.0.0/0).
B. Create a database server security group with an inbound rule for MySQL port 3306 and specify the source as a web server security group.
C. Create a web server security group with an inbound allow rule for HTTPS port 443 traffic from anywhere (0.0.0.0/0) and an inbound deny rule for IP range 182.20.0.0/16.
D. Create a web server security group with an inbound rule for HTTPS port 443 traffic from anywhere (0.0.0.0/0). Create network ACL inbound and outbound deny rules for IP range 182.20.0.0/16.
E. Create a web server security group with inbound and outbound rules for HTTPS port 443 traffic to and from anywhere (0.0.0.0/0). Create a network ACL inbound deny rule for IP range 182.20.0.0/16.
Answer: BD
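The security-group chaining in answer B can be expressed as follows; a minimal sketch (the group IDs are hypothetical):
import boto3

ec2 = boto3.client("ec2")

# Allow MySQL traffic only from members of the web server security group
ec2.authorize_security_group_ingress(
    GroupId="sg-0db1111111111111",          # database server security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": "sg-0web222222222222"}],  # web tier SG
    }],
)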
Answer: A
A. Use AWS DataSync to transfer the data to Amazon S3. Use AWS Glue to transform the data and integrate the data into a data lake.
B. Use AWS Snowball to transfer the data to Amazon S3. Use AWS Batch to transform the data and integrate the data into a data lake.
C. Use AWS Database Migration Service (AWS DMS) to transfer the data to Amazon S3. Use AWS Glue to transform the data and integrate the data into a data lake.
D. Use an Amazon EC2 instance to transfer the data to Amazon S3. Configure the EC2 instance to transform the data and integrate the data into a data lake.
Answer: C
A. Create internal Network Load Balancers in front of the application in each Region
B. Create external Application Load Balancers in front of the application in each Region
C. Create an AWS Global Accelerator accelerator to route traffic to the load balancers in each Region
D. Configure Amazon Route 53 to use a geolocation routing policy to distribute the traffic
E. Configure Amazon CloudFront to handle the traffic and route requests to the application in each Region
Answer: AC
A. Use one SQS FIFO queue. Assign a higher priority to the paid photos so they are processed first.
B. Use two SQS FIFO queues: one for paid and one for free. Set the free queue to use short polling and the paid queue to use long polling.
C. Use two SQS standard queues: one for paid and one for free. Configure Amazon EC2 instances to prioritize polling for the paid queue over the free queue.
D. Use one SQS standard queue. Set the visibility timeout of the paid photos to zero. Configure Amazon EC2 instances to prioritize visibility settings so paid photos are processed first.
Answer: C
Explanation:
"Priority: Use separate queues to provide prioritization of work." https://aws.amazon.com/sqs/features/
https://acloud.guru/forums/guru-of-the-week/discussion/-L7Be8rOao3InQxdQcXj/
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-short-and-long-polling.
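The two-queue pattern in answer C usually means workers drain the paid queue before falling back to the free queue; a minimal sketch (the queue URLs are hypothetical):
import boto3

sqs = boto3.client("sqs")

PAID_QUEUE = "https://sqs.us-east-1.amazonaws.com/123456789012/photos-paid"
FREE_QUEUE = "https://sqs.us-east-1.amazonaws.com/123456789012/photos-free"

def next_batch():
    # Check the paid queue first; only poll the free queue when paid is empty
    for queue_url in (PAID_QUEUE, FREE_QUEUE):
        resp = sqs.receive_message(
            QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=1
        )
        messages = resp.get("Messages", [])
        if messages:
            return queue_url, messages
    return None, []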
A. S3 Standard
B. S3 Intelligent-Tiering
C. S3 Standard-Infrequent Access (S3 Standard-IA)
D. S3 One Zone-Infrequent Access (S3 One Zone-IA)
Answer: C
Answer: A
Answer: B
A. Use Spot Instances in an Amazon EC2 Auto Scaling group to run the application containers
B. Use Spot Instances in an Amazon Elastic Kubernetes Service (Amazon EKS) managed node group
C. Use On-Demand Instances in an Amazon EC2 Auto Scaling group to run the application containers
D. Use On-Demand Instances in an Amazon Elastic Kubernetes Service (Amazon EKS) managed node group.
Answer: A
A. Use AWS DataSync to move the data. Create a custom transformation job by using AWS Glue.
B. Order an AWS Snowcone device to move the data. Deploy the transformation application to the device.
C. Order an AWS Snowball Edge Storage Optimized device. Copy the data to the device. Create a custom transformation job by using AWS Glue.
D. Order an AWS Snowball Edge Storage Optimized device that includes Amazon EC2 compute. Copy the data to the device. Create a new EC2 instance on AWS to run the transformation application.
Answer: D
A. Have the deployment engineer use AWS account root user credentials for performing AWS CloudFormation stack operations.
B. Create a new IAM user for the deployment engineer and add the IAM user to a group that has the PowerUsers IAM policy attached.
C. Create a new IAM user for the deployment engineer and add the IAM user to a group that has the AdministratorAccess IAM policy attached.
D. Create a new IAM user for the deployment engineer and add the IAM user to a group that has an IAM policy that allows AWS CloudFormation actions only.
E. Create an IAM role for the deployment engineer to explicitly define the permissions specific to the AWS CloudFormation stack and launch stacks using that IAM role.
Answer: DE
Explanation:
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users.html
Answer: A
Answer: C
Answer: C
Explanation:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Encryption.html#Overview.Encryption.
Answer: A
Answer: C
A. Migrate both applications to AWS Lambda. Create an Amazon S3 bucket to exchange data between the applications.
B. Migrate both applications to Amazon Elastic Container Service (Amazon ECS). Configure Amazon FSx File Gateway for storage.
C. Migrate the simulation application to Linux Amazon EC2 instances. Migrate the visualization application to Windows EC2 instances. Configure Amazon Simple Queue Service (Amazon SQS) to exchange data between the applications.
D. Migrate the simulation application to Linux Amazon EC2 instances. Migrate the visualization application to Windows EC2 instances. Configure Amazon FSx for NetApp ONTAP for storage.
Answer: E
A. Use Amazon Kinesis Data Streams to ingest the data. Process the data by using an AWS Lambda function.
B. Use Amazon API Gateway on top of the existing application. Create a usage plan with a quota limit for the third-party vendor.
C. Use Amazon Simple Notification Service (Amazon SNS) to ingest the data. Put the EC2 instances in an Auto Scaling group behind an Application Load Balancer.
D. Repackage the application as a container. Deploy the application by using Amazon Elastic Container Service (Amazon ECS) with the EC2 launch type and an Auto Scaling group.
Answer: A
Answer: B
A. Create an Amazon Route 53 public hosted zone for the domain name. Import the zone file containing the domain records hosted by the previous provider.
B. Create an Amazon Route 53 private hosted zone for the domain name. Import the zone file containing the domain records hosted by the previous provider.
C. Create a Simple AD directory in AWS. Enable zone transfer between the DNS provider and AWS Directory Service for Microsoft Active Directory for the domain records.
D. Create an Amazon Route 53 Resolver inbound endpoint in the VPC. Specify the IP addresses that the provider's DNS will forward DNS queries to. Configure the provider's DNS to forward DNS queries for the domain to the IP addresses that are specified in the inbound endpoint.
Answer: B
A. Configure an Amazon Simple Queue Service (Amazon SQS) queue as a destination for the jobs. Implement the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure EC2 Auto Scaling to use scheduled scaling.
B. Configure an Amazon Simple Queue Service (Amazon SQS) queue as a destination for the jobs. Implement the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure EC2 Auto Scaling based on the size of the queue.
C. Implement the primary server and the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure AWS CloudTrail as a destination for the jobs. Configure EC2 Auto Scaling based on the load on the primary server.
D. Implement the primary server and the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure Amazon EventBridge (Amazon CloudWatch Events) as a destination for the jobs. Configure EC2 Auto Scaling based on the load on the compute nodes.
Answer: C
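Scaling an Auto Scaling group on SQS backlog is typically done with a target tracking policy on a custom queue metric; a minimal sketch (the group name, queue name, and target value are assumptions):
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="compute-nodes",
    PolicyName="scale-on-queue-depth",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "CustomizedMetricSpecification": {
            "MetricName": "ApproximateNumberOfMessagesVisible",
            "Namespace": "AWS/SQS",
            "Dimensions": [{"Name": "QueueName", "Value": "jobs"}],
            "Statistic": "Average",
        },
        "TargetValue": 100.0,  # desired backlog per instance is an assumption
    },
)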
Answer: A
A. Store the logs in Amazon S3. Use AWS Backup to move logs more than 1 month old to S3 Glacier Deep Archive.
B. Store the logs in Amazon S3. Use S3 Lifecycle policies to move logs more than 1 month old to S3 Glacier Deep Archive.
C. Store the logs in Amazon CloudWatch Logs. Use AWS Backup to move logs more than 1 month old to S3 Glacier Deep Archive.
D. Store the logs in Amazon CloudWatch Logs. Use Amazon S3 Lifecycle policies to move logs more than 1 month old to S3 Glacier Deep Archive.
Answer: B
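Answer B's lifecycle rule can be written as follows; a minimal sketch (the bucket name and prefix are hypothetical):
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="app-logs",
    LifecycleConfiguration={"Rules": [{
        "ID": "archive-after-30-days",
        "Status": "Enabled",
        "Filter": {"Prefix": "logs/"},
        # Move objects older than ~1 month to Glacier Deep Archive
        "Transitions": [{"Days": 30, "StorageClass": "DEEP_ARCHIVE"}],
    }]},
)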
A. Use Amazon Kinesis Data Streams to ingest the data. Use AWS Lambda to transform the data. Configure Lambda to write the data to Amazon OpenSearch Service (Amazon Elasticsearch Service).
B. Configure Amazon Kinesis Data Streams to write the data to an Amazon S3 bucket. Schedule an AWS Glue crawler job to enrich the data and update the AWS Glue Data Catalog. Use Amazon Athena for analytics.
C. Configure Amazon Kinesis Data Streams to write the data to an Amazon S3 bucket. Add an Apache Spark job on Amazon EMR to enrich the data in the S3 bucket and write the data to Amazon OpenSearch Service (Amazon Elasticsearch Service).
D. Use Amazon Kinesis Data Firehose to ingest the data. Enable Kinesis Data Firehose data transformation with AWS Lambda. Configure Kinesis Data Firehose to write the data to Amazon OpenSearch Service (Amazon Elasticsearch Service).
Answer: C
Answer: B
A. Create an Amazon CloudFront distribution that has the S3 bucket and the ALB as origins. Configure Route 53 to route traffic to the CloudFront distribution.
B. Create an Amazon CloudFront distribution that has the ALB as an origin. Create an AWS Global Accelerator standard accelerator that has the S3 bucket as an endpoint. Configure Route 53 to route traffic to the CloudFront distribution.
C. Create an Amazon CloudFront distribution that has the S3 bucket as an origin. Create an AWS Global Accelerator standard accelerator that has the ALB and the CloudFront distribution as endpoints. Create a custom domain name that points to the accelerator DNS name. Use the custom domain name as an endpoint for the web application.
D. Create an Amazon CloudFront distribution that has the ALB as an origin. Create an AWS Global Accelerator standard accelerator that has the S3 bucket as an endpoint. Create two domain names. Point one domain name to the CloudFront DNS name for dynamic content. Point the other domain name to the accelerator DNS name for static content. Use the domain names as endpoints for the web application.
Answer: D
A. Use Amazon S3 to host the full website in different S3 buckets. Add Amazon CloudFront distributions. Set the S3 buckets as origins for the distributions. Store the order data in Amazon S3.
B. Deploy the full website on Amazon EC2 instances that run in Auto Scaling groups across multiple Availability Zones. Add an Application Load Balancer (ALB) to distribute the website traffic. Add another ALB for the backend APIs. Store the data in Amazon RDS for MySQL.
C. Migrate the full application to run in containers. Host the containers on Amazon Elastic Kubernetes Service (Amazon EKS). Use the Kubernetes Cluster Autoscaler to increase and decrease the number of pods to process bursts in traffic. Store the data in Amazon RDS for MySQL.
D. Use an Amazon S3 bucket to host the website's static content. Deploy an Amazon CloudFront distribution. Set the S3 bucket as the origin. Use Amazon API Gateway and AWS Lambda functions for the backend APIs. Store the data in Amazon DynamoDB.
Answer: D
A. Use an Amazon CloudWatch metric stream to send the EC2 Auto Scaling status data to Amazon Kinesis Data Firehose. Store the data in Amazon S3.
B. Launch an Amazon EMR cluster to collect the EC2 Auto Scaling status data and send the data to Amazon Kinesis Data Firehose. Store the data in Amazon S3.
C. Create an Amazon EventBridge (Amazon CloudWatch Events) rule to invoke an AWS Lambda function on a schedule. Configure the Lambda function to send the EC2 Auto Scaling status data directly to Amazon S3.
D. Use a bootstrap script during the launch of an EC2 instance to install Amazon Kinesis Agent. Configure Kinesis Agent to collect the EC2 Auto Scaling status data and send the data to Amazon Kinesis Data Firehose. Store the data in Amazon S3.
Answer: B
A. S3 Standard
B. S3 Intelligent-Tiering
C. S3 Standard-Infrequent Access (S3 Standard-IA)
Answer: B
A. Create a site-to-site VPN tunnel to an Amazon S3 bucket and transfer the files directly. Create a bucket policy to enforce a VPC endpoint.
B. Order 10 AWS Snowball appliances and select an S3 Glacier vault as the destination. Create a bucket policy to enforce a VPC endpoint.
C. Mount the network-attached file system to Amazon S3 and copy the files directly. Create a lifecycle policy to transition the S3 objects to Amazon S3 Glacier.
D. Order 10 AWS Snowball appliances and select an Amazon S3 bucket as the destination. Create a lifecycle policy to transition the S3 objects to Amazon S3 Glacier.
Answer: D
Answer: AD
A. Store the data in Amazon S3 Glacier. Update the S3 Glacier vault policy to allow access to the application instances.
B. Store the data in an Amazon Elastic Block Store (Amazon EBS) volume. Mount the EBS volume on the application instances.
C. Store the data in an Amazon Elastic File System (Amazon EFS) file system. Mount the file system on the application instances.
D. Store the data in an Amazon Elastic Block Store (Amazon EBS) Provisioned IOPS volume shared between the application instances.
Answer: C
A. Create Amazon Elastic Block Store (Amazon EBS) volumes in the same Availability Zones where the EKS worker nodes are placed. Register the volumes in a StorageClass object on an EKS cluster. Use EBS Multi-Attach to share the data between containers.
B. Create an Amazon Elastic File System (Amazon EFS) file system. Register the file system in a StorageClass object on an EKS cluster. Use the same file system for all containers.
C. Create an Amazon Elastic Block Store (Amazon EBS) volume. Register the volume in a StorageClass object on an EKS cluster. Use the same volume for all containers.
D. Create Amazon Elastic File System (Amazon EFS) file systems in the same Availability Zones where the EKS worker nodes are placed. Register the file systems in a StorageClass object on an EKS cluster. Create an AWS Lambda function to synchronize the data between the file systems.
Answer: B
A. Place the instances in a public subnet. Use Amazon S3 for storage. Access S3 objects by using URLs.
B. Place the instances in a private subnet. Use Amazon S3 for storage. Use a VPC endpoint to access S3 objects.
C. Use the instances with a Provisioned IOPS SSD (io2) Amazon Elastic Block Store (Amazon EBS) volume.
D. Use Amazon Elastic File System (Amazon EFS) Standard-Infrequent Access (Standard-IA) storage to store data and provide shared access to the instances.
Answer: B
A company stores confidential data in an Amazon Aurora PostgreSQL database in the ap-southeast-3 Region. The database is encrypted with an AWS Key Management Service (AWS KMS) customer managed key. The company was recently acquired and must securely share a backup of the database with the acquiring company's AWS account in ap-southeast-3.
What should a solutions architect do to meet these requirements?
A. Create a database snapshot. Copy the snapshot to a new unencrypted snapshot. Share the new snapshot with the acquiring company's AWS account.
B. Create a database snapshot. Add the acquiring company's AWS account to the KMS key policy. Share the snapshot with the acquiring company's AWS account.
C. Create a database snapshot that uses a different AWS managed KMS key. Add the acquiring company's AWS account to the KMS key alias. Share the snapshot with the acquiring company's AWS account.
D. Create a database snapshot. Download the database snapshot. Upload the database snapshot to an Amazon S3 bucket. Update the S3 bucket policy to allow access from the acquiring company's AWS account.
Answer: A