AWS Certified Solutions Architect - Associate SAA-C02 v1.0


A company currently operates a web application backed by an Amazon RDS MySQL database. It has automated backups that are run daily and are not encrypted. A security audit requires future backups to be encrypted and the unencrypted backups to be destroyed. The company will make at least one encrypted backup before destroying the old backups.
What should be done to enable encryption for future backups?

  • A. Enable default encryption for the Amazon S3 bucket where backups are stored.
  • B. Modify the backup section of the database configuration to toggle the Enable encryption check box.
  • C. Create a snapshot of the database. Copy it to an encrypted snapshot. Restore the database from the encrypted snapshot.
  • D. Enable an encrypted read replica on RDS for MySQL. Promote the encrypted read replica to primary. Remove the original database instance.


Answer : C

Explanation:
You can't enable encryption for an existing unencrypted DB instance directly. However, because you can encrypt a copy of an unencrypted DB snapshot, you can effectively add encryption to an unencrypted DB instance. That is, you can create a snapshot of your DB instance, and then create an encrypted copy of that snapshot. You can then restore a DB instance from the encrypted snapshot, and thus you have an encrypted copy of your original DB instance.
DB instances that are encrypted can't be modified to disable encryption.
You can't have an encrypted read replica of an unencrypted DB instance or an unencrypted read replica of an encrypted DB instance.
Encrypted read replicas must be encrypted with the same key as the source DB instance when both are in the same AWS Region.
You can't restore an unencrypted backup or snapshot to an encrypted DB instance.
To copy an encrypted snapshot from one AWS Region to another, you must specify the KMS key identifier of the destination AWS Region. This is because KMS encryption keys are specific to the AWS Region that they are created in.
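The snapshot-copy-restore sequence described above maps onto three RDS API calls. Below is a minimal boto3 sketch; the instance and snapshot identifiers and the KMS key are hypothetical placeholders, not values from the question.

```python
import boto3

rds = boto3.client("rds")

# 1. Snapshot the existing unencrypted DB instance (identifiers are examples).
rds.create_db_snapshot(
    DBInstanceIdentifier="mydb",
    DBSnapshotIdentifier="mydb-unencrypted-snap",
)
rds.get_waiter("db_snapshot_available").wait(
    DBSnapshotIdentifier="mydb-unencrypted-snap"
)

# 2. Copy the snapshot, enabling encryption by supplying a KMS key.
rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier="mydb-unencrypted-snap",
    TargetDBSnapshotIdentifier="mydb-encrypted-snap",
    KmsKeyId="alias/aws/rds",  # assumed key; a customer-managed key also works
)
rds.get_waiter("db_snapshot_available").wait(
    DBSnapshotIdentifier="mydb-encrypted-snap"
)

# 3. Restore a new, encrypted instance from the encrypted copy.
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="mydb-encrypted",
    DBSnapshotIdentifier="mydb-encrypted-snap",
)
```

Once the restored instance is verified and at least one encrypted backup exists, the original instance and its unencrypted snapshots can be deleted.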
Reference:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Encryption.html

A company is hosting a website behind multiple Application Load Balancers. The company has different distribution rights for its content around the world. A solutions architect needs to ensure that users are served the correct content without violating distribution rights.
Which configuration should the solutions architect choose to meet these requirements?

  • A. Configure Amazon CloudFront with AWS WAF.
  • B. Configure Application Load Balancers with AWS WAF.
  • C. Configure Amazon Route 53 with a geolocation policy.
  • D. Configure Amazon Route 53 with a geoproximity routing policy.


Answer : C

Reference:
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html
(geolocation routing)
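For reference, a geolocation record set might look like the following boto3 sketch. The hosted zone ID, domain, and per-Region ALB endpoints are hypothetical placeholders; a real deployment would add one record per licensed territory.

```python
import boto3

route53 = boto3.client("route53")

# Geolocation routing needs one record set per location plus a default
# catch-all record (CountryCode="*") for users who match no other rule.
route53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.example.com",
                    "Type": "CNAME",
                    "TTL": 60,
                    "SetIdentifier": "europe",
                    "GeoLocation": {"ContinentCode": "EU"},
                    "ResourceRecords": [{"Value": "eu-alb.example.com"}],
                },
            },
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.example.com",
                    "Type": "CNAME",
                    "TTL": 60,
                    "SetIdentifier": "default",
                    "GeoLocation": {"CountryCode": "*"},
                    "ResourceRecords": [{"Value": "us-alb.example.com"}],
                },
            },
        ]
    },
)
```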

A solutions architect has created a new AWS account and must secure AWS account root user access.
Which combination of actions will accomplish this? (Choose two.)

  • A. Ensure the root user uses a strong password.
  • B. Enable multi-factor authentication to the root user.
  • C. Store root user access keys in an encrypted Amazon S3 bucket.
  • D. Add the root user to a group containing administrative permissions.
  • E. Apply the required permissions to the root user with an inline policy document.


Answer : AB

A solutions architect at an ecommerce company wants to back up application log data to Amazon S3. The solutions architect is unsure how frequently the logs will be accessed or which logs will be accessed the most. The company wants to keep costs as low as possible by using the appropriate S3 storage class.
Which S3 storage class should be implemented to meet these requirements?

  • A. S3 Glacier
  • B. S3 Intelligent-Tiering
  • C. S3 Standard-Infrequent Access (S3 Standard-IA)
  • D. S3 One Zone-Infrequent Access (S3 One Zone-IA)


Answer : B

Explanation:

S3 Intelligent-Tiering -
S3 Intelligent-Tiering is a new Amazon S3 storage class designed for customers who want to optimize storage costs automatically when data access patterns change, without performance impact or operational overhead. S3 Intelligent-Tiering is the first cloud object storage class that delivers automatic cost savings by moving data between two access tiers (frequent access and infrequent access) when access patterns change, and is ideal for data with unknown or changing access patterns.
S3 Intelligent-Tiering stores objects in two access tiers: one tier that is optimized for frequent access and another lower-cost tier that is optimized for infrequent access. For a small monthly monitoring and automation fee per object, S3 Intelligent-Tiering monitors access patterns and moves objects that have not been accessed for 30 consecutive days to the infrequent access tier. There are no retrieval fees in S3 Intelligent-Tiering. If an object in the infrequent access tier is accessed later, it is automatically moved back to the frequent access tier. No additional tiering fees apply when objects are moved between access tiers within the S3 Intelligent-Tiering storage class. S3 Intelligent-Tiering is designed for 99.9% availability and 99.999999999% durability, and offers the same low latency and high throughput performance of S3 Standard.
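In practice no bucket-level switch is needed: an object lands in Intelligent-Tiering when the StorageClass parameter is set at upload time (a lifecycle transition is the alternative for existing objects). A short boto3 sketch with hypothetical bucket and key names:

```python
import boto3

s3 = boto3.client("s3")

# Upload a log file directly into S3 Intelligent-Tiering (names are examples).
with open("app.log", "rb") as body:
    s3.put_object(
        Bucket="example-app-logs",
        Key="logs/2021/app.log",
        Body=body,
        StorageClass="INTELLIGENT_TIERING",
    )
```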
Reference:
https://aws.amazon.com/about-aws/whats-new/2018/11/s3-intelligent-tiering/

A company's website is used to sell products to the public. The site runs on Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer (ALB). There is also an Amazon CloudFront distribution, and AWS WAF is being used to protect against SQL injection attacks. The ALB is the origin for the CloudFront distribution. A recent review of security logs revealed an external malicious IP that needs to be blocked from accessing the website.
What should a solutions architect do to protect the application?

  • A. Modify the network ACL on the CloudFront distribution to add a deny rule for the malicious IP address.
  • B. Modify the configuration of AWS WAF to add an IP match condition to block the malicious IP address.
  • C. Modify the network ACL for the EC2 instances in the target groups behind the ALB to deny the malicious IP address.
  • D. Modify the security groups for the EC2 instances in the target groups behind the ALB to deny the malicious IP address.


Answer : B

Explanation:
If you want to allow or block web requests based on the IP addresses that the requests originate from, create one or more IP match conditions. An IP match condition lists up to 10,000 IP addresses or IP address ranges that your requests originate from. Later in the process, when you create a web ACL, you specify whether to allow or block requests from those IP addresses.
AWS Web Application Firewall (WAF) helps to protect your web applications from common application-layer exploits that can affect availability or consume excessive resources. As described in the launch post (New - AWS WAF), WAF allows you to use access control lists (ACLs), rules, and conditions that define acceptable or unacceptable requests or IP addresses. You can selectively allow or deny access to specific parts of your web application and you can also guard against various SQL injection attacks. WAF launched with support for Amazon CloudFront.
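The same idea expressed with the current WAFv2 API (the text above describes WAF Classic): create an IP set, then reference it from a block rule in the web ACL attached to the CloudFront distribution. The IP address and names below are placeholders.

```python
import boto3

# CloudFront-scoped WAF resources must be created in us-east-1.
wafv2 = boto3.client("wafv2", region_name="us-east-1")

response = wafv2.create_ip_set(
    Name="blocked-ips",
    Scope="CLOUDFRONT",
    IPAddressVersion="IPV4",
    Addresses=["203.0.113.45/32"],  # example malicious IP from the log review
    Description="Malicious IPs identified in security log review",
)

# Reference this ARN from an IPSetReferenceStatement in a web ACL block rule.
print(response["Summary"]["ARN"])
```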
Reference:
https://aws.amazon.com/blogs/aws/aws-web-application-firewall-waf-for-application-load-balancers/
https://docs.aws.amazon.com/waf/latest/developerguide/classic-web-acl-ip-conditions.html

A solutions architect is designing an application for a two-step order process. The first step is synchronous and must return to the user with little latency. The second step takes longer, so it will be implemented in a separate component. Orders must be processed exactly once and in the order in which they are received.
How should the solutions architect integrate these components?

  • A. Use Amazon SQS FIFO queues.
  • B. Use an AWS Lambda function along with Amazon SQS standard queues.
  • C. Create an SNS topic and subscribe an Amazon SQS FIFO queue to that topic.
  • D. Create an SNS topic and subscribe an Amazon SQS Standard queue to that topic.


Answer : C

Reference:
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/FIFO-queues.html
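The listed answer pairs an SNS topic with an SQS FIFO queue; to preserve ordering end to end the topic itself must be a FIFO topic. A hedged boto3 sketch with example names follows (the queue access policy that lets SNS deliver messages is omitted for brevity):

```python
import json

import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")

# FIFO topic and queue names must end in ".fifo"; names here are examples.
topic = sns.create_topic(
    Name="orders.fifo",
    Attributes={"FifoTopic": "true", "ContentBasedDeduplication": "true"},
)
queue_url = sqs.create_queue(
    QueueName="orders.fifo",
    Attributes={"FifoQueue": "true"},
)["QueueUrl"]
queue_arn = sqs.get_queue_attributes(
    QueueUrl=queue_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Wire the queue to the topic; the slow second step consumes from the queue.
sns.subscribe(TopicArn=topic["TopicArn"], Protocol="sqs", Endpoint=queue_arn)

# The synchronous front end publishes and returns immediately;
# MessageGroupId preserves the order in which orders are received.
sns.publish(
    TopicArn=topic["TopicArn"],
    Message=json.dumps({"orderId": "1001"}),
    MessageGroupId="orders",
)
```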

A web application is deployed in the AWS Cloud. It consists of a two-tier architecture that includes a web layer and a database layer. The web server is vulnerable to cross-site scripting (XSS) attacks.
What should a solutions architect do to remediate the vulnerability?

  • A. Create a Classic Load Balancer. Put the web layer behind the load balancer and enable AWS WAF.
  • B. Create a Network Load Balancer. Put the web layer behind the load balancer and enable AWS WAF.
  • C. Create an Application Load Balancer. Put the web layer behind the load balancer and enable AWS WAF.
  • D. Create an Application Load Balancer. Put the web layer behind the load balancer and use AWS Shield Standard.


Answer : C

Explanation:
Working with cross-site scripting match conditions
Attackers sometimes insert scripts into web requests in an effort to exploit vulnerabilities in web applications. You can create one or more cross-site scripting match conditions to identify the parts of web requests, such as the URI or the query string, that you want AWS WAF Classic to inspect for possible malicious scripts. Later in the process, when you create a web ACL, you specify whether to allow or block requests that appear to contain malicious scripts.

Web Application Firewall -
You can now use AWS WAF to protect your web applications on your Application Load Balancers. AWS WAF is a web application firewall that helps protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources.
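As a sketch of option C using the current WAFv2 API (the explanation above describes WAF Classic): create a REGIONAL web ACL whose managed common rule set includes cross-site scripting protections, then associate it with the ALB. All names and ARNs below are placeholders.

```python
import boto3

# For an ALB the scope is REGIONAL, in the ALB's own Region.
wafv2 = boto3.client("wafv2", region_name="us-east-1")

acl = wafv2.create_web_acl(
    Name="web-layer-acl",
    Scope="REGIONAL",
    DefaultAction={"Allow": {}},
    Rules=[
        {
            # AWSManagedRulesCommonRuleSet includes XSS match rules.
            "Name": "common-rules",
            "Priority": 0,
            "Statement": {
                "ManagedRuleGroupStatement": {
                    "VendorName": "AWS",
                    "Name": "AWSManagedRulesCommonRuleSet",
                }
            },
            "OverrideAction": {"None": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "CommonRules",
            },
        }
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "WebLayerAcl",
    },
)

# Attach the web ACL to the (hypothetical) Application Load Balancer.
wafv2.associate_web_acl(
    WebACLArn=acl["Summary"]["ARN"],
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:"
    "loadbalancer/app/web-layer/0123456789abcdef",
)
```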
Reference:
https://docs.aws.amazon.com/waf/latest/developerguide/classic-web-acl-xss-conditions.html
https://aws.amazon.com/elasticloadbalancing/features/

A company's website is using an Amazon RDS MySQL Multi-AZ DB instance for its transactional data storage. There are other internal systems that query this DB instance to fetch data for internal batch processing. The RDS DB instance slows down significantly when the internal systems fetch data. This impacts the website's read and write performance, and the users experience slow response times.
Which solution will improve the website's performance?

  • A. Use an RDS PostgreSQL DB instance instead of a MySQL database.
  • B. Use Amazon ElastiCache to cache the query responses for the website.
  • C. Add an additional Availability Zone to the current RDS MySQL Multi-AZ DB instance.
  • D. Add a read replica to the RDS DB instance and configure the internal systems to query the read replica.


Answer : D

Explanation:

Amazon RDS Read Replicas -

Enhanced performance -
You can reduce the load on your source DB instance by routing read queries from your applications to the read replica. Read replicas allow you to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads. Because read replicas can be promoted to master status, they are useful as part of a sharding implementation.
To further maximize read performance, Amazon RDS for MySQL allows you to add table indexes directly to Read Replicas, without those indexes being present on the master.
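Creating the replica for option D is a single API call; a boto3 sketch with hypothetical instance identifiers:

```python
import boto3

rds = boto3.client("rds")

# Create a read replica of the (hypothetical) primary, then point the
# internal batch systems at the replica's endpoint instead of the primary.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="mydb-batch-replica",
    SourceDBInstanceIdentifier="mydb",
)

# The replica's endpoint becomes available once the instance is ready.
rds.get_waiter("db_instance_available").wait(
    DBInstanceIdentifier="mydb-batch-replica"
)
```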
Reference:
https://aws.amazon.com/rds/features/read-replicas

An application runs on Amazon EC2 instances across multiple Availability Zones. The instances run in an Amazon EC2 Auto Scaling group behind an Application Load Balancer. The application performs best when the CPU utilization of the EC2 instances is at or near 40%.
What should a solutions architect do to maintain the desired performance across all instances in the group?

  • A. Use a simple scaling policy to dynamically scale the Auto Scaling group.
  • B. Use a target tracking policy to dynamically scale the Auto Scaling group.
  • C. Use an AWS Lambda function to update the desired Auto Scaling group capacity.
  • D. Use scheduled scaling actions to scale up and scale down the Auto Scaling group.


Answer : B

Explanation:
"With target tracking scaling policies, you select a scaling metric and set a target value. Amazon EC2 Auto Scaling creates and manages the CloudWatch alarms that trigger the scaling policy and calculates the scaling adjustment based on the metric and the target value. The scaling policy adds or removes capacity as required to keep the metric at, or close to, the specified target value. In addition to keeping the metric close to the target value, a target tracking scaling policy also adjusts to changes in the metric due to a changing load pattern. For example, you can use target tracking scaling to: Configure a target tracking scaling policy to keep the average aggregate CPU utilization of your Auto Scaling group at 40 percent. Configure a target tracking scaling policy to keep the request count per target of your Application Load Balancer target group at 1000 for your Auto Scaling group."
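A minimal boto3 sketch of such a policy, assuming an example Auto Scaling group name:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Keep average CPU across the group at 40%; the group name is an example.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="cpu-40-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 40.0,
    },
)
```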
Reference:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scaling-target-tracking.html

A company runs an internal browser-based application. The application runs on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. The Auto Scaling group scales up to 20 instances during work hours, but scales down to 2 instances overnight. Staff are complaining that the application is very slow when the day begins, although it runs well by mid-morning.
How should the scaling be changed to address the staff complaints and keep costs to a minimum?

  • A. Implement a scheduled action that sets the desired capacity to 20 shortly before the office opens.
  • B. Implement a step scaling action triggered at a lower CPU threshold, and decrease the cooldown period.
  • C. Implement a target tracking action triggered at a lower CPU threshold, and decrease the cooldown period.
  • D. Implement a scheduled action that sets the minimum and maximum capacity to 20 shortly before the office opens.


Answer : A

Reference:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scaling-simple-step.html
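A scheduled action per option A could look like the following boto3 sketch; the group name and the recurrence schedule (cron syntax, evaluated in UTC) are assumptions:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Raise desired capacity to 20 at 07:30 UTC on weekdays, shortly before
# the office opens; min/max are left alone so the group can still scale
# down overnight and keep costs low.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="internal-app-asg",
    ScheduledActionName="pre-warm-for-office-hours",
    Recurrence="30 7 * * MON-FRI",
    DesiredCapacity=20,
)
```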

A financial services company has a web application that serves users in the United States and Europe. The application consists of a database tier and a web server tier. The database tier consists of a MySQL database hosted in us-east-1. Amazon Route 53 geoproximity routing is used to direct traffic to instances in the closest Region. A performance review of the system reveals that European users are not receiving the same level of query performance as those in the United States.
Which changes should be made to the database tier to improve performance?

  • A. Migrate the database to Amazon RDS for MySQL. Configure Multi-AZ in one of the European Regions.
  • B. Migrate the database to Amazon DynamoDB. Use DynamoDB global tables to enable replication to additional Regions.
  • C. Deploy MySQL instances in each Region. Deploy an Application Load Balancer in front of MySQL to reduce the load on the primary instance.
  • D. Migrate the database to an Amazon Aurora global database in MySQL compatibility mode. Configure read replicas in one of the European Regions.


Answer : D
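For illustration, a heavily simplified boto3 sketch of option D: create a global cluster with its primary in us-east-1 and attach a secondary cluster in a European Region for low-latency local reads. Identifiers and credentials are placeholders, and the per-cluster DB instances that actually serve queries are omitted.

```python
import boto3

rds_us = boto3.client("rds", region_name="us-east-1")
rds_eu = boto3.client("rds", region_name="eu-west-1")

# Global database shell, then the primary cluster joined to it.
rds_us.create_global_cluster(
    GlobalClusterIdentifier="webapp-global",
    Engine="aurora-mysql",
)
rds_us.create_db_cluster(
    DBClusterIdentifier="webapp-us",
    Engine="aurora-mysql",
    GlobalClusterIdentifier="webapp-global",
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",  # placeholder credential
)

# Secondary clusters join the global cluster without master credentials;
# Aurora replicates storage to them with typically sub-second lag.
rds_eu.create_db_cluster(
    DBClusterIdentifier="webapp-eu",
    Engine="aurora-mysql",
    GlobalClusterIdentifier="webapp-global",
)
```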

A company hosts a static website on-premises and wants to migrate the website to AWS. The website should load as quickly as possible for users around the world. The company also wants the most cost-effective solution.
What should a solutions architect do to accomplish this?

  • A. Copy the website content to an Amazon S3 bucket. Configure the bucket to serve static webpage content. Replicate the S3 bucket to multiple AWS Regions.
  • B. Copy the website content to an Amazon S3 bucket. Configure the bucket to serve static webpage content. Configure Amazon CloudFront with the S3 bucket as the origin.
  • C. Copy the website content to an Amazon EBS-backed Amazon EC2 instance running Apache HTTP Server. Configure Amazon Route 53 geolocation routing policies to select the closest origin.
  • D. Copy the website content to multiple Amazon EBS-backed Amazon EC2 instances running Apache HTTP Server in multiple AWS Regions. Configure Amazon CloudFront geolocation routing policies to select the closest origin.


Answer : B

Explanation:
What Is Amazon CloudFront?
Amazon CloudFront is a web service that speeds up distribution of your static and dynamic web content, such as .html, .css, .js, and image files, to your users.
CloudFront delivers your content through a worldwide network of data centers called edge locations. When a user requests content that you're serving with CloudFront, the user is routed to the edge location that provides the lowest latency (time delay), so that content is delivered with the best possible performance.
Using Amazon S3 Buckets for Your Origin
When you use Amazon S3 as an origin for your distribution, you place any objects that you want CloudFront to deliver in an Amazon S3 bucket. You can use any method that is supported by Amazon S3 to get your objects into Amazon S3, for example, the Amazon S3 console or API, or a third-party tool. You can create a hierarchy in your bucket to store the objects, just as you would with any other Amazon S3 bucket.
Using an existing Amazon S3 bucket as your CloudFront origin server doesn't change the bucket in any way; you can still use it as you normally would to store and access Amazon S3 objects at the standard Amazon S3 price. You incur regular Amazon S3 charges for storing the objects in the bucket.
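A minimal boto3 sketch of option B follows. The bucket name is hypothetical, and the cache policy ID is assumed to be AWS's managed CachingOptimized policy.

```python
import time

import boto3

cloudfront = boto3.client("cloudfront")

# Minimal distribution fronting a (hypothetical) S3 bucket of static content.
cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": str(time.time()),  # any unique string
        "Comment": "Static website",
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [
                {
                    "Id": "s3-origin",
                    "DomainName": "example-website.s3.amazonaws.com",
                    "S3OriginConfig": {"OriginAccessIdentity": ""},
                }
            ],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "s3-origin",
            "ViewerProtocolPolicy": "redirect-to-https",
            # Assumed: AWS managed "CachingOptimized" cache policy.
            "CachePolicyId": "658327ea-f89d-4fab-a63d-7e88639e58f6",
        },
    }
)
```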
Reference:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Introduction.html
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/DownloadDistS3AndCustomOrigins.html

A solutions architect is designing storage for a high performance computing (HPC) environment based on Amazon Linux. The workload stores and processes a large amount of engineering drawings that require shared storage and heavy computing.
Which storage option would be the optimal solution?

  • A. Amazon Elastic File System (Amazon EFS)
  • B. Amazon FSx for Lustre
  • C. Amazon EC2 instance store
  • D. Amazon Elastic Block Store (Amazon EBS) Provisioned IOPS SSD (io1)


Answer : B

Explanation:

Amazon FSx for Lustre -
Amazon FSx for Lustre is a new, fully managed service provided by AWS based on the Lustre file system. Amazon FSx for Lustre provides a high-performance file system optimized for fast processing of workloads such as machine learning, high performance computing (HPC), video processing, financial modeling, and electronic design automation (EDA).
FSx for Lustre allows customers to create a Lustre file system on demand and associate it with an Amazon S3 bucket. As part of the file system creation, Lustre reads the objects in the bucket and adds them to the file system metadata. Any Lustre client in your VPC is then able to access the data, which gets cached on the high-speed Lustre file system. This is ideal for HPC workloads, because you can get the speed of an optimized Lustre file system without having to manage the complexity of deploying, optimizing, and managing the Lustre cluster.
Additionally, having the file system work natively with Amazon S3 means you can shut down the Lustre file system when you don't need it but still access objects in Amazon S3 via other AWS services. FSx for Lustre also allows you to write the output of your HPC job back to Amazon S3.
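A boto3 sketch of creating an S3-linked scratch file system; the bucket, subnet, and sizing values are placeholders:

```python
import boto3

fsx = boto3.client("fsx")

# Scratch Lustre file system linked to an S3 bucket of engineering drawings.
fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=1200,  # GiB; 1200 is the minimum for Lustre
    SubnetIds=["subnet-0123456789abcdef0"],
    LustreConfiguration={
        "DeploymentType": "SCRATCH_2",
        # Lazy-load objects from S3 and export job output back to it.
        "ImportPath": "s3://example-engineering-drawings",
        "ExportPath": "s3://example-engineering-drawings/results",
    },
)
```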
Reference:
https://d1.awsstatic.com/whitepapers/AWS%20Partner%20Network_HPC%20Storage%20Options_2019_FINAL.pdf
(p.8)

A company is performing an AWS Well-Architected Framework review of an existing workload deployed on AWS. The review identified a public-facing website running on the same Amazon EC2 instance as a Microsoft Active Directory domain controller that was installed recently to support other AWS services. A solutions architect needs to recommend a new design that would improve the security of the architecture and minimize the administrative demand on IT staff.
What should the solutions architect recommend?

  • A. Use AWS Directory Service to create a managed Active Directory. Uninstall Active Directory on the current EC2 instance.
  • B. Create another EC2 instance in the same subnet and reinstall Active Directory on it. Uninstall Active Directory.
  • C. Use AWS Directory Service to create an Active Directory connector. Proxy Active Directory requests to the Active Directory domain controller running on the current EC2 instance.
  • D. Enable AWS Single Sign-On (AWS SSO) with Security Assertion Markup Language (SAML) 2.0 federation with the current Active Directory controller. Modify the EC2 instance's security group to deny public access to Active Directory.


Answer : A

Explanation:

AWS Managed Microsoft AD -
AWS Directory Service lets you run Microsoft Active Directory (AD) as a managed service. AWS Directory Service for Microsoft Active Directory, also referred to as AWS Managed Microsoft AD, is powered by Windows Server 2012 R2. When you select and launch this directory type, it is created as a highly available pair of domain controllers connected to your virtual private cloud (VPC). The domain controllers run in different Availability Zones in a region of your choice. Host monitoring and recovery, data replication, snapshots, and software updates are automatically configured and managed for you.
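Option A boils down to one Directory Service call plus decommissioning the self-managed domain controller. A sketch with placeholder identifiers:

```python
import boto3

ds = boto3.client("ds")

# Managed Microsoft AD requires two subnets in different Availability
# Zones; the domain name, password, and IDs below are examples.
ds.create_microsoft_ad(
    Name="corp.example.com",
    Password="REPLACE_ME",  # placeholder admin password
    Edition="Standard",
    VpcSettings={
        "VpcId": "vpc-0123456789abcdef0",
        "SubnetIds": ["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
    },
)
```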
Reference:
https://docs.aws.amazon.com/directoryservice/latest/admin-guide/directory_microsoft_ad.html

A company hosts a static website within an Amazon S3 bucket. A solutions architect needs to ensure that data can be recovered in case of accidental deletion.
Which action will accomplish this?

  • A. Enable Amazon S3 versioning.
  • B. Enable Amazon S3 Intelligent-Tiering.
  • C. Enable an Amazon S3 lifecycle policy.
  • D. Enable Amazon S3 cross-Region replication.


Answer : A

Explanation:
Data can be recovered if versioning is enabled, and versioning also provides extra protection such as MFA delete. MFA delete works only through the CLI or API, not in the AWS Management Console. Also, you cannot perform versioned DELETE actions with MFA using IAM user credentials; you must use your root AWS account.

Object Versioning -
You can use S3 Versioning to keep multiple versions of an object in one bucket; for example, you can store my-image.jpg (version 111111) and my-image.jpg (version 222222) in a single bucket. S3 Versioning protects you from the consequences of unintended overwrites and deletions. You can also use it to archive objects so that you have access to previous versions.
You must explicitly enable S3 Versioning on your bucket. By default, S3 Versioning is disabled. Regardless of whether you have enabled Versioning, each object in your bucket has a version ID. If you have not enabled Versioning, Amazon S3 sets the value of the version ID to null. If S3 Versioning is enabled, Amazon S3 assigns a version ID value for the object. This value distinguishes it from other versions of the same key.
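Enabling versioning is a single bucket-level call; the bucket name below is an example:

```python
import boto3

s3 = boto3.client("s3")

# Turn on versioning for the (hypothetical) website bucket.
s3.put_bucket_versioning(
    Bucket="example-static-site",
    VersioningConfiguration={"Status": "Enabled"},
)

# From now on, deleting an object only inserts a delete marker; the
# previous version remains and can be restored by removing the marker.
```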
Reference:
https://books.google.com.sg/books?id=wv45DQAAQBAJ&pg=PA39&lpg=PA39&dq=hosts+a+static+website+within+an+Amazon+S3+bucket.+A+solutions+architect+needs+to+ensure+that+data+can+be+recovered+in+case+of+accidental+deletion&source=bl&ots=0NolP5igY5&sig=ACfU3U3opL9Jha6jM2EI8x7EcjK4rigQHQ&hl=en&sa=X&ved=2ahUKEwiS9e3yy7vpAhVx73MBHZNoDnQQ6AEwAHoECBQQAQ#v=onepage&q=hosts%20a%20static%20website%20within%20an%20Amazon%20S3%20bucket.%20A%20solutions%20architect%20needs%20to%20ensure%20that%20data%20can%20be%20recovered%20in%20case%20of%20accidental%20deletion&f=false
https://aws.amazon.com/blogs/security/securing-access-to-aws-using-mfa-part-3/
https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectVersioning.html
