AWS DevOps Engineer - Professional (DOP-C01) v1.0


A company recently launched an application that is more popular than expected. The company wants to ensure the application can scale to meet increasing demands and provide reliability using multiple Availability Zones (AZs). The application runs on a fleet of Amazon EC2 instances behind an Application Load
Balancer (ALB). A DevOps engineer has created an Auto Scaling group across multiple AZs for the application. Instances launched in the newly added AZs are not receiving any traffic for the application.
What is likely causing this issue?

  • A. Auto Scaling groups can create new instances in a single AZ only.
  • B. The EC2 instances have not been manually associated to the ALB.
  • C. The ALB should be replaced with a Network Load Balancer (NLB).
  • D. The new AZ has not been added to the ALB.


Answer : D
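
The fix maps to a single API call: the ALB only sends traffic to an Availability Zone whose subnet has been enabled on the load balancer. A minimal boto3 sketch, assuming a placeholder load balancer ARN and placeholder subnet IDs:

# Hypothetical sketch: enable an additional AZ on an existing ALB by
# registering a subnet from that AZ (the ARN and subnet IDs are placeholders).
import boto3

elbv2 = boto3.client("elbv2")

elbv2.set_subnets(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/web/abc123",
    Subnets=[
        "subnet-0aaa1111",  # existing AZ already serving traffic
        "subnet-0bbb2222",  # newly added AZ that the Auto Scaling group launches into
    ],
)

Note that set_subnets replaces the full subnet list, so every AZ that should continue serving traffic must be included in the call.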

A DevOps Engineer manages a web application that runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The instances run in an EC2
Auto Scaling group across multiple Availability Zones. The engineer needs to implement a deployment strategy that:
✑ Launches a second fleet of instances with the same capacity as the original fleet.
✑ Maintains the original fleet unchanged while the second fleet is launched.
✑ Transitions traffic to the second fleet when the second fleet is fully deployed.
✑ Terminates the original fleet automatically 1 hour after transition.
Which solution will satisfy these requirements?

  • A. Use an AWS CloudFormation template with a retention policy for the ALB set to 1 hour. Update the Amazon Route 53 record to reflect the new ALB.
  • B. Use two AWS Elastic Beanstalk environments to perform a blue/green deployment from the original environment to the new one. Create an application version lifecycle policy to terminate the original environment in 1 hour.
  • C. Use AWS CodeDeploy with a deployment group configured with a blue/green deployment configuration. Select the option Terminate the original instances in the deployment group with a waiting period of 1 hour.
  • D. Use AWS Elastic Beanstalk with the configuration set to Immutable. Create an .ebextension using the Resources key that sets the deletion policy of the ALB to 1 hour, and deploy the application.


Answer : C
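
For illustration, the 1-hour termination window in option C corresponds to the terminationWaitTimeInMinutes setting on the deployment group. A minimal boto3 sketch, with the application name, deployment group, and fleet settings as placeholders:

# Hypothetical sketch: configure a CodeDeploy blue/green deployment group that
# terminates the original (blue) fleet 60 minutes after traffic shifts.
import boto3

codedeploy = boto3.client("codedeploy")

codedeploy.update_deployment_group(
    applicationName="web-service",
    currentDeploymentGroupName="production",
    deploymentStyle={
        "deploymentType": "BLUE_GREEN",
        "deploymentOption": "WITH_TRAFFIC_CONTROL",
    },
    loadBalancerInfo={"targetGroupInfoList": [{"name": "web-service-tg"}]},
    blueGreenDeploymentConfiguration={
        "terminateBlueInstancesOnDeploymentSuccess": {
            "action": "TERMINATE",
            "terminationWaitTimeInMinutes": 60,  # keep the original fleet for 1 hour
        },
        "deploymentReadyOption": {"actionOnTimeout": "CONTINUE_DEPLOYMENT"},
        "greenFleetProvisioningOption": {"action": "COPY_AUTO_SCALING_GROUP"},
    },
)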

A healthcare services company is concerned about the growing costs of software licensing for an application for monitoring patient wellness. The company wants to create an audit process to ensure that the application is running exclusively on Amazon EC2 Dedicated Hosts. A DevOps Engineer must create a workflow to audit the application to ensure compliance.
What steps should the Engineer take to meet this requirement with the LEAST administrative overhead?

  • A. Use AWS Systems Manager Configuration Compliance. Use calls to the put-compliance-items API action to scan and build a database of noncompliant EC2 instances based on their host placement configuration. Use an Amazon DynamoDB table to store these instance IDs for fast access. Generate a report through Systems Manager by calling the list-compliance-summaries API action.
  • B. Use custom Java code running on an EC2 instance. Set up EC2 Auto Scaling for the instance depending on the number of instances to be checked. Send the list of noncompliant EC2 instance IDs to an Amazon SQS queue. Set up another worker instance to process instance IDs from the SQS queue and write them to Amazon DynamoDB. Use an AWS Lambda function to terminate noncompliant instance IDs obtained from the queue, and send them to an Amazon SNS email topic for distribution.
  • C. Use AWS Config. Identify all EC2 instances to be audited by enabling Config Recording on all Amazon EC2 resources for the region. Create a custom AWS Config rule that triggers an AWS Lambda function by using the "config-rule-change-triggered" blueprint. Modify the Lambda evaluateCompliance() function to verify host placement and return a NON_COMPLIANT result if the instance is not running on an EC2 Dedicated Host. Use the AWS Config report to address noncompliant instances.
  • D. Use AWS CloudTrail. Identify all EC2 instances to be audited by analyzing all calls to the EC2 RunCommand API action. Invoke an AWS Lambda function that analyzes the host placement of the instance. Store the EC2 instance ID of noncompliant resources in an Amazon RDS MySQL DB instance. Generate a report by querying the RDS instance and exporting the query results to a CSV text file.


Answer : C
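
A minimal sketch of the evaluation logic described in option C, assuming the standard config-rule-change-triggered event shape: an instance is compliant only when its placement tenancy is "host" (a Dedicated Host).

# Hypothetical sketch of the Lambda behind the custom AWS Config rule.
import json
import boto3

config = boto3.client("config")

def lambda_handler(event, context):
    invoking_event = json.loads(event["invokingEvent"])
    item = invoking_event["configurationItem"]
    tenancy = item.get("configuration", {}).get("placement", {}).get("tenancy")

    compliance = "COMPLIANT" if tenancy == "host" else "NON_COMPLIANT"

    # Report the result back to AWS Config so it appears in the compliance report.
    config.put_evaluations(
        Evaluations=[{
            "ComplianceResourceType": item["resourceType"],
            "ComplianceResourceId": item["resourceId"],
            "ComplianceType": compliance,
            "OrderingTimestamp": item["configurationItemCaptureTime"],
        }],
        ResultToken=event["resultToken"],
    )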

A company has 100 GB of log data in an Amazon S3 bucket stored in .csv format. SQL developers want to query this data and generate graphs to visualize it.
They also need an efficient, automated way to store metadata from the .csv file.
Which combination of steps should be taken to meet these requirements with the LEAST amount of effort? (Choose three.)

  • A. Filter the data through AWS X-Ray to visualize the data.
  • B. Filter the data through Amazon QuickSight to visualize the data.
  • C. Query the data with Amazon Athena.
  • D. Query the data with Amazon Redshift.
  • E. Use AWS Glue as the persistent metadata store.
  • F. Use Amazon S3 as the persistent metadata store.


Answer : BCE
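
As a sketch of the low-effort path: an AWS Glue crawler builds and maintains the persistent metadata catalog automatically, Amazon Athena queries the cataloged table, and Amazon QuickSight visualizes the Athena results. The crawler, database, role, bucket, and table names below are placeholders:

# Hypothetical sketch: catalog the .csv logs with a Glue crawler, then query with Athena.
import boto3

glue = boto3.client("glue")
athena = boto3.client("athena")

glue.create_crawler(
    Name="log-csv-crawler",
    Role="arn:aws:iam::111122223333:role/GlueCrawlerRole",
    DatabaseName="logs_db",
    Targets={"S3Targets": [{"Path": "s3://example-log-bucket/csv/"}]},
)
glue.start_crawler(Name="log-csv-crawler")

# Once the crawler has populated the Data Catalog, SQL developers can query it.
# The table name the crawler creates depends on the S3 path; "csv" is assumed here.
athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) FROM logs_db.csv GROUP BY status",
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)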

A DevOps Engineer has several legacy applications that all generate different log formats. The Engineer must standardize the formats before writing them to
Amazon S3 for querying and analysis.
How can this requirement be met at the LOWEST cost?

  • A. Have the application send its logs to an Amazon EMR cluster and normalize the logs before sending them to Amazon S3
  • B. Have the application send its logs to Amazon QuickSight, then use the Amazon QuickSight SPICE engine to normalize the logs. Do the analysis directly from Amazon QuickSight
  • C. Keep the logs in Amazon S3 and use Amazon Redshift Spectrum to normalize the logs in place
  • D. Use Amazon Kinesis Agent on each server to upload the logs and have Amazon Kinesis Data Firehose use an AWS Lambda function to normalize the logs before writing them to Amazon S3


Answer : D
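
A minimal sketch of the Kinesis Data Firehose transformation Lambda from option D; the parse_legacy_line() helper is a placeholder for the per-application normalization logic:

# Hypothetical sketch: decode each Firehose record, normalize it into a common
# JSON shape, and return it so Firehose writes the standardized form to S3.
import base64
import json

def parse_legacy_line(line):
    # Placeholder: map whatever legacy format arrives into a common structure.
    return {"raw": line.strip()}

def lambda_handler(event, context):
    output = []
    for record in event["records"]:
        line = base64.b64decode(record["data"]).decode("utf-8")
        normalized = json.dumps(parse_legacy_line(line)) + "\n"
        output.append({
            "recordId": record["recordId"],
            "result": "Ok",
            "data": base64.b64encode(normalized.encode("utf-8")).decode("utf-8"),
        })
    return {"records": output}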

A company needs to implement a robust CI/CD pipeline to automate the deployment of an application in AWS. The pipeline must support continuous integration, continuous delivery, and automatic rollback upon deployment failure. The entire CI/CD pipeline must be capable of being re-provisioned in alternate AWS accounts or Regions within minutes. A DevOps engineer has already created an AWS CodeCommit repository to store the source code.
Which combination of actions should be taken when building this pipeline to meet these requirements? (Choose three.)

  • A. Configure an AWS CodePipeline pipeline with a build stage using AWS CodeBuild.
  • B. Copy the build artifact from CodeCommit to Amazon S3.
  • C. Create an Auto Scaling group of Amazon EC2 instances behind an Application Load Balancer (ALB) and set the ALB as the deployment target in AWS CodePipeline.
  • D. Create an AWS Elastic Beanstalk environment as the deployment target in AWS CodePipeline.
  • E. Implement an Amazon SQS queue to decouple the pipeline components.
  • F. Provision all resources using AWS CloudFormation.


Answer : ADF
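
Because every pipeline resource is defined in CloudFormation (option F), re-provisioning the pipeline in another account or Region reduces to a single stack creation. A minimal boto3 sketch with a placeholder template URL and stack name:

# Hypothetical sketch: stand up the same CI/CD stack in another Region.
import boto3

cloudformation = boto3.client("cloudformation", region_name="eu-west-1")

cloudformation.create_stack(
    StackName="cicd-pipeline",
    TemplateURL="https://example-templates.s3.amazonaws.com/pipeline.yaml",
    Capabilities=["CAPABILITY_NAMED_IAM"],  # the template creates IAM roles for CodePipeline/CodeBuild
)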

A company is building a solution for storing files containing Personally Identifiable Information (PII) on AWS.
Requirements state:
✑ All data must be encrypted at rest and in transit.
✑ All data must be replicated in at least two locations that are at least 500 miles (805 kilometers) apart.
Which solution meets these requirements?

  • A. Create primary and secondary Amazon S3 buckets in two separate Availability Zones that are at least 500 miles (805 kilometers) apart. Use a bucket policy to enforce access to the buckets only through HTTPS. Use a bucket policy to enforce Amazon S3 SSE-C on all objects uploaded to the bucket. Configure cross-region replication between the two buckets.
  • B. Create primary and secondary Amazon S3 buckets in two separate AWS Regions that are at least 500 miles (805 kilometers) apart. Use a bucket policy to enforce access to the buckets only through HTTPS. Use a bucket policy to enforce S3-Managed Keys (SSE-S3) on all objects uploaded to the bucket. Configure cross-region replication between the two buckets.
  • C. Create primary and secondary Amazon S3 buckets in two separate AWS Regions that are at least 500 miles (805 kilometers) apart. Use an IAM role to enforce access to the buckets only through HTTPS. Use a bucket policy to enforce Amazon S3-Managed Keys (SSE-S3) on all objects uploaded to the bucket. Configure cross-region replication between the two buckets.
  • D. Create primary and secondary Amazon S3 buckets in two separate Availability Zones that are at least 500 miles (805 kilometers) apart. Use a bucket policy to enforce access to the buckets only through HTTPS. Use a bucket policy to enforce AWS KMS encryption on all objects uploaded to the bucket. Configure cross-region replication between the two buckets. Create a KMS Customer Master Key (CMK) in the primary region for encrypting objects.


Answer : B
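
A sketch of the bucket policy implied by option B, assuming a placeholder bucket name: one statement denies any request that is not made over HTTPS, and another denies uploads that do not specify SSE-S3. The same policy would be applied to the replica bucket in the second Region:

# Hypothetical sketch: enforce encryption in transit and SSE-S3 at rest.
import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": ["arn:aws:s3:::example-pii-primary", "arn:aws:s3:::example-pii-primary/*"],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        },
        {
            "Sid": "DenyUnencryptedUploads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::example-pii-primary/*",
            "Condition": {"StringNotEquals": {"s3:x-amz-server-side-encryption": "AES256"}},
        },
    ],
}

boto3.client("s3").put_bucket_policy(Bucket="example-pii-primary", Policy=json.dumps(policy))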

A company is using an AWS CodeBuild project to build and package an application. The packages are copied to a shared Amazon S3 bucket before being deployed across multiple AWS accounts.
The buildspec.yml file contains the following:

The DevOps Engineer has noticed that anybody with an AWS account is able to download the artifacts.
What steps should the DevOps Engineer take to stop this?

  • A. Modify the post_build command to use --acl public-read and configure a bucket policy that grants read access to the relevant AWS accounts only.
  • B. Configure a default ACL for the S3 bucket that defines the set of authenticated users as the relevant AWS accounts only and grants read-only access.
  • C. Create an S3 bucket policy that grants read access to the relevant AWS accounts and denies read access to the principal "*".
  • D. Modify the post_build command to remove --acl authenticated-read and configure a bucket policy that allows read access to the relevant AWS accounts only.


Answer : D
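
A sketch of the bucket-policy half of option D, assuming placeholder account IDs, bucket name, and artifact prefix; dropping the --acl flag from the post_build command leaves uploaded objects with the default bucket-owner-only access:

# Hypothetical sketch: allow artifact reads from the deployment accounts only.
import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowArtifactReadFromDeploymentAccounts",
        "Effect": "Allow",
        "Principal": {"AWS": [
            "arn:aws:iam::111122223333:root",
            "arn:aws:iam::444455556666:root",
        ]},
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example-artifact-bucket/artifacts/*",
    }],
}

boto3.client("s3").put_bucket_policy(Bucket="example-artifact-bucket", Policy=json.dumps(policy))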

A DevOps engineer needs to grant several external contractors access to a legacy application that runs on an Amazon Linux Amazon EC2 instance. The application server is available only in a private subnet. The contractors are not authorized for VPN access.
What should the DevOps engineer do to grant the contractors access to the application server?

  • A. Create an IAM user and SSH keys for each contractor. Add the public SSH key to the application server's SSH authorized_keys file. Instruct the contractors to install the AWS CLI and AWS Systems Manager Session Manager plugin, update their AWS credentials files with their private keys, and use the aws ssm start-session command to gain access to the target application server instance ID.
  • B. Ask each contractor to securely send their SSH public key. Add this public key to the application server's SSH authorized_keys file. Instruct the contractors to use their private key to connect to the application server through SSH.
  • C. Ask each contractor to securely send their SSH public key. Use EC2 key pairs to import their key. Update the application server's SSH authorized_keys file. Instruct the contractors to use their private key to connect to the application server through SSH.
  • D. Create an IAM user for each contractor with programmatic access. Add each user to an IAM group that has a policy that allows the ssm:StartSession action. Instruct the contractors to install the AWS CLI and AWS Systems Manager Session Manager plugin, update their AWS credentials files with their access keys, and use the aws ssm start-session to gain access to the target application server instance ID.


Answer : D
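
A sketch of the IAM setup behind option D, with the group name, account ID, and instance ID as placeholders; in practice the contractors would also need permission to resume and terminate their own sessions:

# Hypothetical sketch: allow contractors to open Session Manager sessions
# against the application server only.
import json
import boto3

iam = boto3.client("iam")

iam.put_group_policy(
    GroupName="contractors",
    PolicyName="AllowSessionManagerToAppServer",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "ssm:StartSession",
            "Resource": "arn:aws:ec2:us-east-1:111122223333:instance/i-0123456789abcdef0",
        }],
    }),
)

Each contractor then configures their access keys and runs aws ssm start-session --target i-0123456789abcdef0.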

A company hosts its staging website using an Amazon EC2 instance backed with Amazon EBS storage. The company wants to recover quickly with minimal data loss in the event of network connectivity issues or power failures on the EC2 instance.
Which solution will meet these requirements?

  • A. Add the instance to an EC2 Auto Scaling group with the minimum, maximum, and desired capacity set to 1.
  • B. Add the instance to an EC2 Auto Scaling group with a lifecycle hook to detach the EBS volume when the EC2 instance shuts down or terminates.
  • C. Create an Amazon CloudWatch alarm for the StatusCheckFailed_System metric and select the EC2 action to recover the instance.
  • D. Create an Amazon CloudWatch alarm for the StatusCheckFailed_Instance metric and select the EC2 action to reboot the instance.


Answer : C

Reference:
https://aws.amazon.com/ru/blogs/aws/ec2-instance-status-metrics/
https://docs.amazonaws.cn/en_us/AmazonCloudWatch/latest/monitoring/UsingAlarmActions.html
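
A minimal boto3 sketch of option C, with the Region, instance ID, and alarm thresholds as placeholders; the recover action is the built-in arn:aws:automate:<region>:ec2:recover alarm action:

# Hypothetical sketch: recover the instance when the system status check fails.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="recover-staging-web",
    Namespace="AWS/EC2",
    MetricName="StatusCheckFailed_System",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=2,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:automate:us-east-1:ec2:recover"],
)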

A company has built a web service that runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The company has deployed the application in us-east-1. Amazon Route 53 provides an external DNS that routes traffic from example.com to the application, created with appropriate health checks.
The company has deployed a second environment for the application in eu-west-1. The company wants traffic to be routed to whichever environment results in the best response time for each user. If there is an outage in one Region, traffic should be directed to the other environment.
Which configuration will achieve these requirements?
  • A.
      ✑ A subdomain us.example.com with weighted routing: the US ALB with weight 2 and the EU ALB with weight 1.
      ✑ Another subdomain eu.example.com with weighted routing: the EU ALB with weight 2 and the US ALB with weight 1.
      ✑ Geolocation routing records for example.com: North America aliased to us.example.com and Europe aliased to eu.example.com.
  • B.
      ✑ A subdomain us.example.com with latency-based routing: the US ALB as the first target and the EU ALB as the second target.
      ✑ Another subdomain eu.example.com with latency-based routing: the EU ALB as the first target and the US ALB as the second target.
      ✑ Failover routing records for example.com aliased to us.example.com as the first target and eu.example.com as the second target.
  • C.
      ✑ A subdomain us.example.com with failover routing: the US ALB as primary and the EU ALB as secondary.
      ✑ Another subdomain eu.example.com with failover routing: the EU ALB as primary and the US ALB as secondary.
      ✑ Latency-based routing records for example.com that are aliased to us.example.com and eu.example.com.
  • D.
      ✑ A subdomain us.example.com with multivalue answer routing: the US ALB first and the EU ALB second.
      ✑ Another subdomain eu.example.com with multivalue answer routing: the EU ALB first and the US ALB second.
      ✑ Failover routing records for example.com that are aliased to us.example.com and eu.example.com.



Answer : C
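
A sketch of the top-level records from option C, assuming placeholder hosted zone IDs: latency-based alias records for example.com point at the two regional subdomains, and the failover records behind us.example.com and eu.example.com would be created the same way with Failover set to PRIMARY or SECONDARY:

# Hypothetical sketch: latency-based alias records for example.com.
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0EXAMPLE",
    ChangeBatch={"Changes": [
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "example.com",
                "Type": "A",
                "SetIdentifier": "us",
                "Region": "us-east-1",
                "AliasTarget": {
                    "HostedZoneId": "Z0EXAMPLE",
                    "DNSName": "us.example.com",
                    "EvaluateTargetHealth": True,
                },
            },
        },
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "example.com",
                "Type": "A",
                "SetIdentifier": "eu",
                "Region": "eu-west-1",
                "AliasTarget": {
                    "HostedZoneId": "Z0EXAMPLE",
                    "DNSName": "eu.example.com",
                    "EvaluateTargetHealth": True,
                },
            },
        },
    ]},
)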

A company has multiple development teams sharing one AWS account. The development teams' manager wants to be able to automatically stop Amazon EC2 instances and receive notifications if resources are idle and not tagged as production resources.
Which solution will meet these requirements?

  • A. Use a scheduled Amazon CloudWatch Events rule to filter for Amazon EC2 instance status checks and identify idle EC2 instances. Use the CloudWatch Events rule to target an AWS Lambda function to stop non-production instances and send notifications.
  • B. Use a scheduled Amazon CloudWatch Events rule to filter AWS Systems Manager events and identify idle EC2 instances and resources. Use the CloudWatch Events rule to target an AWS Lambda function to stop non-production instances and send notifications.
  • C. Use a scheduled Amazon CloudWatch Events rule to target a custom AWS Lambda function that runs AWS Trusted Advisor checks. Create a second CloudWatch Events rule to filter events from Trusted Advisor to trigger a Lambda function to stop idle non-production instances and send notifications.
  • D. Use a scheduled Amazon CloudWatch Events rule to target Amazon Inspector events for idle EC2 instances. Use the CloudWatch Events rule to target the AWS Lambda function to stop non-production instances and send notifications.


Answer : C
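
A sketch of the second Lambda function from option C, heavily hedged: the Trusted Advisor check ID, SNS topic ARN, tag convention, and the position of the instance ID in the check metadata are all placeholders or assumptions, and the Support API itself requires a Business or Enterprise support plan:

# Hypothetical sketch: stop idle, non-production instances flagged by Trusted Advisor.
import boto3

support = boto3.client("support", region_name="us-east-1")
ec2 = boto3.client("ec2")
sns = boto3.client("sns")

def lambda_handler(event, context):
    result = support.describe_trusted_advisor_check_result(checkId="EXAMPLE_CHECK_ID")
    # Assumption: the instance ID sits at index 1 of each flagged resource's metadata.
    idle_ids = [r["metadata"][1] for r in result["result"]["flaggedResources"]]
    if not idle_ids:
        return

    to_stop = []
    for reservation in ec2.describe_instances(InstanceIds=idle_ids)["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
            if tags.get("environment") != "production":
                to_stop.append(instance["InstanceId"])

    if to_stop:
        ec2.stop_instances(InstanceIds=to_stop)
        sns.publish(
            TopicArn="arn:aws:sns:us-east-1:111122223333:idle-instances",
            Message=f"Stopped idle non-production instances: {to_stop}",
        )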

An n-tier application requires a table in an Amazon RDS MySQL DB instance to be dropped and repopulated at each deployment. This process can take several minutes and the web tier cannot come online until the process is complete. Currently, the web tier is configured in an Amazon EC2 Auto Scaling group, with instances being terminated and replaced at each deployment. The MySQL table is populated by running a SQL query through an AWS CodeBuild job.
What should be done to ensure that the web tier does not come online before the database is completely configured?

  • A. Use Amazon Aurora as a drop-in replacement for RDS MySQL. Use snapshots to populate the table with the correct data.
  • B. Modify the launch configuration of the Auto Scaling group to pause user data execution for 600 seconds, allowing the table to be populated.
  • C. Use AWS Step Functions to monitor and maintain the state of data population. Mark the database in service before continuing with the deployment.
  • D. Use an EC2 Auto Scaling lifecycle hook to pause the configuration of the web tier until the table is populated.


Answer : D
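
A minimal sketch of option D, assuming placeholder names and timeouts: a launch lifecycle hook holds new web-tier instances in Pending:Wait, and the deployment completes the hook only after the CodeBuild SQL job has repopulated the table:

# Hypothetical sketch: hold new instances until the database is ready.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_lifecycle_hook(
    AutoScalingGroupName="web-tier",
    LifecycleHookName="wait-for-db-repopulation",
    LifecycleTransition="autoscaling:EC2_INSTANCE_LAUNCHING",
    HeartbeatTimeout=1800,      # give the SQL job up to 30 minutes
    DefaultResult="ABANDON",    # replace the instance if the job never finishes
)

# Called by the deployment once the CodeBuild SQL step succeeds:
def release_web_tier(instance_id):
    autoscaling.complete_lifecycle_hook(
        AutoScalingGroupName="web-tier",
        LifecycleHookName="wait-for-db-repopulation",
        InstanceId=instance_id,
        LifecycleActionResult="CONTINUE",
    )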

A highly regulated company has a policy that DevOps Engineers should not log in to their Amazon EC2 instances except in emergencies. If a DevOps Engineer does log in, the Security team must be notified within 15 minutes of the occurrence.
Which solution will meet these requirements?

  • A. Install the Amazon Inspector agent on each EC2 instance. Subscribe to Amazon CloudWatch Events notifications. Trigger an AWS Lambda function to check if a message is about user logins. If it is, send a notification to the Security team using Amazon SNS.
  • B. Install the Amazon CloudWatch agent on each EC2 instance. Configure the agent to push all logs to Amazon CloudWatch Logs and set up a CloudWatch metric filter that searches for user logins. If a login is found, send a notification to the Security team using Amazon SNS.
  • C. Set up AWS CloudTrail with Amazon CloudWatch Logs. Subscribe CloudWatch Logs to Amazon Kinesis. Attach AWS Lambda to Kinesis to parse and determine if a log contains a user login. If it does, send a notification to the Security team using Amazon SNS.
  • D. Set up a script on each Amazon EC2 instance to push all logs to Amazon S3. Set up an S3 event to trigger an AWS Lambda function, which triggers an Amazon Athena query to run. The Athena query checks for logins and sends the output to the Security team using Amazon SNS.


Answer : B
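
A sketch of the alerting half of option B, assuming the CloudWatch agent ships /var/log/secure to a placeholder log group; the filter pattern, metric namespace, and SNS topic ARN are also placeholders:

# Hypothetical sketch: metric filter for SSH logins plus an alarm that notifies Security.
import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

logs.put_metric_filter(
    logGroupName="/ec2/var/log/secure",
    filterName="ssh-logins",
    filterPattern='"Accepted publickey"',
    metricTransformations=[{
        "metricName": "SSHLoginCount",
        "metricNamespace": "Security",
        "metricValue": "1",
    }],
)

cloudwatch.put_metric_alarm(
    AlarmName="ssh-login-detected",
    Namespace="Security",
    MetricName="SSHLoginCount",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:security-team"],
)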

A DevOps engineer has automated a web service deployment by using AWS CodePipeline with the following steps:
1. An AWS CodeBuild project compiles the deployment artifact and runs unit tests.
2. An AWS CodeDeploy deployment group deploys the web service to Amazon EC2 instances in the staging environment.
3. A CodeDeploy deployment group deploys the web service to EC2 instances in the production environment.
The quality assurance (QA) team requests permission to inspect the build artifact before the deployment to the production environment occurs. The QA team wants to run an internal penetration testing tool to conduct manual tests. The tool will be invoked by a REST API call.
Which combination of actions should the DevOps engineer take to fulfill this request? (Choose two.)

  • A. Insert a manual approval action between the test actions and deployment actions of the pipeline.
  • B. Modify the buildspec.yml file for the compilation stage to require manual approval before completion.
  • C. Update the CodeDeploy deployment groups so that they require manual approval to proceed.
  • D. Update the pipeline to directly call the REST API for the penetration testing tool.
  • E. Update the pipeline to invoke a Lambda function that calls the REST API for the penetration testing tool.


Answer : AE

Reference:
https://docs.aws.amazon.com/codebuild/latest/userguide/sample-codedeploy.html
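
A minimal sketch of the Lambda action from option E, with the penetration-testing tool's endpoint as a placeholder: the function calls the tool's REST API and reports success or failure back to CodePipeline:

# Hypothetical sketch: pipeline-invoked Lambda that triggers the pen-testing tool.
import json
import urllib.request
import boto3

codepipeline = boto3.client("codepipeline")

def lambda_handler(event, context):
    job_id = event["CodePipeline.job"]["id"]
    try:
        request = urllib.request.Request(
            "https://pentest.example.internal/api/v1/scans",  # placeholder endpoint
            data=json.dumps({"target": "staging"}).encode("utf-8"),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(request, timeout=30) as response:
            response.read()
        codepipeline.put_job_success_result(jobId=job_id)
    except Exception as error:
        codepipeline.put_job_failure_result(
            jobId=job_id,
            failureDetails={"type": "JobFailed", "message": str(error)},
        )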
