Validated DOP-C01 exam materials and practice exams for the Amazon Web Services certification, Real Success Guaranteed with Updated DOP-C01 pdf dumps vce Materials. 100% PASS AWS Certified DevOps Engineer - Professional exam Today!

We also have free DOP-C01 dump questions for you:

NEW QUESTION 1
Your company uses AWS to host its resources. They have the following requirements:
1) Record all API calls and transitions
2) Help in understanding what resources are there in the account
3) A facility to allow auditing of credentials and logins
Which services would satisfy the above requirements?

  • A. AWS Config, CloudTrail, IAM Credential Reports
  • B. CloudTrail, IAM Credential Reports, AWS Config
  • C. CloudTrail, AWS Config, IAM Credential Reports
  • D. AWS Config, IAM Credential Reports, CloudTrail

Answer: C

Explanation:
You can use AWS CloudTrail to get a history of AWS API calls and related events for your account. This history includes calls made with the AWS Management Console, AWS Command Line Interface, AWS SDKs, and other AWS services. For more information on CloudTrail, please visit the below URL:
• http://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html
AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. Config continuously monitors and records your AWS resource configurations and allows you to automate the evaluation of recorded configurations against desired configurations. With Config, you can review changes in configurations and relationships between AWS resources, dive into detailed resource configuration histories, and determine your overall compliance against the configurations specified in your internal guidelines. This enables you to simplify compliance auditing, security analysis, change management, and operational troubleshooting. For more information on the config service, please visit the below URL:
• https://aws.amazon.com/config/
You can generate and download a credential report that lists all users in your account and the status of their various credentials, including passwords, access keys, and MFA devices. You can get a credential report from the AWS Management Console, the AWS SDKs and Command Line Tools, or the IAM API. For more information on Credential Reports, please visit the below URL:
• http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_getting-report.html
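As a rough illustration, here is a minimal boto3 sketch of pulling a credential report. It is simplified; production code should poll generate_credential_report until the report state is COMPLETE before downloading.

```python
import csv
import io

import boto3

iam = boto3.client("iam")

# Kick off report generation; AWS builds the report asynchronously.
iam.generate_credential_report()

# Once the report is ready, download it; the content is a CSV document.
report = iam.get_credential_report()
rows = csv.DictReader(io.StringIO(report["Content"].decode("utf-8")))
for row in rows:
    print(row["user"], row["password_enabled"], row["mfa_active"])
```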

NEW QUESTION 2
Which of the following is the default deployment mechanism used by Elastic Beanstalk when the application is created via the console or EB CLI?

  • A. All at Once
  • B. Rolling Deployments
  • C. Rolling with additional batch
  • D. Immutable

Answer: B

Explanation:
The AWS documentation mentions
AWS Elastic Beanstalk provides several options for how deployments are processed, including deployment policies (All at once, Rolling, Rolling with additional batch, and Immutable) and options that let you configure batch size and health check behavior during deployments. By default, your environment uses rolling deployments if you created it with the console or EB CLI, or all-at-once deployments if you created it with a different client (API, SDK, or AWS CLI).
For more information on Elastic Beanstalk deployments, please refer to the below link:
• http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.rolling-version-deploy.html
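As a hedged sketch, the deployment policy can be changed per environment through the aws:elasticbeanstalk:command namespace; the environment name below is hypothetical.

```python
import boto3

eb = boto3.client("elasticbeanstalk")

# Switch the environment's deployment policy away from the default.
# Valid values include AllAtOnce, Rolling, RollingWithAdditionalBatch,
# and Immutable.
eb.update_environment(
    EnvironmentName="my-env",  # hypothetical environment name
    OptionSettings=[
        {
            "Namespace": "aws:elasticbeanstalk:command",
            "OptionName": "DeploymentPolicy",
            "Value": "Immutable",
        }
    ],
)
```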

NEW QUESTION 3
You have an application which consists of EC2 instances in an Auto Scaling group. During a particular time frame every day, there is an increase in traffic to your website, and users complain of poor response times from the application. You have configured your Auto Scaling group to deploy one new EC2 instance when CPU utilization is greater than 60% for 2 consecutive periods of 5 minutes. What is the least cost-effective way to resolve this problem?

  • A. Decrease the consecutive number of collection periods
  • B. Increase the minimum number of instances in the Auto Scaling group
  • C. Decrease the collection period to ten minutes
  • D. Decrease the threshold CPU utilization percentage at which to deploy a new instance

Answer: B

Explanation:
If you increase the minimum number of instances, they will keep running even when the load on the website is low. Hence you are incurring cost even when there is no need.
All of the remaining options are viable ways to increase the number of instances under high load.
For more information on On-demand scaling, please refer to the below link: http://docs.aws.amazon.com/autoscaling/latest/userguide/as-scale-based-on-demand.html
Note: The tricky part is that the question asks for the "least cost-effective way". The design consideration is straightforward, but be careful about how the question is phrased.
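For illustration, option B corresponds to a one-line change like the following boto3 sketch (the group name is hypothetical):

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Raising MinSize keeps extra instances running around the clock, which
# resolves the response-time issue but incurs cost even off-peak -- which
# is why the question flags it as the least cost-effective option.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-asg",  # hypothetical group name
    MinSize=4,
)
```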

NEW QUESTION 4
Your social media marketing application has a component written in Ruby running on AWS Elastic Beanstalk. This application component posts messages to social media sites in support of various marketing campaigns. Your management now requires you to record replies to these social media messages to analyze the effectiveness of the marketing campaign in comparison to past and future efforts. You've already developed a new application component to interface with the social media site APIs in order to read the replies. Which process should you use to record the social media replies in a durable data store that can be accessed at any time for analytics of historical data?

  • A. Deploy the new application component in an Auto Scaling group of Amazon EC2 instances, read the data from the social media sites, store it with Amazon Elastic Block Store, and use AWS Data Pipeline to publish it to Amazon Kinesis for analytics.
  • B. Deploy the new application component as an Elastic Beanstalk application, read the data from the social media sites, store it in DynamoDB, and use Apache Hive with Amazon Elastic MapReduce for analytics.
  • C. Deploy the new application component in an Auto Scaling group of Amazon EC2 instances, read the data from the social media sites, store it in Amazon Glacier, and use AWS Data Pipeline to publish it to Amazon Redshift for analytics.
  • D. Deploy the new application component as an Amazon Elastic Beanstalk application, read the data from the social media site, store it with Amazon Elastic Block Store, and use Amazon Kinesis to stream the data to Amazon CloudWatch for analytics.

Answer: B

Explanation:
The AWS Documentation mentions the below
Amazon DynamoDB is a fast and flexible NoSQL database service for all applications that need consistent, single-digit millisecond latency at any scale. It is a fully managed cloud database and supports both document and key-value store models. Its flexible data model, reliable performance, and automatic scaling of throughput capacity make it a great fit for mobile, web, gaming, ad tech, IoT, and many other applications.
For more information on AWS DynamoDB please see the below link:
• https://aws.amazon.com/dynamodb/

NEW QUESTION 5
You have deployed an Elastic Beanstalk application in a new environment and want to save the current state of your environment in a document. You want to be able to restore your environment to the current state later or possibly create a new environment. You also want to make sure you have a restore point. How can you achieve this?

  • A. Use CloudFormation templates
  • B. Configuration Management Templates
  • C. Saved Configurations
  • D. Saved Templates

Answer: C

Explanation:
You can save your environment's configuration as an object in Amazon S3 that can be applied to other environments during environment creation, or applied to a running environment. Saved configurations are YAML formatted templates that define an environment's platform configuration, tier, configuration option settings, and tags.
For more information on Saved Configurations please refer to the below link:
• http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/environment-configuration-savedconfig.html
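A minimal boto3 sketch of creating a restore point from a running environment might look like this (application and environment identifiers are hypothetical):

```python
import boto3

eb = boto3.client("elasticbeanstalk")

# Snapshot a running environment's configuration as a saved configuration.
# It can later be applied when creating or updating environments.
eb.create_configuration_template(
    ApplicationName="my-app",        # hypothetical application name
    TemplateName="prod-restore-point",
    EnvironmentId="e-abcd1234",      # hypothetical environment ID
    Description="Restore point before the next release",
)
```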

NEW QUESTION 6
If I want CloudFormation stack status updates to show up in a continuous delivery system in as close to real time as possible, how should I achieve this?

  • A. Use a long-poll on the Resources object in your CloudFormation stack and display those state changes in the UI for the system.
  • B. Use a long-poll on the ListStacks API call for your CloudFormation stack and display those state changes in the UI for the system.
  • C. Subscribe your continuous delivery system to an SNS topic that you also tell your CloudFormation stack to publish events into.
  • D. Subscribe your continuous delivery system to an SQS queue that you also tell your CloudFormation stack to publish events into.

Answer: C

Explanation:
You can monitor the progress of a stack update by viewing the stack's events. The console's Events tab displays each major step in the creation and update of the stack, sorted by the time of each event with the latest events on top. The start of the stack update process is marked with an UPDATE_IN_PROGRESS event for the stack. For more information on monitoring your stack, please visit the below URL:
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-monitor-stack.html
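A minimal sketch of wiring stack events to SNS at creation time (the template URL and topic ARN are placeholders):

```python
import boto3

cfn = boto3.client("cloudformation")

# Pass an SNS topic at stack creation; CloudFormation publishes every stack
# event (CREATE_IN_PROGRESS, UPDATE_COMPLETE, ...) to it in near real time.
# The continuous delivery system subscribes to the same topic.
cfn.create_stack(
    StackName="my-stack",
    TemplateURL="https://s3.amazonaws.com/my-bucket/template.json",  # hypothetical
    NotificationARNs=["arn:aws:sns:us-east-1:123456789012:cfn-events"],
)
```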

NEW QUESTION 7
You have launched a CloudFormation template, but are receiving a failure notification after the template was launched. What is the default behavior of CloudFormation in such a case?

  • A. It will rollback all the resources that were created up to the failure point.
  • B. It will keep all the resources that were created up to the failure point.
  • C. It will prompt the user on whether to keep or terminate the already created resources
  • D. It will continue with the creation of the next resource in the stack

Answer: A

Explanation:
The AWS Documentation mentions
AWS CloudFormation ensures all stack resources are created or deleted as appropriate. Because AWS CloudFormation treats the stack resources as a single unit, they must all be created or deleted successfully for the stack to be created or deleted. If a resource cannot be created, AWS CloudFormation rolls the stack back and automatically deletes any resources that were created.
For more information on CloudFormation, please refer to the below link: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacks.html
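For illustration, the default rollback behavior corresponds to OnFailure='ROLLBACK' on create_stack; here is a hedged sketch of overriding it for debugging (template URL is hypothetical):

```python
import boto3

cfn = boto3.client("cloudformation")

# The default OnFailure value is ROLLBACK, which deletes everything created
# up to the failure point. Override it only when you need to inspect the
# partially created resources.
cfn.create_stack(
    StackName="my-stack",
    TemplateURL="https://s3.amazonaws.com/my-bucket/template.json",  # hypothetical
    OnFailure="DO_NOTHING",  # keep resources for debugging instead of rolling back
)
```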

NEW QUESTION 8
Which of the following services can be used in conjunction with CloudWatch Logs? Choose the 3 most viable services from the options given below.

  • A. Amazon Kinesis
  • B. Amazon S3
  • C. Amazon SQS
  • D. Amazon Lambda

Answer: ABD

Explanation:
The AWS Documentation lists the following products which can be integrated with CloudWatch Logs:
1) Amazon Kinesis - Here data can be fed for real-time analysis
2) Amazon S3 - You can use CloudWatch Logs to store your log data in highly durable storage such as S3.
3) AWS Lambda - Lambda functions can be designed to work with CloudWatch Logs.
For more information on CloudWatch Logs, please refer to the below link: http://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html
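As a sketch of the Kinesis integration, a subscription filter streams log events out of a log group (all names and ARNs below are hypothetical):

```python
import boto3

logs = boto3.client("logs")

# Stream every event from a log group into a Kinesis stream for real-time
# analysis; the role must allow CloudWatch Logs to put records on the stream.
logs.put_subscription_filter(
    logGroupName="/my-app/production",  # hypothetical log group
    filterName="to-kinesis",
    filterPattern="",                   # empty pattern matches all events
    destinationArn="arn:aws:kinesis:us-east-1:123456789012:stream/log-stream",
    roleArn="arn:aws:iam::123456789012:role/cwlogs-to-kinesis",
)
```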

NEW QUESTION 9
Your company wants to understand where cost is coming from in the company's production AWS account. There are a number of applications and services running at any given time. Without expending too much initial development time, how best can you give the business a good understanding of which applications cost the most per month to operate?

  • A. Create an automation script which periodically creates AWS Support tickets requesting detailed intra-month information about your bill.
  • B. Use custom CloudWatch Metrics in your system, and put a metric data point whenever cost is incurred.
  • C. Use AWS Cost Allocation Tagging for all resources which support it.
  • D. Use the Cost Explorer to analyze costs throughout the month.
  • E. Use the AWS Price API and constantly running resource inventory scripts to calculate total price based on multiplication of consumed resources over time.

Answer: C

Explanation:
A tag is a label that you or AWS assigns to an AWS resource. Each tag consists of a key and a value. A key can have more than one value. You can use tags to organize your resources, and cost allocation tags to track your AWS costs on a detailed level. After you activate cost allocation tags, AWS uses the cost allocation tags to organize your resource costs on your cost allocation report, to make it easier for you to categorize and track your AWS costs. AWS provides two types of cost allocation tags, an AWS-generated tag and user-defined tags. AWS defines, creates, and applies the AWS-generated tag for you, and you define, create, and apply user-defined tags. You must activate both types of tags separately before they can appear in Cost Explorer or on a cost allocation report.
For more information on Cost Allocation tags, please visit the below URL: http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-alloctags.html
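A minimal Cost Explorer sketch that groups a month's spend by a cost allocation tag (the tag key "Application" is a hypothetical, already-activated tag):

```python
import boto3

ce = boto3.client("ce")  # Cost Explorer

# Break one month's cost down by an activated cost allocation tag.
response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2018-01-01", "End": "2018-02-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "Application"}],  # hypothetical tag key
)
for group in response["ResultsByTime"][0]["Groups"]:
    print(group["Keys"][0], group["Metrics"]["UnblendedCost"]["Amount"])
```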

NEW QUESTION 10
A vendor needs access to your AWS account. They need to be able to read protected messages in a private S3 bucket. They have a separate AWS account. Which of the solutions below is the best way to do this?

  • A. Allow the vendor to SSH into your EC2 instance and grant them an IAM role with full access to the bucket.
  • B. Create a cross-account IAM role with permission to access the bucket, and grant permission to use the role to the vendor AWS account.
  • C. Create an IAM User with an API Access Key.
  • D. Give the vendor the AWS Access Key ID and AWS Secret Access Key for the user.
  • E. Create an S3 bucket policy that allows the vendor to read from the bucket from their AWS account.

Answer: B

Explanation:
The AWS Documentation mentions the following on cross-account roles:
You can use AWS Identity and Access Management (IAM) roles and AWS Security Token Service (STS) to set up cross-account access between AWS accounts. When you assume an IAM role in another AWS account to obtain cross-account access to services and resources in that account, AWS CloudTrail logs the cross-account activity. For more information on cross-account roles, please visit the below URLs:
http://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_cross-account-with-roles.html https://docs.aws.amazon.com/AmazonS3/latest/dev/example-walkthroughs-managing-access-example2.html
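From the vendor's side, the flow might look like the following boto3 sketch (role ARN, bucket, and key are hypothetical):

```python
import boto3

sts = boto3.client("sts")

# The vendor assumes the cross-account role from their own AWS account and
# receives temporary credentials scoped to the bucket permissions.
creds = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/vendor-s3-read",  # hypothetical role
    RoleSessionName="vendor-audit",
)["Credentials"]

s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
obj = s3.get_object(Bucket="private-messages-bucket", Key="message-001.txt")
```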

NEW QUESTION 11
You are using Elastic Beanstalk for your development team. You are responsible for deploying multiple versions of your application. How can you ensure, in an ideal way, that you don't cross the application version limit in Elastic Beanstalk?

  • A. Create a Lambda function to delete the older versions.
  • B. Create a script to delete the older versions.
  • C. Use AWS Config to delete the older versions.
  • D. Use lifecycle policies in Elastic Beanstalk.

Answer: D

Explanation:
The AWS Documentation mentions
Each time you upload a new version of your application with the Elastic Beanstalk console or the EB CLI, Elastic Beanstalk creates an application version. If you don't delete versions that you no longer use, you will eventually reach the application version limit and be unable to create new versions of that application.
You can avoid hitting the limit by applying an application version lifecycle policy to your applications.
A lifecycle policy tells Elastic Beanstalk to delete application versions that are old, or to delete application versions when the total number of versions for an application exceeds a specified number.
For more information on Elastic Beanstalk lifecycle policies please see the below link:
• http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/applications-lifecycle.html
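A hedged sketch of applying such a lifecycle policy with boto3 (application name and service role are hypothetical):

```python
import boto3

eb = boto3.client("elasticbeanstalk")

# Keep at most 200 application versions; Elastic Beanstalk deletes the
# oldest ones automatically so the account never hits the version limit.
eb.update_application_resource_lifecycle(
    ApplicationName="my-app",  # hypothetical application name
    ResourceLifecycleConfig={
        "ServiceRole": "arn:aws:iam::123456789012:role/aws-elasticbeanstalk-service-role",
        "VersionLifecycleConfig": {
            "MaxCountRule": {
                "Enabled": True,
                "MaxCount": 200,
                "DeleteSourceFromS3": False,  # keep source bundles in S3
            }
        },
    },
)
```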

NEW QUESTION 12
When you add lifecycle hooks to an Auto Scaling group, what are the wait states that occur during the scale-in and scale-out process? Choose 2 answers from the options given below.

  • A. Launching:Wait
  • B. Exiting:Wait
  • C. Pending:Wait
  • D. Terminating:Wait

Answer: CD

Explanation:
The AWS Documentation mentions the following
After you add lifecycle hooks to your Auto Scaling group, they work as follows:
1. Auto Scaling responds to scale out events by launching instances and scale in events by terminating instances.
2. Auto Scaling puts the instance into a wait state (Pending:Wait or Terminating:Wait). The instance is paused until either you tell Auto Scaling to continue or the timeout period ends.
For more information on Auto Scaling lifecycle hooks, please visit the below URL:
• http://docs.aws.amazon.com/autoscaling/latest/userguide/lifecycle-hooks.html
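For example, a hook that produces the Terminating:Wait state could be registered like this (names are hypothetical):

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Pause terminating instances in the Terminating:Wait state so logs can be
# drained before the instance is removed.
autoscaling.put_lifecycle_hook(
    LifecycleHookName="drain-logs",
    AutoScalingGroupName="web-asg",  # hypothetical group name
    LifecycleTransition="autoscaling:EC2_INSTANCE_TERMINATING",
    HeartbeatTimeout=300,      # seconds to wait before the default result applies
    DefaultResult="CONTINUE",  # proceed with termination if no response arrives
)
```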

NEW QUESTION 13
During metric analysis, your team has determined that the company's website during peak hours is experiencing response times higher than anticipated. You currently rely on Auto Scaling to make sure that you are scaling your environment during peak windows. How can you improve your Auto Scaling policy to reduce this high response time? Choose 2 answers.

  • A. Push custom metrics to CloudWatch to monitor your CPU and network bandwidth from your servers, which will allow your Auto Scaling policy to have better fine-grained insight.
  • B. Increase your Auto Scaling group's number of max servers.
  • C. Create a script that runs and monitors your servers; when it detects an anomaly in load, it posts to an Amazon SNS topic that triggers Elastic Load Balancing to add more servers to the load balancer.
  • D. Push custom metrics to CloudWatch for your application that include more detailed information about your web application, such as how many requests it is handling and how many are waiting to be processed.

Answer: BD

Explanation:
Option B makes sense because maybe the max servers is low hence the application cannot handle the peak load.
Option D helps in ensuring Autoscaling can scale the group on the right metrics.
For more information on Auto Scaling health checks, please refer to the below document link from AWS:
http://docs.aws.amazon.com/autoscaling/latest/userguide/healthcheck.html

NEW QUESTION 14
Your application is experiencing very high traffic, so you have enabled Auto Scaling across multiple Availability Zones to meet the needs of your application, but you observe that one of the Availability Zones is not receiving any traffic. What can be wrong here?

  • A. Auto Scaling only works for a single availability zone
  • B. Auto Scaling can be enabled for multi-AZ only in the North Virginia region
  • C. Availability zone is not added to the Elastic Load Balancer
  • D. Instances need to be manually added to the availability zone

Answer: C

Explanation:
When you add an Availability Zone to your load balancer, Elastic Load Balancing creates a load balancer node in the Availability Zone. Load balancer nodes accept traffic from clients and forward requests to the healthy registered instances in one or more Availability Zones.
For more information on adding AZs to a Classic Load Balancer, please refer to the below URL:
http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/enable-disable-az.html
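A minimal sketch of the fix for option C with boto3 (load balancer name and AZ are hypothetical):

```python
import boto3

elb = boto3.client("elb")  # Classic Load Balancer API

# Add the missing Availability Zone to the load balancer so its node there
# starts receiving and forwarding traffic.
elb.enable_availability_zones_for_load_balancer(
    LoadBalancerName="my-clb",         # hypothetical load balancer name
    AvailabilityZones=["us-east-1c"],  # the AZ that was receiving no traffic
)
```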

NEW QUESTION 15
After reviewing the last quarter's monthly bills, management has noticed an increase in the overall bill from Amazon. After researching this increase in cost, you discovered that one of your new services is doing a lot of GET Bucket API calls to Amazon S3 to build a metadata cache of all objects in the application's bucket. Your boss has asked you to come up with a new cost-effective way to help reduce the amount of these new GET Bucket API calls. What process should you use to help mitigate the cost?

  • A. Update your Amazon S3 buckets' lifecycle policies to automatically push a list of objects to a new bucket, and use this list to view objects associated with the application's bucket.
  • B. Create a new DynamoDB table.
  • C. Use the new DynamoDB table to store all metadata about all objects uploaded to Amazon S3. Any time a new object is uploaded, update the application's internal Amazon S3 object metadata cache from DynamoDB. C. Using Amazon SNS, create a notification on any new Amazon S3 objects that automatically updates a new DynamoDB table to store all metadata about the new object.
  • D. Subscribe the application to the Amazon SNS topic to update its internal Amazon S3 object metadata cache from the DynamoDB table.
  • F. Upload all files to an ElastiCache file cache server.
  • G. Update your application to now read all file metadata from the ElastiCache file cache server, and configure the ElastiCache policies to push all files to Amazon S3 for long-term storage.

Answer: C

Explanation:
Option A is an invalid option since Lifecycle policies are normally used for expiration of objects or archival of objects.
Option B is partially correct where you store the data in DynamoDB, but then the number of GET requests would still be high if the entire DynamoDB table had to be
traversed and each object compared and updated in S3.
Option D is invalid because uploading all files to ElastiCache is not an ideal solution.
The best option is to have a notification which can then trigger an update to the application to update the DynamoDB table accordingly.
For more information on SNS triggers and DynamoDB please refer to the below link:
• https://aws.amazon.com/blogs/compute/619/

NEW QUESTION 16
You have just recently deployed an application on EC2 instances behind an ELB. After a couple of weeks, customers are complaining about receiving errors from the application. You want to diagnose the errors and are trying to get errors from the ELB access logs. But the ELB access logs are empty. What is the reason for this?

  • A. You do not have the appropriate permissions to access the logs
  • B. You do not have your CloudWatch metrics correctly configured
  • C. ELB Access logs are only available for a maximum of one week.
  • D. Access logging is an optional feature of Elastic Load Balancing that is disabled by default

Answer: D

Explanation:
Elastic Load Balancing provides access logs that capture detailed information about requests sent to your load balancer. Each log contains information such as the time the request was received, the client's IP address, latencies, request paths, and server responses. You can use these access logs to analyze traffic patterns and to troubleshoot issues.
Access logging is an optional feature of Elastic Load Balancing that is disabled by default. After you enable access logging for your load balancer, Elastic Load Balancing captures the logs and stores them in the Amazon S3 bucket that you specify. You can disable access logging at any time.
For more information on CLB access logs, please refer to the below document link from AWS: http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/access-log-collection.html
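A hedged boto3 sketch of enabling Classic Load Balancer access logs (bucket and names are hypothetical, and the bucket policy must grant Elastic Load Balancing write access):

```python
import boto3

elb = boto3.client("elb")  # Classic Load Balancer API

# Access logging is off by default; enable it and point it at an S3 bucket.
elb.modify_load_balancer_attributes(
    LoadBalancerName="my-clb",  # hypothetical load balancer name
    LoadBalancerAttributes={
        "AccessLog": {
            "Enabled": True,
            "S3BucketName": "my-elb-logs",  # hypothetical bucket
            "S3BucketPrefix": "production",
            "EmitInterval": 60,  # publish logs every 60 minutes (5 is also valid)
        }
    },
)
```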

NEW QUESTION 17
Your team wants to begin practicing continuous delivery using CloudFormation, to enable automated builds and deploys of whole, versioned stacks or stack layers. You have a 3-tier, mission-critical system. Which of the following is NOT a best practice for using CloudFormation in a continuous delivery environment?

  • A. Use the AWS CloudFormation ValidateTemplate call before publishing changes to AWS.
  • B. Model your stack in one template, so you can leverage CloudFormation's state management and dependency resolution to propagate all changes.
  • C. Use CloudFormation to create brand new infrastructure for all stateless resources on each push, and run integration tests on that set of infrastructure.
  • D. Parametrize the template and use Mappings to ensure your template works in multiple Regions.

Answer: B

Explanation:
Option B is not a best practice: for a mission-critical 3-tier system, modeling the entire stack in a single template couples the layers together; AWS recommends organizing stacks by lifecycle and ownership and layering them instead. Some of the best practices for CloudFormation are:
• Create Nested Stacks
As your infrastructure grows, common patterns can emerge in which you declare the same components in each of your templates. You can separate out these common components and create dedicated templates for them. That way, you can mix and match different templates but use nested stacks to create a single, unified stack. Nested stacks are stacks that create other stacks. To create nested stacks, use the AWS::CloudFormation::Stack resource in your template to reference other templates.
• Reuse Templates
After you have your stacks and resources set up, you can reuse your templates to replicate your infrastructure in multiple environments. For example, you can create environments for development, testing, and production so that you can test changes before implementing them into production. To make templates reusable, use the parameters, mappings, and conditions sections so that you can customize your stacks when you create them. For example, for your development environments, you can specify a lower-cost instance type compared to your production environment, but all other configurations and settings remain the same. For more information on CloudFormation best practices, please visit the below URL: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/best-practices.html

NEW QUESTION 18
You are building a mobile app for consumers to post cat pictures online. You will be storing the images in AWS S3. You want to run the system very cheaply and simply. Which one of these options allows you to build a photo-sharing application with the right authentication/authorization implementation?

  • A. Build the application out using AWS Cognito and web identity federation to allow users to log in using Facebook or Google Account
  • B. Once they are logged in, the secret token passed to that user is used to directly access resources on AWS, like AWS S3.
  • C. Use JWT or SAML compliant systems to build authorization policies.
  • D. Users log in with a username and password, and are given a token they can use indefinitely to make calls against the photo infrastructure. C. Use AWS API Gateway with a constantly rotating API Key to allow access from the client-side.
  • E. Construct a custom build of the SDK and include S3 access in it.
  • F. Create an AWS oAuth Service Domain and grant public signup and access to the domain.
  • G. During setup, add at least one major social media site as a trusted Identity Provider for users.

Answer: A

Explanation:
Amazon Cognito lets you easily add user sign-up and sign-in and manage permissions for your mobile and web apps. You can create your own user directory within Amazon Cognito. You can also choose to authenticate users through social identity providers such as Facebook, Twitter, or Amazon; with SAML identity solutions; or by using your own identity system. In addition, Amazon Cognito enables you to save data locally on users' devices, allowing your applications to work even when the devices are offline. You can then synchronize data across users' devices so that their app experience remains consistent regardless of the device they use.
For more information on AWS Cognito, please visit the below URL:
• http://docs.aws.amazon.com/cognito/latest/developerguide/what-is-amazon-cognito.html

NEW QUESTION 19
You meet once per month with your operations team to review the past month's data. During the meeting, you realize that 3 weeks ago, your monitoring system which pings over HTTP from outside AWS recorded a large spike in latency on your 3-tier web service API. You use DynamoDB for the database layer, ELB, EBS, and EC2 for the business logic tier, and SQS, ELB, and EC2 for the presentation layer. Which of the following techniques will NOT help you figure out what happened?

  • A. Check your CloudTrail log history around the spike's time for any API calls that caused slowness.
  • B. Review CloudWatch Metrics for one-minute interval graphs to determine which component(s) slowed the system down.
  • C. Review your ELB access logs in S3 to see if any ELBs in your system saw the latency.
  • D. Analyze your logs to detect bursts in traffic at that time.

Answer: B

Explanation:
The CloudWatch metric retention is as follows. Since the spike happened 3 weeks ago, the one-minute data points have expired, so those graphs will no longer be available in CloudWatch:
• Data points with a period of less than 60 seconds are available for 3 hours. These data points are high-resolution custom metrics.
• Data points with a period of 60 seconds (1 minute) are available for 15 days
• Data points with a period of 300 seconds (5 minute) are available for 63 days
• Data points with a period of 3600 seconds (1 hour) are available for 455 days (15 months)
For more information on CloudWatch metrics, please visit the below URL:
• http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch_concepts.html

NEW QUESTION 20
You work at a company that makes use of AWS resources. One of the key security policies is to ensure that all data is encrypted both at rest and in transit. Which of the following is not a right implementation which aligns to this policy?

  • A. UsingS3 Server Side Encryption (SSE) to store the information
  • B. Enable SSL termination on the ELB
  • C. Enabling Proxy Protocol
  • D. Enabling sticky sessions on your load balancer

Answer: B

Explanation:
Please note the keyword "NOT" in the question.
Option A aligns with the policy: enabling S3 SSE encrypts the data at rest in S3, so Option A is not the answer.
Option B is the correct answer. SSL termination allows encrypted traffic between the client and the ELB, but causes traffic to be unencrypted between the ELB and the backend (presumably EC2 or ECS tasks). If SSL is not terminated on the ELB, the traffic stays encrypted all the way to the backend; to do that you must use a Layer 4 (TCP) listener.
Note that sticky sessions are not supported with a Layer 4 (TCP) listener; they require an HTTP/HTTPS load balancer. For more information on sticky sessions, please visit the below URL:
https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-sticky-sessions.html
Requirements for sticky sessions:
• An HTTP/HTTPS load balancer.
• At least one healthy instance in each Availability Zone.
If you don't want the load balancer to handle the SSL termination (known as SSL offloading), you can use TCP for both the front-end and back-end connections, and deploy certificates on the registered instances handling requests. If the front-end connection uses TCP or SSL, then your back-end connections can use either TCP or SSL.
Note: You can use an HTTPS listener and still use SSL on the backend, but the ELB must terminate, decrypt, and re-encrypt. This is slower and less secure than using the same encryption all the way to the backend, and it breaks the question's requirement of having all data encrypted in transit since it forces the ELB to decrypt. Proxy Protocol is used to pass client connection information over TCP and does not remove encryption, hence Option C is also not the answer.
For more information on listener configuration and SSL termination, please visit the below URLs:
• https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-listener-config.html
• http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-https-load-balancers.html
• https://aws.amazon.com/blogs/aws/elastic-load-balancer-support-for-ssl-termination/

NEW QUESTION 21
You've created a CloudFormation template as per your team's requests, which is required for testing an application. But there is a request that when the stack is deleted, the database should be preserved for future reference. How can you achieve this using CloudFormation?

  • A. Ensure that the RDS instance is created with Read Replicas so that the Read Replica remains after the stack is torn down.
  • B. In the AWS CloudFormation template, set the AWS::RDS::DBInstance's DeletionPolicy property to "Retain."
  • C. In the AWS CloudFormation template, set the AWS::RDS::DBInstance's WaitPolicy property to "Retain."
  • D. In the AWS CloudFormation template, set the AWS::RDS::DBInstance's DBInstanceClass property to be read-only.

Answer: B

Explanation:
With the DeletionPolicy attribute you can preserve, or in some cases back up, a resource when its stack is deleted. You specify a DeletionPolicy attribute for each resource that you want to control. If a resource has no DeletionPolicy attribute, AWS CloudFormation deletes the resource by default. Note that this capability also applies to update operations that lead to resources being removed.
For more information on Cloudformation Deletion policy, please visit the below URL:
• http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-deletionpolicy.html
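As an illustration, here is a minimal, hypothetical template fragment with DeletionPolicy set on the database, launched via boto3 (all property values are placeholders):

```python
import json

import boto3

# Minimal template fragment: the DeletionPolicy attribute sits alongside
# Type and Properties, so the database survives stack deletion.
template = {
    "Resources": {
        "TestDatabase": {
            "Type": "AWS::RDS::DBInstance",
            "DeletionPolicy": "Retain",
            "Properties": {
                "AllocatedStorage": "20",
                "DBInstanceClass": "db.t2.micro",
                "Engine": "mysql",
                "MasterUsername": "admin",       # hypothetical values
                "MasterUserPassword": "change-me",
            },
        }
    }
}

boto3.client("cloudformation").create_stack(
    StackName="test-stack",
    TemplateBody=json.dumps(template),
)
```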

NEW QUESTION 22
Your development team is developing a mobile application that accesses resources in AWS. The users accessing this application will be logging in via Facebook and Google. Which of the following AWS mechanisms would you use to authenticate users for the application that needs to access AWS resources?

  • A. Use separate IAM users that correspond to each Facebook and Google user
  • B. Use separate IAM Roles that correspond to each Facebook and Google user
  • C. Use Web identity federation to authenticate the users
  • D. Use AWS Policies to authenticate the users

Answer: C

Explanation:
The AWS documentation mentions the following
You can directly configure individual identity providers to access AWS resources using web identity federation. AWS currently supports authenticating users using web identity federation through several identity providers:
• Login with Amazon
• Facebook Login
• Google Sign-in
For more information on Web identity federation please visit the below URL:
• http://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/loading-browser-credentials-federated-id.html
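A minimal sketch of the federation exchange with boto3 (the role ARN is hypothetical, and the token comes from the provider's login flow):

```python
import boto3

# No AWS credentials are needed for this call; the provider-issued token
# (from Facebook or Google) is exchanged for temporary AWS credentials.
sts = boto3.client("sts")
creds = sts.assume_role_with_web_identity(
    RoleArn="arn:aws:iam::123456789012:role/mobile-app-role",  # hypothetical role
    RoleSessionName="app-user-session",
    WebIdentityToken="<token returned by the identity provider>",
)["Credentials"]
```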

NEW QUESTION 23
You need to implement Blue/Green Deployment for several multi-tier web applications. Each of them has its individual infrastructure:
Amazon Elastic Compute Cloud (EC2) front-end servers, Amazon ElastiCache clusters, Amazon Simple Queue Service (SQS) queues, and Amazon Relational Database (RDS) Instances.
Which combination of services would give you the ability to control traffic between different deployed versions of your application?

  • A. Create one AWS Elastic Beanstalk application and all AWS resources (using configuration files inside the application source bundle) for each web application.
  • B. New versions would be deployed using Elastic Beanstalk environments and using the Swap URLs feature.
  • C. Using AWS CloudFormation templates, create one Elastic Beanstalk application and all AWS resources (in the same template) for each web application.
  • D. New versions would be deployed using AWS CloudFormation templates to create new Elastic Beanstalk environments, and traffic would be balanced between them using weighted Round Robin (WRR) records in Amazon Route 53.
  • E. Using AWS CloudFormation templates, create one Elastic Beanstalk application and all AWS resources (in the same template) for each web application.
  • F. New versions would be deployed updating a parameter on the CloudFormation template and passing it to the cfn-hup helper daemon, and traffic would be balanced between them using Weighted Round Robin (WRR) records in Amazon Route 53.
  • G. Create one Elastic Beanstalk application and all AWS resources (using configuration files inside the application source bundle) for each web application.
  • H. New versions would be deployed updating the Elastic Beanstalk application version for the current Elastic Beanstalk environment.

Answer: B

Explanation:
This is an example of a Blue/Green deployment:
DOP-C01 dumps exhibit
With Amazon Route 53, you can define a percentage of traffic to go to the green environment and gradually update the weights until the green environment carries the full production traffic. A weighted distribution provides the ability to perform canary analysis where a small percentage of production traffic is introduced to a new environment. You can test the new code and monitor for errors, limiting the blast radius if any issues are encountered. It also allows the green environment to scale out to support the full production load if you're using Elastic Load Balancing.
When it's time to promote the green environment/stack into production, update DNS records to point to the green environment/stack's load balancer. You can also do this DNS flip gradually by using the Amazon Route 53 weighted routing policy. For more information on Blue/Green deployments, please refer to the link:
• https://d0.awsstatic.com/whitepapers/AWS_Blue_Green_Deployments.pdf
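A hedged sketch of the weighted Route 53 flip with boto3 (zone ID, record name, and environment CNAMEs are hypothetical):

```python
import boto3

route53 = boto3.client("route53")

def set_weight(identifier, target, weight):
    """Upsert one weighted record pointing at an environment's CNAME."""
    route53.change_resource_record_sets(
        HostedZoneId="Z1EXAMPLE",  # hypothetical hosted zone
        ChangeBatch={"Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com",
                "Type": "CNAME",
                "SetIdentifier": identifier,
                "Weight": weight,
                "TTL": 60,
                "ResourceRecords": [{"Value": target}],
            },
        }]},
    )

# Send 10% of traffic to green for canary analysis, 90% to blue.
set_weight("blue", "blue-env.us-east-1.elasticbeanstalk.com", 90)
set_weight("green", "green-env.us-east-1.elasticbeanstalk.com", 10)
```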

NEW QUESTION 24
Which of the following are advantages of using AWS CodeCommit over hosting your own source code repository system?

  • A. Reduction in hardware maintenance costs
  • B. Reduction in fees paid over licensing
  • C. No specific restriction on files and branches
  • D. All of the above

Answer: D

Explanation:
The AWS Documentation mentions the following on CodeCommit:
Self-hosted version control systems have many potential drawbacks, including:
• Expensive per-developer licensing fees.
• High hardware maintenance costs.
• High support staffing costs.
• Limits on the amount and types of files that can be stored and managed.
• Limits on the number of branches, the amount of version history, and other related metadata that can be stored.
For more information on CodeCommit please refer to the below link:
• http://docs.aws.amazon.com/codecommit/latest/userguide/welcome.html

NEW QUESTION 25
An application is currently writing a large number of records to a DynamoDB table in one region. There is a requirement for a secondary application to just take in the changes to the DynamoDB table every 2 hours and process the updates accordingly. Which of the following is an ideal way to ensure the secondary application can get the relevant changes from the DynamoDB table?

  • A. Insert a timestamp for each record and then scan the entire table for the timestamp as per the last 2 hours.
  • B. Create another DynamoDB table with the records modified in the last 2 hours.
  • C. Use DynamoDB Streams to monitor the changes in the DynamoDB table.
  • D. Transfer the records to S3 which were modified in the last 2 hours.

Answer: C

Explanation:
The AWS Documentation mentions the following
A DynamoDB stream is an ordered flow of information about changes to items in an Amazon DynamoDB table. When you enable a stream on a table, DynamoDB captures information about every modification to data items in the table.
Whenever an application creates, updates, or deletes items in the table, DynamoDB Streams writes a stream record with the primary key attribute(s) of the items that were modified. A stream record contains information about a data modification to a single item in a DynamoDB table. You can configure the stream so that the stream records capture additional information, such as the "before" and "after" images of modified items.
For more information on DynamoDB streams, please visit the below URL: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.html
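Enabling a stream on an existing table is a single call; a minimal boto3 sketch (table name hypothetical):

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Enable a stream that records both the old and new item images for every
# modification; the secondary application reads the stream every 2 hours.
dynamodb.update_table(
    TableName="records",  # hypothetical table name
    StreamSpecification={
        "StreamEnabled": True,
        "StreamViewType": "NEW_AND_OLD_IMAGES",
    },
)
```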

NEW QUESTION 26
......

P.S. Allfreedumps.com is now offering 100% pass-ensured DOP-C01 dumps! All DOP-C01 exam questions have been updated with correct answers: https://www.allfreedumps.com/DOP-C01-dumps.html (116 New Questions)