Your success in Amazon-Web-Services DOP-C01 is our sole target and we develop all our DOP-C01 braindumps in a way that facilitates the attainment of this target. Not only is our DOP-C01 study material the best you can find, it is also the most detailed and the most updated. DOP-C01 Practice Exams for Amazon-Web-Services DOP-C01 are written to the highest standards of technical accuracy.

Free DOP-C01 Demo Online For Amazon-Web-Services Certification:

NEW QUESTION 1
Your company has a set of EC2 resources hosted on AWS. Your new IT procedures state that AWS EC2 Instances must be of a particular instance type. Which of the following can be used to get the list of EC2 Instances which currently don't match the instance type specified in the new IT procedures?

  • A. Use AWS Cloudwatch alarms to check which EC2 Instances don't match the intended instance type.
  • B. Use AWS Config to create a rule to check the EC2 Instance type
  • C. Use Trusted Advisor to check which EC2 Instances don't match the intended instance type.
  • D. Use VPC Flow Logs to check which EC2 Instances don't match the intended instance type.

Answer: B

Explanation:
In AWS Config, you can create a rule which can be used to check whether EC2 Instances are of a particular instance type. Below is a snapshot of the output of a rule that checks if EC2 instances match the type t2.micro.
DOP-C01 dumps exhibit
For more information on AWS Config, please visit the below URL:
• https://aws.amazon.com/config/
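As a rough illustration of how such a check could be scripted, the boto3 sketch below registers the AWS managed Config rule for desired instance types and then lists non-compliant instances. The rule name, the DESIRED_INSTANCE_TYPE identifier, the instanceType parameter key, and the t2.micro value are assumptions for illustration; verify them against your account before relying on them.

```python
import json
import boto3

config = boto3.client("config")

# Register the AWS managed rule that flags EC2 instances whose type
# differs from the desired one (identifier and parameter name assumed).
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "ec2-desired-instance-type",
        "Source": {"Owner": "AWS", "SourceIdentifier": "DESIRED_INSTANCE_TYPE"},
        "InputParameters": json.dumps({"instanceType": "t2.micro"}),
        "Scope": {"ComplianceResourceTypes": ["AWS::EC2::Instance"]},
    }
)

# Once the rule has evaluated, list the instances that do not match.
result = config.get_compliance_details_by_config_rule(
    ConfigRuleName="ec2-desired-instance-type",
    ComplianceTypes=["NON_COMPLIANT"],
)
for item in result["EvaluationResults"]:
    qualifier = item["EvaluationResultIdentifier"]["EvaluationResultQualifier"]
    print(qualifier["ResourceId"])
```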

NEW QUESTION 2
Which of the following are lifecycle events available in OpsWorks? Choose 3 answers from the options below

  • A. Setup
  • B. Decommission
  • C. Deploy
  • D. Shutdown

Answer: ACD

Explanation:
Below is a snapshot of the lifecycle events in OpsWorks.
DOP-C01 dumps exhibit
For more information on Lifecycle events, please refer to the below URL:
• http://docs.aws.amazon.com/opsworks/latest/userguide/workingcookbook-events.html

NEW QUESTION 3
You have a web and worker role infrastructure defined in AWS using Amazon EC2 resources. You are using SQS to manage the jobs sent by the web role. Which of the following is the right way to ensure the worker processes are adequately set up to handle the number of jobs sent by the web role?

  • A. Use Cloudwatch monitoring to check the size of the queue and then scale out SQS to ensure that it can handle the right number of jobs
  • B. Use ELB to ensure that the load is evenly distributed to the set of web and worker instances
  • C. Use Route53 to ensure that the load is evenly distributed to the set of web and worker instances
  • D. Use Cloudwatch monitoring to check the size of the queue and then scale out using Autoscaling to ensure that it can handle the right number of jobs

Answer: D

Explanation:
The below diagram shows how SQS can be used to manage the communication between the web and worker roles. The number of messages in the SQS queue can be used to determine the number of instances that should be in the Auto Scaling group.
DOP-C01 dumps exhibit
For more information on SQS and Auto Scaling, please refer to the below URL: http://docs.aws.amazon.com/autoscaling/latest/userguide/as-using-sqs-queue.html
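A minimal sketch of the metric side of option D, assuming a hypothetical queue named "jobs" and an Auto Scaling group named "worker-asg": it reads the queue depth and publishes a backlog-per-instance custom metric that a CloudWatch alarm or target tracking policy on the worker Auto Scaling group could scale on.

```python
import boto3

sqs = boto3.client("sqs")
cloudwatch = boto3.client("cloudwatch")
autoscaling = boto3.client("autoscaling")

queue_url = sqs.get_queue_url(QueueName="jobs")["QueueUrl"]

# Number of messages waiting in the queue.
attrs = sqs.get_queue_attributes(
    QueueUrl=queue_url, AttributeNames=["ApproximateNumberOfMessages"]
)
backlog = int(attrs["Attributes"]["ApproximateNumberOfMessages"])

# Number of worker instances currently in the Auto Scaling group.
group = autoscaling.describe_auto_scaling_groups(
    AutoScalingGroupNames=["worker-asg"]
)["AutoScalingGroups"][0]
instances = max(len(group["Instances"]), 1)

# Publish backlog per instance; an alarm on this metric drives scaling.
cloudwatch.put_metric_data(
    Namespace="WorkerQueue",
    MetricData=[{
        "MetricName": "BacklogPerInstance",
        "Value": backlog / instances,
        "Unit": "Count",
    }],
)
```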

NEW QUESTION 4
Your application is currently running on Amazon EC2 instances behind a load balancer. Your management has decided to use a Blue/Green deployment strategy. How should you implement this for each deployment?

  • A. Set up Amazon Route 53 health checks to fail over from any Amazon EC2 instance that is currently being deployed to.
  • B. Using AWS CloudFormation, create a test stack for validating the code, and then deploy the code to each production Amazon EC2 instance.
  • C. Create a new load balancer with new Amazon EC2 instances, carry out the deployment, and then switch DNS over to the new load balancer using Amazon Route 53 after testing.
  • D. Launch more Amazon EC2 instances to ensure high availability, de-register each Amazon EC2 instance from the load balancer, upgrade it, and test it, and then register it again with the load balancer.

Answer: C

Explanation:
The below diagram shows how this can be done
DOP-C01 dumps exhibit
1) First create a new ELB which will be used to point to the new production changes.
2) Use the Weighted Routing policy in Route 53 to distribute the traffic to the 2 ELBs, for example on an 80-20% split. This is the normal case; the percentages can be changed based on the requirement.
3) Finally, when all changes have been tested, Route 53 can be set to send 100% of the traffic to the new ELB.
Option A is incorrect because this is a failover scenario and cannot be used for Blue/Green deployments. In Blue/Green deployments, you need to have 2 environments running side by side.
Option B is incorrect because you need to have a production stack with the changes running side by side.
Option D is incorrect because this is not a Blue/Green deployment scenario. You cannot control which users will go to the new EC2 instances.
For more information on Blue/Green deployments, please refer to the below document link from AWS:
https://d0.awsstatic.com/whitepapers/AWS_Blue_Green_Deployments.pdf
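The DNS cut-over in option C can be scripted roughly along these lines; a boto3 sketch in which the hosted zone ID, record name, and the two ELB alias targets are all placeholders.

```python
import boto3

route53 = boto3.client("route53")

def set_weight(set_identifier, elb_zone_id, elb_dns_name, weight):
    """Upsert one weighted alias record pointing at an ELB."""
    route53.change_resource_record_sets(
        HostedZoneId="Z_EXAMPLE_ZONE",           # placeholder hosted zone ID
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "A",
                    "SetIdentifier": set_identifier,
                    "Weight": weight,
                    "AliasTarget": {
                        "HostedZoneId": elb_zone_id,
                        "DNSName": elb_dns_name,
                        "EvaluateTargetHealth": True,
                    },
                },
            }],
        },
    )

# Start with an 80/20 split, then shift everything to the new (green) ELB.
set_weight("blue", "Z_ELB_ZONE", "blue-elb.example.elb.amazonaws.com", 80)
set_weight("green", "Z_ELB_ZONE", "green-elb.example.elb.amazonaws.com", 20)
# ... after testing ...
set_weight("blue", "Z_ELB_ZONE", "blue-elb.example.elb.amazonaws.com", 0)
set_weight("green", "Z_ELB_ZONE", "green-elb.example.elb.amazonaws.com", 100)
```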

NEW QUESTION 5
You work for an insurance company and are responsible for the day-to-day operations of your company's online quote system used to provide insurance quotes to members of the public. Your company wants to use the application logs generated by the system to better understand customer behavior. Industry regulations also require that you retain all application logs for the system indefinitely in order to investigate fraudulent claims in the future. You have been tasked with designing a log management system with the following requirements:
- All log entries must be retained by the system, even during unplanned instance failure.
- The customer insight team requires immediate access to the logs from the past seven days.
- The fraud investigation team requires access to all historic logs, but will wait up to 24 hours before these logs are available.
How would you meet these requirements in a cost-effective manner? Choose three answers from the options below

  • A. Configure your application to write logs to the instance's ephemeral disk, because this storage is free and has good write performance. Create a script that moves the logs from the instance to Amazon S3 once an hour.
  • B. Write a script that is configured to be executed when the instance is stopped or terminated and that will upload any remaining logs on the instance to Amazon S3.
  • C. Create an Amazon S3 lifecycle configuration to move log files from Amazon S3 to Amazon Glacier after seven days.
  • D. Configure your application to write logs to the instance's default Amazon EBS boot volume, because this storage already exists. Create a script that moves the logs from the instance to Amazon S3 once an hour.
  • E. Configure your application to write logs to a separate Amazon EBS volume with the "delete on termination" field set to false. Create a script that moves the logs from the instance to Amazon S3 once an hour.
  • F. Create a housekeeping script that runs on a T2 micro instance managed by an Auto Scaling group for high availability. The script uses the AWS API to identify any unattached Amazon EBS volumes containing log files. Your housekeeping script will mount the Amazon EBS volume, upload all logs to Amazon S3, and then delete the volume.

Answer: CEF

Explanation:
Since all logs need to be stored indefinitely, Glacier is the best option for this. One can use lifecycle rules to transition the data from S3 to Glacier.
Lifecycle configuration enables you to specify the lifecycle management of objects in a bucket. The configuration is a set of one or more rules, where each rule defines an action for Amazon S3 to apply to a group of objects. These actions can be classified as follows:
• Transition actions - In which you define when objects transition to another storage class. For example, you may choose to transition objects to the STANDARD_IA (infrequent access) storage class 30 days after creation, or archive objects to the GLACIER storage class one year after creation.
• Expiration actions - In which you specify when the objects expire. Then Amazon S3 deletes the expired objects on your behalf. For more information on lifecycle configuration, please refer to the below link:
• http://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html
You can use scripts to put the logs onto a new volume and then transfer those logs to S3.
Note:
Moving the logs from the EBS volume to S3 requires some custom scripts running in the background. In order to meet the minimum memory requirements of the OS and the applications for the script to execute, we can use a cost-effective EC2 instance. Considering the computing resource requirements and the cost factor, a t2.micro instance can be used in this case.
The following link provides more information on the various t2 instances: https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/t2-instances.html
The question is "How would you meet these requirements in a cost-effective manner? Choose three answers from the options below", so you have to choose the 3 options that together fulfill the requirement. Of the given 6 options, options C, E and F fulfill it.
Writing the logs to EBS volumes that are marked for non-termination (and are therefore preserved even during an unplanned instance failure) is one of the ways to fulfill the retention requirement, so this shouldn't be an issue.

NEW QUESTION 6
The company you work for has a huge amount of infrastructure built on AWS. However, there have been some concerns recently about the security of this infrastructure, and an external auditor has been given the task of running a thorough check of all of your company's AWS assets. The auditor will be in the USA while your company's infrastructure resides in the Asia Pacific (Sydney) region on AWS. Initially, he needs to check all of your VPC assets, specifically security groups and NACLs. You have been assigned the task of providing the auditor with a login to be able to do this. Which of the following would be the best and most secure solution to provide the auditor with so he can begin his initial investigations? Choose the correct answer from the options below

  • A. Create an IAM user tied to an administrator role. Also provide an additional level of security with MFA.
  • B. Give him root access to your AWS infrastructure, because he is an auditor and will need access to every service.
  • C. Create an IAM user who will have read-only access to your AWS VPC infrastructure and provide the auditor with those credentials.
  • D. Create an IAM user with full VPC access but set a condition that will not allow him to modify anything if the request is from any IP other than his own.

Answer: C

Explanation:
Generally you should refrain from giving high-level permissions and grant only the permissions that are required. In this case, option C fits well by providing just the read-only access that is needed.
For more information on IAM, please see the below link:
• https://aws.amazon.com/iam/

NEW QUESTION 7
You have an OpsWorks stack defined with Linux instances. You have executed a recipe, but the execution has failed. What is one of the ways you can diagnose why the recipe did not execute correctly?

  • A. Use AWS CloudTrail and check the OpsWorks logs to diagnose the error
  • B. Use AWS Config and check the OpsWorks logs to diagnose the error
  • C. Log in to the instance and check if the recipe was properly configured.
  • D. Deregister the instance and check the EC2 logs

Answer: C

Explanation:
The AWS Documentation mentions the following
If a recipe fails, the instance will end up in the setup_failed state instead of online. Even though the instance is not online as far as AWS OpsWorks Stacks is concerned, the EC2 instance is running and it's often useful to log in to troubleshoot the issue. For example, you can check whether an application or custom cookbook is correctly installed. The AWS OpsWorks Stacks built-in support for SSH and RDP login is available only for instances in the online state.
For more information on OpsWorks troubleshooting, please visit the below URL: http://docs.aws.amazon.com/opsworks/latest/userguide/troubleshoot-debug-login.html

NEW QUESTION 8
Which of the following CLI commands is used to spin up new EC2 Instances?

  • A. aws ec2 run-instances
  • B. aws ec2 create-instances
  • C. aws ec2 new-instances
  • D. aws ec2 launch-instances

Answer: A

Explanation:
The AWS Documentation mentions the following
Launches the specified number of instances using an AMI for which you have permissions. You can specify a number of options, or leave the default options. The following rules apply:
[EC2-VPC] If you don't specify a subnet ID, we choose a default subnet from your default VPC for you. If you don't have a default VPC, you must specify a subnet ID in the request.
[EC2-Classic] If you don't specify an Availability Zone, we choose one for you.
Some instance types must be launched into a VPC. If you do not have a default VPC, or if you do not specify a subnet ID, the request fails. For more information, see Instance Types Available Only in a VPC.
[EC2-VPC] All instances have a network interface with a primary private IPv4 address. If you don't specify this address, we choose one from the IPv4 range of your subnet.
Not all instance types support IPv6 addresses. For more information, see Instance Types.
If you don't specify a security group ID, we use the default security group. For more information, see Security Groups.
If any of the AMIs have a product code attached for which the user has not subscribed, the request fails. For more information on the EC2 run-instances command, please refer to the below link: http://docs.aws.amazon.com/cli/latest/reference/ec2/run-instances.html
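The boto3 equivalent of the run-instances CLI command looks roughly like this; the AMI ID, subnet, and security group below are placeholders, and most omitted parameters fall back to the defaults described above.

```python
import boto3

ec2 = boto3.client("ec2")

# Launch a single instance; omitted options (subnet, security group, AZ)
# fall back to the account defaults exactly as the CLI command does.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",      # placeholder AMI ID
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0123456789abcdef0",  # placeholder subnet
    SecurityGroupIds=["sg-0123456789abcdef0"],
)
print(response["Instances"][0]["InstanceId"])
```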

NEW QUESTION 9
Which of these is not an intrinsic function in AWS CloudFormation?

  • A. Fn::Equals
  • B. Fn::If
  • C. Fn::Not
  • D. Fn::Parse

Answer: D

Explanation:
You can use intrinsic functions, such as Fn::If, Fn::Equals, and Fn::Not, to conditionally create stack resources. These conditions are evaluated based on input parameters that you declare when you create or update a stack. After you define all your conditions, you can associate them with resources or resource properties in the Resources and Outputs sections of a template.
For more information on Cloud Formation template functions, please refer to the URL:
• http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference.html and
• http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-conditions.html

NEW QUESTION 10
You are using Jenkins as your continuous integration system for the application hosted in AWS. The builds are then placed on newly launched EC2 Instances. You want to ensure that the overall cost of the entire continuous integration and deployment pipeline is minimized. Which of the below options would meet these requirements? Choose 2 answers from the options given below

  • A. Ensure that all build tests are conducted using Jenkins before deploying the build to newly launched EC2 Instances.
  • B. Ensure that all build tests are conducted on the newly launched EC2 Instances.
  • C. Ensure the Instances are launched only when the build tests are completed.
  • D. Ensure the Instances are created beforehand for faster turnaround time for the application builds to be placed.

Answer: AC

Explanation:
To ensure low cost, one can carry out the build tests on the Jenkins server itself. Once the build tests are completed, the build can then be transferred onto newly launched EC2 Instances.
For more information on AWS and Jenkins, please visit the below URL:
• https://aws.amazon.com/getting-started/projects/setup-jenkins-build-server/
Option D is incorrect. It would be the right choice if the requirement were to get faster turnaround, not lower cost.

NEW QUESTION 11
After a daily scrum with your development teams, you've agreed that using Blue/Green style deployments would benefit the team. Which technique should you use to deliver this new requirement?

  • A. Re-deploy your application on AWS Elastic Beanstalk, and take advantage of Elastic Beanstalk deployment types.
  • B. Using an AWS CloudFormation template, re-deploy your application behind a load balancer, launch a new AWS CloudFormation stack during each deployment, update your load balancer to send half your traffic to the new stack while you test, after verification update the load balancer to send 100% of traffic to the new stack, and then terminate the old stack.
  • C. Create a new Auto Scaling group with the new launch configuration and a desired capacity the same as that of the initial Auto Scaling group, and associate it with the same load balancer. Once the new Auto Scaling group's instances are registered with the ELB, modify the desired capacity of the initial Auto Scaling group to zero and gradually delete the old Auto Scaling group.
  • D. Using an AWS OpsWorks stack, re-deploy your application behind an Elastic Load Balancing load balancer and take advantage of OpsWorks stack versioning; during deployment create a new version of your application, tell OpsWorks to launch the new version behind your load balancer, and when the new version is launched, terminate the old OpsWorks stack.

Answer: C

Explanation:
This is given as a practice in the Blue/Green deployment guides.
DOP-C01 dumps exhibit
A blue group carries the production load while a green group is staged and deployed with the new code. When it's time to deploy, you simply attach the green group to the existing load balancer to introduce traffic to the new environment. For HTTP/HTTPS listeners, the load balancer favors the green Auto Scaling group because it uses a least outstanding requests routing algorithm.
As you scale up the green Auto Scaling group, you can take blue Auto Scaling group instances out of service by either terminating them or putting them in Standby state.
For more information on Blue/Green deployments, please refer to the below document link from AWS:
https://d0.awsstatic.com/whitepapers/AWS_Blue_Green_Deployments.pdf
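The Auto Scaling variant in option C could be scripted roughly as follows, assuming the stack uses a Classic Load Balancer and a new launch configuration already exists; all names and subnet IDs are placeholders.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Green group: same capacity as the blue group, attached to the same ELB.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="green-asg",
    LaunchConfigurationName="green-launch-config",   # assumed to exist
    MinSize=2,
    MaxSize=4,
    DesiredCapacity=2,
    LoadBalancerNames=["production-elb"],             # same ELB as blue
    VPCZoneIdentifier="subnet-0123456789abcdef0",
)

# Once green instances are registered and healthy behind the ELB,
# drain the blue group and, after a grace period, delete it.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="blue-asg", MinSize=0, DesiredCapacity=0
)
autoscaling.delete_auto_scaling_group(
    AutoScalingGroupName="blue-asg", ForceDelete=True
)
```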

NEW QUESTION 12
You have a web application that's developed in Node.js. The code is hosted in a Git repository. You want to deploy this application to AWS. Which of the below 2 options can fulfil this requirement?

  • A. Create an Elastic Beanstalk application. Create a Dockerfile to install Node.js. Get the code from Git. Use the command "aws git.push" to deploy the application.
  • B. Create an AWS CloudFormation template which creates an instance with the AWS::EC2::Container resource type. With UserData, install Git to download the Node.js application and then set it up.
  • C. Create a Dockerfile to install Node.js and get the code from Git. Use the Dockerfile to perform the deployment on a new AWS Elastic Beanstalk application.
  • D. Create an AWS CloudFormation template which creates an instance with the AWS::EC2::Instance resource type and an AMI with Docker pre-installed. With UserData, install Git to download the Node.js application and then set it up.

Answer: CD

Explanation:
Option A is invalid because there is no "aws git.push" command.
Option B is invalid because there is no AWS::EC2::Container resource type.
Elastic Beanstalk supports the deployment of web applications from Docker containers. With Docker containers, you can define your own runtime environment. You can choose your own platform, programming language, and any application dependencies (such as package managers or tools) that aren't supported by other platforms. Docker containers are self-contained and include all the configuration information and software your web application requires to run.
For more information on Docker and Elastic Beanstalk, please refer to the below link:
• http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker.html
When you launch an instance in Amazon EC2, you have the option of passing user data to the instance that can be used to perform common automated configuration tasks and even run scripts after the instance starts. You can pass two types of user data to Amazon EC2: shell scripts and cloud-init directives. You can also pass this data into the launch wizard as plain text, as a file (this is useful for launching instances using the command line tools), or as base64-encoded text (for API calls). For more information on EC2 user data, please refer to the below link:
• http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html
Note: "git aws.push" belongs to EB CLI 2.x - see the forum thread at https://forums.aws.amazon.com/thread.jspa?messageID=583202#jive-message-582979. Basically, it is a predecessor to the newer "eb deploy" command in EB CLI 3.x. This question is kept in order to be consistent with the exam.

NEW QUESTION 13
What would you set in your CloudFormation template to fire up different instance sizes based on environment type? For example, if this is for prod, use m1.large instead of t1.micro.

  • A. Outputs
  • B. Resources
  • C. Mappings
  • D. Conditions

Answer: D

Explanation:
The optional Conditions section includes statements that define when a resource is created or when a property is defined. For example, you can compare whether a value is equal to another value. Based on the result of that condition, you can conditionally create resources. If you have multiple conditions, separate them with commas.
For more information on CloudFormation conditions, please visit the below link:
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/conditions-section-structure.html
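As a rough sketch of the idea, the template below (embedded as JSON and created through boto3) uses a condition built with Fn::Equals and selects the instance size with Fn::If; the stack name, parameter values, and AMI ID are placeholders.

```python
import json
import boto3

template = {
    "Parameters": {
        "EnvType": {"Type": "String", "AllowedValues": ["prod", "dev"]},
    },
    "Conditions": {
        "IsProd": {"Fn::Equals": [{"Ref": "EnvType"}, "prod"]},
    },
    "Resources": {
        "AppInstance": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-0123456789abcdef0",  # placeholder AMI
                # m1.large for prod, t1.micro otherwise.
                "InstanceType": {"Fn::If": ["IsProd", "m1.large", "t1.micro"]},
            },
        },
    },
}

cloudformation = boto3.client("cloudformation")
cloudformation.create_stack(
    StackName="app-prod",
    TemplateBody=json.dumps(template),
    Parameters=[{"ParameterKey": "EnvType", "ParameterValue": "prod"}],
)
```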

NEW QUESTION 14
You have been asked to de-risk deployments at your company. Specifically, the CEO is concerned about outages that occur because of accidental inconsistencies between Staging and Production, which sometimes cause unexpected behaviors in Production even when Staging tests pass. You already use Docker to get high consistency between Staging and Production for the application environment on your EC2 instances. How do you further de-risk the rest of the execution environment, since in AWS, there are many service components you may use beyond EC2 virtual machines?

  • A. Develop models of your entire cloud system in CloudFormation. Use this model in Staging and Production to achieve greater parity.
  • B. Use AWS Config to force the Staging and Production stacks to have configuration parity. Any differences will be detected for you so you are aware of risks.
  • C. Use AMIs to ensure the whole machine, including the kernel of the virtual machines, is consistent, since Docker uses Linux Container (LXC) technology, and we need to make sure the container environment is consistent.
  • D. Use AWS ECS and Docker clustering. This will make sure that the AMIs and machine sizes are the same across both environments.

Answer: A

Explanation:
After you have your stacks and resources set up, you can reuse your templates to replicate your infrastructure in multiple environments. For example, you can create environments for development, testing, and production so that you can test changes before implementing them into production. To make templates reusable, use the parameters, mappings, and conditions sections so that you can customize your stacks when you create them. For example, for your development environments, you can specify a lower-cost instance type compared to your production environment, but all other configurations and settings remain the same
For more information on CloudFormation best practices, please refer to the below link: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/best-practices.html

NEW QUESTION 15
Which of the following is not a supported platform for the Elastic Beanstalk service?

  • A. Java
  • B. AngularJS
  • C. PHP
  • D. .Net

Answer: B

Explanation:
DOP-C01 dumps exhibit
For more information on Elastic beanstalk, please visit the below URL:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/concepts.platforms.html

NEW QUESTION 16
If your application performs operations or workflows that take a long time to complete, what can an Elastic Beanstalk worker environment do for you?

  • A. Manages an Amazon SQS queue and runs a daemon process on each instance
  • B. Manages an Amazon SNS topic and runs a daemon process on each instance
  • C. Manages Lambda functions and runs a daemon process on each instance
  • D. Manages the ELB and runs a daemon process on each instance

Answer: A

Explanation:
Elastic Beanstalk simplifies this process by managing the Amazon SQS queue and running a daemon process on each instance that reads from the queue for you.
When the daemon pulls an item from the queue, it sends an HTTP POST request locally to http://localhost/ with the contents of the queue message in the body. All that your application needs to do is perform the long-running task in response to the POST.
For more information on Elastic Beanstalk worker environments, please visit the below URL:
• http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features-managing-env-tiers.html
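On the application side, a worker-tier app only has to accept the local HTTP POST issued by the daemon. Below is a bare-bones standard-library sketch of that idea; real worker apps would normally run behind a WSGI server such as Flask or Django, and process_long_running_task is a hypothetical stand-in for your job logic.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

def process_long_running_task(job):
    # Placeholder for the actual long-running work.
    print("processing job:", job)

class WorkerHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # The daemon POSTs the SQS message body to http://localhost/.
        length = int(self.headers.get("Content-Length", 0))
        job = self.rfile.read(length).decode("utf-8")

        process_long_running_task(job)

        # A 200 response tells the daemon the message can be deleted.
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    # Port 80 is the port the worker daemon commonly forwards to;
    # binding to it requires elevated privileges on the instance.
    HTTPServer(("0.0.0.0", 80), WorkerHandler).serve_forever()
```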

NEW QUESTION 17
You are setting up CloudFormation templates for your organization. The CloudFormation templates create EC2 Instances for both your development and production environments in the same region. Each of these instances will have an Elastic IP and a security group attached to them, which will be done via CloudFormation. Your CloudFormation stack for the development environment gets created successfully, but then the production CloudFormation stack fails. Which of the below could be a reason for this?

  • A. You have chosen the wrong tags when creating the instances in both environments.
  • B. You hit the soft limit of 5 EIPs per region when creating the development environment.
  • C. You hit the soft limit for security groups when creating the development environment.
  • D. You didn't choose the production version of the AMI you are using when creating the production stack.

Answer: B

Explanation:
The most viable reason could be that you reached the limit for the number of Elastic IPs in the region.
For more information on AWS EC2 service limits, please refer to the below link:
• http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-resource-limits.html
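A quick way to check whether you are approaching the EIP limit before launching the production stack; a boto3 sketch, assuming the account still exposes the max-elastic-ips / vpc-max-elastic-ips account attributes (verify the attribute names in your account).

```python
import boto3

ec2 = boto3.client("ec2")

# Count the Elastic IPs currently allocated in this region.
in_use = len(ec2.describe_addresses()["Addresses"])

# Read the account-level EIP limits (attribute names assumed).
attrs = ec2.describe_account_attributes(
    AttributeNames=["max-elastic-ips", "vpc-max-elastic-ips"]
)
for attr in attrs["AccountAttributes"]:
    limit = attr["AttributeValues"][0]["AttributeValue"]
    print(f"{attr['AttributeName']}: {in_use} in use, limit {limit}")
```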

NEW QUESTION 18
You are working with a customer who is using Chef Configuration management in their data center. Which service is designed to let the customer leverage existing Chef recipes in AWS?

  • A. Amazon Simple Workflow Service
  • B. AWS Elastic Beanstalk
  • C. AWS CloudFormation
  • D. AWS OpsWorks

Answer: D

Explanation:
AWS OpsWorks is a configuration management service that helps you configure and operate applications of all shapes and sizes using Chef. You can define the application's architecture and the specification of each component including package installation, software configuration and resources
such as storage. Start from templates for common technologies like application servers and databases or build your own to perform any task that can be scripted. AWS OpsWorks includes automation to scale your application based on time or load and dynamic configuration to orchestrate changes as your environment scales.
For more information on OpsWorks, please visit the link:
• https://aws.amazon.com/opsworks/

NEW QUESTION 19
Which of the following is not a supported platform on Elastic Beanstalk?

  • A. Packer Builder
  • B. Go
  • C. Node.js
  • D. JavaSE
  • E. Kubernetes

Answer: E

Explanation:
Below is the list of supported platforms
*Packer Builder
*Single Container Docker
*Multicontainer Docker
*Preconfigured Docker
*Go
*Java SE
*Java with Tomcat
*.NET on Windows Server with IIS
*Node.js
*PHP
*Python
*Ruby
For more information on the supported platforms please refer to the below link
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/concepts.platforms.html

NEW QUESTION 20
Your serverless architecture using AWS API Gateway, AWS Lambda, and AWS DynamoDB experienced a large increase in traffic to a sustained 3000 requests per second, and failure rates dramatically increased. Your requests, during normal operation, last 500 milliseconds on average. Your DynamoDB table did not exceed 50% of provisioned throughput, and table primary keys are designed correctly. What is the most likely issue?

  • A. Your API Gateway deployment is throttling your requests.
  • B. Your AWS API Gateway deployment is bottlenecking on request deserialization.
  • C. You did not request a limit increase on concurrent Lambda function executions.
  • D. You used Consistent Read requests on DynamoDB and are experiencing semaphore lock.

Answer: C

Explanation:
Every Lambda function is allocated with a fixed amount of specific resources regardless of the memory allocation, and each function is allocated with a fixed amount of code storage per function and per account.
By default, AWS Lambda limits the total concurrent executions across all functions within a given region to 1000.
For more information on concurrent executions, please visit the below URL: http://docs.aws.amazon.com/lambda/latest/dg/concurrent-executions.html
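The arithmetic behind option C: required concurrency is roughly request rate multiplied by average duration, so the quick calculation below shows why the default limit is exceeded.

```python
# Required concurrency ~= requests per second * average duration in seconds.
requests_per_second = 3000
average_duration_seconds = 0.5
default_concurrency_limit = 1000   # default per-region account limit

required_concurrency = requests_per_second * average_duration_seconds
print(required_concurrency)                               # 1500.0
print(required_concurrency > default_concurrency_limit)   # True -> throttling
```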

NEW QUESTION 21
Which of the following CloudFormation helper scripts can help install packages on EC2 resources?

  • A. cfn-init
  • B. cfn-signal
  • C. cfn-get-metadata
  • D. cfn-hup

Answer: A

Explanation:
The AWS Documentation mentions
Currently, AWS CloudFormation provides the following helpers:
cfn-init: Used to retrieve and interpret the resource metadata, installing packages, creating files and starting services.
cfn-signal: A simple wrapper to signal an AWS CloudFormation CreationPolicy or WaitCondition, enabling you to synchronize other resources in the stack with the application being ready.
cfn-get-metadata: A wrapper script making it easy to retrieve either all metadata defined for a resource or path to a specific key or subtree of the resource metadata.
cfn-hup: A daemon to check for updates to metadata and execute custom hooks when the changes are detected. For more information on helper scripts, please visit the below URL: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-helper-scripts-reference.html

NEW QUESTION 22
Which of the following are components of the AWS Data Pipeline service? Choose 2 answers from the options given below

  • A. Pipeline definition
  • B. Task Runner
  • C. Task History
  • D. Workflow Runner

Answer: AB

Explanation:
The AWS Documentation mentions the following on AWS Pipeline
The following components of AWS Data Pipeline work together to manage your data: A pipeline definition specifies the business logic of your data management.
A pipeline schedules and runs tasks. You upload your pipeline definition to the pipeline, and then activate the pipeline. You can edit the pipeline definition for a running pipeline and activate the pipeline again for it to take effect. You can deactivate the pipeline, modify a data source, and then activate the pipeline again. When you are finished with your pipeline, you can delete it.
Task Runner polls for tasks and then performs those tasks. For example, Task Runner could copy log files to Amazon S3 and launch Amazon EMR clusters. Task Runner is installed and runs automatically on resources created by your pipeline definitions. You can write a custom task runner application, or you can use the Task Runner application that is provided by AWS Data Pipeline.
For more information on AWS Pipeline, please visit the link: http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/what-is-datapipeline.html

NEW QUESTION 23
You are in charge of designing a number of Cloudformation templates for your organization. You need to ensure that no one can accidentally update the production based resources on the stack during a stack update. How can this be achieved in the most efficient way?

  • A. Create tags for the resources and then create IAM policies to protect the resources.
  • B. Use a stack-based policy to protect the production-based resources.
  • C. Use S3 bucket policies to protect the resources.
  • D. Use MFA to protect the resources

Answer: B

Explanation:
The AWS Documentation mentions
When you create a stack, all update actions are allowed on all resources. By default, anyone with stack update permissions can update all of the resources in the stack. During an update, some resources might require an interruption or be completely replaced, resulting in new physical IDs or completely new storage. You can prevent stack resources from being unintentionally updated or deleted during a stack update by using a stack policy. A stack policy is a JSON document that defines the update actions that can be performed on designated resources.
For more information on protecting stack resources, please visit the below URL: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/protect-stack-resources.html
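A minimal sketch of option B, assuming a production database resource whose logical ID is ProductionDatabase (a hypothetical name): the stack policy denies updates on that resource while allowing everything else, and is applied with boto3.

```python
import json
import boto3

cloudformation = boto3.client("cloudformation")

# Deny update actions on the protected resource, allow them elsewhere.
stack_policy = {
    "Statement": [
        {
            "Effect": "Deny",
            "Action": "Update:*",
            "Principal": "*",
            "Resource": "LogicalResourceId/ProductionDatabase",  # assumed ID
        },
        {
            "Effect": "Allow",
            "Action": "Update:*",
            "Principal": "*",
            "Resource": "*",
        },
    ]
}

cloudformation.set_stack_policy(
    StackName="production-stack",                 # placeholder stack name
    StackPolicyBody=json.dumps(stack_policy),
)
```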

NEW QUESTION 24
Which of the below is not a lifecycle event in OpsWorks?

  • A. Setup
  • B. Uninstall
  • C. Configure
  • D. Shutdown

Answer: B

Explanation:
Below are the lifecycle events of OpsWorks:
1) Setup - This event occurs after a started instance has finished booting.
2) Configure - This event occurs on all of the stack's instances when one of the following occurs:
a) An instance enters or leaves the online state.
b) You associate an Elastic IP address with an instance or disassociate one from an instance.
c) You attach an Elastic Load Balancing load balancer to a layer, or detach one from a layer.
3) Deploy - This event occurs when you run a Deploy command, typically to deploy an application to a set of application server instances.
4) Undeploy - This event occurs when you delete an app or run an Undeploy command to remove an app from a set of application server instances.
5) Shutdown - This event occurs after you direct AWS OpsWorks Stacks to shut an instance down but before the associated Amazon EC2 instance is actually terminated.
For more information on OpsWorks lifecycle events, please visit the below URL:
• http://docs.aws.amazon.com/opsworks/latest/userguide/workingcookbook-events.html

NEW QUESTION 25
A user is accessing RDS from an application. The user has enabled the Multi-AZ feature with the MS SQL RDS DB. During a planned outage, how will AWS ensure that a switch from the DB to a standby replica will not affect access to the application?

  • A. RDS will have an internal IP which will redirect all requests to the new DB
  • B. RDS uses DNS to switch over to the standby replica for a seamless transition
  • C. The switch over changes Hardware so RDS does not need to worry about access
  • D. RDS will have both the DBs running independently and the user has to manually switch over

Answer: B

Explanation:
Amazon RDS Multi-AZ deployments provide enhanced availability and durability for Database (DB) Instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable.
In case of an infrastructure failure (for example, instance hardware failure, storage failure, or network disruption), Amazon RDS performs an automatic failover to the standby, so that you can resume database operations as soon as the failover is complete.
And as per the AWS documentation, the cname is changed to the standby DB when the primary one fails.
Q: What happens during Multi-AZ failover and how long does it take?
"Failover is automatically handled by Amazon RDS so that you can resume database operations as quickly as possible without administrative intervention. When failing over, Amazon RDS simply flips the canonical name record (CNAMC) for your DB instance to point at the standby, which is in turn promoted to become the new primary. We encourage you to follow best practices and implement database connection retry at the application layer".
https://aws.amazon.com/rds/faqs/
Based on this, RDS Multi-AZ uses DNS via the CNAME record, and hence B is the right option. For more information on RDS Multi-AZ, please visit the link:
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html
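Since the FAQ quoted above recommends connection retry at the application layer, here is a small, driver-agnostic retry sketch; connect_to_database is a hypothetical callable standing in for whatever DB driver the application actually uses.

```python
import time

def connect_with_retry(connect_to_database, attempts=5, delay_seconds=2):
    """Retry connecting while the Multi-AZ CNAME flips to the standby."""
    last_error = None
    for attempt in range(attempts):
        try:
            return connect_to_database()
        except Exception as error:   # catch driver-specific errors in practice
            last_error = error
            time.sleep(delay_seconds * (attempt + 1))  # simple linear backoff
    raise last_error
```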

NEW QUESTION 26
......

Recommend!! Get the Full DOP-C01 dumps in VCE and PDF From Surepassexam, Welcome to Download: https://www.surepassexam.com/DOP-C01-exam-dumps.html (New 116 Q&As Version)