DOP-C01 practice exam materials and training for the Amazon Web Services certification, with updated DOP-C01 PDF and VCE materials. Pass the AWS Certified DevOps Engineer - Professional exam today!
Online DOP-C01 free questions and answers, new version:
NEW QUESTION 1
You work for a company that has multiple applications which are very different and built on different programming languages. How can you deploy applications as quickly as possible?
- A. Develop each app in one Docker container and deploy using Elastic Beanstalk
- B. Create a Lambda function deployment package consisting of code and any dependencies
- C. Develop each app in a separate Docker container and deploy using Elastic Beanstalk (Correct)
- D. Develop each app in separate Docker containers and deploy using CloudFormation
Elastic Beanstalk supports the deployment of web applications from Docker containers. With Docker containers, you can define your own runtime environment. You
can choose your own platform, programming language, and any application dependencies (such as package managers or tools), that aren't supported by other
platforms. Docker containers are self-contained and include all the configuration information and software your web application requires to run.
Option A is not an efficient way to use Docker; the entire idea of Docker is that you have a separate environment for each application. Option B is ideal for running code, not for packaging applications and their dependencies. Option D is not ideal because Docker containers are deployed through Elastic Beanstalk rather than CloudFormation.
For more information on Docker and Elastic Beanstalk, please visit the below URL:
• http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker.html
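As an illustration of the single-container approach in the correct answer, a minimal Dockerrun.aws.json (version 1) that Elastic Beanstalk could deploy is sketched below; the image name and port are placeholders, not part of the original question:

```json
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "example-account/example-app:latest",
    "Update": "true"
  },
  "Ports": [
    { "ContainerPort": "80" }
  ]
}
```

Each application gets its own environment with its own Dockerrun.aws.json, keeping the runtimes isolated from one another.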
NEW QUESTION 2
You work for a startup that has developed a new photo-sharing application for mobile devices. Over recent months your application has increased in popularity; this has resulted in a decrease in the performance of the application due to the increased load. Your application has a two-tier architecture that is composed of an Auto Scaling PHP application tier and a MySQL RDS instance initially deployed with AWS CloudFormation. Your Auto Scaling group has a min value of 4 and a max value of 8. The desired capacity is now at 8 because of the high CPU utilization of the instances. After some analysis, you are confident that the performance issues stem from a constraint in CPU capacity, although memory utilization remains low. You therefore decide to move from the general-purpose M3 instances to the compute-optimized C3 instances. How would you deploy this change while minimizing any interruption to your end users?
- A. Sign into the AWS Management Console, copy the old launch configuration, and create a new launch configuration that specifies the C3 instance type. Update the Auto Scaling group with the new launch configuration. Auto Scaling will then update the instance type of all running instances.
- B. Sign into the AWS Management Console, and update the existing launch configuration with the new C3 instance type. Add an UpdatePolicy attribute to your Auto Scaling group that specifies AutoScalingRollingUpdate.
- C. Update the launch configuration specified in the AWS CloudFormation template with the new C3 instance type. Run a stack update with the new template. Auto Scaling will then update the instances with the new instance type.
- D. Update the launch configuration specified in the AWS CloudFormation template with the new C3 instance type. Also add an UpdatePolicy attribute to your Auto Scaling group that specifies AutoScalingRollingUpdate. Run a stack update with the new template.
The AWS::AutoScaling::AutoScalingGroup resource supports an UpdatePolicy attribute. This is used to define how an Auto Scaling group resource is updated when an update to the CloudFormation stack occurs. A common approach to updating an Auto Scaling group is to perform a rolling update, which is done by specifying the AutoScalingRollingUpdate policy. This retains the same Auto Scaling group and replaces old instances with new ones, according to the parameters specified. For more information on rolling updates, please visit the below link:
• https://aws.amazon.com/premiumsupport/knowledge-center/auto-scaling-group-rolling-updates/
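A sketch of the UpdatePolicy described above; the launch configuration reference and the batch parameters are illustrative assumptions, not values from the question:

```yaml
Resources:
  WebServerGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      MinSize: '4'
      MaxSize: '8'
      LaunchConfigurationName: !Ref LaunchConfig   # assumed to be defined elsewhere in the template
      AvailabilityZones: !GetAZs ''
    UpdatePolicy:
      AutoScalingRollingUpdate:
        MinInstancesInService: 4    # keep serving capacity while old instances are replaced
        MaxBatchSize: 2             # replace two instances at a time
        PauseTime: PT5M             # wait up to 5 minutes between batches
```

When the launch configuration's InstanceType is changed to a C3 type and the stack is updated, CloudFormation replaces the instances batch by batch instead of all at once.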
NEW QUESTION 3
Which of the following is not a rolling type update that is available for Configuration Updates in the Elastic Beanstalk service?
- A. Rolling based on Health
- B. Rolling based on Instances
- C. Immutable
- D. Rolling based on time
When you go to the configuration of your Elastic Beanstalk environment, below are the updates that are possible
The AWS Documentation mentions
1) With health-based rolling updates. Elastic Beanstalk waits until instances in a batch pass health checks before moving on to the next batch.
2) For time-based rolling updates, you can configure the amount of time that Elastic Beanstalk waits after completing the launch of a batch of instances before moving on to the next batch. This pause time allows your application to bootstrap and start serving requests.
3) Immutable environment updates are an alternative to rolling updates that ensure that configuration changes that require replacing instances are applied efficiently and safely. If an immutable environment update fails, the rollback process requires only terminating an Auto Scaling group. A failed rolling update, on the other hand, requires performing an additional rolling update to roll back the changes.
For more information on Rolling updates for Elastic beanstalk configuration updates, please visit the below URL:
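The rolling update type can be selected per environment through an .ebextensions configuration file. The following is a sketch under assumed batch values; the namespace and option names are from the Elastic Beanstalk configuration options reference:

```yaml
# .ebextensions/rolling-updates.config
option_settings:
  aws:autoscaling:updatepolicy:rollingupdate:
    RollingUpdateEnabled: true
    RollingUpdateType: Health    # alternatives: Time, Immutable
    MaxBatchSize: 2
    MinInstancesInService: 1
```

With RollingUpdateType set to Health, Elastic Beanstalk waits for each batch to pass health checks before continuing, matching point 1 above.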
NEW QUESTION 4
You are planning on using encrypted snapshots in the design of your AWS infrastructure. Which of the following statements is true with regards to EBS encryption?
- A. Snapshottingan encrypted volume makes an encrypted snapshot; restoring an encrypted snapshot creates an encrypted volume when specified / requested.
- B. Snapshotting an encrypted volume makes an encrypted snapshot when specified / requested; restoring an encrypted snapshot creates an encrypted volume when specified / requested.
- C. Snapshotting an encrypted volume makes an encrypted snapshot; restoring an encrypted snapshot always creates an encrypted volume.
- D. Snapshotting an encrypted volume makes an encrypted snapshot when specified / requested; restoring an encrypted snapshot always creates an encrypted volume.
Amazon EBS encryption offers you a simple encryption solution for your EBS volumes without the need for you to build, maintain, and secure your own key management infrastructure. When you create an encrypted EBS volume and attach it to a supported instance type, the following types of data are encrypted:
• Data at rest inside the volume
• All data moving between the volume and the instance
• All snapshots created from the volume
Snapshots that are taken from encrypted volumes are automatically encrypted. Volumes that are created from encrypted snapshots are also automatically encrypted.
For more information on EBS encryption, please visit the below URL:
• http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html
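As a sketch of how an encrypted volume is declared in CloudFormation (the Availability Zone and size are placeholder values):

```yaml
Resources:
  EncryptedVolume:
    Type: AWS::EC2::Volume
    Properties:
      AvailabilityZone: us-east-1a   # must match the AZ of the instance it will attach to
      Size: 100
      Encrypted: true   # data at rest, data in transit to the instance, and all snapshots are encrypted
```

Any snapshot taken of this volume is encrypted automatically, and restoring that snapshot always yields an encrypted volume.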
NEW QUESTION 5
What is web identity federation?
- A. Use of an identity provider like Google or Facebook to become an AWS IAM User.
- B. Use of an identity provider like Google or Facebook to exchange for temporary AWS security credentials.
- C. Use of AWS IAM User tokens to log in as a Google or Facebook user.
- D. Use the STS service to create a user on AWS which will allow them to log in from a Facebook or Google app.
With web identity federation, you don't need to create custom sign-in code or manage your own user identities. Instead, users of your app can sign in using a well-known identity provider (IdP) — such as Login with Amazon, Facebook, Google, or any other OpenID Connect (OIDC)-compatible IdP — receive an authentication token, and then exchange that token for temporary security credentials in AWS that map to an IAM role with permissions to use the resources in your AWS account. Using an IdP helps you keep your AWS account secure, because you don't have to embed and distribute long-term security credentials with your application. For more information on web identity federation please refer to the below link:
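The exchange happens through the sts:AssumeRoleWithWebIdentity action, which the IAM role must allow in its trust policy. A sketch using Google as the provider follows; the audience (client ID) value is a placeholder:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Federated": "accounts.google.com" },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "accounts.google.com:aud": "example-client-id.apps.googleusercontent.com"
        }
      }
    }
  ]
}
```

The app presents the token from the IdP to STS and receives temporary credentials scoped to this role, so no long-term keys ship with the application.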
NEW QUESTION 6
Which of the following is incorrect when it comes to using the instances in an OpsWorks stack?
- A. In a stack you can use a mix of both Windows and Linux operating systems
- B. You can start and stop instances manually in a stack
- C. You can use custom AMIs as long as they are based on one of the AWS OpsWorks Stacks-supported AMIs
- D. You can use time-based automatic scaling with any stack
The AWS documentation mentions the following about OpsWorks stacks:
• A stack's instances can run either Linux or Windows. A stack can have different Linux versions or distributions on different instances, but you cannot mix Linux and Windows instances.
• You can use custom AMIs (Amazon Machine Images), but they must be based on one of the AWS OpsWorks Stacks-supported AMIs.
• You can start and stop instances manually or have AWS OpsWorks Stacks automatically scale the number of instances. You can use time-based automatic scaling with any stack; Linux stacks also can use load-based scaling.
• In addition to using AWS OpsWorks Stacks to create Amazon EC2 instances, you can also register instances with a Linux stack that were created outside of AWS OpsWorks Stacks.
For more information on OpsWorks stacks, please visit the below link: http://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-os.html
NEW QUESTION 7
Which of the following commands for the Elastic Beanstalk CLI can be used to create the current application in the specified environment?
- A. eb create
- B. eb start
- C. eb env
- D. eb app
Differences from Version 3 of EB CLI
EB is a command line interface (CLI) tool for Elastic Beanstalk that you can use to deploy applications quickly and more easily. The latest version of EB was introduced by Elastic Beanstalk in EB CLI 3. Although Elastic Beanstalk still supports EB 2.6 for customers who previously installed and continue to use it, you should migrate to the latest version of EB CLI 3, as it can manage environments that you launched using EB CLI 2.6 or earlier versions of EB CLI. EB CLI automatically retrieves settings from an environment created using EB if the environment is running. Note that EB CLI 3 does not store option settings locally, as in earlier versions.
EB CLI introduces the commands eb create, eb deploy, eb open, eb console, eb scale, eb setenv, eb config, eb terminate, eb clone, eb list, eb use, eb printenv, and eb ssh. In EB CLI 3.1 or later, you can also use the eb swap command. In EB CLI 3.2 only, you can use the eb abort, eb platform, and eb upgrade commands. In addition to these new commands, EB CLI 3 commands differ from EB CLI 2.6 commands in several cases:
1. eb init - Use eb init to create an .elasticbeanstalk directory in an existing project directory and create a new Elastic Beanstalk application for the project. Unlike with previous versions, EB CLI 3 and later versions do not prompt you to create an environment.
2. eb start - EB CLI 3 does not include the command eb start. Use eb create to create an environment.
3. eb stop - EB CLI 3 does not include the command eb stop. Use eb terminate to completely terminate an environment and clean up.
4. eb push and git aws.push - EB CLI 3 does not include the commands eb push or git aws.push. Use eb deploy to update your application code.
5. eb update - EB CLI 3 does not include the command eb update. Use eb config to update an environment.
6. eb branch - EB CLI 3 does not include the command eb branch.
For more information about using EB CLI 3 commands to create and manage an application, see the EB CLI Command Reference. For a command reference for EB 2.6, see EB CLI 2 Commands. For a walkthrough of how to deploy a sample application using EB CLI 3, see Managing Elastic Beanstalk environments with the EB CLI. For a walkthrough of how to deploy a sample application using eb 2.6, see Getting Started with EB. For a walkthrough of how to use EB 2.6 to map a Git branch to a specific environment, see Deploying a Git Branch to a Specific Environment. https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb-cli.html#eb-cli2-differences Note: Additionally, EB CLI 2.6 has been deprecated. It has been replaced by the AWS CLI. https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb-cli3.html We will replace this question soon.
NEW QUESTION 8
You were just hired as a DevOps Engineer for a startup. Your startup uses AWS for 100% of their infrastructure. They currently have no automation at all for deployment, and they have had many failures while trying to deploy to production. The company has told you deployment process risk mitigation is the most important thing now, and you have a lot of budget for tools and AWS resources.
Their stack includes a 2-tier API with data stored in DynamoDB or S3, depending on type. The Compute layer is EC2 in Auto Scaling Groups. They use Route53 for DNS pointing to an ELB. An ELB balances load across the EC2 instances. The scaling group properly varies between 4 and 12 EC2 servers. Which of the following approaches, given this company's stack and their priorities, best meets the company's needs?
- A. Model the stack in AWS Elastic Beanstalk as a single Application with multiple Environments. Use Elastic Beanstalk's Rolling Deploy option to progressively roll out application code changes when promoting across environments.
- B. Model the stack in three CloudFormation templates: data layer, compute layer, and networking layer. Write stack deployment and integration testing automation following Blue-Green methodologies. (Correct)
- C. Model the stack in AWS OpsWorks as a single Stack, with 1 compute layer and its associated ELB. Use Chef and App Deployments to automate Rolling Deployment.
- D. Model the stack in 1 CloudFormation template, to ensure consistency and dependency graph resolution. Write deployment and integration testing automation following Rolling Deployment methodologies.
Here you are using 2 of the best practices for deployment: one is Blue-Green Deployments and the other is using nested CloudFormation stacks.
The AWS Documentation mentions the below on nested stacks:
As your infrastructure grows, common patterns can emerge in which you declare the same components in each of your templates. You can separate out these common components and create dedicated templates for them. That way, you can mix and match different templates but use nested stacks to create a single, unified stack. Nested stacks are stacks that create other stacks. To create nested stacks, use the AWS::CloudFormation::Stack resource in your template to reference other templates.
For more information on CloudFormation best practices, please visit the link:
• http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/best-practices.html For more information on Blue-Green Deployment, please visit the link:
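A sketch of the nested-stack layering described in the answer; the bucket, template names, and output name are placeholder assumptions:

```yaml
Resources:
  NetworkLayer:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://s3.amazonaws.com/example-bucket/network-layer.yaml
  ComputeLayer:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://s3.amazonaws.com/example-bucket/compute-layer.yaml
      Parameters:
        # wire the network stack's output into the compute stack
        VpcId: !GetAtt NetworkLayer.Outputs.VpcId
```

Each layer stays in its own template, but a single parent stack update drives the whole deployment, which is what makes automated blue-green rollouts manageable.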
NEW QUESTION 9
You are currently using Elastic Beanstalk to host your production environment. You need to roll out updates to your application hosted on this environment. This is a critical application, which is why there is a requirement that the rollback, if required, should be carried out with the least amount of downtime. Which of the following deployment strategies would ideally help achieve this purpose?
- A. Create a CloudFormation template with the same resources as those in the Elastic Beanstalk environment. If the deployment fails, deploy the CloudFormation template.
- B. Use Rolling updates in Elastic Beanstalk so that if the deployment fails, the rolling updates feature would roll back to the last deployment.
- C. Create another parallel environment in Elastic Beanstalk. Use the Swap URL feature.
- D. Create another parallel environment in Elastic Beanstalk. Create a new Route53 domain name for the new environment and release that URL to the users.
Since the requirement is to have the least amount of downtime, the ideal way is to create a blue-green deployment environment and then use the Swap URL feature to swap environments for the new deployment, and then swap back in case the deployment fails.
The AWS Documentation mentions the following on the Swap URL feature of Elastic Beanstalk:
Because Elastic Beanstalk performs an in-place update when you update your application versions, your application may become unavailable to users for a short period of time. It is possible to avoid this downtime by performing a blue/green deployment, where you deploy the new version to a separate environment, and then swap CNAMEs of the two environments to redirect traffic to the new version instantly.
NEW QUESTION 10
When one creates an encrypted EBS volume and attaches it to a supported instance type, which of the following data types are encrypted?
Choose 3 answers from the options below
- A. Data at rest inside the volume
- B. All data copied from the EBS volume to S3
- C. All data moving between the volume and the instance
- D. All snapshots created from the volume
This is clearly given in the AWS documentation. Amazon EBS Encryption:
Amazon EBS encryption offers a simple encryption solution for your EBS volumes without the need to build, maintain, and secure your own key management infrastructure. When you create an encrypted EBS volume and attach it to a supported instance type, the following types of data are encrypted:
• Data at rest inside the volume
• All data moving between the volume and the instance
• All snapshots created from the volume
• All volumes created from those snapshots
For more information on EBS encryption, please refer to the below URL: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html
NEW QUESTION 11
You currently have the following setup in AWS
1) An Elastic Load Balancer
2) Auto Scaling Group which launches EC2 Instances
3) AMIs with your code pre-installed
You want to deploy the updates of your app to only a certain number of users. You want to have a cost-effective solution. You should also be able to revert back quickly. Which of the below solutions is the most feasible one?
- A. Create a second ELB, and a new Auto Scaling Group assigned a new Launch Configuration. Create a new AMI with the updated app. Use Route53 Weighted Round Robin records to adjust the proportion of traffic hitting the two ELBs.
- B. Create new AMIs with the new app. Then use the new EC2 instances in half proportion to the older instances.
- C. Redeploy with AWS Elastic Beanstalk and Elastic Beanstalk versions. Use Route 53 Weighted Round Robin records to adjust the proportion of traffic hitting the two ELBs.
- D. Create a full second stack of instances, cut the DNS over to the new stack of instances, and change the DNS back if a rollback is needed.
The Weighted Routing policy of Route53 can be used to direct a proportion of traffic to your application. The best option is to create a second ELB, attach the new Auto Scaling Group and then use Route53 to divert the traffic.
Option B is wrong because just having EC2 instances running with the new code will not help.
Option C is wrong because Elastic Beanstalk is good for development environments, and also there is no mention of having 2 environments where environment URLs can be swapped.
Option D is wrong because you still need Route53 to split the traffic.
For more information on Route53 routing policies, please refer to the below link: http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html
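A sketch of the weighted record pair behind option A; the zone, record names, ELB DNS names, and weights are placeholder assumptions:

```yaml
Resources:
  BlueRecord:
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneName: example.com.
      Name: app.example.com.
      Type: CNAME
      TTL: '60'
      SetIdentifier: blue-stack
      Weight: 90   # 90% of traffic stays on the existing ELB
      ResourceRecords:
        - existing-elb.us-east-1.elb.amazonaws.com
  GreenRecord:
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneName: example.com.
      Name: app.example.com.
      Type: CNAME
      TTL: '60'
      SetIdentifier: green-stack
      Weight: 10   # 10% of traffic tries the new ELB
      ResourceRecords:
        - new-elb.us-east-1.elb.amazonaws.com
```

Reverting is just setting the green weight back to 0, which is what makes this approach quick to roll back.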
NEW QUESTION 12
You have an Autoscaling Group configured to launch EC2 Instances for your application. But you notice that the Autoscaling Group is not launching instances in the right proportion. In fact instances are being launched too fast. What can you do to mitigate this issue? Choose 2 answers from the options given below
- A. Adjust the cooldown period set for the Autoscaling Group
- B. Set a custom metric which monitors a key application functionality for the scale-in and scale-out process.
- C. Adjust the CPU threshold set for the Autoscaling scale-in and scale-out process.
- D. Adjust the Memory threshold set for the Autoscaling scale-in and scale-out process.
The Auto Scaling cooldown period is a configurable setting for your Auto Scaling group that helps to ensure that Auto Scaling doesn't launch or terminate additional instances before the previous scaling activity takes effect.
For more information on the cool down period, please refer to the below link:
• http://docs.aws.amazon.com/autoscaling/latest/userguide/Cooldown.html
Also it is better to monitor the application based on a key feature and then trigger the scale-in and scale-out feature accordingly. In the question, there is no mention of CPU or memory causing the issue.
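The cooldown is a single property on the Auto Scaling group. A sketch in CloudFormation, with placeholder sizes and an assumed launch configuration reference:

```yaml
Resources:
  AppServerGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      MinSize: '2'
      MaxSize: '10'
      Cooldown: '300'   # wait 300 seconds after a scaling activity before allowing another
      LaunchConfigurationName: !Ref LaunchConfig   # assumed to be defined elsewhere
      AvailabilityZones: !GetAZs ''
```

Raising the cooldown gives each scaling activity time to take effect before the next launch, which directly addresses instances being launched too fast.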
NEW QUESTION 13
You are in charge of designing a Cloudformation template which deploys a LAMP stack. After deploying a stack, you see that the status of the stack is showing as CREATE_COMPLETE, but the apache server is still not up and running and is experiencing issues while starting up. You want to ensure that the stack creation only shows the status of CREATE_COMPLETE after all resources defined in the stack are up and running. How can you achieve this?
Choose 2 answers from the options given below.
- A. Define a stack policy which defines that all underlying resources should be up and running before showing a status of CREATE_COMPLETE.
- B. Use lifecycle hooks to mark the completion of the creation and configuration of the underlying resource.
- C. Use the CreationPolicy to ensure it is associated with the EC2 Instance resource.
- D. Use the CFN helper scripts to signal once the resource configuration is complete.
The AWS Documentation mentions
When you provision an Amazon EC2 instance in an AWS CloudFormation stack, you might specify additional actions to configure the instance, such as installing software packages or bootstrapping applications. Normally, CloudFormation proceeds with stack creation after the instance has been successfully created. However, you can use a CreationPolicy so that CloudFormation proceeds with stack creation only after your configuration actions are done. That way you'll know your applications are ready to go after stack creation succeeds.
For more information on the Creation Policy, please visit the below url https://aws.amazon.com/blogs/devops/use-a-creationpolicy-to-wait-for-on-instance-configurations/
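Combining both answers, a sketch of a CreationPolicy paired with the cfn-signal helper script; the AMI ID, instance type, and the Apache install commands are illustrative assumptions:

```yaml
Resources:
  WebServer:
    Type: AWS::EC2::Instance
    CreationPolicy:
      ResourceSignal:
        Count: 1
        Timeout: PT15M   # fail the stack if no signal arrives within 15 minutes
    Properties:
      ImageId: ami-0123456789abcdef0   # placeholder AMI
      InstanceType: t3.micro
      UserData:
        Fn::Base64: !Sub |
          #!/bin/bash
          yum install -y httpd aws-cfn-bootstrap
          systemctl start httpd
          # signal success only after Apache has actually started
          /opt/aws/bin/cfn-signal -e $? --stack ${AWS::StackName} --resource WebServer --region ${AWS::Region}
```

The stack only reaches CREATE_COMPLETE once the instance sends a success signal, so a broken Apache startup surfaces as a stack failure instead of a silent success.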
NEW QUESTION 14
Your company has an application hosted on an Elastic beanstalk environment. You have been instructed that whenever application changes occur and new versions need to be deployed that the fastest deployment approach is employed. Which of the following deployment mechanisms will fulfil this requirement?
- A. All at once
- B. Rolling
- C. Immutable
- D. Rolling with additional batch
The AWS documentation's comparison table of deployment methods shows that the "All at once" method has the shortest deployment time, since the new version is deployed to all instances simultaneously (at the cost of a brief outage).
For more information on Elastic Beanstalk deployments, please refer to the below link: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.deploy-existing-version.html
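The deployment policy can be pinned in an .ebextensions file. A sketch using the aws:elasticbeanstalk:command namespace (the filename is an assumption):

```yaml
# .ebextensions/deploy.config
option_settings:
  aws:elasticbeanstalk:command:
    DeploymentPolicy: AllAtOnce   # fastest; alternatives: Rolling, RollingWithAdditionalBatch, Immutable
```

"All at once" trades a short window of unavailability for the minimum total deployment time, which is why it answers this question.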
NEW QUESTION 15
You have a large number of web servers in an Auto Scaling group behind a load balancer. On an hourly basis, you want to filter and process the logs to collect data on unique visitors, and then put that data in a durable data store in order to run reports. Web servers in the Auto Scaling group are constantly launching and terminating based on your scaling policies, but you do not want to lose any of the log data from these servers during a stop/termination initiated by a user or by Auto Scaling. What two approaches will meet these requirements? Choose two answers from the options given below.
- A. Install an Amazon CloudWatch Logs Agent on every web server during the bootstrap process. Create a CloudWatch log group and define Metric Filters to create custom metrics that track unique visitors from the streaming web server logs. Create a scheduled task on an Amazon EC2 instance that runs every hour to generate a new report based on the CloudWatch custom metrics. (Correct)
- B. On the web servers, create a scheduled task that executes a script to rotate and transmit the logs to Amazon Glacier. Ensure that the operating system shutdown procedure triggers a logs transmission when the Amazon EC2 instance is stopped/terminated. Use Amazon Data Pipeline to process the data in Amazon Glacier and run reports every hour.
- C. On the web servers, create a scheduled task that executes a script to rotate and transmit the logs to an Amazon S3 bucket. Ensure that the operating system shutdown procedure triggers a logs transmission when the Amazon EC2 instance is stopped/terminated. Use AWS Data Pipeline to move log data from the Amazon S3 bucket to Amazon Redshift in order to process and run reports every hour. (Correct)
- D. Install an AWS Data Pipeline Logs Agent on every web server during the bootstrap process. Create a log group object in AWS Data Pipeline, and define Metric Filters to move processed log data directly from the web servers to Amazon Redshift and run reports every hour.
You can use the CloudWatch Logs agent installer on an existing EC2 instance to install and configure the CloudWatch Logs agent.
For more information, please visit the below link:
• http://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/QuickStartEC2Instance.html
You can publish your own metrics to CloudWatch using the AWS CLI or an API. For more information, please visit the below link:
• http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/publishingMetrics.html
Amazon Redshift is a fast, fully managed data warehouse that makes it simple and cost-effective to analyze all your data using standard SQL and your existing Business Intelligence (BI) tools. It allows you to run complex analytic queries against petabytes of structured data, using sophisticated query optimization, columnar storage on high-performance local disks, and massively parallel query execution. Most results come back in seconds. For more information on copying data from S3 to Redshift, please refer to the below link:
• http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-copydata-redshift.html
NEW QUESTION 16
One of your instances is reporting an unhealthy system status check. However, this is not something you should have to monitor and repair on your own. How might you automate the repair of the system status check failure in an AWS environment? Choose the correct answer from the options given below
- A. Create CloudWatch alarms for the StatusCheckFailed_System metric and select the EC2 action "Recover the instance".
- B. Write a script that queries the EC2 API for each instance status check.
- C. Write a script that periodically shuts down and starts instances based on certain stats.
- D. Implement a third-party monitoring tool.
Using Amazon CloudWatch alarm actions, you can create alarms that automatically stop, terminate, reboot, or recover your EC2 instances. You can use the stop or terminate actions to help you save money when you no longer need an instance to be running. You can use the reboot and recover actions to automatically reboot those instances or recover them onto new hardware if a system impairment occurs.
For more information on using alarm actions, please refer to the below link: http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/UsingAlarmActions.html
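A sketch of such a recovery alarm in CloudFormation; the instance reference (WebServer) and alarm thresholds are illustrative assumptions:

```yaml
Resources:
  RecoveryAlarm:
    Type: AWS::CloudWatch::Alarm
    Properties:
      AlarmDescription: Recover the instance when the system status check fails
      Namespace: AWS/EC2
      MetricName: StatusCheckFailed_System
      Dimensions:
        - Name: InstanceId
          Value: !Ref WebServer   # assumes an AWS::EC2::Instance resource named WebServer
      Statistic: Minimum
      Period: 60
      EvaluationPeriods: 5        # five consecutive failing minutes
      Threshold: 0
      ComparisonOperator: GreaterThanThreshold
      AlarmActions:
        - !Sub arn:aws:automate:${AWS::Region}:ec2:recover
```

The recover action migrates the instance to healthy hardware automatically, with no manual intervention, which is exactly what the question asks for.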
NEW QUESTION 17
One of your engineers has written a web application in the Go Programming language and has asked your DevOps team to deploy it to AWS. The application code is hosted on a Git repository.
What are your options? (Select Two)
- A. Create a new AWS Elastic Beanstalk application and configure a Go environment to host your application. Using Git, check out the latest version of the code; once the local repository for Elastic Beanstalk is configured, use the "eb create" command to create an environment and then use the "eb deploy" command to deploy the application. (Correct)
- B. Write a Dockerfile that installs the Go base image and uses Git to fetch your application. Create a new AWS OpsWorks stack that contains a Docker layer that uses the Dockerrun.aws.json file to deploy your container and then use the Dockerfile to automate the deployment.
- C. Write a Dockerfile that installs the Go base image and fetches your application using Git. Create a new AWS Elastic Beanstalk application and use this Dockerfile to automate the deployment. (Correct)
- D. Write a Dockerfile that installs the Go base image and fetches your application using Git. Create an AWS CloudFormation template that creates and associates an AWS::EC2::Instance resource type with an AWS::EC2::Container resource type.
OpsWorks works with Chef recipes and not with Docker containers, so Option B is invalid. There is no AWS::EC2::Container resource for CloudFormation, so Option D is invalid.
Below is the documentation on Elastic Beanstalk and Docker:
Elastic Beanstalk supports the deployment of web applications from Docker containers. With Docker containers, you can define your own runtime environment. You can choose your own platform, programming language, and any application dependencies (such as package managers or tools) that aren't supported by other platforms. Docker containers are self-contained and include all the configuration information and software your web application requires to run.
For more information on Elastic Beanstalk and Docker, please visit the link: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker.html
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb-cli3-getting-started.html https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb3-cli-git.html
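A sketch of the kind of Dockerfile option C describes, using a multi-stage build; the Go version, port, and the assumption that the Git checkout happens before the image build (e.g. in CI) are all illustrative choices:

```dockerfile
# Stage 1: compile the Go binary
FROM golang:1.21 AS build
WORKDIR /src
# the application source is assumed to be checked out from Git before the build
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# Stage 2: ship a minimal runtime image
FROM alpine:3.19
COPY --from=build /app /app
EXPOSE 8080
CMD ["/app"]
```

Elastic Beanstalk's Docker platform builds this Dockerfile on each deployment, so the same file works for both the eb CLI workflow (option A's environment) and the container workflow (option C).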
NEW QUESTION 18
You are currently using SQS to pass messages to EC2 Instances. You need to pass messages which are greater than 5 MB in size. Which of the following can help you accomplish this?
- A. Use Kinesis as a buffer stream for message bodies. Store the checkpoint id for the placement in the Kinesis Stream in SQS.
- B. Use the Amazon SQS Extended Client Library for Java and Amazon S3 as a storage mechanism for message bodies. (Correct)
- C. Use SQS's support for message partitioning and multi-part uploads on Amazon S3.
- D. Use AWS EFS as a shared pool storage medium. Store filesystem pointers to the files on disk in the SQS message bodies.
The AWS documentation mentions the following
You can manage Amazon SQS messages with Amazon S3. This is especially useful for storing and consuming messages with a message size of up to 2 GB. To manage
Amazon SQS messages with Amazon S3, use the Amazon SQS Extended Client Library for Java. Specifically, you use this library to:
Specify whether messages are always stored in Amazon S3 or only when a message's size exceeds 256 KB.
Send a message that references a single message object stored in an Amazon S3 bucket. Get the corresponding message object from an Amazon S3 bucket.
Delete the corresponding message object from an Amazon S3 bucket. For more information on SQS and sending larger messages please visit the link
NEW QUESTION 19
You need to perform ad-hoc analysis on log data, including searching quickly for specific error codes and reference numbers. Which should you evaluate first?
- A. AWS Elasticsearch Service
- B. AWS Redshift
- C. AWS EMR
- D. AWS DynamoDB
Amazon Elasticsearch Service makes it easy to deploy, operate, and scale Elasticsearch for log analytics, full text search, application monitoring, and more. Amazon Elasticsearch Service is a fully managed service that delivers Elasticsearch's easy-to-use APIs and real-time capabilities along with the availability, scalability, and security required by production workloads. The service offers built-in integrations with Kibana, Logstash, and AWS services including Amazon Kinesis Firehose, AWS Lambda, and Amazon CloudWatch so that you can go from raw data to actionable insights quickly. For more information on the Elasticsearch service, please refer to the below link:
NEW QUESTION 20
As part of your deployment process, you are configuring your continuous integration (CI) system to build AMIs. You want to build them in an automated manner that is also cost-efficient. Which method should you use?
- A. Attach an Amazon EBS volume to your CI instance, build the root file system of your image on the volume, and use the CreateImage API call to create an AMI out of this volume.
- B. Have the CI system launch a new instance, bootstrap the code and apps onto the instance and create an AMI out of it.
- C. Upload all contents of the image to Amazon S3, launch the base instance, download all of the contents from Amazon S3 and create the AMI.
- D. Have the CI system launch a new spot instance, bootstrap the code and apps onto the instance and create an AMI out of it.
The AWS documentation mentions the following:
If your organization uses Jenkins software in a CI/CD pipeline, you can add Automation as a post-build step to pre-install application releases into Amazon Machine Images (AMIs). You can also use the Jenkins scheduling feature to call Automation and create your own operating system (OS) patching cadence.
For more information on Automation with Jenkins, please visit the link:
• http://docs.aws.amazon.com/systems-manager/latest/userguide/automation-jenkins.html
• https://wiki.jenkins.io/display/JENKINS/Amazon+EC2+Plugin
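An Automation workflow of the kind a Jenkins post-build step would invoke can be sketched as an SSM Automation document. This is an abbreviated, hedged outline: the step inputs are trimmed and the parameter name SourceAmiId is illustrative, but the schema version and the aws:runInstances / aws:createImage / aws:terminateInstances actions are the standard Automation building blocks for baking an AMI.

```yaml
# Sketch of an SSM Automation document (schemaVersion 0.3) for AMI baking.
# Inputs are abbreviated; a real document would add instance type, subnet,
# a bootstrap step (e.g. aws:runCommand), and error handling.
schemaVersion: '0.3'
parameters:
  SourceAmiId:
    type: String
mainSteps:
  - name: launchInstance
    action: aws:runInstances
    inputs:
      ImageId: '{{ SourceAmiId }}'
  - name: createImage
    action: aws:createImage
    inputs:
      InstanceId: '{{ launchInstance.InstanceIds }}'
      ImageName: 'ci-build-{{ global:DATE_TIME }}'
  - name: terminateInstance
    action: aws:terminateInstances
    inputs:
      InstanceIds: [ '{{ launchInstance.InstanceIds }}' ]
```

Terminating the build instance at the end is what keeps the pipeline cost-efficient: you pay only for the minutes the bake actually takes.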
NEW QUESTION 21
You have a set of EC2 Instances hosting an nginx server and a web application that is used by a set of users in your organization. After a recent application version upgrade, the instance runs into technical issues and needs an immediate restart. This does not give you enough time to inspect the cause of the issue on the server. Which of the following options, if implemented prior to the incident, would have assisted in detecting the underlying cause of the issue?
- A. Enable detailed monitoring and check the CloudWatch metrics to see the cause of the issue.
- B. Create a snapshot of the EBS volume before restart, attach it to another instance as a volume and then diagnose the issue.
- C. Stream all the data to Amazon Kinesis and then analyze the data in real time.
- D. Install the CloudWatch Logs agent on the instance and send all the logs to CloudWatch Logs.
The AWS documentation mentions the following:
You can publish log data from Amazon EC2 instances running Linux or Windows Server, and logged events from AWS CloudTrail. CloudWatch Logs can consume logs from resources in any region, but you can only view the log data in the CloudWatch console in the regions where CloudWatch Logs is supported.
Option A is invalid, as detailed monitoring will only provide more information about the performance metrics of the instances, volumes, etc., and will not be able to provide full information regarding technical issues.
Option B is incorrect: a snapshot created prior to the update might have been useful, but not one created after the incident.
Option C is incorrect: here we are dealing with an issue in the underlying application that handles the data, so this solution will not help.
For more information on Cloudwatch logs, please refer to the below link:
• http://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/StartTheCWLAgent.html
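A minimal agent configuration for the scenario above might look like the following. The log group and datetime format are examples, not prescribed values; the section/key layout matches the classic awslogs agent's configuration file.

```ini
# Sketch of /etc/awslogs/awslogs.conf shipping the nginx error log to
# CloudWatch Logs. Group/stream names and the datetime format are examples.
[general]
state_file = /var/lib/awslogs/agent-state

[/var/log/nginx/error.log]
file = /var/log/nginx/error.log
log_group_name = nginx-error-logs
log_stream_name = {instance_id}
datetime_format = %Y/%m/%d %H:%M:%S
```

With this in place before the incident, the error log would have survived the restart in CloudWatch Logs, ready for inspection.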
NEW QUESTION 22
You have the requirement to get a snapshot of the current configuration of the resources in your AWS Account. Which of the following services can be used for this purpose?
- A. AWS CodeDeploy
- B. AWS Trusted Advisor
- C. AWS Config
- D. AWS IAM
The AWS Documentation mentions the following:
With AWS Config, you can do the following:
• Evaluate your AWS resource configurations for desired settings.
• Get a snapshot of the current configurations of the supported resources that are associated with your AWS account.
• Retrieve configurations of one or more resources that exist in your account.
• Retrieve historical configurations of one or more resources.
• Receive a notification whenever a resource is created, modified, or deleted.
• View relationships between resources. For example, you might want to find all resources that use a particular security group.
For more information on AWS Config, please visit the below URL: http://docs.aws.amazon.com/config/latest/developerguide/WhatIsConfig.html
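The snapshot AWS Config returns is a list of configuration items; the sketch below shows their general shape and pulls out the identifying fields. The field names (resourceType, resourceId, configurationItemStatus) are part of the real configuration-item schema, but the sample values are made up for illustration.

```python
# Illustrative sketch of an AWS Config configuration item and a helper that
# summarizes it. Field names follow the configuration-item schema; the
# values below are fabricated examples, not real resources.
import json

sample_item = json.dumps({
    "resourceType": "AWS::EC2::SecurityGroup",
    "resourceId": "sg-0123456789abcdef0",
    "awsRegion": "us-east-1",
    "configurationItemStatus": "ResourceDiscovered",
    "configuration": {"groupName": "web-tier", "ipPermissions": []},
})

def summarize(item_json: str) -> str:
    """One-line summary: type, id, and discovery status."""
    item = json.loads(item_json)
    return f'{item["resourceType"]} {item["resourceId"]} ({item["configurationItemStatus"]})'
```

In practice such items arrive via the Config delivery channel (an S3 bucket) or the service's query APIs, which is exactly the "snapshot of current configurations" capability the question asks about.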
NEW QUESTION 23
Your application uses Amazon SQS and Auto Scaling to process background jobs. The Auto Scaling policy is based on the number of messages in the queue, with a maximum instance count of 100. Since the application was launched, the group had never scaled above 50. The Auto Scaling group has now scaled to 100, the queue size is increasing, and very few jobs are being completed. The number of messages being sent to the queue is at normal levels. What should you do to identify why the queue size is unusually high and to reduce it?
- A. Temporarily increase the Auto Scaling group's desired value to 200. When the queue size has been reduced, reduce it to 50.
- B. Analyze the application logs to identify possible reasons for message processing failure and resolve the cause for failure.
- C. Create additional Auto Scaling groups enabling the processing of the queue to be performed in parallel.
- D. Analyze CloudTrail logs for Amazon SQS to ensure that the instances' Amazon EC2 role has permission to receive messages from the queue.
Here the best option is to look at the application logs and resolve the failure. You could have a functionality issue in the application that is causing messages to queue up and the fleet of instances in the Auto Scaling group to grow.
For more information on centralized logging system implementation in AWS, please visit this link: https://aws.amazon.com/answers/logging/centralized-logging/
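The symptom in this question can be made precise with the common "backlog per instance" check for SQS-driven Auto Scaling. The sketch below is illustrative, with hypothetical function names; the underlying metric (ApproximateNumberOfMessagesVisible divided by running instances) is the standard one for queue-based scaling.

```python
# Sketch of the "backlog per instance" check for SQS-driven Auto Scaling.
# If the group is already at max capacity and backlog keeps growing, the
# bottleneck is message processing, not fleet size - inspect the app logs.
def backlog_per_instance(visible_messages: int, running_instances: int) -> float:
    """ApproximateNumberOfMessagesVisible divided by the instance count."""
    if running_instances == 0:
        return float("inf")
    return visible_messages / running_instances

def likely_processing_failure(backlog: float, acceptable_backlog: float,
                              at_max_capacity: bool) -> bool:
    # At max capacity with backlog far above normal, scaling further cannot
    # help, which is why option B (analyze the application logs) is correct.
    return at_max_capacity and backlog > acceptable_backlog
```

Applied to the scenario: 100 instances at max, a growing queue, and normal inbound volume all point at processing failures rather than insufficient capacity.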
NEW QUESTION 24
You are administering a continuous integration application that polls version control for changes and then launches new Amazon EC2 instances for a full suite of build tests. What should you do to ensure the lowest overall cost while being able to run as many tests in parallel as possible?
- A. Perform syntax checking on the continuous integration system before launching a new Amazon EC2 instance for build, unit, and integration tests.
- B. Perform syntax and build tests on the continuous integration system before launching the new Amazon EC2 instance for unit and integration tests.
- C. Perform all tests on the continuous integration system, using AWS OpsWorks for unit, integration, and build tests.
- D. Perform syntax checking on the continuous integration system before launching a new AWS Data Pipeline for coordinating the output of unit, integration, and build tests.
Continuous Integration (CI) is a development practice that requires developers to integrate code into a shared repository several times a day. Each check-in is then verified by an automated build, allowing teams to detect problems early.
Options A and D are invalid because you can perform build tests on the CI system, not only syntax tests; syntax checks are normally done at coding time, not at build time.
Option C is invalid because OpsWorks is ideally not used for build and integration tests.
For an example of a continuous integration system, please refer to the Jenkins system via the URL below
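The cheap pre-check that options A and B describe can be sketched as follows. This is a hedged illustration for Python sources; other languages would gate on their own compiler or linter, and the function name is made up.

```python
# Hypothetical sketch of a CI pre-check: verify that sources at least parse
# on the CI system itself before launching (and paying for) an EC2 instance
# for the expensive unit and integration tests.
def passes_syntax_check(source: str) -> bool:
    """Return True if the source compiles; a SyntaxError fails fast and cheap."""
    try:
        compile(source, "<ci-syntax-check>", "exec")
        return True
    except SyntaxError:
        return False
```

Failing fast here costs only CI CPU time; every syntactically broken commit caught this way is an EC2 launch you never pay for.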
NEW QUESTION 25
When your application is loaded onto an OpsWorks stack, which of the following events is triggered by OpsWorks?
- A. Deploy
- B. Setup
- C. Configure
- D. Shutdown
When you deploy an application, AWS OpsWorks Stacks triggers a Deploy event, which runs each layer's Deploy recipes. AWS OpsWorks Stacks also installs stack configuration and deployment attributes that contain all of the information needed to deploy the app, such as the app's repository and database connection data.
For more information on the Deploy event, please refer to the below link:
NEW QUESTION 26
P.S. Thedumpscentre.com is now offering 100% pass-ensure DOP-C01 dumps! All DOP-C01 exam questions have been updated with correct answers: https://www.thedumpscentre.com/DOP-C01-dumps/ (116 New Questions)