Because all that matters here is passing the Amazon AWS-Certified-Solutions-Architect-Professional exam. All you need is a high score on the AWS-Certified-Solutions-Architect-Professional Amazon AWS Certified Solutions Architect Professional exam. The only thing you need to do is download the Certleader AWS-Certified-Solutions-Architect-Professional exam study guides now. We will not let you down, and we back that with our money-back guarantee.

We also have free AWS-Certified-Solutions-Architect-Professional dumps questions for you:

NEW QUESTION 1

A company has millions of objects in an Amazon S3 bucket. The objects are in the S3 Standard storage class. All the S3 objects are accessed frequently. The number of users and applications that access the objects is increasing rapidly. The objects are encrypted with server-side encryption with AWS KMS Keys (SSE-KMS).
A solutions architect reviews the company's monthly AWS invoice and notices that AWS KMS costs are increasing because of the high number of requests from Amazon S3. The solutions architect needs to optimize costs with minimal changes to the application.
Which solution will meet these requirements with the LEAST operational overhead?

  • A. Create a new S3 bucket that has server-side encryption with customer-provided keys (SSE-C) as the encryption type. Copy the existing objects to the new S3 bucket. Specify SSE-C.
  • B. Create a new S3 bucket that has server-side encryption with Amazon S3 managed keys (SSE-S3) as the encryption type. Use S3 Batch Operations to copy the existing objects to the new S3 bucket. Specify SSE-S3.
  • C. Use AWS CloudHSM to store the encryption keys. Create a new S3 bucket. Use S3 Batch Operations to copy the existing objects to the new S3 bucket. Encrypt the objects by using the keys from CloudHSM.
  • D. Use the S3 Intelligent-Tiering storage class for the S3 bucket. Create an S3 Intelligent-Tiering archive configuration to transition objects that are not accessed for 90 days to S3 Glacier Deep Archive.

Answer: B

Explanation:
To reduce the volume of Amazon S3 calls to AWS KMS, use Amazon S3 bucket keys, which are protected encryption keys that are reused for a limited time in Amazon S3. Bucket keys can reduce costs for AWS KMS requests by up to 99%. You can configure a bucket key for all objects in an Amazon S3 bucket, or for a specific object in an Amazon S3 bucket. https://docs.aws.amazon.com/fr_fr/kms/latest/developerguide/services-s3.html
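As a rough illustration of the Bucket Keys approach the explanation mentions, the following boto3 sketch enables an S3 Bucket Key as part of a bucket's default SSE-KMS encryption. The bucket name and KMS key ARN are placeholders.

```python
import boto3

s3 = boto3.client("s3")

# Enable an S3 Bucket Key on the bucket's default SSE-KMS encryption so that
# S3 reuses a bucket-level data key instead of calling AWS KMS per object.
# "example-bucket" and the KMS key ARN are placeholders.
s3.put_bucket_encryption(
    Bucket="example-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID",
                },
                "BucketKeyEnabled": True,
            }
        ]
    },
)
```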

NEW QUESTION 2

A company is deploying a third-party firewall appliance solution from AWS Marketplace to monitor and protect traffic that leaves the company's AWS environments. The company wants to deploy this appliance into a shared services VPC and route all outbound internet-bound traffic through the appliances.
A solutions architect needs to recommend a deployment method that prioritizes reliability and minimizes failover time between firewall appliances within a single AWS Region. The company has set up routing from the shared services VPC to other VPCs.
Which steps should the solutions architect recommend to meet these requirements? (Select THREE.)

  • A. Deploy two firewall appliances into the shared services VPC, each in a separate Availability Zone.
  • B. Create a new Network Load Balancer in the shared services VPC. Create a new target group, and attach it to the new Network Load Balancer. Add each of the firewall appliance instances to the target group.
  • C. Create a new Gateway Load Balancer in the shared services VPC. Create a new target group, and attach it to the new Gateway Load Balancer. Add each of the firewall appliance instances to the target group.
  • D. Create a VPC interface endpoint. Add a route to the route table in the shared services VPC. Designate the new endpoint as the next hop for traffic that enters the shared services VPC from other VPCs.
  • E. Deploy two firewall appliances into the shared services VPC, each in the same Availability Zone.
  • F. Create a VPC Gateway Load Balancer endpoint. Add a route to the route table in the shared services VPC. Designate the new endpoint as the next hop for traffic that enters the shared services VPC from other VPCs.

Answer: ACF

Explanation:
The best solution is to deploy two firewall appliances into the shared services VPC, each in a separate Availability Zone, and create a new Gateway Load Balancer to distribute traffic to them. A Gateway Load Balancer is designed for high performance and high availability scenarios with third-party network virtual appliances, such as firewalls. It operates at the network layer and maintains flow stickiness and symmetry to a specific appliance instance. It also uses the GENEVE protocol to encapsulate traffic between the load balancer and the appliances. To route traffic from other VPCs to the Gateway Load Balancer, a VPC Gateway Load Balancer endpoint is needed. This is a VPC endpoint that provides private connectivity between the appliances in the shared services VPC and the application servers in other VPCs. The endpoint must be added as the next hop in the route table for the shared services VPC. This solution ensures reliability and minimizes failover time between firewall appliances within a single AWS Region. Reference: What is a Gateway Load Balancer? (Elastic Load Balancing documentation)
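For reference, a minimal boto3 sketch of the endpoint and routing steps might look like the following; the VPC, subnet, service name, and route table IDs are placeholders, and the endpoint service is assumed to already front the Gateway Load Balancer.

```python
import boto3

ec2 = boto3.client("ec2")

# Create a Gateway Load Balancer endpoint in the shared services VPC.
# All IDs and the service name below are placeholders.
endpoint = ec2.create_vpc_endpoint(
    VpcEndpointType="GatewayLoadBalancer",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.vpce.us-east-1.vpce-svc-0123456789abcdef0",
    SubnetIds=["subnet-0123456789abcdef0"],
)

# Route traffic arriving from other VPCs through the GWLB endpoint.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",
    DestinationCidrBlock="0.0.0.0/0",
    VpcEndpointId=endpoint["VpcEndpoint"]["VpcEndpointId"],
)
```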

NEW QUESTION 3

A company has hundreds of AWS accounts. The company uses an organization in AWS Organizations to manage all the accounts. The company has turned on all features.
A finance team has allocated a daily budget for AWS costs. The finance team must receive an email notification if the organization's AWS costs exceed 80% of the allocated budget. A solutions architect needs to implement a solution to track the costs and deliver the notifications.
Which solution will meet these requirements?

  • A. In the organization's management account, use AWS Budgets to create a budget that has a daily period. Add an alert threshold and set the value to 80%. Use Amazon Simple Notification Service (Amazon SNS) to notify the finance team.
  • B. In the organization's management account, set up the organizational view feature for AWS Trusted Advisor. Create an organizational view report for cost optimization. Set an alert threshold of 80%. Configure notification preferences. Add the email addresses of the finance team.
  • C. Register the organization with AWS Control Tower. Activate the optional cost control (guardrail). Set a control (guardrail) parameter of 80%. Configure control (guardrail) notification preferences. Use Amazon Simple Notification Service (Amazon SNS) to notify the finance team.
  • D. Configure the member accounts to save a daily AWS Cost and Usage Report to an Amazon S3 bucket in the organization's management account. Use Amazon EventBridge to schedule a daily Amazon Athena query to calculate the organization's costs. Configure Athena to send an Amazon CloudWatch alert if the total costs are more than 80% of the allocated budget. Use Amazon Simple Notification Service (Amazon SNS) to notify the finance team.

Answer: A
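A minimal sketch of the budget in option A, created with boto3 from the management account; the account ID, budget amount, and SNS topic ARN are placeholders.

```python
import boto3

budgets = boto3.client("budgets")

# Daily cost budget with an 80% actual-spend alert delivered to an SNS topic.
budgets.create_budget(
    AccountId="111122223333",
    Budget={
        "BudgetName": "daily-organization-budget",
        "BudgetLimit": {"Amount": "1000", "Unit": "USD"},
        "TimeUnit": "DAILY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[
        {
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 80.0,
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [
                {
                    "SubscriptionType": "SNS",
                    "Address": "arn:aws:sns:us-east-1:111122223333:finance-alerts",
                }
            ],
        }
    ],
)
```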

NEW QUESTION 4

A company is creating a REST API to share information with six of its partners based in the United States. The company has created an Amazon API Gateway Regional endpoint. Each of the six partners will access the API once per day to post daily sales figures.
After initial deployment, the company observes 1,000 requests per second originating from 500 different IP addresses around the world. The company believes this traffic is originating from a botnet and wants to secure its API while minimizing cost.
Which approach should the company take to secure its API?

  • A. Create an Amazon CloudFront distribution with the API as the origin. Create an AWS WAF web ACL with a rule to block clients that submit more than five requests per day. Associate the web ACL with the CloudFront distribution. Configure CloudFront with an origin access identity (OAI) and associate it with the distribution. Configure API Gateway to ensure only the OAI can run the POST method.
  • B. Create an Amazon CloudFront distribution with the API as the origin. Create an AWS WAF web ACL with a rule to block clients that submit more than five requests per day. Associate the web ACL with the CloudFront distribution. Add a custom header to the CloudFront distribution populated with an API key. Configure the API to require an API key on the POST method.
  • C. Create an AWS WAF web ACL with a rule to allow access to the IP addresses used by the six partners. Associate the web ACL with the API. Create a resource policy with a request limit and associate it with the API. Configure the API to require an API key on the POST method.
  • D. Create an AWS WAF web ACL with a rule to allow access to the IP addresses used by the six partners. Associate the web ACL with the API. Create a usage plan with a request limit and associate it with the API. Create an API key and add it to the usage plan.

Answer: D

Explanation:
"A usage plan specifies who can access one or more deployed API stages and methods—and also how much and how fast they can access them. The plan uses API keys to identify API clients and meters access to the associated API stages for each key. It also lets you configure throttling limits and quota limits that are enforced on individual client API keys."
https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-api-usage-plans.html
"A rate-based rule tracks the rate of requests for each originating IP address, and triggers the rule action on IPs with rates that go over a limit. You set the limit as the number of requests per 5-minute time span. The following caveats apply to AWS WAF rate-based rules: The minimum rate that you can set is 100. AWS WAF checks the rate of requests every 30 seconds, and counts requests for the prior five minutes each time. Because of this, it's possible for an IP address to send requests at too high a rate for 30 seconds before AWS WAF detects and blocks it. AWS WAF can block up to 10,000 IP addresses. If more than 10,000 IP addresses send high rates of requests at the same time, AWS WAF will only block 10,000 of them." https://docs.aws.amazon.com/waf/latest/developerguide/waf-rule-statement-type-rate-based.html
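As an illustration of the usage-plan approach in the correct option, a boto3 sketch might look like the following; the API ID, stage, key name, and limits are placeholder values.

```python
import boto3

apigw = boto3.client("apigateway")

# Usage plan that throttles and caps requests for the partner API.
plan = apigw.create_usage_plan(
    name="partner-daily-plan",
    apiStages=[{"apiId": "a1b2c3d4e5", "stage": "prod"}],
    throttle={"rateLimit": 10.0, "burstLimit": 10},
    quota={"limit": 10, "period": "DAY"},
)

# Issue an API key for one partner and bind it to the usage plan.
key = apigw.create_api_key(name="partner-1-key", enabled=True)
apigw.create_usage_plan_key(
    usagePlanId=plan["id"],
    keyId=key["id"],
    keyType="API_KEY",
)
```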

NEW QUESTION 5

A company has an organization in AWS Organizations that includes a separate AWS account for each of the company's departments. Application teams from different departments develop and deploy solutions independently.
The company wants to reduce compute costs and manage costs appropriately across departments. The company also wants to improve visibility into billing for individual departments. The company does not want to lose operational flexibility when the company selects compute resources.
Which solution will meet these requirements?

  • A. Use AWS Budgets for each department. Use Tag Editor to apply tags to appropriate resources. Purchase EC2 Instance Savings Plans.
  • B. Configure AWS Organizations to use consolidated billing. Implement a tagging strategy that identifies departments. Use SCPs to apply tags to appropriate resources. Purchase EC2 Instance Savings Plans.
  • C. Configure AWS Organizations to use consolidated billing. Implement a tagging strategy that identifies departments. Use Tag Editor to apply tags to appropriate resources. Purchase Compute Savings Plans.
  • D. Use AWS Budgets for each department. Use SCPs to apply tags to appropriate resources. Purchase Compute Savings Plans.

Answer: C

NEW QUESTION 6

A company is building a solution in the AWS Cloud. Thousands of devices will connect to the solution and send data. Each device needs to be able to send and receive data in real time over the MQTT protocol. Each device must authenticate by using a unique X.509 certificate.
Which solution will meet these requirements with the LEAST operational overhead?

  • A. Set up AWS IoT Core. For each device, create a corresponding Amazon MQ queue and provision a certificate. Connect each device to Amazon MQ.
  • B. Create a Network Load Balancer (NLB) and configure it with an AWS Lambda authorizer. Run an MQTT broker on Amazon EC2 instances in an Auto Scaling group. Set the Auto Scaling group as the target for the NLB. Connect each device to the NLB.
  • C. Set up AWS IoT Core. For each device, create a corresponding AWS IoT thing and provision a certificate. Connect each device to AWS IoT Core.
  • D. Set up an Amazon API Gateway HTTP API and a Network Load Balancer (NLB). Create integration between API Gateway and the NLB. Configure a mutual TLS certificate authorizer on the HTTP API. Run an MQTT broker on an Amazon EC2 instance that the NLB targets. Connect each device to the NLB.

Answer: C

Explanation:
This solution requires minimal operational overhead, as it only requires setting up AWS IoT Core and creating a thing for each device. (Reference: AWS Certified Solutions Architect - Professional Official Amazon Text Book, Page 537)
AWS IoT Core is a fully managed service that enables secure, bi-directional communication between internet-connected devices and the AWS Cloud. It supports the MQTT protocol and includes built-in device
authentication and access control. By using AWS IoT Core, the company can easily provision and manage the X.509 certificates for each device, and connect the devices to the service with minimal operational overhead.
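A minimal boto3 sketch of registering one device in AWS IoT Core, assuming an IoT policy named device-mqtt-policy already exists; the thing name is a placeholder.

```python
import boto3

iot = boto3.client("iot")

# Register one device: create the thing, issue a unique X.509 certificate,
# and bind the certificate to the thing and to an IoT policy.
thing = iot.create_thing(thingName="sensor-0001")
cert = iot.create_keys_and_certificate(setAsActive=True)

iot.attach_thing_principal(
    thingName="sensor-0001",
    principal=cert["certificateArn"],
)
iot.attach_policy(
    policyName="device-mqtt-policy",
    target=cert["certificateArn"],
)
```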

NEW QUESTION 7

A solutions architect is redesigning a three-tier application that a company hosts on premises. The application provides personalized recommendations based on user profiles. The company already has an AWS account and has configured a VPC to host the application.
The frontend is a Java-based application that runs in on-premises VMs. The company hosts a personalization model on a physical application server and uses TensorFlow to implement the model. The personalization model uses artificial intelligence and machine learning (AI/ML). The company stores user information in a Microsoft SQL Server database. The web application calls the personalization model, which reads the user profiles from the database and provides recommendations.
The company wants to migrate the redesigned application to AWS.
Which solution will meet this requirement with the LEAST operational overhead?

  • A. Use AWS Server Migration Service (AWS SMS) to migrate the on-premises physical application server and the web application VMs to AWS. Use AWS Database Migration Service (AWS DMS) to migrate the SQL Server database to Amazon RDS for SQL Server.
  • B. Export the personalization model. Store the model artifacts in Amazon S3. Deploy the model to Amazon SageMaker and create an endpoint. Host the Java application in AWS Elastic Beanstalk. Use AWS Database Migration Service (AWS DMS) to migrate the SQL Server database to Amazon RDS for SQL Server.
  • C. Use AWS Application Migration Service to migrate the on-premises personalization model and VMs to Amazon EC2 instances in Auto Scaling groups. Use AWS Database Migration Service (AWS DMS) to migrate the SQL Server database to an EC2 instance.
  • D. Containerize the personalization model and the Java application. Use Amazon Elastic Kubernetes Service (Amazon EKS) managed node groups to deploy the model and the application to Amazon EKS. Host the node groups in a VPC. Use AWS Database Migration Service (AWS DMS) to migrate the SQL Server database to Amazon RDS for SQL Server.

Answer: B

Explanation:
Amazon SageMaker is a fully managed machine learning service that allows users to build, train, and deploy machine learning models quickly and easily1. Users can export their existing TensorFlow models and store the model artifacts in Amazon S3, a highly scalable and durable object storage service2. Users can then deploy the model to Amazon SageMaker and create an endpoint that can be invoked by the web application to provide recommendations3. This way, the solution can leverage the AI/ML capabilities of Amazon SageMaker without having to rewrite the personalization model.
AWS Elastic Beanstalk is a service that allows users to deploy and manage web applications without worrying about the infrastructure that runs those applications. Users can host their Java application in AWS Elastic Beanstalk and configure it to communicate with the Amazon SageMaker endpoint. This way, the solution can reduce the operational overhead of managing servers, load balancers, scaling, and application health monitoring.
AWS Database Migration Service (AWS DMS) is a service that helps users migrate databases to AWS quickly and securely. Users can use AWS DMS to migrate their SQL Server database to Amazon RDS for SQL Server, a fully managed relational database service that offers high availability, scalability, security, and compatibility. This way, the solution can reduce the operational overhead of managing database servers, backups, patches, and upgrades.
Option A is incorrect because using AWS Server Migration Service (AWS SMS) to migrate the on-premises physical application server and the web application VMs to AWS is not cost-effective or scalable. AWS SMS is a service that helps users migrate on-premises workloads to AWS. However, for this use case, migrating the physical application server and the web application VMs to AWS will not take advantage of the AI/ML capabilities of Amazon SageMaker or the managed services of AWS Elastic Beanstalk and Amazon RDS.
Option C is incorrect because using AWS Application Migration Service to migrate the on-premises personalization model and VMs to Amazon EC2 instances in Auto Scaling groups is not cost-effective or scalable. AWS Application Migration Service is a service that helps users migrate applications from
on-premises or other clouds to AWS without making any changes to their applications. However, for this use case, migrating the personalization model and VMs to EC2 instances will not take advantage of the AI/ML capabilities of Amazon SageMaker or the managed services of AWS Elastic Beanstalk and Amazon RDS.
Option D is incorrect because containerizing the personalization model and the Java application and using Amazon Elastic Kubernetes Service (Amazon EKS) managed node groups to deploy them to Amazon EKS is not necessary or cost-effective. Amazon EKS is a service that allows users to run Kubernetes on AWS without needing to install, operate, and maintain their own Kubernetes control plane or nodes. However, for this use case, containerizing and deploying the personalization model and the Java application will not take advantage of the AI/ML capabilities of Amazon SageMaker or the managed services of AWS Elastic Beanstalk.
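To make the SageMaker part of the recommended option more concrete, a boto3 sketch of hosting the exported model behind an endpoint might look like this; the container image URI, model artifact location, role ARN, and instance type are all placeholders.

```python
import boto3

sm = boto3.client("sagemaker")

# Register the exported TensorFlow model artifact and expose it behind a
# real-time endpoint that the web application can invoke.
sm.create_model(
    ModelName="personalization-model",
    PrimaryContainer={
        "Image": "<tensorflow-serving-image-uri>",
        "ModelDataUrl": "s3://example-bucket/model/model.tar.gz",
    },
    ExecutionRoleArn="arn:aws:iam::111122223333:role/SageMakerExecutionRole",
)
sm.create_endpoint_config(
    EndpointConfigName="personalization-config",
    ProductionVariants=[
        {
            "VariantName": "AllTraffic",
            "ModelName": "personalization-model",
            "InstanceType": "ml.m5.large",
            "InitialInstanceCount": 1,
        }
    ],
)
sm.create_endpoint(
    EndpointName="personalization-endpoint",
    EndpointConfigName="personalization-config",
)
```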

NEW QUESTION 8

A company is planning to migrate an Amazon RDS for Oracle database to an RDS for PostgreSQL DB instance in another AWS account. A solutions architect needs to design a migration strategy that will require no downtime and that will minimize the amount of time necessary to complete the migration. The migration strategy must replicate all existing data and any new data that is created during the migration. The target database must be identical to the source database at completion of the migration process.
All applications currently use an Amazon Route 53 CNAME record as their endpoint for communication with the RDS for Oracle DB instance. The RDS for Oracle DB instance is in a private subnet.
Which combination of steps should the solutions architect take to meet these requirements? (Select THREE)

  • A. Create a new RDS for PostgreSQL DB instance in the target account. Use the AWS Schema Conversion Tool (AWS SCT) to migrate the database schema from the source database to the target database.
  • B. Use the AWS Schema Conversion Tool (AWS SCT) to create a new RDS for PostgreSQL DB instance in the target account with the schema and initial data from the source database.
  • C. Configure VPC peering between the VPCs in the two AWS accounts to provide connectivity to both DB instances from the target account. Configure the security groups that are attached to each DB instance to allow traffic on the database port from the VPC in the target account.
  • D. Temporarily allow the source DB instance to be publicly accessible to provide connectivity from the VPC in the target account. Configure the security groups that are attached to each DB instance to allow traffic on the database port from the VPC in the target account.
  • E. Use AWS Database Migration Service (AWS DMS) in the target account to perform a full load plus change data capture (CDC) migration from the source database to the target database. When the migration is complete, change the CNAME record to point to the target DB instance endpoint.
  • F. Use AWS Database Migration Service (AWS DMS) in the target account to perform a change data capture (CDC) migration from the source database to the target database. When the migration is complete, change the CNAME record to point to the target DB instance endpoint.

Answer: ACE
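For illustration, the full load plus CDC task from option E could be created with a boto3 call along these lines; the endpoint and replication instance ARNs are placeholders, and the table mapping simply includes every table.

```python
import json
import boto3

dms = boto3.client("dms")

# Full load plus ongoing replication (CDC) from the Oracle source to the
# PostgreSQL target in the other account.
dms.create_replication_task(
    ReplicationTaskIdentifier="oracle-to-postgres",
    SourceEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:SOURCE",
    TargetEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:TARGET",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:111122223333:rep:INSTANCE",
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps({
        "rules": [
            {
                "rule-type": "selection",
                "rule-id": "1",
                "rule-name": "include-all",
                "object-locator": {"schema-name": "%", "table-name": "%"},
                "rule-action": "include",
            }
        ]
    }),
)
```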

NEW QUESTION 9

A company uses AWS Organizations with a single OU named Production to manage multiple accounts. All accounts are members of the Production OU. Administrators use deny list SCPs in the root of the organization to manage access to restricted services.
The company recently acquired a new business unit and invited the new unit's existing AWS account to the organization. Once onboarded, the administrators of the new business unit discovered that they are not able to update existing AWS Config rules to meet the company's policies.
Which option will allow administrators to make changes and continue to enforce the current policies without introducing additional long-term maintenance?

  • A. Remove the organization's root SCPs that limit access to AWS Config. Create AWS Service Catalog products for the company's standard AWS Config rules and deploy them throughout the organization, including the new account.
  • B. Create a temporary OU named Onboarding for the new account. Apply an SCP to the Onboarding OU to allow AWS Config actions. Move the new account to the Production OU when adjustments to AWS Config are complete.
  • C. Convert the organization's root SCPs from deny list SCPs to allow list SCPs to allow the required services only. Temporarily apply an SCP to the organization's root that allows AWS Config actions for principals only in the new account.
  • D. Create a temporary OU named Onboarding for the new account. Apply an SCP to the Onboarding OU to allow AWS Config actions. Move the organization's root SCP to the Production OU. Move the new account to the Production OU when adjustments to AWS Config are complete.

Answer: D

Explanation:
An SCP at a lower level can't add a permission after it is blocked by an SCP at a higher level. SCPs can only filter; they never add permissions. So you need to create a new OU for the new account, assign an SCP that allows AWS Config actions, and move the root SCP to the Production OU. Then move the new account to the Production OU when the AWS Config adjustments are done.
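A rough boto3 sketch of the onboarding steps described above; the root ID, account ID, and names are placeholders, and the default FullAWSAccess SCP is assumed to remain attached.

```python
import json
import boto3

org = boto3.client("organizations")

# Create the temporary Onboarding OU and attach an SCP that permits AWS
# Config actions while the new account is being adjusted.
ou = org.create_organizational_unit(ParentId="r-examplerootid", Name="Onboarding")

policy = org.create_policy(
    Name="AllowAWSConfig",
    Description="Allow AWS Config actions during onboarding",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{"Effect": "Allow", "Action": ["config:*"], "Resource": "*"}],
    }),
)
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId=ou["OrganizationalUnit"]["Id"],
)

# Move the newly invited account into the Onboarding OU.
org.move_account(
    AccountId="444455556666",
    SourceParentId="r-examplerootid",
    DestinationParentId=ou["OrganizationalUnit"]["Id"],
)
```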

NEW QUESTION 10

A company wants to use AWS to create a business continuity solution in case the company's main on-premises application fails. The application runs on physical servers that also run other applications. The on-premises application that the company is planning to migrate uses a MySQL database as a data store. All the company's on-premises applications use operating systems that are compatible with Amazon EC2.
Which solution will achieve the company's goal with the LEAST operational overhead?

  • A. Install the AWS Replication Agent on the source servers, including the MySQL servers. Set up replication for all servers. Launch test instances for regular drills. Cut over to the test instances to fail over the workload in the case of a failure event.
  • B. Install the AWS Replication Agent on the source servers, including the MySQL servers. Initialize AWS Elastic Disaster Recovery in the target AWS Region. Define the launch settings. Frequently perform failover and fallback from the most recent point in time.
  • C. Create AWS Database Migration Service (AWS DMS) replication servers and a target Amazon Aurora MySQL DB cluster to host the database. Create a DMS replication task to copy the existing data to the target DB cluster. Create a local AWS Schema Conversion Tool (AWS SCT) change data capture (CDC) task to keep the data synchronized. Install the rest of the software on EC2 instances by starting with a compatible base AMI.
  • D. Deploy an AWS Storage Gateway Volume Gateway on premises. Mount volumes on all on-premises servers. Install the application and the MySQL database on the new volumes. Take regular snapshots. Install all the software on EC2 instances by starting with a compatible base AMI. Launch a Volume Gateway on an EC2 instance. Restore the volumes from the latest snapshot. Mount the new volumes on the EC2 instances in the case of a failure event.

Answer: B

Explanation:
https://docs.aws.amazon.com/drs/latest/userguide/what-is-drs.html https://docs.aws.amazon.com/drs/latest/userguide/recovery-workflow-gs.html

NEW QUESTION 11

A company wants to migrate an application to Amazon EC2 from VMware Infrastructure that runs in an
on-premises data center. A solutions architect must preserve the software and configuration settings during the migration.
What should the solutions architect do to meet these requirements?

  • A. Configure the AWS DataSync agent to start replicating the data store to Amazon FSx for Windows File Server. Use the SMB share to host the VMware data store. Use VM Import/Export to move the VMs to Amazon EC2.
  • B. Use the VMware vSphere client to export the application as an image in Open Virtualization Format (OVF). Create an Amazon S3 bucket to store the image in the destination AWS Region. Create and apply an IAM role for VM Import. Use the AWS CLI to run the EC2 import command.
  • C. Configure the AWS Storage Gateway file service to export a Common Internet File System (CIFS) share. Create a backup copy to the shared folder. Sign in to the AWS Management Console and create an AMI from the backup copy. Launch an EC2 instance that is based on the AMI.
  • D. Create a managed-instance activation for a hybrid environment in AWS Systems Manager. Download and install Systems Manager Agent on the on-premises VM. Register the VM with Systems Manager to be a managed instance. Use AWS Backup to create a snapshot of the VM and create an AMI. Launch an EC2 instance that is based on the AMI.

Answer: B

Explanation:
https://docs.aws.amazon.com/vm-import/latest/userguide/vmimport-image-import.html
- Export an OVF Template
- Create / use an Amazon S3 bucket for storing the exported images. The bucket must be in the Region where you want to import your VMs.
- Create an IAM role named vmimport.
- You'll use AWS CLI to run the import commands. https://aws.amazon.com/premiumsupport/knowledge-center/import-instances/
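A hedged boto3 equivalent of the import command, assuming the vmimport role is already in place and the exported disk has been uploaded to S3 (bucket and key are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Import the exported OVF/VMDK image from S3 as an AMI, preserving the
# software and configuration captured in the export.
response = ec2.import_image(
    Description="Migrated VMware application server",
    DiskContainers=[
        {
            "Description": "OVF export",
            "Format": "vmdk",
            "UserBucket": {
                "S3Bucket": "example-vm-export-bucket",
                "S3Key": "exports/app-server-disk1.vmdk",
            },
        }
    ],
)
print(response["ImportTaskId"])  # poll with describe_import_image_tasks
```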

NEW QUESTION 12

A solutions architect needs to review the design of an Amazon EMR cluster that is using the EMR File System (EMRFS). The cluster performs tasks that are critical to business needs. The cluster is running Amazon EC2 On-Demand Instances at all times for all task, primary, and core nodes. The EMR tasks run each morning, starting at 1:00 AM, and take 6 hours to finish running. The amount of time to complete the processing is not a priority because the data is not referenced until late in the day.
The solutions architect must review the architecture and suggest a solution to minimize the compute costs. Which solution should the solutions architect recommend to meet these requirements?

  • A. Launch all task, primary, and core nodes on Spot Instances in an instance fleet. Terminate the cluster, including all instances, when the processing is completed.
  • B. Launch the primary and core nodes on On-Demand Instances. Launch the task nodes on Spot Instances in an instance fleet. Terminate the cluster, including all instances, when the processing is complete. Purchase Compute Savings Plans to cover the On-Demand Instance usage.
  • C. Continue to launch all nodes on On-Demand Instances. Terminate the cluster, including all instances, when the processing is complete. Purchase Compute Savings Plans to cover the On-Demand Instance usage.
  • D. Launch the primary and core nodes on On-Demand Instances. Launch the task nodes on Spot Instances in an instance fleet. Terminate only the task node instances when the processing is complete. Purchase Compute Savings Plans to cover the On-Demand Instance usage.

Answer: A

Explanation:
Amazon EC2 Spot Instances offer spare compute capacity at steep discounts compared to On-Demand prices. Spot Instances can be interrupted by EC2 with two minutes of notification when EC2 needs the capacity back. Amazon EMR can handle Spot interruptions gracefully by decommissioning the nodes and redistributing the tasks to other nodes. By launching all nodes on Spot Instances in an instance fleet, the solutions architect can minimize the compute costs of the EMR cluster. An instance fleet is a collection of EC2 instances with different types and sizes that EMR automatically provisions to meet a defined target capacity. By terminating the cluster when the processing is completed, the solutions architect can avoid paying for idle resources. References:
https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-managed-scaling.html
https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-instance-fleet.html
https://aws.amazon.com/blogs/big-data/optimizing-amazon-emr-for-resilience-and-cost-with-capacity-opt
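As a sketch of the instance-fleet configuration the explanation describes, a run_job_flow call might target Spot capacity for every fleet; the release label, instance types, and capacities are placeholders.

```python
import boto3

emr = boto3.client("emr")

# Launch the cluster with all fleets targeting Spot capacity and let it
# terminate itself when the monthly job's steps finish.
emr.run_job_flow(
    Name="monthly-emrfs-job",
    ReleaseLabel="emr-6.10.0",
    ServiceRole="EMR_DefaultRole",
    JobFlowRole="EMR_EC2_DefaultRole",
    Instances={
        "KeepJobFlowAliveWhenNoSteps": False,  # terminate when steps complete
        "InstanceFleets": [
            {"InstanceFleetType": "MASTER", "TargetSpotCapacity": 1,
             "InstanceTypeConfigs": [{"InstanceType": "m5.xlarge"}]},
            {"InstanceFleetType": "CORE", "TargetSpotCapacity": 4,
             "InstanceTypeConfigs": [{"InstanceType": "m5.xlarge"},
                                     {"InstanceType": "m5a.xlarge"}]},
            {"InstanceFleetType": "TASK", "TargetSpotCapacity": 8,
             "InstanceTypeConfigs": [{"InstanceType": "m5.xlarge"},
                                     {"InstanceType": "r5.xlarge"}]},
        ],
    },
)
```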

NEW QUESTION 13

A company is storing sensitive data in an Amazon S3 bucket. The company must log all activities for objects in the S3 bucket and must keep the logs for 5 years. The company's security team also must receive an email notification every time there is an attempt to delete data in the S3 bucket.
Which combination of steps will meet these requirements MOST cost-effectively? (Select THREE.)

  • A. Configure AWS CloudTrail to log S3 data events.
  • B. Configure S3 server access logging for the S3 bucket.
  • C. Configure Amazon S3 to send object deletion events to Amazon Simple Email Service (Amazon SES).
  • D. Configure Amazon S3 to send object deletion events to an Amazon EventBridge event bus that publishes to an Amazon Simple Notification Service (Amazon SNS) topic.
  • E. Configure Amazon S3 to send the logs to Amazon Timestream with data storage tiering.
  • F. Configure a new S3 bucket to store the logs with an S3 Lifecycle policy.

Answer: ADF

Explanation:
Configuring AWS CloudTrail to log S3 data events will enable logging all activities for objects in the S3 bucket1. Data events are object-level API operations such as GetObject, DeleteObject, and PutObject1. Configuring Amazon S3 to send object deletion events to an Amazon EventBridge event bus that publishes to an Amazon Simple Notification Service (Amazon SNS) topic will enable sending email notifications every time there is an attempt to delete data in the S3 bucket2. EventBridge can route events from S3 to SNS, which can send emails to subscribers2. Configuring a new S3 bucket to store the logs with an S3 Lifecycle policy will enable keeping the logs for 5 years in a cost-effective way3. A lifecycle policy can transition the logs to a cheaper storage class such as Glacier or delete them after a specified period of time3.
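A brief boto3 sketch of the data-event logging and the lifecycle rule; the trail and bucket names are placeholders, and 1,825 days approximates the 5-year retention period.

```python
import boto3

cloudtrail = boto3.client("cloudtrail")
s3 = boto3.client("s3")

# Log object-level (data) events for the sensitive bucket on an existing trail.
cloudtrail.put_event_selectors(
    TrailName="sensitive-data-trail",
    EventSelectors=[
        {
            "ReadWriteType": "All",
            "IncludeManagementEvents": True,
            "DataResources": [
                {"Type": "AWS::S3::Object", "Values": ["arn:aws:s3:::sensitive-bucket/"]}
            ],
        }
    ],
)

# Keep delivered logs for 5 years in the dedicated log bucket, moving them to
# a cheaper storage class after 90 days.
s3.put_bucket_lifecycle_configuration(
    Bucket="log-archive-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "retain-logs-5-years",
                "Filter": {"Prefix": ""},
                "Status": "Enabled",
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 1825},
            }
        ]
    },
)
```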

NEW QUESTION 14

A company that uses AWS Organizations allows developers to experiment on AWS. As part of the landing zone that the company has deployed, developers use their company email address to request an account. The company wants to ensure that developers are not launching costly services or running services unnecessarily. The company must give developers a fixed monthly budget to limit their AWS costs.
Which combination of steps will meet these requirements? (Choose three.)

  • A. Create an SCP to set a fixed monthly account usage limit. Apply the SCP to the developer accounts.
  • B. Use AWS Budgets to create a fixed monthly budget for each developer's account as part of the account creation process.
  • C. Create an SCP to deny access to costly services and components. Apply the SCP to the developer accounts.
  • D. Create an IAM policy to deny access to costly services and components. Apply the IAM policy to the developer accounts.
  • E. Create an AWS Budgets alert action to terminate services when the budgeted amount is reached. Configure the action to terminate all services.
  • F. Create an AWS Budgets alert action to send an Amazon Simple Notification Service (Amazon SNS) notification when the budgeted amount is reached. Invoke an AWS Lambda function to terminate all services.

Answer: BCF

Explanation:
Option A is incorrect because creating an SCP to set a fixed monthly account usage limit is not possible. SCPs are policies that specify the services and actions that users and roles can use in the member accounts of an AWS Organization. SCPs cannot enforce budget limits or prevent users from launching costly services or running services unnecessarily.1
Option B is correct because using AWS Budgets to create a fixed monthly budget for each developer's account as part of the account creation process meets the requirement of giving developers a fixed monthly budget to limit their AWS costs. AWS Budgets allows you to plan your service usage, service costs, and instance reservations. You can create budgets that alert you when your costs or usage exceed (or are forecasted to exceed) your budgeted amount.2
Option C is correct because creating an SCP to deny access to costly services and components meets the requirement of ensuring that developers are not launching costly services or running services unnecessarily. SCPs can restrict access to certain AWS services or actions based on conditions such as region, resource tags, or request time. For example, an SCP can deny access to Amazon Redshift clusters or Amazon EC2 instances with certain instance types.1
Option D is incorrect because creating an IAM policy to deny access to costly services and components is not sufficient to meet the requirement of ensuring that developers are not launching costly services or running services unnecessarily. IAM policies can only control access to resources within a single AWS account. If developers have multiple accounts or can create new accounts, they can bypass the IAM policy restrictions. SCPs can apply across multiple accounts within an AWS Organization and prevent users from creating new accounts that do not comply with the SCP rules.3
Option E is incorrect because creating an AWS Budgets alert action to terminate services when the budgeted amount is reached is not possible. AWS Budgets alert actions can only perform one of the following actions: apply an IAM policy, apply an SCP, or send a notification through Amazon SNS. AWS Budgets alert actions cannot terminate services directly.
Option F is correct because creating an AWS Budgets alert action to send an Amazon SNS notification when the budgeted amount is reached and invoking an AWS Lambda function to terminate all services meets the requirement of giving developers a fixed monthly budget to limit their AWS costs. AWS Budgets alert actions can send notifications through Amazon SNS when a budget threshold is breached. Amazon SNS can trigger an AWS Lambda function that can perform custom logic such as terminating all services in the developer's account. This way, developers cannot exceed their budget limit and incur additional costs.
References:
1: https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html
2: https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/budgets-create.html
3: https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html
https://docs.aws.amazon.com/cost-management/latest/userguide/budgets-actions.html
https://docs.aws.amazon.com/sns/latest/dg/sns-lambda.html
https://docs.aws.amazon.com/lambda/latest/dg/welcome.html
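As an illustration of option F, a Lambda handler subscribed to the SNS topic could stop the account's running EC2 instances; this is only a sketch, and a real version would likely filter by tags and cover other service types as well.

```python
import boto3

ec2 = boto3.client("ec2")

def handler(event, context):
    """Invoked via the SNS subscription when the AWS Budgets alert fires.

    Stops every running EC2 instance in the developer account as a simple
    cost-containment measure.
    """
    running = ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )
    instance_ids = [
        instance["InstanceId"]
        for reservation in running["Reservations"]
        for instance in reservation["Instances"]
    ]
    if instance_ids:
        ec2.stop_instances(InstanceIds=instance_ids)
    return {"stopped": instance_ids}
```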

NEW QUESTION 15

A company has set up its entire infrastructure on AWS. The company uses Amazon EC2 instances to host its ecommerce website and uses Amazon S3 to store static data. Three engineers at the company handle the cloud administration and development through one AWS account. Occasionally, an engineer alters an EC2 security group configuration of another engineer and causes noncompliance issues in the environment.
A solutions architect must set up a system that tracks changes that the engineers make. The system must send alerts when the engineers make noncompliant changes to the security settings for the EC2 instances.
What is the FASTEST way for the solutions architect to meet these requirements?

  • A. Set up AWS Organizations for the company. Apply SCPs to govern and track noncompliant security group changes that are made to the AWS account.
  • B. Enable AWS CloudTrail to capture the changes to EC2 security groups. Enable Amazon CloudWatch rules to provide alerts when noncompliant security settings are detected.
  • C. Enable SCPs on the AWS account to provide alerts when noncompliant security group changes are made to the environment.
  • D. Enable AWS Config on the EC2 security groups to track any noncompliant changes. Send the changes as alerts through an Amazon Simple Notification Service (Amazon SNS) topic.

Answer: D

Explanation:
https://aws.amazon.com/es/blogs/industries/how-to-monitor-alert-and-remediate-non-compliant-hipaa-findings
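As one example of the AWS Config approach in the correct option, the following boto3 sketch adds a managed rule that flags security groups with unrestricted SSH; the specific rule is an illustrative assumption, and compliance change events would then be routed to Amazon SNS (for example via EventBridge).

```python
import boto3

config = boto3.client("config")

# Managed rule that marks security groups allowing unrestricted inbound SSH
# as noncompliant; Config evaluates it whenever a security group changes.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "restricted-ssh",
        "Scope": {"ComplianceResourceTypes": ["AWS::EC2::SecurityGroup"]},
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "INCOMING_SSH_DISABLED",
        },
    }
)
```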

NEW QUESTION 16

A company is running a data-intensive application on AWS. The application runs on a cluster of hundreds of Amazon EC2 instances. A shared file system also runs on several EC2 instances that store 200 TB of data. The application reads and modifies the data on the shared file system and generates a report. The job runs once monthly, reads a subset of the files from the shared file system, and takes about 72 hours to complete. The compute instances scale in an Auto Scaling group, but the instances that host the shared file system run continuously. The compute and storage instances are all in the same AWS Region.
A solutions architect needs to reduce costs by replacing the shared file system instances. The file system must provide high performance access to the needed data for the duration of the 72-hour run.
Which solution will provide the LARGEST overall cost reduction while meeting these requirements?

  • A. Migrate the data from the existing shared file system to an Amazon S3 bucket that uses the S3 Intelligent-Tiering storage class. Before the job runs each month, use Amazon FSx for Lustre to create a new file system with the data from Amazon S3 by using lazy loading. Use the new file system as the shared storage for the duration of the job. Delete the file system when the job is complete.
  • B. Migrate the data from the existing shared file system to a large Amazon Elastic Block Store (Amazon EBS) volume with Multi-Attach enabled. Attach the EBS volume to each of the instances by using a user data script in the Auto Scaling group launch template. Use the EBS volume as the shared storage for the duration of the job. Detach the EBS volume when the job is complete.
  • C. Migrate the data from the existing shared file system to an Amazon S3 bucket that uses the S3 Standard storage class. Before the job runs each month, use Amazon FSx for Lustre to create a new file system with the data from Amazon S3 by using batch loading. Use the new file system as the shared storage for the duration of the job. Delete the file system when the job is complete.
  • D. Migrate the data from the existing shared file system to an Amazon S3 bucket. Before the job runs each month, use AWS Storage Gateway to create a file gateway with the data from Amazon S3. Use the file gateway as the shared storage for the job. Delete the file gateway when the job is complete.

Answer: A

Explanation:
https://aws.amazon.com/blogs/storage/new-enhancements-for-moving-data-between-amazon-fsx-for-lustre-and
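A minimal boto3 sketch of creating the monthly FSx for Lustre file system linked to the S3 bucket; the capacity, subnet, deployment type, and bucket name are placeholders.

```python
import boto3

fsx = boto3.client("fsx")

# Create a scratch Lustre file system linked to the S3 bucket before the
# monthly job; file metadata is imported and objects are lazy-loaded from S3
# on first access.
fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=12000,  # GiB; sized for the subset of data the job reads
    SubnetIds=["subnet-0123456789abcdef0"],
    LustreConfiguration={
        "DeploymentType": "SCRATCH_2",
        "ImportPath": "s3://example-report-data-bucket",
    },
)
# After the 72-hour run completes, delete the file system:
# fsx.delete_file_system(FileSystemId="fs-0123456789abcdef0")
```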

NEW QUESTION 17

A financial services company sells its software-as-a-service (SaaS) platform for application compliance to
large global banks. The SaaS platform runs on AWS and uses multiple AWS accounts that are managed in an organization in AWS Organizations. The SaaS platform uses many AWS resources globally.
For regulatory compliance, all API calls to AWS resources must be audited, tracked for changes, and stored in a durable and secure data store.
Which solution will meet these requirements with the LEAST operational overhead?

  • A. Create a new AWS CloudTrail trail. Use an existing Amazon S3 bucket in the organization's management account to store the logs. Deploy the trail to all AWS Regions. Enable MFA delete and encryption on the S3 bucket.
  • B. Create a new AWS CloudTrail trail in each member account of the organization. Create new Amazon S3 buckets to store the logs. Deploy the trail to all AWS Regions. Enable MFA delete and encryption on the S3 buckets.
  • C. Create a new AWS CloudTrail trail in the organization's management account. Create a new Amazon S3 bucket with versioning turned on to store the logs. Deploy the trail for all accounts in the organization. Enable MFA delete and encryption on the S3 bucket.
  • D. Create a new AWS CloudTrail trail in the organization's management account. Create a new Amazon S3 bucket to store the logs. Configure Amazon Simple Notification Service (Amazon SNS) to send log-file delivery notifications to an external management system that will track the logs. Enable MFA delete and encryption on the S3 bucket.

Answer: C

Explanation:
The correct answer is C. This option uses AWS CloudTrail to create a trail in the organization’s management account that applies to all accounts in the organization. This way, the company can centrally manage and audit all API calls to AWS resources across multiple accounts and regions. The company also needs to create a new Amazon S3 bucket with versioning turned on to store the logs. Versioning helps protect against accidental or malicious deletion of log files by keeping multiple versions of each object in the bucket. The company also needs to enable MFA delete and encryption on the S3 bucket to further enhance the security and durability of the data store.
Option A is incorrect because it uses an existing S3 bucket in the organization’s management account to store the logs. This may not be optimal for regulatory compliance, as the existing bucket may have different permissions, encryption settings, or lifecycle policies than a dedicated bucket for CloudTrail logs.
Option B is incorrect because it requires creating a new CloudTrail trail in each member account of the organization. This adds operational overhead and complexity, as the company would need to manage multiple trails and S3 buckets across multiple accounts and regions.
Option D is incorrect because it requires configuring Amazon SNS to send log-file delivery notifications to an external management system that will track the logs. This adds unnecessary complexity and cost, as CloudTrail already provides log-file integrity validation and log-file digest delivery features that can help verify the authenticity and integrity of log files.
Reference: Creating a Trail for an Organization
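For illustration, the organization trail in the correct option could be created from the management account with a boto3 call like the following; the trail and bucket names are placeholders, and the bucket policy is assumed to already allow CloudTrail delivery.

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Organization trail that records API calls from every account in the
# organization, across all Regions, with log file integrity validation.
cloudtrail.create_trail(
    Name="org-audit-trail",
    S3BucketName="org-cloudtrail-logs-example",
    IsOrganizationTrail=True,
    IsMultiRegionTrail=True,
    EnableLogFileValidation=True,
)
cloudtrail.start_logging(Name="org-audit-trail")
```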

NEW QUESTION 18

A security engineer determined that an existing application retrieves credentials to an Amazon RDS for MySQL database from an encrypted file in Amazon S3. For the next version of the application, the security engineer wants to implement the following application design changes to improve security:
  • The database must use strong, randomly generated passwords stored in a secure AWS managed service.
  • The application resources must be deployed through AWS CloudFormation.
  • The application must rotate credentials for the database every 90 days.
A solutions architect will generate a CloudFormation template to deploy the application.
Which resources specified in the CloudFormation template will meet the security engineer's requirements with the LEAST amount of operational overhead?

  • A. Generate the database password as a secret resource using AWS Secrets Manager. Create an AWS Lambda function resource to rotate the database password. Specify a Secrets Manager RotationSchedule resource to rotate the database password every 90 days.
  • B. Generate the database password as a SecureString parameter type using AWS Systems Manager Parameter Store. Create an AWS Lambda function resource to rotate the database password. Specify a Parameter Store RotationSchedule resource to rotate the database password every 90 days.
  • C. Generate the database password as a secret resource using AWS Secrets Manager. Create an AWS Lambda function resource to rotate the database password. Create an Amazon EventBridge scheduled rule resource to trigger the Lambda function password rotation every 90 days.
  • D. Generate the database password as a SecureString parameter type using AWS Systems Manager Parameter Store. Specify an AWS AppSync DataSource resource to automatically rotate the database password every 90 days.

Answer: A

Explanation:
https://aws.amazon.com/blogs/security/how-to-securely-provide-database-credentials-to-lambda-functions-by-us
https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotating-secrets.html
https://docs.aws.amazon.com/secretsmanager/latest/userguide/integrating_cloudformation.html
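As a rough API-level sketch of what the Secrets Manager resources in the correct option do (CloudFormation would declare these as AWS::SecretsManager::Secret and AWS::SecretsManager::RotationSchedule resources), here is a boto3 version; the secret name and rotation Lambda ARN are placeholders.

```python
import boto3

secrets = boto3.client("secretsmanager")

# Generate a strong random password, store it as a secret, and schedule
# rotation every 90 days through a rotation Lambda function.
password = secrets.get_random_password(PasswordLength=32, ExcludeCharacters='"@/\\')

secret = secrets.create_secret(
    Name="prod/mysql/app-credentials",
    SecretString=password["RandomPassword"],
)

secrets.rotate_secret(
    SecretId=secret["ARN"],
    RotationLambdaARN="arn:aws:lambda:us-east-1:111122223333:function:rotate-mysql-secret",
    RotationRules={"AutomaticallyAfterDays": 90},
)
```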

NEW QUESTION 19
......

P.S. Downloadfreepdf.net is now offering 100% pass-ensure AWS-Certified-Solutions-Architect-Professional dumps! All AWS-Certified-Solutions-Architect-Professional exam questions have been updated with correct answers: https://www.downloadfreepdf.net/AWS-Certified-Solutions-Architect-Professional-pdf-download.html (300 New Questions)