Pass4sure SAP-C01 questions are updated, and all SAP-C01 answers are verified by experts. Once you have fully prepared with our SAP-C01 exam prep kits, you will be ready for the real SAP-C01 exam without a problem. We have an up-to-date Amazon Web Services SAP-C01 study guide. Passed SAP-C01 on the first attempt! Here's what I did.

Online Amazon Web Services SAP-C01 free dumps demo below:

NEW QUESTION 1
A group of research institutions and hospitals are in a partnership to study 2 PB of genomic data. The institute that owns the data stores it in an Amazon S3 bucket and updates it regularly. The institute would like to give all of the organizations in the partnership read access to the data. All members of the partnership are extremely cost-conscious, and the institute that owns the account with the S3 bucket is concerned about covering the costs for requests and data transfers from Amazon S3.
Which solution allows for secure data sharing without causing the institute that owns the bucket to assume all the costs for S3 requests and data transfers?

  • A. Ensure that all organizations in the partnership have AWS accounts. In the account with the S3 bucket, create a cross-account role for each account in the partnership that allows read access to the data. Have the organizations assume and use that read role when accessing the data.
  • B. Ensure that all organizations in the partnership have AWS accounts. Create a bucket policy on the bucket that owns the data. The policy should allow the accounts in the partnership read access to the bucket. Enable Requester Pays on the bucket. Have the organizations use their AWS credentials when accessing the data.
  • C. Ensure that all organizations in the partnership have AWS accounts. Configure buckets in each of the accounts with a bucket policy that allows the institute that owns the data the ability to write to the bucket. Periodically sync the data from the institute’s account to the other organizations. Have the organizations use their AWS credentials when accessing the data using their accounts.
  • D. Ensure that all organizations in the partnership have AWS accounts. In the account with the S3 bucket, create a cross-account role for each account in the partnership that allows read access to the data. Enable Requester Pays on the bucket. Have the organizations assume and use that read role when accessing the data.

Answer: B

Explanation:
https://docs.aws.amazon.com/AmazonS3/latest/dev/RequesterPaysBuckets.html
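For reference, the bucket-policy side of the Requester Pays pattern in option B can be sketched as a small policy builder; the bucket name and partner account IDs below are hypothetical:

```python
import json

def build_partner_read_policy(bucket, partner_account_ids):
    """Bucket policy granting read access to partner AWS accounts.

    With Requester Pays enabled on the bucket, each partner account is
    billed for its own requests and data-transfer costs.
    """
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "PartnerReadAccess",
            "Effect": "Allow",
            "Principal": {"AWS": [f"arn:aws:iam::{acct}:root" for acct in partner_account_ids]},
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
        }],
    }

# Hypothetical bucket and partner accounts.
policy = build_partner_read_policy("genomics-data", ["111122223333", "444455556666"])
print(json.dumps(policy, indent=2))
```

Requester Pays itself would then be enabled with `put_bucket_request_payment(Bucket=..., RequestPaymentConfiguration={"Payer": "Requester"})`, and partners must include the `x-amz-request-payer` header (or equivalent SDK flag) on their requests.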

NEW QUESTION 2
A company runs an ordering system on AWS using Amazon SQS and AWS Lambda, with each order received as a JSON message. Recently, the company had a marketing event that led to a tenfold increase in orders. With this increase, the following undesired behaviors started in the ordering system:
  • Lambda failures while processing orders lead to queue backlogs.
  • The same orders have been processed multiple times.
A Solutions Architect has been asked to solve the existing issues with the ordering system and add the following resiliency features:
  • Retain problematic orders for analysis.
  • Send a notification if errors go beyond a threshold value.
How should the Solutions Architect meet these requirements?

  • A. Receive multiple messages with each Lambda invocation, add error handling to message processing code and delete messages after processing, increase the visibility timeout for the messages, create a dead letter queue for messages that could not be processed, create an Amazon CloudWatch alarm on Lambda errors for notification.
  • B. Receive single messages with each Lambda invocation, put additional Lambda workers to poll the queue, delete messages after processing, increase the message timer for the messages, use Amazon CloudWatch Logs for messages that could not be processed, create a CloudWatch alarm on Lambda errors for notification.
  • C. Receive multiple messages with each Lambda invocation, use long polling when receiving the messages, log the errors from the message processing code using Amazon CloudWatch Logs, create a dead letter queue with AWS Lambda to capture failed invocations, create CloudWatch events on Lambda errors for notification.
  • D. Receive multiple messages with each Lambda invocation, add error handling to message processing code and delete messages after processing, increase the visibility timeout for the messages, create a delay queue for messages that could not be processed, create an Amazon CloudWatch metric on Lambda errors for notification.

Answer: A
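The dead-letter-queue mechanics referenced in the options come down to a redrive policy on the main order queue; a minimal sketch (the DLQ ARN is hypothetical):

```python
import json

def redrive_attributes(dlq_arn, max_receive_count=3, visibility_timeout=300):
    """Attributes for the main order queue: after max_receive_count failed
    receives, SQS moves the message to the dead-letter queue for analysis.

    The visibility timeout should exceed the Lambda function timeout so a
    message is not redelivered (and reprocessed) mid-invocation.
    """
    return {
        "VisibilityTimeout": str(visibility_timeout),
        "RedrivePolicy": json.dumps({
            "deadLetterTargetArn": dlq_arn,
            "maxReceiveCount": max_receive_count,
        }),
    }

# Hypothetical DLQ ARN.
attrs = redrive_attributes("arn:aws:sqs:us-east-1:111122223333:orders-dlq")
```

These attributes would be passed to `sqs.set_queue_attributes(QueueUrl=..., Attributes=attrs)`; a CloudWatch alarm on the Lambda `Errors` metric then covers the notification-threshold requirement.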

NEW QUESTION 3
A company had a tight deadline to migrate its on-premises environment to AWS. It moved over Microsoft SQL Servers and Microsoft Windows Servers using the virtual machine import/export service and rebuilt other applications natively in the cloud. The teams created databases both on Amazon EC2 and on Amazon RDS. Each team in the company was responsible for migrating its applications, and the teams created individual accounts for isolation of resources. The company did not have much time to consider costs, but now it would like suggestions on reducing its AWS spend.
Which steps should a Solutions Architect take to reduce costs?

  • A. Enable AWS Business Support and review AWS Trusted Advisor’s cost checks. Create Amazon EC2 Auto Scaling groups for applications that experience fluctuating demand. Save AWS Simple Monthly Calculator reports in Amazon S3 for trend analysis. Create a master account under Organizations and have teams join for consolidated billing.
  • B. Enable Cost Explorer and AWS Business Support. Reserve Amazon EC2 and Amazon RDS DB instances. Use Amazon CloudWatch and AWS Trusted Advisor for monitoring and to receive cost-savings suggestions. Create a master account under Organizations and have teams join for consolidated billing.
  • C. Create an AWS Lambda function that changes the instance size based on Amazon CloudWatch alarms. Reserve instances based on AWS Simple Monthly Calculator suggestions. Have an AWS Well-Architected Framework review and apply recommendations. Create a master account under Organizations and have teams join for consolidated billing.
  • D. Create a budget and monitor for costs exceeding the budget. Create Amazon EC2 Auto Scaling groups for applications that experience fluctuating demand. Create an AWS Lambda function that changes instance sizes based on Amazon CloudWatch alarms. Have each team upload their bill to an Amazon S3 bucket for analysis of team spending. Use Spot Instances on nightly batch processing jobs.

Answer: B

NEW QUESTION 4
A company has an application that generates a weather forecast that is updated every 15 minutes with an output resolution of 1 billion unique positions, each approximately 20 bytes in size (20 Gigabytes per forecast). Every hour, the forecast data is globally accessed approximately 5 million times (1,400 requests per second), and up to 10 times more during weather events. The forecast data is overwritten every update. Users of the current weather forecast application expect responses to queries to be returned in less than two seconds for each request.
Which design meets the required request rate and response time?

  • A. Store forecast locations in an Amazon ES cluster. Use an Amazon CloudFront distribution targeting an Amazon API Gateway endpoint with AWS Lambda functions responding to queries as the origin. Enable API caching on the API Gateway stage with a cache-control timeout set for 15 minutes.
  • B. Store forecast locations in an Amazon EFS volume. Create an Amazon CloudFront distribution that targets an Elastic Load Balancing group of an Auto Scaling fleet of Amazon EC2 instances that have mounted the Amazon EFS volume. Set the cache-control timeout for 15 minutes in the CloudFront distribution.
  • C. Store forecast locations in an Amazon ES cluster. Use an Amazon CloudFront distribution targeting an API Gateway endpoint with AWS Lambda functions responding to queries as the origin. Create an AWS Lambda@Edge function that caches the data locally at edge locations for 15 minutes.
  • D. Store forecast locations in Amazon S3 as individual objects. Create an Amazon CloudFront distribution targeting an Elastic Load Balancing group of an Auto Scaling fleet of EC2 instances, querying the origin of the S3 objects. Set the cache-control timeout for 15 minutes in the CloudFront distribution.

Answer: C

Explanation:
https://aws.amazon.com/blogs/networking-and-content-delivery/lambdaedge-design-best-practices/
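Edge caching as in option C hinges on CloudFront seeing a `Cache-Control` header on the response; a minimal Lambda@Edge origin-response handler sketch (Lambda@Edge also supports Python runtimes; the event shape below is the standard CloudFront event, trimmed for local testing):

```python
def handler(event, context):
    """Lambda@Edge origin-response trigger: attach a 15-minute Cache-Control
    header so CloudFront edge locations serve the forecast from cache until
    the next 15-minute update cycle."""
    response = event["Records"][0]["cf"]["response"]
    response["headers"]["cache-control"] = [
        {"key": "Cache-Control", "value": "max-age=900"}  # 900 s = 15 minutes
    ]
    return response

# Minimal CloudFront origin-response event shape for local testing.
sample_event = {"Records": [{"cf": {"response": {"status": "200", "headers": {}}}}]}
out = handler(sample_event, None)
```

Setting the header at origin-response time means one origin fetch per edge location per cycle, which is what keeps 1,400+ requests per second off the Lambda origin.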

NEW QUESTION 5
A company’s application is increasingly popular and is experiencing latency because of high-volume reads on the database server.
The service has the following properties:
  • A highly available REST API hosted in one region using an Application Load Balancer (ALB) with Auto Scaling.
  • A MySQL database hosted on an Amazon EC2 instance in a single Availability Zone.
The company wants to reduce latency, increase in-region database read performance, and have multi-region disaster recovery capabilities that can perform a live recovery automatically without any data or performance loss (HA/DR).
Which deployment strategy will meet these requirements?

  • A. Use AWS CloudFormation StackSets to deploy the API layer in two regions. Migrate the database to an Amazon Aurora with MySQL database cluster with multiple read replicas in one region and a read replica in a different region than the source database cluster. Use Amazon Route 53 health checks to trigger a DNS failover to the standby region if the health checks to the primary load balancer fail. In the event of Route 53 failover, promote the cross-region database replica to be the master and build out new read replicas in the standby region.
  • B. Use Amazon ElastiCache for Redis Multi-AZ with automatic failover to cache the database read queries. Use AWS OpsWorks to deploy the API layer, cache layer, and existing database layer in two regions. In the event of failure, use Amazon Route 53 health checks on the database to trigger a DNS failover to the standby region if the health checks in the primary region fail. Back up the MySQL database frequently, and in the event of a failure in an active region, copy the backup to the standby region and restore the standby database.
  • C. Use AWS CloudFormation StackSets to deploy the API layer in two regions. Add the database to an Auto Scaling group. Add a read replica to the database in the second region. Use Amazon Route 53 health checks on the database to trigger a DNS failover to the standby region if the health checks in the primary region fail. Promote the cross-region database replica to be the master and build out new read replicas in the standby region.
  • D. Use Amazon ElastiCache for Redis Multi-AZ with automatic failover to cache the database read queries. Use AWS OpsWorks to deploy the API layer, cache layer, and existing database layer in two regions. Use Amazon Route 53 health checks on the ALB to trigger a DNS failover to the standby region if the health checks in the primary region fail. Back up the MySQL database frequently, and in the event of a failure in an active region, copy the backup to the standby region and restore the standby database.

Answer: A

NEW QUESTION 6
A Solutions Architect is migrating a 10 TB PostgreSQL database to Amazon RDS for PostgreSQL. The company’s internet link is 50 Mbps with a VPN into the Amazon VPC, and the Solutions Architect needs to migrate the data and synchronize the changes before the cutover. The cutover must take place within an 8-day period.
What is the LEAST complex method of migrating the database securely and reliably?

  • A. Order an AWS Snowball device and copy the database using AWS DMS. When the database is available in Amazon S3, use AWS DMS to load it to Amazon RDS, and configure a job to synchronize changes before the cutover.
  • B. Create an AWS DMS job to continuously replicate the data from on premises to AWS. Cut over to Amazon RDS after the data is synchronized.
  • C. Order an AWS Snowball device and copy a database dump to the device. After the data has been copied to Amazon S3, import it to the Amazon RDS instance. Set up log shipping over a VPN to synchronize changes before the cutover.
  • D. Order an AWS Snowball device and copy the database by using the AWS Schema Conversion Tool. When the data is available in Amazon S3, use AWS DMS to load it to Amazon RDS, and configure a job to synchronize changes before the cutover.

Answer: C
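The 8-day constraint is worth sanity-checking against the link capacity (reading the stated link speed as 50 Mbps; the 0.8 efficiency factor is an assumption for VPN overhead and competing traffic):

```python
def transfer_days(size_tb, link_mbps, efficiency=0.8):
    """Days needed to push size_tb terabytes over a link_mbps link at the
    given utilization efficiency."""
    bits = size_tb * 1e12 * 8
    seconds = bits / (link_mbps * 1e6 * efficiency)
    return seconds / 86400

days = transfer_days(10, 50)
print(f"{days:.1f} days")  # well over the 8-day window, even before overhead
```

Even at 100% link utilization, 10 TB over 50 Mbps takes roughly 18.5 days, which is why a Snowball device for the bulk load plus log shipping for the delta is the practical path.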

NEW QUESTION 7
A financial services company is moving to AWS and wants to enable Developers to experiment and innovate while preventing access to production applications. The company has the following requirements:
  • Production workloads cannot be directly connected to the internet.
  • All workloads must be restricted to the us-west-2 and eu-central-1 Regions.
  • A notification should be sent when a Developer sandbox exceeds $500 in monthly AWS spending.
Which combination of actions needs to be taken to create a multi-account structure that meets the company's requirements? (Select THREE.)

  • A. Create accounts for each production workload within an organization in AWS Organizations. Place the production accounts within an organizational unit (OU). For each account, delete the default VPC. Create an SCP with a Deny rule for the "attach an internet gateway" and "create a default VPC" actions. Attach the SCP to the OU for the production accounts.
  • B. Create accounts for each production workload within an organization in AWS Organizations. Place the production accounts within an organizational unit (OU). Create an SCP with a Deny rule on the "attach an internet gateway" action. Create an SCP with a Deny rule to prevent use of the default VPC. Attach the SCPs to the OU for the production accounts.
  • C. Create an SCP containing a Deny effect for cloudfront:*, iam:*, route53:*, and support:* with a StringNotEquals condition on the aws:RequestedRegion condition key with us-west-2 and eu-central-1 values. Attach the SCP to the organization's root.
  • D. Create an IAM permission boundary containing a Deny effect for cloudfront:*, iam:*, route53:*, and support:* with a StringNotEquals condition on the aws:RequestedRegion condition key with us-west-2 and eu-central-1 values. Attach the permission boundary to an IAM group containing the development and production users.
  • E. Create accounts for each development workload within an organization in AWS Organizations. Place the development accounts within an organizational unit (OU). Create a custom AWS Config rule to deactivate all IAM users when an account's monthly bill exceeds $500.
  • F. Create accounts for each development workload within an organization in AWS Organizations. Place the development accounts within an organizational unit (OU). Create a budget within AWS Budgets for each development account to monitor and report on monthly spending exceeding $500.

Answer: ACF
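For context, a region-restriction SCP is commonly written to deny all actions outside the approved Regions while exempting global (non-region-scoped) services via NotAction; a sketch, where the exempted action list mirrors the services named in the options:

```python
def region_restriction_scp(allowed_regions, global_service_actions):
    """SCP denying any action outside allowed_regions, except actions for
    global services that do not carry a region in the request."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyOutsideAllowedRegions",
            "Effect": "Deny",
            "NotAction": global_service_actions,  # global services stay usable
            "Resource": "*",
            "Condition": {"StringNotEquals": {"aws:RequestedRegion": allowed_regions}},
        }],
    }

scp = region_restriction_scp(
    ["us-west-2", "eu-central-1"],
    ["cloudfront:*", "iam:*", "route53:*", "support:*"],
)
```

Because SCPs apply to every principal in an account (and permission boundaries cannot be attached to IAM groups), an SCP at the organization root is the standard place to enforce this guardrail.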

NEW QUESTION 8
A bank is designing an online customer service portal where customers can chat with customer service agents. The portal is required to maintain a 15-minute RPO or RTO in case of a regional disaster. Banking regulations require that all customer service chat transcripts be preserved on durable storage for at least 7 years, chat conversations be encrypted in flight, and transcripts be encrypted at rest. The Data Loss Prevention team requires that data at rest be encrypted using a key that the team controls, rotates, and revokes.
Which design meets these requirements?

  • A. The chat application logs each chat message into Amazon CloudWatch Logs. A scheduled AWS Lambda function invokes a CloudWatch Logs CreateExportTask every 5 minutes to export chat transcripts to Amazon S3. The S3 bucket is configured for cross-region replication to the backup region. Separate AWS KMS keys are specified for the CloudWatch Logs group and the S3 bucket.
  • B. The chat application logs each chat message into two different Amazon CloudWatch Logs groups in two different regions, with the same AWS KMS key applied. Both CloudWatch Logs groups are configured to export logs into an Amazon Glacier vault with a 7-year vault lock policy with a KMS key specified.
  • C. The chat application logs each chat message into Amazon CloudWatch Logs. A subscription filter on the CloudWatch Logs group feeds into an Amazon Kinesis Data Firehose, which streams the chat messages into an Amazon S3 bucket in the backup region. Separate AWS KMS keys are specified for the CloudWatch Logs group and the Kinesis Data Firehose.
  • D. The chat application logs each chat message into Amazon CloudWatch Logs. The CloudWatch Logs group is configured to export logs into an Amazon Glacier vault with a 7-year vault lock policy. Glacier cross-region replication mirrors chat archives to the backup region. Separate AWS KMS keys are specified for the CloudWatch Logs group and the Amazon Glacier vault.

Answer: C

NEW QUESTION 9
A company wants to replace its call center system with a solution built using AWS managed services. The company call center would like the solution to receive calls, create contact flows, and scale to handle growth projections. The call center would also like the solution to use deep learning capabilities to recognize the intent of callers and handle basic tasks, reducing the need to speak to an agent. The solution should also be able to query business applications and provide relevant information back to callers as requested.
Which services should the Solution Architect use to build this solution? (Choose three.)

  • A. Amazon Rekognition to identify who is calling.
  • B. Amazon Connect to create a cloud-based contact center.
  • C. Amazon Alexa for Business to build a conversational interface.
  • D. AWS Lambda to integrate with internal systems.
  • E. Amazon Lex to recognize the intent of the caller.
  • F. Amazon SQS to add incoming callers to a queue.

Answer: BDE
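The Lex-plus-Lambda pairing in the answer typically looks like a fulfillment function that reads the recognized intent and either handles the task or hands off to an agent; a sketch using the Lex V1 event shape, with a hypothetical intent name:

```python
def handler(event, context):
    """Lex (V1) fulfillment hook: handle a basic task for the recognized
    intent, or signal that a human agent is needed."""
    intent = event["currentIntent"]["name"]
    slots = event["currentIntent"]["slots"]
    if intent == "CheckOrderStatus":  # hypothetical intent defined in the bot
        msg = f"Order {slots.get('OrderId')} is on its way."
    else:
        msg = "Let me connect you to an agent."
    return {
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": "Fulfilled",
            "message": {"contentType": "PlainText", "content": msg},
        }
    }

# Trimmed Lex V1 event for local testing.
sample = {"currentIntent": {"name": "CheckOrderStatus", "slots": {"OrderId": "42"}}}
resp = handler(sample, None)
```

In the full design, Amazon Connect owns the contact flow, Lex resolves caller intent, and this Lambda is where the "query business applications" integration lives.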

NEW QUESTION 10
A company is implementing a multi-account strategy; however, the Management team has expressed concerns that services like DNS may become overly complex. The company needs a solution that allows private DNS to be shared among virtual private clouds (VPCs) in different accounts. The company will have approximately 50 accounts in total.
What solution would create the LEAST complex DNS architecture and ensure that each VPC can resolve all AWS resources?

  • A. Create a shared services VPC in a central account, and create a VPC peering connection from the shared services VPC to each of the VPCs in the other accounts. Within Amazon Route 53, create a privately hosted zone in the shared services VPC and resource record sets for the domain and subdomains. Programmatically associate other VPCs with the hosted zone.
  • B. Create a VPC peering connection among the VPCs in all accounts. Set the VPC attributes enableDnsHostnames and enableDnsSupport to “true” for each VPC. Create an Amazon Route 53 private zone for each VPC. Create resource record sets for the domain and subdomains. Programmatically associate the hosted zones in each VPC with the other VPCs.
  • C. Create a shared services VPC in a central account. Create a VPC peering connection from the VPCs in other accounts to the shared services VPC. Create an Amazon Route 53 privately hosted zone in the shared services VPC with resource record sets for the domain and subdomains. Allow UDP and TCP port 53 over the VPC peering connections.
  • D. Set the VPC attributes enableDnsHostnames and enableDnsSupport to “false” in every VPC. Create an AWS Direct Connect connection with a private virtual interface. Allow UDP and TCP port 53 over the virtual interface. Use the on-premises DNS servers to resolve the IP addresses in each VPC on AWS.

Answer: A

Explanation:
https://aws.amazon.com/blogs/networking-and-content-delivery/centralized-dns-management-of-hybrid-cloud-w
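The "programmatically associate other VPCs" step in option A is a two-call handshake per spoke account: the zone owner authorizes the association, then the VPC owner completes it. A parameter-building sketch (zone and VPC IDs hypothetical):

```python
def association_calls(hosted_zone_id, vpc_region, vpc_id):
    """Return (api_name, kwargs) pairs for sharing a private hosted zone
    across accounts: the zone-owning account authorizes, then the
    VPC-owning account associates."""
    vpc = {"VPCRegion": vpc_region, "VPCId": vpc_id}
    return [
        ("create_vpc_association_authorization",  # run in the central (hub) account
         {"HostedZoneId": hosted_zone_id, "VPC": vpc}),
        ("associate_vpc_with_hosted_zone",        # run in the spoke account
         {"HostedZoneId": hosted_zone_id, "VPC": vpc}),
    ]

calls = association_calls("Z123EXAMPLE", "us-east-1", "vpc-0abc1234")
```

Each pair maps onto the boto3 `route53` client method of the same name; looping this over the ~50 accounts keeps a single hosted zone authoritative, which is what makes the design the least complex option.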

NEW QUESTION 11
A company has an application that runs a web service on Amazon EC2 instances and stores .jpg images in Amazon S3. The web traffic has a predictable baseline, but demand often spikes unpredictably for short periods of time. The application is loosely coupled and stateless. The .jpg images stored in Amazon S3 are accessed frequently for the first 15 to 20 days; they are seldom accessed thereafter, but always need to be immediately available. The CIO has asked for ways to reduce costs.
Which of the following options will reduce costs? (Choose two.)

  • A. Purchase Reserved instances for baseline capacity requirements and use On-Demand instances for the demand spikes.
  • B. Configure a lifecycle policy to move the .jpg images on Amazon S3 to S3 IA after 30 days.
  • C. Use On-Demand instances for baseline capacity requirements and use Spot Fleet instances for the demand spikes.
  • D. Configure a lifecycle policy to move the .jpg images on Amazon S3 to Amazon Glacier after 30 days.
  • E. Create a script that checks the load on all web servers and terminates unnecessary On-Demand instances.

Answer: AB
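The lifecycle transition in option B is a small piece of configuration; a sketch (the object prefix is hypothetical):

```python
def ia_transition_rule(prefix="images/", days=30):
    """Lifecycle rule moving objects to S3 Standard-IA after `days` days.

    Standard-IA keeps objects immediately available (unlike Glacier), at a
    lower storage price, which fits the seldom-accessed-but-needed pattern.
    """
    return {
        "Rules": [{
            "ID": "jpg-to-ia",
            "Filter": {"Prefix": prefix},
            "Status": "Enabled",
            "Transitions": [{"Days": days, "StorageClass": "STANDARD_IA"}],
        }]
    }

config = ia_transition_rule()
```

This structure is what boto3's `put_bucket_lifecycle_configuration(Bucket=..., LifecycleConfiguration=config)` expects; 30 days is also the minimum age S3 allows for a Standard-IA transition.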

NEW QUESTION 12
A Solutions Architect wants to make sure that only AWS users or roles with suitable permissions can access a new Amazon API Gateway endpoint. The Solutions Architect wants an end-to-end view of each request to analyze the latency of the request and create service maps.
How can the Solutions Architect design the API Gateway access control and perform request inspections?

  • A. For the API Gateway method, set the authorization to AWS_IAM. Then, give the IAM user or role execute-api:Invoke permission on the REST API resource. Enable the API caller to sign requests with AWS Signature Version 4 when accessing the endpoint. Use AWS X-Ray to trace and analyze user requests to API Gateway.
  • B. For the API Gateway resource, set CORS to enabled and only return the company's domain in Access-Control-Allow-Origin headers. Then, give the IAM user or role execute-api:Invoke permission on the REST API resource. Use Amazon CloudWatch to trace and analyze user requests to API Gateway.
  • C. Create an AWS Lambda function as the custom authorizer, ask the API client to pass the key and secret when making the call, and then use Lambda to validate the key/secret pair against the IAM system. Use AWS X-Ray to trace and analyze user requests to API Gateway.
  • D. Create a client certificate for API Gateway. Distribute the certificate to the AWS users and roles that need to access the endpoint. Enable the API caller to pass the client certificate when accessing the endpoint. Use Amazon CloudWatch to trace and analyze user requests to API Gateway.

Answer: A

NEW QUESTION 13
A company is migrating to the cloud. It wants to evaluate the configurations of virtual machines in its existing data center environment to ensure that it can size new Amazon EC2 instances accurately. The company wants to collect metrics, such as CPU, memory, and disk utilization, and it needs an inventory of what processes are running on each instance. The company would also like to monitor network connections to map communications between servers.
Which would enable the collection of this data MOST cost effectively?

  • A. Use AWS Application Discovery Service and deploy the data collection agent to each virtual machine in the data center.
  • B. Configure the Amazon CloudWatch agent on all servers within the local environment and publish metrics to Amazon CloudWatch Logs.
  • C. Use AWS Application Discovery Service and enable agentless discovery in the existing virtualization environment.
  • D. Enable AWS Application Discovery Service in the AWS Management Console and configure the corporate firewall to allow scans over a VPN.

Answer: A

NEW QUESTION 14
A company is adding a new approved external vendor that only supports IPv6 connectivity. The company’s backend systems sit in the private subnet of an Amazon VPC. The company uses a NAT gateway to allow these systems to communicate with external vendors over IPv4. Company policy requires that systems communicating with external vendors use a security group that limits access to only approved external vendors. The virtual private cloud (VPC) uses the default network ACL.
The Systems Operator successfully assigns IPv6 addresses to each of the backend systems. The Systems Operator also updates the outbound security group to include the IPv6 CIDR of the external vendor (destination). The systems within the VPC are able to ping one another successfully over IPv6. However, these systems are unable to communicate with the external vendor.
What changes are required to enable communication with the external vendor?

  • A. Create an IPv6 NAT instance. Add a route for destination 0.0.0.0/0 pointing to the NAT instance.
  • B. Enable IPv6 on the NAT gateway. Add a route for destination ::/0 pointing to the NAT gateway.
  • C. Enable IPv6 on the internet gateway. Add a route for destination 0.0.0.0/0 pointing to the IGW.
  • D. Create an egress-only internet gateway. Add a route for destination ::/0 pointing to the gateway.

Answer: D

Explanation:
https://docs.aws.amazon.com/vpc/latest/userguide/egress-only-internet-gateway.html
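The route in the answer pairs a ::/0 IPv6 destination with the egress-only internet gateway; a sketch of the route parameters (resource IDs hypothetical):

```python
def egress_only_route(route_table_id, eigw_id):
    """Route sending all outbound IPv6 traffic (::/0) through an egress-only
    internet gateway: instances can initiate IPv6 connections to the vendor,
    but inbound-initiated IPv6 traffic is blocked, like a NAT gateway for IPv4."""
    return {
        "RouteTableId": route_table_id,
        "DestinationIpv6CidrBlock": "::/0",
        "EgressOnlyInternetGatewayId": eigw_id,
    }

# Hypothetical IDs; these kwargs map onto boto3's ec2.create_route().
kwargs = egress_only_route("rtb-0abc1234", "eigw-0def5678")
```

The security-group rule limiting the destination to the vendor's IPv6 CIDR (already done by the Systems Operator) then completes the policy requirement.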

NEW QUESTION 15
A company has a large on-premises Apache Hadoop cluster with a 20 PB HDFS database. The cluster is growing every quarter by roughly 200 instances and 1 PB. The company’s goals are to enable resiliency for its Hadoop data, limit the impact of losing cluster nodes, and significantly reduce costs. The current cluster runs 24/7 and supports a variety of analysis workloads, including interactive queries and batch processing.
Which solution would meet these requirements with the LEAST expense and down time?

  • A. Use AWS Snowmobile to migrate the existing cluster data to Amazon S3. Create a persistent Amazon EMR cluster initially sized to handle the interactive workload based on historical data from the on-premises cluster. Store the data on EMRFS. Minimize costs using Reserved Instances for master and core nodes and Spot Instances for task nodes, and auto scale task nodes based on Amazon CloudWatch metrics. Create job-specific, optimized clusters for batch workloads that are similarly optimized.
  • B. Use AWS Snowmobile to migrate the existing cluster data to Amazon S3. Create a persistent Amazon EMR cluster of similar size and configuration to the current cluster. Store the data on EMRFS. Minimize costs by using Reserved Instances. As the workload grows each quarter, purchase additional Reserved Instances and add to the cluster.
  • C. Use AWS Snowball to migrate the existing cluster data to Amazon S3. Create a persistent Amazon EMR cluster initially sized to handle the interactive workloads based on historical data from the on-premises cluster. Store the data on EMRFS. Minimize costs using Reserved Instances for master and core nodes and Spot Instances for task nodes, and auto scale task nodes based on Amazon CloudWatch metrics. Create job-specific, optimized clusters for batch workloads that are similarly optimized.
  • D. Use AWS Direct Connect to migrate the existing cluster data to Amazon S3. Create a persistent Amazon EMR cluster initially sized to handle the interactive workload based on historical data from the on-premises cluster. Store the data on EMRFS. Minimize costs using Reserved Instances for master and core nodes and Spot Instances for task nodes, and auto scale task nodes based on Amazon CloudWatch metrics. Create job-specific, optimized clusters for batch workloads that are similarly optimized.

Answer: A

Explanation:
Q: How should I choose between Snowmobile and Snowball?
To migrate large datasets of 10PB or more in a single location, you should use Snowmobile. For datasets less than 10PB or distributed in multiple locations, you should use Snowball. In addition, you should evaluate the amount of available bandwidth in your network backbone. If you have a high speed backbone with hundreds of Gb/s of spare throughput, then you can use Snowmobile to migrate the large datasets all at once. If you have limited bandwidth on your backbone, you should consider using multiple Snowballs to migrate the data incrementally.
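The Snowball-vs-Snowmobile trade-off in the explanation can be made concrete with a quick device count (assuming the roughly 80 TB usable capacity of a Snowball Edge Storage Optimized device):

```python
import math

def snowballs_needed(data_pb, usable_tb_per_device=80):
    """Number of Snowball devices needed to move data_pb petabytes, assuming
    ~80 TB usable capacity per Snowball Edge Storage Optimized device."""
    return math.ceil(data_pb * 1000 / usable_tb_per_device)

print(snowballs_needed(20))  # 250 devices -- why Snowmobile wins at 20 PB
```

Shuttling 250 devices (plus the quarterly 1 PB growth) is far more operational effort than one Snowmobile, which matches the 10 PB single-location guidance quoted above.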

NEW QUESTION 16
AnyCompany has acquired numerous companies over the past few years. The CIO for AnyCompany would like to keep the resources for each acquired company separate. The CIO also would like to enforce a chargeback model where each company pays for the AWS services it uses.
The Solutions Architect is tasked with designing an AWS architecture that allows AnyCompany to achieve the following:
  • Implementing a detailed chargeback mechanism to ensure that each company pays for the resources it uses.
  • AnyCompany can pay for AWS services for all its companies through a single invoice.
  • Developers in each acquired company have access to resources in their company only.
  • Developers in an acquired company should not be able to affect resources outside their own company.
  • A single identity store is used to authenticate Developers across all companies.
Which of the following approaches would meet these requirements? (Choose two.)

  • A. Create a multi-account strategy with an account per company. Use consolidated billing to ensure that AnyCompany needs to pay a single bill only.
  • B. Create a multi-account strategy with a virtual private cloud (VPC) for each company. Reduce impact across companies by not creating any VPC peering links. As everything is in a single account, there will be a single invoice. Use tagging to create a detailed bill for each company.
  • C. Create IAM users for each Developer in the account to which they require access. Create policies that allow the users access to all resources in that account. Attach the policies to the IAM users.
  • D. Create a federated identity store against the company’s Active Directory. Create IAM roles with appropriate permissions and set the trust relationships with AWS and the identity store. Use AWS STS to grant users access based on the groups they belong to in the identity store.
  • E. Create a multi-account strategy with an account per company. For billing purposes, use a tagging solution that uses a tag to identify the company that creates each resource.

Answer: AD

NEW QUESTION 17
A company has a legacy application running on servers on premises. To increase the application’s reliability, the company wants to gain actionable insights using application logs. A Solutions Architect has been given the following requirements for the solution:
  • Aggregate logs using AWS.
  • Automate log analysis for errors.
  • Notify the Operations team when errors exceed a specified threshold.
What solution meets the requirements?

  • A. Install Amazon Kinesis Agent on servers, send logs to Amazon Kinesis Data Streams and use Amazon Kinesis Data Analytics to identify errors, create an Amazon CloudWatch alarm to notify the Operations team of errors
  • B. Install an AWS X-Ray agent on servers, send logs to AWS Lambda and analyze them to identify errors, use Amazon CloudWatch Events to notify the Operations team of errors.
  • C. Install Logstash on servers, send logs to Amazon S3 and use Amazon Athena to identify errors, use sendmail to notify the Operations team of errors.
  • D. Install the Amazon CloudWatch agent on servers, send logs to Amazon CloudWatch Logs and use metric filters to identify errors, create a CloudWatch alarm to notify the Operations team of errors.

Answer: A

Explanation:
https://docs.aws.amazon.com/kinesis-agent-windows/latest/userguide/what-is-kinesis-agent-windows.html https://medium.com/@khandelwal12nidhi/build-log-analytic-solution-on-aws-cc62a70057b2
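The first step of option A can be sketched concretely: the Linux Kinesis Agent reads a JSON configuration (typically /etc/aws-kinesis/agent.json) that maps log file patterns to a Kinesis data stream. A minimal sketch follows; the log path and stream name are hypothetical placeholders, not values from the question.

```python
import json

# Minimal Kinesis Agent configuration (agent.json) that tails a log
# file and ships each line to a Kinesis data stream. The file pattern
# and stream name are illustrative assumptions.
agent_config = {
    "cloudwatch.emitMetrics": True,
    "flows": [
        {
            "filePattern": "/var/log/app/*.log",  # hypothetical log location
            "kinesisStream": "application-logs",  # hypothetical stream name
        }
    ],
}

print(json.dumps(agent_config, indent=2))
```

Kinesis Data Analytics would then consume the stream, and its error-rate output can drive the CloudWatch alarm the option describes.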

NEW QUESTION 18
An organization has a write-intensive mobile application that uses Amazon API Gateway, AWS Lambda, and Amazon DynamoDB. The application has scaled well, however, costs have increased exponentially because of higher than anticipated Lambda costs. The application’s use is unpredictable, but there has been a steady 20% increase in utilization every month.
While monitoring the current Lambda functions, the Solutions Architect notices that the execution-time averages 4.5 minutes. Most of the wait time is the result of a high-latency network call to a 3-TB MySQL database server that is on-premises. A VPN is used to connect to the VPC, so the Lambda functions have been configured with a five-minute timeout.
How can the Solutions Architect reduce the cost of the current architecture?

  • A. Replace the VPN with AWS Direct Connect to reduce the network latency to the on-premises MySQL database. Enable local caching in the mobile application to reduce the Lambda function invocation calls. Monitor the Lambda function performance; gradually adjust the timeout and memory properties to lower values while maintaining an acceptable execution time. Offload the frequently accessed records from DynamoDB to Amazon ElastiCache.
  • B. Replace the VPN with AWS Direct Connect to reduce the network latency to the on-premises MySQL database. Cache the API Gateway results to Amazon CloudFront. Use Amazon EC2 Reserved Instances instead of Lambda. Enable Auto Scaling on EC2, and use Spot Instances during peak times. Enable DynamoDB Auto Scaling to manage target utilization.
  • C. Migrate the MySQL database server into a Multi-AZ Amazon RDS for MySQL. Enable caching of the Amazon API Gateway results in Amazon CloudFront to reduce the number of Lambda function invocations. Monitor the Lambda function performance; gradually adjust the timeout and memory properties to lower values while maintaining an acceptable execution time. Enable DynamoDB Accelerator for frequently accessed records, and enable the DynamoDB Auto Scaling feature.
  • D. Migrate the MySQL database server into a Multi-AZ Amazon RDS for MySQL. Enable API caching on API Gateway to reduce the number of Lambda function invocations. Continue to monitor the AWS Lambda function performance; gradually adjust the timeout and memory properties to lower values while maintaining an acceptable execution time. Enable Auto Scaling in DynamoDB.

Answer: D
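The cost reasoning behind the answer can be sketched with back-of-the-envelope arithmetic: Lambda compute cost is GB-seconds multiplied by a per-GB-second rate, so shortening the 4.5-minute execution and cutting invocations via API caching both reduce cost directly. The rate, invocation counts, and cache hit ratio below are assumptions for illustration only.

```python
# Approximate Lambda compute cost: invocations x duration x memory x rate.
# The per-GB-second rate is an assumed figure for illustration.
RATE_PER_GB_SECOND = 0.0000166667

def lambda_compute_cost(invocations, duration_s, memory_gb):
    """Rough monthly compute cost, ignoring the free tier and the
    per-request charge."""
    return invocations * duration_s * memory_gb * RATE_PER_GB_SECOND

# Hypothetical: 1M invocations at the observed 270 s average, versus
# 100K invocations at 2 s after caching and a nearby RDS database.
before = lambda_compute_cost(1_000_000, 270, 1.0)
after = lambda_compute_cost(100_000, 2, 1.0)
```

Even with rough numbers, the duration term dominates, which is why eliminating the high-latency database call matters more than tuning memory alone.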

NEW QUESTION 19
A company has been using a third-party provider for its content delivery network and recently decided to switch to Amazon CloudFront. The Development team wants to maximize performance for the global user base. The company uses a content management system (CMS) that serves both static and dynamic content. The CMS is served from behind an Application Load Balancer (ALB), which is set as the default origin for the distribution. Static assets are served from an Amazon S3 bucket. The Origin Access Identity (OAI) was created properly and the S3 bucket policy has been updated to allow the GetObject action from the OAI, but static assets are receiving a 404 error.
Which combination of steps should the Solutions Architect take to fix the error? (Select TWO.)

  • A. Add another origin to the CloudFront distribution for the static assets
  • B. Add a path based rule to the ALB to forward requests for the static assets
  • C. Add an RTMP distribution to allow caching of both static and dynamic content
  • D. Add a behavior to the CloudFront distribution for the path pattern and the origin of the static assets
  • E. Add a host header condition to the ALB listener and forward the header from CloudFront to add traffic to the allow list

Answer: AD
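The fix in options A and D can be sketched as a distribution layout: the ALB remains the default origin, an S3 origin is added, and a cache behavior routes the static path pattern to it. The domain names and path pattern below are placeholders, not values from the question.

```python
import fnmatch

# Sketch of a CloudFront distribution with two origins and a cache
# behavior for static assets. All names and paths are illustrative.
distribution = {
    "Origins": [
        {"Id": "alb-origin", "DomainName": "cms-alb.example.com"},
        {"Id": "s3-static", "DomainName": "static-assets.s3.amazonaws.com"},
    ],
    "DefaultCacheBehavior": {"TargetOriginId": "alb-origin"},
    "CacheBehaviors": [
        {"PathPattern": "/static/*", "TargetOriginId": "s3-static"},
    ],
}

def origin_for(path):
    """Return the origin id a request path would be routed to,
    mimicking CloudFront's path-pattern matching."""
    for behavior in distribution["CacheBehaviors"]:
        if fnmatch.fnmatch(path, behavior["PathPattern"]):
            return behavior["TargetOriginId"]
    return distribution["DefaultCacheBehavior"]["TargetOriginId"]
```

Without the extra origin and behavior, every request (including static paths) goes to the ALB, which has no static files, hence the 404s.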

NEW QUESTION 20
A company experienced a security event in which an Amazon S3 bucket with sensitive information was made public. Company policy is to never have public S3 objects, and the Compliance team must be informed immediately when any public objects are identified.
How can the presence of a public S3 object be detected, set to trigger alarm notifications, and automatically remediated in the future? (Choose two.)

  • A. Turn on object-level logging for Amazon S3. Turn on Amazon S3 event notifications to notify by using an Amazon SNS topic when a PutObject API call is made with a public-read permission.
  • B. Configure an Amazon CloudWatch Events rule that invokes an AWS Lambda function to secure the S3 bucket.
  • C. Use the S3 bucket permissions for AWS Trusted Advisor and configure a CloudWatch event to notify by using Amazon SNS.
  • D. Turn on object-level logging for Amazon S3. Configure a CloudWatch event to notify by using an SNS topic when a PutObject API call with public-read permission is detected in the AWS CloudTrail logs.
  • E. Schedule a recursive Lambda function to regularly change all object permissions inside the S3 bucket.

Answer: BD

Explanation:
https://aws.amazon.com/blogs/security/how-to-detect-and-automatically-remediate-unintended-permissions-in-a
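The detection step in option D can be sketched as a filter over CloudTrail records: flag PutObject calls whose request parameters ask for a public-read canned ACL. The exact shape of requestParameters shown here is an assumption for illustration, not the authoritative CloudTrail schema.

```python
# Sketch: flag CloudTrail records for PutObject calls that request a
# public-read ACL. Field shapes are illustrative assumptions.
def is_public_put(record):
    if record.get("eventName") != "PutObject":
        return False
    params = record.get("requestParameters") or {}
    acl = params.get("x-amz-acl", "")
    if isinstance(acl, list):  # some event shapes use a list value
        acl = acl[0] if acl else ""
    return acl == "public-read"

sample = {
    "eventName": "PutObject",
    "requestParameters": {
        "bucketName": "sensitive-bucket",  # hypothetical bucket
        "x-amz-acl": "public-read",
    },
}
```

A matching record would then fan out to SNS for the Compliance team, while the option-B Lambda remediates the bucket.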

NEW QUESTION 21
A company runs a legacy system on a single m4.2xlarge Amazon EC2 instance with Amazon EBS storage. The EC2 instance runs both the web server and a self-managed Oracle database. A snapshot is made of the EBS volume every 12 hours, and an AMI was created from the fully configured EC2 instance.
A recent event that terminated the EC2 instance led to several hours of downtime. The application was successfully launched from the AMI, but the age of the EBS snapshot and the repair of the database resulted in the loss of 8 hours of data. The system was also down for 4 hours while the Systems Operators manually performed these processes.
What architectural changes will minimize downtime and reduce the chance of lost data?

  • A. Create an Amazon CloudWatch alarm to automatically recover the instance.
  • B. Create a script that will check and repair the database upon reboot.
  • C. Subscribe the Operations team to the Amazon SNS message generated by the CloudWatch alarm.
  • D. Run the application on m4.xlarge EC2 instances behind an Elastic Load Balancer/Application Load Balancer.
  • E. Run the EC2 instances in an Auto Scaling group across multiple Availability Zones with a minimum instance count of two.
  • F. Migrate the database to an Amazon RDS Oracle Multi-AZ DB instance.
  • G. Run the application on m4.2xlarge EC2 instances behind an Elastic Load Balancer/Application Load Balancer.
  • H. Run the EC2 instances in an Auto Scaling group across multiple Availability Zones with a minimum instance count of one.
  • I. Migrate the database to an Amazon RDS Oracle Multi-AZ DB instance.
  • J. Increase the web server instance count to two m4.xlarge instances and use Amazon Route 53 round-robin load balancing to spread the load.
  • K. Enable Route 53 health checks on the web servers.
  • L. Migrate the database to an Amazon RDS Oracle Multi-AZ DB instance.

Answer: B

Explanation:
Ensures that there are at least two EC2 instances, each in a different Availability Zone. It also ensures that the database spans multiple AZs. Hence this meets all the criteria.
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html

NEW QUESTION 22
A company wants to ensure that the workloads for each of its business units have complete autonomy and a minimal blast radius in AWS. The Security team must be able to control access to the resources and services in the account to ensure that particular services are not used by the business units.
How can a Solutions Architect achieve the isolation requirements?

  • A. Create individual accounts for each business unit and add the accounts to an OU in AWS Organizations. Modify the OU to ensure that the particular services are blocked.
  • B. Federate each account with an IdP, and create separate roles for the business units and the Security team.
  • C. Create individual accounts for each business unit.
  • D. Federate each account with an IdP and create separate roles and policies for business units and the Security team.
  • E. Create one shared account for the entire company.
  • F. Create separate VPCs for each business unit.
  • G. Create individual IAM policies and resource tags for each business unit.
  • H. Federate each account with an IdP, and create separate roles for the business units and the Security team.
  • I. Create one shared account for the entire company.
  • J. Create individual IAM policies and resource tags for each business unit.
  • K. Federate the account with an IdP, and create separate roles for the business units and the Security team.

Answer: A
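The Organizations piece of option A is typically a service control policy (SCP) attached to the business-unit OU that denies the blocked services. A minimal sketch follows; the specific services denied are hypothetical placeholders chosen for illustration.

```python
import json

# Sketch of an SCP (IAM-policy JSON model) denying particular services
# account-wide when attached to an OU. Denied services are placeholders.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyBlockedServices",
            "Effect": "Deny",
            "Action": ["redshift:*", "sagemaker:*"],  # hypothetical examples
            "Resource": "*",
        }
    ],
}

print(json.dumps(scp, indent=2))
```

Because SCPs apply at the OU level, no principal in a member account, including account administrators, can use the denied services, which is what gives the Security team centralized control.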

NEW QUESTION 23
A bank is re-architecting its mainframe-based credit card approval processing application to a cloud-native application on the AWS cloud.
The new application will receive up to 1,000 requests per second at peak load. There are multiple steps to each transaction, and each step must receive the result of the previous step. The entire request must return an authorization response within less than 2 seconds with zero data loss. Every request must receive a response. The solution must be Payment Card Industry Data Security Standard (PCI DSS)-compliant.
Which option will meet all of the bank’s objectives with the LEAST complexity and LOWEST cost while also meeting compliance requirements?

  • A. Create an Amazon API Gateway to process inbound requests using a single AWS Lambda task that performs multiple steps and returns a JSON object with the approval status.
  • B. Open a support case to increase the limit for the number of concurrent Lambdas to allow room for bursts of activity due to the new application.
  • C. Create an Application Load Balancer with an Amazon ECS cluster on Amazon EC2 Dedicated Instances in a target group to process incoming requests.
  • D. Use Auto Scaling to scale the cluster out/in based on average CPU utilization.
  • E. Deploy a web service that processes all of the approval steps and returns a JSON object with the approval status.
  • F. Deploy the application on Amazon EC2 on Dedicated Instances.
  • G. Use an Elastic Load Balancer in front of a farm of application servers in an Auto Scaling group to handle incoming requests.
  • H. Scale out/in based on a custom Amazon CloudWatch metric for the number of inbound requests per second after measuring the capacity of a single instance.
  • I. Create an Amazon API Gateway to process inbound requests using a series of AWS Lambda processes, each with an Amazon SQS input queue.
  • J. As each step completes, it writes its result to the next step’s queue.
  • K. The final step returns a JSON object with the approval status.
  • L. Open a support case to increase the limit for the number of concurrent Lambdas to allow room for bursts of activity due to the new application.

Answer: B
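The property that makes the answer attractive for the 2-second budget is that all approval steps run sequentially in one service call, each step consuming the previous step's result, so the transaction returns in a single synchronous response with no queue hops. A rough in-process sketch, with purely illustrative step logic:

```python
# Sketch of a single web service running multi-step approval in-process.
# Step rules below are invented for illustration only.
def validate(request):
    return {**request, "valid": request.get("amount", 0) > 0}

def score_risk(result):
    return {**result, "risk": "low" if result["amount"] < 1000 else "high"}

def authorize(result):
    approved = result["valid"] and result["risk"] == "low"
    return {"approved": approved}

def process(request):
    """Run each step on the previous step's output and return the
    final authorization response."""
    result = request
    for step in (validate, score_risk, authorize):
        result = step(result)
    return result
```

By contrast, the SQS-per-step design adds queue latency at every hop and turns a synchronous authorization into an asynchronous pipeline, which fits the 2-second, every-request-gets-a-response requirement poorly.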

NEW QUESTION 24
As a part of building large applications in the AWS Cloud, the Solutions Architect is required to implement the perimeter security protection. Applications running on AWS have the following endpoints:
• Application Load Balancer
• Amazon API Gateway regional endpoint
• Elastic IP address-based EC2 instances
• Amazon S3 hosted websites
• Classic Load Balancer
The Solutions Architect must design a solution to protect all of the listed web front ends and provide the following security capabilities:
• DDoS protection
• SQL injection protection
• IP address whitelist/blacklist
• HTTP flood protection
• Bad bot scraper protection
How should the Solutions Architect design the solution?

  • A. Deploy AWS WAF and AWS Shield Advanced on all web endpoints.
  • B. Add AWS WAF rules to enforce the company’s requirements.
  • C. Deploy Amazon CloudFront in front of all the endpoints.
  • D. The CloudFront distribution provides perimeter protection.
  • E. Add AWS Lambda-based automation to provide additional security.
  • F. Deploy Amazon CloudFront in front of all the endpoints.
  • G. Deploy AWS WAF and AWS Shield Advanced.
  • H. Add AWS WAF rules to enforce the company’s requirements.
  • I. Use AWS Lambda to automate and enhance the security posture.
  • J. Secure the endpoints by using network ACLs and security groups and adding rules to enforce the company’s requirements.
  • K. Use AWS Lambda to automatically update the rules.

Answer: C
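Two of the required capabilities map to WAF rule types: HTTP-flood protection is a rate-based rule, and SQL-injection protection is a SQLi match rule. The sketch below follows the general WAFv2 JSON model but is an illustrative fragment, not a complete web ACL; the names, limit, and field choices are assumptions.

```python
# Sketch of two WAFv2-style rules: a rate-based rule (HTTP flood) and
# a SQL-injection match rule. Values and names are illustrative.
rules = [
    {
        "Name": "http-flood",
        "Priority": 1,
        "Statement": {
            "RateBasedStatement": {"Limit": 2000, "AggregateKeyType": "IP"}
        },
        "Action": {"Block": {}},
    },
    {
        "Name": "sqli",
        "Priority": 2,
        "Statement": {
            "SqliMatchStatement": {
                "FieldToMatch": {"Body": {}},
                "TextTransformations": [{"Priority": 0, "Type": "URL_DECODE"}],
            }
        },
        "Action": {"Block": {}},
    },
]
```

IP allow/deny lists would be additional IP-set rules, while the CloudFront layer in front supplies the single attachment point for endpoints (such as EIP-based EC2 instances) that WAF cannot protect directly.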

NEW QUESTION 25
A company runs a public-facing application that uses a Java-based web service via a RESTful API. It is hosted on Apache Tomcat on a single server in a data center that runs consistently at 30% CPU utilization. Use of the API is expected to increase by 10 times with a new product launch. The business wants to migrate the application to AWS with no disruption and needs it to scale to meet demand.
The company has already decided to use Amazon Route 53 and CNAME records to redirect traffic. How can these requirements be met with the LEAST amount of effort?

  • A. Use AWS Elastic Beanstalk to deploy the Java web service and enable Auto Scaling. Then switch the application to use the new web service.
  • B. Lift and shift the Apache server to the cloud using AWS SMS. Then switch the application to direct web service traffic to the new instance.
  • C. Create a Docker image and migrate the image to Amazon ECS. Then change the application code to direct web service queries to the ECS container.
  • D. Modify the application to call the web service via Amazon API Gateway. Then create a new AWS Lambda Java function to run the Java web service code. After testing, change API Gateway to use the Lambda function.

Answer: A

NEW QUESTION 26
......

P.S. 2passeasy now are offering 100% pass ensure SAP-C01 dumps! All SAP-C01 exam questions have been updated with correct answers: https://www.2passeasy.com/dumps/SAP-C01/ (179 New Questions)