A company wants to provide its data analysts with uninterrupted access to the data in its Amazon Redshift cluster. All data is streamed to an Amazon S3 bucket through Amazon Kinesis Data Firehose. An AWS Glue job that is scheduled to run every 5 minutes issues a COPY command to move the data into Amazon Redshift. The amount of data delivered is uneven throughout the day, and cluster utilization is high during certain periods. The COPY command usually completes within a couple of seconds. However, when a load spike occurs, locks can occur and data can be missed. Currently, the AWS Glue job is configured to run without retries, with a timeout of 5 minutes and a job concurrency of 1.

How should a data analytics specialist configure the AWS Glue job to optimize fault tolerance and improve data availability in the Amazon Redshift cluster?

A. Increase the number of retries. Decrease the timeout value. Increase the job concurrency.
B. Keep the number of retries at 0. Decrease the timeout value. Increase the job concurrency.
C. Keep the number of retries at 0. Decrease the timeout value. Keep the job concurrency at 1.
D. Keep the number of retries at 0. Increase the timeout value. Keep the job concurrency at 1.

Suggested Answer: B
Community Answer: A

This question is from the DAS-C01 AWS Certified Data Analytics – Specialty exam.
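For context, the settings the options refer to map to three AWS Glue job properties: MaxRetries, Timeout, and ExecutionProperty.MaxConcurrentRuns. The sketch below shows how the community-preferred configuration (option A) could be applied to an existing Glue job with boto3. It is a minimal illustration, not part of the question: the job name "redshift-copy-job" and the specific values (3 retries, a 2-minute timeout, 3 concurrent runs) are assumptions chosen for the example.

# Minimal sketch (assumptions noted above): apply option A's settings to an
# existing AWS Glue job using boto3.
import boto3

glue = boto3.client("glue")
JOB_NAME = "redshift-copy-job"  # hypothetical job name

# UpdateJob overwrites the previous job definition, so start from the
# current definition and change only the fields of interest.
current = glue.get_job(JobName=JOB_NAME)["Job"]

# Drop read-only metadata that JobUpdate does not accept. Depending on the
# job, deprecated or derived fields (e.g., MaxCapacity on jobs that use
# WorkerType) may also need removing.
job_update = {
    k: v
    for k, v in current.items()
    if k not in ("Name", "CreatedOn", "LastModifiedOn", "AllocatedCapacity")
}

job_update["MaxRetries"] = 3       # retry transient COPY/lock failures automatically
job_update["Timeout"] = 2          # minutes; fail fast instead of holding locks for 5 minutes
job_update["ExecutionProperty"] = {"MaxConcurrentRuns": 3}  # let the next scheduled run start even if the prior run is still retrying

glue.update_job(JobName=JOB_NAME, JobUpdate=job_update)

A shorter timeout limits how long a stuck COPY can hold locks, retries recover data that would otherwise be missed, and concurrent runs keep the 5-minute schedule from stalling behind a slow run during load spikes.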