A large financial company is running its ETL process. Part of this process is to move data from Amazon S3 into an Amazon Redshift cluster. The company wants to use the most cost-efficient method to load the dataset into Amazon Redshift. Which combination of steps would meet these requirements? (Choose two.)

A. Use the COPY command with the manifest file to load data into Amazon Redshift.
B. Use S3DistCp to load files into Amazon Redshift.
C. Use temporary staging tables during the loading process.
D. Use the UNLOAD command to upload data into Amazon Redshift.
E. Use Amazon Redshift Spectrum to query files from Amazon S3.

Suggested Answer: CE
Community Answer: AC

Reference: https://aws.amazon.com/blogs/big-data/top-8-best-practices-for-high-performance-etl-processing-using-amazon-redshift/

Exam: DAS-C01 AWS Certified Data Analytics – Specialty
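The community-preferred options (A and C) can be illustrated with a short sketch: a manifest file tells COPY exactly which S3 objects to load, and a staging table lets you merge the loaded rows into the target table in one transaction. The bucket names, table names, and IAM role ARN below are hypothetical placeholders, not values from the question; the SQL strings would be executed through a Redshift client in a real pipeline.

```python
import json

# Option A: a COPY manifest listing the exact S3 objects to load.
# Bucket and key names are hypothetical examples.
manifest = {
    "entries": [
        {"url": "s3://example-etl-bucket/data/part-0000.gz", "mandatory": True},
        {"url": "s3://example-etl-bucket/data/part-0001.gz", "mandatory": True},
    ]
}
manifest_json = json.dumps(manifest, indent=2)

# COPY command that references the manifest (IAM role ARN is a placeholder).
copy_sql = """
COPY staging_sales
FROM 's3://example-etl-bucket/manifests/load.manifest'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
GZIP
MANIFEST;
""".strip()

# Option C: load into a staging table, then upsert into the target table
# inside a single transaction and drop the staging table.
merge_sql = """
BEGIN;
DELETE FROM sales USING staging_sales WHERE sales.id = staging_sales.id;
INSERT INTO sales SELECT * FROM staging_sales;
DROP TABLE staging_sales;
COMMIT;
""".strip()

print(copy_sql)
```

The manifest guarantees that COPY loads only the listed files (and fails if a `mandatory` file is missing), while the staging-table merge avoids costly row-by-row updates on the target table.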