A banking company wants to collect large volumes of transactional data using Amazon Kinesis Data Streams for real-time analytics. The company uses PutRecord to send data to Amazon Kinesis and has observed network outages during certain times of the day. The company wants to obtain exactly-once semantics for the entire processing pipeline. What should the company do to obtain these characteristics?

A. Design the application so it can remove duplicates during processing by embedding a unique ID in each record.
B. Rely on the processing semantics of Amazon Kinesis Data Analytics to avoid duplicate processing of events.
C. Design the data producer so events are not ingested into Kinesis Data Streams multiple times.
D. Rely on the exactly-once processing semantics of Apache Flink and Apache Spark Streaming included in Amazon EMR.

Suggested Answer: A
Community Answer: A
Reference: https://docs.aws.amazon.com/streams/latest/dev/kinesis-record-processor-duplicates.html
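The idea behind answer A can be sketched in a few lines. The snippet below is a minimal illustration (not AWS code): the producer embeds a unique ID in every record before calling PutRecord, and the consumer skips any ID it has already seen, so a retried PutRecord after a network outage does not get processed twice. `make_record` and `DedupProcessor` are hypothetical names; in a real pipeline the set of seen IDs would live in a durable store such as DynamoDB, not in process memory.

```python
import json
import uuid


def make_record(payload: dict) -> bytes:
    """Embed a unique ID so consumers can deduplicate producer retries.

    Hypothetical helper: the returned bytes would be passed as the
    Data argument of a Kinesis PutRecord call.
    """
    return json.dumps({"id": str(uuid.uuid4()), **payload}).encode()


class DedupProcessor:
    """Consumer-side deduplication keyed on the embedded record ID.

    Illustrative sketch: a production consumer would persist seen IDs
    durably so deduplication survives worker restarts.
    """

    def __init__(self):
        self._seen = set()
        self.processed = []

    def process(self, raw: bytes) -> bool:
        record = json.loads(raw)
        if record["id"] in self._seen:
            # Duplicate delivery (e.g. a producer retry after a
            # network timeout) -- skip it.
            return False
        self._seen.add(record["id"])
        self.processed.append(record)
        return True
```

Simulating a producer retry by delivering the same record twice shows the second copy is dropped, which is how the application layer approximates exactly-once processing on top of Kinesis's at-least-once delivery.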