You are planning an Azure solution that will aggregate streaming data. The input data will be retrieved from tab-separated values (TSV) files in Azure Blob storage. You need to output the maximum value from a specific column for every two-minute period in near real time. The output must be written to Blob storage as a Parquet file. What should you use?

A. Azure Data Factory and mapping data flows
B. Azure Data Factory and wrangling data flows
C. Azure Stream Analytics window functions
D. Azure Databricks and Apache Spark SQL window functions

Suggested Answer: C

Azure Stream Analytics can read TSV input from Blob storage, aggregate it over a fixed-size (tumbling) time window, and write the results back to Blob storage in Parquet format in near real time, which satisfies every requirement in the scenario. The Data Factory options are batch-oriented rather than streaming, and Databricks would require more setup for the same result.

Reference: https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-define-outputs#parquet-output-batching-window-properties

This question is from the DP-201 Designing an Azure Data Solution exam (Microsoft Certified: Azure Data Engineer Associate certification).
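As an illustration, a Stream Analytics query for this scenario might look like the sketch below. It uses the real `TumblingWindow` and `MAX` functions from the Stream Analytics query language; the input/output alias names (`TsvInput`, `ParquetOutput`) and the column names (`SensorValue`, `EventTime`) are hypothetical placeholders, not from the question:

```
-- Sketch only: aliases and column names are assumptions.
-- TsvInput would be a Blob storage input configured with TSV serialization;
-- ParquetOutput would be a Blob storage output configured with Parquet serialization.
SELECT
    System.Timestamp() AS WindowEnd,   -- end time of each two-minute window
    MAX(SensorValue) AS MaxValue       -- maximum of the target column in the window
INTO
    ParquetOutput
FROM
    TsvInput TIMESTamp BY EventTime    -- use the event's own timestamp for windowing
GROUP BY
    TumblingWindow(minute, 2)          -- non-overlapping two-minute windows
```

`TumblingWindow(minute, 2)` partitions the stream into contiguous, non-overlapping two-minute intervals, so each row emitted is the maximum observed in exactly one window.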