Overview -
XYZ is an online training provider. They also run a yearly gaming competition series for their students, with rounds held every month in different locations.
Current Environment -
The company currently has the following environment in place:
* The racing cars for the competition send their telemetry data to a MongoDB database. The telemetry data has around 100 attributes.
* A custom application is then used to transfer the data from the MongoDB database to a SQL Server 2017 database. The attribute names are changed when they are sent to the SQL Server database.
* Another application, named "XYZ workflow", is then used to perform analytics on the telemetry data to identify improvements for the racing cars.
* The SQL Server 2017 database has a table named "cardata" which has around 1 TB of data. "XYZ workflow" performs the required analytics on the data in this table. Large aggregations are performed on a column of the table.
Proposed Environment -
The company now wants to move the environment to Azure. Below are the key requirements:
* The racing car data will now be moved to Azure Cosmos DB and Azure SQL database. The data must be written to the closest Azure data center and must converge in the least amount of time.
* The query performance for data in the Azure SQL database must be stable without the need for administrative overhead.
* The data for analytics will be moved to an Azure SQL Data Warehouse.
* Transparent data encryption must be enabled for all data stores wherever possible.
* An Azure Data Factory pipeline will be used to move data from the Cosmos DB database to the Azure SQL database. If there is a delay of more than 15 minutes for the data transfer, then configuration changes need to be made to the pipeline workflow.
* The telemetry data must be monitored for any sort of performance issues.
* The Request Units for Cosmos DB must be adjusted to meet demand while also minimizing costs.
* The data in the Azure SQL database must be protected via the following requirements (these map to dynamic data masking; see the sketch after this list):
- Only the last four digits of the values in the column CarID must be shown
- A zero value must be shown for all values in the column CarWeight
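The two masking rules above correspond to the partial() and default() functions of dynamic data masking in Azure SQL Database. Below is a minimal sketch of how they could be applied, using Python with pyodbc; the server, database, and credentials are hypothetical placeholders, and dbo.cardata reuses the table name from the current environment.

```python
# Minimal sketch: applying the scenario's masking rules with dynamic data
# masking in Azure SQL Database. Server, database, and credentials are
# hypothetical placeholders; "dbo.cardata" reuses the table name from the
# current environment.
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:xyz-sqlserver.database.windows.net,1433;"  # placeholder server
    "Database=cardb;Uid=sqladmin;Pwd=<password>;Encrypt=yes;"
)
cursor = conn.cursor()

# partial(prefix, padding, suffix): expose 0 leading characters, pad with
# "XXXX", and expose the last 4 characters -- i.e. only the last four digits.
cursor.execute(
    "ALTER TABLE dbo.cardata ALTER COLUMN CarID "
    "ADD MASKED WITH (FUNCTION = 'partial(0,\"XXXX\",4)')"
)

# default() masks numeric columns with a zero value, which satisfies the
# CarWeight requirement.
cursor.execute(
    "ALTER TABLE dbo.cardata ALTER COLUMN CarWeight "
    "ADD MASKED WITH (FUNCTION = 'default()')"
)
conn.commit()
```

Masks of this kind apply only to users without the UNMASK permission; privileged users continue to see the raw values.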
Which of the following consistency levels would you use for the Cosmos DB database?

A. Eventual

B. Session

C. Strong

D. Consistent prefix

Suggested Answer: A

Since there is a requirement for data to be written to the closest Azure data center, the Cosmos DB account needs a multi-master (multi-region write) setup, in which data can be written from multiple regions. For such accounts, the consistency level cannot be set to Strong.
The Microsoft documentation mentions the following:
Strong consistency and multi-master
Cosmos accounts configured for multi-master cannot be configured for strong consistency as it is not possible for a distributed system to provide an RPO of zero and an RTO of zero. Additionally, there are no write latency benefits for using strong consistency with multi-master as any write into any region must be replicated and committed to all configured regions within the account. This results in the same write latency as a single master account.
Hence, if the data must converge in the least amount of time, we need to use Eventual consistency: it is the most relaxed level and therefore offers the lowest latency and the fastest convergence.
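To illustrate, the client below requests Eventual consistency explicitly. This is a minimal sketch using the azure-cosmos Python SDK; the endpoint, key, database, and container names are placeholders, and the multiple_write_locations flag only has an effect on an account that is actually provisioned with multi-region writes.

```python
# Minimal sketch, assuming a Cosmos DB account already provisioned with
# multi-region writes. Endpoint, key, and database/container names are
# placeholders.
from azure.cosmos import CosmosClient

client = CosmosClient(
    url="https://xyz-telemetry.documents.azure.com:443/",  # placeholder endpoint
    credential="<primary-key>",
    consistency_level="Eventual",   # weakest level: fastest convergence
    multiple_write_locations=True,  # route writes to the nearest region
    preferred_locations=["West Europe", "East US"],  # placeholder region order
)

container = client.get_database_client("telemetry").get_container_client("cardata")

# Reads are served under Eventual consistency: lowest latency, no global
# ordering guarantee across regions.
for item in container.query_items(
    query="SELECT TOP 10 * FROM c",
    enable_cross_partition_query=True,
):
    print(item["id"])
```

Note that a client can only request a consistency level equal to or weaker than the account's default, so Eventual (the weakest level) is always a valid client-side choice.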
The Microsoft documentation mentions the following about the consistency levels:
With Azure Cosmos DB, developers can choose from five well-defined consistency models on the consistency spectrum. From strongest to more relaxed, the models include strong, bounded staleness, session, consistent prefix, and eventual consistency. The models are well-defined and intuitive and can be used for specific real-world scenarios. Each model provides availability and performance tradeoffs and is backed by the SLAs. The following image shows the different consistency levels as a spectrum.
[Image: the five Azure Cosmos DB consistency levels shown as a spectrum, from Strong to Eventual]
Because of this reasoning about the consistency level, all other options are incorrect.
References:
https://docs.microsoft.com/en-us/azure/cosmos-db/consistency-levels-tradeoffs
https://docs.microsoft.com/en-us/azure/cosmos-db/consistency-levels

This question appears in the DP-200 Microsoft Azure Data Engineer exam, which counts toward the Microsoft Certified: Azure Data Engineer Associate certification.



Disclaimers:
This website is not related to, affiliated with, endorsed, or authorized by Microsoft.
This website does not contain actual questions and answers from Microsoft's certification exams.
Trademarks, certification, and product names are used for reference only and belong to Microsoft.
