
[Free] 2019(Nov) EnsurePass Microsoft DP-200 Dumps with VCE and PDF 21-30

by admin, November 7, 2019

Get Full Version of the Exam
http://www.EnsurePass.com/DP-200.html

Question No.21

DRAG DROP

You plan to create a new single database instance of Microsoft Azure SQL Database.

The database must only allow communication from the data engineer's workstation. You must be able to connect directly to the instance by using Microsoft SQL Server Management Studio.

You need to create and configure the Database. Which three Azure PowerShell cmdlets should you use to develop the solution? To answer, move the appropriate cmdlets from the list of cmdlets to the answer area and arrange them in the correct order.

image

Correct Answer:

image
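The sequence generally cited for this item is New-AzureRmSqlServer, New-AzureRmSqlServerFirewallRule, New-AzureRmSqlDatabase. A sketch using the current Az module equivalents follows; the resource group, server, database, and IP values are hypothetical:

```powershell
# 1. Create the logical SQL server that will host the database
New-AzSqlServer -ResourceGroupName "rg-data" -ServerName "sql-de-demo" `
    -Location "WestEurope" -SqlAdministratorCredentials (Get-Credential)

# 2. Allow only the data engineer's workstation IP through the server firewall
New-AzSqlServerFirewallRule -ResourceGroupName "rg-data" -ServerName "sql-de-demo" `
    -FirewallRuleName "EngineerWorkstation" `
    -StartIpAddress "203.0.113.10" -EndIpAddress "203.0.113.10"

# 3. Create the single database on that server
New-AzSqlDatabase -ResourceGroupName "rg-data" -ServerName "sql-de-demo" `
    -DatabaseName "db-demo"
```

After these three steps, SSMS can connect to sql-de-demo.database.windows.net from the allowed workstation only.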

Question No.22

HOTSPOT

A company is deploying a service-based data environment. You are developing a solution to process this data.

The solution must meet the following requirements:

- Use an Azure HDInsight cluster for data ingestion from a relational database in a different cloud service
- Use an Azure Data Lake Storage account to store processed data
- Allow users to download processed data

You need to recommend technologies for the solution.

Which technologies should you use? To answer, select the appropriate options in the answer area.

image

Correct Answer:

image

Question No.23

DRAG DROP

You manage the Microsoft Azure Databricks environment for a company. You must be able to access a private Azure Blob Storage account. Data must be available to all Azure Databricks workspaces. You need to provide the data access.

Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.

image

Correct Answer:

image
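A commonly referenced sequence here is: create a Databricks secret scope, store the storage account key as a secret, then mount the Blob container through DBFS so every cluster in the workspace can read it. A sketch of the mount step (the account, container, scope, and key names are hypothetical, and this code runs only inside a Databricks notebook):

```python
# Mount a private Blob Storage container at a workspace-wide DBFS path.
# The account key is read from a secret scope rather than hard-coded.
dbutils.fs.mount(
    source="wasbs://data@mystorageacct.blob.core.windows.net",
    mount_point="/mnt/data",
    extra_configs={
        "fs.azure.account.key.mystorageacct.blob.core.windows.net":
            dbutils.secrets.get(scope="demo-scope", key="storage-key")
    }
)
# Any notebook in the workspace can now read, e.g., /mnt/data/file.csv
```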

Question No.24

HOTSPOT

You are developing a solution using a Lambda architecture on Microsoft Azure. The data at rest layer must meet the following requirements:

Data storage:

- Serve as a repository for high volumes of large files in various formats.
- Implement optimized storage for big data analytics workloads.
- Ensure that data can be organized using a hierarchical structure.

Batch processing:

- Use a managed solution for in-memory computation processing.
- Natively support Scala, Python, and R programming languages.
- Provide the ability to resize and terminate the cluster automatically.

Analytical data store:

- Support parallel processing.
- Use columnar storage.
- Support SQL-based languages.

You need to identify the correct technologies to build the Lambda architecture.

Which technologies should you use? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

image

Correct Answer:

image

Question No.25

A company manages several on-premises Microsoft SQL Server databases.

You need to migrate the databases to Microsoft Azure by using a backup and restore process. Which data technology should you use?

  A. Azure SQL Database single database

  B. Azure SQL Data Warehouse

  C. Azure Cosmos DB

  D. Azure SQL Database Managed Instance

Correct Answer: D

Explanation:

Managed instance is a deployment option of Azure SQL Database that provides near-100% compatibility with the latest on-premises SQL Server (Enterprise Edition) Database Engine, a native virtual network (VNet) implementation that addresses common security concerns, and a business model favorable to on-premises SQL Server customers. The managed instance deployment model allows existing SQL Server customers to lift and shift their on-premises applications to the cloud with minimal application and database changes.

References:

https://docs.microsoft.com/en-us/azure/sql-database/sql-database-managed-instance

Question No.26

A company manages several on-premises Microsoft SQL Server databases.

You need to migrate the databases to Microsoft Azure by using the backup process of Microsoft SQL Server.

Which data technology should you use?

  A. Azure SQL Database Managed Instance

  B. Azure SQL Data Warehouse

  C. Azure Cosmos DB

  D. Azure SQL Database single database

Correct Answer: D

Question No.27

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution. Determine whether the solution meets the stated goals.

You develop a data ingestion process that will import data to a Microsoft Azure SQL Data Warehouse. The data to be ingested resides in parquet files stored in an Azure Data Lake Gen 2 storage account.

You need to load the data from the Azure Data Lake Gen 2 storage account into the Azure SQL Data Warehouse.

Solution:

  1. Use Azure Data Factory to convert the parquet files to CSV files

  2. Create an external data source pointing to the Azure storage account

  3. Create an external file format and external table using the external data source

  4. Load the data using the INSERT…SELECT statement

Does the solution meet the goal?

  A. Yes

  B. No

Correct Answer: B

Explanation:

There is no need to convert the parquet files to CSV files.

You load the data using the CREATE TABLE AS SELECT statement.

References:

https://docs.microsoft.com/en-us/azure/sql-data-warehouse/sql-data-warehouse-load-from-azure-data-lake-store
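The external-source path referenced across this series typically looks like the following T-SQL sketch; the credential, object names, and columns are hypothetical, and PolyBase reads Parquet directly, so no CSV conversion is needed:

```sql
-- One-time setup: a database master key must already exist before the credential
CREATE DATABASE SCOPED CREDENTIAL AdlsCredential
WITH IDENTITY = 'user', SECRET = '<storage-account-key>';

-- External data source pointing at the Data Lake Storage Gen2 account
CREATE EXTERNAL DATA SOURCE AdlsGen2
WITH (TYPE = HADOOP,
      LOCATION = 'abfss://data@myaccount.dfs.core.windows.net',
      CREDENTIAL = AdlsCredential);

-- External file format for the Parquet files
CREATE EXTERNAL FILE FORMAT ParquetFormat
WITH (FORMAT_TYPE = PARQUET);

-- External table over the files (columns are illustrative)
CREATE EXTERNAL TABLE ext.Sales (SalesId INT, Amount DECIMAL(18, 2))
WITH (LOCATION = '/sales/',
      DATA_SOURCE = AdlsGen2,
      FILE_FORMAT = ParquetFormat);

-- Load into the warehouse with CREATE TABLE AS SELECT
CREATE TABLE dbo.Sales
WITH (DISTRIBUTION = HASH(SalesId))
AS SELECT SalesId, Amount FROM ext.Sales;
```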

You develop data engineering solutions for a company. You must migrate data from Microsoft Azure Blob storage to an Azure SQL Data Warehouse for further transformation. You need to implement the solution.

Which four actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.

image

Correct Answer:

image

You are creating a managed data warehouse solution on Microsoft Azure.

You must use PolyBase to retrieve data from Azure Blob storage that resides in Parquet format and load the data into a large table called FactSalesOrderDetails.

You need to configure Azure SQL Data Warehouse to receive the data.

Which four actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.

image

Correct Answer:

image

A company uses an Azure Cosmos DB database to store data. The database uses the key-value and wide-column NoSQL database types. Developers need to access data in the database by using an API.

You need to determine which API to use for the database model and type.

Which two APIs should you use? Each correct answer presents a complete solution.

NOTE: Each correct selection is worth one point.

  A. Table API

  B. MongoDB API

  C. Gremlin API

  D. SQL API

  E. Cassandra API

Correct Answer: BE

Explanation:

B: Azure Cosmos DB is the globally distributed, multimodel database service from Microsoft for mission-critical applications. It is a multimodel database and supports document, key-value, graph, and columnar data models.

E: Wide-column stores store data together as columns instead of rows and are optimized for queries over large datasets. The most popular are Cassandra and HBase.

References:

https://docs.microsoft.com/en-us/azure/cosmos-db/graph-introduction

https://www.mongodb.com/scale/types-of-nosql-databases

Question No.28

A company has a real-time data analysis solution that is hosted on Microsoft Azure. The solution uses Azure Event Hubs to ingest data and an Azure Stream Analytics cloud job to analyze the data. The cloud job is configured to use 120 Streaming Units (SU).

You need to optimize performance for the Azure Stream Analytics job.

Which two actions should you perform? Each correct answer presents part of the solution.

NOTE: Each correct selection is worth one point.

  A. Implement event ordering

  B. Scale the SU count for the job up

  C. Implement Azure Stream Analytics user-defined functions (UDF)

  D. Scale the SU count for the job down

  E. Implement query parallelization by partitioning the data output

  F. Implement query parallelization by partitioning the data input

Correct Answer: BF

Explanation:

Scale out the query by allowing the system to process each input partition separately.

F: A Stream Analytics job definition includes inputs, a query, and output. Inputs are where the job reads the data stream from.

References:

https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-parallelization
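Input partitioning can be expressed directly in the job query. A sketch in the Stream Analytics query language (the input/output aliases, columns, and window size are hypothetical):

```sql
-- Process each Event Hubs partition independently so the allotted SUs
-- can be spread across parallel instances of the query.
SELECT DeviceId, COUNT(*) AS ReadingCount
INTO [sql-output]
FROM [eventhub-input] TIMESTAMP BY EventTime
PARTITION BY PartitionId
GROUP BY DeviceId, PartitionId, TumblingWindow(second, 30)
```

Keeping PartitionId in the GROUP BY keeps the query embarrassingly parallel, which is what lets a higher SU count actually be used.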

Question No.29

You configure monitoring for a Microsoft Azure SQL Data Warehouse implementation. The implementation uses PolyBase to load data from comma-separated value (CSV) files stored in Azure Data Lake Gen 2 using an external table.

Files with an invalid schema cause errors to occur. You need to monitor for an invalid schema error. For which error should you monitor?

  A. EXTERNAL TABLE access failed due to internal error: 'Java exception raised on call to HdfsBridge_Connect: Error [com.microsoft.polybase.client.KerberosSecureLogin] occurred while accessing external files.'

  B. EXTERNAL TABLE access failed due to internal error: 'Java exception raised on call to HdfsBridge_Connect: Error [No FileSystem for scheme: wasbs] occurred while accessing external file.'

  C. Cannot execute the query "Remote Query" against OLE DB provider "SQLNCLI11" for linked server "(null)". Query aborted: the maximum reject threshold (0 rows) was reached while reading from an external source: 1 rows rejected out of total 1 rows processed.

  D. EXTERNAL TABLE access failed due to internal error: 'Java exception raised on call to HdfsBridge_Connect: Error [Unable to instantiate LoginClass] occurred while accessing external files.'

Correct Answer: C

Explanation:

Customer Scenario:

SQL Server 2016 or SQL DW connected to Azure blob storage. The CREATE EXTERNAL TABLE DDL points to a directory (and not a specific file) and the directory contains files with different schemas.

SSMS Error:

A SELECT query on the external table gives the following error:

Msg 7320, Level 16, State 110, Line 14

Cannot execute the query "Remote Query" against OLE DB provider "SQLNCLI11" for linked server "(null)". Query aborted: the maximum reject threshold (0 rows) was reached while reading from an external source: 1 rows rejected out of total 1 rows processed.

Possible Reason:

This error happens because each file has a different schema. The PolyBase external table DDL, when pointed to a directory, recursively reads all the files in that directory. When a column or data type mismatch occurs, this error can be seen in SSMS.

Possible Solution:

If the data for each table consists of one file, use the file name in the LOCATION section, prefixed with the directory of the external files. If there are multiple files per table, put each set of files into a different directory in Azure Blob Storage and point LOCATION to the directory instead of a particular file. The latter is the best practice recommended by SQLCAT, even if you have one file per table.

Question No.30

You are a data architect. The data engineering team needs to configure synchronization of data between an on-premises Microsoft SQL Server database and Azure SQL Database.

Ad-hoc and reporting queries are overutilizing the on-premises production instance. The synchronization process must:

- Perform an initial data synchronization to Azure SQL Database with minimal downtime
- Perform bi-directional data synchronization after the initial synchronization

You need to implement this synchronization solution. Which synchronization method should you use?

  A. transactional replication

  B. Data Migration Assistant (DMA)

  C. backup and restore

  D. SQL Server Agent job

  E. Azure SQL Data Sync

Correct Answer: E

Explanation:

SQL Data Sync is a service built on Azure SQL Database that lets you synchronize the data you select bi-directionally across multiple SQL databases and SQL Server instances.

With Data Sync, you can keep data synchronized between your on-premises databases and Azure SQL databases to enable hybrid applications.

Compare Data Sync with Transactional Replication

image

References:

https://docs.microsoft.com/en-us/azure/sql-database/sql-database-sync-data
