Microsoft 70-764: Administering a SQL Database Infrastructure

Question No: 1 HOTSPOT

You manage a Microsoft SQL Server environment. A server fails and writes the following event to the application event log:

MSG_AUDIT_FORCED_SHUTDOWN

You configure the SQL Server startup parameters as shown in the following graphic:


Use the drop-down menus to select the answer choice that answers each question.

NOTE: Each correct selection is worth one point.


Answer:


Explanation:


Box 1: single-user

The startup option -m starts an instance of SQL Server in single-user mode.

Box 2: sysadmin

Starting SQL Server in single-user mode enables any member of the computer's local Administrators group to connect to the instance of SQL Server as a member of the sysadmin fixed server role.
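As a side note, once the instance has been restarted with -m and you have connected over the admin or sqlcmd connection, you can verify the mode from Transact-SQL. A minimal sketch, assuming the documented SERVERPROPERTY('IsSingleUser') property:

-- Returns 1 while the instance is running in single-user mode, 0 otherwise
SELECT SERVERPROPERTY('IsSingleUser') AS is_single_user;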

References: https://docs.microsoft.com/en-us/sql/database-engine/configure-windows/database-engine-service-startup-options

Question No: 2

You have a database named DB1 that stores more than 700 gigabytes (GB) of data and serves millions of requests per hour.

Queries on DB1 are taking longer than normal to complete. You run the following Transact-SQL statement:

SELECT * FROM sys.database_query_store_options

You determine that the Query Store is in Read-Only mode.

You need to maximize the time that the Query Store is in Read-Write mode.

Which Transact-SQL statement should you run?

  A. ALTER DATABASE DB1 SET QUERY_STORE (QUERY_CAPTURE_MODE = ALL)

  B. ALTER DATABASE DB1 SET QUERY_STORE (MAX_STORAGE_SIZE_MB = 50)

  C. ALTER DATABASE DB1 SET QUERY_STORE (CLEANUP_POLICY = (STALE_QUERY_THRESHOLD_DAYS = 14));

  D. ALTER DATABASE DB1 SET QUERY_STORE (QUERY_CAPTURE_MODE = NONE)

Answer: C

Explanation:

Stale Query Threshold (Days): Time-based cleanup policy that controls the retention period of persisted runtime statistics and inactive queries.

By default, Query Store is configured to keep the data for 30 days which may be unnecessarily long for your scenario.

Avoid keeping historical data that you do not plan to use. This will reduce the chance of the Query Store switching to read-only status. The size of the Query Store data, as well as the time to detect and mitigate the issue, will be more predictable. Use Management Studio or the following script to configure a time-based cleanup policy:

ALTER DATABASE [QueryStoreDB]

SET QUERY_STORE (CLEANUP_POLICY = (STALE_QUERY_THRESHOLD_DAYS = 14));
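As a complementary check (a minimal sketch against the DB1 database from the question), you can inspect the current Query Store state and, once space has been reclaimed, switch it back to read-write explicitly:

SELECT actual_state_desc, desired_state_desc,
       current_storage_size_mb, max_storage_size_mb
FROM sys.database_query_store_options;

-- After old data is purged, return the Query Store to read-write mode
ALTER DATABASE DB1 SET QUERY_STORE (OPERATION_MODE = READ_WRITE);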

References: https://docs.microsoft.com/en-us/sql/relational-databases/performance/best-practice-with-the-query-store

Question No: 3

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

A company has a server that runs Microsoft SQL Server 2016 Web edition. The server has a default instance that hosts a database named DB1.

You need to ensure that you can perform auditing at the database level for DB1.

Solution: You migrate DB1 to a named instance on a server that runs Microsoft SQL Server 2016 Standard edition.

Does the solution meet the goal?

  A. Yes

  B. No

Answer: B

Explanation:

All editions of SQL Server support server level audits. All editions support database level audits beginning with SQL Server 2016 SP1. Prior to that, database level auditing was limited to Enterprise, Developer, and Evaluation editions.
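For context, database-level auditing on a supported edition is configured with a server audit plus a database audit specification. The sketch below is illustrative only; the audit name, file path, audited actions, and audited principal are assumptions rather than part of the question:

-- Create and enable a server audit that writes to a file target (path is hypothetical)
USE master;
CREATE SERVER AUDIT Audit_DB1
TO FILE (FILEPATH = N'C:\SQLAudit\');
ALTER SERVER AUDIT Audit_DB1 WITH (STATE = ON);

-- Create a database audit specification in DB1 that records DML against the database
USE DB1;
CREATE DATABASE AUDIT SPECIFICATION AuditSpec_DB1
FOR SERVER AUDIT Audit_DB1
ADD (SELECT, INSERT, UPDATE, DELETE ON DATABASE::DB1 BY public)
WITH (STATE = ON);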

References: https://docs.microsoft.com/en-us/sql/relational-databases/security/auditing/sql-server-audit-database-engine

Question No: 4

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

A company has an on-premises Microsoft SQL Server environment and Microsoft Azure SQL Database instances. The environment hosts several customer databases.

One customer reports that their database is not responding as quickly as the service level agreements dictate. You observe that the database is fragmented.

You need to optimize query performance.

Solution: You reorganize all indexes.

Does the solution meet the goal?

  A. Yes

  B. No

Answer: A

Explanation:

You can remedy index fragmentation by either reorganizing an index or by rebuilding an index.
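As a rough illustration of the reorganize approach, fragmentation can first be measured with sys.dm_db_index_physical_stats and then addressed with ALTER INDEX ... REORGANIZE. The table name below is hypothetical; reorganizing is always an online operation:

-- Report fragmentation for every index in the current database
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id AND i.index_id = ips.index_id;

-- Reorganize all indexes on a hypothetical table
ALTER INDEX ALL ON dbo.Orders REORGANIZE;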

References: https://msdn.microsoft.com/en-us/library/ms189858(v=sql.105).aspx

Question No: 5

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

A company has an on-premises Microsoft SQL Server environment and Microsoft Azure SQL Database instances. The environment hosts several customer databases.

One customer reports that their database is not responding as quickly as the service level agreements dictate. You observe that the database is fragmented.

You need to optimize query performance.

Solution: You rebuild all indexes.

Does the solution meet the goal?

  A. Yes

  B. No

Answer: A

Explanation:

You can remedy index fragmentation by either reorganizing an index or by rebuilding an index.
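For comparison with the previous question, rebuilding recreates the index rather than compacting it in place. A minimal sketch; the table name is hypothetical, and ONLINE = ON requires an edition that supports online index operations:

ALTER INDEX ALL ON dbo.Orders REBUILD WITH (ONLINE = ON, SORT_IN_TEMPDB = ON);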

References: https://msdn.microsoft.com/en-us/library/ms189858(v=sql.105).aspx

Question No: 6 HOTSPOT

Note: This question is part of a series of questions that use the same scenario. For your convenience, the scenario is repeated in each question. Each question presents a different goal and answer choices, but the text of the scenario is exactly the same in each question in this series.

You are a database administrator for a company that has an on-premises Microsoft SQL Server environment and Microsoft Azure SQL Database instances. The environment hosts several customer databases, and each customer uses a dedicated instance. The environments that you manage are shown in the following table.


You need to configure auditing for WDWDB.

In the table below, identify the event type that you must audit for each activity.


Answer:


Question No: 7 HOTSPOT

You deploy a Microsoft SQL Server instance to support a global sales application. The instance includes the following tables: TableA and TableB.

TableA is a partitioned table that uses an incrementing integer number for partitioning. The table has millions of rows in each partition. Most changes to the data in TableA affect recently added data. The UPDATE STATISTICS for TableA takes longer to complete than the allotted maintenance window.

Thousands of operations are performed against TableB each minute. You observe a large number of Auto Update Statistics events for TableB.

You need to address the performance issues with each table.

In the table below, identify the action that will resolve the issues for each table.

NOTE: Make only one selection in each column.


Answer:


Explanation:


Table A: AUTO_UPDATE_STATISTICS off

Most of the data in TableA does not change, so there is little benefit in letting automatic statistics updates run against the entire table.

Table B: SET AUTO_UPDATE_STATISTICS_ASYNC ON

You can set the database to update statistics asynchronously:

ALTER DATABASE YourDBName

SET AUTO_UPDATE_STATISTICS_ASYNC ON

If you enable this option, the Query Optimizer runs the query first and updates the outdated statistics afterwards. When this option is OFF, the Query Optimizer updates the outdated statistics before compiling the query. This option can be useful in OLTP environments.
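If you want to see why TableB triggers so many Auto Update Statistics events, the modification counters exposed by sys.dm_db_stats_properties are a useful check before and after enabling the asynchronous option. A minimal sketch, run in the database that holds TableB and assuming the dbo schema:

SELECT s.name AS stats_name,
       sp.last_updated,
       sp.rows,
       sp.modification_counter
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE s.object_id = OBJECT_ID('dbo.TableB');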

References: https://www.mssqltips.com/sqlservertip/2766/sql-server-auto-update-and-auto-create-statistics-options/

Question No: 8

Note: This question is part of a series of questions that use the same scenario. For your convenience, the scenario is repeated in each question. Each question presents a different goal and answer choices, but the text of the scenario is exactly the same in each question in this series.

You are a database administrator for a company that has an on-premises Microsoft SQL Server environment and Microsoft Azure SQL Database instances. The environment hosts several customer databases, and each customer uses a dedicated instance. The environments that you manage are shown in the following table.


You need to monitor WingDB and gather information for troubleshooting issues.

What should you use?

  A. sp_updatestats

  B. sp_lock

  C. sys.dm_os_waiting_tasks

  D. sys.dm_tran_active_snapshot_database_transactions

Answer: B

Explanation:

The sp_lock system stored procedure is packaged with SQL Server and gives you insight into the locks that are being taken on your system. This procedure returns much of its information from the syslockinfo table in the master database, a system table that contains information on all granted, converting, and waiting lock requests.

Note: sp_lock will be removed in a future version of Microsoft SQL Server. Avoid using this feature in new development work, and plan to modify applications that currently use this feature. To obtain information about locks in the SQL Server Database Engine, use the sys.dm_tran_locks dynamic management view.

sys.dm_tran_locks returns information about currently active lock manager resources in SQL Server 2008 and later. Each row represents a currently active request to the lock manager for a lock that has been granted or is waiting to be granted.
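For comparison, both of the following return information about current locks; the dynamic management view is the recommended replacement, and the column list shown is only a small subset of what it exposes:

-- Legacy procedure (still available, but deprecated)
EXEC sp_lock;

-- Recommended replacement
SELECT request_session_id, resource_type, resource_database_id,
       request_mode, request_status
FROM sys.dm_tran_locks;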

References: https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sp-lock-transact-sql

Question No: 9

Note: This question is part of a series of questions that use the same scenario. For your convenience, the scenario is repeated in each question. Each question presents a different goal and answer choices, but the text of the scenario is exactly the same in each question in this series.

You have five servers that run Microsoft Windows Server 2012 R2. Each server hosts a Microsoft SQL Server instance. The topology for the environment is shown in the following diagram.


You have an Always On Availability group named AG1. The details for AG1 are shown in the following table.


Instance1 experiences heavy read-write traffic. The instance hosts a database named OperationsMain that is four terabytes (TB) in size. The database has multiple data files and filegroups. One of the filegroups is read_only and is half of the total database size.

Instance4 and Instance5 are not part of AG1. Instance4 is engaged in heavy read-write I/O.

Instance5 hosts a database named StagedExternal. A nightly BULK INSERT process loads data into an empty table that has a rowstore clustered index and two nonclustered rowstore indexes.

You must minimize the growth of the StagedExternal database log file during the BULK INSERT operations and perform point-in-time recovery after the BULK INSERT transaction. Changes made must not interrupt the log backup chain.

You plan to add a new instance named Instance6 to a datacenter that is geographically distant from Site1 and Site2. You must minimize latency between the nodes in AG1.

All databases use the full recovery model. All backups are written to the network location \\SQLBackup\. A separate process copies backups to an offsite location. You should minimize both the time required to restore the databases and the space required to store backups. The recovery point objective (RPO) for each instance is shown in the following table.


Full backups of OperationsMain take longer than six hours to complete. All SQL Server backups use the keyword COMPRESSION.

You plan to deploy the following solutions to the environment. The solutions will access a database named DB1 that is part of AG1.

The wait statistics monitoring requirements for the instances are described in the following table.


You need to reduce the amount of time it takes to back up OperationsMain.

What should you do?

  A. Modify the backup script to use the keyword SKIP in the FILE_SNAPSHOT statement.

  B. Modify the backup script to use the keyword SKIP in the WITH statement.

  C. Modify the backup script to use the keyword NO_COMPRESSION in the WITH statement.

  D. Modify the full database backup script to stripe the backup across multiple backup files.

Answer: D

Explanation:

One of the filegroups is read_only, so it only needs to be backed up once. Partial backups are useful whenever you want to exclude read-only filegroups. A partial backup resembles a full database backup, but a partial backup does not contain all the filegroups. Instead, for a read-write database, a partial backup contains the data in the primary filegroup, every read-write filegroup, and, optionally, one or more read-only files. A partial backup of a read-only database contains only the primary filegroup.

From scenario: Instance1 experiences heavy read-write traffic. The instance hosts a database named OperationsMain that is four terabytes (TB) in size. The database has multiple data files and filegroups. One of the filegroups is read_only and is half of the total database size.
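A minimal sketch of the chosen approach, striping the full backup across several files on the network share from the scenario (the individual file names and the number of stripes are assumptions; COMPRESSION is kept to match the scenario):

BACKUP DATABASE OperationsMain
TO DISK = N'\\SQLBackup\OperationsMain_1.bak',
   DISK = N'\\SQLBackup\OperationsMain_2.bak',
   DISK = N'\\SQLBackup\OperationsMain_3.bak',
   DISK = N'\\SQLBackup\OperationsMain_4.bak'
WITH COMPRESSION, STATS = 10;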

References: https://docs.microsoft.com/en-us/sql/relational-databases/backup-restore/partial-backups-sql-server

Question No: 10 HOTSPOT

You are planning to deploy log shipping for Microsoft SQL Server and store all backups on a dedicated fileshare.

You need to configure the servers to perform each log shipping step.

Which server instance should you configure to perform each action? To answer, select the appropriate server instances in the dialog box in the answer area.


Answer:


Explanation:


Note: Before you configure log shipping, you must create a share to make the transaction log backups available to the secondary server.

SQL Server Log shipping allows you to automatically send transaction log backups from a primary database on a primary server instance to one or more secondary databases on separate secondary server instances. The transaction log backups are applied to each of the secondary databases individually. An optional third server instance, known as the monitor server, records the history and status of backup and restore operations and, optionally, raises alerts if these operations fail to occur as scheduled.

Box 1: Primary server instance.

The primary server instance runs the backup job to back up the transaction log on the primary database.

backup job: A SQL Server Agent job that performs the backup operation, logs history to the local server and the monitor server, and deletes old backup files and history information.

When log shipping is enabled, the job category "Log Shipping Backup" is created on the primary server instance.

Box 2: Secondary server instance

Each of the three secondary server instances runs its own copy job to copy the primary log backup file to its own local destination folder.

copy job: A SQL Server Agent job that copies the backup files from the primary server to a configurable destination on the secondary server and logs history on the secondary server and the monitor server. When log shipping is enabled on a database, the job category "Log Shipping Copy" is created on each secondary server in a log shipping configuration.

Box 3: Secondary server instance.

Each secondary server instance runs its own restore job to restore the log backup from the local destination folder onto the local secondary database.

restore job: A SQL Server Agent job that restores the copied backup files to the secondary databases. It logs history on the local server and the monitor server, and deletes old files and old history information. When log shipping is enabled on a database, the job category "Log Shipping Restore" is created on the secondary server instance.
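To make the role of each job concrete, the underlying operations reduce to a log backup on the primary and a NORECOVERY (or STANDBY) restore on each secondary. The database name, share, and local folder below are hypothetical:

-- Backup job on the primary server instance
BACKUP LOG SalesDB TO DISK = N'\\fileshare\LogShipping\SalesDB.trn';

-- Restore job on a secondary server instance, after the copy job has pulled the file locally
RESTORE LOG SalesDB FROM DISK = N'D:\LogShippingCopy\SalesDB.trn' WITH NORECOVERY;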

References: https://docs.microsoft.com/en-us/sql/database-engine/log-shipping/about-log-shipping-sql-server
