
Upgrade to Oracle Database 12c

Question No: 21

Which two statements are true about the Oracle Direct Network File System (DNFS)?

  A. It utilizes the OS file system cache.

  B. A traditional NFS mount is not required when using Direct NFS.

  C. Oracle Disk Manager can manage NFS on its own, without using the operating system kernel NFS driver.

  D. Direct NFS is available only on UNIX platforms.

  E. Direct NFS can load-balance I/O traffic across multiple network adapters.

Answer: C,E

Explanation: E: Performance is improved by load balancing across multiple network interfaces (if available).

Note:

* To enable Direct NFS Client, you must replace the standard Oracle Disk Manager (ODM) library with one that supports Direct NFS Client.

Incorrect:

Not A: Direct NFS Client is capable of performing concurrent direct I/O, which bypasses any operating system level caches and eliminates any operating system write-ordering locks.

Not B:

  • To use Direct NFS Client, the NFS file systems must first be mounted and available over regular NFS mounts.

  • Oracle Direct NFS (dNFS) is an optimized NFS (Network File System) client that provides faster and more scalable access to NFS storage located on NAS storage devices (accessible over TCP/IP).

Not D: Direct NFS is provided as part of the database kernel, and is thus available on all supported database platforms – even those that don't support NFS natively, like Windows.

Note:

  • Direct NFS is built directly into the database kernel – just like ASM, which is mainly used with DAS or SAN storage.

  • Oracle Direct NFS (dNFS) is an internal I/O layer that provides faster access to large NFS files than traditional NFS clients.
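For context, once the Direct NFS ODM library is enabled and the instance restarted, the dynamic performance views below can confirm that dNFS is actually serving the mounts. This is a minimal sketch, assuming the standard V$DNFS_SERVERS and V$DNFS_CHANNELS views (available from 11g onward):

-- One row per NFS server and exported directory served through dNFS
SELECT svrname, dirname FROM v$dnfs_servers;
-- One row per open network path; multiple rows per server indicate
-- that dNFS is load-balancing I/O across network interfaces
SELECT svrname, path FROM v$dnfs_channels;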

Question No: 22

Which two statements are true about variable extent size support for large ASM files?

  A. The metadata used to track extents in SGA is reduced.

  B. Rebalance operations are completed faster than with a fixed extent size.

  C. An ASM instance automatically allocates an appropriate extent size.

  D. Resync operations are completed faster when a disk comes online after being taken offline.

  E. Performance improves in a stretch cluster configuration by reading from a local copy of an extent.

Answer: A,C

Explanation: A: Variable size extents enable support for larger ASM datafiles, reduce SGA memory requirements for very large databases (A), and improve performance for file create and open operations.

C: You don't have to worry about the sizes; the ASM instance automatically allocates the appropriate extent size.

Note:

  • The contents of ASM files are stored in a disk group as a set, or collection, of data extents that are stored on individual disks within disk groups. Each extent resides on an individual disk. Extents consist of one or more allocation units (AU). To accommodate increasingly larger files, ASM uses variable size extents.

  • The size of the extent map that defines a file can be smaller by a factor of 8 and 64 depending on the file size. The initial extent size is equal to the allocation unit size and it increases by a factor of 8 and 64 at predefined thresholds. This feature is automatic for newly created and resized datafiles when the disk group compatibility attributes are set to Oracle Release 11 or higher.
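Variable size extents depend on the disk group compatibility attributes mentioned above. A minimal sketch of raising them, issued on the ASM instance (the disk group name DATA is illustrative):

ALTER DISKGROUP data SET ATTRIBUTE 'compatible.asm' = '11.2';
ALTER DISKGROUP data SET ATTRIBUTE 'compatible.rdbms' = '11.2';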

Question No: 23

To enable the Database Smart Flash Cache, you configure the following parameters:

DB_FLASH_CACHE_FILE = '/dev/flash_device_1', '/dev/flash_device_2'
DB_FLASH_CACHE_SIZE = 64G

What is the result when you start up the database instance?

  A. It results in an error because these parameter settings are invalid.

  B. One 64G flash cache file will be used.

  C. Two 64G flash cache files will be used.

  D. Two 32G flash cache files will be used.

Answer: A
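Explanation: The settings are invalid because when DB_FLASH_CACHE_FILE names more than one flash device, DB_FLASH_CACHE_SIZE must supply a corresponding size for each file. A minimal sketch of a valid configuration under that rule (device paths illustrative):

DB_FLASH_CACHE_FILE = '/dev/flash_device_1', '/dev/flash_device_2'
DB_FLASH_CACHE_SIZE = 32G, 32G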

Question No: 24

Which three statements are true about Automatic Workload Repository (AWR)?

  A. All AWR tables belong to the SYSTEM schema.

  B. The AWR data is stored in memory and in the database.

  C. The snapshots collected by AWR are used by the self-tuning components in the database.

  D. AWR computes time model statistics based on time usage for activities, which are displayed in the V$SYS_TIME_MODEL and V$SESS_TIME_MODEL views.

  E. AWR contains system-wide tracing and logging information.

Answer: C,D,E
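For reference, a minimal sketch of taking a manual AWR snapshot and querying the time model statistics referenced in option D, using the standard DBMS_WORKLOAD_REPOSITORY package and V$SYS_TIME_MODEL view:

-- Take an on-demand AWR snapshot
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;
-- Inspect system-wide time model statistics
SELECT stat_name, value
FROM   v$sys_time_model
WHERE  stat_name IN ('DB time', 'DB CPU');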

Question No: 25

Your multitenant container database (CDB) contains some pluggable databases (PDBs). You execute the following command in the root container:

[Exhibit not reproduced: the CREATE USER statement for the common user C##A_ADMIN referenced by the answer options.]

Which two statements are true?

  A. Schema objects owned by the C##A_ADMIN common user can be shared across all PDBs.

  B. The C##A_ADMIN user will be able to use the TEMP_TS temporary tablespace only in root.

  C. The command will create a common user whose description is contained in the root and each PDB.

  D. The schema for the common user C##A_ADMIN can be different in each container.

  E. The command will create a user in the root container only because the CONTAINER clause is not used.

Answer: C,D
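The original exhibit is not reproduced above, but a statement of this general shape, issued from the root, creates such a common user (the password and tablespace name are illustrative, not the exam's exhibit):

CREATE USER c##a_admin IDENTIFIED BY secret
  TEMPORARY TABLESPACE temp_ts
  CONTAINER = ALL;  -- CONTAINER = ALL is the default when issued from the root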

Question No: 26

Your multitenant container database (CDB) contains pluggable databases (PDBs), and you are connected to HR_PDB. You execute the following command:

SQL> CREATE UNDO TABLESPACE undotb01
DATAFILE '/u01/oracle/rddb1/undotbs01.dbf' SIZE 60M AUTOEXTEND ON;

What is the result?

  A. It executes successfully and creates an UNDO tablespace in HR_PDB.

  B. It fails and reports an error because there can be only one undo tablespace in a CDB.

  C. It fails and reports an error because the CONTAINER=ALL clause is not specified in the command.

  D. It fails and reports an error because the CONTAINER=CURRENT clause is not specified in the command.

  E. It executes successfully, but neither the tablespace nor the data file is created.

Answer: E

Explanation: This is interesting behavior in a 12.1.0.1 database when creating an undo tablespace in a PDB. With the new multitenant architecture, the undo tablespace resides at the CDB level and all PDBs share the same undo tablespace.

When the current container is a PDB, an attempt to create an undo tablespace fails without returning an error.
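A minimal sketch demonstrating this behavior on 12.1 (container and file names follow the question; the dictionary query is just one way to verify nothing was created):

ALTER SESSION SET CONTAINER = hr_pdb;
CREATE UNDO TABLESPACE undotb01
  DATAFILE '/u01/oracle/rddb1/undotbs01.dbf' SIZE 60M AUTOEXTEND ON;
-- Reports "Tablespace created." yet creates nothing; verify:
SELECT tablespace_name FROM dba_tablespaces WHERE contents = 'UNDO';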

Question No: 27

You are about to plug a multi-terabyte non-CDB into an existing multitenant container database (CDB).

The characteristics of the non-CDB are as follows:

  • Version: Oracle Database 11g Release 2 (11.2.0.2.0) 64-bit

  • Character set: AL32UTF8

  • National character set: AL16UTF16

  • O/S: Oracle Linux 6 64-bit

The characteristics of the CDB are as follows:

  • Version: Oracle Database 12c Release 1 64-bit

  • Character set: AL32UTF8

  • National character set: AL16UTF16

  • O/S: Oracle Linux 6 64-bit

Which technique should you use to minimize downtime while plugging this non-CDB into the CDB?

  A. Transportable database

  B. Transportable tablespace

  C. Data Pump full export/import

  D. The DBMS_PDB package

  E. RMAN

Answer: B
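Since the answer relies on transportable tablespaces, a brief source-side sketch (the tablespace name APP_DATA is illustrative; the metadata export/import itself runs from the Data Pump expdp/impdp command line using TRANSPORT_TABLESPACES):

-- On the 11.2.0.2 source: confirm the set is self-contained, then freeze it
EXEC DBMS_TTS.TRANSPORT_SET_CHECK('APP_DATA', TRUE);
SELECT * FROM transport_set_violations;
ALTER TABLESPACE app_data READ ONLY;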

Question No: 28

Identify two valid options for adding a pluggable database (PDB) to an existing multitenant container database (CDB).

  A. Use the CREATE PLUGGABLE DATABASE statement to create a PDB using the files from the SEED.

  B. Use the CREATE DATABASE . . . ENABLE PLUGGABLE DATABASE statement to provision a PDB by copying files from the SEED.

  C. Use the DBMS_PDB package to clone an existing PDB.

  D. Use the DBMS_PDB package to plug an Oracle 12c non-CDB database into an existing CDB.

  E. Use the DBMS_PDB package to plug an Oracle 11g Release 2 (11.2.0.3.0) non-CDB database into an existing CDB.

Answer: A,D
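A minimal sketch of option D's workflow, assuming the standard DBMS_PDB.DESCRIBE procedure and noncdb_to_pdb.sql conversion script (file paths and the PDB name are illustrative):

-- On the 12c non-CDB, opened read-only: generate an XML manifest
BEGIN
  DBMS_PDB.DESCRIBE(pdb_descr_file => '/tmp/ncdb.xml');
END;
/
-- On the CDB: plug in the described non-CDB, then convert and open it
CREATE PLUGGABLE DATABASE ncdb_pdb USING '/tmp/ncdb.xml' COPY;
ALTER SESSION SET CONTAINER = ncdb_pdb;
@?/rdbms/admin/noncdb_to_pdb.sql
ALTER PLUGGABLE DATABASE OPEN;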

Question No: 29

Identify three valid methods of opening pluggable databases (PDBs).

  A. ALTER PLUGGABLE DATABASE OPEN ALL issued from the root

  B. ALTER PLUGGABLE DATABASE OPEN ALL issued from a PDB

  C. ALTER PLUGGABLE DATABASE PDB OPEN issued from the seed

  D. ALTER DATABASE PDB OPEN issued from the root

  E. ALTER DATABASE OPEN issued from that PDB

  F. ALTER PLUGGABLE DATABASE PDB OPEN issued from another PDB

  G. ALTER PLUGGABLE DATABASE OPEN issued from that PDB

Answer: A,E,G

Explanation: E: You can perform all ALTER PLUGGABLE DATABASE tasks by connecting to a PDB and running the corresponding ALTER DATABASE statement. This functionality is provided to maintain backward compatibility for applications that have been migrated to a CDB environment.

A, G: When you issue an ALTER PLUGGABLE DATABASE OPEN statement, READ WRITE is the default unless a PDB being opened belongs to a CDB that is used as a physical standby database, in which case READ ONLY is the default.

You can specify which PDBs to modify in the following ways:

  • List one or more PDBs.

  • Specify ALL to modify all of the PDBs.

  • Specify ALL EXCEPT to modify all of the PDBs, except for the PDBs listed.
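A minimal sketch of the documented forms behind the correct options (PDB names illustrative):

-- From the root (CDB$ROOT): open every PDB, or all but some
ALTER PLUGGABLE DATABASE ALL OPEN;
ALTER PLUGGABLE DATABASE ALL EXCEPT pdb2 OPEN;
-- From within a PDB: open the current PDB (the two forms are equivalent)
ALTER PLUGGABLE DATABASE OPEN;
ALTER DATABASE OPEN;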

Question No: 30

You connect using SQL*Plus to the root container of a multitenant container database (CDB) with the SYSDBA privilege.

The CDB has several pluggable databases (PDBs) open in read/write mode. There are ongoing transactions in both the CDB and PDBs.

What happens after issuing the SHUTDOWN TRANSACTIONAL statement?

  A. The shutdown proceeds immediately.

  B. The shutdown proceeds as soon as all transactions in the PDBs are either committed or rolled back.

  C. The shutdown proceeds as soon as all transactions in the CDB are either committed or rolled back.

  D. The shutdown proceeds as soon as all transactions in both the CDB and PDBs are either committed or rolled back.

  E. The statement results in an error because there are open PDBs.

Answer: B

Explanation: * SHUTDOWN [ABORT | IMMEDIATE | NORMAL | TRANSACTIONAL [LOCAL]]

Shuts down a currently running Oracle Database instance, optionally closing and dismounting a database. If the current database is a pluggable database, only the pluggable database is closed. The consolidated instance continues to run.

Shutdown commands that wait for current calls to complete or users to disconnect, such as SHUTDOWN NORMAL and SHUTDOWN TRANSACTIONAL, have a time limit for which the SHUTDOWN command will wait. If all events blocking the shutdown have not occurred within the time limit, the shutdown command cancels with the following message:

ORA-01013: user requested cancel of current operation

* If logged into a CDB, shutdown closes the CDB instance.

To shut down a CDB or non-CDB, you must be connected to the CDB or non-CDB instance that you want to close, and then enter:

SHUTDOWN

Database closed.
Database dismounted.
Oracle instance shut down.

To shut down a PDB, you must log in to the PDB to issue the SHUTDOWN command:

SHUTDOWN

Pluggable Database closed.
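A minimal sketch contrasting the two scopes (these are alternative scenarios, not one sequence; the PDB name hr_pdb is illustrative):

-- Scenario 1: connected to the root as SYSDBA, the statement waits on
-- transactions and then shuts down the entire CDB instance
SHUTDOWN TRANSACTIONAL

-- Scenario 2: with the current container set to a PDB, SHUTDOWN closes
-- only that PDB; the consolidated instance keeps running
ALTER SESSION SET CONTAINER = hr_pdb;
SHUTDOWN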
