Approved 1Z0-062 exam prep materials and questions for the Oracle certification for IT professionals. Real success guaranteed with updated 1Z0-062 PDF dumps and VCE materials. 100% pass the Oracle Database 12c: Installation and Administration exam today!

2016 May 1Z0-062 Study Guide Questions:

Q51. You use a recovery catalog for maintaining your database backups. 

You execute the following command:

$ rman TARGET / CATALOG rman/cat@catdb

RMAN> BACKUP VALIDATE DATABASE ARCHIVELOG ALL;

Which two statements are true? 

A. Corrupted blocks, if any, are repaired. 

B. Checks are performed for physical corruptions. 

C. Checks are performed for logical corruptions. 

D. Checks are performed to confirm whether all database files exist in the correct locations.

E. Backup sets containing both data files and archive logs are created. 

Answer: B,D 

Explanation: B (not C): You can validate that all database files and archived redo logs can be backed up by running a command as follows: 

RMAN> BACKUP VALIDATE DATABASE ARCHIVELOG ALL; 

This form of the command checks only for physical corruption. To check for logical corruption as well, run:

RMAN> BACKUP VALIDATE CHECK LOGICAL DATABASE ARCHIVELOG ALL; 

D: You can use the VALIDATE keyword of the BACKUP command to do the following: 

Check datafiles for physical and logical corruption 

Confirm that all database files exist and are in the correct locations. 

Note: You can use the VALIDATE option of the BACKUP command to verify that database files exist and are in the correct locations (D), and have no physical or logical corruptions that would prevent RMAN from creating backups of them. When performing a BACKUP...VALIDATE, RMAN reads the files to be backed up in their entirety, as it would during a real backup. It does not, however, actually produce any backup sets or image copies (Not A, not E). 
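
Note that repairing corruption is a separate step from validation. As a hedged illustration (these are standard RMAN commands, but the exact session shown is an assumption, not part of the question), corrupt blocks reported by a validation run can then be repaired with block media recovery:

RMAN> VALIDATE DATABASE; # reports corrupt blocks into V$DATABASE_BLOCK_CORRUPTION

RMAN> RECOVER CORRUPTION LIST; # repairs all blocks recorded in that view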


Q52. The ORCL database is configured to support shared server mode. You want to ensure that a user connecting remotely to the database instance has a one-to-one ratio between client and server processes. 

Which two connection methods guarantee that this requirement is met?

A. connecting by using an external naming method 

B. connecting by using the easy connect method 

C. creating a service in the database by using the dbms_service.create_service procedure and using this service for creating a local naming service

D. connecting by using the local naming method with the server = dedicated parameter set in the tnsnames.ora file for the net service name

E. connecting by using a directory naming method 

Answer: C,E 
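
As a hedged illustration of the approach in option C (the service name, host, and port below are placeholders, not values from the question), a service can be created and started in the database and then referenced from a local naming entry that requests a dedicated server process:

SQL> BEGIN
       DBMS_SERVICE.CREATE_SERVICE(service_name => 'ded_svc', network_name => 'ded_svc');
       DBMS_SERVICE.START_SERVICE('ded_svc');
     END;
     /

DED_SVC =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = ded_svc)
    )
  )

The (SERVER = DEDICATED) clause in the tnsnames.ora entry overrides shared server handling for connections that use this net service name.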


Q53. You created an encrypted tablespace: 


You then closed the encryption wallet because you were advised that this is secure. 

Later in the day, you attempt to create the EMPLOYEES table in the SECURESPACE tablespace with the SALT option on the EMPNAME column.

Which is true about the result? 

A. It creates the table successfully but does not encrypt any inserted data in the EMPNAME column because the wallet must be opened to encrypt columns with SALT. 

B. It generates an error when creating the table because the wallet is closed. 

C. It creates the table successfully, and encrypts any inserted data in the EMPNAME column because the wallet needs to be open only for tablespace creation. 

D. It generates an error when creating the table because the SALT option cannot be used with encrypted tablespaces.

Answer: C 

Explanation: 

* The environment setup for tablespace encryption is the same as that for transparent data encryption. Before attempting to create an encrypted tablespace, a wallet must be created to hold the encryption key. 

* Setting the tablespace master encryption key is a one-time activity. This creates the master encryption key for tablespace encryption. This key is stored in an external security module (Oracle wallet) and is used to encrypt the tablespace encryption keys. 

* Before you can create an encrypted tablespace, the Oracle wallet containing the tablespace master encryption key must be open. The wallet must also be open before you can access data in an encrypted tablespace. 

* Salt is a way to strengthen the security of encrypted data. It is a random string added to the data before it is encrypted, causing repetition of text in the clear to appear different when encrypted. Salt removes the one common method attackers use to steal data, namely, matching patterns of encrypted text. 

* SALT | NO SALT By default the database appends a random string, called "salt," to the clear text of the column before encrypting it. This default behavior imposes some limitations on encrypted columns: / If you specify SALT during column encryption, then the database does not compress the data in the encrypted column even if you specify table compression for the table. However, the database does compress data in unencrypted columns and encrypted columns without the SALT parameter. 
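
A hedged sketch of the sequence the question describes (the keystore password, datafile path, and column definitions are placeholder assumptions; keystore creation and master key setup are presumed already done, per the one-time setup noted above):

SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY "keystore_pwd";

SQL> CREATE TABLESPACE securespace
       DATAFILE '/u01/oradata/orcl/securespace01.dbf' SIZE 100M
       ENCRYPTION USING 'AES256'
       DEFAULT STORAGE (ENCRYPT);

SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE CLOSE IDENTIFIED BY "keystore_pwd";

SQL> CREATE TABLE employees (empname VARCHAR2(100) ENCRYPT USING 'AES192' SALT, hiredate DATE)
       TABLESPACE securespace;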



Up to date 1z0-062:

Q54. You must track all transactions that modify certain tables in the sales schema for at least three years. 

Automatic undo management is enabled for the database with a retention of one day. 

Which two must you do to track the transactions? 

A. Enable supplemental logging for the database. 

B. Specify undo retention guarantee for the database. 

C. Create a Flashback Data Archive in the tablespace where the tables are stored. 

D. Create a Flashback Data Archive in any suitable tablespace. 

E. Enable Flashback Data Archiving for the tables that require tracking. 

Answer: D,E 

Explanation: E: By default, flashback archiving is disabled for any table. You can enable flashback archiving for a table if you have the FLASHBACK ARCHIVE object privilege on the Flashback Data Archive that you want to use for that table. 

D: Creating a Flashback Data Archive 

/ Create a Flashback Data Archive with the CREATE FLASHBACK ARCHIVE statement, specifying the following: 

Name of the Flashback Data Archive 

Name of the first tablespace of the Flashback Data Archive 

(Optional) Maximum amount of space that the Flashback Data Archive can use in the first tablespace 

/ Create a Flashback Data Archive named fla2 that uses tablespace tbs2, whose data will be retained for two years: 

CREATE FLASHBACK ARCHIVE fla2 TABLESPACE tbs2 RETENTION 2 YEAR; 
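
For option E, a hedged example of enabling flashback archiving on a tracked table (the archive, tablespace, schema, and table names are placeholders; a three-year retention matching the requirement is shown):

SQL> CREATE FLASHBACK ARCHIVE fla_sales TABLESPACE tbs2 RETENTION 3 YEAR;

SQL> ALTER TABLE sales.orders FLASHBACK ARCHIVE fla_sales;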


Q55. Which statement is true about Enterprise Manager (EM) express in Oracle Database 12c? 

A. By default, EM express is available for a database after database creation. 

B. You can use EM express to manage multiple databases running on the same server. 

C. You can perform basic administrative tasks for pluggable databases by using the EM express interface. 

D. You cannot start up or shut down a database Instance by using EM express. 

E. You can create and configure pluggable databases by using EM express. 

Answer: A 

Explanation: EM Express is built inside the database. 

Note: Oracle Enterprise Manager Database Express (EM Express) is a web-based database management tool that is built inside the Oracle Database. It supports key performance management and basic database administration functions. From an architectural perspective, EM Express has no mid-tier or middleware components, ensuring that its overhead on the database server is negligible. 
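
As a hedged illustration (the port number is an arbitrary example), the HTTPS port on which EM Express listens can be checked and changed from SQL*Plus, after which the console is reached at https://hostname:port/em:

SQL> SELECT DBMS_XDB_CONFIG.GETHTTPSPORT() FROM DUAL;

SQL> EXEC DBMS_XDB_CONFIG.SETHTTPSPORT(5500);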


Q56. Which two statements are true about the Oracle Direct Network File System (DNFS)? 

A. It utilizes the OS file system cache. 

B. A traditional NFS mount is not required when using Direct NFS. 

C. Oracle Disk Manager can manage NFS on its own, without using the operating system kernel NFS driver. 

D. Direct NFS is available only on UNIX platforms. 

E. Direct NFS can load-balance I/O traffic across multiple network adapters. 

Answer: C,E 

Explanation: E: Performance is improved by load balancing across multiple network interfaces (if available). 

Note: 

* To enable Direct NFS Client, you must replace the standard Oracle Disk Manager (ODM) library with one that supports Direct NFS Client. 

Incorrect: 

Not A: Direct NFS Client is capable of performing concurrent direct I/O, which bypasses any operating system level caches and eliminates any operating system write-ordering locks. 

Not B: To use Direct NFS Client, the NFS file systems must first be mounted and available over regular NFS mounts. Oracle Direct NFS (dNFS) is an optimized NFS (Network File System) client that provides faster and more scalable access to NFS storage located on NAS storage devices (accessible over TCP/IP). 

Not D: Direct NFS is provided as part of the database kernel, and is thus available on all supported database platforms - even those that don't support NFS natively, like Windows. 

Note: 

* Oracle Direct NFS (dNFS) is an optimized NFS (Network File System) client that provides faster and more scalable access to NFS storage located on NAS storage devices (accessible over TCP/IP). Direct NFS is built directly into the database kernel - just like ASM which is mainly used when using DAS or SAN storage. 

* Oracle Direct NFS (dNFS) is an internal I/O layer that provides faster access to large NFS files than traditional NFS clients. 
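
A hedged sketch of enabling the Direct NFS ODM library and declaring a mount in oranfstab (server names, paths, and mount points are placeholders; on recent releases the Direct NFS library may already be enabled by default, making the make step unnecessary):

$ cd $ORACLE_HOME/rdbms/lib
$ make -f ins_rdbms.mk dnfs_on

$ cat $ORACLE_HOME/dbs/oranfstab
server: nas01
path: 192.168.10.11
path: 192.168.11.11
export: /export/oradata mount: /u02/oradata

Listing more than one path entry for a server is what lets Direct NFS load-balance I/O across multiple network interfaces, as noted in answer E.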



100% Guarantee exam 1z0-062:

Q57. You executed the following command to create a password file in the database server: 

$ orapwd file=orapworcl entries=5 ignorecase=N 

Which statement describes the purpose of the above password file? 

A. It records usernames and passwords of users when granted the DBA role 

B. It contains usernames and passwords of users for whom auditing is enabled 

C. It is used by Oracle to authenticate users for remote database administration 

D. It records usernames and passwords of all users when they are added to OSDBA or OSOPER operating groups 

Answer: C 
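
A hedged illustration of how the password file is used (the user name and connect string are placeholders): once an administrative privilege such as SYSDBA is granted, the grantee is recorded in the password file, appears in V$PWFILE_USERS, and can authenticate remotely with that privilege:

SQL> GRANT SYSDBA TO app_dba;

SQL> SELECT username, sysdba, sysoper FROM v$pwfile_users;

$ sqlplus app_dba@orcl AS SYSDBA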


Q58. Examine the contents of the SQL*Loader control file: 


Which three statements are true regarding the SQL*Loader operation performed using the control file? 

A. An EMP table is created if a table does not exist. Otherwise, the EMP table is appended with the loaded data. 

B. The SQL*Loader data file myfile1.dat has the column names for the EMP table. 

C. The SQL*Loader operation fails because no record terminators are specified. 

D. Field names should be the first line in both the SQL*Loader data files. 

E. The SQL*Loader operation assumes that the file must be a stream record format file with the normal carriage return string as the record terminator. 

Answer: A,B,E 

Explanation: A: The APPEND keyword tells SQL*Loader to preserve any preexisting data in the table. Other options allow you to delete preexisting data, or to fail with an error if the table is not empty to begin with. 

B (not D): Note: 

* SQL*Loader-00210: first data file is empty, cannot process the FIELD NAMES record 

Cause: The data file listed in the next message was empty. Therefore, the FIELD NAMES 

FIRST FILE directive could not be processed. 

Action: Check the listed data file and fix it. Then retry the operation 

E: 

* A comma-separated values (CSV) file (also sometimes called a character-separated values file, because the separator character does not have to be a comma) stores tabular data (numbers and text) in plain-text form. Plain text means that the file is a sequence of characters, with no data that has to be interpreted as binary numbers. A CSV file consists of any number of records, separated by line breaks of some kind; each record consists of fields, separated by some other character or string, most commonly a literal comma or tab. Usually, all records have an identical sequence of fields. 

* Fields with embedded commas must be quoted. 

Example: 

1997,Ford,E350,"Super, luxurious truck" 

Note: 

* SQL*Loader is a bulk loader utility used for moving data from external files into the Oracle database. 
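
Because the control file from the question is not reproduced here, the following is only a hedged sketch of a control file consistent with the points above (APPEND into EMP, comma-separated fields, stream record format); the data file name, column list, and credentials are assumptions:

LOAD DATA
INFILE 'myfile1.dat'
APPEND
INTO TABLE emp
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(empno, ename, sal)

$ sqlldr scott/tiger control=emp.ctl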


Q59. Examine the parameter for your database instance: 


You generated the execution plan for the following query in the plan table and noticed that a nested loop join was used. After actual execution of the query, you notice that a hash join was used in the execution plan: 


Identify the reason why the optimizer chose different execution plans. 

A. The optimizer used a dynamic plan for the query. 

B. The optimizer chose different plans because automatic dynamic sampling was enabled. 

C. The optimizer used re-optimization cardinality feedback for the query. 

D. The optimizer chose different plan because extended statistics were created for the columns used. 

Answer: B 

Explanation: * OPTIMIZER_DYNAMIC_SAMPLING controls both when the database gathers dynamic statistics and the size of the sample that the optimizer uses to gather the statistics. Range of values: 0 to 11. 
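
A hedged illustration (level 11 is just an example value): the parameter can be set at session level, and the Note section of the cursor's plan output typically indicates when dynamic statistics were used for the statement:

SQL> ALTER SESSION SET optimizer_dynamic_sampling = 11;

SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(FORMAT => 'TYPICAL'));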


Q60. Your multitenant container database has three pluggable databases (PDBs): PDB1, PDB2, and PDB3. 

Which two RMAN commands may be used to back up only the PDB1 pluggable database? 

A. BACKUP PLUGGABLE DATABASE PDB1 while connected to the root container 

B. BACKUP PLUGGABLE DATABASE PDB1 while connected to the PDB1 container 

C. BACKUP DATABASE while connected to the PDB1 container 

D. BACKUP DATABASE while connected to the root container 

E. BACKUP PLUGGABLE database PDB1 while connected to PDB2 

Answer: A,C 

Explanation: To perform operations on a single PDB, you can connect as target either to the root or directly to the PDB. 

* (A) If you connect to the root, you must use the PLUGGABLE DATABASE syntax in your RMAN commands. For example, to back up a PDB, you use the BACKUP PLUGGABLE DATABASE command. 

* (C)If instead you connect directly to a PDB, you can use the same commands that you would use when connecting to a non-CDB. For example, to back up a PDB, you would use the BACKUP DATABASE command. 

Reference: Oracle Database Backup and Recovery User's Guide 12c, About Backup and Recovery of CDBs
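
A hedged sketch of both keyed approaches (connect strings and credentials are placeholders):

$ rman TARGET /
RMAN> BACKUP PLUGGABLE DATABASE PDB1;

$ rman TARGET sys@PDB1
RMAN> BACKUP DATABASE;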