Pinpoint 1Z0-058 practice materials and courses for Oracle certification candidates. Real success guaranteed with updated 1Z0-058 PDF and VCE materials. 100% pass the Oracle Real Application Clusters 11g Release 2 and Grid Infrastructure exam today!

2021 Jun 1Z0-058 Study Guide Questions:

Q61. Your cluster was originally created with nodes RACNODE1 and RACNODE2 three years ago. Last year, nodes RACNODE3 and RACNODE4 were added. 

These nodes have faster processors and more local storage than the original nodes, making performance management and tuning more difficult. 

Two more nodes with the same processor speed were added to the cluster last week as RACNODE5 and RACNODE6, and you must now remove RACNODE1 and RACNODE2 for redeployment. 

The Oracle Grid Infrastructure is using GNS and the databases are all 11g Release 2, all running from the same home. The Grid home is /fs01/home/grid. 

Which three steps must be performed to remove the nodes from the cluster? 

A. Run /fs01/home/grid/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/fs01/home/grid "CLUSTER_NODES={RACNODE3,RACNODE4,RACNODE5,RACNODE6}" as the grid software owner on any remaining node. 

B. Run /fs01/home/grid/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/fs01/home/grid "CLUSTER_NODES={RACNODE1}" as the grid software owner on RACNODE1, and run /fs01/home/grid/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/fs01/home/grid "CLUSTER_NODES={RACNODE2}" as the grid software owner on RACNODE2. 

C. Run /fs01/home/grid/oui/bin/runInstaller -detachHome ORACLE_HOME=/fs01/home/grid as the grid software owner on RACNODE1 and RACNODE2. 

D. Run the /fs01/home/grid/crs/install/rootcrs.pl script as root on each node to be deleted. 

E. Run crsctl delete node -n RACNODE1 and crsctl delete node -n RACNODE2 as root from any node remaining in the cluster. 

Answer: A,D,E 

Explanation: 

Deleting a Cluster Node on Linux and UNIX Systems 

1. Ensure that Grid_home correctly specifies the full directory path for the Oracle Clusterware home on each node, where Grid_home is the location of the installed Oracle Clusterware software. 

2. Run the following command as either root or the user that installed Oracle Clusterware to determine whether the node you want to delete is active and whether it is pinned:
$ olsnodes -s -t
If the node is pinned, then run the crsctl unpin css command; otherwise, proceed to the next step. (Illustrative output for this scenario appears in the sketch after this procedure.) 

3. Disable the Oracle Clusterware applications and daemons running on the node. Run the rootcrs.pl script as root from the Grid_home/crs/install directory on the node to be deleted, as follows:
# ./rootcrs.pl -deconfig -deinstall -force
If you are deleting multiple nodes, then run the rootcrs.pl script on each node that you are deleting. If you are deleting all nodes from a cluster, then append the -lastnode option to the preceding command to clear OCR and the voting disks, as follows:
# ./rootcrs.pl -deconfig -deinstall -force -lastnode 

4. From any node that you are not deleting, run the following command from the Grid_home/bin directory as root to delete the node from the cluster:
# crsctl delete node -n node_to_be_deleted
If you run a dynamic Grid Plug and Play cluster using DHCP and GNS, skip to step 7. 

5. On the node you want to delete, run the following command as the user that installed Oracle Clusterware from the Grid_home/oui/bin directory, where node_to_be_deleted is the name of the node that you are deleting:
$ ./runInstaller -updateNodeList ORACLE_HOME=Grid_home "CLUSTER_NODES={node_to_be_deleted}" CRS=TRUE -silent -local 

6. On the node that you are deleting, depending on whether you have a shared or local Oracle home, complete one of the following procedures as the user that installed Oracle Clusterware:
If you have a shared home, then run the following command from the Grid_home/oui/bin directory on the node you want to delete:
$ ./runInstaller -detachHome ORACLE_HOME=Grid_home -silent -local
For a local home, deinstall the Oracle Clusterware home from the node that you want to delete by running the following command, where Grid_home is the path defined for the Oracle Clusterware home:
$ Grid_home/deinstall/deinstall -local 

7. On any node other than the node you are deleting, run the following command from the Grid_home/oui/bin directory, where remaining_nodes_list is a comma-delimited list of the nodes that are to remain part of your cluster:
$ ./runInstaller -updateNodeList ORACLE_HOME=Grid_home "CLUSTER_NODES={remaining_nodes_list}" CRS=TRUE -silent 

8. Run the following CVU command to verify that the specified nodes have been successfully deleted from the cluster:
$ cluvfy stage -post nodedel -n node_list [-verbose] 

Oracle Clusterware Administration and Deployment Guide 11g Release 2 (11.2) 
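
As a minimal sketch, here is the documented sequence applied to this question's scenario (a GNS/DHCP cluster, so steps 5 and 6 on the deleted nodes are skipped; # marks commands run as root, $ the grid software owner; the olsnodes output is illustrative, and the runInstaller flags follow the documented step 7 form):

$ olsnodes -s -t
RACNODE1  Active  Unpinned
RACNODE2  Active  Unpinned
RACNODE3  Active  Unpinned
RACNODE4  Active  Unpinned
RACNODE5  Active  Unpinned
RACNODE6  Active  Unpinned
# /fs01/home/grid/crs/install/rootcrs.pl -deconfig -deinstall -force     (run on RACNODE1, then on RACNODE2)
# crsctl delete node -n RACNODE1                                         (from any remaining node)
# crsctl delete node -n RACNODE2
$ /fs01/home/grid/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/fs01/home/grid "CLUSTER_NODES={RACNODE3,RACNODE4,RACNODE5,RACNODE6}" CRS=TRUE -silent
$ cluvfy stage -post nodedel -n RACNODE1,RACNODE2 -verbose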


Q62. The Oracle Grid Infrastructure administrator decides to make more copies of the voting disks that are currently stored in the ASM disk group +VOTE. How can this be done? 

A. by running crsctl add css votedisk <path_to_new_voting_disk> to make a copy to a shared location on a shared device or file system 

B. by running crsctl add css votedisk +VOTE, thereby adding another copy of the voting disk to the +VOTE disk group 

C. by running srvctl replace votedisk +asm_disk_group on another disk group that has greater redundancy, thereby causing additional copies to be created 

D. by running crsctl replace votedisk +asm_disk_group on another disk group that has greater redundancy, thereby causing additional copies to be created 

Answer: D 

Explanation: 

Storing Voting Disks on Oracle ASM

Using the crsctl replace votedisk command, you can move a given set of voting disks from one Oracle ASM disk group into another, or onto a certified file system. If you move voting disks from one Oracle ASM disk group to another, then you can change the number of voting disks by placing them in a disk group of a different redundancy level than the former disk group. 

Notes: 

- You cannot directly influence the number of voting disks in one disk group. 

- You cannot use the crsctl add | delete votedisk commands on voting disks stored in Oracle ASM disk groups, because Oracle ASM manages the number of voting disks according to the redundancy level of the disk group. 

- You cannot add a voting disk to a cluster file system if the voting disks are stored in an Oracle ASM disk group. 

- Oracle does not support having voting disks in Oracle ASM and directly on a cluster file system for the same cluster at the same time. 

Oracle Clusterware Administration and Deployment Guide 11g Release 2 (11.2) 
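
As a minimal sketch (the target disk group name +CRS_NORM is hypothetical; it stands for a disk group with higher redundancy than +VOTE):

$ crsctl replace votedisk +CRS_NORM
$ crsctl query css votedisk

The second command simply lists the voting disks after the move, confirming how many copies ASM now maintains for the disk group's redundancy level.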


Q63. The Global Cache Block Access Latency chart shows high elapsed times. What are two possible causes for this? 

A. badly written SQL statements 

B. storage network bottlenecks 

C. a large number of requested blocks not cached in any instance 

D. slow or faulty interconnect 

Answer: A,D 

Explanation: 

About Global Cache Block Access Latency Chart 

If the Global Cache Block Access Latency chart shows high latencies (high elapsed times), then this can be caused by any of the following: 

A high number of requests caused by SQL statements that are not tuned. 

A large number of processes in the queue waiting for the CPU, or scheduling delays. 

Slow, busy, or faulty interconnects. In these cases, check your network connection for dropped packets, retransmittals, or cyclic redundancy check (CRC) errors. 

Oracle Database 2 Day + Real Application Clusters Guide 11g Release 2 (11.2) 
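
For the interconnect check mentioned above, a hedged sketch on Linux (eth1 stands in for the private interconnect interface, which varies by installation):

$ ip -s link show eth1
$ netstat -s | grep -i retrans

The first command reports per-interface error, dropped, and overrun counters; the second surfaces TCP retransmission statistics.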



Refresh 1Z0-058 training:

Q64. Which two statements are true regarding undo management in the RAC environment? 

A. You can use Automatic Undo Management (AUM) in some of the instances and manual undo management in the rest of the instances in a RAC database. 

B. In a policy-managed RAC database, Oracle automatically allocates the undo tablespace even if Oracle Managed Files (OMF) is disabled in the database. 

C. In a policy-managed RAC database, Oracle automatically allocates the undo tablespace if the database is OMF enabled. 

D. You can dynamically switch undo tablespace assignments by executing the ALTER SYSTEM SET UNDO_TABLESPACE statement from any instance in an administrator-managed database. 

Answer: C,D 

Explanation: You assign undo tablespaces in your Oracle RAC administrator-managed database by specifying a different value for the UNDO_TABLESPACE parameter for each instance in your SPFILE or individual PFILEs. For policy-managed databases, Oracle automatically allocates the undo tablespace when the instance starts if you have Oracle Managed Files enabled. You can switch from using one undo tablespace to another. Because the UNDO_TABLESPACE initialization parameter is a dynamic parameter, the ALTER SYSTEM SET statement can be used to assign a new undo tablespace. 
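
As a sketch of that dynamic switch (the tablespace name UNDOTBS_BIG and instance name RACDB1 are hypothetical; the SID clause scopes the change to one instance):

ALTER SYSTEM SET UNDO_TABLESPACE = undotbs_big SCOPE = BOTH SID = 'RACDB1';

The new undo tablespace must already exist and can be assigned while the instance is running; the other instances keep their own UNDO_TABLESPACE settings.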


Q65. Which two types of files can be stored in an ASM Cluster File System? 

A. OCR and Voting Disk files 

B. data files for external tables 

C. Oracle database executable 

D. Grid Infrastructure executables 

E. data files for tablespaces 

F. archive log files 

Answer: B,C 

Explanation: Oracle Automatic Storage Management Cluster File System (Oracle ACFS) is a multi-platform, scalable file system and storage management technology that extends Oracle Automatic Storage Management (Oracle ASM) functionality to support customer files maintained outside of Oracle Database. Oracle ACFS supports many database and application files, including executables, database trace files, database alert logs, application reports, BFILEs, and configuration files. Other supported files are video, audio, text, images, engineering drawings, and other general-purpose application file data. 

Notes: 

- Oracle ASM is the preferred storage manager for all database files. It has been specifically designed and optimized to provide the best performance for database file types. 

- Oracle ACFS is the preferred file manager for non-database files. It is optimized for general-purpose files. 

- Oracle ACFS does not support any file type that can be directly stored in Oracle ASM, except where explicitly noted in the documentation. "Not supported" means Oracle Support Services does not take calls and development does not fix bugs associated with storing unsupported file types in Oracle ACFS. 

- Starting with Oracle Automatic Storage Management 11g Release 2 (11.2.0.3), Oracle ACFS supports RMAN backups (BACKUPSET file type), archive logs (ARCHIVELOG file type), and Data Pump dumpsets (DUMPSET file type). Note that Oracle ACFS snapshots are not supported with these files. 

- Oracle ACFS does not support files for the Oracle Grid Infrastructure home. 

- Oracle ACFS does not support Oracle Cluster Registry (OCR) and voting files. 
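
To make the general-purpose use concrete, a minimal sketch of creating an ACFS file system on Linux (disk group DATA, volume name testvol, and the mount point are hypothetical; the /dev/asm/testvol-* device name is assigned by ASM and varies):

$ asmcmd volcreate -G DATA -s 10G testvol
$ asmcmd volinfo -G DATA testvol
# mkfs -t acfs /dev/asm/testvol-123
# mount -t acfs /dev/asm/testvol-123 /u01/app/acfsmounts/testvol

volinfo reports the volume device path that the mkfs and mount commands need.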


Q66. Your production environment cluster is running Oracle Enterprise Linux and currently has four nodes. You are asked to plan for extending the cluster to six nodes. Which three methods are available to add the new nodes? 

A. silent cloning using crsctl clone cluster and ssh 

B. a GUI interface from Enterprise Manager 

C. with the Oracle Universal Installer using runInstaller -clone <nodename> 

D. silent cloning using perl clone.pl -silent, either with parameters in a file or in line 

E. using addNode.sh 

Answer: B,D,E 

Explanation: 

Log in to the Enterprise Manager Grid Control Console and click the "Deployments" tab. Under the "Deployments" --> "General" --> "Cloning" section, click "Clone Oracle Home". The "Clone Oracle Home: Source Home" page allows you to select the Oracle Home you want to clone; once the selection has been made, click "Next" to proceed. 

You can also use cloning to add nodes to a cluster. Prepare Node 2, then run the clone.pl script located in the Grid_home/clone/bin directory on Node 2. To set up the new Oracle Clusterware environment, the clone.pl script requires you to provide several setup values. You can provide the variable values either by supplying input on the command line when you run the clone.pl script, or by creating a file in which you assign values to the cloning variables. 

To extend the Grid Infrastructure home to node3, navigate to the Grid_home/oui/bin directory on node1 and run the addNode.sh script. 
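
A hedged sketch of the addNode.sh route (the node and VIP names are hypothetical; this is the documented 11g Release 2 silent syntax, and on a GNS cluster the virtual hostnames clause can be omitted):

$ cd Grid_home/oui/bin
$ ./addNode.sh -silent "CLUSTER_NEW_NODES={node5,node6}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={node5-vip,node6-vip}"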



Certified 1Z0-058 guidance:

Q67. Which two statements are true about instance recovery in a RAC environment? 

A. Parallel instance recovery will work even if the recovery_parallelism initialization parameter set to 0 or 1. 

B. Increasing the size of the default buffer cache can speed up instance recovery because instance recovery may use as much as 50 percent of the default buffer cache for recovery buffers. 

C. The fast_start_mttr_target initialization parameter includes both instance startup and recovery time. 

D. The fast_start_mttr_target initialization parameter specifies only the instance recovery time. 

Answer: B,D 

Explanation: Many sites run with too few redo logs that are too small. Small redo logs cause system checkpoints to continuously put a high load on the buffer cache and I/O system. If there are too few redo logs, then the archiver cannot keep up, and the database will wait for the archive process to catch up. 

With the Fast-Start Fault Recovery feature, the FAST_START_MTTR_TARGET initialization parameter simplifies the configuration of recovery time from instance or system failure. FAST_START_MTTR_TARGET specifies a target for the expected mean time to recover (MTTR), that is, the time (in seconds) that it should take to start up the instance and perform cache recovery. 

Oracle Database Performance Tuning Guide 
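
A short sketch of working with this parameter (the 60-second target is illustrative):

ALTER SYSTEM SET FAST_START_MTTR_TARGET = 60 SCOPE = BOTH;
SELECT target_mttr, estimated_mttr FROM v$instance_recovery;

V$INSTANCE_RECOVERY shows the effective target alongside the current estimated recovery time, which is useful for validating the setting under a real workload.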


Q68. You plan to use Enterprise Manager to locate and stage patches to your Oracle Home. 

The software library has been configured to be downloaded to /u01/app/oracle and your "My Oracle Support" credentials have been entered. 

You want to start the provisioning daemon in order to use the deployment procedure manager to view, edit, monitor, and run deployment procedures. 

How would you start the provisioning daemon? 

A. using pafctl start 

B. using crsctl start paf 

C. using srvctl start paf 

D. using emctl start paf 

Answer: A 

Explanation: 

Starting the Provisioning Daemon 

The provisioning daemon is started with: 

$ pafctl start
Enter repository user password:
Enter interval [default 3]:
Provisioning Daemon is Up, Interval = 3

D60488GC11 Oracle 11g: RAC and Grid Infrastructure Administration Accelerated 4 - 26 


Q69. Identify the three valid storage options for the Grid Infrastructure voting disks and OCR. 

A. a certified Cluster File System (CFS) 

B. a certified Network File System (NFS) 

C. ASM Cluster File System (ACFS) 

D. Automatic Storage Management (ASM) 

E. shared disk slices (block or raw devices) 

Answer: A,B,D 

Explanation: 

D60488GC11 Oracle 11g: RAC and Grid Infrastructure Administration Accelerated 2 - 4 
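
For reference, a quick way to see where a running cluster currently stores these files (both are standard 11g Release 2 clusterware utilities):

$ crsctl query css votedisk
$ ocrcheck

The first lists the voting disk locations; the second reports the OCR locations and their integrity.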


Q70. As part of the preinstallation process for adding two new nodes to your four-node UNIX cluster, you are in discussions with the OS administrators about the operating system installation and setup for the two new nodes, called RACNODE5 and RACNODE6. 

The nodes have already been connected to the network infrastructure and the administrators are ready for the OS installation. Which two methods fulfill the installation requirements? 

A. Install a new image of the OS, then configure SSH for the root user. 

B. Install a cloned image of the OS that at least matches the existing node images for drivers, patches, and updates. 

C. Install a new image of the OS, that at least matches an existing node for drivers, patches, and updates, and create the necessary OS users and groups with user and group IDs matching those on the existing nodes. 

D. Install a new image of the OS that at least matches the existing node images for drivers. 

E. Install a new image of the OS, and create the necessary OS users and groups with any user and group IDs. 

Answer: B,C 

Explanation: 

Prerequisite Steps for Adding Cluster Nodes 

1. Make physical connections. 

Connect the nodes' hardware to the network infrastructure of your cluster. This includes establishing electrical connections, configuring network interconnects, configuring shared disk subsystem connections, and so on. See your hardware vendor documentation for details about this step. 

2. Install the operating system. 

Install a cloned image of the operating system that matches the operating system on the other nodes in your cluster. This includes installing required service patches, updates, and drivers. See your operating system vendor documentation for details about this process. 

Note: Oracle recommends that you use a cloned image. However, if the installation fulfills the installation requirements, then install the operating system according to the vendor documentation. 

3. Create Oracle users. You must create all Oracle users on the new node that exist on the existing nodes. For example, if you are adding a node to a cluster that has two nodes, and those two nodes have different owners for the Grid Infrastructure home and the Oracle home, then you must create those owners on the new node, even if you do not plan to install an Oracle home on the new node. 

Oracle Clusterware Administration and Deployment Guide 11g Release 2 (11.2)
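
A hedged sketch of step 3 on a new node (all names and numeric IDs here are hypothetical; the essential point is that each UID and GID must match the values already in use on the existing nodes):

# groupadd -g 54321 oinstall
# groupadd -g 54322 dba
# useradd -u 54421 -g oinstall -G dba -m grid
# useradd -u 54422 -g oinstall -G dba -m oracle
# id grid; id oracle

The final id commands verify that the resulting IDs line up with those reported by id on an existing node.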