Practice questions and answers for the Oracle 1Z0-117 exam: Oracle Database 11g Release 2: SQL Tuning.


Q21. You recently gathered statistics for a table by using the following commands: 

You notice that query performance has degraded after gathering the statistics, and you want to use the old statistics. The optimizer statistics retention period is set to the default. 

What must you do to use the old statistics? 

A. Use the flashback to bring back the statistics to the desired time. 

B. Restore statistics from statistics history up to the desired time. 

C. Delete all the statistics collected after the desired time. 

Answer: B 



Explanation: Whenever statistics in the dictionary are modified, old versions of the statistics are saved automatically for future restoration. Statistics can be restored using the RESTORE procedures of the DBMS_STATS package. These procedures take a time stamp as an argument and restore statistics as of that time stamp. This is useful when newly collected statistics lead to sub-optimal execution plans and the administrator wants to revert to the previous set of statistics. 

Reference: Oracle Database Performance Tuning Guide, Restoring Previous Versions of Statistics 
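The restore described above can be sketched as follows; the SH.SALES owner and table names and the one-day-ago timestamp are placeholders for illustration:

```sql
-- Check how far back statistics history is available
-- (the default retention period in 11g is 31 days).
SELECT DBMS_STATS.GET_STATS_HISTORY_AVAILABILITY FROM DUAL;

-- Restore statistics for a hypothetical SH.SALES table as they
-- were before the problematic gather.
BEGIN
  DBMS_STATS.RESTORE_TABLE_STATS(
    ownname         => 'SH',
    tabname         => 'SALES',
    as_of_timestamp => SYSTIMESTAMP - INTERVAL '1' DAY);
END;
/
```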

Q22. Examine the exhibit. 

Which two are true concerning the execution plan? 

A. No partition-wise join is used 

B. A full partition-wise join is used 

C. A partial partition-wise join is used 

D. The SALES table is composite partitioned 

Answer: B,D 

Explanation: * The following example shows the execution plan for the full partition-wise join, with the SALES table range partitioned by time_id and subpartitioned by hash on cust_id. 

| Id | Operation             | Name      | Pstart| Pstop |IN-OUT| PQ Distrib |
|  0 | SELECT STATEMENT      |           |       |       |      |            |
|  1 | PX COORDINATOR        |           |       |       |      |            |
|  2 | PX SEND QC (RANDOM)   | :TQ10001  |       |       | P->S | QC (RAND)  |
|* 3 | FILTER                |           |       |       | PCWC |            |
|  4 | HASH GROUP BY         |           |       |       | PCWP |            |
|  5 | PX RECEIVE            |           |       |       | PCWP |            |
|  6 | PX SEND HASH          | :TQ10000  |       |       | P->P | HASH       |
|  7 | HASH GROUP BY         |           |       |       | PCWP |            |
|  8 | PX PARTITION HASH ALL |           | 1     | 16    | PCWC |            |
|* 9 | HASH JOIN             |           |       |       | PCWP |            |
| 10 | TABLE ACCESS FULL     | CUSTOMERS | 1     | 16    | PCWP |            |
|* 12| TABLE ACCESS FULL     | SALES     | 113   | 144   | PCWP |            |

Predicate Information (identified by operation id): 

3 - filter(COUNT(SYS_OP_CSR(SYS_OP_MSR(COUNT(*)),0))>100) 

9 - access("S"."CUST_ID"="C"."CUST_ID") 

12 - filter("S"."TIME_ID"<=TO_DATE(' 1999-10-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "S"."TIME_ID">=TO_DATE(' 1999-07-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss')) 

* Full partition-wise joins can occur if two tables that are co-partitioned on the same key are joined in a query. The tables can be co-partitioned at the partition level, or at the subpartition level, or at a combination of partition and subpartition levels. Reference partitioning is an easy way to guarantee co-partitioning. Full partition-wise joins can be executed in serial and in parallel. 

Reference: Oracle Database VLDB and Partitioning Guide, Full Partition-Wise Joins: Composite - Single-Level 
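A minimal DDL sketch of the co-partitioning the explanation describes (CUSTOMERS hash partitioned on cust_id; SALES range partitioned on time_id and hash subpartitioned on cust_id); the column lists and partition names are illustrative:

```sql
CREATE TABLE customers (
  cust_id   NUMBER PRIMARY KEY,
  cust_name VARCHAR2(40)
)
PARTITION BY HASH (cust_id) PARTITIONS 16;

CREATE TABLE sales (
  cust_id     NUMBER,
  time_id     DATE,
  amount_sold NUMBER
)
PARTITION BY RANGE (time_id)
  SUBPARTITION BY HASH (cust_id) SUBPARTITIONS 16
( PARTITION sales_q3_1999 VALUES LESS THAN (DATE '1999-10-01'),
  PARTITION sales_q4_1999 VALUES LESS THAN (DATE '2000-01-01') );

-- A join on cust_id can now run as a full partition-wise join,
-- pairing each CUSTOMERS hash partition with the matching
-- SALES hash subpartition.
```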

Q23. You are administering a database supporting a DSS workload in which some tables are updated frequently but not queried often. You have SQL plan baselines for these tables, and you do not want the automatic maintenance task to gather statistics for these tables regularly. 

Which task would you perform to achieve this? 

A. Set the INCREMENTAL statistic preference to FALSE for these tables. 

B. Set the STALE_PERCENT statistic preference to a higher value for these tables. 

C. Set the GRANULARITY statistic preference to AUTO for these tables. 

D. Set the PUBLISH statistic preference to TRUE for these tables. 

Answer: B 


Explanation: With the DBMS_STATS package you can view and modify optimizer statistics gathered for database objects. 

STALE_PERCENT - This value determines the percentage of rows in a table that have to change before the statistics on that table are deemed stale and should be regathered. The default value is 10%. 

Reference: Oracle Database PL/SQL Packages and Types Reference 
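Raising STALE_PERCENT for a specific table can be sketched as below; the SH.SALES names and the 50% threshold are illustrative:

```sql
-- Require 50% of rows to change before SH.SALES statistics are
-- considered stale by the automatic maintenance task.
BEGIN
  DBMS_STATS.SET_TABLE_PREFS(
    ownname => 'SH',
    tabname => 'SALES',
    pname   => 'STALE_PERCENT',
    pvalue  => '50');
END;
/

-- Verify the preference:
SELECT DBMS_STATS.GET_PREFS('STALE_PERCENT', 'SH', 'SALES') FROM DUAL;
```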

Q24. Identify two situations in which full table scans will be faster than index range scans. 

A. A query with a highly selective filter fetching less than 5 percent of the rows from a table. 

B. A highly selective query on a table having high clustering factor for an index. 

C. A query fetching fewer blocks than the value specified by DB_FILE_MULTIBLOCK_READ_COUNT. 

D. A query executing in parallel on a partitioned table with partitioned indexes. 

E. A query on a table with sparsely populated table blocks. 

Answer: C,D 


D: DB_FILE_MULTIBLOCK_READ_COUNT is one of the parameters you can use to minimize I/O during table scans. It specifies the maximum number of blocks read in one I/O operation during a sequential scan. The total number of I/Os needed to perform a full table scan depends on such factors as the size of the table, the multiblock read count, and whether parallel execution is being utilized for the operation. 

Online transaction processing (OLTP) and batch environments typically have values in the range of 4 to 16 for this parameter. DSS and data warehouse environments tend to benefit most from maximizing the value of this parameter. The optimizer is more likely to choose a full table scan over an index if the value of this parameter is high. 


* See 6) and 7) below. 

The Oracle optimizer chooses the best plan and executes the query according to that plan. It is common to ask why Oracle does not use an index even though the table has one, and instead performs a full table scan. There are several reasons why the optimizer chooses a full table scan: 

1) The table has no indexes. 

2) The table has indexes, but they are not appropriate for the query. For example, the table has a normal B-tree index, but the column used in the WHERE clause is wrapped in a function. 

3) The query accesses a large amount of data. The table has indexes, but the query selects almost all of its rows, in which case the optimizer may choose a full table scan. 

4) The column order of a composite index is not appropriate. The table has a composite index, but the WHERE clause uses the trailing columns of the index rather than the leading column. 

5) The table data is skewed. For example, a GENDER column contains the value 'M' 10,000 times but the value 'F' only 10 times. 

6) The table is small. If a table can be read in a single I/O call, then a full table scan might be cheaper than an index range scan. The size of a single I/O call is defined by the DB_FILE_MULTIBLOCK_READ_COUNT parameter and is expressed in blocks. Check it with: 

SQL> show parameter DB_FILE_MULTIBLOCK_READ_COUNT 

NAME                          TYPE    VALUE 
----------------------------- ------- ----- 
db_file_multiblock_read_count integer 16 

7) High degree of parallelism. A high degree of parallelism skews the optimizer toward full table scans. 

8) If the query applies no filtering, a full table scan is the choice. 


* If an index has poor cardinality (i.e., more than about 4% of rows share the same index key), it will perform poorly, and it will usually be faster to perform a full table scan. For example, table SALES has an index on the column PAYMENT_METHOD, which can contain values such as COD, CREDIT, CHEQUE, and CASH. The statement 

SELECT ... FROM sales WHERE payment_method = 'CASH' 

will probably perform so badly that you are better off without the index. 


* Oracle uses the full table scan because it assumes that it will have to read a certain part of the table's blocks in any case. 
Reference: Oracle Database Reference, DB_FILE_MULTIBLOCK_READ_COUNT 
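The clustering-factor argument behind option B can be checked in the data dictionary; the SALES table name here is illustrative:

```sql
-- A clustering factor close to the table's block count means rows are
-- well ordered with respect to the index; a value close to NUM_ROWS
-- means an index range scan would visit many different blocks, making
-- a full table scan relatively cheaper.
SELECT i.index_name, i.clustering_factor, t.blocks, t.num_rows
FROM   user_indexes i
JOIN   user_tables  t ON t.table_name = i.table_name
WHERE  t.table_name = 'SALES';
```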

Q25. You are administering a database supporting an OLTP workload. A new module was added to one of the applications recently, and you notice that its SQL statements are highly resource-intensive in terms of CPU, I/O, and temporary space. You created a SQL Tuning Set (STS) containing all resource-intensive SQL statements. You want to analyze the entire workload captured in the STS, so you plan to run the STS through the SQL Tuning Advisor. 

Which two recommendations can you get? 

A. Combining similar indexes into a single index 

B. Implementing SQL profiles for the statements 

C. Syntactic and semantic restructuring of SQL statements 

D. Dropping unused or invalid indexes. 

E. Creating invisible indexes for the workload 

F. Creating composite indexes for the workload 

Answer: C,F 

Explanation: The output of the SQL Tuning Advisor is in the form of advice or recommendations, along with a rationale for each recommendation and its expected benefit. The recommendations relate to the collection of statistics on objects, creation of new indexes (F), restructuring of the SQL statement (C), or creation of a SQL profile. You can choose to accept the recommendations to complete the tuning of the SQL statements. 

Reference: Oracle Database Performance Tuning Guide 11g , SQL Tuning Advisor 
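Running the advisor over an STS can be sketched as follows; the STS name MY_STS and the task name are illustrative:

```sql
DECLARE
  l_task VARCHAR2(64);
BEGIN
  -- Create and execute a tuning task covering the whole SQL Tuning Set.
  l_task := DBMS_SQLTUNE.CREATE_TUNING_TASK(
              sqlset_name => 'MY_STS',
              task_name   => 'MY_STS_TUNING_TASK');
  DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => l_task);
END;
/

-- Review the recommendations (profiles, new indexes, restructuring advice).
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('MY_STS_TUNING_TASK') FROM DUAL;
```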

Q26. Examine Exhibit 1 to view the query and its execution plan. 

Examine Exhibit 2 to view the structure and indexes for the EMPLOYEES and DEPARTMENTS tables. 

Examine Exhibit 3 to view the initialization parameters for the instance. 

Why is a sort-merge join chosen as the access method? 

A. Because the OPTIMIZER_MODE parameter is set to ALL_ROWS. 

B. Because of an inequality condition. 

C. Because the data is not sorted in the LAST_NAME column of the EMPLOYEES table 

D. Because of the LIKE operator used in the query to filter out records 




B: There is not an inequality condition in the statement. 

C: Merge joins are beneficial if the columns are sorted. 

D: All regular joins should be able to use Hash or Sort Merge, except LIKE, !=, and NOT ... joins. 



* A sort merge join is a join optimization method where two tables are sorted and then joined. 


* A "sort merge" join is performed by sorting the two data sets to be joined according to the join keys and then merging them together. The merge is very cheap, but the sort can be prohibitively expensive, especially if the sort spills to disk. The cost of the sort can be lowered if one of the data sets can be accessed in sorted order via an index, although accessing a high proportion of a table's blocks via an index scan can also be very expensive in comparison to a full table scan. 


* Sort merge joins are useful when the join condition between two tables is an inequality condition (but not a nonequality) like <, <=, >, or >=. Sort merge joins perform better than nested loop joins for large data sets. You cannot use hash joins unless there is an equality join condition. 



* When the Optimizer Uses Sort Merge Joins 

The optimizer can choose a sort merge join over a hash join for joining large amounts of 

data if any of the following conditions are true: 

/ The join condition between two tables is not an equi-join. 

/ Because of sorts already required by other operations, the optimizer finds it is cheaper to 

use a sort merge than a hash join. 

Reference: Oracle Database Performance Tuning Guide , Sort Merge Joins 
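An illustrative non-equijoin on the EMPLOYEES and DEPARTMENTS tables, of the kind that steers the optimizer toward a sort merge join (a hash join is unavailable without an equality condition):

```sql
-- The inequality join condition rules out a hash join; the optimizer
-- can instead sort both inputs on department_id and merge them.
SELECT e.last_name, d.department_name
FROM   employees   e
JOIN   departments d
  ON   e.department_id <= d.department_id;
```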

Q27. Examine the Exhibit. Given two sets of parallel execution processes, SS1 and SS2, which is true? 

A. Each process SS1 reads some of the rows from the CUSTOMERS table and sends all the rows it reads to each process in SS2. 

B. Each process in SS1 reads all the rows from the CUSTOMERS table and distributes the rows evenly among the processes in SS2. 

C. Each process in SS1 reads some of the rows from the SALES table and sends all the rows it reads to each process in SS2. 

D. Each process in SS1 reads all the rows from the SALES table and distributes the rows evenly among the processes in SS2. 

E. Each process in SS1 reads some of the rows from the SALES table and distributes the rows evenly among the processes in SS2. 

F. Each process in SS1 reads some of the rows from the CUSTOMERS table and distributes the rows evenly among the processes in SS2. 





 The execution starts with line 16 (accessing the SALES table), followed by line 15. 


* PX BLOCK ITERATOR: The PX BLOCK ITERATOR row source represents the splitting up of the table EMP2 into pieces so as to divide the scan workload between the parallel scan slaves. The PX SEND and PX RECEIVE row sources represent the pipe that connects the two slave sets as rows flow up from the parallel scan, get repartitioned through the HASH table queue, and are then read by and aggregated on the top slave set. 

Q28. You are administering a database where an application frequently executes identical SQL statements with the same syntax. 

How will you optimize the query results without retrieving data blocks from storage? 

A. By setting the CURSOR_SHARING parameter to FORCE. 

B. By using the bind variables and setting the CURSOR_SHARING parameter to EXACT. 

C. By using the CACHE hint to pin the queries in the library cache 

D. By ensuring that RESULT_CACHE_MODE parameter is set to MANUAL and using the RESULT_CACHE hint in the queries. 

E. By creating a SQL plan baseline for the identical statements. 

Answer: D 


Explanation: As its name suggests, the query result cache is used to store the results of SQL queries for re-use in subsequent executions. By caching the results of queries, Oracle can avoid having to repeat the potentially time-consuming and intensive operations that generated the result set in the first place (for example, sorting/aggregation, physical I/O, joins, and so on). The cached results are available across the instance (that is, for use by sessions other than the one that first executed the query) and are maintained by Oracle in a dedicated area of memory. Unlike homegrown solutions using associative arrays or global temporary tables, the query result cache is completely transparent to applications. It is also maintained for consistency automatically, unlike our own caching solutions. 


RESULT_CACHE_MODE specifies when a ResultCache operator is spliced into a query's execution plan. Values: 

MANUAL: The ResultCache operator is added only when the query is annotated (that is, with a RESULT_CACHE hint). 

FORCE: The ResultCache operator is added to the root of all SELECT statements (provided that it is valid to do so). If the statement contains a NO_RESULT_CACHE hint, then the hint takes precedence over the parameter setting. 


A, B: CURSOR_SHARING determines what kind of SQL statements can share the same cursors. Values: 

FORCE: Forces statements that may differ in some literals, but are otherwise identical, to share a cursor, unless the literals affect the meaning of the statement. 

SIMILAR: Causes statements that may differ in some literals, but are otherwise identical, to share a cursor, unless the literals affect either the meaning of the statement or the degree to which the plan is optimized. 

EXACT: Only allows statements with identical text to share the same cursor. 

C: The Oracle library cache is a component of the System Global Area (SGA) shared pool. Similarly to other Oracle cache structures, the point of the library cache is to reduce work – and therefore to improve performance – by caching the result of parsing and optimizing SQL or PL/SQL so that subsequent executions of the same SQL or PL/SQL require fewer preparatory steps to deliver a query result. 
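The behavior described for answer D can be sketched as follows, using the SALES table as an illustrative example:

```sql
-- With RESULT_CACHE_MODE = MANUAL, only hinted queries use the cache.
ALTER SESSION SET result_cache_mode = MANUAL;

-- The first execution computes and caches the result set; subsequent
-- identical executions are served from the result cache without
-- revisiting the data blocks (until a dependent table changes).
SELECT /*+ RESULT_CACHE */ cust_id, SUM(amount_sold)
FROM   sales
GROUP  BY cust_id;
```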

Q29. One of your databases supports a mixed workload. 

When monitoring SQL performance, you detect many direct path reads for full table scans. 

What are the two possible causes? 

A. Histograms statistics not available 

B. Highly selective filter on indexed columns 

C. Too many sort operations performed by queries 

D. Indexes not built on filter columns 

E. Too many similar type of queries getting executed with cursor sharing disabled 

Answer: B,D 



* The direct path read Oracle metric occurs during direct path operations, when data is asynchronously read from the database files into the PGA instead of into the SGA buffer cache. Direct reads occur under these conditions: 

-When reading from the TEMP tablespace (a sort operation) 

-When reading a parallel full-table scan (parallel query slave processes) 

-When reading a LOB segment 

* The optimizer uses a full table scan in any of the following cases: 

-Lack of Index 

-Large Amount of Data 

-Small Table 

-High Degree of Parallelism 
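Direct path read activity can be confirmed from the instance-wide wait-event statistics; a sketch:

```sql
-- System-wide waits for direct path reads (covers both 'direct path
-- read' and 'direct path read temp' in 11g).
SELECT event, total_waits, time_waited
FROM   v$system_event
WHERE  event LIKE 'direct path read%';
```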

Q30. Examine the Exhibit to view the structure of, and indexes for, the SALES table. 

The SALES table has 4,594,215 rows. The CUST_ID column has 2,079 distinct values. 

What would you do to influence the optimizer for better selectivity? 

A. Drop bitmap index and create balanced B*Tree index on the CUST_ID column. 

B. Create a height-balanced histogram for the CUST_ID column. 

C. Gather statistics for the indexes on the SALES table. 

D. Use the ALL_ROWS hint in the query. 


Explanation: OPTIMIZER_MODE establishes the default behavior for choosing an optimization approach for the instance. Values: 

-FIRST_ROWS_n: The optimizer uses a cost-based approach and optimizes with a goal of best response time to return the first n rows (where n = 1, 10, 100, or 1000). 

-FIRST_ROWS: The optimizer uses a mix of costs and heuristics to find a best plan for fast delivery of the first few rows. 

-ALL_ROWS: The optimizer uses a cost-based approach for all SQL statements in the session and optimizes with a goal of best throughput (minimum resource use to complete the entire statement). 
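Creating a histogram on CUST_ID (the usual reading of option B) can be sketched as below; the SH owner name is illustrative. With 2,079 distinct values and a maximum of 254 buckets, Oracle builds a height-balanced histogram:

```sql
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname    => 'SH',                             -- illustrative owner
    tabname    => 'SALES',
    method_opt => 'FOR COLUMNS SIZE 254 CUST_ID');  -- request a histogram
END;
/
```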