We promise that you will get through the Oracle 1Z0-117 exam with ease. Our experienced professionals are devoted to keeping the Oracle 1Z0-117 exam dumps up to date. You can put your faith in us: we offer a 100% money-back guarantee. If anything is wrong with our 1Z0-117 practice items, please contact us promptly. We have 24/7 customer service.
2016 Oct 1Z0-117 simulations
Q61. Examine the exhibit.
Which two are true concerning the execution plan?
A. No partition-wise join is used
B. A full partition-wise join is used
C. A partial partition-wise join is used
D. The SALES table is composite partitioned
Explanation: * The following example shows the execution plan for the full partition-wise
join with the sales table range partitioned by time_id, and subpartitioned by hash on cust_id.
| Id | Operation | Name | Pstart| Pstop |IN-OUT| PQ Distrib |
| 0 | SELECT STATEMENT | | | | | |
| 1 | PX COORDINATOR | | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10001 | | | P->S | QC (RAND) |
|* 3 | FILTER | | | | PCWC | |
| 4 | HASH GROUP BY | | | | PCWP | |
| 5 | PX RECEIVE | | | | PCWP | |
| 6 | PX SEND HASH | :TQ10000 | | | P->P | HASH |
| 7 | HASH GROUP BY | | | | PCWP | |
| 8 | PX PARTITION HASH ALL | | 1 | 16 | PCWC | |
|* 9 | HASH JOIN | | | | PCWP | |
| 10 | TABLE ACCESS FULL | CUSTOMERS | 1 | 16 | PCWP | |
| 11 | PX PARTITION RANGE ITERATOR| | 8 | 9 | PCWC | |
|* 12 | TABLE ACCESS FULL | SALES | 113 | 144 | PCWP | |
Predicate Information (identified by operation id):
3 - filter(COUNT(SYS_OP_CSR(SYS_OP_MSR(COUNT(*)),0))>100)
9 - access("S"."CUST_ID"="C"."CUST_ID")
12 - filter("S"."TIME_ID"<=TO_DATE(' 1999-10-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "S"."TIME_ID">=TO_DATE(' 1999-07-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
* Full partition-wise joins can occur if two tables that are co-partitioned on the same key are joined in a query. The tables can be co-partitioned at the partition level, or at the subpartition level, or at a combination of partition and subpartition levels. Reference partitioning is an easy way to guarantee co-partitioning. Full partition-wise joins can be executed in serial and in parallel.
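A query of the shape behind the plan above can be sketched as follows; this is a hedged illustration using the SH sample-schema names (SALES range-partitioned on time_id and hash-subpartitioned on cust_id, CUSTOMERS hash-partitioned on cust_id with a matching partition count), not the exam exhibit itself:

```sql
-- Because both tables are (sub)partitioned on the join key cust_id with the
-- same hash partition count, each hash subpartition of SALES joins only its
-- matching partition of CUSTOMERS (a full partition-wise join).
SELECT c.cust_last_name, COUNT(*)
FROM   sales s, customers c
WHERE  s.cust_id = c.cust_id
AND    s.time_id BETWEEN TO_DATE('01-JUL-1999', 'DD-MON-YYYY')
                     AND TO_DATE('01-OCT-1999', 'DD-MON-YYYY')
GROUP  BY c.cust_last_name
HAVING COUNT(*) > 100;
```

The time_id range predicate is what produces the additional range pruning (PX PARTITION RANGE ITERATOR, Pstart 8 / Pstop 9) on SALES in the plan.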
Reference: Oracle Database VLDB and Partitioning Guide, Full Partition-Wise Joins: Composite - Single-Level
Q62. Examine the Exhibit 1 to view the structure of and indexes for EMPLOYEES and DEPARTMENTS tables.
Which three statements are true regarding the execution plan?
A. The VIEW operator collects all rows from a query block before they can be processed by higher operations in the plan.
B. The in-line query in the select list is processed as a view and then joined.
C. The optimizer pushes the equality predicate into the view to satisfy the join condition.
D. The optimizer chooses sort-merge join because sorting is required for the join equality predicate.
E. The optimizer chooses sort-merge join as a join method because an equality predicate is used for joining the tables.
Not D, not E:
Sort-merge joins are typically chosen for non-equijoin (inequality) conditions, and there is no clause in this SQL that requires a sort.
Note: The optimizer may choose a sort merge join over a hash join for joining large amounts of data when any of the following conditions is true:
The join condition between two tables is not an equijoin, that is, uses an inequality condition such as <, <=, >, or >=.
Because of sorts required by other operations, the optimizer finds it cheaper to use a sort merge.
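To illustrate the first condition above, the following is a hedged sketch of a band (non-equijoin) condition using the HR sample-schema tables; because hash joins require an equality condition, the optimizer may fall back to a sort-merge join here:

```sql
-- Inequality join: no equijoin condition exists, so a hash join is not
-- available and a sort-merge join is a likely choice.
SELECT e.employee_id, j.job_title
FROM   employees e
JOIN   jobs j
ON     e.salary BETWEEN j.min_salary AND j.max_salary;
```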
Q63. Which four types of column filtering may benefit from partition pruning when accessing tables via partitioned indexes?
A. Equality operators on List-Partitioned Indexes
B. Not Equal operators on Global Hash-Partitioned Indexes
C. Equality operators on System-Partitioned Tables
D. In-List operators on Range-Partitioned Indexes
E. Not Equal operators on Local Hash-Partitioned Indexes
F. Equality operators on Range-Partitioned Indexes
G. Equality operators on Hash-Partitioned Indexes
Explanation: Oracle Database prunes partitions when you use range, LIKE, equality (A, F), and IN-list (D) predicates on the range or list partitioning columns, and when you use equality (G) and IN-list predicates on the hash partitioning columns.
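As an illustration of the rule above, assume a SALES table range-partitioned on time_id (a sketch; the table and column names are from the SH sample schema, not the exam exhibit):

```sql
-- An equality predicate on the range partitioning key prunes to one partition;
-- an IN-list predicate prunes to the listed partitions. Pruning shows up as
-- narrowed Pstart/Pstop values in the execution plan.
SELECT SUM(amount_sold) FROM sales
WHERE  time_id = DATE '1999-07-01';

SELECT SUM(amount_sold) FROM sales
WHERE  time_id IN (DATE '1999-07-01', DATE '1999-10-01');
```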
Reference: Oracle Database VLDB and Partitioning Guide 11g, Information that can be Used for Partition Pruning
Q64. Examine the following anonymous PL/SQL block:
Which two are true concerning the use of this code?
A. The user executing the anonymous PL/SQL code must have the CREATE JOB system privilege.
B. ALTER SESSION ENABLE PARALLEL DML must be executed in the session prior to executing the anonymous PL/SQL code.
C. All chunks are committed together once all tasks updating all chunks are finished.
D. The user executing the anonymous PL/SQL code requires execute privilege on the DBMS_JOB package.
E. The user executing the anonymous PL/SQL code requires privilege on the DBMS_SCHEDULER package.
F. Each chunk will be committed independently as soon as the task updating that chunk is finished.
Explanation: A (not D, not E):
To use DBMS_PARALLEL_EXECUTE to run tasks in parallel, your schema will need the
CREATE JOB system privilege.
F (not C): DBMS_PARALLEL_EXECUTE now provides the ability to break up a large table
according to a variety of criteria, from ROWID ranges to key values and user-defined
methods. You can then run a SQL statement or a PL/SQL block against these different
“chunks” of the table in parallel, using the database scheduler to manage the processes
running in the background. Error logging, automatic retries, and commits are integrated into
the processing of these chunks.
The DBMS_PARALLEL_EXECUTE package allows a workload associated with a base table to be broken down into smaller chunks which can be run in parallel. This process involves several distinct stages:
1. Create a task.
2. Split the workload into chunks (CREATE_CHUNKS_BY_ROWID, CREATE_CHUNKS_BY_NUMBER_COL, or CREATE_CHUNKS_BY_SQL).
3. Run the task (RUN_TASK, or a user-defined framework with task control).
4. Check the task status.
5. Drop the task.
The workload is associated with a base table, which can be split into subsets or chunks of rows. There are three methods of splitting the workload into chunks: CREATE_CHUNKS_BY_ROWID, CREATE_CHUNKS_BY_NUMBER_COL, and CREATE_CHUNKS_BY_SQL. The chunks associated with a task can be dropped using the DROP_CHUNKS procedure.
CREATE_CHUNKS_BY_ROWID The CREATE_CHUNKS_BY_ROWID procedure splits the data by rowid into chunks specified by the CHUNK_SIZE parameter. If the BY_ROW parameter is set to TRUE, the CHUNK_SIZE refers to the number of rows, otherwise it refers to the number of blocks.
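The stages above can be sketched in an anonymous block like the following; the task name, schema, table, and UPDATE statement are illustrative assumptions, not the code from the exam exhibit:

```sql
BEGIN
  -- 1. Create a task (requires the CREATE JOB system privilege to run).
  DBMS_PARALLEL_EXECUTE.CREATE_TASK(task_name => 'update_task');

  -- 2. Split the workload into rowid-range chunks of ~10,000 rows each.
  DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_ROWID(
    task_name   => 'update_task',
    table_owner => 'HR',
    table_name  => 'EMPLOYEES',
    by_row      => TRUE,
    chunk_size  => 10000);

  -- 3. Run the task; :start_id and :end_id are bound to each chunk's rowid
  -- range. Each chunk is committed independently as its job finishes.
  DBMS_PARALLEL_EXECUTE.RUN_TASK(
    task_name      => 'update_task',
    sql_stmt       => 'UPDATE hr.employees SET salary = salary * 1.1
                       WHERE rowid BETWEEN :start_id AND :end_id',
    language_flag  => DBMS_SQL.NATIVE,
    parallel_level => 4);

  -- 4./5. Status can be checked in USER_PARALLEL_EXECUTE_TASKS; then drop.
  DBMS_PARALLEL_EXECUTE.DROP_TASK('update_task');
END;
/
```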
Reference: TECHNOLOGY: PL/SQL Practices, On Working in Parallel
Q65. Which statement is true about an automatic SQL task?
A. It will attempt to tune the currently running SQL statements that are highly resource intensive.
B. It will automatically implement new SQL profiles for the statements that have existing SQL profiles.
C. It will attempt to tune all-long-running queries that have existing SQL profiles.
D. It will automatically implement SQL profiles if a three-fold benefit can be achieved and automatic profile implementation is enabled.
E. It will tune all the top SQL statements from AWR irrespective of the time it takes to complete the task in a maintenance window.
Explanation: Optionally, implements the SQL profiles provided they meet the criteria of threefold performance improvement
The database considers other factors when deciding whether to implement the SQL profile. For example, the database does not implement a profile when the objects referenced in the statement have stale optimizer statistics. SQL profiles that have been implemented automatically show type is AUTO in the DBA_SQL_PROFILES view. If the database uses SQL plan management, and if a SQL plan baseline exists for the SQL statement, then the database adds a new plan baseline when creating the SQL profile. As a result, the optimizer uses the new plan immediately after profile creation.
Not E: Oracle Database automatically runs SQL Tuning Advisor on selected high-load SQL
statements from the Automatic Workload Repository (AWR) that qualify as tuning
candidates. This task, called Automatic SQL Tuning, runs in the default maintenance
windows on a nightly basis. By default, automatic SQL tuning runs for at most one hour.
After automatic SQL tuning begins, the database performs the following steps:
. Identifies SQL candidates in the AWR for tuning Oracle Database analyzes statistics in AWR and generates a list of potential SQL statements that are eligible for tuning. These statements include repeating high-load statements that have a significant impact on the database. The database tunes only SQL statements that have an execution plan with a high potential for improvement. The database ignores recursive SQL and statements that have been tuned recently (in the last month), parallel queries, DML, DDL, and SQL statements with performance problems caused by concurrency issues. The database orders the SQL statements that are selected as candidates based on their performance impact. The database calculates the impact by summing the CPU time and the I/O times in AWR for the selected statement in the past week.
. Tunes each SQL statement individually by calling SQL Tuning Advisor During the tuning process, the database considers and reports all recommendation types, but it can implement only SQL profiles automatically.
. Tests SQL profiles by executing the SQL statement
. Optionally, implements the SQL profiles provided they meet the criteria of threefold performance improvement.The database considers other factors when deciding whether to implement the SQL profile. For example, the database does not implement a profile when the objects referenced in the statement have stale optimizer statistics. SQL profiles that have been implemented automatically show type is AUTO in the DBA_SQL_PROFILES view.If the database uses SQL plan management, and if a SQL plan baseline exists for the SQL statement, then the database adds a new plan baseline when creating the SQL profile. As a result, the optimizer uses the new plan immediately after profile creation.
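The "automatic profile implementation is enabled" condition in option D is controlled by a task parameter; the following is a sketch using the DBMS_AUTO_SQLTUNE API (available from 11g Release 2 onward):

```sql
-- Allow the automatic SQL tuning task to implement SQL profiles on its own
-- (only profiles meeting the threefold-improvement criterion are accepted).
BEGIN
  DBMS_AUTO_SQLTUNE.SET_AUTO_TUNING_TASK_PARAMETER(
    parameter => 'ACCEPT_SQL_PROFILES',
    value     => 'TRUE');
END;
/
```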
Reference: Oracle Database Performance Tuning Guide, Automatic SQL Tuning
Latest 1Z0-117 exam engine:
Q66. Examine the parallelism parameters for your instance:
Which statements are true about the execution of the query?
A. It will execute in parallel only if the LINEITEM table has a dictionary DOP defined.
B. DOP for the statement is determined by the dictionary DOP of the accessed objects.
C. It is generated to execute in parallel.
D. It will execute in parallel only if the estimated execution time is 10 or more seconds.
E. DOP for the statement is calculated automatically.
F. It may execute serially.
Explanation: F (not C): it may execute serially; see the note below.
Not A, not B: dictionary DOP is not used with the PARALLEL (AUTO) hint.
Not D: the default value of PARALLEL_MIN_TIME_THRESHOLD is 30 seconds, not 10.
* parallel_min_percent PARALLEL_MIN_PERCENT operates in conjunction with PARALLEL_MAX_SERVERS and PARALLEL_MIN_SERVERS. It lets you specify the minimum percentage of parallel execution processes (of the value of PARALLEL_MAX_SERVERS) required for parallel execution. Setting this parameter ensures that parallel operations will not execute sequentially unless adequate resources are available. The default value of 0 means that no minimum percentage of processes has been set.
Consider the following settings:
PARALLEL_MIN_PERCENT = 50 PARALLEL_MIN_SERVERS = 5 PARALLEL_MAX_SERVERS = 10
If 8 of the 10 parallel execution processes are busy, only 2 processes are available. If you then request a query with a degree of parallelism of 8, the minimum 50% will not be met.
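The hinted behavior discussed above can be sketched as follows; the table name is illustrative, and the comments describe the general Auto DOP behavior rather than the specific exhibit settings:

```sql
-- The PARALLEL(AUTO) hint requests automatic DOP for this statement,
-- regardless of the dictionary DOP on the accessed objects.
SELECT /*+ PARALLEL(AUTO) */ COUNT(*) FROM lineitem;

-- The optimizer computes a DOP, but if the estimated serial execution time
-- falls below PARALLEL_MIN_TIME_THRESHOLD, the statement may run serially.
```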
Q67. Examine the query:
The RESULT_CACHE_MODE parameter is set to MANUAL for the database.
Which two statements are true about the usage of the result cache?
A. The SQL runtime environment checks whether the query result is cached in the result cache; if the result exists, the optimizer fetches the result from it.
B. The SQL runtime environment does check for the query result in the result cache because the RESULT_CACHE_MODE parameter is set to MANUAL.
C. The SQL runtime environment checks for the query result in the result cache only when the query is executed for the second time.
D. If the query result does not exist in the cache and the query is executed, the result is generated as output, and also stored in the result cache.
RESULT_CACHE_MODE: the result cache can be enabled in three ways: via a hint, ALTER SESSION, or ALTER SYSTEM. The default is MANUAL, which means that we need to explicitly request caching via the RESULT_CACHE hint.
As its name suggests, the query result cache is used to store the results of SQL queries for re-use in subsequent executions. By caching the results of queries, Oracle can avoid having to repeat the potentially time-consuming and intensive operations that generated the resultset in the first place (for example, sorting/aggregation, physical I/O, joins etc). The cache results themselves are available across the instance (i.e. for use by sessions other than the one that first executed the query) and are maintained by Oracle in a dedicated area of memory. Unlike our homegrown solutions using associative arrays or global temporary tables, the query result cache is completely transparent to our applications. It is also maintained for consistency automatically, unlike our own caching programs.
RESULT_CACHE_MODE specifies when a ResultCache operator is spliced into a query's execution plan.
MANUAL - The ResultCache operator is added only when the query is annotated (that is, hinted).
FORCE - The ResultCache operator is added to the root of all SELECT statements (provided that it is valid to do so). For the FORCE setting, if the statement contains a NO_RESULT_CACHE hint, then the hint takes precedence over the parameter setting.
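With the MANUAL setting described above, a query opts in to the result cache via the hint; a sketch (the query and table are illustrative):

```sql
-- Under RESULT_CACHE_MODE = MANUAL, only hinted queries get a ResultCache
-- operator. The first execution computes and stores the result; subsequent
-- executions can fetch it from the cache instead of re-running the query.
SELECT /*+ RESULT_CACHE */ department_id, AVG(salary)
FROM   employees
GROUP  BY department_id;
```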
Q68. You are administering a database that supports an OLTP application. To set statistics preferences, you issued the following command:
SQL> EXEC DBMS_STATS.SET_GLOBAL_PREFS('ESTIMATE_PERCENT', '9');
What will be the effect of executing this procedure?
A. It will influence the gathering of statistics for a table based on the value specified for ESTIMATE_PERCENT, provided no table preferences for the same table exist.
B. It will influence dynamic sampling for a query to estimate the statistics based on ESTIMATE_PERCENT.
C. The automatic statistics gathering job running in the maintenance window will use global preferences unless table preferences for the same table exist.
D. New objects created will use global preference even if table preferences are specified.
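The precedence at issue in these options (table preferences override global preferences) can be sketched as follows; the schema and table names are illustrative:

```sql
-- Global preference: the automatic stats job samples 9% by default.
EXEC DBMS_STATS.SET_GLOBAL_PREFS('ESTIMATE_PERCENT', '9');

-- Table preference: for this one table, 20% wins over the global 9%.
EXEC DBMS_STATS.SET_TABLE_PREFS('HR', 'EMPLOYEES', 'ESTIMATE_PERCENT', '20');

-- GET_PREFS resolves the effective value for a given table.
SELECT DBMS_STATS.GET_PREFS('ESTIMATE_PERCENT', 'HR', 'EMPLOYEES') FROM dual;
```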
Q69. Which two are prerequisites for enabling star transformation on queries?
A. The STAR_TRANSFORMATION_ENABLED parameter should be set to TRUE or TEMP_DISABLE.
B. A B-tree index should be built on each of the foreign key columns of the fact table(s).
C. A bitmap index should be built on each of the primary key columns of the fact table(s).
D. A bitmap index should be built on each of the foreign key columns of the fact table(s).
E. A bitmap index must exist on all the columns that are used in the filter predicates of the query.
Explanation: A: Enabling the transformation
D: Star transformation is essentially about adding subquery predicates corresponding to the constraint dimensions. These subquery predicates are referred to as bitmap semi-join predicates. The transformation is performed when there are indexes on the fact join columns (s.timeid, s.custid...). By driving bitmap AND and OR operations (bitmaps can be from bitmap indexes or generated from regular B-Tree indexes) of the key values supplied by the subqueries, only the relevant rows from the fact table need to be retrieved. If the filters on the dimension tables filter out a lot of data, this can be much more efficient than a full table scan on the fact table. After the relevant rows have been retrieved from the fact table, they may need to be joined back to the dimension tables, using the original predicates. In some cases, the join back can be eliminated.
Star transformation is controlled by the star_transformation_enabled parameter. The parameter takes 3 values.
TRUE - The Oracle optimizer performs transformation by identifying fact and constraint dimension tables automatically. This is done in a cost-based manner, i.e. the transformation is performed only if the cost of the transformed plan is lower than the non-transformed plan. Also the optimizer will attempt temporary table transformation automatically whenever materialization improves performance.
FALSE - The transformation is not tried.
TEMP_DISABLE - This value has similar behavior as TRUE except that temporary table transformation is not tried.
The default value of the parameter is FALSE. You have to change the parameter value and create indexes on the joining columns of the fact table to take advantage of this transformation.
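The two prerequisites above (enabling the parameter and indexing the fact table's foreign key columns) can be sketched as follows; the index and table names are illustrative, based on the SH sample schema:

```sql
-- Prerequisite 1: enable the transformation (default is FALSE).
ALTER SESSION SET star_transformation_enabled = TRUE;

-- Prerequisite 2: bitmap indexes on the fact table's foreign key columns,
-- so the optimizer can combine them with bitmap AND/OR operations.
CREATE BITMAP INDEX sales_cust_bix ON sales (cust_id) LOCAL;
CREATE BITMAP INDEX sales_time_bix ON sales (time_id) LOCAL;
```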
Reference: Optimizer Transformations: Star Transformation
Q70. Which two tasks are performed during the optimization stage of a SQL statement?
A. Evaluating the expressions and conditions in the query
B. Checking the syntax and analyzing the semantics of the statement
C. Separating the clauses of the SQL statement into structures that can be processed
D. Inspecting the integrity constraints and optimizing the query based on this metadata
E. Gathering the statistics before creating the execution plan for the statement
* Oracle SQL is parsed before execution, and a hard parse includes these steps:
. Loading into shared pool - The SQL source code is loaded into RAM for parsing (the "hard" parse step).
. Syntax parse - Oracle parses the syntax to check for misspelled SQL keywords.
. Semantic parse - Oracle verifies all table & column names from the dictionary and checks to see if you are authorized to see the data.
. Query transformation - If enabled (query_rewrite=true), Oracle will transform complex SQL into simpler, equivalent forms and replace aggregations with materialized views, as appropriate.
. Optimization - Oracle then creates an execution plan, based on your schema statistics (or perhaps with statistics from dynamic sampling in 10g).
. Create executable - Oracle builds an executable file with native file calls to service the SQL query.
The parsing process performs two main functions:
o Syntax check: is the statement a valid one? Does it make sense given the SQL grammar documented in the SQL Reference Manual? Does it follow all of the rules for SQL?
o Semantic analysis: going beyond the syntax, is the statement valid in light of the objects in the database (do the tables and columns referenced exist)? Do you have access to the objects, and are the proper privileges in place? Are there ambiguities in the statement? For example, if there are two tables T1 and T2 and both have a column X, the query "select X from T1, T2 where ..." is ambiguous; we don't know which table to get X from. And so on.
So, you can think of parsing as basically a two-step process: a syntax check to verify the validity of the statement, and a semantic check to ensure the statement can execute properly.
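The ambiguity mentioned above is caught during semantic analysis; a sketch (the tables T1, T2 and their columns are hypothetical):

```sql
-- Both T1 and T2 have a column X, so the unqualified reference is rejected
-- at parse time with ORA-00918: column ambiguously defined.
SELECT X FROM T1, T2 WHERE T1.ID = T2.ID;

-- Qualifying the column resolves the ambiguity.
SELECT T1.X FROM T1, T2 WHERE T1.ID = T2.ID;
```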
Reference: Oracle hard-parse vs. soft parse