Mohamed Houri’s Oracle Notes

November 28, 2013

Why?

Filed under: Oracle — hourim @ 1:01 pm

Why? This is a fundamental question. And in the context of my work (or actually my hobby, so to speak) I often ask "Why?". Yesterday I was handed a query to tune. The query came with the following sub-optimal execution plan:

---------------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name                    | Starts | E-Rows | A-Rows |   A-Time   |
---------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |                         |      1 |        |     10 |00:02:13.09 |
|   1 |  SORT ORDER BY                      |                         |      1 |      8 |     10 |00:02:13.09 |
|   2 |   NESTED LOOPS                      |                         |      1 |        |     10 |00:02:13.06 |
|   3 |    NESTED LOOPS                     |                         |      1 |      8 |     10 |00:02:13.06 |
|   4 |     NESTED LOOPS                    |                         |      1 |      8 |     10 |00:02:13.06 |
|*  5 |      HASH JOIN SEMI                 |                         |      1 |      8 |     10 |00:02:13.06 |
|   6 |       JOIN FILTER CREATE            | :BF0000                 |      1 |        |   1469 |00:00:02.06 |
|   7 |        NESTED LOOPS                 |                         |      1 |        |   1469 |00:00:00.17 |
|   8 |         NESTED LOOPS                |                         |      1 |    307 |   5522 |00:00:00.11 |
|*  9 |          TABLE ACCESS BY INDEX ROWID| T2                      |      1 |    316 |   5522 |00:00:00.07 |
|* 10 |           INDEX RANGE SCAN          | T2_OOST_START_DATE_I    |      1 |   1033 |   8543 |00:00:00.03 |
|* 11 |          INDEX RANGE SCAN           | T1_OBST_OOST_DK_I       |   5522 |      1 |   5522 |00:00:00.08 |
|* 12 |         TABLE ACCESS BY INDEX ROWID | T1                      |   5522 |      1 |   1469 |00:00:00.13 |
|  13 |       VIEW                          | VW_SQ_1                 |      1 |  64027 |   1405 |00:00:07.82 |
|* 14 |        FILTER                       |                         |      1 |        |   1405 |00:00:07.82 |
|  15 |         JOIN FILTER USE             | :BF0000                 |      1 |  64027 |   1405 |00:00:07.82 |
|  16 |          PARTITION REFERENCE ALL    |                         |      1 |  64027 |  64027 |00:01:48.22 |
|* 17 |           HASH JOIN                 |                         |     52 |  64027 |  64027 |00:02:03.37 | --> spot this
|  18 |            TABLE ACCESS FULL        | T4                      |     52 |  64027 |  64027 |00:00:00.34 |
|* 19 |            TABLE ACCESS FULL        | T5                      |     41 |    569K|   5555K|00:02:08.32 | --> spot this
|  20 |      TABLE ACCESS BY INDEX ROWID    | T3                      |     10 |      1 |     10 |00:00:00.01 |
|* 21 |       INDEX UNIQUE SCAN             | T3_CHP_PK               |     10 |      1 |     10 |00:00:00.01 |
|* 22 |     INDEX UNIQUE SCAN               | T3_CHP_PK               |     10 |      1 |     10 |00:00:00.01 |
|  23 |    TABLE ACCESS BY INDEX ROWID      | T3                      |     10 |      1 |     10 |00:00:00.01 |
---------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   5 - access("ITEM_1"="OBST"."OBST_ID")
       filter("ITEM_2">"OBST"."START_DATE")
   9 - filter(("OOST"."CHP_ID_START" IS NOT NULL AND "OOST"."CHP_ID_END" IS NOT NULL))
  10 - access("OOST"."START_DATE">=:LD_FROM_DATE AND "OOST"."START_DATE"<=:LD_TO_DATE)
  11 - access("OBST"."OOST_ID"="OOST"."OOST_ID")
  12 - filter(("OBST"."OBS_STAT"=2 AND ADD_MONTHS(INTERNAL_FUNCTION("OBST"."MOD_DATE"),13)>=:LD_CURR_DATE))
  14 - filter((:LN_MIN_ac_number<=:ln_max_number AND TO_DATE(:LD_FROM_DATE)<=TO_DATE(:LD_TO_DATE)))
  17 - access("OBSV"."VEH_ID"="VEH"."VEH_ID")
  19 - filter(((:LN_OPR_ID IS NULL OR "VEH"."OPR_ID"=:LN_OPR_ID) AND "VEH"."ac_number">=:LN_MIN_ac_number AND
              "VEH"."ac_number"<=:ln_max_number))
  21 - access("OOST"."CHP_ID_START"="CHP1"."CHP_ID")
  22 - access("OOST"."CHP_ID_END"="CHP2"."CHP_ID")

This query takes 2 minutes and 13 seconds to complete, of which a single operation (operation 19) takes 2 minutes and 8 seconds before feeding its parent, operation 17. At least I know where to focus my attention.

But why? Why this full table scan?

Just before getting this execution plan I had asked for statistics to be re-gathered on the tables and columns, without histograms. The statistics seem to be okay. And, by the way, it is normal for the CBO to choose a full table scan to produce 5.5 million rows, isn't it?
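For reference, here is a minimal sketch of the kind of dbms_stats call used for that re-gathering; the table name below is a placeholder, and the call was repeated for each table involved in the query:

BEGIN
   dbms_stats.gather_table_stats (ownname    => user,
                                  tabname    => 'T5',  -- placeholder: repeated for each table
                                  method_opt => 'FOR ALL COLUMNS SIZE 1', -- column stats, no histograms
                                  cascade    => true
                                 );
END;
/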

So where is the problem?

An expert eye would say: "Look, you start by manipulating 5.5 million rows to finally end up with only 10 rows; you need to start small and keep small."
So it is now time to investigate the query:

SELECT  
                obst.obst_id obstructionId
               ,oost.comment_clob CommentClob
               ,chp1.ptcar_no StartPtcar
               ,chp2.ptcar_no EndPtcar
               ,oost.track_code Track
               ,oost.start_date StartPeriod
               ,oost.end_date EndPeriod
               ,oost.doc_no RelaasId
               ,obst.status_code Status
        FROM   T1 obst
             , T2 oost
             , T3 chp1
             , T3 chp2
          where obst.oost_id      = oost.oost_id
          and oost.chp_id_start = chp1.chp_id
          and oost.chp_id_end   = chp2.chp_id
          and obst.obs_stat     = 2 
		  and add_months(obst.mod_date,13) >= :ld_curr_date
		  and oost.start_date between :ld_from_date and :ld_to_date          
          and exists (select 1
                        from T4  obsv
                           , T5  veh
                        where  obsv.veh_id = veh.veh_id
		       and (:ln_opr_id is null
                               OR veh.opr_id = :ln_opr_id
                            )
                          and  obsv.obst_id = obst.obst_id
                          and  veh.ac_number between :ln_min_number and :ln_max_number
                          and  obsv.start_date > obst.start_date
                      )          
         order by obst.obst_id;		

Since the most time-consuming operation (the full scan of table T5) sits in the EXISTS part, I commented that part out and ran the query which, unsurprisingly, came back instantaneously with about a thousand rows (1,469). When put back, the EXISTS clause would normally keep only those of the 1,469 rows that fulfill it and throw away the rest. We should therefore need to manipulate only about 1,469 records. So why this enormous 5.5 million rows?
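Concretely, the test consisted of running the same query with the whole EXISTS clause commented out, along these lines:

SELECT obst.obst_id obstructionId
       -- same select list as above
FROM   T1 obst, T2 oost, T3 chp1, T3 chp2
WHERE  obst.oost_id      = oost.oost_id
AND    oost.chp_id_start = chp1.chp_id
AND    oost.chp_id_end   = chp2.chp_id
AND    obst.obs_stat     = 2
AND    add_months(obst.mod_date,13) >= :ld_curr_date
AND    oost.start_date between :ld_from_date and :ld_to_date
-- and exists (select 1 from T4 obsv, T5 veh where ...)
ORDER BY obst.obst_id;   -- comes back instantaneously with 1,469 rows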

Back to the execution plan where I decided to focus my attention only on the following operations:

---------------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name                    | Starts | E-Rows | A-Rows |   A-Time   |
---------------------------------------------------------------------------------------------------------------
|* 17 |           HASH JOIN                 |                         |     52 |  64027 |  64027 |00:02:03.37 | 
|  18 |            TABLE ACCESS FULL        | T4                      |     52 |  64027 |  64027 |00:00:00.34 |
|* 19 |            TABLE ACCESS FULL        | T5                      |     41 |    569K|   5555K|00:02:08.32 | 
---------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
  17 - access("OBSV"."VEH_ID"="VEH"."VEH_ID")
  19 - filter(((:LN_OPR_ID IS NULL OR "VEH"."OPR_ID"=:LN_OPR_ID) AND "VEH"."ac_number">=:LN_MIN_ac_number AND
              "VEH"."ac_number"<=:ln_max_number))
 

The CBO starts by generating 5,555K rows using the filter predicate of operation 19 and passes this enormous amount of data to the HASH JOIN operation 17, which throws away the majority of those rows; only 64,027 of them survive the join condition indicated by predicate 17. It is clear that the ideal situation would have been to (a) join first and (b) filter in a second step. So why (again) has the CBO opted for the reverse: (a) filter first and (b) join afterwards? The filtering bind variables (:LN_MIN_AC_NUMBER, :LN_MAX_NUMBER) are not filtering anything because they represent the actual min(veh.ac_number) and max(veh.ac_number) values.

Why has the CBO opted for this sub-optimal execution plan? What can I do to make the CBO join first and filter afterwards? The statistics seem okay, as I said above. I spent a couple of hours searching for a way to achieve this without changing the query, but failed. This is why I asked a question on the OTN forum, where Jonathan Lewis helped me and suggested using the following hints in the subquery:

          and exists (select /*+ no_unnest push_subq */ 1
                        from T4  obsv
                           , T5  veh
                        where  obsv.veh_id = veh.veh_id
		       and (:ln_opr_id is null
                               OR veh.opr_id = :ln_opr_id
                            )
                          and  obsv.obst_id = obst.obst_id
                          and  veh.ac_number between :ln_min_number and :ln_max_number
                          and  obsv.start_date > obst.start_date
                      )                   	

Which effectively gave the following optimal execution plan:

---------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                 | Name                    | Starts | E-Rows | A-Rows |   A-Time   |
---------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                          |                         |      1 |        |      6 |00:00:00.56 |
|   1 |  SORT ORDER BY                            |                         |      1 |    254 |      6 |00:00:00.56 |
|*  2 |   HASH JOIN                               |                         |      1 |    254 |      6 |00:00:00.11 |
|   3 |    TABLE ACCESS FULL                      | T3                      |      1 |   2849 |   2849 |00:00:00.01 |
|*  4 |    HASH JOIN                              |                         |      1 |    254 |      6 |00:00:00.11 |
|   5 |     TABLE ACCESS FULL                     | T3                      |      1 |   2849 |   2849 |00:00:00.01 |
|   6 |     NESTED LOOPS                          |                         |      1 |        |      6 |00:00:00.10 |
|   7 |      NESTED LOOPS                         |                         |      1 |    254 |   5012 |00:00:00.09 |
|*  8 |       TABLE ACCESS BY INDEX ROWID         | T2                      |      1 |    262 |   5012 |00:00:00.06 |
|*  9 |        INDEX RANGE SCAN                   | T2_OOST_START_DATE_I    |      1 |    857 |   7722 |00:00:00.01 |
|* 10 |       INDEX RANGE SCAN                    | T1_OBST_OOST_DK_I       |   5012 |      1 |   5012 |00:00:00.03 |
|* 11 |      TABLE ACCESS BY INDEX ROWID          | T1                      |   5012 |      1 |      6 |00:00:00.48 |
|  12 |       NESTED LOOPS                        |                         |   1277 |        |      6 |00:00:00.46 |
|  13 |        NESTED LOOPS                       |                         |   1277 |      2 |      6 |00:00:00.46 |
|  14 |         PARTITION REFERENCE ALL           |                         |   1277 |      4 |      6 |00:00:00.46 |
|* 15 |          TABLE ACCESS BY LOCAL INDEX ROWID| T4                      |  66380 |      4 |      6 |00:00:00.43 |
|* 16 |           INDEX RANGE SCAN                | T4_OBSV_OBST_FK_I       |  66380 |     86 |      6 |00:00:00.28 |
|* 17 |         INDEX UNIQUE SCAN                 | T5_VEH_PK               |      6 |      1 |      6 |00:00:00.01 |
|* 18 |        TABLE ACCESS BY GLOBAL INDEX ROWID | T5                      |      6 |      1 |      6 |00:00:00.01 |
---------------------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("OOST"."CHP_ID_END"="CHP2"."CHP_ID")
   4 - access("OOST"."CHP_ID_START"="CHP1"."CHP_ID")
   8 - filter(("OOST"."CHP_ID_START" IS NOT NULL AND "OOST"."CHP_ID_END" IS NOT NULL))
   9 - access("OOST"."START_DATE">=TO_DATE(' 2013-11-20 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "OOST"."START_DATE"
      <=TO_DATE(' 2013-11-27 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
  10 - access("OBST"."OOST_ID"="OOST"."OOST_ID")
  11 - filter(("OBST"."OBS_STAT"=2 AND ADD_MONTHS(INTERNAL_FUNCTION("OBST"."MOD_DATE"),13)>=SYSDATE@! AND  IS NOT NULL))
  15 - filter("OBSV"."START_DATE">:B1)
  16 - access("OBSV"."OBST_ID"=:B1)
  17 - access("OBSV"."VEH_ID"="VEH"."VEH_ID")
  18 - filter(("VEH"."ac_number">=1 AND "VEH"."ac_number"<=99999))

This plan is doing exactly what I wanted it to do

---------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                 | Name                    | Starts | E-Rows | A-Rows |   A-Time   |
---------------------------------------------------------------------------------------------------------------------
|* 17 |         INDEX UNIQUE SCAN                 | T5_VEH_PK               |      6 |      1 |      6 |00:00:00.01 |
|* 18 |        TABLE ACCESS BY GLOBAL INDEX ROWID | T5                      |      6 |      1 |      6 |00:00:00.01 |
---------------------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
  17 - access("OBSV"."VEH_ID"="VEH"."VEH_ID")
  18 - filter(("VEH"."ac_number"&gt;=1 AND "VEH"."ac_number"&lt;=99999))

Joining first, using the T5 primary key index, and filtering in a second step. The Bloom filter and the Oracle internal view (VW_SQ_1) of the first plan have disappeared from the new one. The no_unnest hint stops the optimizer from unnesting the subquery into a join (which is what produced the VW_SQ_1 view), while the push_subq hint makes the filter subquery evaluate at the earliest possible point in the plan instead of as the last step.

I would be very happy if someone could let me know how to obtain the desired execution plan without any hint and without re-writing the query.

November 22, 2013

On how important is collecting statistics adequately

Filed under: Oracle — hourim @ 3:22 pm

Very recently I was handed the following sub-optimal execution plan of a classical MERGE statement between two tables, t1 and t2:

--------------------------------------------------------------------------------------------------------------------
| Id  | Operation                              | Name      | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
--------------------------------------------------------------------------------------------------------------------
|   0 | MERGE STATEMENT                        |           |   379 | 24635 |  1107  (19)| 00:00:05 |       |       |
|   1 |  MERGE                                 | T1        |       |       |            |          |       |       |
|   2 |   VIEW                                 |           |       |       |            |          |       |       |
|   3 |    SEQUENCE                            | SEQ_SEQ   |       |       |            |          |       |       |
|*  4 |     HASH JOIN OUTER                    |           |   379 | 23119 |  1107  (19)| 00:00:05 |       |       |
|   5 |      TABLE ACCESS BY GLOBAL INDEX ROWID| T2        |   372 |  8184 |     5   (0)| 00:00:01 | ROWID | ROWID |
|*  6 |       INDEX RANGE SCAN                 | AUDIT_IND |   383 |       |     3   (0)| 00:00:01 |       |       |
|   7 |      PARTITION RANGE ALL               |           |  5637K|   209M|  1046  (15)| 00:00:05 |     1 |     2 |
|   8 |       TABLE ACCESS FULL                | T1        |  5637K|   209M|  1046  (15)| 00:00:05 |     1 |     2 |
--------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
4 - access("L"."COL_SK"(+)="COL_SK")
6 - access("AUDIT_ID"=246395)

According to this execution plan there are only 379 rows to be merged. But the client was complaining about the 12 seconds this merge statement was taking and asked for an improvement. Well, it is quite clear from the supplied information that the CBO is wrongly hash joining 5,637,000 rows from the probe table t1 (operation 8) with 372 rows from the build table t2 (operation 5 and predicate 6), burning a lot of CPU at operation 4 and finally throwing away the majority of the 5,637,000 rows from table t1 that do not fulfill the join condition (predicate 4).

Here is the MERGE statement:

MERGE INTO T1 L USING
(SELECT COL_SK,LOAD_DTS
FROM T2
WHERE AUDIT_ID = 246395) H
ON (L.COL_SK = H.COL_SK)
WHEN NOT MATCHED THEN
INSERT
(COL1,COL_SK,COL2,LOAD_DTS ,ORIG,AUDIT_ID)
VALUES
(SEQ_SEQ.NEXTVAL,H.COL_SK,-1,H.LOAD_DTS,'MHO',246395);

Looking carefully at the above statement, I realized that the 372 rows come from this:

(SELECT
   COL_SK,
   LOAD_DTS
FROM T2
WHERE AUDIT_ID = 246395
) H

There are 372 values of the join column, COL_SK, to be merged. So why has the CBO not started by getting those 372 values from table t2 and then looked up the second table t1 on the join column to satisfy the join condition?

 ON (L.COL_SK = H.COL_SK)

This kind of execution path becomes even more appropriate when you know that there is, additionally, a unique index on table t1 starting with the join column:

create unique index IND_T1_UK on t1 (col_sk, other_column);

When I hinted the MERGE statement to use this unique index:

MERGE /*+ index(L IND_T1_UK) */ INTO T1 L USING
(SELECT COL_SK,LOAD_DTS
FROM T2
WHERE AUDIT_ID = 246395) H
ON (L.COL_SK = H.COL_SK)
WHEN NOT MATCHED THEN
INSERT
(COL1,COL_SK,COL2,LOAD_DTS ,ORIG,AUDIT_ID)
VALUES
(SEQ_SEQ.NEXTVAL,H.COL_SK,-1,H.LOAD_DTS,'MHO',246395);

I got the optimal plan: the plan I wanted the CBO to use without any hint.

----------------------------------------------------------------------------------------------------------------------
| Id  | Operation                              | Name        | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
----------------------------------------------------------------------------------------------------------------------
|   0 | MERGE STATEMENT                        |             |   379 | 24635 |  1116   (1)| 00:00:05 |       |       |
|   1 |  MERGE                                 | T1          |       |       |            |          |       |       |
|   2 |   VIEW                                 |             |       |       |            |          |       |       |
|   3 |    SEQUENCE                            | SEQ_SEQ     |       |       |            |          |       |       |
|   4 |     NESTED LOOPS OUTER                 |             |   379 | 23119 |  1116   (1)| 00:00:05 |       |       |
|   5 |      TABLE ACCESS BY GLOBAL INDEX ROWID| T2          |   372 |  8184 |     5   (0)| 00:00:01 | ROWID | ROWID |
|*  6 |       INDEX RANGE SCAN                 | AUDIT_IND   |   383 |       |     3   (0)| 00:00:01 |       |       |
|   7 |      TABLE ACCESS BY GLOBAL INDEX ROWID| T1          |     1 |    39 |     3   (0)| 00:00:01 | ROWID | ROWID |
|*  8 |       INDEX RANGE SCAN                 | IND_T1_UK   |     1 |       |     2   (0)| 00:00:01 |       |       |
----------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
6 - access("AUDIT_ID"=246395)
8 - access("L"."COL_SK"(+)="COL_SK")

You can easily see from the above two execution plans that the cost of the NESTED LOOPS plan (1116) is greater than the cost of the HASH JOIN plan (1107), and this is the main reason why the CBO prefers the sub-optimal HASH JOIN plan. A HASH JOIN being unable to perform index lookups on the probe table t1 via the join column col_sk, the CBO traumatized the merge statement with a costly full table scan.

Finally, solving my problem comes down to understanding why the cost of the NESTED LOOPS (NL) join is greater than the cost of the HASH JOIN.

The cost of a nested loop join is given by

Cost of acquiring data from first table +
               Cardinality of result from first table * Cost of single visit to second table
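Plugging in the figures from the hinted plan above gives a rough sanity check of that formula (this is only my back-of-the-envelope reading of the plan): the T2 access costs 5, it is expected to supply about 372 rows, and each index-driven visit to t1 costs about 3, so

    5 + 372 * 3 = 1121

which is in the same ballpark as the reported NESTED LOOPS cost of 1116.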

So, in your opinion, what piece of statistical information could play an important role in the decision the CBO makes about the NL join method? Think a little before scrolling down the page.




















It is the density of the join column (col_sk).

SQL> select
          num_distinct
         ,density
         ,num_nulls
         ,last_analyzed
         ,sample_size
         ,global_stats
         ,histogram
  from   all_tab_col_statistics
  where table_name ='T1'
  and   column_name='COL_SK';

NUM_DISTINCT    DENSITY  NUM_NULLS LAST_ANALYZED       SAMPLE_SIZE  GLO HISTOGRAM
------------ ---------- ---------- -------------------   -----------  --- -------
5,546,458       0.000000180     0   2013/11/07 22:00:31   5,546,458  YES NONE     --> spot the join column last_analyzed date

select
    table_name
    ,last_analyzed
from all_tables
where table_name = 'T1';

TABLE_NAME    LAST_ANALYZED
------------- ---------------------
T1            2013/11/21 05:00:31     --> spot the table last_analyzed date

select count(1) from (select distinct col_sk from t1);  --> 5,447,251

Two important points are to be emphasized here:

1. The table t1 was last analyzed on 21/11/2013 while the join column was last analyzed on 07/11/2013.
2. There is about a 100K difference between the real count of distinct col_sk values and the num_distinct of col_sk as reported in all_tab_col_statistics.

A deeper look at the parameters used to compute the table statistics shows that they were collected using two inadequate parameter values:

  method_opt        => null
  estimate_percent  => a fixed value

The first setting translates to: collect statistics on the table and ignore statistics on the columns. This explains the difference between the last-analyzed date of the table and that of its join column. It also explains the discrepancy between the num_distinct of the join column and its real count taken directly from table t1. My preferred value for the method_opt parameter is:

  method_opt        => 'for all columns size 1'

which collects statistics for all columns without collecting histograms.

The second parameter (estimate_percent) indicates with which sample size (precision) statistics should be gathered. From 11g onwards, my preferred value for this parameter is

  estimate_percent  => dbms_stats.auto_sample_size

This is particularly beneficial when the approximate_ndv preference is set to true:

SELECT DBMS_STATS.get_prefs('approximate_ndv') FROM dual;

DBMS_STATS.GET_PREFS('APPROXIMATE_NDV')
--------------------------------------------------------------------------------
TRUE
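If this preference is not already set to TRUE, it can be switched on globally; a sketch (valid from 11g onwards):

SQL> exec dbms_stats.set_global_prefs('APPROXIMATE_NDV', 'TRUE');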

Back to my MERGE problem. When statistics were re-gathered on table t1 using the above adequate dbms_stats parameters

BEGIN
   dbms_stats.gather_table_stats  (ownname          => user,
                                   tabname          => 'T1',
                                   estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
                                   cascade          => true,
                                   method_opt       => 'FOR ALL COLUMNS SIZE 1'                          
                                  );
END;
/

the CBO automatically started selecting the optimal plan, using the NESTED LOOPS join, as shown via the following execution plan taken from memory:


----------------------------------------------------------------------------------------------------------------------
| Id  | Operation                              | Name        | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
----------------------------------------------------------------------------------------------------------------------
|   0 | MERGE STATEMENT                        |             |       |       |  1078 (100)|          |       |       |
|   1 |  MERGE                                 | T1          |       |       |            |          |       |       |
|   2 |   VIEW                                 |             |       |       |            |          |       |       |
|   3 |    SEQUENCE                            | SEQ_SEQ     |       |       |            |          |       |       |
|   4 |     NESTED LOOPS OUTER                 |             |   357 | 21777 |  1078   (1)| 00:00:05 |       |       |
|   5 |      TABLE ACCESS BY GLOBAL INDEX ROWID| T2          |   357 |  7854 |     5   (0)| 00:00:01 | ROWID | ROWID |
|*  6 |       INDEX RANGE SCAN                 | AUDIT_IND   |   367 |       |     3   (0)| 00:00:01 |       |       |
|   7 |      TABLE ACCESS BY GLOBAL INDEX ROWID| T1          |     1 |    39 |     3   (0)| 00:00:01 | ROWID | ROWID |
|*  8 |       INDEX RANGE SCAN                 | IND_T1_UK   |     1 |       |     2   (0)| 00:00:01 |       |       |
----------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
6 - access("AUDIT_ID"=:B1)
8 - access("L"."COL_SK"="COL_SK")

Now, before ending this article, I would like to show two things.

The predicate part of the sub-optimal plan:

Predicate Information (identified by operation id):
---------------------------------------------------
4 - access("L"."COL_SK"(+)="COL_SK")
6 - access("AUDIT_ID"=:B1)

And the predicate part of the optimal plan:

Predicate Information (identified by operation id):
---------------------------------------------------
6 - access("AUDIT_ID"=:B1)
8 - access("L"."COL_SK"="COL_SK")

In the optimal plan we start by taking the 357 (or 379) rows to be merged from the outer table using predicate 6 and then we scan the inner table with these 357 join column values via an index range scan. The majority of the rows are eliminated much earlier.

In the sub-optimal plan it is the last operation, the HASH JOIN OUTER, that eliminates the majority of the rows. Which means in this case: we started big and finished small. But to have a performant query we need to start small and keep small.

Bottom line: when you meet a sub-optimal execution plan using a HASH JOIN driven by a costly full table scan on the probe table, while you know that the probe table could very quickly be turned into the inner table of a NESTED LOOPS join scanned via a precise index access on the join column, then verify the statistics on the join column, because they play an important role in the cost of the NESTED LOOPS join.

November 18, 2013

SQLTXPLAIN under Oracle 12c

Filed under: Tuning tools — hourim @ 8:14 am

I very much like Tanel Poder's Snapper and Carlos Sierra's SQLTXPLAIN. They are valuable performance diagnostic tools. Unfortunately, I am still waiting to find a customer site where I will be allowed, or granted the necessary privileges, to install and use them. There are client sites where I have been asked to tune queries without even being able to execute dbms_xplan.display_cursor, let alone to install SQLTXPLAIN under the SYS user or to be granted select on the x$ tables.

This is why I have installed them on my personal laptop, where I use them very often in my personal Oracle Research & Development (R&D). Although personal work is peanuts compared with onsite Oracle consultancy work, I have not given up on using them "at home".

I have already successfully installed SQLTXPLAIN on Oracle 11.2.0.1. My first work with SQLTXPLAIN was to go back to personally engineered performance problems I had "traditionally" solved and ask myself: "What would I have pointed out using SQLTXPLAIN for such a performance issue?". Up to now this has been my sole strategy for using this tool, as in what I have published here and here.

My second step in deepening my SQLTXPLAIN R&D was to buy Stelios Charalambides' book Oracle SQL Tuning with Oracle SQLTXPLAIN. In the meantime Oracle 12c was released, and naturally I installed this release after having uninstalled the previous one.

I am still reviewing this book. Last week I finished reviewing Chapter 8 and thought it was time to devote some time to this tool again, because it makes no sense to review this book without having at least the main HTML report produced by the SQLTXTRACT module of the SQLT tool. So I decided to install it on my personal Oracle 12c database.

C:\sqlt\install>sqlplus sys/sys@orcl as sysdba

SQL> select DBID, name, CDB, CON_ID, CON_DBID from v$database;

DBID        NAME      CDB     CON_ID   CON_DBID
---------- --------- --- ---------- ----------
1352104669 ORCL      YES          0 1352104669

As you can see from the above select, I was about to install SQLTXPLAIN on the container DB:

SQL> start sqcreate

…

SQUTLTEST completed.
adding: 131117093318_10_squtltest.log (160 bytes security) (deflated 61%)
no rows selected

Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

Oops!! I had been abruptly disconnected; something went wrong for sure. After having tried several times without success I decided to contact Mauro Pagano from Oracle support. Thanks to the generated installation log file I sent him, he immediately answered that I was trying to install SQLT on a container DB, which is impossible for the moment. He kindly suggested that I install it on a pluggable DB and let him know the results of this new installation. So I embarked on a new installation.

First, I figured out how to connect to the pluggable database:

C:\sqlt\install>sqlplus sys/sys@localhost:1521/pdborcl as sysdba;

SQL> select name, con_id from v$active_services;

NAME               CON_ID
------------------ --------
pdborcl              3

If the pluggable database is not already open then open it (in my case it already was):


SQL> alter database pdborcl open;

alter database pdborcl open
*
ERROR at line 1:
ORA-65019: pluggable database PDBORCL already open

And finally I launched sqcreate, preceded by sqdrop for a clean starting situation:


SQL> start sqcreate

………

SQLT users must be granted SQLT_USER_ROLE before using this tool.

SQCREATE completed. Installation completed successfully.

Happily, this time the installation finished successfully, ending with the instructions above, which I religiously followed:

SQL> create user mohamed identified by mohamed;

User created.

SQL> grant SQLT_USER_ROLE to mohamed;

Grant succeeded.

Am I now ready to use this tool under a 12c pluggable DB? Let's test.


C:\sqlplus mohamed/mohamed@localhost:1521/pdborcl

SQL> create table t1 as select rownum n1 from dual connect by level<=10;

SQL> create index i1 on t1(n1);

SQL> select * from t1 where rownum<=1;

N1
----------
1

SQL> select * from table(dbms_xplan.display_cursor);

SQL_ID  7yzrbhp4b6vhr, child number 0
Plan hash value: 3836375644

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |       |       |     2 (100)|          |
|*  1 |  COUNT STOPKEY     |      |       |       |            |          |
|   2 |   TABLE ACCESS FULL| T1   |     1 |     3 |     2   (0)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter(ROWNUM<=1)

And what if I apply SQLTXTRACT to my simple SQL query above?


SQL> start c:\sqlt\run\sqltxtract 7yzrbhp4b6vhr My_Password

…/…

SQLDX files have been created.
Archive:  sqlt_s86941_sqldx.zip
Length  Date       Time    Name
------- ---------- ----- ----
4631   17/11/2013 10:20  sqlt_s86941_sqldx_7yzrbhp4b6vhr_csv.zip
28048  17/11/2013 10:20  sqlt_s86941_sqldx_global_csv.zip
4363   17/11/2013 10:20  sqlt_s86941_sqldx_7yzrbhp4b6vhr_log.zip
---------                -------
37042                     3 files

adding: sqlt_s86941_sqldx.zip (160 bytes security) (stored 0%)

SQLTXTRACT completed.

And finally it works, as shown via this pdf file: sqlt_s86941_main.

Bottom line

  1. SQLTXPLAIN can currently be installed only on a pluggable DB in Oracle 12c.
  2. In Mauro Pagano you have a very modest person who is always ready to help you troubleshoot the SQLT installation or the use of its different modules.
  3. You should start exploring this tool. It is worth the investigation, believe me.

November 14, 2013

Expert Oracle Database Architecture: buy your Oracle job insurance

Filed under: Book review — hourim @ 3:42 pm

In the process of reviewing books I bought almost a decade ago (first edition), today it is the turn of Tom Kyte's Expert Oracle Database Architecture.

The review of this book, and of a couple of others that I hope will follow in the near future, will not be a classical review in which I explain what I learnt in Chapter 1, what I most appreciated in Chapter 4, and so on. What I prefer to emphasize instead is what this book has brought me in my daily Oracle consultancy work. It is as if I was shooting in the dark before reading this book, and the light came on once I started investigating its content.

I remember a performance crisis meeting to which I had been invited as one of the suffering Oracle application developers. The application was inadequately range partitioned, on a partition key that was never used in the client business queries, so there was practically no partition elimination. All partitioned tables had been given a composite primary key (id, partition key) policed via a unique locally partitioned index. A few days before the crisis meeting, I had been religiously reading Chapter 13 on Partitioning in which, among other interesting things, Tom Kyte explains the relationship between local indexes and unique constraints: to be allowed to exist, a unique local index must include the partition key in its definition, because Oracle enforces uniqueness only within an index partition, never across partitions.

To troubleshoot this performance issue, the newly recruited DB architect suggested with authority (a) transforming all global (or b-tree) indexes into locally partitioned ones and (b) getting rid of the partition key in the primary key local index.

I am against advice that says "transform all". In Oracle there are always "it depends" situations that make the "change all" advice very often, if not always, a bad one. But Tom Kyte's words about local indexes and unique constraints were still ringing in my ears, so that I couldn't resist the temptation to stop by and say to the architect: "You need to review your partitioning skills before suggesting such an impossible unique index change". I would never have said that had I not been in touch with this book.

This book gave me the self-confidence I was lacking to develop performant, scalable and available Oracle applications. It gave me a good picture of how Oracle works. I learnt via its content how to model and test a situation before jumping to a hurried conclusion.

I need no effort to persuade you to keep this book with you. Just go to the Ask Tom web site, see the enormous work the author has done, and you will realize how intelligent buying this book would be.

November 12, 2013

Log file sync and user commits

Filed under: Oracle — hourim @ 10:50 am

This is a brief note to demonstrate where a difference between the number of log file sync wait events and user commits might come from. Very recently it was pointed out to me in an OTN thread that my AWR report contained inconsistent numbers of log file sync waits and user commits, as shown below:

Top 5 Timed Foreground Events

Event                        Waits      Time(s)  Avg wait (ms)  % DB time  Wait Class
---------------------------  ---------  -------  -------------  ---------  -------------
db file sequential read      1,234,292    6,736              5      44.97  User I/O
DB CPU                                     5,251                    35.05
log file sync                   83,846     1,594             19      10.64  Commit
log file switch completion       1,256       372            296       2.48  Configuration
enq: TX - index contention      19,327       310             16       2.07  Concurrency

In this 60-minute AWR report I have 83,846 log file sync waits, suggesting 23 commits per second (83,846/3600).

Instance Activity Stats

Statistic      Total     per Second  per Trans
-------------  --------  ----------  ---------
user commits    701,112      193.72       0.80

But why does the Instance Activity Stats section of the same report show 193 user commits per second? Where does this discrepancy come from?

Well, I was given a clue in the same OTN thread, which I tested and which I report here in this article:

SQL> create table t (n1 number);

SQL> start c:\commits#

NAME                        VALUE
--------------------------- -----
user commits                 5

SQL> start c:\events#

EVENT                      TOTAL_WAITS
-------------------------- -----------
log file sync               5

In the next simple PL/SQL anonymous block I insert inside a FOR loop and commit only once, outside the loop:

 SQL>  BEGIN
          FOR j in 1..10 LOOP
             insert into t values(j);
          END LOOP;
        commit;
      END;
     /

 PL/SQL procedure successfully completed.
 

If I now check the log file sync and user commits numbers, I see that both figures have been incremented once and hence are in perfect synchronisation:

SQL> start c:\commits#

NAME                VALUE
------------------- ------
user commits        6

SQL> start c:\events#

EVENT                TOTAL_WAITS
-------------------- -----------
log file sync        6

However, if I commit inside the loop (you shouldn't commit across fetches, by the way; this is not a recommended PL/SQL habit) the figures start to diverge.

 SQL> BEGIN
      FOR j in 1..10 LOOP
       INSERT INTO T VALUES(J);
       commit;
      END LOOP;
 END;
 /

PL/SQL procedure successfully completed.

 

SQL> start c:\commits#

NAME                    VALUE
----------------------- ----------
user commits             16   --> old value was 6

SQL> start c:\events#

EVENT                    TOTAL_WAITS
------------------------ -----------
log file sync             7  --> old value was 6

And this is where the discrepancy starts: I have an extra 10 user commits while my log file sync wait event was incremented by a single wait. This is a PL/SQL optimization which seems to wait on log file sync once per PL/SQL block call rather than once per commit call: inside the block the commits are effectively asynchronous, and the session only waits for the redo to be written when control returns to the client.
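You can reproduce a similar divergence in plain SQL using asynchronous commits, which increment the user commits statistic without waiting for the log writer; a sketch using the standard COMMIT WRITE options:

SQL> insert into t values (11);
SQL> commit write batch nowait;  -- user commits +1, but no wait on log file sync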

Bottom line: when these two values do not match, you had better worry about your log file sync figure rather than your user commits value.

In case you want to play with this example, here below are the commits# and events# SQL files respectively:

select b.name, a.value
from
v$mystat a,v$statname b
where
a.statistic# = b.statistic#
and b.name = 'user commits';

--

select a.event, a.total_waits
from
 v$session_event a
,v$event_name b
where
a.event = b.name
and  b.name = 'log file sync'
and  a.sid = 578 -- adapt with your sid
;

November 8, 2013

Interval Partitioning and PSTOP in execution plan

Filed under: Oracle — hourim @ 3:32 pm

The latest issue of the Northern California Oracle Users Group Journal published a Tom Kyte article entitled "Advice for an Oracle Beginner?" in which the author wrote the following: "Participation in the Oracle community is what took me from being just another programmer to being 'AskTom'. Without the act of participating, I do not think I would be where I am today."

That is 100% correct. I try to participate in several Oracle forums, including OTN, the French Oracle forum and Oracle-l. If I can't help, I try to understand the Original Poster's (OP) question and analyze the answers brought by other participants. Proceeding this way, I have learnt many things, among them what I learnt today via this OTN thread.

For convenience (because it will be very easy for me to find it when needed) I decided to summarize what I learnt today in this very brief article. Observe carefully the following execution plan and spot in particular the Pstop information (1048575) at lines 1 and 2:

-------------------------------------------------------------------------------------------
| Id  | Operation                | Name                   | Rows  | Bytes | Pstart| Pstop |
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT         |                        |       |       |       |       |
|   1 |  PARTITION RANGE ITERATOR|                        |     2 |    70 |   KEY |1048575|
|*  2 |   TABLE ACCESS FULL      | PARTITION_INTERVAL_TAB |     2 |    70 |   KEY |1048575|
-------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter("TRADE_DATE">=TRUNC(SYSDATE@!))

Does this particular table contain 1,048,575 partitions? Of course it doesn't.

The name of the table already gives a hint of what is happening here. The table is range partitioned and it also uses the interval partitioning feature. I will expose in a moment the second condition that makes this magic number appear. Here below is the model:

SQL> CREATE TABLE partition_interval_tab (
  n1 NUMBER
 ,trade_date DATE
 ,n2 number
 )
 PARTITION BY RANGE (trade_date)
 INTERVAL (NUMTOYMINTERVAL(1,'MONTH'))
 (
 PARTITION p_1 values LESS THAN (TO_DATE(' 2013-11-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
 ,PARTITION p_2 values LESS THAN (TO_DATE(' 2013-12-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
 );

SQL> insert into partition_interval_tab values (1, trunc(sysdate), 100);
SQL> insert into partition_interval_tab values (2, trunc(sysdate + 20), 200);
SQL> commit;
SQL> select * from partition_interval_tab where trade_date = trunc(sysdate);

        N1 TRADE_DATE                 N2
---------- ----------------- ----------
         1 20131108 00:00:00        100

-----------------------------------------------------------------------------------------------------------------
| Id  | Operation              | Name                   | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
-----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |                        |       |       |    15 (100)|          |       |       |
|   1 |  PARTITION RANGE SINGLE|                        |     1 |    35 |    15   (0)| 00:00:01 |   KEY |   KEY |
|*  2 |   TABLE ACCESS FULL    | PARTITION_INTERVAL_TAB |     1 |    35 |    15   (0)| 00:00:01 |   KEY |   KEY |
-----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter("TRADE_DATE"=TRUNC(SYSDATE@!))

Nothing special to point out here when the predicate on the partition key is an equality. But spot what happens when I change the predicate to an inequality (>=):

SQL> select * from partition_interval_tab where trade_date >= trunc(sysdate);

        N1 TRADE_DATE                 N2
---------- ----------------- ----------
         1 20131108 00:00:00        100
         2 20131128 00:00:00        200

-------------------------------------------------------------------------------------------
| Id  | Operation                | Name                   | Rows  | Bytes | Pstart| Pstop |
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT         |                        |       |       |       |       |
|   1 |  PARTITION RANGE ITERATOR|                        |     2 |    70 |   KEY |1048575|
|*  2 |   TABLE ACCESS FULL      | PARTITION_INTERVAL_TAB |     2 |    70 |   KEY |1048575|
-------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter("TRADE_DATE">=TRUNC(SYSDATE@!))

And here you are: the magic number 1048575 appears. It seems to represent the upper bound on the number of partitions an interval-partitioned table can have (1048575 = 1024K - 1); since interval partitions are created on the fly, the optimizer cannot know at parse time which partition will be the last one touched, so it reports the maximum possible partition number as Pstop.
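As a side note, this limit is enforced at DML time. A quick sketch with a hypothetical numeric interval-partitioned table (not the table above): inserting a value that maps beyond partition number 1048575 should fail with ORA-14300.

SQL> create table t_interval_demo (n number)
     partition by range (n) interval (1)
     (partition p0 values less than (1));

SQL> insert into t_interval_demo values (2000000);
-- expected: ORA-14300: partitioning key maps to a partition outside
-- maximum permitted number of partitions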

Thanks again to Jonathan Lewis, who showed this information in the above-mentioned OTN thread.

I also include a few references I found when searching for the number 1048575 on the internet:

https://forums.oracle.com/thread/2230426?start=15&tstart=0

http://docs.oracle.com/cd/B28359_01/server.111/b32024/part_avail.htm

This number also seems to be used by Oracle when auto-tuning the db_file_multiblock_read_count parameter value.

November 7, 2013

Cost-Based Oracle Fundamentals: a magic book

Filed under: Book review — hourim @ 3:40 pm

It was back in late 2005 or early 2006 when an Oracle magazine issue prompted me to buy the then best-selling Oracle book of the year: Cost-Based Oracle Fundamentals. When Amazon shipped me this book, I was really disappointed when I started browsing its content. The first question that came to my mind was: is this book speaking about an Oracle technology? You can imagine how much this question revealed about my degree of Oracle ignorance at that time. In my defense, I was working for a customer as a PL/SQL developer under Oracle 8i. My task was to faithfully transform business requirements into technical requirements and then into a set of stored procedures. Oracle 8i was under the rule-based model, while the book I had bought explains the fundamentals of the Oracle Cost Based Optimizer. I was in such a situation that the content of this new book inevitably did not match my interests. So I put it on hold.

Several years later, I started a new job in which troubleshooting performance issues was a crucial part. The application had been upgraded from 8i (rule based) to 10gR2 (cost based) and was suffering terrible performance problems. It was time for me to wake both the book and my head from their hibernation. The degustation of the book's content began.

More than 4 years after I started troubleshooting performance problems, and particularly bad query execution times, I am still using and savoring the content of this unrivalled and unequalled book.

There are 14 chapters in this book; I am not going to tell you which chapter is a must-read or which one has most retained my attention. This is not a book to only read and re-read. This is a book to learn by heart. This is a book you should always have with you when troubleshooting bad query execution times. This is a Swiss Army knife allowing a dissection of the CBO.

Simply put, this book is such that if, when speaking with an experienced Oracle tuning DBA-developer, I come to realize that he still has not read it, I can immediately measure the gap he has to fill before he will start doing his job correctly (unless he has read the Performance Guide itself :-)).

Index design I : Partitioned Index: Global or local?

Filed under: Oracle — hourim @ 2:28 pm

Recently I was investigating a performance problem using the SQL Monitoring feature, looking for SQL statements taking more than 5 seconds, when one query caught my attention. I drilled down to its corresponding SQL Monitoring report and started looking carefully at its execution plan. A particular index range scan operation stood out because it was consuming a huge amount of logical I/O (Buffers). A careful examination of the index and its underlying table revealed that the latter is range partitioned while the former is a local non-prefixed index (a locally partitioned index which doesn't include the partition key in its definition).

My curiosity was such that I issued a query to see how many non-prefixed indexes exist in the whole set of partitioned tables this application owns. The query, of course, returned several rows. When I asked the developer for the reason, he said that this is a "standard" they have adopted: every couple of months they truncate old partitions, and having local indexes (even non-prefixed ones) is a good idea in this case because they don't have to rebuild any global indexes (by the way, there is the UPDATE GLOBAL INDEXES clause for that).

And here is where the problem resides: ignoring the technology. Partitioning is a nice feature which can damage performance when wrongly designed. Creating a local non-prefixed index without knowing the collateral effects it can produce when partition pruning is not guaranteed is something I have very often seen in my daily consultancy work. In order to reproduce this particular client situation I engineered a partitioned table with 1,493 partitions (you should open the linked file in a new page, copy the content into a .sql file and execute it; a rough sketch of its shape follows), to which I attached a local non-prefixed index (LC_NON_PREFIXED_TYP_I). Here below are the observations I can make when selecting from this table:
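For readers who don't open the file, here is a minimal sketch of the table's shape; the partition boundaries below are mine and purely illustrative (the real script creates 1,493 of them):

create table partitioned_tab (
   mho_id     number(10)       not null
  ,mho_date   date             not null  -- partition key
  ,mho_code   varchar2(1 char) not null
  ,mho_typ_id number(10)       not null  -- indexed column
)
partition by range (mho_date)
(
   partition p_1 values less than (to_date('02012009','ddmmyyyy'))
  ,partition p_2 values less than (to_date('03012009','ddmmyyyy'))
  -- ... one partition per day, up to 1,493 partitions
);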

SQL> desc partitioned_tab

Name               Null?     Type
------------------ --------- -------------------------------
MHO_ID             NOT NULL NUMBER(10)
MHO_DATE           NOT NULL DATE       -- partition key
MHO_CODE           NOT NULL VARCHAR2(1 CHAR)
MHO_TYP_ID         NOT NULL NUMBER(10) -- indexed column

The important question here is how the database would react to a query that doesn't eliminate partitions (because it doesn't include the partition key in its predicate) and which will be honored via a locally partitioned non-prefixed index. Something like this:

select * from partitioned_tab where mho_typ_id = 0;

In the presence of an index of this type:

CREATE INDEX LC_NON_PREFIXED_TYP_I ON partitioned_tab (MHO_TYP_ID) LOCAL;

Here are the results


-----------------------------------------------------------------------------------------------------------------------
| Id  | Operation                          | Name                  | Starts | E-Rows | A-Rows |Buffers |Pstart| Pstop |
-----------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                   |                       |      1 |        |   1493 |   2984 |      |       |
|   1 |  PARTITION RANGE ALL               |                       |      1 |   1493 |   1493 |   2984 |    1 |  1493 |
|   2 |   TABLE ACCESS BY LOCAL INDEX ROWID| PARTITIONED_TAB       |   1493 |   1493 |   1493 |   2984 |    1 |  1493 |
|*  3 |    INDEX RANGE SCAN                | LC_NON_PREFIXED_TYP_I |   1492 |   1493 |   1493 |   1492 |    1 |  1493 |
-----------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
3 - access(MHO_TYP_ID=0)

Statistics
----------------------------------------------------------
0     recursive calls
0     db block gets
2984  consistent gets
0     physical reads
0     redo size
28937 bytes sent via SQL*Net to client
372   bytes received via SQL*Net from client
4     SQL*Net roundtrips to/from client
0     sorts (memory)
0     sorts (disk)
1493  rows processed

Spot how many times the INDEX RANGE SCAN operation has been started: 1,492 times. Compare this number with the number of table partitions (1,493) and you will find that in this kind of situation you will do N-1 INDEX RANGE SCAN operations (where N is the number of partitions). That is an enormous waste of time and energy.

Why 1492 INDEX RANGE SCANs?

It is simply because a locally partitioned index consists of multiple segments (one per partition), in contrast to a non-partitioned b-tree index, which consists of a single segment:

SQL> select count(1) from dba_segments where segment_name = 'LC_NON_PREFIXED_TYP_I';

COUNT(1)
----------
1492

I am not saying that you must never create a local non-prefixed index. What I am trying to emphasize is that when you decide to do so, make sure that your queries will eliminate partitions and will hence prune down to a single index partition, as shown here below when my query does partition pruning:

SQL> select * from partitioned_tab
     where mho_typ_id = 0
     and  mho_date = to_date('01122012','ddmmyyyy');

    MHO_ID MHO_DATE          M MHO_TYP_ID
---------- ----------------- - ----------
         1 20121201 00:00:00 Z          0

-------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                          | Name                  | Starts | E-Rows | A-Rows | Buffers | Pstart| Pstop |
-------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                   |                       |      1 |        |      1 |       2 |       |       |
|   1 |  PARTITION RANGE SINGLE            |                       |      1 |      1 |      1 |       2 |     2 |     2 |
|*  2 |   TABLE ACCESS BY LOCAL INDEX ROWID| PARTITIONED_TAB       |      1 |      1 |      1 |       2 |     2 |     2 |
|*  3 |    INDEX RANGE SCAN                | LC_NON_PREFIXED_TYP_I |      1 |      1 |      1 |       1 |     2 |     2 |
-------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter(MHO_DATE=TO_DATE(' 2012-12-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
3 - access(MHO_TYP_ID=0)

Since Oracle succeeded in eliminating the untouched partitions (PARTITION RANGE SINGLE), it pruned down to a single index segment range scan, as shown by the Starts column, which equals 1. In addition, the consistent gets (Buffers) have been drastically reduced from 2,984 to only 2.

That is when your query is able to eliminate partitions. However, if you have a particular query that can't eliminate partitions and that you want to cover via an appropriate index, then in this case you had better not partition the index locally. Let's see this in action.


SQL> alter index LC_NON_PREFIXED_TYP_I invisible;

Index altered.

SQL> create index gl_typ_i on partitioned_tab(mho_typ_id);

create index gl_typ_i on partitioned_tab(mho_typ_id)
*
ERROR at line 1:
ORA-01408: such column list already indexed

Damn!!! I can’t do it in 11.2.0.3.0

SQL> drop index LC_NON_PREFIXED_TYP_I;

SQL> create index gl_typ_i on partitioned_tab(mho_typ_id);
SQL> select * from partitioned_tab where mho_typ_id = 0;

------------------------------------------------------------------------------------------------------------------
| Id  | Operation                          | Name            | Starts | E-Rows | A-Rows |Buffers | Pstart| Pstop |
------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                   |                 |      1 |        |   1493 |   1496 |       |       |
|   1 |  TABLE ACCESS BY GLOBAL INDEX ROWID| PARTITIONED_TAB |      1 |   1493 |   1493 |   1496 | ROWID | ROWID |
|*  2 |   INDEX RANGE SCAN                 | GL_TYP_I        |      1 |   1493 |   1493 |      4 |       |       |
------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
2 - access(MHO_TYP_ID=0)

Statistics
----------------------------------------------------------
1     recursive calls
0     db block gets
1496  consistent gets
1493  physical reads
0     redo size
28937 bytes sent via SQL*Net to client
372   bytes received via SQL*Net from client
4     SQL*Net roundtrips to/from client
0     sorts (memory)
0     sorts (disk)
1493  rows processed

And spot how many index range scans we did this time: only one, because there is only one segment for this type of index.

SQL> select count(1) from dba_segments where segment_name = 'GL_TYP_I';

COUNT(1)
----------
1

You can also point out that, in contrast to the local non-prefixed index, we did about 50% less logical I/O: from 2,984 down to 1,496.

By the way, why do you think Oracle allows the creation of the non-prefixed index LC_NON_PREFIXED_TYP_I when it is declared as non-unique, yet refuses to obey when you want to create it as a unique index?

SQL> create unique index lc_non_prefixed_typ_i on partitioned_tab (mho_typ_id) local;

*
ERROR at line 1:
ORA-14039: partitioning columns must form a subset of key columns of a UNIQUE index

Simply because Oracle has already measured the impact this kind of index would have on insert performance had it been allowed to exist. In this case a given mho_typ_id could go into any of the 1,493 partitions. How would Oracle check that the inserted mho_typ_id value has not already been inserted (or is not being inserted) without preventing others from inserting into the whole set of the remaining 1,492 partitions? Is this scalable and performant? Of course it is not.
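For completeness, the unique local index becomes legal as soon as the partition key is included in the index key; a sketch (the point is the extra key column, not a design recommendation):

SQL> create unique index lc_prefixed_typ_ui on partitioned_tab (mho_typ_id, mho_date) local;

With the partition key inside the key, uniqueness can be enforced entirely within a single index partition, which is the only scope Oracle guarantees for local indexes.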

Bottom line: when you create a local non-prefixed index (an index that doesn't include the partition key in its definition), make sure that the queries using this index will eliminate partitions. Otherwise, the more partitions you have, the more index partitions you will range scan and the more logical I/O you will do.
