Mohamed Houri’s Oracle Notes

March 17, 2015

A precise index

Filed under: Oracle — hourim @ 7:50 pm

Below is a select statement that is not performing very well, together with its row source execution plan:

SQL> select
       {list of columns}
     from
       tx_tables bot
     inner join ty_tables_tmp tmp
       on  account_id     = tmp.account_id
       and trade_id       = tmp.trd_id
     where transferred <> 1;
-----------------------------------------------------------------------------------------------
| Id  | Operation                    | Name                        | Starts | E-Rows | A-Rows | 
-----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                             |      1 |        |    301K|
|   1 |  NESTED LOOPS                |                             |      1 |        |    301K|
|   2 |   NESTED LOOPS               |                             |      1 |     75 |    301K|
|   3 |    TABLE ACCESS FULL         | TY_TABLES_TMP               |      1 |      2 |      2 |        
|*  4 |    INDEX RANGE SCAN          | TX_TABLES_IDX1              |      2 |  43025 |    301K|
|*  5 |   TABLE ACCESS BY INDEX ROWID| TX_TABLES                   |    301K|     38 |    301K|
-----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("TRADE_ID"="TMP"."TRD_ID")
   5 - filter(("BOT"."TRANSFERRED"<>1 AND "ACCOUNT_ID"="TMP"."ACCOUNT_ID"))

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
     278595  consistent gets
          0  physical reads
          0  redo size
   10597671  bytes sent via SQL*Net to client
     221895  bytes received via SQL*Net from client
      20131  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
     301944  rows processed

Let’s put aside the inadequate statistics (the CBO estimates 43K rows at operation 4 while it actually generates 301K) and try to figure out whether there is a way to avoid starting the TX_TABLES TABLE ACCESS BY INDEX ROWID operation 301K times. The double NESTED LOOPS (the 11g NLJ_BATCHING shape) drives an outer row source of 301K rows (NESTED LOOPS operation at line 2), which in turn starts the inner TABLE ACCESS BY INDEX ROWID operation 301K times (see the Starts column at line 5). If we get rid of the NESTED LOOPS at line 1, we might be able to reduce the number of times the operation at line 5 is started and, as a consequence of that reduction, decrease the corresponding logical I/O. Disabling the nlj_batching feature can be achieved with the no_nlj_batching hint, as shown below:

SQL> select
       /*+ no_nlj_batching(bot) */
       {list of columns}
     from
       tx_tables bot
     inner join ty_tables_tmp tmp
       on  account_id     = tmp.account_id
       and trade_id       = tmp.trd_id
     where transferred <> 1;
----------------------------------------------------------------------------------------------
| Id  | Operation                   | Name                        | Starts | E-Rows | A-Rows |  
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                             |      0 |        |      0 |
|*  1 |  TABLE ACCESS BY INDEX ROWID| TX_TABLES                   |      1 |     38 |    301K|
|   2 |   NESTED LOOPS              |                             |      1 |     75 |    301K|
|   3 |    TABLE ACCESS FULL        | TY_TABLES_TMP               |      1 |      2 |      2 |
|*  4 |    INDEX RANGE SCAN         | TX_TABLES_IDX1              |      2 |  43025 |    301K|
----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(("BOT"."TRANSFERRED"<>1 AND
       "ACCOUNT_ID"="TMP"."ACCOUNT_ID"))
   4 - access("TRADE_ID"="TMP"."TRD_ID")
Note
-----
   - dynamic sampling used for this statement (level=2)

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
     278595  consistent gets
          0  physical reads
          0  redo size
   10597671  bytes sent via SQL*Net to client
     221895  bytes received via SQL*Net from client
      20131  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
     301944  rows processed

Although the Starts column no longer shows those 301K executions, the number of logical I/O is still exactly the same. The performance issue, in contrast to what I was initially thinking, is not coming from the nlj_batching feature. As a next step, I decided it was time to look carefully at this query from the index point of view. The above two execution plans both made use of the TX_TABLES_IDX1 index, defined as shown below:

TX_TABLES_IDX1(TRADE_ID, EXT_TRD_ID)       
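The problem with this index can be reproduced outside Oracle. Below is a minimal, hypothetical SQLite sketch (not the article's actual schema or data): an index whose leading column is the join key but whose second column (ext_trd_id) is irrelevant to the query can supply only the join key as an access predicate, so every candidate rowid must be revisited in the table to evaluate the account_id and transferred filters, just like TX_TABLES_IDX1 above.

```python
import sqlite3

# Hypothetical toy schema mimicking the shape of TX_TABLES_IDX1.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    create table tx_tables (trade_id integer, ext_trd_id integer,
                            account_id integer, transferred integer);
    create index tx_tables_idx1 on tx_tables (trade_id, ext_trd_id);
""")

plan = conn.execute("""
    explain query plan
    select * from tx_tables
    where trade_id = 1 and account_id = 2 and transferred <> 1
""").fetchall()

# The plan detail mentions only trade_id=? as the index search condition;
# account_id and transferred never appear as access predicates, so they
# can only be applied as filters after the table rows are fetched.
print(plan[0][3])
```

The same diagnosis applies to the Oracle plan above: operation 4 accesses on `"TRADE_ID"="TMP"."TRD_ID"` only, pushing the account_id and transferred checks down to the 301K table visits at operation 5.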

There is still room to create a precise index that might help in this case. Such an index might look like the following:

SQL> create index TX_TABLES_FBI_IDX2
       on tx_tables
                 (TRADE_ID
                 ,ACCOUNT_ID
                 ,CASE WHEN TRANSFERRED <> 1 THEN -1 ELSE NULL END
                 );

Once created, this index allowed the initial query (without any hint) to be honored with the following execution plan:

------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name                         | Starts | E-Rows | A-Rows |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                              |      1 |        |    301K|
|   1 |  NESTED LOOPS                |                              |      1 |     75 |    301K|
|   2 |   TABLE ACCESS FULL          | TY_TABLES_TMP                |      1 |      2 |      2 |
|*  3 |   TABLE ACCESS BY INDEX ROWID| TX_TABLES                    |      2 |     38 |    301K|
|*  4 |    INDEX RANGE SCAN          | TX_TABLES_FBI_IDX2           |      2 |   4141 |    301K|
------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter("BOT"."TRANSFERRED"<>1)
   4 - access("TRADE_ID"="TMP"."TRD_ID" AND
       "ACCOUNT_ID"="TMP"."ACCOUNT_ID")
Note
-----
   - dynamic sampling used for this statement (level=2)

Statistics
----------------------------------------------------------
        199  recursive calls
          0  db block gets
     108661  consistent gets
        229  physical reads
          0  redo size
    8394791  bytes sent via SQL*Net to client
     221895  bytes received via SQL*Net from client
      20131  SQL*Net roundtrips to/from client
          7  sorts (memory)
          0  sorts (disk)
     301944  rows processed

Notice how the new index has not only gotten rid of the nlj_batching double nested loop and reduced the number of operations Oracle has to start, but has also reduced the logical I/O consumption from the initial 278K to 108K. However, we still have not changed the predicate part of the query to match exactly the function-based part of the new index (CASE WHEN TRANSFERRED <> 1 THEN -1 ELSE NULL END), which explains why we still have a filter on the TX_TABLES operation at line 3. As always with function-based indexes, the predicate in the query needs to match the definition of the function-based index expression. In other words, this translates to the following new query (look at its last line):

SQL> select
       {list of columns}
     from
       tx_tables bot
     inner join ty_tables_tmp tmp
       on  account_id     = tmp.account_id
       and trade_id       = tmp.trd_id
     where (case when transferred <> 1 then -1 else null end) = -1;

Here is the new resulting execution plan:

------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name                         | Starts | E-Rows | A-Rows |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                              |      1 |        |    301K|
|   1 |  NESTED LOOPS                |                              |      1 |      1 |    301K|
|   2 |   TABLE ACCESS FULL          | TY_TABLES_TMP                |      1 |      1 |      2 |
|   3 |   TABLE ACCESS BY INDEX ROWID| TX_TABLES                    |      2 |     75 |    301K|
|*  4 |    INDEX RANGE SCAN          | TX_TABLES_FBI_IDX2           |      2 |   4141 |    301K|
------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("TRADE_ID"="TMP"."TRD_ID"
               AND "ACCOUNT_ID"="TMP"."ACCOUNT_ID"
               AND "BOT"."SYS_NC00060$"=(-1))
Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
     108454  consistent gets
          0  physical reads
          0  redo size
    8394791  bytes sent via SQL*Net to client
     221895  bytes received via SQL*Net from client
      20131  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
     301944  rows processed

Notice that now that I have matched the predicate part of the query with the function-based index definition, there is no filter any more on the TX_TABLES table. Although this has had no noticeable effect this time, in other situations it could prove to be a drastic enhancement.
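This exact-matching rule is not specific to Oracle. As a hedged illustration (a hypothetical SQLite analogue, not the article's schema), SQLite expression indexes behave the same way: the indexed expression becomes usable as an access predicate only when the query repeats it verbatim.

```python
import sqlite3

# Toy analogue of TX_TABLES_FBI_IDX2 using a SQLite expression index.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    create table tx_tables (trade_id integer, account_id integer,
                            transferred integer);
    create index tx_fbi_idx2 on tx_tables
        (trade_id, account_id,
         case when transferred <> 1 then -1 else null end);
""")

def plan(predicate):
    """Return the EXPLAIN QUERY PLAN detail line for the given predicate."""
    rows = conn.execute(
        "explain query plan select * from tx_tables "
        "where trade_id = 1 and account_id = 2 and " + predicate).fetchall()
    return rows[0][3]

# Raw predicate: the expression column is not matched, so the index search
# condition covers only (trade_id=? AND account_id=?); the transferred
# check survives as a row-by-row filter.
print(plan("transferred <> 1"))

# Predicate rewritten to repeat the indexed expression verbatim: all three
# index columns become access predicates (SQLite renders the third one
# as <expr>=?, much like Oracle's hidden SYS_NC... virtual column).
print(plan("(case when transferred <> 1 then -1 else null end) = -1"))
```

The second plan detail is the SQLite counterpart of the Oracle access predicate `"BOT"."SYS_NC00060$"=(-1)` shown above: both engines expose the indexed expression as a hidden column that the rewritten predicate can hit directly.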

Bottom line: a precise index can help the CBO follow a better path.
